CLOB Datatype (Assigning more than 32k fails)

Dear All
Can anyone tell me why I get this error when I assign more than 32k characters to a CLOB variable in PL/SQL, while assigning the same data from a table column to the variable works fine?
PL/SQL 1 (fails with ORA-06502: PL/SQL: numeric or value error: character string buffer too small)
declare
  c clob;
  v varchar(32767);
begin
  for i in 1..90000  -- just assuming roughly 90k characters
  loop
    v := v || 'x';
    if length(v) > 31000 then
      c := c || v;  -- here I am getting the error while appending more than 32k of characters to the CLOB
      v := null;
    end if;
  end loop;
  c := c || v;  -- append the remaining characters
end;
PL/SQL 2 (works fine when assigning from a table)
declare
c clob;
begin
select clob_data into c from x; -- clob data is more than 32k;
end;
This worked fine on database 9i Release 2, but on 11g I am facing this problem.
Regards

Hey, you are getting the error because of v varchar(32767): a VARCHAR variable is capped at 32,767 bytes, while the CLOB datatype can hold far more data (at least 1,000,000 characters). Try it with a CLOB instead, or see the example below.
Hi, see the example below:
create table temp (col clob);

declare
  v_string clob;
begin
  for i in 1..1000000
  loop
    v_string := v_string || 'A';
  end loop;
  insert into temp values (v_string);
end;
Now run:
select length(col) from temp;
and you will get:
LENGTH(COL)
1000000
This shows that a CLOB can store at least 1,000,000 characters.
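If you need to build a large CLOB inside PL/SQL (as in the original failing block), one pattern that sidesteps buffer issues entirely is to make the variable a temporary LOB and append each VARCHAR2 chunk with DBMS_LOB.WRITEAPPEND. A minimal sketch, assuming the roughly 90k-character target from the question:
declare
  c clob;
  v varchar2(32767);
begin
  dbms_lob.createtemporary(c, true);          -- session-duration temporary CLOB
  for i in 1..90000
  loop
    v := v || 'x';
    if length(v) > 31000 then
      dbms_lob.writeappend(c, length(v), v);  -- append the chunk to the CLOB
      v := null;
    end if;
  end loop;
  if v is not null then
    dbms_lob.writeappend(c, length(v), v);    -- append the remainder
  end if;
  dbms_output.put_line('CLOB length: ' || dbms_lob.getlength(c));
  dbms_lob.freetemporary(c);
end;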

Similar Messages

  • How to insert more than 32k of XML data into an Oracle CLOB column

    How do I insert more than 32k of XML data into an Oracle CLOB column?
    The XML data is coming from a Java front end.
    If we cannot use a CLOB, what other options are available?

    Are you facing any issue with my code?
    A string literal size error will occur if you try to insert the full XML as a string literal.
    public static boolean writeCLOBData(String tableName, String id, String columnName, String strContents) throws DataAccessException {
      boolean isUpdated = true;
      Connection connection = null;
      PreparedStatement preparedStatement = null;
      try {
        connection = ConnectionManager.getConnection();
        //connection.setAutoCommit(false);
        String sqlQuery = "UPDATE " + tableName + " SET " + columnName + " = ? WHERE ID = " + id;
        preparedStatement = connection.prepareStatement(sqlQuery);
        // convert the string to a reader stream so the driver streams it into the CLOB
        Reader reader = new StringReader(strContents);
        preparedStatement.setClob(1, reader);
        // execute() returns false for an UPDATE; success here simply means no exception was thrown
        preparedStatement.execute();
      } catch (SQLException e) {
        e.printStackTrace();
        isUpdated = false;
      } finally {
        if (preparedStatement != null) {
          try {
            preparedStatement.close();
          } catch (SQLException ignore) {
          }
        }
      }
      return isUpdated;
    }
    Try this Java code.
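    On the database side, the same kind of update can also be done entirely in PL/SQL, since a CLOB bind variable is not subject to the 32k VARCHAR2 cap. A hedged sketch; the xml_docs table and its columns are hypothetical stand-ins for your own:
    declare
      v_xml   clob;
      v_chunk varchar2(32767) := rpad('x', 32000, 'x');  -- stand-in for one sub-32k piece of the XML
    begin
      dbms_lob.createtemporary(v_xml, true);
      for i in 1..5                                      -- ~160k characters in total
      loop
        dbms_lob.writeappend(v_xml, length(v_chunk), v_chunk);
      end loop;
      insert into xml_docs (id, doc) values (1, v_xml);  -- hypothetical table and columns
      commit;
      dbms_lob.freetemporary(v_xml);
    end;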

  • How can I write a file larger than 32k to an Oracle directory

    Hi experts,
    I am struggling to write a file larger than 32k to an Oracle directory; it throws 'ORA-06502: PL/SQL: numeric or value error'.
    This is my procedure
    declare
    l_s_filename   UTL_FILE.file_type;
    begin
       l_s_filename := UTL_FILE.fopen ('INFO_MIGRATION', 'finfinne.txt', 'W');
    FOR rec
          IN ( SELECT SQL_REDO
                  FROM V$LOGMNR_CONTENTS
                 WHERE seg_owner <> 'SYS' AND username = 'GENTEST'
                 AND TABLE_NAME NOT LIKE '%_TEMP'
                       AND OPERATION IN ('UPDATE','INSERT','DELETE')
              ORDER BY TIMESTAMP)
       LOOP
          UTL_FILE.put_line (l_s_filename, rec.SQL_REDO);
       END LOOP;
       UTL_FILE.fclose (l_s_filename);
    end;
    Can anyone please help me overcome this problem?
    Thanks,
    Arun

    You can write the file by breaking the CLOB into small chunks. You can also try DBMS_XSLPROCESSOR.CLOB2FILE. For UTL_FILE, the code snippet may look like:
    -- Read chunks of the CLOB and write them to the file
    -- until complete.
       WHILE l_pos < l_blob_len
       LOOP
          DBMS_LOB.READ (rec.l_clob, l_amount, l_pos, l_buffer);
          UTL_FILE.put_line (l_file, l_buffer, FALSE);
          l_pos := l_pos + l_amount;
       END LOOP;
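    A more complete sketch of the chunked write, reusing the directory and file name from the thread (the CLOB source query is illustrative). UTL_FILE.PUT is used instead of PUT_LINE so no extra newlines are injected, and the file is opened with the maximum 32767-byte line size:
    declare
      l_file     utl_file.file_type;
      l_clob     clob;
      l_clob_len integer;
      l_pos      integer := 1;
      l_amount   binary_integer := 32000;
      l_buffer   varchar2(32767);
    begin
      select clob_data into l_clob from some_table where id = 1;  -- illustrative source
      l_clob_len := dbms_lob.getlength(l_clob);
      l_file := utl_file.fopen('INFO_MIGRATION', 'finfinne.txt', 'W', 32767);
      while l_pos <= l_clob_len
      loop
        dbms_lob.read(l_clob, l_amount, l_pos, l_buffer);  -- l_amount comes back as the count actually read
        utl_file.put(l_file, l_buffer);                    -- PUT does not append a newline
        utl_file.fflush(l_file);
        l_pos := l_pos + l_amount;
      end loop;
      utl_file.fclose(l_file);
    end;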

  • Reading contents of a BLOB datatype having more than 4000 characters.

    Hi,
    I am unable to read the contents of a column with the BLOB datatype. I tried using utl_raw.cast_to_varchar2(col_name), but since VARCHAR2 has a size limit of 4,000 bytes in SQL, the complete contents are not visible.
    Please suggest some way to view the contents.
    Regards,
    Saket Bansal

    In the link that you mentioned, a procedure is used that gets the length first and then loops through the LOB when the length is greater than 32760.
    What I mean is: can the user get the output in a single SQL query, e.g.
    select utl_raw.cast_to_varchar2(column_name) from table_name;
    or something like this for contents greater than 4,000 bytes?
    ----- Read your reply a bit late, thanks.
    Edited by: Avi on Mar 2, 2009 1:21 AM
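    For dumping the whole BLOB, the usual approach is the loop described above: read the LOB in chunks below the limit and cast each chunk. A minimal sketch, where the table and column names are placeholders:
    declare
      l_blob   blob;
      l_len    integer;
      l_pos    integer := 1;
      l_amount binary_integer := 2000;  -- keep each chunk under the 4,000-byte SQL limit
      l_raw    raw(2000);
    begin
      select blob_col into l_blob from some_table where id = 1;  -- placeholder names
      l_len := dbms_lob.getlength(l_blob);
      while l_pos <= l_len
      loop
        dbms_lob.read(l_blob, l_amount, l_pos, l_raw);
        dbms_output.put_line(utl_raw.cast_to_varchar2(l_raw));
        l_pos := l_pos + l_amount;
      end loop;
    end;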

  • Output from Concurrent program in XML format for bytes greater than 32K

    Hi,
    I created a custom concurrent program where I send the generated XML data to the program's output.
    I have an XML template attached to the program, so the template picks up the XML output and converts it to a PDF.
    So ultimately, when I run the concurrent program, the output is a PDF file. This makes it easy for the user to just run a program and get a PDF.
    The generated XML data comes from an Oracle seeded program and is a BLOB.
    I am converting it to a CLOB so that I can write it to the output. If the CLOB doesn't exceed 32K bytes, I have no issues writing it.
    But since we cannot write more than 32K bytes at once, I am using SUBSTR to write chunks of the CLOB of 30,000 bytes each.
    Since it chunks at every 30,000 bytes, the next 30,000 bytes land on a new line, and XML Publisher throws an 'invalid character' error.
    Any idea how I can overcome this? Either I write the whole 33,000+ bytes in one line to the FND output, or I remove the line breaks from the XML data.
    Thanks in advance for anyone reading this!

    > since it is chunking at every 30,000 bytes the next 30,000 bytes are coming in the next line and XML publisher is throwing an error with invalid character. Any idea how i can overcome this?
    So I suppose you're using FND_FILE.PUT_LINE?
    Why not use FND_FILE.PUT instead, so that no new line is generated after each chunk?
    http://docs.oracle.com/cd/E18727_01/doc.121/e12897/T302934T458258.htm#I_fndfile
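    A hedged sketch of chunking the CLOB to the concurrent request output with FND_FILE.PUT; this assumes the EBS FND_FILE API and that l_xml already holds the converted CLOB:
    declare
      l_xml    clob;  -- assumed to be populated from the BLOB-to-CLOB conversion
      l_len    integer;
      l_pos    integer := 1;
      l_amount binary_integer := 30000;
      l_buffer varchar2(32767);
    begin
      l_len := dbms_lob.getlength(l_xml);
      while l_pos <= l_len
      loop
        dbms_lob.read(l_xml, l_amount, l_pos, l_buffer);
        fnd_file.put(fnd_file.output, l_buffer);  -- PUT writes without appending a newline
        l_pos := l_pos + l_amount;
      end loop;
    end;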

  • How to pass data larger than the RAW limit

    Hi,
    How can I send a message payload that has more than 32k of data? I don't want to use OCI. I am trying to send a big XML document through AQ.
    Thanks,
    Dhiraj.
    Note that the docs below say:
    "For PL/SQL, Java and precompilers the limit is 32K; for the OCI the limit is 4G."
    From the docs on "RAW":
    To store payload of type RAW, AQ creates a queue table with a LOB column as the payload repository. The theoretical maximum size of the message payload is the maximum amount of data that can be stored in a LOB column. However, the maximum size of the payload is determined by which programmatic environment you use to access AQ. For PL/SQL, Java and precompilers the limit is 32K; for the OCI the limit is 4G. Because the PL/SQL enqueue and dequeue interfaces accept RAW buffers as the payload parameters you will be limited to 32K bytes. In OCI, the maximum size of your RAW data will be limited to the maximum amount of contiguous memory (as an OCIRaw is simply an array of bytes) that the OCI Object Cache can allocate. Typically, this will be at least 32K bytes and much larger in many cases.
    Because LOB columns are used for storing RAW payload, the AQ administrator can choose the LOB tablespace and configure the LOB storage by constructing a LOB storage string in the storage_clause parameter during queue table creation time.

    Dhiraj,
    It's a PL/SQL limitation that more than 32K cannot be passed as RAW.
    You can use the OCI interface to achieve that. If you specifically don't want to use OCI, you can create an ADT with a LOB in it and use PL/SQL to pass in data larger than 32K.
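    A minimal sketch of that ADT-with-LOB approach (names are illustrative and AQ administrator privileges are assumed); the payload object carries the large data in a LOB attribute instead of a RAW:
    create or replace type big_msg_t as object (payload blob);
    /
    begin
      dbms_aqadm.create_queue_table(
        queue_table        => 'big_msg_qt',
        queue_payload_type => 'BIG_MSG_T');
      dbms_aqadm.create_queue(
        queue_name  => 'big_msg_q',
        queue_table => 'big_msg_qt');
      dbms_aqadm.start_queue('big_msg_q');
    end;
    /
    -- messages are then enqueued and dequeued with dbms_aq using big_msg_t instances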

  • 10g BLOB/CLOB: can we store more than 4k in SQL?

    Dear Friends
    I know we had issues storing more than 4k in BLOB and CLOB columns through SQL in Oracle 8i. Is it the same in 10g, or can BLOB/CLOB store more than 4k via SQL?
    Please point me to some documentation which explains these aspects of CLOB and BLOB in 10g.
    Thanks
    Farouk

    Thanks for your help,
    I understand we can store BLOBs of more than 4k in 10g using DBMS_LOB, but there was a constraint in 8i that using SQL you could store only 4k directly, and using a bind variable in PL/SQL you could store up to 32K. Is it the same in 10g, or can we store more than 4k directly using SQL?
    Thanks
    Farouk
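    The difference is easy to check directly. A hedged sketch (the table name is illustrative): a quoted literal longer than 4,000 characters fails in plain SQL, while a PL/SQL bind variable carries up to 32k into the CLOB in a single insert:
    create table clob_test (c clob);

    declare
      v varchar2(32767) := rpad('x', 32000, 'x');
    begin
      insert into clob_test values (v);  -- the 32k bind goes in directly
      commit;
    end;
    -- for anything beyond 32k, build the CLOB with DBMS_LOB as in the earlier examples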

  • Failure to load and read a bfile with dbms_lob when the offset is more than 2G

    Hi!
    Recently I encountered a strange problem I can't find any mention of.
    I load a large file in pieces of 1 gigabyte. When the offset becomes more than 2 gigabytes (2147483649 bytes, for example) I get the exception ORA-22288: file or LOB operation FILEOPEN failed.
    (The file, about 4G, is already loaded.)
    declare
      p_dir_name      varchar2(200) := 'DIR_BLOB_IN';
      p_file_name     varchar2(200) := 'A vid2.avi';
      v_blob_id       integer       := 194;
      v_blob_loc      blob;
      v_bfile_loc     bfile;
      v_inc_var       integer       := 1073741824;
      v_blob_offset   integer       := 2147483649;
      v_bfile_offset  integer       := 2147483649;  -- offset more than 2G: the procedure doesn't work
      -- v_bfile_offset integer := 1073741825;      -- offset of 2G or less works
      v_err_message   varchar2(200);
      v_bfile_is_open boolean       := false;
      function blob_forupdate (blob_id in integer) return blob
      is
        cursor blob_to_open is
          select blob_in_body
            from tb_blob_in
           where blob_in_id = blob_id
             for update;
        blob_body blob_to_open%rowtype;
      begin
        open blob_to_open;
        fetch blob_to_open into blob_body;
        close blob_to_open;
        return blob_body.blob_in_body;
      end;
    begin
      v_blob_loc  := blob_forupdate(v_blob_id);
      v_bfile_loc := bfilename(p_dir_name, p_file_name);
      dbms_output.put_line('Opening bfile...');
      dbms_lob.fileopen(v_bfile_loc);
      dbms_output.put_line('Bfile is open.');
      dbms_lob.loadfromfile(v_blob_loc, v_bfile_loc, v_inc_var, v_blob_offset, v_bfile_offset);
      commit;
      dbms_output.put_line('Closing bfile...');
      dbms_lob.fileclose(v_bfile_loc);
      dbms_output.put_line('Bfile is closed.');
    exception
      when others then
        v_err_message := substr(sqlerrm, 1, 200);
        dbms_output.put_line('OTHERS Exception ' || sqlerrm);
        v_bfile_is_open := dbms_lob.fileisopen(v_bfile_loc) = 1;
        if v_bfile_is_open then
          dbms_output.put_line('File ''' || p_file_name || ''' in the directory ''' || p_dir_name || ''' is open!');
          dbms_output.put_line('Closing it...');
          dbms_lob.fileclose(v_bfile_loc);
          dbms_output.put_line('The file is closed!');
        else
          dbms_output.put_line('File ''' || p_file_name || ''' in the directory ''' || p_dir_name || ''' is closed!');
        end if;
    end;
    The same story with dbms_lob.read operation:
    declare
      p_dir_name      varchar2(200) := 'DIR_BLOB_IN';
      p_file_name     varchar2(200) := 'A vid2.avi';
      v_bfile_loc     bfile;
      v_read_amount   binary_integer := 10;
      v_read_offset   integer;
      v_read_buffer   varchar2(20);
      v_bfile_is_open boolean := false;
      v_message       varchar2(200);
    begin
      dbms_output.put_line('File ' || p_file_name || ' in ' || p_dir_name);
      v_bfile_loc := bfilename(p_dir_name, p_file_name);
      dbms_lob.fileopen(v_bfile_loc);
      v_message := 'Offset < 2G: ';
      dbms_output.put_line(v_message);
      v_read_offset := 2000000000;  -- offset < 2G
      dbms_lob.read(v_bfile_loc, v_read_amount, v_read_offset, v_read_buffer);
      dbms_output.put_line(v_message || to_char(v_read_buffer));
      v_message := 'Offset = 2G: ';
      dbms_output.put_line(v_message);
      v_read_offset := 2147483648;  -- offset = 2G
      dbms_lob.read(v_bfile_loc, v_read_amount, v_read_offset, v_read_buffer);
      dbms_output.put_line(v_message || to_char(v_read_buffer));
      v_message := 'Offset > 2G: ';
      dbms_output.put_line(v_message);
      v_read_offset := 2147483649;  -- offset > 2G
      dbms_lob.read(v_bfile_loc, v_read_amount, v_read_offset, v_read_buffer);
      dbms_output.put_line(v_message || to_char(v_read_buffer));
      dbms_lob.fileclose(v_bfile_loc);
    exception
      when others then
        dbms_output.put_line(sqlerrm);
        dbms_output.put_line('The Beginning');
        v_bfile_is_open := dbms_lob.fileisopen(v_bfile_loc) = 1;
        dbms_output.put_line('The End');
        if v_bfile_is_open then
          dbms_lob.fileclose(v_bfile_loc);
        end if;
    end;
    -- END read from bfile starting with offset
    I can load this file as a whole and I can get its length. I can load parts of this file of 2G or less into the BLOB with any BLOB offset I want. But when the bfile offset is more than 2 gigabytes, the error occurs.
    I have an Oracle Database server release 10.2.0.1.0 on Microsoft Windows XP (Professional Version 2002, Service Pack 3), and an Oracle Database 10.2.0.2.0 on Microsoft Windows Server 2003 R2 (Enterprise Edition, Service Pack 1). It doesn't work on either of them.
    I have no Oracle Database on Unix or 64-bit Windows, so I can't test on those operating systems.
    Can anybody help me? I have lost all hope of solving this problem.
    Edited by: Shindou on Feb 17, 2009 4:40 AM

    See Mark Drake's (Product Manager Oracle XMLDB, Oracle US) response in this old post: ORA-31167: 64k size limit for XML node
    The "in a future release" reference, means that this boundary 64K / node issue, was lifted in 11g and onwards...
    So first of all, if not only due to performance improvements, I would strongly suggest to upgrade to a database version which is supported by Oracle, see My Oracle Support... In short Oracle 10.2.x was in extended support up to summer 2013, if I am not mistaken and is currently not supported anymore...
    If you are able to able to upgrade, please use the much, much more performing XMLType Securefile Binary XML storage option, instead of the XMLType (Basicfile) CLOB storage option.
    HTH
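    For reference, that storage option is declared at table creation time; a minimal sketch assuming 11g or later (the table name is illustrative):
    create table xml_docs of xmltype
      xmltype store as securefile binary xml;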

  • Datatype which takes more than 5000 characters

    I want to have a column which can take more than 5,000 characters. Which datatype can I use? VARCHAR2 is not working. Please help me.
    Please help me.

    Pradeep_Warangal wrote:
    I want to have a column which can take more than 5000 characters. Which datatype can I have? varchar2 is not working. Please help me.
    A VARCHAR2 can hold up to 32,767 characters in PL/SQL, but a VARCHAR2 column is limited to 4,000 bytes. If you are looking for a SQL datatype, then CLOB is the answer.
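    A minimal sketch of such a column (table and column names are illustrative):
    create table notes_tab (
      id    number primary key,
      notes clob  -- holds far more than 5,000 characters
    );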

  • Export to Text failing when Report height is more than 23 inches

    I have a problem while exporting to Text, when report height is more than 23 inches. It's failing with "an unhandled exception occurred in crw32.exe [3856]"
    Both Crystal Reports 2008 and Crystal Reports XI Release 2 are failing. Export to PDF is working fine.
    I am trying to design a lengthy report and export to text.
    I searched in forum with out any results.
    It would be great if somebody could point me in the right direction. Thank you.
    Regards,
    Raveendra

    Hi Raveendra
    To work around this error message, disable DEP by performing the following steps:
    1. Click Start > Settings > Control Panel.
    2. Double-click 'System'. The "System Properties" dialog box appears.
    3. Click the 'Advanced' tab.
    4. Click 'settings' under 'performance'.
    5. Click the 'Data Execution Prevention' tab
    6. Select 'Turn on DEP for all programs and services except those I select:'.
    7. Click 'Add'.
    8. Browse for the application CRW32.exe. This file is located in:
    C:\Program Files\Business Objects\Crystal Reports 12\
    9. Select CRW32.exe and click 'Apply'.
    10. Click 'OK' to close the 'System Properties' dialog box.
    11. Log on as an Administrator and start Crystal Reports.
    Now try to export report to the text format and observe the behavior.
    Regards
    Girish Bhosale

  • Is there a datatype that allows me to store more than one item at a time

    Hello Everyone,
    Is there a datatype that allows me to store more than one item at a time in a column of a row?
    I have to prepare a monthly account purchase system. Basically, in this system a customer purchases items on credit throughout the month as and when required, and then pays at the end of the month to clear the dues. So I need to search for the item in the inventory and add it to the customer, so that when I want to see all the items the customer purchased in the current month, I can see them. Later I calculate the bill, ask the customer to pay, and flush out the old items the customer has purchased.
    I am having great difficulty preparing the database.
    Please can anyone guide me! I have to finish this project in a week's time.
    Item Database:
    SQL> desc items;
    Name        Null?    Type
    ITEMID               VARCHAR2(10)
    ITEMCODE             VARCHAR2(10)
    ITEMPRICE            NUMBER(10)
    ITEMQUAN             NUMBER(10)
    Customer Database:
    SQL> desc customerdb;
    Name        Null?    Type
    CUSTID               VARCHAR2(10)
    CUSTFNAME            VARCHAR2(20)
    CUSTLNAME            VARCHAR2(20)
    CUSTMOBNO            NUMBER(10)
    CUSTADD              VARCHAR2(20)
    I need to store, for every customer, the items he has purchased in a month. But if I add the items purchased by a customer to the customer table, the entries look like this:
    SQL> select * from customerdb;
    CUSTID  CUSTFNAME  CUSTLNAME  CUSTMOBNO   CUSTADD          ITEM        ITEMPRICE  ITEMQUANTITY
    123     abc        xyz        9988556677  a1/8,hill dales  soap        10         1
    123     abc        xyz        9988556677  "                toothbrush  18         1
    I can create an itempurchase table similar to the above table, without the columns custfname, custlname, custmobno, custadd.
    ItemPurchaseTable:
    CUSTID  ITEM        ITEMPRICE  ITEMQUANTITY
    123     soap        10         1
    123     toothbrush  18         1
    I'll just have it as above. But still, the CUSTID foreign key from customerdb repeats for every row, and I don't know how to solve this issue. Please can anyone help me.
    I need to map one customer to the many items he has purchased in a month.
    Edited by: Yukta Lolap on Oct 8, 2012 10:58 PM
    Edited by: Yukta Lolap on Oct 8, 2012 11:00 PM

    You must seriously read and learn about normalization of tables; it improves your database design (and, depending on the case, may increase or decrease performance) and eases the understanding effort for a new person.
    See the tables below and compare them to the tables you have created:
    create table customers (
      customer_id       number        primary key,
      fname             varchar2(50)  not null,
      mname             varchar2(50),
      lname             varchar2(50)  not null,
      join_date         date          default sysdate not null,
      is_active         char(1)       default 'N',
      constraint chk_active check (is_active in ('Y', 'N')) enable
    );
    create table customer_address (
      address_id        number          primary key,
      customer_id       number          not null,
      line_1            varchar2(100)   not null,
      line_2            varchar2(100),
      line_3            varchar2(100),
      city              varchar2(100)   not null,
      state             varchar2(100)   not null,
      zip_code          number          not null,
      is_active         char(1)         default 'N' not null,
      constraint chk_add_active check (is_active in ('Y', 'N')),
      constraint fk_cust_id foreign key (customer_id) references customers(customer_id)
    );
    create table customer_contact (
      contact_id        number      primary key,
      address_id        number      not null,
      area_code         number,
      landline          number,
      mobile            number,
      is_active         char(1)     default 'N' not null,
      constraint chk_cont_active check (is_active in ('Y', 'N')),
      constraint fk_add_id foreign key (address_id) references customer_address(address_id)
    );
    create table inventory (
      inventory_id          number          primary key,
      item_code             varchar2(25)    not null,
      item_name             varchar2(100)   not null,
      item_price            number(8, 2)    default 0,
      item_quantity         number          default 0,
      constraint chk_item_quant check (item_quantity >= 0)
    );
    You may have to improvise and adapt these tables according to your data and design, adding or removing columns/constraints/foreign keys etc. I created them according to my understanding.
    --Edit: Added purchases table and sample data:
    create table purchases (
      purchase_id           number        primary key,
      purchase_lot          number        unique not null,  --> unique key to map all the purchases made at one time by a customer
      customer_id           number        not null,
      item_code             varchar2(25)  not null,
      item_price            number(8,2)   not null,
      item_quantity         number        not null,
      discount              number(3,1)   default 0,
      purchase_date         date          default sysdate not null,
      payment_mode          varchar2(20),
      constraint fk_pur_cust_id foreign key (customer_id) references customers(customer_id)
    );
    insert into purchases values (1, 1001, 1, 'AZ123', 653, 10, 0, sysdate, 'Cash');
    insert into purchases values (2, 1001, 1, 'AZ124', 225.5, 15, 2, sysdate, 'Cash');
    insert into purchases values (3, 1001, 1, 'AZ125', 90, 20, 3.5, sysdate, 'Cash');
    insert into purchases values (4, 1002, 2, 'AZ126', 111, 10, 0, sysdate, 'Cash');
    insert into purchases values (5, 1002, 2, 'AZ127', 100, 10, 0, sysdate, 'Cash');
    insert into purchases values (6, 1003, 1, 'AZ123', 101.25, 2, 0, sysdate, 'Cash');
    insert into purchases values (7, 1003, 1, 'AZ121', 1000, 1, 0, sysdate, 'Cash');
    Edited by: Purvesh K on Oct 9, 2012 12:22 PM (Added Price Column and modified sample data.)
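    To show what this design buys you, here is a hedged example of the month-end statement query against the sketched tables (it assumes discount holds a percentage):
    select c.customer_id,
           c.fname || ' ' || c.lname as customer_name,
           sum(p.item_price * p.item_quantity * (1 - p.discount / 100)) as amount_due
      from customers c
      join purchases p
        on p.customer_id = c.customer_id
     where p.purchase_date >= trunc(sysdate, 'MM')  -- purchases in the current month
     group by c.customer_id, c.fname, c.lname;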

  • Need to insert into a table where one of the fields (CLOB) has more than 4000 chars

    Dear Gurus,
    As far as I understood, I need to write a function that takes the large text as a parameter and, using bind variables, returns a CLOB containing more than 4,000 characters. I have tried everything I can and I am at my wits' end. Please, can I get specific help on this issue?
    I appreciate your help, Marcelo.

    *** Duplicate Post ***
    Please, Marcelo, use the forum properly. Pick a single group and post there.
    Thank you.

  • WCF Data Service fails when more than 8 properties are in the '$select=' portion

    Hi:
    I am using WCF Data Service and Oracle
    EF Provider is ODAC11.2 Release 4
    WCF Data Service fails when more than 8 properties are specified in the '$select=' portion of the URI.
    Here is my code:
    var q = from c in this.ctx.SALESORDER_ITEM
            select new
            {
                c.SORDERDETAILID,
                c.IID, c.DMFLAG, c.OWNERID, c.SKUID, c.SKU_ID, c.TRADENO, c.SOURCEID, c.SORDERID
            };
    exception:
    InvalidOperationException: An error occurred for this query during batch execution. See the inner exception for details.
    The inner exception is null, but the DataServiceClientException states: "Value cannot be null. Parameter name: value".
    the exception is thrown in base.OnStartProcessingRequest(args) method (overridden).
    Here is the call stack as well:
    at System.Data.Services.WebUtil.CheckArgumentNull[T](T value, String parameterName)
    at System.Data.Services.Internal.ProjectedWrapper.set_PropertyNameList(String value)
    at lambda_method(Closure , Shaper )
    at System.Data.Common.Internal.Materialization.Coordinator`1.ReadNextElement(Shaper shaper)
    at System.Data.Common.Internal.Materialization.Shaper`1.SimpleEnumerator.MoveNext()
    at System.Data.Services.Internal.ProjectedWrapper.EnumeratorWrapper.MoveNext()
    at System.Data.Services.DataService`1.SerializeResponseBody(RequestDescription description, IDataService dataService)
    at System.Data.Services.DataService`1.HandleNonBatchRequest(RequestDescription description)
    at System.Data.Services.DataService`1.HandleRequest()
    Is there a max number of properties in the $select statement?
    I think maybe it is the Oracle provider's problem, but I don't know how to debug it. Can anyone help me?
    Any help is greatly appreciated

    I believe the null/empty string issue is unrelated to the 8 column issue, at least for ODP.NET. For example, let's take the original query in the OBE:
    http://.../yoursvcfile.svc/EMPLOYEES?$select=EMPLOYEE_ID,FIRST_NAME,LAST_NAME,SALARY,DEPARTMENT_ID,DEPARTMENT,EMAIL,PHONE_NUMBER,MANAGER_ID
    Let's make all the columns selected not nullable. You can do this with the Oracle Dev Tools. Specifically, PHONE_NUMBER and FIRST_NAME are the only nullable fields. I make them non-nullable and re-run the query and the same error occurs. Thus, these values should never be made null. Moreover, in all 107 rows, none of these row values consist of empty strings anyway.
    Looking into the problem further, WCF DS is calling methods in the System.Data.Services.Internal namespace.
    http://msdn.microsoft.com/en-us/library/system.data.services.internal.aspx
    Specifically, we see your issue when ProjectedWrapperMany is called. You will notice that ProjectedWrapper0, ProjectedWrapper1, ..., ProjectedWrapper8 are also present in the same namespace. As soon as the number of columns exceeds 8, ProjectedWrapperMany is called and we see the error. We're going to ask MS to help analyze the issue, since this is an internal .NET API being called.

  • Failed to upgrade more than one table at the same time

    Hi
    In the Deployment Manager, I failed to upgrade more than one table at the same time.
    I highlighted 4 tables, set the default action to Upgrade, and clicked File/Generate Deploy. It passed the code generation step, but when I clicked Deploy they all failed, with no error message.
    They all succeed when I upgrade them one by one. Does anyone have any idea about this?
    For the known reason, we cannot do the deployment with the 'upgrade' action through OMBPlus; we can only do that interactively through the OWB Client. I can't imagine asking our production-side DBA to upgrade 80 tables one by one. Otherwise I would have to use the generated scripts to do the upgrade, which results in no deploy status being updated in OWB. Any help will be much appreciated.
    The version I'm using is OWB 10g.
    Thanks,
    Daming

    Hi
    First of all, Patrick's solution doesn't work for me. I didn't do cloning, and there is no problem when checking the WB tables.
    Second, I think your solution is only good for the development environment, just to get tables upgraded via the Deployment Manager. In most cases, when you do a new release on the PROD environment, you just exp/imp the MDL file from DEV to PROD, and any development is not recommended on PROD. Your approach manually changes the DB, then edits the mapping to do the reconcile, and then deploys. Surely you can do that if you have full control of your PROD side. However, in my situation I have no access to PROD for security reasons, and a DBA operator of the client is responsible for implementing my migration process on PROD by himself.

  • I used Scripting Bridge to add a movie bigger than 5GB; after exactly two minutes iTunes returned a failure, but the file was actually added to the iTunes library successfully. The copying takes more than 5 minutes to complete. Why?

    I used Scripting Bridge to add a movie bigger than 5GB; exactly two minutes later iTunes returned a failure, but the file was actually added to the iTunes library successfully. The copying takes more than 5 minutes to complete. Why did the iTunes Scripting Bridge return a failure when it actually succeeded? It occurred exactly 2 minutes after submitting the request to Scripting Bridge. Is this 2 minutes related to the Apple event timeout? If it is, how do I get around this problem? Thanks.

    I can tell you that this is some of the absolutely worst customer service I have ever dealt with. I found out from a store employee that when they are really busy with calls, they have third party companies taking overflow calls. One of those companies is Xerox. What can a Xerox call center rep possibly be able to authorize on a Verizon account?  I'm Sure there is a ton of misinformation out there due to this. They don't note the accounts properly or so everyone can see them. I have been transferred before and have asked if they work for Verizon or a third party also and was refused an answer so, apparently they aren't required to disclose that information. I spent a long time in the store on my last visit and it's not just customers that get the runaround. It happens to the store employees as well and it's beyond frustrating.
