Oracle: Expanded non LONG bind data supplied after actual LONG or LOB column

I am getting this error message when I try to insert a CLOB into an Oracle table:
ORA-24816: Expanded non LONG bind data supplied after actual LONG or LOB column. This error message is somewhat misleading. It suggests that I should reorder the list of columns so that the LONG RAW column comes last. So I reordered the list to make the LONG RAW column come at the end, but I was still getting this error message. Eventually I found out that the data that needs to be inserted into the CLOB is what causes this error.
Here is my code for inserting the CLOB:
                    byte[] bytes1 = .....
                    statement.setAsciiStream(index, new ByteArrayInputStream(bytes1), bytes1.length);
I don't know what is wrong with this code. I have been using it for a while and now it is throwing an exception:
     at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
     at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
     at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
     at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:743)
     at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:213)
     at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:952)
     at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1160)
     at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3285)
     at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3390)
I am using JDK 5 and the Oracle 10g JDBC driver.
Please help me.

I have these columns,
ROW_DESC - Char
Table_id - Char
Blob_desc - Char
Blob1 - Blob
LOB_DATE - Date
CLOB1_DESC - Char
CLOB1 - Clob
CHAR_25_Col - Char
VBIN_400_Col - Long Raw
What I noticed is that the column causing the problem is not actually the LONG RAW column; "CLOB1" is the one causing it. The database is configured as Unicode (AL32UTF8). When I tested against another, non-Unicode database, it works fine with the same table description. So somehow it is unable to bind large Unicode CLOB data.
I ran into this problem while I was inserting data from the source table to the target table.
Here is how I read from the result set:
                    InputStream inputStream = resultSet.getAsciiStream(index + 1);
                    if (inputStream == null) {
                        return null;
                    }
                    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
                    byte[] buffer = new byte[1024];
                    int length;
                    do {
                        length = inputStream.read(buffer);
                        if (length > 0) {
                            outputStream.write(buffer, 0, length);
                        }
                    } while (length > 0);
                    byte[] resultBytes = outputStream.toByteArray();
Here is how I bind the parameter:
                    statement.setAsciiStream(index, new ByteArrayInputStream(resultBytes), resultBytes.length);
If I use "((OraclePreparedStatement) statement).setStringForClob", then it works, but it will impact performance, because I need to convert the CLOB to a String.
Is there any way to do this without converting to a String object?
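One idea I had, though I have not verified that the Oracle driver treats a character stream any differently from an ASCII stream for CLOB columns, is to bind a Reader via setCharacterStream instead. A minimal sketch (assuming the bytes hold single-byte ASCII text, so the character count equals the byte count; needs java.io.Reader and java.io.InputStreamReader):
                    // Sketch only: bind the CLOB as a character stream.
                    // resultBytes is assumed to hold single-byte ASCII text,
                    // so resultBytes.length is also the character count.
                    Reader reader = new InputStreamReader(
                            new ByteArrayInputStream(resultBytes), "US-ASCII");
                    statement.setCharacterStream(index, reader, resultBytes.length);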
Thanks.

Similar Messages

  • Non-compressed aggregates data lost after Delete Overlapping Requests?

    Hi,
    I am going to setup the following scenario:
    The cube is receiving the delta load from infosource 1 and the full load from infosource 2. Aggregates are created and initially filled for the cube.
    Now, the flow in the process chain should be:
    Delete indexes
    Load delta
    Load full
    Create indexes
    Delete overlapping requests
    Roll-up
    Compress
    In the Management of the Cube, on the Roll-up tab, "Compress After Roll-up" is deactivated, so compression should take place only when the cube data is compressed (but I don't know whether this influences how the roll-up is done via the Adjust process type in a process chain - will the deselected checkbox really avoid compression of aggregates after roll-up, or does the checkbox influence the manual start of roll-up only?).
    Nevertheless, let's assume here, that aggregates will not be compressed until the compression will run on the cube. The Collapse process in the process chain is parametrized so that the newest 10 requests are not going to be compressed.
    Therefore, I expect that after the compression it should look like this:
    RNR | Compressed in cube | Compressed in Aggr | Rollup | Update
    110 |                    |                    | X      | F
    109 |                    |                    | X      | D
    108 |                    |                    | X      | D
    107 |                    |                    | X      | D
    106 |                    |                    | X      | D
    105 |                    |                    | X      | D
    104 |                    |                    | X      | D
    103 |                    |                    | X      | D
    102 |                    |                    | X      | D
    101 |                    |                    | X      | D
    100 | X                  | X                  | X      | D
    099 | X                  | X                  | X      | D
    098 | X                  | X                  | X      | D
    If you ask why the ten newest requests are not compressed: it is for the sake of being able to delete the Full load by Req-ID (yes, I know that 10 is too many...).
    My question is:
    What will happen during the next process chain run during Delete Overlapping Requests if new Full with RNR 111 will already be loaded?
    Some BW people say that using Delete Overlapping Requests will cause the aggregates to be deactivated and rebuilt. I cannot afford this because of the long runtime needed to rebuild the aggregates from scratch. But I still think that Delete Overlapping should work the same way as deletion of similar requests (based on the InfoPackage setup) does when running on non-compressed requests. Since the newest 10 requests are not compressed and the only overlapping request is the Full (last load) with RNR 111, I assume that it should simply delete the RNR 110 data from the aggregate by Req-ID and then do a regular roll-up of RNR 111, instead of rebuilding the aggregates - am I right? Please CONFIRM or DENY. Thanks! If Delete Overlapping Requests would still lead to rebuilding of the aggregates, then the only option would be to set up the InfoPackage to delete similar requests and remove Delete Overlapping Requests from the process chain.
    I hope that my question is clear
    Any answer is highly appreciated.
    Thanks
    Michal

    Hi,
    If I understand your question correctly:
    The "Compress After Roll-up" option applies to the aggregates of the cube, not to the cube itself. When it is selected, aggregates are compressed only once roll-up has been done on them; this does not affect compression of the cube itself, i.e. moving the data from the F to the E fact table.
    If it is deselected, cube compression is likewise unaffected, but the system will not check the roll-up status of the aggregates before compressing them.
    Will the deselected checkbox really avoid compression of aggregates after roll-up OR does the checkbox influence the manual start of roll-up only?
    The checkbox has no influence even on a manual start of roll-up; compression of aggregates will not start automatically after roll-up. It has to be done together with the compression of the cube itself.
    For the second question: I believe the aggregates will be deactivated when deleting an overlapping request if that particular request has been rolled up. The same happens with manual deletion: if you need to delete a request that has been rolled up and whose aggregates are compressed, you have to deactivate the aggregates and refill them.
    In detail: as long as a request is not compressed in the cube and the aggregates are not compressed, it is a normal request, and it can be deleted without deactivating the aggregates.
    In your case I think there is no need to remove the step from the chain.
    Correct me if you find any issue.
    Regards,

  • I have a Problem in print the long description data through EDI.

    I have a problem printing long description data through EDI. We want to print the long description data through EDI, but it is not handling our very long description data. Here is an example: we can print the first two lines into the EDI output, but it is failing to print the text below:
    <B>EPSON, TM-U590 Series: </B> Reliable 88 column slip printer. Operator friendly dot matrix impact printing. Ideal for hotel, bank, restaurant and many more applications .<br><BR> Includes: Printer, Black ribbon & a Connect-It Interface. Power Supply & interface cable sold separately. All printers are RoHS compliant & have a standard 1 year depot warranty. <BR> <BR>
       ------------------------------------- Failing to print these lines ------------------------------------------------------------------
    <b>COLORS:</b> Epson Cool White (ECW) Only<BR><BR><B><FONT COLOR=#FF0000#>Click links below for helpful Information</FONT><b><br><table border="0" bordercolor="" style="" width="100%" cellpadding="5" cellspacing="5"><tr><td><FONT SIZE="2"><CENTER><A HREF="www.sample.com"f.2605" target="_blank"><b>Spec Sheet</b></a></CENTER></FONT></td><td><FONT SIZE="2"><CENTER><A HREF="www.sample.com"f.2606" target="_blank"><b>MSRP Price List</b></a></CENTER></FONT></td><TD><FONT SIZE="2"><CENTER><A HREF="www.sample.com"f.2868" target="_blank"><b>Product Information Guide</b></a></CENTER></FONT></TD></TR><TR><td><FONT SIZE="2" COLOR=#0000FF><B>Warranty Information</B></FONT></td></tr><Tr><td><FONT SIZE="2"><CENTER><A HREF="www.sample.com"f.2554" target="_blank"><b>Depot Warranty</b></a></CENTER></FONT></td><td><FONT SIZE="2"><CENTER><A HREF="www.sample.com"f.2555" target="_blank"><b>Spare In the Air Warranty</b></a></CENTER></FONT></td><td></td></tr></table><br>
    Please provide some useful thoughts on this EDI issue.
    Thanks
    Ameer

    Try using the FM:
    ENQUE_READ2
    passing the following values:
    GNAME --> VBAK (Sales Order header table)
    GARG  --> The lock argument
                       (This will be a combination of client number and Sales Order No.,
                        e.g. '3001210000054', where the first three digits, i.e. 300, are the client no.
                        and 1210000054 is the sales order no.)
    Regards,
    Firoz.

  • Employee date change after pay roll run

    Hi SAP experts,
    I have an issue. One employee joined on the 10th of May, but in the system it was entered as the 13th of May, and payroll was processed for that employee.
    Can we change the employee's joining date now? If yes, how can we do it?
    Please help me, it's urgent.
    Thanks in advance,
    Shjish Khan

    Hi Shajish,
    There are three scenarios when you may need to change hiring date:
    1) After payroll is run - when hiring date is before actual Hiring date.
    2) After payroll is run when hiring date is after actual Hiring Date.
    3) Before the payroll is run.
    1) PA30 - copy Actions infotype - action type 'Incorrect entry' - save and come out. Then PA30 - copy Actions infotype - action type 'Correct entry' - now correct your entries and save; your date is changed.
    2) PA30 - Utilities - Change Payroll Status - delete the 'Accounted to' field, save and come out - then again Utilities - Change Entry/Leaving Date - correct the hiring date - save and come out.
    3) PA30 - Utilities - Change Entry/Leaving Date - change your date and save.
    Try this also:
    Go to PA30
    Enter personnel number.
    Select action infotype.
    Select subtype hire mini master record.
    Change to new date.
    Under reasons, select new position.
    Click save.
    Please refer the below links:
    http://help.sap.com/saphelp_470/helpdata/en/48/35c5c34abf11d18a0f0000e816ae6e/content.htm
    http://help.sap.com/saphelp_470/helpdata/en/48/35c5c34abf11d18a0f0000e816ae6e/frameset.htm
    http://help.sap.com/saphelp_46c/helpdata/en/1b/3e5c16470f11d295a100a0c9308b52/frameset.htm
    Also we have several threads in the forum related to this.
    http://scn.sap.com/thread/1185695
    http://scn.sap.com/thread/2012142
    http://scn.sap.com/thread/1466306
    Thanks,
    Madhav

  • Issue with Oracle LONG RAW data type

    Hi All,
    I am facing some issues with Oracle LONG RAW DATA Type.
    We are using an Oracle 9iR2 database.
    I have a table with a LONG RAW column and I need to transfer its data into another table with a LONG RAW column.
    When I tried using an INSERT INTO ... SELECT * command (or CREATE TABLE ... AS SELECT *), it throws ORA-00997: illegal use of LONG datatype.
    I have gone through some docs and found that we should not use LONG RAW with these operations.
    So I did the basic PL/SQL block given below and I was able to insert most of the records. But for records where the LONG RAW value is around 70 KB, the insert is failing.
    I tried to convert LONG RAW to BLOB, and again, for the records where the LONG RAW is big, I am getting an ORA-06502: PL/SQL: numeric or value error.
    Appreciate if anyone can help me out here.
    DECLARE
    Y LONG RAW;
    BEGIN
    FOR REC IN (SELECT * FROM TRU_INT.TERRITORY WHERE TERRITORYSEQ = 488480 ORDER BY TERRITORYSEQ) LOOP
    INSERT INTO TRU_CMP.TERRITORY
    (BUSINESSUNITSEQ, COMPELEMENTLIFETIMEID, COMPONENTIMAGE, DESCRIPTION, ENDPERIOD, GENERATION, NAME, STARTPERIOD, TERRITORYSEQ)
    VALUES
    (REC.BUSINESSUNITSEQ, REC.COMPELEMENTLIFETIMEID, REC.COMPONENTIMAGE, REC.DESCRIPTION, REC.ENDPERIOD, REC.GENERATION, REC.NAME, REC.STARTPERIOD, REC.TERRITORYSEQ);
    END LOOP;
    END;
    /

    Maddy wrote:
    I am facing some issues with Oracle LONG RAW data type. ... I tried to convert LONG RAW to BLOB and again for the record where the LONG RAW is big in size I am getting (ORA-06502: PL/SQL: numeric or value error) error.
    The COPY command below might work:
    12:06:23 SQL> help copy
    COPY
    Copies data from a query to a table in the same or another
    database. COPY supports CHAR, DATE, LONG, NUMBER and VARCHAR2.
    COPY {FROM database | TO database | FROM database TO database}
                {APPEND|CREATE|INSERT|REPLACE} destination_table
                [(column, column, column, ...)] USING query
    where database has the following syntax:
         username[/password]@connect_identifier
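    Alternatively, if COPY is not an option, the LONG RAW could be streamed through JDBC, which sidesteps the PL/SQL restrictions on LONG. (The PL/SQL block above probably fails on the ~70 KB rows because a PL/SQL LONG RAW variable is limited to 32760 bytes.) Below is an untested sketch: the connect string and credentials are placeholders, and the column list is cut down to two columns for brevity - the real statements would carry all the columns from your post.
        import java.io.ByteArrayInputStream;
        import java.io.ByteArrayOutputStream;
        import java.io.InputStream;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class CopyLongRaw {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details.
                Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@host:1521:orcl", "user", "password");
                conn.setAutoCommit(false);
                // Keep the LONG RAW column last in the select list so it can
                // be read as a stream after the other columns.
                PreparedStatement select = conn.prepareStatement(
                        "SELECT TERRITORYSEQ, COMPONENTIMAGE FROM TRU_INT.TERRITORY");
                PreparedStatement insert = conn.prepareStatement(
                        "INSERT INTO TRU_CMP.TERRITORY (TERRITORYSEQ, COMPONENTIMAGE) "
                        + "VALUES (?, ?)");
                ResultSet rs = select.executeQuery();
                while (rs.next()) {
                    long seq = rs.getLong(1);
                    // LONG RAW is read as a stream; buffer it fully, then re-bind.
                    InputStream in = rs.getBinaryStream(2);
                    ByteArrayOutputStream buf = new ByteArrayOutputStream();
                    byte[] chunk = new byte[8192];
                    int n;
                    while (in != null && (n = in.read(chunk)) > 0) {
                        buf.write(chunk, 0, n);
                    }
                    byte[] data = buf.toByteArray();
                    insert.setLong(1, seq);
                    insert.setBinaryStream(2, new ByteArrayInputStream(data), data.length);
                    insert.executeUpdate();
                }
                conn.commit();
                conn.close();
            }
        }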

  • Import/Export of LONG RAW data to/from Oracle 7.3.3

    Is it possible to export a table with LONG RAW data from one instance and then import the dump file into another instance? Does Oracle 7.3.3 support this, and if yes, how can I verify that the LONG RAW data was successfully imported? If possible, please also provide me with some sample code. Thanks a lot.

    No, you do not have to run catexp7; export files are UPWARD compatible. A 10g import can read a 7 export (however, a 7 import could NOT understand a 10g export).

  • After updating my 4S to iOS 5.1.1 last night, none of my data applications are working today, nor Siri, iMessage or anything of the like. I'm at a loss as to what might have caused it. Is anyone else having this issue/has anyone found a fix?

    After updating my 4S to iOS 5.1.1 last night, none of my data applications are working today, nor Siri, iMessage or anything of the like. I'm at a loss as to what might have caused it. Is anyone else having this issue/has anyone found a fix? Everything seems to work fine if I connect to wireless, but if I'm not connected to it then nothing is working. My service provider says everything should be working fine, I still have the 3G symbol and full bars.

    Somebody clearly hasn't read the User Guide.
    Basic troubleshooting steps are restart, reset, restore from backup, restore as NEW.
    If you have tried ALL of these steps and you're still having problems, then you need to bring your phone into Apple for evaluation.

  • JDBC THEME-MAPVIEWER-05517 Request string is too long for Oracle Maps' non-AJAX remoting

    hi,
    if I add a quite complex query as a dynamic JDBC theme, I get this error:
    [MAPVIEWER-05517] Request string is too long for Oracle Maps' non-AJAX remoting.
    -why? I am using Oracle Maps JS API so it is AJAX remoting, or not?
    -what is the limit of a JDBC theme definition?
    regards,
    Brano

    hi,
    yes, having a look at MVMapView.enableXMLHTTP(true) in the doc explains a lot...
    thanks,
    Brano

  • Confusing result between 'to_date' and 'long to date' in oracle query

    I have a table called "subscription" as below.
    desc subscription;
    Name Null Type
    SUBSCRIPTION_ID NOT NULL NUMBER(38)
    EXPIRATIONDATE DATE
    And output of a query as below.
    select subscription_id,expirationdate from subscription where subscription_id = 41919;
    SUBSCRIPTION_ID EXPIRATIONDATE
    41919 18-JAN-14 13:45:56
    And I am trying to execute following query in different ways.
    1st Query:
    select s.subscription_id from subscription$active s where s.expirationdate - (116/24) between TO_DATE('13-JAN-14 11:38:22', 'dd/mm/yyyy hh24:mi:ss') and TO_DATE('13-JAN-14 18:30:00', 'dd/mm/yyyy hh24:mi:ss') and s.subscription_id=41919
    Output:
    SUBSCRIPTION_ID
    41919
    2nd Query:
    select s.subscription_id from subscription$active s where s.expirationdate - (116/24) between (trunc(1389613102220 / (1000), 0) / (24 * 60 * 60)) + to_date('01/01/1970','mm/dd/yyyy') and (trunc(1389637800000 / (1000), 0) / (24 * 60 * 60)) + to_date('01/01/1970','mm/dd/yyyy') and s.subscription_id=41919
    Output:
    SUBSCRIPTION_ID
    Both of the above WHERE clauses should be equivalent. The first uses "to_date" and the second converts a long (epoch milliseconds) to a date. But when I look at the output, the first query returns a row and the second does not return any result. I couldn't find out what difference the 'long to date' conversion makes here.
    The conversion from long to date also looks correct:
    select (trunc(1389613102220 / (1000), 0) / (24 * 60 * 60)) + to_date('01/01/1970','mm/dd/yyyy') from dual
    Output:
    (TRUNC(1389613102220/(1000),0)/(24*60*60))+TO_DATE('01/01/1970','MM/DD/YYYY') -------------------------
    13-JAN-14 11:38:22
    And
    select (trunc(1389637800000 / (1000), 0) / (24 * 60 * 60)) + to_date('01/01/1970','mm/dd/yyyy') from dual
    Output:
    (TRUNC(1389637800000/(1000),0)/(24*60*60))+TO_DATE('01/01/1970','MM/DD/YYYY') -------------------------
    13-JAN-14 18:30:00
    Can someone help me to understand the difference between the 1st and 2nd query ?

    Hi,
    Not sure what exactly you are asking for. What is the requirement?
    Just formatted for better readability:
    -->-- Query 1
    SELECT
      s.subscription_id
    FROM subscription$active s
    WHERE
      s.expirationdate - (116/24) BETWEEN
           to_date('13-JAN-14 11:38:22', 'dd/mm/yyyy hh24:mi:ss')
           AND
           to_date('13-JAN-14 18:30:00', 'dd/mm/yyyy hh24:mi:ss')
      AND s.subscription_id=41919;
    -->-- Query 2
    SELECT
      s.subscription_id
    FROM subscription$active s
    WHERE
      s.expirationdate - (116/24) BETWEEN
           (trunc(1389613102220 / (1000), 0) / (24 * 60 * 60)) + to_date('01/01/1970','mm/dd/yyyy')
           AND
           (trunc(1389637800000 / (1000), 0) / (24 * 60 * 60)) + to_date('01/01/1970','mm/dd/yyyy')
      AND s.subscription_id=41919;
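    One thing that might be worth checking (just a guess): with a format mask of 'dd/mm/yyyy', a two-digit year such as '14' is taken literally as year 0014 rather than 2014, so the bounds of the two queries may fall in different centuries even though both display as '13-JAN-14' under the default DD-MON-RR format. A small JDBC probe (a sketch; assumes java.sql imports and an open Connection conn) that prints each lower bound with an explicit four-digit year would confirm or rule this out:
        // Print the lower bound of each query with a four-digit year.
        Statement st = conn.createStatement();
        ResultSet r = st.executeQuery(
            "SELECT TO_CHAR(TO_DATE('13-JAN-14 11:38:22', 'dd/mm/yyyy hh24:mi:ss'),"
            + "             'YYYY-MM-DD HH24:MI:SS') a,"
            + "       TO_CHAR((TRUNC(1389613102220 / 1000, 0) / (24 * 60 * 60))"
            + "             + TO_DATE('01/01/1970', 'mm/dd/yyyy'),"
            + "             'YYYY-MM-DD HH24:MI:SS') b"
            + " FROM dual");
        if (r.next()) {
            System.out.println(r.getString("a") + " vs " + r.getString("b"));
        }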

  • [ORACLE 10G] Extract Long Raw data to disc

    Hi All,
    I want to extract a column which contains LONG RAW data (PDF files) into files on my disk, but I don't know how to do it in SQL or PL/SQL. Any help???

    Or maybe just an alter table statement will do it for you...
    SQL> create table xx (x long raw);
    Table created.
    SQL> desc xx;
    Name                                                                   Null?    Type
    X                                                                               LONG RAW
    SQL> alter table xx modify (x blob);
    Table altered.
    SQL> desc xx;
    Name                                                                   Null?    Type
    X                                                                               BLOB
    SQL>
    I've not really used LONG RAWs before, but apparently (according to sources on the net) the simple ALTER TABLE statement above will do the job.
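    Once the column is a BLOB, writing each row out to a file is straightforward from JDBC. A rough sketch (table and column names follow the example above; the connection details and file-naming scheme are made up):
        import java.io.FileOutputStream;
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class DumpBlobs {
            public static void main(String[] args) throws Exception {
                Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@host:1521:orcl", "user", "password");
                Statement stmt = conn.createStatement();
                ResultSet rs = stmt.executeQuery("SELECT X FROM XX");
                int row = 0;
                while (rs.next()) {
                    InputStream in = rs.getBinaryStream(1);
                    if (in == null) {
                        continue; // skip NULL values
                    }
                    // Hypothetical naming scheme: one numbered file per row.
                    OutputStream out = new FileOutputStream("blob_" + (++row) + ".pdf");
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) > 0) {
                        out.write(buf, 0, n);
                    }
                    out.close();
                    in.close();
                }
                conn.close();
            }
        }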

  • Re:read  long raw  data in Oracle  and write to a file

    So we need a mechanism to read the data from the LONG RAW column and write it out as an actual file.
    Regards

    I've branched your post off from the thread you posted on ({thread:id=620757}) as it was an old thread.
    When you have a question of your own, you should always start a new thread rather than tag onto old ones.
    Please read: {message:id=9360002} and post some relevant information so that people can help you.

  • How do you cyclicly trigger data acquisition after n pulses counted

    Hello all, please forgive my ignorance because I am very new to LabVIEW and data acquisition. I am working on a system which is going to scan an object and produce an image. The gimbal that I am scanning the object with is an X-Y type of gimbal with stepper motors on each axis. The stepper motor controller outputs pulses in real time to indicate the real-time position of the gimbal in each axis. What I need to be able to do is count pulses from the stepper motor controller and then output a trigger pulse to trigger the data acquisition in a buffered mode when N pulses have passed, and then generate another pulse to stop the acquisition after another N pulses have passed. The controller puts out 10,000 pulses per degree of travel. The velocity that I am traveling at is 20 degrees per second, so timing here is really important. I need to be able to utilize the speed of the DAQ card and not so much the speed of the computer to iterate through a loop.
    I have tried using the count-down feature in the NI-DAQmx library, but it does not appear to be useful to me. I set it up and it will count down, but once it hits zero it continues to count down. My expectation was that it would either restart the down count or stop. I was expecting some sort of trigger event to take place once the count reached the zero point, but I did not observe any sort of event taking place. Once again, my knowledge and background are really limited, so I could be missing something really fundamental here.
    I have tried using some of the legacy functions which would enable me to do exactly what I want to do, but they do not seem to work with my DAQ card. I have an NI PCI-6122, and if anyone has any knowledge on how to get this type of card to talk to some of the non-MX functions, I would be more than happy to hear how. It seems to me, though, that I am limited to the MX functions, which I cannot really translate into what I have learned I can do with the legacy functions. I thank you all once again for taking the time to read this and I will appreciate any and all responses that can be helpful.
    ~ Randy Brown

    I have run a few more tests and obtained some data per the request of a telephone support engineer. I have some scope screen shots that might shed some light on what is going on; I will provide a brief description of what I discovered before I show the resulting data. I discovered that using the number of up ticks and down ticks suggested does not yield the right timing for the clock pulses that I will need for triggering my data acquisition. When I use 55 low ticks and 2 high ticks as my settings, I end up getting a pulse every 32 pulses read on the PFI line. I get the same results when I interchange the numbers; for example, when I set the program up for 2 low ticks and 55 high ticks, I get the same resulting one clock pulse per 32 pulses on the PFI line. I started playing with the numbers and came to find that I was able to generate a pulse every 57 pulses: I set the high ticks to 2 and the low ticks to 71, and once I did that it generated a pulse every 57 pulses in. The results are not ideal though; a number of things happen within the first second of operation. In one mode of operation the clock output pulse latches after a few pulses are generated. In another mode it would generate N pulses and then just stop, even though the program was still running. The results I am getting are not reproducible when it comes to the long-term operation of the clock pulse generation, but the bottom line is that no matter what happens, the end result after 1 second is not what is expected. Below are screen shots of my program and also scope shots for the respective modes of operation.
    Front End interface
    Block Diagram
    55 High ticks and 2 low ticks results
    55 low ticks and 2 high ticks results
    77 Low ticks and 2 high ticks results
    Undesired Latch after 1 second of operation
    N number of pulses generated and stopped while program was still running
    It appears that the long-term operation (and by long term I mean after a second) is intermittent; it either latches high or low after a random number of pulses are generated on the clock output. I am not sure why this is happening. The one setup I came up with that generates a pulse every 57 pulses is not going to work for my configuration; I think I would have to reduce the 71 to 69 in order to compensate for the two pulses that happen while the output pulse of the clock is high. To be honest, I have no idea what is going on and I am starting to wonder about my DAQ card. Since it is not really reproducing the same results, I am starting to think maybe something is wrong with it. Another possibility is that it might be the BNC-2110 that I am using. I will try another one tomorrow and see if this problem persists. I am leaving now so I won't be able to try that as of yet, but I wanted to pass this info and data along so that maybe you will notice something and be able to lead me in the right direction. Thank you again for all of your help.
    ~ Randy Brown

  • Java.sql.SQLException:ORA-01801:date format is too long for internal buffer

    Hi,
    I am getting the following exception when I try to insert data into a table through a stored procedure.
    oracle.apps.fnd.framework.OAException: java.sql.SQLException: ORA-01801: date format is too long for internal buffer
    When I execute this stored procedure from an anonymous block it executes successfully, but when I use an OracleCallableStatement to execute the procedure I get this error.
    Please let me know how to resolve this error.
    Is this error something to do with the Database Configuration ?
    Thanks & Regards
    Meenal

    I don't know if this will help, but we were getting this error in several of the standard OA framework pages and after much pain and aggravation it was determined that visiting the Sourcing Home Page was changing the timezone. For most pages this just changed the timezone that dates were displayed in, but some had this ORA-01801 error and some others had an ORA-01830 error (date format picture ends before converting entire input string). Unfortunately, if you are not using Sourcing at your site, this probably won't help, but if you are, have a look at patch # 4519817.
    Note that to get the same error, try the following query (I got this error in 9.2.0.5 and 10.1.0.3):
    select to_date('10-Mar-2006', 'DD-Mon-YYYY________________________________________________HH24:MI:SS') from dual;
    It appears that you can't have a date format that is longer than 68 characters.
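    As a side note: if the procedure's DATE parameters are being bound as formatted strings, binding them as java.sql.Timestamp takes NLS date-format handling out of the picture entirely. A sketch (the procedure name and signature here are made up; assumes an open Connection conn and java.sql imports):
        // Hypothetical procedure: my_pkg.insert_row(p_id NUMBER, p_date DATE).
        CallableStatement cs = conn.prepareCall("{ call my_pkg.insert_row(?, ?) }");
        cs.setInt(1, 42);
        // No date-format picture is involved; the driver sends a native DATE.
        cs.setTimestamp(2, new java.sql.Timestamp(System.currentTimeMillis()));
        cs.execute();
        cs.close();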

  • Sql Developer MSSql Migration, Online data move, image to long raw mapping

    Hello,
    my task is to migrate a MSSql Server Database to Oracle 10.2.0.4 with SQL Developer Version 2.1.1.64
    and JTDS JDBC Driver oracle.sqldeveloper.thirdparty.drivers.sqlserver 11.1.1.58.17.
    Capture, convert and online data move work, except for the data type mapping from MSSql image to Oracle LONG RAW.
    I need to map MSSql image to Oracle BLOB to achieve the online data move!
    Is an SQL Developer online migration of MSSql image to Oracle LONG RAW possible?
    I know that LONG RAW is outdated, but I'm the DBA and not responsible for the software part.
    What options should I try? An offline data move? Can the MS Bulk Copy Program export image data and SQL*Loader import it into LONG RAW?
    Any experience or tips?
    Thanks
    Michael

    Hi user12132314,
    I do not see any 'long raw' option in our current SQLServer 2005 data type mapping. (RAW is a separate 2000-byte data type.)
    The usual route for SQL Developer is to go to BLOB.
    Online - this is simple and automated.
    Offline - this actually goes via hex output in bcp to SQL*Loader, read in as CLOB which is then encoded to BLOB. (This is automated.)
    (The online 'Copy to Oracle' option, right click on a table, may be of use if advanced features not included in the select, such as defaults, are not required; it copies over a select * from the table, including BLOBs.)
    Moving from BLOB to LONG RAW after migration can be done (easily enough if the Oracle database is >= 10g, so the stream does not have to be in smaller 'chunks'):
    http://asktom.oracle.com/pls/asktom/f?p=100:11:3938270166267830::::P11_QUESTION_ID:702825000306
    I can look into this if you want to go down this route.
    Note however that long raw has drawbacks and blob is preferred, see
    Oracle® Database SQL Language Reference
    11g Release 2 (11.2)
    Part Number E17118-04
    Data Types
    LONG Data Type
    for example one restriction is:
    A table can contain only one LONG column.
    -Turloch
    SQLDeveloper Team

  • How to fill or bind data using Value Node in Tree Node

    Hi Gurus,
    Can anybody help me with how to fill or bind data using a value node in a tree node view? I know how to create the tree node, but I am not able to show values on the UI in the tree view.
    Has anybody done this? If so, can you please let me know how?
    Thanks in advance.
    Madhusudan

    continued...
    TRY.
              lv_child = me->node_factory->get_proxy(
                        iv_bo = lv_value_node
                        iv_parent_proxy = me
                        iv_proxy_type = 'ZL_CLASS_CN02' ).
              lv_child->is_leaf = 'X'.
              APPEND lv_child TO rt_children.
            CATCH cx_sy_move_cast_error cx_sy_ref_is_initial.
          ENDTRY.
      In the above code, iv_bo (lv_value_node) will be the actual object of the second node, or leaf node here, which will have the same structure as the parent node along with the data. After/before this, you would need to build the table and refresh it in DO_PREPARE_OUTPUT of the IMPL class.
    ztyped_context->resultlist->build_table( ).
      IF ztyped_context->resultlist->node_tab IS INITIAL.
        ztyped_context->resultlist->refresh( ).
      ENDIF.
    Also the EH_ONEXPAND has to be implemented and event handled in DO_HANDLE_EVENT. But this expand event has to be delegated to context node directly as CL_BSP_WD_CONTEXT_NODE_TREE will already have the implementation.
    ztyped_context->resultlist->expand_node( lv_event->row_key ).
    Where in result list is the node ZL_CLASS_CN00.
    After typing the whole content, I found this blog :). There are a few things written here beyond what is in the blog.  /people/poonam.assudani/blog/2009/06/24/create-a-tree-view-display-in-crm-web-ui
    Regards,
    Karthik
