ORA-24810 on writing blob via Pro*C

I'm using Oracle Pro*C with an Oracle 9.2 RDBMS.
I am trying to store a BLOB from a file using Pro*C. If the file is small enough that I can read/write the entire file in a single operation, the BLOB write works okay. However, I would like to handle a BLOB of any size, up to 4MB, and unless I use a 4MB I/O buffer, that requires partial file reads and piecewise BLOB writes. In the program below, the initial write (FIRST) works, but subsequent writes (NEXT) fail with the ORA-24810 "attempting to write more data than indicated" error. I cannot find a worthwhile explanation for this error. What does it mean?
Here is my sample program.
#include <oci.h>
#include <stdio.h>
#include <string.h>   /* strcpy, strlen */
#define CMN_IO_BUFF_SIZE 4096
EXEC SQL INCLUDE sqlca.h;
EXEC SQL INCLUDE oraca.h;
EXEC SQL BEGIN DECLARE SECTION;
VARCHAR f_Conn[100];
EXEC SQL END DECLARE SECTION;
int main () {
    OCIBlobLocator *DbBlob;
    FILE           *Fp;
    int            FirstRead;
    unsigned int   BytesRead;
    unsigned int   Offset;
    unsigned char  IoBuff[CMN_IO_BUFF_SIZE];

    strcpy ( f_Conn.arr, "isis_app/isis_app@isisdev1" );
    f_Conn.len = strlen ( f_Conn.arr );

    EXEC SQL VAR IoBuff IS RAW ( CMN_IO_BUFF_SIZE );
    printf ( "VAR returned error %d\n", sqlca.sqlcode );
    printf ( " %s\n", sqlca.sqlerrm.sqlerrmc );

    EXEC SQL CONNECT :f_Conn;
    printf ( "CONNECT returned error %d\n", sqlca.sqlcode );
    printf ( " %s\n", sqlca.sqlerrm.sqlerrmc );

    EXEC SQL ALLOCATE :DbBlob;
    printf ( "ALLOC returned error %d\n", sqlca.sqlcode );
    printf ( " %s\n", sqlca.sqlerrm.sqlerrmc );

    EXEC SQL INSERT INTO lob_table (a_blob)
        VALUES (EMPTY_BLOB())
        RETURNING a_blob INTO :DbBlob;
    printf ( "INSERT returned error %d\n", sqlca.sqlcode );
    printf ( " %s\n", sqlca.sqlerrm.sqlerrmc );

    Fp = fopen ( "test.gif", "rb" );   /* binary mode */
    if ( Fp != 0 ) {
        FirstRead = 1;
        Offset    = 1;
        BytesRead = CMN_IO_BUFF_SIZE;
        while ( BytesRead > 0 ) {
            printf ( "Bytes requested %d\n", BytesRead );
            BytesRead = fread ( IoBuff, 1, BytesRead, Fp );
            printf ( "Bytes read %d\n", BytesRead );
            if ( ( FirstRead == 1 ) && ( BytesRead < CMN_IO_BUFF_SIZE ) ) {
                EXEC SQL LOB WRITE ONE :BytesRead
                    FROM :IoBuff INTO :DbBlob AT :Offset;
                printf ( "WRITE ONE returned error %d\n", sqlca.sqlcode );
                printf ( " %s\n", sqlca.sqlerrm.sqlerrmc );
                printf ( "Bytes written %d at offset %d\n", BytesRead, Offset );
            } else if ( ( FirstRead == 1 ) && ( BytesRead == CMN_IO_BUFF_SIZE ) ) {
                EXEC SQL LOB WRITE FIRST :BytesRead
                    FROM :IoBuff INTO :DbBlob AT :Offset;
                printf ( "WRITE FIRST returned error %d\n", sqlca.sqlcode );
                printf ( " %s\n", sqlca.sqlerrm.sqlerrmc );
                printf ( "Bytes written %d at offset %d\n", BytesRead, Offset );
            } else if ( ( FirstRead == 0 ) && ( BytesRead == CMN_IO_BUFF_SIZE ) ) {
                EXEC SQL LOB WRITE NEXT :BytesRead
                    FROM :IoBuff INTO :DbBlob AT :Offset;
                printf ( "WRITE NEXT returned error %d\n", sqlca.sqlcode );
                printf ( " %s\n", sqlca.sqlerrm.sqlerrmc );
                printf ( "Bytes written %d at offset %d\n", BytesRead, Offset );
            } else if ( ( FirstRead == 0 ) && ( BytesRead < CMN_IO_BUFF_SIZE ) ) {
                EXEC SQL LOB WRITE LAST :BytesRead
                    FROM :IoBuff INTO :DbBlob AT :Offset;
                printf ( "WRITE LAST returned error %d\n", sqlca.sqlcode );
                printf ( " %s\n", sqlca.sqlerrm.sqlerrmc );
                printf ( "Bytes written %d at offset %d\n", BytesRead, Offset );
            }
            FirstRead = 0;
            Offset = Offset + BytesRead;
            if ( BytesRead < CMN_IO_BUFF_SIZE ) {
                BytesRead = 0;
                fclose ( Fp );
            }
        }
        EXEC SQL COMMIT WORK;
        printf ( "COMMIT returned error %d\n", sqlca.sqlcode );
        printf ( " %s\n", sqlca.sqlerrm.sqlerrmc );
    } else {
        printf ( "Error opening input file\n" );
    }
    return 0;
}

Do not use buffered I/O (FILE *). Use the direct system calls (open(), read(), write()) instead.
Buffered I/O has limitations.

Similar Messages

  • Mod_plsql: ORA-942 Execute(Temp BLOB) ORA-00942: table or view does not exist

    mod_plsql: ORA-942 Execute(Temp BLOB) ORA-00942: table or view does not exist
    This is the error I get in the Apache log. On the page:
    "The requested URL /pls/apex/wwv_flow.accept was not found on this server."
    I am sure this is just a configuration issue. I installed v1.6 with the HTTP server from the companion CD along with the database (10.2.0.1). After I had it working (I did not test everything), I upgraded to 3.0.1.
    I get this message when I am trying to import an application that I just exported! I am testing this for pushing code up to our production environment.
    My dads.conf file looks like:
    Alias /i/ /oracle/product/apps/htmldb/images/
    AddType text/xml xbl
    AddType text/x-components htc
    <Location /pls/apex>
    SetHandler pls_handler
    Order deny,allow
    Allow from all
    AllowOverride None
    PlsqlDatabaseUsername APEX_PUBLIC_USER
    PlsqlDatabasePassword xxxxxx
    PlsqlDatabaseConnectString 192.168.2.195:1521:idpdev ServiceNameFormat
    PlsqlDefaultPage apex
    PlsqlDocumentTablename wwv_flow_file_object$
    PlsqlDocumentPath docs
    PlsqlDocumentProcedure wwv_flow_file_mgr.process_download
    PlsqlAuthenticationMode Basic
    PlsqlNLSLanguage AMERICAN_AMERICA.WE8ISO8859P1
    </Location>
    marvel.conf file is empty
    There is a log file for PL/SQL; it has the following text that seemed pertinent: Attempting to logon with '(unknown)'
    There is also a log entry that seems to have a "special character" in it that may be causing an issue:
    <1133864024 ms>6565646261636b206f6e a70726f6d707420202e2e2e646f6e65 a
    <1133864024 ms>-----------------------------2444716671664
    <1133864024 ms>^M
    <1133864024 ms>UploadBlobDoc: readahead 27 bytes into 82f5e5c
    <1133864024 ms>UploadBlobDoc : Inserting piece OCI_NEXT_PIECE
    <1133864024 ms>UploadBlobDoc:Attempt to write 2048 bytes(offset 249857)
    <1133864024 ms>UploadBlobDoc:OCILobWrite for 2048 bytes (offset 249857)
    <1133864024 ms>UploadBlobDoc: Read-Ahead buf 82f5e5c has 27 bytes
    <1133864024 ms>UploadBlobDoc : Inserting piece OCI_LAST_PIECE
    <1133864024 ms>UploadBlobDoc:Attempt to write 27 bytes(offset 251905)
    <1133864024 ms>UploadBlobDoc:OCILobWrite finished for 27 bytes
    <1133864024 ms>ORA-942 Execute(Temp BLOB) ORA-00942: table or view does not exist
    <1133864024 ms>Stale Connection due to Oracle error 942
    <1133864024 ms>/pls/apex/wwv_flow.accept HTTP-404 ORA-00942: table or view does not exist
    <1133864024 ms>(wpu.c,594) longjumping back to the beginning
    <1133864024 ms>(wpu.c,457) cleaning up before longjmp
    <1133864024 ms>(wpu.c,461) doing a rollback
    <1133864024 ms>(wpcs.c, 76) Executed 'rollback' (rc=0)
    <1133864024 ms>(wpcs.c, 76) Executed 'begin dbms_session.reset_package; end;' (rc=0)
    <1133864024 ms>(wpd.c,1816) Going to close cursor
    <1133864024 ms>Freed BLOB
    <1133864024 ms>DeinitCursor
    <1133864024 ms>(wpx.c,690) Shutdown has been called
    <1133864024 ms>(wpx.c,702) Going to logoff
    <1133864024 ms>Logoff: Closing connection due to stale connection
    <1133864034 ms>[ReqEndtime: 2/Oct/2007:15:38:11]
    <1133864034 ms>[ReqExecTime: 80 ms]
    I did go in and update the permissions to the wwv_flow_file_objects$ table to give PUBLIC full access to that table to see if that was the problem... it wasn't.
    Probably information overload, but just wanted to be thorough. Anyone have any ideas?

    I also have another issue... probably more of a clarification....
    I run the following to stop the process:
    "/oracle/product/apps/opmn/bin/opmnctl stopproc ias-component=HTTP_Server"
    then
    "/oracle/product/apps/opmn/bin/opmnctl startproc ias-component=HTTP_Server"
    But, at that point, it says that opmn is not running.
    So I try:
    "/oracle/product/apps/opmn/bin/opmnctl stopall"
    then
    "/oracle/product/apps/opmn/bin/opmnctl start"
    then
    "/oracle/product/apps/opmn/bin/opmnctl startproc ias-component=HTTP_Server" and it still says that opmn is not running.
    Once I start running the stops and starts, it does eventually start correctly.
    I think I am running the steps too quickly and probably need to wait a bit between them; it seems to work better that way.

  • ORA-22275 writing BLOB using Oracle 10g

    I am trying to INSERT a record including a BLOB, using Visual C++ 6.0, Oracle 10g, and the Oracle ODBC driver.
    The setup calls to SQLAllocStmt, SQLBindParameter (for the BLOB only), and SQL_LEN_DATA_AT_EXEC all appear to be okay, as far as can be told (the return code is zero).
    After the setup calls, the call to SQLExecDirect returns -1 (error with info), and the info is error ORA-22275 (invalid LOB locator specified). I was expecting 99 (SQL_NEED_DATA) so I could start feeding LOB data.
    This same code ran under Oracle 9i but fails under 10g; I did not even rebuild it.
    Does anyone have any ideas?
    Thanks in advance

    This was resolved in the latest 10g ODBC driver (10.1.0.3.1)
    Jon

  • ORA-29283 when writing file to server

    I am trying to set up a procedure that writes the result of a query to a file on the database server. I am currently testing the file writing with some generic code.
    The database is on Red Hat Linux. The directory was created on the server under the mfts_user account, and the oracle user was granted read/write access. This was verified by logging into the server as oracle and creating a file via "touch test.txt".
    The following 2 procedures/functions were created to test utl_file:
    create or replace FUNCTION dump_csv ( p_query     IN VARCHAR2
                                        , p_separator IN VARCHAR2 DEFAULT ','
                                        , p_dir       IN VARCHAR2
                                        , p_filename  IN VARCHAR2 )
      RETURN NUMBER
    AS
      l_output      utl_file.file_type;
      l_thecursor   INTEGER DEFAULT dbms_sql.open_cursor;
      l_columnvalue VARCHAR2(2000);
      l_status      INTEGER;
      l_colcnt      NUMBER DEFAULT 0;
      l_separator   VARCHAR2(10) DEFAULT '';
      l_cnt         NUMBER DEFAULT 0;
    BEGIN
      l_output := utl_file.fopen(p_dir, p_filename, 'w');
      dbms_sql.parse(l_thecursor, p_query, dbms_sql.native);
      FOR i IN 1 .. 255
      LOOP
        BEGIN
          dbms_sql.define_column(l_thecursor, i, l_columnvalue, 2000);
          l_colcnt := i;
        EXCEPTION
          WHEN others THEN
            IF (SQLCODE = -1007) THEN
              EXIT;
            ELSE
              RAISE;
            END IF;
        END;
      END LOOP;
      dbms_sql.define_column(l_thecursor, 1, l_columnvalue, 2000);
      l_status := dbms_sql.EXECUTE(l_thecursor);
      LOOP
        EXIT WHEN (dbms_sql.fetch_rows(l_thecursor) <= 0);
        l_separator := '';
        FOR i IN 1 .. l_colcnt
        LOOP
          dbms_sql.column_value(l_thecursor, i, l_columnvalue);
          utl_file.put(l_output, l_separator || l_columnvalue);
          l_separator := p_separator;
        END LOOP;
        utl_file.new_line(l_output);
        l_cnt := l_cnt + 1;
      END LOOP;
      dbms_sql.close_cursor(l_thecursor);
      utl_file.fclose(l_output);
      RETURN l_cnt;
    END dump_csv;

    create or replace PROCEDURE TEST_DUMP_CSV
    AS
      l_count NUMBER;
      l_fn    VARCHAR2(30);
    BEGIN
      SELECT TO_CHAR(SYSDATE,'YYYYMMDDHH24MISS') INTO l_fn FROM DUAL;
      l_count := dump_csv( p_query     => 'select * from coreq',
                           p_separator => ',',
                           p_dir       => 'OUTBOUND_NEW_DIR',
                           p_filename  => 'dump_csv' || l_fn || '.csv' );
      dbms_output.put_line( to_char(l_count) || ' rows extracted to file dump_csv' || l_fn || '.csv.' );
    END TEST_DUMP_CSV;
    To test utl_file, I execute as the MAXIMO user:
    CREATE OR REPLACE DIRECTORY outbound_new_dir AS '/home/mfts_user/Maximo/outbound_new';
    select dump_csv('select * from coreq', ',', 'OUTBOUND_NEW_DIR', 'dump_csv.csv' ) from dual;
    Here is the error I get:
    ORA-29283: invalid file operation
    ORA-06515: at "SYS.UTL_FILE", line 449
    ORA-29283: invalid file operation
    ORA-06512: at "MAXIMO.DUMP_CSV", line 15
    ORA-06512: at line 1
    This same setup works on Windows XP when logged in as an Admin user, which tells me that the syntax and logic is correct.
    What could be wrong with the Linux setup?

    Yes. I read that read/write is automatically granted to the user that creates the DIRECTORY object.
    The result of the query you gave was 2 records:
    GRANTOR GRANTEE TABLE_SCHEMA TABLE_NAME PRIVILEGE GRANTABLE HIERARCHY
    SYS     MAXIMO     SYS     OUTBOUND_NEW_DIR     READ     YES     NO
    SYS     MAXIMO     SYS     OUTBOUND_NEW_DIR     WRITE     YES     NO

  • Writing BLOB column from Cobol

    Hi,
    I'm using the Pro*Cobol precompiler to execute SQL statements in COBOL Net Express programs.
    I need help with two things.
    1 - How can I get Pro*Cobol to trim the trailing blanks in PIC X VARYING host variables?
    For example :
    ..... LastName VARCHAR(20)
    ..... 05 LASTNAME PIC X(20) VARYING.
    The Pro*Cobol documentation says that the precompiler transforms the elementary item LASTNAME into the group item below:
    05 LASTNAME.
    10 LASTNAME-LEN PIC S9(04) COMP.
    10 LASTNAME-ARR PIC X(20).
    When a string shorter than 20 characters is moved to LASTNAME-ARR, the exact string length is moved to LASTNAME-LEN and the string is written to the DB without trailing blanks.
    When a string is read from the DB, the precompiler writes the exact string length into LASTNAME-LEN and LASTNAME-ARR receives the string value without trailing blanks.
    The problem is that when I compile the program, Pro*Cobol/Micro Focus doesn't recognize LASTNAME-LEN and LASTNAME-ARR.
    Am I correct? Is there a directive that resolves this?
    2 - I need these steps checked: I am writing a text file generated from COBOL into a BLOB column.
    The LOB LOAD FROM FILE statement receives SQLCODE -22275: invalid LOB locator specified.
    Assumptions:
    MAG-RELAT-BFILE SQL-BFILE.
    MAG-RELAT-BLOB SQL-BLOB.
    The file R1401 exists in D:\Petros\NE\
    Source Code :
    MOVE 'D:\Petros\NE\' TO ALIAS
    MOVE 13 TO ALIAS-L
    MOVE 5 TO FILENAME-L
    MOVE 'R1401' TO FILENAME
    EXEC SQL
    ALLOCATE :IMAG-RELAT-BFILE
    END-EXEC.
    IF SQLCODE NOT EQUAL +0
    MOVE 'ERRO COMANDO ALLOCATE' TO WW-CA-MENSAGEM
    MOVE 'IMAG-RELAT-BFILE' TO WW-CA-ARQUIVO
    MOVE SQLCODE TO WW-ST-ARQUIVO
    MOVE 08 TO RETURNO-CODE
    PERFORM 999-TRATA-ERRO THRU 999-FIM
    END-IF.
    EXEC SQL
    LOB FILE SET :IMAG-RELAT-BFILE
    DIRECTORY = :ALIAS,
    FILENAME = :FILENAME
    END-EXEC.
    IF SQLCODE NOT EQUAL +0
    MOVE 'ERRO COMANDO LOB FILE SET' TO WW-CA-MENSAGEM
    MOVE 'IMAG-RELAT-BFILE' TO WW-CA-ARQUIVO
    MOVE SQLCODE TO WW-ST-ARQUIVO
    MOVE 08 TO RETURNO-CODE
    PERFORM 999-TRATA-ERRO THRU 999-FIM
    END-IF.
    EXEC SQL
    ALLOCATE :IMAG-RELAT-BLOB
    END-EXEC
    IF SQLCODE NOT EQUAL +0
    MOVE 'ERRO COMANDO ALLOCATE' TO WW-CA-MENSAGEM
    MOVE 'IMAG-RELAT-BLOB' TO WW-CA-ARQUIVO
    MOVE SQLCODE TO WW-ST-ARQUIVO
    MOVE 08 TO RETURNO-CODE
    PERFORM 999-TRATA-ERRO THRU 999-FIM
    END-IF.
    EXEC SQL
    LOB LOAD :TOTAL-BYTES
    FROM FILE :IMAG-RELAT-BFILE
    INTO :IMAG-RELAT-BLOB
    END-EXEC.
    IF SQLCODE NOT EQUAL +0
    MOVE 'ERRO COMANDO LOAD FFOM FILE' TO WW-CA-MENSAGEM
    MOVE 'IMAG-RELAT-BFILE, IMAG-RELAT-BLOB' TO WW-CA-ARQUIVO
    MOVE SQLCODE TO WW-ST-ARQUIVO
    MOVE 08 TO RETURNO-CODE
    PERFORM 999-TRATA-ERRO THRU 999-FIM
    END-IF.
    Thanks,

    907466 wrote the two questions quoted above.
    I started my career as a COBOL programmer in 1981, and the last project I worked on before transitioning to DBA was implementing MF COBOL. It's been a number of years, but...
    On question 1: I don't know whether you are correct or not. Is the precompiler or compiler step throwing an error? If so, please share the error message with us.
    On question 2: So far you haven't shown us any symptoms or errors, just a description of what you are doing and a statement that you assume it won't work. Your executable code has a lot of references to host variables that we can only assume are spelled correctly to match the variables you've declared in your DATA DIVISION. Are you going straight from precompile to run-time, or are you using the excellent interactive source-level debugger that is a prime feature of MF COBOL?

  • Writing a BLOB column into CSV, TXT, or SQL files

    Hi All,
    I have a requirement where I have to upload a file from the user's desktop and place it in a Unix directory. For this I load the file into a table using APEX and then write it to the Unix directory using UTL_FILE.
    The problem I am facing is that afterwards every line in my file has a ^M character at the end.
    I changed my BLOB column into a CLOB column, and I still face the same problem.
    Is there any way to get rid of this? I have to upload these CSV or TXT files into tables using SQL*Loader programs, and I can't write a shell script to remove the ^M characters before uploading, since this program will be merged with existing programs and that would require a lot of code change.
    Kindly Help.
    Thanks
    Aryan

    Hi Helios,
    Thanks again, buddy, for your reply and for providing different approaches... but the situation is still that I can't write a shell script to remove the ^M. I will be writing to a file from a CLOB column.
    Actually, the scenario is: when I write simple VARCHAR columns in 'w' mode it works fine, but I have a BLOB column that stores the data in binary format, so I convert that column to CLOB and then write it to a file in 'w' mode, and then I get ^M characters. Per your suggestion I would have to run another program afterwards to remove the ^M, or modify all my previous programs to add ^M-removal logic.
    I want to avoid all that, and just want a way for the file to be written without ^M in the first place. Otherwise I have to go for a Java stored procedure to run a shell script from SQL, and I'm still having some problems with that.
    Thanks Again Helios for your time and help.
    Regards
    Aryan

  • Retrieving BLOBs via stored procedure

    Hi all,
    I'm trying to retrieve bigger BLOBs (>1 MB) from a database using a stored procedure, but it doesn't work, and I can't figure out where the problem is.
    My stored procedure looks like:
    CREATE OR REPLACE PROCEDURE get_blobs ( blob_id       IN  NUMBER,
                                            v_enterid     OUT NUMBER,
                                            v_enterdate_s OUT VARCHAR2,
                                            v_name        OUT VARCHAR2,
                                            BUFFER_CHARS  OUT LONG,
                                            v_blob_size   OUT NUMBER ) IS
      v_enterdate DATE;
      v_content   BLOB;
      BUFFER      LONG RAW;
      AMOUNT      BINARY_INTEGER := 100;
      POS         INTEGER := 1;
      COUNTER     INTEGER := 0;
    BEGIN
      SELECT EntryID, EnterDate, Name, Content
        INTO v_enterid, v_enterdate, v_name, v_content
        FROM test_blob WHERE EntryID = blob_id;
      v_enterdate_s := to_char(v_enterdate, 'dd.mm.yyyy');
      v_blob_size   := DBMS_LOB.GETLENGTH(v_content);
      AMOUNT        := cast(v_blob_size as BINARY_INTEGER);
      DBMS_LOB.READ(v_content, AMOUNT, POS, BUFFER);
      BUFFER_CHARS := 'aaaaa';
      CAST_TO_VARCHAR2(BUFFER);
      BUFFER2 := BUFFER2 || BUFFER_CHARS;
      RETURN;
    END;
    I think there are several mistakes in this code; I have tried and tried for many days.
    When I execute the code from a SQL script it works (the output is just removed for testing), but when I try it from a PHP script I get the following error:
    Warning: Ora_Exec failed (ORA-21560: argument 2 is null, invalid, or out of range ORA-06512: at "SYS.DBMS_LOB", line 640 ORA-06512: at "JAN.GET_BLOBS", line 21 ORA-06512: at line 1 -- while processing OCI function OEXEC/OEXN) in d:\eclipse\crf tracker\modules\manage\new.inc.php on line 60
    The PHP routine looks like:
    $v_index_in = 1;
    $cursor = ora_open($conn);
    $query = "BEGIN GET_BLOBS(:v_index_in,:v_index,:v_enterdate,:v_name,:v_content,:v_size); END;";
    $stmt = ora_parse($cursor,$query);
    ora_bind ($cursor, "v_index_in", ":v_index_in", 10,1);
    ora_bind ($cursor, "v_index", ":v_index", 10,2);
    ora_bind ($cursor, "v_enterdate", ":v_enterdate", 20,2);
    ora_bind ($cursor, "v_name", ":v_name", 100,2);
    ora_bind ($cursor, "v_size", ":v_size", 10000, 2);
    ora_bind ($cursor, "v_content", ":v_content",$v_size,2);
    $v_index = '';
    $v_enterdate = '';
    $v_name = '';
    $v_content = '';
    $v_size = '';
    ora_exec($cursor,$stmt);
    Could anybody please help me with this problem? I really see no chance of solving it on my own, because I'm just starting to work with Oracle, but I have to finish this in two weeks.
    It doesn't matter whether the output of the content is a BLOB or something else, because processing it in PHP will not be a big problem.
    Thanks a lot in advance,
    best regards,
    Jan.

    Hi
    You can use a ref cursor to return the results from your stored procedure. Here is how you define it:
    CREATE OR REPLACE PACKAGE trial AS
      TYPE m_refcur IS REF CURSOR;
    END trial;
    CREATE OR REPLACE PROCEDURE example ( var1 OUT trial.m_refcur, var2 OUT trial.m_refcur )
    AS
    BEGIN
      OPEN var1 FOR SELECT * FROM emp;
      OPEN var2 FOR SELECT * FROM dept;
    END;
    You can call this stored procedure from Java using a CallableStatement. If you are using C++ or VB, you can use the OLE DB or ADO APIs to call the same stored procedure.
    Hope this helps
    Chandar

  • Problem writing an array of BLOBs into a table at one time

    I want to write many BLOBs at one time, and I am using OCIBindArrayOfStruct().
    My code is below; I would like you guys to help me find my bug. Thank you very much.
    The program always pops up the error message ORA-22275: invalid LOB locator specified.
    STATUS status;
    OCILobLocator **lArray;
    lArray = new OCILobLocator*[nCount];
    for (size_t i = 0; i < nCount; i++)
        if ((status = OCIDescriptorAlloc((dvoid *)env->envhp, (dvoid **)&lArray,
                      (ub4)OCI_DTYPE_LOB, (size_t)0, (dvoid **)0)) != OCI_SUCCESS)
            return (status);
    text sqlstats[300];
    strcpy_s((char*)sqlstats, 300, "insert into ");
    strcat_s((char*)sqlstats, 300, texTableName);
    strcat_s((char*)sqlstats, 300, "(TEXID,TEXNAME,TEXDATA) values (:texid,:texname,:lblob)");
    OCIBind *bndhp1 = (OCIBind *)0;
    OCIBind *bndhp2 = (OCIBind *)0;
    OCIBind *bndhp3 = (OCIBind *)0;
    PREPARE_RETURN(sqlstats);
    BIND_CTR(bndhp1, ":texid", texinfo[0].texid, 30);
    if ((status = OCIBindArrayOfStruct(bndhp1, env->errhp, nSize, sizeof(sb2), sizeof(ub2), sizeof(ub2))) != OCI_SUCCESS) {
        for (size_t j = 0; j < nCount; j++)
            FREE_LOB_LOCATOR(lArray[j]);
        delete[] texinfo;
        char err[512];
        OraError_Proc(status, err, 512);
        AfxMessageBox(err);
        return status;
    }
    BIND_CTR(bndhp2, ":texname", texinfo[0].texname, MAX_NAME);
    if ((status = OCIBindArrayOfStruct(bndhp2, env->errhp, nSize, sizeof(sb2), sizeof(ub2), sizeof(ub2))) != OCI_SUCCESS) {
        for (size_t j = 0; j < nCount; j++)
            FREE_LOB_LOCATOR(lArray[j]);
        delete[] texinfo;
        char err[512];
        OraError_Proc(status, err, 512);
        AfxMessageBox(err);
        return status;
    }
    if ((status = OCIBindByName(env->stmthp, &bndhp3, env->errhp,
            (CONST text *)":lblob", (sb4)-1, (dvoid *)lArray, (sword)-1, SQLT_BLOB,
            (dvoid *)0, (ub2 *)0, (ub2 *)0, (ub4)0, (ub4 *)0, OCI_DEFAULT)) != OCI_SUCCESS)
        return (status);
    if ((status = OCIBindArrayOfStruct(bndhp4, env->errhp, sizeof(lArray[0]), sizeof(sb2), sizeof(ub2), sizeof(ub2))) != OCI_SUCCESS) {
        for (size_t j = 0; j < nCount; j++)
            FREE_LOB_LOCATOR(lArray[j]);
        delete[] texinfo;
        char err[512];
        OraError_Proc(status, err, 512);
        AfxMessageBox(err);
        return status;
    }
    // this line will not succeed
    if ((status = OCIStmtExecute(env->svchp, env->stmthp, env->errhp,
            (ub4)nCount, (ub4)0, (CONST OCISnapshot *)0, (OCISnapshot *)0, (ub4)OCI_DEFAULT)) != OCI_SUCCESS) {
        char err[512];
        OraError_Proc(status, err, 512);
        AfxMessageBox(err);
        for (size_t j = 0; j < nCount; j++)
            FREE_LOB_LOCATOR(lArray[j]);
        delete[] texinfo;
        return status;
    }

    [Edit : oops...misread the post...]
    Regards

  • Writing BLOB under Transaction Isolation Level Serializable

    I am trying to update a BLOB column using JDBC.
    Steps:
    1. Select the row FOR UPDATE
    2. Get hold of the BLOB object
    3. Get its binary output stream
    4. Write to the output stream
    This is being done inside a bigger transaction, and it works fine when the transaction isolation level is READ COMMITTED.
    But when I use transaction isolation level SERIALIZABLE, it behaves very inconsistently.
    Sometimes the update succeeds; other times it gives an error stating 'Cannot serialize this transaction'.
    What is the reason, and what is a possible solution?

    Hi Kamal,
    The SERIALIZABLE degree of isolation has its own limitations, and one of them is the ORA-08177 error. This message is raised whenever an attempt is made to update a row that has changed since your transaction began.
    So you should probably look for transactions that are simultaneously trying to update the same row; you cannot have simultaneous access to a row inside a SERIALIZABLE transaction. This link should be helpful:
    http://asktom.oracle.com/pls/ask/f?p=4950:8:11007064320210057726::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:3233191441609
    Perhaps if you post the part of your code that does the update, we can help you more.
    Thanks,
    khalid

  • Export and import BLOB via text

    Hello,
    we need to migrate a table containing a BLOB column from one environment to another. We can not use EXP or DBMS_DATA_PUMP; it has to be in the form of SQL inserts (this is the customer's requirement - no reason to think about it twice...)
    My idea was to encode the blob to base64 and then decode it back to blob in the target system.
    To try it out, I have created a procedure as
    declare
      b_blob    blob;
      v_varchar varchar2(32000);
    begin
      -- get some blob
      select job_data into b_blob from qrtz_job_details where job_name = '83'
      v_varchar := utl_encode.base64_encode(b_blob);
      -- blobtest.column1 is of type blob
      insert into blobtest (column1) values utl_encode.base64_decode(v_varchar2);
    end;
    I get error ORA-01461: can bind a LONG value only for insert into LONG column.
    What am I doing wrong?
    Any help appreciated.

    adammsvk, what is the datatype of column1 in the blobtest table?
    Also, shouldn't you have brackets around your inserted value?
    insert into blobtest (column1) values (utl_encode.base64_decode(v_varchar2));

  • Displaying image in BLOB via pl/sql cartridge

    Hi gurus,
    I have an Oracle web site. I am dynamically putting links in the page using the PL/SQL cartridge and procedures. I have loaded all the pictures for the catalog into BLOB columns. Can anyone help me or point me to how to display these images dynamically from the BLOB columns via the PL/SQL cartridge, instead of from image files?

    Originally posted by Dan Mullen:
    I'm not familiar with the PL/SQL cartridge.
    Since this is the interMedia forum my suggestion would be to store the image data in an ORDImage column and use the interMedia WebAgent to retrieve the data.
    The interMedia WebAgent can be used with "naked" blobs but you'll need to code your own pl/sql routines (the code wizard won't do it for you).
    The code wizard does generate procedures for ORDImage, but for some reason the GET_XX procedures don't work, giving the error "ORA-04043 Object Get_Image does not Exist".

  • Does Oracle OCCI have any memory bugs when writing BLOBs using streams?

    The function below produces some kind of memory corruption that causes an exception (which cannot be identified, since memory is corrupted) during a later call:
    ora::Statement stmt(__cn);
    string sql("BEGIN Pckg.Sp_procA(:1, :2, :3, :4, :5, :6, :7, :8, "
    ":9, :10, :11, :12, ":13, :14, :15, :payload); END;");
    occi::Blob payload(__cn.getConnection());
    occi::Environment* tempEnv = occi::Environment::createEnvironment();
    occi::Timestamp reportTime(tempEnv);
reportTime.fromText(__report.report_time, "yyyy-mm-dd HH24:mi:ss.ff");
    stmt.setSQL(sql);
stmt.setString(1, "");
stmt.setString(2, "");
stmt.setString(3, __report.varA);
stmt.setString(4, __report.varB);
stmt.setInt(5, __report.varC);
stmt.setString(6, __report.varD);
stmt.setString(7, __report.varE);
stmt.setString(8, __report.varF);
stmt.setTimestamp(9, reportTime);
stmt.setNull(10, occi::OCCITIMESTAMP);
stmt.setString(11, __report.varG);
stmt.setString(12, __report.varH);
stmt.setString(13, __report.varI);
stmt.setString(14, __report.varK);
stmt.setString(15, __report.varX);
stmt.setBinaryStreamMode(16, __report.payload.Size(), true);
    stmt.executeUpdate();
    occi::Stream* streamedData = stmt.getStream(16);
    streamedData->writeLastBuffer(__report.payload.GetPtr(), __report.payload.Size());
    stmt.closeStream(streamedData);
    occi::Environment::terminateEnvironment(tempEnv);
    return true;
The second function, below, works perfectly unless the code above has been executed first:
    ora::Statement stmt(__cn);
    string sql("BEGIN "
    "Pckg.Sp_procB(:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11); END;");
    occi::Environment* tempEnv = occi::Environment::createEnvironment();
    occi::Timestamp reportTime(tempEnv);
    reportTime.fromText(__report.report_time, "yyyy-mm-dd HH24:mi:ss.ff");
    stmt.setSQL(sql);
stmt.setString(1, "");
stmt.setString(2, "");
stmt.setString(3, __report.varA);
stmt.setString(4, __report.varB);
stmt.setInt(5, __report.varB);
stmt.setString(6, __report.varD);
stmt.setString(7, __report.varE);
stmt.setString(8, __report.varF);
stmt.setString(9, __report.varG);
stmt.setTimestamp(10, reportTime);
stmt.setNull(11, occi::OCCITIMESTAMP);
    stmt.executeUpdate();
    occi::Environment::terminateEnvironment(tempEnv);
    return true;
I took the blob insert example from Oracle's documentation and can't see anything wrong with it. The second function also seems to be fine, which got me thinking that Oracle's OCCI might have some kind of memory-corrupting bug. Does anyone know anything about this, or has anyone done anything similar?

    Does anyone insert blobs this way? Or just using the "insert -> select for update" way?
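One thing worth keeping in mind with this pattern: the stream-mode contract is strict. The size passed to setBinaryStreamMode must equal exactly the number of bytes written before the last buffer is flushed; a mismatch is roughly what Oracle reports as ORA-24810 ("attempting to write more data than indicated"). A toy Python model of that contract (the class and method names are illustrative, not any Oracle API):

```python
import io

class DeclaredLengthStream:
    """Toy model of a write stream that is told its total payload size
    up front and rejects any mismatch, as OCCI's binary stream mode does."""

    def __init__(self, declared_size):
        self.declared = declared_size
        self.buf = io.BytesIO()
        self.closed = False

    def write_buffer(self, data):
        # Writing past the declared total is an error (cf. ORA-24810).
        if self.buf.tell() + len(data) > self.declared:
            raise ValueError("attempting to write more data than indicated")
        self.buf.write(data)

    def write_last_buffer(self, data):
        # The final write must bring the total to exactly the declared size.
        self.write_buffer(data)
        if self.buf.tell() != self.declared:
            raise ValueError("wrote less data than indicated")
        self.closed = True

payload = b"x" * 10
s = DeclaredLengthStream(len(payload))
s.write_last_buffer(payload)  # total matches the declared size, so it succeeds
```

So before suspecting a library bug, it is worth verifying that __report.payload.Size() is byte-exact at the moment setBinaryStreamMode is called and has not changed by the time writeLastBuffer runs.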

  • About update a blob via JDBC

    I want to update a blob value via jdbc.
The following is part of my program:
1)String sqlstr = "SELECT * FROM test WHERE vchar='anyone' ";
2)ResultSet rset = stmt.executeQuery(sqlstr);
3)BLOB blob = ((OracleResultSet)rset).getBLOB(2);
4)OutputStream outstream = blob.getBinaryOutputStream();
When it executes the line
5)outstream.write(...);
an IOException is thrown, saying "row containing the LOB value is not locked".
So I changed the sqlstr value to
sqlstr = "SELECT * FROM test WHERE vchar='B' FOR UPDATE";
Then, when I rerun the program, it hangs at line 2) and tells me nothing.
The only thing I can do is press Ctrl-C to kill it.
Can you tell me how to resolve this?
    thank you.

Please take a look at this: http://technet.oracle.com/sample_code/tech/java/sqlj_jdbc/sample_code_index.htm
You must do
select ... for update;
before you manipulate LOBs.
As for the hang: if the SELECT ... FOR UPDATE never returns, the row is most likely already locked by another session, quite possibly an earlier run of your own program that never committed. Commit or roll back that transaction and the query will return.

  • ORA-01410 when selecting data via ROWID

    hi there,
after upgrading to 11.2.0.2 we are getting the following error in our application when selecting a record via its ROWID:
select s.rowid, S.VRZNG_ENHT_TITEL from vws_vrzng_enht_haupt_sys s where s.rowid='AAASN0AAFAAACBUAAB';
-- => ORA-01410 (it doesn't matter which other column is selected alongside the rowid)
Performing the same select but using * for all columns
select s.rowid, s.* from vws_vrzng_enht_haupt_sys s where s.rowid='AAASN0AAFAAACBUAAB';
-- => one row is returned.
That is very strange to me.
With the former release, 10.2.0.4, everything was fine and no such error occurred.
The dirty workaround of setting optimizer_features_enable to 10.2.0.4 also works...
Has anyone faced this error too? Any help will be highly appreciated.
    thanks Stefan

    Plan causing the ORA-01410:
    | Id  | Operation                    | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |                 |     1 |    72 |     1   (0)| 00:00:01 |
    |   1 |  NESTED LOOPS                |                 |     1 |    72 |     1   (0)| 00:00:01 |
    |   2 |   NESTED LOOPS               |                 |     1 |    57 |     1   (0)| 00:00:01 |
    |   3 |    TABLE ACCESS BY USER ROWID| TBS_VRZNG_ENHT  |     1 |    54 |     1   (0)| 00:00:01 |
    |*  4 |    INDEX UNIQUE SCAN         | CPK_GSFT_OBJ    |    22 |    66 |     0   (0)| 00:00:01 |
    |*  5 |   INDEX UNIQUE SCAN          | CUK_GOBH_GO2_ID |    17 |   255 |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - access("G"."GSFT_OBJ_ID"="VRZNG_ENHT_ID")
   5 - access("GOBH"."GSFT_OBJ_2_ID"="G"."GSFT_OBJ_ID")

Plan using the DUAL subquery:
    | Id  | Operation                    | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |                         |     1 |    39 |     4   (0)| 00:00:01 |
    |   1 |  NESTED LOOPS                |                         |     1 |    39 |     2   (0)| 00:00:01 |
    |   2 |   NESTED LOOPS               |                         |     1 |    18 |     1   (0)| 00:00:01 |
    |*  3 |    TABLE ACCESS BY USER ROWID| TBS_GSFT_OBJ_BZHNG_HRCH |     1 |    15 |     1   (0)| 00:00:01 |
    |   4 |     FAST DUAL                |                         |     1 |       |     2   (0)| 00:00:01 |
    |*  5 |    INDEX UNIQUE SCAN         | CPK_GSFT_OBJ            |     1 |     3 |     0   (0)| 00:00:01 |
    |   6 |   TABLE ACCESS BY INDEX ROWID| TBS_VRZNG_ENHT          |     1 |    21 |     1   (0)| 00:00:01 |
    |*  7 |    INDEX UNIQUE SCAN         | CPK_VRZNG_ENHT          |     1 |       |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - access(CHARTOROWID( (SELECT 'AAASN0AAFAAACBUAAB' FROM "SYS"."DUAL" "DUAL")))
       5 - access("GOBH"."GSFT_OBJ_2_ID"="G"."GSFT_OBJ_ID")
   7 - access("G"."GSFT_OBJ_ID"="VRZNG_ENHT_ID")

Have you spotted the difference yourself? One execution plan applies the ROWID predicate to a different table than the other one. You might have hit a bug here, potentially caused by a transformation applied by the optimizer.
    Can you show us the execution plan that worked in 10.2.0.4?
    Would it be possible to share the definition of the view "vws_vrzng_enht_haupt_sys"?
    Hope this helps,
    Randolf

  • Reading/Writing BLOB more than 1000 KB

    Hi There,
I do not get any error while writing a BLOB over 1000 KB, but reading it back fails. I can successfully write and read a BLOB smaller than 1000 KB.
Does anybody know why this is happening? I am using the Oracle 10g Release 2 ODBC driver on Windows.
    Thanks in advance.
    Milind

    Hi Jonah,
Sorry for the delay in replying; I got pulled away for some other work. Anyway, I tried your suggestion and it still does not work; now it fails for smaller BLOBs too.
Do you want to take a peek at my code? Is there a way to upload it?
I am thinking of using a PL/SQL stored procedure to read the BLOB. Is it true that one can read only 32 KB at a time using the DBMS_LOB package? Or can one manage the buffer inside the procedure and then return the whole thing?
    Thanks,
    Milind
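On the 32 KB question: DBMS_LOB.READ does cap each call at 32767 bytes (the PL/SQL RAW limit), so a large BLOB has to be read in a loop that advances an offset. The shape of that loop, sketched in Python over an in-memory buffer standing in for the LOB (the names are illustrative, not an Oracle API):

```python
import io

CHUNK = 32767  # DBMS_LOB.READ returns at most 32767 bytes per call

def read_lob_in_chunks(lob, chunk_size=CHUNK):
    """Read a seekable binary source chunk by chunk, as a PL/SQL loop
    around DBMS_LOB.READ would, and reassemble the pieces."""
    pieces = []
    offset = 0
    while True:
        lob.seek(offset)
        piece = lob.read(chunk_size)
        if not piece:            # DBMS_LOB raises NO_DATA_FOUND here instead
            break
        pieces.append(piece)
        offset += len(piece)
    return b"".join(pieces)

fake_lob = io.BytesIO(b"\xab" * 100_000)  # ~100 KB stand-in for a BLOB
data = read_lob_in_chunks(fake_lob)
print(len(data))  # 100000
```

Returning the whole value from a procedure in one piece only works if you return the BLOB locator itself; a RAW buffer cannot exceed 32767 bytes, so buffering the full contents inside PL/SQL and returning it as RAW is not an option for large BLOBs.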
