LOB-Insertion

Hi,
My problem is: how do I insert data into BLOBs, either directly or from a file?
I've tried some of the 'load data' examples from the Oracle 8.1.6 documentation with tables in a SQL*Plus window, but it didn't work.
I really appreciate your assistance!
Thanks
Boyan

Are you trying to insert multimedia data into a database? If so, you can use SQL*Loader, the Web Agent, Clipboard, or Annotator.
Clipboard can create the procedures for you using a wizard, so you can be certain that your upload procedures work.
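The SQL*Loader route reads the LOB content from secondary data files named in the input file. A minimal control-file sketch (table, column, and file names here are hypothetical, not from the thread):

```
LOAD DATA
INFILE 'photos.dat'
INTO TABLE emp_photos
FIELDS TERMINATED BY ','
( eno    INTEGER EXTERNAL,
  ename  CHAR(20),
  fname  FILLER CHAR(80),                    -- path read from photos.dat, not stored
  photo  LOBFILE(fname) TERMINATED BY EOF    -- BLOB column filled from that file
)
```

Each record of photos.dat then looks like `10,VID,d:\sample.jpeg`.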

Similar Messages

  • Error in loading a picture in BFILE

    Hi People,
    This is a procedure to load a .JPEG file into a column of BFILE datatype.
    I have created a table ER with the following structure:
    CREATE TABLE ER (ENO NUMBER, ENAME VARCHAR2(20), PHOTO BFILE);
    Now I have inserted a row into this table:
    INSERT INTO ER(ENO,ENAME) VALUES(10,'VID');
    Now I have written a procedure to update the PHOTO column of the ER table:
    CREATE DIRECTORY BF AS 'D:/';

    CREATE OR REPLACE PROCEDURE load_bfile (p_file_loc IN VARCHAR2) IS
      v_file     BFILE;
      v_filename VARCHAR2(16);
    BEGIN
      v_filename := 'SAMPLE' || '.JPEG';
      v_file := BFILENAME(p_file_loc, v_filename);
      DBMS_LOB.FILEOPEN(v_file);
      UPDATE er SET photo = v_file WHERE eno = 10;
      DBMS_OUTPUT.PUT_LINE('LOADED FILE: ' || v_filename ||
                           ' SIZE: ' || DBMS_LOB.GETLENGTH(v_file));
      DBMS_LOB.FILECLOSE(v_file);
    END load_bfile;
    /
    The sample picture is located directly in D:, so in order to execute the procedure I issued:
    EXEC load_bfile('BF');
    but the following errors occurred:
    ERROR at line 1:
    ORA-22288: file or LOB operation FILEOPEN failed
    The system cannot find the file specified.
    ORA-06512: at "SYS.DBMS_LOB", line 504
    ORA-06512: at "SYS.LOAD_BFILE", line 8
    ORA-06512: at line 1
    What could be the error here? I have checked that all the pathnames are correct. Please suggest a fix.

    This is the structure of the table ER:
    SQL> DESC ER
     Name                 Null?    Type
     -------------------  -------  ----------------
     ENO                           NUMBER
     ENAME                         VARCHAR2(20)
     PHOTO                         BINARY FILE LOB
    INSERT INTO ER(ENO,ENAME) VALUES(10,'VID');
    COMMIT;
    The file is on the local D: drive.
    CREATE DIRECTORY BF AS 'D:/';
    CREATE OR REPLACE PROCEDURE load_bfile (p_file_loc IN VARCHAR2) IS
      v_file     BFILE;
      v_filename VARCHAR2(16);
    BEGIN
      v_filename := 'SAMPLE' || '.JPEG';
      v_file := BFILENAME(p_file_loc, v_filename);
      DBMS_LOB.FILEOPEN(v_file);
      UPDATE er SET photo = v_file WHERE eno = 10;
      DBMS_OUTPUT.PUT_LINE('LOADED FILE: ' || v_filename ||
                           ' SIZE: ' || DBMS_LOB.GETLENGTH(v_file));
      DBMS_LOB.FILECLOSE(v_file);
    END load_bfile;
    /
    EXEC load_bfile('BF');
    BEGIN load_bfile('BF'); END;
    ERROR at line 1:
    ORA-22288: file or LOB operation FILEOPEN failed
    The system cannot find the file specified.
    ORA-06512: at "SYS.DBMS_LOB", line 504
    ORA-06512: at "SYS.LOAD_BFILE", line 8
    ORA-06512: at line 1
    These are the steps I have followed. Is anything wrong in them? All of this was done while connected to the local database.
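    One way to narrow down an ORA-22288 like this is to ask the server whether it can see the file at all, before trying to open it. A small diagnostic sketch using DBMS_LOB.FILEEXISTS, with the directory and file names from the post:

    ```sql
    -- Prints 1 if the server process can see the file, 0 otherwise.
    -- The file name given to BFILENAME may be case-sensitive depending on
    -- platform, and the file must exist on the DATABASE host, not the client.
    DECLARE
      v_file BFILE := BFILENAME('BF', 'SAMPLE.JPEG');
    BEGIN
      DBMS_OUTPUT.PUT_LINE('exists = ' || DBMS_LOB.FILEEXISTS(v_file));
    END;
    /
    ```

    If this prints 0, compare the exact file name and case on disk with what the procedure builds, and remember that the directory path is resolved on the server.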

  • SDO_PC blocks partition

    Hi,
    I am trying to create an SDO_PC, and I want to sort the points in blocks over three dimensions. I was not successful and need advice on how to do it.
    DECLARE
      pc SDO_PC;
    BEGIN
      pc := SDO_PC_PKG.INIT(
        'base',
        'PC',
        'PC_BLKTAB2',
        'blk_capacity=50000',
        SDO_GEOMETRY(3003, NULL, NULL,
          SDO_ELEM_INFO_ARRAY(1,1003,3),
          SDO_ORDINATE_ARRAY(-1800000, -1800000, -1800, 1800000, 1800000, 1800)),
        0.00005,
        3,
        NULL);
      INSERT INTO base (pc) VALUES (pc);
      SDO_PC_PKG.CREATE_PC(
        pc,
        'entry');
    END;
    /
    Error report:
    ORA-13199: Invalid Parameters for Partition_Table
    ORA-13199: Invalid Parameters for Partition_Table
    ORA-13236: internal error while processing the R-tree: [failed to cluster in memory]
    ORA-13249: Internal error in Spatial index: [mdrcrclmem]
    ORA-13234: failure accessing the R-tree index table [error in lob insert]
    ORA-29400: data cartridge error
    ORA-29861: domain index is marked LOADING, FAILED, or UNUSABLE
    ORA-06512: at "MDSYS.PRVT_PC", line 3
    ORA-06512: at "MDSYS.PRVT_PC", line 130
    ORA-06512: at "MDSYS.SDO_PC_PKG", line 74
    ORA-06512: at line 20
    13199. 00000 - "%s"
    *Cause:    This is an internal error.
    *Action:   Contact Oracle Support Services.
    Thanks
    Kirin

    I am able to create a block like this with no problems:
    DECLARE
      pc SDO_PC;
    BEGIN
      pc := SDO_PC_PKG.INIT(
        'dotaz_kaple',
        'PC',
        'PC_BLKTAB3',
        'blk_capacity=50',
        SDO_GEOMETRY(2003, NULL, NULL,
          SDO_ELEM_INFO_ARRAY(1,1003,3),
          SDO_ORDINATE_ARRAY(-1800000, -1800000, 1800000, 1800000)),
        0.00005,
        3,
        NULL);
      INSERT INTO dotaz_kaple (pc) VALUES (pc);
      SDO_PC_PKG.CREATE_PC(
        pc,
        'testovaci_blok');
    END;
    /
    Data created this way have the attribute PT_SORT_DIM = 1, but I want to sort the points in blocks over three dimensions.
    I tried 1,1003,7 for the element info, but I get the same error. The block size does not change anything.
    Here is the data I am using:
    CREATE TABLE ENTRY (RID VARCHAR2(24),
         VAL_D1 NUMBER,
         VAL_D2 NUMBER,
         VAL_D3 NUMBER);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (1,-812782.209,-1079712.135,395.232);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (2,-812782.35,-1079712.102,395.38);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (3,-812782.359,-1079712.224,395.309);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (4,-812782.497,-1079712.178,395.233);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (5,-812782.536,-1079712.239,395.381);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (6,-812782.522,-1079712.376,395.312);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (7,-812782.3,-1079712.121,388.091);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (8,-812782.628,-1079712.196,395.507);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (9,-812782.704,-1079712.145,395.635);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (10,-812782.36,-1079712.25,388.088);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (11,-812782.686,-1079712.367,395.376);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (12,-812782.764,-1079712.15,395.423);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (13,-812782.667,-1079712.45,395.22);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (14,-812782.793,-1079712.287,395.562);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (15,-812782.875,-1079712.183,395.766);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (16,-812782.864,-1079712.155,395.301);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (17,-812782.578,-1079712.183,388.833);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (18,-812782.582,-1079712.145,388.633);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (19,-812782.597,-1079712.117,388.331);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (20,-812782.825,-1079712.474,395.384);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (21,-812782.889,-1079712.299,395.419);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (22,-812782.633,-1079712.137,388.094);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (23,-812782.916,-1079712.35,395.593);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (24,-812782.813,-1079712.592,395.23);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (25,-812782.699,-1079712.103,388.494);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (26,-812783.063,-1079712.129,395.849);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (27,-812782.705,-1079712.242,388.866);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (28,-812782.699,-1079712.232,388.665);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (29,-812782.71,-1079712.145,388.23);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (30,-812782.7,-1079712.235,388.358);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (31,-812782.657,-1079712.319,388.092);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (32,-812782.934,-1079712.51,395.238);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (33,-812783.064,-1079712.26,395.738);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (34,-812783.032,-1079712.292,395.289);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (35,-812783.05,-1079712.201,394.953);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (36,-812783.028,-1079712.22,394.607);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (37,-812782.989,-1079712.536,395.485);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (38,-812782.935,-1079712.637,395.113);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (39,-812783.09,-1079712.302,395.454);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (40,-812783.047,-1079712.353,394.932);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (41,-812783.039,-1079712.348,394.7);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (42,-812783.058,-1079712.244,394.299);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (43,-812782.795,-1079712.241,388.511);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (44,-812782.951,-1079712.715,395.341);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (45,-812782.871,-1079712.126,388.876);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (46,-812782.851,-1079712.153,388.673);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (47,-812783.088,-1079712.415,395.544);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (48,-812783.036,-1079712.423,394.445);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (49,-812782.812,-1079712.287,388.091);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (50,-812782.988,-1079712.771,395.471);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (51,-812783.007,-1079712.65,394.917);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (52,-812783.028,-1079712.554,394.606);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (53,-812783.125,-1079712.233,394.138);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (54,-812783.011,-1079712.667,394.702);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (55,-812783.006,-1079712.762,395.017);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (56,-812783.011,-1079712.811,395.158);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (57,-812782.795,-1079712.52,388.086);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (58,-812783.136,-1079712.427,394.343);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (59,-812783.104,-1079712.83,395.342);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (60,-812783.163,-1079712.621,394.922);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (61,-812782.854,-1079712.652,388.084);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (62,-812783.145,-1079712.839,395.551);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (63,-812783.037,-1079712.216,388.092);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (64,-812783.195,-1079712.69,395.047);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (65,-812782.955,-1079712.479,388.084);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (66,-812783.19,-1079712.764,395.181);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (67,-812783.294,-1079712.546,395.213);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (68,-812783.367,-1079712.397,395.208);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (69,-812783.428,-1079712.238,395.218);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (70,-812783.046,-1079712.616,388.092);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (71,-812783.551,-1079712.259,395.305);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (72,-812783.216,-1079712.317,388.101);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (73,-812783.313,-1079712.113,388.135);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (74,-812783.685,-1079712.105,395.301);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (75,-812783.431,-1079712.186,388.177);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (76,-812783.233,-1079712.689,388.15);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (77,-812783.543,-1079712.105,388.313);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (78,-812783.507,-1079712.357,388.155);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (79,-812783.601,-1079712.229,388.258);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (80,-812783.806,-1079712.182,388.313);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (81,-812783.49,-1079712.117,395.403);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (82,-812783.339,-1079712.28,395.394);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (83,-812783.066,-1079712.5,394.831);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (84,-812783.128,-1079712.538,395.276);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (85,-812783.082,-1079712.577,395.044);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (86,-812783.267,-1079712.51,388.149);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (87,-812783.21,-1079712.661,395.301);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (88,-812783.2,-1079712.397,395.419);
    Insert into ENTRY (RID,VAL_D1,VAL_D2,VAL_D3) values (89,-812783.209,-1079712.193,395.611);

  • Ingestion Performance with Out-of-line on Large Schema

    I have a large and complex schema consisting of 5 XSDs, a few hundred complex types, and a few thousand elements. This schema will be used for loading files ranging from 50 MB to 650 MB.
    Since there are circular references and self references, I have annotated out-of-line storage for a few of the global element references. Ingestion of even a very small 800 KB sample file shows severe performance problems. Using a SQL trace, I was able to narrow the problem down to the out-of-line storage.
    This 800 KB file should load in under one second, yet it takes almost 2 minutes. The inserts into the out-of-line tables represent more than 80% of the total time. Both of the statements included below target out-of-line tables.
    From the Trace
    1 session in tracefile.
    2109 user SQL statements in trace file.
    1336 internal SQL statements in trace file.
    3445 SQL statements in trace file.
    74 unique SQL statements in trace file.
    148036 lines in trace file.
    133 elapsed seconds in trace file.
    INSERT /*+ NO_REF_CASCADE NESTED_TABLE_SET_REFS */ INTO
    "<VENDOR>"."GSMRELATION_VSDATA" e (e.sys_nc_rowinfo$,e.sys_nc_oid$,
    e.docid)
    VALUES
    (:1,:2,:3) RETURNING e.sys_nc_oid$,e.rowid INTO :4,:5
    call     count    cpu  elapsed  disk  query  current  rows
    -------  -----  -----  -------  ----  -----  -------  ----
    Parse      323   0.03     0.02     0      0        0     0
    Execute    323  42.64    41.91     0   4178     3418   323
    Fetch        0   0.00     0.00     0      0        0     0
    total      646  42.67    41.94     0   4178     3418   323
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: CHOOSE
    Parsing user id: 60 (recursive depth: 1)
    INSERT /*+ NO_REF_CASCADE NESTED_TABLE_SET_REFS */ INTO
    "<VENDOR>"."UTRAN_VSDATA" e (e.sys_nc_rowinfo$,e.sys_nc_oid$,e.docid)
    VALUES
    (:1,:2,:3) RETURNING e.sys_nc_oid$,e.rowid INTO :4,:5
    call     count    cpu  elapsed  disk  query  current  rows
    -------  -----  -----  -------  ----  -----  -------  ----
    Parse      140   0.04     0.01     0      0        0     0
    Execute    140  18.75    18.38     0   1338     1429   140
    Fetch        0   0.00     0.00     0      0        0     0
    total      280  18.79    18.40     0   1338     1429   140
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: CHOOSE
    Parsing user id: 60 (recursive depth: 1)
    The data above was gathered on Solaris 9 running 10.2.0.2 with the latest patches available.
    I have been able to reproduce the same behavior, to a different degree, using a personal instance of 10.2.0.2 on Windows XP.
    Does anyone have any idea what is happening?
    Thanks,
    VJ

    By the way, SR 5991118.993 has been created. The Oracle tech has suggested
    Bug 4369117
    Abstract: UNACCEPTABLE DIRECT PATH OUT-OF-LINE LOB INSERT PERFORMANCE
    Would this be a problem even if I'm not using CLOB annotation?
    Thanks,

  • SBWP transaction - viewing folders/sending documents: long response times

    Hi all,
    I have some complaints from a few users (about 3-4 out of ~3000 users in my ECC 6.0 system) within my company about their response times in the Business Workplace. In particular, they started complaining about the response time of calling transaction SBWP: for 1-2 of them it is up to 4-5 minutes, while I and my other 2 colleagues get a response in about 500 ms.
    They then also wanted to view some folders in the Workplace, and again had response times of minutes instead of seconds.
    I found that some of their shared folders, as well as the Trash Bin, held thousands of PDFs. I told them to delete them, and they deleted most of them. Still, when they want to open a folder it takes more than 2 minutes, while opening the same shared folder takes me 1-2 seconds.
    I checked ST03N (user profiles, single records), and one of them showed long database call and request times in the Analysis of ABAP/4 Database Requests (Single Statistical Records).
    I am running out of ideas, as I cannot explain why only those 2-3 users have long response times.
    Is it related to their folders in the Workplace? Where should I focus my investigation for SBWP-like transactions? Might some Oracle parameters need to be checked?
    I ran the automatic Oracle parameter check (O/S AIX 5.3, Oracle 10.2, ECC 6.0) and here are some of the recommendations:
    _fix_control (5705630)            add with value "5705630:ON"                                   use optimal OR concatenation; note 176754
    _fix_control (5765456)            add with value "5765456:3"                                    no further information available
    _optim_peek_user_binds            add with value "FALSE"                                        avoid bind value peeking
    _optimizer_better_inlist_costing  add with value "OFF"                                          avoid preference of index supporting inlist
    _optimizer_mjc_enabled            add with value "FALSE"                                        avoid cartesian merge joins in general
    _sort_elimination_cost_ratio      add with value "10"                                           use non-order-by-sorted indexes (first_rows)
    event (10027)                     add with value "10027 trace name context forever, level 1"    avoid process state dump at deadlock
    event (10028)                     add with value "10028 trace name context forever, level 1"    do not wait while writing deadlock trace
    event (10091)                     add with value "10091 trace name context forever, level 1"    avoid CU enqueue during parsing
    event (10142)                     add with value "10142 trace name context forever, level 1"    avoid B-tree bitmap conversion plans
    event (10183)                     add with value "10183 trace name context forever, level 1"    avoid rounding during cost calculation
    event (10191)                     add with value "10191 trace name context forever, level 1"    avoid high CBO memory consumption
    event (10411)                     add with value "10411 trace name context forever, level 1"    fixes int-does-not-correspond-to-number bug
    event (10629)                     add with value "10629 trace name context forever, level 32"   influence rebuild online error handling
    event (10753)                     add with value "10753 trace name context forever, level 2"    avoid wrong values caused by prefetch; note 1351737
    event (10891)                     add with value "10891 trace name context forever, level 1"    avoid high parsing times joining many tables
    event (14532)                     add with value "14532 trace name context forever, level 1"    avoid massive shared pool consumption
    event (38068)                     add with value "38068 trace name context forever, level 100"  long raw statistic; implement note 948197
    event (38085)                     add with value "38085 trace name context forever, level 1"    consider cost adjust for index fast full scan
    event (38087)                     add with value "38087 trace name context forever, level 1"    avoid ORA-600 at star transformation
    event (44951)                     add with value "44951 trace name context forever, level 1024" avoid HW enqueues during LOB inserts

    Hi Loukas,
    Your message is not well formatted, so you are making it harder for people to read. However, your problem is that you have 3-4 SBWP users with slow runtimes when accessing folders. Correct?
    You mentioned that there is a large number of documents in the users' folders; these types of problems are usually caused by a large number of table joins on the SAPoffice tables specific to your users.
    First, please refer to SAP Note 988057 (Reorganization - Information).
    To help with this issue you can use report RSBCS_REORG in SE38 to remove any deleted documents from the SAPoffice folders. This should speed up access to your users' documents in folders, as it removes unnecessary documents from the SAPoffice tables.
    If your users do not see a significant speed-up in access to their SAPoffice folders, please refer to SAP Note 904711 (SAPoffice: Where are documents physically stored) and verify that the statistics and indexes mentioned in that note are up to date.
    If neither of these helps, you can trace these users in ST12, find out which table is causing the longest runtime, and see whether there is a way to either reduce that table or improve the access method at the DB level.
    Hope this helps
    Michael

  • Import performance

    Hello,
    I have exported the data of one schema which has LOBs in many tables. The export dump file size is only around 1.5 GB, yet when I import it into another database it takes around 6 hours to complete.
    Can anyone tell me what parameters will boost the performance of LOB inserts during import?
    I set NOLOGGING and NOCACHE for the tables, and also indexes=n buffer=100000000.
    What else can I set?
    Regards,
    Sandeep

    As suggested, use Data Pump if possible.
    If not, you can also use indexes=n, as this will bring the time down significantly. The indexes can then be created later, when time is not a factor.
    A large rollback segment dedicated for the duration of the import will also help; take the other rollback segments offline.
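    For reference, a sketch of the two approaches (the file, user, and directory names are hypothetical):

    ```shell
    # Legacy import: skip indexes and enlarge the buffer, as discussed above
    imp scott/tiger file=exp.dmp full=y indexes=n buffer=100000000 log=imp.log

    # Data Pump (10g and later), generally much faster for LOB-heavy schemas;
    # note it requires the export to have been taken with expdp, not classic exp
    impdp scott/tiger directory=DP_DIR dumpfile=exp.dmp exclude=index logfile=impdp.log
    ```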

  • Help!! Urgent!! Cannot insert/update CLOB: the row containing the LOB is not locked

    Hi,
    could you help me?
    I cannot insert a string into an Oracle CLOB field; it fails with:
    ORA-22920: row containing the LOB value is not locked
    ORA-06512: at "SYS.DBMS_LOB", line 708
    ORA-06512: at line 1
    What does this mean?
    My table is defined as: create table clob1(id number(5), mclob clob default empty_clob()); the id is created automatically by the sequence test_sequence.
    My code is below:
    import java.io.*;
    import java.sql.*;
    //import oracle.sql.*;

    public class test4 {
      public static void main(String args[]) {
        String url_String = "jdbc:oracle:thin:test/test@myhost:1521:myorcl";
        try {
          Class.forName("oracle.jdbc.driver.OracleDriver");
          java.sql.Connection con = java.sql.DriverManager.getConnection(url_String);
          con.setAutoCommit(true);
          Statement stmt = con.createStatement();
          String sqlStr = "insert into clob1 (mclob) values(empty_clob())";
          stmt.executeUpdate(sqlStr);
          String query = "select test_seq.CURRVAL from dual";
          ResultSet rs = stmt.executeQuery(query);
          rs.next();
          int currval = rs.getInt(1);
          query = "select * from clob1 where id=" + currval;
          String str = "abcedefhijklmnopqrstuvwxyz";
          rs = stmt.executeQuery(query);
          rs.next();
          java.sql.Clob clob1 = rs.getClob(2);
          oracle.sql.CLOB clob = (oracle.sql.CLOB) clob1;
          System.out.print(clob);
          java.io.Writer wr = clob.getCharacterOutputStream();
          wr.write(str);
          wr.flush();
          wr.close();
          stmt.close();
          con.close();
        } catch (Exception ex) {
          System.err.println("SQLException: " + ex.getMessage());
        }
      }
    }

    Hi,
    To avoid the ORA-22920 error when selecting a LOB column, use the 'for update' clause in the select statement, like:
    query = "select * from clob1 where id=" + currval + " FOR UPDATE";
    This should solve the problem. However, after fixing this, you might get the error:
    java.sql.SQLException: ORA-01002: fetch out of sequence
    I got this error when testing your code. To avoid it, set AutoCommit to OFF before executing the 'select ... for update' statement, like:
    query = "select * from clob1 where id=" + currval + " FOR UPDATE";
    con.setAutoCommit(false);
    rs = stmt.executeQuery(query);
    Hope that helps,
    Srinivas
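    The same rule holds outside JDBC: a LOB locator is writable only while its row is locked in the current transaction. The pattern in PL/SQL, using the table and test string from this thread:

    ```sql
    DECLARE
      v_clob CLOB;
    BEGIN
      -- FOR UPDATE locks the row, so the returned locator is writable
      SELECT mclob INTO v_clob FROM clob1 WHERE id = 1 FOR UPDATE;
      DBMS_LOB.WRITEAPPEND(v_clob, 26, 'abcedefhijklmnopqrstuvwxyz');
      COMMIT;  -- ends the transaction and releases the lock
    END;
    /
    ```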

  • The size limit of the OCI LOB Array Insert is 64K for one field?

    I have a table with 4 fields, one of which is a BLOB. I want to insert 16 rows in one OCIStmtExecute call. I know I can set the iter parameter to 16 to execute the SQL 16 times.
    I found an example in the "Application Developer's Guide - Large Objects", on page 13-17 ("Data Interface for Persistent LOBs"): a function called "array_insert". It shows the usage of OCIBindArrayOfStruct, but it can only insert LOBs of the same size; the LOB field of each row is filled with data of the same size.
    But I have to insert LOBs of different sizes, for example 8K for row 1, 16K for row 2, and 128K for row 3. Then I found the alenp parameter of OCIBindByName/OCIBindByPos. It is a "pointer to array of actual lengths of array elements" (OCI documentation), so I thought I had found the solution to my problem. But the type of the alenp parameter is ub2*. Does that mean I can only insert 64K of data per row in my array insert? That is too small; I would like to array-insert BLOBs of 16M per row.
    Or is there another solution to my problem? I have been looking for one for a long time. Thanks, everyone!

    It is called the Data Interface: working with LOB datatypes through the APIs designed for legacy datatypes. I can specify SQLT_BIN to bind in-memory binary data to a BLOB column and INSERT or UPDATE directly. This works without a LOB locator and saves round-trips to the server, which fits my needs very well, because I have to insert a large number of BLOBs into the server as quickly as possible.
    I have written a test program: multiple rows with different-sized BLOBs (less than 65536 bytes) can be inserted in one call, without locators. Multiple rows with same-sized BLOBs (more than 64K) can also be inserted in one call, since the alenp parameter is not used in that case. The only thing I cannot do is insert multiple rows with different-sized BLOBs of more than 64K, because the type of alenp is ub2*.
    Thank you for your reply!

  • Inserting LOBs with OCI

    I'm trying to write a function that will accept any insert/update type statement using OCI. The problem is when there are column of CLOB/BLOB type.
    These require a 2 part insrt statment:
    1. "INSERT INTO FOO VALUES (EMPTY_BLOB(),EMPTY_CLOB(),1)"
    2. "SELECT A,B FROM foo WHERE c=1 FOR UPDATE"
    The problem is, I have no way of knowing whether the insert statement uses LOB columns, so I don't know when to use the EMPTY_BLOB()/EMPTY_CLOB() functions.
    Any suggestions?

    You can leave those columns null, if you want.
    It looks like you were going to do a
    CLOB myCLOB = getCLOB(), then get the character stream from there?
    I switched from that a while back, since it was so frustrating. Oracle, as of 8.1.6, seems to accept the setCharacterStream and getCharacterStream methods on PreparedStatement/ResultSet (respectively) just fine. Switch to prepared statements and you should be fine.

  • Inserting Lob value in Oracle

    Hi,
    I want to know two things.
    *1-* If I insert a LOB value, say a BLOB, through a SQL statement as shown below:
    insert into employee(id,name,picture) values('101','Peter',pic );
    how will the value be stored in the Oracle table, i.e. the LOB pointer in the original table and the LOB value either in another segment of the same tablespace or in a segment of a different tablespace? I mean, is there an internal mechanism that Oracle handles, or does the developer/DBA have to take care of it? For instance, I want to create a separate LOB segment for picture and insert the original LOB value into it. How can I achieve this?
    *2-* If I'm using the DBMS_LOB package for inserting data into a table containing LOB columns, is it necessary to first initialize the LOB column and then insert a value into it?
    Looking for guidance.
    Regards,
    Abbasi

    You can NOT insert a BLOB like insert into employee(id,name,picture) values('101','Peter',pic ); -- try it yourself and check!
    If the column is of type BFILE, then at least you need something like insert into employee(id,name,picture) values('101','Peter',BFILENAME('MY_DIR','pic.jpg'));. To load/insert a BLOB you need DBMS_LOB. One example may be:
    SQL> CREATE TABLE myblob (id NUMBER(2), my_xls BLOB);
    Table created.
    SQL> CREATE OR REPLACE DIRECTORY TEST_DIR as 'C:\';
    Directory created.
    SQL> DECLARE
      2    v_src_loc BFILE := BFILENAME('TEST_DIR', 'saubhik.xls');
      3    v_amount  INTEGER;
      4    v_b       BLOB;
      5  BEGIN
      6    DBMS_LOB.OPEN(v_src_loc, DBMS_LOB.LOB_READONLY);
      7    v_amount := DBMS_LOB.GETLENGTH(v_src_loc);
      8    INSERT INTO myblob VALUES (1, EMPTY_BLOB()) RETURNING my_xls INTO v_b;
      9    DBMS_LOB.LOADFROMFILE(v_b, v_src_loc, v_amount);
    10    DBMS_LOB.CLOSE(v_src_loc);
    11  END;
    12  /
    PL/SQL procedure successfully completed.
    SQL> commit;
    Commit complete.
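    On question *1-* (a separate LOB segment): once declared, this is handled by the database, not by application code. You can name the segment and tablespace for the LOB data with a LOB storage clause on CREATE TABLE. A minimal sketch; the tablespace LOB_TS is assumed to exist, and PICTURE_SEG is just a name chosen here:

    ```sql
    -- Sketch: store the picture LOB in its own segment, in a separate tablespace.
    -- Assumes tablespace LOB_TS already exists; PICTURE_SEG is illustrative.
    CREATE TABLE employee (
      id      VARCHAR2(10),
      name    VARCHAR2(40),
      picture BLOB
    )
    LOB (picture) STORE AS picture_seg (
      TABLESPACE lob_ts
      DISABLE STORAGE IN ROW  -- keep all picture data out of the table segment
    );
    ```

    The row itself then holds only the LOB locator; the data goes into PICTURE_SEG. On *2-*: yes, the column must hold an initialized locator (e.g. EMPTY_BLOB()) before DBMS_LOB can write into it, which is what the RETURNING trick in the PL/SQL block above achieves.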

  • Inserting image to table...ORA-22288: file or LOB operation FILEOPEN failed

    Good day!
    I'm just new to using databases, and I'm enjoying it.
    So I read that you can insert images into a table, and I decided to try it... here's where I'm at.
    *I made a directory
    CREATE directory image_dir as 'D:\Images';
    --Directory Created.
    *I created a table
    CREATE TABLE animages
    (aname VARCHAR2(40),
    breedno NUMBER(10),
    image_file BLOB,
    image_name VARCHAR2(40),
    CONSTRAINT aname_fk FOREIGN KEY (aname) REFERENCES clist(aname));
    --Table Created.
    *Then I made a procedure for inserting the images
    CREATE OR REPLACE PROCEDURE insert_image_file (p_aname VARCHAR2, p_breedno NUMBER, p_image_name IN VARCHAR2)
    IS
      src_file BFILE;
      dst_file BLOB;
      lgh_file BINARY_INTEGER;
    BEGIN
      src_file := BFILENAME('IMAGE_DIR', p_image_name);
      INSERT INTO animages
             (aname, breedno, image_file, image_name)
      VALUES (p_aname, p_breedno, EMPTY_BLOB(), p_image_name)
      RETURNING image_file
      INTO dst_file;
      SELECT image_file
        INTO dst_file
        FROM animages
       WHERE aname = p_aname AND image_name = p_image_name
         FOR UPDATE;
      DBMS_LOB.fileopen(src_file, DBMS_LOB.file_readonly);
      lgh_file := DBMS_LOB.getlength(src_file);
      DBMS_LOB.loadfromfile(dst_file, src_file, lgh_file);
      UPDATE animages
         SET image_file = dst_file
       WHERE aname = p_aname AND image_name = p_image_name;
      DBMS_LOB.fileclose(src_file);
    END;
    --Procedure Created.
    *So I was able to do all of that, but when I try to execute the procedure I get this error:
    execute insert_image_file('African Elephant', 60, 'African_Elephant');
    ERROR at line 1:
    ORA-22288: file or LOB operation FILEOPEN failed
    The device is not ready.
    ORA-06512: at "SYS.DBMS_LOB", line 523
    ORA-06512: at "SCOTT.INSERT_IMAGE_FILE", line 18
    ORA-06512: at line 1
    I've been looking for a solution for a day now, hope someone could help me, thanks.
    BTW, I got the code for the PROCEDURE from a user named Aparna16, just did some minor editing so it would fit my tables, thanks.
    And sorry if the post is too long.

    Hi;
    ORA-22288: file or LOB operation %s failed %s
    Cause: The operation attempted on the file or LOB failed.
    Action: See the next error message in the error stack for more detailed information. Also, verify that the file or LOB exists and that the necessary privileges are set for the specified operation. If the error still persists, report the error to the DBA.
    Regards,
    Helios

  • Insert into oracle lob using php odbc

    Is there a way to insert a string > 4000 characters into an Oracle LOB using PHP without using the oci8 extension? If so, can somebody post some sample code to do it?

    Perhaps you could externally invoke Oracle's SQL Loader utility.
    What are you trying to achieve?
    -- cj

  • Insert from Lob to BLOB

    How do I insert from a LOB into a BLOB?

    Dear User,
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/sql_elements001.htm#sthref149
    Regards,
    Taj

  • Inserting files into lob columns stored procedure

    I have created a stored procedure that inserts a file into the database as a LOB object. The source code converts the file into a byte[] stream (C#) and assigns it to the parameter. Please check the PL/SQL stored procedure. Also, is there a way to debug PL/SQL code in VS 2005?
    PROCEDURE "STARDOC"."INSERTFILE" (
    "TITLE" IN VARCHAR2,
    "AUTHOR" IN VARCHAR2,
    "DESCRIPTION" IN VARCHAR2,
    "PUBLISHER" IN VARCHAR2,
    "LANGUAGE_ID" IN NUMBER,
    "CONTENT" IN BLOB DEFAULT empty_blob(),
    "FILENAME" IN VARCHAR2,
    "EXTENSION" IN VARCHAR2,
    "APPID" IN NUMBER,
    "DOC_SIZE" IN NUMBER,
    "CONTENT_TYPE_ID" IN NUMBER,
    "IDENTIFIER" IN VARCHAR2,
    "DOC_ID" OUT NUMBER) IS
    docid number;
    BEGIN -- executable part starts here
        SELECT seq_document_id.NEXTVAL INTO docid FROM dual;
        -- Documents table
        INSERT INTO DOCUMENTS(document_id, content, filename,
                              extension, retrieval_metadata_id,
                              content_metadata_id, description)
        VALUES (docid, content, filename, extension, docid, docid, description);
        -- Content metadata table
        INSERT INTO CONTENT_METADATA(content_metadata_id, document_id,
                                     author, title, description,
                                     publisher, language_id, content_type_id)
        VALUES (docid, docid, author,
                title, description, publisher,
                language_id, content_type_id);
        -- Retrieval metadata table
        INSERT INTO RETRIEVAL_METADATA(retrieval_metadata_id, identifier)
        VALUES (docid, identifier);
        -- Storage metadata table
        INSERT INTO STORAGE_METADATA(storage_metadata_id, application_id, doc_size)
        VALUES (docid, appid, doc_size);
        -- return the generated id through the OUT parameter
        DOC_ID := docid;
    EXCEPTION
    -- roll back when an exception occurs
    WHEN OTHERS THEN
        ROLLBACK;
    END "INSERTFILE";
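    One caution about the procedure above: a WHEN OTHERS THEN ROLLBACK handler with no RAISE swallows every error, so the C# caller will see the call succeed even when the inserts fail. A common pattern (a sketch of just the exception section) is to re-raise after rolling back:

    ```sql
    EXCEPTION
      WHEN OTHERS THEN
        ROLLBACK;
        RAISE;  -- re-raise so the caller still receives the original ORA- error
    ```

    This keeps the rollback behavior while letting the client-side code detect and log the failure.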

    Yes.
    Are you experiencing any issues? From your posts above there is no mention of any issues you are facing that could be attributed to the LOB size.
    Has the procedure been run at least once?

  • Deadlock when inserting lob

    I am getting several deadlocks.
    I have three client machines connecting via JDBC to our Oracle 10g server. Each of these clients insert about 10,000 rows a minute into a specific table. This table is defined as follows:
    create table contextdata (
    accdate timestamp(0) not null,
    hash varchar2(32) not null,
    content blob,
    primary key (hash)
    );
    The data is inserted into the table as follows:
    1. insert into contextdata (accdate, hash, content) values (:1, :2, empty_blob());
    2. select content from contextdata where hash = :1 for update;
    3. Move the data into the LOB.
    4. Commit.
    We get three or four deadlocks a minute. A portion of a trace file follows:
    /opt/oracle/product/10.1.0/db_1/admin/dev/udump/dev_ora_966.trc
    Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
    With the Partitioning, OLAP and Data Mining options
    ORACLE_HOME = /opt/oracle/product/10.1.0/db_1
    System name:     Linux
    Node name:     oracle10lt
    Release:     2.6.11.10-grsec
    Version:     #2 SMP Thu Jun 2 16:42:30 MDT 2005
    Machine:     i686
    Instance name: dev
    Redo thread mounted by this instance: 1
    Oracle process number: 23
    Unix process pid: 966, image: oracledev@oracle10lt
    *** 2005-06-20 05:33:49.318
    *** SERVICE NAME:(SYS$USERS) 2005-06-20 05:33:49.318
    *** SESSION ID:(182.52820) 2005-06-20 05:33:49.318
    DEADLOCK DETECTED
    Current SQL statement for this session:
    SELECT CONTENT FROM CONTEXTDATA WHERE HASH=:1 FOR UPDATE
    The following deadlock is not an ORACLE error. It is a
    deadlock due to user error in the design of an application
    or from issuing incorrect ad-hoc SQL. The following
    information may aid in determining the deadlock:
    Deadlock graph:
    ---------Blocker(s)-------- ---------Waiter(s)---------
    Resource Name process session holds waits process session holds waits
    TX-0003002a-00013139 23 182 X 30 229 X
    TX-00080005-00000913 30 229 X 23 182 X
    session 182: DID 0001-0017-0000219B     session 229: DID 0001-001E-00001D08
    session 229: DID 0001-001E-00001D08     session 182: DID 0001-0017-0000219B
    Rows waited on:
    Session 229: obj - rowid = 0001FACD - AAAfrNAAEAAD6oHAAQ
    (dictionary objn - 129741, file - 4, block - 1026567, slot - 16)
    Session 182: obj - rowid = 0001FACD - AAAfrNAAEAAD5jYAAX
    (dictionary objn - 129741, file - 4, block - 1022168, slot - 23)
    Information on the OTHER waiting sessions:
    Session 229:
    pid=30 serial=25679 audsid=83986 user: 111/LTCLUSTER1
    O/S info: user: , term: , ospid: 1234, machine: qacluster3.oakleynetworks.com
    program:
    Current SQL Statement:
    SELECT CONTENT FROM CONTEXTDATA WHERE HASH=:1 FOR UPDATE
    End of information on OTHER waiting sessions.
    ===================================================
    The conflict seems to be between competing sessions: one session is doing an insert while another is doing the select for update. They cannot be going after the same row, because the hash column is the primary key and the select for update is done immediately after the insert, within the same transaction. If they were using the same value for hash, the insert would fail with a duplicate value, or the duplicate insert would not return until the first session committed, at which point I would expect a duplicate-key failure.
    There is no other activity against the database.
    Any help in resolving this deadlock would be greatly appreciated.
    Thanks
    Brent
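    One way to remove the contention point in steps 1-2 above (a sketch, untested against this schema) is to take the locator straight from the INSERT with a RETURNING clause, so no second statement has to re-find and lock the row:

    ```sql
    -- Sketch: bind :3 as an OUT BLOB locator; this replaces the separate
    -- SELECT ... FOR UPDATE, since the INSERT already locks the new row.
    INSERT INTO contextdata (accdate, hash, content)
    VALUES (:1, :2, empty_blob())
    RETURNING content INTO :3;
    -- stream the data through the :3 locator, then COMMIT
    ```

    Whether this fully resolves the lock interleaving shown in the trace depends on the real cause, but it shrinks the window in which two sessions juggle row locks on freshly inserted blocks.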

    To be honest, I don't think the transaction is necessary, but the developers put it there anyway. The select statement puts the result cam_status into a variable, and then, depending on its value, it decides whether to execute the second update statement or not. I still can't upload the screenshot, because it says it needs to verify my account first. No clue at all. But it is very simple, just like:
    Clustered Index Update
    [analyst_monitor].[IX_Clust_scam_an_name_window]
    cost: 100%
    By the way, for some reason, I can't find the object based on the associatedObjectId listed in the XML
    <keylock hobtid="72057654588604416" dbid="8" objectname="CHAT.dbo.analyst_monitor"
    indexname="IX_Clust_scam_an_name_window" id="lock4befe1100" mode="X" associatedObjectId="72057654588604416">
    For example: 
    SELECT * FROM sys.partition WHERE hobt_id = 72057654588604416
    This returns nothing. Not sure why.
