UTL_FILE and LOB datatypes

Hi,
Is there any problem writing data from LOB datatypes to a file using the above package?
We have some concerns about the pointer values, especially when trying to export values that have forced the LOB locator to go offline.
If we were trying to move data from one instance to another, possibly on a different machine altogether, would LOB values export and reload OK?

Hi,
There are many ways you can do this. I think you should always start by defining the requirements.
You can use: UTL_FILE, exp and imp, database links...
Best,
EA
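A minimal sketch of the UTL_FILE route, assuming a BLOB column A_BLOB in a table LOB_TABLE and a directory object DIR_ALIAS (these names match the Pro*C demo further down; adjust them to your own schema). It reads the BLOB with DBMS_LOB and streams it to a server-side file with UTL_FILE.PUT_RAW:

DECLARE
  l_blob   BLOB;
  l_file   UTL_FILE.file_type;
  l_buffer RAW(32767);
  l_amount PLS_INTEGER := 32767;
  l_pos    PLS_INTEGER := 1;
  l_len    PLS_INTEGER;
BEGIN
  SELECT a_blob INTO l_blob FROM lob_table WHERE key = 1;
  l_len  := DBMS_LOB.getlength(l_blob);
  -- 'wb' opens the server-side file in binary write mode
  l_file := UTL_FILE.fopen('DIR_ALIAS', 'export.bin', 'wb', 32767);
  WHILE l_pos <= l_len LOOP
    DBMS_LOB.read(l_blob, l_amount, l_pos, l_buffer);
    UTL_FILE.put_raw(l_file, l_buffer, TRUE);
    l_pos := l_pos + l_amount;
  END LOOP;
  UTL_FILE.fclose(l_file);
END;
/

For moving data between instances, exp and imp carry the LOB column data inside the dump file, so the locator values themselves never need to survive the transfer.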

Similar Messages

  • Example of accessing LOB DATATYPES in PRO*C using EMBEDDED SQL STATEMENTS

    Product: PRECOMPILERS
    Date written: 2001-07-12
    Example of accessing LOB DATATYPES in PRO*C using EMBEDDED SQL STATEMENTS
    ==========================================================================
    There are three ways to work with LOBs in Pro*C:
    (1) Using the DBMS_LOB package in PL/SQL blocks
    (2) Using OCI functions
    (3) Using embedded SQL statements
    The following are the statements Pro*C supports for method (3).
    o APPEND: Appends a LOB value to the end of another LOB.
    EXEC SQL LOB APPEND :src TO :dst;
    o ASSIGN: Assigns LOB or BFILE locator to another.
    EXEC SQL LOB ASSIGN :src TO :dst;
    o CLOSE: Close LOB or BFILE.
    EXEC SQL LOB CLOSE :src;
    o COPY: Copy all or part of LOB value into another LOB.
    EXEC SQL LOB COPY :amt FROM :src [AT :src_offset] TO :dst [AT :dst_offset];
    o CREATE TEMPORARY: Creates a temporary LOB.
    EXEC SQL LOB CREATE TEMPORARY :src;
    o ERASE: Erase the given amount of LOB data starting from a given offset.
    EXEC SQL LOB ERASE :amt FROM :src [AT :src_offset];
    o FILE CLOSE ALL: Closes all the BFILES open in the current session.
    EXEC SQL LOB FILE CLOSE ALL;
    o FILE SET: Set DIRECTORY alias and FILENAME in a BFILE locator.
    EXEC SQL LOB FILE SET :file DIRECTORY = :alias, FILENAME = :filename;
    o FREE TEMPORARY: Free the temporary space for the LOB locator.
    EXEC SQL LOB FREE TEMPORARY :src;
    o LOAD FROM FILE: Copy all or part of a BFILE into an internal LOB.
    EXEC SQL LOB LOAD :amt FROM FILE :file [AT :src_offset]
    INTO :dst [AT :dst_offset];
    o OPEN: Open a LOB or BFILE for read or read/write.
    EXEC SQL LOB OPEN :src [ READ ONLY | READ WRITE ];
    o READ: Reads all or part of LOB or BFILE into a buffer.
    EXEC SQL LOB READ :amt FROM :src [AT :src_offset]
    INTO :buffer [WITH LENGTH :buflen];
    o TRIM: Truncates the LOB value.
    EXEC SQL LOB TRIM :src TO :newlen;
    o WRITE: Writes contents of the buffer to a LOB.
    EXEC SQL LOB WRITE [APPEND] [FIRST | NEXT | LAST | ONE ]
    :amt FROM :buffer [WITH LENGTH :buflen] INTO :dst [AT :dst_offset];
    o DESCRIBE: Retrieves the attributes from a LOB.
    EXEC SQL LOB DESCRIBE :src GET attribute1 [{, attributeN}]
    INTO :hv1 [[INDICATOR] :hv_ind1] [{, :hvN [[INDICATOR] :hv_indN] }];
    Attributes can be any of the following:
    CHUNKSIZE: chunk size used to store the LOB value
    DIRECTORY: name of the DIRECTORY alias for BFILE
    FILEEXISTS: whether BFILE exists or not
    FILENAME: BFILE name
    ISOPEN: whether the BFILE with this locator is OPEN or not
    ISTEMPORARY: whether specified LOB is temporary or not
    LENGTH: Length of BLOBs and BFILE in bytes, CLOBs and NCLOBs
    in characters.
    The following shows how to run samples that use LOBs.
    1. First, run the following as the scott user. (The user must have the CREATE DIRECTORY
    privilege, and the directory path should be adjusted to match your environment.)
    drop table lob_table;
    create table lob_table (key number, a_blob BLOB, a_clob CLOB);
    drop table lobdemo;
    create table lobdemo (key number, a_blob BLOB, a_bfile BFILE);
    drop directory dir_alias;
    create directory dir_alias as '/users/app/oracle/product/8.1.7/precomp/demo/proc';
    insert into lob_table values(1, utl_raw.cast_to_raw('1111111111'), 'aaaaaaaa');
    commit;
    2. The following sample creates the file out.gif in the directory specified above and
    writes the contents of the BLOB stored in lob_table into it.
    #include <stdio.h>
    #include <string.h>
    #include <stdlib.h>
    #include <sqlda.h>
    #include <sqlcpr.h>
    /* Define constants for VARCHAR lengths. */
    #define UNAME_LEN 20
    #define PWD_LEN 40
    /* Declare variables. No declare section is
    needed if MODE=ORACLE. */
    VARCHAR username[UNAME_LEN]; /* VARCHAR is an Oracle-supplied struct */
    varchar password[PWD_LEN]; /* varchar can be in lower case also. */
    /* The following 3 lines avoid inclusion of oci.h during precompilation.
    oci.h is needed only during compilation to resolve calls generated by
    the precompiler. */
    #ifndef ORA_PROC
    #include <oci.h>
    #endif
    #include <sqlca.h>
    OCIBlobLocator *blob;
    OCIClobLocator *clob;
    FILE *fp;
    unsigned int amt, offset = 1;
    #define MAXBUFLEN 5000
    unsigned char buffer[MAXBUFLEN];
    EXEC SQL VAR buffer IS RAW(MAXBUFLEN);
    /* Declare error handling function. */
    void sql_error(msg)
    char *msg;
    {
    char err_msg[128];
    size_t buf_len, msg_len;
    EXEC SQL WHENEVER SQLERROR CONTINUE;
    printf("\n%s\n", msg);
    buf_len = sizeof (err_msg);
    sqlglm(err_msg, &buf_len, &msg_len);
    printf("%.*s\n", msg_len, err_msg);
    EXEC SQL ROLLBACK RELEASE;
    exit(EXIT_FAILURE);
    }
    void main()
    {
    /* Connect to ORACLE--
    * Copy the username into the VARCHAR. */
    strncpy((char *) username.arr, "SCOTT", UNAME_LEN);
    /* Set the length component of the VARCHAR. */
    username.len =
    (unsigned short) strlen((char *) username.arr);
    /* Copy the password. */
    strncpy((char *) password.arr, "TIGER", PWD_LEN);
    password.len =
    (unsigned short) strlen((char *) password.arr);
    /* Register sql_error() as the error handler. */
    EXEC SQL WHENEVER SQLERROR DO sql_error("ORACLE error--\n");
    /* Connect to ORACLE. Program will call sql_error()
    * if an error occurs when connecting to the default database. */
    EXEC SQL CONNECT :username IDENTIFIED BY :password;
    printf("\nConnected to ORACLE as user: %s\n", username.arr);
    /* Allocate the LOB host variables and select the BLOB value */
    EXEC SQL ALLOCATE :blob;
    EXEC SQL ALLOCATE :clob;
    EXEC SQL SELECT a_blob INTO :blob FROM lob_table WHERE key=1;
    /* Open external file to which BLOB value should be written */
    fp = fopen("out.gif", "w");
    EXEC SQL WHENEVER NOT FOUND GOTO end_of_lob;
    amt = 5000;
    EXEC SQL LOB READ :amt FROM :blob AT :offset INTO :buffer;
    fwrite(buffer, MAXBUFLEN, 1, fp);
    EXEC SQL WHENEVER NOT FOUND DO break;
    /* Use polling method to continue reading the next pieces */
    while (TRUE)
    {
    EXEC SQL LOB READ :amt FROM :blob INTO :buffer;
    fwrite(buffer, MAXBUFLEN, 1, fp);
    }
    end_of_lob:
    fwrite(buffer, amt, 1, fp);
    fclose(fp);
    printf("\nG'day.\n\n\n");
    /* Disconnect from ORACLE. */
    EXEC SQL ROLLBACK WORK RELEASE;
    exit(EXIT_SUCCESS);
    }
    3. The following sample loads the out.gif file created above into the lobdemo table.
    #include <stdio.h>
    #include <string.h>
    #include <stdlib.h>
    #include <sqlda.h>
    #include <sqlcpr.h>
    /* Define constants for VARCHAR lengths. */
    #define UNAME_LEN 20
    #define PWD_LEN 40
    /* Declare variables. No declare section is
    needed if MODE=ORACLE. */
    VARCHAR username[UNAME_LEN]; /* VARCHAR is an Oracle-supplied struct */
    varchar password[PWD_LEN]; /* varchar can be in lower case also. */
    /* The following 3 lines avoid inclusion of oci.h during precompilation.
    oci.h is needed only during compilation to resolve calls generated by
    the precompiler. */
    #ifndef ORA_PROC
    #include <oci.h>
    #endif
    #include <sqlca.h>
    OCIBlobLocator *blob;
    OCIBFileLocator *bfile;
    char *alias = "DIR_ALIAS";
    char *filename = "out.gif";
    unsigned int amt = 50;
    unsigned int filelen;
    /* Declare error handling function. */
    void sql_error(msg)
    char *msg;
    {
    char err_msg[128];
    size_t buf_len, msg_len;
    EXEC SQL WHENEVER SQLERROR CONTINUE;
    printf("\n%s\n", msg);
    buf_len = sizeof (err_msg);
    sqlglm(err_msg, &buf_len, &msg_len);
    printf("%.*s\n", msg_len, err_msg);
    EXEC SQL ROLLBACK RELEASE;
    exit(EXIT_FAILURE);
    }
    void main()
    {
    /* Connect to ORACLE--
    * Copy the username into the VARCHAR. */
    strncpy((char *) username.arr, "SCOTT", UNAME_LEN);
    /* Set the length component of the VARCHAR. */
    username.len =
    (unsigned short) strlen((char *) username.arr);
    /* Copy the password. */
    strncpy((char *) password.arr, "TIGER", PWD_LEN);
    password.len =
    (unsigned short) strlen((char *) password.arr);
    /* Register sql_error() as the error handler. */
    EXEC SQL WHENEVER SQLERROR DO sql_error("ORACLE error--\n");
    /* Connect to ORACLE. Program will call sql_error()
    * if an error occurs when connecting to the default database. */
    EXEC SQL CONNECT :username IDENTIFIED BY :password;
    printf("\nConnected to ORACLE as user: %s\n", username.arr);
    /* Allocate the LOB locator */
    EXEC SQL ALLOCATE :blob;
    EXEC SQL ALLOCATE :bfile;
    /* Initialize the DIRECTORY alias of the BFILE and FILENAME */
    EXEC SQL LOB FILE SET :bfile
    DIRECTORY = :alias, FILENAME = :filename;
    EXEC SQL INSERT INTO lobdemo values (1, EMPTY_BLOB(), :bfile);
    EXEC SQL SELECT a_blob, a_bfile INTO :blob, :bfile FROM lobdemo
    WHERE key = 1;
    EXEC SQL LOB OPEN :bfile;
    /* Get the BFILE length */
    EXEC SQL LOB DESCRIBE :bfile
    GET LENGTH INTO :filelen;
    printf("File length is: %d\n", filelen);
    amt = filelen;
    /* Read BFILE and write to BLOB */
    EXEC SQL LOB LOAD :amt FROM FILE :bfile INTO :blob;
    EXEC SQL LOB CLOSE :bfile;
    printf("\nG'day.\n\n\n");
    /* Disconnect from ORACLE. */
    EXEC SQL COMMIT WORK RELEASE;
    exit(EXIT_SUCCESS);
    4. The following is the output from running the samples.
    First sample:
    Connected to ORACLE as user: SCOTT
    G'day.
    Second sample:
    Connected to ORACLE as user: SCOTT
    File length is: 10
    G'day.

  • How we handle CLOB and BLOB Datatypes in HANA DB

    Dear HANA Gurus,
    We would like to build an EDW on HANA based on our Oracle source system, which contains CLOB and BLOB datatypes.
    Would you please suggest how to handle these in the HANA DB?
    Let's not say it's Oracle specific.
    Regards,
    Manoj

    Hello,
    check the SAP HANA SQL Reference Guide for the list of data types:
    (page 14 - Classification of Data Types)
    https://service.sap.com/~sapidb/011000358700000604922011
    For this purpose, the following data types might be useful:
    Large Object (LOB) Types
    LOB (large objects) data types, CLOB, NCLOB and BLOB, are used to store a large amount of data such as text documents and images. The maximum size of an LOB is 2 GB.
    BLOB
    The BLOB data type is used to store large binary data.
    CLOB
    The CLOB data type is used to store large ASCII character data.
    NCLOB
    The NCLOB data type is used to store a large Unicode character object.
    Tomas
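    For illustration only (names are made up, not from the reply above), a HANA column table using these types could be declared like this:

    CREATE COLUMN TABLE doc_store (
        doc_id   INTEGER PRIMARY KEY,
        doc_text NCLOB,   -- large Unicode text
        doc_bin  BLOB     -- large binary content, e.g. images
    );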

  • Difference btw utl_file and dbms_lob

    hi,
    What is the difference between utl_file and dbms_lob? Do they both access files the same way, or are they different?

    797525 wrote:
    what is the difference between utl_file and dbms_lob , is that doing same acess file or difference?
    UTL_FILE is a PL/SQL package that provides PL/SQL code with the standard operating system I/O interface. You will find this same interface in languages like C, C++, Pascal, PHP, Perl and others.
    DBMS_LOB is a PL/SQL package that enables you to manage a LOB data type in Oracle. This data type is "special" in that it can contain large data objects (like video, sound, images, documents, spreadsheets, etc).
    So technically, there is very little that is the same between the two.
    Conceptually however, both are interfaces to "files". The only difference is that UTL_FILE accesses external files (on an operating system's file system), whereas DBMS_LOB provides the ability to create, store and manage "files" directly in the database. (There is also a BFILE data type that looks like a LOB but is actually an external file.)
    It is seldom a good idea to step outside the database into an o/s file system using UTL_FILE. There are security and access control issues. Issues with concurrency and transactions. Issues with data integrity. Issues with backup of such data. Etc.
    So in general, when you deal with the concept of "files" in Oracle, you should be looking first at using the Oracle database itself - and deal with files as LOBs using DBMS_LOB.
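    To make the contrast concrete, here is a small sketch (the directory object, table and column names are assumptions, not from this thread): the first block writes a text file on the server's file system with UTL_FILE, the second stores text inside the database as a CLOB via DBMS_LOB.

    DECLARE
      f UTL_FILE.file_type;
    BEGIN
      -- external file on the database server, reached through a DIRECTORY object
      f := UTL_FILE.fopen('DIR_ALIAS', 'note.txt', 'w');
      UTL_FILE.put_line(f, 'stored on the o/s file system');
      UTL_FILE.fclose(f);
    END;
    /

    DECLARE
      c CLOB;
      v VARCHAR2(100) := 'stored inside the database';
    BEGIN
      -- "file" created, stored and managed inside the database
      INSERT INTO lob_table (key, a_clob) VALUES (2, EMPTY_CLOB())
      RETURNING a_clob INTO c;
      DBMS_LOB.writeappend(c, LENGTH(v), v);
    END;
    /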

  • Difference between CHAR and VARCHAR2 datatype

    Difference between CHAR and VARCHAR2 datatype
    CHAR datatype
    If you have an employee name column with size 10, ename CHAR(10), and a column value 'JOHN' is inserted, 6 empty spaces will be inserted to the right of the value. If this were a VARCHAR2 column, ename VARCHAR2(10), how would it handle the column value 'JOHN'?

    The CHAR datatype stores fixed-length character strings, and Oracle compares CHAR values using blank-padded comparison semantics.
    The VARCHAR2 datatype, by contrast, stores variable-length character strings, and Oracle compares VARCHAR2 values using nonpadded comparison semantics.
    This is important when comparing or joining on columns having these datatypes:
    SQL*Plus: Release 10.2.0.1.0 - Production on Pzt Au 6 09:16:45 2007
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    SQL> conn hr/hr
    Connected.
    SQL> set serveroutput on
    SQL> DECLARE
    2 last_name1 VARCHAR2(10) := 'TONGUC';
    3 last_name2 CHAR(10) := 'TONGUC';
    4 BEGIN
    5 IF last_name1 = last_name2 THEN
    6 DBMS_OUTPUT.PUT_LINE ( '-' || last_name1 || '- is equal to -' || last_name2
    || '-');
    7 ELSE
    8 DBMS_OUTPUT.PUT_LINE ( '-' || last_name1 || '- is NOT equal to -' || last_n
    ame2 || '-');
    9 END IF;
    10 END;
    11 /
    -TONGUC- is NOT equal to -TONGUC -
    PL/SQL procedure successfully completed.
    SQL> DECLARE
    2 last_name1 CHAR(6) := 'TONGUC';
    3 last_name2 CHAR(10) := 'TONGUC';
    4 BEGIN
    5 IF last_name1 = last_name2 THEN
    6 DBMS_OUTPUT.PUT_LINE ( '-' || last_name1 || '- is equal to -' || last_name2
    || '-');
    7 ELSE
    8 DBMS_OUTPUT.PUT_LINE ( '-' || last_name1 || '- is NOT equal to -' || last_n
    ame2 || '-');
    9 END IF;
    10 END;
    11 /
    -TONGUC- is equal to -TONGUC -
    PL/SQL procedure successfully completed.
    Also you may want to read related asktom thread - "Char Vs Varchar" http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1542606219593
    and http://tahitiviews.blogspot.com/2007/05/less-is-more-more-or-less.html
    Best regards.
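    As a further illustration (not from the original reply; the table is hypothetical), the padding itself is easy to see with LENGTH:

    CREATE TABLE emp_demo (ename_c CHAR(10), ename_v VARCHAR2(10));
    INSERT INTO emp_demo VALUES ('JOHN', 'JOHN');
    SELECT LENGTH(ename_c) AS char_len, LENGTH(ename_v) AS varchar_len FROM emp_demo;
    -- char_len = 10 (blank-padded to the declared size), varchar_len = 4 (only the data is stored)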

  • How to copy a table with LONG and CLOB datatype over a dblink?

    Hi All,
    I need to copy a table from an external database into a local one. Note that this table has both LONG and CLOB datatypes included.
    I have taken 2 approaches to do this:
    1. Use the CREATE TABLE AS....
    SQL> create table XXXX_TEST as select * from XXXX_INDV_DOCS@ext_db;
    create table XXXX_TEST as select * from XXXX_INDV_DOCS@ext_db
    ERROR at line 1:
    ORA-00997: illegal use of LONG datatype
    2. After reading some threads I tried to use the COPY command:
    SQL> COPY FROM xxxx/pass@ext_db TO xxxx/pass@target_db REPLACE XXXX_INDV_DOCS USING SELECT * FROM XXXX_INDV_DOCS;
    Array fetch/bind size is 15. (arraysize is 15)
    Will commit when done. (copycommit is 0)
    Maximum long size is 80. (long is 80)
    CPY-0012: Datatype cannot be copied
    If my understanding is correct, the 1st statement fails because there is a LONG datatype in the XXXX_INDV_DOCS table, and the 2nd one fails because there is a CLOB datatype.
    Is there a way to copy the entire table (all columns, including both LONG and CLOB) over a dblink?
    Would greatly appreciate any workarounds or ideas!
    Regards,
    Pawel.

    Hi Nicolas,
    There is a reason I am not using export/import:
    - I would like to have a one-script solution for this problem (meaning execute one script on one machine)
    - I am not able to make an SSH connection from the target DB to the local one (although the other way around works fine), which means I cannot copy the dump file from the target server to the local one.
    - with export/import I need to have an SSH connection on the target DB in order to issue the exp command...
    Therefore, I am looking for a solution (or a workaround) which will work over a DBLINK.
    Regards,
    Pawel.
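    One workaround sometimes suggested for the LONG part (a sketch with placeholder column names, not something confirmed in this thread): LONG columns cannot be selected across a database link, but TO_LOB can convert them to a CLOB in a CTAS or INSERT ... SELECT that runs on the database where the LONG lives. With a staging table on the remote side, the pull over the dblink then only involves LOB columns:

    -- run on the remote (source) database, where the LONG column resides
    CREATE TABLE xxxx_indv_docs_stage AS
    SELECT key_col, TO_LOB(long_col) AS long_as_clob, clob_col
    FROM xxxx_indv_docs;

    -- run on the local (target) database
    CREATE TABLE xxxx_test AS
    SELECT * FROM xxxx_indv_docs_stage@ext_db;

    Note that pulling CLOBs over a database link with CTAS is version-dependent, so test it on your releases first.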

  • Using utl_file and unix pipes

    Hi,
    I'm trying to use utl_file and unix pipes to communicate with a unix process.
    Basically I want the unix process to read off one pipe and give me back the result on a different pipe.
    In the example below the unix process is a dummy one just copying the input to the output.
    I can't get this to work with a single PL/SQL block writing and reading to/from the pipes - it hangs on the first read of the return pipe.
    Any ideas?
    ======== TEST CASE 1 ===============
    create directory tmp as '/tmp';
    on unix:
    cd /tmp
    mknod outpip p
    mknod inpip p
    cat < inpip > outpip
    drop table res;
    create table res (m varchar2(200));
    declare
    l_filehandle_rec UTL_FILE.file_type;
    l_filehandle_send UTL_FILE.file_type;
    l_char VARCHAR2(200);
    begin
    insert into res values ('starting');commit;
    l_filehandle_send := UTL_FILE.fopen ('TMP', 'inpip', 'A', 32000);
    insert into res values ('opened inpip ');commit;
    l_filehandle_rec := UTL_FILE.fopen ('TMP', 'outpip', 'R', 32000);
    insert into res values ('opened outpip ');commit;
    FOR i in 1..10 LOOP
    utl_file.put_line(l_filehandle_send,'line '||i);
    insert into res values ('written line '||i); commit;
    utl_file.get_line(l_filehandle_rec,l_char);
    insert into res values ('Read '||l_char);commit;
    END LOOP;
    utl_file.fclose(l_filehandle_send);
    utl_file.fclose(l_filehandle_rec);
    END;
    in a different sql session:
    select * from res;
    starting
    opened inpip
    opened outpip
    written line 1
    ============ TEST CASE 2 =================
    However If I use 2 different sql session (not what I want to do...), it works fine:
    1. unix start cat < inpip > outpip
    2. SQL session 1:
    set serveroutput on size 100000
    declare
    l_filehandle UTL_FILE.file_type;
    l_char VARCHAR2(200);
    begin
    l_filehandle := UTL_FILE.fopen ('TMP', 'outpip', 'R', 32000);
    FOR i in 1..10 LOOP
    utl_file.get_line(l_filehandle,l_char);
    dbms_output.put_line('Read '||l_char);
    END LOOP;
    utl_file.fclose(l_filehandle);
    END;
    3. SQL session 2:
    set serveroutput on size 100000
    declare
    l_filehandle UTL_FILE.file_type;
    begin
    l_filehandle := UTL_FILE.fopen ('TMP', 'inpip', 'A', 32000);
    FOR i in 1..10 LOOP
    utl_file.put_line(l_filehandle,'line '||i);
    --utl_lock.sleep(1);
    dbms_output.put_line('written line '||i);
    END LOOP;
    utl_file.fclose(l_filehandle);
    END;
    /

    > it hangs on the first read of the return pipe.
    Correct.
    A pipe is a serialised I/O device. One process writes to the pipe. The write is blocked until a read (from another process or thread) is made on that pipe. Only when there is a reader for that data is the writer unblocked and the actual write I/O occurs.
    The reverse is also true. A read on the pipe is blocked until another process/thread writes data into the pipe.
    Why? A pipe is a memory structure - not a file system file. If the write were not blocked, the writer process could write GBs of data into the pipe before a reader process starts to read that data. This would drastically hurt memory consumption and performance.
    Thus the purpose of a pipe is to serve as a serialised blocking mechanism between a reader and a writer - allowing one to write data that is read by the other. With minimal memory overheads, as each read must be serviced by a write and each write serviced by a read.
    If you're looking for something different, then you can open a standard file in share mode and write and read from it using two different file handles within the same process. However, the file will obviously have a file system footprint in terms of space (growing until the writer stops and the reader terminates and trashes the file).
    OTOH a pipe's footprint is minimal.
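    A minimal sketch of that alternative, assuming the TMP directory object from the test case (this is not code from the thread): write with one UTL_FILE handle, flush, then read the same file back through a second handle in the same PL/SQL block.

    DECLARE
      h_out  UTL_FILE.file_type;
      h_in   UTL_FILE.file_type;
      l_line VARCHAR2(200);
    BEGIN
      h_out := UTL_FILE.fopen('TMP', 'exchange.txt', 'W', 32000);
      UTL_FILE.put_line(h_out, 'line 1');
      UTL_FILE.fflush(h_out);                -- make the written data visible to readers
      h_in := UTL_FILE.fopen('TMP', 'exchange.txt', 'R', 32000);
      UTL_FILE.get_line(h_in, l_line);
      DBMS_OUTPUT.put_line('Read ' || l_line);
      UTL_FILE.fclose(h_out);
      UTL_FILE.fclose(h_in);
    END;
    /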

  • Export and Import Of Tables having BLOB and Raw datatype

    Hi Gurus,
    I had to export one schema in one database and import to another schema in another database.
    However, my current database contains RAW and BLOB datatypes. I have exported the whole database with the following command:
    exp SYSTEM/manager FULL=y FILE=jbrms_full_19APR2013.dmp log=jbrms_full_19APR2013.log GRANTS=y ROWS=y
    My question is whether all the tables with RAW and BLOB columns have been exported properly or not. One more thing I have done: after taking the export, I imported it into a local DB and checked that the number of rows in both environments is the same, as I have not tested with the application to confirm.
    I am using this version: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production.
    I am not able to attach the complete log file, but here is the part for the schema JBRMS, which has the BLOB and RAW datatypes.
    Please let me know if you see any potential concerns with the export of BLOB and RAW data.
    . about to export JBRMS's tables via Conventional Path ...
    . . exporting table FS_FSENTRY 8 rows exported
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    . . exporting table FS_WS_DEFAULT_FSENTRY 2 rows exported
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    . . exporting table PM_WS_DEFAULT_BINVAL 60 rows exported
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    . . exporting table PM_WS_DEFAULT_BUNDLE 751 rows exported
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    . . exporting table PM_WS_DEFAULT_NAMES 2 rows exported
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    . . exporting table PM_WS_DEFAULT_REFS 4 rows exported
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    . . exporting table VERSIONING_FS_FSENTRY 1 rows exported
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    . . exporting table VERSIONING_PM_BINVAL 300 rows exported
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    . . exporting table VERSIONING_PM_BUNDLE 11654 rows exported
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    . . exporting table VERSIONING_PM_NAMES 2 rows exported
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.
    . . exporting table VERSIONING_PM_REFS 1370 rows exported
    EXP-00091: Exporting questionable statistics.
    EXP-00091: Exporting questionable statistics.

    You could see the 'QUESTIONABLE STATISTICS' warning for a couple of reasons. I don't remember them all, but:
    1. If the target and source character sets are different.
    2. System-generated names (I think?)
    The best solution, if you don't need the exact statistics that are on your source database, would be to add
    statistics=none
    to your imp command and then regather statistics when the imp command is done.
    Dean
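    For example (a sketch only - adjust file names and credentials to your environment), the import side would look something like:

    imp SYSTEM/manager FULL=y FILE=jbrms_full_19APR2013.dmp log=jbrms_full_19APR2013_imp.log STATISTICS=NONE

    The LOB and RAW column data itself is carried in the dump file either way; STATISTICS=NONE only skips importing the optimizer statistics that triggered the EXP-00091 warnings.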

  • Separate tablespaces for indexes and LOBs

    I have modeled my db tables with jDeveloper. Now I would like to store my tables in default tables, my indexes in a separate tablespace, and LOBs in a 3rd tablespace.
    How can I best add the tablespace definitions to the table creation SQL? I can't find such functionality in jDeveloper.
    If jDev does not support this, does Oracle have some other tools which could do this?
    Or can I set some ALTER SESSION parameter before creating the tables which would make my indexes and LOBs go to the correct tablespaces by default?

    I see jDeveloper 11g has a possibility for specifying tablespaces for the tables and LOBs.
    I can't figure out how to set tablespaces for indexes though. I can tell there is definitely no way to set tablespace settings for primary key / foreign key indexes.
    So jDev 11g does not seem to solve my problem either.
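    As a fallback, the tablespaces can be written into the generated DDL by hand; a sketch with placeholder tablespace and object names:

    CREATE TABLE docs (
      id      NUMBER CONSTRAINT docs_pk PRIMARY KEY USING INDEX TABLESPACE idx_ts,
      created DATE,
      body    CLOB
    )
    TABLESPACE data_ts
    LOB (body) STORE AS (TABLESPACE lob_ts);

    CREATE INDEX docs_created_i ON docs (created) TABLESPACE idx_ts;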

  • Rowid and urowid datatypes

    I'm writing an exercise to learn more about different datatypes and am confused about the use of and difference between rowid and urowid. As I understand it using rowid should only be done when writing for backwards compatibility, and urowid should be used for new coding. I'm working in 9.2.0.1.0, so I assume I should use urowid.
    The following code returns an error (shown below). Anyone know why this is happening?
    create or replace function lrnvrbls
    return varchar2
    is
    v_rowid urowid;
    v_data varchar2(100);
    select rowid
    into v_rowid
    from mytable
    where rownum = 10 -- 10 is just a number I chose
    execute immediate
    'select mycolumn ' ||
    'from mytable ' ||
    'where rowid = ' ||
    v_rowid
    into v_data;
    return v_data;
    end;
    SQL>select lrnvrbls from dual;
    select lrnvrbls from dual
    ORA-00904: "AAAHCAAAMAAAADVAAI": invalid identifier
    I've tried using both ROWID and UROWID datatypes with no luck. Also tried CHARTOROWID and ROWIDTOCHAR and failed the same way.
    Anybody understand how to use this?

    A single datatype called the universal rowid, or UROWID, supports both logical
    and physical rowids, as well as rowids of foreign tables such as non-Oracle
    tables accessed through a gateway.
    A column of the UROWID datatype can store all kinds of rowids. The value of the
    COMPATIBLE initialization parameter must be set to 8.1 or higher to use UROWID
    columns.
    DROP TABLE mytable
    CREATE TABLE mytable
    AS SELECT owner mycolumn FROM all_tables
    WHERE rownum < 100
    CREATE OR REPLACE
    FUNCTION lrnvrbls
    RETURN varchar2
    IS
    v_rowid urowid;
    v_data varchar2(100);
    strSQL varchar2(255);
    BEGIN
    select rowid
    into v_rowid
    from mytable
    where rownum = 1;
    strSQL := 'select mycolumn from mytable where rowid = '''||v_rowid ||'''';
    execute immediate strSQL into v_data;
    return v_data;
    END lrnvrbls;
    SELECT lrnvrbls FROM DUAL
    DROP FUNCTION lrnvrbls
    DROP TABLE mytable
    COMMIT
    16:39:37 SQL> DROP TABLE mytable
    16:39:39 2 /
    Table dropped.
    Elapsed: 00:00:00.00
    16:39:39 SQL> --
    16:39:39 SQL> CREATE TABLE mytable
    16:39:39 2 AS SELECT owner mycolumn FROM all_tables
    16:39:39 3 WHERE rownum < 100
    16:39:39 4 /
    Table created.
    Elapsed: 00:00:00.00
    16:39:39 SQL> --
    16:39:39 SQL> CREATE OR REPLACE
    16:39:39 2 FUNCTION lrnvrbls
    16:39:39 3 RETURN varchar2
    16:39:39 4 IS
    16:39:39 5 --
    16:39:39 6 v_rowid urowid;
    16:39:39 7 v_data varchar2(100);
    16:39:39 8 strSQL varchar2(255);
    16:39:39 9 --
    16:39:39 10 BEGIN
    16:39:39 11 --
    16:39:39 12 select rowid
    16:39:39 13 into v_rowid
    16:39:39 14 from mytable
    16:39:39 15 where rownum = 1;
    16:39:39 16 --
    16:39:39 17 strSQL := 'select mycolumn from mytable where rowid = '''||v_rowid ||'''';
    16:39:39 18 --
    16:39:39 19 execute immediate strSQL into v_data;
    16:39:39 20 --
    16:39:39 21 return v_data;
    16:39:39 22 --
    16:39:39 23 END lrnvrbls;
    16:39:39 24 /
    Function created.
    Elapsed: 00:00:00.00
    16:39:39 SQL> --
    16:39:39 SQL> SELECT lrnvrbls FROM DUAL
    16:39:39 2 /
    LRNVRBLS
    SYS
    Elapsed: 00:00:00.00
    16:39:39 SQL> --
    16:39:39 SQL> DROP FUNCTION lrnvrbls
    16:39:39 2 /
    Function dropped.
    Elapsed: 00:00:00.01
    16:39:40 SQL> DROP TABLE mytable
    16:39:40 2 /
    Table dropped.
    Elapsed: 00:00:00.00
    16:39:40 SQL> COMMIT
    16:39:40 2 /
    Commit complete.
    Elapsed: 00:00:00.00
    16:39:40 SQL>
    */

  • Help changing all PK and FK datatypes from uniqueidentifier to nvarchar(128)

    Is there a simple script to iterate through all my tables and change my PK and FK datatypes to nvarchar(128)? I don't want to have to go through each table manually, remove relationships, and change each key one by one. Here is my db schema if it helps.

    Hi Man_Cat,
    According to your description, you are looking for a script to update all the PK and FK datatypes from uniqueidentifier to nvarchar(128), right?
    Based on my research, we cannot change the data type of the Primary Key and Foreign Key directly, as Tom said. However, we can drop the PK and FK and recreate them using the query below.
    --This will drop the primary key temporarily
    ALTER TABLE MyTable
    drop CONSTRAINT PK_MyTable
    --change data type
    ALTER TABLE MyTable
    ALTER COLUMN PrimaryKeyID BigInt
    --add primary key
    ALTER TABLE MyTable
    ADD CONSTRAINT PK_MyTable PRIMARY KEY (PrimaryKeyID)
    Reference
    http://forums.asp.net/t/1528279.aspx?Alter+script+to+change+the+Primarykey+Column+datatype
    Regards,
    Charlie Liao
    TechNet Community Support

  • Problem with CDC capturing changes to LOB datatypes

    Greetings all,
    I've recently set up a CDC procedure (Asynchronous HotLog mode) to populate a data mart with several source database tables (Oracle Enterprise 11gR2 to 11gR1). When inserting new rows, all of the columns defined, including LOB columns, are being captured and populated successfully. However, if a row is updated, the CDC stream is not carrying over the LOB data. Looking at the populated stage tables, the LOB columns have been nulled out (both in the UO row and the UN row). All other defined columns were captured correctly.
    Is there some known issue with CDC and LOB data types in 11g? I read some KB tips pertaining to CDC LOBs in 10g, but it looks like all of that has been resolved in 11g.
    Any ideas why LOBs aren't being updated?
    Thanks,
    M/R

    I'm not aware of any. I would recommend opening an SR with Oracle and please post the resolution here so we can all learn from your experience.

  • ORA-12815 while reorg/compression of tables without LONG and LOB with 11g

    Hello fellows,
    I am in the luxury situation that I got a copy of our production R/3 environment that was left over from a project and is no more required by any of our developers.
    As we are still on Oracle 9.2.0.7, I upgraded this copy to 11.2 in a two-step process (from 9i to 10g to 11g).
    I got myself the SAP dbatools 7.20(3) and Note 1431296 - LOB conversion and table compression with BRSPACE 7.20.
    I started with some small tablespaces, but after a while I thought I'd like to try to reorg/compress the worst of all tablespaces... PSAPPOOLD with ~15,000 tables.
    I first converted the tables with LONG fields online that can be compressed, then the ones that cannot be compressed, then I reorged the tables that contain old LOB fields online. With these different executions of the brspace commands that are also mentioned in the above note, I managed to move ~3,000 tables without any issues.
    But now I started with the biggest bunch of tables, the compression of tables without LONG and LOB fields online.
    This is the command I used:
    brspace -u / -p reorgEXCL.tab -f tbreorg -a reorg -o sapr3 -s PSAPPOOLD -t allsel -n psapreorg -i psapreorgi -c ctab -SCT
    ...after a few checks performed by brspace, I end up at the screen
    "Options for reorganization of tables" (which is still nothing I wouldn't have expected):
    1 * Reorganization action (action) ............ [reorg]
    2 - Reorganization mode (mode) ................ [online]
    3 - Create DDL statements (ddl) ............... [yes]
    4 ~ New destination tablespace (newts) ........ [PSAPREORG]
    5 ~ Separate index tablespace (indts) ......... [PSAPREORGI]
    6 - Parallel threads (parallel) ............... [1]
    7 ~ Table/index parallel degree (degree) ...... []
    8 ~ Category of initial extent size (initial) . []
    9 ~ Sort by fields of index (sortind) ......... []
    10 # Index for IOT conversion (iotind) ......... [FIRST]
    11 - Compression action (compress) ............. [none]
    12 # LOB compression degree (lobcompr) ......... [medium]
    13 # Index compression method (indcompr) ....... [ora_proc]
    But independent of what I enter for points 6 and 7, I always end up with the errors below during the reorg/compression of the outstanding tables:
    Just one sample, but the issue is always the same.
    BR0301E SQL error -12815 in thread 2 at location tab_onl_reorg-26, SQL statement:
    'CREATE UNIQUE INDEX "SAPR3"."RTXTF_____0#$" ON "SAPR3"."RTXTF#$" ("MANDT", "APPLCLASS", "TEXT_NAME", "TEXT_TYPE", "FROM_LINE",
    "FROM_POS")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 1662976 NEXT 655360 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "PSAPREORGI" PARALLEL ( INSTANCES 0) '
    ORA-12815: value for INSTANCES must be greater than 0
    Just in case, here is the OBJECT DDL:
    CREATE UNIQUE INDEX "SAPR3"."RTXTF_____0"
        ON "SAPR3"."RTXTF"  ("MANDT", "APPLCLASS", "TEXT_NAME",
        "TEXT_TYPE", "FROM_LINE", "FROM_POS")
        TABLESPACE "PSAPPOOLI" PCTFREE 10 INITRANS 2 MAXTRANS 255
        STORAGE ( INITIAL 1624K NEXT 640K MINEXTENTS 1 MAXEXTENTS
        2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1)
        LOGGING
    Perhaps someone already gained some experience on the compression with brspace and can give me a hint.
    Many thanks
    Florian

    Hello Florian,
    > Perhaps someone already gained some experience on the compression with brspace and can give me a hint.
    I have not performed any compression operations on Oracle 11g R2 with brspace yet, but this error seems fairly obvious.
    It seems like SAP is still not using the procedure DBMS_REDEFINITION.COPY_TABLE_DEPENDENT to create the indexes (and NOT NULL constraints) on Oracle 11g R2. No idea why; I can only think of one case (creating a DDL file before the reorganisation so that the DDL parameters can be changed through the reorganisation in some way).
    So in your case it seems like SAP is generating an incorrect SQL statement for creating the index on the interim table.
    You can try to create the DDL file first and correct the parameters, and after that you can try to run the reorganisation again.
    Please check SAP Note #646681 (Remark 5) for more information about the procedure for creating the DDL first and then doing the reorg with the edited parameters.
    Regards
    Stefan
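    For illustration only (an assumption on my part, not something confirmed in the thread): the generated clause PARALLEL ( INSTANCES 0) is what raises ORA-12815, so the kind of edit to make in the DDL file would be along these lines:

    -- original, rejected clause
    ... TABLESPACE "PSAPREORGI" PARALLEL ( INSTANCES 0)
    -- edited to a valid setting before re-running the reorg
    ... TABLESPACE "PSAPREORGI" NOPARALLEL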

  • Converting ntext datatype of MS SQL to LOB datatype of Oracle using ODI

    Hi
    Could anyone help me with how to convert the ntext datatype of MS SQL to the BLOB/CLOB datatype of Oracle using the ODI tool? I have tried, and it seems that ODI is not able to create the working table with a LOB datatype.
    Thank you in advance.
    Myat

    Try using the Incremental Update (PL/SQL) IKM. I believe this will only handle 1 CLOB column in any interface - also pay attention to the KM notes for additional constraints and requirements.
    Make your staging area the same as the target.
    Use the TO_CLOB function to convert the data for your field, and execute this on the target.
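    As a small illustration (column and table names are placeholders), the mapping expression on the target CLOB attribute, executed on the target, would look something like:

    TO_CLOB(SRC_TABLE.NTEXT_COLUMN)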

  • Dbms_crypto and blob datatypes

    Hi everyone
    I've been trying to learn how to encrypt data (an uploaded file) using the BLOB datatype. This is my first attempt at encryption, and I have been doing some research on it. I need to use a BLOB as I am encrypting a file that is uploaded to the system. Does anyone have experience with this, or know of a good example that I can take a look at?
    Ray
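    A hedged starting point using the DBMS_CRYPTO BLOB overload (the table and column names are placeholders, and the key handling is deliberately simplistic - in practice the key must be stored and protected properly):

    DECLARE
      l_src BLOB;
      l_enc BLOB;
      l_key RAW(32) := DBMS_CRYPTO.randombytes(32);  -- AES-256 key; keep it somewhere safe
    BEGIN
      SELECT file_blob INTO l_src FROM uploaded_files WHERE id = 1;
      DBMS_LOB.createtemporary(l_enc, TRUE);
      DBMS_CRYPTO.encrypt(
        dst => l_enc,
        src => l_src,
        typ => DBMS_CRYPTO.encrypt_aes256 + DBMS_CRYPTO.chain_cbc + DBMS_CRYPTO.pad_pkcs5,
        key => l_key);
      -- l_enc now holds the encrypted file; write it back to a table column as needed
      DBMS_LOB.freetemporary(l_enc);
    END;
    /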

