CLOB and LONG datatypes

I have a problem: I need to display fields in a report that are stored in the database as CLOBs.
The report isn't too complicated, and can be put together using folders in Admin.
There is a table in the database that stores text from reports. Each report is identified by its case id, and there may be a number of reports for each case (say 1-5 for the purposes of this explanation). There are then three sections of text in each report, identified in the database by section numbers.
The fields needed are the case number (case_id), the report number (report_no), the section number (sect_no) and the section text (sect_text).
All of the fields are numeric, except sect_text, which is a CLOB datatype.
There are two queries: one to select the case id, minimum report number and first section of text, and the other to select the case id, maximum report number and last section of text. The aim is a report that shows the start of the first report and the end of the last one.
So, that's the background. All of the queries to select the data work fine in SQL*Plus. However, they can't be displayed in Discoverer, because it brings CLOB fields in as LONG and can't group them (which it has to do for the MAX and MIN).
I'd rather not set the whole thing up as a view on the DB, as I'd prefer to have the flexibility of editing it in Discoverer.
Any tips?

Hi
I do have a tip. Please take a look at this thread that I was involved in a couple of years ago:
CLOB show up as item
And here is another one that might help:
Possible for Discoverer to display BLOB type documents stored in database?
Best wishes
Michael
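
For anyone landing here with the same problem, the workaround usually discussed in those threads is to expose the CLOB as plain VARCHAR2 via DBMS_LOB.SUBSTR so that Discoverer can group it. Below is a minimal sketch, shown as a JDBC test harness; the table name report_text is an assumption based on the question's description, and DBMS_LOB.SUBSTR truncates to the first 4000 bytes:
import java.sql.*;

public class ClobAsText {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details: URL, user, password as arguments.
        try (Connection con = DriverManager.getConnection(args[0], args[1], args[2]);
             Statement stmt = con.createStatement();
             // DBMS_LOB.SUBSTR converts the CLOB to VARCHAR2, which GROUP BY,
             // MIN and MAX can handle; KEEP ties the text to the first report.
             ResultSet rs = stmt.executeQuery(
                 "SELECT case_id, MIN(report_no) AS first_report, "
                 + "MIN(DBMS_LOB.SUBSTR(sect_text, 4000, 1)) "
                 + "  KEEP (DENSE_RANK FIRST ORDER BY report_no) AS first_text "
                 + "FROM report_text WHERE sect_no = 1 GROUP BY case_id")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1) + " / " + rs.getInt(2) + ": "
                        + rs.getString(3));
            }
        }
    }
}
The KEEP (DENSE_RANK FIRST ORDER BY report_no) clause makes the text column follow MIN(report_no) rather than being an independent minimum over the texts.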

Similar Messages

  • Extracting clob/blob/long datatypes in delimited file

    I have a few tables in the database that have CLOB/BLOB/LONG datatypes. I need to unload this data into a delimited file and reload it into another database. I don't want to use export/import.
    Currently the code I have can handle VARCHAR2, NUMBER and DATE datatypes quite easily using SQL*Loader.
    The tables are not too big (a hundred to 10k rows approximately) and performance is important but not crucial; the script can run for a few minutes unloading data, that is fine. Code maintenance is of higher priority, with PL/SQL being the preferred scripting language.
    From the SQL*Loader manual, I glean that use of a special delimiter is important, something like <startlob> and <endlob>.
    I need ideas on how to do this.

    Hi Buddy,
    I am having the same kind of problem. I am using a very similar procedure, except that instead of PUT_RAW I am using PUT_LINE.
    Writing is not a problem, but after writing the image to a file I cannot open it; the image gets corrupted. If you find a solution for your problem, please help me.
    My email id is [email protected]
    Thanks
    AJI
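
    Since AJI's follow-up never got an answer here: image corruption when writing through a text API is expected, because PUT_LINE does character-set conversion and adds line terminators, while binary data has to be written byte for byte. A minimal JDBC sketch of a binary-safe unload, assuming a hypothetical table docs(id NUMBER, img BLOB):
    import java.io.FileOutputStream;
    import java.io.InputStream;
    import java.sql.*;

    public class BlobUnload {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(args[0], args[1], args[2]);
                 Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT id, img FROM docs")) {
                while (rs.next()) {
                    long id = rs.getLong(1);
                    // Stream the BLOB in binary mode: no character conversion,
                    // no added newlines.
                    try (InputStream in = rs.getBinaryStream(2);
                         FileOutputStream out = new FileOutputStream("img_" + id + ".bin")) {
                        byte[] buf = new byte[8192];
                        int n;
                        while ((n = in.read(buf)) != -1) {
                            out.write(buf, 0, n);
                        }
                    }
                }
            }
        }
    }
    On the PL/SQL side the same idea is UTL_FILE.FOPEN in binary mode ('wb') with UTL_FILE.PUT_RAW, in releases that support it.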

  • How we handle CLOB and BLOB Datatypes in HANA DB

    Dear HANA Gurus,
    We would like to build an EDW using HANA, based on our Oracle source system, which contains CLOB and BLOB datatypes.
    Would you please suggest how we handle these in the HANA DB?
    Let's not say it's Oracle specific.
    Regards,
    Manoj

    Hello,
    check the SAP HANA SQL Reference Guide for the list of data types
    (page 14 - Classification of Data Types):
    https://service.sap.com/~sapidb/011000358700000604922011
    For this purpose the following data types might be useful:
    Large Object (LOB) Types
    LOB (large objects) data types, CLOB, NCLOB and BLOB, are used to store a large amount of data such as text documents and images. The maximum size of an LOB is 2 GB.
    BLOB
    The BLOB data type is used to store large binary data.
    CLOB
    The CLOB data type is used to store large ASCII character data.
    NCLOB
    The NCLOB data type is used to store a large Unicode character object.
    Tomas
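
    To make the list concrete, here is a minimal sketch that creates a table using these LOB types through HANA's JDBC driver (ngdbc.jar); the host, credentials and table name are placeholders:
    import java.sql.*;

    public class HanaLobDemo {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:sap://hanahost:30015/", "user", "password");
                 Statement stmt = con.createStatement()) {
                // One column per LOB flavour: ASCII character, Unicode
                // character, and binary.
                stmt.execute("CREATE COLUMN TABLE docs ("
                        + "id INTEGER PRIMARY KEY, "
                        + "body CLOB, ubody NCLOB, img BLOB)");
            }
        }
    }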

  • Mapping CLOB and Long in xml schema

    Hi,
    I am creating an xml schema to map some user defined database objects. For example, for a column which is defined as VARCHAR2 in the database, I have the following xsd type mapping.
    <xsd:element name="Currency" type="xsd:string" />
    If the oracle column is CLOB or Long(Oracle datatype), could you please tell me how I can map it in the xml schema? I do not want to use Oracle SQL type like:
    xdb:SQLType="CLOB" since I need a generic type mapping to CLOB. Would xsd:string still hold good for CLOB as well as Long(Oracle datatype) ?
    Please help.
    Thanks,
    Vadi.

  • Can't fetch clob and long in one select/query

    I created a nightmare table containing numerous binary data types to test an application I was working on, and believe I have found an undocumented bug in Oracle's JDBC drivers that is preventing me from loading a CLOB and a LONG in a single SQL select statement. I can load the CLOB successfully, but attempting to call ResultSet.get...() for the LONG column always results in
    java.sql.SQLException: Stream has already been closed
    even when processing the columns in the order of the SELECT statement.
    I have demonstrated this behaviour with version 9.2.0.3 of Oracle's JDBC drivers, running against Oracle 9.2.0.2.0.
    The following Java example contains SQL code to create and populate a table containing a collection of nasty binary columns, and then Java code that demonstrates the problem.
    I would really appreciate any workarounds that allow me to pull this data out of a single query.
    import java.sql.*;

    /**
     * This class was developed to verify that you can't have a CLOB and a LONG
     * column in the same SQL select statement and extract both values. Calling
     * get...() for the LONG column always causes
     * 'java.sql.SQLException: Stream has already been closed'.
     *
     * Table setup:
     *
     *   CREATE TABLE BINARY_COLS_TEST (
     *       PK INTEGER PRIMARY KEY NOT NULL,
     *       CLOB_COL CLOB,
     *       BLOB_COL BLOB,
     *       RAW_COL RAW(100),
     *       LONG_COL LONG
     *   );
     *
     *   INSERT INTO BINARY_COLS_TEST (PK, CLOB_COL, BLOB_COL, RAW_COL, LONG_COL)
     *   VALUES (1, '-- clob value --', HEXTORAW('01020304050607'),
     *           HEXTORAW('01020304050607'), '-- long value --');
     */
    public class JdbcLongTest {
        public static void main(String argv[]) throws Exception {
            Driver driver = (Driver) Class.forName("oracle.jdbc.driver.OracleDriver").newInstance();
            DriverManager.registerDriver(driver);
            Connection connection = DriverManager.getConnection(argv[0], argv[1], argv[2]);
            Statement stmt = connection.createStatement();
            ResultSet results = null;
            try {
                String query = "SELECT pk, clob_col, blob_col, raw_col, long_col FROM binary_cols_test";
                results = stmt.executeQuery(query);
                while (results.next()) {
                    int pk = results.getInt(1);
                    System.out.println("Loaded int");
                    Clob clob = results.getClob(2);
                    // It doesn't work if you just close the ascii stream.
                    // clob.getAsciiStream().close();
                    String clobString = clob.getSubString(1, (int) clob.length());
                    System.out.println("Loaded CLOB");
                    // Streaming not strictly necessary for short values.
                    // Blob blob = results.getBlob(3);
                    byte blobData[] = results.getBytes(3);
                    System.out.println("Loaded BLOB");
                    byte rawData[] = results.getBytes(4);
                    System.out.println("Loaded RAW");
                    byte longData[] = results.getBytes(5);
                    System.out.println("Loaded LONG");
                }
            } catch (SQLException e) {
                e.printStackTrace();
            }
            results.close();
            stmt.close();
            connection.close();
        }
    } // public class JdbcLongTest

    The problem is that LONGs are not buffered but are read from the wire in the order defined. The problem is the same as
    rs = stmt.executeQuery("select myLong, myNumber from tab");
    while (rs.next()) {
        int n = rs.getInt(2);
        String s = rs.getString(1);
    }
    The above will fail for the same reason. When the statement is executed the LONG is not read immediately. It is buffered in the server waiting to be read. When getInt is called the driver reads the bytes of the LONG and throws them away so that it can get to the NUMBER and read it. Then when getString is called the LONG value is gone so you get an exception.
    Similar problem here. When the query is executed the CLOB and BLOB locators are read from the wire, but the LONG is buffered in the server waiting to be read. When Clob.getString is called, it has to talk to the server to get the value of the CLOB, so it reads the LONG bytes from the wire and throws them away. That clears the connection so that it can ask the server for the CLOB bytes. When the code reads the LONG value, those bytes are gone so you get an exception.
    This is a long-standing restriction on using LONG and LONG RAW values and is a result of the network protocol. It is one of the reasons that Oracle deprecates LONGs and recommends using BLOBs and CLOBs instead.
    Douglas
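
    To make the workaround concrete: below is a hedged sketch against the table above, reading every column in select-list order and consuming the LONG before asking the server for the CLOB's contents. Whether this helps can depend on the driver version, so treat it as an experiment rather than a guarantee:
    import java.sql.*;

    public class JdbcLongWorkaround {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(args[0], args[1], args[2]);
                 Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery(
                     "SELECT pk, clob_col, long_col FROM binary_cols_test")) {
                while (rs.next()) {
                    int pk = rs.getInt(1);
                    Clob clob = rs.getClob(2);        // locator only, no round trip yet
                    String longVal = rs.getString(3); // consume the LONG off the wire first
                    // Safe now: fetching the CLOB body no longer discards an unread LONG.
                    String clobVal = clob.getSubString(1, (int) clob.length());
                    System.out.println(pk + ": " + longVal + " / " + clobVal);
                }
            }
        }
    }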

  • HS generic and long datatypes in Oracle 8i

    I have a problem importing LONG datatype columns from Hyperion Pillar to Oracle (version 8.1.6.0.0) using the (Hyperion supplied) ODBC driver and generic HS.
    The error message I get is:
    ORA-03001: unimplemented feature
    ORA-02063: preceding line from PILLAR.WORLD
    From the message it seems clear that Oracle cannot deal with LONGs using generic connectivity (at least in v. 8.1.6).
    My questions:
    1) Is there a workaround?
    2) Does this work in 9i?
    Notes:
    1) The problem is not with the ODBC driver, as I can see the LONG datatype columns using another tool which connects to Pillar via ODBC.
    2) HS has been set up correctly, because all datatypes other than LONG can be viewed in Oracle without any problems.
    3) Operating System: Windows NT 4.00.1381
    Any help would be much appreciated!
    Kailash.

    Thanks for the reply.
    Pillar LONGs (or MEMO, as they are referred to in Pillar) map to ODBC SQL_LONGVARCHAR. So it appears that the mapping is OK, and that SELECTing from these columns should be possible via generic connectivity.
    Any other ideas on what may be wrong? Any help is much appreciated.
    Thanks,
    Kailash.

  • SQL*Plus 'Copy' command and LONG datatypes

    Hi. I'm using Oracle 9.2.0.5 and want to copy LONG to LONG without using an interface in VB or any other programming language.
    Some of the fields (plain text) are greater than 32 KB, and I tried the SQL*Plus COPY command without success.
    (For compatibility reasons I can't convert LONG to CLOB; I need to copy LONG to LONG.)
    This is the example I'm working with:
    Table Source_LONG (ID number, DATA long)
    Table Destination_LONG (ID number, DATA long)
    The SQL*Plus command: (connected from test_database@environment)
    set long 100000
    copy from test_database/test_database@environment insert destination_long (id,data)
    I tried using both FROM and TO, with the same results. The fields are copied into destination_long, but they are truncated at 32768 bytes, even with the LONG variable set to 100000. Any ideas?
    Thanks.

    I'm working with 2 similar tables with this structure:
    SOURCE_LONG (ID number, DATA long)
    DESTINATION_LONG (ID number, DATA long)
    SOURCE_LONG contains two rows:
    ID DATA
    1 hello
    3 ....text bigger than 32kb...
    I tried your solution and it inserts 2 rows, but only the ID is filled. The DATA is empty in both cases :-(
    insert into destination_long(id,data) (select id,to_lob(data) from source_long);
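
    For what it's worth, TO_LOB can only populate a LOB column: it is legal only in the select list of an INSERT ... SELECT whose target column is a CLOB or BLOB, so an insert into a LONG target is not going to behave as hoped, which fits the empty DATA above. A hedged sketch of the staging-table variant, shown via JDBC; destination_clob is a hypothetical name:
    import java.sql.*;

    public class LongToClobCopy {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(args[0], args[1], args[2]);
                 Statement stmt = con.createStatement()) {
                // The target column must be a LOB, not a LONG, for TO_LOB to work.
                stmt.execute("CREATE TABLE destination_clob (id NUMBER, data CLOB)");
                stmt.execute("INSERT INTO destination_clob (id, data) "
                        + "SELECT id, TO_LOB(data) FROM source_long");
            }
        }
    }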

  • Migration of LONG and LONG RAW datatype

    Just upgraded a DB from 8.1.7.4 to 10.2.0.1.0. In the post-upgrade tasks, it speaks of migrating tables with LONG and LONG RAW datatypes to CLOBs or BLOBs. All of my tables in the DB with LONG or LONG RAW datatypes are in the sys, sysman, mdsys or system schemas (as per a query of dba_tab_columns). Are these to be converted? Or does Oracle want us to convert user data only (user_tab_columns)?

    USER_TAB_COLUMNS tells you the columns in the tables owned by the current user. There may well be many users on your system, created by you, whose schemas contain objects. I suppose you could log in to each of those schemas and query their USER_TAB_COLUMNS view, but it's probably easier to query DBA_TAB_COLUMNS with an appropriate WHERE clause on the owner of the objects.
    Justin
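
    A sketch of the kind of query Justin describes; the owner list to exclude is an assumption based on the schemas mentioned in the question:
    import java.sql.*;

    public class FindLongColumns {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(args[0], args[1], args[2]);
                 Statement stmt = con.createStatement();
                 // Oracle-maintained schemas are excluded; only user data remains.
                 ResultSet rs = stmt.executeQuery(
                     "SELECT owner, table_name, column_name FROM dba_tab_columns "
                     + "WHERE data_type IN ('LONG', 'LONG RAW') "
                     + "AND owner NOT IN ('SYS', 'SYSTEM', 'SYSMAN', 'MDSYS')")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + "." + rs.getString(2)
                            + "." + rs.getString(3));
                }
            }
        }
    }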

  • JDBC Thin Client and Oracle Long Datatype

    I am using WebSphere 4.0.2, JDBC 2.0 (thin driver) and Oracle 9i.
    I have a procedure which takes the Oracle LONG datatype as its parameter.
    I use the following code to execute the procedure:
    String dataforsql = "AAA000000003 123123 07/01/200301/01/2003";
    byte[] bytes = dataforsql.getBytes();
    InputStream is = new ByteArrayInputStream(bytes);
    cstmt = conn.prepareCall("call nscw.CPPF_SAVEPDCRAWTABLE2(?,?,?)");
    cstmt.setAsciiStream(1, is, bytes.length);
    The above code works perfectly for data up to 4000 bytes. Once the data crosses the 4000 mark, I get a procedure error:
    ORA-01460: unimplemented or unreasonable conversion requested

    cstmt.setAsciiStream(1, is, bytes.length);
    Oracle's support for CLOB (and BLOB) columns using set{Ascii,Binary}Stream() generally s*cks. You'll have to read Oracle's own JDBC manual (you can read it online at http://technet.oracle.com) for whatever sequence they recommend.
    E.g. for insertion and updating of CLOBs, you're supposed to use an Oracle-specific function (EMPTY_CLOB()) as the value in the INSERT/UPDATE statement, and then do a SELECT, getClob(), and use the Clob APIs to update the actual column value. At least officially. Or you have to use some Oracle-specific APIs in oracle.sql.Connection and oracle.sql.CLOB.
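
    A minimal sketch of that officially documented pattern, assuming a hypothetical table nscw.pdc_raw(id NUMBER, doc CLOB) and a JDBC 3.0 driver (older Oracle drivers need oracle.sql.CLOB's putString instead of java.sql.Clob.setString):
    import java.sql.*;

    public class EmptyClobInsert {
        public static void main(String[] args) throws Exception {
            String data = "...more than 4000 bytes of text...";
            try (Connection con = DriverManager.getConnection(args[0], args[1], args[2])) {
                con.setAutoCommit(false);
                try (Statement stmt = con.createStatement()) {
                    // Step 1: insert a row holding an empty LOB locator.
                    stmt.executeUpdate(
                        "INSERT INTO nscw.pdc_raw (id, doc) VALUES (1, EMPTY_CLOB())");
                    // Step 2: re-select the locator FOR UPDATE and write through
                    // it; this path has no 4000-byte bind limit.
                    try (ResultSet rs = stmt.executeQuery(
                             "SELECT doc FROM nscw.pdc_raw WHERE id = 1 FOR UPDATE")) {
                        rs.next();
                        Clob clob = rs.getClob(1);
                        clob.setString(1L, data);
                    }
                }
                con.commit();
            }
        }
    }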

  • Problem with Long and BigDecimal DataType convertion

    HI,
    I have an application which talks to a MySQL database through a Spring JDBC layer, and everything runs like a champ :)
    Now I am converting the same application, with the same DAO implementation, to communicate with an Oracle database, and I am having some trouble converting the datatypes coming from the database in my Java code.
    For example, with MySQL I could cast any integer coming from the database to the Long datatype and it worked. Now it seems that the INT datatype from Oracle comes back as a BigDecimal.
    I am considering two approaches: writing an additional implementation of my DAO interfaces, or some trickery that works around this datatype incompatibility between MySQL and Oracle.
    I could also put in a bunch of try/catch statements and, if it fails with Long, try BigDecimal, but I don't buy that as the best solution either.
    Here is also a code snippet:
    User u = new User();
    u.setObjectId((Long) m.get("oid")); // works with the MySQL backend but not with Oracle; the raw object comes back as a BigDecimal
    u.setLogon((String) m.get("logon"));
    u.setPassword((String) m.get("password"));
    u.setName((String) m.get("uname"));
    u.setCompany(companyDAO.getById((Long) m.get("cOid")));
    Any help will be appreciated.
    Edited by: kminev on Mar 3, 2010 12:25 PM

    // This is my entity class
    public class User implements Comparable<User> {
        private Long objectId = 0L;
        private Company company;
        private String logon = "";
        private String password = "";
        private String name = "";
        private boolean hasAdminRights = false;
        private boolean hasMaintenanceRights = false;
        // ...
    }
    //This is my bean class
    <bean id="userDAO" class=" com.myorg.myapp.dao.jdbc.UserImpl">
            <property name="transactionManager">
                <ref local="transactionManager"/>
            </property>
            <property name="companyDAO">
                <ref local="companyDAO"/>
            </property>
            <property name="companyAccessDAO">
                <ref local="companyAccessDAO"/>
            </property>
        </bean>
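
    One way out that avoids a duplicate DAO implementation: cast to java.lang.Number instead of Long and convert explicitly. Oracle's driver returns NUMBER columns as BigDecimal while MySQL's returns Long, but both are Numbers. A small self-contained sketch:
    import java.math.BigDecimal;
    import java.util.HashMap;
    import java.util.Map;

    public class NumberConversion {
        public static void main(String[] args) {
            Map<String, Object> m = new HashMap<String, Object>();
            m.put("oid", new BigDecimal(42)); // what the Oracle driver hands back
            // Cast to Number, not Long: works whether the backend returns
            // a Long (MySQL) or a BigDecimal (Oracle).
            Number oid = (Number) m.get("oid");
            Long objectId = (oid == null) ? null : Long.valueOf(oid.longValue());
            System.out.println(objectId);
        }
    }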

  • Convert CLOB to LONG

    I have an app that has a CLOB column in a table. The reporting software, which is separate from the application, cannot read CLOBs. So the solution that was revealed to me was to set up a trigger to copy any data going into the CLOB table to another table where the column is a LONG.
    Of course, no DBA in their right mind would be happy with this. So, I'm trying to set about creating a view that will convert the CLOB to a LONG, so as to relieve my poor SAN from all the I/O. There is no TO_LONG function, nor can I find anywhere how to do this.
    Anyone feel like taking a stab at this?

    Tom,
    My guess is you might have to use the DBMS_LOB.SUBSTR subprogram. Anyway, the LONG datatype has a limit of 2 GB, while a CLOB or BLOB can be bigger (theoretically up to 8 TB), so you have to decide whether you want to truncate your data and store only the first 2 GB. Also, just for information purposes, this is what Oracle says in the documentation:
    ----------ORACLE DOCUMENTATION------
    Note:
    Do not create tables with LONG columns. Use LOB columns (CLOB, NCLOB) instead. LONG columns are supported only for backward compatibility.
    Oracle also recommends that you convert existing LONG columns to LOB columns. LOB columns are subject to far fewer restrictions than LONG columns. Further, LOB functionality is enhanced in every release, whereas LONG functionality has been static for several releases.
    HTH,
    Rahul.
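
    Building on Rahul's pointer, a hedged sketch of such a view: DBMS_LOB.SUBSTR returns VARCHAR2 (at most 4000 bytes in SQL) rather than LONG, which reporting tools that choke on CLOBs can usually read anyway. The view and column names are assumptions:
    import java.sql.*;

    public class ClobViewSetup {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(args[0], args[1], args[2]);
                 Statement stmt = con.createStatement()) {
                // Exposes the first 4000 bytes of the CLOB as VARCHAR2; longer
                // values are truncated, as the reply above warns.
                stmt.execute("CREATE OR REPLACE VIEW clob_table_text AS "
                        + "SELECT id, DBMS_LOB.SUBSTR(clob_col, 4000, 1) AS data "
                        + "FROM clob_table");
            }
        }
    }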

  • SQLJ , CLOB and JDEV1.1

    Hi,
    I have a problem when I try to read a CLOB with JDev 1.1 in my SQLJ file.
    When I build my program I get the following message:
    Warning: The Type oracle.jdbc.driver.OracleClob of host item ORARequeteSQL (at position #1) is not permitted in JDBC. This will not be portable.
    When I execute my applet in the AppletViewer, I get the following SQLException:
    java.sql.SQLException: unable to convert database class java.lang.String to client class oracle.jdbc.OracleClob
    Below you can see the SQLJ which generates this message:
    public String SQLRecupRequeteSQL(int iNumQuery) throws SQLException {
        long lLongSQL;
        OracleClob ORARequeteSQL;
        String sRequeteSQL = new String();
        #sql { select libelle into :ORARequeteSQL from edcnx.requete where num_query = :iNumQuery };
        #sql lLongSQL = { values(dbms_lob.getlength(:ORARequeteSQL)) };
        #sql { call dbms_lob.read(:ORARequeteSQL, :lLongSQL, 0, :sRequeteSQL) };
        return (sRequeteSQL);
    }

    David,
    The semantics for using LOB datatypes in JDBC and SQLJ are covered in the SQLJ Client Developer's Guide and Reference. Here is an example of how to access a CLOB, taken from that guide:
    void readFromClob(OracleClob clob) throws SQLException {
        long clobLen, readLen;
        String chunk;
        #sql clobLen = { values(dbms_lob.getlength(:clob)) };
        for (long i = 1; i <= clobLen; i += readLen) {
            readLen = 10;
            #sql { call dbms_lob.read(:clob, :inout readLen, :i, :out chunk) };
            System.out.println("read " + readLen + " chars: " + chunk);
        }
    }
    Regards,
    PSW

  • Exporting a table with Long datatype col. name

    Hello,
    I need help exporting a table; one of the columns is of the LONG datatype.
    How can I do this without using the Export utility?
    Is it possible?
    (If someone has a solution then PLEASE email me.)
    URGENT.
    Thanks.
    Pankaj Patel.

    Just wanted to find out if you already solved this problem. I have a similar issue with a LONG column. I am trying to SQL-dump a table with a LONG column that will be imported into another database (probably using SQL*Loader), but the spooled file puts the data from the LONG column on separate lines 80 characters long and not on a single line. I have set LONG to 64000 and LINESIZE to 32000. WRAP is on too.
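
    If SQL*Plus insists on wrapping the spooled LONG, one way around it is to dump the rows from a small program instead, where no line-width setting applies. A hedged JDBC sketch with hypothetical table and file names, assuming the LONG values themselves contain no newlines:
    import java.io.PrintWriter;
    import java.sql.*;

    public class LongUnload {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(args[0], args[1], args[2]);
                 Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT id, data FROM long_table");
                 PrintWriter out = new PrintWriter("long_table.dat")) {
                while (rs.next()) {
                    // getString materializes the whole LONG; each row becomes
                    // exactly one delimited line, with no 80-column wrapping.
                    out.println(rs.getLong(1) + "|" + rs.getString(2));
                }
            }
        }
    }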

  • Long Datatypes in Reports

    I have a field of the LONG datatype in a table. I need to display the first 200 characters of this field in a report.
    How can I do this? If it is through placeholder columns, can you explain in detail how I should go about it?

    It was in the 8i migration guide (http://download-uk.oracle.com/docs/cd/A87860_01/doc/server.817/a86632/migaftrm.htm#6573), but we decided to get to 10g first and then take care of it (since we did a 7 to 10 upgrade in two steps, 7->8 and 8->10).
    So basically I should just leave the SYSTEM tablespace objects alone then? I only tried the system objects conversion on a test database, and since the non-system objects were fine, we converted them on production but left the system ones alone.

  • VERY URGENT: problem in sql query with long datatype in weblogic

    I have a problem while trying to retrieve a column value with a LONG datatype, using a servlet and the OCI driver; the server is WebLogic 5.1. I have used a prepared statement, and the problem comes in the
    preparedStatement.executeQuery().
    The SQL query is a simple query that runs well in all cases, and fails only when the LONG datatype column is included in the query.
    The exception that comes on the WebLogic server is:
    AN UNEXPECTED EXCEPTION DETECTED IN THE NATIVE CODE OUTSIDE THE VM.

    Did you try changing the driver then?
    Please use Oracle's thin driver instead of the OCI driver.
    There are many advantages to using the type 4 driver, the first and foremost being that it does not require Oracle client-side software on your machine, so no entries need to be made in tnsnames.ora.
    The thin driver ships in an archive called classes112.zip, and the class which implements it is oracle.jdbc.driver.OracleDriver.
    The connection string is:
    jdbc:oracle:thin:@<machine name>:1521:<sid>
    Please try it out with the thin driver and let me know.
    regards,
    Abhishek.
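
    For completeness, a minimal sketch of connecting through the thin driver; the host, SID and credentials are placeholders:
    import java.sql.*;

    public class ThinDriverDemo {
        public static void main(String[] args) throws Exception {
            // Pure-Java type 4 driver: no Oracle client install and no
            // tnsnames.ora entry required.
            Class.forName("oracle.jdbc.driver.OracleDriver");
            Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:ORCL", "scott", "tiger");
            System.out.println("Connected: "
                    + con.getMetaData().getDatabaseProductName());
            con.close();
        }
    }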
