Reading from CLOB column in a table

Hi,
I would like to know how to read data from a CLOB column; can anyone help me in this regard?
I have written some code, but I would like to loop over the whole CLOB.
Please see the code given below:
DECLARE
lobloc CLOB;
buffer VARCHAR2(32000);
amount NUMBER := 200;
amount_in_buffer NUMBER;
offset NUMBER := 1;
BEGIN
-- Fetch the LOB locator for the row of interest
SELECT f_large_value
into lobloc
FROM T_PRODUCT_BRAND_PARAMS
WHERE F_BRAND_ID='PPNET' AND F_PRODUCT_ID ='CASINO'
     AND F_KEY LIKE '%TAB_CONFIG%';
dbms_lob.read(lobloc,amount,offset,buffer);
--using length built-in function to find the length of the buffer
amount_in_buffer := length(buffer);
dbms_output.put_line(buffer);
-- dbms_output.put_line(to_char(amount_in_buffer));
END;
where f_large_value is the CLOB column.
When I execute it, only the first 5 lines of the data are printed.
I would like to loop over the rest of the CLOB; can anyone suggest how to do this?
Thanks
Pavan Kumar N
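A minimal sketch of one way to loop DBMS_LOB.READ until the whole CLOB has been printed (table, column and filter values are taken from the post above; the 200-character chunk size is arbitrary, and DBMS_OUTPUT has its own buffer limits):
DECLARE
  lobloc  CLOB;
  buffer  VARCHAR2(32000);
  amount  NUMBER;
  offset  NUMBER := 1;
  lob_len NUMBER;
BEGIN
  SELECT f_large_value
    INTO lobloc
    FROM t_product_brand_params
   WHERE f_brand_id = 'PPNET'
     AND f_product_id = 'CASINO'
     AND f_key LIKE '%TAB_CONFIG%';
  lob_len := dbms_lob.getlength(lobloc);
  -- read the CLOB in 200-character chunks until the whole length is consumed
  WHILE offset <= lob_len LOOP
    amount := 200;                      -- reset each pass: READ returns the count actually read
    dbms_lob.read(lobloc, amount, offset, buffer);
    dbms_output.put_line(buffer);
    offset := offset + amount;          -- advance by what was actually read
  END LOOP;
END;
/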

Hi,
I am using BO Data Integrator as the ETL tool. Can you specify how I would 'split the data loads' or 'read the data in multiple passes'? Is it by using multiple Query transforms with different WHERE clauses as filters, e.g. 'by period' and 'by year'?
Query_1 filter:
year = 2006
Query_2 filters:
year = 2007
Merge:
Query_1 and Query_2
Thanks again!
Randy
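Illustrative only (this is plain SQL, not Data Integrator syntax): with a hypothetical source table SRC that has a YEAR column, the two filtered Query transforms followed by a Merge behave roughly like:
SELECT * FROM src WHERE year = 2006
UNION ALL
SELECT * FROM src WHERE year = 2007;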

Similar Messages

  • Read from Table T001

    Hi Folks,
    I have a requirement to create a pull down of all "plants" (locations) from T001. Is there a way to read that information "on the fly" or at least update it on a scheduled basis?
    Thanks,
    Matt

    Ooookay.
    Well, to get a starting point for the pass that reads the content from the ABAP table, you can create a job with the wizard and there choose "Jobs > SAP NetWeaver > ABAP Read Help Values" with the repository you want the data to be read from.
    It will create a job with some passes that use the "fromSAP" pass type to read tables (e.g. "Address Salutation ReadTable: TSAD3"). Just take one of those passes, change it to fit the T001 table, and in the destination tab switch to an appropriate table name of your choosing to save the data.
    Then use this temp table for a second pass, a "to ID store" this time, and fill whatever attribute you created to hold the data with it.
    Since it's a job, you can schedule it to refresh the temp table whenever you want or need it to.
    Regards,
    Steffi.

  • Difference between read from table or load into table in Mappings

    Hi everybody.
    I wanted to deploy a mapping. The following error was shown:
    ORA-04021: timeout occurred while waiting to lock object Mapp_xyz
    I tried to deploy another mapping, which was successful. I found out that a specific table was locked and that both mappings use this table. In the mapping that was deployed successfully I load into this table; in the other mapping I read from it. So my question is: is there any difference between reading from a table in a mapping and loading into a table?
    After the lock was released, the mapping deployed successfully.
    Any Information would be appreciated.
    Greetings

    Hi,
    thanks for the reply. A process flow was aborted while it was executing the mapping, so on our DB we had an active session which was causing the lock.
    Thanks.
    Greetings
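    For reference, a rough sketch of one way to find the session holding such a lock (the object name comes from the ORA-04021 message; the v$ views need the relevant SELECT privileges):
    SELECT s.sid, s.serial#, s.status, o.object_name
      FROM v$locked_object l
      JOIN dba_objects     o ON o.object_id = l.object_id
      JOIN v$session       s ON s.sid       = l.session_id
     WHERE o.object_name = 'MAPP_XYZ';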

  • Dynamic read from table  with DBMS_SQL Block

    Hi,
    In the Oracle8 Application Developer's Guide there is this example:
    -- To select everything from this table and move it into four PL/SQL tables, you could use the following simple program:
    declare
      c      number;
      d      number;
      n_tab  dbms_sql.Number_Table;
      d_tab1 dbms_sql.Date_Table;
      v_tab  dbms_sql.Varchar2_Table;
      d_tab2 dbms_sql.Date_Table;
      indx   number := 10;
    begin
      c := dbms_sql.open_cursor;
      dbms_sql.parse(c, 'select * from multi_tab order by 1', dbms_sql.native);
      dbms_sql.define_array(c, 1, n_tab, 5, indx);
      dbms_sql.define_array(c, 2, d_tab1, 5, indx);
      dbms_sql.define_array(c, 3, v_tab, 5, indx);
      dbms_sql.define_array(c, 4, d_tab2, 5, indx);
      d := dbms_sql.execute(c);
      loop
        d := dbms_sql.fetch_rows(c);
        dbms_sql.column_value(c, 1, n_tab);
        dbms_sql.column_value(c, 2, d_tab1);
        dbms_sql.column_value(c, 3, v_tab);
        dbms_sql.column_value(c, 4, d_tab2);
        exit when d != 5;
      end loop;
      dbms_sql.close_cursor(c);
      -- Here the four PL/SQL tables can be used for anything at all. One usage might be
      -- to use BIND_ARRAY to move the rows to another table with a statement such as
      -- 'INSERT INTO some_t VALUES (:a, :b, :c, :d)'.
    exception
      when others then
        if dbms_sql.is_open(c) then
          dbms_sql.close_cursor(c);
        end if;
        raise;
    end;
    At the second 'dbms_sql.define_array' call, an ORA-00600 error is raised.
    What's wrong ?

    Some solutions posted here : {message:id=9689485}

  • Select from table containing clob

    If I try to select from a table containing CLOB columns in SQL*Plus, it gives an error.
    Tab1 contains 3 CLOB columns and 1 BLOB column:
    select * from tab1;
    SP2-0678: Column or attribute type can not be displayed by SQL*Plus
    The same statement works in SQL Developer and I am able to see the result.
    Actually I am writing the queries and they will be used by Java developers in their JSP pages.
    So what happens here? Can Java use these select statements, or will it throw an error like SQL*Plus?

    BLOB column content can't be displayed in SQL*Plus:
    SQL> create table t_blob (b blob);
    Table created.
    SQL> edit
    Wrote file afiedt.buf
      1* insert into t_blob values('01')
    SQL> /
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> select * from t_blob;
    SP2-0678: Column or attribute type can not be displayed by SQL*Plus
    BLOB and CLOB column content can be processed using DBMS_LOB package procedures and functions, or using the client language's methods (e.g. Java); see the JDBC specification.
    Rgds.
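    For example, a small sketch of viewing part of such columns directly in SQL*Plus via DBMS_LOB.SUBSTR (the column names clob_col and blob_col are made up):
    -- first 4000 characters of a CLOB column
    SELECT dbms_lob.substr(clob_col, 4000, 1) FROM tab1;
    -- first 2000 bytes of a BLOB column, displayed as hex
    SELECT dbms_lob.substr(blob_col, 2000, 1) FROM tab1;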

  • Using a procedure to read from CLOB

    Hi folks,
    I insert a dataset into a table using the following procedure:
    <CODE>
    CREATE OR REPLACE PROCEDURE WRITE_KATALOGTEXT(IN_DOKUID CHAR,
                                                  IN_ARTNR  NUMBER,
                                                  IN_TEXT   VARCHAR2,
                                                  IN_DATUM  DATE) IS
      LOB_LOC CLOB;
    BEGIN
      INSERT INTO TEXTCASTOR (DOKUID, ARTNR, TEXT, DATUM)
      VALUES (IN_DOKUID, IN_ARTNR, EMPTY_CLOB(), IN_DATUM);
      COMMIT;
      SELECT TEXT INTO LOB_LOC FROM TEXTCASTOR
       WHERE DOKUID = IN_DOKUID AND ARTNR = IN_ARTNR AND DATUM = IN_DATUM
         FOR UPDATE;
      DBMS_LOB.WRITE(LOB_LOC, LENGTH(IN_TEXT), 1, IN_TEXT);
      COMMIT;
    END;
    </CODE>
    After this insert I try to read from the table using the
    following Procedure
    <CODE>
    CREATE OR REPLACE FUNCTION GET_KATALOGTEXT(IN_ARTNR NUMBER) RETURN VARCHAR2 IS
      BUFFER         CLOB;
      MY_RETURNVALUE VARCHAR2(32767);
      MY_ARTNR       NUMBER(6) := IN_ARTNR;
    BEGIN
      SELECT TEXT INTO BUFFER FROM WORKFLOWOWNER.TEXTCASTOR
       WHERE ARTNR = MY_ARTNR;
      MY_RETURNVALUE := DBMS_LOB.SUBSTR(BUFFER, 32767, 1);
      RETURN MY_RETURNVALUE;
    END;
    </CODE>
    My problem is that I get the Oracle error ORA-06502.
    It says that the string buffer is too small when the length of the CLOB exceeds 4K.
    Writing works with values longer than four thousand characters.
    Reading only works with values shorter than four thousand characters.
    Has anyone a clue how to handle this?
    Greetings
    Markus

    Hi,
    In the first procedure you are not updating the data with the CLOB value you are writing... I do not know if this is your actual procedure or one that has been trimmed down.
    As regards the second function, it seems to be okay. Are you trying to call this function through a SQL statement? That will not work.
    What you need to do is write a PL/SQL block and call it from there. In SQL, a VARCHAR2 can only hold 4000 bytes and nothing more than that.
    Hope this helps.
    Regards,
    Ganesh R
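    In other words, something like the following minimal sketch, calling the function from an anonymous PL/SQL block rather than from SQL (the article number 123456 is just a placeholder):
    DECLARE
      l_text VARCHAR2(32767);
    BEGIN
      -- In PL/SQL a VARCHAR2 can hold up to 32767 bytes, so the 4000-byte SQL limit does not apply here.
      l_text := GET_KATALOGTEXT(123456);
      DBMS_OUTPUT.PUT_LINE(SUBSTR(l_text, 1, 200));
    END;
    /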

  • Inserting in the UUT_Results table a value that we read from our data base from a local variable

    We would like to include in the UUT_Results table a value that we read from our data base into a local variable during the execution of our sequence file. We found that by modifying the configure database options we were able to add a column for this variable, but the local variable was not available to be placed into an expression for that column from the local variables or parameters. Is it possible to do this, and if so, how? Station Globals were available to be included in the expression, however the sequence file may be executed on more than one system which makes the Global unavailable on systems other than the one where the sequence file originated.

    You can use the TestStand API to programmatically create global variables at runtime, thus ensuring their existence. For example, you could call Engine.Globals.SetValString("GlobalStringVariableName", PropOption_InsertIfMissing, "variable value")
    Of course, if you need to test multiple UUTs in parallel, a single global is not sufficient. In that case you might consider adding the field you need to the UUT datatype in the process model. You could then access the field in your sequence via RunState.Root.Locals.UUT.YourNewField = YourNewValue.
    If you also want your sequence to run without using a process model, you must check for the existence of the UUT before accessing it. You could use the expression function: PropertyExists("RunState.Root.Locals.UUT.YourNewField")

  • Dynamic reading from database table

    Hi Experts,
    While reading from a database table the below statement for deletion works:
    DELETE (p_table) FROM <fs_wadbtab>.
    p_table: name of database table which is entered as a selection screen parameter
    <fs_wadbtab> : workarea of line type P_table
    However, the below statement does not work:
    READ (p_table) FROM <fs_wadbtab>.
    My requirement is to read a record from p_table with contents in a dynamic structure.
    Kindly suggest.
    Thanks.

    Just misunderstood you.

  • Select count(*) from table in oracle 11g with direct path read takes time

    select count(*) from the table takes a long time, even more than a couple of hours.
    direct path read is the wait event, accounting for almost 99% of the time.
    Can someone provide some info on this, and on a solution? Thanks.

    knowledgespring wrote:
    table has millions of records... 130 millions..
    select count(*) from BIG_SIZE_TABLE; --- executed in sql plus command prompt.
    Rows     Execution Plan
    0  SELECT STATEMENT   MODE: ALL_ROWS
    0   SORT (AGGREGATE)
    0    TABLE ACCESS   MODE: ANALYZED (FULL) OF 'BIG_SIZE_TABLE' (TABLE)
    Elapsed times include waiting on following events:
    Event waited on                           Times Waited  Max. Wait  Total Waited
    ----------------------------------------  ------------  ---------  ------------
    SQL*Net message to client                            1       0.00          0.00
    enq: KO - fast object checkpoint                     1       0.01          0.01
    Disk file operations I/O                            18       0.00          0.00
    direct path read                                 58921       0.34        418.54
    direct path read: times waited 58921, total time waited 418.54 seconds.
    That's 418 seconds - not the hours you reported earlier. Is it possible that your connection to the database broke?
    On a typical system, by the way, one direct path read for a tablescan usually covers about 1MB, so your scan seems to have covered about 59 GB, which is in the right sort of ballpark for 130M rows.
    we have another query and when we test the query execution using v$sql, is_bind_sensitive = N; how do we make is_bind_sensitive = Y all the time?
    There is a hint /*+ bind_aware */ - I'd have to check whether or not it's documented at present. It might help.
    I would be interested in hearing why you think the query should be bind sensitive when the optimizer doesn't.
    Regards
    Jonathan Lewis
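    Purely as an illustration of where such a hint would sit (the column and bind variable names here are made up; check whether the hint is documented for your version before relying on it):
    select /*+ bind_aware */ count(*)
      from big_size_table
     where some_column = :b1;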

  • Read from 2 tables, write only to one. Simple, but impossible??

    I've been messing about with various annotations for quite a while now, and I can't seem to figure out how to accomplish this seemingly simple task. I'm hoping somebody could explain to me the probably very obvious and simple mistake that I am making.
    I realize that question would otherwise point to problems in my database design and persistence strategy, but I can't seem to read from 2 tables, and write to only one. I have an entity bean that refers mainly to one table, but needs just one column of information from another table. It should not attempt to alter the secondary table, just join on it to get the one field it needs. So far, I've got the following:
    @Entity
    @Table(name = "ventureprofile")
    @SecondaryTable(name = "venture",
                    pkJoinColumns = @PrimaryKeyJoinColumn(name="ventureid"))
    @NamedQueries( {
            @NamedQuery(name = "Ventureprofile.findByVentureprofileid", query = "SELECT v FROM Ventureprofile v WHERE v.ventureprofileid = :ventureprofileid"),
            @NamedQuery(name = "Ventureprofile.findByVentureid", query = "SELECT v FROM Ventureprofile v WHERE v.ventureid = :ventureid"),
            @NamedQuery(name = "Ventureprofile.findByVenturesummary", query = "SELECT v FROM Ventureprofile v WHERE v.venturesummary = :venturesummary"),
            @NamedQuery(name = "Ventureprofile.findByLogoimagelocation", query = "SELECT v FROM Ventureprofile v WHERE v.logoimagelocation = :logoimagelocation"),
            @NamedQuery(name = "Ventureprofile.findByVisible", query = "SELECT v FROM Ventureprofile v WHERE v.visible = :visible ORDER BY v.venturename")
    public class Ventureprofile implements Serializable {
        @Id
        @GeneratedValue(strategy=GenerationType.AUTO, generator="Ventureprofile.ventureprofileid.seq")
        @SequenceGenerator(name="Ventureprofile.ventureprofileid.seq", sequenceName="ventureprofile_ventureprofileid_seq", allocationSize=1) 
        @Column(name = "ventureprofileid", nullable = false)
        private BigInteger ventureprofileid;      
        @Column(name = "ventureid", table="ventureprofile", nullable = false)
        private Long ventureid;
        @Column(name = "venturesummary")
        private String venturesummary;
        @Column(name = "logoimagelocation")
        private String logoimagelocation;
        @Column(name = "visible", nullable = false)
        private boolean visible;  
        @Column(table = "venture", name = "venturename", nullable=false, insertable=false, updatable=false)
        private String venturename;
    //...
    The extra column of data is "venturename"; the tables are related by the column "ventureid". I contend that since I have mapped no read/write fields to the 'venture' table (I qualified any ambiguous columns with 'table='), and described the table relationship via my @PrimaryKeyJoinColumn annotation, TopLink should not try to persist anything to the 'venture' table. However, I am getting the following exception:
    Exception [TOPLINK-4002] (Oracle TopLink Essentials - 2.0 (Build b58g-fcs (09/07/2007))): oracle.toplink.essentials.exceptions.DatabaseException
    Internal Exception: org.postgresql.util.PSQLException: ERROR: null value in column "venturename" violates not-null constraint
    Error Code: 0
    Call: INSERT INTO venture (ventureid) VALUES (?)
            bind => [220]
    Query: InsertObjectQuery(VCMarkWeb.db.entity.Ventureprofile[ventureprofileid=220])
            at oracle.toplink.essentials.exceptions.DatabaseException.sqlException(DatabaseException.java:311)
    ...It seems pretty clear that toplink is trying to insert an empty row into the secondary table, and barfing since it is inserting no values, including those constrained to be nonnull. Anyway...I hope it's clear what's going on here, and perhaps somebody knows the right way to do this. Thanks fot your help!

    Tom and Cornelius,
    Thank you both for your ideas. This what I think I will do:
    1) Drive up to OWC (fortunately they are only about 15 miles away from where I live) and get a bigger HD and some more RAM.
    2) Copy the OS9 applications on the existing HD to a thumb drive.
    3) Install the new hard drive and memory
    4) Install OS X Tiger and 9.2.2
    5) Maybe do the 4.1.8 firmware update also, maybe.
    6) Look around for a Superdrive and if I can get one for a good price, I'll do it otherwise I'll just use USB thumb drives if I need to copy something off the PB.
    One thing that I was fighting with the other day was that when I had the yoyo power supply plugged in and attached when the PB was in the kitchen, the batteries would not charge although the yoyo was outputting +24 VDC.
    I move everything to the garage, where I originally fixed the yoyo power supply, plug it in out there and the batteries started charging!!! Scratching my head...then I check the voltage between the metal barrel (with the Vsensing resistor attached between the metal shell and the ground wire) of power supply plug and the +24 VDC wire and there is 22 to 23 VDC with the batteries being charged. In the kitchen when they were not being charged, it was about 1.0 VDC. So I rotate the plug a little bit and the batteries start charging. Rotate it more and the batteries don't charge. Something else to do when I have spare time.
    Anyway, thanks for the help and suggestions.
    John

  • How to read from an internal table with multiple key fields.

    Hi All!!
    I want to read from an internal table having k1, k2, k3 as its keys.
    How can I use the READ statement to read an entry using these as the key fields?
    Thanks in advance.
    Prabhas Jha

    hi there
    use:
    sort itab by k1 k2 k3.
    read table itab into wa with key k1 = value1
                                     k2 = value2
                                     k3 = value3
                                     binary search.
    where:
    itab is your internal table
    wa is the work area with the same line type as the itab
    cheers
    shivika

  • How do I read from one table and write to another identical table?

    I am very new to Oracle. I am trying to do something that should be very simple.
    I am trying to read from one table in SQL and then write to another,
    identically formatted table. I keep getting various errors. Could someone please
    post some very simple code that will work so that I can play around with it?
    Any help would be greatly appreciated.
    Thanks,
    Ron

    Thanks, but I must be missing something.
    I have two tables, SONGLIST and SETLIST.
    The second line by itself works just fine on either table.
    Here is the code I used following your suggestion, along with its error message.
    Hope you can help. Thanks again...
    INSERT INTO SETLIST
    SELECT TITLE FROM SONGLIST WHERE ROTATION <> 'X'
    ORA-00947: not enough values
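    ORA-00947 means the subquery supplies fewer values than the target table has columns. A minimal sketch of the usual fix, naming the target column explicitly (assuming SETLIST also has a TITLE column, since the tables are described as identical):
    INSERT INTO setlist (title)
    SELECT title
      FROM songlist
     WHERE rotation <> 'X';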

  • Need to read all entris  for field prtxt from table /sapsll/prt

    hi
    I need to read all entries from table /sapsll/prt, field prtxt, but only one is coming back.
    Please see the select statements below:
    if not gt_sagmeld[] is initial.
                SELECT /sapsll/cuit~guid_cuit         " PK
                       /sapsll/cuit~QUANT_FLT         " to be displayed
                       /sapsll/cuit~QUAUM             " to be displayed
                       /sapsll/cuit~RPTDT             " to be displayed
                       /sapsll/cuit~guid_cuhd
                       /sapsll/cuit~guid_pr           " needed for gt_prt inttab
                      /sapsll/corref~refno
                       /sapsll/corref~guid_pobj
                INTO corresponding fields of table gt_sapsllcuit
                FROM ( ( /sapsll/cuit
                inner join /sapsll/cuhd on /sapsll/cuit~guid_cuhd = /sapsll/cuhd~guid_cuhd )
                inner join /sapsll/corref on /sapsll/corref~guid_pobj = /sapsll/cuhd~guid_cuhd )
                FOR all entries in gt_sagmeld
                WHERE /sapsll/cuit~guid_cuit = gt_sagmeld-guid_pobj.
             endif.
            if not gt_sapsllcuit[] is initial.
             select /sapsll/prt~prtxt
                    /sapsll/prt~guid_pr       
             into corresponding fields of table gt_prt
             from /sapsll/prt
             for all entries in gt_sapsllcuit
             where /sapsll/prt~guid_pr = gt_sapsllcuit-guid_pr.
    loop at gt_sagmeld into wa_sagmeld.
    read table gt_sapsllcuit into wa_sapsllcuit
    with key guid_cuit = wa_sagmeld-guid_pobj
    binary search.
    Read table gt_prcon into wa_prcon with key
    guid_pr = wa_sapsllcuit-guid_pr.
    if sy-subrc = 0.
    *wa_sagmeld_outtab-guid_pr  = wa_prt-guid_pr. 
    Here I am facing the problem that multiple entries from table /sapsll/prt are not being displayed, only one... but I have checked the table and it has two entries.
    Please suggest.
    Regards
    Nishant

    Hi Nishant!
    When you use 'for all entries' SAP (or database?) does a 'delete adjacent duplicates' on the result. This is necessary because of the special selection technique in this case.
    You need to select enough columns from /sapsll/prt, so that your two entries will differ in the result.
    Regards,
    Christian

  • Reading data from table on screen .

    Hi
    I have a 'Z' screen which contains a table. I want to read the content of the table from the screen, because sometimes the user fills the table fields but doesn't press the ENTER key, so the data filled in on the screen never reaches the internal table. Is there a way to read the table content from the screen?
    thanks
    Elad

    hi eladush,
    below is the master table control program, hope it helps you
    <removed by moderator>
    Moderator message: please post only relevant code parts, your posts must contain less than 5000 characters to preserve formatting.

  • JDBC outbound reading from multiple tables

    Hi,
    I have a scenario in which I have to read data from multiple tables.
    The keys of these tables (employee details) are dependent on an employee master table.
    Now there is a 1-N relationship between master and the details table. I have to send the data to an IDOC.
    I am stuck since reading from all the tables at the same time is not possible.
    Please suggest some solution.
    Thanks,
    Vaibhav

    Hi Bhavesh,
    Thanks for your reply. It solved my problem.
    Full points to you...
    Please help me with one more issue.
    As per the method you described, I am reading from multiple tables. Each table has data for multiple employees.
    The response message is of the format
    <STATEMENT_Table1_response>
    <row>
       Data for emp 1
    </row>
    <row>
       Data for emp 1
    </row>
    </STATEMENT_Table1_response>
    <STATEMENT_Table1_response>
    <row>
       Data for emp 2
    </row>
    <row>
       Data for emp 2
    </row>
    </STATEMENT_Table1_response>
    <STATEMENT_Table2_response>
    <row>
       Data for emp 1
    </row>
    <row>
       Data for emp 1
    </row>
    </STATEMENT_Table2_response>
    <STATEMENT_Table2_response>
    <row>
       Data for emp 2
    </row>
    <row>
       Data for emp 2
    </row>
    </STATEMENT_Table2_response>
    I want to collect it into an IDoc where the data for each employee is grouped together, like:
    <Emp1>
      <Data from Table1>
      <Data from Table2>
    </Emp1>
    <Emp2>
      <Data from Table1>
      <Data from Table2>
    </Emp2>
    I have tried many ways, and it looks like custom code in Java will have to be written.
    Is there anything else possible?
    Thanks in Advance,
    Vaibhav

  • How to read data from table GLFUNCA quickly?

    Hello everybody!
    Recently my colleague wrote a report that reads data from table GLFUNCA; it is very slow and sometimes exceeds the allowed runtime.
    How can an INDEX or a VIEW be used to accelerate it?
    Please suggest helpful solutions.
    Thanks!
    Best wishes!

    Hi,
    Please check your SELECT statement to find out what are the columns being used in the WHERE clause. Then go to SE11 and find out if there are indexes on those columns.
    If Indexes don't exist then you can create indexes on those columns, which might improve the performance.
    However, there might be other areas where performance can be improved. Do a runtime analysis / trace and figure out which part of the program is taking the most time.
    Regards,
    Ravi
    Note : Please mark the helpful answers
