Using a procedure to read from CLOB

Hi folks,
I insert a dataset into a table using the following procedure:
<CODE>
CREATE OR REPLACE PROCEDURE WRITE_KATALOGTEXT(
  IN_DOKUID CHAR,
  IN_ARTNR  NUMBER,
  IN_TEXT   VARCHAR2,
  IN_DATUM  DATE) IS
  LOB_LOC CLOB;
BEGIN
  INSERT INTO TEXTCASTOR (DOKUID, ARTNR, TEXT, DATUM)
  VALUES (IN_DOKUID, IN_ARTNR, EMPTY_CLOB(), IN_DATUM);
  COMMIT;

  SELECT TEXT INTO LOB_LOC
    FROM TEXTCASTOR
   WHERE DOKUID = IN_DOKUID
     AND ARTNR  = IN_ARTNR
     AND DATUM  = IN_DATUM
     FOR UPDATE;

  DBMS_LOB.WRITE(LOB_LOC, LENGTH(IN_TEXT), 1, IN_TEXT);
  COMMIT;
END;
</CODE>
After this insert I try to read from the table using the following function:
<CODE>
CREATE OR REPLACE FUNCTION GET_KATALOGTEXT(IN_ARTNR NUMBER)
  RETURN VARCHAR2 IS
  BUFFER         CLOB;
  MY_RETURNVALUE VARCHAR2(32767);
  MY_ARTNR       NUMBER(6) := IN_ARTNR;
BEGIN
  SELECT TEXT INTO BUFFER
    FROM WORKFLOWOWNER.TEXTCASTOR
   WHERE ARTNR = MY_ARTNR;
  MY_RETURNVALUE := DBMS_LOB.SUBSTR(BUFFER, 32767, 1);
  RETURN MY_RETURNVALUE;
END;
</CODE>
My problem is that I get Oracle error ORA-06502. It says that the string buffer is too small whenever the length of the CLOB exceeds 4k.
Writing works with values having a length of over four thousand characters.
Reading only works with values having a length of less than four thousand characters.
Has anyone a clue how to handle this?
Greetings
Markus

Hi,
In the first procedure you're not updating the data with the CLOB value you're writing... I don't know if this is your actual procedure or one that has been minimized.
As regards the second function, it seems to be okay. Are you trying to call this function through a SQL statement? That will not work.
What you need to do is write a PL/SQL block and call it from there. In SQL a VARCHAR2 can only hold 4000 bytes and nothing more than that.
Hope this helps.
Regards,
Ganesh R
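
A quick way to see the limit in action (a sketch, assuming the GET_KATALOGTEXT function above is compiled; the ARTNR value is hypothetical): call the function from a PL/SQL block rather than from a SQL statement.

```sql
-- In PL/SQL a VARCHAR2 can hold up to 32767 bytes, so this works
-- even when the stored CLOB is longer than 4k:
DECLARE
  L_TEXT VARCHAR2(32767);
BEGIN
  L_TEXT := GET_KATALOGTEXT(123456);  -- hypothetical ARTNR
  DBMS_OUTPUT.PUT_LINE('Length read: ' || LENGTH(L_TEXT));
END;
/
-- In a SQL context the limit is 4000 bytes, so
--   SELECT GET_KATALOGTEXT(123456) FROM DUAL;
-- raises ORA-06502 as soon as the returned text exceeds it.
```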

Similar Messages

  • Reading from a CLOB column in a table

    Hi,
    I would like to know how to read data from a CLOB column; can anyone help me in this regard?
    I have written some code, but I would like to loop it.
    Please see the code given below:
    DECLARE
      lobloc           CLOB;
      buffer           VARCHAR2(32000);
      amount           NUMBER := 200;
      amount_in_buffer NUMBER;
      offset           NUMBER := 1;
    BEGIN
      -- fetch the CLOB locator
      SELECT f_large_value
        INTO lobloc
        FROM T_PRODUCT_BRAND_PARAMS
       WHERE F_BRAND_ID = 'PPNET'
         AND F_PRODUCT_ID = 'CASINO'
         AND F_KEY LIKE '%TAB_CONFIG%';
      dbms_lob.read(lobloc, amount, offset, buffer);
      -- use the LENGTH built-in to find the length of the buffer
      amount_in_buffer := length(buffer);
      dbms_output.put_line(buffer);
      -- dbms_output.put_line(to_char(amount_in_buffer));
    END;
    where f_large_value is the CLOB column.
    When I executed it, it gave only the first 5 lines of the data.
    I would like to loop it; can anyone advise me in this regard?
    Thanks
    Pavan Kumar N
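
    One way to loop it (a minimal sketch based on the same variables as above): keep calling dbms_lob.read and advance the offset until the end of the LOB raises NO_DATA_FOUND.

    ```sql
    DECLARE
      lobloc CLOB;
      buffer VARCHAR2(32000);
      amount NUMBER;
      offset NUMBER := 1;
    BEGIN
      SELECT f_large_value INTO lobloc
        FROM T_PRODUCT_BRAND_PARAMS
       WHERE F_BRAND_ID = 'PPNET'
         AND F_PRODUCT_ID = 'CASINO'
         AND F_KEY LIKE '%TAB_CONFIG%';
      LOOP
        amount := 200;  -- READ overwrites amount with the chars actually read
        BEGIN
          dbms_lob.read(lobloc, amount, offset, buffer);
        EXCEPTION
          WHEN NO_DATA_FOUND THEN EXIT;  -- raised once past the end of the LOB
        END;
        dbms_output.put_line(buffer);
        offset := offset + amount;
      END LOOP;
    END;
    ```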

    Hi,
    I am using BO Data Integrator as the ETL tool. Can you specify how I would 'split the data loads' or 'read the data in multiple passes'? Is it by using multiple Query Transforms with different WHERE clauses as filters, e.g. 'by period' and 'by year'?
    Query_1 filter:
    year = 2006
    Query_2 filters:
    year = 2007
    Merge:
    Query_1 and Query_2
    Thanks again!
    Randy

  • Using a Procedure in the FROM clause of a query

    Is it possible to use a Procedure that accepts multiple parameters and returns multiple parameters in the FROM section of a query?
    I have a Procedure that formats a postal address from BS7666 format into an Oracle Apps friendly format.
    I'd like to be able to select the data from the source, feed it through this procedure and output it as part of a Materialised View.
    PROCEDURE Format_llpg_Address(
      In_Loc               IN VARCHAR2,
      In_Description       IN VARCHAR2,
      In_County            IN VARCHAR2,
      In_Town              IN VARCHAR2,
      In_PostTown          IN VARCHAR2,
      In_Saon_Start_num    IN NUMBER,
      In_Saon_Start_Suffix IN VARCHAR2,
      In_Saon_End_num      IN NUMBER,
      In_Saon_End_Suffix   IN VARCHAR2,
      In_Saon_Text         IN VARCHAR2,
      In_Paon_Start_num    IN NUMBER,
      In_Paon_Start_Suffix IN VARCHAR2,
      In_Paon_End_num      IN NUMBER,
      In_Paon_End_Suffix   IN VARCHAR2,
      In_Paon_Text         IN VARCHAR2,
      In_PostCode          IN VARCHAR2,
      Out_Address1         OUT NOCOPY VARCHAR2,
      Out_Address2         OUT NOCOPY VARCHAR2,
      Out_Address3         OUT NOCOPY VARCHAR2,
      Out_Town             OUT NOCOPY VARCHAR2,
      Out_County           OUT NOCOPY VARCHAR2,
      Out_PostCode         OUT NOCOPY VARCHAR2);
    Many Thanks,
    Jason.

    You should look at [pipelined functions|http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28370/tuning.htm#i52954]
    Adrian Billington has a number of excellent articles on pipelined functions.
    Here's a [link to one of them|http://www.oracle-developer.net/display.php?id=207]
    Edited by: dombrooks on Oct 12, 2009 4:45 PM
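
    For reference, the pipelined pattern looks roughly like this (a sketch with hypothetical types and a trivial pass-through instead of the real Format_llpg_Address logic; source_addresses is a stand-in name):

    ```sql
    -- Hypothetical object/collection types for the output rows
    CREATE TYPE t_address_row AS OBJECT (
      address1 VARCHAR2(240),
      town     VARCHAR2(100),
      postcode VARCHAR2(10)
    );
    /
    CREATE TYPE t_address_tab AS TABLE OF t_address_row;
    /
    CREATE OR REPLACE FUNCTION format_addresses RETURN t_address_tab
      PIPELINED IS
    BEGIN
      FOR r IN (SELECT loc, town, postcode FROM source_addresses) LOOP
        -- call the formatting procedure here, then pipe one row out
        PIPE ROW (t_address_row(r.loc, r.town, r.postcode));
      END LOOP;
      RETURN;
    END;
    /
    -- usable in a FROM clause, e.g. in the materialised view query:
    --   SELECT * FROM TABLE(format_addresses);
    ```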

  • How to use SSIS 2014 to read from Sharepoint List ?

    Hi there,
    I have designed a simple entry form for users using List in Sharepoint 2013.
    I need to use the information to merge into my datawarehouse.
    Question :
    1. How can I set up SSIS (SQL 2014) to get the source from a List in SharePoint?
    I have tried to download the script suggested on CodePlex, but it seems not to work with SQL 2014, as there is no SSIS toolbar displayed.
    Any help is much appreciated
    Thank you and Best Regards

    Hi SylviaO,
    The current SharePoint List Adapter that extracts and loads SharePoint data in SQL Server Integration Services supports x86/x64 systems using SQL 2005 / SQL 2008 / SQL 2008 R2 / SQL 2012. It doesn't support SQL 2014, so we may need to wait for an update.
    Besides, the OData Source component can also be used to read from SharePoint lists, so we can download and install the 64-bit ODataSourceForSQLServer2014-amd64.msi or the 32-bit ODataSourceForSQLServer2014-x86.msi from the Microsoft® SQL Server® 2014 Feature Pack.
    The following blog about using the SSIS OData Source Connector to read data from SharePoint lists is for your reference:
    http://whitepages.unlimitedviz.com/2014/03/using-the-odata-source-connector-with-sharepoint-online-authentication/
    If there are any other questions, please feel free to ask.
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • ? read from clob and store each line in database

    Greetings,
    I would like to read, LINE BY LINE, the contents of a CLOB column (which stores the contents of a plain .txt file) and store each line in a table.
    Is that possible?

    pollywog wrote:
    with t as (select to_clob('fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    v
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa
    fdsafdsafdafdsaffdsafdsafdsafdsafdafdafdsdsfdsa') x from dual)
    select
    text
    from t
    model return updated rows
    dimension by (0 d)
    measures (dbms_lob.substr( x, 4000, 1 ) text, 0 position_of_return )  -- the position of return is where the next carriage return is
    rules iterate(100) until position_of_return[iteration_number+1] = 0
    ( position_of_return[iteration_number + 1] = instr(text[0],chr(10),1,iteration_number + 1),
      text[iteration_number + 1] = substr(text[0],
        position_of_return[iteration_number],
        position_of_return[iteration_number + 1] - position_of_return[iteration_number]) )
    Hi,
    Thank you for your kind help. The query is very fast, but I would like to ask a question about it. My CLOB contains more than 4000 characters. Is it possible to change the 1 in dbms_lob.substr(x, 4000, 1) so that it starts again from where it left off?
    I did that by making a pipelined function and looping until I got to the end of the CLOB, but is there a faster way using just SQL?
    Best Regards
    Fatih
    FUNCTION get_clob_lines(cl_data CLOB) RETURN t_x_clob_table
      PIPELINED IS
      yrecords t_x_clob_record;
      CURSOR c_lines(n_start_position IN NUMBER) IS
        SELECT position_of_return, text
          FROM (SELECT position_of_return, text
                  FROM dual
                 model RETURN updated rows
                 dimension BY (0 d)
                 measures (dbms_lob.substr(cl_data, 4000, n_start_position) text, 0 position_of_return)
                 rules iterate(4000) until position_of_return[iteration_number + 1] = 0
                   (position_of_return[iteration_number + 1] = instr(text[0], chr(10), 1, iteration_number + 1),
                    text[iteration_number + 1] = substr(text[0],
                      position_of_return[iteration_number] + 1,
                      position_of_return[iteration_number + 1] - (position_of_return[iteration_number] + 1)))
               ) ccc
         WHERE ccc.position_of_return <> 0
         ORDER BY ccc.position_of_return;
      l_n_max_position   NUMBER;
      l_n_start_position NUMBER;
    BEGIN
      l_n_start_position := 1;
      WHILE l_n_start_position < dbms_lob.getlength(cl_data) LOOP
        FOR r_lines IN c_lines(l_n_start_position) LOOP
          yrecords := t_x_clob_record(n_position => r_lines.position_of_return,
                                      v_text     => r_lines.text);
          l_n_max_position := r_lines.position_of_return;
          PIPE ROW(yrecords);
        END LOOP;
        l_n_start_position := l_n_start_position + l_n_max_position;
      END LOOP;
      RETURN;
    END;

  • Replace xml code when used as a xmltype converted from clob.

    I am still new at all this so I will try to make sense.
    I use "sys_xmlgen" to take my clob_content column, which is a CLOB, and convert it into v_xml, which is an XMLTYPE.
    **** code to show the change of CLOB to XMLTYPE
    SELECT sys_xmlgen(clob_content) INTO v_xml FROM xmltest2 WHERE item_id = v_item_id;
    update xmltest2 set xml = v_xml where item_id = v_item_id;
    ** end of code
    When you use sys_xmlgen it changes the xml into this example:
    <?xml version="1.0"?>
    <CLOB_CONTENT>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;
    &lt;!DOCTYPE metadata SYSTEM &quot;csdgm2.dtd&quot;&gt;
    &lt;?xml-stylesheet href=&quot;FGDC_V2.xsl&quot; type=&quot;text/xsl&quot;?&gt;
    &lt;metadata&gt;
    &lt;idinfo&gt;
    </CLOB_CONTENT>
    I need to remove the <CLOB_CONTENT> tag and change some things such as the "&lt;" to a "<" and so on. But when I do a replace statement
    select replace(clob_content, '&lt;', '<') into v_xml from xmltest2;
    It says I can not do a clob into number.
    Just to make myself clear, the clob_content is clob and v_xml is xmltype. SOOO I think that is where my problem is. Does anyone know the syntax to replace the code in my xml so that it looks like the original clob xml??
    Any help would be appreciated.
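
    If the CLOB already holds a complete XML document, one option (a sketch using the xmltest2 columns described above) is to skip sys_xmlgen entirely and build the XMLTYPE straight from the CLOB; the XMLTYPE() constructor parses the text instead of wrapping and escaping it inside a <CLOB_CONTENT> element:

    ```sql
    DECLARE
      v_xml     XMLTYPE;
      v_item_id NUMBER := 1;  -- hypothetical item id
    BEGIN
      -- Note: the embedded DOCTYPE declaration may need to be
      -- stripped first if the parser rejects it.
      SELECT XMLTYPE(clob_content)
        INTO v_xml
        FROM xmltest2
       WHERE item_id = v_item_id;
      UPDATE xmltest2 SET xml = v_xml WHERE item_id = v_item_id;
    END;
    ```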


  • Using an array to read from and write to files

    My problem is that I do not know how to write this program. It's an inventory warehouse program that uses an array to store the items. I have to use keyboard input to run through the program the first time, using a write method to save the file, and a read method to go through the program a second time and resave it to a different file with the write method. I've asked countless times for help, but the instructor just won't help, so I was hoping I could get some help here; if I don't get this done, I'm going to end up failing the class.
    also, the requirements are listed here as well, so it can be better understood.
    Write a program that:
    1. allows the user to choose whether the inventory data comes from a file or from the keyboard
    2. allows for up to 20 inventory items
    3. if the data comes from the keyboard, asks for all inventory information (item name, number in stock,
    initial warehouse, and value of one item)
    4. if the data comes from a file, displays the inventory information for the item.
    5. for each item, asks the user how many items to add or delete from inventory
    6. determines whether an item must be moved to a different warehouse and changes the location if
    necessary (note, it may be necessary to move an item to a smaller warehouse)
    7. once all inventory changes have been made, for each warehouse displays the items in the warehouse, the
    total number of items in the warehouse, and the total value of the items in the warehouse.
    8. once all inventory changes have been made, stores the item information in a file (which can be used for
    the next program run)
    9. asks the user for the names of the input file (if data is coming from a file) and the output file (always).
    p.s. I can post the source code for it if it's required for the help.
    Thanks,
    Xandler

    My specific question is how I would go about using the Scanner utility to tell it to take input from the file or the keyboard. If from the keyboard, the item information is entered manually; if from a file, it asks the user for the file name and the user selects it, and then it saves the results to a file. I can't really post what I've tried, because that's the problem: I don't know how to go about doing it and the instructor won't help me. I do know it requires a file read method for the second run, keyboard input for the first, and a write method for saving the file(s).
    Thanks,
    Xandler

  • Using NOT EXISTS to read from the same table.

    Hi,
    If I use the following SQL statements:
    select * from outstanding_balance
    where dt_cats_date = '21,Nov 2005'
    order by id_entity,id_object_type,id_object,id_payment_control ASC
    select * from outstanding_balance
    where dt_cats_date = '18,Nov 2005'
    order by id_entity,id_object_type,id_object,id_payment_control ASC
    Rows returned by the 1st query: 15363
    Rows returned by the 2nd query: 15325
    i.e. a difference of 38. But when I use the NOT EXISTS operator:
    select *
    from outstanding_balance ob1
    where ob1.dt_cats_date = '21 Nov 2005'
    and not exists (
    select *
    from outstanding_balance ob2
    where ob2.dt_cats_date = '18 Nov 2005'
    and ob2.id_entity = ob1.id_entity
    and ob2.id_object_type = ob1.id_object_type
    and ob2.id_object = ob1.id_object
    and NVL(ob2.id_payment_control, -1) = NVL(ob1.id_payment_control, -1))
    the number of rows returned is 34.
    Shouldn't this also return 38? Have I overlooked something in my NOT EXISTS?

    Have you tried with NOT IN instead of NOT EXISTS? They really behave differently.
    http://asktom.oracle.com/pls/ask/f?p=4950:8:16307188002894201762::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:442029737684
    Jaffar
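
    A toy illustration of the difference (hypothetical tables, not the poster's data): a single NULL in the subquery makes NOT IN return no rows at all, while NOT EXISTS is evaluated row by row.

    ```sql
    CREATE TABLE t1 (x NUMBER);
    CREATE TABLE t2 (y NUMBER);
    INSERT INTO t1 VALUES (1);
    INSERT INTO t1 VALUES (2);
    INSERT INTO t2 VALUES (1);
    INSERT INTO t2 VALUES (NULL);

    -- NOT IN: x <> NULL evaluates to UNKNOWN, so the predicate is
    -- never true and this returns no rows at all.
    SELECT x FROM t1 WHERE x NOT IN (SELECT y FROM t2);

    -- NOT EXISTS: checked per row, so the unmatched value 2 is returned.
    SELECT x FROM t1
     WHERE NOT EXISTS (SELECT 1 FROM t2 WHERE t2.y = t1.x);
    ```

    This is also why the NVL() wrappers in the original query matter: they stop NULL id_payment_control values from silently failing the equality comparison.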

  • Using servlets to read from text file and insert data

    Hi,
    Can I use a servlet to read from a space delimited text file on the client computer and use that data to insert into a table in my database? I want to make it easy for my users to upload their data without having to have them use SQL*Loader. If so can someone give me a hint as how to get started? I appreciate it.
    Thanks,
    Colby

    Create a page for the user to upload the file to your webserver and send a message (containing the file location) to a server app that will open the file, parse it, and insert it into your database. Make sure you secure the page.
    or
    Have the user paste the file into a simple web form that submits to a servlet that parses the data and inserts it into your db.

  • Read from Oracle DB and Write to MySQL

    Hi All,
    I am fairly new to database administration, so please bear with me if this is something that is simple or not achievable at all -
    SetUp:
    I have an Oracle DB on one dedicated server, to which I have only read-only access.
    I have a MySQL database setup on a windows server 2008, both are on the company network and accessible to internal employees only.
    Problem Statement:
    I need to read certain tables from Oracle DB and push the records to MySQL database.
    I have a stored procedure which was doing this from one Oracle schema to another, and it is running fine. Now I need the same stored procedure to read from one database (Oracle) and write to another database (MySQL). Is there a way to do this through the stored procedure? I know I can write a Java program to do this, but I need to do it through a stored procedure.
    Appreciate any help in this regards.

    Start here:  http://docs.oracle.com/cd/E11882_01/server.112/e25494/ds_concepts.htm#i1007709
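
    For illustration, once a Database Gateway (e.g. DG4ODBC) is configured and a database link to the MySQL instance exists, the copy can be an ordinary INSERT over the link. The link name mysql_link and the table/column names below are hypothetical:

    ```sql
    -- Assumes something like:
    --   CREATE DATABASE LINK mysql_link
    --     CONNECT TO "mysql_user" IDENTIFIED BY "mysql_pwd"
    --     USING 'dg4odbc_alias';
    BEGIN
      INSERT INTO "target_table"@mysql_link (id, name)
        SELECT id, name FROM oracle_source_table;
      COMMIT;
    END;
    ```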

  • Reading from .CSV and storing it into a collection

    Hi folks,
    Is there a way to make a dynamic procedure that works with .CSV documents and stores them in a collection? For example, you have to write a procedure to read from a .CSV, but users upload 10 different versions that have different numbers of columns.
    Normally I would define a record type to match those columns and store the rows in a collection. However, if I don't know the number of columns, I would need to define 10 record types in advance, which I am trying to avoid.
    The problem is that I can't define SQL elements on the fly. On production I don't have the rights to dynamically create a table to match my columns and then drop it when I no longer need it, so I need to store the data in a collection.
    And the last option, where I would loop through the document and do the operations I need as I go, is not good, since the document is also used by other procedures that write to and read from it. The idea is to pick up the data, store it in a collection, close the file and then work with it.
    This is what I got so far:
    declare
      -- Variables
      l_file      utl_file.file_type;
      l_line      varchar2(10000);
      l_string    varchar2(32000);
      l_delimiter varchar2(10);
      -- Types
      type r_kolona is record(
        column_1 varchar2(500)
       ,column_2 varchar2(500)
       ,column_3 varchar2(500)
       ,column_4 varchar2(500)
       ,column_5 varchar2(500));
      type t_column_table is table of r_kolona;
      t_column    t_column_table := t_column_table();
    begin
      /*Define the delimiter*/
      l_delimiter := ';';
      /*Open file*/
      l_file      := utl_file.fopen( 'some dir', 'some.csv', 'R');
      /*Takes first row of document as header*/
      utl_file.get_line( l_file, l_line);
      loop
        begin
          utl_file.get_line( l_file, l_line);
          /*Delete newline operator*/
          l_string                         := rtrim( l_line, chr(13)) || l_delimiter;
          /*Extend array and insert parsed values */
          t_column.extend;
          t_column(t_column.last).column_1 := substr( l_string, 1, instr( l_string, l_delimiter, 1, 1) - 1);
          t_column(t_column.last).column_2 := substr( l_string, instr( l_string, l_delimiter, 1, 1) + 1, instr( l_string, l_delimiter, 1, 2) - instr( l_string, l_delimiter, 1, 1) - 1);
          t_column(t_column.last).column_3 := substr( l_string, instr( l_string, l_delimiter, 1, 2) + 1, instr( l_string, l_delimiter, 1, 3) - instr( l_string, l_delimiter, 1, 2) - 1);
          t_column(t_column.last).column_4 := substr( l_string, instr( l_string, l_delimiter, 1, 3) + 1, instr( l_string, l_delimiter, 1, 4) - instr( l_string, l_delimiter, 1, 3) - 1);
          t_column(t_column.last).column_5 := substr( l_string, instr( l_string, l_delimiter, 1, 4) + 1, instr( l_string, l_delimiter, 1, 5) - instr( l_string, l_delimiter, 1, 4) - 1);
        exception
          when no_data_found then
            exit;
        end;
      end loop;
      /*Close file*/
      utl_file.fclose(l_file);
      /*Loop through collection elements*/
      for i in t_column.first .. t_column.last
      loop
        dbms_output.put_line(
             t_column(i).column_1
          || ' '
          || t_column(i).column_2
          || ' '
          || t_column(i).column_3
          || ' '
          || t_column(i).column_4
          || ' '
          || t_column(i).column_5);
      end loop;
    exception
      when others then
        utl_file.fclose(l_file);
    end;
    Stupid version would be to define a record with 50 elements and hope they don't nuke the Excel with more columns :)
    Best regards,
    Igor

    Igor S. wrote:
    Use some to query data and then fix wrong entries on prod (insert, update, delete). Manipulate with some and then make new reports. The first that comes to mind, but basically it is to write a procedure that can be used for ANY .csv so I don't have to rewrite the code.
    This is logically wrong and smacks of poor design.
    You're wanting to take CSV files with various unknown formats of data, read that data into some generic structure, and then somehow magically be able to process the unknown data to be able to "fix wrong entries". If everything is unknown... how will you know what needs fixing?
    Good design of any system stipulates the structures that are acceptable, and if that means you know there are just 20 possible CSV formats and you can implement a mechanism to determine which format a particular CSV is in (perhaps something in the filename?) then you will create 20 known targets (record structures/tables or whatever) to receive that data into, using 20 external tables, or procedures or whatever is necessary.
    Doing anything other than that is poor design, leaves the code open to breaking, is non-scalable, hard to debug, and just wrong on so many levels. This isn't how software is engineered.
    Igor S. wrote:
    For example you have 20 developers that have to work with .CSV files. So when someone has to work with a .CSV he would call a procedure with parameters directory and file name. And as an out parameter he would get a collection with the .CSV stored inside.
    As others have mentioned, give the developers an Apex application for their data entry/manipulation, working directly on the database with known structures and validation so they can't create "wrong" data in the first place. They can then export that as .CSV data for other purposes if really required.
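
    A sketch of one such external-table target (the directory object, file name and columns are hypothetical): Oracle then parses the file itself, with no UTL_FILE loop or in-memory collection needed.

    ```sql
    CREATE TABLE csv_format_a_ext (
      column_1 VARCHAR2(500),
      column_2 VARCHAR2(500),
      column_3 VARCHAR2(500)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY some_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        SKIP 1                      -- header row
        FIELDS TERMINATED BY ';'
      )
      LOCATION ('some.csv')
    );
    -- then simply: SELECT * FROM csv_format_a_ext;
    ```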

  • Transformation Rule Type "Read from DataStore

    Hi All,
    I have two DSOs (Header and Item). My requirement: the Item DSO has a field Bill-to Party, and my Header DSO also has a Bill-to Party field.
    I need to fill the Bill-to Party field in the Header DSO from the Item DSO field Bill-to Party by using the rule type Read from DataStore.
    The Item DSO has two key fields; the two DSOs (Header and Item) share only one common key field, Document Number. I am assigning DocNum in the transformation, but I fail to fill Bill-to (error: Cannot read from DataStore). Please guide me how to achieve this.

    Hi.
    I think the problem is that the transformation rule needs the full target key fields (at item level) to be mapped in order to look up the result value; otherwise, if more than one record is found, more than one result value is found as well.
    It would work if you were reading the Header DSO, as all items would then get just one record as the result.
    This can be solved using start/end routines ABAP programming.
    Hope this helps.
    regards.

  • Getting error Can't read from the source or disk when moving documents from one folder to another folder in the library

    Hi,
    When we try to move documents from one folder to another folder in the document library using "Open with Explorer", we get the error below:
         Can't read from the source file or disk.
    The user has the below permissions for the library as well as the site:
    Full Control, Limited Access ---> given directly
    Read, Limited Access ---> given through the all Test grp
    Contribute, Limited Access ---> given through test members grp
    Read, Limited Access ---> given through The group grp
    Could anyone please help me?
    Thanks

    Hi Reddy,
    If you are moving files between two libraries in different sites, the error occurs by design: there are limitations on the DAV move commands that the DAV client is respecting.
    https://social.msdn.microsoft.com/Forums/en-US/6245f332-c609-4a7b-8e00-c8b5e46f7759/cant-move-files-using-windows-explorer-cant-read-from-source?forum=sharepointgeneral
    If you are moving files within the same library, I recommend you use Wireshark to reveal the error message and enable IIS Trace Logging for Failed Requests to examine the IIS log file for troubleshooting.
    https://social.msdn.microsoft.com/Forums/en-US/47cd569d-98f2-4cca-b78e-fd178c097285/cant-read-from-the-source-file-or-disk?forum=sharepointgeneralprevious
    To narrow down the issue scope, I recommend you to test with another library and see if the copy in explorer can work.
    Best regards.
    Thanks
    Victoria Xia
    TechNet Community Support

  • Read from network problem?

    When I use the URL class to read from a server via a CDMA network, I get blocked in BufferedReader.read(char[] cbuf, int off, int len), but this method is not described as a blocking method in the JDK documentation. Why?
    I then tried calling the ready() method first, followed by some read method, but I never get the -1 that indicates end-of-file, so I don't know when I should stop. How can I solve this problem?

    Hmm, try one more time; sorry for the format, but supplying the tags doesn't work with this code.
    Here is code I use to read a URL; you could try it and see if it throws an exception or blocks:
    //The base64encoder is part of the w3c tools
    //download jigsaw and look for the base64... file
    //http://www.google.nl/search?hl=nl&q=site%3Aw3c.org+jigsaw&lr=
    //compiled it and put it in C:\Program Files\Java\jre1.5.0\lib\ext\W3cToolsCodecBase64.jar
    //put this jar file in the classpath when you compile
    import java.applet.Applet;
    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import java.net.URL;
    import java.net.URLConnection;
    import org.w3c.tools.codec.Base64Encoder;

    public class UrlWithProxy extends Applet implements Runnable {

        public void run() {
            String location = "http://www.google.co.jp/";
            if (this.getParameter("dataSource") != null) {
                location = this.getParameter("dataSource");
            }
            String data = this.getResponse(location, "Domain\\UserAccount", "password", "proxyAddress", "shift_jis");
            System.out.println(data);
        }

        public void init() {
            new Thread(this).start();
        }

        private String getResponse(String requestUrl, String user, String password, String proxyUrl, String charsetName) {
            URL url = null;
            InputStream in = null;
            String ret = null;
            String pwd = null;
            String encodedPassword = null;
            String charEncoding = null;
            try {
                url = new URL(requestUrl);
                URLConnection conn = url.openConnection();
                conn.setRequestProperty("Accept-Charset", "utf-8"); // forces a utf-8 response, or throws a bad request exception
                if (proxyUrl != null) {
                    System.getProperties().put("proxySet", "true");
                    System.getProperties().put("proxyHost", proxyUrl);
                    System.getProperties().put("proxyPort", "80");
                    pwd = user + ":" + password;
                    Base64Encoder enc = new Base64Encoder(pwd); // encode "user:password", not just the password
                    encodedPassword = enc.processString();
                    conn.setRequestProperty("Proxy-Authorization", encodedPassword);
                }
                // first try to find out if the server has
                // provided us with the charset of the response
                if (charsetName == null) {
                    charEncoding = "UTF8";
                    int i = conn.getHeaderFields().size() - 1;
                    while (i > -1) {
                        System.out.print(conn.getHeaderFieldKey(i));
                        System.out.print("=");
                        System.out.println(conn.getHeaderField(i));
                        if (conn.getHeaderField(i).toLowerCase().indexOf("charset") != -1) {
                            String charsetLine = conn.getHeaderField(i).toUpperCase();
                            if (charsetLine.indexOf("SHIFT_JIS") != -1) {
                                charEncoding = "Shift_JIS";
                            }
                            if (charsetLine.indexOf("EUC-JP") != -1) {
                                charEncoding = "EUC-JP";
                            }
                        }
                        i--;
                    }
                } else {
                    charEncoding = charsetName;
                }
                in = conn.getInputStream();
                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                int len;
                byte[] buf = new byte[1024];
                while ((len = in.read(buf)) > 0) {
                    bos.write(buf, 0, len);
                }
                in.close();
                ret = new String(bos.toByteArray(), charEncoding);
            } catch (Exception e) {
                e.printStackTrace(System.out);
            }
            return ret;
        }
    }

  • ERROR WSNAT_CAT:1287 When Reading from file

    Hi,
    I'm new to Tuxedo, so I hope you'll understand my lack of knowledge.
    During execution of a procedure that reads from a file on the server, this error shows up in the ULOG:
    "121620.prod240!WSH.22478.1.0: WSNAT_CAT:1287: WARN: Forced shutdown of client; user name ''; client name ''; workstation address '//192.168.1.8:6386677'"
    after reading/inserting the first record.
    I alter the file, erase that first record and execute the proc again. In some cases it then reads the whole file, in others just one more record, so I have to alter the file again.
    After some attempts, it runs fine and reads all the remaining records.
    The Tuxedo version is 8.1.
    The file may have as many as 50 records.
    Any suggestion will be appreciated.

    Hi,
    Does the WSNAT_CAT:1287 error message occur every time? I don't see how that error message relates to what you are doing: it basically says that a workstation client timed out, which is most likely due to client inactivity and isn't related to your service. If timing out your client is an issue, try setting or changing the -T option on the WSL.
    Regards,
    Todd Little
    Oracle Tuxedo Chief Architect
