Fetch from cursor when no records returned

Hi,
I've got the following question / problem.
When I fetch from a cursor inside my FOR loop and the cursor returns no record, my variable 'r_item' keeps the value of the previously fetched record. Shouldn't it contain NULL when no record is found, given that I close and reopen the cursor before each fetch? Is there a way to clear the variable before each fetch?
Below is an example of the code:
CURSOR c_item (itm_id NUMBER) IS
  SELECT DISTINCT col1
  FROM   table1
  WHERE  id = itm_id;
r_item c_item%ROWTYPE;

FOR r_get_items IN c_get_items LOOP
  IF r_get_items.ENABLE = 'N' THEN
    OPEN c_item (r_get_items.ITMID);
    FETCH c_item INTO r_item;
    CLOSE c_item;
    IF r_item.ACCES = 'E' THEN
      action1
    ELSE
      action2
    END IF;
  END IF;
END LOOP;

Thanx
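For what it's worth: a FETCH that finds no row leaves the INTO variables untouched; it does not set them to NULL, even if the cursor was closed and reopened in between. So you either test c_item%NOTFOUND right after the fetch, or reset the record yourself before each fetch. A minimal sketch along those lines (the driving cursor, the tables and the column names are placeholders; the original snippet selects col1 but then tests ACCES, so the sketch just uses col1 for both):
DECLARE
  -- c_get_items, items, table1 and the column names stand in for the poster's real objects
  CURSOR c_get_items IS
    SELECT itmid, enabled
    FROM   items;

  CURSOR c_item (itm_id NUMBER) IS
    SELECT DISTINCT col1
    FROM   table1
    WHERE  id = itm_id;

  r_item       c_item%ROWTYPE;
  r_item_empty c_item%ROWTYPE;   -- never assigned, so all its fields stay NULL
BEGIN
  FOR r_get_items IN c_get_items LOOP
    IF r_get_items.enabled = 'N' THEN
      r_item := r_item_empty;    -- clear leftovers from the previous iteration
      OPEN c_item (r_get_items.itmid);
      FETCH c_item INTO r_item;
      IF c_item%NOTFOUND THEN
        NULL;                    -- no row found: r_item is empty here, handle as needed
      END IF;
      CLOSE c_item;

      IF r_item.col1 = 'E' THEN
        NULL;                    -- action1
      ELSE
        NULL;                    -- action2
      END IF;
    END IF;
  END LOOP;
END;
/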

DECLARE
    CURSOR c_dept IS
      SELECT d.deptno
      ,      d.dname
      ,      d.loc
      ,      CURSOR (SELECT empno
                     ,      ename
                     ,      job
                     ,      hiredate
                     FROM   emp e
                     WHERE  e.deptno = d.deptno)
      FROM   dept d;
    TYPE refcursor IS REF CURSOR;
    emps refcursor;
    deptno dept.deptno%TYPE;
    dname dept.dname%TYPE;
    empno emp.empno%TYPE;
    ename emp.ename%TYPE;
    job emp.job%TYPE;
    hiredate emp.hiredate%TYPE;
    loc dept.loc%TYPE;
BEGIN
   OPEN c_dept;
   LOOP
     FETCH c_dept INTO deptno, dname, loc, emps;
     EXIT WHEN c_dept%NOTFOUND;
     DBMS_OUTPUT.put_line ('Department : ' || dname);
     LOOP
       FETCH emps INTO empno, ename, job, hiredate;
       EXIT WHEN emps%NOTFOUND;
       DBMS_OUTPUT.put_line ('-- Employee : ' || ename);
     END LOOP;
  END LOOP;
  CLOSE c_dept;
END;
/
like this...

Similar Messages

  • How to fetch from cursor into plsql collection

    Dear Friends,
    I am trying to understand PL/SQL collections, using the following example.
    CREATE OR REPLACE TYPE emp_obj AS OBJECT
    (
        empname VARCHAR2(100),
        empjob  VARCHAR2(50),
        empsal  NUMBER
    );
    CREATE OR REPLACE TYPE emp_tbl IS TABLE OF emp_obj;
    CREATE OR REPLACE PACKAGE eg_collection AS
    -- Declare ref cursor
    TYPE rc IS REF CURSOR;
    -- Procedure
    PROCEDURE eg_collection_proc (out_result OUT rc);
    END;
    CREATE OR REPLACE PACKAGE BODY eg_collection AS
    PROCEDURE eg_collection_proc( out_result OUT rc) AS
    emp_tdt     emp_tbl := emp_tbl(emp_obj('oracle','DBA',100));
    CURSOR c2 IS SELECT ename,job,sal FROM emp WHERE sal > 2000;
    -- Declare a record type to hold the records from cursor and then pass to the collection
    emp_rec emp_obj;
    BEGIN
         OPEN c2;
         LOOP
              FETCH c2 INTO emp_rec;
              EXIT WHEN c2%NOTFOUND;
              emp_tdt.extend;
              emp_tdt(emp_tdt.count) := emp_rec;
         END LOOP;
         CLOSE c2;
    OPEN out_result FOR SELECT * FROM TABLE(CAST(emp_tdt AS emp_tbl));
    END eg_collection_proc;
    END eg_collection;
    Executing the proc
    variable r refcursor;
    exec eg_collection.eg_collection_proc(:r);
    print r;
    But I am getting a compilation error: type mismatch found at 'EMP_REC' between FETCH cursor and INTO variables.

    I am trying to understand PL/SQL collections. I don't know why the following code is not working.
    SQL> CREATE OR REPLACE TYPE emp_obj AS OBJECT
    2 (
    3      empname          VARCHAR2(100),
    4      empjob          VARCHAR2(50),
    5      empsal          NUMBER
    6 )
    7 /
    Type created.
    SQL> CREATE OR REPLACE TYPE emp_tbl IS TABLE OF emp_obj
    2 /
    Type created.
    SQL> DECLARE
    2      emp_tdt emp_tbl := emp_tbl ();
    3 BEGIN
    4
    5      emp_tdt.extend;
    6      SELECT emp_obj(ename, job, sal) BULK COLLECT INTO emp_tdt
    7      FROM emp WHERE sal < 4000;
    8
    9      DBMS_OUTPUT.PUT_LINE ('The total count is ' || emp_tdt.count);
    10
    11      emp_tdt.extend;
    12      SELECT ename, job, sal INTO emp_tdt(1).empname, emp_tdt(1).empjob, emp_tdt(1).empsal
    13      FROM emp WHERE empno = 7900;
    14
    15      DBMS_OUTPUT.PUT_LINE ('The total count is ' || emp_tdt.count);
    16
    17 END;
    18 /
    The total count is 13
    The total count is 14
    PL/SQL procedure successfully completed.
    SQL> DECLARE
    2      emp_tdt emp_tbl := emp_tbl ();
    3 BEGIN
    4
    5      emp_tdt.extend;
    6      SELECT ename, job, sal INTO emp_tdt(1).empname, emp_tdt(1).empjob, emp_tdt(1).empsal
    7      FROM emp WHERE empno = 7900;
    8
    9      DBMS_OUTPUT.PUT_LINE ('The total count is ' || emp_tdt.count);
    10
    11      emp_tdt.extend;
    12      SELECT emp_obj(ename, job, sal) BULK COLLECT INTO emp_tdt
    13      FROM emp WHERE sal < 4000;
    14
    15      DBMS_OUTPUT.PUT_LINE ('The total count is ' || emp_tdt.count);
    16 END;
    17 /
    DECLARE
    ERROR at line 1:
    ORA-06530: Reference to uninitialized composite
    ORA-06512: at line 6
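    Both errors come down to the same rule: emp_obj is an object type, so a cursor row can only be fetched into an emp_obj variable if the cursor itself returns emp_obj instances (built with the emp_obj(...) constructor); fetching three scalar columns into the object gives the type-mismatch error. The ORA-06530 in the second block is related: EXTEND adds an atomically NULL element, and its attributes cannot be assigned until the element is initialized with a constructor, whereas BULK COLLECT replaces the whole collection, which is why the first ordering ran. A hedged sketch of the cursor loop with the fetch target matching the element type (assuming the standard EMP table, as above):
    DECLARE
      emp_tdt emp_tbl := emp_tbl();
      CURSOR c2 IS
        SELECT emp_obj(ename, job, sal)   -- construct the object in the query itself
        FROM   emp
        WHERE  sal > 2000;
      emp_rec emp_obj;                    -- now matches the cursor's single column
    BEGIN
      OPEN c2;
      LOOP
        FETCH c2 INTO emp_rec;
        EXIT WHEN c2%NOTFOUND;
        emp_tdt.EXTEND;
        emp_tdt(emp_tdt.COUNT) := emp_rec;
      END LOOP;
      CLOSE c2;

      -- or skip the loop entirely:
      -- SELECT emp_obj(ename, job, sal) BULK COLLECT INTO emp_tdt FROM emp WHERE sal > 2000;

      DBMS_OUTPUT.PUT_LINE('Collected ' || emp_tdt.COUNT || ' rows');
    END;
    /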

  • NLS Error on Second Fetch from Cursor

    Oracle 8i using PRO*C
    We have a UNIX (HP) environment (operational account) in which NLS_LANG was not set for the shell. One of our applications opened a cursor for update and performed its first fetch. After processing, it went back to fetch an additional buffer. At this point the application failed with the following error: "SQLCODE: ORA-01890: NLS error detected". When we set the NLS_LANG environment variable, this error disappeared.
    I need to know what the NLS_LANG environment variable does and why its absence causes the second fetch to fail, so I can argue with the powers that be to have this parameter always set for this account's shell (i.e. globally). No-one here really knows what it does or why it would cause the cursor to fail, so they are telling us to just set the variable in our own application's shell.
    I know the real answer to this is to set it up for the operational (global) shell but...
    Thanks in advance,
    Bill Rosmus

    It is difficult. The main problem is that you can't be sure that the function is called only once for each row.
    Why don't you simply run the cursor and the function separately in PL/SQL?
    DECLARE
      CURSOR myCur IS SELECT myTable1.* FROM myTable1;
      SUBTYPE myRecType IS myCur%ROWTYPE;
      funcResult varchar2(100); /* use the correct return datatype of your function here */
    BEGIN
      FOR myRec IN myCur LOOP
        BEGIN
          funcResult := myPackage.myFunc(myRec.column1);
        EXCEPTION
          WHEN myPackage.myFunc_Exception THEN
            <... do anything when fetched but function myFunc raised this exception ...>
        END;
        <... do anything with the row currently fetched ...>
      END LOOP;
    END;
    You even have more control over where to handle the exception. Also there is one PL/SQL context switch less. Since the function itself is PL/SQL, it could even be faster to run it in PL/SQL than to call it from SQL (inside a SELECT).

  • Help!!! slow fetch from cursor

    I have a problem fetching records from a ref cursor returned by a procedure.
    Basically I play both PL/SQL developer and DBA roles for development and production Oracle 9.2.0.6 databases hosted on separate Sun Solaris 5.8 servers. The problem PL/SQL signature is shown below. It basically constructs a large query dynamically (I will call it the global query), which is a UNION ALL of 16 smaller queries, and opens the cursor parameter for this dynamically constructed query. The entire query is assigned to a VARCHAR2(20000) variable and is normally a little over 15,000 bytes in size. The returned cursor is used to publish the query result records in Crystal Reports. The problem is that the entire process of executing and fetching the result records from the procedure is taking as much as 25 minutes to complete. On investigating the problem by executing the procedure from a PL/SQL block in SQL*Plus, and adding timing constructs around the execution of the procedure and the fetches from the returned cursor, I discovered to my shock that the procedure executes consistently in 1 second (a second is the granularity of the timer), but each record fetch takes a minimum of 16 seconds. All efforts to tune the database memory structures to improve the fetches have yielded very small improvements, bringing the fetch times down to about 11 seconds. This is still unacceptable. Is there anybody out there who can suggest a solution to this problem?
    Procedure signature:
    sp_production_report ( p_result_set IN OUT meap_report.t_reportRefCur,
    p_date_from IN VARCHAR2,
    p_date_to IN VARCHAR2,
    p_agency_code IN INTEGER DEFAULT NULL,
    p_county_code IN INTEGER DEFAULT NULL,
    p_selection IN INTEGER DEFAULT 0);
    Test block in sqlplus:
    declare
    -- Local variables here
    i integer;
    v_start INTEGER;
    v_end INTEGER;
    v_end_fetch INTEGER;
    v_cnt INTEGER := 0;
    v_end_loop INTEGER;
    v_elapsed INTEGER;
    v_cur meap_report.t_reportRefCur;
    v_desc VARCHAR2(300);
    v_hh VARCHAR2(300);
    v_meap INTEGER;
    v_bp INTEGER;
    v_ara INTEGER;
    v_tot INTEGER;
    BEGIN
    -- Test statements here
    DBMS_OUTPUT.ENABLE(100000);
    SELECT TO_NUMBER ( TO_CHAR(SYSDATE, 'SSSSS')) INTO v_start FROM DUAL;
    sp_production_report ( p_result_set => v_cur,
    p_date_from => '07/01/2008',
    p_date_to => '07/31/2008',
    p_selection => 0);
    SELECT TO_NUMBER ( TO_CHAR(SYSDATE, 'SSSSS') ) INTO v_end FROM DUAL;
    FETCH v_cur INTO v_desc, v_hh, v_meap, v_bp, v_ara, v_tot;
    SELECT TO_NUMBER ( TO_CHAR(SYSDATE, 'SSSSS') ) INTO v_end_fetch FROM DUAL;
    WHILE v_cur%FOUND LOOP
    v_cnt := v_cnt + 1;
    FETCH v_cur INTO v_desc, v_hh, v_meap, v_bp, v_ara, v_tot;
    END LOOP;
    SELECT TO_NUMBER ( TO_CHAR(SYSDATE, 'SSSSS') ) INTO v_end_loop FROM DUAL;
    v_elapsed := v_end_loop - v_end;
    DBMS_OUTPUT.PUT_LINE ( 'Procedure (p_selection 0) executed in ' || TO_CHAR ( (v_end - v_start) ) || ' seconds.' );
    DBMS_OUTPUT.PUT_LINE ( 'Fetched 1st record in ' || TO_CHAR ( (v_end_fetch - v_end) ) || ' seconds.' );
    DBMS_OUTPUT.PUT_LINE ( 'Procedure (p_selection 0) :' || TO_CHAR (v_cnt) ||
    ' records fetched in ' || TO_CHAR ( v_elapsed ) || ' seconds.' );
    CLOSE v_cur;
    END;

    And why not use timestamps instead of dates? They get subsecond resolution:
    declare
            t1 timestamp;
            t2 timestamp;
            d1 INTERVAL DAY(3) TO SECOND(3);
    begin
            t1 := systimestamp;
            dbms_lock.sleep(1.45);
            t2 := systimestamp;
            d1 := t2 - t1;
            dbms_output.put_line('Start:    '||t1);
            dbms_output.put_line('End:      '||t2);
            dbms_output.put_line('Duration: '||d1);
    end;
    /
    As for how to speed up your fetch, that depends on how the SQL the cursor is based on is constructed. Without that, you need to refer to Rob's post.
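    If the row-by-row WHILE loop in the test block is itself part of what is being measured, a bulk fetch is also worth a try. Below is a sketch of the same fetch rewritten with BULK COLLECT ... LIMIT; the collection element types mirror the scalar variables in the test block, and the LIMIT of 500 is an arbitrary choice. Note this only trims the per-fetch overhead: if the dynamically built UNION ALL itself has a bad plan, the time simply moves into the first fetch.
    DECLARE
      v_cur  meap_report.t_reportRefCur;
      TYPE t_vc2 IS TABLE OF VARCHAR2(300);
      TYPE t_int IS TABLE OF INTEGER;
      v_desc t_vc2;
      v_hh   t_vc2;
      v_meap t_int;
      v_bp   t_int;
      v_ara  t_int;
      v_tot  t_int;
      v_cnt  PLS_INTEGER := 0;
    BEGIN
      sp_production_report ( p_result_set => v_cur,
                             p_date_from  => '07/01/2008',
                             p_date_to    => '07/31/2008',
                             p_selection  => 0);
      LOOP
        FETCH v_cur BULK COLLECT INTO v_desc, v_hh, v_meap, v_bp, v_ara, v_tot LIMIT 500;
        v_cnt := v_cnt + v_desc.COUNT;
        EXIT WHEN v_desc.COUNT < 500;   -- last (possibly partial) batch fetched
      END LOOP;
      CLOSE v_cur;
      DBMS_OUTPUT.PUT_LINE ( TO_CHAR (v_cnt) || ' records fetched.' );
    END;
    /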

  • Fetch From Cursor

    In my procedure I want to explicitly open the cursor, fetch from it, and then close it again.
    For some testing, I don't want to use something like this:
    CREATE PROCEDURE kk (cur OUT SYS_REFCURSOR)
    AS
    BEGIN
      OPEN cur FOR
        SELECT * FROM table;
    END;
    I need to use something like this instead:
    CREATE PROCEDURE kk
    AS
      CURSOR c IS SELECT * FROM table;  -- need to return this cursor
      ...
    How do I return that cursor?
    Thanks

    maybe something like:
    create or replace procedure get_emp_name as
      cursor c1 is
        select ename from emp;
      vEname emp.ename%type;
    begin
      open c1;
      fetch c1 into vEname;
      if c1%notfound then
         null;  -- no row found
      end if;
      close c1;
    end;
    /
    or
    Create procedure get_emp_name as
      Cursor c1 is
       select *
         from emp;
    begin
      for c1_rec in c1 loop
         dbms_output.put_line('Emp Name: '||c1_rec.ename);
      end loop;
    end;
    /
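    For the original question: a static cursor declared with CURSOR c IS ... cannot be returned to the caller; only a cursor variable (REF CURSOR / SYS_REFCURSOR) can be. If the requirement is both to open/fetch/close explicitly inside the procedure and still hand a result set back, one possible sketch (emp/ename as in the examples above, everything else is a placeholder):
    CREATE OR REPLACE PROCEDURE kk (cur OUT SYS_REFCURSOR)
    AS
      CURSOR c IS
        SELECT ename FROM emp;          -- explicit cursor used internally
      vEname emp.ename%TYPE;
    BEGIN
      -- explicit open / fetch / close for whatever internal processing is needed
      OPEN c;
      FETCH c INTO vEname;
      IF c%FOUND THEN
        DBMS_OUTPUT.PUT_LINE('First employee: ' || vEname);
      END IF;
      CLOSE c;

      -- the caller still gets a result set through the ref cursor parameter
      OPEN cur FOR
        SELECT ename FROM emp;
    END;
    /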

  • Start an executable from ODI when a record is inserted

    Hi All,
    I'm new to ODI and I have a problem. Is it possible for ODI to "feel" when a record is inserted into a mapped Oracle table and then start a program? Could I use shell scripts for this?
    Thanks in advance,
    Teodora

    Hi cdmnagaraj,
    I give you my example and how I've fixed it. Hope that it will help you.
    So... the most important requirements:
    - I have an application that writes a file and a record in a parameter table.
    - When this record appears, I write a record into another application's parameters table.
    - The next step is to start the second application.
    - When the process is done, it deletes the record from the second parameter table. No odi action
    - When I see that there is no record, I must update the status of the parameters record in the first table.
    How I did:
    - wrote a package with the steps, starting with writing the record into the second parameters table and ending with updating the status in the first parameters table
    - generated a scenario for this package
    - wrote another package whose first step waits for the record to appear in the first parameters table
    - the second step is the scenario (just drag and drop it)
    - if the scenario finishes in error I send a mail
    - if the scenario finishes ok I return to the first step.
    Right-click on the first step and mark it as the first step.
    Launch the package. If it finishes successfully, you will see that another log entry appears and waits for new data.
    Regards,
    Teodora

  • Fetching from cursor

    Hello all.
    How can I force SQL*Plus to start fetching at the first cursor record and advance one record at a time?
    My SQL*Plus starts fetching from the second record and advances in intervals of two records,
    so the 4th record is the next one fetched after the second.
    best regards
    yosi sarid.

    Hi,
    could you please show us the where clause of the cursor?
    I think you are missing something.
    Another fault in your code is the x_commit_count: you increment it every time, but you never reset it to zero. So after 100 inserts, it commits after every loop iteration.
    Is this what you want???
    DECLARE
      CURSOR c_cursor IS SELECT * FROM TABLE_A@REMOTE, TABLE_B@REMOTE WHERE...
      x_commit_count NUMBER := 0;
    BEGIN
      FOR r_record IN c_cursor LOOP
        INSERT INTO LOCAL_TABLE;
        x_commit_count := x_commit_count + 1;
        IF x_commit_count >= 100 THEN
          COMMIT;
        ELSE
          NULL;
        END IF;
      END LOOP;
    END;
    HTH
    Detlev
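    If committing every 100 rows really is the intent, the counter also has to be reset after each COMMIT. A minimal sketch of that variant (the remote tables, join condition and column list are placeholders, as in the snippet above):
    DECLARE
      CURSOR c_cursor IS
        SELECT a.id, a.val
        FROM   table_a@remote a
        JOIN   table_b@remote b ON b.id = a.id;   -- placeholder join condition
      x_commit_count NUMBER := 0;
    BEGIN
      FOR r_record IN c_cursor LOOP
        INSERT INTO local_table (id, val) VALUES (r_record.id, r_record.val);
        x_commit_count := x_commit_count + 1;
        IF x_commit_count >= 100 THEN
          COMMIT;
          x_commit_count := 0;   -- reset, otherwise every later row commits
        END IF;
      END LOOP;
      COMMIT;                    -- pick up the last partial batch
    END;
    /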

  • Fetch from cursor variable

    Hello,
    I have a procedure, which specification is something like that:
    procedure proc1 (pcursor OUT SYS_REFCURSOR, parg1 IN NUMBER, parg2 IN NUMBER, ...);
    Inside the body of proc1 I have:
    OPEN pcursor FOR
      SELECT column1,
                  column2,
                  CURSOR (SELECT column1, column2
                                    FROM table2
                                  WHERE <some clauses come here>) icursor1
          FROM table1
       WHERE <some clauses come here>;
    In a PL/SQL block I would like to execute proc1 and then fetch from pcursor. This is what I am doing so far:
    DECLARE
      ldata SYS_REFCURSOR;
      larg1 NUMBER := 123;
      larg2 NUMBER := 456;
      outcolumn1 dbms_sql.Number_Table;
      outcolumn2 dbms_sql.Number_Table;
    BEGIN
      some_package_name.proc1 (ldata, larg1, larg2, ...);
      FETCH ldata BULK COLLECT INTO
        outcolumn1, outcolumn2,...,  *and here is my problem*;
    END;
    /
    How can I rewrite this in order to get the content of icursor1?
    Thanks a lot!

    Verdi wrote:
    How can I rewrite this in order to get the content of icursor1 ?
    Firstly, ref cursors contain no data; they are not result sets but pointers to compiled SQL statements.
    Re: OPEN cursor for large query
    PL/SQL 101 : Understanding Ref Cursors
    Ref cursors are not supposed to be used within PL/SQL or SQL for that matter, though people keep on insisting on doing this for some reason.
    http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10472/static.htm#CIHCJBJJ
    Purpose of Cursor Variables
    You use cursor variables to pass query result sets between PL/SQL stored subprograms and their clients. This is possible because PL/SQL and its clients share a pointer to the work area where the result set is stored.
    A ref cursor is supposed to be passed back to a procedural client language, such as Java or .Net.
    If you want to re-use a SQL statement in multiple other PL/SQL or SQL statements you would use a view.
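    That said, if the nested CURSOR expression really has to be consumed inside PL/SQL, the inner cursor can be fetched into a second cursor variable, exactly like the dept/emp example near the top of this thread. A sketch against proc1, row by row rather than BULK COLLECT (the column names and NUMBER types are assumptions):
    DECLARE
      ldata    SYS_REFCURSOR;
      icursor1 SYS_REFCURSOR;   -- receives the nested CURSOR() column
      lcol1    NUMBER;
      lcol2    NUMBER;
      icol1    NUMBER;
      icol2    NUMBER;
    BEGIN
      some_package_name.proc1 (ldata, 123, 456);
      LOOP
        FETCH ldata INTO lcol1, lcol2, icursor1;
        EXIT WHEN ldata%NOTFOUND;
        LOOP
          FETCH icursor1 INTO icol1, icol2;
          EXIT WHEN icursor1%NOTFOUND;
          DBMS_OUTPUT.put_line (lcol1 || ' -> ' || icol1);
        END LOOP;
      END LOOP;
      CLOSE ldata;
    END;
    /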

  • SAVE EXCEPTIONS when fetching from cursors by BULK COLLECT possible?

    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    Hello,
    I'm using a cursor FETCH with BULK COLLECT INTO mydata...
    Is it possible to SAVE EXCEPTIONS like with FORALL? Or is there any other possibility to handle exceptions during bulk-fetches?
    Regards,
    Martin

    The cursor's SELECT statement uses the TO_DATE(juldat,'J') function (to convert a Julian date value to DATE), but some rows contain an invalid juldat value (leading to ORA-01854).
    I want to handle these rows' exceptions like in FORALL.
    But it could also be any other (non-Oracle, home-made) function within "any" BULK instruction raising (un)wanted exceptions... how can I handle those?
    Martin
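    SAVE EXCEPTIONS belongs to FORALL, not to BULK COLLECT, so a bulk fetch either succeeds or raises as a whole. One common workaround is to bulk-fetch the raw, unconverted column and do the TO_DATE per row inside its own exception handler. A sketch, assuming the raw juldat column can be fetched as-is (mytable is a placeholder name):
    DECLARE
      CURSOR c IS SELECT juldat FROM mytable;
      TYPE t_raw  IS TABLE OF mytable.juldat%TYPE;
      TYPE t_date IS TABLE OF DATE;
      l_raw   t_raw;
      l_dates t_date := t_date();
    BEGIN
      OPEN c;
      LOOP
        FETCH c BULK COLLECT INTO l_raw LIMIT 1000;
        FOR i IN 1 .. l_raw.COUNT LOOP
          BEGIN
            l_dates.EXTEND;
            l_dates(l_dates.COUNT) := TO_DATE(l_raw(i), 'J');
          EXCEPTION
            WHEN OTHERS THEN          -- e.g. ORA-01854: log the bad row and carry on
              l_dates(l_dates.COUNT) := NULL;
          END;
        END LOOP;
        EXIT WHEN l_raw.COUNT < 1000;
      END LOOP;
      CLOSE c;
      DBMS_OUTPUT.PUT_LINE (l_dates.COUNT || ' rows processed.');
    END;
    /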

  • Dynamic Column Name while Fetching from Cursor?

    Scenario:
    I have a table with columns C01, C02, C03, C04, C05, ... and a record type variable att_rec. On the first fetch I fetch the first record into att_rec, and I can access the columns as att_rec.C01, att_rec.C02 and so on. I simply want to access these columns through the following loop, but I am unable to do so: the concatenated name 'ATT_REC.C01' is understood as a string. Any clue???
    IF NOT attdays_cnt%ISOPEN
    THEN
    OPEN attdays_cnt;
    END IF;
    /* Keep fetching until no more records are FOUND */
    FETCH attdays_cnt INTO att_rec;
    message('Cursor Opened');
    WHILE attdays_cnt%FOUND
    LOOP
    a31_cnt:=1; -- reinitializtion of variables for other employees
    p_days:=0;
    w_days:=0;
    WHILE a31_cnt<32 LOOP
    IF concat('ATT_REC.C',to_char(a31_cnt,'00'))='L'='P' THEN
    p_days:=p_days+1;
    ELSE ATT_REC.C01='W' THEN
    w_days:=w_days+1;
    END IF;
    END LOOP;
    Message was edited by:
    Fiz Dosani

    Perhaps this is marginally simpler with nested table, YMMV ;-)
    (based on Elic's example)
    Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    SQL> CREATE TABLE t (i, j, k)
      2  AS
      3     SELECT LEVEL, POWER (LEVEL, 2), POWER (LEVEL, 3)
      4     FROM DUAL
      5     CONNECT BY LEVEL <= 5;
    Table created.
    SQL> CREATE OR REPLACE type ntt_number AS TABLE OF NUMBER;
      2  /
    Type created.
    SQL> SET SERVEROUTPUT ON;
    SQL> DECLARE
      2     TYPE t_tbl IS TABLE OF ntt_number;
      3
      4     var t_tbl;
      5  BEGIN
      6     SELECT ntt_number (i, j, k)
      7     BULK COLLECT INTO var
      8     FROM   t;
      9
    10     FOR i IN 1 .. var.COUNT LOOP
    11        FOR j IN 1 .. var (i).COUNT LOOP
    12           DBMS_OUTPUT.PUT_LINE (
    13              'var (' || i || ') (' || j || ') => ' || var (i) (j));
    14        END LOOP;
    15     END LOOP;
    16  END;
    17  /
    var (1) (1) => 1
    var (1) (2) => 1
    var (1) (3) => 1
    var (2) (1) => 2
    var (2) (2) => 4
    var (2) (3) => 8
    var (3) (1) => 3
    var (3) (2) => 9
    var (3) (3) => 27
    var (4) (1) => 4
    var (4) (2) => 16
    var (4) (3) => 64
    var (5) (1) => 5
    var (5) (2) => 25
    var (5) (3) => 125
    PL/SQL procedure successfully completed.
    SQL>

  • Zend Amf & Flex 3, problem when many records returned

    I have a project with a start dateField (calendar) and an end dateField (calendar). The user chooses a start date and an end date and the database pulls latitude and longitude coordinates for events that occurred between those dates. If the user chooses dates that produce fewer than roughly 11,200 records, it works perfectly.  If the user chooses dates that produce more than that, it produces the following error message:
    faultCode:Client.Error.DeliveryInDoubt
    faultString:'Channel disconnected'
    faultDetail:'Channel disconnected before an acknowledgement was received'
    The server is my local machine (localhost). I'm running:
    MAMP   (which has  Apache 2.0.59)
    Flex 3
    I tried editing the my.cnf (mySQL options file) and increased the max_allowed_packet to 32M, but that didn't work.
    It's a strange problem as the same code works when a relatively small amount of data is returned, but doesn't work when more data is pulled. Could it be some sort of memory or packet limit or a time-out that is called?
    If anyone has any suggestions, please let me know. 
    -Laxmidi

    Richard Bates of flexandair.com figured it out. In my php.ini file, I had the memory limit set at 8M. After changing it to 32M, it worked. Thank you, Richard!
    -Laxmidi

  • Amount field of VK11 isn't fetched from the flat file when performing its BDC

    Dear Guru,
    I have encountered a typical issue.
    I am performing a BDC for VK11 (Create Condition Record) with the key combination "Location, Material Code (Base Price for Longs)".
    While running this BDC (source code attached below) in all-screens mode, every field of type "CHAR", such as
    Condition Type (kschl),
    Plant (werks),
    Material No (matnr),
    Valid From date (datab),
    Valid To date (datbi),
    Rate Unit (konwa), comes through properly from the flat file, except
    Rate (condition amount, KBETR), which is of data type "CURR".
    So I want to know what code I should add to my BDC program below so that the data is fetched properly into the RATE (condition amount) field, which is of type "CURR".
    Please help.
    Source Code:
    REPORT z_bdc_vk11_famd
           NO STANDARD PAGE HEADING LINE-SIZE 255.
    *& DATA-DECLARATION
    TYPES: BEGIN OF t_cust,
                kschl LIKE rv13a-kschl,
                werks LIKE komg-werks,
                matnr LIKE komg-matnr,
                kbetr LIKE konp-kbetr,
    ***            konwa LIKE konp-konwa,
                datab LIKE rv13a-datab,
                datbi LIKE rv13a-datbi,
           END OF t_cust.
    TYPES: BEGIN OF t_sucrec,
             cnum TYPE komg-werks,
             cnam TYPE komg-matnr,
    END OF t_sucrec.
    TYPES: BEGIN OF t_errrec,
           lineno TYPE string,      "Line Number
          message TYPE string,      "Error Message
    END OF t_errrec.
    DATA:  v_file TYPE string,      "Variable for storing flat file
          it_cust TYPE STANDARD TABLE OF t_cust, "Internal table of Customer
          wa_cust LIKE LINE OF it_cust,  "Workarea of Internal table it_cust
        it_sucrec TYPE STANDARD TABLE OF t_sucrec,
                                          "Internal table of Success records
        wa_sucrec LIKE LINE OF it_sucrec,
                                       "Workarea of Internal table it_sucrec
        it_errrec TYPE STANDARD TABLE OF t_errrec,
                                       "Internal table of Error records
        wa_errrec LIKE LINE OF it_errrec,
                                       "Workarea of Internal table it_errrec
        it_bdctab LIKE bdcdata OCCURS 0 WITH HEADER LINE,
                                        "Internal table structure of BDCDATA
    it_messagetab LIKE bdcmsgcoll OCCURS 0 WITH HEADER LINE,
                                        "Tracing Error Messages
           v_date LIKE sy-datum,  "Controlling of session date
          v_index LIKE sy-tabix,  "Index Number
         v_totrec TYPE i,         "Total Records
         v_errrec TYPE i,         "Error Records
         v_sucrec TYPE i,         "Success Records
        v_sesschk TYPE c.         "Session maintenance
    *& SELECTION-SCREEN
    SELECTION-SCREEN: BEGIN OF BLOCK blk1 WITH FRAME TITLE text-001 NO
    INTERVALS.
    PARAMETERS: p_file    TYPE rlgrap-filename.
    "rlgrap-filename is a predefined structure
    SELECTION-SCREEN: END OF BLOCK blk1.
    SELECTION-SCREEN: BEGIN OF BLOCK blk2 WITH FRAME TITLE text-002 NO
    INTERVALS.
    PARAMETERS: p_mode    LIKE ctu_params-dismode DEFAULT 'N',
                p_update  LIKE ctu_params-updmode DEFAULT 'A'.
    SELECTION-SCREEN END OF BLOCK blk2.
    *& INITIALIZATION
    INITIALIZATION.
      v_date = sy-datum - 1.
    *& AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_file
    AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_file.
      CALL FUNCTION 'F4_FILENAME'
        EXPORTING
          program_name  = syst-cprog
          dynpro_number = syst-dynnr
          field_name    = ' '
        IMPORTING
          file_name     = p_file.
    *& START-OF-SELECTION
    START-OF-SELECTION.
      v_file = p_file.
      CALL FUNCTION 'GUI_UPLOAD'
        EXPORTING
          filename                = v_file
          filetype                = 'ASC'
          has_field_separator     = 'X'
        TABLES
          data_tab                = it_cust
        EXCEPTIONS
          file_open_error         = 1
          file_read_error         = 2
          no_batch                = 3
          gui_refuse_filetransfer = 4
          invalid_type            = 5
          no_authority            = 6
          unknown_error           = 7
          bad_data_format         = 8
          header_not_allowed      = 9
          separator_not_allowed   = 10
          header_too_long         = 11
          unknown_dp_error        = 12
          access_denied           = 13
          dp_out_of_memory        = 14
          disk_full               = 15
          dp_timeout              = 16
          OTHERS                  = 17.
      IF sy-subrc = 0.
    ****MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
    ****         WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      ENDIF.
    *& END-OF-SELECTION
    END-OF-SELECTION.
      LOOP AT it_cust INTO wa_cust.
        v_index = sy-tabix.
        PERFORM bdc_dynpro      USING 'SAPMV13A' '0100'.
        PERFORM bdc_field       USING 'BDC_CURSOR'
                                      'RV13A-KSCHL'.
        PERFORM bdc_field       USING 'BDC_OKCODE'
                                      '=ANTA'.
        PERFORM bdc_field       USING 'RV13A-KSCHL'
                                      wa_cust-kschl.
        PERFORM bdc_dynpro      USING 'SAPLV14A' '0100'.
        PERFORM bdc_field       USING 'BDC_CURSOR'
                                      'RV130-SELKZ(01)'.
        PERFORM bdc_field       USING 'BDC_OKCODE'
                                      '=WEIT'.
        PERFORM bdc_dynpro      USING 'SAPMV13A' '1595'.
        PERFORM bdc_field       USING 'BDC_CURSOR'
                                      'RV13A-DATBI(01)'.
        PERFORM bdc_field       USING 'BDC_OKCODE'
                                      '/00'.
        PERFORM bdc_field       USING 'KOMG-WERKS'
                                      wa_cust-werks.
        PERFORM bdc_field       USING 'KOMG-MATNR(01)'
                                      wa_cust-matnr.
        PERFORM bdc_field       USING 'KONP-KBETR(01)'
                                      wa_cust-kbetr.
        PERFORM bdc_field       USING 'KONP-KONWA(01)'
                                      'INR'.
        PERFORM bdc_field       USING 'RV13A-DATAB(01)'
                                      wa_cust-datab.
        PERFORM bdc_field       USING 'RV13A-DATBI(01)'
                                      wa_cust-datbi.
        PERFORM bdc_dynpro      USING 'SAPMV13A' '1595'.
        PERFORM bdc_field       USING 'BDC_CURSOR'
                                      'KOMG-MATNR(01)'.
        PERFORM bdc_field       USING 'BDC_OKCODE'
                                      '=SICH'.
        CALL TRANSACTION 'VK11' USING it_bdctab
                                 MODE p_mode
                               UPDATE p_update
                        MESSAGES INTO it_messagetab.
        IF sy-subrc = 0.
    *& reading success records to corresponding internal table
          READ TABLE it_messagetab WITH KEY msgtyp = 'S'.
          IF sy-subrc = 0.
    *        wa_sucrec-cnum = it_messagetab-msgv1.
            wa_sucrec-cnum = wa_cust-werks.
            wa_sucrec-cnam = wa_cust-matnr.
            APPEND wa_sucrec TO it_sucrec.
            CLEAR wa_sucrec.
          ENDIF.
        ELSE.
    *& reading error records to corresponding internal table
          READ TABLE it_messagetab WITH KEY msgtyp = 'E'.
          IF sy-subrc = 0.
            CALL FUNCTION 'FORMAT_MESSAGE'
              EXPORTING
                id  = sy-msgid
                no  = it_messagetab-msgnr
                v1  = it_messagetab-msgv1
                v2  = it_messagetab-msgv2
                v3  = it_messagetab-msgv3
                v4  = it_messagetab-msgv4
              IMPORTING
                msg = wa_errrec-message.
            wa_errrec-lineno = v_index.
            APPEND wa_errrec TO it_errrec.
            CLEAR wa_errrec.
          ENDIF.
        ENDIF.
        CLEAR : it_bdctab, it_messagetab.
        REFRESH: it_bdctab, it_messagetab.
      ENDLOOP.
      DESCRIBE TABLE it_cust LINES v_totrec.
      DESCRIBE TABLE it_errrec LINES v_errrec.
      DESCRIBE TABLE it_sucrec LINES v_sucrec.
      PERFORM disp_data.
      SKIP 2.
      IF v_sucrec > 0.
        PERFORM disp_success_data.
      ENDIF.
      SKIP 2.
      IF v_errrec > 0.
        PERFORM disp_error_data.
      ENDIF.
    *& Form bdc_dynpro
    *#  text
    *#  -->P_0104 text
    *#  -->P_0105 text
    FORM bdc_dynpro USING program dynpro.
      CLEAR it_bdctab.
      it_bdctab-program  = program.
      it_bdctab-dynpro   = dynpro.
      it_bdctab-dynbegin = 'X'.
      APPEND it_bdctab.
    ENDFORM. " bdc_dynpro
    *& Form bdc_field
    FORM bdc_field USING fnam fval.
      CLEAR it_bdctab.
      it_bdctab-fnam = fnam.
      it_bdctab-fval = fval.
      APPEND it_bdctab.
    ENDFORM. " bdc_field
    *& Form disp_data
    FORM disp_data .
      ULINE (45).
      WRITE : / sy-vline,
      4 'FAMD Price Master UPDATE SUMMARY'(004) COLOR 1,
      45 sy-vline.
      ULINE /(45).
      WRITE : / sy-vline,
      'Total Records Processed'(007),
      28 '=',
      30 v_totrec,
      45 sy-vline,
      / sy-vline,
      'Error Records'(005),
      28 '=',
      30 v_errrec,
      45 sy-vline,
      / sy-vline,
      'Successful Records'(006),
      28 '=',
      30 v_sucrec,
      45 sy-vline.
      ULINE /(45).
    ENDFORM. " disp_data
    *& Form disp_success_data
    FORM disp_success_data .
      ULINE (45).
      WRITE : / sy-vline,
      14 'Successful Records'(012) COLOR 1,
      45 sy-vline.
      ULINE /(45).
      WRITE : / sy-vline ,
      'Plant Number'(010) COLOR 2,
      17 sy-vline,
      25 'Material Number'(011) COLOR 2,
      45 sy-vline.
      ULINE /(45).
      LOOP AT it_sucrec INTO wa_sucrec.
        WRITE: / sy-vline ,
        wa_sucrec-cnum,
        17 sy-vline,
        19 wa_sucrec-cnam,
        45 sy-vline.
      ENDLOOP.
      ULINE /(45).
    ENDFORM. " disp_success_data
    *& Form disp_error_data
    FORM disp_error_data .
      ULINE (90).
      WRITE : / sy-vline,
      35 'Error Records'(013) COLOR 1,
      90 sy-vline.
      ULINE /(90).
      WRITE : / sy-vline,
      'Record Number'(008) COLOR 2,
      sy-vline,
      37 'Reason for error'(009) COLOR 2,
      90 sy-vline.
      ULINE /(90).
      LOOP AT it_errrec INTO wa_errrec.
        WRITE : / sy-vline,
        wa_errrec-lineno,
        17 sy-vline,
        wa_errrec-message,
        90 sy-vline.
      ENDLOOP.
      ULINE /(90).
    ENDFORM. " disp_error_data
    Flat file Sequence:
    Condition Type     Plant     Material No     Rate      Validity start date     Validity end date

    I worked it out and I have found the solution.

  • Crystal Report Alerts not firing when no records are fetched from the DB

    Hello,
    The Crystal Reports alert I have created in the report, for the event of no records being fetched by the query, is not firing. The condition used is isnull(count(DB Field)).
    Is there a limitation with alerts such that they only fire when some records are fetched in the report?
    Appreciate any pointers
    -Jayakrishnan

    hi Jayakrishnan,
    as alerts require records to be returned, here's what you will need to do:
    1) delete your current alert
    2) create a new formula with syntax like
                  isnull(DistinctCount ()) or DistinctCount () = 0
    3) create a new Subreport (which you will put in a report header)
    4) the subreport can be based off of any table
    5) have the subreport record selection always return only 1 record...for performance reasons
    6) change the subreport link to be based on the new formula
    7) the link will be a one way link in that you will not use the "Select data in subreport based on field" option
    8) now in the subreport, create the Alert based on the parameter created by the subreport link
    i have tested this and it works great.
    jamie

  • Optimal number of records to fetch from Forte Cursor

    Hello everybody:
    I'd like to ask a very important question.
    I opened a Forte cursor with approx 1.2 million records, and now I am trying
    to figure out the number of records per fetch needed to obtain
    acceptable performance.
    To my surprise, fetching 100 records at once gave me only an approx 15 percent
    performance gain in comparison with fetching records one by one.
    I haven't found a significant difference in performance fetching 100, 500 or
    10,000 records at once. At the same time, fetching 20,000
    records at once makes performance approx 20% worse (a fact I cannot
    explain).
    Does anybody have any experience in how to improve performance when fetching from
    a Forte cursor with a big number of rows?
    Thank you in advance
    Genady Yoffe
    Software Engineer
    Descartes Systems Group Inc
    Waterloo On
    Canada

    You can do it by writing code in the start routine of your transformations.
    1. If you have any specific criteria for filtering, go with that and delete unwanted records.
    2. If you want to load a specific number of records based on a count, then in the start routine of the transformations loop through the source package records, keeping a counter until you reach your desired count, and copy those records into an internal table.
    Delete the records in the source package, then assign the records stored in the internal table back to the source package.

  • RE: (forte-users) Optimal number of records to fetch fromForte Cursor

    Guys,
    The behavior (1 fetch of 20,000 vs 2 fetches of 10,000 each) may also be DBMS
    related. There is potentially high overhead in opening a cursor and initially
    fetching the result table. I know this covers a great deal of DBMS technology
    territory, but one explanation is that the same physical pages may have to
    be read twice when performing the query in 2 fetches as compared to doing it in
    one shot. Physical IO is perhaps the most expensive (vis-à-vis resources)
    part of a query. Just a thought.
    "Rottier, Pascal" <[email protected]> on 11/15/99 01:34:22 PM
    To: "'Forte Users'" <[email protected]>
    cc: (bcc: Charlie Shell/Bsg/MetLife/US)
    Subject: RE: (forte-users) Optimal number of records to fetch from Forte Cursor
    The reason why a single fetch of 20,000 records performs worse than
    2 fetches of 10,000 might be related to memory behaviour. Do you
    keep the first 10,000 records in memory when you fetch the next
    10,000? If not, then a single fetch of 20,000 records requires more
    memory than 2 fetches of 10,000. You might have some extra overhead
    of Forte requesting additional memory from the OS, garbage
    collections just before every request for memory, and maybe even
    the OS swapping some memory pages to disk.
    This behaviour can be controlled by modifying the Minimum memory
    and Maximum memory of the partition, as well as the memory chunk
    size Forte uses to increment its memory.
    Upon partition startup, Forte requests the Minimum memory from the
    OS. Within this area, the actual memory being used grows until
    it hits the ceiling of this space. This is when the garbage collector
    kicks in and removes all unreferenced objects. If this does not suffice
    to store the additional data, Forte requests 1 additional chunk of a
    predefined size. Now the same behaviour is repeated in this slightly
    larger piece of memory. Actual memory keeps growing until it hits
    the ceiling, upon which the garbage collector removes all unreferenced
    objects. If the garbage collector reduces the amount of
    memory being used to below the original Minimum memory, Forte
    will NOT return the additional chunk of memory to the OS. If the
    garbage collector fails to free enough memory to store the new data,
    Forte will request an additional chunk of memory. This process is
    repeated until the Maximum memory is reached. If the garbage
    collector fails to free enough memory at this point, the process
    terminates gracelessly (which is what happens sooner or later when
    you have a memory leak; something most Forte developers have
    seen once or twice).
    Pascal Rottier
    STP - MSS Support & Coordination Group
    Philip Morris Europe
    e-mail: [email protected]
    Phone: +49 (0)89-72472530
    +++++++++++++++++++++++++++++++++++
    Origin IT-services
    Desktop Business Solutions Rotterdam
    e-mail: [email protected]
    Phone: +31 (0)10-2428100
    +++++++++++++++++++++++++++++++++++
    /* All generalizations are false! */
    -----Original Message-----
    From: [email protected] [SMTP:[email protected]]
    Sent: Monday, November 15, 1999 6:53 PM
    To: [email protected]
    Subject: (forte-users) Optimal number of records to fetch from Forte
    Cursor
    Hello everybody:
    I 'd like to ask a very important question.
    I opened Forte cursor with approx 1.2 million records, and now I am trying
    to figure out the number of records per fetch to obtain
    the acceptable performance.
    To my surprise, fetching 100 records at once gave me only an approx 15 percent
    performance gain in comparison with fetching records one by one.
    I haven't found a significant difference in performance fetching 100, 500
    or 10,000 records at once. At the same time, fetching 20,000
    records at once makes performance approx 20% worse (a fact I cannot
    explain).
    Does anybody have any experience in how to improve performance fetching
    from a Forte cursor with a big number of rows?
    Thank you in advance
    Genady Yoffe
    Software Engineer
    Descartes Systems Group Inc
    Waterloo On
    Canada
    For the archives, go to: http://lists.sageit.com/forte-users and use
    the login: forte and the password: archive. To unsubscribe, send in a new
    email the word: 'Unsubscribe' to: [email protected]

    Hi Kieran,
    According to your description, you are trying to figure out the optimal number of records per partition, right? As per my understanding, this number depends on your hardware: the better the hardware you have, the more records per partition.
    An earlier version of the SQL Server 2005 Analysis Services Performance Guide stated this:
    "In general, the number of records per partition should not exceed 20 million. In addition, the size of a partition should not exceed 250 MB."
    Besides, the number of records is not the primary concern here. Rather, the main criteria are manageability and processing performance. Partitions can be processed in parallel, so the more there are, the more can be processed at once. However, the more partitions you have, the more things you have to manage. Here are some links which describe partition optimization:
    http://blogs.msdn.com/b/sqlcat/archive/2009/03/13/analysis-services-partition-size.aspx
    http://www.informit.com/articles/article.aspx?p=1554201&seqNum=2
    Regards,
    Charlie Liao
    TechNet Community Support
