Cursor Fetch into Record vs Record Components

Anyone,
What are the pros and cons of the cursor method that fetches into a single record variable vs. the one that fetches into individual record components?
I am just not seeing why you might use one vs. the other, so I was curious if there is something I am missing in regards to performance or temp space required or anything like that?
Thanks for any help,
Miller

You should fetch into a record when possible. It is easier to code and read.
Cursor FOR loops are great:
for c_rec in (select a, b, c from t1, t2 where t1.a = t2.a) loop
  dbms_output.put_line(c_rec.b || c_rec.a);
end loop;
In the above case you don't even have to declare c_rec!
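For contrast, here is a minimal sketch of the two explicit styles the question asks about (assuming a hypothetical table t1 with columns a and b); either way the same row comes back, and the difference is mainly coding convenience rather than performance or temp space:
declare
  cursor c1 is select a, b from t1;
  r   c1%rowtype;           -- style 1: one record shaped like the cursor row
  v_a t1.a%type;            -- style 2: one variable per column (the components)
  v_b t1.b%type;
begin
  open c1;
  fetch c1 into r;          -- fetch into the record
  close c1;
  open c1;
  fetch c1 into v_a, v_b;   -- fetch into the individual variables
  close c1;
end;
/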
Tom Best

Similar Messages

  • Cursor fetch into

    I have a function to which I pass a SQL statement. In the function, using a cursor, I get the output of the SQL statement into an output variable. This function works fine, but for one particular select statement it takes over an hour to execute, specifically in the FETCH...INTO statement. The same select statement, when executed outside in a SQL editor, returns in under 2 mins.
    The output of the SQL is 5 records. Any ideas on why it is taking forever in the FETCH...INTO statement?
    FUNCTION getOutput(p_sql VARCHAR2)
      RETURN VARCHAR2
    AS
      TYPE extractlinecurtype IS REF CURSOR;
      cur_extract      extractlinecurtype;
      v_oneextractline VARCHAR2 (32767);
      v_output         VARCHAR2 (32767) := NULL;
      crlf             CHAR (2) := CHR (10) || CHR (13);  -- note: this is LF+CR, not CR+LF
    BEGIN
      OPEN cur_extract FOR p_sql;
      dbms_output.put_line(sysdate);
      LOOP
        FETCH cur_extract INTO v_oneextractline;
        EXIT WHEN cur_extract%NOTFOUND;
        v_output := v_output || CHR (10) || v_oneextractline;
      END LOOP;
      IF cur_extract%ISOPEN THEN
        CLOSE cur_extract;
      END IF;
      v_output := v_output || crlf || crlf;
      RETURN v_output;
    EXCEPTION
      WHEN OTHERS THEN
        dbms_output.put_line('error in getoutput' || SQLCODE || SQLERRM);
        RETURN NULL;
    END getOutput;

    I cannot send the SQL because it uses our internal tables. My question is: why does a query take under 2 mins to run in a SQL editor and over an hour in PL/SQL with a cursor?
    Is there something wrong in the declaration?
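    One thing worth ruling out is the row-at-a-time fetch itself. A minimal sketch of the same loop as a single bulk fetch (a sketch only, assuming each returned line still fits in VARCHAR2(32767); the query string here is a stand-in for p_sql):
    DECLARE
      TYPE t_lines IS TABLE OF VARCHAR2(32767);
      v_lines  t_lines;
      v_output VARCHAR2(32767);
      cur      SYS_REFCURSOR;
    BEGIN
      OPEN cur FOR 'select dummy from dual';  -- stand-in for p_sql
      FETCH cur BULK COLLECT INTO v_lines;    -- one fetch for all rows
      CLOSE cur;
      FOR i IN 1 .. v_lines.COUNT LOOP
        v_output := v_output || CHR(10) || v_lines(i);
      END LOOP;
    END;
    /
    If the bulk version is just as slow, the time is in the query itself (for example a different execution plan under PL/SQL), not in the fetch loop.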

  • Dynamic query and Open Cursor fetch into ??

    Hi;
    I think we took a wrong turn somewhere... because it looks like a dead end...
    What I need to do:
    Create a procedure to compare two generic tables in different databases and find rows that have different data.
    (No restrictions on what tables it might compare)
    What I have done:
    CREATE OR REPLACE PROCEDURE COMPARE_TABLES_CONTENT_V3(
      tableName_A IN VARCHAR2, tableName_B IN VARCHAR2, filter IN VARCHAR2) AS
      -- (declarations were trimmed in the original post; minimal versions restored here)
      TYPE t_cols IS TABLE OF all_tab_columns.column_name%TYPE;
      v_cols1         t_cols;
      v_tab1          VARCHAR2(128) := tableName_A;
      v_own1          VARCHAR2(128);
      compareCollumns VARCHAR2(32767);
      getDiffQuery    VARCHAR2(32767);
    BEGIN
      -- Get all columns present in [TableName_A]
      -- This is necessary to create a dynamic query
      select column_name bulk collect
        into v_cols1
        from all_tab_columns
       where table_name = v_tab1
         and owner = nvl(v_own1, user)
       order by column_id;
      -- If there are no columns in table A there is no need to proceed
      if v_cols1.count = 0 then
        dbms_output.put_line('[Error] reading table ' || tableName_A ||
                             ' columns information. Exit...');
        return;
      end if;
      -- Collect the column names, ',' separated, for the query
      for i in 1 .. v_cols1.count loop
        compareCollumns := compareCollumns || ',' || v_cols1(i);
      end loop;
      compareCollumns := ltrim(compareCollumns, ',');  -- drop the leading comma
      -- Create the query that gives the differences....
      getDiffQuery := 'select ' || compareCollumns ||
                      ' , count(src1) CNT1, count(src2) CNT2 from (select a.*, 1 src1, to_number(null) src2 FROM ' ||
                      tableName_A ||
                      ' a union all select b.*, to_number(null) src1, 2 src2 from ' ||
                      tableName_B || ' b) group by ' || compareCollumns ||
                      ' having count(src1) <> count(src2)';
    What's the problem?
    I'm lost... I can access the query using debugging on the Oracle procedure... and run it OK in a new SQL window. No problem here... it's perfect.
    But I don't know how to access this information inside the procedure, and I need to format the information a bit...
    I tried a cursor...
    open vCursor for getDiffQuery;
    fetch vCursor into ???????  -- Into what? The query, columns and tables are all dynamic...
    close vCursor;
    Any ideas..

    Making the issue simpler....
    At this point I have an Oracle procedure that generates a dynamic SQL query, based on the table names passed by parameter.
    getDiffQuery := 'select ' || compareCollumns || (.....)
    end procedure;
    This ''getDiffQuery'' contains a query that gets the differences in the tables. (Working fine)
    I would like to access this information in the same procedure.
    I can't use a static cursor because the table, the columns... actually everything is dynamic, based on the generated query :(!
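    Since the shape of the result is unknown at compile time, one option is to hand the opened cursor to DBMS_SQL and describe it at runtime. A minimal sketch (assumptions: Oracle 11g or later for DBMS_SQL.TO_CURSOR_NUMBER, and the query string below standing in for the generated getDiffQuery):
    DECLARE
      getDiffQuery VARCHAR2(32767) := 'select * from dual';  -- stand-in for the generated query
      vCursor      SYS_REFCURSOR;
      c            INTEGER;
      col_cnt      INTEGER;
      rec_tab      DBMS_SQL.DESC_TAB;
    BEGIN
      OPEN vCursor FOR getDiffQuery;
      c := DBMS_SQL.TO_CURSOR_NUMBER(vCursor);  -- 11g+; vCursor must not be used afterwards
      DBMS_SQL.DESCRIBE_COLUMNS(c, col_cnt, rec_tab);
      FOR i IN 1 .. col_cnt LOOP
        DBMS_OUTPUT.PUT_LINE('column ' || i || ': ' || rec_tab(i).col_name);
      END LOOP;
      DBMS_SQL.CLOSE_CURSOR(c);
    END;
    /
    From there you can define and fetch each column generically; the DBMS_SQL example in the next message shows that part in full.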

  • How to fetch into any type of record

    Hi,
    I have a problem. I have to create a procedure to which any query can be given; it should execute the query and then fetch into a record created based on the cursor. Here is the code:
    procedure execute_query(p_sql varchar2) is
      type t_rc is ref cursor;
      l_rc t_rc;
      v_t_record l_rc%rowtype;  -- this declaration is the sticking point: a ref cursor variable has no %ROWTYPE
    begin
      --dbms_lock.sleep(10);
      open l_rc for p_sql;
      loop
        fetch l_rc into v_t_record;
        dbms_output.put_line(v_t_record.object_name);
        exit when l_rc%notfound;
      end loop;
      v_query_row_count := l_rc%rowcount;
      -- dbms_output.put_line(v_query_row_count);
    end execute_query;
    constraints:
    I can't specify a return clause in the ref cursor.
    I have to fetch into records from different queries.
    thanks
    Regards
    nick
    Edited by: Nick Naughty on Dec 21, 2008 5:16 AM

    Yes, as I already mentioned, you could use DBMS_SQL:
    create or replace
      procedure p1(
                   p_query varchar2
                  )
      is
          c           number;
          d           number;
          col_cnt     integer;
          f           boolean;
          rec_tab     dbms_sql.desc_tab;
          v_number    number;
          v_string    varchar2(4000);
          v_date      date;
          v_rownum    number;
      begin
          c := dbms_sql.open_cursor;
          dbms_sql.parse(c,p_query, dbms_sql.native);
          dbms_sql.describe_columns(c,col_cnt,rec_tab);
          for col_num in 1..rec_tab.count loop
            if rec_tab(col_num).col_type = 1
              then
                dbms_sql.define_column(c,col_num,v_string,rec_tab(col_num).col_max_len);
            elsif rec_tab(col_num).col_type = 2
              then
                dbms_sql.define_column(c,col_num,v_number);
            elsif rec_tab(col_num).col_type = 12
              then
                dbms_sql.define_column(c,col_num,v_date);
              else raise_application_error(-20900,'unsupported data type');
            end if;
          end loop;
          d := dbms_sql.execute(c);
          v_rownum := 0;
          loop
            exit when dbms_sql.fetch_rows(c) = 0;
            v_rownum := v_rownum + 1;
            dbms_output.put_line('row ' || v_rownum);
            for col_num in 1..rec_tab.count loop
              if rec_tab(col_num).col_type = 1
                then
                  dbms_sql.column_value(c,col_num,v_string);
                  dbms_output.put_line('  ' || rec_tab(col_num).col_name || ' = ' || v_string);
              elsif rec_tab(col_num).col_type = 2
                then
                  dbms_sql.column_value(c,col_num,v_number);
                  dbms_output.put_line('  ' || rec_tab(col_num).col_name || ' = ' || v_number);
              elsif rec_tab(col_num).col_type = 12
                then
                  dbms_sql.column_value(c,col_num,v_date);
                  dbms_output.put_line('  ' || rec_tab(col_num).col_name || ' = ' || v_date);
                else
                  raise_application_error(-20900,'unsupported data type');
              end if;
            end loop;
          end loop;
          dbms_sql.close_cursor(c);
        exception
          when others
            then
              if dbms_sql.is_open(c)
                then
                  dbms_sql.close_cursor(c);
              end if;
              raise;
    end;
    set serveroutput on format wrapped
    exec p1('select ename,sal,hiredate from emp');
    row 1
      ENAME = SMITH
      SAL = 800
      HIREDATE = 17-DEC-80
    row 2
      ENAME = ALLEN
      SAL = 1600
      HIREDATE = 20-FEB-81
    row 3
      ENAME = WARD
      SAL = 1250
      HIREDATE = 22-FEB-81
    row 4
      ENAME = JONES
      SAL = 2975
      HIREDATE = 02-APR-81
    row 5
      ENAME = MARTIN
      SAL = 1250
      HIREDATE = 28-SEP-81
    row 6
      ENAME = BLAKE
      SAL = 2850
      HIREDATE = 01-MAY-81
    row 7
      ENAME = CLARK
      SAL = 2450
      HIREDATE = 09-JUN-81
    row 8
      ENAME = SCOTT
      SAL = 3000
      HIREDATE = 19-APR-87
    row 9
      ENAME = KING
      SAL = 5000
      HIREDATE = 17-NOV-81
    row 10
      ENAME = TURNER
      SAL = 1500
      HIREDATE = 08-SEP-81
    row 11
      ENAME = ADAMS
      SAL = 1100
      HIREDATE = 23-MAY-87
    row 12
      ENAME = JAMES
      SAL = 950
      HIREDATE = 03-DEC-81
    row 13
      ENAME = FORD
      SAL = 3000
      HIREDATE = 03-DEC-81
    row 14
      ENAME = MILLER
      SAL = 1300
      HIREDATE = 23-JAN-82
    PL/SQL procedure successfully completed.
    SY.

  • Fetch from cursor when no records returned

    Hi,
    I've got the following question / problem?
    When I do a fetch from a cursor in my for loop and the cursor returns no record, my variable 'r_item' keeps the value of the previously fetched record. Shouldn't it contain null if no record is found and I do a fetch after I have closed and opened the cursor? Is there a way to clear the variable before each fetch?
    Below you find an example code
    CURSOR c_item (itm_id NUMBER) IS
      SELECT DISTINCT col1 FROM table1
      WHERE id = itm_id;
    r_item  c_item%ROWTYPE;
    FOR r_get_items IN c_get_items LOOP
      IF r_get_items.ENABLE = 'N' THEN
        OPEN c_item(r_get_items.ITMID);
        FETCH c_item INTO r_item;
        CLOSE c_item;
        IF r_item.ACCES = 'E' THEN
          action1
        ELSE
          action2
        END IF;
      END IF;
    END LOOP;
    Thanx
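    For what it's worth, a fetch that hits no row leaves the record untouched, so clearing it yourself works. A minimal sketch (hypothetical tables, and assuming the cursor really selects the ACCES column tested in the IF):
    DECLARE
      CURSOR c_item (itm_id NUMBER) IS
        SELECT DISTINCT acces FROM table1 WHERE id = itm_id;
      r_item  c_item%ROWTYPE;
      r_empty c_item%ROWTYPE;   -- never populated, so every component stays NULL
    BEGIN
      FOR r_get_items IN (SELECT itmid, enable FROM items_tab) LOOP
        IF r_get_items.enable = 'N' THEN
          r_item := r_empty;    -- clear leftovers from the previous iteration
          OPEN c_item(r_get_items.itmid);
          FETCH c_item INTO r_item;
          CLOSE c_item;
          IF r_item.acces = 'E' THEN
            NULL;  -- action1
          ELSE
            NULL;  -- action2
          END IF;
        END IF;
      END LOOP;
    END;
    /
    Alternatively, test c_item%NOTFOUND right after the fetch instead of relying on the record's contents.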

    DECLARE
        CURSOR c_dept IS
          SELECT d.deptno
          ,      d.dname
          ,      d.loc
          ,      CURSOR (SELECT empno
                         ,      ename
                         ,      job
                         ,      hiredate
                         FROM   emp e
                         WHERE  e.deptno = d.deptno)
          FROM   dept d;
        TYPE refcursor IS REF CURSOR;
        emps refcursor;
        deptno dept.deptno%TYPE;
        dname dept.dname%TYPE;
        empno emp.empno%TYPE;
        ename emp.ename%TYPE;
        job emp.job%TYPE;
        hiredate emp.hiredate%TYPE;
        loc dept.loc%TYPE;
    BEGIN
       OPEN c_dept;
       LOOP
         FETCH c_dept INTO deptno, dname, loc, emps;
         EXIT WHEN c_dept%NOTFOUND;
         DBMS_OUTPUT.put_line ('Department : ' || dname);
         LOOP
           FETCH emps INTO empno, ename, job, hiredate;
           EXIT WHEN emps%NOTFOUND;
           DBMS_OUTPUT.put_line ('-- Employee : ' || ename);
         END LOOP;
      END LOOP;
      CLOSE c_dept;
    END;
    /
    like this...

  • How to fetch n records at a time from a FM and then fetch next n records

    I have a report program which is calling a function module. This function module returns all the records from a table. As the table has millions of records, they can't all be returned at once. How can we return N records from the function module and then return the next N records? The function module and the report program are in different systems.
    Thanks in Advance.

    Use open cursor and fetch cursor.
    Check the program as in SAP Help.
    DATA: BEGIN OF count_line,
            carrid TYPE spfli-carrid,
            count  TYPE i,
          END OF count_line,
          spfli_tab TYPE TABLE OF spfli.
    DATA: dbcur1 TYPE cursor,
          dbcur2 TYPE cursor.
    OPEN CURSOR dbcur1 FOR
      SELECT carrid count(*) AS count
             FROM spfli
             GROUP BY carrid
             ORDER BY carrid.
    OPEN CURSOR dbcur2 FOR
      SELECT *
             FROM spfli
             ORDER BY carrid.
    DO.
      FETCH NEXT CURSOR dbcur1 INTO count_line.
      IF sy-subrc <> 0.
        EXIT.
      ENDIF.
      FETCH NEXT CURSOR dbcur2
        INTO TABLE spfli_tab PACKAGE SIZE count_line-count.
    ENDDO.
    CLOSE CURSOR: dbcur1,
                  dbcur2.
    Regards
    Kannaiah

  • Delta package not fetching all records from Delta queue in r/3

    Hello,
    I have loaded Goods Movement Data using the 2LIS_03_BF datasource into my BI system.
    The delta has been initialized, and every day the delta is moved from R/3.
    I observed that when I execute my delta package, not all delta records are fetched into the PSA from R/3.
    For example: before starting my delta package I checked SMQ1 in my R/3 system and saw around 1000 records. On executing the delta package I see that the record count is reduced from 1000 to 400 in SMQ1. On executing the delta package again I get the remaining 400 records into my PSA.
    Shouldn't the delta package get all records from the queue in a single execution??
    Is there some setting to define the number of records to be loaded?
    I'm confused by this behaviour. Please help me understand it.
    Thank You.

    Hello,
    First thing: the data is not transferred from the SMQ1 queue; rather, the data is transferred to BW from the RSA7 delta queue. You need to check the number of records present in the RSA7 queue.
    Since SMQ1 is involved, I think you are using the unserialized V3 update method. In this method, when data is first written to the application tables, it is first transferred to the SMQ1 update queue, then via a job to the LBWQ extractor queue, and then to the RSA7 delta queue. So the number of entries that you see in the SMQ1 queue is not the number of entries that have to be transferred to BW, but rather the records that are waiting to be transferred to the extractor queue in LBWQ. Similarly, in LBWQ, the number of entries displayed is not the number of entries that are going to be transferred to BW; it is the number of entries that will be transferred to the delta queue RSA7 when the next V3 update job runs.
    If you want to check the number of records that will be transferred to BW, select the datasource in RSA7 and then click on the display data entries button.
    Hope this helps.
    Regards.

  • Fetching 10 records at a time

    The Product.Category dimension has 4 child nodes: Accessories, Bikes, Clothing and Components. My problem is that when I have thousands of first-level nodes my application takes a lot of time to load. Is there a way to fetch only, say, 100 records at a time? So then when I click a next button I get the next 100.
    Eg: On the 1st click of a button I fetch 2 members
    WITH MEMBER [Measures].[ChildrenCount] AS
    [Product].[Category].CurrentMember.Children.Count
    SELECT [Measures].[ChildrenCount] ON 1
    ,TopCount([Product].[Category].Members, 2) on 0
    FROM [Adventure Works]
    This fetches only Accessories. Is there a way to fetch the next two records, Bikes and Clothing, on the next click?
    Then Components on the click after that. And so on and so forth.

    Hi Tsunade,
    According to your description, there are thousands of members in your cube. It takes a long time to retrieve all the members at once, so in order to improve performance you are looking for a way to fetch 10 records at a time, right? Based on my research, there is no such functionality to work around this requirement currently.
    If you have any concerns about this behavior, you can submit feedback at
    http://connect.microsoft.com/SQLServer/Feedback and hope it is resolved in the next release of the service pack or product. Your feedback enables Microsoft to make its software and services the best that they can be, and Microsoft might consider adding this feature in a following release after official confirmation.
    Regards,
    Charlie Liao
    TechNet Community Support

  • Best way to Fetch the record

    Hi,
    Please suggest the best way to fetch records from the table designed below. It is Oracle 10gR2 on Linux.
    Whenever a client visits the office, a record is created for him. The company policy is to maintain 10 years of data in the transaction table, and the table accumulates about 3 million records per year.
    The table has the following key columns for the SELECT (sample table):
    Client_Visit
    ID NUMBER(12,0) -- sequence-generated number
    EFF_DTE DATE -- effective date of the customer (sometimes the client becomes invalid and then becomes valid again)
    Create_TS TIMESTAMP(6)
    Client_ID NUMBER(9,0)
    Cascade_Flg VARCHAR2(1)
    On most of the reports the records are fetched by MAX(eff_dte) and MAX(create_ts) and cascade flag = 'Y'.
    I have the following queries, but both of them are not cost effective and take 8 minutes to display the records.
    Code 1:
    SELECT au_subtyp1.au_id_k,
           au_subtyp1.pgm_struct_id_k
      FROM au_subtyp au_subtyp1
     WHERE au_subtyp1.create_ts =
              (SELECT MAX (au_subtyp2.create_ts)
                 FROM au_subtyp au_subtyp2
                WHERE au_subtyp2.au_id_k = au_subtyp1.au_id_k
                  AND au_subtyp2.create_ts < TO_DATE ('2013-01-01', 'YYYY-MM-DD')
                  AND au_subtyp2.eff_dte =
                         (SELECT MAX (au_subtyp3.eff_dte)
                            FROM au_subtyp au_subtyp3
                           WHERE au_subtyp3.au_id_k = au_subtyp2.au_id_k
                             AND au_subtyp3.create_ts < TO_DATE ('2013-01-01', 'YYYY-MM-DD')
                             AND au_subtyp3.eff_dte <= TO_DATE ('2012-12-31', 'YYYY-MM-DD')))
       AND au_subtyp1.exists_flg = 'Y'
    Explain Plan
    Plan hash value: 2534321861
    | Id  | Operation                | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |           |     1 |    91 |       | 33265   (2)| 00:06:40 |
    |*  1 |  FILTER                  |           |       |       |       |            |          |
    |   2 |   HASH GROUP BY          |           |     1 |    91 |       | 33265   (2)| 00:06:40 |
    |*  3 |    HASH JOIN             |           |  1404K|   121M|    19M| 33178   (1)| 00:06:39 |
    |*  4 |     HASH JOIN            |           |   307K|    16M|  8712K| 23708   (1)| 00:04:45 |
    |   5 |      VIEW                | VW_SQ_1   |   307K|  5104K|       | 13493   (1)| 00:02:42 |
    |   6 |       HASH GROUP BY      |           |   307K|    13M|   191M| 13493   (1)| 00:02:42 |
    |*  7 |        INDEX FULL SCAN   | AUSU_PK   |  2809K|   125M|       | 13493   (1)| 00:02:42 |
    |*  8 |      INDEX FAST FULL SCAN| AUSU_PK   |  2809K|   104M|       |  2977   (2)| 00:00:36 |
    |*  9 |     TABLE ACCESS FULL    | AU_SUBTYP |  1404K|    46M|       |  5336   (2)| 00:01:05 |
    Predicate Information (identified by operation id):
       1 - filter("AU_SUBTYP1"."CREATE_TS"=MAX("AU_SUBTYP2"."CREATE_TS"))
       3 - access("AU_SUBTYP2"."AU_ID_K"="AU_SUBTYP1"."AU_ID_K")
       4 - access("AU_SUBTYP2"."EFF_DTE"="VW_COL_1" AND "AU_ID_K"="AU_SUBTYP2"."AU_ID_K")
       7 - access("AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd
                  hh24:mi:ss') AND "AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
           filter("AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND
                  "AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       8 - filter("AU_SUBTYP2"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
       9 - filter("AU_SUBTYP1"."EXISTS_FLG"='Y')
    Code 2:
    I already raised a thread a week back and Dom suggested the following query; it is cost effective, but the performance is the same and it uses the same amount of temp tablespace.
    select au_id_k,pgm_struct_id_k from (
    SELECT au_id_k
          ,      pgm_struct_id_k
          ,      ROW_NUMBER() OVER (PARTITION BY au_id_k ORDER BY eff_dte DESC, create_ts DESC) rn,
          create_ts, eff_dte,exists_flg
          FROM   au_subtyp
          WHERE  create_ts < TO_DATE('2013-01-01','YYYY-MM-DD')
          AND    eff_dte  <= TO_DATE('2012-12-31','YYYY-MM-DD') 
          ) d  where rn =1   and exists_flg = 'Y'
    --Explain Plan
    Plan hash value: 4039566059
    | Id  | Operation                | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |           |  2809K|   168M|       | 40034   (1)| 00:08:01 |
    |*  1 |  VIEW                    |           |  2809K|   168M|       | 40034   (1)| 00:08:01 |
    |*  2 |   WINDOW SORT PUSHED RANK|           |  2809K|   133M|   365M| 40034   (1)| 00:08:01 |
    |*  3 |    TABLE ACCESS FULL     | AU_SUBTYP |  2809K|   133M|       |  5345   (2)| 00:01:05 |
    Predicate Information (identified by operation id):
       1 - filter("RN"=1 AND "EXISTS_FLG"='Y')
       2 - filter(ROW_NUMBER() OVER ( PARTITION BY "AU_ID_K" ORDER BY
                  INTERNAL_FUNCTION("EFF_DTE") DESC ,INTERNAL_FUNCTION("CREATE_TS") DESC )<=1)
       3 - filter("CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND "EFF_DTE"<=TO_DATE('
               2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
    Thanks,
    Vijay

    Hi Justin,
    Thanks for your reply. I am running this on our test environment as I don't want to run it on the production environment now. The test environment holds 2,809,605 records (about 2.8 million).
    The query output count is 281,699 records (about 282 thousand) and the selectivity is 0.099. The number of distinct (create_ts, eff_dte, exists_flg) combinations is 2,808,905. I am sure the index scan is not going to help much, as you said.
    The core problem is that both queries use a lot of temp tablespace. When we use this query to join to other tables (the other table has the same design as below), the temp tablespace grows even bigger.
    Both the production and test environments are 3-node RAC.
    First Query...
    CPU used by this session     4740
    CPU used when call started     4740
    Cached Commit SCN referenced     21393
    DB time     4745
    OS Involuntary context switches     467
    OS Page reclaims     64253
    OS System time used     26
    OS User time used     4562
    OS Voluntary context switches     16
    SQL*Net roundtrips to/from client     9
    bytes received via SQL*Net from client     2487
    bytes sent via SQL*Net to client     15830
    calls to get snapshot scn: kcmgss     37
    consistent gets     52162
    consistent gets - examination     2
    consistent gets from cache     52162
    enqueue releases     19
    enqueue requests     19
    enqueue waits     1
    execute count     2
    ges messages sent     1
    global enqueue gets sync     19
    global enqueue releases     19
    index fast full scans (full)     1
    index scans kdiixs1     1
    no work - consistent read gets     52125
    opened cursors cumulative     2
    parse count (hard)     1
    parse count (total)     2
    parse time cpu     1
    parse time elapsed     1
    physical write IO requests     69
    physical write bytes     17522688
    physical write total IO requests     69
    physical write total bytes     17522688
    physical write total multi block requests     69
    physical writes     2139
    physical writes direct     2139
    physical writes direct temporary tablespace     2139
    physical writes non checkpoint     2139
    recursive calls     19
    recursive cpu usage     1
    session cursor cache hits     1
    session logical reads     52162
    sorts (memory)     2
    sorts (rows)     760
    table scan blocks gotten     23856
    table scan rows gotten     2809607
    table scans (short tables)     1
    user I/O wait time     1
    user calls     11
    workarea executions - onepass     1
    workarea executions - optimal     9
    Second Query
    CPU used by this session     1197
    CPU used when call started     1197
    Cached Commit SCN referenced     21393
    DB time     1201
    OS Involuntary context switches     8684
    OS Page reclaims     21769
    OS System time used     14
    OS User time used     1183
    OS Voluntary context switches     50
    SQL*Net roundtrips to/from client     9
    bytes received via SQL*Net from client     767
    bytes sent via SQL*Net to client     15745
    calls to get snapshot scn: kcmgss     17
    consistent gets     23871
    consistent gets from cache     23871
    db block gets     16
    db block gets from cache     16
    enqueue releases     25
    enqueue requests     25
    enqueue waits     1
    execute count     2
    free buffer requested     1
    ges messages sent     1
    global enqueue get time     1
    global enqueue gets sync     25
    global enqueue releases     25
    no work - consistent read gets     23856
    opened cursors cumulative     2
    parse count (hard)     1
    parse count (total)     2
    parse time elapsed     1
    physical read IO requests     27
    physical read bytes     6635520
    physical read total IO requests     27
    physical read total bytes     6635520
    physical read total multi block requests     27
    physical reads     810
    physical reads direct     810
    physical reads direct temporary tablespace     810
    physical write IO requests     117
    physical write bytes     24584192
    physical write total IO requests     117
    physical write total bytes     24584192
    physical write total multi block requests     117
    physical writes     3001
    physical writes direct     3001
    physical writes direct temporary tablespace     3001
    physical writes non checkpoint     3001
    recursive calls     25
    session cursor cache hits     1
    session logical reads     23887
    sorts (disk)     1
    sorts (memory)     2
    sorts (rows)     2810365
    table scan blocks gotten     23856
    table scan rows gotten     2809607
    table scans (short tables)     1
    user I/O wait time     2
    user calls     11
    workarea executions - onepass     1
    workarea executions - optimal     5
    Thanks,
    Vijay
    Edited by: Vijayaraghavan Krishnan on Nov 28, 2012 11:17 AM
    Edited by: Vijayaraghavan Krishnan on Nov 28, 2012 11:19 AM
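    As a hedged sketch of one more variant worth testing (same filter logic as Code 2, and assuming (au_id_k, eff_dte, create_ts) uniquely identifies a row): the aggregate KEEP (DENSE_RANK LAST) form picks the latest row per au_id_k in a single GROUP BY, which may size its sort work area differently from the analytic WINDOW SORT:
    SELECT au_id_k, pgm_struct_id_k
      FROM (SELECT au_id_k,
                   MAX(pgm_struct_id_k) KEEP (DENSE_RANK LAST ORDER BY eff_dte, create_ts) pgm_struct_id_k,
                   MAX(exists_flg)      KEEP (DENSE_RANK LAST ORDER BY eff_dte, create_ts) exists_flg
              FROM au_subtyp
             WHERE create_ts < TO_DATE('2013-01-01','YYYY-MM-DD')
               AND eff_dte  <= TO_DATE('2012-12-31','YYYY-MM-DD')
             GROUP BY au_id_k)
     WHERE exists_flg = 'Y'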

  • Select query on QALS table taking around 4 secs to fetch one record

    Hi,
    I have one select query that takes around 4 secs to fetch one record. I would like to know if there are any ways to reduce the time taken for this select.
    SELECT
         b~prueflos
         b~matnr
         b~lagortchrg
         a~vdatum
         a~kzart
         a~zaehler
         a~vcode
         a~vezeiterf
         FROM qals AS b LEFT OUTER JOIN qave AS a ON
         b~prueflos = a~prueflos
         INTO TABLE t_qals1
         FOR ALL ENTRIES IN t_lgorts
          WHERE  matnr = t_lgorts-matnr
          AND    werk = t_lgorts-werks
          AND    lagortchrg = t_lgorts-lgort
          AND    stat35 = c_x
          AND    art IN (c_01,c_08).
    When I took the SQL trace, I found these further details:
    Column      No. of distinct values
    MANDANT     2
    MATNR       2,954
    WERK        30
    STAT34      2
    HERKUNFT    5
    Analyze method: Sample, 114,654 rows
    Levels of B-tree: 2
    Number of leaf blocks: 1,126
    Number of distinct keys: 16,224
    Average leaf blocks per key: 1
    Average data blocks per key: 3
    Clustering factor: 61,610
    Also note: this select query is using an INDEX RANGE SCAN via QALS~D.
    All suggestions are welcome.
    Regards,
    Vijaya

    Hi Rob,
    It's strange, but the table t_lgorts has only ONE record:
    MATNR = 000000000500003463
    WERK = D133
    LAGORTCHRG = 0001
    I have also seen that for the above criteria the table QALS has 2266 records that satisfy the condition.
    I am not sure... but if we write the above query as a subquery instead of an outer join, will it improve the performance?
    Will check it from my side too.
    Regards,
    Vijaya

  • Fetch the records based on number

    Hi experts,
    I have a requirement like this: if the user gives 5 as input, it should fetch 5 records from the database.
    For example, the database values are:
    SERNR    MATNR
    101      A
    102      A
    103      A
    104      A
    105      A
    106      A
    107      A
    If the user gives the input as 5 it should fetch 5 records, i.e. 101 to 105.... Can anybody please help me with how to write a select query for this?
    Thanks in advance,
    Veena.
    Edited by: s veena on Jan 18, 2011 5:52 AM

    Hi Veena,
    You can use UP TO in your select query. For example:
    SELECT matnr FROM mara INTO TABLE it_matnr UP TO p_number ROWS.
    "Here p_number is the selection-screen parameter
    It will fetch records based on the number in the parameter.
    Thanks & Regards,
    Faheem.

  • Fetching latest records

    Hi All,
    I want to fetch some records from a database table which are the latest entries. How can I fetch these records?
    Regards,
    Jeetu

    Hi,
    Method 1:
    1) Check whether you have a DATE field in your database table.
    2) Fetch all records into your internal table.
    3) Sort the internal table by date descending.
    4) You will get the latest records at the top of your internal table.
    Method 2:
    If you want only the latest 10 records in your internal table:
    data: begin of itab occurs 10,
            " declare your fields here; make sure that itab has a DATE field
          end of itab.
    select <fields> from <database table> into itab.
      append itab sorted by date.
    endselect.
    note: date should be one of the fields of itab.

  • Fetching limited records SQL

    Hi All,
    Need all your valuable advice again.
    The ISSUE is:
    Right now, in the SELECT below, the inner query fetches all the records from the database, and the outer query fetches the exact block of records from the inner query's results (total available).
    Example: consider a person search query that filters the records by department. Assume this query yields 30K records.
    What we do today is:
    SELECT * FROM (
      SELECT
        rownum rownumber, deptname
      FROM person
      WHERE department = 'Electronics' -- assume this inner query fetches 30K records
    ) WHERE rownumber BETWEEN 100 AND 125; -- out of 30K we just need these results
    Solution:
    How do we make the inner query limit the maximum records we want? Presently, it's fetching all records.
    SELECT * FROM (
      SELECT
        rownum rownumber, deptname
      FROM person
      WHERE department = 'Electronics' AND rownum <= 125 -- now the inner query fetches only 125 records
    ) WHERE rownumber BETWEEN 100 AND 125; -- out of 30K we just need these results

    Take a look at these two articles:
    [On Top-N and Pagination Queries|http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html]
    [On Rownum and Limiting Results|http://www.oracle.com/technology/oramag/oracle/06-sep/o56asktom.html]
    These show how to handle all sorts of situations (e.g. whether or not you care about ordering). In any case the simple solution is:
    Pagination with ROWNUM
    My all-time-favorite use of ROWNUM is pagination. In this case, I use ROWNUM to get rows N through M of a result set. The general form is as follows:
    select *
      from ( select /*+ FIRST_ROWS(n) */
                    a.*, ROWNUM rnum
               from ( your_query_goes_here,
                      with order by ) a
              where ROWNUM <= :MAX_ROW_TO_FETCH )
     where rnum >= :MIN_ROW_TO_FETCH;
    where
        * FIRST_ROWS(N) tells the optimizer, "Hey, I'm interested in getting the first rows, and I'll get N of them as fast as possible."
        * :MAX_ROW_TO_FETCH is set to the last row of the result set to fetch—if you wanted rows 50 to 60 of the result set, you would set this to 60.
        * :MIN_ROW_TO_FETCH is set to the first row of the result set to fetch, so to get rows 50 to 60, you would set this to 50.
    The concept behind this scenario is that an end user with a Web browser has done a search and is waiting for the results. It is imperative to return the first result page (and second page, and so on) as fast as possible. If you look at that query closely, you'll notice that it incorporates a top-N query (get the first :MAX_ROW_TO_FETCH rows from your query) and hence benefits from the top-N query optimization I just described. Further, it returns over the network to the client only the specific rows of interest; it removes any leading rows from the result set that are not of interest. Just plug your query (without worrying about the number of rows returned) into the template query. The articles explain how to do more complex things.
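    For instance, a minimal instantiation of that template against the person query from this thread (a sketch, returning rows 101 through 125, i.e. exactly 25 rows; an ORDER BY is added because pagination without a defined order is ambiguous):
    SELECT *
      FROM ( SELECT /*+ FIRST_ROWS(25) */
                    a.*, ROWNUM rnum
               FROM ( SELECT deptname
                        FROM person
                       WHERE department = 'Electronics'
                       ORDER BY deptname ) a
              WHERE ROWNUM <= 125 )
     WHERE rnum >= 101;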

  • Fetching many records all at once is no faster than fetching one at a time

    Hello,
    I am having a problem getting NI-Scope to perform adequately for my application.  I am sorry for the long post, but I have been going around and around with an NI engineer through email and I need some other input.
    I have the following software and equipment:
    LabView 8.5
    NI-Scope 3.4
    PXI-1033 chassis
    PXI-5105 digitizer card
    DELL Latitude D830 notebook computer with 4 GB RAM.
    I tested the transfer speed of my connection to the PXI-1033 chassis using the niScope Stream to Memory Maximum Transfer Rate.vi found here:
    http://zone.ni.com/devzone/cda/epd/p/id/5273.  The result was 101 MB/s.
    I am trying to set up a system whereby I can press the start button and acquire short waveforms which are individually triggered.  I wish to acquire these individually triggered waveforms indefinitely.  Furthermore, I wish to maximize the rate at which the triggers occur.  In the limiting case where I acquire records of one sample, the record size in memory is 512 bytes (using the formula to calculate 'Allocated Onboard Memory per Record' found in the NI PXI/PCI-5105 Specifications under the heading 'Waveform Specifications', pg. 16).  The PXI-5105 trigger re-arms in about 2 microseconds (500 kHz), so to trigger at that rate indefinitely I would need a transfer speed of at least 256 MB/s (512 bytes x 500k records/s).  So clearly, in this case the limiting factor for increasing the trigger rate while still acquiring indefinitely is the rate at which I transfer records from memory to my PC.
    To maximize my record transfer rate, I should transfer many records at once using the Multi Fetch VI, as opposed to the theoretically slower method of transferring one at a time.  To compare the rates at which I can transfer records all at once vs. one at a time, I modified the niScope EX Timestamps.vi to let me choose between these transfer methods by changing the constant wired to the Fetch Number of Records property node to either -1 or 1, respectively.  I also added a loop that ensures that all records are acquired before I begin the transfer, so that acquisition and trigger rates do not interfere with measuring the record transfer rate.  This modified VI is attached to this post.
    I have the following results for acquiring 10k records.  My measurements are done using the Profile Performance and Memory Tool.
    I am using a 250kHz analog pulse source.
    Fetching 10000 records 1 record at a time the niScope Multi Fetch
    Cluster takes a total time of 1546.9 milliseconds or 155 microseconds
    per record.
    Fetching 10000 records at once the niScope Multi Fetch Cluster takes a
    total time of 1703.1 milliseconds or 170 microseconds per record.
    I have tried this for larger and smaller total numbers of records, and the transfer time is always around 170 microseconds per record, regardless of whether I transfer one at a time or all at once.  But with a 100 MB/s link and a 512-byte record size, the fetch speed should approach 5 microseconds per record as you increase the number of records fetched at once.
    With this, my application will be limited to a trigger rate of 5 kHz when running indefinitely, when it should be capable of closer to a 200 kHz trigger rate for extended periods of time.  I have a feeling that I am missing something simple or am just confused about how the Fetch functions should work. Please enlighten me.
    Attachments:
    Timestamps.vi 73 KB

    Hi ESD
    Your numbers for testing the PXI bandwidth look good.  A value of approximately 100 MB/s is reasonable when pulling data across the PXI bus continuously in larger chunks.  This may decrease a little when working with MXI in comparison to using an embedded PXI controller.  I expect you were using the streaming example "niScope Stream to Memory Maximum Transfer Rate.vi" found here: http://zone.ni.com/devzone/cda/epd/p/id/5273.
    Acquiring multiple triggered records is a little different.  There are a few techniques that will help to make sure that you are able to fetch your data fast enough to keep up with the acquired data or the desired reference trigger rate.  You are certainly correct that it is more efficient to transfer larger amounts of data at once, instead of small amounts of data more frequently, as the overhead due to DMA transfers becomes significant.
    The trend you saw, that fetching fewer records per fetch was more efficient, sounded odd.  So I ran your example and tracked down what was causing that trend.  I believe it is actually the for loop that you had in your acquisition loop.  I made a few modifications to the application to display the total fetch time to acquire 10000 records.  The best fetch time is when all records are pulled in at once.  I left your code in the application but temporarily disabled the for loop to show the fetch performance.  I also added a loop to ramp the fetch number up and graph the fetch times.  I will attach the modified application as well as the fetch results I saw on my system for reference.  When the for loop is enabled, the performance is worst at 1-record fetches; the fetch time dips around 500 records/fetch and begins to ramp up again as the records/fetch increases to 10000.
    Note that I am using the 2D I16 fetch, as it is more efficient to keep the data unscaled.  I have also added an option to use immediate triggering - this is just because I was not near my hardware to physically connect a signal, so I used the trigger holdoff property to simulate a given trigger rate.
    Hope this helps.  I was working in LabVIEW 8.5, if you are working with an earlier version let me know.
    Message Edited by Jennifer O on 04-12-2008 09:30 PM
    Attachments:
    RecordFetchingTest.vi 143 KB
    FetchTrend.JPG 37 KB

  • Can we split and fetch the records in Database Adapter

    Hi,
    I designed a Database Adapter to fetch records from an Oracle database. Sometimes the Database Adapter needs to fetch around 5,000 or 10,000 records in a single shot. In that case my BPEL process is choking and I get this error:
    java.lang.OutOfMemoryError: Java heap space at java.util.Arrays.copyOf(Arrays.java:2882) at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
    Could someone help me resolve this?
    In the Database Adapter, can we split and fetch the records if the number of records is more than 1000?
    e.g. the first 100 records as one set and the next 100 as a 2nd set, and so on.
    Thank you.

    You can send the records as batches using the debatching feature of the DB adapter. Refer to the documentation for implementation details.

    I have about 200 ibooks that need to have a sofware package installed on. I was hoping to this by scheduling the installs to run on off hours using ARD 2.2. My problem is that the program that i need to install uses a VISE installer, rather than a .p