Optimal number of records to fetch from Forte Cursor

Hello everybody:
I'd like to ask a very important question.
I opened a Forte cursor with approximately 1.2 million records, and now I am
trying to figure out the number of records per fetch that gives acceptable
performance.
To my surprise, fetching 100 records at once gave me only about a 15 percent
performance gain compared with fetching records one by one.
I haven't found a significant difference in performance between fetching 100,
500, or 10,000 records at once. At the same time, fetching 20,000 records at
once makes performance approximately 20% worse (a fact I cannot explain).
Does anybody have experience with improving fetch performance from a Forte
cursor over a large number of rows?
Thank you in advance
Genady Yoffe
Software Engineer
Descartes Systems Group Inc
Waterloo On
Canada
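
The trade-off described above is not Forte-specific: per-row overhead amortizes quickly, so gains usually flatten once the batch size reaches the hundreds, while very large batches start paying for extra memory. For comparison, a minimal sketch of the same tuning knob in Oracle PL/SQL (big_table and the 500-row batch size are illustrative assumptions, not Forte TOOL code):

DECLARE
  -- the knob to tune: hundreds to a few thousand rows is usually the
  -- point where per-fetch overhead stops dominating
  c_batch_size CONSTANT PLS_INTEGER := 500;
  CURSOR big_cur IS
    SELECT id FROM big_table;          -- hypothetical 1.2M-row source
  TYPE t_ids IS TABLE OF big_table.id%TYPE;
  l_ids t_ids;
BEGIN
  OPEN big_cur;
  LOOP
    FETCH big_cur BULK COLLECT INTO l_ids LIMIT c_batch_size;
    EXIT WHEN l_ids.COUNT = 0;         -- no rows left
    -- process the batch here; memory use stays bounded by c_batch_size
  END LOOP;
  CLOSE big_cur;
END;
/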

You can do this by writing code in the start routine of your transformations.
1. If you have specific filtering criteria, apply them there and delete the unwanted records.
2. If you want to load a specific number of records based on a count, then in the start routine loop through the source package while keeping a counter until you reach the desired count, copying those records into an internal table.
Delete the records in the source package, then assign the records stored in the internal table back to the source package.

Similar Messages

  • RE: (forte-users) Optimal number of records to fetch from Forte Cursor

    The reason why a single fetch of 20,000 records performs worse than
    2 fetches of 10,000 might be related to memory behaviour. Do you
    keep the first 10,000 records in memory when you fetch the next
    10,000? If not, then a single fetch of 20,000 records requires more
    memory than 2 fetches of 10,000. You might have some extra overhead
    of Forte requesting additional memory from the OS, garbage
    collections just before every request for memory, and maybe even
    the OS swapping some memory pages to disk.
    This behaviour can be controlled by modifying the Minimum memory
    and Maximum memory of the partition, as well as the memory chunk
    size Forte uses to increment its memory.
    Upon partition startup, Forte requests the Minimum memory from the
    OS. Within this area, the actual memory being used grows until
    it hits the ceiling of this space. This is when the garbage collector
    kicks in and removes all unreferenced objects. If this does not suffice
    to store the additional data, Forte requests 1 additional chunk of a
    predefined size. Now the same behaviour is repeated in this slightly
    larger piece of memory: actual memory keeps growing until it hits
    the ceiling, upon which the garbage collector removes all
    unreferenced objects. If the garbage collector reduces the amount of
    memory being used to below the original Minimum memory, Forte
    will NOT return the additional chunk of memory to the OS. If the
    garbage collector fails to free enough memory to store the new data,
    Forte will request an additional chunk of memory. This process is
    repeated until the Maximum memory is reached. If the garbage
    collector fails to free enough memory at this point, the process
    terminates gracelessly (which is what happens sooner or later when
    you have a memory leak; something most Forte developers have
    seen once or twice).
    Pascal Rottier
    STP - MSS Support & Coordination Group
    Philip Morris Europe
    e-mail: [email protected]
    Phone: +49 (0)89-72472530
    +++++++++++++++++++++++++++++++++++
    Origin IT-services
    Desktop Business Solutions Rotterdam
    e-mail: [email protected]
    Phone: +31 (0)10-2428100
    +++++++++++++++++++++++++++++++++++
    /* All generalizations are false! */

    Hi Kieran,
    According to your description, you want to figure out the optimal number of records per partition, right? As I understand it, this number varies with your hardware: the better the hardware, the more records per partition.
    An earlier version of the SQL Server 2005 Analysis Services Performance Guide stated:
    "In general, the number of records per partition should not exceed 20 million. In addition, the size of a partition should not exceed 250 MB."
    Besides, the number of records is not the primary concern here. Rather, the main criterion is manageability and processing performance. Partitions can be processed in parallel, so the more there are, the more can be processed at once. However, the more partitions you have, the more things you have to manage. Here are some links describing partition optimization:
    http://blogs.msdn.com/b/sqlcat/archive/2009/03/13/analysis-services-partition-size.aspx
    http://www.informit.com/articles/article.aspx?p=1554201&seqNum=2
    Regards,
    Charlie Liao
    TechNet Community Support

  • RE: (forte-users) Optimal number of records to fetch from Forte Cursor

    Guys,
    The behavior (1 fetch of 20,000 vs. 2 fetches of 10,000 each) may also be DBMS
    related. There is potentially high overhead in opening a cursor and initially
    fetching the result table. I know this covers a great deal of DBMS technology
    territory, but one explanation is that the same physical pages may have to
    be read twice when performing the query in 2 fetches, as compared to doing it in
    one shot. Physical I/O is perhaps the most expensive part of a query in terms
    of resources. Just a thought.
    "Rottier, Pascal" <[email protected]> on 11/15/99 01:34:22 PM
    To: "'Forte Users'" <[email protected]>
    cc: (bcc: Charlie Shell/Bsg/MetLife/US)
    Subject: RE: (forte-users) Optimal number of records to fetch from Forte Cursor

  • Crystal Report Alerts not firing when no records are fetched from the DB

    Hello,
    The Crystal Reports alert I created in the report, for the event of no records being fetched by the query, is not firing. The condition used is isnull(count(DB Field)).
    Is there a limitation that alerts fire only when some records are fetched into the report?
    Appreciate any pointers
    -Jayakrishnan

    hi Jayakrishnan,
    as alerts require records to be returned, here's what you will need to do:
    1) delete your current alert
    2) create a new formula with syntax like
                  isnull(DistinctCount ()) or DistinctCount () = 0
    3) create a new Subreport (which you will put in a report header)
    4) the subreport can be based off of any table
    5) have the subreport record selection always return only 1 record...for performance reasons
    6) change the subreport link to be based on the new formula
    7) the link will be a one way link in that you will not use the "Select data in subreport based on field" option
    8) now in the subreport, create the Alert based on the parameter created by the subreport link
    I have tested this and it works great.
    jamie

  • Number of records being pulled from OLAP/ SQL in BPC 5.1

    Hello BPC gurus,
    We are experiencing performance issues with EVDRE; basically the report errors out and the error log states "Decompressing request failed". We are on BPC 5.1.
    We are trying to understand how many records the EVDRE is pulling from OLAP / the database, so that we can look into some fine-tuning opportunities for the EVDRE.
    In the BI world we have RSRT, in which we can view the number of records read from the database and the number of records transferred. Is there any such feature in BPC that gives information on record counts?
    We have turned on the error logs, but none of them give us an idea of the record count.
    Appreciate your help in advance.
    Thanks
    sai

    Hi Sorin,
    Thank you very much for getting back to me on my question about the record count. As per your suggestion, we have already looked into this OSS note and changed the entries in the table. After making these changes, the queries that normally execute in 1 minute now take 30 minutes to complete. I believe this was also the observation in some of the threads related to this issue.
    You had mentioned that there might be an issue with the communication between the application server and the BPC client, or with the SQE generating the MDX query. Can you please give us some pointers on how to investigate this? We have turned on the error logs evdataserver_debug.txt & EVDATASERVER_TRACE.txt on the file server, but I believe there is an OSS note 1311019 that talks about these logs not working with SP9.
    If you can guide us on the following, that would be helpful:
    1. How to debug the issue we are currently facing.
    2. How the concept of compressing / decompressing works in BPC.
    Thanks
    sai

  • Number of records in cube from Query Designer

    I don't have access to the cube in BW (LISTSCHEMA). I only have access to pull reports from the cube via Query Designer. How can I tell the total number of records in a cube?
    Thanks.

    Hi
    you can use the technical content in the Query Designer to display the count of records, or do the same by creating a new CKF,
    or see the link below on displaying the count:
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/009819ab-c96e-2910-bbb2-c85f7bdec04a

  • Max number of records to hold in explicit cursor

    Hi Everyone,
    What is the maximum number of records that can be held in
    an explicit cursor for manipulation? I need to process millions of records.
    Can I hold them in a cursor, or should I use a temp table to hold those records
    and do the fixes with volume control?
    Thanks

    Hi Kishore, sorry for the delayed response.
    Table1
    prim_oid   sec_oid   rel_oid
    pp101      cp102     101
    pp101      cp103     101
    pp102      cp104     101
    pp102      cp105     101
    Table2
    ID   p_oid   b_oid   rel_oid
    1    pp101   -51     102
    2    pp102   -51     102
    3    cp102    52     102
    4    cp103    53     102
    5    cp104    54     102
    6    cp105    54     102
    From Table1 I get the parent and child records based on rel_oid = 101; the prim_oid and sec_oid are related to another column in Table2, again with a rel_oid. I need to get all the prim_oid that are linked to a negative b_oid in Table2 and whose child sec_oid are linked to a positive b_oid.
    In the above case, parent pp101 is linked to two children, cp102 and cp103, and pp102 is linked to two children, cp104 and cp105. Both pp101 and pp102 are linked to a negative b_oid (Table2), but the children of these parents are linked to positive b_oids. However, pp101's children are linked to two different b_oids, while pp102's children are linked to the same b_oid. For my requirement I can only update the b_oid of pp102 with its children's b_oid, whereas I cannot update pp101's b_oid because its children are linked to different b_oids.
    I have a SQL that returns prim_oid, b_oid, sec_oid, b_oid as records like this:
    1   pp101   -51   3   cp102   52
    1   pp101   -51   4   cp103   53
    2   pp102   -51   5   cp104   54
    2   pp102   -51   6   cp105   54
    With a cursor SQL that returns records as above, it would be difficult to process distinct parents and distinct children. So I have a cursor that returns only the parent records:
    1   pp101   -51
    2   pp102   -51
    Then for each parent I get the distinct child b_oid; if I get only one child b_oid I update the parent, otherwise I don't. The problem is that Table2 has 8 million parent records linked to a negative b_oid, but for only 2 million of them do the children link to a single distinct b_oid.
    If I include volume control in the cursor SQL, chances are all the returned rows are like pp101, for which no update is required, so I cannot have volume control in the cursor SQL, which will now return all 8 million records (my assumption).
    Is there any other feasible solution? Thanks
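    For what it's worth, the per-parent check described above can often be pushed into a single set-based statement, avoiding a cursor over all 8 million parents. A sketch using the column names from the tables above (the rel_oid filters and sign conventions are assumptions to verify against the real schema):
    -- update each negative-b_oid parent whose children all share exactly one b_oid
    UPDATE table2 p
    SET    p.b_oid = (SELECT MIN(c.b_oid)            -- the one shared child b_oid
                      FROM   table1 t
                      JOIN   table2 c ON c.p_oid = t.sec_oid
                      WHERE  t.prim_oid = p.p_oid
                      AND    t.rel_oid  = 101
                      HAVING COUNT(DISTINCT c.b_oid) = 1)
    WHERE  p.b_oid < 0
    AND    1 = (SELECT COUNT(DISTINCT c.b_oid)       -- children agree on one b_oid
                FROM   table1 t
                JOIN   table2 c ON c.p_oid = t.sec_oid
                WHERE  t.prim_oid = p.p_oid
                AND    t.rel_oid  = 101);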

  • BAPI returns less number of records when called from WebDynpro

    Hi,
    We have a BAPI which updates some tables and then brings back output in the form of a table.
    When we execute the BAPI from R/3 we get all records. When we execute the BAPI using Web Dynpro, for the same input values, we always get exactly 22 records. This count always stays the same.
    When we put a breakpoint in the BAPI and tested it using Web Dynpro, we got a few more records. Wondering what the problem is?
    Any help?
    regards,
    Shabeer

    Hi,
    Are you using the same user when running the BAPI from R/3 and from the portal?
    We had a similar problem when the user from the portal didn't have the necessary authorizations.
    Adi.

  • Bulk Fetch from a Cursor

    Hi all,
    Can you please give your comments on the code below.
    We are facing a situation where the value of <cursor_name>%NOTFOUND is misleading. We are overcoming the issue by moving the 'exit when cur_name%notfound' statement to just before the end loop.
    open l_my_cur;
    loop
    fetch l_my_cur bulk collect
    into l_details_array;
    --<< control comes here >>
    --<< l_details_array.count gives the correct number of rows >>
    exit when l_my_cur%NOTFOUND;
    --<< control never reaches here >>
    --<< %notfound is true >>
    --<< %notfound is false only when as many records are fetched as the limit (if set) >>
    forall i in 1 .. l_details_array.count
    insert into my_table ....( .... ) values ( .... l_details_array(i) ...);
    --<< This is never executed :-( >>
    end loop;
    Thanks,
    Sunil.

    Read
    fetch l_my_cur bulk collect
    into l_details_array;
    as
    fetch l_my_cur bulk collect
    into l_details_array LIMIT 10000;
    I am trying to process 10,000 rows at a time from a possible 100,000 records.
    Sunil.
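    The usual fix for this trap: with BULK COLLECT ... LIMIT, %NOTFOUND becomes TRUE as soon as a batch comes back short of the limit, so exit on an empty collection instead, and test %NOTFOUND only after the batch has been processed. A runnable sketch (source_table and my_table are hypothetical, with matching columns):
    DECLARE
      CURSOR l_my_cur IS
        SELECT * FROM source_table;
      TYPE t_rows IS TABLE OF l_my_cur%ROWTYPE;
      l_details_array t_rows;
    BEGIN
      OPEN l_my_cur;
      LOOP
        FETCH l_my_cur BULK COLLECT INTO l_details_array LIMIT 10000;
        EXIT WHEN l_details_array.COUNT = 0;   -- nothing fetched: done
        FORALL i IN 1 .. l_details_array.COUNT
          INSERT INTO my_table VALUES l_details_array(i);
        -- safe to test here: the short final batch has already been inserted
        EXIT WHEN l_my_cur%NOTFOUND;
      END LOOP;
      CLOSE l_my_cur;
    END;
    /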

  • Fetching from a cursor and writing to a file in Pro*C

    Hi guys,
    I have a situation in hand here and I guess my "C" skills are being put to the test. My cursor is fetching 3 records and that's all fine. I am also able to sprintf those details, and the fprintf also works perfectly -- except that when I come back for record 2, the details get overwritten, and finally record 3 is all that remains in the file.
    I know that UTL_FILE.PUT_LINE works just fine in loops, but here I seem to be doing something wrong. Has anyone seen this problem or situation before?
    void get_student_data(void)
    {
       FILE *student_file;
       char  student_file_name[100];
       char  out_line[512];

       EXEC SQL BEGIN DECLARE SECTION;
       int     cur_student_id;
       VARCHAR cur_start_date[12];
       VARCHAR cur_addr1[61];
       VARCHAR cur_addr2[61];
       VARCHAR cur_city[31];
       VARCHAR cur_state[3];
       VARCHAR cur_zip_code[11];
       EXEC SQL END DECLARE SECTION;

       /* student_file_name is assumed to be filled in before this point */
       if ((student_file = fopen(student_file_name, "w")) == NULL)
       {
          printf("Error opening data file!\n");
          return;
       }

       EXEC SQL DECLARE student_cur CURSOR FOR
             SELECT  s.student_id,
                     TO_CHAR(s.start_date, 'DD-Mon-YYYY'),
                     s.student_addr1,
                     s.student_addr2,
                     s.city,
                     s.state,
                     s.zip_code
             FROM    student s
             ORDER BY s.student_id;

       EXEC SQL OPEN student_cur;

       for (;;)
       {
          EXEC SQL FETCH student_cur
              INTO  :cur_student_id,
                    :cur_start_date,
                    :cur_addr1,
                    :cur_addr2,
                    :cur_city,
                    :cur_state,
                    :cur_zip_code;

          if (sqlca.sqlcode > 0)   /* +1403: no more rows (sqlca included above) */
             break;

          /* null-terminate the VARCHAR host variables before using them as strings */
          cur_start_date.arr[cur_start_date.len] = '\0';
          cur_addr1.arr[cur_addr1.len] = '\0';
          cur_addr2.arr[cur_addr2.len] = '\0';
          cur_city.arr[cur_city.len] = '\0';
          cur_state.arr[cur_state.len] = '\0';
          cur_zip_code.arr[cur_zip_code.len] = '\0';

          /* write INSIDE the fetch loop, one line per record */
          sprintf(out_line, "\"%d\",\"%s\",\"%s\",\"%s\",\"%s\",\"%s\",\"%s\"\n",
                  cur_student_id,
                  (char *)cur_start_date.arr,
                  (char *)cur_addr1.arr,
                  (char *)cur_addr2.arr,
                  (char *)cur_city.arr,
                  (char *)cur_state.arr,
                  (char *)cur_zip_code.arr);
          fprintf(student_file, "%s", out_line);
       }

       EXEC SQL CLOSE student_cur;
       fclose(student_file);
    }
    Thanks a bunch !
    Edited by: RDonASunnyDay on Oct 20, 2009 11:07 AM

    Hi riedelme,
    The program is working fine. The procedure I am calling happens to be inside a FOR loop, which I did not mention, and that's my fault. You guys were on the right track.
    If you notice, the fclose(filename) was at the very end of the procedure, but every time the procedure is called in the outer for loop, the file gets reopened in "w" mode, which truncates it, and then closed. That's why the file always holds just one record!
    However, the logic of closing the cursor should still come after closing the inner for loop.
    Thanks

  • Fetch from cursor when no records returned

    Hi,
    I've got the following question / problem.
    When I do a fetch from a cursor in my FOR loop and the cursor returns no record, my variable r_item keeps the value of the previously fetched record. Shouldn't it contain NULL if no record is found and I fetch again after closing and reopening the cursor? Is there a way to clear the variable before each fetch?
    Below is some example code:
    CURSOR c_item (itm_id NUMBER) IS
    SELECT DISTINCT col1 from table1
    WHERE id = itm_id;
    r_item  c_item%ROWTYPE;
    FOR r_get_items IN c_get_items LOOP
      IF r_get_items.ENABLE = 'N' THEN       
          open c_item(r_get_items.ITMID);
          fetch c_item into r_item;
          close c_item;
          IF  r_item.ACCES = 'E' then
               action1
          ELSE                 
               action2
          END IF;
      END IF;
    END LOOP;
    Thanks
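    For reference, an unsuccessful fetch leaves the record untouched, and a record cannot be assigned NULL directly; copying from a never-populated record of the same type clears it. A minimal sketch reusing the hypothetical table1/col1 names from the question (42 stands in for r_get_items.ITMID):
    DECLARE
      CURSOR c_item (itm_id NUMBER) IS
        SELECT DISTINCT col1 FROM table1 WHERE id = itm_id;
      r_item  c_item%ROWTYPE;
      r_empty c_item%ROWTYPE;    -- never assigned, so its fields stay NULL
    BEGIN
      OPEN c_item(42);
      r_item := r_empty;         -- clear whatever the previous fetch left behind
      FETCH c_item INTO r_item;
      CLOSE c_item;
      IF r_item.col1 = 'E' THEN  -- NULL here when no row was found
        NULL;                    -- action1
      ELSE
        NULL;                    -- action2
      END IF;
    END;
    /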

    DECLARE
        CURSOR c_dept IS
          SELECT d.deptno
          ,      d.dname
          ,      d.loc
          ,      CURSOR (SELECT empno
                         ,      ename
                         ,      job
                         ,      hiredate
                         FROM   emp e
                         WHERE  e.deptno = d.deptno)
          FROM   dept d;
        TYPE refcursor IS REF CURSOR;
        emps refcursor;
        deptno dept.deptno%TYPE;
        dname dept.dname%TYPE;
        empno emp.empno%TYPE;
        ename emp.ename%TYPE;
        job emp.job%TYPE;
        hiredate emp.hiredate%TYPE;
        loc dept.loc%TYPE;
    BEGIN
       OPEN c_dept;
       LOOP
         FETCH c_dept INTO deptno, dname, loc, emps;
         EXIT WHEN c_dept%NOTFOUND;
         DBMS_OUTPUT.put_line ('Department : ' || dname);
         LOOP
           FETCH emps INTO empno, ename, job, hiredate;
           EXIT WHEN emps%NOTFOUND;
           DBMS_OUTPUT.put_line ('-- Employee : ' || ename);
         END LOOP;
      END LOOP;
      CLOSE c_dept;
    END;
    /
    like this...

  • Number of records in a standard table

    Hi,
    How do I find the number of records in a standard table without fetching them into an internal table?
    Is there any command for that?
    I want to know the number of records of a standard table in a report.
    Thanks

    Hi,
    If you want to know the number of records in your internal table after the SELECT statement, you can use the statement below:
    data : wa_lines like sy-tfill.
    DESCRIBE TABLE itab LINES wa_lines.
    wa_lines will then contain the number of records.
    If you want to know the number of records in the database table itself, you can use:
    select count(*) from table into variable.
    (No ENDSELECT is needed for an aggregate SELECT ... INTO.)
    Thanks,
    Sriram Ponna.

  • Number of records in FAGLFLEXT table

    Dear colleagues,
    Could you please tell me the maximum number of records the FAGLFLEXT table may contain.
    I was told during an FI-GL migration project in 2008 that the optimal number of records is 500,000 per year.
    After implementing the new project we expect the number of records to increase to approximately 1,500,000 - 3,500,000 records per year. Is this critical?
    I read on the forum that SAP recommends a maximum of 10,000,000 records in this table, but it was not clarified for what period: one year, or the whole life of the system.
    Regards,
    Stanislav.

    - Hope the notes below help.
    [Note 820495|https://websmp230.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=820495]
    [Note 1045430|https://websmp230.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=1045430]
    Rgds.

  • Table segment size huge though number of records is low

    Dear team,
    When I check one table, it has the below number of records:
    select count(*) from table1
    4980092
    but the space allocated for this table is:
    select sum(bytes) from user_segments where segment_name = 'table1';
    SUM(BYTES)
    2361712640
    I'm surprised by this size.
    While looking for the cause, I found that when we delete records the space doesn't get freed. How can I free up the space for this table?
    Deletes happen on this table frequently, on a daily basis.

    user11081688 wrote:
    > When I check one table, it has the below number of records:
    > select count(*) from table1
    > 4980092
    > but the space allocated for this table is:
    > select sum(bytes) from user_segments where segment_name = 'table1';
    > SUM(BYTES)
    > 2361712640
    > I'm surprised by this size.
    why?
    > While looking for the cause, I found that when we delete records the space doesn't get freed.
    correct
    > How can I free up the space for this table?
    there is no need to do so, since the space will be reused by new rows.
    > Deletes happen on this table frequently, on a daily basis.
    if DELETE happens daily, why is the number of rows not close to zero?
    how many rows get INSERTed daily?
    what is the average ROW LENGTH?
    SQL> select 2361712640/4980092 from dual;
    2361712640/4980092
            474.230725
    SQL>
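    If the goal really is to hand the space back (rather than let new inserts reuse it), a segment shrink is one option; this assumes Oracle 10g or later and an ASSM tablespace:
    ALTER TABLE table1 ENABLE ROW MOVEMENT;  -- shrink relocates rows
    ALTER TABLE table1 SHRINK SPACE;         -- compacts and lowers the high-water mark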

  • Fetch From Cursor

    In my procedure I want to explicitly open the cursor, fetch from it, and close it again.
    For testing, I don't want to use something like this:
    create procedure kk (cur out sys_refcursor) as
    begin
      open cur for
        select * from table;
    end;
    I need to use something like this:
    create procedure kk as
      cursor c is select * from table;  -- need to return this cursor
    How do I return that cursor?
    Thanks

    maybe something like:
    create or replace procedure get_emp_name as
      cursor c1 is
        select ename from emp;
      vEname emp.ename%type;
    begin
      open c1;
      fetch c1 into vEname;
      if c1%found then
        dbms_output.put_line('Emp Name: '||vEname);
      end if;
      close c1;
    end;
    /
    or
    create or replace procedure get_emp_name as
      cursor c1 is
        select *
          from emp;
    begin
      for c1_rec in c1 loop
         dbms_output.put_line('Emp Name: '||c1_rec.ename);
      end loop;
    end;
    /
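    Note that a locally declared static CURSOR cannot be handed back to the caller; if the cursor itself must be returned, some flavour of REF CURSOR is the standard vehicle. A strongly typed sketch (the emp_api package name is hypothetical):
    create or replace package emp_api as
      type emp_cur_t is ref cursor return emp%rowtype;  -- strongly typed
      procedure get_emps (p_cur out emp_cur_t);
    end emp_api;
    /
    create or replace package body emp_api as
      procedure get_emps (p_cur out emp_cur_t) is
      begin
        open p_cur for select * from emp;  -- the caller fetches and closes
      end get_emps;
    end emp_api;
    /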

Maybe you are looking for

  • Conditional Pivots

    Dear all, I've got currently a quite tricky problem with a Pivot-Table. I'm using the Pivot on the basis of an Analysis Services Cube. My idea is to have 2 dimensions on the x-axis and a KPI on the y value. The trick is, that the dimension on the x-a

  • List price updation in Profit segment

    Hi, Could you explain how the List price-VV137 in CE1 table  is getting updated in Profit segment??? Thanks and Regards Sankar

  • How to transport a VC model other than the simple import/export?

    Hello, What are the other options I have to transport a VC moel from one portal to another other than the export/import option that goes through my local filesystem? If I will use the portal transport package to transfer the iView generated it won't

  • Select first n columns

    Hi I need to select first n columns thru a query. i dont know the column names. Thanks and Regards Ananth Antony

  • MacBook Pro 2011 slow at low battery

    I have an early 2011 MacBook Pro 13"  and when the battery gets to around 15% all applications seem to slow down. After about a minute of being plugged in everything runs smoothly again. Is this normal? Is it just trying to save power by slowing down