Fetching latest records

Hi All,
I want to fetch the latest entries from a database table. How can I fetch these records?
Regards,
Jeetu

Hi,
Method 1:
1) Check whether your database table has a DATE field.
2) Fetch all records into your internal table.
3) Sort the internal table by date in descending order.
4) The latest records will be at the top of your internal table.
Method 2:
If you want only the latest 10 records in your internal table:

    DATA: BEGIN OF itab OCCURS 10,
            " declare your fields here; make sure you include a DATE field
          END OF itab.

    SELECT <fields> FROM <database table> INTO itab.
      APPEND itab SORTED BY date.
    ENDSELECT.

Note: DATE must be one of the fields of itab. APPEND ... SORTED BY keeps itab sorted in descending order of DATE and retains at most as many rows as the OCCURS value (here 10), so when the SELECT loop finishes, itab holds the 10 latest records.
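If your release allows it, the database can also do the limiting and sorting for you in a single statement. A minimal sketch, assuming a hypothetical table ZTAB with a date field ERDAT (substitute your own names):

    " Fetch only the 10 newest rows directly from the database.
    " ZTAB and ERDAT are placeholders for your table and date field.
    DATA lt_latest TYPE STANDARD TABLE OF ztab.

    SELECT * FROM ztab
      INTO TABLE lt_latest
      UP TO 10 ROWS
      ORDER BY erdat DESCENDING.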

Similar Messages

  • Performance Issue - Fetching latest date from a507 table

    Hi All,
    I am fetching data from the A507 table for a material and batch combination. I want to fetch the latest record based on the value of field DATBI. I have written the code as follows, but the select query is taking a long time. I don't want to add a condition on the DATBI field to the WHERE clause, because I have already tried that option.
    SELECT kschl
           matnr
           charg
           datbi
           knumh
      FROM a507
      INTO TABLE it_a507
      FOR ALL ENTRIES IN lit_mch1
      WHERE kschl = 'ZMRP'
      AND   matnr = lit_mch1-matnr
      AND   charg = lit_mch1-charg.

    SORT it_a507 BY kschl matnr charg datbi DESCENDING.
    DELETE ADJACENT DUPLICATES FROM it_a507 COMPARING kschl matnr charg.

    Hi,
    Tables like this store large volumes of data, so when selecting from them it is important to use as many primary key fields as possible in the WHERE condition. Here you can try specifying KAPPL as well, since it is specific to an application area; if this is for purchasing, use 'M' and try.
    IF NOT lit_mch1[] IS INITIAL.
      SELECT kschl
             matnr
             charg
             datbi
             knumh
        FROM a507
        INTO TABLE it_a507
        FOR ALL ENTRIES IN lit_mch1
        WHERE kappl = 'M'
        AND   kschl = 'ZMRP'
        AND   matnr = lit_mch1-matnr
        AND   charg = lit_mch1-charg.
    ENDIF.

    SORT it_a507 BY kschl matnr charg datbi DESCENDING.
    DELETE ADJACENT DUPLICATES FROM it_a507 COMPARING kschl matnr charg.
    This should considerably improve the performance.
    Regards,
    Vik

  • How to fetch latest data from database instead of clicking on Generate report in RSRT?

    Hi all,
    We are not able to get the latest data for a BEx query; every time we have to click the Generate Report button in RSRT to get the latest values.
    Is there any way to get the latest values automatically? We are not making any changes to the query.
    When we click Generate Report it fetches data from the database instead of the cache, so is it possible to disable the cache, or is there any other workaround?
    We have used the time characteristic (0TIME) in the query as a formula variable, with a condition on that variable to fetch the first record (the latest value). When we click Generate Report it fetches the correct value with the latest time, but when we run the query without Generate Report it does not fetch the latest value. The cube on which this query is built is a real-time cube.
    Regards,
    Zalak

    Once it's in the request attributes, it would be far better to use the JSP Expression Language as well as the JSTL that Malcolm suggested. So if the servlet code said:
         request.setAttribute("myList", al);
    then, using EL and the JSTL tags from <%@taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>, the JSP would look something like:
    <c:forEach items="${myList}" var="user">
    <TR>
       <TD><c:out value="${user.name}" /></TD>
       <TD><c:out value="${user.fname}" /></TD>
       <TD><c:out value="${user.city}" /></TD>
       <TD><c:out value="${user.cont}" /></TD>
       <TD><c:out value="${user.age}" /></TD>
    </TR>
    </c:forEach>

  • Bex Report to show only latest record on report and then history

    Hello Gurus,
    Table 1:

        Employee | Valid From | Valid To   | Hours Perday
        1900     | 01/01/2014 | 02/28/2014 | 8
        1900     | 03/01/2014 | 03/31/2014 | 6
        1900     | 04/01/2014 | 12/31/9999 | 4

    I have the above in a DSO. On the report I need to show only the latest record, without the Valid From and Valid To fields, as below:

    Table 2:

        Employee | Hours Perday
        1900     | 4
    And when the user drags Valid From and Valid To from the free characteristics, it should show the history as in the DSO. I have leveraged a customer-exit variable on 0VALIDTO of type "Value Ranges" with >= Key Date, so that when the query is executed with a date of 04/01/2014 it brings the last record with Hours Perday = 4, and with 03/01/2014 it brings two records, with Hours Perday 6 and 4, and so on.
    The problem on the report is that the record pulled is always the latest, but when Valid To and Valid From are dragged onto the report as free characteristics it still shows only the latest record and no history. Is there a way that, when a key date is entered, the report shows the latest record as per >= of that key date, and when Valid From is dragged into the report it displays the history prior to that key date?
    Thanks,
    Sam

    Hi Sam,
    You are almost in the right direction; a little fine-tuning is required in your process.
    As you are dealing with time-dependent master data, the natural thing to use is the Key Date of the query. But there is a limitation here: a key date can show master data relevant to only a single date. I mean, if you enter 03/01/2014, only the second record would be shown. A key date cannot work on multiple dates or ranges to fetch historical data or multiple records accordingly.
    To achieve your requirement, you should not use the key date in your query.
    "I have leveraged 0VALIDTO customer exit processing variable with variable as 'Value Ranges' >= Key Date"
    The above idea is good. You should make Valid From a customer-exit (i_step = 2) processed variable and make it user-entry enabled, so that the exit code can pick up values as per >= Valid From (based on the user entry). This exit will then pick up historical records whenever the user enters something other than the latest Valid From date; no drilldown is required. You will need Valid From in the rows as well, I believe.
    Try this and let me know if you want any more inputs.
    Regards,
    Suman
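    For reference, the classic place for such a customer-exit variable is enhancement RSR00001 (function EXIT_SAPLRRS0_001, include ZXRSRU01). A minimal sketch of the i_step = 2 pattern Suman describes; the variable names ZVALIDFROM (exit variable on Valid From) and ZUSERDATE (the user-entry variable it reads) are hypothetical:

        " Fill the exit variable ZVALIDFROM with ">= user-entered date".
        DATA: loc_var_range LIKE LINE OF i_t_var_range,
              l_s_range     LIKE LINE OF e_t_range.

        CASE i_vnam.
          WHEN 'ZVALIDFROM'.
            IF i_step = 2.  " after the variable entry screen
              READ TABLE i_t_var_range INTO loc_var_range
                   WITH KEY vnam = 'ZUSERDATE'.
              IF sy-subrc = 0.
                CLEAR l_s_range.
                l_s_range-sign = 'I'.
                l_s_range-opt  = 'GE'.   " >= Valid From (user entry)
                l_s_range-low  = loc_var_range-low.
                APPEND l_s_range TO e_t_range.
              ENDIF.
            ENDIF.
        ENDCASE.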

  • Fetch latest revision level

    Hello Experts,
    I am writing a select statement to fetch the REVLV field from the AEOI table by passing AEOI-OBJKT = MATNR from the selection screen.
    Multiple records come into the internal table, and I need to take the latest record for each material. There is a chance of having multiple records on the same date, and the Time field is not maintained, so how can I capture the latest revision level?

    Howdy,
    As the table has a sequential ID number (AEOI-AENNR is the change number), you can take the highest change number as the most recent entry.
    One note with this though: it won't work if you have multiple application servers and your system uses parallel buffering as per OSS note 599157, so you might want to check with your Basis consultant on that.
    The only other alternative is to enhance AEOI using include structure CI_AEOI and then find an exit to populate the system time when the record is created.
    Cheers
    Alex
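    A sketch of Alex's first suggestion in code, assuming a select-option s_matnr and that the material number is passed in AEOI-OBJKT as in the original post:

        " Keep only the newest revision level per material by treating
        " the highest change number (AENNR) as the most recent entry.
        TYPES: BEGIN OF ty_aeoi,
                 objkt TYPE aeoi-objkt,   " object key (material number)
                 aennr TYPE aeoi-aennr,   " sequential change number
                 revlv TYPE aeoi-revlv,   " revision level
               END OF ty_aeoi.
        DATA lt_aeoi TYPE STANDARD TABLE OF ty_aeoi.

        SELECT objkt aennr revlv
          FROM aeoi
          INTO TABLE lt_aeoi
          WHERE objkt IN s_matnr.

        " Newest change number first, then one row per material remains.
        SORT lt_aeoi BY objkt ASCENDING aennr DESCENDING.
        DELETE ADJACENT DUPLICATES FROM lt_aeoi COMPARING objkt.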

  • How to get latest record on top of the result list

    Hi Gurus,
    How do I get the latest record on top of the result list when opening a record?
    The saved-data method is in BT120H_CPL of the overview page, and the result list is in ICCMP_INBOX.
    Regards,
    Ravi

    Hi
    Try sorting in descending order on the field "changed at".
    Adjust the sort depending on your requirement.
    Regards
    Logu

  • Fetching many records all at once is no faster than fetching one at a time

    Hello,
    I am having a problem getting NI-Scope to perform adequately for my application.  I am sorry for the long post, but I have been going around and around with an NI engineer through email and I need some other input.
    I have the following software and equipment:
    LabView 8.5
    NI-Scope 3.4
    PXI-1033 chassis
    PXI-5105 digitizer card
    DELL Latitude D830 notebook computer with 4 GB RAM.
    I tested the transfer speed of my connection to the PXI-1033 chassis using the niScope Stream to Memory Maximum Transfer Rate.vi found here:
    http://zone.ni.com/devzone/cda/epd/p/id/5273.  The result was 101 MB/s.
    I am trying to set up a system whereby I can press the start button and acquire short waveforms which are individually triggered.  I wish to acquire these individually triggered waveforms indefinitely, and to maximize the rate at which the triggers occur.  In the limiting case where I acquire records of one sample, the record size in memory is 512 bytes (using the formula to calculate 'Allocated Onboard Memory per Record' found in the NI PXI/PCI-5105 Specifications under the heading 'Waveform Specifications', pg. 16).  The PXI-5105 trigger re-arms in about 2 microseconds (500 kHz), so to trigger at that rate indefinitely I would need a transfer speed of at least 500,000 records/s x 512 bytes = 256 MB/s.  So clearly, in this case the limiting factor for increasing the trigger rate while still acquiring indefinitely is the rate at which I transfer records from onboard memory to my PC.
    To maximize my record transfer rate, I should transfer many records at once using the Multi Fetch VI, as opposed to the theoretically slower method of transferring one at a time.  To compare the transfer rates of the all-at-once and one-at-a-time methods, I modified the niScope EX Timestamps.vi to let me choose between them by changing the constant wired to the Fetch Number of Records property node to -1 or 1 respectively.  I also added a loop that ensures that all records are acquired before I begin the transfer, so that acquisition and trigger rates do not interfere with measuring the record transfer rate.  This modified VI is attached to this post.
    I have the following results for acquiring 10k records.  My measurements are done using the Profile Performance and Memory Tool.
    I am using a 250kHz analog pulse source.
    Fetching 10,000 records one record at a time, the niScope Multi Fetch cluster takes a total time of 1546.9 milliseconds, or 155 microseconds per record.
    Fetching 10,000 records all at once, the niScope Multi Fetch cluster takes a total time of 1703.1 milliseconds, or 170 microseconds per record.
    I have tried this for larger and smaller total numbers of records, and the transfer time per record is always around 170 microseconds, regardless of whether I transfer one at a time or all at once.  But with a 100 MB/s link and a 512-byte record size, the fetch speed should approach 5 microseconds per record as you increase the number of records fetched at once.
    With this, my application will be limited to a trigger rate of 5 kHz when running indefinitely, when it should be capable of closer to a 200 kHz trigger rate for extended periods of time.  I have a feeling that I am missing something simple or am just confused about how the Fetch functions should work. Please enlighten me.
    Attachments:
    Timestamps.vi (73 KB)

    Hi ESD
    Your numbers for testing the PXI bandwidth look good.  A value of approximately 100 MB/s is reasonable when pulling data across the PXI bus continuously in larger chunks.  This may decrease a little when working with MXI in comparison to using an embedded PXI controller.  I expect you were using the streaming example "niScope Stream to Memory Maximum Transfer Rate.vi" found here: http://zone.ni.com/devzone/cda/epd/p/id/5273.
    Acquiring multiple triggered records is a little different.  There are a few techniques that will help to make sure that you are able to fetch your data fast enough to keep up with the acquired data or the desired reference trigger rate.  You are certainly correct that it is more efficient to transfer larger amounts of data at once, instead of small amounts of data more frequently, as the overhead due to DMA transfers becomes significant.
    The trend you saw, that fetching fewer records was more efficient, sounded odd, so I ran your example and tracked down what was causing it.  I believe it is actually the FOR loop that you had in your acquisition loop.  I made a few modifications to the application to display the total fetch time to acquire 10,000 records.  The best fetch time is when all records are pulled in at once. I left your code in the application but temporarily disabled the FOR loop to show the fetch performance. I also added a loop to ramp the fetch number up and graph the fetch times.  I will attach the modified application as well as the fetch results I saw on my system for reference.  When the FOR loop is enabled, the performance is worst at 1-record fetches; the fetch time dips around 500 records/fetch and begins to ramp up again as records/fetch increases to 10,000.
    Note I am using the 2D I16 fetch, as it is more efficient to keep the data unscaled.  I have also added an option to use immediate triggering - this is just because I was not near my hardware to physically connect a signal, so I used the trigger holdoff property to simulate a given trigger rate.
    Hope this helps.  I was working in LabVIEW 8.5; if you are working with an earlier version, let me know.
    Attachments:
    RecordFetchingTest.vi (143 KB)
    FetchTrend.JPG (37 KB)

  • Fetching 10 records at a time

    The Product.Category dimension has 4 child nodes: Accessories, Bikes, Clothing and Components. My problem is that when I have thousands of first-level nodes, my application takes a long time to load. Is there a way to fetch only, say, 100 records at a time, so that when I click a next button I get the next 100?
    E.g. on the first click of a button I fetch 2 members:
    WITH MEMBER [Measures].[ChildrenCount] AS
    [Product].[Category].CurrentMember.Children.Count
    SELECT [Measures].[ChildrenCount] ON 1
    ,TopCount([Product].[Category].Members, 2) on 0
    FROM [Adventure Works]
    This fetches only Accessories. Is there a way to fetch the next two records, Bikes and Clothing, on the next click, and then Components on the click after that? And so on and so forth.

    Hi Tsunade,
    According to your description, there are thousands of members in your cube. It takes a long time to retrieve all the members at once, so to improve performance you are looking for a way to fetch a fixed number of records at a time, right? Based on my research, there is currently no such functionality to work around this requirement.
    If you have any concerns about this behavior, you can submit feedback at http://connect.microsoft.com/SQLServer/Feedback and hope it is resolved in the next release of the service pack or the product. Your feedback enables Microsoft to make its software and services the best that they can be, and Microsoft might consider adding this feature in a following release after official confirmation.
    Regards,
    Charlie Liao
    TechNet Community Support
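    For what it's worth, MDX does have a positional Subset() function that can emulate paging if the client keeps track of the offset; the server still evaluates the full set, so it is not true server-side paging, which is presumably what Charlie means. A sketch against the same cube:

        WITH MEMBER [Measures].[ChildrenCount] AS
            [Product].[Category].CurrentMember.Children.Count
        SELECT [Measures].[ChildrenCount] ON 1,
               // second page: skip the first 2 members, return the next 2
               Subset([Product].[Category].Members, 2, 2) ON 0
        FROM [Adventure Works]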

  • How to find out the latest Record in per_all_people_f and per_all_assignme

    Hi,
    How can I find the latest record in per_all_people_f and per_all_assignments_f?
    Requirement: I need to find the latest record in per_all_people_f and per_all_assignments_f in order to update an attribute column with a predefined value. It is not possible to track it with only person_id / assignment_id and the effective end date.
    SELECT pp_id
      FROM (SELECT app.person_id pp_id,
                   asf.*
              FROM apps.per_all_people_f app,
                   apps.per_all_assignments_f asf
             WHERE --app.person_id=123568 and
                   asf.person_id = app.person_id
               AND app.effective_end_date = to_date('31-dec-4712')
               AND asf.effective_end_date = to_date('31-dec-4712')
             GROUP BY app.person_id)
    HAVING COUNT(pp_id) > 1
     GROUP BY pp_id
    This query also returns more than one row per person_id.
    It would be great if you could comment on this. Thanks in advance,
    Arya

    I am getting more records with asf.primary_flag = 'Y'. If you give your mail ID, I will send the sample data:
    ASSIGNMENT_ID | EFFECTIVE_START_DATE | EFFECTIVE_END_DATE | PERSON_ID | ASSIGNMENT_STATUS_TYPE_ID | ASSIGNMENT_SEQUENCE | ASSIGNMENT_TYPE | PRIMARY_FLAG | ASSIGNMENT_NUMBER
    931510        | 7-Nov-08             | 31-Dec-12          | 734956    | 1                         | 2                   | E               | Y            | 100035417-2
    797386        | 26-Aug-08            | 26-Aug-08          | 734956    | 3                         | 1                   | E               | Y            | 100035417
    916076        | 26-Aug-08            | 31-Dec-12          | 734956    | 1                         | 1                   | B               | Y            | (null)
    797386        | 25-Feb-08            | 25-Aug-08          | 734956    | 1                         | 1                   | E               | Y            | 100035417
    (remaining columns, mostly NULL in this sample, omitted for readability)
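    If the goal is simply "one latest row per person", an analytic function avoids the grouping problem entirely. A sketch for the assignments side; ordering by effective_start_date with assignment_sequence as the tie-breaker is an assumption, since the thread never settles which column should win ties:

        -- Rank each person's assignment rows, newest first, and keep rank 1.
        SELECT *
          FROM (SELECT asf.*,
                       ROW_NUMBER() OVER (
                           PARTITION BY asf.person_id
                           ORDER BY asf.effective_start_date DESC,
                                    asf.assignment_sequence DESC) rn
                  FROM apps.per_all_assignments_f asf
                 WHERE asf.primary_flag = 'Y')
         WHERE rn = 1;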

  • Can we split and fetch the records in Database Adapter

    Hi,
    I designed a Database Adapter to fetch records from an Oracle database. Sometimes the Database Adapter needs to fetch around 5,000 or 10,000 records in a single shot. In that case my BPEL process chokes and fails with the error:
    java.lang.OutOfMemoryError: Java heap space at java.util.Arrays.copyOf(Arrays.java:2882) at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
    Could someone help me resolve this?
    In the Database Adapter, can we split and fetch the records when the number of records is more than 1000?
    E.g. the first 100 records as one set, the next 100 as a second set, and so on.
    Thank you.

    You can send the records as batches using the debatching feature of the DB adapter. Refer to the documentation for implementation details.
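    The debatching knobs live in the adapter's activation spec. A rough sketch of the shape only; the property names MaxRaiseSize and MaxTransactionSize are from the Oracle SOA DB Adapter documentation, while the descriptor and query names here are made up, and the exact file layout varies by SOA release:

        <!-- Debatching: raise 100 rows per message, 1000 rows per transaction. -->
        <activation-spec className="oracle.tip.adapter.db.DBActivationSpec">
          <property name="DescriptorName" value="FetchRecords.MyTable"/>
          <property name="QueryName" value="FetchRecordsSelect"/>
          <property name="MaxRaiseSize" value="100"/>
          <property name="MaxTransactionSize" value="1000"/>
        </activation-spec>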

  • Query takes too much time in fetching last records.

    Hi,
    I am using Oracle 8.1 and trying to execute a SQL statement; it takes a few minutes and then displays records.
    When fetching all the records, it is fast up to a point and then takes much longer to fetch the last records.
    E.g. if the total is 16,336 records, it fetches quickly up to about record 16,300 and then takes approx. 500 seconds to fetch the last 36 records.
    Could you kindly let me know the reason?
    I have copied the explain plan below for your reference. Please let me know if anything else is required.
    SELECT STATEMENT, GOAL = RULE               4046     8     4048
    NESTED LOOPS OUTER               4046     8     4048
      NESTED LOOPS OUTER               4030     8     2952
       FILTER                         
        NESTED LOOPS OUTER                         
         NESTED LOOPS OUTER               4014     8     1728
          NESTED LOOPS               3998     8     936
           TABLE ACCESS BY INDEX ROWID     IFSAPP     CUSTOMER_ORDER_TAB     3966     8     440
            INDEX RANGE SCAN     IFSAPP     CUSTOMER_ORDER_1_IX     108     8     
           TABLE ACCESS BY INDEX ROWID     IFSAPP     CUSTOMER_ORDER_LINE_TAB     4     30667     1901354
            INDEX RANGE SCAN     IFSAPP     CUSTOMER_ORDER_LINE_PK     3     30667     
          TABLE ACCESS BY INDEX ROWID     IFSAPP     PWR_CONS_PARCEL_CONTENT_TAB     2     2000     198000
           INDEX RANGE SCAN     IFSAPP     PWR_CONS_PARCEL_CONTENT_1_IDX     1     2000     
         TABLE ACCESS BY INDEX ROWID     IFSAPP     PWR_CONS_PARCEL_TAB     1     2000     222000
          INDEX UNIQUE SCAN     IFSAPP     PWR_CONS_PARCEL_PK          2000     
       TABLE ACCESS BY INDEX ROWID     IFSAPP     CONSIGNMENT_PARCEL_TAB     1     2000     84000
        INDEX UNIQUE SCAN     IFSAPP     CONSIGNMENT_PARCEL_PK          2000     
      TABLE ACCESS BY INDEX ROWID     IFSAPP     PWR_OBJECT_CONNECTION_TAB     2     20     2740
       INDEX RANGE SCAN     IFSAPP     PWR_OBJECT_CONNECTION_IX1     1     20
    Thanks.

    "We are using PL/SQL Developer tool. The time we mentioned in the post is an approximate time. Apologies for not mentioning these details in the previous thread."
    Let it be approximate time, but how did you arrive at that time? When a query fetches records, how did you derive that one portion is fetched in x and the remainder in y time?
    I would suggest this could be some issue with PL/SQL Developer (I have never used this tool myself). For performance testing I would suggest you use SQL*Plus; that is the best tool to test performance.

  • How can we split a select query to 3 or 4 if it is fetching much records?

    I am running a query like:
    select * from table_name
    It will fetch 152,940,696 records. Now I want to fetch this result with 3 or 4 select statements. That is, in the second query I want to fetch the records from where I stopped in the first query, and similarly the 3rd has to continue from where the 2nd stopped, and the 4th from where the 3rd stopped.
    When I tried with ROWNUM, I could fetch records up to < or <= a particular count, like 100,000,000. But beyond that count I cannot fetch using ROWNUM, because > or >= won't work with ROWNUM.
    Is there any other way to split the select query as I explained?
    Thanks in advance

    I'll assume you want to split the query up for performance reasons.
    The easiest way to do this if you have the license is to use the parallel query option, which can help, hurt or do nothing. The only way to find out is to try. PQO would be best from a performance standpoint if possible, provided it will do what you need.
    Failing that as has been suggested you need a logical, scalable way to divide up the queries. It has already been pointed out that the rownum solution probably will not work correctly. Also, the MINUS with ROWNUM idea has the disadvantage of having to read a lot of the same data twice, making the query run longer.
    Perhaps a range would provide a way to split up the data - something like:
    select whatever
      from table
     where primary_key < 10000000;

    select whatever
      from table
     where primary_key between 10000000 and 19999999;
    ...
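    If an evenly distributed key range is hard to come by, another option (Oracle 10g and later) is to hash the key into disjoint buckets; table_name and primary_key_col here are placeholders:

        -- Split the full scan into 4 disjoint slices without ROWNUM.
        -- Run one query per bucket value (0, 1, 2, 3), e.g. in parallel sessions.
        SELECT *
          FROM table_name
         WHERE ORA_HASH(primary_key_col, 3) = 0;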

  • Best way to Fetch the record

    Hi,
    Please suggest the best way to fetch records from the table designed below. It is Oracle 10gR2 on Linux.
    Whenever a client visits the office, a record is created for him. Company policy is to maintain 10 years of data in the transaction table, which grows by about 3 million records per year.
    The table has the following key columns for the select (sample table):

        CLIENT_VISIT
          ID          NUMBER(12,0)   -- sequence-generated number
          EFF_DTE     DATE           -- effective date of the customer (sometimes the client becomes invalid and later becomes valid again)
          CREATE_TS   TIMESTAMP(6)
          CLIENT_ID   NUMBER(9,0)
          CASCADE_FLG VARCHAR2(1)

    On most of the reports the records are fetched by MAX(eff_dte), MAX(create_ts) and cascade flag = 'Y'.
    I have the following two queries, but both are not cost-effective and take 8 minutes to display the records.
    Code 1:
    SELECT au_subtyp1.au_id_k,
           au_subtyp1.pgm_struct_id_k
      FROM au_subtyp au_subtyp1
     WHERE au_subtyp1.create_ts =
             (SELECT MAX (au_subtyp2.create_ts)
                FROM au_subtyp au_subtyp2
               WHERE au_subtyp2.au_id_k = au_subtyp1.au_id_k
                 AND au_subtyp2.create_ts < TO_DATE ('2013-01-01', 'YYYY-MM-DD')
                 AND au_subtyp2.eff_dte =
                       (SELECT MAX (au_subtyp3.eff_dte)
                          FROM au_subtyp au_subtyp3
                         WHERE au_subtyp3.au_id_k = au_subtyp2.au_id_k
                           AND au_subtyp3.create_ts < TO_DATE ('2013-01-01', 'YYYY-MM-DD')
                           AND au_subtyp3.eff_dte <= TO_DATE ('2012-12-31', 'YYYY-MM-DD')))
       AND au_subtyp1.exists_flg = 'Y'
    Explain Plan
    Plan hash value: 2534321861
    | Id  | Operation                | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |           |     1 |    91 |       | 33265   (2)| 00:06:40 |
    |*  1 |  FILTER                  |           |       |       |       |            |          |
    |   2 |   HASH GROUP BY          |           |     1 |    91 |       | 33265   (2)| 00:06:40 |
    |*  3 |    HASH JOIN             |           |  1404K|   121M|    19M| 33178   (1)| 00:06:39 |
    |*  4 |     HASH JOIN            |           |   307K|    16M|  8712K| 23708   (1)| 00:04:45 |
    |   5 |      VIEW                | VW_SQ_1   |   307K|  5104K|       | 13493   (1)| 00:02:42 |
    |   6 |       HASH GROUP BY      |           |   307K|    13M|   191M| 13493   (1)| 00:02:42 |
    |*  7 |        INDEX FULL SCAN   | AUSU_PK   |  2809K|   125M|       | 13493   (1)| 00:02:42 |
    |*  8 |      INDEX FAST FULL SCAN| AUSU_PK   |  2809K|   104M|       |  2977   (2)| 00:00:36 |
    |*  9 |     TABLE ACCESS FULL    | AU_SUBTYP |  1404K|    46M|       |  5336   (2)| 00:01:05 |
    Predicate Information (identified by operation id):
       1 - filter("AU_SUBTYP1"."CREATE_TS"=MAX("AU_SUBTYP2"."CREATE_TS"))
       3 - access("AU_SUBTYP2"."AU_ID_K"="AU_SUBTYP1"."AU_ID_K")
       4 - access("AU_SUBTYP2"."EFF_DTE"="VW_COL_1" AND "AU_ID_K"="AU_SUBTYP2"."AU_ID_K")
       7 - access("AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd
                  hh24:mi:ss') AND "AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
           filter("AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND
                  "AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       8 - filter("AU_SUBTYP2"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
       9 - filter("AU_SUBTYP1"."EXISTS_FLG"='Y')Code 2:
    I already raised a thread a week back, and Dom suggested the following query; it is cost-effective, but the performance is the same and it used the same amount of temp tablespace.
    select au_id_k, pgm_struct_id_k
      from (SELECT au_id_k,
                   pgm_struct_id_k,
                   ROW_NUMBER() OVER (PARTITION BY au_id_k
                                      ORDER BY eff_dte DESC, create_ts DESC) rn,
                   create_ts, eff_dte, exists_flg
              FROM au_subtyp
             WHERE create_ts < TO_DATE('2013-01-01', 'YYYY-MM-DD')
               AND eff_dte  <= TO_DATE('2012-12-31', 'YYYY-MM-DD')) d
     where rn = 1
       and exists_flg = 'Y'
    --Explain Plan
    Plan hash value: 4039566059
    | Id  | Operation                | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |           |  2809K|   168M|       | 40034   (1)| 00:08:01 |
    |*  1 |  VIEW                    |           |  2809K|   168M|       | 40034   (1)| 00:08:01 |
    |*  2 |   WINDOW SORT PUSHED RANK|           |  2809K|   133M|   365M| 40034   (1)| 00:08:01 |
    |*  3 |    TABLE ACCESS FULL     | AU_SUBTYP |  2809K|   133M|       |  5345   (2)| 00:01:05 |
    Predicate Information (identified by operation id):
       1 - filter("RN"=1 AND "EXISTS_FLG"='Y')
       2 - filter(ROW_NUMBER() OVER ( PARTITION BY "AU_ID_K" ORDER BY
                  INTERNAL_FUNCTION("EFF_DTE") DESC ,INTERNAL_FUNCTION("CREATE_TS") DESC )<=1)
       3 - filter("CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND "EFF_DTE"<=TO_DATE('
                  2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))Thanks,
    Vijay

    Hi Justin,
    Thanks for your reply. I am running this on our test environment, as I don't want to run it on the production environment right now. The test environment holds 2,809,605 records (about 2.8 million).
    The query output count is 281,699 records and the selectivity is 0.099. The number of distinct (create_ts, eff_dte, exists_flg) combinations is 2,808,905. I am sure the index scan is not going to help much, as you said.
    The core problem is that both queries use a lot of temp tablespace. When we use this query to join to the other tables (the other table has the same design as above), the temp tablespace grows even bigger.
    Both the production and test environments are 3-node RAC.
    First Query...
    CPU used by this session     4740
    CPU used when call started     4740
    Cached Commit SCN referenced     21393
    DB time     4745
    OS Involuntary context switches     467
    OS Page reclaims     64253
    OS System time used     26
    OS User time used     4562
    OS Voluntary context switches     16
    SQL*Net roundtrips to/from client     9
    bytes received via SQL*Net from client     2487
    bytes sent via SQL*Net to client     15830
    calls to get snapshot scn: kcmgss     37
    consistent gets     52162
    consistent gets - examination     2
    consistent gets from cache     52162
    enqueue releases     19
    enqueue requests     19
    enqueue waits     1
    execute count     2
    ges messages sent     1
    global enqueue gets sync     19
    global enqueue releases     19
    index fast full scans (full)     1
    index scans kdiixs1     1
    no work - consistent read gets     52125
    opened cursors cumulative     2
    parse count (hard)     1
    parse count (total)     2
    parse time cpu     1
    parse time elapsed     1
    physical write IO requests     69
    physical write bytes     17522688
    physical write total IO requests     69
    physical write total bytes     17522688
    physical write total multi block requests     69
    physical writes     2139
    physical writes direct     2139
    physical writes direct temporary tablespace     2139
    physical writes non checkpoint     2139
    recursive calls     19
    recursive cpu usage     1
    session cursor cache hits     1
    session logical reads     52162
    sorts (memory)     2
    sorts (rows)     760
    table scan blocks gotten     23856
    table scan rows gotten     2809607
    table scans (short tables)     1
    user I/O wait time     1
    user calls     11
    workarea executions - onepass     1
    workarea executions - optimal     9
    Second Query
    CPU used by this session     1197
    CPU used when call started     1197
    Cached Commit SCN referenced     21393
    DB time     1201
    OS Involuntary context switches     8684
    OS Page reclaims     21769
    OS System time used     14
    OS User time used     1183
    OS Voluntary context switches     50
    SQL*Net roundtrips to/from client     9
    bytes received via SQL*Net from client     767
    bytes sent via SQL*Net to client     15745
    calls to get snapshot scn: kcmgss     17
    consistent gets     23871
    consistent gets from cache     23871
    db block gets     16
    db block gets from cache     16
    enqueue releases     25
    enqueue requests     25
    enqueue waits     1
    execute count     2
    free buffer requested     1
    ges messages sent     1
    global enqueue get time     1
    global enqueue gets sync     25
    global enqueue releases     25
    no work - consistent read gets     23856
    opened cursors cumulative     2
    parse count (hard)     1
    parse count (total)     2
    parse time elapsed     1
    physical read IO requests     27
    physical read bytes     6635520
    physical read total IO requests     27
    physical read total bytes     6635520
    physical read total multi block requests     27
    physical reads     810
    physical reads direct     810
    physical reads direct temporary tablespace     810
    physical write IO requests     117
    physical write bytes     24584192
    physical write total IO requests     117
    physical write total bytes     24584192
    physical write total multi block requests     117
    physical writes     3001
    physical writes direct     3001
    physical writes direct temporary tablespace     3001
    physical writes non checkpoint     3001
    recursive calls     25
    session cursor cache hits     1
    session logical reads     23887
    sorts (disk)     1
    sorts (memory)     2
    sorts (rows)     2810365
    table scan blocks gotten     23856
    table scan rows gotten     2809607
    table scans (short tables)     1
    user I/O wait time     2
    user calls     11
    workarea executions - onepass     1
    workarea executions - optimal     5

    Thanks,
    Vijay

  • Retrieving latest record of an employee for a same scholar ID from pa9048

    Hi Experts!
    I want to retrieve the latest record from the PA9048 table where the scholar ID (SNAME) is the same for an employee. Can anyone tell me how to do that?
    "Latest record" here means latest by date.
    Thanks in advance!

    Hi,
    Use the RP-PROVIDE-FROM-LAST <inftytab> <subty> <beg> <end> statement to retrieve the last record.
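    A minimal sketch of that statement inside a PNP-based report; it assumes the report uses logical database PNP and declares infotype 9048 with INFOTYPES, so that PNP fills the internal table P9048 per employee (the report name is hypothetical):

        REPORT z_latest_9048.  " logical database: PNP
        TABLES: pernr.
        INFOTYPES: 9048.

        GET pernr.
          " Picks the P9048 record with the latest end date within
          " the selection period PN-BEGDA .. PN-ENDDA.
          rp-provide-from-last p9048 space pn-begda pn-endda.
          IF pnp-sw-found = '1'.
            WRITE: / pernr-pernr, p9048-begda, p9048-endda.
          ENDIF.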

  • Latest record to get from MSEG Table

    Hi friends,
    I am getting multiple records per material number; I want a single record, the latest one.
    SELECT DISTINCT MBLNR MATNR LIFNR
            FROM MSEG
            INTO TABLE IT_MSEG
            WHERE MATNR IN S_MATNR
            ORDER BY MATNR MBLNR DESCENDING.
    S_MATNR = multiple MATNR values
    e.g.:  matnr   mblnr   lifnr
           test    1       abc
           test    2       abc
           test2   4       xyz
           test3   3       abc
           test3   5       vvvv
    I want in my ITAB:
    e.g.:  matnr   mblnr   lifnr
           test    1       abc
           test2   4       xyz
           test3   5       vvvv
    Here I am sorting by mblnr to pick the latest record.
    How will I get this?
    Regards.

    Hi,
    Just do one thing: if after sorting you are getting the latest record first for each material, then use
    DELETE ADJACENT DUPLICATES FROM itab COMPARING matnr.
    Regards
    Vijay dwivedi
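    In full, Vijay's suggestion amounts to the following sketch, assuming IT_MSEG already contains the selected rows:

        " Newest material document first within each material,
        " then keep only the first (latest) row per MATNR.
        SORT it_mseg BY matnr ASCENDING mblnr DESCENDING.
        DELETE ADJACENT DUPLICATES FROM it_mseg COMPARING matnr.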
