Fetching limited records in SQL

Hi All,
I need all your valuable advice again.
The issue:
Right now, in the SELECT below, the inner query fetches all the records from the database, and the outer query picks the exact block of records out of the inner query's results (the total available).
Example: consider a person search query used to filter records by department, and assume this query yields 30K records.
What we do today is:
SELECT * FROM (
  SELECT rownum rownumber, deptname
  FROM person
  WHERE department = 'Electronics'   -- assume this inner query fetches 30K records
) WHERE rownumber BETWEEN 100 AND 125;   -- out of the 30K we only need rows 100 to 125
Solution:
How do we make the inner query limit the maximum number of records fetched? Presently it fetches all records. The fix is to cap ROWNUM in the inner query:
SELECT * FROM (
  SELECT rownum rownumber, deptname
  FROM person
  WHERE department = 'Electronics' AND rownum <= 125   -- now the inner query fetches only 125 records
) WHERE rownumber BETWEEN 100 AND 125;   -- out of the 30K we only need rows 100 to 125

Take a look at these two articles:
[On Top-N and Pagination Queries|http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html]
[On Rownum and Limiting Results|http://www.oracle.com/technology/oramag/oracle/06-sep/o56asktom.html]
These show how to handle all sorts of situations (e.g. whether or not you care about ordering). In any case, the simple solution is:
Pagination with ROWNUM
My all-time-favorite use of ROWNUM is pagination. In this case, I use ROWNUM to get rows N through M of a result set. The general form is as follows:
select *
  from ( select /*+ FIRST_ROWS(n) */
                a.*, ROWNUM rnum
           from ( your_query_goes_here,
                  with order by ) a
          where ROWNUM <= :MAX_ROW_TO_FETCH )
 where rnum >= :MIN_ROW_TO_FETCH;
where
    * FIRST_ROWS(N) tells the optimizer, "Hey, I'm interested in getting the first rows, and I'll get N of them as fast as possible."
    * :MAX_ROW_TO_FETCH is set to the last row of the result set to fetch—if you wanted rows 50 to 60 of the result set, you would set this to 60.
    * :MIN_ROW_TO_FETCH is set to the first row of the result set to fetch, so to get rows 50 to 60, you would set this to 50.
The concept behind this scenario is that an end user with a Web browser has done a search and is waiting for the results. It is imperative to return the first result page (and second page, and so on) as fast as possible. If you look at that query closely, you'll notice that it incorporates a top-N query (get the first :MAX_ROW_TO_FETCH rows from your query) and hence benefits from the top-N query optimization I just described. Further, it returns over the network to the client only the specific rows of interest, removing any leading rows from the result set that are not needed. Just plug your query (without worrying about the number of rows returned) into the template query. The articles explain how to do more complex things.
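As a concrete illustration, here is a sketch that plugs the person search from the question into the template (the table and column names come from the example above; using deptname as the ORDER BY column is an assumption):

select *
  from ( select /*+ FIRST_ROWS(25) */
                a.*, ROWNUM rnum
           from ( select deptname
                    from person
                   where department = 'Electronics'
                   order by deptname ) a
          where ROWNUM <= 125 )   -- :MAX_ROW_TO_FETCH
 where rnum >= 100;               -- :MIN_ROW_TO_FETCH

Unlike the rownum-in-the-WHERE-clause version above, this form stays correct when the inner query has an ORDER BY, because ROWNUM is applied in the enclosing query block only after the sort.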

Similar Messages

  • Fetching limited records from VO

    Hi,
    We have an OA Framework page with a classic table and some LOVs. The classic table gets its data from a VO (only 20 records are displayed in the table; the Next link has to be used to view more records). The maximum number of records in the VO is 50000 (a business requirement). Because of the huge amount of data, the page takes a very long time to load (the session sometimes even expires!). Below are the questions.
    1. Is it possible to query only the first 20 records from the VO when the page loads initially? The next set of 20 records should be loaded on clicking the Next link.
    2. If the above can't be done, is there an alternative method by which the performance issue can be solved?
    Any comments on this would be of great help.
    Thanks,
    Shree

    Shree,
    What you are suggesting would require the user to click the Next link 2500 times if the last record needs to be accessed!
    In any case, the VO would query all the records in one go; all queries work like that, don't they? The table shows a limited number of records, and the Previous/Next links can be used to navigate to the other data. Now, 50000 records means that the value of the profile option FND: View Object Max Fetch Size is far above the recommended value of 200. I suggest you have another look at the design of the form so that one query outputs a limited number of records; otherwise, accept the performance issues as they are. Increasing the profile value would have performance impacts.
    Hope that clarifies.
    Regards
    Sumit

  • How to quickly fetch the records from an SQL recordset

    I'm using LW/CVI V7.1.1 and SQL Toolkit V2.06.
    When displaying the recordset from a SELECT statement I use the following construct:
      SQLhandle1 = DBActivateSQL (DBhandle, SELECTtext);
      /* DBBindCol() statements ... */
      numRecs = DBNumberOfRecords (SQLhandle1);
      for (n = 1; n <= numRecs; n++) {
        DBFetchNext (SQLhandle1);
        /* display the record to the user ... */
      }
      DBDeactivateSQL (SQLhandle1);
    This has always worked fine for me with local databases. Now I am developing an app against a remote database, and fetching each selected record is proving to be an issue: each fetch takes, at best, 60 ms of round-trip network access. When selecting very many records, the fetches add up to a considerable delay.
    My question is: how can I bind the entire recordset to my application variables (or to a local table?) in a single request to the database? Does LW/CVI support such a method? Or perhaps someone knows an SQL method to help me?
    Thanks

    Hi Michael,
    Thanks for the help; this is what I was looking for. Not sure why I missed it!
    However, after trying it out, it doesn't seem to help. The statement DBGetVariantArray(SQLhandle, &array, &recs, &fields); seems to take the same amount of time to get the records as the individual DBFetchNext() calls. So if my SQL statement matches 100 records, the DBGetVariantArray() call takes 100 * 60 ms to complete.
    Is there a DB attribute setting that needs to be changed?

  • Fetching many records all at once is no faster than fetching one at a time

    Hello,
    I am having a problem getting NI-Scope to perform adequately for my application.  I am sorry for the long post, but I have been going around and around with an NI engineer through email and I need some other input.
    I have the following software and equipment:
    LabView 8.5
    NI-Scope 3.4
    PXI-1033 chassis
    PXI-5105 digitizer card
    DELL Latitude D830 notebook computer with 4 GB RAM.
    I tested the transfer speed of my connection to the PXI-1033 chassis using the niScope Stream to Memory Maximum Transfer Rate.vi found here:
    http://zone.ni.com/devzone/cda/epd/p/id/5273.  The result was 101 MB/s.
    I am trying to set up a system whereby I can press the start button and acquire short waveforms which are individually triggered, and I wish to acquire these individually triggered waveforms indefinitely. Furthermore, I wish to maximize the rate at which the triggers occur. In the limiting case where I acquire records of one sample, the record size in memory is 512 bytes (using the formula to calculate 'Allocated Onboard Memory per Record' found in the NI PXI/PCI-5105 Specifications under the heading 'Waveform Specifications', pg. 16). The PXI-5105 trigger re-arms in about 2 microseconds (500 kHz), so to trigger at that rate indefinitely I would need a transfer speed of at least 512 bytes * 500k records/s = 256 MB/s. So clearly, in this case the limiting factor for increasing the trigger rate while still acquiring indefinitely is the rate at which I transfer records from memory to my PC.
    To maximize my record transfer rate, I should transfer many records at once using the Multi Fetch VI, as opposed to the theoretically slower method of transferring one at a time. To compare the two transfer methods, I modified the niScope EX Timestamps.vi to let me choose between them by changing the constant wired to the Fetch Number of Records property node to either -1 or 1 respectively. I also added a loop that ensures all records are acquired before I begin the transfer, so that acquisition and trigger rates do not interfere with measuring the record transfer rate. This modified VI is attached to this post.
    I have the following results for acquiring 10k records.  My measurements are done using the Profile Performance and Memory Tool.
    I am using a 250kHz analog pulse source.
    Fetching 10000 records 1 record at a time the niScope Multi Fetch
    Cluster takes a total time of 1546.9 milliseconds or 155 microseconds
    per record.
    Fetching 10000 records at once the niScope Multi Fetch Cluster takes a
    total time of 1703.1 milliseconds or 170 microseconds per record.
    I have tried this for larger and smaller total numbers of records, and the transfer time is always around 170 microseconds per record, regardless of whether I transfer one at a time or all at once. But with a 100 MB/s link and a 512-byte record size, the fetch time should approach about 5 microseconds per record (512 bytes / 100 MB/s) as the number of records fetched at once increases.
    At 170 microseconds per record, my application will be limited to a trigger rate of about 5 kHz when running indefinitely, whereas it should be capable of closer to a 200 kHz trigger rate for extended periods of time. I have a feeling that I am missing something simple or am just confused about how the Fetch functions should work. Please enlighten me.
    Attachments:
    Timestamps.vi ‏73 KB

    Hi ESD
    Your numbers for testing the PXI bandwidth look good.  A value of
    approximately 100MB/s is reasonable when pulling data across the PXI
    bus continuously in larger chunks.  This may decrease a little when
    working with MXI in comparison to using an embedded PXI controller.  I
    expect you were using the streaming example "niScope Stream to Memory
    Maximum Transfer Rate.vi" found here: http://zone.ni.com/devzone/cda/epd/p/id/5273.
    Acquiring multiple triggered records is a little different.  There are
    a few techniques that will help to make sure that you are able to fetch
    your data fast enough to be able to keep up with the acquired data or
    desired reference trigger rate.  You are certainly correct that it is
    more efficient to transfer larger amounts of data at once, instead of
    small amounts of data more frequently as the overhead due to DMA
    transfers becomes significant.
    The trend you saw, that fetching fewer records per call was more efficient, sounded odd, so I ran your example and tracked down what was causing it. I believe it is actually the for loop that you had in your acquisition loop. I made a few modifications to the application to display the total fetch time to acquire 10000 records. The best fetch time is when all records are pulled in at once. I left your code in the application but temporarily disabled the for loop to show the fetch performance. I also added a loop to ramp the fetch number up and graph the fetch times. I will attach the modified application as well as the fetch results I saw on my system for reference. When the for loop is enabled, performance is worst at 1-record fetches; the fetch time dips around 500 records/fetch and begins to ramp up again as records/fetch increases to 10000.
    Note I am using the 2D I16 fetch as it is more efficient to keep the data unscaled.  I have also added an option to use immediate triggering - this is just because I was not near my hardware to physically connect a signal so I used the trigger holdoff property to simulate a given trigger rate.
    Hope this helps.  I was working in LabVIEW 8.5, if you are working with an earlier version let me know.
    Message Edited by Jennifer O on 04-12-2008 09:30 PM
    Attachments:
    RecordFetchingTest.vi ‏143 KB
    FetchTrend.JPG ‏37 KB

  • Query takes too much time in fetching last records.

    Hi,
    I am using Oracle 8.1. When I execute a SQL statement, it takes a few minutes and then displays records.
    When fetching all the records, it is fast up to a point and then takes much longer to fetch the last records.
    Ex: if the total is 16336 records, it fetches quickly up to record 16300 and then takes approx. 500 sec to fetch the last 36 records.
    Could you kindly let me know the reason?
    I have copied the explain plan below for your reference (the trailing number columns appear to be Cost, Cardinality, and Bytes). Please let me know if anything else is required.
    SELECT STATEMENT, GOAL = RULE               4046     8     4048
    NESTED LOOPS OUTER               4046     8     4048
      NESTED LOOPS OUTER               4030     8     2952
       FILTER                         
        NESTED LOOPS OUTER                         
         NESTED LOOPS OUTER               4014     8     1728
          NESTED LOOPS               3998     8     936
           TABLE ACCESS BY INDEX ROWID     IFSAPP     CUSTOMER_ORDER_TAB     3966     8     440
            INDEX RANGE SCAN     IFSAPP     CUSTOMER_ORDER_1_IX     108     8     
           TABLE ACCESS BY INDEX ROWID     IFSAPP     CUSTOMER_ORDER_LINE_TAB     4     30667     1901354
            INDEX RANGE SCAN     IFSAPP     CUSTOMER_ORDER_LINE_PK     3     30667     
          TABLE ACCESS BY INDEX ROWID     IFSAPP     PWR_CONS_PARCEL_CONTENT_TAB     2     2000     198000
           INDEX RANGE SCAN     IFSAPP     PWR_CONS_PARCEL_CONTENT_1_IDX     1     2000     
         TABLE ACCESS BY INDEX ROWID     IFSAPP     PWR_CONS_PARCEL_TAB     1     2000     222000
          INDEX UNIQUE SCAN     IFSAPP     PWR_CONS_PARCEL_PK          2000     
       TABLE ACCESS BY INDEX ROWID     IFSAPP     CONSIGNMENT_PARCEL_TAB     1     2000     84000
        INDEX UNIQUE SCAN     IFSAPP     CONSIGNMENT_PARCEL_PK          2000     
      TABLE ACCESS BY INDEX ROWID     IFSAPP     PWR_OBJECT_CONNECTION_TAB     2     20     2740
        INDEX RANGE SCAN     IFSAPP     PWR_OBJECT_CONNECTION_IX1     1     20
    Thanks.

    We are using the PL/SQL Developer tool. The times we mentioned in the post are approximate. Apologies for not mentioning these details in the previous thread.
    Let it be approximate time, but how did you arrive at that time? When a query fetches records, how did you determine that one portion is fetched in x and the remainder in y time?
    I would suggest this could be an issue with PL/SQL Developer (I have never used this tool myself). For performance testing I would suggest you use SQL*Plus. That's the best tool to test performance.
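    For timing the complete fetch rather than just the first screen of rows, a sketch of how this might be done in SQL*Plus (AUTOTRACE TRACEONLY executes the query and fetches every row but suppresses the display, so the elapsed time covers the full result set):

    SET TIMING ON
    SET AUTOTRACE TRACEONLY STATISTICS
    -- now run the query under test: every row is fetched but not displayed,
    -- so the reported elapsed time covers the complete fetch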

  • Best way to Fetch the record

    Hi,
    Please suggest the best way to fetch records from the table described below. It is Oracle 10gR2 on Linux.
    Whenever a client visits the office, a record is created for him. The company policy is to maintain 10 years of data in the transaction table, and the table accumulates about 3 million records per year.
    The table has the following key columns for the SELECT (sample table):
    Client_Visit
    ID Number(12,0) -- sequence-generated number
    EFF_DTE DATE -- effective date of the customer (sometimes the client becomes invalid and later becomes valid again)
    Create_TS Timestamp(6)
    Client_ID Number(9,0)
    Cascade_Flg varchar2(1)
    On most of the reports, the records are fetched by Max(eff_dte) and Max(create_ts) with cascade flag = 'Y'.
    I have the following queries, but both of them are not cost effective and take 8 minutes to display the records.
    Code 1:
    SELECT au_subtyp1.au_id_k,
           au_subtyp1.pgm_struct_id_k
      FROM au_subtyp au_subtyp1
     WHERE au_subtyp1.create_ts =
              (SELECT MAX (au_subtyp2.create_ts)
                 FROM au_subtyp au_subtyp2
                WHERE au_subtyp2.au_id_k = au_subtyp1.au_id_k
                  AND au_subtyp2.create_ts < TO_DATE ('2013-01-01', 'YYYY-MM-DD')
                  AND au_subtyp2.eff_dte =
                         (SELECT MAX (au_subtyp3.eff_dte)
                            FROM au_subtyp au_subtyp3
                           WHERE au_subtyp3.au_id_k = au_subtyp2.au_id_k
                             AND au_subtyp3.create_ts < TO_DATE ('2013-01-01', 'YYYY-MM-DD')
                             AND au_subtyp3.eff_dte <= TO_DATE ('2012-12-31', 'YYYY-MM-DD')))
       AND au_subtyp1.exists_flg = 'Y'
    Explain Plan
    Plan hash value: 2534321861
    | Id  | Operation                | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |           |     1 |    91 |       | 33265   (2)| 00:06:40 |
    |*  1 |  FILTER                  |           |       |       |       |            |          |
    |   2 |   HASH GROUP BY          |           |     1 |    91 |       | 33265   (2)| 00:06:40 |
    |*  3 |    HASH JOIN             |           |  1404K|   121M|    19M| 33178   (1)| 00:06:39 |
    |*  4 |     HASH JOIN            |           |   307K|    16M|  8712K| 23708   (1)| 00:04:45 |
    |   5 |      VIEW                | VW_SQ_1   |   307K|  5104K|       | 13493   (1)| 00:02:42 |
    |   6 |       HASH GROUP BY      |           |   307K|    13M|   191M| 13493   (1)| 00:02:42 |
    |*  7 |        INDEX FULL SCAN   | AUSU_PK   |  2809K|   125M|       | 13493   (1)| 00:02:42 |
    |*  8 |      INDEX FAST FULL SCAN| AUSU_PK   |  2809K|   104M|       |  2977   (2)| 00:00:36 |
    |*  9 |     TABLE ACCESS FULL    | AU_SUBTYP |  1404K|    46M|       |  5336   (2)| 00:01:05 |
    Predicate Information (identified by operation id):
       1 - filter("AU_SUBTYP1"."CREATE_TS"=MAX("AU_SUBTYP2"."CREATE_TS"))
       3 - access("AU_SUBTYP2"."AU_ID_K"="AU_SUBTYP1"."AU_ID_K")
       4 - access("AU_SUBTYP2"."EFF_DTE"="VW_COL_1" AND "AU_ID_K"="AU_SUBTYP2"."AU_ID_K")
       7 - access("AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd
                  hh24:mi:ss') AND "AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
           filter("AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND
                  "AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       8 - filter("AU_SUBTYP2"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
       9 - filter("AU_SUBTYP1"."EXISTS_FLG"='Y')
    Code 2:
    I already raised a thread a week back and Dom suggested the following query. It is cost effective, but the performance is the same and it uses the same amount of temp tablespace:
    select au_id_k,pgm_struct_id_k from (
    SELECT au_id_k
          ,      pgm_struct_id_k
          ,      ROW_NUMBER() OVER (PARTITION BY au_id_k ORDER BY eff_dte DESC, create_ts DESC) rn,
          create_ts, eff_dte,exists_flg
          FROM   au_subtyp
          WHERE  create_ts < TO_DATE('2013-01-01','YYYY-MM-DD')
          AND    eff_dte  <= TO_DATE('2012-12-31','YYYY-MM-DD') 
          ) d  where rn =1   and exists_flg = 'Y'
    --Explain Plan
    Plan hash value: 4039566059
    | Id  | Operation                | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |           |  2809K|   168M|       | 40034   (1)| 00:08:01 |
    |*  1 |  VIEW                    |           |  2809K|   168M|       | 40034   (1)| 00:08:01 |
    |*  2 |   WINDOW SORT PUSHED RANK|           |  2809K|   133M|   365M| 40034   (1)| 00:08:01 |
    |*  3 |    TABLE ACCESS FULL     | AU_SUBTYP |  2809K|   133M|       |  5345   (2)| 00:01:05 |
    Predicate Information (identified by operation id):
       1 - filter("RN"=1 AND "EXISTS_FLG"='Y')
       2 - filter(ROW_NUMBER() OVER ( PARTITION BY "AU_ID_K" ORDER BY
                  INTERNAL_FUNCTION("EFF_DTE") DESC ,INTERNAL_FUNCTION("CREATE_TS") DESC )<=1)
       3 - filter("CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND "EFF_DTE"<=TO_DATE('
                  2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
    Thanks,
    Vijay
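    For comparison, here is a sketch of a third variant using MAX ... KEEP (DENSE_RANK LAST), which folds the ranking into a single aggregation. This is not from the thread; it still sorts within each au_id_k, so temp usage may not improve, but it avoids materializing the row-numbered view and may be worth benchmarking:

    SELECT au_id_k, pgm_struct_id_k
      FROM (SELECT au_id_k,
                   -- pick the values from the latest (eff_dte, create_ts) row per au_id_k
                   MAX(pgm_struct_id_k) KEEP (DENSE_RANK LAST ORDER BY eff_dte, create_ts) AS pgm_struct_id_k,
                   MAX(exists_flg)      KEEP (DENSE_RANK LAST ORDER BY eff_dte, create_ts) AS exists_flg
              FROM au_subtyp
             WHERE create_ts < TO_DATE('2013-01-01', 'YYYY-MM-DD')
               AND eff_dte  <= TO_DATE('2012-12-31', 'YYYY-MM-DD')
             GROUP BY au_id_k)
     WHERE exists_flg = 'Y';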

    Hi Justin,
    Thanks for your reply. I am running this on our test environment, as I don't want to run it on the production environment now. The test environment holds 2809605 records (about 2.8 million).
    The query output count is 281699 records (about 280 thousand), so the selectivity is 0.099. There are 2808905 distinct combinations of create_ts, eff_dte, and exists_flg. I am sure the index scan is not going to help out much, as you said.
    The core problem is that both queries use a lot of temp tablespace. When we use this query to join to other tables, which have the same design as below, the temp tablespace grows even bigger.
    Both the production and test environment are 3 Node RAC.
    First Query...
    CPU used by this session     4740
    CPU used when call started     4740
    Cached Commit SCN referenced     21393
    DB time     4745
    OS Involuntary context switches     467
    OS Page reclaims     64253
    OS System time used     26
    OS User time used     4562
    OS Voluntary context switches     16
    SQL*Net roundtrips to/from client     9
    bytes received via SQL*Net from client     2487
    bytes sent via SQL*Net to client     15830
    calls to get snapshot scn: kcmgss     37
    consistent gets     52162
    consistent gets - examination     2
    consistent gets from cache     52162
    enqueue releases     19
    enqueue requests     19
    enqueue waits     1
    execute count     2
    ges messages sent     1
    global enqueue gets sync     19
    global enqueue releases     19
    index fast full scans (full)     1
    index scans kdiixs1     1
    no work - consistent read gets     52125
    opened cursors cumulative     2
    parse count (hard)     1
    parse count (total)     2
    parse time cpu     1
    parse time elapsed     1
    physical write IO requests     69
    physical write bytes     17522688
    physical write total IO requests     69
    physical write total bytes     17522688
    physical write total multi block requests     69
    physical writes     2139
    physical writes direct     2139
    physical writes direct temporary tablespace     2139
    physical writes non checkpoint     2139
    recursive calls     19
    recursive cpu usage     1
    session cursor cache hits     1
    session logical reads     52162
    sorts (memory)     2
    sorts (rows)     760
    table scan blocks gotten     23856
    table scan rows gotten     2809607
    table scans (short tables)     1
    user I/O wait time     1
    user calls     11
    workarea executions - onepass     1
    workarea executions - optimal     9
    Second Query
    CPU used by this session     1197
    CPU used when call started     1197
    Cached Commit SCN referenced     21393
    DB time     1201
    OS Involuntary context switches     8684
    OS Page reclaims     21769
    OS System time used     14
    OS User time used     1183
    OS Voluntary context switches     50
    SQL*Net roundtrips to/from client     9
    bytes received via SQL*Net from client     767
    bytes sent via SQL*Net to client     15745
    calls to get snapshot scn: kcmgss     17
    consistent gets     23871
    consistent gets from cache     23871
    db block gets     16
    db block gets from cache     16
    enqueue releases     25
    enqueue requests     25
    enqueue waits     1
    execute count     2
    free buffer requested     1
    ges messages sent     1
    global enqueue get time     1
    global enqueue gets sync     25
    global enqueue releases     25
    no work - consistent read gets     23856
    opened cursors cumulative     2
    parse count (hard)     1
    parse count (total)     2
    parse time elapsed     1
    physical read IO requests     27
    physical read bytes     6635520
    physical read total IO requests     27
    physical read total bytes     6635520
    physical read total multi block requests     27
    physical reads     810
    physical reads direct     810
    physical reads direct temporary tablespace     810
    physical write IO requests     117
    physical write bytes     24584192
    physical write total IO requests     117
    physical write total bytes     24584192
    physical write total multi block requests     117
    physical writes     3001
    physical writes direct     3001
    physical writes direct temporary tablespace     3001
    physical writes non checkpoint     3001
    recursive calls     25
    session cursor cache hits     1
    session logical reads     23887
    sorts (disk)     1
    sorts (memory)     2
    sorts (rows)     2810365
    table scan blocks gotten     23856
    table scan rows gotten     2809607
    table scans (short tables)     1
    user I/O wait time     2
    user calls     11
    workarea executions - onepass     1
    workarea executions - optimal     5
    Thanks,
    Vijay
    Edited by: Vijayaraghavan Krishnan on Nov 28, 2012 11:17 AM
    Edited by: Vijayaraghavan Krishnan on Nov 28, 2012 11:19 AM

  • Slow Speed While fetching data from SQL 2008 using DoQuery.

    Hello,
    I am working on an add-on and tried to use DoQuery to fetch data from SQL 2008 in C#.
    There are around 148 records which fulfill this query condition, but it takes much time to fetch the data.
    I want to know whether there is any problem in this code that is making my application slower.
    I used breakpoints and checked it; I found that it is the connection to the server that is taking the time.
    Code:
    // Get an initialized SBObob object
    oSBObob = (SAPbobsCOM.SBObob)oCompany.GetBusinessObject(SAPbobsCOM.BoObjectTypes.BoBridge);
    // Get an initialized Recordset object
    oRecordset = (SAPbobsCOM.Recordset)oCompany.GetBusinessObject(SAPbobsCOM.BoObjectTypes.BoRecordset);
    string sqlstring = "select DocEntry,ItemCode From OWOR where OWOR.Status='R' and DocEntry not in ( Select distinct(BaseRef) from IGE1 where IGE1.BaseRef = OWOR.DocEntry)";
    oRecordset.DoQuery(sqlstring);
    var ProductList = new BindingList<KeyValuePair<string, string>>();
    ProductList.Add(new KeyValuePair<string, string>("", "---Please Select---"));
    while (!oRecordset.EoF)
    {
        // key = DocEntry, display text = "DocEntry ( ItemCode )"
        ProductList.Add(new KeyValuePair<string, string>(
            oRecordset.Fields.Item(0).Value.ToString(),
            oRecordset.Fields.Item(0).Value.ToString() + " ( " + oRecordset.Fields.Item(1).Value.ToString() + " ) "));
        oRecordset.MoveNext();   // must be inside the loop, or it never advances past the first record
    }
    cmbProductionOrder.ValueMember = "Key";
    cmbProductionOrder.DisplayMember = "Value";
    Thanks and Regards,
    Ravi Sharma

    Hi Ravi,
    your code and query look correct, but can you elaborate a little bit?
    It seems to be a DI API program (no UI API)?
    When you say "I found that it is the connection to the server that is taking the time", do you mean the Recordset query or the DI API connection to SBO? The latter would be "normal", since the connection can take up to 30 seconds.
    To get data it is usually better to use direct SQL connections.
    regards,
    Maik
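    If the query itself turns out to be the slow part, one rewrite worth trying (a sketch, not an official SAP recommendation) is replacing the correlated NOT IN with NOT EXISTS, which SQL Server often plans more efficiently and which also behaves safely if BaseRef can be NULL:

    SELECT DocEntry, ItemCode
    FROM OWOR
    WHERE OWOR.Status = 'R'
      AND NOT EXISTS (SELECT 1
                      FROM IGE1
                      WHERE IGE1.BaseRef = OWOR.DocEntry);  -- no IGE1 line references this order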

  • Select query on QALS table taking around 4 secs to fetch one record

    Hi,
    I have one select query that takes around 4 secs to fetch one record. I would like to know if there are any ways to reduce the time taken by this select.
    SELECT b~prueflos
           b~matnr
           b~lagortchrg
           a~vdatum
           a~kzart
           a~zaehler
           a~vcode
           a~vezeiterf
      FROM qals AS b LEFT OUTER JOIN qave AS a
        ON b~prueflos = a~prueflos
      INTO TABLE t_qals1
      FOR ALL ENTRIES IN t_lgorts
      WHERE b~matnr      = t_lgorts-matnr
        AND b~werk       = t_lgorts-werks
        AND b~lagortchrg = t_lgorts-lgort
        AND b~stat35     = c_x
        AND b~art        IN (c_01, c_08).
    When I took the SQL trace, I found these details:
    Column                        No. of Distinct Values
    MANDANT                                            2
    MATNR                                          2,954
    WERK                                              30
    STAT34                                             2
    HERKUNFT                                           5
    Analyze Method                Sample of 114,654 Rows
    Levels of B-Tree                                   2
    Number of leaf blocks                          1,126
    Number of distinct keys                       16,224
    Average leaf blocks per key                        1
    Average data blocks per key                        3
    Clustering factor                             61,610
    Also note, this select query is using an INDEX RANGE SCAN on QALS~D.
    All suggestions are welcome.
    Regards,
    Vijaya

    Hi Rob,
    It's strange, but the table t_lgorts has only ONE record:
    MATNR = 000000000500003463
    WERK = D133
    LAGORTCHRG = 0001
    I have also seen that for the above criteria the table QALS has 2266 records that satisfy the condition.
    I am not sure, but if we write the above query as a subquery instead of an outer join, will it improve the performance?
    Will check it from my side too.
    Regards,
    Vijaya

  • To fetch ALTERNATE records from a table. (EVEN NUMBERED)

    Suppose I have a table employee with columns empid, empname, emplocation. It has 20 rows and I want to fetch the 10 records which are even. How do I write the query?
    jitendra

    it was an interview question
    Then you ask back to the interviewer: what do you mean by even rows? In a relational database there is no inherent number for a row; in fact, a table is by definition an unordered set. Of course, we can extract only the even empids, but there is no guarantee
    that there are exactly ten of them. Chance could have it that all empids are odd.
    If the interviewer insists, you can say that we can always run a query with row_number, but that frankly does not make any sense.
    If nothing else, this sort of response may reveal what sort of interviewer you are talking to. Is he exactly as clueless as the question suggests? Or is he purposely asking you a stupid question to see if you are smart enough to call his bluff?
    I don't know about interviews in general, but if I were interviewing people, I would certainly be impressed by someone who has the guts to object and come with sound objections. That is exactly the kind of people I want to work with.
    Erland Sommarskog, SQL Server MVP, [email protected]
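    For completeness, a sketch of the row_number fallback mentioned above (T-SQL; treating "even" as even positions when the rows are ordered by empid, which is exactly the assumption the interviewer would have to confirm):

    WITH numbered AS
    (
        SELECT empid, empname, emplocation,
               ROW_NUMBER() OVER (ORDER BY empid) AS rn   -- assign positions 1..20
        FROM employee
    )
    SELECT empid, empname, emplocation
    FROM numbered
    WHERE rn % 2 = 0;   -- keep positions 2, 4, 6, ...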

  • Fetching 10 records at a time

    The Product.Category dimension has 4 child nodes: Accessories, Bikes, Clothing and Components. My problem is that when I have thousands of first-level nodes, my application takes a lot of time to load. Is there a way to fetch only, say, 100 records at a time? Then when
    I click a Next button I would get the next 100.
    E.g. on the first click of a button I fetch 2 members:
    WITH MEMBER [Measures].[ChildrenCount] AS
    [Product].[Category].CurrentMember.Children.Count
    SELECT [Measures].[ChildrenCount] ON 1
    ,TopCount([Product].[Category].Members, 2) on 0
    FROM [Adventure Works]
    This fetches only Accessories. Is there a way to fetch the next two records, Bikes and Clothing, on the next click,
    then Components on the click after that, and so on and so forth?

    Hi Tsunade,
    According to your description, there are thousands of members in your cube. It takes a long time to retrieve all the members at once, so in order to improve performance you are looking for a way to fetch a page of records at a time, right? Based on my
    research, there is currently no such built-in paging functionality to work around this requirement.
    If you have any concern about this behavior, you can submit feedback at
    http://connect.microsoft.com/SQLServer/Feedback and hope it is resolved in the next release of the service pack or product. Your feedback enables Microsoft to make software and services the best that they can be, and Microsoft might consider adding this feature
    in a following release after official confirmation.
    Regards,
    Charlie Liao
    TechNet Community Support
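    That said, a fixed-size slice of a set can be expressed in the query itself with the MDX Subset function, which takes a zero-based start index and a count; the client would have to track the start index between clicks. A sketch (in this cube the set begins with the All member, so a start index of 2 should return Bikes and Clothing):

    WITH MEMBER [Measures].[ChildrenCount] AS
        [Product].[Category].CurrentMember.Children.Count
    SELECT [Measures].[ChildrenCount] ON 1,
           Subset([Product].[Category].Members, 2, 2) ON 0
    FROM [Adventure Works]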

  • User Name limitations in SQL Developer

    I have been using SQL Developer for the past year. I am facing issues with some of my connections that have numeric user names (e.g. user name: 12312). In older versions it gave the error 'Invalid User name/Password', and in Version 3 it gives 'Unsupported Verifier Type, Vendor Code: 17451'.
    I am able to connect with the same user from TOAD and from the command prompt, and with an alphanumeric name I am able to connect from SQL Developer as well.
    Is it a limitation of SQL Developer that user names must be alphanumeric (starting with a character)? Can somebody help me if I need to change any setting to use a numeric user name?
    Thanks,
    Ram
    [email protected]

    Thanks Jim for the reply. I tried using double quotes, but still the same issue. And I am able to connect from the command prompt, SQL*Plus, and Toad; the only issue is with SQL Developer.

  • Can we split and fetch the records in Database Adapter

    Hi,
    I designed a Database Adapter to fetch records from an Oracle database. Sometimes the Database Adapter needs to fetch around 5,000 or 10,000 records in a single shot. In that case my BPEL process is choking and failing with the error
    java.lang.OutOfMemoryError: Java heap space at java.util.Arrays.copyOf(Arrays.java:2882) at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
    Could someone help me to resolve this?
    In the Database Adapter, can we split and fetch the records when the number of records is more than 1000?
    E.g. the first 100 records as one set, the next 100 as the second set, and so on.
    Thank you.

    You can send the records in batches using the debatching feature of the DB adapter. Refer to the documentation for implementation details.

  • How can we split a select query into 3 or 4 if it fetches too many records?

    I am running a query like:
    select * from table_name
    It will fetch 152940696 records. Now I want to fetch this result with 3 or 4 select statements. That is, the second query should fetch the records from where I stopped in the first query, the 3rd should continue from the 2nd, and the 4th should start from where I stopped in the 3rd.
    When I tried rownum, I could fetch records up to < or <= a particular count, like 100000000. But above that count I cannot fetch using rownum, because > or >= won't work with rownum.
    Is there any other way to split the select query as I explained?
    Thanks in advance

    I'll assume you want to split the query up for performance reasons.
    The easiest way to do this if you have the license is to use the parallel query option, which can help, hurt or do nothing. The only way to find out is to try. PQO would be best from a performance standpoint if possible, provided it will do what you need.
    Failing that, as has been suggested, you need a logical, scalable way to divide up the queries. It has already been pointed out that the rownum solution probably will not work correctly. Also, the MINUS with ROWNUM idea has the disadvantage of reading a lot of the same data twice, making the query run longer.
    Perhaps a range would provide a way to split up the data - something like
    select whatever
      from table
     where primary_key < 10000000;

    select whatever
      from table
     where primary_key between 10000000 and 199999999;
    ...
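    One way to compute those range boundaries up front instead of guessing them (a sketch; NTILE splits the ordered keys into equal-row buckets, and pk stands for the primary key column):

    SELECT bucket, MIN(pk) AS low_key, MAX(pk) AS high_key
      FROM (SELECT pk, NTILE(4) OVER (ORDER BY pk) AS bucket
              FROM table_name)
     GROUP BY bucket
     ORDER BY bucket;

    Each (low_key, high_key) pair then bounds one of the four range queries.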

  • Need query to fetch single record

    I have a table t1 with the data below:
    c1 c2
    1   10
    2   10
    When I use the query below:
    select max(c2) from t1 group by c1
    I get both records.
    How can I fetch only one record (any one)?

    Hi,
    Here's one way:
    WITH got_r_num AS
    (
        SELECT  c1, c2
        ,       ROW_NUMBER () OVER (ORDER BY  c2  DESC) AS r_num
        FROM    t1
    )
    SELECT  c1, c2
    FROM    got_r_num
    WHERE   r_num  = 1
    The row displayed will be the one with the highest c2 value (or one of those rows, if there happens to be a tie.)
    This is called a Top-N Query, because it picks N items (N=1 in this case) from the top of an ordered list.
    Starting in Oracle 12.1, you can also use
    SELECT    c1, c2
    FROM      t1
    ORDER BY  c2  DESC
    FETCH     FIRST 1 ROW ONLY
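    On 12.1 the same clause also handles pagination directly, which ties back to the rows-100-to-125 example at the top of this page (a sketch; OFFSET skips the leading rows):

    SELECT   deptname
    FROM     person
    WHERE    department = 'Electronics'
    ORDER BY deptname
    OFFSET 99 ROWS FETCH NEXT 26 ROWS ONLY;   -- rows 100 through 125 of the ordered result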

  • Query not fetched the record

    Hello,
    Could someone help me please ?
    I have a listing of my sales orders, and I want to make changes to an order by opening the form fetched with that record. When I click on a particular order number in my listing of orders and call the form to display the details, it calls the form but says "Query could not fetch the record". I do not know why. Please help me with the solution.
    Thanx

    Hello,
    I think you are passing the order number to the called form as a parameter. If you are using a parameter list, check:
    1. Is the parameter data reaching the form correctly?
    2. Next, have you changed the WHERE clause of the other block so that it will display the record with the passed order number?
    I am expecting more details from you.
    Thanx
    Adi
