Fast way to determine record index in large tables

Hello,
We're currently struggling with a bit of a problem. We have a table with about 150k records containing (amongst other things) the "score" of a user. Now we want to determine the rank of a given user when all users are sorted by score (i.e. "this user is ranked 92,471st").
We need to do this quite often, and simple solutions like counting the number of users with a higher score than said user are way too slow for our purpose. Are there any fast ways to do this? (Preferably at the SQL level, but a caching scheme that doesn't involve a big memory hit would be acceptable too.)

If the score field is indexed (and is numeric), this shouldn't take too long.
If it is still too slow even with the index, the best approach is probably to script something that runs the ranking query once per interval (once a day, maybe) and caches the result in a column of the table, so the list is only updated every so often.
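A minimal sketch of both approaches (assuming a users table with user_id, score, and cached_rank columns, which is not the poster's actual schema; Oracle-style MERGE shown for the cached variant):
-- On-the-fly rank: with an index on score this is an index range scan plus a count.
SELECT COUNT(*) + 1 AS user_rank
FROM users
WHERE score > (SELECT score FROM users WHERE user_id = :target_user);

-- Cached rank: recompute periodically (e.g. nightly) in a single pass and store it.
MERGE INTO users u
USING (SELECT user_id, RANK() OVER (ORDER BY score DESC) AS rnk FROM users) s
ON (u.user_id = s.user_id)
WHEN MATCHED THEN UPDATE SET u.cached_rank = s.rnk;
Reading cached_rank is then a single-row lookup, at the cost of the rank being slightly stale between refreshes.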

Similar Messages

  • HT4009 I'm in Subscription **** on the iPad.  Is there an App or faster way to determine current Magazine subscriptions?

    How can I easily be sure I don't already have a magazine subscription or a single issue in Newsstand (or Zinio) when confronted with a purchase option? Is there an app or faster way to determine this information? Both Newsstand and Zinio have seen fit to "hide" this data deep in another app or menu, perhaps as a marketing tool to get you to buy multiple copies of the same magazine. I travel for a living and buy and delete digital magazines all the time. Sometimes I buy them and am unable to download immediately due to slow WiFi or restrictions in airports; this adds to the confusion as to what I have bought, deleted, or never downloaded, especially with multiple readers. Additionally, the App Store manages current subscriptions in one menu and single purchases in another. I love reading on the new iPad, but the management of subscriptions has been an absolute nightmare. Is there another Newsstand app that is easier to use, or perhaps a subscription management app?

    Take a look here:
    http://support.apple.com/kb/HT4098
    Other than looking in the various locations or doing a search for the title, I don't know of any way to easily find single-issue purchases.
    Regards.

  • Deleting few records from a large table

    Hi,
    I have a table with roughly 1.5 million rows. I ran a script to delete 16 duplicate records and it took more than 20 minutes.
    Any help is appreciated
    Thanks
    Racle

    Since the delete statement keeps a log of the records being deleted in case it needs to be rolled back, it may take a while to delete. Also, do the columns in your where clause have indexes on them? If not, your system may have a hard time finding the records that need to be deleted. The TRUNCATE TABLE command removes records much faster than the delete statement, but it doesn't keep a log of the records being deleted (and it removes everything), so use it with caution!
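    For deleting a handful of duplicates, a common pattern (a sketch only, Oracle syntax; col1/col2 stand in for whatever columns define a duplicate, which the post does not give) keeps one ROWID per duplicate group:
    DELETE FROM my_table t
    WHERE t.rowid NOT IN (SELECT MIN(t2.rowid)
                          FROM my_table t2
                          GROUP BY t2.col1, t2.col2);
    If the duplicate-defining columns are indexed, finding the rows is quick; the DELETE itself still generates undo and redo for every row it removes.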

  • Way to send records added to Z table to Transport ?

    Hi
    The functional personnel add records to a Z table with SE16 using the 'Create Entries' button; then they consult the records, select them, go to the 'Table Entry' menu and the 'Transport Entries' option, and put the records in a transport request to send them to another client.
    But they are afraid of forgetting to select some newly added record when putting them in the transport; so my task is to investigate whether there is a way to put added records into the transport automatically, without having to select them.
    That is: Is there any way to make SE16 automatically ask for a transport number when we are creating records using the 'Create Entries' button?
    For example, when we select 'Save' after creating some record(s), is it possible to make SE16 ask for a transport number?
    I don't know if there is any attribute we can set on the Z table to make SE16 behave this way.
    Regards
    Frank

    Hi Sharad
    Thanks for your response, but I have some doubts:
    What does that FM do? Does it call SE16?
    And the table name goes in the 'View Name' parameter; do I then leave the others empty? And the TABLES parameters?
    That is because I called this FM directly with SE37 to see its behavior, but it gives an error: 'Transport is not possible for the specified data'.
    (I'm going to award points.)
    Thanks a lot
    Frank

  • Efficiently Querying Large Table

    I have to query (join) a recordset of 14k records against a very large table with billions of rows; even a count on the table does not return any result after 15 minutes.
    I tried a PL/SQL procedure to store the first recordset in a temp table and then prepared two cursors: one on the temp table and the other on the large table.
    However, the plsql procedure runs for a long time and just gives up with this error:
    SQL> exec match;
    ERROR:
    ORA-01041: internal error. hostdef extension doesn't exist
    BEGIN match; END;
    ERROR at line 1:
    ORA-03113: end-of-file on communication channel
    Is there a way I can query more efficiently?
    - Using chunks of records from the large table at a time - how do I do that? (rowid)
    - Or just ask the dba to partition the table - but the whole table would still need to be queried.
    The temp table is:
    CREATE TABLE test AS SELECT a.mon_ord_no, a.mo_type_id, a.p2a_pbu_id, a.creation_date,b.status_date_time,
    a.expiry_date, a.current_mo_status_desc_id, a.amount,
    a.purchaser_name, a.recipent_name, a.mo_id_type_id,
    a.mo_redeemed_by_id, a.recipient_type, c.pbu_id, c.txn_seq_no, c.txn_date_time
    FROM mon_order a, mo_status b, host_txn_log c
    where a.mon_ord_no = b.mon_ord_no
    and a.mon_ord_no = c.mon_ord_no
    and b.status_date_time = c.txn_date_time
    and b.status_desc_id = 7
    and a.current_mo_status_desc_id = 7
    and a.amount is not null
    and a.amount > 0
    order by b.status_date_time;
    and the PL/SQL Procedure is:
    CREATE OR REPLACE PROCEDURE MATCH
    IS
    --DECLARE
    deleted INTEGER :=0;
    counter INTEGER :=0;
    CURSOR v_table IS
         SELECT DISTINCT pbu_id, txn_seq_no, create_date
         FROM host_v
         WHERE status = 4;
    v_table_record v_table%ROWTYPE;
    CURSOR temp_table (v_pbu_id NUMBER, v_txn_seq_no NUMBER, v_create_date DATE) IS
         SELECT * FROM test
         WHERE pbu_id = v_pbu_id
         AND txn_seq_no = v_txn_seq_no
         AND creation_date = v_create_date;
    temp_table_record temp_table%ROWTYPE;
    BEGIN
    OPEN v_table;
    LOOP
    FETCH v_table INTO v_table_record;
    EXIT WHEN v_table%NOTFOUND;
    OPEN temp_table (v_table_record.pbu_id, v_table_record.txn_seq_no, v_table_record.create_date);
    LOOP
    FETCH temp_table INTO temp_table_record;
    EXIT WHEN temp_table%NOTFOUND;
         -- delete the matching rows from the temp table
         DELETE FROM test WHERE pbu_id = v_table_record.pbu_id AND
                   txn_seq_no = v_table_record.txn_seq_no AND
                   creation_date = v_table_record.create_date;
    END LOOP;
    CLOSE temp_table;
    END LOOP;
    CLOSE v_table;
    END MATCH;
    /

    Many thanks,
    I can get the explain plan for the SQL statement, but I am not sure how to get it for the PL/SQL. For which section of the PL/SQL do I get the explain plan? I am using SQL Navigator.
    I can create the cursor with the join, and if it does not need the delete statement then there is no need for the procedure itself. Should I just run the query as a SQL statement?
    You have not said what I should do with the rowid.
    Regards
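    For reference, the whole nested-cursor procedure can usually be replaced by a single set-based statement. A sketch only, reusing the column names from the procedure above (host_v and its status = 4 filter assumed unchanged):
    DELETE FROM test t
    WHERE EXISTS (SELECT 1
                  FROM host_v v
                  WHERE v.status = 4
                    AND v.pbu_id = t.pbu_id
                    AND v.txn_seq_no = t.txn_seq_no
                    AND v.create_date = t.creation_date);
    A single DELETE lets the optimizer choose a join method (hash or nested loops) instead of issuing one indexed lookup per fetched row, which is normally far faster at this scale.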

  • How to rotate a large table

    I'm building a long technical report with Pages'08.
    There are many tables in this report.
    This document is in portrait format.
    In the middle of this document I have a particularly large
    table which can't be read if I try to stay on a portrait
    presentation of this table: too many columns (15).
    Hence I'd like to find an easy way to either rotate
    the page or the table so as to be able to use larger
    columns.
    I discovered that Pages'08 doesn't allow putting a single page in landscape format. I also abandoned the idea of using 3 different documents (part 1 in portrait, part 2 in landscape, part 3 in portrait again):
    I have to chain the paragraph numbers, and I have to make a table of contents at the end of this technical report.
    What is the most efficient way to manage to fill this large table?
    Word lets me do this, but unfortunately it also makes me spend too much time on other simple and basic functions.
    Is Pages'09 better on this basic and frequent need (at least for my job)?
    --------
    As long as you'll see students making graphics with pen on paper,
    you'll see the missing keystone of the software empire.
    dan

    Peggy wrote:
    You can rotate a floating table, but it can be a problem if you need to edit the table. It will auto-rotate to portrait for editing, but it can be difficult to see or get to the outside edges. I find it easiest to copy & paste the table into a landscape document, then copy it back after editing.
    Thank you for the nice hint.
    I finally chose to work on a temporary document in A3 format, and keep it open so as to be able to quickly copy my table into the main document every time I update it.
    During this copy operation I noticed an annoying problem: as the text column in my main document is slightly narrower than my table, Pages decides to shrink it every time, and I can't recover its original size (which I painstakingly tuned in my A3 document).
    Hence all the cell contents are partially hidden.
    The button:
    Inspector > Metrics > Original Size
    is off.
    Do you know how to circumvent this bad habit of Pages of resizing my imported table?
    --------
    As long as you'll see students making graphics with pen on paper,
    you'll see the missing keystone of the software empire.
    dan

  • Need to find a faster way to create many records

    There are times in my application where I need to create on the order of thousands of new records.
    The current workflow is to create objects by parsing a file that the user wishes to import. After creating the objects, I get a reference to my entity and use the AddNew() method. With a for loop I create the new record and fill in the fields from the objects.
    I then display the new records in a datagrid. From here, the user can save to the database after reviewing the datagrid.
    Parsing the file, instantiating the objects, and filling the datagrid are all extremely quick, but the process of calling AddNew() and actually filling the record is very slow.
    It seems like I am very CPU limited. My first thought was to use multiple threads to create the new records (i.e. AddNew(), not the save itself), but I don't know of any way to do this since LightSwitch requires all access to be from the screen logic thread.
    Is there a better way for me to do this?

    I have a couple of thoughts for you:
    I think that you will have to upload the file to the web server and load from there.  That will get you the most improvement in speed and that may be fast enough.
    The next thing that you can do is to use a stored procedure to insert the records instead of Entity Framework via AddNew(). You could use Michael's code to insert records individually, which would be much faster than AddNew, or you could do a batch insert for best performance. If you are only inserting a thousand records or so, it won't make much difference.
    See the following for what I have found to be the fastest insert method:
    Inserting records using stored procedure with table-valued parameter
    Another user used this method and got the following results:
    Got it to work in a small test program entering 19 million records and the results are just fabulous:
    vb.net record entry on the server -> 8 hours
    stored procedure per record -> 30 minutes
    stored procedure with table-valued parameter -> 3 minutes
    (That's roughly 100,000 records per second.)
    Another thing that you might want to do is to first insert the records into a temp table; the user could then approve/reject records. If they approve, run a stored proc to copy from the temp table to the final table. (When I say temp table, I don't mean a real SQL Server temp table, but another table that is just used to hold the data while reviewing.)
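    A minimal sketch of the table-valued parameter approach (SQL Server syntax; the type, table, and column names here are made up for illustration, not from the thread):
    CREATE TYPE dbo.OrderImportType AS TABLE
    (
        OrderNo   INT,
        Amount    DECIMAL(18, 2),
        Customer  NVARCHAR(100)
    );
    GO
    CREATE PROCEDURE dbo.InsertOrders
        @rows dbo.OrderImportType READONLY
    AS
    BEGIN
        -- one set-based insert for the whole batch instead of row-by-row AddNew()
        INSERT INTO dbo.Orders (OrderNo, Amount, Customer)
        SELECT OrderNo, Amount, Customer FROM @rows;
    END;
    The client then fills a DataTable (or equivalent) and passes it as the single @rows parameter, so the round trip and the insert happen once per batch rather than once per record.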
    Mark

  • How to rebuild index on a table in a faster way

    Hi All,
    We have using Oracle 9.2.0.3
    On a weekly basis we are loading around 20-30 million rows into a table using SQL*Loader.
    Before loading the data into the table, we drop all the indexes and rebuild them once the load is done.
    There are 6 indexes on the table and rebuilding them takes around 10-11 hours.
    Is there any way to reduce the index rebuild time?
    Rgds,
    Amol

    Doing it that way, an
    alter session set skip_unusable_indexes = true;
    might also be necessary.
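    Instead of dropping and recreating the indexes, a common pattern (a sketch only; the index name is a placeholder, not from the thread) is to mark them unusable before the load and rebuild them in parallel, without logging, afterwards:
    ALTER INDEX my_table_idx1 UNUSABLE;
    ALTER SESSION SET skip_unusable_indexes = TRUE;
    -- ...run the SQL*Loader load (direct path can also skip index maintenance)...
    ALTER INDEX my_table_idx1 REBUILD PARALLEL 8 NOLOGGING;
    -- repeat for the other indexes, then reset the attributes if desired
    ALTER INDEX my_table_idx1 NOPARALLEL LOGGING;
    Rebuilding from the table data in parallel is usually much faster than maintaining each index row by row during the load, and NOLOGGING cuts the redo generated by the rebuild (take a backup afterwards).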

  • Pagination query help needed for large table - force a different index

    I'm using a slight modification of the pagination query from over at Ask Tom's: [http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html]
    Mine looks like this when fetching the first 100 rows of all members with last name Smith, ordered by join date:
    SELECT members.*
    FROM members,
         ( SELECT RID, rownum rnum
           FROM ( SELECT rowid AS RID
                  FROM members
                  WHERE last_name = 'Smith'
                  ORDER BY joindate )
           WHERE rownum <= 100 )
    WHERE rnum >= 1
      AND RID = members.rowid
    The difference between this and the one at Ask Tom's is that my innermost query just returns the ROWID. Then in the outermost query we join the ROWIDs returned to the members table, after we have pruned the ROWIDs down to only the chunk of 100 we want. This makes it MUCH faster (verifiably) on our large tables, as it is able to use the index on the innermost query (well... read on).
    The problem I have is this:
    SELECT rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate
    This will use the index for the predicate column (last_name) instead of the unique index I have defined for the joindate column (joindate, sequence). (Verifiable with explain plan.) It is much slower this way on a large table. So I can hint it using either of the following methods:
    SELECT /*+ index(members, joindate_idx) */ rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate
    SELECT /*+ first_rows(100) */ rowid as RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate
    Either way, it now uses the index of the ORDER BY column (joindate_idx), so now it is much faster as it does not have to do a sort (remember, VERY large table, millions of records). So that seems good. But now, on my outermost query, I join the rowid with the meaningful columns of data from the members table, as commented below:
    SELECT members.*                     -- select all data from members table
    FROM members,                        -- members table added to the FROM clause
         ( SELECT RID, rownum rnum
           FROM ( SELECT /*+ index(members, joindate_idx) */ rowid AS RID   -- hint is ignored now that I am joining in the outer query
                  FROM members
                  WHERE last_name = 'Smith'
                  ORDER BY joindate )
           WHERE rownum <= 100 )
    WHERE rnum >= 1
      AND RID = members.rowid            -- join the members table on the rowid we pulled from the inner queries
    Once I do this join, it goes back to using the predicate index (last_name) and has to perform the sort once it finds all matching values (which can be a lot in this table, there is high cardinality on some columns).
    So my question is, in the full query above, is there any way I can get it to use the ORDER BY column for indexing to prevent it from having to do a sort? The join is what causes it to revert back to using the predicate index, even with hints. Remove the join and just return the ROWIDs for those 100 records and it flies, even on 10 million records.
    It'd be great if there was some generic hint that could accomplish this, such that if we change the table/columns/indexes, we don't need to change the hint (the FIRST_ROWS hint is a good example of this, while the INDEX hint is the opposite), but any help would be appreciated. I can provide explain plans for any of the above if needed.
    Thanks!

    Lakmal Rajapakse wrote:
    OK here is an example to illustrate the advantage:
    SQL> set autot traceonly
    SQL> select * from (
    2  select a.*, rownum x  from
    3  (
    4  select a.* from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  )
    9  where x >= 1100
    10  /
    101 rows selected.
    Execution Plan
    Plan hash value: 3711662397
    | Id  | Operation                      | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  1 |  VIEW                          |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  2 |   COUNT STOPKEY                |            |       |       |            |          |
    |   3 |    VIEW                        |            |  1200 |   506K|   192   (0)| 00:00:03 |
    |   4 |     TABLE ACCESS BY INDEX ROWID| EVENTS     |   253M|    34G|   192   (0)| 00:00:03 |
    |   5 |      INDEX FULL SCAN           | EVEN_IDX02 |  1200 |       |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    1 - filter("X">=1100)
    2 - filter(ROWNUM<=1200)
    Statistics
    0  recursive calls
    0  db block gets
    443  consistent gets
    0  physical reads
    0  redo size
    25203  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    SQL>
    SQL>
    SQL> select * from aoswf.events a, (
    2  select rid, rownum x  from
    3  (
    4  select rowid rid from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  ) b
    9  where x >= 1100
    10  and a.rowid = rid
    11  /
    101 rows selected.
    Execution Plan
    Plan hash value: 2308864810
    | Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |   1 |  NESTED LOOPS               |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |*  2 |   VIEW                      |            |  1200 | 30000 |   260K  (1)| 00:52:06 |
    |*  3 |    COUNT STOPKEY            |            |       |       |            |          |
    |   4 |     VIEW                    |            |   253M|  2895M|   260K  (1)| 00:52:06 |
    |   5 |      INDEX FULL SCAN        | EVEN_IDX02 |   253M|  4826M|   260K  (1)| 00:52:06 |
    |   6 |   TABLE ACCESS BY USER ROWID| EVENTS     |     1 |   147 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter("X">=1100)
    3 - filter(ROWNUM<=1200)
    Statistics
    8  recursive calls
    0  db block gets
    117  consistent gets
    0  physical reads
    0  redo size
    27539  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    Lakmal (and OP),
    Not sure what advantage you are trying to show here. But considering that we are talking about a pagination query here and the order of records is important, your 2 queries will not always generate output in the same order. Here is the test case:
    SQL> select * from v$version ;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    PL/SQL Release 10.2.0.1.0 - Production
    CORE     10.2.0.1.0     Production
    TNS for Linux: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    SQL> show parameter optimizer
    NAME                                 TYPE        VALUE
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      10.2.0.1
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_mode                       string      ALL_ROWS
    optimizer_secure_view_merging        boolean     TRUE
    SQL> show parameter pga
    NAME                                 TYPE        VALUE
    pga_aggregate_target                 big integer 103M
    SQL> create table t nologging as select * from all_objects where 1 = 2 ;
    Table created.
    SQL> create index t_idx on t(last_ddl_time) nologging ;
    Index created.
    SQL> insert /*+ APPEND */ into t (owner, object_name, object_id, created, last_ddl_time) select owner, object_name, object_id, created, sysdate - dbms_random.value(1, 100) from all_objects order by dbms_random.random;
    40617 rows created.
    SQL> commit ;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade=>true);
    PL/SQL procedure successfully completed.
    SQL> select object_id, object_name, created from t, (select rid, rownum rn from (select rowid rid from t order by created desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    CREATED
         47686 ALL$OLAP2_JOIN_KEY_COLUMN_USES 28-JUL-2009 08:08:39
         47672 ALL$OLAP2_CUBE_DIM_USES        28-JUL-2009 08:08:39
         47681 ALL$OLAP2_CUBE_MEASURE_MAPS    28-JUL-2009 08:08:39
         47682 ALL$OLAP2_FACT_LEVEL_USES      28-JUL-2009 08:08:39
         47685 ALL$OLAP2_AGGREGATION_USES     28-JUL-2009 08:08:39
         47692 ALL$OLAP2_CATALOGS             28-JUL-2009 08:08:39
         47665 ALL$OLAPMR_FACTTBLKEYMAPS      28-JUL-2009 08:08:39
         47688 ALL$OLAP2_DIM_LEVEL_ATTR_MAPS  28-JUL-2009 08:08:39
         47689 ALL$OLAP2_DIM_LEVELS_KEYMAPS   28-JUL-2009 08:08:39
         47669 ALL$OLAP9I2_HIER_DIMENSIONS    28-JUL-2009 08:08:39
         47666 ALL$OLAP9I1_HIER_DIMENSIONS    28-JUL-2009 08:08:39
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> set autotrace traceonly
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc
      2  ;
    11 rows selected.
    Execution Plan
    Plan hash value: 44968669
    | Id  | Operation                       | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |   1 |  SORT ORDER BY                  |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |*  2 |   HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  3 |    VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  4 |     COUNT STOPKEY               |       |       |       |            |          |
    |   5 |      VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   6 |       INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   7 |    TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("T".ROWID="T1"."RID")
       3 - filter("RN">=1190)
       4 - filter(ROWNUM<=1200)
    Statistics
              1  recursive calls
              0  db block gets
            348  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
    | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
    |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
    |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
    |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            343  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    11 rows selected.
    Execution Plan
    Plan hash value: 168880862
    | Id  | Operation                      | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  1 |  HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  2 |   VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  3 |    COUNT STOPKEY               |       |       |       |            |          |
    |   4 |     VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   5 |      INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   6 |   TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("T".ROWID="T1"."RID")
       2 - filter("RN">=1190)
       3 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            349  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 order by last_ddl_time desc ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
     | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
     |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
     |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
     |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
     |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
     |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
     |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
         175  recursive calls
           0  db block gets
         388  consistent gets
           0  physical reads
           0  redo size
           1063  bytes sent via SQL*Net to client
         385  bytes received via SQL*Net from client
           2  SQL*Net roundtrips to/from client
           4  sorts (memory)
           0  sorts (disk)
          11  rows processed
    SQL> set autotrace off
    SQL> spool off
    As you will see, the join query here has to have an ORDER BY clause at the end to ensure that records are correctly sorted. You cannot rely on the optimizer choosing the NESTED LOOP join method and, as the above example shows, when the optimizer chooses a HASH JOIN, Oracle is free to return rows in no particular order.
    The query that does not involve a join always returns rows in the desired order. Adding an ORDER BY adds a step to the plan for the query using the join, but does not affect the other query.
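    Putting the two points together, the rowid-join form of the original members query (a sketch only, reusing the table, column, and index names from the question) would therefore need an outer ORDER BY to guarantee the page comes back in joindate order:
    SELECT members.*
    FROM members,
         ( SELECT RID, rownum rnum
           FROM ( SELECT /*+ first_rows(100) */ rowid AS RID
                  FROM members
                  WHERE last_name = 'Smith'
                  ORDER BY joindate )
           WHERE rownum <= 100 )
    WHERE rnum >= 1
      AND RID = members.rowid
    ORDER BY joindate;
    The inner hint keeps the index-driven, sort-free retrieval of the first 100 ROWIDs; the outer ORDER BY only has to sort the 100 rows actually returned, so its cost is negligible.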

  • Constantly inserting into large table with unique index... Guidance?

    Hello all;
    So here is my world. We have central to our data monitoring system an oracle database running Oracle Standard One (please don't laugh... I understand it is comical) licensing.
    This DB is about 1.7 TB of small record data.
    One table in particular (the raw incoming data, 350gb, 8 billion rows, just in the table) is fed millions of rows each day in real time by two to three main "data collectors" or what have you. Data must be available in this table "as fast as possible" once it is received.
    This table has 6 columns (one varchar usually empty, a few numerics including a source id, a timestamp and a create time).
    The data is collected in chronological order (increasing timestamp) 90% of the time (though sometimes the timestamp may be very old and catch up to current). The other 10% of the time the data can be out of order according to the timestamp.
    This table has two indexes, unique (sourceid, timestamp), and a non unique (create time). (FYI, this used to be an IOT until we had to add the second index on create time, at which point a secondary index on create time slowed the IOT to a crawl)
    About 80% of this data is removed after it ages beyond 3 months; 20% is retained as "special" long term data (customer pays for longer raw source retention). The data is removed using delete statements. This table is never (99.99% of the time) updated. The indexes are not rebuilt... ever... as a rebuild is about a 20+ hour process, and without online rebuilds since we are standard one, this is just not possible.
    Now, here is what we are observing about inserts into this table:
    - Inserts are much slower for a "wider" cardinality of the "sourceid" of the data being inserted. What I mean is that 10,000 inserts for 10,000 sourceids (regardless of timestamp) is MUCH, MUCH slower than 10,000 inserts for a single sourceid. This makes sense to me, as I understand that Oracle must inspect more branches of the index for uniqueness, and more distinct physical blocks will be used to store the new index data. There are about 2 million unique sourceids across our system.
    - Over time, Oracle is requesting more and more RAM to satisfy these inserts in a timely manner. My understanding here is that Oracle is attempting to hold the leaf blocks of these indexes perpetually in the buffer cache. Our system does have a 99% cache hit rate. However, we are seeing Oracle requiring roughly 10GB of extra RAM per quarter to six months; we're at about 50GB of RAM just for Oracle already.
    - If I emulate our production load on a brand new, empty table / indexes, performance is easily 10x to 20x faster than what I see when I do the same tests with the large production copies of data.
    We have the following assumption: Partitioning this table based on good logical grouping of sourceid, and then timestamp, will help reduce the work required by oracle to verify uniqueness of data, reducing the amount of data that must be cached by oracle, and allow us to handle our "older than 3 month" at a partition level, greatly reducing table and index fragmentation.
    Based on our hardware, its going to be about a million dollar hit to upgrade to Enterprise (with partitioning), plus a couple hundred thousand a year in support. Currently I think we pay a whopping 5 grand a year in support, if that, total oracle costs. This is going to be a huge pill for our company to swallow.
    What I am looking for guidance/help on: should we really expect partitioning to make a difference here? I want to get back that 10x performance difference we see between a fresh empty system and our current production system. I also want to limit Oracle's 10GB/quarter growing need for more buffer cache (the cardinality of sourceid does NOT grow by that much per quarter... maybe thousands per quarter, out of 2 million).
    Also, please, I'd appreciate it if there were no mocking comments about using Standard One up to this point :) I know it is risky and insane and maybe more than a bit silly, but we make do with what we have. And all the credit in the world to Oracle that their "entry" level system has been able to handle everything we've thrown at it so far! :)
    Alright all, thank you very much for listening, and I look forward to hear the opinions of the experts.
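    For concreteness, the kind of scheme we have in mind would look roughly like this (a sketch only; column and partition names are illustrative, not our real schema, and it requires the Partitioning option):
    CREATE TABLE raw_readings (
        sourceid    NUMBER        NOT NULL,
        reading_ts  TIMESTAMP     NOT NULL,
        create_time TIMESTAMP     NOT NULL,
        note        VARCHAR2(100)
    )
    PARTITION BY RANGE (reading_ts)
    ( PARTITION p_2012_q1 VALUES LESS THAN (TIMESTAMP '2012-04-01 00:00:00'),
      PARTITION p_2012_q2 VALUES LESS THAN (TIMESTAMP '2012-07-01 00:00:00')
      -- ...one partition per retention period
    );
    -- local indexes stay small per partition, and most of the 3-month purge becomes
    -- a partition drop (the retained 20% would first be copied aside)
    CREATE UNIQUE INDEX raw_readings_uq ON raw_readings (sourceid, reading_ts) LOCAL;
    CREATE INDEX raw_readings_ct ON raw_readings (create_time) LOCAL;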

    Hello,
    Here is a link to a blog article that will give you the right questions and answers which apply to your case:
    http://jonathanlewis.wordpress.com/?s=delete+90%25
    Since you are deleting 80% of your data (old data) based on a timestamp, don't even think about using the direct path insert /*+ APPEND */ suggested by one of the contributors to this thread. The direct path load will not re-use any free space left behind by the delete. You have two indexes:
    (a) unique index (sourceid, timestamp)
    (b) index (create time)
    Your delete logic (based on arrival time) will smash your indexes, since you are always deleting from the left-hand side of the index; it means you will have what we call a right-hand index. In other words, the scattering of index keys per leaf block is certainly catastrophic (there is an Oracle internal function named sys_op_lbid that will let you verify this index information). There is a fair chance that your two indexes will benefit from a coalesce, as already suggested:
    ALTER INDEX indexname COALESCE;
    This coalesce should be investigated as something to do on a regular basis (maybe after each 80% delete). You seem to have several sourceids for one timestamp. If so, you should think about compressing this index:
    CREATE INDEX indexname ON tablename (sourceid, timestamp) COMPRESS;
    or
    ALTER INDEX indexname REBUILD COMPRESS;
    You will do this only once. Your index will have a smaller size and may be more efficient than it is now. The index compression will add extra CPU work during an insert, but it might help improve the overall insert process.
    Best Regards
    Mohamed Houri

  • Best way to determine insertion order of items in cache for FIFO?

    I want to implement a FIFO queue. I plan on one producer placing unprocessed Orders into a cache. Then multiple consumers will each invoke an EntryProcessor which gets the oldest unprocessed order, sets it processed=true and returns it. What's the best way to determine the oldest object based on insertion order? Should I timestamp the objects with a trigger when they're added to the cache and then index by that value? Or is there a better way? maybe something coherence automatically saves when objects are inserted? Also, it's not critical that the processing order be precisely FIFO, close is good enough.
    Also, since the consumer won't know the key value for the object it will receive, how could the consumer call something like this so it doesn't violate Constraints on Re-entrant Calls? http://wiki.tangosol.com/display/COH34UG/Constraints+on+Re-entrant+Calls
    Thanks,
    Andrew

    Ok, I think I can see where you are coming from now...
    By using a queue for each FIX session you will experience some latency as data is pushed around inside the cluster between the 'owning node' for the order and the location of the queue; but if this is acceptable then great. The number of hops within the cluster, and hence the latency, will depend on where and how you detect changes to your orders. The advantage of assigning specific orders to each queue is that this will not change should the cluster rebalance; however you should consider what happens if the node controlling a specific FIX session is lost - do you recover from the FIX log? If so, where is that log kept? Remember to consider what happens if your cluster splits, such that the node with the FIX session is still alive but is separated from the rest of the cluster. In examining these failure cases you may decide that it is easier to use Coherence's in-built partitioning to assign orders to sessions rather than an attribute of the order object.
    snidely_whiplash wrote:
    Only changes to orders which result in a new order or replace needing to be sent cause an action by the FIX session. There are several different mechanisms you could use to detect changes to your orders and hence decide if they need to be enqueued:
    1. Use a post trigger that is fired on order insert/update and performs the filtering of changes and if necessary adds the item to the FIX queue
    2. Use a cache store that does the same as (1)
    3. Use an entry processor to perform updates to the order object (as I believe you previously mentioned) and performs logic in (1)
    4. Use a CQC on the order cache
    5. A map listener on the order cache
    The big difference between 1-3 and 4, 5 is that the CQC is i) a SPOF ii) not likely located in the same place as your order object or the queue (assuming that queue is in fact an object in another cache), iii) asynchronously fired hence introducing latency. Also note that the CQC will store your order objects locally whereas a map listener will not.
    (1) and (3) will give you access to both old and new values should that be necessary for your filtering logic.
    Note you must be careful not to make any re-entrant calls with any of 1-3. That means if you are adding something to a FIX queue object in another cache (say using an entry processor) then it should be on a different cache service.
    snidely_whiplash wrote:
    If I move to a CacheStore based setup instead of the CQC based one, then any change to an order, including changes made when executions or rejects return on the FIX session, will result in the store() method being called, which means it will be called unnecessarily a lot. It would be nice if I could specify that the CacheStore only store() certain types of changes, i.e. those that would result in sending a FIX message. Anything like that possible?
    There is negligible overhead in Coherence calling your store() method; assuming that your code can decide whether anything FIX-related needs to be done based only on the new value of the order object, this should be very fast indeed.
    snidely_whiplash wrote:
    What's a partitioned "token cache"?
    This is a technique I have used in the past for running services. You create a new partitioned cache into which you place 'tokens' representing a user-defined service that needs to be run. The insertion/deletion of a token in the backing map fires a backing map listener to start/stop a service (note there are 2 causes of insert/delete in a backing map - i) a user, ii) cluster repartitioning). In this case that service might be a FIX session. If you need to designate a specific member on which a service needs to run then you could add the member id to the token object; however you must be careful that, unless you write your own partitioning strategy, the token will likely not live on the cache member the token indicates; in which case you would want a full map listener or CQC to listen for tokens rather than a backing map listener.
    I hope that's useful rather than confusing!
    Paul

  • Capturing 90 minutes from DV--Faster way?

    I have about 100 mini DV tapes that I want to make into DVD's. I have tried the "OneStep" method and hate it---it just doesn't work all the time. I got all the way through one (capturing/processing/encoding) only to have it crash when the burn cycle started.
    I want to take the video into iMovie instead and export to iDVD. Is there a faster way than letting it capture in real time to iMovie? I hate to wait 90 minutes for each tape. I thought I read that capture could be "sped up" in fast forward somehow. I may be thinking of something else.
    Also, I have a large number of VHS tapes I would like to put onto DVD. I saw a Sony product mentioned for capture (rather than copying all the tapes to miniDV first, then iMovie)
    What should my workflow be for this type of project?
    Thanks,
    Steve

    Also, I have a large number of VHS tapes I would like
    to put onto DVD. I saw a Sony product mentioned for
    capture (rather than copying all the tapes to miniDV
    first, then iMovie)
    What should my workflow be for this type of project?
    I just finished converting a collection of VHS tapes to DVD. I wanted a Digital Video Recorder to use anyway with my DirecTivo satellite TV setup, so I bought a combo VHS-DVD Recorder.
    The box offers one-button transfer of VHS to DVD, so it's very convenient. But I found the built-in VHS deck wasn't able to deliver the quality I got using a separate VHS deck. I ended up connecting a separate VHS box to the DVD half of the recorder. Burning a DVD from the tape is a snap and, although it's hard to believe, it actually improved the quality.
    In fact, I'd encourage you to consider that solution for your mini DV tapes too. Consider buying a DVR that accepts FireWire input from your camera -- mine does not -- and burn the DVD there. WAY easier for so large a collection.
    Some DVRs have horrible user interfaces (for adding titles, etcetera) so shop carefully. And be picky about recording quality. That combo unit was the second box I purchased. The first, a top name brand, refused to properly sync audio and video after 40 minutes of recording. And the user interface was terrible.
    I now really enjoy being able to transfer special TV (Tivo-type) recordings to DVD. No problems; great quality, and a lot less expensive than tapes.
    Karl

  • Is there a way to determine exactly where a breakpoint occurs?

    Hello everyone:  I am having trouble getting my head around this problem I am having, so I'm hoping someone here has run into something like this and has a tip for me.
    I have a PXI-7354 that I am using to control a rotary stage which has an 8000 lpr encoder, and a 10:1 reduction gear, so I have 80,000 lpr effectively.
    I am using the 7354 to generate a Breakpoint Pulse every 100 encoder counts, so I should be getting 800 pulses per revolution. (I use the breakpoint pulses to trigger a second device and a PXIe-5122 data acquisition card to synchronize the production and acquisition of a data record.)
    However, and here's the problem:
    When I rotate 1 revolution, I see 799 pulses
    When I rotate 2 revolutions, I see 1598 pulses
    When I rotate 3 revolutions, I see 2397 pulses
    etc. 
    I am losing 1 pulse per revolution. I haven't figured this out yet, as I am using periodic breakpoints with a whole number of breakpoints as a period.
    The problem is that I "count" the number of breakpoint pulses that I get in order to derive the angular position where the breakpoint occurs. For instance, if I start at 0 degrees, and I have 0.45 degree spacing between breakpoints, after 10 pulses I should be at 4.05 degrees. After 100 pulses, I should be at 44.55 degrees.
    As I am missing one count per rev, however, my derived angular position is incorrect.
    I need a way to determine the actual position of each breakpoint.  The most obvious way to do this is to use the HS capture functionality of the board, and I could (further) share the breakpoint pulse with the HS Capture input on the motion card to do HS capture, but is there any way to do this internally on the 7354?
    Thanks for looking at this, any help is appreciated.
    Wes
    Wes Ramm, Cyth UK
    CLD, CPLI
    Solved!
    Go to Solution.

    Thanks for your response, Matt.
    I have already got the BP signal going to my external device via a UMI-7774, so this is not a problem.  The tricky part of this question is whether there is an easy way to "share" the BP information with the HS Capture INPUT line so that I can grab a HS position when the BPs are generated, so that I'd have a buffer of ACTUAL position, rather than relying on the BPs being in the correct location (and DERIVING the instantaneous position of the BPs by counting the BPs).  It seems that it is NOT possible to share the signal by routing the BP1 Out simultaneously to the external UMI 7774 pin AND to either the HS Capture INPUT OR to my data acquisition card.  I know that I can route my encoder signal and other things to my DAQ card, but this won't help me in this case.  Furthermore, I can only have 1 BP per axis, so it isn't possible to replicate that functionality on a second BP generator.
    I am working on setting up a third device to count the pulses generated by the 7354 when I exercise the stage through motion, so I'll have more data later today.
    I'll post here any findings.
    Thanks again,
    Wes
    Wes Ramm, Cyth UK
    CLD, CPLI

  • Is there any way to determine if a link is a book mark or hyperlink in java script

    Is there any way to determine if a link is a book mark or hyperlink in java script
    Sub Problem:
    I am making an array of quads of all the hyperlinks in a document. I would like to automatically skip over all the bookmarks in the starting pages of a document and just get the links of the hyperlinks.
    Now I have to manually set the pages that contain bookmarks so they are not included in the array.
    Is there any way to determine if a link is a book mark or hyperlink in java script?
    It would help automate the conversion I need below
    John
    Main Problem:
    I have been working on converting a set of pdf files with 1000’s of hyperlinks like www.site.com\folder1\file1.pdf#page=10
    To jump to a local copy of the files with a relative type link
    ../folder1/file1.pdf and then go to the proper page.
    I have found that it can be done manually by changing the hyperlink to a JavaScript:
    var otherDoc = app.openDoc('../folder1/file1.pdf', this);
    otherDoc.pageNum = 10 - 1;
    and setting disclosed = true in each destination file.
    Based on the help so far that java script cannot access the hyperlink value in a link
    See: http://forums.adobe.com/thread/1039908?tstart=60
    I have resorted to the following plan using acrobat javascript, an external keyboard macro recorder and excel in combination to get around the problem
    Four folder level acrobat javascripts with “buttons”
    One to get all the link quads in an array, in the pdf and report the total number
    The second creates a form field in the far corner of the first page and moves there.
    The third jumps to each link found by creating a form field just to the left of the link and zooms in so it can be selected by a “mouse click” from the keyboard macro recorder 
    The fourth deletes the form field.
    The keyboard macro recorder runs JavaScript 2 and then 3, then clicks on the link just to the right of the middle of the screen and uses keys to get to the advanced editing and edit the hyperlink.
    The hyperlink is then copied to Excel, where it is converted using string functions to the needed JavaScript text, which is copied back to the Acrobat file as a JavaScript (after deleting the hyperlink).
    Rinse/lather/repeat
    I have been able to convert about 150 links an hour.
    Better than hand typing, but not like having JavaScript access to the links.
    I am looking to improve the solution

    thanks for your help.
    I may have been confusing an "Acrobat bookmark" with a bookmark in a Word file that is converted to a PDF and ends up being a link of the type:
    "Go to a page in this document"
    which I do not want in my array vs
    a link of the action type:
    "Open a web link"
    Which I do want
    John
    My code; note how I have to skip pages with "Go to a page in this document" links depending on the document. I would like to use the same code for each document and skip over the "Go to a page in this document" links:
    global.ilinkindex = 1;
    global.aLinkquads = [ [0, 1, 1, 0, 0],
                          [0, 0, 0, 0, 0] ];
    function GetLinkArray()
    {
        global.ilinkindex = 1;
        var iTotalLinks = 0;
        // for ( var p = 0; p < this.numPages - 8; p++)   // end before bookmarks for each page of the file x.pdf
        // for ( var p = 0; p < this.numPages; p++)        // for each page of the file
        for ( var p = 23; p < this.numPages; p++)           // start after bookmarks for each page of the file y.pdf
        {
            var cropbox = this.getPageBox("Crop", p);
            var alinksonpage = this.getLinks(p, cropbox);          // get array of links on page
            for ( var ll = 0; ll < alinksonpage.length; ll++)
            {
                var linkquads = alinksonpage[ll].rect;              // get link quads
                linkquads[4] = p;                                   // add page number to link quads entry
                global.aLinkquads[global.ilinkindex] = linkquads;   // add quads to global link quads array
                global.ilinkindex++;
            }
        }
        iTotalLinks = global.aLinkquads.length - 1;
        global.ilinkindex = 1;
        app.alert("Number of Links in Document is " + iTotalLinks);
    }

  • Best way of Using Index on a Table.

    I am trying to understand the effect of using an INDEX on a table and
    need some guidance!!!
    Let us take this scenario
    I have a table "MYRECORD" which has 4 attributes(or coulombs)
    1. "STATE" (varchar) // this can have 49 different values like newyork, dehli etc
    2. "YEAR" //a year like 2007
    3. "MONTH" //a month like JAN,FEB etc
    4. "CAT" (int) // type(category) of data represented by values 0 to 40
    with a PRIMARY KEY(STATE,YEAR,MONTH,CAT)
    now i will create index
    1. INX_myrecord (STATE,YEAR,MONTH) on table MYRECORD
    so now my question is
    1. What effect does it have on DB performance?
    2. When I use a query like
    SELECT * FROM MYRECORD WHERE STATE = 'dehli' AND YEAR = 2007 AND MONTH = 'JAN';
    how will it be processed with and without the index?
    3. How can I refer to an index by name in a query, if that is possible?
    Cheers,
    UD
    Message was edited by:
    UDAY

    You have edited your post. Now you have a primary key consisting of state, year, month and cat, which makes an index on state, year and month useless, as the already existing primary key index can provide for retrieval of rows. If you don't have other columns - or just a few other columns that are not large varchar2 columns - you should have created the table as an IOT (Index Organized Table, avoiding a separate table and index containing nearly the same data) in the first place.
    As a primary key by definition can contain only unique non-null values, a query like SELECT * FROM MYRECORD WHERE STATE='dehli' AND YEAR=2007 AND MONTH='JAN' cannot give you more than the number of distinct cat values (0..40) + 1 (if cat can be null - presuming the corresponding state, year and month are not null).
    How the query is processed depends principally on the query itself; the mere presence of an index does not ensure it will be used. If an index is used, it means the index is searched first and then the table rows are accessed by the rowids contained in the index (usually a single row or a rather small range of rows is retrieved this way - your select, for example). Submitting something like SELECT * FROM MYRECORD WHERE YEAR=2007 AND cat=33 would most likely produce a full table scan of the myrecord table, ignoring the primary key index.
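    To answer question 3 concretely: an index is never referenced by name in the query text itself, only through an optimizer hint, and you can check whether it is being used with EXPLAIN PLAN. A sketch (Oracle syntax, reusing the names from the question):
    EXPLAIN PLAN FOR
    SELECT * FROM myrecord
    WHERE state = 'dehli' AND year = 2007 AND month = 'JAN';

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    -- forcing a particular index by name with a hint
    SELECT /*+ INDEX(myrecord inx_myrecord) */ *
    FROM myrecord
    WHERE state = 'dehli' AND year = 2007 AND month = 'JAN';
    With the primary key (or INX_MYRECORD) available, the plan should show an index range scan feeding a table access by rowid; without any usable index it falls back to a full table scan.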
    Regards
    Etbin
