Partitioning - query on a large table vs. query accessing several partitions

Hi,
We are using partitioning on a large fact table. In deciding the partitioning strategy, we are looking for advice on queries which have to access several partitions versus a query against one large table.
Which is quicker: a query which accesses a large table, or a query which accesses several partitions to return its results?
We need to partition due to size, administration, etc., but want to make sure queries which need to access more than one partition are not significantly slower than comparable queries which access one large table.
Queries which access just one partition are fine, but some queries have to access several partitions.
Many Thanks

Here are your choices stated another way. Is it better to:
1. Get one week's data by reading one month's data and throwing away 75% of it (assumes partitioning by month)?
2. Get one week's data by reading three weeks of data and throwing away parts of two weeks (assumes partitioning by week)?
3. Get one week's data by reading seven daily partitions and not having to throw away any of it (assumes daily partitioning)?
I have partitioned as frequently as every 5-15 minutes (banking and telecom) and have yet to find a situation where partitions larger than the minimum date range of the majority of queries make sense.
Anyone can insert data into a table ... an extra millisecond per insert is generally irrelevant. What you want to do is optimize reading the data where that extra millisecond per row, over millions of rows, adds up to measurable time.
But this is Oracle, so the best answer to your question is to recommend that you not take anyone's advice on this, but rather run some tests with real data, in real-world volumes, with real-world DML and queries.
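To make option 3 concrete, here is a minimal Oracle sketch of daily range partitioning with a pruned one-week query. All object names are placeholders, and the INTERVAL clause assumes 11g or later; on 10g you would list the daily partitions explicitly.

CREATE TABLE sales_fact (
  sale_id    NUMBER        NOT NULL,
  sale_date  DATE          NOT NULL,
  amount     NUMBER(12,2)
)
PARTITION BY RANGE (sale_date)
INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
( PARTITION p0 VALUES LESS THAN (DATE '2012-01-01') );

-- One week's data touches exactly seven daily partitions; the optimizer
-- prunes all others (visible as PARTITION RANGE ITERATOR in the plan).
SELECT SUM(amount)
FROM   sales_fact
WHERE  sale_date >= DATE '2012-03-01'
AND    sale_date <  DATE '2012-03-08';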

Similar Messages

  • How to improve Query performance on large table in MS SQL Server 2008 R2

    I have a table with 20 million records. What is the best option to improve query performance on this table: partitioning the table into filegroups, or splitting the table into multiple smaller tables?

    Hi bala197164,
    First, both partitioning the table across filegroups and splitting the table into multiple smaller tables can improve query performance; they fit different situations. For example, suppose a table has one hundred columns and some of them are not directly related to the table's subject (say, a userinfo table that stores user information but also has address_street, address_zip, and address_province columns; we could create a new table named Address and add a foreign key in userinfo referencing it). In that situation, splitting the large table into smaller, individual tables means queries that access only a fraction of the data can run faster because there is less data to scan. The other situation is when the table's records can be grouped easily, for example by a year column storing the product release date; then we can partition the table into filegroups to improve query performance. Usually we apply both methods together. Additionally, we can add indexes to the table to improve query performance. For more detail, please refer to the following documents (a short sketch follows the links below):
    Partitioning:
    http://msdn.microsoft.com/en-us/library/ms178148.aspx
    CREATE INDEX (Transact-SQL):
    http://msdn.microsoft.com/en-us/library/ms188783.aspx
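    To make the filegroup option concrete, here is a minimal T-SQL sketch; the table, column, and filegroup names are illustrative only, and the filegroups must already exist in the database:

    -- Partition function: one range boundary per release year.
    CREATE PARTITION FUNCTION pf_year (int)
        AS RANGE RIGHT FOR VALUES (2010, 2011, 2012);

    -- Partition scheme mapping the four resulting ranges to filegroups.
    CREATE PARTITION SCHEME ps_year
        AS PARTITION pf_year TO (fg_old, fg_2010, fg_2011, fg_2012);

    -- Create the table on the scheme, partitioned by the year column.
    CREATE TABLE Product (
        ProductID   int           NOT NULL,
        ReleaseYear int           NOT NULL,
        Name        nvarchar(100)
    ) ON ps_year (ReleaseYear);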
    Allen Li
    TechNet Community Support

  • Pagination query help needed for large table - force a different index

    I'm using a slight modification of the pagination query from over at Ask Tom's: [http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html]
    Mine looks like this when fetching the first 100 rows of all members with last name Smith, ordered by join date:
    SELECT members.*
    FROM members,
         ( SELECT RID, rownum rnum
           FROM ( SELECT rowid AS RID
                  FROM members
                  WHERE last_name = 'Smith'
                  ORDER BY joindate )
           WHERE rownum <= 100 )
    WHERE rnum >= 1
    AND   RID = members.rowid
    The difference between this and the one at Ask Tom's is that my innermost query just returns the ROWID. Then in the outermost query we join the ROWIDs returned to the members table, after we have pruned the ROWIDs down to only the chunk of 100 we want. This makes it MUCH faster (verifiably) on our large tables, as it is able to use the index on the innermost query (well... read on).
    The problem I have is this:
    SELECT rowid AS RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate
    This will use the index for the predicate column (last_name) instead of the unique index I have defined for the joindate column (joindate, sequence). (Verifiable with explain plan.) It is much slower this way on a large table. So I can hint it using either of the following methods:
    SELECT /*+ index(members, joindate_idx) */ rowid AS RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate

    SELECT /*+ first_rows(100) */ rowid AS RID
    FROM members
    WHERE last_name = 'Smith'
    ORDER BY joindate
    Either way, it now uses the index of the ORDER BY column (joindate_idx), so it is much faster as it does not have to do a sort (remember, VERY large table, millions of records). So that seems good. But now, in my outermost query, I join the rowid with the meaningful columns of data from the members table, as commented below:
    SELECT members.*                   -- Select all data from members table
    FROM members,                      -- members table added to FROM clause
         ( SELECT RID, rownum rnum
           FROM ( SELECT /*+ index(members, joindate_idx) */ rowid AS RID   -- Hint is ignored now that I am joining in the outer query
                  FROM members
                  WHERE last_name = 'Smith'
                  ORDER BY joindate )
           WHERE rownum <= 100 )
    WHERE rnum >= 1
    AND   RID = members.rowid          -- Merge the members table on the rowid we pulled from the inner queries
    Once I do this join, it goes back to using the predicate index (last_name) and has to perform the sort once it finds all matching values (which can be a lot in this table, there is high cardinality on some columns).
    So my question is, in the full query above, is there any way I can get it to use the ORDER BY column for indexing to prevent it from having to do a sort? The join is what causes it to revert back to using the predicate index, even with hints. Remove the join and just return the ROWIDs for those 100 records and it flies, even on 10 million records.
    It'd be great if there was some generic hint that could accomplish this, such that if we change the table/columns/indexes, we don't need to change the hint (the FIRST_ROWS hint is a good example of this, while the INDEX hint is the opposite), but any help would be appreciated. I can provide explain plans for any of the above if needed.
    Thanks!

    Lakmal Rajapakse wrote:
    OK here is an example to illustrate the advantage:
    SQL> set autot traceonly
    SQL> select * from (
    2  select a.*, rownum x  from
    3  (
    4  select a.* from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  )
    9  where x >= 1100
    10  /
    101 rows selected.
    Execution Plan
    Plan hash value: 3711662397
    | Id  | Operation                      | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  1 |  VIEW                          |            |  1200 |   521K|   192   (0)| 00:00:03 |
    |*  2 |   COUNT STOPKEY                |            |       |       |            |          |
    |   3 |    VIEW                        |            |  1200 |   506K|   192   (0)| 00:00:03 |
    |   4 |     TABLE ACCESS BY INDEX ROWID| EVENTS     |   253M|    34G|   192   (0)| 00:00:03 |
    |   5 |      INDEX FULL SCAN           | EVEN_IDX02 |  1200 |       |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    1 - filter("X">=1100)
    2 - filter(ROWNUM<=1200)
    Statistics
    0  recursive calls
    0  db block gets
    443  consistent gets
    0  physical reads
    0  redo size
    25203  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    SQL>
    SQL>
    SQL> select * from aoswf.events a, (
    2  select rid, rownum x  from
    3  (
    4  select rowid rid from aoswf.events a
    5  order by EVENT_DATETIME
    6  ) a
    7  where rownum <= 1200
    8  ) b
    9  where x >= 1100
    10  and a.rowid = rid
    11  /
    101 rows selected.
    Execution Plan
    Plan hash value: 2308864810
    | Id  | Operation                   | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |   1 |  NESTED LOOPS               |            |  1200 |   201K|   261K  (1)| 00:52:21 |
    |*  2 |   VIEW                      |            |  1200 | 30000 |   260K  (1)| 00:52:06 |
    |*  3 |    COUNT STOPKEY            |            |       |       |            |          |
    |   4 |     VIEW                    |            |   253M|  2895M|   260K  (1)| 00:52:06 |
    |   5 |      INDEX FULL SCAN        | EVEN_IDX02 |   253M|  4826M|   260K  (1)| 00:52:06 |
    |   6 |   TABLE ACCESS BY USER ROWID| EVENTS     |     1 |   147 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter("X">=1100)
    3 - filter(ROWNUM<=1200)
    Statistics
    8  recursive calls
    0  db block gets
    117  consistent gets
    0  physical reads
    0  redo size
    27539  bytes sent via SQL*Net to client
    281  bytes received via SQL*Net from client
    8  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    101  rows processed
    Lakmal (and OP),
    Not sure what advantage you are trying to show here. But considering that we are talking about a pagination query here, where the order of records is important, your two queries will not always generate output in the same order. Here is the test case:
    SQL> select * from v$version ;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    PL/SQL Release 10.2.0.1.0 - Production
    CORE     10.2.0.1.0     Production
    TNS for Linux: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    SQL> show parameter optimizer
    NAME                                 TYPE        VALUE
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      10.2.0.1
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_mode                       string      ALL_ROWS
    optimizer_secure_view_merging        boolean     TRUE
    SQL> show parameter pga
    NAME                                 TYPE        VALUE
    pga_aggregate_target                 big integer 103M
    SQL> create table t nologging as select * from all_objects where 1 = 2 ;
    Table created.
    SQL> create index t_idx on t(last_ddl_time) nologging ;
    Index created.
    SQL> insert /*+ APPEND */ into t (owner, object_name, object_id, created, last_ddl_time) select owner, object_name, object_id, created, sysdate - dbms_random.value(1, 100) from all_objects order by dbms_random.random;
    40617 rows created.
    SQL> commit ;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade=>true);
    PL/SQL procedure successfully completed.
    SQL> select object_id, object_name, created from t, (select rid, rownum rn from (select rowid rid from t order by created desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    CREATED
         47686 ALL$OLAP2_JOIN_KEY_COLUMN_USES 28-JUL-2009 08:08:39
         47672 ALL$OLAP2_CUBE_DIM_USES        28-JUL-2009 08:08:39
         47681 ALL$OLAP2_CUBE_MEASURE_MAPS    28-JUL-2009 08:08:39
         47682 ALL$OLAP2_FACT_LEVEL_USES      28-JUL-2009 08:08:39
         47685 ALL$OLAP2_AGGREGATION_USES     28-JUL-2009 08:08:39
         47692 ALL$OLAP2_CATALOGS             28-JUL-2009 08:08:39
         47665 ALL$OLAPMR_FACTTBLKEYMAPS      28-JUL-2009 08:08:39
         47688 ALL$OLAP2_DIM_LEVEL_ATTR_MAPS  28-JUL-2009 08:08:39
         47689 ALL$OLAP2_DIM_LEVELS_KEYMAPS   28-JUL-2009 08:08:39
         47669 ALL$OLAP9I2_HIER_DIMENSIONS    28-JUL-2009 08:08:39
         47666 ALL$OLAP9I1_HIER_DIMENSIONS    28-JUL-2009 08:08:39
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc ;
    OBJECT_ID OBJECT_NAME                    LAST_DDL_TIME
         37534 com/sun/mail/smtp/SMTPMessage  06-FEB-2010 03:46:14
         13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
         11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
         42266 SI_GETCLRHSTGRFTR              06-FEB-2010 03:40:20
         16695 /2940a364_RepIdDelegator_1_3   06-FEB-2010 03:38:17
         36539 sun/io/ByteToCharMacHebrew     06-FEB-2010 03:28:57
         26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
         14044 /d29b81e1_OldHeaders           06-FEB-2010 03:12:12
         36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
         12920 /25f8f3a5_BasicSplitPaneUI     06-FEB-2010 03:11:06
         15752 /2f494dce_JDWPThreadReference  06-FEB-2010 03:09:31
    11 rows selected.
    SQL> set autotrace traceonly
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc
      2  ;
    11 rows selected.
    Execution Plan
    Plan hash value: 44968669
    | Id  | Operation                       | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |   1 |  SORT ORDER BY                  |       |  1200 | 91200 |   180   (2)| 00:00:03 |
    |*  2 |   HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  3 |    VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  4 |     COUNT STOPKEY               |       |       |       |            |          |
    |   5 |      VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   6 |       INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   7 |    TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("T".ROWID="T1"."RID")
       3 - filter("RN">=1190)
       4 - filter(ROWNUM<=1200)
    Statistics
              1  recursive calls
              0  db block gets
            348  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
    | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
    |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
    |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
    |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            343  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
    11 rows selected.
    Execution Plan
    Plan hash value: 168880862
    | Id  | Operation                      | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  1 |  HASH JOIN                     |       |  1200 | 91200 |   179   (2)| 00:00:03 |
    |*  2 |   VIEW                         |       |  1200 | 30000 |    98   (0)| 00:00:02 |
    |*  3 |    COUNT STOPKEY               |       |       |       |            |          |
    |   4 |     VIEW                       |       | 40617 |   475K|    98   (0)| 00:00:02 |
    |   5 |      INDEX FULL SCAN DESCENDING| T_IDX | 40617 |   793K|    98   (0)| 00:00:02 |
    |   6 |   TABLE ACCESS FULL            | T     | 40617 |  2022K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("T".ROWID="T1"."RID")
       2 - filter("RN">=1190)
       3 - filter(ROWNUM<=1200)
    Statistics
              0  recursive calls
              0  db block gets
            349  consistent gets
              0  physical reads
              0  redo size
           1063  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
             11  rows processed
    SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 order by last_ddl_time desc ;
    11 rows selected.
    Execution Plan
    Plan hash value: 882605040
    | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT         |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  1 |  VIEW                    |      |  1200 | 62400 |    80   (2)| 00:00:01 |
    |*  2 |   COUNT STOPKEY          |      |       |       |            |          |
    |   3 |    VIEW                  |      | 40617 |  1546K|    80   (2)| 00:00:01 |
    |*  4 |     SORT ORDER BY STOPKEY|      | 40617 |  2062K|    80   (2)| 00:00:01 |
    |   5 |      TABLE ACCESS FULL   | T    | 40617 |  2062K|    80   (2)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("RN">=1190)
       2 - filter(ROWNUM<=1200)
       4 - filter(ROWNUM<=1200)
    Statistics
         175  recursive calls
           0  db block gets
         388  consistent gets
           0  physical reads
           0  redo size
           1063  bytes sent via SQL*Net to client
         385  bytes received via SQL*Net from client
           2  SQL*Net roundtrips to/from client
           4  sorts (memory)
           0  sorts (disk)
          11  rows processed
    SQL> set autotrace off
    SQL> spool off
    As you will see, the join query here has to have an ORDER BY clause at the end to ensure that records are correctly sorted. You cannot rely on the optimizer choosing the NESTED LOOP join method and, as the above example shows, when the optimizer chooses a HASH JOIN, Oracle is free to return rows in no particular order.
    The query that does not involve a join always returns rows in the desired order. Adding an ORDER BY does add a step in the plan for the query using the join but does not affect the other query.
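    So, as a sketch, the safe form of the rowid-join pagination (using the original poster's members/joindate names) keeps the inner top-N for speed but repeats the sort at the outermost level:

    SELECT m.*
    FROM members m,
         ( SELECT rid, rownum rnum
           FROM ( SELECT rowid AS rid
                  FROM members
                  WHERE last_name = 'Smith'
                  ORDER BY joindate )
           WHERE rownum <= 100 ) t
    WHERE t.rnum >= 1
    AND   t.rid = m.rowid
    ORDER BY m.joindate;  -- cheap: at most 100 rows reach this sort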

  • Efficiently Querying Large Table

    I have to query a recordset of 14k records against (join) a very large table: billions of rows - even a count on the table does not return any result after 15 minutes.
    I tried a plsql procedure to store the first recordset in a temp table and then preparing two cursors: one on the temp table and the other on the large table.
    However, the plsql procedure runs for a long time and just gives up with this error:
    SQL> exec match;
    ERROR:
    ORA-01041: internal error. hostdef extension doesn't exist
    BEGIN match; END;
    ERROR at line 1:
    ORA-03113: end-of-file on communication channel
    Is there a way through which I can query more efficiently?
    - Using chunks of records from the large table at a time - how do I do that? (rowid)
    - Or just ask the DBA to partition the table - but the whole table would still need to be queried.
    The temp table is:
    CREATE TABLE test AS SELECT a.mon_ord_no, a.mo_type_id, a.p2a_pbu_id, a.creation_date,b.status_date_time,
    a.expiry_date, a.current_mo_status_desc_id, a.amount,
    a.purchaser_name, a.recipent_name, a.mo_id_type_id,
    a.mo_redeemed_by_id, a.recipient_type, c.pbu_id, c.txn_seq_no, c.txn_date_time
    FROM mon_order a, mo_status b, host_txn_log c
    where a.mon_ord_no = b.mon_ord_no
    and a.mon_ord_no = c.mon_ord_no
    and b.status_date_time = c.txn_date_time
    and b.status_desc_id = 7
    and a.current_mo_status_desc_id = 7
    and a.amount is not null
    and a.amount > 0
    order by b.status_date_time;
    and the PL/SQL Procedure is:
    CREATE OR REPLACE PROCEDURE match
    IS
    deleted INTEGER := 0;
    counter INTEGER := 0;
    CURSOR v_table IS
         SELECT DISTINCT pbu_id, txn_seq_no, create_date
         FROM host_v
         WHERE status = 4;
    v_table_record v_table%ROWTYPE;
    CURSOR temp_table (v_pbu_id NUMBER, v_txn_seq_no NUMBER, v_create_date DATE) IS
         SELECT * FROM test
         WHERE pbu_id = v_pbu_id
         AND txn_seq_no = v_txn_seq_no
         AND creation_date = v_create_date;
    temp_table_record temp_table%ROWTYPE;
    BEGIN
    OPEN v_table;
    LOOP
    FETCH v_table INTO v_table_record;   -- was "voucher_table_record", which is never declared
    EXIT WHEN v_table%NOTFOUND;
    OPEN temp_table (v_table_record.pbu_id, v_table_record.txn_seq_no, v_table_record.create_date);
    LOOP
    FETCH temp_table INTO temp_table_record;
    EXIT WHEN temp_table%NOTFOUND;       -- was %FOUND, which would exit after any successful fetch
    DELETE FROM test
    WHERE pbu_id = v_table_record.pbu_id AND
          txn_seq_no = v_table_record.txn_seq_no AND
          creation_date = v_table_record.create_date;   -- compare table columns, not two local records
    END LOOP;
    CLOSE temp_table;
    END LOOP;
    CLOSE v_table;
    END match;
    /

    Many thanks,
    I can get the explain plan for the SQL statement, but I am not sure how to get it for the PL/SQL - which part of the PL/SQL do I get the explain plan for? I am using SQL Navigator.
    I can create the cursor with the join, and if the DELETE statement is not needed, then there is no requirement for the procedure itself. Should I just run the query as a plain SQL statement?
    Also, you have not said what I should do with the rowid.
    Regards
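    For what it's worth, the nested-cursor delete in the procedure above can usually be collapsed into a single set-based statement, which would make the procedure unnecessary. A sketch, assuming the column names from the original code (untested against the real schema):

    DELETE FROM test t
    WHERE EXISTS (
        SELECT 1
        FROM   host_v v
        WHERE  v.status      = 4
        AND    v.pbu_id      = t.pbu_id
        AND    v.txn_seq_no  = t.txn_seq_no
        AND    v.create_date = t.creation_date
    );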

  • Need help in optimisation for a select query on a large table

    Hi Gurus
    Please help in optimising this code. It takes 1 hour for 3,000-4,000 records; it's very slow.
    My SELECT reads from a table which contains 10 million records.
    I am writing the SELECT on the large table and retrieving values from it by comparing against my table, which has 3-4k records.
    I am pasting the code. please help
    Data: wa_i_tab1 type tys_tg_1.
    DATA: i_tab TYPE STANDARD TABLE OF tys_tg_1.
    Data: wa_result_pkg  type tys_tg_1,
          wa_result_pkg1 type tys_tg_1.
    SELECT /BIC/ZSETLRUN AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1
      from /BIC/PZREB_SDAT                " this table contains 10 million records
      into CORRESPONDING FIELDS OF table i_tab
      FOR ALL ENTRIES IN RESULT_PACKAGE   " contains 3000-4000 records
      where /bic/ZREB_SDAT = RESULT_PACKAGE-/BIC/ZREB_SDAT
        AND AGREEMENT      = RESULT_PACKAGE-AGREEMENT
        AND /BIC/ZLITEM1   = RESULT_PACKAGE-/BIC/ZLITEM1.
    sort RESULT_PACKAGE by AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1.
    sort i_tab by AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1.
    loop at RESULT_PACKAGE into wa_result_pkg.
    read TABLE i_tab INTO wa_i_tab1 with key
    /BIC/ZREB_SDAT =
    wa_result_pkg-/BIC/ZREB_SDAT
    AGREEMENT = wa_result_pkg-AGREEMENT
    /BIC/ZLITEM1 = wa_result_pkg-/BIC/ZLITEM1.
    IF SY-SUBRC = 0.
    move wa_i_tab1-/BIC/ZSETLRUN to
    wa_result_pkg-/BIC/ZSETLRUN.
    wa_result_pkg1-/BIC/ZSETLRUN = wa_result_pkg-/BIC/ZSETLRUN.
    modify RESULT_PACKAGE from wa_result_pkg1
    TRANSPORTING /BIC/ZSETLRUN.
    ENDIF.
    CLEAR: wa_i_tab1,wa_result_pkg1,wa_result_pkg.
    endloop.

    Hi,
    1) Check whether the RESULT_PACKAGE internal table contains duplicate records based on the WHERE-condition fields used below.
    2) Instead of INTO CORRESPONDING FIELDS OF TABLE, use INTO TABLE.
    Refer to the code below:
    RESULT_PACKAGE1[] = RESULT_PACKAGE[].
    sort RESULT_PACKAGE1 by /BIC/ZREB_SDAT AGREEMENT /BIC/ZLITEM1.
    DELETE ADJACENT DUPLICATES FROM RESULT_PACKAGE1 COMPARING /BIC/ZREB_SDAT AGREEMENT /BIC/ZLITEM1.
    SELECT /BIC/ZSETLRUN AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1
    from /BIC/PZREB_SDAT
    into table i_tab
    FOR ALL ENTRIES IN RESULT_PACKAGE1
    where
    /bic/ZREB_SDAT = RESULT_PACKAGE1-/BIC/ZREB_SDAT
    AND
    AGREEMENT = RESULT_PACKAGE1-AGREEMENT
    AND /BIC/ZLITEM1 = RESULT_PACKAGE1-/BIC/ZLITEM1.
    And one more thing: since you are reading from a 10-million-row table, use PACKAGE SIZE in your SELECT query.
    Also refer to the following link: For All Entries for 1 Million Records
    Regards,
    Dhina..
    Edited by: Dhina DMD on Sep 15, 2011 7:17 AM

  • Slow query due to large table and full table scan

    Hi,
    We have a large Oracle database, v 10g. Two of the tables in the database have over one million rows.
    We have a few queries which take a lot of time to execute. Not always, though; it seems that when load is high the queries tend
    to take much longer. Average time may be 1 or 2 seconds, but the maximum can be up to 2 minutes.
    We have now used Oracle Grid to help us examine the queries. We have found that some of the queries require two or three full table scans.
    Two of the full table scans are of the two large tables mentioned above.
    This is an example query:
    SELECT table1.column, table2.column, table3.column
    FROM table1
    JOIN table2 ON table1.table2id = table2.id
    LEFT JOIN table3 ON table2.table3id = table3.id
    WHERE table1.id IN (
        SELECT id
        FROM (
            SELECT a.*, rownum rnum
            FROM (
                SELECT table1.id
                FROM table1,
                     table2,
                     table3
                WHERE table1.table2id = table2.id
                AND (table2.table3id IS NULL OR table2.table3id = :table3IdParameter)
            ) a
            WHERE rownum <= :end
        )
        WHERE rnum >= :start
    )
    Table1 and table2 are the large tables in this example. This query starts two full table scans on those tables.
    Can we avoid this? We have, what we think are, the correct indexes.
    /best regards, Håkan

    Hi Håkan - welcome to the forum.
    We have a large Oracle database, v 10g. Two of the tables in the database have over one million rows.
    We have a few queries which take a lot of time to execute. Not always, though; it seems that when load is high the queries tend
    to take much longer. Average time may be 1 or 2 seconds, but the maximum can be up to 2 minutes.
    We have now used Oracle Grid to help us examine the queries. We have found that some of the queries require two or three full table scans.
    Two of the full table scans are of the two large tables mentioned above.
    This is an example query:
    Firstly, please read the forum FAQ - top right of page.
    Please format your SQL using tags [code /code].
    In order to help us to help you.
    Please post the relevant table structures (i.e. joined, FK, and PK fields only) in this form - note the use of code tags - so we can just run the table create script:
    CREATE TABLE table1 (
      Field1  Type1,
      Field2  Type2,
      FieldN  TypeN
    );
    Then give us some table data - not 100's of records - just enough, in the form:
    INSERT INTO Table1 VALUES(Field1, Field2, ... FieldN);
    Please also post the EXPLAIN PLAN - again with tags.
    HTH,
    Paul...

  • Item Classes - large tables - query timeouts!!!

    Hi,
    I've been building a series of Item Classes in the Discoverer 4.1 Admin Tool on a reasonably large set of tables - but some of the Item Classes time out before the list of values is retrieved.
    Discoverer continually tells me to go and set values for the query governor - but I've already done this (15-minute warning, 60-minute timeout).
    Does the Admin Tool have another place to configure the query governor - other than Tool -> Privileges -> Query Governor tab?
    I've been able to fool the database by running a SQL*Plus query (resulting in a cached result set) for the smaller tables, but the Item Classes based on the larger table will not return.
    I've reviewed the 'LOV that works more like the LOV in Oracle Forms' thread and the Database Function and Custom folder solutions will work - but I'd prefer to fix the problem at its source. Is it possible Discoverer requires an index on every column used as an item class?
    Can anyone help me?
    thanks,
    Lance

    Have a look at the timeout value specified in the registry key:
    \\HKEY_CURRENT_USER\Software\ORACLE\Discoverer\Database\ItemClassDelay
    By default it is 20 seconds.
    Metalink Note refers:
    http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=1040079.6
    You're probably safer using your lookup tables to generate LOV's anyway, since you have no guarantee that an LOV generated from a particular field in your table will contain all of the possible values available in the lookup table.

  • Large table partition

    Hi,
    I have a large table with multiple columns, one of which is a date column. I would like to partition based on the date column.
    Method 1: What is the advantage of assigning a new tablespace to each partition?
    Method 2: Why not have one big tablespace with multiple datafiles assigned to it?
    Which one is faster for querying results?
    Thanks.

    user633980 wrote:
    I have Oracle's ASM running on 3 15K disks. But querying this big table is taking a while to return results. I thought that range partitioning may help.
    What other things would you suggest which will decrease the query time?
    There are two basic things one can do at DBA/developer level to improve I/O:
    - do less I/O (obviously)
    - do "faster" I/O
    Doing less I/O is where indexing and partitioning and so on come to the fore. Partitioning can reduce a large invoice table containing a year's worth of invoices to 365 partitions (which is like having a bunch of "mini tables", each with its own local indexes). This can make a significant reduction in I/O when wanting to process specific invoices. Partitioning also makes data management easier (enabling one to treat a partition as a physical entity in terms of data exchange, indexing, export/import, etc.).
    How well partitioning will work in your case depends on how well the partitioning is designed (range/list/hash), on which columns, the indexing approach used, how the queries are structured to enable partition pruning (hitting only specific partitions instead of all the data in the partitioned table), and so on.
    Doing I/O faster - well, there is not much you can do at this level to make the actual I/O faster. That requires relooking at, redesigning, or reconfiguring the storage layer, which is typically a sysadmin function and not a developer one. What you can do from a developer viewpoint, however, is make I/O faster using parallel processing.
    For example, if the query needs to read a million rows, that can be made faster by using 10 parallel processes, each processing 100,000 rows. This however requires correctly using Oracle's PQ (Parallel Query) feature and utilising the available I/O capacity correctly. For example, PQ is of little use if the I/O pipes are already being hit at close to full capacity.
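    As a sketch of that last point, assuming a date-partitioned invoice table (all names here are hypothetical), a pruned, parallelised read would look something like:

    SELECT /*+ PARALLEL(i, 10) */
           SUM(i.amount)
    FROM   invoices i
    WHERE  i.invoice_date >= DATE '2010-01-01'
    AND    i.invoice_date <  DATE '2010-02-01';  -- prunes to one month of daily partitions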
    The basic point about tablespaces is that they have no impact on any of this - nor should they, since a tablespace is a logical data management layer.

  • Create a New Tree in Query Access Manager

    Folks,
    Hello. In PeopleTools Query Access Manager, when I click the button "Create a New Tree" to create a new Query Access Tree, the system always comes up with this message:
    "You are not authorized to update definition QUERY_TREE_OLAP. You are not authorized to update the specific definition. Contact your security administrator for access to the specified definition."
    Do any folks understand how to solve this problem ?
    Thanks.

    I figured it was that simple.
    I haven't seen you on here in awhile.

  • Finding query access frequency or how many times a query has been executed?

    Dear Experts
    I need to find the total access frequency of individual queries requested by users at a particular time.
    Say there are 20 distinct queries requested within a window of 3 hours. All 20 queries, or some of them, may be requested more than 2-3 times in that window by other users; if query Q1 is requested 5 times, then its access frequency (the number of times it was executed) is 5.
    From where, and how, can I find this count of query access frequency, i.e. how many times a query has been executed in a particular time window or session?
    Normally we know there are SQL history dynamic performance views, or it may be possible to query the shared pool library cache for the SQL area to find the total number of executions of a query. If anyone knows how to find that, please help me with this.
    Regards-
    Engr. A.N.M. Bazlur Rashid
    OCP DBA

    That's one of the stats reported by statspack - assuming that your query does sufficient work to meet the thresholds for the standard report. EXECUTIONS is of course one of the columns of v$sql, so you might just wish to sample that yourself. Finally, if you are on 11g, the SQL you are interested in is relatively low resource intensive, and you are licensed for AWR, then you can use the slightly madly named "colored SQL" feature that ensures that a specific statement will always be sampled for AWR.
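    For example, sampling v$sql yourself is a one-liner (the LIKE filter is just a hypothetical way to single out the statement of interest):

    SELECT sql_id, executions, SUBSTR(sql_text, 1, 60) AS sql_text
    FROM   v$sql
    WHERE  sql_text LIKE '%your query text here%'
    ORDER  BY executions DESC;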
    Niall Litchfield
    http://www.orawin.info/

  • PeopleSoft Query Access Services security problems, in other words does QAS

    PeopleSoft Query Access Services security problems, in other words does QAS bypass PeopleSoft?

    Rod,
    Can you please post the entire contents of the dataserver summary screen...thanks.
    Typically the "No suitable driver exists" error is due to trying to connect to an unsupported DB version with the driver.  For example using the MSSQL 2000 driver to connect to MSSQL 2005, be sure that you have downloaded both the MSSQL 2000 (una2000.jar) and 2005 jars and have the proper classpaths specified.
    Sam

  • Using dbms_metadata to get the definition of a large table

    Hi,
    I have a very, very large table definition - a table with partitions and sub-partitions - and I want to create that table in some other DB.
    I am trying the spooled query below, but it's not retrieving the complete definition of the table. Is there any other setting I should use?
    spool t.sql
    set pagesize 0
    set long 2000000
    select dbms_metadata.get_ddl
    I am using oracle 10g r2
    Thanks,
    Pankaj

    Thank you all for your replies. Following are my observations after the suggested settings:
    With setting
    ==================
    set longchunksize 2000000
    set long 2000000
    set lines 2000
    set pages 0
    -sh-3.00$ cat t.sql | wc
    10819 23419 21635001
    =================
    With setting
    set long 100000
    set pages 0
    -sh-3.00$ cat t.sql | wc
    3452 7326 279269
    Result => None worked for me :(
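    For reference, a complete invocation with the working settings would look like this (table and schema names are placeholders):

    set long 2000000
    set longchunksize 2000000
    set lines 2000
    set pagesize 0
    spool t.sql
    select dbms_metadata.get_ddl('TABLE', 'BIG_TABLE', 'MY_SCHEMA') from dual;
    spool off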

  • PUT large table in the keep pool?

    I would like to buffer a table in the keep pool:
    The table I would like to put in the keep pool is 200MB (which is really large). The corresponding indexes have a size of 70MB.
    Actually, many blocks of the table are stored in the default pool, but some blocks of that huge table are not accessed for a long time, so they are purged from the default pool due to the LRU strategy (when blocks of other tables are queried and buffered).
    If the table (which is supposed to be in the keep pool) is queried again, the queried blocks have to be reloaded, plus the corresponding index blocks.
    All these reloads add up to 600MB in three hours, which could be saved if the table and its indexes were stored in the keep pool.
    The table itself has only one key field :-(
    The table has many many indices, because the table is accessed by many different fields :-(
    The table is changed by updates and insert 400 times in 3 hours :-( ---->indices are updated :-(
    I can not split that table :-(
    I can not increase the size of the cache :-(
    I don't know if response times (of queries targeting that table) would decrease significantly if the table were stored in the keep pool.
    What do you think? Surely there must be a significant change in terms of the response time?
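    For reference, moving a table and an index to the keep pool is plain DDL once the pool has been sized; the object names below are placeholders:

    -- Size the keep pool first (instance parameter; needs a suitably sized SGA):
    ALTER SYSTEM SET db_keep_cache_size = 300M SCOPE=SPFILE;
    -- Then assign the segments to it:
    ALTER TABLE big_table STORAGE (BUFFER_POOL KEEP);
    ALTER INDEX big_table_ix1 STORAGE (BUFFER_POOL KEEP);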

    Hi abapers,
    the error is here:
    DATA: lr_error TYPE REF TO cx_address_bcs.
    DATA: lv_message TYPE string.
    loop at tab_destmail_f into wa_destmail_f.   "or assigning fs
    TRY.
        recipient = cl_cam_address_bcs=>create_internet_address( wa_destmail_f-smtp_addr ).
    >>>>>>>>    send_request->add_recipient( recipient ).
      CATCH cx_address_bcs INTO lr_error.
        lv_message = lr_error->get_text( ).
    ENDTRY.
    endloop.
    The messages are:
    Zugriff über 'NULL' Objektreferenz nicht möglich.
    Access via 'NULL' object reference not possible.
    Err.tmpo.ejec.         OBJECTS_OBJREF_NOT_ASSIGNED
    Excep.                 CX_SY_REF_IS_INITIAL
    Fecha y hora           21.01.2010 19:00:55
    The information of   lv_message is:
    {O:322*CLASS=CX_ADDRESS_BCS}
    Excepción CX_ADDRESS_BCS ocurrida (programa: CL_CAM_ADDRESS_BCS============CP, include CL_CAM_ADDRESS_BCS============CM01W, línea: 45)
    I don't find the solution, any saving code?
    This is very important to me. Thanks a lot for your help!

  • HS connection to MySQL fails for large table

    Hello,
    I have set up an HS connection to a MySQL database using a MySQL ODBC 3.51 DSN. My Oracle box runs version 10.2.0.1 on Windows 2003 R2. The MySQL version is 4.1.22, running on a different machine with the same OS.
    I completed the connection through a database link, which works fine in SQL*Plus when selecting small MySQL tables. However, I keep getting an out-of-memory error when selecting a certain large table from the MySQL database. Previously, I had tested the DSN and run the same SELECT in Access, and it didn't give any error. This is the error thrown by SQL*Plus:
    SQL> select * from progressnotes@mysql_rmg where "encounterID" = 224720;
    select * from progressnotes@mysql_rmg where "encounterID" = 224720
    ERROR at line 1:
    ORA-00942: table or view does not exist
    [Generic Connectivity Using ODBC][MySQL][ODBC 3.51
        Driver][mysqld-4.1.22-community-nt]Lost connection to MySQL server during query
    (SQL State: S1T00; SQL Code: 2013)
    ORA-02063: preceding 2 lines from MYSQL_RMG
    I traced the HS connection and here is the result from the .trc file:
    Oracle Corporation --- THURSDAY JUN 12 2008 11:19:51.809
    Heterogeneous Agent Release
    10.2.0.1.0
    (0) [Generic Connectivity Using ODBC] version: 4.6.1.0.0070
    (0) connect string is: defTdpName=MYSQL_RMG;SYNTAX=(ORACLE8_HOA, BASED_ON=ORACLE8,
    (0) IDENTIFIER_QUOTE_CHAR="",
    (0) CASE_SENSITIVE=CASE_SENSITIVE_QUOTE);BINDING=<navobj><binding><datasources><da-
    (0) tasource name='MYSQL_RMG' type='ODBC'
    (0) connect='MYSQL_RMG'><driverProperties/></datasource></datasources><remoteMachi-
    (0) nes/><environment><optimizer noFlattener='true'/><misc year2000Policy='-1'
    (0) consumerApi='1' sessionBehavior='4'/><queryProcessor parserDepth='2000'
    (0) tokenSize='1000' noInsertParameterization='true'
    noThreadedReadAhead='true'
    (0) noCommandReuse='true'/></environment></binding></navobj>
    (0) ORACLE GENERIC GATEWAY Log File Started at 2008-06-12T11:19:51
    (0) hoadtab(26); Entered.
    (0) Table 1 - PROGRESSNOTES
    (0) [MySQL][ODBC 3.51 Driver][mysqld-4.1.22-community-nt]MySQL client ran out of
    (0) memory (SQL State: S1T00; SQL Code: 2008)
    (0) (Last message occurred 2 times)
    (0)
    (0) hoapars(15); Entered.
    (0) Sql Text is:
    (0) SELECT * FROM "PROGRESSNOTES"
    (0) [MySQL][ODBC 3.51 Driver][mysqld-4.1.22-community-nt]Lost connection to MySQL
    (0) server during query (SQL State: S1T00; SQL Code: 2013)
    (0) (Last message occurred 2 times)
    (0)
    (0) [A00D] Failed to open table MYSQL_RMG:PROGRESSNOTES
    (0)
    (0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
    (0) Code: 2006)
    (0) (Last message occurred 2 times)
    (0)
    (0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
    (0) Code: 2006)
    (0) (Last message occurred 2 times)
    (0)
    (0) [S1000] [9013]General error in nvITrans_Commit - rc = -1. Please refer to the
    (0) log file for details.
    (0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
    (0) Code: 2006)
    (0) (Last message occurred 2 times)
    (0)
    (0) [S1000] [9013]General error in nvITrans_Rollback - rc = -1. Please refer to
    (0) the log file for details.
    (0) Closing log file at THU JUN 12 11:20:38 2008.
    I have read the MySQL documentation, and apparently there's a "Don't Cache Result (forward only cursors)" parameter in the ODBC DSN that needs to be checked in order to cache the results on the MySQL server side instead of the driver side, but checking that parameter doesn't work for the HS connection. Instead, the SQL*Plus session throws the following message when selecting the same large table:
    SQL> select * from progressnotes@mysql_rmg where "encounterID" = 224720;
    select * from progressnotes@mysql_rmg where "encounterID" = 224720
    ERROR at line 1:
    ORA-02068: following severe error from MYSQL_RMG
    ORA-28511: lost RPC connection to heterogeneous remote agent using
    SID=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.0.120)(PORT=1521))(CONNECT_DATA=(SID=MYSQL_RMG)))
    Curiously enough, after checking the parameter, the Access connection through the ODBC DSN seems to improve!
    Is there an additional parameter that needs to be set up in the inithsodbc.ora perhaps? These are the current HS parameters:
    # HS init parameters
    HS_FDS_CONNECT_INFO = MYSQL_RMG
    HS_FDS_TRACE_LEVEL = ON
    My SID_LIST_LISTENER entry is:
    (SID_DESC =
      (PROGRAM = HSODBC)
      (SID_NAME = MYSQL_RMG)
      (ORACLE_HOME = D:\oracle\product\10.2.0\db_1)
    )
    Finally, here is my TNSNAMES.ORA entry for the HS connection:
    MYSQL_RMG =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 10.0.0.120)(PORT = 1521))
        (CONNECT_DATA =
          (SID = MYSQL_RMG)
        )
        (HS = OK)
      )
    Your advice will be greatly appreciated,
    Thanks,
    Luis
    Message was edited by:
    lmconsite

    First of all, please be aware that HSODBC V10 has been desupported and DG4ODBC should be used instead.
    The root cause of the problem you describe could be related to a timeout in the ODBC driver (especially taking your comment into account: it happens only for larger tables):
    (0) [MySQL][ODBC 3.51 Driver]MySQL server has gone away (SQL State: S1T00; SQL
    (0) Code: 2006)
    This indicates the driver or the DB abends the connection due to a timeout.
    Check out the wait_timeout MySQL variable on the server and increase it.
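    On the MySQL side that is, for example (the value is in seconds; 28800 is MySQL's default, so pick something larger than your longest idle gap):

    SHOW VARIABLES LIKE 'wait_timeout';
    SET GLOBAL wait_timeout = 28800;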

  • Postgres' LIMIT .. OFFSET for large table

    Hi!
    I have a really large table (some millions of rows) which I'd like to present on a web page. I let the user choose a limit, say 25 lines per page, and present some buttons to go one page forward or backwards.
    Some years ago, I did this using PostgreSQL, where there's an easy way to do it using LIMIT .. OFFSET. In Oracle, there's no such functionality.
    Currently, my 'workaround' looks like this (a bit more complex in reality):
    SELECT * FROM (
        SELECT
            ROW_NUMBER() OVER (ORDER BY MSG_RCV_TIME DESC) AS ROWNO,
            TO_CHAR(MSG_RCV_TIME) MSG_RCV
        FROM MSG_TABLE
        ORDER BY MSG_RCV_TIME DESC
    ) WHERE ROWNO BETWEEN 1 AND 10
    This gives back 10 rows, which does the job. The problem is: it takes AGES! The web server runs into a timeout before even printing one line. First, Oracle has to suck in all x*1'000'000 lines just to sort out the ones it doesn't need. That can't be the solution, can it?
    In this forum, I have read a few notes about PARTITION, CURSOR and such things, but I didn't really get what the use of it is.
    Any hints on that? This forum is based on Oracle, too (I hope), and it's fast. There must be a solution for this.
    Btw, the table I am talking about is being filled by syslog-ng, and it currently grows by 200MB per day (and it's still in the testing phase). I expect some hundred million lines to be present later.
    Thanks a lot in advance
    André

    See Tom Kyte's site for this
    Cool. Didn't know this one. How is he checking the performance of the queries?
    The one comment in there that I entirely agree with is that such large result sets are meaningless to the human eye, so I would question exactly what you are trying to achieve. As Tom rightly says, nobody is ever going to scroll down to rows 999001 - 999010, even if they could.
    Of course not. But you see, as an example, that if you type just one word into Google's mask, it returns loads of pages. As soon as you see that your query was not really a good one, you try with more specific words, and it returns fewer pages. That's exactly what my GUI is going to do. First, it gives you an overview; then, it lets you refine the search.
    Anyway: as soon as I limit the output in the innermost query, I doubt it's useful. Say I limit the number of rows to browse through to 1000, but syslog-ng is producing 2000 rows per minute - you'll miss the rows you were maybe looking for.
    It's essential to be able to see all the records. I don't mind if nobody ever looks at pages 200'000 to 1'000'000.
    Thanks again for the great link.
    André (who really starts to like Oracle and its community)
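    For completeness, the classic Oracle top-N form of this query lets the optimizer stop early (COUNT STOPKEY) instead of numbering every row; with an index on MSG_RCV_TIME (and that column declared NOT NULL), it reads only the first rows. A sketch using the poster's column names:

    SELECT msg_rcv
    FROM ( SELECT t.*, rownum AS rowno
           FROM ( SELECT TO_CHAR(msg_rcv_time) AS msg_rcv
                  FROM   msg_table
                  ORDER BY msg_rcv_time DESC ) t
           WHERE rownum <= 10 )
    WHERE rowno >= 1;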
