Rewriting SQL to avoid multiple scans of a table

Hi,
Is it possible to rewrite the following statement so as to avoid multiple scans of the tables:
SELECT
(ACT.ID_ACCES_CLIENT_TYPE * 100000) + 30024 ss_key,
TO_CHAR(( SYSDATE ),'MM/DD/YYYY') date_key,
30024 transtype_key,
ACT.ID_ACCES_CLIENT_TYPE client_acces_d_sskey,
T.ID_MODELE fonct_mobile_d_sskey,
'0' type_mobile_key,
1 MEMBER
FROM ACCES_CLIENT_TYPE ACT, ACCES AC, TYPE_MODELE T
where ACT.FLAG_ACTIF is not null
and NVL(( ACT.DAT_FIN ),( SYSDATE ))> SYSDATE - ((3 + 0)*30)
and AC.ID_ACCES = ACT.ID_ACCES
and AC.FLAG_ACTIF is not null
and T.TAC = AC.TAC_1
AND mod(ACT.ID_ACCES_CLIENT_TYPE, 2) = 1
union all
SELECT
(ACT.ID_ACCES_CLIENT_TYPE * 100000) + 30025 ss_key,
TO_CHAR(( SYSDATE ),'MM/DD/YYYY') date_key,
30025 transtype_key,
ACT.ID_ACCES_CLIENT_TYPE client_acces_d_sskey,
T.ID_MODELE fonct_mobile_d_sskey,
'1' type_mobile_key,
1 MEMBER
FROM ACCES_CLIENT_TYPE ACT, ACCES AC, TYPE_MODELE T
where ACT.FLAG_ACTIF is not null
and NVL(( ACT.DAT_FIN ),( SYSDATE ))> SYSDATE - ((3 + 0)*30)
and AC.ID_ACCES = ACT.ID_ACCES
and AC.FLAG_ACTIF is not null
and T.TAC = AC.TAC_U
AND mod(ACT.ID_ACCES_CLIENT_TYPE, 2) = 1
union all
SELECT
(ACT.ID_ACCES_CLIENT_TYPE * 100000) + 30026 ss_key,
TO_CHAR(( SYSDATE ),'MM/DD/YYYY') date_key,
30026 transtype_key,
ACT.ID_ACCES_CLIENT_TYPE client_acces_d_sskey,
T.ID_MODELE fonct_mobile_d_sskey,
'2' type_mobile_key,
1 MEMBER
FROM ACCES_CLIENT_TYPE ACT, ACCES AC, TYPE_MODELE T
where ACT.FLAG_ACTIF is not null
and NVL(( ACT.DAT_FIN ),( SYSDATE ))> SYSDATE - ((3 + 0)*30)
and AC.ID_ACCES = ACT.ID_ACCES
and AC.FLAG_ACTIF is not null
and T.TAC = AC.TACG_G
AND mod(ACT.ID_ACCES_CLIENT_TYPE, 2) = 1
Thanks for the help

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 44028 | 2340K| 287K (67)|
| 1 | UNION-ALL | | | | |
|* 2 | HASH JOIN | | 15217 | 832K| 95835 (0)|
| 3 | INDEX FAST FULL SCAN | TYPE_MODELE_IDX_003 | 23462 | 320K| 10 (0)|
| 4 | NESTED LOOPS | | 15217 | 624K| 95817 (0)|
|* 5 | TABLE ACCESS FULL | ACCES_CLIENT_TYPE | 13078 | 319K| 91239 (0)|
|* 6 | TABLE ACCESS BY INDEX ROWID| ACCES | 1 | 17 | 2 (50)|
|* 7 | INDEX UNIQUE SCAN | PK_ACCES | 1 | | |
|* 8 | HASH JOIN | | 15079 | 824K| 95835 (0)|
| 9 | INDEX FAST FULL SCAN | TYPE_MODELE_IDX_003 | 23462 | 320K| 10 (0)|
| 10 | NESTED LOOPS | | 15079 | 618K| 95817 (0)|
|* 11 | TABLE ACCESS FULL | ACCES_CLIENT_TYPE | 13078 | 319K| 91239 (0)|
|* 12 | TABLE ACCESS BY INDEX ROWID| ACCES | 1 | 17 | 2 (50)|
|* 13 | INDEX UNIQUE SCAN | PK_ACCES | 1 | | |
|* 14 | HASH JOIN | | 13732 | 683K| 95834 (0)|
| 15 | INDEX FAST FULL SCAN | TYPE_MODELE_IDX_003 | 23462 | 320K| 10 (0)|
| 16 | NESTED LOOPS | | 13732 | 496K| 95817 (0)|
|* 17 | TABLE ACCESS FULL | ACCES_CLIENT_TYPE | 13078 | 319K| 91239 (0)|
|* 18 | TABLE ACCESS BY INDEX ROWID| ACCES | 1 | 12 | 2 (50)|
|* 19 | INDEX UNIQUE SCAN | PK_ACCES | 1 | | |
Predicate Information (identified by operation id):
2 - access("T"."TAC"="AC"."TAC_1")
5 - filter("ACT"."FLAG_ACTIF" IS NOT NULL AND
NVL("ACT"."DAT_FIN",SYSDATE@!)>SYSDATE@!-90 AND MOD("ACT"."ID_ACCES_CLIENT_TYPE",2)=1)
6 - filter("AC"."FLAG_ACTIF" IS NOT NULL AND "AC"."TAC_1" IS NOT NULL)
7 - access("AC"."ID_ACCES"="ACT"."ID_ACCES")
8 - access("T"."TAC"="AC"."TAC_U")
11 - filter("ACT"."FLAG_ACTIF" IS NOT NULL AND
NVL("ACT"."DAT_FIN",SYSDATE@!)>SYSDATE@!-90 AND MOD("ACT"."ID_ACCES_CLIENT_TYPE",2)=1)
12 - filter("AC"."FLAG_ACTIF" IS NOT NULL AND "AC"."TAC_U" IS NOT NULL)
13 - access("AC"."ID_ACCES"="ACT"."ID_ACCES")
14 - access("T"."TAC"="AC"."TACG_G")
17 - filter("ACT"."FLAG_ACTIF" IS NOT NULL AND
NVL("ACT"."DAT_FIN",SYSDATE@!)>SYSDATE@!-90 AND MOD("ACT"."ID_ACCES_CLIENT_TYPE",2)=1)
18 - filter("AC"."FLAG_ACTIF" IS NOT NULL AND "AC"."TACG_G" IS NOT NULL)
19 - access("AC"."ID_ACCES"="ACT"."ID_ACCES")
44 rows selected.
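The three branches differ only in the added constant (30024/30025/30026), the type_mobile_key literal, and the TAC column used for the join, so the statement can usually be collapsed into a single scan of each table. Here is a sketch (untested here, and assuming the same inner-join semantics as the original): scan ACCES_CLIENT_TYPE and ACCES once, fan each row out three ways with a small row generator, and pick the join column with a CASE:

```sql
SELECT (act.id_acces_client_type * 100000) + (30023 + g.n) ss_key,
       TO_CHAR(SYSDATE, 'MM/DD/YYYY') date_key,
       30023 + g.n transtype_key,
       act.id_acces_client_type client_acces_d_sskey,
       t.id_modele fonct_mobile_d_sskey,
       TO_CHAR(g.n - 1) type_mobile_key,
       1 MEMBER
FROM   acces_client_type act,
       acces ac,
       (SELECT LEVEL n FROM dual CONNECT BY LEVEL <= 3) g,  -- 3 rows: n = 1, 2, 3
       type_modele t
WHERE  act.flag_actif IS NOT NULL
AND    NVL(act.dat_fin, SYSDATE) > SYSDATE - ((3 + 0) * 30)
AND    MOD(act.id_acces_client_type, 2) = 1
AND    ac.id_acces = act.id_acces
AND    ac.flag_actif IS NOT NULL
AND    t.tac = CASE g.n WHEN 1 THEN ac.tac_1   -- branch 1: 30024, '0'
                        WHEN 2 THEN ac.tac_u   -- branch 2: 30025, '1'
                        WHEN 3 THEN ac.tacg_g  -- branch 3: 30026, '2'
               END;
```

TYPE_MODELE was cheap in the plan anyway (an index fast full scan at cost 10), so most of any gain would come from visiting ACCES_CLIENT_TYPE once (cost 91239 per branch) instead of three times.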

Similar Messages

  • Rewrite sql to avoid filter operation

    Hi All,
    I found the SQL below, along with a few similar statements, causing high CPU usage.
    SELECT :B1 AS ID ,
           DECODE((SELECT 1
                   FROM DUAL
                   WHERE EXISTS (SELECT NULL
                                 FROM ONS
                                 WHERE PARENT_ID = :B1 )), 1, 1, 0) AS IP_RELATION ,
           DECODE((SELECT 1
                   FROM DUAL
                   WHERE EXISTS (SELECT NULL
                                 FROM ONS
                                 WHERE ULTIMATE_PARENT_GID = :B1 )), 1, 1, 0) AS UP_RELATION ,
           DECODE((SELECT 1
                   FROM DUAL
                   WHERE EXISTS (SELECT NULL FROM AFFILIATIONS WHERE AFFILIATED_ID= :B1 )), 1, 1, 0) AS AFF_RELATION ,
           DECODE((SELECT 1
                   FROM DUAL
                   WHERE EXISTS (SELECT NULL FROM JOINT_VENTURES WHERE JOINT_VENTURE_ID= :B1 )), 1, 1, 0) AS JV_RELATION ,
           DECODE((SELECT 1
                   FROM DUAL
                   WHERE EXISTS (SELECT NULL FROM SUCCESSORS WHERE SUCCESSOR_ID= :B1 )), 1, 1, 0) AS SUC_RELATION ,
           DECODE((SELECT 1
                   FROM DUAL
                   WHERE EXISTS (SELECT NULL FROM COUNTERPARTY WHERE CP_TAX_AUTHORITY_ID = :B1 )), 1, 1, 0) AS TAX_AUTH_RELATION ,
           DECODE((SELECT 1
                   FROM DUAL
                   WHERE EXISTS (SELECT NULL FROM COUNTERPARTY WHERE CP_PRIM_REGULATOR_ID = :B1 )), 1, 1, 0) AS PRIM_REG_RELATION ,
           DECODE((SELECT 1
                   FROM DUAL
                   WHERE EXISTS (SELECT NULL FROM ONS WHERE DUPLICATE_OF_ID = :B1 )), 1, 1, 0) AS DUP_RELATION ,
           DECODE((SELECT 1
                   FROM DUAL
                   WHERE EXISTS (SELECT NULL FROM ONS WHERE REG_AUTHORITY_ID = :B1 )), 1, 1, 0) AS REG_AUTH_RELATION
    FROM DUAL
    | Id  | Operation             | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT      |                                |       |       |     2 (100)|          |
    |*  1 |  FILTER               |                                |       |       |            |          |
    |   2 |   FAST DUAL           |                                |     1 |       |     2   (0)| 00:00:01 |
    |*  3 |   INDEX RANGE SCAN    | IDX_IMMEDIATE_PARENT_ID        |     1 |     3 |     2   (0)| 00:00:01 |
    |*  4 |  FILTER               |                                |       |       |            |          |
    |   5 |   FAST DUAL           |                                |     1 |       |     2   (0)| 00:00:01 |
    |*  6 |   INDEX RANGE SCAN    | IDX_ULTIMATE_PARENT_ID         |     2 |     4 |     2   (0)| 00:00:01 |
    |*  7 |  FILTER               |                                |       |       |            |          |
    |   8 |   FAST DUAL           |                                |     1 |       |     2   (0)| 00:00:01 |
    |*  9 |   INDEX FAST FULL SCAN| PK_ORG_AFFILIATED_WITH         |     1 |     7 |   294   (7)| 00:00:04 |
    |* 10 |  FILTER               |                                |       |       |            |          |
    |  11 |   FAST DUAL           |                                |     1 |       |     2   (0)| 00:00:01 |
    |* 12 |   INDEX FULL SCAN     | PK_ORG_JOINT_VENTURE_OF        |     1 |     7 |     3   (0)| 00:00:01 |
    |* 13 |  FILTER               |                                |       |       |            |          |
    |  14 |   FAST DUAL           |                                |     1 |       |     2   (0)| 00:00:01 |
    |* 15 |   INDEX FAST FULL SCAN| PK_ONS_SUCCEEDED_BY            |     1 |     7 |    79   (7)| 00:00:01 |
    |* 16 |  FILTER               |                                |       |       |            |          |
    |  17 |   FAST DUAL           |                                |     1 |       |     2   (0)| 00:00:01 |
    |* 18 |   INDEX RANGE SCAN    | IDX_ORG_CP_TAX_AUTHORITY_ID    |     2 |    14 |     2   (0)| 00:00:01 |
    |* 19 |  FILTER               |                                |       |       |            |          |
    |  20 |   FAST DUAL           |                                |     1 |       |     2   (0)| 00:00:01 |
    |* 21 |   INDEX RANGE SCAN    | IDX_ORGCP_PRIM_REGULATOR_ID    |     1 |     4 |     2   (0)| 00:00:01 |
    |* 22 |  FILTER               |                                |       |       |            |          |
    |  23 |   FAST DUAL           |                                |     1 |       |     2   (0)| 00:00:01 |
    |* 24 |   TABLE ACCESS FULL   | ONS                            |     1 |     2 | 27013   (4)| 00:05:25 |
    |* 25 |  FILTER               |                                |       |       |            |          |
    |  26 |   FAST DUAL           |                                |     1 |       |     2   (0)| 00:00:01 |
    |* 27 |   TABLE ACCESS FULL   | ONS                            |     1 |     2 |   475   (3)| 00:00:06 |
    |  28 |  FAST DUAL            |                                |     1 |       |     2   (0)| 00:00:01 |
    Peeked Binds (identified by position):
       2 - :B1 (NUMBER, Primary=1)
       3 - :B1 (NUMBER, Primary=1)
       4 - :B1 (NUMBER, Primary=1)
       5 - :B1 (NUMBER, Primary=1)
       6 - :B1 (NUMBER, Primary=1)
       7 - :B1 (NUMBER, Primary=1)
       8 - :B1 (NUMBER, Primary=1)
       9 - :B1 (NUMBER, Primary=1)
      10 - :B1 (NUMBER, Primary=1)
    Predicate Information (identified by operation id):
       1 - filter( IS NOT NULL)
       3 - access("IMMEDIATE_PARENT_ID"=:B1)
       4 - filter( IS NOT NULL)
       6 - access("ULTIMATE_PARENT_ID"=:B1)
       7 - filter( IS NOT NULL)
       9 - filter("AFFILIATED_ID"=:B1)
      10 - filter( IS NOT NULL)
      12 - access("JOINT_VENTURE_ID"=:B1)
           filter("JOINT_VENTURE_ID"=:B1)
      13 - filter( IS NOT NULL)
      15 - filter("SUCCESSOR_ID"=:B1)
      16 - filter( IS NOT NULL)
      18 - access("CP_TAX_AUTHORITY_ID"=:B1)
      19 - filter( IS NOT NULL)
      21 - access("CP_PRIM_REGULATOR_ID"=:B1)
      22 - filter( IS NOT NULL)
      24 - filter("DUPLICATE_OF_ID"=:B1)
      25 - filter( IS NOT NULL)
       27 - filter("REG_AUTHORITY_ID"=:B1)
    Oracle Version: 10.2.0.4, RAC, 2 nodes
    Is there any possibility to rewrite this sql to avoid filter operation.
    Please let me know if you need any more details....
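    Each DECODE((SELECT 1 FROM DUAL WHERE EXISTS ...)) wrapper can normally be collapsed into a searched CASE with an EXISTS condition, which drops one FILTER/FAST DUAL pair per flag. A sketch of the first two flags (untested; the remaining six follow the same pattern):

    ```sql
    SELECT :B1 AS id,
           CASE WHEN EXISTS (SELECT NULL FROM ons
                             WHERE  parent_id = :B1)
                THEN 1 ELSE 0 END AS ip_relation,
           CASE WHEN EXISTS (SELECT NULL FROM ons
                             WHERE  ultimate_parent_gid = :B1)
                THEN 1 ELSE 0 END AS up_relation
           -- ... six more flags, same pattern ...
    FROM   dual;
    ```

    The EXISTS probes themselves still run once each, though, so the expensive steps in the plan (the full scans of ONS for DUPLICATE_OF_ID and REG_AUTHORITY_ID) would likely benefit more from supporting indexes than from this rewrite.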

    My bad, I overlooked the execution plan.
    The execution plan below was extracted from the development database, which is an exact replica of the production database.
    | Id  | Operation                 | Name                           | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
    |*  1 |  FILTER                   |                                |      1 |        |      1 |00:00:00.72 |    8028 |   5986 |
    |   2 |   FAST DUAL               |                                |      1 |      1 |      1 |00:00:00.01 |       0 |      0 |
    |   3 |   PARTITION RANGE ALL     |                                |      1 |      1 |      1 |00:00:00.72 |    8028 |   5986 |
    |*  4 |    TABLE ACCESS FULL      | ONS                            |      1 |      1 |      1 |00:00:00.72 |    8028 |   5986 |
    |*  5 |  FILTER                   |                                |      1 |        |      1 |00:00:00.19 |       7 |      0 |
    |   6 |   FAST DUAL               |                                |      1 |      1 |      1 |00:00:00.01 |       0 |      0 |
    |   7 |   PX COORDINATOR          |                                |      1 |        |      1 |00:00:00.19 |       7 |      0 |
    |   8 |    PX SEND QC (RANDOM)    | :TQ10000                       |      0 |      1 |      0 |00:00:00.01 |       0 |      0 |
    |   9 |     PX PARTITION RANGE ALL|                                |      0 |      1 |      0 |00:00:00.01 |       0 |      0 |
    |* 10 |      INDEX RANGE SCAN     | IDX_ULTIMATE_PARENT_ID         |      0 |      1 |      0 |00:00:00.01 |       0 |      0 |
    |* 11 |  FILTER                   |                                |      1 |        |      0 |00:00:00.11 |    1231 |      0 |
    |  12 |   FAST DUAL               |                                |      0 |      1 |      0 |00:00:00.01 |       0 |      0 |
    |* 13 |   INDEX FAST FULL SCAN    | PK_ORG_AFFILIATED_WITH         |      1 |      1 |      0 |00:00:00.11 |    1231 |      0 |
    |* 14 |  FILTER                   |                                |      1 |        |      0 |00:00:00.01 |       7 |      0 |
    |  15 |   FAST DUAL               |                                |      0 |      1 |      0 |00:00:00.01 |       0 |      0 |
    |* 16 |   INDEX FAST FULL SCAN    | PK_ORG_JOINT_VENTURE_OF        |      1 |      1 |      0 |00:00:00.01 |       7 |      0 |
    |* 17 |  FILTER                   |                                |      1 |        |      0 |00:00:00.02 |     229 |      0 |
    |  18 |   FAST DUAL               |                                |      0 |      1 |      0 |00:00:00.01 |       0 |      0 |
    |* 19 |   INDEX FAST FULL SCAN    | PK_ONS_SUCCEEDED_BY            |      1 |      1 |      0 |00:00:00.02 |     229 |      0 |
    |* 20 |  FILTER                   |                                |      1 |        |      1 |00:00:00.01 |       3 |      0 |
    |  21 |   FAST DUAL               |                                |      1 |      1 |      1 |00:00:00.01 |       0 |      0 |
    |* 22 |   INDEX RANGE SCAN        | IDX_CP_TAX_AUTHORITY_ID        |      1 |      2 |      1 |00:00:00.01 |       3 |      0 |
    |* 23 |  FILTER                   |                                |      1 |        |      1 |00:00:00.01 |       3 |      0 |
    |  24 |   FAST DUAL               |                                |      1 |      1 |      1 |00:00:00.01 |       0 |      0 |
    |* 25 |   INDEX RANGE SCAN        | IDX_CP_PRIM_REGULATOR_ID       |      1 |      1 |      1 |00:00:00.01 |       3 |      0 |
    |* 26 |  FILTER                   |                                |      1 |        |      1 |00:00:02.20 |   28923 |  21562 |
    |  27 |   FAST DUAL               |                                |      1 |      1 |      1 |00:00:00.01 |       0 |      0 |
    |  28 |   PARTITION RANGE ALL     |                                |      1 |      1 |      1 |00:00:02.20 |   28923 |  21562 |
    |* 29 |    TABLE ACCESS FULL      | ONS                            |      1 |      1 |      1 |00:00:02.20 |   28923 |  21562 |
    |* 30 |  FILTER                   |                                |      1 |        |      1 |00:00:00.01 |       4 |      5 |
    |  31 |   FAST DUAL               |                                |      1 |      1 |      1 |00:00:00.01 |       0 |      0 |
    |  32 |   PARTITION RANGE ALL     |                                |      1 |      1 |      1 |00:00:00.01 |       4 |      5 |
    |* 33 |    TABLE ACCESS FULL      | ONS                            |      1 |      1 |      1 |00:00:00.01 |       4 |      5 |
    |  34 |  FAST DUAL                |                                |      1 |      1 |      1 |00:00:00.01 |       0 |      0 |
    Predicate Information (identified by operation id):
       1 - filter( IS NOT NULL)
       4 - filter("IMMEDIATE_PARENT_ID"=:B1)
       5 - filter( IS NOT NULL)
      10 - access("ULTIMATE_PARENT_ID"=:B1)
      11 - filter( IS NOT NULL)
      13 - filter("AFFILIATED_ID"=:B1)
      14 - filter( IS NOT NULL)
      16 - filter("JOINT_VENTURE_ID"=:B1)
      17 - filter( IS NOT NULL)
      19 - filter("SUCCESSOR_ID"=:B1)
      20 - filter( IS NOT NULL)
      22 - access("CP_TAX_AUTHORITY_ID"=:B1)
      23 - filter( IS NOT NULL)
      25 - access("CP_PRIM_REGULATOR_ID"=:B1)
      26 - filter( IS NOT NULL)
      29 - filter("DUPLICATE_OF_ID"=:B1)
      30 - filter( IS NOT NULL)
       33 - filter("REG_AUTHORITY_ID"=:B1)
    It took just 2.20 seconds, so why does it consume so much CPU?
    We are about to plug a new module into this database, which is why the ONS table is partitioned. It is partitioned on the PROVIDER column, which separates the existing and the new module into different partitions; that makes loading easier without affecting the existing module's data (during loads we also set the partition-local indexes to UNUSABLE). This table is also the parent of about 6 child tables, so we decided to partition the child tables as well, by adding the PROVIDER column to each of them and partitioning on it. The parent-child relationship is built on the ID column in all the tables.
    All the SQL statements will be altered to use the PROVIDER column for filtering old and new module data.
    Do you think this is the right approach? I would be thankful if you could help me with the precise design of this table.
    As a side thought, and one I would have to investigate: since you have declared a number of indexes with "case insensitive sorting", is it possible that you could work around this idea, drop a few of the existing indexes on "lower(column)", and use case-insensitive indexes for these comparisons?
    I will test it in the development database, but what is the predicted performance improvement? And please let me know why you suspect that "lower(column)" should be avoided in favour of case-insensitive indexes.
    Anyway, we are implementing a Text index on this table and dropping all the unwanted indexes.
    I've written a short note on my blog about the "exists subquery" and the varying cost of the tablescan lines.
    I am a regular reader of your blog; after seeing your test case I understood the concept crystal clear. Thanks a lot....

  • Sql query with multiple joins to same table

    I have to write a query for a client to display business officers' names and titles along with the business name.
    The table looks like this:
    AcctNumber
    OfficerTitle
    OfficerName
    RecKey
    90% of the businesses have exactly 4 officer records, although some have fewer and some have more.
    There is a separate table that has the AcctNumber, BusinessName, and about 30 other fields that I don’t need.
    An individual account can have 30 or 40 records on the other table.
    The client wants to display 1 record per account.
    Initially I wrote a query to join the table to itself:
    Select A.OfficerTtitle, A.OfficerName, B.OfficerTitle, B.OfficerName, C.OfficerTtitle, C.OfficerName, D.OfficerTitle, D.OfficerName where A.AcctNumber = B.AcctNumber and A.AcctNumber = C.AcctNumber and A.AcctNumber = D.AcctNumber
    This returned tons of duplicate rows for each account ( number of records * number of records, I think)
    So added
    And A.RecKey > B.RecKey and B.RecKey > C.RecKey and C.RecKey > D.RecKey
    This works when there are exactly 4 records per account. If there are less than 4 records on the account it skips the account and if there are more than 4 records, it returns multiple rows.
    But when I try to join this to the other table to get the business name, I get a row for every record on the other table.
    I tried SELECT DISTINCT on the other table, but the query runs forever and never returns anything.
    I tried outer joins and subqueries, but no luck so far. I was thinking maybe a subquery with EXISTS, because I don't know how many records there are on an account, but I don't know how to structure that.
    Any suggestions would be appreciated

    Welcome to the forum!
    user13319842 wrote:
    I have to write a query for a client to display business officers' names and title along with the business name
    The table looks like this
    AcctNumber
    OfficerTitle
    OfficerName
    RecKey
    90% of the businesses have exactly 4 officer records, although some have less and some have more.
    There is a separate table that has the AcctNumber, BusinessName about 30 other fields that I don’t need
    An individual account can have 30 or 40 records on the other table.
    The client wants to display 1 record per account.
    As someone has already mentioned, you should post CREATE TABLE and INSERT statements for both tables (relevant columns only). You don't have to post a lot of sample data. For example, you need to pick 1 out of 30 or 40 rows (max) for the same account, but it's almost certainly enough if you post only 3 or 4 rows (max) for an account.
    Also, post the results you want from the sample data that you post, and explain how you get those results from that data.
    Always say which version of Oracle you're using. This sounds like a PIVOT problem, and a new SELECT .... PIVOT feature was introduced in Oracle 11.1. If you're using Oracle 11, you don't want to have to learn the old way to do pivots. On the other hand, if you have Oracle 10, a solution that uses a new feature that you don't have won't help you.
    Whenever you have a question, please post CREATE TABLE and INSERT statements for some sample data, the results you want from that data, an explanation, and your Oracle version.
    Initially I wrote a query to join the table to itself:
    Select A.OfficerTtitle, A.OfficerName, B.OfficerTitle, B.OfficerName, C.OfficerTtitle, C.OfficerName, D.OfficerTitle, D.OfficerName where A.AcctNumber = B.AcctNumber and A.AcctNumber = C.AcctNumber and A.AcctNumber = D.AcctNumber
    Be careful, and post the exact code that you're running. The statement above can't be what you ran, because it doesn't have a FROM clause.
    This returned tons of duplicate rows for each account ( number of records * number of records, I think)
    So added
    And A.RecKey > B.RecKey and B.RecKey > C.RecKey and C.RecKey > D.RecKey
    This works when there are exactly 4 records per account. If there are less than 4 records on the account it skips the account and if there are more than 4 records, it returns multiple rows.
    But when I try to l join this to the other table to get the business name, I get a row for every record on the other table
    I tried select distinct on the other table and the query runs for ever and never returns anything
    I tried outer joins and subqueries, but no luck so far. I was thinking maybe a subquery - if exists - because I don't know how many records there are on an account, but don't know how to structure that
    Any suggestions would be appreciated
    Displaying 1 column from n rows as n columns on 1 row is called Pivoting. See the following link for several ways to do pivots:
    SQL and PL/SQL FAQ
    Pivoting requires that you know exactly how many columns will be in the result set. If that number depends on the data in the table, then you might prefer to use String Aggregation , where the output consists of a huge string column, that contains the concatenation of the data from n rows. This big string can be formatted so that it looks like multiple columns. For different string aggregation techniques, see:
    http://www.oracle-base.com/articles/10g/StringAggregationTechniques.php
    The following thread discusses some options for pivoting a variable number of columns:
    Re: Report count and sum from many rows into many columns
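    As a concrete illustration of the PIVOT approach described above, here is a hypothetical sketch for Oracle 11.1+. The table and column names (officers, business) are stand-ins, since the original post never names the tables, and RecKey is assumed to give a stable ordering within an account:

    ```sql
    SELECT *
    FROM  (SELECT o.acctnumber,
                  b.businessname,
                  o.officertitle,
                  o.officername,
                  -- number the officers 1..n within each account
                  ROW_NUMBER() OVER (PARTITION BY o.acctnumber
                                     ORDER BY o.reckey) AS seq
           FROM   officers o
           JOIN   business b ON b.acctnumber = o.acctnumber)
    PIVOT (MAX(officertitle) AS title, MAX(officername) AS name
           FOR seq IN (1 AS officer1, 2 AS officer2, 3 AS officer3, 4 AS officer4));
    ```

    Accounts with fewer than 4 officers get NULLs in the trailing columns; accounts with more than 4 silently lose the extras, which matches the "1 record per account" requirement.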

  • SQL challenge: avoid this self-join!!!

    Here's something of a challenging SQL problem. I'm trying to persist an arbitrary number of attributes for an object. I am trying to do this in a regular relational table both for performance and to make future upgrades easier.
    The problem is that I don't know what SQL cleverness I can use to only scan the ATTR table once.
    Does Oracle (or for that matter the SQL standard) have some way to help me? Here's a simplified example:
    Consider a table ATTR with columns OID, ATTR_ID, ATTR_VAL. Unique key is OID, ATTR_ID. Assume any other indexes that you want, but be aware that ATTR_VAL is modestly dynamic.
    I can easily look for a OID for any one ATTR_ID, ATTR_VAL pair:
    SELECT oid FROM attr
    WHERE attr_id = 1 AND attr_val = :b1
    I can also easily do this looking at multiple attributes when I only need one condition to be met with an OR, as:
    SELECT DISTINCT oid FROM attr
    WHERE (attr_id = 1 AND attr_val = :b1)
    OR (attr_id = 31 AND attr_val = :b2)
    But how to handle the condition where I want to have the two ATTR_ID, ATTR_VAL pairs "and-ed" together? I know that I can do this:
    SELECT oid FROM
    (SELECT oid FROM attr WHERE attr_id = 1 AND attr_val = :b1)
    UNION
    (SELECT oid FROM attr WHERE attr_id = 31 AND attr_val = :b2)
    But this will necessitate looking at ATTR twice. This is maybe okay if there are only two conditions, but what about when there might be 10 or even 50? At some point this technique becomes unacceptable.
    Clearly:
    SELECT DISTINCT oid FROM attr
    WHERE (attr_id = 1 AND attr_val = :b1)
    AND (attr_id = 31 AND attr_val = :b2)
    won't work (each row has but one ATTR_ID).
    The following will end up doing the same basic thing as the UNION (it avoids a sort so is preferable):
    SELECT oid FROM attr a1, attr a2
    WHERE a1.oid = a2.oid
    AND (a1.attr_id = 1 AND a1.attr_val = :b1)
    AND (a2.attr_id = 31 AND a2.attr_val = :b2)
    but the fundamental problem of scanning ATTR twice remains.
    What cleverness can I apply here to only scan ATTR once?
    Thanks,
    :-Phil
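    One common single-scan rewrite, offered here as a sketch rather than taken from the thread: since (OID, ATTR_ID) is a unique key, each pair can match at most one row per OID, so you can OR the pairs together and demand that all of them matched:

    ```sql
    SELECT oid
    FROM   attr
    WHERE  (attr_id = 1  AND attr_val = :b1)
    OR     (attr_id = 31 AND attr_val = :b2)
    GROUP  BY oid
    HAVING COUNT(*) = 2;  -- 2 = number of AND-ed (attr_id, attr_val) pairs
    ```

    This scales to 10 or 50 pairs with a single pass over ATTR; only the OR'ed predicate list and the HAVING constant grow.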

    Another way of building a dynamic in-list from a single string is shown on AskTom at this link http://asktom.oracle.com/pls/ask/f?p=4950:8:2019864::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:210612357425,%7Bvarying%7D%20and%20%7Belements%7D%20and%20%7Bin%7D%20and%20%7Bin%7D%20and%20%7Blist%7D
    A modified version for two columns:
    Create or replace type in_list as object (col1 varchar2(20), col2 varchar2(30));
    Create or replace type in_list_tab as table of in_list;
    Create or replace function fn_in_list (p_string in varchar2) return in_list_tab
    as
        l_string long default p_string || ',';
        l_data   in_list_tab := in_list_tab();
        pos      number;
    begin
        pos := 0;
        loop
            exit when l_string is null;
            pos := instr(l_string, ',');
            l_data.extend;
            l_data(l_data.count) := in_list('', '');
            l_data(l_data.count).col1 := ltrim(rtrim(substr(l_string, 1, pos - 1)));
            l_string := substr(l_string, pos + 1);
            if l_string is null then
                l_data.trim;
                exit;
            end if;
            pos := instr(l_string, ',');
            l_data(l_data.count).col2 := ltrim(rtrim(substr(l_string, 1, pos - 1)));
            l_string := substr(l_string, pos + 1);
        end loop;
        return l_data;
    end;
    create table testII (cola varchar2(10), colb varchar2(30));
    insert into testII values ('abc',1);
    insert into testII values ('abc',2);
    insert into testII values ('def',1);
    insert into testII values ('def',2);
    commit;
    var b1 varchar2(200);
    exec :b1:='abc,1,def,2';
    select * from testII where (cola,colb) in
    (select col1, col2 from THE ( select cast(fn_in_list(:b1) as in_list_tab) from dual));
    To handle cases like
    attr_id = 41 and attr_val > :b3
    I would say dynamic SQL.

  • How to avoid multiple DataConnections with LCD ES2

    Hi, we have just started using data connections to connect LiveCycle Designer to a database. It seems we are missing something important regarding manipulating data from the PDF.
    At first we tried to apply an INSERT command to our first table and then browse through the items in the database. We are able to browse through the database only before we have inserted something.
    If we insert an item and then try to browse through the database (Next, Previous, Last or First), it crashes with an error like the following:
    (Next, Previous, Last or First) failed. Multiple-step operation generated errors. check each status value [ID:@11]
    So then we decided to create a second data connection containing each of the columns in the database except the ID, which appears to make everything crash.
    E.G.:  Table_1
    DataConnection1 --> ID, Field1, Field2, Field3, Field4          SELECT Command connection
    DataConnection2 --> Field1, Field2, Field3, Field4               INSERT Command connection
    It seems we can't have a SELECT and an INSERT in the same DataConnection, because with those 2 separate connections it works fine....
    Then we tried to show multiple rows in a table linked to the ID selected in DataConnection1. Showing the data works fine using a 3rd connection for Table_2, and we make sure a blank row is always present at the end of the table so a new entry can be added to the database via an Add button that INSERTs into Table_2. Unfortunately we do not have a 2nd connection to that table, because we cannot link those fields with the database while keeping the multiple-entry view.
    We tried to create a 2nd connection for the INSERT, but it doesn't work at all.
    We are basing ourselves on the sample provided by Stefan Cameron in his blog http://forms.stefcameron.com/2006/12/18/databases-inserting-updating-and-deleting-records/
    I am wondering if we are using the right functionality and if this is the simplest way to work with databases...
    If anyone can help, it would be greatly appreciated!!
    Thanks in advance!
    Mag

    Don't forget to activate the RESOURCE_LIMIT parameter, whose default is FALSE:
    alter system set RESOURCE_LIMIT = true;
    Laurent, I had a similar problem some time ago: I didn't want to prevent multiple accesses, only to control who was doing what. That's because, moving from client/server to the web, the TERMINAL column in V$SESSION becomes useless.
    I tried your solution, but I had to give up on it, because in my Forms9i application some forms call Reports, which generates a new session.
    I decided to use DBMS_APPLICATION_INFO, and this is satisfactory for my requirements, but I'm interested to discover other solutions.
    P.S. With my solution I am able to limit accesses, because in the CLIENT_INFO string I put, among other things, the application user, so I can check whether a user is already connected. The problem is that existing applications have to be modified .....:-(

  • Multiple Scan 20" on a Mac mini ?

    I want to make an inexpensive change of computer.
    Is it possible to connect a Multiple Scan 20" to a Mac mini? (The Multiple Scan has a DB-15 connector.)
    Thanks.
    Claude

    Yes, with a DB-15 to VGA adaptor it should be possible.
    Mac mini G4 1,42/512/80, PowerBook G4 12" 1,5/1,25/80 + 23" ACD   Mac OS X (10.4.7)   as well as iMac G4 17" 1,25/2/320, PM G3 DT 500/576/20 + 17" Syncmaster 710T

  • In a SQL query with joins, how to reduce multiple instances of a table

    In a SQL query that has joins, how can I reduce multiple instances of a table?
    Here is an example. I am using Oracle 9i.
    Is there a way to reduce the number of PERSON instances in the following query, or can I optimize it further?
    TABLES:
    mail_table
    mail_id, from_person_id, to_person_id, cc_person_id, subject, body
    person_table
    person_id, name, email
    QUERY:
    SELECT p_from.name from, p_to.name to, p_cc.name cc, subject
    FROM mail, person p_from, person p_to, person p_cc
    WHERE from_person_id = p_from.person_id
    AND to_person_id = p_to.person_id
    AND cc_person_id = p_cc.person_id
    Thanks in advance,
    Babu.

    SQL> select * from mail;
            ID          F          T         CC
             1          1          2          3
    SQL> select * from person;
           PID NAME
             1 a
             2 b
             3 c
    --Query with only one instance of the PERSON table
    SQL> select m.id,max(decode(m.f,p.pid,p.name)) frm_name,
      2         max(decode(m.t,p.pid,p.name)) to_name,
      3         max(decode(m.cc,p.pid,p.name)) cc_name
      4  from mail m,person p
      5  where m.f = p.pid
      6  or m.t = p.pid
      7  or m.cc = p.pid
      8  group by m.id;
            ID FRM_NAME   TO_NAME    CC_NAME
             1 a          b          c
    --Explain plan for the "One instance" query
    SQL> explain plan for
      2  select m.id,max(decode(m.f,p.pid,p.name)) frm_name,
      3         max(decode(m.t,p.pid,p.name)) to_name,
      4         max(decode(m.cc,p.pid,p.name)) cc_name
      5  from mail m,person p
      6  where m.f = p.pid
      7  or m.t = p.pid
      8  or m.cc = p.pid
      9  group by m.id;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 902563036
    | Id  | Operation           | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT    |        |     3 |   216 |     7  (15)| 00:00:01 |
    |   1 |  HASH GROUP BY      |        |     3 |   216 |     7  (15)| 00:00:01 |
    |   2 |   NESTED LOOPS      |        |     3 |   216 |     6   (0)| 00:00:01 |
    |   3 |    TABLE ACCESS FULL| MAIL   |     1 |    52 |     3   (0)| 00:00:01 |
    |*  4 |    TABLE ACCESS FULL| PERSON |     3 |    60 |     3   (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
       4 - filter("M"."F"="P"."PID" OR "M"."T"="P"."PID" OR
                  "M"."CC"="P"."PID")
    Note
       - dynamic sampling used for this statement
    --Explain plan for "Normal" query
    SQL> explain plan for
      2  select m.id,pf.name fname,pt.name tname,pcc.name ccname
      3  from mail m,person pf,person pt,person pcc
      4  where m.f = pf.pid
      5  and m.t = pt.pid
      6  and m.cc = pcc.pid;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 4145845855
    | Id  | Operation            | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |        |     1 |   112 |    14  (15)| 00:00:01 |
    |*  1 |  HASH JOIN           |        |     1 |   112 |    14  (15)| 00:00:01 |
    |*  2 |   HASH JOIN          |        |     1 |    92 |    10  (10)| 00:00:01 |
    |*  3 |    HASH JOIN         |        |     1 |    72 |     7  (15)| 00:00:01 |
    |   4 |     TABLE ACCESS FULL| MAIL   |     1 |    52 |     3   (0)| 00:00:01 |
    |   5 |     TABLE ACCESS FULL| PERSON |     3 |    60 |     3   (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    |   6 |    TABLE ACCESS FULL | PERSON |     3 |    60 |     3   (0)| 00:00:01 |
    |   7 |   TABLE ACCESS FULL  | PERSON |     3 |    60 |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("M"."CC"="PCC"."PID")
       2 - access("M"."T"="PT"."PID")
       3 - access("M"."F"="PF"."PID")
    PLAN_TABLE_OUTPUT
    Note
       - dynamic sampling used for this statement
    25 rows selected.
    Message was edited by:
            jeneesh
    No indexes created...

  • Apple Multiple Scan 1705 - usable with modern video cards?

    I still have an old Apple Multiple Scan 1705 display which I've been using continuously since ~1995. Even though I don't have a Mac anymore (I know, I know, mea culpa), I hooked it up to my PC and it has worked great. But, I recently received a newer hand-me-down PC from my sibling. It didn't come with a monitor, but I thought that would be OK as I could just switch the monitor from my old system to the new system.
    As it turns out, it's not that easy. When I boot it up, the monitor goes blank after POST, usually between the XP screen and the welcome screen. Sometimes I can actually get as far as seeing the desktop, but the screen usually goes blank before I can navigate anywhere. It works fine in safe mode, though. From what I've read on the Web, this would appear to be an "infinite loop" problem, where the OS and the video card stop speaking to each other.
    I may be on a fool's errand here, trying to combine a PC video card (an NVIDIA GeForce 7600 GS) with an Apple monitor which is ten years older than it. I've done some basic troubleshooting, such as reinstalling the video card drivers and visually inspecting the hardware and connections. But I wonder if it is even possible for the two to communicate properly. For the record, I'm running Windows XP SP3. Has anyone else been foolish enough to try this?
    My girlfriend hates the bulkiness of this old CRT monitor and wants to get a flat panel anyway, but while I save up for that, it would be nice to be able to use the new computer with the old monitor. Even if it won't work, I've gotten 13 years of use out of the monitor, so I certainly got my money's worth. But I thought I'd check with the community and see if there's any life left in the old display.
    Thanks for your help!

    In Windows XP, in order to change the resolution for the monitor, you have to right-click on the desktop and select "Properties". This brings up "Display Properties". From there, you click the "Settings" tab. This gives you some basic options. For a wider selection of resolutions and refresh rates, click on the "Advanced" button. Then select the "Adapter" tab and press the "List All Modes" button. This lists all the available configurations for resolution, refresh rate, and colors. I fiddled around with these, trying all of the configurations that were supposed to work according to the Apple Multiple Scan 1705 specs. Unfortunately, anything other than VGA 640x480 60Hz would cause a "blue screen of death".
    My girlfriend had indicated that she wanted to get a flat panel display anyway (to get back some of that desk real estate), so I decided to cut my losses and buy a new monitor. Unfortunately, I now have the exact same problem with the new monitor. So, it seems the problem is not with the monitor after all. Or, at the least, that's not the only problem. Once I resolve what other hardware/software issue is going on, I'll see if the 1705 display will work. But for practical purposes, you can consider this question resolved.
    Thanks for entertaining my obscure question.

  • Add multiple scans to the same PDF file at the time of scanning

    Hello. I have an HP Officejet Pro 6830. How do I get it to add multiple scans to the same PDF file? I need to scan multiple documents and have them all end up in one PDF. Some may be double-sided, and it is fine if I have to scan them individually. At present it will only do one scan or a double-sided scan, then it wants to save the scan and doesn't ask if there are any more pages. The only option is to save or not. Thank you.

    After your first scan you need to click the + sign at the 7 o'clock position. 

  • Out of Frequency Problem with Apple Multiple Scan 720 Display and Mac mini

    If a resolution with a horizontal and/or vertical frequency that is not compatible with your display (in this case an Apple Multiple Scan 720 Display) is selected in the Display preference pane of a Mac OS X 10.4.10 system running on a Mac mini (Early '06), you will receive an "out of frequency" message on your screen, and the power management feature will shut down and restart your display repeatedly. The following solution should restore your display to proper function. It may apply to other displays and Macintoshes running Mac OS X, but it has only been tested specifically on this setup.
    1. Shut down the Mac mini with the power button (press and hold for 5 seconds)
    2. Start the computer up with the Mac mini Mac OS X Install Disc 1 (press the power button, insert the disc, hold the C key down until the progress bar starts up)
    3. Click on the language, then select Utilities > Terminal...
    4. Type "cd /" and return
    5. Type "cd Volumes" and return
    6. Type "cd your hard drive name" and return (you need to enclose the hard drive name in double quotes if more than one word)
    7. Type "cd Library" and return
    8. Type "cd Preferences" and return
    9. Type "rm com.apple.windowserver.plist" and return
    10. Type "exit" and return
    11. Quit Terminal
    12. Select Utilities > Startup Disc...
    13. Select the OS on the hard drive and click the Restart button, holding down the Shift key until the progress bar starts.
    14. Log in to the account that had the display problem in Safe Boot mode
    15. In System Preferences select Display, select a resolution with the appropriate frequency, 60 Hz should be fine, then close System Preferences
    16. Restart the computer, log in to the account normally, and the display should be functioning properly.
    DO NOT SELECT AN INAPPROPRIATE RESOLUTION/FREQUENCY FOR YOUR DISPLAY IN THE DISPLAY PREFERENCE PANE! BE AFRAID, BE VERY AFRAID!
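    For reference, the Terminal portion of the steps above (4 through 10) can be collapsed into a single command. This is only a sketch: "Macintosh HD" is an assumed volume name, so substitute the name of your own startup disk.

```shell
# Steps 4-10 in one shot ("Macintosh HD" is an assumed volume name).
# Quoting the path handles volume names that contain spaces.
VOLUME="/Volumes/Macintosh HD"
PLIST="$VOLUME/Library/Preferences/com.apple.windowserver.plist"
if [ -f "$PLIST" ]; then
    rm "$PLIST"
fi
```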
    Mac mini (Early '06)   Mac OS X (10.4.10)   Apple Multiple Scan 720 Display

    Just a correction and an elaboration:
    It should be Startup Disk... with a k, not a c.
    Both the resolution and the frequency must be appropriate for your display to work properly. Check your documentation or the manual information available on your display for the appropriate resolutions/frequencies. Evidently the Apple Multiple Scan 720 Display is considered so old that no allowance was made for it in the Display preference pane of Mac OS X 10.4.10.
    Mac mini (Early '06)   Mac OS X (10.4.10)  

  • How to store multiple Excel files into a table using SQL*Loader

    Please guide me on how to store multiple Excel files into a table using SQL*Loader.

    Your question isn't clear to me. Do you want to load multiple Excel files into a table? If so, you can follow this link.
    http://www.orafaq.com/wiki/SQL*Loader_FAQ#Can_one_load_data_from_multiple_files.2F_into_multiple_tables_at_once.3F
    Thanks,
    Karthick.

  • How to avoid multiple duplicate entries in the address book?

    How do I avoid multiple duplicate entries in the address book? I can add the same contact name and number more than twice and the phone doesn't warn me at all! It's quite a hassle for me.

    Not possible from inside Address Book, AFAIK, but you can do the following: in Finder, open the folder /Users/username/Library/Application Support/AddressBook/Metadata. Switch to list view and sort by date modified. Quick Look the vCards at the top to see which ones they are.
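If you prefer Terminal, the same check can be sketched as below. The path follows the answer above and may differ by OS X version, so treat it as an assumption.

```shell
# List the most recently modified Address Book metadata files first,
# mirroring the "sort by date modified" step in the answer above.
# Assumed path -- adjust if your OS X version lays it out differently.
AB_META="$HOME/Library/Application Support/AddressBook/Metadata"
if [ -d "$AB_META" ]; then
    ls -t "$AB_META" | head -20
fi
```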

  • How to avoid multiple call to function:

    In our data warehouse we have a huge receipt row table where all metrics are stored in the local currency. On top of that we have views which convert the metrics to the desired currency.
    So basically all the views look like this:
    select geo_region,
    product_group,
    customer_group,
    metric1 * (select get_exchange_rate(currency_id) from dual) metric1,
    metric2 * (select get_exchange_rate(currency_id) from dual) metric2,
    metricx * (select get_exchange_rate(currency_id) from dual) metricx
    group by..
    As we have about 20 metrics, we noticed that the function is called 20 times per row.
    Is there really any way to avoid that? It's just the exact same call with the same in-parameters over and over again.
    We've tried with a local sys_context and the performance is better, but the call to the context is still performed 20 times. Any ideas?

    Can you avoid multiple function calls? Maybe, if as in your example all the function calls compute the same result. If they operate on different values then you'll have to perform each call anyway.
    Either way, you should be able to eliminate the (near as I can tell) pointless subquery from dual.
    You might be able to avoid the repeated function calls if the values are always the same. You could save the function calls (and subqueries!) by doing the computation once in the query and then using assignments after the initial query, perhaps using NULL in the query as a placeholder to select into a record - something like
    select geo_region,
             product_group,
             customer_group,
             metric1 * get_exchange_rate(currency_id) metric1,
             null metric2,
    ...
    v_metric2 := metric1;
    Message was edited by (fixed typo):
    riedelme
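Another way to sketch the rewrite, reusing the names from the question (untested against the original schema, and assuming all metrics share the same currency_id per row): push the function call into an inline view so it is computed once per row, then reference the result for every metric. The NO_MERGE hint is there to discourage the optimizer from merging the view back and reintroducing the repeated calls; receipt_row is an assumed table name.

```sql
-- Hypothetical sketch: receipt_row and get_exchange_rate are the names
-- implied by the question. The rate is computed once per row in the
-- inline view and reused for every metric in the outer query.
SELECT geo_region,
       product_group,
       customer_group,
       SUM(metric1 * rate) AS metric1,
       SUM(metric2 * rate) AS metric2,
       SUM(metricx * rate) AS metricx
FROM  (SELECT /*+ NO_MERGE */
              r.geo_region, r.product_group, r.customer_group,
              r.metric1, r.metric2, r.metricx,
              get_exchange_rate(r.currency_id) AS rate
       FROM   receipt_row r)
GROUP  BY geo_region, product_group, customer_group;
```

If get_exchange_rate really is deterministic for a given currency_id, declaring it DETERMINISTIC may also let Oracle cache results across calls.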

  • SQL*Loader with multiple files

    Gurus,
    I searched the documentation and this forum and haven't found a solution to my issue yet...
    I am not an expert in SQL*Loader. I have used SQL*Loader to copy from one file to a table many times, but I have not copied multiple files into one table, especially with different names.
    More specifically....
    I need to load data from multiple files into a table. But the file names will be different each time. A file will be created every hour. The file name will consist of the root file name appended by a time stamp. For example, a file created on 10/07/2010 at 2:15 P.M. would be filea100720101415.txt, while a file created on 10/08/2010 at 8:15 A.M. would be filea100820100815.txt. All the files will be in one directory. How can I load the data from the files using SQL*Loader?
    My database: Oracle 10g Release 2
    Operating System: Windows 2003 Server
    Please assist.
    Robert

    Too bad this isn't in *nix, where you get a powerful shell scripting capability.
    That said, here is the core of the solution .... you will also need a way to identify files that have been processed vs. new ones. Maybe rename them, maybe move them. But with this sample you can see the basics. From there it is really an issue of DOS scripting, which would better be found by googling around a bit.
    cd c:\loadfiles
    FOR %%f IN (*.txt) DO SQLLDR CONTROL=sample.ctl LOG=sample.log BAD=baz.bad DATA=%%f
    (In a batch file the loop variable must be a single letter like %%f; use %f instead if you type it directly at the prompt.)
    Try googling "dos scripting language". You'll find lots of tutorials and ideas on "advanced" (well, as advanced as DOS gets) techniques to solve your problem.
    Edited by: EdStevens on Dec 1, 2010 5:03 PM

  • SQL Loader - Load multiple files in UNIX

    Hi all, I'm looking for a bit of help with using SQL*Loader to load multiple files into one table. I've had a look on the forums but I'm still struggling with this one.
    What I want to do is basically upload everything that's in /home/ib. I know you can use INFILE for several files in the control file, but I have several hundred files to upload, so this isn't practical. Can I pass the directory name as an INFILE parameter?
    Any help would be appreciated.

    On Unix you shouldn't worry about that. See this example :
    [ora102 work db102]$ cat test11.dat
    aaaaa,bbbbb
    ccccc,ddddd
    eeeee,fffff
    [ora102 work db102]$ cat test12.dat
    ggggg,hhhhh
    jjjjj,kkkkk
    lllll,mmmmm
    [ora102 work db102]$ cat test13.dat
    nnnnn,ooooo
    ppppp,qqqqq
    rrrrr,sssss
    [ora102 work db102]$ cat load.sh
    CTL=load.ctl
    echo "load data" > $CTL
    for DAT in test1*.dat
    do
            echo "INFILE "$DAT >> $CTL
    done
    echo "replace"                  >> $CTL
    echo "INTO TABLE test1"         >> $CTL
    echo "fields terminated by ','" >> $CTL
    echo "trailing nullcols"        >> $CTL
    echo "( a, b )"                 >> $CTL
    sqlldr test/test control=load.ctl
    [ora102 work db102]$ ./load.sh
    SQL*Loader: Release 10.2.0.1.0 - Production on Mon Oct 2 11:45:44 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Commit point reached - logical record count 3
    Commit point reached - logical record count 6
    Commit point reached - logical record count 9
    [ora102 work db102]$ sqlplus test/test
    SQL*Plus: Release 10.2.0.1.0 - Production on Mon Oct 2 11:45:49 2006
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    TEST@db102 SQL> select * from test1 order by a,b;
    A                    B
    aaaaa                bbbbb
    ccccc                ddddd
    eeeee                fffff
    ggggg                hhhhh
    jjjjj                kkkkk
    lllll                mmmmm
    nnnnn                ooooo
    ppppp                qqqqq
    rrrrr                sssss
    9 rows selected.
    TEST@db102 SQL>
