Slow split table export (R3load and WHERE clause)

For our split table exports, we used custom-coded WHERE clauses, basically adding additional columns to the R3ta default column to take advantage of existing indexes.
The results have been good so far. Full tablescans have been eliminated and export times have gone down; in some cases, table export times have improved by 50%.
However, our biggest table, CE1OC01 (120 GB), continues to be a bottleneck. Initially, after switching to the new WHERE clause, the performance gains looked dramatic, with export times for the first 5 packages dropping from 25-30 hours down to 1 1/2 hours.
However, the remaining CE1OC01 split packages have shown no improvement after 2 hours of running. This is very odd, and we are trying to determine why part of the table exports very fast while other parts run very slowly.
Before the custom WHERE clauses, the export server had run into issues with SORTHEAP being exhausted, so we thought that might be the culprit. But that does not seem to be an issue now, since the improved WHERE clauses have reduced or eliminated excessive sorting.
I checked the access path of all the CE1OC01 packages through EXPLAIN, and they all access the same index to return results. EXPLAIN also reports similar execution times for each of the packages:
CE1OC01-11: select * from CE1OC01  WHERE MANDT='212'
AND ("BELNR" > '0124727994') AND ("BELNR" <= '0131810250')
CE1OC01-19: select * from CE1OC01 WHERE MANDT='212'
AND ("BELNR" > '0181387534') AND ("BELNR" <= '0188469413')
      0 SELECT STATEMENT ( Estimated Costs =  8.448E+06 [timerons] )
  |
  ---      1 RETURN
      |
      ---      2 FETCH CE1OC01
          |
          ------   3 IXSCAN CE1OC01~4 #key columns:  2
query execution time [millisec]            |       333
uow elapsed time [microsec]                |   429,907
total user CPU time [microsec]             |         0
total system cpu time [microsec]           |         0
Both queries utilize an index that has fields MANDT and BELNR. However, during R3load, CE1OC01-19 finishes in an hour and a half, whereas CE1OC01-11 can take 25-30 hours.
I am wondering if there is anything else to check on the DB2 access path side of things, or if I need to start digging deeper into other aggregate load/infrastructure issues. Other tables don't seem to exhibit this behavior. There is some discrepancy between other tables' run times (for example, 2-4 hours), but nothing as dramatic as with this particular table.
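For reference, one way to capture a plan like the one above (assuming the DB2 explain tables exist; this isn't necessarily how we generated ours, and the range values are just those of package 11):
-- Populate the explain tables for one split query, then format the plan
-- from the command line, e.g. with: db2exfmt -d <dbname> -1
EXPLAIN PLAN FOR
SELECT * FROM CE1OC01
 WHERE MANDT = '212'
   AND ("BELNR" > '0124727994') AND ("BELNR" <= '0131810250');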
Another idea to test is to export only 5 parts of the table at a time; perhaps there is a throughput or logical limitation when all 20 of the exports are running at the same time. Or create a single-column index on BELNR (the default R3ta column) and see if that shows any improvement (see the sketch below).
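As a rough sketch, that test index would look something like the following (the index name is made up for illustration; in a real SAP system the index would of course be created through the data dictionary rather than with native SQL):
-- Hypothetical single-column test index on the default R3ta split column
CREATE INDEX "CE1OC01~Z9" ON CE1OC01 ("BELNR");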
Does anyone have any ideas on why some of the table moves fast but the rest of it moves so slowly?
We also notice that the "fast" parts of the table are at the very end of the table. We are wondering if the index is less fragmented in that range; a REORG or recreation of the index may do this table some good. We were hoping to squeeze as many improvements out of our export process as possible before running a full REORG on the database. This particular index (there are 5 indexes on this table) has a Cluster Ratio of 54%, so, for purposes of the export, it may make sense to REORG the table and cluster it around this particular index. By contrast, the primary key index has a Cluster Ratio of 86%.
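For reference, the cluster ratios can be read straight from the catalog, and a clustering REORG on the export index would look roughly like the following (schema name omitted; the REORG itself would be run from the DB2 command line and is shown only as a comment):
-- Cluster ratio of every index on CE1OC01
SELECT INDNAME, CLUSTERRATIO, CLUSTERFACTOR
  FROM SYSCAT.INDEXES
 WHERE TABNAME = 'CE1OC01';
-- A clustering REORG on the export index would be along the lines of:
--   REORG TABLE <schema>.CE1OC01 INDEX <schema>."CE1OC01~4"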
Here is the output from our current run. The "slow" parts of the table have not completed, but they average a throughput of 0.18 MB/min, versus the "fast" parts, which average 5 MB/min, a pretty dramatic difference.
package     time      start date        end date          size MB  MB/min
CE1OC01-16  10:20:37  2008-11-25 20:47  2008-11-26 07:08   417.62    0.67
CE1OC01-18   1:26:58  2008-11-25 20:47  2008-11-25 22:14   429.41    4.94
CE1OC01-17   1:26:04  2008-11-25 20:47  2008-11-25 22:13   416.38    4.84
CE1OC01-19   1:24:46  2008-11-25 20:47  2008-11-25 22:12   437.98    5.17
CE1OC01-20   1:20:51  2008-11-25 20:48  2008-11-25 22:09   435.87    5.39
CE1OC01-1    0:00:00  2008-11-25 20:48                       0.00
CE1OC01-10   0:00:00  2008-11-25 20:48                     152.25
CE1OC01-11   0:00:00  2008-11-25 20:48                     143.55
CE1OC01-12   0:00:00  2008-11-25 20:48                     145.11
CE1OC01-13   0:00:00  2008-11-25 20:48                     146.92
CE1OC01-14   0:00:00  2008-11-25 20:48                     140.00
CE1OC01-15   0:00:00  2008-11-25 20:48                     145.52
CE1OC01-2    0:00:00  2008-11-25 20:48                     184.33
CE1OC01-3    0:00:00  2008-11-25 20:48                     183.34
CE1OC01-4    0:00:00  2008-11-25 20:48                     158.62
CE1OC01-5    0:00:00  2008-11-25 20:48                     157.09
CE1OC01-6    0:00:00  2008-11-25 20:48                     150.41
CE1OC01-7    0:00:00  2008-11-25 20:48                     175.29
CE1OC01-8    0:00:00  2008-11-25 20:48                     150.55
CE1OC01-9    0:00:00  2008-11-25 20:48                     154.84

Hi all, thanks for the quick and extremely helpful answers.
Beck,
Thanks for the health check. We are exporting the entire table in parallel, so all the exports begin at the same time. Regarding SORTHEAP, we initially thought that might be our problem, because we had been running out of SORTHEAP on the source database server. For this run and the previous run, however, SORTHEAP has remained available and has not been exhausted. That's what was so confusing, because the symptoms looked like a buffer overrun.
Ralph,
The WHERE technique you provided worked perfectly. Our export times have improved dramatically by switching to the forced full tablescan. Having always been trained to eliminate full tablescans, we found this counterintuitive at first, but given the nature of the export query, combined with the unsorted export, it now makes total sense why the tablescan works so much better.
It looks like you were right: in this case the index adds too much overhead, especially since our Cluster Ratio was terrible (in the 50% range), so the index was definitely working against us by bouncing all over the table to pull the data out.
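The general idea of steering DB2 away from the index can be illustrated roughly like this (illustration only, not necessarily the exact clause we ended up using): wrapping the indexed columns in a no-op expression means they can no longer serve as index start/stop keys, so the optimizer falls back to a table scan for each split package.
-- Hypothetical illustration of defeating index matching on MANDT/BELNR
SELECT * FROM CE1OC01
 WHERE MANDT || '' = '212'
   AND ("BELNR" || '' > '0124727994') AND ("BELNR" || '' <= '0131810250')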
We're going to look at some of our other long running tables and see if this technique improves runtimes on them as well.
Thanks so much, that helped us out tremendously. We will verify the data from source to target matches up 1 for 1 by running a consistency check.
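As a first sanity check before the full consistency check, a simple per-range row count run with the same predicate on source and target already catches gross mismatches, e.g.:
SELECT COUNT(*) FROM CE1OC01
 WHERE MANDT = '212'
   AND ("BELNR" > '0124727994') AND ("BELNR" <= '0131810250')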
Look at the throughput difference between the previous run and the current run:
package     time       start date        end date          size MB  MB/min
CE1OC01-11   40:14:47  2008-11-20 19:43  2008-11-22 11:58   437.27    0.18
CE1OC01-14   39:59:51  2008-11-20 19:43  2008-11-22 11:43   427.60    0.18
CE1OC01-12   39:58:37  2008-11-20 19:43  2008-11-22 11:42   430.66    0.18
CE1OC01-13   39:51:27  2008-11-20 19:43  2008-11-22 11:35   421.09    0.18
CE1OC01-15   39:49:50  2008-11-20 19:43  2008-11-22 11:33   426.54    0.18
CE1OC01-10   39:33:57  2008-11-20 19:43  2008-11-22 11:17   429.44    0.18
CE1OC01-8    39:27:58  2008-11-20 19:43  2008-11-22 11:11   417.62    0.18
CE1OC01-6    39:02:18  2008-11-20 19:43  2008-11-22 10:45   416.35    0.18
CE1OC01-5    38:53:09  2008-11-20 19:43  2008-11-22 10:36   413.29    0.18
CE1OC01-4    38:52:34  2008-11-20 19:43  2008-11-22 10:36   424.06    0.18
CE1OC01-9    38:48:09  2008-11-20 19:43  2008-11-22 10:31   416.89    0.18
CE1OC01-3    38:21:51  2008-11-20 19:43  2008-11-22 10:05   428.16    0.19
CE1OC01-2    36:02:27  2008-11-20 19:43  2008-11-22 07:46   409.05    0.19
CE1OC01-7    33:35:42  2008-11-20 19:43  2008-11-22 05:19   414.24    0.21
CE1OC01-16    9:33:14  2008-11-20 19:43  2008-11-21 05:16   417.62    0.73
CE1OC01-17    1:20:01  2008-11-20 19:43  2008-11-20 21:03   416.38    5.20
CE1OC01-18    1:19:29  2008-11-20 19:43  2008-11-20 21:03   429.41    5.40
CE1OC01-19    1:16:13  2008-11-20 19:44  2008-11-20 21:00   437.98    5.75
CE1OC01-20    1:14:06  2008-11-20 19:49  2008-11-20 21:03   435.87    5.88
PLPO          0:52:14  2008-11-20 19:43  2008-11-20 20:35    92.70    1.77
BCST_SR       0:05:12  2008-11-20 19:43  2008-11-20 19:48    29.39    5.65
CE1OC01-1     0:00:00  2008-11-20 19:43                       0.00
            558:13:06  2008-11-20 19:43  2008-11-22 11:58  8171.62
package     time      start date        end date          size MB   MB/min
CE1OC01-9    9:11:58  2008-12-01 20:14  2008-12-02 05:26   1172.12    2.12
CE1OC01-5    9:11:48  2008-12-01 20:14  2008-12-02 05:25   1174.64    2.13
CE1OC01-4    9:11:32  2008-12-01 20:14  2008-12-02 05:25   1174.51    2.13
CE1OC01-8    9:09:24  2008-12-01 20:14  2008-12-02 05:23   1172.49    2.13
CE1OC01-1    9:05:55  2008-12-01 20:14  2008-12-02 05:20   1188.43    2.18
CE1OC01-2    9:00:47  2008-12-01 20:14  2008-12-02 05:14   1184.52    2.19
CE1OC01-7    8:54:06  2008-12-01 20:14  2008-12-02 05:08   1173.23    2.20
CE1OC01-3    8:52:22  2008-12-01 20:14  2008-12-02 05:06   1179.91    2.22
CE1OC01-10   8:45:09  2008-12-01 20:14  2008-12-02 04:59   1171.90    2.23
CE1OC01-6    8:28:10  2008-12-01 20:14  2008-12-02 04:42   1172.46    2.31
PLPO         0:25:16  2008-12-01 20:14  2008-12-01 20:39     92.70    3.67
            90:16:27  2008-12-01 20:14  2008-12-02 05:26  11856.91

Similar Messages

  • Error while using REMAP_TABLE and WHERE clause  together in IMPDP

    I am trying to move some records from a very large table to another small table.
    I am facing trouble while using REMAP_TABLE and WHERE clause together in IMPDP.
    The problem is that the data filter is not getting applied and all records are getting imported.
    Here is how I have simulated this; please advise.
    CREATE TABLE TSHARRHB.TMP1
    (
      A  NUMBER,
      B  NUMBER
    );
    BEGIN
      INSERT INTO TSHARRHB.TMP1 (A, B) VALUES (1, 1);
      INSERT INTO TSHARRHB.TMP1 (A, B) VALUES (2, 2);
      COMMIT;
    END;
    /
    expdp system/password TABLES=tsharrhb.TMP1 DIRECTORY=GRDP_EXP_DIR DUMPFILE=TMP1.dmp REUSE_DUMPFILES=YES LOGFILE=EXP.log PARALLEL=8
    impdp system/password DIRECTORY=GRDP_EXP_DIR DUMPFILE=TMP1.dmp LOGFILE=imp.log PARALLEL=8 QUERY='TSHARRHB.TMP1:"WHERE TMP1.A = 2"'  REMAP_TABLE=TSHARRHB.TMP1:TMP3 CONTENT=DATA_ONLY
    Import: Release 11.2.0.1.0 - Production on Fri Dec 13 05:13:30 2013
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, Automatic Storage Management, OLAP, Data Mining
    and Real Application Testing options
    Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
    Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@GRD6.RBSG DIRECTORY=GRDP_EXP_DIR DUMPFILE=TMP1.dmp LOGFILE=SSD_93_TABLES_FULL_EXP.log PARALLEL=8 QUERY=TSHARRHB.TMP1:"WHERE TMP1.A = 2" REMAP_TABLE=TSHARRHB.TMP1:TMP3 CONTENT=DATA_ONLY
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    . . imported "TSHARRHB"."TMP3"                           5.421 KB       2 rows
    Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at 05:13:33
    Here I am expecting only 1 record to get imported, but both records are getting imported. Please advise.

    The strange thing compared to your output is that I get an error when I have the table prefix in the query block:
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
    Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/******** DUMPFILE=TMP1.dmp LOGFILE=imp.log PARALLEL=8 QUERY=SYSADM.TMP1:"WHERE TMP1.A = 2" REMAP_TABLE=SYSADM.TMP1:TMP3 CONTENT=DATA_ONLY
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    ORA-31693: Table data object "SYSADM"."TMP3" failed to load/unload and is being skipped due to error:
    ORA-38500: Unsupported operation: Oracle XML DB not present
    Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at Fri Dec 13 10:39:11 2013 elapsed 0 00:00:03
    And if I remove it, it works:
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
    Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/******** DUMPFILE=TMP1.dmp LOGFILE=imp.log PARALLEL=8 QUERY=SYSADM.TMP1:"WHERE A = 2" REMAP_TABLE=SYSADM.TMP1:TMP3 CONTENT=DATA_ONLY
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    . . imported "SYSADM"."TMP3"                             5.406 KB       1 out of 2 rows
    Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at Fri Dec 13 10:36:50 2013 elapsed 0 00:00:01
    Nicolas.
    PS: as you can see, I'm on 11.2.0.4, I do not have 11.2.0.1 that you seem to use.
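    Based on Nicolas's working run above, a possible (untested) adaptation of the original command is to drop the table prefix inside QUERY while keeping REMAP_TABLE, e.g.:
    impdp system/password DIRECTORY=GRDP_EXP_DIR DUMPFILE=TMP1.dmp LOGFILE=imp.log PARALLEL=8 QUERY='TSHARRHB.TMP1:"WHERE A = 2"' REMAP_TABLE=TSHARRHB.TMP1:TMP3 CONTENT=DATA_ONLY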

  • Difference of 'Specify Fragmentation Content'  and 'where clause Filter ' ?

    What is the difference between 'Specify Fragmentation Content' and 'Where clause Filter'?
    As per my understanding, both look like they limit the data?

    'Specify Fragmentation Content' is for Union-ing tables, e.g. one table has data for 2008 while the other has 2005-2007, so 'Specify Fragmentation Content' tells the server where to go look for the data.
    'Where clause Filter' is for limiting data from the table, e.g. where status = 'Funded'

  • Qualifying Expression and WHERE CLAUSE Extension

    I would like to know the exact difference between 'Qualifying Expression' and 'WHERE CLAUSE Extension'. Since both are meant to contain some CONDITION that would facilitate the Edit Check's success, I am sometimes a bit confused about these two as to when to use which.
    Can someone help Please??

    As you can tell from my previous posts (requests!) - I'm a newbie to OC.
    From the documentation - it appears that both Qualifying expression and Where Clause work the same way but the way they execute is different.
    Qualifying expression is applied after the fetch (key fields and question response data from DCM cursor i.e., after the cursor fetches the data) and Where clause filters before QG fetch.
    HTH

  • What is Split valuation? why and where it will be useful?

    Dear Frndz,
    Kindly explain: what is split valuation, and why and where is it useful?
    Regards,
    SRini

    Hi,
    Split Valuation
    Use
    For certain materials, it is necessary to valuate the various stocks in a particular valuation area separately. Reasons for this include:
    Different origins of the material
    Different grades of quality for the material
    Different statuses for the material
    Differentiation between in-house production and external procurement
    Differentiation between different deliveries
    Features
    If a material is subject to split valuation, the material is managed as several partial stocks; each partial stock is valuated separately.
    Each transaction that is relevant for valuation, be it a goods receipt, goods issue, invoice receipt or physical inventory, is carried out at the level of the partial stock. When you process one of these transactions, you must always specify which partial stock is involved. This means that only the partial stock in question is affected by a change in value; the other partial stocks remain unaffected.
    Alongside the partial stocks, the total stock is also updated. The calculation of the value of the total stock results from the total of the stock values and stock quantities of the partial stocks.
    You define whether the material is subject to split valuation on the accounting view of the material master record. There are two fields for this:
    The valuation category specifies which criterion should be used as the basis for differentiating between the various partial stocks.
    The valuation type specifies an individual characteristic of a partial stock.
    Prerequisites
    The valuation category is defined in the master record of a material. It determines whether the material is subject to split valuation. The specified material type must also be maintained in the material master record.
    Activities
    To specify the valuation type of a material for which valuation types have been defined in the material master record, proceed as follows:
    Branch to the purchase order item detail screen.
    Enter a predefined valuation type in the field Val. type.
    Save the purchase order.
    For more details , pls go through the following link :
    [Split Valuation  |http://help.sap.com/erp2005_ehp_04/helpdata/EN/8a/d1de34e4cb2300e10000009b38f83b/frameset.htm]
    Hope this helps.
    Regards,
    Tejas
    Edited by: Tejas  Pujara on Dec 19, 2008 7:22 AM

  • Can't Export data if WHERE clause contains AND/OR

    I am able to export the results of a query if the WHERE clause only has one condition. But if there is an AND or an OR, you can right-click and choose Export Data, but nothing happens.
    For example, the following will Export just fine:
    SELECT * FROM DUAL
    WHERE ROWNUM = 1;
    But throw in an 'AND', and it won't Export:
    SELECT * FROM DUAL
    WHERE ROWNUM = 1 AND ROWNUM < 2;
    I am running Ver 1.5.3 and haven't applied any patches.

    Unfortunately, as part of trying to fix other issues with the export functionality, 1.5.3 introduced problems where certain types of SQL statements wouldn't export (either nothing happened as you are seeing or reporting error messages like ORA-936). While it is not yet perfect, 1.5.5 handles exporting results much better (it copes with your case that fails in 1.5.3), so I would suggest you upgrade to 1.5.5.
    theFurryOne

  • Export (expdp) with where clause

    Hello Gurus,
    I am trying to export with a WHERE clause, and I am getting the error below.
    Here is my export command.
    expdp "'/ as sysdba'" tables = USER1.TABLE1 directory=DATA_PUMP dumpfile=TABLE1.dmp logfile=TABLE1.log query= “USER1.TABLE1:where auditdate>'01-JAN-10'”
    Here is the error:
    [keeth]DB1 /oracle/data_15/db1> DATA_PUMP dumpfile=TABLE1.dmp logfile=TABLE1.log query= USER1.TABLE1:where auditdate>'01-JAN-10'                    <
    Export: Release 11.2.0.3.0 - Production on Tue Mar 26 03:03:26 2013
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "SYS"."SYS_EXPORT_TABLE_03":  "/******** AS SYSDBA" tables=USER1.TABLE1 directory=DATA_PUMP dumpfile=TABLE1.dmp logfile=TABLE1.log query= USER1.TABLE1:where auditdate
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 386 MB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/TRIGGER
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    ORA-31693: Table data object "USER1"."TABLE1" failed to load/unload and is being skipped due to error:
    ORA-00933: SQL command not properly ended
    Master table "SYS"."SYS_EXPORT_TABLE_03" successfully loaded/unloaded
    Dump file set for SYS.SYS_EXPORT_TABLE_03 is:
      /oracle/data_15/db1/TABLE1.dmp
    Job "SYS"."SYS_EXPORT_TABLE_03" completed with 1 error(s) at 03:03:58
    Version:
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production

    Hello,
    You should use a parameter file. Another question: I can see you are using 11g, so why don't you use Data Pump?
    Data Pump is faster and has more features and enhancements than regular imp and exp.
    You can do the following:
    sqlplus / as sysdba
    CREATE DIRECTORY DPUMP_DIR3 AS 'type here your os path that you want to export to';
    Then touch a file:
    touch par.txt
    In this file, type the following:
    tables=schema.table_name
    dumpfile=yourdump.dmp
    DIRECTORY=DPUMP_DIR3
    logfile=Your_logfile.log
    QUERY=abs.texp:"where hiredate>'01-JAN-13'"
    Then do the following:
    expdp username/password parfile='par.txt'
    If you will import from Oracle 11g into version 10g, then you have to add the parameter "version=10" to the parameter file above.
    BR
    Mohamed ELAzab
    http://mohamedelazab.blogspot.com/

  • Using Filter and where clause in GoldenGate

    Hi,
    I need to use a WHERE clause in the extract process.
    The condition I need to use is: where (CODE LIKE '10%' OR CODE LIKE '0%')
    How can I use the LIKE operation along with OR in the extract?
    I have multiple WHERE conditions; the straightforward ones (= and <>) are working fine, but LIKE is not working.
    Please assist with the same.
    Thanks.

    GoldenGate uses FILTER and SQLPREDICATE (and COMPUTE is a variation that can work, depending on how you are trying to manipulate the data).
    To filter data, you can use:
    ● A FILTER or WHERE clause in a TABLE statement (Extract) or in a MAP statement (Replicat).
    ● A SQL query or procedure
    ● User exits
    FILTER comparison operators include:
    > (greater than)
    >= (greater than or equal)
    < (less than)
    <= (less than or equal)
    = (equal)
    <> (not equal)
    WHERE clause permissible operators:
    Column names: PRODUCT_AMT
    Numeric values: -123, 5500.123
    Literal strings: "AUTO", "Ca"
    Built-in column tests: @NULL, @PRESENT, @ABSENT (column is null, present or absent in the row). These tests are built into Oracle GoldenGate. See "Considerations for selecting rows with FILTER and WHERE" on page 155.
    Comparison operators: =, <>, >, <, >=, <=
    Conjunctive operators: AND, OR
    Grouping parentheses: use open and close parentheses ( ) for logical grouping of multiple elements.
    You could try using a GoldenGate string function. You know what the leading one or two characters (0 and 10) are.
    Use the @STREXT function to extract a portion of a string and do a comparison there. Or take care of it using SQLEXEC on replicat (call a function to be able to use LIKE).
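    As an untested sketch (table and schema names are placeholders), the Extract TABLE statement for this could look like:
    TABLE myschema.mytable, FILTER (@STRCMP (@STREXT (CODE, 1, 2), "10") = 0 OR @STRCMP (@STREXT (CODE, 1, 1), "0") = 0);
    That is, take the leading one or two characters with @STREXT and compare them to the prefixes you would otherwise match with LIKE.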

  • Oracle stored functions and where clauses

    Hello everybody,
    I need to call an Oracle stored function and at the same time do a select on the result of the query, it works well except when I use a where clause. I am connecting to Oracle 8.1.7 through OLEDB using the Oracle provider for
    OLEDB distributed with this Oracle version.
    Here is the query I am doing:
    select * from {call MC.SEC.QryTermbases(?, ?) where ID = ?}
    If I remove the where clause it works well but I need to use the where. I know that I could pass the parameter to the procedure instead of doing that in the select but there are places where I can not do that since the SQL
    query is generated dynamically. The code above is just a demo.
    Thanks very much for your help,
    José.

    Thanks for answering, it is actually possible to do a select on the return of a function, I have tested it with other than "select *" and it has worked well. What has not worked for me is using a "where" clause. That is "select" without "where" has worked but not with the "where".
    I also suspect that it does not work but similar queries work well in MSSQL 2000 and Interbase 6.0 so I thought may be there was a way to do that with Oracle. That is in MSSQL I can treat the result of a function as a normal table and I can do the same thing with a stored procedure that returns a recordset in Interbase.
    Thanks again for answering,
    Jose.
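    For what it's worth, the Oracle-native pattern is to wrap the function call in TABLE() and filter on that, which only works if QryTermbases returns a SQL collection type; on 8.1.7 a CAST to the collection type may also be required (termbase_tab_type below is just a placeholder for the actual type):
    select * from TABLE(CAST(MC.SEC.QryTermbases(?, ?) AS termbase_tab_type)) where ID = ?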

  • Avoid repeating same logic in 'select' and 'where' clauses?

    I'll preface by saying I'm self-taught and have only been fiddling with SQL for a couple of months, so forgive me if this is a dumb question. I have a query written to pull out customers who are configured to have their products stored at the wrong warehouse, according to the first 3 digits of the zip code. Here is an extremely simplified version of a query I'm trying to run:
    select custno, custbuy_zip_cd, custbuy_prim_ship_loc_cd as Warehouse,
    case when substr(custbuy_zip_cd,1,3) in ('839','848') then '20'
    when substr(custbuy_zip_cd,1,3) in ('590','591') then '33'
    end as StdWhse
    from customers
    where case when substr(custbuy_zip_cd,1,3) in ('839','848') then '20'
    when substr(custbuy_zip_cd,1,3) in ('590','591') then '33'
    end <> custbuy_prim_ship_loc_cd
    or (case when substr(custbuy_zip_cd,1,3) in ('839','848') then '20'
    when substr(custbuy_zip_cd,1,3) in ('590','591') then '33'
    end is not null and custbuy_prim_ship_loc_cd is null)
    Now, the query works, but it seems overly convoluted and feels like there must be a way to make it simpler and faster. I'm using the same 'case when' 3 times. Originally, I had thought I could use the aliases from the 'select' clause in the 'where' clause, which would simplify things:
    select custno, custbuy_prim_ship_loc_cd as Warehouse,
    case when substr(custbuy_zip_cd,1,3) in ('839','848') then '20'
    when substr(custbuy_zip_cd,1,3) in ('590','591') then '33'
    end as StdWhse
    from customers
    where StdWhse <> custbuy_prim_ship_loc_cd
    or (StdWhse is not null and custbuy_prim_ship_loc_cd is null)
    I then found out that that caused 'invalid identifier' errors. My first attempt at a solution was to use a subquery in the 'from' clause, but that ran the 'case when' on every single customer instead of the small subset, so it wound up taking much longer even though it looked neater. Any tips on how to clean up that first query to make it run faster?
    this is Oracle 11i, I believe. As a side note, I don't have write access to the database.

    Thanks for all the tips so far - still going through them. You all respond fast! Sorry about using double angle brackets for != and not using code tags, I'll make sure to format my posts properly going forward. I think the double angle brackets messed up the appearance of my original queries a little. Here's how I probably should have pasted my first query in my first post:
    select custno, custbuy_zip_cd, custbuy_prim_ship_loc_cd as Warehouse,
        case when substr(custbuy_zip_cd,1,3) in ('839','848') then '20'
            when substr(custbuy_zip_cd,1,3) in ('590','591') then '33'
        end as StdWhse
    from customers
    where case when substr(custbuy_zip_cd,1,3) in ('839','848') then '20'
            when substr(custbuy_zip_cd,1,3) in ('590','591') then '33'
        end != custbuy_prim_ship_loc_cd
        or (case when substr(custbuy_zip_cd,1,3) in ('839','848') then '20'
            when substr(custbuy_zip_cd,1,3) in ('590','591') then '33'
        end is not null and custbuy_prim_ship_loc_cd is null)
    The almost unanimous opinion seems to be that I should use a subquery in one way or another, but the problem remains that the only significant logic to narrow down the results is the logic that matches the 'case when' results (which are what the warehouse number should be, based on the zip code) to the current warehouse number. Therefore, it seems like any subquery is going to return my entire list of 600k customers, and take a much longer time than my original (messy) query. At least it has in the test runs I created based on
    Satyaki's and Peter's examples. The query based on my original example takes about 2.5 minutes, and the subquery examples take about 5+ even though they look cleaner.
    to clarify what the query is trying to accomplish, I want it to pull any records where the warehouse number does not equal the correct warehouse number based on zip code (or if the warehouse number is null when it shouldn't be).
    I'll try to create some sample data and sample results. Customers table:
    custno   custbuy_zip_cd  custbuy_prim_ship_loc_cd
    1        59024           20
    2        59024           33
    desired results:
    custno   custbuy_zip_cd   warehouse   StdWhse
    1        59024            20          33
    If I could create a table to hold the standard warehouses to join on, the whole thing would be much easier. The full version of the query really has hundreds of zip code prefixes and 5 different warehouses and each account has 4 alternate warehouses as well. However, I'm stuck with read only access so everything has to go right in the query. It wouldn't be the end of the world to just stick with my original query since it's not like it takes hours, and I'll only be running it weekly. I just wanted to make sure there wasn't some other solution that wasn't just cleaner but was also faster.
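    One more (untested) variation that keeps the CASE in a single place is to pre-filter on the known zip prefixes inside an inline view and compare outside it; column names are taken from the post, and inside the view StdWhse can never be null, so the outer test simplifies:
    select custno, custbuy_zip_cd, Warehouse, StdWhse
    from (
        select custno, custbuy_zip_cd, custbuy_prim_ship_loc_cd as Warehouse,
            case when substr(custbuy_zip_cd,1,3) in ('839','848') then '20'
                 when substr(custbuy_zip_cd,1,3) in ('590','591') then '33'
            end as StdWhse
        from customers
        where substr(custbuy_zip_cd,1,3) in ('839','848','590','591')
    )
    where StdWhse != Warehouse
       or Warehouse is null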

  • OVD database adapter and WHERE clause

    Hi all,
    We're using OVD 11g, and have a database adapter defined against a table in an Oracle schema. The adapter correctly maps columns to LDAP attributes and creates a virtual directory.
    However the table contains users we don't want to appear in the directory. We have no control over the data in the table hence we cannot remove the unwanted users from the source.
    Is there any way we can specify a WHERE clause in the database adapter which limits the users pulled out of the table and created in the virtual directory? Something like WHERE organisation = 'Company A'.
    Thanks
    Alan

    You can specify LDAP filters in Routing Include/Exclude in the Adapter configuration, which will eventually translate into a WHERE clause for the database adapter.
    For example, if you want to exclude users from organization A, all you have to do is add an LDAP filter for that organization in Routing Exclude.
    The same is the case for Routing Include.
    Hope this helps,
    Saggu

  • ADF- Iterator and Where Clause

    Hi all:
    How can I apply a WHERE clause on the iterator?
    Thanks in advance.

    Hi,
    you want two iterators that point to the same VO: one in find mode, so you can enter query data, and one that is linked to the table to show the result set.
    The manual covers this: http://download-uk.oracle.com/docs/html/B25947_01/web_search_bc.htm#CIHEIJBI
    Frank

  • Tuning Select Statement . field sequence and where clause

    Hi All
    Are there any general guidelines on how to write SELECT <field sequence> WHERE <field sequence>? Should the fields be in the order of the field sequence in the tables?
    And how does this apply when we have a view or an inner join? Is that separate from a normal SELECT statement that is using FOR ALL ENTRIES?
    Please let me know of any general guidelines available on this.
    Amol

    Hello Amol,
    I have another hint:
    The statement FOR ALL ENTRIES will package the select statements for every five entries in the internal table. So in comparison to the following code sequence...
    LOOP AT itab.
       SELECT * FROM table WHERE key = itab-key.
    ENDLOOP.
    the number of select statements is reduced to 20% with
    SELECT * FROM table INTO TABLE ...
         FOR ALL ENTRIES IN itab
         WHERE key = itab-key
    If I'm expecting a huge amount of data, I go a step further and create my own packages by building a range table with around 100-500 entries and execute a select there...
    DATA: counter TYPE i.
    LOOP AT itab.
       APPEND itab-key TO range_tab.   " just code example
       ADD 1 TO counter.
       IF counter >= 500.
          SELECT * FROM table APPENDING TABLE ...
             WHERE key IN range_tab.
          CLEAR: range_tab[], counter.
       ENDIF.
    ENDLOOP.
    * Don't forget the last select statement after the loop
    Best wishes,
    Florin

  • Table function sensitive to where clause?

    Hi-
    In Oracle SQL, you can use the results of a PL/SQL function as a table with the "TABLE()" syntax. Example: "SELECT * FROM TABLE(myfunction(param1,param2)) ..."
    Is there any (non-crazy) way for the function to be aware of the conditions in the WHERE clause of that SELECT statement? For example, if I wanted "myfunction" to know that I had specified "WHERE param3=10" without having to put param3 in the function call, could this be done?
    Other SQL implementations support this. I know of at least one where you can map a table on top of a function where the "in" parameters can correspond to columns on the mapped table. Does Oracle support a similar syntax or strategy?

    Not sure if it is too crazy for you ;)
    But again I rely on a helper function since I am not sure about the purpose of the whole thing:
    SQL> create or replace function set_param (p varchar2) return varchar2
    as
    begin
      dbms_application_info.set_client_info(p);
    return p;
    end set_param;
    Function created.
    SQL> create or replace function myfunction
       return sys.dbms_debug_vc2coll
    as
    begin
       return sys.dbms_debug_vc2coll (sys_context ('userenv', 'client_info'));
    end myfunction;
    Function created.
    SQL> select *
           from table (myfunction())
          where set_param (3) is not null;
    COLUMN_VALUE
    3
    Hope you get the idea ....
