Fine-tuning a query that uses the /*+ USE_CONCAT */ hint

Some of the queries in our application use the /*+ USE_CONCAT */ hint and take a very long time to execute in the Oracle database. When we fire the query below against our production database of 250 million records, it takes approximately 3 minutes to execute.
Because of this we are facing a performance issue in the application.
Below is the sample query:
SELECT /*+ USE_CONCAT */ * FROM DI_MATCH_KEY WHERE NORM_COUNTRY_CD = 'US'
AND ((( NORM_CONAME_KEY1 ='WILM I' OR NORM_CONAME_KEY2 = 'WILM I' OR NORM_CONAME_KEY23 = 'WILM I'
OR NORM_CONAME_KEYFIRST ='WILLIAM' ) AND NORM_STATE_PROVINCE = 'CA' ) OR NORM_ADDR_KEY2 = 'CALMN 12 3 OSAI')
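For reference, the USE_CONCAT hint asks the optimizer to expand the ORs into separate index-driven branches (OR expansion), roughly equivalent to the hand-written UNION ALL below. This is only an illustrative sketch of what the transformation does, not a suggested replacement; the LNNVL() call mimics the duplicate elimination the optimizer adds between branches.
<pre>
SELECT * FROM DI_MATCH_KEY
 WHERE NORM_COUNTRY_CD = 'US'
   AND NORM_ADDR_KEY2  = 'CALMN 12 3 OSAI'
UNION ALL
SELECT * FROM DI_MATCH_KEY
 WHERE NORM_COUNTRY_CD     = 'US'
   AND NORM_STATE_PROVINCE = 'CA'
   AND ( NORM_CONAME_KEY1     = 'WILM I'
      OR NORM_CONAME_KEY2     = 'WILM I'
      OR NORM_CONAME_KEY23    = 'WILM I'
      OR NORM_CONAME_KEYFIRST = 'WILLIAM' )
   -- LNNVL() filters out the rows already returned by the first branch
   AND LNNVL(NORM_ADDR_KEY2 = 'CALMN 12 3 OSAI');
</pre>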
Regarding indexes: indexes have already been created on the table for almost all of the column combinations used in the above SQL.
Your suggestions will be appreciated.
Thanks,
Regards,
Krishna Kumar

Hi,
Thanks for your valuable inputs.
As suggested, please find the explain plan and trace file details below.
TRACE FILE:
<pre>
TKPROF: Release 10.2.0.1.0 - Production on Thu Feb 14 11:11:59 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Trace file: implhr01_ora_26457.trc
Sort options: prsela exeela fchela
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
SELECT /*+ USE_CONCAT */ COUNT(*) FROM DI_MATCH_KEY WHERE NORM_COUNTRY_CD = 'US'
AND ((( NORM_CONAME_KEY1 ='WILM I' OR NORM_CONAME_KEY2 = 'WILM I' OR NORM_CONAME_KEY23 = 'WILM I'
OR NORM_CONAME_KEYFIRST ='WILLIAM' ) AND NORM_STATE_PROVINCE = 'CA' ) OR NORM_ADDR_KEY2 = 'CALMN 12 3 OSAI')
call        count       cpu    elapsed       disk      query    current       rows
Parse           1      0.03       0.05          0          0          0          0
Execute         1      0.00       0.00          0          0          0          0
Fetch           2      3.97      65.43      30633      64053          0          1
total           4      4.00      65.49      30633      64053          0          1
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 76
Rows     Row Source Operation
      1  SORT AGGREGATE (cr=64053 pr=30633 pw=0 time=65436584 us)
  73914   CONCATENATION (cr=64053 pr=30633 pw=0 time=61623673 us)
      0    TABLE ACCESS BY INDEX ROWID DI_MATCH_KEY (cr=4 pr=2 pw=0 time=32617 us)
      0     INDEX RANGE SCAN MKADDR_KEY2 (cr=4 pr=2 pw=0 time=32599 us)(object id 122583)
     75    TABLE ACCESS BY INDEX ROWID DI_MATCH_KEY (cr=80 pr=20 pw=0 time=1369427 us)
     75     INDEX RANGE SCAN MK_CY_KEY1_ST_PROV (cr=5 pr=0 pw=0 time=2119 us)(object id 122666)
      2    TABLE ACCESS BY INDEX ROWID DI_MATCH_KEY (cr=81 pr=2 pw=0 time=26723 us)
     77     INDEX RANGE SCAN MK_CY_KEY2_ST_PROV (cr=4 pr=0 pw=0 time=3641 us)(object id 122667)
      2    TABLE ACCESS BY INDEX ROWID DI_MATCH_KEY (cr=6 pr=0 pw=0 time=47 us)
      2     INDEX RANGE SCAN MK_CY_KEY1_AGN (cr=4 pr=0 pw=0 time=28 us)(object id 122670)
  73835    TABLE ACCESS BY INDEX ROWID DI_MATCH_KEY (cr=63882 pr=30609 pw=0 time=61503773 us)
  73905     INDEX RANGE SCAN MK_CY_KEY2_AGN (cr=266 pr=176 pw=0 time=148198 us)(object id 122745)
alter session set sql_trace true
call        count       cpu    elapsed       disk      query    current       rows
Parse           0      0.00       0.00          0          0          0          0
Execute         1      0.00       0.00          0          0          0          0
Fetch           0      0.00       0.00          0          0          0          0
total           1      0.00       0.00          0          0          0          0
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 76
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call        count       cpu    elapsed       disk      query    current       rows
Parse           1      0.03       0.05          0          0          0          0
Execute         2      0.00       0.00          0          0          0          0
Fetch           2      3.97      65.43      30633      64053          0          1
total           5      4.00      65.50      30633      64053          0          1
Misses in library cache during parse: 1
Misses in library cache during execute: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call        count       cpu    elapsed       disk      query    current       rows
Parse           9      0.05       0.03          0          0          0          0
Execute        23      0.05       0.08          0          0          0          0
Fetch          98      0.01       0.01          0        137          0         76
total         130      0.11       0.13          0        137          0         76
Misses in library cache during parse: 9
Misses in library cache during execute: 9
2 user SQL statements in session.
23 internal SQL statements in session.
25 SQL statements in session.
Trace file: implhr01_ora_26457.trc
Trace file compatibility: 10.01.00
Sort options: prsela exeela fchela
1 session in tracefile.
2 user SQL statements in trace file.
23 internal SQL statements in trace file.
25 SQL statements in trace file.
11 unique SQL statements in trace file.
561 lines in trace file.
91 elapsed seconds in trace file.
</pre>
EXPLAIN PLAN:
<pre>
ID   OPERATION          OBJECT_NAME           CARDINALITY    BYTES    COST    CPU_COST
 0   SELECT STATEMENT                                   1       53    1886    15753172
 1   SORT                                               1       53
 2   CONCATENATION
 3   TABLE ACCESS       DI_MATCH_KEY                    8      424      10       78334
 4   INDEX              MKADDR_KEY2                     8                4       30086
 5   TABLE ACCESS       DI_MATCH_KEY                    3      159       7       53574
 6   INDEX              MK_CY_KEY1_ST_PROV              3                4       29286
 7   TABLE ACCESS       DI_MATCH_KEY                    3      159       7       53740
 8   INDEX              MK_CY_KEY2_ST_PROV              3                4       29286
 9   TABLE ACCESS       DI_MATCH_KEY                    5      265       7       55941
10   INDEX              MK_CY_KEY1_AGN                  5                4       29686
11   TABLE ACCESS       DI_MATCH_KEY                 2192   116176    1855    15511582
12   INDEX              MK_CY_KEY2_AGN               2228               12      524136
</pre>
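To make it easier to see which columns each of the indexes in the plan actually covers (in particular MK_CY_KEY2_AGN, which drives the 73,835 table accesses in the trace above), a standard dictionary lookup such as the following can be run; it is generic Oracle SQL, not specific to this schema:
<pre>
SELECT index_name, column_position, column_name
  FROM dba_ind_columns
 WHERE table_name = 'DI_MATCH_KEY'
 ORDER BY index_name, column_position;
</pre>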
Kindly help us regarding this.
Thanks,
Krishna Kumar.

Similar Messages

  • Undo_tablespace and undo_retention, or tune the query: which one to increase

    Hi all,
    In my log file I see:
    ORA-01555 caused by SQL statement below (SQL ID: 9nc0n06yryhbk, Query Duration=165122 sec, SCN: 0x05ff.062f3363):
    Tue Feb 5 02:26:39 2008
    SELECT /*+ FIRST_ROWS */ /*+ ORDERED */ B.GID ,K.ELEMENT_TYPE ,K.DATA_SOURCE_GID ,K.PROD_ID ,K.OPTION_CD ,K.MCC ,K.SPN ,K.REGION_CD ,K.WW_CD ,K.COUNTRY_CD ,K.ERROR ,B.GBATCH_ID
    ,B.PERIOD_SEQ_NUM ,B.ACTION ,B.ERROR B_ERROR ,B.COST ,B.INPUT_FILE_ROW_NUM ,B.ACTION_STATUS ,B.ACTION_TIMESTAMP FROM T_INPUT_BUCKET B, T_COS_INPUT_KEY K WHERE B.ACTION_STATUS =
    'initial' AND B.GINPUT_KEY_ID = K.GID AND B.GBATCH_ID = :B1 AND ROWNUM < 8001
    Tue Feb 5 02:35:21 2008
    Thread 2 advanced to log sequence 42907
    Mon Feb 4 09:10:55 2008
    ORA-01555 caused by SQL statement below (SQL ID: 9nc0n06yryhbk, Query Duration=104081 sec, SCN: 0x05ff.05ebc008):
    Mon Feb 4 09:10:55 2008
    SELECT /*+ FIRST_ROWS */ /*+ ORDERED */ B.GID ,K.ELEMENT_TYPE ,K.DATA_SOURCE_GID ,K.PROD_ID ,K.OPTION_CD ,K.MCC ,K.SPN ,K.REGION_CD ,K.WW_CD ,K.COUNTRY_CD ,K.ERROR ,B.GBATCH_ID
    ,B.PERIOD_SEQ_NUM ,B.ACTION ,B.ERROR B_ERROR ,B.COST ,B.INPUT_FILE_ROW_NUM ,B.ACTION_STATUS ,B.ACTION_TIMESTAMP FROM T_INPUT_BUCKET B, T_COS_INPUT_KEY K WHERE B.ACTION_STATUS =
    'initial' AND B.GINPUT_KEY_ID = K.GID AND B.GBATCH_ID = :B1 AND ROWNUM < 8001
    Mon Feb 4 09:14:08 2008
    ===============================================
    and my undo_retention
    Current usage:
    UNDO_01 96736 11596 12 88
    UNDO_02 96736 9357 10 90
    NAME TYPE VALUE
    undo_management string AUTO
    undo_retention integer 36000
    can anyone please guide me which one is best to increase undo_retention or undo_tablespace
    or tune the query?
    my database version is
    Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bi
    PL/SQL Release 10.2.0.2.0 - Production
    "CORE     10.2.0.2.0     Production"
    TNS for HPUX: Version 10.2.0.2.0 - Production
    NLSRTL Version 10.2.0.2.0 - Production
    thanks in advance

    IMO best is to
    1) tune the query to minimize time and resource use;
    2) set the undo_retention to size the undo tablespace for the required 'consistent read' rebuild requirements;
    3) set the retention guarantee appropriately
    4) size the undo tablespace based on the required size, probably dictated by 2)
    Why is this an 'either one or other' question? When driving a car and looking for best fuel efficiency, you tune up the car, drive properly AND use the right fuel. You don't just pick one and leave it at that.
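    For points 2) and 4), one rough sizing sketch that is commonly used takes the peak undo generation rate from V$UNDOSTAT and multiplies it by the desired retention and the block size; treat the result as an estimate only:
    -- rough undo tablespace size estimate: peak undo rate x undo_retention x block size
    SELECT MAX(undoblks / ((end_time - begin_time) * 86400))  AS peak_undo_blocks_per_sec,
           MAX(undoblks / ((end_time - begin_time) * 86400))
             * (SELECT value FROM v$parameter WHERE name = 'undo_retention')
             * (SELECT value FROM v$parameter WHERE name = 'db_block_size')
             / 1024 / 1024                                    AS estimated_undo_mb
      FROM v$undostat;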

  • How to set aggregation rule (SUM) to a query which uses multiple tables

    Hi,
    I have a query and I want to get the sum of a few columns in it. How can I achieve this, given that it joins multiple tables? I am posting the query below; please help me resolve this. I need the summation of the columns marked in bold. Sorry for posting such a big query. Thanks in advance.
    SELECT DISTINCT
        SAS.ACCOUNT_MONTH_NO,
        SAS.BILL_TO_MAJOR_SALES_CHANNEL,
        SAS.BUS_AREA_ID,
        SAS.CUST_NAME,
        SAS.PART_NO,
        SAS.PART_DESC,
        SAS.PRODUCT_CLASS_CODE,
        SAS.SUPER_FAMILY_CODE,
        *SAS.NET_SALES_AMT_COA_CURR,* 
      *SAS.GROSS_SALES_AMT_COA_CURR*,
        *SAS.SHIPPED_QTY*,
        SAS.SRCE_SYS_ID,
        GWS.SRC_LOCATION,
        GWS.PART_CF_PART_NUMBER,
        GWS.ANALYST_COMMENTS,
        NVL(GWS.CLAIM_QUANTITY,0) AS *CLAIM_QUANTITY*,
        GWS.CUSTOMER_CLAIM_SUBMISSION_DATE,
        GWS.CLAIM_TYPE,
        GWS.CREDIT_MEMO_NO,
        NVL(GWS.CREDIT_MEMO_AMT,0) AS *CREDIT_MEMO_AMT*,
        GWS.TRANS_CREATED_BY,
        GWS.COMPONENT_CODE,
        GWS.DATE_OF_THE_FAILURE,
        GWS.DATE_PART_IN_SERVICE,
        GWS.PROBLEM_CODE,
        NVL(GWS.TOT_AMT_REVIEWED_BY_CA,0) AS *TOT_AMT_REVIEWED_BY_CA*,
        GWS.REGION
      FROM
    (SELECT
            TO_CHAR(A.STATUS_DATE, 'YYYYMM') AS ACCOUNT_MONTH_NO,
            A.LOCATION_ID SRC_LOCATION,
            A.CF_PN PART_CF_PART_NUMBER,
            A.ANALYST_COMMENTS,
            A.CLAIM_QUANTITY, 
         A.CUST_CLAIM_SUBM_DATE CUSTOMER_CLAIM_SUBMISSION_DATE,
            A.CLAIM_TYPE,
            A.CREDIT_MEMO_NO,
            A.CREDIT_MEMO_AMT,
            A.CREATED_BY TRANS_CREATED_BY,
            A.FAULT_CODE COMPONENT_CODE,
            A.PART_FAILURE_DATE DATE_OF_THE_FAILURE,
            A.PART_IN_SERVICE_DATE DATE_PART_IN_SERVICE,
            A.FAULT_CODE PROBLEM_CODE,
            A.TOT_AMT_REVIEWED_BY_CA,
            A.PART_BUS_AREA_ID AS BUS_AREA_ID,
            A.PART_SRC_SYS_ID,
            C.CUST_NAME,
            C.BILL_TO_MAJOR_SALES_CHANNEL,
            P.PART_NO,
            P.PART_DESC,
            P.PRODUCT_CLASS_CODE,
            L.REGION
          FROM
            EDWOWN.MEDW_BIS_DTL_FACT A,
            EDWOWN.EDW_MV_DB_CUST_DIM C,
            EDWOWN.EDW_BUSINESS_LOCATION_DIM L,
            EDWOWN.EDW_V_ACTV_PART_DIM P
          WHERE
            A.PART_KEY                       = P.PART_KEY
          AND A.CUSTOMER_KEY                 = C.CUSTOMER_KEY
          AND A.LOCATION_KEY                 = L.LOCATION_KEY
          AND A.PART_SRC_SYS_ID              = 'SOMS'
          AND A.PART_BUS_AREA_ID             = 'USA'
          AND C.BILL_TO_MAJOR_SALES_CHANNEL <> 'IN'
    ) GWS,
      (SELECT
            A.ACCOUNT_MONTH_NO,
            A.BUS_AREA_ID,
            A.NET_SALES_AMT_COA_CURR,
            A.GROSS_SALES_AMT_COA_CURR,
            A.SHIPPED_QTY,
            B.BILL_TO_MAJOR_SALES_CHANNEL,
            A.SRCE_SYS_ID,
            B.CUST_NAME,
            D.PART_NO,
            D.PART_DESC,
            D.PRODUCT_CLASS_CODE,
            D.SUPER_FAMILY_CODE
          FROM
            SASOWN.SAS_V_CORP_SHIP_FACT A,
            SASOWN.SAS_V_CORP_CUST_DIM B,
            SASOWN.SAS_V_CORP_LOCN_DIM C,
            SASOWN.SAS_V_CORP_PART_DIM D
          WHERE
                C.DIVISION_CODE = A.DIVISION_CODE
            AND
                B.B_HIERARCHY_KEY = A.B_HIERARCHY_KEY
            AND
                D.PART_NO = A.PART_NO
            AND
                A.SRCE_SYS_ID = 'SOMS'
            AND
                A.BUS_AREA_ID = 'USA'
            AND
                B.BILL_TO_MAJOR_SALES_CHANNEL <> 'IN'
    ) SAS
      WHERE
        SAS.ACCOUNT_MONTH_NO              = GWS.ACCOUNT_MONTH_NO(+)
      AND SAS.BILL_TO_MAJOR_SALES_CHANNEL = GWS.BILL_TO_MAJOR_SALES_CHANNEL(+)
      AND SAS.BUS_AREA_ID                 = GWS.BUS_AREA_ID(+)
    AND SAS.PRODUCT_CLASS_CODE          = GWS.PRODUCT_CLASS_CODE(+);
    thanks in advance
    aswin

    You get rid of the distinct.
    You put sum() around your starred items.
    You put all the remaining columns in a GROUP BY clause.
    You hope that none of the other tables has more than one row that matches the SAS table, which would cause you to count that row more than once.
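    A minimal sketch of that shape, keeping the two inline views exactly as posted (their bodies are elided here) and showing only a handful of the selected columns:
    SELECT SAS.ACCOUNT_MONTH_NO,
           SAS.CUST_NAME,
           SAS.PART_NO,
           SUM(SAS.NET_SALES_AMT_COA_CURR)   AS NET_SALES_AMT_COA_CURR,
           SUM(SAS.GROSS_SALES_AMT_COA_CURR) AS GROSS_SALES_AMT_COA_CURR,
           SUM(SAS.SHIPPED_QTY)              AS SHIPPED_QTY,
           SUM(NVL(GWS.CLAIM_QUANTITY, 0))   AS CLAIM_QUANTITY
      FROM ( /* GWS inline view as posted */ ) GWS,
           ( /* SAS inline view as posted */ ) SAS
     WHERE SAS.ACCOUNT_MONTH_NO = GWS.ACCOUNT_MONTH_NO(+)
       -- ... remaining outer-join conditions as posted ...
     GROUP BY SAS.ACCOUNT_MONTH_NO,
              SAS.CUST_NAME,
              SAS.PART_NO;
    Every column kept in the SELECT list that is not wrapped in SUM() has to appear in the GROUP BY.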

  • Cancel the query which fills up the transaction log file

    Hi,
    We have a reindexing job that runs every Sunday. During the last run the transaction log got full and subsequent transactions against the database errored out with 'Transaction log is full'. I want to restrict the utilization of the log file, that
    is, when the reindexing job pushes log utilization past a certain threshold, the job should be cancelled automatically. Is there any way to do this?

    Hello,
    Instead of putting a limit on the transaction log, it would be better to find out what is causing the high utilization. Even if you find that your log is growing because of some transaction, it can be a blunder to roll it back; that is relatively easy for an index rebuild, but if you
    cancel a large delete operation you will end up in a mess. Please don't create a program to delete or kill a running operation.
    You can create a custom alert job for transaction log file growth. That would be good.
    From SQL Server 2008 onwards index rebuilds are fully logged, so they sometimes cause transaction log issues. To address this, run the index rebuild only for specific or selective tables.
    Another widely accepted option is Ola Hallengren's index maintenance script. I suggest you try this:
    http://ola.hallengren.com/
    Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers
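    For the custom alert job on log growth mentioned above, a minimal sketch of a log-utilisation check based on DBCC SQLPERF is shown below; the temp table name, threshold and follow-up action are placeholders to adapt:
    -- capture DBCC SQLPERF(LOGSPACE) output so a job step can test a threshold
    CREATE TABLE #logspace
    (
        database_name      sysname,
        log_size_mb        float,
        log_space_used_pct float,
        status             int
    );
    INSERT INTO #logspace
    EXEC ('DBCC SQLPERF (LOGSPACE)');
    -- example threshold: flag databases whose log is more than 80% full
    SELECT database_name, log_size_mb, log_space_used_pct
    FROM   #logspace
    WHERE  log_space_used_pct > 80;
    DROP TABLE #logspace;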

  • Guide me to tune the query

    Hi All,
    I need to tune the query below, which takes more than 1 hour to execute over roughly 8 lakh (800,000) records.
    SQL> explain plan for SELECT C.aci_cust_code cust_code,
                C.aci_cust_name cust_name,
                R.NAME ruledefination,
                B.RULECODE ALERTS,
                A.custom1 tran_id,
                TD_get_value('AMLTRANTYPE', RTRIM(A.custom17)) trantype,
                A.CUSTOM18 tran_nature,
                A.custom25 tran_date,
                A.messageno messageno,
                TD_get_value('AMLTRANSTATUS', A.status) msgstatus,
                D.acai_acct_type acct_type,
                A.custom19 acct_number,
                A.CURRENCY CURRENCY,
                A.priorityamount amount,
                A.operator USERNAME,
                A.msgdb_id msgdb_id,
                A.msg_mode_in msg_mode_in
           FROM MSGDB A,
                MSGALERTS B,
                AML_CUST_INFO C,
                AML_CUST_ACC_INFO D,
                RULETBL2 R,
                (SELECT tdkey FROM tabledetails WHERE tdidcode = 'AML-INCLUDEQ') amlqueues
          WHERE A.msgdb_id = B.msgdb_id
            AND A.queueid = amlqueues.tdkey
            AND A.MSG_MODE_IN = 'AML-TRANS'
            AND A.custom15 = C.aci_cust_code
            AND A.CUSTOM19 = D.ACAI_ACCT_NUMBER(+)
            AND TO_CHAR(A.custom25,'YYYYMMDD') BETWEEN TO_CHAR(TO_DATE('2011/01/01','YYYY/MM/DD'),'YYYYMMDD')
                                                   AND TO_CHAR(TO_DATE('2011/01/31','YYYY/MM/DD'),'YYYYMMDD')
            AND B.RULECODE = R.RULECODE
          ORDER BY A.custom25, msgdb_id, B.rulecode;
    Explained.
    PLAN_TABLE_OUTPUT
    Plan hash value: 1081661146
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 173K| 30M| | 12697 (2)| 00:02:33 |
    | 1 | SORT ORDER BY | | 173K| 30M| 66M| 12697 (2)| 00:02:33 |
    |* 2 | HASH JOIN | | 173K| 30M| | 5580 (4)| 00:01:07 |
    | 3 | VIEW | index$_join$_005 | 3395 | 81480 | | 42 (3)| 00:00:01 |
    |* 4 | HASH JOIN | | | | | | |
    | 5 | INDEX FAST FULL SCAN | IDX_RCODE | 3395 | 81480 | | 10 (0)| 00:00:01 |
    | 6 | INDEX FAST FULL SCAN | SYS_C0040836 | 3395 | 81480 | | 31 (0)| 00:00:01 |
    |* 7 | HASH JOIN | | 1737 | 276K| | 5534 (4)| 00:01:07 |
    |* 8 | HASH JOIN | | 559 | 86645 | | 4575 (3)| 00:00:55 |
    |* 9 | HASH JOIN OUTER | | 448 | 56000 | | 4463 (3)| 00:00:54 |
    | 10 | NESTED LOOPS | | 448 | 47040 | | 4404 (3)| 00:00:53 |
    |* 11 | TABLE ACCESS BY INDEX ROWID| MSGDB | 451 | 35178 | | 4403 (3)| 00:00:53 |
    |* 12 | INDEX RANGE SCAN | I_MODEDATE | 2292 | | | 4323 (3)| 00:00:52 |
    |* 13 | INDEX UNIQUE SCAN | PK_TABLEDETAIL | 1 | 27 | | 0 (0)| 00:00:01 |
    | 14 | INDEX FAST FULL SCAN | ACC_NUMBER_TYPE | 58947 | 1151K| | 58 (4)| 00:00:01 |
    | 15 | TABLE ACCESS FULL | AML_CUST_INFO | 18340 | 537K| | 111 (1)| 00:00:02 |
    | 16 | TABLE ACCESS FULL | MSGALERTS | 868K| 6782K| | 944 (4)| 00:00:12 |
    There is no index on RULECODE in the MSGALERTS and RULETBL2 tables.
    Could you guys guide me on how to tune this query, with or without creating any new index?
    Thanks,

    To emphasise what hoek has said regarding dates: NEVER compare dates by first converting them to strings (or numbers). By doing so, you remove vital information from the optimizer.
    For example, what is the difference between "31st Dec 2010" and "1st Jan 2011"? Easy, they're dates, that's 1 day.
    But what's the difference between "20101231" and "20110101"? Easy: 20110101 - 20101231 = 8870.
    That makes the difference between the optimizer guessing 1 row or 8870 rows... a fairly big difference, I think you'll agree, which could well impact on the plan the optimizer chooses.
    One other point - leaving the clause as dates gives:
    AND    a.custom25 BETWEEN TO_DATE('2011/01/01', 'YYYY/MM/DD')
                          AND TO_DATE('2011/01/31', 'YYYY/MM/DD')
    which excludes everything on 31st Jan 2011 except midnight, e.g. 10am on 31st Jan 2011 won't be returned by your query.
    If you're after rows for a given month, then you could do:
    AND    trunc(a.custom25, 'mm') = TO_DATE('01/01/2011', 'dd/mm/yyyy')
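    TRUNC(a.custom25, 'mm') works too, but note that it wraps the column in a function, so a plain index on custom25 could not be used directly; a half-open range avoids that (a sketch - adjust the end date for the month you need):
    AND    a.custom25 >= TO_DATE('2011/01/01', 'YYYY/MM/DD')
    AND    a.custom25 <  TO_DATE('2011/02/01', 'YYYY/MM/DD')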

  • How to tune the query for duplicate records while joining the two tables

    Hi, I am executing a query that retrieves from multiple tables, one of which has duplicate records. How do I get a single record?

    Not enough info... the subject says "tune" the query, the message says "write" the query... and where is the actual query that you tried?

  • How to tune a query, and the difference between CBO and RBO: which is good?

    Hello Friends,
    Here are some questions I have; please reply with a complete description and a URL if any:
    1) How did you tune a query?
    2) What approach do you take to tune a query? Do you use hints?
    3) Where did you tune the query and what were the issues with the query?
    4) What is the difference between RBO and CBO? Where do you use RBO and CBO?
    5) Give some information about hash joins.
    6) Using an explain plan, how do you know where the bottleneck in a query is? How will you identify the bottleneck from the explain plan?
    thanks/Kumar

    Hi,
    kumar73 wrote:
    Hello Friends,
    Here are some questions I have pls reply back with complete description and url if any ..
    1) How did you tune a query?
    Use EXPLAIN PLAN to see exactly where it is spending its time, and address those areas (see the example at the end of this reply).
    See the forum FAQ
    SQL and PL/SQL FAQ
    "3. How to improve the performance of my query?"
    2) What approach do you take to tune a query? Do you use hints?
    Hints can help.
    Even more helpful is writing the SQL efficiently (avoiding multiple scans of the same table, filtering early, using built-in rather than user-defined functions, ...), creating and using indexes, and, for large tables, partitioning.
    Table design can have a big impact on performance.
    Look for ways to do part of what you need before the query. This includes denormalizing (when appropriate), the kind of pre-digesting that often takes place in data warehouses, function-based indexes, and, starting in Oracle 11, virtual columns.
    3) Where did you tune the query and what are the issues with the query?
    Either this question is a vague summary of the entire thread, or I don't understand it. Can you re-phrase this part?
    4) What is the difference between RBO and CBO? Where do you use RBO and CBO?
    Basically, use the RBO if you have Oracle 7 or earlier.
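    As mentioned for question 1, the basic EXPLAIN PLAN workflow looks like this; the statement being explained is only a placeholder:
    EXPLAIN PLAN FOR
      SELECT *                      -- placeholder: substitute the query you want to examine
        FROM employees
       WHERE department_id = 10;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);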

  • Using Flex Pitch, is there a way to fine-tune the start point? I'm using it as an audio-to-MIDI function

    Using Flex Pitch, is there a way to fine-tune the start point? I'm using it as an audio-to-MIDI function.

    Thank you for your reply Karsten but unfortunately this didn't help me so far. Or maybe I'm missing something?
    First the link is a tutorial for iMovie on a Mac. I'm using iMovie on iPad so the steps are inapplicable.
    Second it is only possible for me to manipulate the end part of the sound clip to whichever duration I want. But I can't do the same with the 'beginning' of the sound clip.
    I simply want to place some photos at the beginning of my video with no sound in the background, then after about 2 secs I want the music clip to start. For some reason that is not possible! Every time I drop the music clip onto my project timeline it automatically places itself along with the first frame in the project, and consequently the photos and music are forced to start together.
    Hope I'm making sense...

  • To determine the queries which use Before Aggregation

    Hi,
    Is there a good way to determine which queries use the Before Aggregation property, for example a table which holds information on all the calculated key figures that use Before Aggregation?
    Otherwise, how can we determine the same using RSRT?
    Please help.
    Thanks
    Bhanukiran

    Hi,
    Below listed are the tables related to queries. See if you get this information in any one of those...
    RSZELTDIR     Directory of the reporting component elements
    RSZELTTXT     Texts of reporting component elements
    RSZELTXREF     Directory of query element references
    RSRREPDIR     Directory of all reports (Query GENUNIID)
    RSZCOMPDIR     Directory of reporting components
    RSZRANGE     Selection specification for an element
    RSZSELECT     Selection properties of an element
    RSZELTDIR     Directory of the reporting component elements
    RSZCOMPIC     Assignment reuseable component <-> InfoCube
    RSZELTPRIO     Priorities with element collisions
    RSZELTPROP     Element properties (settings)
    RSZELTATTR     Attribute selection per dimension element
    RSZCALC     Definition of a formula element
    RSZCEL     Query Designer: Directory of Cells
    RSZGLOBV     Global Variables in Reporting
    Regards,
    Yogesh

  • How to tune the query...?

    Hi all,
    I have a table with millions of records and a query against it is taking hours
    to execute. How can I tune the query, apart from doing the following things?
    1. Creating or deleting indexes.
    2. Using bind variables.
    3. Using hints.
    4. Updating the statistics regularly.
    Actually, I was asked in an interview how to tune a query.
    I mentioned the above 4 things. He then said these were not working, so
    how would you tune the query?
    Thanks in advance,
    Pal
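    For item 4 in that list, a statistics refresh is typically something like the following; the schema and table names here are placeholders:
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => 'APP_OWNER',   -- placeholder schema name
        tabname => 'BIG_TABLE',   -- placeholder table name
        cascade => TRUE);         -- also gather statistics on the table's indexes
    END;
    /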

    user546710 wrote:
    Actually, i have asked this question in interview how to tune the query.
    I told him the above 4 things. Then he told, these are not working, then
    how you will tune this query.
    It actually depends on the scenario/problem given.
    You may want to read this first:
    When your query takes too long ...
    HOW TO: Post a SQL statement tuning request - template posting

  • Can I fine tune the Ken Burns effect?

    I'm using iPhoto 6 for the first time - making a slide show. I turned off the default Ken Burns effect for all slides, but I'm manually setting the KB effect on some slides. I like the ability to specify the starting view and the ending view - such as starting the view with a close-up of a bottle of wine, then zooming back to show the table full of food and the people around it, but....
    Is it possible to "fine tune" the KB effect as follows: Have it show the starting view for a specified period of time (e.g. one second), then specify the transition time (e.g. 3 seconds), then have it show the final view for a specified period of time (e.g. 2 seconds).
    I.e., I'd like to set up the KB effect so that it "lingers" on the starting view and ending view for a time period that I specify. Is this possible? (If not, this would be a great feature to add in a future release.)

    You can kinda do that in iPhoto. It's possible to set individual slide display times with the Adjust pane in the slideshow mode if you don't have the Fit slideshow to Music option selected. You can also set transitions and speed of transition.
    To get music to end at the end you will have to estimate the time of the slideshow and use music with a time close to that time. I use a blank, black slide at the end of my slideshow so that the music will end to a blank screen if I miss the timing a bit instead of starting over.
    TIP: For insurance against the iPhoto database corruption that many users have experienced I recommend making a backup copy of the Library6.iPhoto (iPhoto.Library for iPhoto 5 and earlier) database file and keep it current. If problems crop up where iPhoto suddenly can't see any photos or thinks there are no photos in the library, replacing the working Library6.iPhoto file with the backup will often get the library back. By keeping it current I mean backup after each import and/or any serious editing or work on books, slideshows, calendars, cards, etc. That insures that if a problem pops up and you do need to replace the database file, you'll retain all those efforts. It doesn't take long to make the backup and it's good insurance.
    I've created an Automator workflow application (requires Tiger or later), iPhoto dB File Backup, that will copy the selected Library6.iPhoto file from your iPhoto Library folder to the Pictures folder, replacing any previous version of it. It's compatible with iPhoto 6 and 7 libraries and Tiger and Leopard. Just put the application in the Dock and click on it whenever you want to backup the dB file. iPhoto does not have to be closed to run the application, just idle. You can download it at Toad's Cellar. Be sure to read the Read Me pdf file.
    Note: There's now an Automator backup application for iPhoto 5 that will work with Tiger or Leopard.

  • How to find the query name using infoset name

    Hi Experts
    I am new to SAP queries (SQ01, SQ02); some queries have already been created.
    Now I want to make some modifications, but my problem is that I am not able to find the query name.
    I know the infoset name. Can you tell me how to find the query name using the infoset name? Is there a table for this?
    I tried SQ01 also, but it is confusing. Please advise me on this.
    thanks in advance.
    regards
    rajaram

    Hi
    Try like this:
    SQ02 --> Go to --> Query Directory
    From there you can get all the queries belonging to an InfoSet.

  • Error erase Queries in Query Manager- The query is used with user-define...

    Hello Experts
    I have deleted a User Field that had a Formatted Search, and now I cannot remove it because it is linked to a UDF; the error message is as follows:
    "The query is used with user-defined values [Message 952-23]"
    Is there a way to resolve this issue?
    Thanks in advance

    Hi Juan,
    I have only tried this in a limited way, but the formid - is it a number in your case?
    SELECT * from cshs t0 inner join ouqr t1 on t0.queryid = t1.intrnalkey
    where t1.qname = '[%0]'
    Running this query I can get the form id that I need to recreate, but I have a feeling you already know the form id, is this correct?
    - Is there an error when you try to recreate the form id?
    - Will it not let you recreate the same form id because the id is given by the system?
    If it is not possible to recreate the form id, please prepare a backup and log a message. Support should be able to correct the entry in table cshs.
    Hope it helps.
    Jesper

  • Export dump only 1000 tables from the schema which contains 3000 tables.

    Hi,
    I have a requirement to export a dump of only a particular 1000 tables from a schema which contains 3000 tables.
    To take the dump, I need to list the tables in the TABLES parameter, but the syntax won't allow 1000 tables.
    Kindly guide me on how to proceed to take a dump of only those particular 1000 tables.
    Thanks in advance.
    Thanks,
    Orahar.

    You haven't mentioned the Oracle release version.
    If you're using 10g, you could use Data Pump export/import to achieve this, although not in a completely straightforward way.
    Check Metalink note 341733.1, "Export/Import DataPump Parameters INCLUDE and EXCLUDE - How to Load and Unload Specific Objects",
    under section 9, "Exporting or Importing a large number of objects".
    HTH
    -Anantha
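    As a concrete illustration of the INCLUDE approach described in that note, one common pattern is to load the 1000 table names into a small helper table and reference it from the export parameter file; all of the object names below are placeholders:
    -- helper table holding the list of tables to export (hypothetical name)
    CREATE TABLE my_export_list (table_name VARCHAR2(30));
    -- ... insert the 1000 table names into MY_EXPORT_LIST ...
    # export_subset.par (hypothetical parameter file)
    SCHEMAS=MYSCHEMA
    DIRECTORY=DATA_PUMP_DIR
    DUMPFILE=subset_%U.dmp
    LOGFILE=subset_exp.log
    INCLUDE=TABLE:"IN (SELECT table_name FROM myschema.my_export_list)"
    $ expdp system parfile=export_subset.par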

  • " The query cannot use the cache

    Hi, I am checking my query in RSRV and also running it using RSRT, but it is not working and throws the error
    "The query cannot use the cache".
    How do I resolve this?
    I tried RSRCACHE and everything looks fine there.

    Hi,
    Thanks for your update.
    For this new error, try to generate this query in RSRT.
    GOTO -> RSRT -> the GENERATE tab is there.
    Try it and please update me back.
    Thanks & Regards,
    Vipin
