Please optimize the query

Hi,
I have a query that selects from the same table twice using a correlated subquery. The query is taking too much time because the table has many records. Can anyone help modify the query so that it takes less time? Thanks in advance.
Below is the query:
SELECT
SIK_ROLLE_ID,SIK_OBJEKT_ID,SIK_AFTALE_ID,SIK_AFTALE_TP,ROLLE_TP,KNID,EJENDOMS_ID,EJENDOMS_TP, VURD_BLB
,VURD_VAKD,VURD_DT,HAIRCUT,SIK_VAKD,SIK_FOER_BLB,HAIRCUT_BLB,FORANST_PRIO_BLB,RETTIGHEDER_BLB,SIK_EFTER_BLB
,OVERSKREVET_MK,OVERSKREVET_BLB,MAN_HAIRCUT_BLB,MAN_OBJ_E_HCUT_BLB,GLDER_FRA_DT, TRANSAKTIONS_TP,SIK_STATUS_TP
FROM
ETZ3EDW.dbo.EWWH_KS_OBJ_KND2_HV A
WHERE
A.SIK_AFTALE_TP = 20130 and A.TRANSAKTIONS_TP not in ('S')
AND A.GLDER_FRA_DT = ( SELECT MAX(B.GLDER_FRA_DT)
FROM ETZ3EDW.DBO.EWWH_KS_OBJ_KND2_HV B
WHERE B.GLDER_FRA_DT <= '2014-01-08'
AND A.SIK_OBJEKT_ID = B.SIK_OBJEKT_ID
AND A.SIK_AFTALE_ID = B.SIK_AFTALE_ID)

Can you show us an execution plan of the query?
CREATE TABLE #tmp (col DATE)
INSERT INTO #tmp
SELECT MAX(B.GLDER_FRA_DT)
FROM ETZ3EDW.DBO.EWWH_KS_OBJ_KND2_HV B
JOIN ETZ3EDW.dbo.EWWH_KS_OBJ_KND2_HV A
ON A.SIK_OBJEKT_ID = B.SIK_OBJEKT_ID
AND A.SIK_AFTALE_ID = B.SIK_AFTALE_ID
WHERE B.GLDER_FRA_DT <= '2014-01-08'

SELECT
SIK_ROLLE_ID,SIK_OBJEKT_ID,SIK_AFTALE_ID,SIK_AFTALE_TP,ROLLE_TP,KNID,EJENDOMS_ID,EJENDOMS_TP, VURD_BLB
,VURD_VAKD,VURD_DT,HAIRCUT,SIK_VAKD,SIK_FOER_BLB,HAIRCUT_BLB,FORANST_PRIO_BLB,RETTIGHEDER_BLB,SIK_EFTER_BLB
,OVERSKREVET_MK,OVERSKREVET_BLB,MAN_HAIRCUT_BLB,MAN_OBJ_E_HCUT_BLB,GLDER_FRA_DT, TRANSAKTIONS_TP,SIK_STATUS_TP
FROM
ETZ3EDW.dbo.EWWH_KS_OBJ_KND2_HV A
WHERE
A.SIK_AFTALE_TP = 20130 and A.TRANSAKTIONS_TP not in ('S')
AND A.GLDER_FRA_DT IN (SELECT col FROM #tmp)
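For readers who want to try a different shape: on SQL Server 2005 and later (and most other engines), the per-group MAX can be computed once with a window function instead of a correlated subquery. A minimal, illustrative sketch (Python + SQLite here so it is runnable anywhere; the table and data are stand-ins, not the poster's real schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE hv (sik_objekt_id INT, sik_aftale_id INT,
                 glder_fra_dt TEXT, sik_aftale_tp INT, transaktions_tp TEXT);
INSERT INTO hv VALUES
 (1, 10, '2014-01-05', 20130, 'A'),
 (1, 10, '2014-01-07', 20130, 'A'),  -- latest row on/before the cutoff
 (1, 10, '2014-02-01', 20130, 'A'),  -- after the cutoff, must be ignored
 (2, 20, '2014-01-03', 20130, 'S');  -- excluded by transaktions_tp = 'S'
""")

# Original shape: a correlated MAX subquery evaluated per outer row.
correlated = """
SELECT a.sik_objekt_id, a.sik_aftale_id, a.glder_fra_dt
FROM hv a
WHERE a.sik_aftale_tp = 20130 AND a.transaktions_tp NOT IN ('S')
  AND a.glder_fra_dt = (SELECT MAX(b.glder_fra_dt) FROM hv b
                        WHERE b.glder_fra_dt <= '2014-01-08'
                          AND a.sik_objekt_id = b.sik_objekt_id
                          AND a.sik_aftale_id = b.sik_aftale_id)
"""

# Rewrite: rank the rows once per (objekt, aftale) group, keep rank 1.
windowed = """
SELECT sik_objekt_id, sik_aftale_id, glder_fra_dt
FROM (SELECT *, ROW_NUMBER() OVER (
          PARTITION BY sik_objekt_id, sik_aftale_id
          ORDER BY glder_fra_dt DESC) AS rn
      FROM hv
      WHERE glder_fra_dt <= '2014-01-08') t
WHERE rn = 1 AND sik_aftale_tp = 20130 AND transaktions_tp NOT IN ('S')
"""

rows_corr = con.execute(correlated).fetchall()
rows_win = con.execute(windowed).fetchall()
print(rows_corr, rows_win)
```

Either shape can perform well with a supporting index on (SIK_OBJEKT_ID, SIK_AFTALE_ID, GLDER_FRA_DT); measure both against the real data.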
Best Regards,
Uri Dimant SQL Server MVP,
http://sqlblog.com/blogs/uri_dimant/
MS SQL optimization: MS SQL Development and Optimization
MS SQL Consulting:
Large scale of database and data cleansing
Remote DBA Services:
Improves MS SQL Database Performance
SQL Server Integration Services:
Business Intelligence

Similar Messages

  • The query processor ran out of stack space during query optimization. Please simplify the query

    Can you suggest what I should do in this case?
    I have one table that is a master table. I reference this table in more than 300 tables; that is, the foreign key of its primary key appears in 300+ tables.
    Because of this, I get the following error when deleting any row,
    regardless of whether the data exists in a referencing table or not.
    The error I am getting is
    "The query processor ran out of stack space during query optimization. Please simplify the query"
    Can you suggest what I should do to avoid this error, because I am unable to delete the entry?
    Apart from that, I am also seeing a performance problem; is it due to such a huge number of FKs?
    Please advise me on the following points:
    1. Is this a bad way to handle it? If yes, please suggest a solution.
    2. If it is a correct way, what should I do about the error when deleting a record?
    3. Is it right to create a foreign key in each table where I save data from this master? If not, how do I manage integrity?
    4. What do people do in huge databases when they want to create a foreign key for a primary key?
    5. Can you tell me how DBAs handle this in big databases with a huge number of tables?

    The most common reason of getting such error is having more than 253 foreign key constraints on a table. 
    The max limit is documented here:
    http://msdn.microsoft.com/en-us/library/ms143432(SQL.90).aspx 
    Although a table can contain an unlimited number of FOREIGN KEY constraints, the recommended maximum is 253. Depending on the hardware configuration hosting SQL Server, specifying additional foreign key constraints may be expensive for the query optimizer to process.
    If you are on 32-bit, you might want to move to 64-bit to get a slightly bigger stack, but ultimately having 300+ foreign keys is not something that will work in the long run.
    Balmukund Lakhani | Please mark solved if I've answered your question, vote for it as helpful to help other users find a solution quicker
    This posting is provided "AS IS" with no warranties, and confers no rights.
    My Blog |
    Team Blog | @Twitter
    Author: SQL Server 2012 AlwaysOn -
    Paperback, Kindle
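Balmukund's point about the 253-FK recommendation suggests a first diagnostic step: count how many tables actually reference the master. The catalog query is engine-specific; here is a runnable sketch of the idea using SQLite's PRAGMA foreign_key_list (in SQL Server you would query the system catalog instead; the tables below are stand-ins):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE master (id INTEGER PRIMARY KEY);
CREATE TABLE child1 (id INTEGER PRIMARY KEY,
                     m_id INTEGER REFERENCES master(id));
CREATE TABLE child2 (id INTEGER PRIMARY KEY,
                     m_id INTEGER REFERENCES master(id));
CREATE TABLE unrelated (id INTEGER PRIMARY KEY);
""")

def fk_referrers(con, target):
    """Names of tables holding a foreign key that points at `target`."""
    referrers = set()
    tables = [r[0] for r in con.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for t in tables:
        # foreign_key_list rows: (id, seq, table, from, to, ...);
        # index 2 is the referenced table.
        for fk in con.execute(f"PRAGMA foreign_key_list({t})"):
            if fk[2] == target:
                referrers.add(t)
    return sorted(referrers)

print(fk_referrers(con, "master"))
```

If the count is anywhere near the recommended limit, that alone explains the stack-space error.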

  • Help needed to optimize the query

    Help needed to optimize the query:
    The requirement is to select the record with the max eff_date from HIST_TBL, and that max eff_date should be >= '01-Jan-2007'.
    The query has a high cost and takes around 15 minutes to execute.
    Can anyone help fine-tune this?
       SELECT c.H_SEC,
                    c.S_PAID,
                    c.H_PAID,
                    table_c.EFF_DATE
       FROM    MTCH_TBL c
                    LEFT OUTER JOIN
                       (SELECT b.SEC_ALIAS,
                               b.EFF_DATE,
                               b.INSTANCE
                          FROM HIST_TBL b
                         WHERE b.EFF_DATE =
                                  (SELECT MAX (b2.EFF_DATE)
                                     FROM HIST_TBL b2
                                    WHERE b.SEC_ALIAS = b2.SEC_ALIAS
                                          AND b.INSTANCE =
                                                 b2.INSTANCE
                                          AND b2.EFF_DATE >= '01-Jan-2007')
                               OR b.EFF_DATE IS NULL) table_c
                    ON  table_c.SEC_ALIAS=c.H_SEC
                       AND table_c.INSTANCE = 100;

    To start with, I would avoid scanning HIST_TBL twice.
    Try this
    select c.h_sec
         , c.s_paid
         , c.h_paid
         , table_c.eff_date
      from mtch_tbl c
      left
      join (
              select sec_alias
                   , eff_date
                   , instance
                from (
                        select sec_alias
                             , eff_date
                             , instance
                             , max(eff_date) over(partition by sec_alias, instance) max_eff_date
                          from hist_tbl b
                         where eff_date >= to_date('01-jan-2007', 'dd-mon-yyyy')
                             or eff_date is null
                      )
                where eff_date = max_eff_date
                  or eff_date is null
           ) table_c
        on table_c.sec_alias = c.h_sec
       and table_c.instance  = 100;

  • Please give the query to find out the primary key of a table in SQL*Plus

    Dear friends,
    Please give me the query to find out the primary key of a table in SQL*Plus.

    hi
    SQL> DESC user_constraints
    Name
    -----------------
    OWNER
    CONSTRAINT_NAME
    CONSTRAINT_TYPE
    TABLE_NAME
    SEARCH_CONDITION
    R_OWNER
    R_CONSTRAINT_NAME
    DELETE_RULE
    STATUS
    DEFERRABLE
    DEFERRED
    VALIDATED
    GENERATED
    BAD
    RELY
    LAST_CHANGE
    INDEX_OWNER
    INDEX_NAME
    INVALID
    VIEW_RELATED
    SQL> SELECT constraint_name,table_name,r_constraint_name,status
      2  FROM user_constraints WHERE constraint_type='P';
    CONSTRAINT_NAME                TABLE_NAME                     R_CONSTRAINT_NAME              STATUS
    SYS_C003141                    CUSTOMERS                                                     ENABLED
    PK_DEPT                        DEPT                                                          ENABLED
    SYS_C003139                    SALESREPS                                                     ENABLED

    Khurram
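The same idea of reading catalog metadata applies in other engines, just through different views. A runnable illustration of primary-key discovery using SQLite's PRAGMA table_info (purely an analogy to the Oracle dictionary query above, with made-up tables):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dept (deptno INTEGER, dname TEXT,
                   CONSTRAINT pk_dept PRIMARY KEY (deptno));
CREATE TABLE customers (cust_id INTEGER PRIMARY KEY, name TEXT);
""")

def primary_key_columns(con, table):
    # PRAGMA table_info: index 5 is the 1-based position of the column
    # in the primary key, or 0 if the column is not part of it.
    return [row[1] for row in con.execute(f"PRAGMA table_info({table})")
            if row[5] > 0]

pk_dept = primary_key_columns(con, "dept")
pk_cust = primary_key_columns(con, "customers")
print(pk_dept, pk_cust)
```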

  • Please explain the query?

    hello all,
    please explain the query below, used in the solution that follows; thanks in advance!
    SELECT MAX(P1.ET) AS ST, P2.ST AS ET
    FROM XYZ AS P1
    INNER JOIN XYZ AS P2 ON (P1.ST < P2.ST)
    GROUP BY P2.ST
    HAVING MAX(P1.ET) < P2.ST
    IF OBJECT_ID('XYZ') IS NOT NULL
    DROP TABLE XYZ
    GO
    CREATE TABLE XYZ
    (
    id int identity(1,1),
    ST smalldatetime NOT NULL,
    ET smalldatetime NOT NULL
    )
    GO
    INSERT INTO XYZ (ST, ET)
    VALUES ('2010-01-01 9:00AM', '2010-01-01 10:00AM')
    INSERT INTO XYZ (ST, ET)
    VALUES ('2010-01-01 9:00AM', '2010-01-01 12:00PM')
    INSERT INTO XYZ (ST, ET)
    VALUES ('2010-01-01 1:00PM', '2010-01-01 2:00PM')
    INSERT INTO XYZ (ST, ET)
    VALUES ('2010-01-01 3:00PM', '2010-01-01 5:00PM')
    INSERT INTO XYZ (ST, ET)
    VALUES ('2010-01-01 11:00AM', '2010-01-01 12:00PM')
    GO
    WITH Gaps(Gap) AS
    (
    SELECT COALESCE(SUM(DATEDIFF(MINUTE,ST,ET)), 0)
    FROM (
    SELECT MAX(P1.ET) AS ST, P2.ST AS ET
    FROM XYZ AS P1
    INNER JOIN XYZ AS P2 ON (P1.ST < P2.ST)
    GROUP BY P2.ST
    HAVING MAX(P1.ET) < P2.ST
    ) gaps
    )
    SELECT (
    COALESCE(DATEDIFF(MINUTE, MIN(ST), MAX(ET)), 0)
    - (SELECT Gap FROM Gaps)
    ) / 60.0 TotalHrs
    FROM XYZ

    SELECT MAX(P1.ET) AS ST, P2.ST AS ET
    FROM XYZ AS P1
    INNER JOIN XYZ AS P2 ON (P1.ST < P2.ST)
    GROUP BY P2.ST
    HAVING MAX(P1.ET) < P2.ST
    This finds all of the gaps (that is, time that is not covered by any interval in your original data). To see how it works, let's look at an example. I'm going to use data that is a little different from your original data, because your data has no gaps.
    The query still works if you have no gaps, but it is easier to see what it is doing if the data has some gaps. Also, I'm going to explicitly set the id column instead of making it an identity. This will make it a little easier to identify each
    row in the following explanation. So the data I'm going to work with is
    CREATE TABLE XYZ
    (
    id int,
    ST smalldatetime NOT NULL,
    ET smalldatetime NOT NULL
    )
    GO
    INSERT INTO XYZ (id, ST, ET)
    VALUES (1, '2010-01-01 9:00AM', '2010-01-01 10:00AM')
    INSERT INTO XYZ (id, ST, ET)
    VALUES (2, '2010-01-01 9:00AM', '2010-01-01 12:00PM')
    INSERT INTO XYZ (id, ST, ET)
    VALUES (3, '2010-01-02 1:00PM', '2010-01-02 2:00PM')
    INSERT INTO XYZ (id, ST, ET)
    VALUES (4, '2010-01-03 3:00PM', '2010-01-03 5:00PM')
    INSERT INTO XYZ (id, ST, ET)
    VALUES (5, '2010-01-03 7:00PM', '2010-01-03 9:00PM')
    Notice that the gaps here are from row 2 to row 3 (12PM on the 1st to 1PM on the 2nd) and row 3 to row 4 (2PM on the 2nd to 3PM on the 3rd) and row 4 to row 5 (5PM on the 3rd to 7PM on the 3rd).  So that's what the above subquery should be finding for
    us.
    To see what a query you don't understand is doing, simplify it to the smallest part you can and see what it returns and then build it up to the final query.  So the simplest thing we can do is just the from clause.  That gives us
    SELECT *
    FROM XYZ AS P1
    INNER JOIN XYZ AS P2 ON (P1.ST < P2.ST)
    /* That gives us the result
    1 2010-01-01 09:00:00 2010-01-01 10:00:00 3 2010-01-02 13:00:00 2010-01-02 14:00:00
    2 2010-01-01 09:00:00 2010-01-01 12:00:00 3 2010-01-02 13:00:00 2010-01-02 14:00:00
    1 2010-01-01 09:00:00 2010-01-01 10:00:00 4 2010-01-03 15:00:00 2010-01-03 17:00:00
    2 2010-01-01 09:00:00 2010-01-01 12:00:00 4 2010-01-03 15:00:00 2010-01-03 17:00:00
    3 2010-01-02 13:00:00 2010-01-02 14:00:00 4 2010-01-03 15:00:00 2010-01-03 17:00:00
    1 2010-01-01 09:00:00 2010-01-01 10:00:00 5 2010-01-03 19:00:00 2010-01-03 21:00:00
    2 2010-01-01 09:00:00 2010-01-01 12:00:00 5 2010-01-03 19:00:00 2010-01-03 21:00:00
    3 2010-01-02 13:00:00 2010-01-02 14:00:00 5 2010-01-03 19:00:00 2010-01-03 21:00:00
    4 2010-01-03 15:00:00 2010-01-03 17:00:00 5 2010-01-03 19:00:00 2010-01-03 21:00:00
    */
    Now we want to Group by P2.ST and get the MAX(P1.ET) and P2.ST, so that gives us
    SELECT MAX(P1.ET) AS ST, P2.ST AS ET
    FROM XYZ AS P1
    INNER JOIN XYZ AS P2 ON (P1.ST < P2.ST)
    GROUP BY P2.ST
    /* Result is
    2010-01-01 12:00:00 2010-01-02 13:00:00
    2010-01-02 14:00:00 2010-01-03 15:00:00
    2010-01-03 17:00:00 2010-01-03 19:00:00
    */
    Now with this sample data there are no rows in the output with MAX(P1.ET) > P2.ST.  But if there was one, you would not want that row because it is not a real gap (obviously, a gap can't start today and end yesterday).  (If you want to see how
    you could get a case like that, add a row 6 to the sample data with a ST of 2010-01-01 7:00PM and an ET of 2010-01-03 9:00PM.)
    So we add a HAVING MAX(P1.ET) < P2.ST to remove those cases.
    That leaves us with all of the gaps.  So then with
    SELECT COALESCE(SUM(DATEDIFF(MINUTE,ST,ET)), 0)
    FROM (
    SELECT MAX(P1.ET) AS ST, P2.ST AS ET
    FROM XYZ AS P1
    INNER JOIN XYZ AS P2 ON (P1.ST < P2.ST)
    GROUP BY P2.ST
    HAVING MAX(P1.ET) < P2.ST
    ) gaps
    we get the total amount of time in all gaps.  Then the final result is just the time from the earliest ST to the latest ET minus the total time from the gap.
    Tom
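Tom's walkthrough can be reproduced end to end. A runnable sketch (Python + SQLite; DATEDIFF(MINUTE, ...) and smalldatetime are T-SQL, so the date arithmetic is swapped for julianday here, but the gap logic is identical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE xyz (id INT, st TEXT, et TEXT);
INSERT INTO xyz VALUES
 (1, '2010-01-01 09:00', '2010-01-01 10:00'),
 (2, '2010-01-01 09:00', '2010-01-01 12:00'),
 (3, '2010-01-02 13:00', '2010-01-02 14:00'),
 (4, '2010-01-03 15:00', '2010-01-03 17:00'),
 (5, '2010-01-03 19:00', '2010-01-03 21:00');
""")

# The gap finder: for each later start p2.st, take the latest earlier end
# MAX(p1.et); HAVING keeps only pairs where that end precedes the start.
gaps = con.execute("""
SELECT MAX(p1.et) AS gap_start, p2.st AS gap_end
FROM xyz AS p1
JOIN xyz AS p2 ON p1.st < p2.st
GROUP BY p2.st
HAVING MAX(p1.et) < p2.st
""").fetchall()
print(gaps)

# Total hours covered = whole span minus the summed gaps
# (julianday differences are in days; * 1440 converts to minutes).
total_hrs = con.execute("""
WITH gap_sum(gap) AS (
  SELECT COALESCE(SUM((julianday(gap_end) - julianday(gap_start)) * 1440), 0)
  FROM (SELECT MAX(p1.et) AS gap_start, p2.st AS gap_end
        FROM xyz AS p1 JOIN xyz AS p2 ON p1.st < p2.st
        GROUP BY p2.st
        HAVING MAX(p1.et) < p2.st)
)
SELECT ((julianday(MAX(et)) - julianday(MIN(st))) * 1440
        - (SELECT gap FROM gap_sum)) / 60.0
FROM xyz
""").fetchone()[0]
print(total_hrs)
```

The three gaps Tom describes come out exactly, and the total is 8 hours of covered time (3 + 1 + 2 + 2).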

  • Please optimize the below code (urgent)

    friends,
    I know that cluster tables cannot be joined with transparent tables.
    However, I need a performance improvement for the following code.
    If possible, is there a way to join bkpf or bseg to improve performance? Can we create a view for bkpf and bseg; if yes, then how?
    Please modify the code below to improve its performance.
    START-OF-SELECTION.
    SELECT bukrs belnr gjahr budat FROM bkpf INTO TABLE i_bkpf
    WHERE bukrs = p_bukrs AND "COMPANY CODE
    gjahr = p_gjahr AND "FISCAL YEAR
    budat IN s_budat. "POSTING DATE IN DOC
    IF sy-subrc = 0.
    SELECT bukrs belnr gjahr hkont shkzg dmbtr FROM bseg INTO TABLE
    i_bseg FOR ALL ENTRIES IN i_bkpf
    WHERE bukrs = i_bkpf-bukrs AND "COMPANY CODE
    belnr = i_bkpf-belnr AND "A/CING DOC NO
    gjahr = i_bkpf-gjahr AND "FISCAL YEAR
    hkont = p_hkont. "General Ledger Account"
    IF sy-subrc = 0.
    SELECT bukrs belnr gjahr hkont shkzg dmbtr FROM bseg INTO TABLE
    i_bseg1 FOR ALL ENTRIES IN i_bseg
    WHERE bukrs = i_bseg-bukrs AND "COMPANY CODE
    belnr = i_bseg-belnr AND "A/CING DOC NO
    gjahr = i_bseg-gjahr. "FISCAL YEAR
    ENDIF.
    ENDIF.
    IF NOT i_bseg1[] IS INITIAL.
    LOOP AT i_bseg1.
    IF i_bseg1-hkont = p_hkont AND i_bseg1-shkzg = 'S'.
    v_sumgl = v_sumgl + i_bseg1-dmbtr.
    ELSEIF i_bseg1-hkont = p_hkont AND i_bseg1-shkzg = 'H'.
    v_sumgl = v_sumgl - i_bseg1-dmbtr.
    ELSEIF i_bseg1-hkont NE p_hkont .
    IF i_bseg1-shkzg = 'H'.
    i_bseg1-dmbtr = - i_bseg1-dmbtr.
    ENDIF.
    i_alv-hkont = i_bseg1-hkont.
    i_alv-dmbtr = i_bseg1-dmbtr.
    APPEND i_alv.
    v_sumoffset = v_sumoffset + i_bseg1-dmbtr.
    ENDIF.
    ENDLOOP.
    regards
    Essam.([email protected])

    Hi,
    Please use FOR ALL ENTRIES as below to join these two tables.
    For pool and cluster tables you can create a secondary index, and you can use SELECT DISTINCT and GROUP BY. You can also use native SQL statements for pool and cluster tables.
    See the query for the BSEG table, for example:
    *Code to demonstrate select command
    *Code to demonstrate select into internal table command
    TYPES: BEGIN OF t_bkpf,
    *  include structure bkpf.
      bukrs LIKE bkpf-bukrs,
      belnr LIKE bkpf-belnr,
      gjahr LIKE bkpf-gjahr,
      bldat LIKE bkpf-bldat,
      monat LIKE bkpf-monat,
      budat LIKE bkpf-budat,
      xblnr LIKE bkpf-xblnr,
      awtyp LIKE bkpf-awtyp,
      awkey LIKE bkpf-awkey,
    END OF t_bkpf.
    DATA: it_bkpf TYPE STANDARD TABLE OF t_bkpf INITIAL SIZE 0,
          wa_bkpf TYPE t_bkpf.
    TYPES: BEGIN OF t_bseg,
    *include structure bseg.
      bukrs     LIKE bseg-bukrs,
      belnr     LIKE bseg-belnr,
      gjahr     LIKE bseg-gjahr,
      buzei     LIKE bseg-buzei,
      mwskz     LIKE bseg-mwskz,         "Tax code
      umsks     LIKE bseg-umsks,         "Special G/L transaction type
      prctr     LIKE bseg-prctr,         "Profit Centre
      hkont     LIKE bseg-hkont,         "G/L account
      xauto     LIKE bseg-xauto,
      koart     LIKE bseg-koart,
      dmbtr     LIKE bseg-dmbtr,
      mwart     LIKE bseg-mwart,
      hwbas     LIKE bseg-hwbas,
      aufnr     LIKE bseg-aufnr,
      projk     LIKE bseg-projk,
      shkzg     LIKE bseg-shkzg,
      kokrs     LIKE bseg-kokrs,
    END OF t_bseg.
    DATA: it_bseg TYPE STANDARD TABLE OF t_bseg INITIAL SIZE 0,
          wa_bseg TYPE t_bseg.
    *Select FOR ALL ENTRIES command
    SELECT bukrs belnr gjahr bldat monat budat xblnr awtyp awkey
      UP TO 100 ROWS
      FROM bkpf
      INTO TABLE it_bkpf.
    IF sy-subrc EQ 0.
    * The FOR ALL ENTRIES comand only retrieves data which matches
    * entries within a particular internal table.
      SELECT bukrs belnr gjahr buzei mwskz umsks prctr hkont xauto koart
             dmbtr mwart hwbas aufnr projk shkzg kokrs
        FROM bseg
        INTO TABLE it_bseg
        FOR ALL ENTRIES IN it_bkpf
        WHERE bukrs EQ it_bkpf-bukrs AND
              belnr EQ it_bkpf-belnr AND
              gjahr EQ it_bkpf-gjahr.
    ENDIF.
    Please reward points if found useful :)
    regards
    Sathish
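For readers unfamiliar with FOR ALL ENTRIES: conceptually it turns the driver internal table into a disjunction of key conditions (with duplicate keys collapsed). A rough illustration of that semantics outside ABAP (Python + SQLite, with tiny stand-in versions of BKPF and BSEG; the real statement is generated by the ABAP runtime, often in batches):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE bkpf (bukrs TEXT, belnr TEXT, gjahr INT);
CREATE TABLE bseg (bukrs TEXT, belnr TEXT, gjahr INT, dmbtr REAL);
INSERT INTO bkpf VALUES ('1000', '0001', 2007), ('1000', '0002', 2007);
INSERT INTO bseg VALUES
 ('1000', '0001', 2007, 10.0),
 ('1000', '0001', 2007, 5.0),
 ('1000', '0002', 2007, 7.0),
 ('2000', '0009', 2007, 99.0);  -- no matching header, must be skipped
""")

# The driver set, as FOR ALL ENTRIES uses it: distinct header keys.
keys = sorted({tuple(r) for r in
               con.execute("SELECT bukrs, belnr, gjahr FROM bkpf")})

# Emulate FOR ALL ENTRIES: one key-equality disjunct per driver row.
where = " OR ".join(["(bukrs = ? AND belnr = ? AND gjahr = ?)"] * len(keys))
params = [v for key in keys for v in key]
items = con.execute(
    f"SELECT bukrs, belnr, gjahr, dmbtr FROM bseg WHERE {where}",
    params).fetchall()
print(sorted(items))
```

This also shows why an empty driver table is dangerous in ABAP: with no keys there is no restriction at all, so always guard with sy-subrc or an IS INITIAL check as the posted code does.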

  • How to optimize the query

    Hi,
    This query below is taking more than 1 hour, so I want to optimize it. Any suggestion is appreciated.
    Table customer_details has 40,000 records in total; of these, 7,000 custno rows have id = 'J'.
    select distinct(A.custno)
    from
    (select distinct custno,id,
    case
    when id='I'
    then 0
    else 1
    end as myid from customer_details where custno not in
    (select custno from customer_details where id='J'))A
    group by A.custno
    having sum(A.myid)>0

    Why instead of
                  from   customer_details
                 where   custno not in (select   custno
                                          from   customer_details
                                         where   id = 'J')
    ...not simply
                  from   customer_details
                 where   custno != 'J'
    ...?
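A third shape worth benchmarking (not suggested in the thread) is a single pass with conditional aggregation: no NOT IN and no second scan of the table. A runnable sketch with made-up sample rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customer_details (custno TEXT, id TEXT);
INSERT INTO customer_details VALUES
 ('C1', 'I'), ('C1', 'K'),  -- no 'J', has a non-'I' row -> qualifies
 ('C2', 'I'),               -- only 'I' rows             -> excluded
 ('C3', 'J'), ('C3', 'K');  -- has a 'J' row             -> excluded
""")

# Original shape: NOT IN subquery plus derived table with CASE and HAVING.
original = """
SELECT DISTINCT a.custno
FROM (SELECT DISTINCT custno, id,
             CASE WHEN id = 'I' THEN 0 ELSE 1 END AS myid
      FROM customer_details
      WHERE custno NOT IN (SELECT custno FROM customer_details
                           WHERE id = 'J')) a
GROUP BY a.custno
HAVING SUM(a.myid) > 0
"""

# One-pass alternative: conditional aggregation over the whole table.
one_pass = """
SELECT custno
FROM customer_details
GROUP BY custno
HAVING SUM(CASE WHEN id = 'J' THEN 1 ELSE 0 END) = 0
   AND SUM(CASE WHEN id <> 'I' THEN 1 ELSE 0 END) > 0
"""

rows_orig = con.execute(original).fetchall()
rows_one = con.execute(one_pass).fetchall()
print(rows_orig, rows_one)
```

Note the usual NOT IN caveat: if the subquery can return a NULL custno, NOT IN matches nothing at all, while the aggregated form is unaffected.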

  • Can we optimize the query: it is taking a lot of time

    The following query is taking a lot of time.
    Can anyone suggest how to make it run faster?
    Table B has 12 million records;
    table A has 10,000 records.
    CREATE TABLE less_time
         PARALLEL
         NOLOGGING
         AS
         SELECT a.user_id, a.mp_id, COUNT(product_id) product_count
         FROM table_a a, table_b b
         WHERE TRUNC(a.call_time) < TRUNC(SYSDATE) - 1
         AND a.history_id = b.call_id
         AND b.product_type = 1
         AND b.product_status = 50
         AND user_id IS NOT NULL
         GROUP BY user_id,mp_id;

    analyze index indx_mtx compute statistics
    Thanks for explaining, David...
    Actually I improved it somewhat:
    I made a consolidated index on table B,
    like CREATE INDEX indx_ty ON table_b (call_id, mp_id, user_id, product_type, product_status).
    Now the explain plan shows that the query is much, much faster. I also did one trick in the query:
    SELECT a.user_id, a.mp_id, COUNT(product_id) product_count
    I changed to SELECT a.user_id, a.mp_id, COUNT(1) product_count
    to enable it to do a fast full scan (FFS).
    Thanks anyway...
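One caution about the COUNT(product_id) to COUNT(1) change: the two are only equivalent if product_id is never NULL, because COUNT(column) skips NULLs while COUNT(1) counts every row. A quick demonstration (SQLite, made-up rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE calls (user_id TEXT, product_id INT);
INSERT INTO calls VALUES ('u1', 100), ('u1', NULL), ('u1', 101);
""")

row = con.execute("""
SELECT COUNT(product_id) AS cnt_col,  -- skips the NULL
       COUNT(1)          AS cnt_rows  -- counts every row
FROM calls
""").fetchone()
print(row)
```

If product_id is declared NOT NULL the swap is safe; otherwise it silently changes the result.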

  • Optimize the query as it takes a long time

    Hi, please help to optimize this SQL; it takes more than 45 minutes to return output. It needs to be brought down to at least 3-4 minutes.
    SELECT DISTINCT(ce.event_source) AS mobile_no
    FROM CUSTEVENTSOURCE ce,CUSTHASPACKAGE cp
    WHERE (cp.package_id =119 AND ce.customer_ref = cp.customer_ref)
    AND cp.end_dat IS NULL
    AND SUBSTR(ce.event_source,1,3) ='071'
    AND cp.customer_ref IN (SELECT CUSTOMER_REF FROM CUSTHASPACKAGE
    WHERE end_dat IS NULL
    GROUP BY CUSTOMER_REF HAVING COUNT(0) =1 )
    Following are the indexes corresponding to the existing tables:
    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    CREATE TABLE GENEVA_ADMIN.CUSTEVENTSOURCE
    (
    CUSTOMER_REF VARCHAR2(20 BYTE) NOT NULL,
    PRODUCT_SEQ NUMBER(9) NOT NULL,
    EVENT_SOURCE VARCHAR2(40 BYTE) NOT NULL,
    START_DTM DATE NOT NULL,
    END_DTM DATE,
    EVENT_TYPE_ID NUMBER(9) NOT NULL,
    EVENT_SOURCE_LABEL VARCHAR2(40 BYTE) NOT NULL,
    CREDIT_LIMIT_MNY NUMBER(18),
    EVENT_SOURCE_TXT VARCHAR2(255 BYTE),
    EVENT_SOURCE_UPPER VARCHAR2(40 BYTE) NOT NULL,
    RATING_TARIFF_ID NUMBER(9),
    COMPETITOR_RATING_TARIFF_ID NUMBER(9),
    EVENT_FILTER_1_ID NUMBER(9),
    RECEIVE_ACCOUNT_1_NUM VARCHAR2(20 BYTE),
    RATING_TARIFF_1_ID NUMBER(9),
    ATTRIBUTE_NUMBER_1 NUMBER(2),
    MATCH_TYPE_1 NUMBER(9),
    ATTRIBUTE_VALUE_1 VARCHAR2(32 BYTE),
    GUIDE_RULE_1_DESC VARCHAR2(255 BYTE),
    EVENT_FILTER_2_ID NUMBER(9),
    RECEIVE_ACCOUNT_2_NUM VARCHAR2(20 BYTE),
    RATING_TARIFF_2_ID NUMBER(9),
    ATTRIBUTE_NUMBER_2 NUMBER(2),
    MATCH_TYPE_2 NUMBER(9),
    ATTRIBUTE_VALUE_2 VARCHAR2(32 BYTE),
    GUIDE_RULE_2_DESC VARCHAR2(255 BYTE),
    EVENT_FILTER_3_ID NUMBER(9),
    RECEIVE_ACCOUNT_3_NUM VARCHAR2(20 BYTE),
    RATING_TARIFF_3_ID NUMBER(9),
    ATTRIBUTE_NUMBER_3 NUMBER(2),
    MATCH_TYPE_3 NUMBER(9),
    ATTRIBUTE_VALUE_3 VARCHAR2(32 BYTE),
    GUIDE_RULE_3_DESC VARCHAR2(255 BYTE)
    )
    CREATE INDEX GENEVA_ADMIN.CUSTEVENTSOURCE_AK1 ON GENEVA_ADMIN.CUSTEVENTSOURCE
    (EVENT_SOURCE, EVENT_TYPE_ID)
    CREATE INDEX GENEVA_ADMIN.CUSTEVENTSOURCE_AK2 ON GENEVA_ADMIN.CUSTEVENTSOURCE
    (EVENT_SOURCE_LABEL, EVENT_TYPE_ID)
    CREATE INDEX GENEVA_ADMIN.CUSTEVENTSOURCE_AK3 ON GENEVA_ADMIN.CUSTEVENTSOURCE
    (RECEIVE_ACCOUNT_1_NUM)
    CREATE INDEX GENEVA_ADMIN.CUSTEVENTSOURCE_AK4 ON GENEVA_ADMIN.CUSTEVENTSOURCE
    (RECEIVE_ACCOUNT_2_NUM)
    CREATE INDEX GENEVA_ADMIN.CUSTEVENTSOURCE_AK5 ON GENEVA_ADMIN.CUSTEVENTSOURCE
    (RECEIVE_ACCOUNT_3_NUM)
    CREATE INDEX GENEVA_ADMIN.CUSTEVENTSOURCE_AK6 ON GENEVA_ADMIN.CUSTEVENTSOURCE
    (EVENT_SOURCE_UPPER, EVENT_TYPE_ID)
    CREATE UNIQUE INDEX GENEVA_ADMIN.CUSTEVENTSOURCE_PK ON GENEVA_ADMIN.CUSTEVENTSOURCE
    (CUSTOMER_REF, PRODUCT_SEQ, EVENT_SOURCE, START_DTM, EVENT_TYPE_ID)
    CREATE INDEX GENEVA_ADMIN.SAN_CUSTEVENTSOURCE_IND1 ON GENEVA_ADMIN.CUSTEVENTSOURCE
    (EVENT_SOURCE, EVENT_TYPE_ID, START_DTM, END_DTM)
    +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    CREATE TABLE GENEVA_ADMIN.CUSTHASPACKAGE
    (
    CUSTOMER_REF VARCHAR2(20 BYTE) NOT NULL,
    PACKAGE_SEQ NUMBER(9) NOT NULL,
    PACKAGE_ID NUMBER(9) NOT NULL,
    START_DAT DATE NOT NULL,
    END_DAT DATE,
    SUBS_PRODUCT_SEQ NUMBER(9),
    SUBSCRIPTION_REF VARCHAR2(20 BYTE)
    )
    CREATE UNIQUE INDEX GENEVA_ADMIN.CUSTHASPACKAGE_PK ON GENEVA_ADMIN.CUSTHASPACKAGE
    (CUSTOMER_REF, PACKAGE_SEQ)
    CREATE INDEX GENEVA_ADMIN.UDARA_CUSTHASPACKAGE_PK1 ON GENEVA_ADMIN.CUSTHASPACKAGE
    (PACKAGE_ID)
    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    please help me to optimize
    Thanks
    DBA

    Hi Karthick,
    following is the SQL plan as you requested:
    rows,plan
    ==== ====
    ,SELECT STATEMENT
    ,SORT UNIQUE
    ,FILTER
    ,SORT GROUP BY
    ,TABLE ACCESS BY INDEX ROWID CUSTHASPACKAGE
    ,NESTED LOOPS
    ,NESTED LOOPS
    ,TABLE ACCESS BY INDEX ROWID CUSTEVENTSOURCE
    ,INDEX RANGE SCAN SAN_CUSTEVENTSOURCE_IND1
    ,TABLE ACCESS BY INDEX ROWID CUSTHASPACKAGE
    ,INDEX RANGE SCAN CUSTHASPACKAGE_PK
    ,INDEX RANGE SCAN UDARA_CUSTHASPACKAGE_PK1
    I would highly appreciate it if anyone could help optimize this SQL.
    SQL for the plan:
    EXPLAIN PLAN FOR
    SELECT DISTINCT(ce.event_source) AS mobile_no
    FROM CUSTEVENTSOURCE ce,CUSTHASPACKAGE cp
    WHERE (ce.customer_ref = cp.customer_ref)
    AND cp.end_dat IS NULL
    AND ce.event_source LIKE '071%'
    AND cp.customer_ref IN (SELECT CUSTOMER_REF
                   FROM CUSTHASPACKAGE
                   WHERE package_id =119
                   AND end_dat IS NULL
                   GROUP BY CUSTOMER_REF
                   HAVING COUNT(customer_ref) =1 )
    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    This one has been changed according to the request, but the original SQL is first in this thread.
    Thanks
    DBA
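The one change already visible between the two versions, SUBSTR(ce.event_source,1,3) = '071' rewritten as ce.event_source LIKE '071%', is worth calling out: both predicates select the same rows, but a leading-prefix LIKE can typically use an index on the column, while a function applied to the column usually cannot (optimizer-dependent). A quick equivalence check (SQLite, made-up numbers):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE src (event_source TEXT)")
con.executemany("INSERT INTO src VALUES (?)",
                [("0711234567",), ("0712222222",), ("0771234567",)])

# Function on the column: selects the right rows, but hides the column
# from a plain index.
substr_q = ("SELECT event_source FROM src "
            "WHERE SUBSTR(event_source, 1, 3) = '071'")
# Leading-prefix LIKE: same rows, and index-friendly in most optimizers.
like_q = "SELECT event_source FROM src WHERE event_source LIKE '071%'"

rows_substr = sorted(con.execute(substr_q).fetchall())
rows_like = sorted(con.execute(like_q).fetchall())
print(rows_substr, rows_like)
```

Given the existing CUSTEVENTSOURCE indexes lead on EVENT_SOURCE, the LIKE form gives the optimizer a chance to range-scan rather than inspect every row.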

  • Please tune the query

    Hi folks, please tune/rewrite my query:
    SELECT
    bu_code,bu_type,cust_no ,cur_code,sales_date,receipt_no,till_no,card_no,invoice_total,amount_of_goods,(invoice_total - amount_of_goods) AS amount_of_non_goods,
    pay_in_advance AS amounts_of_advance_pay,amount_of_discounts, Error_flag
    FROM
    (select
                distinct  'STO' AS BU_TYPE,
    workinv.tot_cust_no AS  cust_no,
    workinv.comp_code as comp_code,
    workinv.cash_no as till_no,
    workinv.receipt_no  as receipt_no,
    workinv.sales_date as sales_date,
    workinv.cur_code as cur_code,
    invhead.acct_usr_no as card_no,
    invhead.inv_no as inv_no,
    invhead.sto_no as bu_code, (SELECT MAX (DECODE (e.sum_code, 'TOTAL', e.amount_incl))
                                              FROM invoice_sums_t e
                                              WHERE e.inv_no = invsums.inv_no
                                              AND e.comp_code = invsums.comp_code) AS invoice_total,
                                             (SELECT MAX (DECODE (e.sum_code, 'PIA', e.amount_incl))
                                               FROM invoice_sums_t e
                                               WHERE e.inv_no = invsums.inv_no
                                              AND e.comp_code = invsums.comp_code) AS pay_in_advance,
                                             (SELECT SUM(e.amount_incl)
                                              FROM invoice_sums_t e
                                              WHERE E.SUM_CODE LIKE 'GOODS0%'
                                              AND e.inv_no = invsums.inv_no
                                              AND e.comp_code = invsums.comp_code) AS amount_of_goods,
                                             (SELECT SUM(e.amount_incl)
                                              FROM invoice_sums_t e
                                             WHERE E.SUM_CODE LIKE 'DISCOUNT0%'
                                              AND e.inv_no = invsums.inv_no
                                              AND e.comp_code = invsums.comp_code) AS amount_of_discounts ,
                                      CASE workinv.error_flag WHEN 'H' THEN 'Y' ELSE 'N' END
                                      AS Error_flag,
                                      WORKINV.ERROR_FLAG AS invoice_on_hold
    FROM  work_invoice_info_t workinv,
              invoice_header_t invhead,
              invoice_sums_t invsums,
              i_invoice_info_t_log invlog,
              o_pam_document_header_log_t opdhlt
    WHERE  invhead.comp_code= workinv.comp_code
    AND TRIM(workinv.Tot_cust_no) =TRIM(invlog.tot_cust_no)
    AND TRIM (workinv.sto_no) = invhead.sto_no
    AND TRIM (workinv.sales_date) =TO_CHAR (invhead.sales_date, 'YYMMDD')
    AND TRIM (workinv.cash_no) =TO_NUMBER (TRIM (invhead.cash_no))
    AND TRIM (workinv.receipt_no) =TO_NUMBER (TRIM (invhead.receipt_no))
    AND invhead.comp_code = invsums.comp_code
    AND invhead.inv_no = invsums.inv_no
    AND TRIM(workinv.sto_no) = invlog.sto_no
    AND TRIM(workinv.receipt_no) = invlog.receipt_no
    AND TRIM(workinv.cash_no) = invlog.cash_no
    AND TRIM(workinv.sales_date) = invlog.sales_date)

    Dear folks, I am debugging the code step by step.
    I have taken the inline query, selecting 1 as the column, joining the same tables used in the query above:
    select
       1
    FROM  work_invoice_info_t workinv,
              invoice_header_t invhead,
              invoice_sums_t invsums,
              i_invoice_info_t_log invlog,
              o_pam_document_header_log_t opdhlt
    WHERE  invhead.comp_code= workinv.comp_code
    AND TRIM(workinv.Tot_cust_no) = TRIM(invlog.tot_cust_no) -- if I run only this much, the output comes in 2 sec
    AND TRIM (workinv.sto_no) = invhead.sto_no -- adding these two conditions makes it take a lot of time (half an hour)
    AND TRIM (workinv.sales_date) = TO_CHAR (invhead.sales_date, 'YYMMDD')

    Hence, should I create indexes on both the 'sto_no' and 'sales_date' columns?
    please shed some light on this
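On the indexing question: because the joins wrap the columns in TRIM and TO_CHAR, a plain index on sto_no or sales_date generally cannot be used for those predicates. The usual options are to clean the data so the functions are unnecessary, or to index the expression itself (a function-based index in Oracle). A small illustration of the expression-index idea in SQLite (illustrative table and data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE workinv (sto_no TEXT)")
con.executemany("INSERT INTO workinv VALUES (?)",
                [("S001  ",), ("S002",), ("  S003",)])

# A plain index on sto_no cannot serve TRIM(sto_no) = ? predicates;
# an index on the expression itself can (SQLite 3.9+; the Oracle
# counterpart is a function-based index).
con.execute("CREATE INDEX idx_workinv_trim ON workinv (TRIM(sto_no))")

rows = con.execute(
    "SELECT sto_no FROM workinv WHERE TRIM(sto_no) = 'S001'").fetchall()
print(rows)

# Inspect the plan to see whether the expression index was chosen.
for step in con.execute("EXPLAIN QUERY PLAN "
                        "SELECT sto_no FROM workinv "
                        "WHERE TRIM(sto_no) = 'S001'"):
    print(step[-1])
```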

    newbie wrote:
    ...so how can I tune? Please share ideas.

    Tuning is a vast area. Many people spend their entire working lives tuning other people's code. Those people make fine livings from their work. They couldn't do that if it were merely a matter of squinting at some shonky piece of SQL and saying, "Ah, that's the badger!" No, tuning requires a great deal of context and additional information: explain plans, statistics, metadata, right down to which version of the database you're using.
    Now, you have already been provided with links to helpful threads: these explain how you can proceed in collecting this information and investigating your problem. The sooner you start reading those links, the sooner you can start diagnosing the poor performance.
    If you still can't crack it, by all means post here again. But don't bother until you have gathered all the information you need to post so that we can understand your situation.
    Cheers, APC

  • Please solve the query

    Hi,
    I have a query which requires me to find the language that is known by only one programmer
    PNAME PROF1 PROF2
    anand pascal basic
    altaf clipper cobol
    juliana cobol dbase
    kamala c dbase
    mary c++ oracle
    nelson cobol dbase
    partick pascal clipper
    qadir assembly c
    ramesh pascal dbase
    resecca basic cobol
    remitha c assembly
    revathi pascal basic
    vijaya foxpro c
    The answer to this query is
    C++
    foxpro
    oracle.

    Hi APC,
    SQL> SELECT prof, p_cnt FROM(
      2  SELECT prof1 AS prof, count(pname) AS p_cnt
      3  FROM user_lang
      4  GROUP BY prof1
      5  UNION
      6  SELECT prof2 , count(pname)
      7  FROM user_lang
      8  GROUP BY prof2 )
      9  WHERE p_cnt = 1;
    PROF                      P_CNT
    assembly                      1
    basic                         1
    c++                           1
    clipper                       1
    foxpro                        1
    oracle                        1
    6 rows selected.

    You have to count AFTER the union... ;-)
    Regards,
    Gerd
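    Following Gerd's point, a version that counts after the union could look like the sketch below (assuming the same USER_LANG table as above; UNION ALL is used so a language appearing in both columns is counted twice rather than collapsed to one row). For the sample data it should return only c++, foxpro and oracle:

    ```sql
    -- Combine both profession columns first, then count occurrences
    SELECT prof
    FROM ( SELECT prof1 AS prof FROM user_lang
           UNION ALL            -- keep duplicates so the totals are correct
           SELECT prof2 FROM user_lang )
    GROUP BY prof
    HAVING COUNT(*) = 1;       -- languages known by exactly one programmer
    ```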

  • Please solve the query in a single query

    1. display names of scott's colleagues who earn more than scott
    using the emp table of scott schema

    As it's homework, we could always post you a really poor example of code like this:
    SQL> ed
    Wrote file afiedt.buf
      1  select *
      2  from emp
      3  where (1,empno) in (
      4                    select case when sal > (select sal from emp where ename = 'SCOTT') then 1 else 0 end as flag, empno
      5                    from emp
      6*                  )
    SQL> /
         EMPNO ENAME      JOB              MGR HIREDATE                   SAL       COMM     DEPTNO
      7839 KING       PRESIDENT            17/11/1981 00:00:00       5000                    10

    But it would be better if you gave us an example of what you have done yourself, before we show you where you have gone wrong and how it can be improved.
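    For comparison, the straightforward (non-obfuscated) form of the same query is just a scalar subquery; a sketch against the standard SCOTT.EMP demo table:

    ```sql
    -- Names of colleagues earning more than SCOTT
    SELECT ename
    FROM   emp
    WHERE  sal > (SELECT sal FROM emp WHERE ename = 'SCOTT');
    ```

    With the default demo data this matches the output shown above (only KING earns more than SCOTT).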

  • Feature requestion: please optimize the Synchronize Folder function.

    With a lot of subfolders, Synchronize Folders is dreadfully slow. I use an eye-fi card to bring photos from my camera to my Mac Pro, and then Synchronize to pull the new ones in to Lightroom. I have my photos organized in folders by day within folders for each year, and as the year goes on, this process takes longer and longer. It would be nice if this process could be sped up (for example, with an option to just search for new folders instead of a deep examination of all photos in all subfolders).

    - ability to choose deletion of an email on handset only
    - desktop software working with all older BB's allowing drag and drop type of transferring data, contacts etc. (BB link doesn't recognize my old Storm) 
    - auto power on/off
    - contacts syncing with yahoo & Outlook (almost two weeks trying to work around it and no luck)

  • Please Solve The SQL Query

    Please solve the query below:
              T1
    |     A1     |     A2     |
    |     TRUE     |     FALSE     |
    |     FALSE     |     TRUE     |
    |     TRUE     |     TRUE     |
    |     FALSE     |     TRUE     |
    Table Name is: T1 and it is having 2 columns A1 and A2.
    Question is:
    Using a simple SQL query try to get the total number of "TRUE" in the Table T1. Don't use any PL/SQL command,
    just use simple SQL command.
    Please write the full query.
    Manojit.

    select Nvl(a.cnt_a1,0) + Nvl(b.cnt_a2,0)
    from (select count(1) CNT_A1 from t1 where a1 = 'TRUE') A,
         (select count(1) CNT_A2 from t1 where a2 = 'TRUE') B
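    An alternative that needs only one pass over the table (a sketch using the same T1 table and column names) sums the matches from both columns with CASE expressions:

    ```sql
    -- Count TRUE values across both columns in a single scan of T1
    SELECT SUM(CASE WHEN a1 = 'TRUE' THEN 1 ELSE 0 END)
         + SUM(CASE WHEN a2 = 'TRUE' THEN 1 ELSE 0 END) AS true_count
    FROM t1;
    ```

    For the four sample rows this gives 2 + 3 = 5.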

  • Hi sir, can anybody please answer the query I am posting (please)

    hi
    Can anybody please answer the query which I am posting? I could not work out what exactly happens; I thought about it a lot, but still could not get the answer.
    Q) When we use FIX on a dense or a sparse dimension, on which one is performance better? How can you justify that?
    (In the PDF it is written this way:)
    When you use the FIX command only on a dense dimension, Analytic Services retrieves the entire block that contains the required value or values for the member or members that you specify. Thus, I/O is not affected, and the calculation performance time is improved.
    When you use the FIX command on a sparse dimension, Analytic Services retrieves the block for the specified sparse dimension member or members. Thus, I/O may be greatly reduced.
    I cannot justify (visualize) the answer above. Can anybody walk me through what exactly happens, from disk to memory, in terms of performance? Please elaborate on the exact process that takes place.

    Hi,
    In Essbase (aka Analytic Services) block storage option (BSO) databases, the data block is the basic unit of I/O. A block contains cells that represent the intersections of stored members from dense dimensions. Unique data blocks are created for the existence of sparse dimension combinations (based on data loads or calculations).
    In a calc script, a FIX on a dense member will require the calculator to perform I/O on ALL the existing data blocks in the database. This may be logically necessary in some cases but it is not efficient in terms of minimizing I/O. Worse yet, a series of dense FIX statements (not nested but one after the other) will cause multiple, full passes on all data blocks. Conversely, a FIX on a sparse member will only require I/O on the data blocks related to that sparse member via the index. This is where you get I/O efficiency.
    To illustrate this in a simplistic example, let's say you have 1 million data blocks. One of the dense dimensions is Period with twelve months rolling up to quarters and total year. One of the sparse dimensions is Scenario with ten distinct scenarios (Actual, Budget, Forecast, etc.) and a Label Only setting at the top of the dimension (i.e., on Scenario at Gen 1). Consider the I/O differences on each FIX example to follow...
    FIX(Jan)
    ... any calc action here must perform I/O on all 1 million blocks
    ENDFIX
    FIX(Feb)
    ... any calc action here must perform I/O on all 1 million blocks AGAIN if written in sequence like this after the Jan FIX above
    ENDFIX
    However...
    FIX(Budget)
    ... any calc action here must perform I/O on only 1/10th of the database or 100,000 of the blocks
    ENDFIX
    Now these examples oversimplify some things to make the basic point about which you have asked a question. I hope this helps. Please post any follow-up questions.
    Good luck, Darrell Barr
