BW performance - query frontend

Hello,
I have a problem. In ST03N my query shows:
40% OLAP time
10% DB time
50% frontend time
What could the problem be?
If I create an aggregate, that won't solve the problem, right?
What is the flow?
-> frontend input -> OLAP -> DB -> OLAP -> frontend output?
Regards
Dank

Hi Dank
Why the OLAP time is high you already know from the previous replies.
I am concerned about the frontend time now.
Is this the case for all queries?
Do you see a network problem?
While the query is running, is it stuck in the gateway? (Please involve your Basis team to check this.)
Are you transferring a lot of records?
Please also open the Query Designer and let me know the setting of "Access type of Result Rows" for all the characteristics used in the query. You will find this in the "Advanced" tab for a characteristic.
An aggregate should not help here, as the DB time is only 10%, and SAP recommends an aggregate only when the DB time is more than 30%.
Can you please also run the query from RSRT in debug mode (with the Display Statistics option) and give the values of DBTRANS and DBSEL? That will tell you whether you need an aggregate or not.
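The DBSEL/DBTRANS check can be sketched as follows. The 10:1 threshold is a commonly quoted rule of thumb rather than an exact SAP figure, and the helper function name is hypothetical:

```python
def aggregate_recommended(dbsel: int, dbtrans: int, ratio_threshold: float = 10.0) -> bool:
    """Heuristic: an aggregate tends to pay off when the database selects
    far more records (DBSEL) than it transfers to the OLAP processor
    (DBTRANS), i.e. when the OLAP layer does heavy summarization that an
    aggregate could precompute."""
    if dbtrans == 0:
        return False
    return (dbsel / dbtrans) >= ratio_threshold

# 1,000,000 records selected but only 2,000 transferred -> ratio 500,
# a strong aggregate candidate under this heuristic.
print(aggregate_recommended(1_000_000, 2_000))
```

If DBSEL and DBTRANS are close to each other, the query already reads roughly what it displays, and an aggregate would add maintenance cost for little gain.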
BW : Aggregates
Regards
Anindya

Similar Messages

  • Query Performance - Time, Read Texts

    Hi all,
    I have a performance issue regarding a BI 7 web template. Most of the time is spent reading texts. Is there a way to speed this up? Thanks to the BIA, everything else is nicely fast.
    21510 ms: OLAP Frontend Events
    55 ms: Not assigned  (0)
    156 ms: Process request  (12600)
    44 ms, 947415: Display content  (12605)
    1 ms: RFC BICS_PROV_GET_RESULT_SET (10000)
    1 ms: OLAP other time xxx (3999)
    11 ms: Read cache xxx (2505)
    25 ms: Write cache xxx (2510)
    2 ms: Cache generation xxx (2500)
    142 ms: Data manager xxx (9000)
    10 ms: OLAP: data selection xxx (3110)
    3 ms, 605: OLAP: read data xxx (3100)
    -> 20601 ms: OLAP: read texts xxx (3900) <-
    179 ms, 1210: OLAP: data transfer xxx (3200)
    12 ms: ABAP DP result set  (13054)
    218 ms: ABAP DP output  (13055)
    18 ms, 1210: Get provider result set  (13040)
    31 ms, 1180: Get result set  (13004)
    1 ms: RFC RSBOLAP_BICS_STATISTIC_INFO (10000)
    Cheers, Miroslav.
    Edited by: Miroslav Simunic on May 6, 2009 3:58 PM

    Hello,
    Try to restrict the query with more selections so that it selects a smaller number of records.
    Navigation attributes and InfoObjects will not have much impact.
    Right now the query is selecting a lot of records, so try to reduce that by adding more selections to the query.
    Thanks
    Ajeet

  • Difference between DB time, OLAP time, and frontend time when executing a query

    hi all,
    can anyone explain the difference between DB time, OLAP time, and frontend time when executing a query?

    Each BEx query generates an SQL statement, for which the database builds a query execution plan.
    So:
    DB time is the time taken to read the records from the database via the SQL statement.
    OLAP time is the time taken to process the calculations/formulas involved.
    Frontend time is the time taken to present the output. For example, when displaying the data in the BEx Analyzer, if your query involves a hierarchy, the presentation time will be higher. It also includes the time the user takes to enter the query selection input.
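    Putting the three components together, here is a quick sketch (a hypothetical helper with numbers mirroring the 10/40/50 split from the opening post) of how the shares and the tuning target fall out:

    ```python
    def time_shares(db_ms: float, olap_ms: float, frontend_ms: float) -> dict:
        """Return each component's share of total query runtime, plus the
        component a tuning effort should target first."""
        total = db_ms + olap_ms + frontend_ms
        shares = {
            "DB": db_ms / total,
            "OLAP": olap_ms / total,
            "Frontend": frontend_ms / total,
        }
        shares["bottleneck"] = max(("DB", "OLAP", "Frontend"), key=lambda k: shares[k])
        return shares

    # Mirrors the 10% DB / 40% OLAP / 50% frontend split from the opening post:
    print(time_shares(db_ms=1_000, olap_ms=4_000, frontend_ms=5_000))
    ```

    With a split like this, database-side measures such as aggregates attack only the smallest slice; the frontend share deserves attention first.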

  • Performance Based on InfoCube Creation or BEx Query Creation

    Hi,
    I just wanted to know whether performance is affected by whether the Universe is created on top of a BEx query or on top of an InfoCube.
    Which of these is the best practice to follow?
    Regards
    Gaurav

    This is the best practice to follow:
    build your Universe on top of a BEx query, not directly on the InfoCube;
    this keeps the row-level security applied.
    Performance will depend on how you design your query and on the amount of data to be retrieved from the InfoCube.
    Good luck.

  • Frontend Query display in Analyzer taking more time

    Hi,
    We have a severe performance issue with one of our queries.
    We use several hierarchy nodes in the query, we have a lot of customer exits, and the query is built on a VirtualProvider. None of these performance factors can be removed, as the user insists on having them. I tried all the suggestions posted on SDN before posting this message.
    The query executes fine in RSRT, taking approximately 2 minutes. However, when I execute the same query in the Analyzer it takes more than 20 minutes, and the workbook built on the same query is even worse: it takes 1 hour to display the result.
    Kindly suggest how to overcome this issue.
    Thanks in Advance
    Anil

    Hi Anil
    Please see the link https://www.sdn.sap.com/irj/scn/advancedsearch?query=query+performance&cat=sdn_all . It contains various documents on query performance as well as SAP Notes. Hope this helps.
    Also try to restrict your query with variables to improve performance.
    Cheers

  • How to Improve the Performance of This Query

    Hi,
    Please help me improve this query's performance. The objective of the query is to find the count of individuals who ordered the product in the last two years and to build a matrix by time period.
    The challenge is that both tables have more than 600 million records, so the query takes too much time to execute.
    SELECT count(unique b.individual_id),
    sum((CASE WHEN NVL(ORDER_DT,TO_DATE('05-MAY-1955'))>= SYSDATE - 45 THEN 1 ELSE 0 END )) AS one_month ,
    sum((CASE WHEN NVL(ORDER_DT,TO_DATE('05-MAY-1955'))>= SYSDATE - 105 and NVL(ORDER_DT,TO_DATE('05-MAY-1955')) <= SYSDATE - 45 THEN 1 ELSE 0 END )) AS Three_month,
    sum((CASE WHEN NVL(ORDER_DT,TO_DATE('05-MAY-1955')) >= SYSDATE - 195 and NVL(ORDER_DT,TO_DATE('05-MAY-1955')) <= SYSDATE - 105 THEN 1 ELSE 0 END )) AS six_month,
    sum((CASE WHEN NVL(ORDER_DT,TO_DATE('05-MAY-1955')) >= SYSDATE - 380 and NVL(ORDER_DT,TO_DATE('05-MAY-1955')) <= SYSDATE - 195 THEN 1 ELSE 0 END )) AS one_year,
    sum((CASE WHEN NVL(ORDER_DT,TO_DATE('05-MAY-1955')) >= SYSDATE - 745 and NVL(ORDER_DT,TO_DATE('05-MAY-1955'))<= SYSDATE - 380 THEN 1 ELSE 0 END )) AS two_year
    from ORDER b, address a
    where b.individual_id = a.individual_id
    and a.COUNTRY_CD ='US'
    group by a.COUNTRY_CD ;
    Thanks
    Neeraj
    Edited by: user4522368 on Aug 17, 2010 12:10 AM

    user4522368 wrote:
    Hi,
    Please help me improve this query's performance. The objective of the query is to find the count of individuals who ordered the product in the last two years and to build a matrix by time period.
    The challenge is that both tables have more than 600 million records, so the query takes too much time to execute.
    Dombrooks has provided you with an excellent response.
    In addition, you should mention how much time the query currently takes and how much you expect it to take, what your database version is, etc.
    One of the most important things is to post your SQL and EXPLAIN PLAN output in readable format. You can do this by wrapping it within {code} tags.
    Now, based on the limited details you have provided, here are my questions/observations:
    a) You claim that both tables have more than 600 million rows, but your plan shows that the optimizer expects to find only 46 million rows in the ORDER table. You may want to confirm that the statistics on both tables are correct.
    b) Your plan appears to suggest that the UNIQUE is not affecting the query results. Based on your knowledge of your data, do you need COUNT(UNIQUE individual_id), or can it be just COUNT(individual_id)?
    c) Finally, if you are interested in only the last two years of data, you should probably have a WHERE predicate on your ORDER table that filters the data on ORDER_DT. Something like the following:
    SELECT count(unique b.individual_id),
    sum((CASE WHEN NVL(ORDER_DT,TO_DATE('05-MAY-1955'))>= SYSDATE - 45 THEN 1 ELSE 0 END )) AS one_month ,
    sum((CASE WHEN NVL(ORDER_DT,TO_DATE('05-MAY-1955'))>= SYSDATE - 105 and NVL(ORDER_DT,TO_DATE('05-MAY-1955')) <= SYSDATE - 45 THEN 1 ELSE 0 END )) AS Three_month,
    sum((CASE WHEN NVL(ORDER_DT,TO_DATE('05-MAY-1955')) >= SYSDATE - 195 and NVL(ORDER_DT,TO_DATE('05-MAY-1955')) <= SYSDATE - 105 THEN 1 ELSE 0 END )) AS six_month,
    sum((CASE WHEN NVL(ORDER_DT,TO_DATE('05-MAY-1955')) >= SYSDATE - 380 and NVL(ORDER_DT,TO_DATE('05-MAY-1955')) <= SYSDATE - 195 THEN 1 ELSE 0 END )) AS one_year,
    sum((CASE WHEN NVL(ORDER_DT,TO_DATE('05-MAY-1955')) >= SYSDATE - 745 and NVL(ORDER_DT,TO_DATE('05-MAY-1955'))<= SYSDATE - 380 THEN 1 ELSE 0 END )) AS two_year
    from ORDER b, address a
    where b.individual_id = a.individual_id
    and a.COUNTRY_CD ='US'
    and b.ORDER_DT >= (SYSDATE - 745)
    group by a.COUNTRY_CD ;
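    As a sanity check on the bucket boundaries (45, 105, 195, 380, and 745 days), the CASE logic can be mirrored outside the database. This helper is a hypothetical sketch for testing the boundary arithmetic, not part of the original query:

    ```python
    # Day-range boundaries lifted from the CASE expressions: (lower, upper, label).
    BUCKETS = [
        (0, 45, "one_month"),
        (45, 105, "three_month"),
        (105, 195, "six_month"),
        (195, 380, "one_year"),
        (380, 745, "two_year"),
    ]

    def order_bucket(days_ago):
        """Return the first matching time bucket for an order placed `days_ago`
        days in the past, or None when it falls outside the two-year window.
        Note: as in the original SQL, the boundary days (45, 105, ...) satisfy
        two adjacent conditions; here the earlier bucket wins."""
        for lower, upper, label in BUCKETS:
            if lower <= days_ago <= upper:
                return label
        return None

    print(order_bucket(30))   # -> one_month
    print(order_bucket(400))  # -> two_year
    print(order_bucket(800))  # -> None (older than two years)
    ```

    The overlap at the exact boundary days also exists in the SQL: a row exactly 45 days old contributes to both one_month and three_month sums, which may or may not be intended.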

  • SQL query performance is bad; need suggestions on creating a proper index

    Hello Team,
    I am executing the query below on a table with 8 million rows. The query takes around 2.5 minutes. Could someone suggest index criteria that could improve the query's response time to milliseconds?
    SELECT c4_pvcx0.sy_objectid, c4_pvcx0.sy_objectid, c4_pvcx0.sy_objectid
    FROM   c4_pvcx c4_pvcx0
    WHERE  c4_pvcx0.c4_az_iosqos = 'C4ir00018}'
    AND    c4_pvcx0.sy_objectid != 'C4vd00F}iD'
    AND    c4_pvcx0.sy_pendoperation != 2
    AND    (c4_pvcx0.sy_changeorderid = 'SYxf0003Ga' AND c4_pvcx0.sy_version = 1
            OR c4_pvcx0.sy_version = 0
            AND NOT (c4_pvcx0.sy_objectid IN (SELECT c4_pvcx0.sy_objectid
                                              FROM   c4_pvcx c4_pvcx0
                                              WHERE  c4_pvcx0.sy_changeorderid = 'SYxf0003Ga'
                                              AND    c4_pvcx0.sy_version = 1)))
    ORDER BY c4_pvcx0.sy_objectid ASC
    Table definition is
      CREATE TABLE "C4_PVCX"
       (     "SY_OBJECTID" CHAR(10 BYTE) NOT NULL ENABLE,
         "SY_CREATEDDATE" VARCHAR2(24 BYTE) NOT NULL ENABLE,
         "SY_MODIFIEDDATE" VARCHAR2(24 BYTE) NOT NULL ENABLE,
         "SY_COMMITTEDDATE" VARCHAR2(24 BYTE) NOT NULL ENABLE,
         "SY_SEQUENCENUMBER" NUMBER(*,0),
         "SY_CHANGEORDERID" CHAR(10 BYTE) NOT NULL ENABLE,
         "SY_VERSION" NUMBER(*,0),
         "SY_PENDOPERATION" NUMBER(*,0),
         "SY_SNOOKERSEQNUM" NUMBER(*,0),
         "VPN" VARCHAR2(16 BYTE) NOT NULL ENABLE,
         "DOMAIN" VARCHAR2(16 BYTE) NOT NULL ENABLE,
         "SRAZ_BANDWIDTH" NUMBER(*,0),
         "SRZA_BANDWIDTH" NUMBER(*,0),
         "SRBACKUPROLE" VARCHAR2(12 BYTE) NOT NULL ENABLE,
         "SRA_CONSUME_BW" VARCHAR2(5 BYTE) NOT NULL ENABLE,
         "SRZ_CONSUME_BW" VARCHAR2(5 BYTE) NOT NULL ENABLE,
         "SRPRIORITY" NUMBER(*,0),
         "SRUNIPRIORITY" NUMBER(*,0),
         "SRA_PRIMTP" VARCHAR2(44 BYTE) NOT NULL ENABLE,
         "SRZ_PRIMTP" VARCHAR2(44 BYTE) NOT NULL ENABLE,
         "SRNAME" VARCHAR2(64 BYTE) NOT NULL ENABLE,
         "SRDESTSERV" VARCHAR2(32 BYTE) NOT NULL ENABLE,
         "SRCOST" NUMBER(*,0),
         "SREMSADMINSTATUS" VARCHAR2(16 BYTE) NOT NULL ENABLE,
         "SRA_CONSUME_CID" VARCHAR2(5 BYTE) NOT NULL ENABLE,
         "SRZ_CONSUME_CID" VARCHAR2(5 BYTE) NOT NULL ENABLE,
         "RATYPE" VARCHAR2(4 BYTE) NOT NULL ENABLE,
         "RAA_VCI" NUMBER(*,0),
         "RAZ_VCI" NUMBER(*,0),
         "RAA_VPI" NUMBER(*,0),
         "RAZ_VPI" NUMBER(*,0),
         "RA_FRTT" NUMBER(*,0),
         "RAQOS" VARCHAR2(8 BYTE) NOT NULL ENABLE,
         "RAA_FRDISC" VARCHAR2(5 BYTE) NOT NULL ENABLE,
         "RAZ_FRDISC" VARCHAR2(5 BYTE) NOT NULL ENABLE,
         "RAAZ_TDTYPE" VARCHAR2(8 BYTE) NOT NULL ENABLE,
         "RAAZ_SCR" NUMBER(*,0),
         "RAAZ_PCR" NUMBER(*,0),
         "RAAZ_MBS" NUMBER(*,0),
         "RAAZ_CDVT" NUMBER(*,0),
         "RAAZ_MCR" NUMBER(*,0),
         "RAAZ_CDV" NUMBER(*,0),
         "RAAZ_MAXCTD" NUMBER(*,0),
         "RAAZ_CLR" NUMBER(*,0),
         "RAZA_TDTYPE" VARCHAR2(8 BYTE) NOT NULL ENABLE,
         "RAZA_SCR" NUMBER(*,0),
         "RAZA_PCR" NUMBER(*,0),
         "RAZA_MBS" NUMBER(*,0),
         "RAZA_CDVT" NUMBER(*,0),
         "RAZA_MCR" NUMBER(*,0),
         "RAZA_CDV" NUMBER(*,0),
         "RAZA_MAXCTD" NUMBER(*,0),
         "RAZA_CLR" NUMBER(*,0),
         "RAAZ_ICR" NUMBER(*,0),
         "RAAZ_RIF" VARCHAR2(8 BYTE) NOT NULL ENABLE,
         "RAAZ_NRM" NUMBER(*,0),
         "RAAZ_RDF" VARCHAR2(8 BYTE) NOT NULL ENABLE,
         "RAAZ_ADTF" NUMBER(*,0),
         "RAAZ_TRM" VARCHAR2(8 BYTE) NOT NULL ENABLE,
         "RAAZ_TBE" NUMBER(*,0),
         "RAAZ_CDF" VARCHAR2(4 BYTE) NOT NULL ENABLE,
         "RAZA_ICR" NUMBER(*,0),
         "RAZA_RIF" VARCHAR2(8 BYTE) NOT NULL ENABLE,
         "RAZA_NRM" NUMBER(*,0),
         "RAZA_RDF" VARCHAR2(8 BYTE) NOT NULL ENABLE,
         "RAZA_ADTF" NUMBER(*,0),
         "RAZA_TRM" VARCHAR2(8 BYTE) NOT NULL ENABLE,
         "RAZA_TBE" NUMBER(*,0),
         "RAZA_CDF" VARCHAR2(4 BYTE) NOT NULL ENABLE,
         "C4PVC_ID" VARCHAR2(40 BYTE) NOT NULL ENABLE,
         "C4AZ_UPC" VARCHAR2(4 BYTE) NOT NULL ENABLE,
         "C4ZA_UPC" VARCHAR2(4 BYTE) NOT NULL ENABLE,
         "C4AZ_CAST" VARCHAR2(12 BYTE) NOT NULL ENABLE,
         "C4ZA_CAST" VARCHAR2(12 BYTE) NOT NULL ENABLE,
         "C4AZ_QOSINDEX" NUMBER(*,0),
         "C4ZA_QOSINDEX" NUMBER(*,0),
         "SRPROFILE" CHAR(10 BYTE) NOT NULL ENABLE,
         "SRNODE" CHAR(10 BYTE) NOT NULL ENABLE,
         "SRNETWORK" CHAR(10 BYTE) NOT NULL ENABLE,
         "SRA_TP" CHAR(10 BYTE) NOT NULL ENABLE,
         "SRZ_TP" CHAR(10 BYTE) NOT NULL ENABLE,
         "SRSOID" CHAR(10 BYTE) NOT NULL ENABLE,
         "C4_AZ_IOSQOS" CHAR(10 BYTE) NOT NULL ENABLE,
         "C4_ZA_IOSQOS" CHAR(10 BYTE) NOT NULL ENABLE
    Currently I have the indexes below on this table:
      CREATE UNIQUE INDEX  "C4_PVCX_IDX" ON "PRFT1"."C4_PVCX" ("SY_OBJECTID", "SY_VERSION")
      CREATE UNIQUE INDEX "C4_PVCX_CR" ON "PRFT1"."C4_PVCX" ("SY_CHANGEORDERID", "SY_OBJECTID", "SY_VERSION")
      CREATE INDEX "C4_PVCX_CD" ON "PRFT1"."C4_PVCX" ("SY_COMMITTEDDATE")
    Execution Plan
    Plan hash value: 1884930072
    | Id  | Operation           | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT    |            |   795 | 30210 | 73650   (5)| 00:14:44 |
    |   1 |  SORT ORDER BY      |            |   795 | 30210 | 73650   (5)| 00:14:44 |
    |*  2 |   FILTER            |            |       |       |            |          |
    |*  3 |    TABLE ACCESS FULL| C4_PVCX    | 15909 |   590K| 73646   (5)| 00:14:44 |
    |*  4 |    INDEX UNIQUE SCAN| C4_PVCX_CR |     1 |    24 |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - filter("C4_PVCX0"."SY_VERSION"=0 AND  NOT EXISTS (SELECT /*+ */ 0
                  FROM "PRFT1"."C4_PVCX" "C4_PVCX0" WHERE "C4_PVCX0"."SY_VERSION"=1 AND
                  "C4_PVCX0"."SY_OBJECTID"=:B1 AND "C4_PVCX0"."SY_CHANGEORDERID"='SYxf0003Ga')
                  OR "C4_PVCX0"."SY_VERSION"=1 AND "C4_PVCX0"."SY_CHANGEORDERID"='SYxf0003Ga')
       3 - filter("C4_PVCX0"."C4_AZ_IOSQOS"='C4ir00018}' AND
                  "C4_PVCX0"."SY_PENDOPERATION"<>2 AND
                  "C4_PVCX0"."SY_OBJECTID"<>'C4vd00F}iD')
       4 - access("C4_PVCX0"."SY_CHANGEORDERID"='SYxf0003Ga' AND
                  "C4_PVCX0"."SY_OBJECTID"=:B1 AND "C4_PVCX0"."SY_VERSION"=1)
    Statistics
              0  recursive calls
              0  db block gets
         296336  consistent gets
         294809  physical reads
              0  redo size
            466  bytes sent via SQL*Net to client
            480  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
              0  rows processed
    Thanks & Regards
    Satish
    Edited by: Satish Kumar Ballepu on May 16, 2009 2:54 AM

    Before creating an index, check the "OR" condition in your query.
    That query is equivalent to the code below:
    AND    ((c4_pvcx0.sy_changeorderid = 'SYxf0003Ga' AND c4_pvcx0.sy_version = 1)
            OR (c4_pvcx0.sy_version = 0  --> returns rows with "sy_version = 0" without the direct sy_changeorderid condition
                AND NOT (c4_pvcx0.sy_objectid IN (
                         SELECT c4_pvcx0.sy_objectid
                         FROM   c4_pvcx c4_pvcx0
                         WHERE  c4_pvcx0.sy_changeorderid = 'SYxf0003Ga'
                         AND    c4_pvcx0.sy_version = 1))))
    Is that right?
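    The grouping above follows from operator precedence: in SQL, AND binds more tightly than OR. A quick sketch in Python, whose `and`/`or` have the same relative precedence, shows that `a AND b OR c` means `(a AND b) OR c`, not `a AND (b OR c)`:

    ```python
    # With a = False the two groupings disagree, which exposes the precedence.
    a, b, c = False, True, True

    implicit  = a and b or c    # parsed as (a and b) or c
    and_first = (a and b) or c
    or_first  = a and (b or c)

    print(implicit, and_first, or_first)  # True True False
    ```

    So in the original WHERE clause, the sy_version = 0 branch is evaluated independently of the sy_changeorderid = 'SYxf0003Ga' filter, exactly as the rewrite above makes explicit.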

  • Query Performance

    Hi friends,
    I have a report running in production. The performance of the query decreases dramatically when I apply a filter to one of my characteristics. Is there any way I can get a handle on the performance?
    thanks ,

    Hi,
    I can recommend using InfoCubes instead of DSOs for queries; the performance of DSO reports is very poor. With an InfoCube you can use aggregates, as described in the other threads. In general, for your reports, you can use precalculation via the Reporting Agent or the Broadcaster.
    Take a look what you can do:
    FAQ - The Future of SAP NetWeaver Business Intelligence in the Light of the NetWeaver BI&Business Objects Roadmap
    Additional:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/afbad390-0201-0010-daa4-9ef0168d41b6
    --> page 9
    and
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    Regards
    Andreas

  • Help analyzing a single query perfomance

    Hi,
    we're using MaxDB 7.7 in a large e-commerce project.
    Recently we have been having performance issues, and I'm trying to understand the problem.
    This is the query:
    select articolo2_.ID as col_0_0_, articolo2_.CODICE as col_1_0_, articolo2_.CODICEREPOSITORY as col_2_0_, articolo2_.DESCRIZIONE as col_3_0_, marchio5_.NOME as col_4_0_, articolo2_.MODELLO as col_5_0_, categoria6_.NOME as col_6_0_, unitamisur7_.DESCRIZIONE as col_7_0_, articoloco1_.ID as col_8_0_, articoloco1_.CODICE as col_9_0_, articoloco1_.STATOCOMMERCIALE as col_10_0_, articoloco1_.DATA_STATOCOMMERCIALE as col_11_0_, articolo2_.ALTRO1 as col_12_0_, articolo2_.ALTRO2 as col_13_0_, articolo2_.ALTRO3 as col_14_0_, articolo2_.ALTRO4 as col_15_0_, articolo2_.ALTRO5 as col_16_0_, articoloco1_.ATTIVO as col_17_0_, articoloco1_.LOTTOORDINE as col_18_0_, articoloco1_.MINIMOORDINE as col_19_0_, articoloco1_.SCORTAMINIMA as col_20_0_, MIN(prezzo0_.PREZZO) as col_21_0_, SUM(dispmagazz4_.DISPONIBILITA) as col_22_0_, SUM(dispmagazz4_.DISPFUTURA) as col_23_0_, iva8_.CODICE as col_24_0_, iva8_.PERC as col_25_0_, articolo2_.STIMAPESO as col_26_0_, MAX(prezzo0_.FLAG) as col_27_0_, articolo2_.COMPOSTO as col_28_0_, articoloco1_.ELEMENTO_ARTICOLOCOMPOSTO as col_29_0_, articoloco1_.DATA_CREAZIONE as col_30_0_, articoloco1_.DATA_ULTIMAMODIFICA as col_31_0_, MIN(prezzo0_.PREZZOPREC) as col_32_0_, MIN(prezzo0_.DATAPREZZOPREC) as col_33_0_, MIN(prezzo0_.PREZZOPRECNOFLAG) as col_34_0_, articolo2_.ALTEZ as col_35_0_, articolo2_.LARG as col_36_0_, articolo2_.PROF as col_37_0_, articolo2_.TAGLIA as col_38_0_, articolo2_.COLORE as col_39_0_, articolo2_.TIPOMISURE as col_40_0_, articolo2_.STIMACOLLI as col_41_0_, articolo2_.KEYWORDS as col_42_0_, MIN(prezzo0_.PREZZOIVATO) as col_43_0_, MIN(prezzo0_.PREZZOPRECIVATO) as col_44_0_, MIN(prezzo0_.PREZZOPRECNOFLAGIVATO) as col_45_0_, articolo2_.CODSTRUTTURAVARIANTE as col_46_0_, articolo2_.VARIANTE1 as col_47_0_, articolo2_.VARIANTE2 as col_48_0_, articolo2_.VARIANTE3 as col_49_0_, articolo2_.VARIANTE4 as col_50_0_, articolo2_.VARIANTE5 as col_51_0_, articolo2_.VARIANTE6 as col_52_0_, articolo2_.VARIANTE7 as col_53_0_, articolo2_.VARIANTE8 as 
col_54_0_, articolo2_.CARATTERISTICA1 as col_55_0_, articolo2_.CARATTERISTICA2 as col_56_0_, articolo2_.CARATTERISTICA3 as col_57_0_, articolo2_.CARATTERISTICA4 as col_58_0_, articolo2_.CARATTERISTICA5 as col_59_0_, MIN(articolifo3_.ID_PUNTOPARTENZA) as col_60_0_, iva10_.CODICE as col_61_0_, iva10_.PERC as col_62_0_, articolo2_.QTACONFEZIONE as col_63_0_, articolo2_.QTACARTONE as col_64_0_, articolo2_.QTABANCALE as col_65_0_
    from ECF3.PREZZO prezzo0_, ECF3.ARTICOLO_COMMERCIALE articoloco1_, ECF3.ARTICOLO articolo2_, ECF3.ARTICOLO_FORNITORE articolifo3_, ECF3.MARCHIO marchio5_, ECF3.CATEGORIA categoria6_, ECF3.UNITAMISURA unitamisur7_, ECF3.IVA iva8_, ECF3.IVA iva10_, ECF3.DISPMAGAZZINO dispmagazz4_, ECF3.MACROCATEGORIA macrocateg13_, ECF3.REPARTO reparto14_
    where prezzo0_.ID_ARTICOLOCOMM=articoloco1_.ID and articoloco1_.ID_ARTICOLO=articolo2_.ID and articolo2_.ID=articolifo3_.ID_ARTICOLO(+) and articolo2_.ID_MARCHIO=marchio5_.ID and articolo2_.ID_CATEGORIA=categoria6_.ID and articolo2_.ID_UM=unitamisur7_.ID and articolo2_.ID_IVA=iva8_.ID and articolo2_.ID_IVAINGROSSO=iva10_.ID and categoria6_.ID_MACROCATEGORIA=macrocateg13_.ID and macrocateg13_.ID_REPARTO=reparto14_.ID and articolifo3_.ABITUALE='S' and dispmagazz4_.ID_ARTICOLOCOMM=articoloco1_.ID and articoloco1_.ID_AZIENDA=1 and articoloco1_.ID_CANALE=9 and articoloco1_.ATTIVO='A' and prezzo0_.ID_LISTINO=47 and prezzo0_.PREZZO>0 and reparto14_.NOME='ELDOM' and (upper(articolo2_.DESCRIZIONE) like '%mp3%' or upper(articolo2_.MODELLO) like '%mp3%' or upper(articolo2_.KEYWORDS) like '%mp3%')
    group by articolo2_.ID , articolo2_.CODICE , articolo2_.CODICEREPOSITORY , articolo2_.DESCRIZIONE , marchio5_.NOME , articolo2_.MODELLO , categoria6_.NOME , unitamisur7_.DESCRIZIONE , articoloco1_.ID , articoloco1_.CODICE , articoloco1_.STATOCOMMERCIALE , articoloco1_.DATA_STATOCOMMERCIALE , articolo2_.ALTRO1 , articolo2_.ALTRO2 , articolo2_.ALTRO3 , articolo2_.ALTRO4 , articolo2_.ALTRO5 , articoloco1_.ATTIVO , articoloco1_.LOTTOORDINE , articoloco1_.MINIMOORDINE , articoloco1_.SCORTAMINIMA , iva8_.CODICE , iva8_.PERC , articolo2_.STIMAPESO , articolo2_.COMPOSTO , articoloco1_.ELEMENTO_ARTICOLOCOMPOSTO , articoloco1_.DATA_CREAZIONE , articoloco1_.DATA_ULTIMAMODIFICA , articolo2_.ALTEZ , articolo2_.LARG , articolo2_.PROF , articolo2_.TAGLIA , articolo2_.COLORE , articolo2_.TIPOMISURE , articolo2_.STIMACOLLI , articolo2_.KEYWORDS , articolo2_.CODSTRUTTURAVARIANTE , articolo2_.VARIANTE1 , articolo2_.VARIANTE2 , articolo2_.VARIANTE3 , articolo2_.VARIANTE4 , articolo2_.VARIANTE5 , articolo2_.VARIANTE6 , articolo2_.VARIANTE7 , articolo2_.VARIANTE8 , articolo2_.CARATTERISTICA1 , articolo2_.CARATTERISTICA2 , articolo2_.CARATTERISTICA3 , articolo2_.CARATTERISTICA4 , articolo2_.CARATTERISTICA5 , iva10_.CODICE , iva10_.PERC , articolo2_.QTACONFEZIONE , articolo2_.QTACARTONE , articolo2_.QTABANCALE
    order by MIN(prezzo0_.PREZZO) ASC
    and this is the EXPLAIN result:
    SCHEMANAME    TABLENAME           COLUMN_OR_INDEX                     STRATEGY                                  PAGECOUNT
                  UNITAMISUR7_                                            TABLE SCAN                                         1
                  PREZZO0_            PREZZO_LISTINO_idx                  JOIN VIA INDEXED COLUMN                        13655
                                      ID_LISTINO                               (USED INDEX COLUMN)                           
                  ARTICOLOCO1_        ID                                  JOIN VIA KEY COLUMN                             5401
                  REPARTO14_          IDX_NOME_MONDO                      JOIN VIA INDEXED COLUMN                            1
                                                                          TABLE HASHED                                       
                                      NOME                                     (USED INDEX COLUMN)                           
                  ARTICOLIFO3_        IDX_ARTICOLO_FORNITORE_ARTICOLO     JOIN VIA INDEXED COLUMN                         5478
                                      ID_ARTICOLO                              (USED INDEX COLUMN)                           
                  ARTICOLO2_          ID                                  JOIN VIA KEY COLUMN                             8098
                  MARCHIO5_           ID                                  JOIN VIA KEY COLUMN                                5
                                                                          TABLE HASHED                                       
                  CATEGORIA6_         ID                                  JOIN VIA KEY COLUMN                                8
                                                                          TABLE HASHED                                       
                  IVA8_               ID                                  JOIN VIA KEY COLUMN                                1
                                                                          TABLE HASHED                                       
                  IVA10_              ID                                  JOIN VIA KEY COLUMN                                1
                                                                          TABLE HASHED                                       
                  DISPMAGAZZ4_        DISPMAGAZZINO_IDARTICOLOCOMM_IDX    JOIN VIA INDEXED COLUMN                          801
                                      ID_ARTICOLOCOMM                          (USED INDEX COLUMN)                           
                  MACROCATEG13_       ID                                  JOIN VIA KEY COLUMN                                1
                                                                          TABLE HASHED                                                                               
    NO TEMPORARY RESULTS CREATED                  
    INTERNAL      TEMPORARY RESULT                                        TABLE SCAN                                         1
                  JDBC_CURSOR_54                                               RESULT IS COPIED   , COSTVALUE IS        289903
    The query takes an average of 25 seconds during working hours. It returns 611 rows. This is MaxDB 7.7 running on Linux, with 2 processors of 4 cores each and 32 GB of RAM. The cache size is 4 GB for this database and the data size is about 5 GB. The cache hit rate is 100%.
    What I'm not sure about is whether the query is already optimized and the problem is the workload on the server, or whether the query can be optimized further.
    The query is generated by Hibernate, so I really can't tweak the SQL.
    Thank you for any suggestions!

    Dear Lars,
    thanks for your help.
    I think the problem is in some of the joins, because removing them and using an added field in their place (denormalizing the information) makes the query much faster.
    First, all the joins are backed by indexes with integer keys.
    These are the tables involved (I removed the fields not related to the joins, for clarity); the query times are not averages, they are single-run times during working hours:
    CREATE TABLE ECF3.ARTICOLO (
         ID INTEGER NOT NULL,
         DESCRIZIONE VARCHAR() ASCII(512) NOT NULL,
         MODELLO VARCHAR() ASCII(100) NOT NULL,
         KEYWORDS VARCHAR() ASCII(512),
         ID_UM INTEGER,
         ID_IVA INTEGER,
         ID_IVAINGROSSO INTEGER,
         ID_CATEGORIA INTEGER,
         ID_MARCHIO INTEGER,
         PRIMARY KEY (ID)
    )
    The table contains 115,729 rows; 4,857 rows match the search condition on this table's fields, LIKE '%MP3%' (the query runs in 0.4 sec).
    CREATE TABLE ECF3.ARTICOLO_COMMERCIALE (
         ID INTEGER NOT NULL,
         ID_AZIENDA INTEGER,
         ID_CANALE INTEGER,
         ID_ARTICOLO INTEGER,
         ATTIVO CHAR() ASCII(1) NOT NULL,
         PRIMARY KEY (ID)
    )
    The table contains 413,916 rows; 45,086 rows match the search conditions on this table's fields, ID_AZIENDA=1, ID_CANALE=9, ATTIVO='A' (the query runs in 0.48 sec).
    CREATE TABLE ECF3.ARTICOLO_FORNITORE (
         ID INTEGER NOT NULL,
         ABITUALE CHAR() ASCII(1),
         ID_ARTICOLO INTEGER,
         PRIMARY KEY (ID)
    )
    The table contains 115,503 rows; all of them match the search condition on this table's field, ABITUALE='S' (the query runs in 0.7 sec).
    CREATE TABLE ECF3.PREZZO (
         ID INTEGER NOT NULL,
         PREZZO FIXED(14,3),
         ID_LISTINO INTEGER,
         ID_ARTICOLOCOMM INTEGER,
         PRIMARY KEY (ID)
    )
    The table contains 2,246,518 rows; 58,396 match the search condition on this table's fields, ID_LISTINO=47 AND PREZZO>0 (the query runs in 0.4 sec).
    CREATE TABLE ECF3.DISPMAGAZZINO (
         ID_MAGAZZINO INTEGER NOT NULL,
         ID_ARTICOLOCOMM INTEGER NOT NULL,
         PRIMARY KEY (ID_MAGAZZINO,ID_ARTICOLOCOMM)
    )
    The table contains 404,664 rows; there are no conditions on it in the query, only the join with ARTICOLO_COMMERCIALE.
    Then there are the smaller tables:
    CREATE TABLE ECF3.REPARTO (
         ID INTEGER NOT NULL,
         NOME VARCHAR() ASCII(100) NOT NULL,
         PRIMARY KEY (ID)
    )
    CREATE TABLE ECF3.MACROCATEGORIA (
         ID INTEGER NOT NULL,
         NOME VARCHAR() ASCII(100) NOT NULL,
         ID_REPARTO INTEGER DEFAULT           0 NOT NULL,
         PRIMARY KEY (ID)
    )
    CREATE TABLE ECF3.CATEGORIA (
         ID INTEGER NOT NULL,
         ID_MACROCATEGORIA INTEGER NOT NULL,
         NOME VARCHAR() ASCII(100) NOT NULL,
         PRIMARY KEY (ID)
    )
    This is a 'tree': REPARTO has only 5 rows, MACROCATEGORIA has 29 rows, and CATEGORIA has 1,120 rows. The search condition on REPARTO matches 765 CATEGORIA rows.
    CREATE TABLE ECF3.MARCHIO (
         ID INTEGER NOT NULL,
         NOME VARCHAR() ASCII(100) NOT NULL,
         PRIMARY KEY (ID)
    )
    The table contains 1,189 rows.
    CREATE TABLE ECF3.IVA (
         ID INTEGER NOT NULL,
         PRIMARY KEY (ID)
    )
    This table contains 15 rows.
    CREATE TABLE ECF3.UNITAMISURA (
         ID INTEGER NOT NULL,
         PRIMARY KEY (ID)
    )
    This table contains 6 rows.
    The CATEGORIA, MARCHIO, MACROCATEGORIA, and REPARTO tables were added to the structure later. Before that, we had CATEGORIA, MARCHIO, and REPARTO fields in the ARTICOLO table.
    Using those fields instead of the joins, the query is fast (2 sec.). Adding these tables has slowed it down to 25 sec. (15 sec. in non-working hours).
    This is what I don't understand: they're small tables. I understand that the joins add complexity, but the time difference between the two versions of the model is very high!
    Thanks for the help, and sorry for the long post.

  • Performance Tuning in VisualVM, Query Console - Search using OQL?! What is an overallocated String?

    Hi,
    I am currently profiling a Java application with Java VisualVM (JDK 6.0.25).
    When you use that tool to create a heap dump, you can inspect the data in memory. That is nice, but of course you can hardly click through 250,000 items to see which class the chars/strings belong to.
    Luckily there is an SQL-like query editor... however, the syntax is a bit tricky.
    What I am searching for is all chars/strings that belong to a class called "ErrorPrinter".
    How would you define such a query?
    Further: there is a sample query to find "overallocated Strings"... well, either my English is not good enough or I don't know... but can anybody explain what an overallocated String is and how to resolve it?
    I have read http://visualvm.java.net/oqlhelp.html but it does not contain that much...
    Thank you very much,
    Sebastian Wagner

    Thanks for your answer,
    I understand now the meaning of overallocated in that sense,
    the use-case for the search that I would like to perform in the Query Console is:
     I made a heap dump using VisualVM, and now I am analyzing it.
     char[] is the biggest memory consumer in that heap dump. Each char[] has referrers; if you click through those referrers, at some point you reach a class that is part of my code.
     Now I would like to find out which classes hold the most char[] references in the heap dump.
    For example I have a class ErrorEvaluatorAsciiText that has an attribute "String" now I want to search the heap if there is any (and how many / size) of that String stored in the HEAP-Dump.
    Select count(s) from java.util.char where s.reference=ErrorEvaluatorAsciiText
    I can perform a query like:
     select o from char[] o where o.reference=String
    but I can't do any query like:
     select o from java.lang.String o where o.reference=StromElement
    Results in:
    Please, check the oql queryjavax.script.ScriptException: sun.org.mozilla.javascript.internal.EcmaError: ReferenceError: "StromElement" is not defined. (#1)
     => But of course the interesting part for me is to see how many String instances in the current heap have references to this class.
    or for example:
     select o from int[] o where o.reference=SimpleVisitable
    or
     select o from int[] o where o.reference=priorityWasSetForVisitNr
     But from the VisualVM "Instances" view I can see that there is at least one int[] that has the type SimpleVisitable or the field priorityWasSetForVisitNr as a referrer.
     Even better, of course, would be a query that gives me the top 10 referrers of char[] in my code...
    Hope this explains my problem ...
    Thanks!
    Sebastian
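     One way to approximate this in the OQL console (a sketch only, built on the referrers(), classof() and sizeof() functions documented on the OQL help page; exact behavior may vary by VisualVM version, and ErrorEvaluatorAsciiText is the class name from the question):

```
// Strings referenced from at least one instance of a given class;
// referrers() walks one step up the reference chain (char[] -> String -> owner):
select filter(heap.objects('java.lang.String'), function (s) {
    return count(filter(referrers(s), function (r) {
        return classof(r).name == 'ErrorEvaluatorAsciiText';
    })) > 0;
})

// total shallow size of all char[] instances, as a starting point:
select sum(map(heap.objects('char[]'), 'sizeof(it)'))
```

     This runs only inside the VisualVM query console, not as standalone JavaScript.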

  • How to decrease the OLAP/DB and Frontend time of query

    Hi,
     While I am executing the query, it takes a long time to display the output. The reason is high OLAP/DB and front-end time.
     Can anyone help me decrease the OLAP/DB and front-end time?
    Regards,

     Probably the existing aggregates are not useful... you can edit them based on the factors below.
     You can also check ST03N -> BI Workload -> select Weekly/Monthly -> choose the 'All data' tab -> double-click the InfoCube -> choose the query -> if the DB time % is > 30% and the aggregation ratio is > 10, then create aggregates.
     Now go to RSRT, choose the query -> 'Execute + Debug' -> check 'Display Aggregates Found' and 'Display Statistics' -> compare records selected vs. records transferred: the ratio should be greater than 10. That means the query benefits from fetching records from aggregates, which will also improve query performance!
     Also check RSRT -> Execute in Debug with 'Display Aggregates Found': if aggregates are listed, the query is using/fetching data from them.
    Edited by: Srinivas on Sep 14, 2010 3:40 PM
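     The rule of thumb above can be written down as a quick check. A minimal sketch (the threshold of 10 and the DBSEL/DBTRANS names follow the reply; read the actual values from the RSRT statistics screen):

```python
def aggregate_recommended(db_sel: int, db_trans: int, threshold: float = 10.0) -> bool:
    """Return True when an aggregate is likely worthwhile: the query selects
    far more records (DBSEL) than it transfers to the OLAP layer (DBTRANS)."""
    if db_trans == 0:
        return False  # nothing transferred; the ratio is undefined
    return db_sel / db_trans >= threshold

# 1,000,000 records read but only 5,000 transferred -> ratio 200, aggregate helps
print(aggregate_recommended(1_000_000, 5_000))  # True
print(aggregate_recommended(10_000, 9_000))     # False: ratio is only ~1.1
```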

  • Performance on query

    Hi,
     I have a table with 10,000,000 records.
    Col1, Col2, Col3, Col4, Col5, Col6
    PK - Col1, Col2, Col5
    I want to perform this query:
    select * from <table> where
    col1 < 1250000
    order by col1, col5 desc
     Which indexes will speed up this query?
    Thanks

     Also try to list the column names in the SELECT statement instead of *; it will be faster. But do this after creating the index suggested by the explain plan.
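     For the WHERE col1 < ... ORDER BY col1, col5 DESC shape, a composite index on (col1, col5 DESC) lets the database both seek the range and return rows already in the requested order, skipping the sort. A small illustration using SQLite (table and index names are made up; the principle carries over to most databases):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 INTEGER, col2 TEXT, col5 INTEGER)")
# Composite index matching both the range predicate and the ORDER BY direction
conn.execute("CREATE INDEX t_col1_col5 ON t (col1, col5 DESC)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM t WHERE col1 < 1250000 ORDER BY col1, col5 DESC"
).fetchall()
# Expect the plan to mention t_col1_col5 rather than a full scan plus sort
print(plan)
```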

  • Query performance tuning required for START WITH CONNECT BY PRIOR

    Hi,
     I have the small query below; the CDDS table has 40+ million records.
    SELECT -LEVEL, COMPONENT_ID, COMPONENT_TYPE, COMPONENT_STATUS,
    PARENT_COMPONENT_ID, PARENT_COMPONENT_TYPE, other_info
    BULK COLLECT INTO ltbl_cdds_rec
    FROM CDDS
    START WITH
    PARENT_COMPONENT_ID IN
    ( SELECT dns_name
    FROM RAS_CARD
     WHERE ras_name = <<INPUT_PARAMETER>> )
     AND parent_component_type = 'CT_NRP'
    CONNECT BY PARENT_COMPONENT_ID = PRIOR COMPONENT_ID;
     This query takes 3 hours to run.
     Please suggest how to tune the query for better performance.

    Create statement for CDDS:
     CREATE TABLE CDDS (
    COMPONENT_TYPE VARCHAR2(30 BYTE),
    COMPONENT_ID VARCHAR2(255 BYTE),
    PARENT_COMPONENT_TYPE VARCHAR2(30 BYTE),
    PARENT_COMPONENT_ID VARCHAR2(255 BYTE),
    COMPONENT_VERSION_NO VARCHAR2(30 BYTE),
    COMPONENT_STATUS VARCHAR2(30 BYTE),
    ODS_CREATE_DATE DATE,
    ODS_LAST_UPDATE_DATE DATE,
     OTHER_INFO VARCHAR2(255 BYTE)
     )
    TABLESPACE APPL_DATA
    PCTUSED 0
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
    INITIAL 64K
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
     BUFFER_POOL DEFAULT
     )
    LOGGING
    NOCOMPRESS
    NOCACHE
    NOPARALLEL
    NOMONITORING
    ENABLE ROW MOVEMENT;
    Create statement for RAS_CARD:
     CREATE TABLE RAS_CARD (
    RAS_NAME VARCHAR2(20 BYTE),
    SLOT VARCHAR2(2 BYTE),
    RAS_CARD_ID VARCHAR2(30 BYTE),
    CARD_TYPE VARCHAR2(5 BYTE),
    IP_ADDRESS VARCHAR2(15 BYTE),
    DNS_NAME VARCHAR2(255 BYTE),
    STATUS VARCHAR2(15 BYTE),
    NRP_NO CHAR(2 BYTE),
    NRP_TOTAL_ALLOC_CAPACITY NUMBER(10),
    CREATED_BY VARCHAR2(10 BYTE),
    NRP_ALLOCATED_CAPACITY NUMBER(10),
    NIDB_DRA2_KEY VARCHAR2(15 BYTE),
    NIDB_DRN1_KEY CHAR(6 BYTE),
    ODS_CREATE_DATE DATE,
    LAST_UPDATED_BY VARCHAR2(10 BYTE),
    ODS_LAST_UPDATE_DATE DATE,
     WATERMARK NUMBER(38)
     )
    TABLESPACE APPL_DATA
    PCTUSED 0
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE (
    INITIAL 1M
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    PCTINCREASE 0
     BUFFER_POOL DEFAULT
     )
    NOLOGGING
    NOCOMPRESS
    NOCACHE
    NOPARALLEL
    NOMONITORING;
    Explain Plan for the below query:
    select * from CDDS
    where PARENT_COMPONENT_ID IN
    ( SELECT dns_name
    FROM RAS_CARD
     WHERE ras_name = <<INPUT_PARAMETER>> )
    | Id | Operation | Name | Rows | Bytes | Cost | Pstart| Pstop |
    | 0 | SELECT STATEMENT | | 1 | 107 | 12 | | |
    | 1 | TABLE ACCESS BY INDEX ROWID| CDDS | 1 | 62 | 1 | | |
    | 2 | NESTED LOOPS | | 1 | 107 | 12 | | |
    | 3 | SORT UNIQUE | | | | | | |
    |* 4 | TABLE ACCESS FULL | RAS_CARD | 4 | 180 | 6 | | |
    | 5 | PARTITION RANGE ITERATOR | | | | | KEY | KEY |
    |* 6 | INDEX RANGE SCAN | CDDS_I02 | 10 | | 1 | KEY | KEY |
    ---------------------------------------------------------------------------------------------
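     One avenue worth testing (a sketch, not a verified fix): from Oracle 11gR2 onward, START WITH / CONNECT BY can usually be rewritten as a recursive WITH clause, which gives the optimizer more join options, and the traversal depends heavily on an index over (PARENT_COMPONENT_ID, COMPONENT_ID). The rewrite is illustrated below on a toy hierarchy in SQLite, since the recursive WITH semantics are the same:

```python
import sqlite3

# Toy CDDS hierarchy: nrp1 -> card1 -> (port1, port2)
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cdds (component_id TEXT, parent_component_id TEXT);
INSERT INTO cdds VALUES ('card1', 'nrp1'), ('port1', 'card1'), ('port2', 'card1');
""")

# Recursive WITH equivalent of START WITH parent = 'nrp1'
# CONNECT BY PARENT_COMPONENT_ID = PRIOR COMPONENT_ID, tracking LEVEL as lvl
rows = conn.execute("""
WITH RECURSIVE tree(component_id, lvl) AS (
    SELECT component_id, 1 FROM cdds WHERE parent_component_id = 'nrp1'
    UNION ALL
    SELECT c.component_id, t.lvl + 1
    FROM cdds c JOIN tree t ON c.parent_component_id = t.component_id
)
SELECT component_id, lvl FROM tree
""").fetchall()
print(rows)
```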

  • Frontend Server Name is Exposed during Lyncdiscover Query

    Hi,
     Please see the yellow text in the image below: my FE name is exposed on the Internet via lyncdiscover.domainname.com.
     We are running a Lync 2013 Standard single-server setup. My pool name is the same as the Front End server FQDN.
     I only want to expose the webext.domainname.com FQDN; I wonder where the issue is?
    Best Regards, Ranjit Singh

    Hi Ranjit Singh,
     The internal website is published on ports 80/443, while the external site is published on 8080/4443. It is recommended to use a reverse proxy server, such as TMG 2010 or IIS ARR, to publish the external website and redirect 80/443 from the web to the FE server over 8080/4443.
     If you do not use a reverse proxy, external Lync users may be unable to access the web services, such as the Address Book, conferencing URLs, etc.
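     The port redirection described above looks roughly like the following when sketched as a reverse-proxy rule (shown in nginx syntax purely for illustration; Microsoft's supported options are TMG 2010 or IIS ARR, and the internal host name below is a placeholder):

```nginx
# Hypothetical sketch: terminate 443 from the Internet and forward to the
# Front End external web services on 4443 (80 -> 8080 works the same way).
server {
    listen 443 ssl;
    server_name webext.domainname.com lyncdiscover.domainname.com;

    location / {
        # Placeholder for the internal Front End / pool FQDN
        proxy_pass https://fe-pool.internal.domainname.com:4443;
    }
}
```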
    Best regards,
    Eric
    Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • How to improve query performance built on a ODS

    Hi,
     I've built a report on the FI_GL ODS (BW 3.5). The report takes almost 1 hour to execute.
     Is there any method to improve or optimize the performance of a query built on an ODS?
     The ODS holds a huge volume of data: ~300 million records for 2 years.
    Thanx in advance,
    Guru.

    Hi Raj,
     Here are a few tips that will help you improve your query performance.
    Checklist for Query Performance
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
    9. If Alternative UOM solution is used, turn off query cache.
     10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries, for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Do review of order of restrictions in formulas. Do as many restrictions as you can before
    calculations. Try to avoid calculations before restrictions.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.

Maybe you are looking for

  • What is best practice for calling XI Services with Web Dynpro-Java?

    We are trying to expose XI services to Web Dynpro via "Web Services".  Our XI developers have successfully generated the WSDL file(s) for their XI services and handed off to the Web Dynpro developers. The Java developers put the WSDL file to their lo

  • Get Realm Jdeveloper Soa 11g PS3

    Hi I've a problem with jdeveloper authentication with a Oracle SOA 11g PS3. I've successfully installed the DemoSeed Community into the server, i've checked into "secutiry realm->myrealm->users and groups" and i found all the seed users. During the d

  • Rounding to 50

    Hello all, i'd like to present some SQL for rounding number to whole 50. SQL code works from Oracle 9iR2 beyond. Here's the code : with input as (select 1125.18 val from dual) select val, case when round (val) - round (val, -2) >= 25 then round (val,

  • Reordering columns in JTable

    I am reordering columns in a JTable, and then I refresh the view of the JTable, now the columns are moved back to the original position. But I want the columns to be in the place where I moved even after refreshing the view. How do I do that?? plz wi

  • HAL and ODI

    Hi, Does anyone have documents which gives comparison between HAL and other oracle ETL tools like ODI,OWB. If so please send. Thanks, Deepti