Performance issue with SQL with different input data

Hi Experts,
I have an SQL query that joins three tables: imtermination, imconnection, and imconnectiondesign. These tables have around 23 lakh (2.3 million), 11 lakh (1.1 million), and 11 lakh (1.1 million) rows respectively. There are appropriate indexes on these tables.
Now there is a query:
SELECT
/*+ NO_MERGE(a) ORDERED USE_NL(c) */ c.objectid,
c.typeid,
c.transactionstatus,
c.usersessionid,
cd.objectid designid,
c.reservationid,
c.networkid,
c.networktype,
cd.inprojectid,
cd.inprojecttype,
cd.outprojectid,
cd.outprojecttype,
cd.asiteid,
cd.asitetype,
cd.anetworkelementid,
cd.anetworkelementtype,
cd.aportid,
cd.aporttype,
cd.achannelpath,
c.asignaltype,
cd.zsiteid,
cd.zsitetype,
cd.znetworkelementid,
cd.znetworkelementtype,
cd.zportid,
cd.zporttype,
cd.zchannelpath,
c.zsignaltype,
c.signaltype,
c.visualkey,
c.resourcestate,
cd.assignmentstate,
c.effectivefrom,
cd.effectiveto,
c.channelized,
c.circuitusage,
c.hardwired,
c.consumedsignaltype,
c.flowqualitycode,
c.capacityused,
c.percentused,
c.maxcapacity,
c.warningthreshold,
c.typecode,
cd.lastupdateddate,
c.lastreconcileddate,
c.bandwidth,
c.unit
FROM
(SELECT terminatedid
FROM imtermination t1
WHERE t1.networkelementid = 9200150)
a,
imconnectiondesign cd,
imconnection c
WHERE cd.objectid = a.terminatedid
AND c.objectid = cd.connectionid
AND(SUBSTR('10000000000000000000011111111111110000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000', c.consumedsignaltype + 1, 1) = '1' OR 
SUBSTR('10000000000000000000011111111111110000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000', c.signaltype + 1, 1) = '1')
AND c.typeid = '$131'
AND cd.assignmentstate IN(2, 3)
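(For context, the long SUBSTR literal acts as a lookup bitmap: the character at position signaltype + 1 is '1' when that signal type qualifies, which the optimizer expands into the OR-list visible in the predicate section of the plan. A minimal, hypothetical illustration using a truncated bitmap:)

```sql
-- The character at position (signaltype + 1) of the literal tells
-- whether that signal type qualifies. Position 22 corresponds to
-- signaltype 21, the first allowed value in this bitmap.
SELECT SUBSTR('1000000000000000000001111111111111', 21 + 1, 1) AS allowed
  FROM dual;  -- returns '1', i.e. signaltype 21 qualifies
```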
The above query takes around 70 seconds to execute when the input is t1.networkelementid = 9200150. Moreover, I have observed in Enterprise Manager that it has a very high I/O wait time.
The same query takes around 5 seconds to execute when the input is t1.networkelementid = 42407448. Both objects, 9200150 and 42407448, return almost the same number of rows; without any extra condition, each has about 6500 rows across the three tables.
The execution plan is also the same for both inputs, t1.networkelementid = 9200150 and t1.networkelementid = 42407448.
The rows corresponding to t1.networkelementid = 9200150 in these three tables were created through the application over a period of time, while the rows corresponding to t1.networkelementid = 42407448 were created by me manually and are contiguous in the three tables.
Is the above behavior because, in the case of t1.networkelementid = 9200150, the corresponding rows are not contiguous, having been created over a period of time?
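(One way to check this contiguity hypothesis is the index clustering factor: when CLUSTERING_FACTOR is close to NUM_ROWS, rows in index order are scattered across many table blocks, so each index entry costs roughly one block read; when it is close to LEAF_BLOCKS, the rows are well clustered. A sketch against the dictionary views, using the table names from this thread:)

```sql
-- Compare how well table rows are ordered relative to each index.
-- CLUSTERING_FACTOR near LEAF_BLOCKS -> rows contiguous (few block visits)
-- CLUSTERING_FACTOR near NUM_ROWS   -> rows scattered (one block per row)
SELECT table_name,
       index_name,
       num_rows,
       leaf_blocks,
       clustering_factor
  FROM all_indexes
 WHERE table_name IN ('IMTERMINATION', 'IMCONNECTIONDESIGN', 'IMCONNECTION')
 ORDER BY table_name, index_name;
```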
Execution Statistics
                      Total    Per Execution    Per Row
Executions                1                1       0.02
CPU Time (sec)         0.91             0.91       0.02
Buffer Gets           11943         11943.00     238.86
Disk Reads             4804          4804.00      96.08
Direct Writes             0             0.00       0.00
Rows                     50            50.00       1
Fetches                   5             5.00       0.10
Wait activity: User I/O Waits 98.7%, CPU 1.3%
Enterprise Manager shows a high db file scattered read wait in the case of t1.networkelementid = 9200150, the input for which the query takes 70 seconds.
I request the experts to provide some pointers to fix this issue, as I am not an expert in DB tuning.
Thanks in advance for your help.
Regards

Hi David,
Please find below the output:
SQL> SELECT table_name, num_rows, last_analyzed
  2    FROM all_tables
  3   WHERE table_name IN ('IMTERMINATION', 'IMCONNECTIONDESIGN', 'IMCONNECTION')
  4  /
TABLE_NAME                       NUM_ROWS LAST_ANAL
IMTERMINATION                     2338746 19-SEP-11
IMCONNECTIONDESIGN                1129298 19-SEP-11
IMCONNECTION                      1169373 19-SEP-11
IMTERMINATION                       19852 13-SEP-11
IMCONNECTIONDESIGN                   6820 13-SEP-11
IMCONNECTION                         9926 13-SEP-11
6 rows selected.
SQL> SELECT table_name, index_name,num_rows, last_analyzed
  2    FROM all_indexes
  3   WHERE table_name IN ('IMTERMINATION', 'IMCONNECTIONDESIGN', 'IMCONNECTION')
  4  order by table_name,index_name
  5  /
TABLE_NAME                     INDEX_NAME                       NUM_ROWS LAST_ANAL
IMCONNECTION                   IMCONNECTION_A_NE                    9925 13-SEP-11
IMCONNECTION                   IMCONNECTION_A_NE                 1169154 19-SEP-11
IMCONNECTION                   IMCONNECTION_A_PORT                 84743 19-SEP-11
IMCONNECTION                   IMCONNECTION_A_PORT                  3371 13-SEP-11
IMCONNECTION                   IMCONNECTION_A_SITE               1169373 19-SEP-11
IMCONNECTION                   IMCONNECTION_A_SITE                  9926 13-SEP-11
IMCONNECTION                   IMCONNECTION_NET                        0 19-SEP-11
IMCONNECTION                   IMCONNECTION_NET                       12 13-SEP-11
IMCONNECTION                   IMCONNECTION_PK                      9926 13-SEP-11
IMCONNECTION                   IMCONNECTION_PK                   1169373 19-SEP-11
IMCONNECTION                   IMCONNECTION_RES                        0 13-SEP-11
IMCONNECTION                   IMCONNECTION_RES                        0 19-SEP-11
IMCONNECTION                   IMCONNECTION_ST                         2 13-SEP-11
IMCONNECTION                   IMCONNECTION_ST                        60 19-SEP-11
IMCONNECTION                   IMCONNECTION_TYPEID                     4 13-SEP-11
IMCONNECTION                   IMCONNECTION_TYPEID                    64 19-SEP-11
IMCONNECTION                   IMCONNECTION_UR                     26880 19-SEP-11
IMCONNECTION                   IMCONNECTION_UR                         3 13-SEP-11
IMCONNECTION                   IMCONNECTION_VK                      9810 13-SEP-11
IMCONNECTION                   IMCONNECTION_VK                   1191866 19-SEP-11
IMCONNECTION                   IMCONNECTION_Z_NE                 1169173 19-SEP-11
IMCONNECTION                   IMCONNECTION_Z_NE                    9925 13-SEP-11
IMCONNECTION                   IMCONNECTION_Z_PORT                 84092 19-SEP-11
IMCONNECTION                   IMCONNECTION_Z_PORT                  3370 13-SEP-11
IMCONNECTION                   IMCONNECTION_Z_SITE                  9926 13-SEP-11
IMCONNECTION                   IMCONNECTION_Z_SITE               1169373 19-SEP-11
IMCONNECTIONDESIGN             IMCONNECTIONDESIGN_CON            1129298 19-SEP-11
IMCONNECTIONDESIGN             IMCONNECTIONDESIGN_CON               6820 13-SEP-11
IMCONNECTIONDESIGN             IMCONNECTIONDESIGN_PK             1129298 19-SEP-11
IMCONNECTIONDESIGN             IMCONNECTIONDESIGN_PK                6820 13-SEP-11
IMCONNECTIONDESIGN             IMCONNECTIONDESIGN_ST                6820 13-SEP-11
IMCONNECTIONDESIGN             IMCONNECTIONDESIGN_ST             1129298 19-SEP-11
IMTERMINATION                  IMTERMINATION_ID                    19852 13-SEP-11
IMTERMINATION                  IMTERMINATION_ID                  2279477 19-SEP-11
IMTERMINATION                  IMTERMINATION_NE                    19850 13-SEP-11
IMTERMINATION                  IMTERMINATION_NE                  2327175 19-SEP-11
IMTERMINATION                  IMTERMINATION_PORT                 168835 19-SEP-11
IMTERMINATION                  IMTERMINATION_PORT                   6741 13-SEP-11
IMTERMINATION                  IMTERMINATION_SITE                  19852 13-SEP-11
IMTERMINATION                  IMTERMINATION_SITE                2391415 19-SEP-11
40 rows selected.
SQL> select table_name,index_name,column_name,column_position
  2    FROM all_ind_columns
  3   WHERE table_name IN ('IMTERMINATION', 'IMCONNECTIONDESIGN', 'IMCONNECTION')
  4  order by table_name,index_name, column_position
  5  /
TABLE_NAME                     INDEX_NAME                 COLUMN_NAME        COLUMN_POSITION
IMCONNECTION                   IMCONNECTION_A_NE          ANETWORKELEMENTID                1
IMCONNECTION                   IMCONNECTION_A_NE          ANETWORKELEMENTID                1
IMCONNECTION                   IMCONNECTION_A_PORT        APORTID                          1
IMCONNECTION                   IMCONNECTION_A_PORT        APORTID                          1
IMCONNECTION                   IMCONNECTION_A_SITE        ASITEID                          1
IMCONNECTION                   IMCONNECTION_A_SITE        ASITEID                          1
IMCONNECTION                   IMCONNECTION_NET           NETWORKID                        1
IMCONNECTION                   IMCONNECTION_NET           NETWORKID                        1
IMCONNECTION                   IMCONNECTION_PK            OBJECTID                         1
IMCONNECTION                   IMCONNECTION_PK            OBJECTID                         1
IMCONNECTION                   IMCONNECTION_RES           RESERVATIONID                    1
IMCONNECTION                   IMCONNECTION_RES           RESERVATIONID                    1
IMCONNECTION                   IMCONNECTION_ST            RESOURCESTATE                    1
IMCONNECTION                   IMCONNECTION_ST            RESOURCESTATE                    1
IMCONNECTION                   IMCONNECTION_TYPEID        TYPEID                           1
IMCONNECTION                   IMCONNECTION_TYPEID        TYPEID                           1
IMCONNECTION                   IMCONNECTION_UR            USERSESSIONID                    1
IMCONNECTION                   IMCONNECTION_UR            USERSESSIONID                    1
IMCONNECTION                   IMCONNECTION_VK            VISUALKEY                        1
IMCONNECTION                   IMCONNECTION_VK            VISUALKEY                        1
IMCONNECTION                   IMCONNECTION_Z_NE          ZNETWORKELEMENTID                1
IMCONNECTION                   IMCONNECTION_Z_NE          ZNETWORKELEMENTID                1
IMCONNECTION                   IMCONNECTION_Z_PORT        ZPORTID                          1
IMCONNECTION                   IMCONNECTION_Z_PORT        ZPORTID                          1
IMCONNECTION                   IMCONNECTION_Z_SITE        ZSITEID                          1
IMCONNECTION                   IMCONNECTION_Z_SITE        ZSITEID                          1
IMCONNECTIONDESIGN             IMCONNECTIONDESIGN_CON     CONNECTIONID                     1
IMCONNECTIONDESIGN             IMCONNECTIONDESIGN_CON     CONNECTIONID                     1
IMCONNECTIONDESIGN             IMCONNECTIONDESIGN_PK      OBJECTID                         1
IMCONNECTIONDESIGN             IMCONNECTIONDESIGN_PK      OBJECTID                         1
IMCONNECTIONDESIGN             IMCONNECTIONDESIGN_ST      ASSIGNMENTSTATE                  1
IMCONNECTIONDESIGN             IMCONNECTIONDESIGN_ST      ASSIGNMENTSTATE                  1
IMTERMINATION                  IMTERMINATION_ID           TERMINATEDID                     1
IMTERMINATION                  IMTERMINATION_ID           TERMINATEDID                     1
IMTERMINATION                  IMTERMINATION_NE           NETWORKELEMENTID                 1
IMTERMINATION                  IMTERMINATION_NE           NETWORKELEMENTID                 1
IMTERMINATION                  IMTERMINATION_PORT         PORTID                           1
IMTERMINATION                  IMTERMINATION_PORT         PORTID                           1
IMTERMINATION                  IMTERMINATION_SITE         SITEID                           1
IMTERMINATION                  IMTERMINATION_SITE         SITEID                           1
40 rows selected.
Plan without sql hints:
SQL> select * from table(dbms_xplan.display)
  2  /
PLAN_TABLE_OUTPUT
Plan hash value: 2493901029
| Id  | Operation                     | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT              |                       |    40 |  9960 |  6316   (1)| 00:01:16 |
|   1 |  NESTED LOOPS                 |                       |    40 |  9960 |  6316   (1)| 00:01:16 |
|   2 |   NESTED LOOPS                |                       |  1359 |   160K|  3592   (1)| 00:00:44 |
|   3 |    TABLE ACCESS BY INDEX ROWID| IMTERMINATION         |  1359 | 16308 |   915   (1)| 00:00:11 |
|*  4 |     INDEX RANGE SCAN          | IMTERMINATION_NE      |  1359 |       |     6   (0)| 00:00:01 |
|*  5 |    TABLE ACCESS BY INDEX ROWID| IMCONNECTIONDESIGN    |     1 |   109 |     2   (0)| 00:00:01 |
|*  6 |     INDEX UNIQUE SCAN         | IMCONNECTIONDESIGN_PK |     1 |       |     1   (0)| 00:00:01 |
|*  7 |   TABLE ACCESS BY INDEX ROWID | IMCONNECTION          |     1 |   128 |     2   (0)| 00:00:01 |
|*  8 |    INDEX UNIQUE SCAN          | IMCONNECTION_PK       |     1 |       |     1   (0)| 00:00:01 |
Predicate Information (identified by operation id):
   4 - access("T1"."NETWORKELEMENTID"=9200150)
   5 - filter("CD"."ASSIGNMENTSTATE"=2 OR "CD"."ASSIGNMENTSTATE"=3)
   6 - access("CD"."OBJECTID"="TERMINATEDID")
   7 - filter((("C"."CONSUMEDSIGNALTYPE"=21 OR "C"."CONSUMEDSIGNALTYPE"=22 OR
              "C"."CONSUMEDSIGNALTYPE"=23 OR "C"."CONSUMEDSIGNALTYPE"=24 OR "C"."CONSUMEDSIGNALTYPE"=25 OR
              "C"."CONSUMEDSIGNALTYPE"=26 OR "C"."CONSUMEDSIGNALTYPE"=27 OR "C"."CONSUMEDSIGNALTYPE"=28 OR
              "C"."CONSUMEDSIGNALTYPE"=29 OR "C"."CONSUMEDSIGNALTYPE"=30 OR "C"."CONSUMEDSIGNALTYPE"=31 OR
              "C"."CONSUMEDSIGNALTYPE"=32 OR "C"."CONSUMEDSIGNALTYPE"=33) OR ("C"."SIGNALTYPE"=21 OR
              "C"."SIGNALTYPE"=22 OR "C"."SIGNALTYPE"=23 OR "C"."SIGNALTYPE"=24 OR "C"."SIGNALTYPE"=25 OR
              "C"."SIGNALTYPE"=26 OR "C"."SIGNALTYPE"=27 OR "C"."SIGNALTYPE"=28 OR "C"."SIGNALTYPE"=29 OR
              "C"."SIGNALTYPE"=30 OR "C"."SIGNALTYPE"=31 OR "C"."SIGNALTYPE"=32 OR "C"."SIGNALTYPE"=33)) AND
              "C"."TYPEID"='$131')
   8 - access("C"."OBJECTID"="CD"."CONNECTIONID")
32 rows selected.
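(For reference, if the scattered-storage theory from the original post holds, a commonly used remedy is to rebuild the table sorted by the driving key so related rows land in adjacent blocks. A minimal sketch, assuming downtime is acceptable; the reorg table name is a placeholder, and indexes, constraints, and grants would need to be recreated afterwards:)

```sql
-- Sketch only: rebuild IMTERMINATION ordered by the driving key so that
-- all rows for one NETWORKELEMENTID end up in adjacent blocks.
CREATE TABLE imtermination_reorg AS
  SELECT * FROM imtermination ORDER BY networkelementid;
-- ...then drop/rename the tables, recreate indexes (e.g. IMTERMINATION_NE),
-- and regather optimizer statistics:
-- EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'IMTERMINATION', cascade => TRUE);
```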

Similar Messages

  • Performance issue in correlation with hidden objects in reports

    Hello,
    is there a known performance issue in correlation with hidden objects in reports using conditional suppressions? (HFM version 11.1.2.1)
    Using comprehensive reports, we see huge performance differences between the same reports with and without hidden objects. Furthermore, we suspect that some of the trouble with our reporting server environment stems from end users using these reports.
    Every advice would be welcome!
    Regards,
    bsc
    Edited by: 972676 on Nov 22, 2012 11:27 AM

    If you said that working with EVDRE for each separate sheet is fine, that means the main problem is related to your VB custom macro interdependency.
    I suggest adding a log (writing to a text file) for your macro; you will see that that minute is actually spent performing operations from the custom macro.
    Kind Regards
    Sorin Radulescu

  • How to get around a performance issue when dealing with a lot of data

    Hello All,
    This is an academic question really, I'm not sure what I'm going to do with my issue, but I have some options.  I was wondering if anyone would like to throw in their two cents on what they would do.
    I have a report; the users want to see all agreements and all conditions related to the updating of rebates, and the affected invoices. From a technical perspective, ENT6038-KONV-KONP-KONA-KNA1 are the tables I have to hit. The problem is that when they retroactively update rebate conditions, they can hit thousands of invoices, which blossoms out to thousands of conditions... you see the problem. I simply have too much data to grab; it times out.
    I've tried everything around the code.  If you have a better way to get price conditions and agreement numbers off of thousands of invoices, please let me know what that is.
    I have a couple of options.
    1) Use shared memory to preload the data for the report.  This would work, but I'm not going to know what data is needed to be loaded until report run time. They put in a date. I simply can't preload everything. I don't like this option much. 
    2) Write a function module to do this work. When the user clicks on the button to get this particular data, it will launch the FM in background and e-mail them the results. As you know, the background job won't time out. So far this is my favored option.
    Any other ideas?
    Oh... nope, BI is not an option, we don't have it. I know, I'm not happy about it. We do have a data warehouse, but the prospect of working with that group makes me wince.

    My two cents - firstly, I totally agree with Derick that it's probably a good idea to go back to the business and make them justify their reporting requirement, i.e. "whether any user can meaningfully process all those results in an aggregate". But having dealt with customers across industries over a long period of time, it would probably be a bit fanciful to expect them to change their requirements too much: in my experience, neither do they understand (much) technology, nor do they want to hear about the technical limitations of a system. They want what they want, if possible yesterday!
    So, about dealing with performance issues within ABAP: I'm sure you must already be using efficient programming techniques like hashed internal tables with unique keys and accessing rows of the table via field symbols, but what I was going to suggest is that you look at using [Extracts|http://help.sap.com/saphelp_nw04/helpdata/en/9f/db9ed135c111d1829f0000e829fbfe/content.htm]. I've had to deal with this a couple of times in the past when handling massive amounts of data, and I found it very efficient performance-wise. A good point to remember when using Extracts, I quote from SAP Help: "The size of an extract dataset is, in principle, unlimited. Extracts larger than 500KB are stored in operating system files. The practical size of an extract is up to 2GB, as long as there is enough space in the filesystem."
    Hope this helps,
    Cheers,
    Sougata.

  • Using XPath with SQL to extract XML data

    Given data such as this one:
    <?xml version="1.0"?>
    <ExtendedData>
       <Parameter name="CALLHOLD"><BooleanValue>true</BooleanValue></Parameter>
      <Parameter xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="BARRING_PASSWORD" xsi:nil="true"/>
      <Parameter name="ALLCF"><BooleanValue>true</BooleanValue></Parameter>
      <Parameter name="RealProv"><BooleanValue>false</BooleanValue></Parameter>
    </ExtendedData>
    I normally use the extractValue function, as shown below, to extract the value of the last parameter in the data above, e.g.:
    select extractValue(extended_data,'/ExtendedData/Parameter[@name="RealProv"]/BooleanValue') "my_column_alias" from table
    Any ideas on how I may return the value of the xsi:nil attribute from this node:
    <Parameter xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="BARRING_PASSWORD" xsi:nil="true"/>
    I'd like to extract the true in xsi:nil="true"...
    Thanks,
    Edited by: HouseofHunger on May 15, 2012 2:13 PM

    Extractvalue() has a third parameter we can use to declare namespace mappings :
    SQL> with sample_data as (
      2    select xmltype('<?xml version="1.0"?>
      3  <ExtendedData>
      4    <Parameter name="CALLHOLD"><BooleanValue>true</BooleanValue></Parameter>
      5    <Parameter xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="BARRING_PASSWORD" xsi:nil="true"/>
      6    <Parameter name="ALLCF"><BooleanValue>true</BooleanValue></Parameter>
      7    <Parameter name="RealProv"><BooleanValue>false</BooleanValue></Parameter>
      8  </ExtendedData>') doc
      9    from dual
    10  )
    11  select extractvalue(
    12           doc
    13         , '/ExtendedData/Parameter[@name="BARRING_PASSWORD"]/@xsi:nil'
    14         , 'xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"'
    15         )
    16  from sample_data
    17  ;
    EXTRACTVALUE(DOC,'/EXTENDEDDAT
    true
    If you're on 11.2.0.2 and up, extractvalue() is deprecated.
    One should use XMLCast/XMLQuery instead :
    SQL> with sample_data as (
      2    select xmltype('<?xml version="1.0"?>
      3  <ExtendedData>
      4    <Parameter name="CALLHOLD"><BooleanValue>true</BooleanValue></Parameter>
      5    <Parameter xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="BARRING_PASSWORD" xsi:nil="true"/>
      6    <Parameter name="ALLCF"><BooleanValue>true</BooleanValue></Parameter>
      7    <Parameter name="RealProv"><BooleanValue>false</BooleanValue></Parameter>
      8  </ExtendedData>') doc
      9    from dual
    10  )
    11  select xmlcast(
    12           xmlquery('/ExtendedData/Parameter[@name="BARRING_PASSWORD"]/@xsi:nil'
    13            passing doc
    14            returning content
    15           ) as varchar2(5)
    16         )
    17  from sample_data
    18  ;
    XMLCAST(XMLQUERY('/EXTENDEDDAT
    true
    Note : the xsi prefix is predefined when using Oracle XQuery, so in this case we don't have to declare it explicitly.
    Edited by: odie_63 on 15 May 2012 15:23

  • COGI fails in background with msg "No batch input data for screen SAPMSSY0 0120"

    Hi Experts,
    I have recorded the COGI transaction using SHDB and used it in my program. I have scheduled the program in the background, and it always fails with the following error: "No batch input data for screen SAPMSSY0 0120". Please help me in this regard.
    Thanks,
    srinivas.

    Hi Ramya,
    This screen doesn't contain any fields. Also, this screen shouldn't appear in the middle of my transaction. But my program works fine if I execute it online.
    Also, I don't know where I need to call that screen.
    Thanks,
    srinivas.

  • Performance issues of SQL access to AW

    Hi Experts:
    I wonder whether there are performance issues when using SQL to access an AW. When using SQL to access cubes in an AW, the SQL queries the relational views for the AW objects, and those views are based on the OLAP_TABLE function. We know that views based on any table function cannot make use of an index; that is, to query a subset of a view's data, we would have to full-scan the view and then apply the filter. Such a query plan always leads to bad performance.
    I want to know: when I use SQL to retrieve a small part of the data in an AW cube, will the Oracle OLAP engine retrieve all the data in the cube and then apply the filter? If the Oracle OLAP engine retrieves only the data needed from the AW, how does it do it?
    Thanks.

    For most requests the OLAP_TABLE function can reduce the amount of data it produces by examining the rowsource tree, i.e. the WHERE clause. The data in Oracle OLAP is highly indexed, and there are steps a user can take to optimize the index use. Specifically, pin down the dimension(s) defined in the OLAP_TABLE function LIMITMAP via (NOT) IN lists on the dimension, parent, level or GID columns, and use valuesets for the INHIER object instead of a boolean object.
    In 10g, WHERE clauses like SALES > 50 are also processed prior to sending data out.
    For large requests (thousands of rows) performance can be a problem because the data is being sent through the object layer. In 10g this can be ameliorated by wrapping the OLAP_TABLE function call with a SQL MODEL clause. The SQL MODEL clause knows a bit more about the OLAP option and does not require us to pipe the data through the object layer.
    SQL MODEL example (note: no ADT definition, using auto ADT). This can be wrapped in a CREATE VIEW statement:
    select * from olap_table('myaw duration session', null, null, 'measure sales as number from aw_sales_obj dimension d1 as varchar2(10) from geog ...rest of dimensions')
    sql model dimension by (d1, d2, d3, d4) measures (sales, any attributes, parent columns etc...) unique single reference rules update sequential order ()
    Example of WHERE clause with above select.
    SELECT *
    FROM (select * from olap_table('myaw duration session', null, null, 'measure sales as number from aw_sales_obj dimension d1 as varchar2(10) from geog ...rest of dimensions')
    sql model dimension by (d1, d2, d3, d4) measures (sales, any attributes, parent columns etc...) unique single reference rules update sequential order ()))
    WHERE GEOG NOT IN ('USA', 'CANADA')
    and GEOG_GID = 1
    and TIME_PARENT IN ('2004')
    and CHANNEL = 'CATALOG'
    and SALES > 50000;

  • Performance issues using JavaFX on different notebooks

    Hello,
    unfortunately I feel forced to ask a question about performance issues. As far as I understand, most of the problems with JavaFX 2.0 that are discussed here or elsewhere seem to have their roots in a huge number of nodes and/or some kind of animation. In contrast, I'm writing a business application for the desktop with JavaFX as a Swing replacement. There are no permanent animations at all, and the scene has all in all 200 nodes (measured with ScenicView). I use no WebView nodes.
    The application runs very well on my development computer with 16 GB of RAM, an Intel Core i7 2600K processor and a mid-class AMD graphics device. There are also no performance issues on a Core i3 notebook. But as soon as I start the application on my Thinkpad X220 with a Core i7-2620M CPU, 8 GB RAM and Intel HD graphics, it gets almost unusable. For example, there is a diagram (see attached screenshot) situated in a scrollpane. As soon as I try to pan it, the panning is extremely slow (I'd say 2-4 fps). Another example: hovering buttons is not very responsive. Moving the mouse over a button triggers the hover effect 0.3-0.5 seconds after it enters the button. Fading transitions that I use in some places are not smooth either.
    I made sure that the app is running with a hardware renderer on each computer. I also tried to force a fallback to the software renderer using the "-Dprism.order=j2d" switch, but it only performed worse.
    So my first question is: Are there known issues with mobile processors/graphic devices of the mentioned type?

    I just tested it with Java 8 ea build 61 and, besides the fact that a small amount of CSS didn't work anymore, it runs very well with this version of Java(/FX). Damn... I hope that some of these performance fixes will be backported to some Java 7 update.

  • Performance issue in SQL while using max

    Hi
    I have a log table which records changed dates. If any transaction is entered for the changed date or earlier, then the name and location should be shown based on the change log table. If the change log has two entries for the same date for a customer, then I need to show the row with the max value.
    Log table
    ID   CUSTOMER_ID                NAME        LOCATION      CHANGED_DATE
    1          1                      sar          boston          11-1-13
    2          2                      var          chn             12-1-13
    3          1                      gar          boston          13-1-13
    4          1                      nar          boston          13-1-13
    Transaction table
    ID            CUSTOMER_ID               DATE         QTY  
    1                 1                    11-1-13       10
    2                 2                    12-1-13        9
    3                 1                    10-1-13        8
    4                 1                    13-1-13        7
    Required Result
    ID            CUSTOMER_ID                 NAME               LOCATION            DATE         QTY  
    1                 1                       sar                  boston            11-1-13       10
    2                 2                       var                  chn               12-1-13        9
    3                 1                       sar                  boston            10-1-13        8
    4                 1                       nar                  boston            13-1-13        7
    I got the above result using max when there are multiple log entries for the same date. But when I use the max value, I get a performance issue once the log table has more entries.
    can you help me in this?
    Edited by: Tsk on Apr 23, 2013 1:12 AM

    How do I ask a question on the forums?
    SQL and PL/SQL FAQ

  • Webi report performance issue as compared with backend SAP BW

    Hi
    We have SAP BW as the backend.
    Dashboards and Webi reports are created against it,
    i.e. through a universe on top of a Bex query, then a Webi report, and then a dashboard through Live Office.
    My point is that when we create Webi reports with a date range as a parameter (sometimes as a mandatory variable which comes up as a prompt in Webi), or sometimes take the L 01 Calendar Day from Bex and create a prompt in Webi, we find that the reports take a lot of time to open: 5 minutes, 10 minutes, and sometimes 22 minutes.
    This type of problem never occurred when the backend was Oracle.
    Also, drilling in a Webi report takes a lot of time.
    So can you suggest any solution?

    Hi Gaurav,
    We logged this issue with support already, it is acknowledged.
    What happens is that whenever you use an infoobject in the condition
    (so pull in the object into condition and build a condition there,
    or use that object in a filter object in the universe and then use the filter)
    this will result in that object being added to the result set.
    Since the query will retrieve a lot of different calendar days for the period you selected,
    the resultset will be VERY big and performance virtually non-existent.
    The workaround we used is to use a BEX variable for all date-based selections.
    One optional range variable makes it possible to build various types of selections
    (less-than (with a very early start date), greater-than (with a very far-in-the-future end date), and between).
    Because the range selection is now handled by BEX, the calendar day will not be part of the selection...
    Good luck!
    Marianne

  • Performance issue while working with large files.

    Hello Gurus,
    I have to upload about 1 million keys from a CSV file on the application server and then delete the matching entries from a DB table containing 18 million entries. This is causing performance problems and my program is very slow. Which approach is better?
    1. First read all the data from the CSV and then use one delete statement?
    2. Or delete each line directly after reading the key from the file?
    Another program has to update about 2 million entries in a DB table containing 20 million entries. Here I also have very big performance problems (the program has been running for more than 14 hours). What is the best way to work with such a large amount of data?
    I tried to rewrite the program so that it runs in parallel, but since this program will only run once, the cost of implementing aRFC parallelization is too big. Please help; maybe someone doing migrations is good at this.
    Regards,
    Ioan.

    Hi,
    I would suggest you split the file and then process each set.
    Lock the table to ensure it is available the whole time.
    After each set, do a commit and then proceed.
    This ensures that if the program breaks in the middle, you do not have to start again with entries from the file that were already processed.
    Also make use of sorted tables and keys when deleting/updating the DB.
    When deleting multiple entries via an internal table, the result can be tricky: some records may be deleted successfully and some may not.
    To be sure, first get the count of records in the DB that match internal table set 1,
    then do the delete from the DB using internal table set 1,
    then check the count again and verify it is zero.
    This makes sure all the records are deleted, but it may cost some performance,
    and the goal here is to reduce the execution time.
    The gurus may have a better idea...
    Regards
    Sree
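    The chunked delete-and-verify approach Sree describes (delete in batches, commit per batch, count before and after to confirm the deletes took) can be sketched as follows. This is a minimal illustration, not ABAP: it uses Python with SQLite as a stand-in database, and the table name `big_table`, the key column `id`, and the batch size of 500 are all hypothetical.

    ```python
    import sqlite3

    CHUNK = 500  # commit after every 500 keys (illustrative batch size)

    def delete_keys_in_chunks(conn, keys):
        """Delete rows matching `keys` from `big_table` (a stand-in name),
        committing after each chunk so an aborted run can be restarted."""
        cur = conn.cursor()
        deleted = 0
        for i in range(0, len(keys), CHUNK):
            chunk = keys[i:i + CHUNK]
            placeholders = ",".join("?" * len(chunk))
            # count the matching rows before the delete ...
            cur.execute(f"SELECT COUNT(*) FROM big_table WHERE id IN ({placeholders})", chunk)
            before = cur.fetchone()[0]
            cur.execute(f"DELETE FROM big_table WHERE id IN ({placeholders})", chunk)
            # ... and verify the count is zero afterwards, as suggested above
            cur.execute(f"SELECT COUNT(*) FROM big_table WHERE id IN ({placeholders})", chunk)
            assert cur.fetchone()[0] == 0, "some rows were not deleted"
            deleted += before
            conn.commit()  # keep each transaction small
        return deleted
    ```

    Committing per chunk keeps transactions and rollback segments small, and a restarted run simply re-counts and re-deletes whatever keys remain.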

  • Scheduled Power BI report refresh error with PowerPivot or Power View using SQL Server Analysis Services as data source

    In the Power BI Admin Centre, data sources based on Analysis Services cannot be set up, and scheduling a data refresh for a PowerPivot or Power View report produces the error "我們無法重新整理此報表中的資料來源種類。" (in English: "We cannot refresh this type of data source in this report"). We would like to know whether the Power BI schedule function supports SQL Server Analysis Services. Please advise. Thanks.
    Winsee

    It is not currently supported. If you are desperate, you might be able to create a linked server in SQL Server as a proxy for the connection and enable scheduled refresh that way.
    https://support.office.com/en-US/Article/Supported-data-sources-cb69a30a-2225-451f-a9d0-59d24419782e#__supported_data_sources
    http://artisconsulting.com/Blogs/GregGalloway
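    As a rough sketch of the linked-server workaround mentioned above: a SQL Server linked server pointing at an SSAS instance can be defined with the MSOLAP OLE DB provider and then queried via OPENQUERY. All names below (`SSAS_PROXY`, `MyAnalysisServer`, `MyCube`) are placeholders, not values from this thread, and the exact setup depends on your environment.

    ```sql
    -- Sketch only: server, instance, and catalog names are hypothetical.
    EXEC sp_addlinkedserver
        @server     = N'SSAS_PROXY',        -- local name for the linked server
        @srvproduct = N'',                  -- required but unused for OLE DB providers
        @provider   = N'MSOLAP',            -- Analysis Services OLE DB provider
        @datasrc    = N'MyAnalysisServer',  -- SSAS instance
        @catalog    = N'MyCube';            -- SSAS database

    -- MDX can then be passed through the relational engine:
    SELECT * FROM OPENQUERY(SSAS_PROXY,
        'SELECT [Measures].DefaultMember ON COLUMNS FROM [MyCube]');
    ```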

  • Solve performance issue... with multiple threads

    I made an application with a huge number of rows in a database table...
    Every single process could handle thousands of rows.
    Does it make sense to divide one process into multiple HTTP requests
    (for example, by creating ten application modules)
    running in multiple threads?
    Has anyone had the same problem?
    And what is the best strategy to solve it?

    OK, this helps to understand the problem.
    We had a problem like yours. What we ended up doing is reading the files into a temporary DB table, committing every 500 rows to reduce memory usage. We do this without any validation, just to get hold of the data in the DB.
    After all data from a file is in the DB table, we do the validation (you can even use PL/SQL for this) and show the user all rows which are not valid. This gives the user the chance to correct the rows (or dismiss them).
    After that (now knowing that the data should process without any error) we do the real work of inserting the data.
    All you have to do is work in chunks (we use 500-1000 rows) and commit the data already processed. Flags in the temporary table allow us to restart the process if something happens while processing the data.
    Working in chunks allows the framework to free and regain some memory used while doing the work.
    Timo
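    Timo's staging-table pattern (load without validation, commit per chunk, flag processed rows so the job is restartable) can be sketched like this. It is a minimal illustration using Python and SQLite rather than the Oracle/PL/SQL setup described above; the table name `staging`, the `status` column, and the batch size of 500 are assumptions for the example.

    ```python
    import csv
    import sqlite3

    BATCH = 500  # commit interval, as suggested above

    def load_into_staging(conn, csv_path):
        """Load a CSV into a staging table without validation,
        committing every BATCH rows to keep memory use low."""
        cur = conn.cursor()
        cur.execute("""CREATE TABLE IF NOT EXISTS staging
                       (line INTEGER PRIMARY KEY, payload TEXT,
                        status TEXT DEFAULT 'NEW')""")
        with open(csv_path, newline="") as fh:
            for n, row in enumerate(csv.reader(fh), start=1):
                # INSERT OR IGNORE makes a restarted load skip lines already staged
                cur.execute("INSERT OR IGNORE INTO staging(line, payload) VALUES (?, ?)",
                            (n, ",".join(row)))
                if n % BATCH == 0:
                    conn.commit()
        conn.commit()

    def process_pending(conn, handler):
        """Process rows still flagged NEW; the flag makes the run restartable."""
        cur = conn.cursor()
        done = 0
        while True:
            rows = cur.execute("SELECT line, payload FROM staging "
                               "WHERE status = 'NEW' LIMIT ?", (BATCH,)).fetchall()
            if not rows:
                break
            for line, payload in rows:
                handler(payload)  # the "real work" (validate/insert)
                cur.execute("UPDATE staging SET status = 'DONE' WHERE line = ?", (line,))
            conn.commit()  # one commit per chunk
            done += len(rows)
        return done
    ```

    If the job dies mid-run, re-invoking `process_pending` picks up only the rows still flagged `NEW`, which is exactly the restartability the flags in the temporary table provide.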

  • Performance issues home sharing with Apple TV2 from Mountain Lion Server

    I have a Mac Mini which I have just upgraded to Mountain Lion Server from Snow Leopard Server.
    I have noticed that the performance of Streaming a film using Home Sharing to an Apple TV2 has degraded compared to the Snow Leopard setup. In fact the film halts a number of times during play back which is not ideal.
    I have tested the network between the 2 devices and cannot find a fault.
    Has anyone come across this problem before?
    Are there any diagnostic tools I can use to measure the Home Sharing streaming service to the Apple TV 2 device?
    Any help much appreciated.

    Well, I tried a few other things and one worked but again just the first time I tried connecting to the desktop PC with iTunes. I flashed my router with the latest update and the ATV2 could see the iTunes library and I was able to play media. Later in the day I was going to show off to my daughter that I had fixed it and, to my dismay, no go. I tried opening the suggested ports but no luck.
    I then tried loading iTunes on a Win7 laptop and it works perfectly with the ATV2. Both the laptop and the ATV2 are connected to the router wirelessly, while the desktop is connected to the router by Ethernet. Not sure if this is part of the issue, as it sounds like this didn't work for others. The only other difference between the laptop and desktop is that the desktop has Win7 SP1 loaded while the laptop does not; now I'm scared to load it, though I don't think that's the issue. All in all, a very vexing situation. Hopefully Apple comes up with a solution soon.

  • Performance issues in bootcamp with Passmark benchmark test

    I wonder if anyone can shed some light on this.
    I've got Windows XP 32-bit SP3 installed via Boot Camp. When I test it using the PassMark benchmark (latest version on a 30-day trial), I get poor CPU performance. I am running a 2.53 GHz 13" MacBook Pro. I can compare the results with other people's results from the PassMark database, by same CPU and also by other MacBooks.
    My scores are nearly half what other similar MacBooks are showing in CPU Math, which is quite a difference.
    So I wonder what's up, and how can I check it further? Half sounds like only one CPU core running.
    I've compared VMware Fusion and Boot Camp as well and there's not much between them; both results show around half the performance. (This is Fusion running the Boot Camp install. A fresh Windows install in Fusion shows slightly better results, but not by much.)
    Any pointers or help would be great. For example, is there some way under OS X I can check the performance of the MacBook as a baseline?
    Is there something I've not done while installing Windows XP through Boot Camp?
    Thanks in advance!
    Paul

    Maybe wait until software catches up and is patched to work with the Intel 3000 plus 6970M.
    Apple did get an early jump on Sandy Bridge.
    Can't find the thread from someone else (thanks for making searching threads so cumbersome).
    AMD Mobility:
    http://support.amd.com/us/gpudownload/windows/Pages/radeonmob_win7-64.aspx
    And for Nvidia 320M users:
    http://www.nvidia.com/object/notebook-win7-winvista-64bit-275.27-beta-driver.html
    3DMark06 will almost surely need a patch, I suppose.
    I'd hit the web sites for PC laptop gaming if you can.
    http://www.bing.com/search?q=6970M+3DMark06
    http://www.notebookcheck.net/Intel-HD-Graphics-3000.37948.0.html
    This should be right up your alley:
    http://forums.macrumors.com/showthread.php?t=1101179&page=2
    http://www.bing.com/search?q=6970M+with+Intel+3000+graphics
    Most of this type of stuff is at your fingertips with a little searching.

  • Performance issues for iOS with high resolution.

    I made an app with a resolution of 480x320 for iOS. It works quite well.
    I then remade it with a resolution of 960x640. In AIR for iOS settings I set Resolution to "High".
    The app looked great, however there was a noticeable drop in performance.
    The app functioned the same way as the original lower-resolution app, but it was lagging.
    Has anyone else had this problem?
    Am I doing something wrong?

    With my game, I had around 60 fps on the 3GS and around 50 on the iPhone 4 with the high settings. I got around 10 fps extra by using: stage.quality = StageQuality.LOW;
    That was on AIR 2.6. I tried with AIR 2.7, but it seems that command can't be used there (?)
