Performance issue in first run

Hi Experts,
I have a performance issue: on the first run of a Z program, performance is very poor, but on the second run it is fast. The slowdown occurs at one SELECT query on table FAGLFLEXA. No buffering is selected in the technical settings for this table. Please guide me in this case.

Hello Swapnil,
Please turn on an SQL trace in ST05 when you experience the performance issue again and ask your developer to tune the Z program. Note that the first run is typically slow because the blocks must be read from disk; subsequent runs are served from the database buffer cache, which is why only the first execution suffers.
Thanks,
Siva Kumar

Similar Messages

  • iOS text performance, slow at first run

    Hi,
    (iPod4 / GPU rendering)
    I have a dynamic textField (non-TLF) with numbers which increment +1 on each ENTER_FRAME... like a fast counter.
    For example, to 'tally' the score at the end of a game level, quickly counting up the scores.
    My issue:
    It often lags in the beginning, but then quickly comes up to speed.
    I reuse this textField, just adding/removing it from the display list as needed, never nulling it.
    What can I do to speed this up?
    It's able to handle it great, but once it's off the display list for x amount of time, adding it back and starting up that counter lags a little. It does not lag on Android GPU.
    Is iOS needing to grab hold of the native text engine or something?
    thanks!

    I'm running into something similar on a project today. Adding dynamic text fields to the project causes a noticeable performance drop for the entire app. Removing them and even nulling them doesn't seem to help. Anyone have a trick for using text fields efficiently?

  • Treeview datawindow with icon - performance issue

    Hi all,
    I have a treeview DW and have set the "use tree node icon" property with 2 different icons (BMP) at the first level (shown when the level is expanded or collapsed).
    Then a different icon (ICO) file is used for the second level.
    Unfortunately, this causes a huge performance issue when I run the application.
    If I disable the "use tree node icon" property, everything works fine.
    Are there any known issues with the treeview DW and icons? Any workaround?
    I am using PB 11.2.
    thanking you in advance,
    -a

    First question: which JDev version are you using?
    I made a quick test and did not see a long busy state.
    Run your app with -Djbo.debugoutput=console as a Java option. You get more output in the log window, and maybe some hint about what's going on.
    Timo

  • Performance issues - slowdowns...

    Hi all,
    Brand new iMac 20 inch Core 2 Duo. Working super for the first couple of days and now I notice some performance issues.
    First of all, I had some crashes with Safari today. I had installed the Flip4Mac wmv add-on a few days ago. So, I deleted the Flip4Mac WMV browser plugins as suggested in the forums here and no crashes since.
    However, Safari now takes a while to load if starting from "closed". The first few days, it opened much, much faster.
    Also, iCal is now slow to load (almost freezing the system, with the spinning colour wheel buzzing about). It eventually loads and works as expected, but the first few days it was super fast to load up.
    Finally, I have noticed some other occasions where the system just becomes unresponsive. I still have mouse control (not a hard lock up), but I can't switch between programs or launch a new one. Waiting 10-20 seconds usually does the trick and I am again able to work as expected.
    I am new to Macs after years as a PC user. In fact, I have a new MacBook Pro coming next week as well. Needless to say, I have made the switch by jumping in! So, I am learning a whole new world here and could use some help! I sure would like to retain the amazing performance I witnessed the first few days, so any assistance is most appreciated.
    Best,
    Stu

    Allan's suggestions may be better than what I'm going to offer so I'd try them first. If gentler methods don't help, you may want to consider more drastic measures. As he pointed out, brand new hardware is not always perfect.
    If you can easily back up all the data you have, I'd consider doing that, then do an erase and install. To do this, boot your Mac, insert the system install or restore disk that came with your Mac, restart, and hold down the C key right after it chimes. When you run through the installation prompts, click the Options Tab when you see it and choose erase and install, remembering that this will wipe out data you have not backed up elsewhere. (not trying to be condescending, just covering all the bases).
    There could be some sort of small, easy-to-fix issue that Allan's tips will fix. If, OTOH, there is a hardware issue, doing an erase and install will clear up that question pretty quickly. The erase and install will put you back to "factory condition" as far as software is concerned. If problems develop again in a day or two (or sooner), then you should call Apple. Just don't forget to mention you've already done an erase and install.
    good luck
    Jeff

  • Pivot - Performance Issue with large dataset

    Hello,
    Database version : Oracle 10.2.0.4 - Linux
    I'm using a function to return a pivot query depending on an input "RUN_ID" value.
    For example, I consider two different "RUN_ID" values (e.g. 119 and 120) with exactly the same dataset.
    I have a performance issue when I run the resulting query with "RUN_ID"=120.
    Pivot:
    SELECT   MAX (a.plate_index), MAX (a.plate_name), MAX (a.int_well_id),
             MAX (a.row_index_alpha), MAX (a.column_index), MAX (a.is_valid),
             MAX (a.well_type_id), MAX (a.read_index), MAX (a.run_id),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC190', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC304050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC306050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC30050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC3011050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC104050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC106050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC10050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC1011050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC204050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC206050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC20050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC2011050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC80050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'CALC70050301', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'RAW0', a.this_value,
                          NULL)),
             MAX (DECODE (a.value_type || a.value_index,
                          'RAW5030', a.this_value,
                          NULL)),
             MAX (a.dose), MAX (a.unit), MAX (a.int_plate_id), MAX (a.run_name)
        FROM vw_well_data a
       WHERE a.run_id = :app_run_id
    GROUP BY a.int_well_id, a.read_index
    Run the query :
    SELECT sql_fulltext, (cpu_time/1000000) "Cpu Time (s)",   -- cpu_time is reported in microseconds
           (elapsed_time/1000000) "Elapsed time (s)",
           fetches, buffer_gets, disk_reads, executions
      FROM v$sqlarea
     WHERE parsing_schema_name = 'SCHEMA';
    With results :
    SQL_FULLTEXT     Cpu Time (s)     Elapsed time (s)     FETCHES     BUFFER_GETS     DISK_READS     EXECUTIONS
    query1 (RUN_ID=119)      22.15857     3.589822     1     2216     354     1
    query2 (RUN_ID=120)      1885.16959     321.974332     3     7685410     368     3
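    For comparison with the EXPLAIN PLAN output below, the plan actually used at run time (with actual row counts) can be fetched on 10.2 with DBMS_XPLAN.DISPLAY_CURSOR; the sql_id here is a placeholder that would come from V$SQL, and the actual-rows columns only appear if the query was run with the gather_plan_statistics hint or STATISTICS_LEVEL = ALL:
    SELECT *
      FROM TABLE (DBMS_XPLAN.display_cursor ('an0qhm4abcdef', NULL, 'ALLSTATS LAST'));
    -- 'an0qhm4abcdef' is a made-up sql_id; look up the real one in V$SQL
    -- 'ALLSTATS LAST' shows estimated vs. actual rows for the last execution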
    Explain Plan for RUN_ID 119:
    PLAN_TABLE_OUTPUT
    Plan hash value: 3979963427
    | Id  | Operation                          | Name                 | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                   |                      |   261 | 98397 |   434   (2)| 00:00:06 |
    |   1 |  HASH GROUP BY                     |                      |   261 | 98397 |   434   (2)| 00:00:06 |
    |   2 |   VIEW                             | VW_WELL_DATA         |   261 | 98397 |   433   (2)| 00:00:06 |
    |   3 |    UNION-ALL                       |                      |       |       |            |          |
    |*  4 |     HASH JOIN                      |                      |   252 | 21168 |   312   (2)| 00:00:04 |
    |   5 |      NESTED LOOPS                  |                      |   249 | 15687 |   112   (2)| 00:00:02 |
    |*  6 |       HASH JOIN                    |                      |   249 | 14442 |   112   (2)| 00:00:02 |
    |   7 |        TABLE ACCESS BY INDEX ROWID | PLATE                |    29 |   464 |     2   (0)| 00:00:01 |
    |*  8 |         INDEX RANGE SCAN           | IDX_PLATE_RUN_ID     |    29 |       |     1   (0)| 00:00:01 |
    |   9 |        NESTED LOOPS                |                      | 13286 |   544K|   109   (1)| 00:00:02 |
    |  10 |         TABLE ACCESS BY INDEX ROWID| RUN                  |     1 |    11 |     1   (0)| 00:00:01 |
    |* 11 |          INDEX UNIQUE SCAN         | PK_RUN               |     1 |       |     0   (0)| 00:00:01 |
    |  12 |         TABLE ACCESS BY INDEX ROWID| WELL                 | 13286 |   402K|   108   (1)| 00:00:02 |
    |* 13 |          INDEX RANGE SCAN          | IDX_WELL_RUN_ID      | 13286 |       |    46   (0)| 00:00:01 |
    |* 14 |       INDEX UNIQUE SCAN            | PK_WELL_TYPE         |     1 |     5 |     0   (0)| 00:00:01 |
    |  15 |      TABLE ACCESS BY INDEX ROWID   | WELL_RAW_DATA        | 26361 |   540K|   199   (2)| 00:00:03 |
    |* 16 |       INDEX RANGE SCAN             | IDX_WELL_RAW_RUN_ID  | 26361 |       |    92   (2)| 00:00:02 |
    |  17 |     NESTED LOOPS                   |                      |     9 |   891 |   121   (2)| 00:00:02 |
    |* 18 |      HASH JOIN                     |                      |     9 |   846 |   121   (2)| 00:00:02 |
    |* 19 |       HASH JOIN                    |                      |   249 | 14442 |   112   (2)| 00:00:02 |
    |  20 |        TABLE ACCESS BY INDEX ROWID | PLATE                |    29 |   464 |     2   (0)| 00:00:01 |
    |* 21 |         INDEX RANGE SCAN           | IDX_PLATE_RUN_ID     |    29 |       |     1   (0)| 00:00:01 |
    |  22 |        NESTED LOOPS                |                      | 13286 |   544K|   109   (1)| 00:00:02 |
    |  23 |         TABLE ACCESS BY INDEX ROWID| RUN                  |     1 |    11 |     1   (0)| 00:00:01 |
    |* 24 |          INDEX UNIQUE SCAN         | PK_RUN               |     1 |       |     0   (0)| 00:00:01 |
    |  25 |         TABLE ACCESS BY INDEX ROWID| WELL                 | 13286 |   402K|   108   (1)| 00:00:02 |
    |* 26 |          INDEX RANGE SCAN          | IDX_WELL_RUN_ID      | 13286 |       |    46   (0)| 00:00:01 |
    |  27 |       TABLE ACCESS BY INDEX ROWID  | WELL_CALC_DATA       |   490 | 17640 |     9   (0)| 00:00:01 |
    |* 28 |        INDEX RANGE SCAN            | IDX_WELL_CALC_RUN_ID |   490 |       |     4   (0)| 00:00:01 |
    |* 29 |      INDEX UNIQUE SCAN             | PK_WELL_TYPE         |     1 |     5 |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - access("WELL_RAW_DATA"."RUN_ID"="WELL"."RUN_ID" AND
                  "WELL"."INT_WELL_ID"="WELL_RAW_DATA"."INT_WELL_ID")
       6 - access("PLATE"."RUN_ID"="WELL"."RUN_ID" AND "PLATE"."INT_PLATE_ID"="WELL"."INT_PLATE_ID")
       8 - access("PLATE"."RUN_ID"=119)
      11 - access("RUN"."RUN_ID"=119)
      13 - access("WELL"."RUN_ID"=119)
      14 - access("WELL"."WELL_TYPE_ID"="TSF_LAYOUT_WELL_TYPE"."WELL_TYPE_ID")
      16 - access("WELL_RAW_DATA"."RUN_ID"=119)
      18 - access("WELL"."RUN_ID"="WELL_CALC_DATA"."RUN_ID" AND
                  "WELL"."INT_WELL_ID"="WELL_CALC_DATA"."INT_WELL_ID")
      19 - access("PLATE"."RUN_ID"="WELL"."RUN_ID" AND "PLATE"."INT_PLATE_ID"="WELL"."INT_PLATE_ID")
      21 - access("PLATE"."RUN_ID"=119)
      24 - access("RUN"."RUN_ID"=119)
      26 - access("WELL"."RUN_ID"=119)
      28 - access("WELL_CALC_DATA"."RUN_ID"=119)
      29 - access("WELL"."WELL_TYPE_ID"="TSF_LAYOUT_WELL_TYPE"."WELL_TYPE_ID")
    Explain Plan for RUN_ID 120:
    PLAN_TABLE_OUTPUT
    Plan hash value: 599334230
    | Id  | Operation                           | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                    |                           |     2 |   754 |    24   (5)| 00:00:01 |
    |   1 |  HASH GROUP BY                      |                           |     2 |   754 |    24   (5)| 00:00:01 |
    |   2 |   VIEW                              | VW_WELL_DATA              |     2 |   754 |    23   (0)| 00:00:01 |
    |   3 |    UNION-ALL                        |                           |       |       |            |          |
    |*  4 |     TABLE ACCESS BY INDEX ROWID     | WELL_RAW_DATA             |     1 |    21 |     3   (0)| 00:00:01 |
    |   5 |      NESTED LOOPS                   |                           |     1 |    84 |     9   (0)| 00:00:01 |
    |   6 |       NESTED LOOPS                  |                           |     1 |    63 |     6   (0)| 00:00:01 |
    |   7 |        NESTED LOOPS                 |                           |     1 |    58 |     6   (0)| 00:00:01 |
    |   8 |         NESTED LOOPS                |                           |     1 |    27 |     3   (0)| 00:00:01 |
    |   9 |          TABLE ACCESS BY INDEX ROWID| RUN                       |     1 |    11 |     1   (0)| 00:00:01 |
    |* 10 |           INDEX UNIQUE SCAN         | PK_RUN                    |     1 |       |     0   (0)| 00:00:01 |
    |  11 |          TABLE ACCESS BY INDEX ROWID| PLATE                     |     1 |    16 |     2   (0)| 00:00:01 |
    |* 12 |           INDEX RANGE SCAN          | IDX_PLATE_RUN_ID          |     1 |       |     1   (0)| 00:00:01 |
    |* 13 |         TABLE ACCESS BY INDEX ROWID | WELL                      |     1 |    31 |     3   (0)| 00:00:01 |
    |* 14 |          INDEX RANGE SCAN           | IDX_WELL_RUN_ID           |    59 |       |     2   (0)| 00:00:01 |
    |* 15 |        INDEX UNIQUE SCAN            | PK_WELL_TYPE              |     1 |     5 |     0   (0)| 00:00:01 |
    |* 16 |       INDEX RANGE SCAN              | IDX_WELL_RAW_DATA_WELL_ID |     2 |       |     2   (0)| 00:00:01 |
    |* 17 |     TABLE ACCESS BY INDEX ROWID     | WELL_CALC_DATA            |     1 |    36 |     8   (0)| 00:00:01 |
    |  18 |      NESTED LOOPS                   |                           |     1 |    99 |    14   (0)| 00:00:01 |
    |  19 |       NESTED LOOPS                  |                           |     1 |    63 |     6   (0)| 00:00:01 |
    |  20 |        NESTED LOOPS                 |                           |     1 |    58 |     6   (0)| 00:00:01 |
    |  21 |         NESTED LOOPS                |                           |     1 |    27 |     3   (0)| 00:00:01 |
    |  22 |          TABLE ACCESS BY INDEX ROWID| RUN                       |     1 |    11 |     1   (0)| 00:00:01 |
    |* 23 |           INDEX UNIQUE SCAN         | PK_RUN                    |     1 |       |     0   (0)| 00:00:01 |
    |  24 |          TABLE ACCESS BY INDEX ROWID| PLATE                     |     1 |    16 |     2   (0)| 00:00:01 |
    |* 25 |           INDEX RANGE SCAN          | IDX_PLATE_RUN_ID          |     1 |       |     1   (0)| 00:00:01 |
    |* 26 |         TABLE ACCESS BY INDEX ROWID | WELL                      |     1 |    31 |     3   (0)| 00:00:01 |
    |* 27 |          INDEX RANGE SCAN           | IDX_WELL_RUN_ID           |    59 |       |     2   (0)| 00:00:01 |
    |* 28 |        INDEX UNIQUE SCAN            | PK_WELL_TYPE              |     1 |     5 |     0   (0)| 00:00:01 |
    |* 29 |       INDEX RANGE SCAN              | IDX_WELL_CALC_RUN_ID      |   486 |       |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - filter("WELL_RAW_DATA"."RUN_ID"=120)
      10 - access("RUN"."RUN_ID"=120)
      12 - access("PLATE"."RUN_ID"=120)
      13 - filter("PLATE"."INT_PLATE_ID"="WELL"."INT_PLATE_ID")
      14 - access("WELL"."RUN_ID"=120)
      15 - access("WELL"."WELL_TYPE_ID"="TSF_LAYOUT_WELL_TYPE"."WELL_TYPE_ID")
      16 - access("WELL"."INT_WELL_ID"="WELL_RAW_DATA"."INT_WELL_ID")
      17 - filter("WELL"."INT_WELL_ID"="WELL_CALC_DATA"."INT_WELL_ID")
      23 - access("RUN"."RUN_ID"=120)
      25 - access("PLATE"."RUN_ID"=120)
      26 - filter("PLATE"."INT_PLATE_ID"="WELL"."INT_PLATE_ID")
      27 - access("WELL"."RUN_ID"=120)
      28 - access("WELL"."WELL_TYPE_ID"="TSF_LAYOUT_WELL_TYPE"."WELL_TYPE_ID")
      29 - access("WELL_CALC_DATA"."RUN_ID"=120)I need some advice to understand the issue and to improve the performance.
    Thanks,
    Grégory

    Hello,
    Thanks for your response.
    Stats were computed recently with the DBMS_STATS package (case 2) and we have a histogram on the 'RUN_ID' columns.
    I tried to use the deprecated "analyze" method (case 1) and obtained better results!
    DECLARE
       -- Get tables used in the view vw_well_data --
       CURSOR c1
       IS
          SELECT table_name, last_analyzed
            FROM user_tables
           WHERE table_name LIKE 'WELL%';
    BEGIN
       FOR r1 IN c1
       LOOP
          -- Case 1 : Analyze method : Perf is good --
          EXECUTE IMMEDIATE    'analyze table '
                              || r1.table_name
                              || ' compute statistics ';
          -- Case 2 : DBMS_STATS --
          DBMS_STATS.gather_table_stats ('SCHEMA', r1.table_name);
       END LOOP;
    END;
    The explain plans are the same as before.
    Any explanations, suggestions ?
    Thanks,
    Gregory
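    A possible angle, given the histogram on RUN_ID and the :app_run_id bind in the pivot query: on 10.2 the optimizer peeks at the first bind value it sees and, with a histogram present, can lock in a plan that suits that value only. A sketch of regathering stats without the histogram (schema/table names as in the post; the method_opt setting is the assumption here):
    BEGIN
       -- SIZE 1 gathers basic column statistics but no histograms,
       -- so RUN_ID = :app_run_id is costed the same way for every value
       DBMS_STATS.gather_table_stats (
          ownname    => 'SCHEMA',
          tabname    => 'WELL_CALC_DATA',
          method_opt => 'FOR ALL COLUMNS SIZE 1',
          cascade    => TRUE);
    END;
    After regathering, the cached cursor has to be invalidated (for example by flushing the shared pool on a test system) so that both RUN_IDs are reparsed and can pick the same plan.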

  • Captivate 6 Output intermittent performance issues

    I am getting feedback from users that some of my Captivate e-learning has intermittent performance issues when run via our LMS. I am hearing of slowdowns and pausing between slides and questions in quizzes. I have seen this in action and I cannot pinpoint what is causing it. The problem doesn't affect most users, then some will report these extreme slowdowns.
    I have ruled out any specific browser or OS. Does anyone have similar experiences and any ideas about what may be causing it? I want to rule out Captivate if possible and potentially point the finger at our LMS. However, I don't know how the content within Captivate is cached/downloaded when being played.
    Does anyone know how Captivate operates when run via an LMS? Does it download in full, or buffer content and download progressively as the person moves through it? If I can work this out I may be able to pinpoint the problem.
    Sorry if this is a bit vague, any help or additional experiences would be greatly appreciated.
    Jay

    There is also network latency to consider, in addition to server latency.  In our building, network bandwidth availability can surge or slow down depending on which subnetwork we're on, or if there are several VoIP or web video conferencing sessions taking up bandwidth.  If you're distributing to geographically separated organizations, then you may also experience latency issues related to the local infrastructure.
    Just a thought...

  • Accrual Reconciliation Load Run  report performance issue.

    We have significant performance issues when running the Accrual Reconciliation Load Run report. We had to cancel it after it ran for a day. Any idea how to resolve this?

    We experienced a similar issue, as the runtime of this report depends on the input parameters. Remember, the first run of this report is going to take a significant amount of time, and subsequent runs will be much shorter.
    But we had to apply the patches referred to in the MOS article to resolve the performance issue.
    Accrual Reconciliation Load Run Has Slow Performance [ID 1490578.1]
    Thanks,
    Sunthar....

  • Cache and performance issue in browsing SSAS cube using Excel for first time

    Hello Group Members,
    I am facing a cache and performance issue for the first time when I try to open an SSAS cube connection using Excel (using Data tab -> From Other Sources -> From Analysis Services) after the daily cube refresh. On an end user's
    system (8 GB RAM), it takes 10 minutes to open the cube the first time. From the next run onwards, it opens quickly, within 10 secs.
    We have a daily ETL process running on high-end servers. The configuration of the dedicated SSAS cube server is 8 cores, 64 GB RAM. In total we have 4 cubes, of which 3 get a full cube refresh and 1 an incremental refresh. We have seen that after the
    daily cube refresh, it takes 10-odd minutes to open the cube on an end user's system. From the next time onwards, it opens up really fast, within 10 secs. After a cube refresh, on server systems (16 GB RAM), it takes 2-odd minutes to open the cube.
    Is there any way we could reduce the time taken for the first attempt?
    Best Regards, Arka Mitra.

    Thanks Richard and Charlie,
    We have implemented the solution/suggestions in our DEV environment and we have seen a definite improvement. We are waiting for this to be deployed in the UAT environment to note down the actual performance and time improvement while browsing the cube for the
    first time after the daily cube refresh.
    Guys,
    This is what we have done:
    We have 4 cube databases and each cube db has 1-8 cubes.
    1. We are doing daily cube refresh using SQL jobs as follows:
    <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
    <Parallel>
    <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
    <Object>
    <DatabaseID>FINANCE CUBES</DatabaseID>
    </Object>
    <Type>ProcessFull</Type>
    <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
    </Parallel>
    </Batch>
    2. Next we are creating a separate SQL job (Cache Warming - Profitability Analysis) for cube cache warming for each single cube in each cube db like:
    CREATE CACHE FOR [Profit Analysis] AS
    {[Measures].members}
    *[TIME].[FINANCIAL QUARTER].[FINANCIAL QUARTER]
    3. Finally after each cube refresh step, we are creating a new step of type T-SQL where we are calling these individual steps:
    EXEC dbo.sp_start_job N'Cache Warming - Profit Analysis';
    GO
    I will update the post after I receive the actual improvement figures from the UAT/Production environment.
    Best Regards, Arka Mitra.

  • Performance issue in browsing SSAS cube using Excel for first time after cube refresh

    Hello Group Members,
    This is a continuation of my earlier blog question -
    https://social.msdn.microsoft.com/Forums/en-US/a1e424a2-f102-4165-a597-f464cf03ebb5/cache-and-performance-issue-in-browsing-ssas-cube-using-excel-for-first-time?forum=sqlanalysisservices
    As that thread is marked as answered, but my issue is not resolved, I am creating a new thread.
    I am facing a cache and performance issue for the first time when I try to open an SSAS cube connection using Excel (using Data tab -> From Other Sources -> From Analysis Services) after the daily cube refresh. On an end user's system (8 GB RAM, but around
    4 GB available RAM), it takes 10 minutes to open the cube the first time. From the next run onwards, it opens quickly, within 10 secs.
    We have a daily ETL process running on high-end servers. The configuration of the dedicated SSAS cube server is 8 cores, 64 GB RAM. In total we have 4 cube DBs, of which 3 get a full cube refresh and 1 an incremental refresh. We have seen that after the daily cube
    refresh, it takes 10-odd minutes to open the cube on an end user's system. From the next time onwards, it opens up really fast, within 10 secs. After a cube refresh, on server systems (32 GB RAM, around 4 GB available RAM), it takes 2-odd minutes to open the cube.
    Is there any way we could reduce the time taken for the first attempt?
    As mentioned in my previous thread, we have already implemented cube cache warming. But there is no improvement.
    Currently, the cumulative size of all 4 cube DBs is more than 9 GB in Production, with each cube DB having 4 individual cubes on average and the largest cube DB being 3.5 GB. Now, the question is: how does Excel work with an SSAS cube after the
    daily cube refresh?
    Does Excel create a cache of the schema and data each time the cube is refreshed, and in doing so does it need to download the cube schema into Excel's memory? Downloading the schema and data of each cube database from server to client will take
    a significant time depending on the bandwidth of the network and connection.
    Is it in any way dependent on the client system's RAM? Today the biggest cube DB is 3.5 GB; tomorrow it will be 5-6 GB. Now, though the client system RAM is 8 GB, the available or free RAM would be around 4 GB. So what will happen then?
    Best Regards, Arka Mitra.

    Could you run the following two DMV queries, filling in the name of the cube you're connecting to? Then please post back the row count returned from each of them (by copying the results into Excel and counting the rows).
    I want to see if this is an issue I've run across before with thousands of dimension attributes and MDSCHEMA_CUBES performance.
    select [HIERARCHY_UNIQUE_NAME]
    from $system.mdschema_hierarchies
    where CUBE_NAME = 'YourCubeName'
    select [LEVEL_UNIQUE_NAME]
    from $system.mdschema_levels
    where CUBE_NAME = 'YourCubeName'
    Also, what version of Analysis Services is it? If you connect Object Explorer in Management Studio to SSAS, what's the exact version number it says on the top server node?
    http://artisconsulting.com/Blogs/GregGalloway

  • New DVR Issues (First Run, Channel Switching, etc.)

    I've spent the last 30 minutes trying to find answers through the search with no luck, so sorry if I missed something.
    I recently switched to FIOS from RCN cable in New York.  I've gone through trying to setup my DVR and am running into issues and was hoping for some answers.
    1.  I setup two programs to record at 8PM, I was watching another channel at the time and only half paying attention.  Around 8:02 I noticed a message had popped up asking if I would like to switch channels to start recording.  I was expecting it to force it to switch like my old DVR, but in this case it didn't switch and I missed the first two minutes of one of the shows.  I typically leave my DVR on all day and just turn off the TV, this dual show handling will cause issues with that if I forget to turn off the DVR.  Is there a setting I can change that will force the DVR to choose one of the recording channels?
    2.  I setup all my recordings for "First Run" because I only want to see the new episodes.  One show I setup was The Daily Show on Comedy Central, which is shown weeknights at 11pm and repeated 3-4 times throughout the day.  My scheduled recordings list is showing all of these as planned recordings even though only the 11pm show is really "new".  Most of the shows I've setup are once a week so they aren't a problem, but this seems like it will quickly fill my DVR.  Any fixes?
    Thanks for the help.
    Solved!
    Go to Solution.

    I came from RCN about a year ago.  Fios is different in several ways, not all of them desirable.  Here are several ways to get--and fix--unwanted recordings from a series recording setup.
    Some general principles. 
    Saving changes.  When you originally create a series with options, or if you go back to edit the options for an existing series, You MUST save the Series Options changes.  Pretty much everywhere else in the user interface, when you change an option, the change takes effect immediately--but not in Series Options.  Look at the Series Options window.  Look at the far right side.  There is a vertical "Save" bar, which you must navigate to and click OK on to actually save your changes.  Exiting the Series Options window without having first saved your changes loses all your attempted changes--immediately.
    Default Series Options.  This is accessed  from [Menu]--DVR--Settings--Default Series Options.  This will bring up the series options that will automatically be applied to the creation of a NEW series. The options for every previously created series will not be affected by a subsequent modification of the Default Series Options.  You should set these options to the way you would like them to be for the majority of series recordings that you are likely to create.  Be sure to SAVE your changes.  This is what you will get when you select "Create Series Recording" from the Guide.  When creating a new series recording where you think that you may want options different from the default, select "Create Series with Options" instead.  Series Options can always be changed for any individual series set up later--but not for all series at once.
    Non-series recordings.  With Fios you have no directly available options for these.  With RCN and most other DVRs, you can change the start and end times for individual episodes, including individual episodes that are also in a series.  With Fios, your workarounds are to create a series with options for a single program, then delete the series later;  change the series options if the program is already in a series, then undo the changes you made to the series options later; or schedule recordings of the preceding and/or following shows as needed.
    And now, to the unwanted repeats. 
    First, make sure your series options for the specific series in question--and not just the series default options--include "First Run Only".  If not, fix that and SAVE.  Then check your results by viewing the current options using the Series Manager app under the DVR menu.
    Second, and most annoying, the Guide can have repeat programs on your channel tagged as "New".  It happens.  Set the series option "Air Time" to "Selected Time".  To make this work correctly, you must have set up the original series recording after selecting the program in the Guide at the exact time of a first run showing (11pm, in your case), and not on a repeat entry in the Guide.  Then, even if The Daily Show is tagged as New for repeat showings, these will be ignored.
    Third, another channel may air reruns of the program in your series recording, and the first showing of a rerun episode on the other channel may be tagged as "New".  These can be ignored in your series if you set the series option "Channel" to "Selected Channel".  Related to this, if there are both an SD and an HD channel broadcasting your series program, you will record them both if the series option "Duplicates" is set to "Yes".  However, when the Channel option is set to "Selected Channel", the Duplicates option is always effectively "No", regardless of what shows up on the options screen.
    As for your missing two minutes, I have several instances in which two programs start recording at the same time.  To the best of my recollection, whenever the warning message has appeared, ignoring it has not caused a loss of recording time.  You might have an older software version.  Newest is v.1.8.  Look at Menu--Settings--System Info.  Or, I might not have noticed the loss of minutes.  I regularly see up to a minute of previous programming at the start of a recording, or a few missing seconds at the beginning or end of a recording.  There are a lot of possibilities for that, but the DVR clock being incorrect is not one of them.  With RCN, the DVR clocks occasionally drifted off by as much as a minute and a half.

  • Performance issue - application running on front

    Hi, I have a strange performance issue:
    - when I launch my app from Flash Builder without touching anything, it is slow,
    - when I launch it from Flash Builder and immediately open another window and keep it in front, it is really fast
    it is a windowed application, full screen, displaying multiple objects moving around
    has anyone already had this issue?
    thanks,
    YAnn

    For monitoring Azure using SCOM 2012R2, you can refer below link
    http://blogs.technet.com/b/dcaro/archive/2012/05/02/how-to-monitor-your-windows-azure-application-with-system-center-2012.aspx
    Mai Ali

  • RE: Case 59063: performance issues w/ C TLIB and Forte3M

    Hi James,
    Could you give me a call, I am at my desk.
    I had meetings all day and couldn't respond to your calls earlier.
    -----Original Message-----
    From: James Min [mailto:jmin@brio.forte.com]
    Sent: Thursday, March 30, 2000 2:50 PM
    To: Sharma, Sandeep; Pyatetskiy, Alexander
    Cc: sophia@forte.com; kenl@forte.com; Tenerelli, Mike
    Subject: Re: Case 59063: performance issues w/ C TLIB and Forte 3M
    Hello,
    I just want to reiterate that we are very committed to working on
    this issue, and that our goal is to find out the root of the problem. But
    first I'd like to narrow down the avenues by process of elimination.
    Open Cursor is something that is commonly used in today's RDBMS. I
    know that you must test your query in ISQL using some kind of execute
    immediate, but Sybase should be able to handle an open cursor. I was
    wondering if your Sybase expert commented on the fact that the server is
    not responding to a commonly used command like 'open cursor'. According to
    our developer, we are merely following the API from Sybase, and open cursor
    is not something that particularly slows down a query for several minutes
    (except maybe the very first time). The logs show that Forte is waiting for
    a status from the DB server. Actually, using prepared statements and open
    cursor ends up being more efficient in the long run.
    Some questions:
    1) Have you tried to do a prepared statement with open cursor in your ISQL
    session? If so, did it have the same slowness?
    2) How big is the table you are querying? How many rows are there? How many
    are returned?
    3) When there is a hang in Forte, is there disk-spinning or CPU usage in
    the database server side? On the Forte side? Absolutely no activity at all?
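    As an illustration of question 1, a minimal cursor test in an isql session could look like the following; the table, column, and pattern are invented for the example:
    declare test_csr cursor for
        select col1 from big_table where col1 like 'ABC%'
    go
    open test_csr
    go
    fetch test_csr
    go
    close test_csr
    go
    deallocate cursor test_csr
    go
    If this shows the same multi-minute delay on the first open, the problem sits in the server (optimizer or statistics), not in Forte's use of the CT-Lib API.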
    We actually have a Sybase set-up here, and if you wish, we could test out
    your database and Forte PEX here. Since your queries seems to be running
    off of only one table, this might be the best option, as we could look at
    everything here, in house. To do this:
    a) BCP out the data into a flat file. (character format to make it portable)
    b) we need a script to create the table and indexes.
    c) the Forte PEX file of the app to test this out.
    d) the SQL statement that you issue in ISQL for comparison.
    If the situation warrants, we can give a concrete example of
    possible errors/bugs to a developer. Dial-in is still an option, but to be
    able to look at the TOOL code, database setup, etc. without the limitations
    of dial-up may be faster and more efficient. Please let me know if you can
    provide this, as well as the answers to the above questions, or if you have
    any questions.
    Regards,
    At 08:05 AM 3/30/00 -0500, Sharma, Sandeep wrote:
    James, Ken:
    FYI, see attached response from our Sybase expert, Dani Sasmita. She has
    already tried what you suggested and results are enclosed.
    ++
    Sandeep
    -----Original Message-----
    From: SASMITA, DANIAR
    Sent: Wednesday, March 29, 2000 6:43 PM
    To: Pyatetskiy, Alexander
    Cc: Sharma, Sandeep; Tenerelli, Mike
    Subject: Re: FW: Case 59063: Select using LIKE has performance
    issues
    w/ CTLIB and Forte 3M
    We did that trick already.
    When it is hanging, I can see what is doing.
    It is doing OPEN CURSOR. But not clear the exact statement of the cursor
    it is trying to open.
    When we run the query directly to Sybase, not using Forte, it is clearly
    not opening any cursor.
    And running it directly to Sybase many times, the response is always
    consistently fast.
    It is just when the query runs from Forte to Sybase, it opens a cursor.
    But again, in the Forte code, Alex is not using any cursor.
    In trying to capture the query, we even tried to audit any statement coming
    to Sybase. Same thing, just open cursor. No cursor declaration anywhere.
    ==============================================
    James Min
    Technical Support Engineer - Forte Tools
    Sun Microsystems, Inc.
    1800 Harrison St., 17th Fl.
    Oakland, CA 94612
    james.min@sun.com
    510.869.2056
    ==============================================
    Support Hotline: 510-451-5400
    CUSTOMERS open a NEW CASE with Technical Support:
    http://www.forte.com/support/case_entry.html
    CUSTOMERS view your cases and enter follow-up transactions:
    http://www.forte.com/support/view_calls.html

    Earthlink wrote:
    Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? Well, we're missing a lot here.
    Like:
    - a database version
    - how did you test
    - what data do you have, how is it distributed, indexed
    and so on.
    If you want to find out what's going on then use a TRACE with wait events.
    All necessary steps are explained in these threads:
    HOW TO: Post a SQL statement tuning request - template posting
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    Another nice one is RUNSTATS:
    http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701
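    For reference, a session-level 10046 trace with wait events is typically enabled along these lines; the tracefile identifier is just an arbitrary label:
    ALTER SESSION SET tracefile_identifier = 'pipeline_test';
    ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
    -- run the slow statement here, then:
    ALTER SESSION SET EVENTS '10046 trace name context off';
    -- level 8 = SQL trace plus wait events; format the resulting file with tkprof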

  • Performance issues with Homesharing?

    I have a Time Capsule as the base station for my wireless network, then 2 Airport Expresses set up to extend the network around the house, an iMac i7 as the main iTunes library, a couple of iPads, and a couple of Apple TVs. Everything has the latest software, but I have several performance issues with Home Sharing. I've done several tests making sure nothing is taking additional bandwidth, so here is the list of issues:
    1) With nothing else running, trying to play a movie via Home Sharing on an iPad 2, where the movie is located on my iMac, it stops and I have to keep pressing the play button over and over again. I typically see that the iPad tries to download part of the movie first and then starts playing so that it deals with the bandwidth, but in many cases it doesn't.
    2) When trying to play any iTunes content (movies, music, photos, etc.) from my Apple TV I can see my computer's library, but when I go into any of the menus, it says there's no content. I have to reboot the Apple TV and then the problem is fixed. It's just annoying that I have to reboot.
    3) When watching a Netflix movie on my iPad, I send the sound via AirPlay to some speakers through an Airport Express. At times I lose the connection to the speakers.
    I've complained about Wi-Fi's instability, but here I tried to keep everything with Apple products to avoid any compatibility issues and stay within N wireless technology, which I understood was much more stable.
    Does anyone have any suggestions?

    Hi,
    you should analyze the DB after you have loaded the tables.
    Do you use sequences to generate PKs? Do you have a lot of indexes and/or triggers on the tables?
    If yes:
    make sure your sequences cache (alter sequence s cache 10000)
    Drop all unneeded indexes while loading and disable triggers if possible.
    How big is your redo log buffer? When loading a large amount of data it may be an option to enlarge this buffer.
    Do you have more than one DBWR process? Writing in parallel can speed things up when a checkpoint is needed.
    Is it possible to use a direct load? Or do you already direct load?
    Dim
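    As a quick sketch of the sequence and direct-load suggestions above (all object names are placeholders):
    -- keep 10000 sequence values in memory instead of the default 20
    ALTER SEQUENCE my_pk_seq CACHE 10000;
    -- direct-path load: formats blocks above the high-water mark and
    -- bypasses the buffer cache, at the cost of an exclusive table lock
    INSERT /*+ APPEND */ INTO target_table
    SELECT * FROM staging_table;
    COMMIT;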

  • Performance issues with FDK in large XML documents

    In my current project with FrameMaker 8 I'm experiencing severe performance issues with some FDK API calls.
    The documents are about 3-8 MB in size. Formatted, they cover 150-250 pages.
    When importing such an XML document I do some extensive "post-processing" using the FDK. This processing happens in Sr_EventHandler() during the SR_EVT_END_READER event. I noticed that some FDK function calls which modify the document's structure, like F_ApiSetAttribute() or F_ApiNewElementInHierarchy(), take several seconds, for the larger documents even minutes, to complete one single function call. I tried to move some of these calls to earlier events, mostly to SR_EVT_END_ELEM. There the calls work without a delay. Unfortunately I can't rewrite the FDK client to move all the lagging calls to earlier events.
    Does anybody have a clue why such delays happen, and possibly a suggestion how to solve this issue? Thank you in advance.
    PS: I already thought of splitting such a document into smaller pieces by using the FrameMaker book function. But I don't think the structure of the documents will permit such an automatic split, and it definitely isn't an option to change the document structure (the project is about migrating documents from Interleaf to XML with the constraint of keeping the document layout identical).

    FP_ApplyFormatRules sounds really good--I'll give it a try on Monday. Wonder how I could miss it, as I already tried FP_Reformatting and FP_Displaying to no avail?! By the way, what is actually meant by FP_Reformatting (when I used it I assumed it would do exactly what FP_ApplyFormatRules sounds to do), or is that another of Lynne's well-kept secrets?
    Thanks for all the helpful suggestions, guys. On Friday I already had my first improvements in a test version of my client: I did some (not all necessary) structural changes using XSLT pre-processing, and processing time went down from 8 hours(!) to 1 hour--Yeappie! I was also playing with the idea of writing a wrapper to F_ApiNewElementInHierarchy() which actually pastes an appropriate element created in a small flow on the reference pages at the intended insertion location. But now, with FP_ApplyFormatRules on the horizon, I'm quite confident to get even the complicated stuff under control, which cannot be handled by the XSLT pre-processing, as it is based on the actual formatting of the document at run-time and cannot be anticipated in pre-processing.
    --Franz

  • MacBook Pro performance issues w/2nd monitor and FCP7

    I have this MacBook Pro bought brand-new in January 2010:
      Model Name: MacBook Pro
      Model Identifier: MacBookPro5,2
      Processor Name: Intel Core 2 Duo
      Processor Speed: 3.06 GHz
      Number Of Processors: 1
      Total Number Of Cores: 2
      L2 Cache: 6 MB
      Memory: 8 GB
      Bus Speed: 1.07 GHz
    and until today had never attached a second monitor to it. Today I hooked up my Samsung 24" to do some dual screen editing in Final Cut 7.0.3. I was unable to play back my video at full speed in the second monitor, and after a few seconds of skippy playback I'd get that error message about unable to play back video at full speed and to check my RT settings. I was using a Mini DisplayPort to DVI adapter. My computer has no issues playing the video in the laptop's monitor at any resolution and any quality settings (I've never changed the RT settings or anything else in the menu ever but I tried every combination this time). I then tried using my TV as a 2nd monitor with an HDMI adapter. Same performance issues. I then tried my friend's newer 13" MBP 8,1 and it performed flawlessly with the same project & footage. I feel like my $3,000 computer should outperform a $1,200 one even if mine is a year and a half older. Any advice?
    Chris

    Wow, you posted this perfectly to coincide with an identical problem, albeit using Logic Pro 9.1.5 rather than FCP.
    Last week, I purchased a 23" external monitor to use alongside my "flagship" 2011 15" hi-res, 2.3 i7 Macbook Pro with 8Gb of RAM.
    It is connected via a mini-DVI to D-sub analog (not that that should matter?) and all appeared fine.
    The first issue I had was with my MBP's fan now running CONSTANTLY, when I have the second monitor attached. Even when the machine is completely idle.
    When using the machine to record audio, this is a fairly hefty problem and not something I had anticipated - indeed why would I anticipate such a thing?
    What is far, far worse though is that over the last few days I have had repeated problems with performance drop-outs and errors in Logic, and I have been trying to fathom out why. Realising that the only major system change made was the above monitor connection, I ran some tests.
    I restarted my MBP, no other apps were running and with my new 23" monitor attached acting as main display with MBP built in display on as secondary
    I loaded up a fairly demanding Logic project which was hitting 40% to 60% CPU usage when using the built in MBP display last week
    I ran activity monitor and had CPU usage history open
    The above project now repeatedly overloads and playback halts in a given 8 bar section - with CPU at 80% most of the time
    I disconnected the external display, no shut down, I just let the machine switch to the built in 15".
    Started the same project, the same 8 bar section and hey presto - CPU usage back down to 40% to 60%
    The above was reflected in the CPU usage history with the graph showing CPU use down by about a half, when running this Logic project WITHOUT the external display.
    There is a very useful benchmark Logic project that has been used as a test by many users to gauge Logic performance on given Apple hardware.
    The project has about 100 tracks pre-configured with CPU intensive plugins, designed to tax the CPU.
    The idea is that you load up the project with tracks muted, press play and then unmute the tracks steadily until Logic is unable to play continuously because of a system performance error.
    On my MBP, with the external monitor NOT attached, I can play back around 50 of the audio tracks in this benchmark project.
    With the monitor attached, I can get about 22 tracks playing.... which is actually a far worse performance drop (-50% I think!?) than in the first example!
    I did also try with just the external monitor attached and not the MBP display and performance was about 10% better than with dual monitors - so still extremely poor, to say the least.
    This machine is the flagship MBP and has a dedicated AMD Radeon HD6750 GPU which should take care of most if not ALL graphics processing - I mean it's capable of running some pretty demanding games!
    Putting aside the issue of constant fan noise, there is no reason AT ALL, why using an external monitor should tax the i7 CPU this way - it's not as though Logic is graphically demanding... far from it.
    I am on 10.6.8, Logic 9.1.5, all apps up to date via "Software Update".
    I will of course, be contacting Apple...
