A possible way to improve the performance of MainStage

Some of you may know that I haven't bothered much with MainStage and Logic recently, having had a lot of grief with them over the last year.
However, I just came across some articles about OS X memory management and the positive impact of disabling the dynamic paging system on a Mac (meaning no more virtual memory). Disabling dynamic paging seems to improve the general responsiveness of a Mac very significantly. While some claim it's only good for Snow Leopard, others say it works with Lion as well. My experience is that it does indeed work well.
Here's a link to an article that talks about the issue and describes how to disable paging (by unloading the dynamic_pager launch daemon). Mileage may vary, but I think it might be worthwhile for those who have reported overload problems with MainStage and Logic to try it out.
Here's where I first found out about it
http://workstuff.tumblr.com/post/20464780085/something-is-deeply-broken-in-os-x-memory-management
and here's a discussion in MacWorld about it
http://hints.macworld.com/article.php?story=201106020948369
I'd be very interested to know how it goes.
D

Similar Messages

  • Is there a way to improve the performance of a report group?

    Hi gurus,
    Is there a way to improve the performance of a report group? 
    Points will be given later....
    Thanks!

    Hi there,
    Thank you for your response. I thought there was no answer to this issue and was planning to convert the report to ALV.
    I looked at the code and debugged it. After the SELECT from the ZZUWT table (a custom table), there is a loop that takes a long time (10-12 minutes for 1000 records), plus several nested PERFORM routines. It's a standard program, so I'm hesitant to edit it.
    I'll be checking the primary keys. If it's OK, may I ask for your assistance with this part? I'm not that familiar with Report Painter and haven't tried creating one before.
    Thanks anyway...:D
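    If you do convert the output to ALV, a minimal sketch using the SALV API could look like the one below. LT_RESULT stands in for whatever internal table the report actually fills; that name (and the message text) is an assumption, not part of the original program.
    DATA: LO_ALV TYPE REF TO CL_SALV_TABLE.
    TRY.
        CL_SALV_TABLE=>FACTORY(
          IMPORTING R_SALV_TABLE = LO_ALV
          CHANGING  T_TABLE      = LT_RESULT ). " LT_RESULT: the report's output table (assumed name)
        LO_ALV->DISPLAY( ).                     " full-screen ALV grid instead of classic list output
      CATCH CX_SALV_MSG.
        MESSAGE 'ALV output could not be created' TYPE 'I'.
    ENDTRY.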

  • Is there any way to improve the performance of this code

    Hi all, can anyone tell me how to improve the performance of the code below?
    Actually I need to calculate the opening balance of a G/L account, so instead of using BSEG I am using BSIS.
    Is there any way to improve this code's performance?
    Any help would be appreciated.
    REPORT  ZTEMP5 NO STANDARD PAGE HEADING LINE-SIZE 190.
    data: begin of collect occurs 0,
           MONAT TYPE MONAT,
           HKONT TYPE HKONT,
           BELNR TYPE BELNR_D,
           BUDAT TYPE BUDAT,
           WRBTR TYPE WRBTR,
           SHKZG TYPE SHKZG,
           SGTXT TYPE SGTXT,
           AUFNR TYPE AUFNR_NEU,
           TOT   LIKE BSIS-WRBTR,
    end of collect.
    TYPES: BEGIN OF TY_BSIS,
           MONAT TYPE MONAT,
           HKONT TYPE HKONT,
           BELNR TYPE BELNR_D,
           BUDAT TYPE BUDAT,
           WRBTR TYPE WRBTR,
           SHKZG TYPE SHKZG,
           SGTXT TYPE SGTXT,
           AUFNR TYPE AUFNR_NEU,
    END OF TY_BSIS.
    DATA: IT_BSIS TYPE TABLE OF TY_BSIS,
          WA_BSIS TYPE TY_BSIS.
    DATA: TOT TYPE WRBTR,
          SUMA TYPE WRBTR,
          VALUE TYPE WRBTR,
          VALUE1 TYPE WRBTR.
    SELECTION-SCREEN: BEGIN OF BLOCK B1.
    PARAMETERS:  S_HKONT LIKE WA_BSIS-HKONT DEFAULT '0001460002' .
    SELECT-OPTIONS: S_BUDAT FOR WA_BSIS-BUDAT,
                    S_AUFNR FOR WA_BSIS-AUFNR DEFAULT '200020',
                    S_BELNR FOR WA_BSIS-BELNR.
    SELECTION-SCREEN: END OF BLOCK B1.
    AT SELECTION-SCREEN OUTPUT.
      LOOP AT SCREEN.
        IF SCREEN-NAME = 'S_HKONT'.
          SCREEN-INPUT = 0.
          MODIFY SCREEN.
        ENDIF.
      ENDLOOP.
    START-OF-SELECTION.
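    *  Read the line items of the fixed account for the selection period into IT_BSIS.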
      SELECT MONAT
             HKONT
             BELNR
             BUDAT
             WRBTR
             SHKZG
             SGTXT
             AUFNR
             FROM BSIS
             INTO TABLE IT_BSIS
             WHERE HKONT EQ S_HKONT
             AND   BELNR IN S_BELNR
             AND   BUDAT IN S_BUDAT
             AND   AUFNR IN S_AUFNR.
    *  if sy-subrc <> 0.
    *    message 'No Data' type 'I'.
    *  endif.
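    *  Opening balance: total amount posted before the start of the selection period.
    *  (Note: the SHKZG debit/credit sign is not applied to this total.)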
      SELECT SUM( WRBTR )
             FROM BSIS
             INTO COLLECT-TOT
             WHERE HKONT EQ S_HKONT
             AND BUDAT < S_BUDAT-LOW
             AND AUFNR IN S_AUFNR.
    END-OF-SELECTION.
      CLEAR: S_BUDAT, S_AUFNR, S_BELNR, S_HKONT.
      LOOP AT IT_BSIS INTO WA_BSIS.
        IF WA_BSIS-SHKZG = 'H'.          " credit items are counted negative
          WA_BSIS-WRBTR = 0 - WA_BSIS-WRBTR.
        ENDIF.
        COLLECT-MONAT  = WA_BSIS-MONAT.
        COLLECT-HKONT  = WA_BSIS-HKONT.
        COLLECT-BELNR  = WA_BSIS-BELNR.
        COLLECT-BUDAT  = WA_BSIS-BUDAT.
        COLLECT-WRBTR  = WA_BSIS-WRBTR.
        COLLECT-SHKZG  = WA_BSIS-SHKZG.
        COLLECT-SGTXT  = WA_BSIS-SGTXT.
        COLLECT-AUFNR  = WA_BSIS-AUFNR.
        COLLECT COLLECT.                 " add the work area to internal table COLLECT
        CLEAR: COLLECT, WA_BSIS.
      ENDLOOP.
      SORT COLLECT BY HKONT MONAT.       " AT ... ENDAT control levels need sorted data
      LOOP AT COLLECT.
        AT end of HKONT.
          WRITE:/65 'OpeningBalance',
                 85  collect-tot.
          skip 1.
        ENDAT.
        WRITE:/06 COLLECT-BELNR,
               22 COLLECT-BUDAT,
               32 COLLECT-WRBTR,
               54 COLLECT-SGTXT.
        AT end of MONAT.
          SUM.
          WRITE:/ COLLECT-MONAT COLOR 1.
          WRITE:32 COLLECT-WRBTR COLOR 1.
          VALUE = COLLECT-WRBTR.
          SKIP 1.
        ENDAT.
        VALUE1 = COLLECT-TOT +  VALUE.
        AT end of MONAT.
          WRITE:85 VALUE1.
        ENDAT.
      endloop.
      CLEAR: COLLECT, SUMA, VALUE, VALUE1.
    TOP-OF-PAGE.
      WRITE:/06 'Doc No',
             22 'Post Date',
             39 'Amount',
             54 'Text'.
    Moderator message : See the Sticky threads (related for performance tuning) in this forum. Thread locked.

    Hi Ben,
    Both BSIS SELECTs would become faster if you can add the company code BUKRS as the 1st field of the WHERE clause, because it is the 1st field of the primary key and HKONT is the 2nd field of the primary key.
    If you have no table index with HKONT as its 1st field, it results in a full database scan.
    If possible, try to add BUKRS as the 1st field of the WHERE clause; otherwise ask your Basis team for an additional BSIS index.
    Regards,
    Klaus
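    For illustration, here is a sketch of that suggestion applied to the first SELECT. It assumes a company-code select-option S_BUKRS is added to the selection screen; S_BUKRS and GV_BUKRS do not exist in the original program.
    DATA: GV_BUKRS TYPE BSIS-BUKRS.
    SELECT-OPTIONS: S_BUKRS FOR GV_BUKRS.    " hypothetical addition to block B1
    ...
    SELECT MONAT HKONT BELNR BUDAT WRBTR SHKZG SGTXT AUFNR
           FROM BSIS
           INTO TABLE IT_BSIS
           WHERE BUKRS IN S_BUKRS            " 1st primary-key field leads the WHERE clause
           AND   HKONT EQ S_HKONT            " 2nd primary-key field
           AND   BELNR IN S_BELNR
           AND   BUDAT IN S_BUDAT
           AND   AUFNR IN S_AUFNR.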

  • Ways to improve the performance of my query?

    Hi all,
    I have created a MultiProvider which fetches data from 3 ODS objects, and each ODS contains a huge amount of data. As a result my query performance is very slow.
    Apart from creating indexes on the ODS objects, is there anything else that can be done to improve the performance of my query, given that all 3 InfoProviders are ODS objects?
    Thanks,
    Haritha

    Haritha,
    If you still need more info, just have a look below.
    There are a few ways your queries can be improved:
    1. The data volume in your InfoProviders.
    2. Dimension tables: how you have arranged your objects into the dimension tables.
    3. Queries run from a MultiProvider vs. the cube itself: when running from a MultiProvider, the system has to read more tables at execution time, so query performance is affected.
    4. Aggregates on the cube, and how well the aggregates are designed.
    5. OLAP cache.
    6. Calculation formulas.
    etc.
    and also you can go thru the links below:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
    Hope this helps you.
    ****Assign Points*******
    Gattu

  • How can we improve the performance while fetching data from the RESB table?

    Hi All,
    Can anybody suggest the right way to improve the performance while fetching data from the RESB table? Below is the SELECT statement.
    SELECT aufnr posnr roms1 roanz
        INTO (itab-aufnr, itab-pposnr, itab-roms1, itab-roanz)
        FROM resb
        WHERE kdauf  = p_vbeln
        AND   ablad  = itab-sposnr+2.
    Here I am using 'KDAUF' & 'ABLAD' in the condition. Can we use a secondary index to improve the performance in this case?
    Regards,
    Himanshu

    Hi,
    Declare an internal table with only those four fields and try the code below:
    SELECT aufnr posnr roms1 roanz
           INTO TABLE itab
           FROM resb
           WHERE kdauf = p_vbeln
           AND   ablad = itab-sposnr+2.
    Yes, you can also use a secondary index to improve the performance in this case.
    Regards,
    Anand .
    Reward if it is useful....
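    Note that the SELECT above still refers to ITAB-SPOSNR inside the WHERE clause, which only works row by row with a header line. Assuming ITAB is already filled (with a header line, as in the original), a sketch that replaces the row-by-row SELECT with a single FOR ALL ENTRIES access could look like this; the local names are assumptions. A secondary index on KDAUF (and ABLAD) would support either variant.
    TYPES: BEGIN OF TY_KEY,
             ABLAD TYPE RESB-ABLAD,
           END OF TY_KEY,
           BEGIN OF TY_RESB,
             AUFNR TYPE RESB-AUFNR,
             POSNR TYPE RESB-POSNR,
             ROMS1 TYPE RESB-ROMS1,
             ROANZ TYPE RESB-ROANZ,
             ABLAD TYPE RESB-ABLAD,
           END OF TY_RESB.
    DATA: LT_KEYS TYPE STANDARD TABLE OF TY_KEY,
          LS_KEY  TYPE TY_KEY,
          LT_RESB TYPE STANDARD TABLE OF TY_RESB.
    LOOP AT ITAB.
      LS_KEY-ABLAD = ITAB-SPOSNR+2.          " same offset logic as the original WHERE clause
      APPEND LS_KEY TO LT_KEYS.
    ENDLOOP.
    IF LT_KEYS IS NOT INITIAL.               " never run FOR ALL ENTRIES on an empty table
      SELECT AUFNR POSNR ROMS1 ROANZ ABLAD
             FROM RESB
             INTO TABLE LT_RESB              " one database access; duplicate rows are removed
             FOR ALL ENTRIES IN LT_KEYS
             WHERE KDAUF = P_VBELN
             AND   ABLAD = LT_KEYS-ABLAD.
    ENDIF.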

  • How to improve the Performance of Swing Components

    Hi
    I have developed a GUI framework with Swing components. I have observed that when the number of components is large, the GUI is slow. Could anybody suggest ways to improve the performance of Swing components?
    Thanks

    Hi There - I haven't found issues so far with the speed of GUI building; it seems fast compared with .NET applications.
    Are you doing any database querying on form load? This can slow down the loading of the GUI. I use persistence in an application, and before the GUI is shown the persistence unit logs in to the database and registers itself; it takes 2-3 seconds to register and complete whatever it does before the form builds and shows.
    Also check whether you are loading/using any network-based files on form load; this can slow down the building considerably as well. Apart from that, it's hard to say where your issue may lie, short of removing controls and checking build speed until you find the one that's slowing it down (not a good approach though). I sometimes had to load CSV files from a network drive, and if the file server is slow or the LAN speed isn't great this can also cause a delay. Hope you find it...

  • How to improve the performance of policy import process

    Hello OES 10g Experts,
    I have an application-specific policy XML for OES 10g. While importing it into OES using the policyIX.sh command, the process takes around 3.5 hours on the DEV servers. The policy XML is around 2 MB; other applications, however, import in less than 30 minutes.
    Perhaps because of the size of the XML the import process takes a huge amount of time, which may not be viable in production systems. I would like to know if there is a way to improve the performance of the import process. The command syntax I am using is
    ./policyIX.sh -import -disableTransaction config.xml policy.xml
    I have tried removing the -disableTransaction parameter, but there is no performance improvement.
    Appreciate any inputs.
    Thanks
    Mahendra.

    Go through following link:-
    http://docs.oracle.com/cd/E21764_01/doc.1111/e14316/org_mangmnt.htm#OMUSG2741

  • How to improve the performance of 11i?

    Dear all,
    We just upgraded from 10.7 to 11i and found that the performance is not acceptable. Currently there are about 120 staff in our company. Can someone kindly suggest some ways to improve the performance?
    Thanks,
    George

    Dear Alan,
    The reports run slowly when more than one hundred users are logged in. Should we tune the database or set up another report server to shorten the report running time?
    Thanks,
    George

  • How to improve the performance in Xcelsius

    Hi,
    I have nearly 5000 records in Excel, and when I try to import this data into Xcelsius it takes nearly 3 hours; when I then try to select the source data again, it takes more than 3 hours. I think this is happening because I increased the maximum number of rows to 5000, but is there any way to improve the performance?
    Regards,
    Sany

    Hi Sany,
    5000 records is much too much. Xcelsius is designed for small, highly aggregated data sets.
    Although 5000 records may work, consider that the default maxrows value is about 512. I think that is a kind of sign.
    Try to reduce your data set, or dynamically refresh only the data you really need at runtime.
    Best regards
    Ulrich
    http://www.xcelsius-insight.com

  • Improve the performance of filter

    Hi All,
    We are using Oracle Coherence Standard Edition in our application.
    Any way to improve the performance of NamedCache.keySet(filter) configured as distributed?
    I know there is no scope for indexing since we are using Standard Edition.
    Thanks

    user1096084 wrote:
    Hi All,
    We are using Oracle Coherence Standard Edition in our application.
    Any way to improve the performance of NamedCache.keySet(filter) configured as distributed?
    I know there is no scope for indexing since we are using Standard Edition.
    Thanks
    Not only is there no indexing, but also, I believe, all data is sent to the querying node for filtering instead of being filtered on the storage nodes in parallel. If you upgraded, you could benefit from parallel querying, which is only available in a higher edition.
    Best regards,
    Robert

  • Possible ways to improve the client copy performance

    Hi All,
    Would you please suggest all the ways to improve client copy performance, and how I can improve the performance of a client copy that has already started?
    Thanks & Regards
    Rajesh Meda

    I don't think this is portal related. I suggest you ask in one of the admin forums.

  • Help to improve the performance of a procedure.

    Hello everybody,
    First to introduce myself. My name is Ivan and I recently started learning SQL and PL/SQL. So don't go hard on me. :)
    Now let's jump to the problem. What we have there is a table (big one, but we'll need only a few fields) with some information about calls. It is called table1. There is also another one, absolutely the same structure, which is empty and we have to transfer the records from the first one.
    The shorter calls (less than 30 minutes) have segmentID = 'C1'.
    The longer calls (more than 30 minutes) are recorded as more than one record (1 for every 30 minutes). The first record (first 30 minutes of the call) has segmentID = 'C21'. It is the first so we have only one of these for every different call. Then we have the next (middle) parts of the call, which have segmentID = 'C22'. We can have more than 1 middle part and again the maximum minutes in each is 30 minutes. Then we have the last part (again max 30 minutes) with segmentID = 'C23'. As with the first one we can have only one last part.
    So far, so good. Now we need to insert these call records into the second table. The C1 calls are easy - one record = one call. But the partial ones we need to combine so they become one whole call. This means that we have to take one of the first parts (C21), find whether there is a middle part (C22) with the same calling/called numbers and a 30-minute difference in date/time, then search again whether there is another C22, and so on. Last, we have to search for the last part of the call (C23). In the course of these searches we sum the duration of each part, so at the end we have the duration of the whole call. Then we are ready to insert it into the new table as a single record, just with the new duration.
    But here comes the problem with my code... The table has A LOT of records, and despite the fact that this solution works (at least in the tests I've made so far), it's REALLY slow.
    As I said I'm new to PL/SQL and I know that this solution is really newbish, but I can't find another way of doing this.
    So I decided to come here and ask you for some tips on how to improve the performance of this.
    I think you are getting confused already, so I'm just going to put some comments in the code.
    I know it's not a procedure as it stands now, but it will be once I create a better code. I don't think it matters for now.
    DECLARE
    CURSOR cur_c21 IS
        select * from table1
        where segmentID = 'C21'
        order by start_date_of_call;     -- start_date_of_call holds the beginning of a specific part of the call. It's a DATE.
    CURSOR cur_c22 IS
        select * from table1
        where segmentID = 'C22'
        order by start_date_of_call;
    CURSOR cur_c22_2 IS
        select * from table1
        where segmentID = 'C22'
        order by start_date_of_call;  
    cursor cur_c23 is
        select * from table1
        where segmentID = 'C23'
        order by start_date_of_call;
    v_temp_rec_c22 cur_c22%ROWTYPE;
    v_dur table1.duration%TYPE;           -- used to store the duration of the call. It's a NUMBER.
    BEGIN
    insert into table2
    select * from table1 where segmentID = 'C1';     -- inserting the calls which are less than 30 minutes long
    -- and here starts the mess
    FOR rec_c21 IN cur_c21 LOOP        -- taking the first part of the call
       v_dur := rec_c21.duration;      -- recording its duration
       FOR rec_c22 IN cur_c22 LOOP     -- starting to check if there is a middle part for the call
          IF rec_c22.callingnumber = rec_c21.callingnumber AND rec_c22.callednumber = rec_c21.callednumber AND 
            (rec_c22.start_date_of_call - rec_c21.start_date_of_call) = (1/48)                
    /* if the numbers are the same and the date difference is 30 minutes then we have a middle part and we start searching for the next middle. */
          THEN
             v_dur := v_dur + rec_c22.duration;     -- updating the new duration
             v_temp_rec_c22:=rec_c22;               -- recording the current record in another variable because I use it for the next check
             FOR rec_c22_2 in cur_c22_2 LOOP
                IF rec_c22_2.callingnumber = v_temp_rec_c22.callingnumber AND rec_c22_2.callednumber = v_temp_rec_c22.callednumber AND 
                  (rec_c22_2.start_date_of_call - v_temp_rec_c22.start_date_of_call) = (1/48)        
    /* logic is the same as before but comparing with the last value in v_temp...
    And because the data in the cursors is ordered by date in ascending order it's easy to search for another middle parts. */
                THEN
                   v_dur:=v_dur + rec_c22_2.duration;
                   v_temp_rec_c22:=rec_c22_2;
                END IF;
             END LOOP;                     
          END IF;
          EXIT WHEN rec_c22.callingnumber = rec_c21.callingnumber AND rec_c22.callednumber = rec_c21.callednumber AND 
                   (rec_c22.start_date_of_call - rec_c21.start_date_of_call) = (1/48);       
    /* exiting the loop if we have at least one middle part.
    (I couldn't find if there is a way to write this more clean, like exit when (the above if is true) */
       END LOOP;
       FOR rec_c23 IN cur_c23 LOOP             
          IF (rec_c23.callingnumber = rec_c21.callingnumber AND rec_c23.callednumber = rec_c21.callednumber AND
             (rec_c23.start_date_of_call - rec_c21.start_date_of_call) = (1/48)) OR v_dur != rec_c21.duration          
    /* we should always have one last part, so we need this check.
    If we don't have the "v_dur != rec_c21.duration" part it will execute the code inside only if we don't have middle parts
    (yes we can have these situations in calls longer than 30 and less than 60 minutes). */
          THEN
             v_dur:=v_dur + rec_c23.duration;
             rec_c21.duration:=v_dur;               -- updating the duration
             rec_c21.segmentID :='C1';
             INSERT INTO table2 VALUES rec_c21;     -- inserting the whole call in table2
          END IF;
          EXIT WHEN (rec_c23.callingnumber = rec_c21.callingnumber AND rec_c23.callednumber = rec_c21.callednumber AND
                    (rec_c23.start_date_of_call - rec_c21.start_date_of_call) = (1/48)) OR v_dur != rec_c21.duration;                 
                -- exit the loop when the last part has been found.
       END LOOP;
    END LOOP;
    END;
    I'm using Oracle 11g and version 1.5.5 of SQL Developer.
    It's my first post here so hope this is the right sub-forum.
    I tried to explain everything as deep as possible (sorry if it's too long) and I kinda think that the code got somehow hard to read with all these comments. If you want I can remove them.
    I know I'm still missing a lot of knowledge so every help is really appreciated.
    Thank you very much in advance!

    Atiel wrote:
    Thanks for the suggestion, but the thing is that segmentID must stay the same for all. The data in this field just tells us whether this is a record of a complete call (C1) or a partial record of a call (C21, C22, C23). So in table2, as every record will be a complete call, the segmentID must be C1 for all.
    Well, that's not a problem. You just hard code 'C1' instead of applying the row number as I was doing:
    SQL> ed
    Wrote file afiedt.buf
      1  select 'C1' as segmentid
      2        ,start_date_of_call, duration, callingnumber, callednumber
      3  from (
      4        select distinct
      5               min(start_date_of_call) over (partition by callingnumber, callednumber) as start_date_of_call
      6              ,sum(duration) over (partition by callingnumber, callednumber) as duration
      7              ,callingnumber
      8              ,callednumber
      9        from table1
    10*      )
    SQL> /
    SEGMENTID  START_DATE_OF_CALL     DURATION CALLINGNUMBER   CALLEDNUMBER
    C1         11-MAY-2012 12:13:10 8020557824 1982032041      0631432831624
    C1         15-MAR-2012 09:07:26  269352960 5581790386      0113496771567
    C1         31-JUL-2012 23:20:23  134676480 4799842978      0813391427349
    Another thing is that, as I said above, the actual table has 120 fields. Do I have to list them all manually if I use something similar?
    If that's what you need, then yes, you would have to list them. You only get data if you tell it you want it. ;)
    Of course if you are taking the start_date_of_call, callingnumber and callednumber as the 'key' to the record, then you could join the results of the above back to the original table1 and pull out the rest of the columns that way...
    SQL> select * from table1;
    SEGMENTID  START_DATE_OF_CALL     DURATION CALLINGNUMBER   CALLEDNUMBER          COL1       COL2       COL3
    C1         31-JUL-2012 23:20:23  134676480 4799842978      0813391427349          556         40       5.32
    C21        15-MAR-2012 09:07:26  134676480 5581790386      0113496771567          219        100      10.16
    C23        11-MAY-2012 09:37:26  134676480 5581790386      0113496771567          321         73       2.71
    C21        11-MAY-2012 12:13:10 3892379648 1982032041      0631432831624          959         80       2.87
    C22        11-MAY-2012 12:43:10 3892379648 1982032041      0631432831624          375         57       8.91
    C22        11-MAY-2012 13:13:10  117899264 1982032041      0631432831624          778         27       1.42
    C23        11-MAY-2012 13:43:10  117899264 1982032041      0631432831624          308         97       3.26
    7 rows selected.
    SQL> ed
    Wrote file afiedt.buf
      1  with t2 as (
      2  select 'C1' as segmentid
      3        ,start_date_of_call, duration, callingnumber, callednumber
      4  from (
      5        select distinct
      6               min(start_date_of_call) over (partition by callingnumber, callednumber) as start_date_of_call
      7              ,sum(duration) over (partition by callingnumber, callednumber) as duration
      8              ,callingnumber
      9              ,callednumber
    10        from table1
    11       )
    12  )
    13  --
    14  select t2.segmentid, t2.start_date_of_call, t2.duration, t2.callingnumber, t2.callednumber
    15        ,t1.col1, t1.col2, t1.col3
    16  from   t2
    17         join table1 t1 on (   t1.start_date_of_call = t2.start_date_of_call
    18                           and t1.callingnumber = t2.callingnumber
    19                           and t1.callednumber = t2.callednumber
    20*                          )
    SQL> /
    SEGMENTID  START_DATE_OF_CALL     DURATION CALLINGNUMBER   CALLEDNUMBER          COL1       COL2       COL3
    C1         11-MAY-2012 12:13:10 8020557824 1982032041      0631432831624          959         80       2.87
    C1         15-MAR-2012 09:07:26  269352960 5581790386      0113496771567          219        100      10.16
    C1         31-JUL-2012 23:20:23  134676480 4799842978      0813391427349          556         40       5.32
    SQL>
    Of course this is pulling back the additional columns for the record that matches the start_date_of_call for that calling/called number pair, so if the values differed from row to row within the calling/called number pair you may need to aggregate those (take the minimum/maximum etc. as required) as part of the first query. If the values are known to be the same across all records in the group then you can just pick them up from the join to the original table as I coded in the above example (only in my example the data was different across all rows).

  • How to improve the performance of the FACT load

    Hi Team,
    We have 13 dimension tables and 1 fact table. I've designed the fact job both ways (by joining all the dimension tables to fetch the SK values, and by looking up all the dimension tables to fetch the SK values), but both take a long time to load (around 7 hours for 36 million records). Please give me some tips to improve the performance of the fact load. The logic I am using is simple upsert logic:
    SOURCE (staging) -> Query -> Lookup -> TC -> MO -> Target (fact table)

    See if you can achieve a full push-down; this should improve performance. If too many operations are applied in a single dataflow, you should think about splitting them into multiple dataflows and executing the DFs in parallel wherever possible. This should improve performance drastically.

  • Re: How to Improve the performance on Rollup of Aggregates for PCA Infocube

    Hi BW Gurus,
    I have an unresolved issue and our team is still working on it.
    I have already posted several questions on this, but it is still not clear to me how to reduce the time of the Rollup of Aggregates process.
    I have requested an OSS note and am searching myself, but still could not find one.
    Finally I executed one of the cubes in RSRV with the database check "Database indexes of an InfoCube and its aggregates" and got warning messages. I tried to correct the errors and executed it once again, but I still get the warnings. The messages are as follows (this is only for one InfoCube; we have 6 InfoCubes and I am executing them one by one):
    ORACLE: Index /BI0/IACCOUNT~0 has possibly degenerated
    ORACLE: Index /BI0/IPROFIT_CTR~0 has possibly degenerated     
    ORACLE: Index /BI0/SREQUID~0 has possibly degenerated
    ORACLE: Index /BIC/D1001072~010 has possibly degenerated
    ORACLE: Index /BIC/D1001132~010 has possibly degenerated
    ORACLE: Index /BIC/D1001212~010 has possibly degenerated
    ORACLE: Index /BIC/DGPCOGC062~01 has possibly degenerated
    ORACLE: Index /BIC/IGGRA_CODE~0 has possibly degenerated
    ORACLE: Index /BIC/QGMAPGP1~0 has possibly degenerated
    ORACLE: Index /BIC/QGMAPPC2~0 has possibly degenerated
    ORACLE: Index /BIC/SGMAPGP1~0 has possibly degenerated
    I don't know how to move further on this. Can anyone tell me how to tackle this problem and increase the performance of the Rollup of Aggregates (PCA InfoCubes)?
    Every time, I create indexes and statistics regularly to improve the performance; it works for a couple of days and then the performance of the rollup of aggregates gradually comes down again.
    Thanks and Regards,
    Venkat

    Hi,
    Check in a SQL client the SQL created by BI against the query that you use directly on your physical layer.
    The difference between these 2 must be 2-3 seconds, otherwise you have problems (these seconds are for scripts needed by BI).
    If you use "like" in your SQL, then forget indexes...
    For more information about indexes, check Google or ask your DBA.
    Last, I mentioned that a materialized view is not perfect, but it helps a lot... so why not try to split it into smaller ones...
    ex...
    logiacal dimensions
    year-half-day
    company-department
    fact
    quantity
    instead of making one...make 3,
    year - department - quantity
    half - department - quantity
    day - department - quantity
    and add them as data sources and assign them the appropriate logical level in the business layer in the Administrator tool...
    Do you use partitioning functionality???
    I hope I helped...
    http://greekoraclebi.blogspot.com/

  • How to improve the performance of adobe forms

    Hi,
    Please give me some suggestions on how to improve the performance of an Adobe form.
    Right now, when I am doing user events, it works fine for the first 6 or 7 user events. From the next one onward it hangs.
    I read about the wizard form design approach; how can I use it here?
    Thanks,
    Aravind

    Hi Otto,
    The form is created using HCM Forms and Processes. I am performing user events in the form.
    A user event does a round trip in which the form data is sent to the backend SAP system. Processing happens on the ABAP side and the result appears on the form. The first 6 or 7 user events work correctly and the result appears on the form. Around the 8th or 9th one, the wait symbol appears and the form is not re-rendered. The form is 6 pages long; the issue does not occur with a form of 1 page.
    I was reading about ways to improve performance during re-rendering, given below:
    http://www.adobe.com/devnet/livecycle/articles/DynamicInteractiveFormPerformance.pdf
    It talks about the wizard form design approach, but in the SFP transaction I am not seeing any kind of wizard.
    Let me know if you need further details.
    Thanks,
    Aravind
