Performance issues in 4.7 (previously worked on ECC 6.0 only)

Hi experts,
I have worked on ECC 6.0, and now I have a new job working on a 4.7 support project. I find it quite difficult after working on 6.0.
I have not used statements like OCCURS 0, LIKE, etc. in 6.0.
What should I keep in mind in 4.7 to avoid performance issues?
Please guide me.
Thanks in advance.

Hi Sakthi,
I faced the same problem. I have been in support for the last 3 months now,
so let me share some of the tips and tricks I follow.
In ECC 6.0 there is usually less data, since the database may only go back to around 2005, not earlier.
But a 4.7 EE database may go back to 1997, like mine does.
Because the database is so huge, the SELECT statements you write will sometimes not perform in PRD and QAS:
you will get a request timeout.
So you need to be more careful with SELECT statements: match as many primary key fields as possible, learn about secondary indexes, and know the foreign key relations very well.
About LIKE, OCCURS, etc., don't worry about that; it is very easy, and if there is a modification you can write the code almost exactly as in ECC 6.0.
The SAP Wiki has performance tips and coding guidelines; have a look there, it will be helpful.
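For example, here is a rough sketch of what "match as many primary key fields as possible" looks like in practice (the table VBAK and the select-options s_vbeln / s_erdat are only illustrative names, not from any specific program):

TYPES: BEGIN OF ty_vbak,
         vbeln TYPE vbak-vbeln,
         erdat TYPE vbak-erdat,
         ernam TYPE vbak-ernam,
       END OF ty_vbak.
DATA: lt_vbak TYPE STANDARD TABLE OF ty_vbak.

" Restrict on the key field VBELN so the primary index can be used,
" select only the fields you need, and fetch into an internal table
" in one array operation instead of SELECT ... ENDSELECT.
SELECT vbeln erdat ernam
  FROM vbak
  INTO TABLE lt_vbak
  WHERE vbeln IN s_vbeln      "key field restriction first
    AND erdat IN s_erdat.     "non-key restrictions afterwards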
Thanks & regards,
Dileep .C

Similar Messages

  • Performance issue after Upgrade from 4.7 to ECC 6.0 with a select query

    Hi All,
    There is a performance issue after an upgrade from 4.7 to ECC 6.0 with a select query in a Report Painter report.
    This query works fine when executed in the 4.7 system, whereas it runs for much longer in ECC 6.0.
    The select query is on the table COSP.
    SELECT (FIELD_LIST)
           INTO CORRESPONDING FIELDS OF TABLE I_COSP PACKAGE SIZE 1000
           FROM COSP CLIENT SPECIFIED
           WHERE GJAHR IN SELR_GJAHR
             AND KSTAR IN SELR_KSTAR
             AND LEDNR EQ '00'
             AND OBJNR IN SELR_OBJNR
             AND PERBL IN SELR_PERBL
             AND VERSN IN SELR_VERSN
             AND WRTTP IN SELR_WRTTP
             AND MANDT IN MANDTTAB
           GROUP BY (GROUP_LIST).
      LOOP AT I_COSP.
        COSP = I_COSP.
        PERFORM PCOSP USING I_COSP-_COUNTER.
        CLEAR: $RWTAB, COSP.
        CLEAR CCR1S.
      ENDLOOP.
    ENDSELECT.
    I have checked the table indexes; they are the same as in the 4.7 system.
    What can be the reason for the difference in execution time? How can this be reduced without adjusting the select query?
    Thanks in advance for the responses.
    Regards,
    Dedeepya.

    Hi,
    Ohhhh... lots of problems in that select query. This is not the way you should write it.
    Some generic comments:
    1. Never use SELECT ... ENDSELECT. Use
       SELECT ... INTO TABLE itab
       (with FOR ALL ENTRIES IN ... if you drive the selection from another internal table)
       WHERE ...
       and do the further processing in a PERFORM after the selection.
    2. Do not use INTO CORRESPONDING FIELDS; use an exact structure type.
    3. Use the proper sequence of fields in the WHERE condition so that the table access can follow the index.
       E.g. in your case the sequence should be:
    LEDNR
    OBJNR
    GJAHR
    WRTTP
    VERSN
    KSTAR
    HRKFT
    VRGNG
    VBUND
    PARGB
    BEKNZ
    TWAER
    PERBL
    The sequence should be the same as defined in the table.
    Always keep the select query as simple as possible and perform all the other calculations etc. afterwards.
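    For illustration, here is a rough sketch of how the selection could be rewritten along the lines of points 1-3 (the field list and target structure are assumptions, and whether you can drop the GROUP BY and aggregate in ABAP afterwards has to be verified against the report's logic):
    TYPES: BEGIN OF ty_cosp,
             lednr TYPE cosp-lednr,
             objnr TYPE cosp-objnr,
             gjahr TYPE cosp-gjahr,
             wrttp TYPE cosp-wrttp,
             versn TYPE cosp-versn,
             kstar TYPE cosp-kstar,
             perbl TYPE cosp-perbl,
           END OF ty_cosp.
    DATA: lt_cosp TYPE STANDARD TABLE OF ty_cosp.
    " One array fetch into a typed internal table instead of SELECT ... ENDSELECT,
    " with the WHERE fields listed in the same order as the table key.
    SELECT lednr objnr gjahr wrttp versn kstar perbl
      FROM cosp CLIENT SPECIFIED
      INTO TABLE lt_cosp
      WHERE mandt IN mandttab
        AND lednr EQ '00'
        AND objnr IN selr_objnr
        AND gjahr IN selr_gjahr
        AND wrttp IN selr_wrttp
        AND versn IN selr_versn
        AND kstar IN selr_kstar
        AND perbl IN selr_perbl.
    " Grouping and the other calculations are then done on lt_cosp afterwards,
    " e.g. in a PERFORM, instead of inside the database loop.
    If memory is a concern, PACKAGE SIZE can still be combined with INTO TABLE; the main points are the single array fetch, the exact structure type, and the index-friendly WHERE clause.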
    I hope it helps.
    Regards,
    Pranaya

  • Event ID 5013 - Old Exchange 2003 reference from Active Directory - Performance Issues or Just a Message?

    Hello,
    I am working on an existing 2010 Exchange implementation.  The site has one Exchange 2010 server with no other Exchange servers running and/or available.  I looked through the application logs and I see multiple Event ID 5013 errors that state,
    "The routing group for Exchange server <ServerName>.<DomainName>.LOCAL was not determined in routing tables with timestamp 6/24/2014 8:01:48 PM. Recipients will not be routed to this server."
    I would like to know whether having this old 2003 Exchange server in AD could be causing performance issues, or whether it is just a warning message. I have located the old server name using ADSIEdit under "CN=Configuration,DC=<DomainName>,DC=Local\CN=Services\CN=Microsoft
    Exchange\CN=First Organization\CN=Administrative Groups\CN=First Administrative Group\CN=Servers". The old 2003 Exchange server is then listed in the right window pane, so it could be deleted, but I would need to know whether this would affect the Exchange
    environment other than removing the error from the event logs.
    Thank you for your help,
    Michael

    Hi,
    When you install Exchange 2010 in an existing Exchange 2003 environment, a routing group connector is created automatically. If you remove Exchange 2003 from your environment, you should also delete this routing group connector.
    Besides, here is a related article about Event ID 5013 which may help you for your reference.
    http://technet.microsoft.com/en-us/library/ff360498(v=exchg.140).aspx
    Best regards,
    Belinda Ma
    TechNet Community Support

  • "itunes could not restore the iphone because the password was incorrect" I know for a fact im using the correct password. and it has previously worked. is anyone else having this issue.

    "itunes could not restore the iphone because the password was incorrect" I know for a fact im using the correct password. and it has previously worked. is anyone else having this issue.

    Hi, I also get the message "iTunes could not restore the iPhone because the password was incorrect", though I am never prompted to enter a password anywhere! Nor was the backup encrypted with a password to begin with! Really strange; any pointers will be greatly appreciated!

  • BCS: Performance Issue - Most of the time is spent doing commit work.

    Hello,
    We are experiencing performance issues with our BI server. This performance issue can be seen during our BCS runs. Our DBA has indicated that a very high percentage of time is spent doing "commit work".
    Currently we are running BI 7 NW2004s, Basis 700, support pack 14.
    Has anyone else experienced this? As the BCS run is mainly standard SAP code, I was wondering if there may be some SAP Notes that correct this.
    Thank you for any help you could provide us.

    If it is related to SEM-BCS and the new EHP releases, then there are still big problems with the monitor and task status management (meaning performance problems). Is that the case? If yes, then you'd better look for already released notes regarding this and open your own OSS messages if you don't find anything relevant.

  • 2.0 performance issues, new Encore user, typically do AE work

    I keep great tabs on my machine and have just recently started to learn Encore because it came with my Production Studio Premium. I don't typically do the final authoring at the prod house; I'm the effects guy, and this laptop runs AE like a champ (HP nx9420).
    But when I run Encore I take tremendous performance hits... a brand new project lags a little bit, and the Library is pointless because the scroll speed is incredibly slow. With that closed, and nothing yet imported into the project, Encore seems OK. Add one clip, start one menu setup, and now I wait forever for Encore to catch up to what I am doing.
    Are there known issues? Tweaks? Performance tips/tricks out there? I used the software for about 4 hours tonight, forcing process kills to get back to a usable state every so often.
    Any help would be great! Thanks guys!

    Yeah I know all that.
    Appreciate the post though... others that are not hardware savvy might not know this... I am more a hardware guru than a software one.
    I got this laptop with 1 GB, and I am doing no multitasking. Well, I was at first, but after the performance issues I totally stopped trying to even run PS, and I create all the graphics ahead of time instead of as needed.
    After looking at it more over the last couple of nights I have decided that what I am seeing is a memory leak... it's random; I've not been able to isolate it to anything specific yet. I have not been able to duplicate the issue anywhere, but I do know this: if I run it, it will happen. Encore typically takes 20-40K of Process Mem Usage under Task Manager. Every time I've hit this slowdown I open Task Manager and Encore is using 150-400K Mem Usage, and on the really bad hits it takes even more. Even after quitting out I have to reboot to clear memory, or run a RAM scrubber to clear it all out...
    Thanks for the look at my post, and the help!

  • Performance issue working with OciArray

    We have a performance issue with Oracle Spatial objects, e.g. SDO_GEOMETRY.
    To get data from an Oracle Array (OciArray) there is functionality like:
    OciNumberToRealArray
    which is performing much better than doing
    OciNumberToReal
    for each instance.
    But now we are sending an SDO_GEOMETRY to Oracle, so during the bind process we currently do:
        for each number do
            OciNumberFromReal
            OciCollAppend
        enddo
    We noticed that for the overall query, including getting the result, it is spending approx
    6.3% in OciNumberFromReal
    7.4% in OciCollAppend
    On average we are sending 1500 values, so we are talking a lot of calls here.
    Now, since there is a way to get values from an array a lot faster using the APIs:
    OCICollGetElemArray
    OCINumberToRealArray
    I am wondering why there is no way to set/put values into an array, like in:
    OciNumberFromRealArray
    or perhaps better
    OciNumbersFromRealArray
    I also noticed that there is no way of getting integer values from an array directly; you can only get one integer value at a time.
    So there is no equivalent like:
    OciNumberToIntArray
    (there is OciNumberToInt and OciNumberFromInt)
    Is no one experiencing this kind of stuff?
    Regards,
    Patrick

    How much performance improvement can this give? I could not get it to work. Below is part of my code: nOrdiCount is the total number of elements in the collection and m_pGeometry is a GEOMETRY object. After OCICollGetElemArray executes, the value of nOrdiCount gets overwritten; the code inside Oracle overwrites it. With this code I can fetch 6 elements, but with more than 6 elements OCINumberToRealArray never returns. I have no idea why.
    OCINumber** pArray = new OCINumber*[nOrdiCount];
    uword nCount = nOrdiCount;
    memset(pArray, 0, sizeof(OCINumber*)*nOrdiCount);
    OCICollGetElemArray(pConn->m_hOCIEnv, pConn->m_hOCIError, m_pGeometry->sdo_ordinates, 0, &exists, (void**)pArray, NULL, &nCount);
    OCINumberToRealArray(pConn->m_hOCIError, (const OCINumber**)pArray, nCount, sizeof(real), pOrdiateData);
    *OrdiateCount += nCount;
    delete []pArray;
    Addendum:
    It does indeed improve performance, but there is a bug in it: OCICollGetElemArray overwrites local (stack) variables of my function; there must be an out-of-bounds memory access inside it. Hoping for a fix!
    Edited by: zj511025 on 2010-11-5 at 2:10 AM

  • Performance Issues with 10.6.7 and External USB Drives

    I've had a few performance issues come up with the latest 10.6.7 that seem to be related to external USB drives. I have a 2 TB USB drive that holds my iMovie content, and after the 10.6.7 update iMovie is almost unusable. Finder even seems slow when browsing files on this drive. It seems like any access to the drive is delayed in all applications. Before the update the performance was acceptable, but now it is almost unusable. Most of the files on this drive are large DV files.
    Anyone else experience this?

    Matt,
    If you want help, please start your own thread here:
    http://discussions.apple.com/forum.jspa?forumID=1339&start=0
    And if in your previous thread you aren't getting sufficient help for your iPhone, post a new topic here:
    http://discussions.apple.com/forum.jspa?forumID=1139
    You'll get a wider audience, and won't confuse the original poster. Performance issues can be caused by numerous issues as outlined in my FAQ*
    http://www.macmaps.com/Macosxspeed.html
    If every person who had a performance issue posted to this thread, we'd never find a solution for the initial poster. Let's isolate each case one by one. It is NOT necessarily the same issue, even if the symptoms are the same. There are numerous contributing factors at work with computers, and if we don't isolate them, we'll never get to the root cause.

  • Performance issues with 0CO_OM_WBS_1

    We use BW3.5 & R/3 4.7 and encounter huge performance issues with 0CO_OM_WBS_1? Always having to do a full load involving approx 15M records even though there are on the average 100k new records since previous load. This takes a longtime.
    Is there a way to delta-enable this datasource?

    Hi,
    This DS is not delta-enabled, so you can only do a full load. For a delta-enabled one you need to use 0CO_OM_WBS_6. It works like the other Financials extractors, as it has a safety delta (configurable, default 2 hours, in table BWOM_SETTINGS).
    What you could do is use WBS_6 as the delta and only run full loads of WBS_1 for shorter durations.
    As you must have an ODS for WBS_1 at the first stage, I would suggest doing a full load only for posting periods that are open. This will reduce the data load.
    You may also look at creating your own generic datasource with delta, if you are clear on the tables and logic used.
    cheers...

  • Performance issues when creating a Report / Query in Discoverer

    Hi forum,
    Hope you can help; this involves a performance issue when creating a report/query.
    I have a Discoverer report that currently takes less than 5 seconds to run. After I add a condition to bring back Batch Status = 'Posted', we cancelled the query after it had been running for 20 minutes, as this is way too long. If I remove the condition the query time goes back to less than 5 seconds.
    Please see attached the SQL Inspector Plan:
    Before Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    AND-EQUAL
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N2
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_N1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    After Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    TABLE ACCESS FULL GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX RANGE SCAN GL.GL_JE_HEADERS_N1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    INDEX UNIQUE SCAN GL.GL_CODE_COMBINATIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    Is there anything I can do in Discoverer Desktop / Administration to avoid this problem?
    Many thanks,
    Lance

    Hi Rod,
    I've tried the condition (Batch Status||'' = 'Posted') as you suggested, but the query time is still over 20 minutes. To test, I changed it to (Batch Status||'' = 'Unposted') and the query returned within seconds again.
    I've been doing some more digging and have found the database view that is linked to the Journal Batches folder; see below.
    I think the problem is with the column using DECODE. When querying the column in TOAD the value 'P' is returned, but in Discoverer the condition is applied to the decoded value 'Posted'. I'm not too sure how DECODE works, but I think this could be causing some sort of issue leading to full table scans. How do we get around this?
    Lance
    DECODE( JOURNAL_BATCH1.STATUS,
    '+', 'Unable to validate or create CTA',
    '+*', 'Was unable to validate or create CTA',
    '-','Invalid or inactive rounding differences account in journal entry',
    '-*', 'Modified invalid or inactive rounding differences account in journal entry',
    '<', 'Showing sequence assignment failure',
    '<*', 'Was showing sequence assignment failure',
    '>', 'Showing cutoff rule violation',
    '>*', 'Was showing cutoff rule violation',
    'A', 'Journal batch failed funds reservation',
    'A*', 'Journal batch previously failed funds reservation',
    'AU', 'Showing batch with unopened period',
    'B', 'Showing batch control total violation',
    'B*', 'Was showing batch control total violation',
    'BF', 'Showing batch with frozen or inactive budget',
    'BU', 'Showing batch with unopened budget year',
    'C', 'Showing unopened reporting period',
    'C*', 'Was showing unopened reporting period',
    'D', 'Selected for posting to an unopened period',
    'D*', 'Was selected for posting to an unopened period',
    'E', 'Showing no journal entries for this batch',
    'E*', 'Was showing no journal entries for this batch',
    'EU', 'Showing batch with unopened encumbrance year',
    'F', 'Showing unopened reporting encumbrance year',
    'F*', 'Was showing unopened reporting encumbrance year',
    'G', 'Showing journal entry with invalid or inactive suspense account',
    'G*', 'Was showing journal entry with invalid or inactive suspense account',
    'H', 'Showing encumbrance journal entry with invalid or inactive reserve account',
    'H*', 'Was showing encumbrance journal entry with invalid or inactive reserve account',
    'I', 'In the process of being posted',
    'J', 'Showing journal control total violation',
    'J*', 'Was showing journal control total violation',
    'K', 'Showing unbalanced intercompany journal entry',
    'K*', 'Was showing unbalanced intercompany journal entry',
    'L', 'Showing unbalanced journal entry by account category',
    'L*', 'Was showing unbalanced journal entry by account category',
    'M', 'Showing multiple problems preventing posting of batch',
    'M*', 'Was showing multiple problems preventing posting of batch',
    'N', 'Journal produced error during intercompany balance processing',
    'N*', 'Journal produced error during intercompany balance processing',
    'O', 'Unable to convert amounts into reporting currency',
    'O*', 'Was unable to convert amounts into reporting currency',
    'P', 'Posted',
    'Q', 'Showing untaxed journal entry',
    'Q*', 'Was showing untaxed journal entry',
    'R', 'Showing unbalanced encumbrance entry without reserve account',
    'R*', 'Was showing unbalanced encumbrance entry without reserve account',
    'S', 'Already selected for posting',
    'T', 'Showing invalid period and conversion information for this batch',
    'T*', 'Was showing invalid period and conversion information for this batch',
    'U', 'Unposted',
    'V', 'Journal batch is unapproved',
    'V*', 'Journal batch was unapproved',
    'W', 'Showing an encumbrance journal entry with no encumbrance type',
    'W*', 'Was showing an encumbrance journal entry with no encumbrance type',
    'X', 'Showing an unbalanced journal entry but suspense not allowed',
    'X*', 'Was showing an unbalanced journal entry but suspense not allowed',
    'Z', 'Showing invalid journal entry lines or no journal entry lines',
    'Z*', 'Was showing invalid journal entry lines or no journal entry lines', NULL ),

  • Performance issue in Webi rep when using custom object from SAP BW univ

    Hi All,
    I had to design a report that runs for the previous day, and hence we created a custom object which ranks the dates and a pre-defined filter which picks the date with the highest rank.
    The definition of the rank variable (in the universe) is as follows:
    <expression>Rank([0CALDAY].Currentmember,  Order([0CALDAY].Currentmember.Level.Members ,Rank([0CALDAY].Currentmember,[0CALDAY].Currentmember.Level.Members), BDESC))</expression>
    Now to the issue I am currently facing,
    The report works fine when we run it in a test environment, i.e. with a small amount of data.
    Our production environment has millions of rows of data, and when I run the report with the filter it just hangs. I think this is because it tries to rank all the dates (to find the max date), resulting in a huge performance issue.
    Can someone suggest how this performance issue can be overcome?
    I work on BO XI3.1 with SAP BW.
    Thanks and Regards,
    Smitha.

    Hi,
    Using a variable on the BW side is not feasible, since we want to use the same BW query for a few other reports as well.
    Could you please explain what you mean by 'use the LAG function'? How can it be used in this scenario?
    Thanks and Regards,
    Smitha Mohan.

  • How should I report forum performance issues?

    The forums rely heavily on the caching features of browsers to improve the speed of page rendering. Performance of these forums should greatly improve after a few pages because more and more of the images, CSS and JavaScript are cached in the browser. As a consequence, when reporting forum performance issues the report should include some information on the state of the browser cache to determine whether the issue is a browser issue or a server issue. Such detailed information is generally not available from just watching the browser screen, but needs to come from specialized tools such as performance monitor plugins and recording proxies.
    The preferred method for reporting performance issues is to use the speed reporting features built into the browser, or available as a plugin, for both the page you want to report a problem with and several reference pages in the site. Detailed instructions are listed below, separated out for different browsers. If possible, please use Firefox for submitting the report because it provides an export format that can be read back electronically.
    Known performance issues
    The performance issues with any screen with a Rich Text Editor, such as the Reply window and the compose Private Message window, have been acknowledged and improvements are being implemented.
    Mozilla Firefox (preferred)
    Warning: it is currently not recommended to generate a speed report when logged in. The speed report has enough detail for somebody else to hijack your session and impersonate you on the forums. If you really must report while logged in, make sure you log out your browser after generating the speed report and wait at least 4 hours before posting.
    Install the Firebug plugin
    Install the NetExport 0.6 extension for Firebug
    Enable all Firebug panels
    Switch to the "Net" panel in Firebug
    Click on this link
    Export the data from the Firebug Net panel
    Click on this link
    Export the data from the Firebug Net panel
    Browse to the page where you are experiencing the performance problem.
    Export the data from the Firebug Net panel
    Click on this link
    Export the data from the Firebug Net panel
    Click on this link
    Export the data from the Firebug Net panel
    Browse to the page where you are experiencing the performance problem.
    Export the data from the Firebug Net panel
    When you report a performance problem, please attach the 6 exports from the Firebug Net panel and an explanation of how you are experiencing the issues (for instance, how much slower it is than normal), and include a description of your internet connection (dial-up, DSL, cable etc.) and the country from where you are connecting. If you have non-standard tweaks to your Firefox configuration (such as pipelining enabled) or are running any plugins, please include that information in your report as well.
    Google Chrome
    Open the Developer Tools (Ctrl-Shift-J)
    Navigate to the resources tab
    Enable resource tracking.
    Click on this link
    Export the resource loading data.
    Reset the data by disabling and enabling resource tracking
    Click on this link
    Export the data
    Reset the data by disabling and enabling resource tracking
    Navigate to the page where you experience the performance problem
    Export the data
    Reset the data by disabling and enabling resource tracking
    Click on this link
    Export the data
    Reset the data by disabling and enabling resource tracking
    Click on this link
    Export the data
    Reset the data by disabling and enabling resource tracking
    Navigate to the page where you experience the performance problem
    Export the data
    Since Google Chrome does not have an export format for the Resource Tracking information, the best current practice is to take a screenshot and note the hover details for any resource with a tail that is longer than 25% of the total load time. When you report a performance problem, please attach the screenshots and an explanation of how you are experiencing the issues (for instance, how much slower it is than normal), and include a description of your internet connection (dial-up, DSL, cable etc.) and the country from where you are connecting.
    Apple Safari
    The Apple Safari Web Inspector has a Resources panel similar to the Resources panel in the Google Chrome Developer Tools. To get there, follow these steps:
    Show the menu bar.
    Go to preferences
    Go to the Advanced Tab
    Check “Show Develop menu in menu bar”.
    From the Develop menu select “Show Web Inspector”.
    Collecting the performance information and exporting works exactly the same as in Google Chrome. Please refer to the instructions for Google Chrome.
    Microsoft Internet Explorer
    IE does not have native features to analyze web traffic. No plugins have been found that produce the required information (please let us know if we missed any). For now, please reproduce the issue with Firefox, Chrome or Safari.
    Please note that due to the reliance on JavaScript for the interactive effects, the performance of these forums will be much better on MS IE 8 than on previous versions of MS IE.

    Hi
    It works, check once again...
    regards
    Swami

  • Performance Issue Executing a BEx Query in Crystal Report E 4.0

    Dear Forum
    I'm working for a customer with a big performance issue executing a BEx query in Crystal via the transient universe.
    When the query is executed directly against BW via RSRT, it returns results in under 2 seconds.
    When executed in Crystal, without the use of subreports, multiple executions (calls to BICS_GET_RESULTS) are seen. Runtimes are as long as 60 seconds.
    The BEx query is based on a MultiProvider without ODS.
    The RFC trace shows BICS connection problems; BICS_PROV_GET_INITIAL_STATE takes a lot of time.
    I checked note 1399816 (Task name - prefix - RSDRP_EXECUTE_AT_QUERY_DISP), and it's not applicable because the customer is on BI 7.01 SP 8 and already has
                the domain RSDR0_TASKNAME_LONG in package RSDRC with the description 'BW Data Manager: Task name - 32 characters', data type CHAR, no. of characters 32, decimal digits 0, and
                the data element RSDR0_TASKNAME_LONG in package RSDRC with the description 'BW Data Manager: Task name - 32 characters' and the previously created domain,
    as described in the note.
    Could you suggest me something to check, please?
    Thanks in advance
    Regards
    Rosa

    Hi,
    It would be great if you would quote the ADAPT number and tell the audience when it is targeted for a fix.
    Generally speaking, CR for Enterprise isn't as performant as WebI, because uptake was rather slow, so I'm of the opinion that there are improvements to be gained. So please work with Support via OSS.
    My only recommendations are:
    - Patch up to Patch 2.12 on BI 4.0
    - Define more default values on the BEx query variables.
    - Implement note 1593802 (Performance optimization when loading query views) in the BW system.
    Regards,
    H

  • Performance issue in browsing SSAS cube using Excel for first time after cube refresh

    Hello Group Members,
    This is a continuation of my earlier question -
    https://social.msdn.microsoft.com/Forums/en-US/a1e424a2-f102-4165-a597-f464cf03ebb5/cache-and-performance-issue-in-browsing-ssas-cube-using-excel-for-first-time?forum=sqlanalysisservices
    As that thread is marked as answered but my issue is not resolved, I am creating a new thread.
    I am facing a cache and performance issue the first time I open an SSAS cube connection in Excel (using Data tab -> From Other Sources -> From Analysis Services) after the daily cube refresh. On an end user's system (8 GB RAM, but only around 4 GB available), it takes 10 minutes to open the cube the first time. From the next run onwards, it opens quickly, within 10 seconds.
    We have a daily ETL process running on high-end servers. The dedicated SSAS cube server has 8 cores and 64 GB RAM. In total we have 4 cube databases, of which 3 are fully refreshed and 1 is refreshed incrementally. We have seen that after the daily cube refresh it takes 10-odd minutes to open the cube on an end user's system; from then on it opens really fast, within 10 seconds. After the cube refresh, on server systems (32 GB RAM, around 4 GB available), it takes 2-odd minutes to open the cube.
    Is there any way we could reduce the time taken for the first attempt?
    As mentioned in my previous thread, we have already implemented cube cache warming, but there is no improvement.
    Currently the cumulative size of all 4 cube DBs is more than 9 GB in production, with each cube DB holding 4 individual cubes on average and the largest cube DB being 3.5 GB. Now, the question is how Excel works with an SSAS cube after the daily cube refresh.
    Does Excel create a cache of the schema and data each time the cube is refreshed, and in doing so does it need to download the cube schema into Excel's memory? Downloading the schema and data of each cube database from server to client would take significant time depending on the bandwidth of the network connection.
    Is it in any way dependent on the client system's RAM? Today the biggest cube DB is 3.5 GB; tomorrow it will be 5-6 GB. Though the client system RAM is 8 GB, the available or free RAM is only around 4 GB. So what will happen then?
    Best Regards, Arka Mitra.

    Could you run the following two DMV queries filling in the name of the cube you're connecting to. Then please post back the row count returned from each of them (by copying them into Excel and counting the rows).
    I want to see if this is an issue I've run across before with thousands of dimension attributes and MDSCHEMA_CUBES performance.
    select [HIERARCHY_UNIQUE_NAME]
    from $system.mdschema_hierarchies
    where CUBE_NAME = 'YourCubeName'
    select [LEVEL_UNIQUE_NAME]
    from $system.mdschema_levels
    where CUBE_NAME = 'YourCubeName'
    Also, what version of Analysis Services is it? If you connect Object Explorer in Management Studio to SSAS, what's the exact version number it says on the top server node?
    http://artisconsulting.com/Blogs/GregGalloway

  • N8 email client performance issue

    The email client on the N8 mostly works OK, but the performance is terrible when trying to reply to an email that is "long". Actually, there doesn't have to be much text in the previous message(s) before the client starts to become really, really slow. When typing, the lag makes typing almost impossible.
    Are there plans to improve the performance? Will the Anna update improve the client? The N8 isn't the best hardware out there, but it still has plenty of power to handle large emails. This can be tested using webmail in the phone's browser: when typing a reply to a long email there, there is no problem.
    Is anyone else having performance issues when trying to reply to a long email?

    Same here; not only when answering large emails, sending email with attachments is about the same. I am having sync problems too, and Messaging behaves erratically.
