Essbase performance issue when calc scripts are run on FDM cube on same server

We have a large Essbase application with heavy daily usage, and it is being impacted when we run calc scripts on an FDM forecast cube on the same server. The large application is on EIS 11.1.2; the FDM cubes are being migrated to the same server and upgraded from EIS 7.1 on Unix to EIS 11.1.2 on NT. Every time the calc scripts are run on the FDM cube, the performance of the large Essbase application degrades, and it shuts down after some time.


Similar Messages

  • Calc scripts are running slow

    Hi All,
    A few of our calc scripts are running slow for EPM applications.
    Some of the calc scripts run fine, while a few others run slow.
    Can you suggest what needs to be checked?
    Thanks

    Hi,
    The version is not mentioned.
    Hope the below tuning methods are helpful:
    1. Check that compression settings are still present. In EAS, expand the application and database. Right-click on the database > Edit > Properties > Storage tab. Check that "Data compression" is not set to "No compression" and that "Pending I/O access mode" is set to "Buffered I/O". Sometimes the compression setting can revert to "no compression", causing rapid growth of the data files on disk.
    2. On the Statistics tab, check the "Average clustering ratio". This should be close to 1. If it is not, restructure your database by right-clicking on it and choosing "Restructure...". This will remove any fragmentation caused by repeated data imports and exports. Fragmentation naturally reduces performance over time, but it can build up quite quickly when many data loads take place.
    3. Check the caches and block sizes.
         a. Recommended block size: 8 to 100 KB
         b. Recommended index cache:
              Minimum = 1 MB
              Default = 10 MB
              Recommendation = combined size of all ESS*.IND files if possible; otherwise as large as possible given the available RAM.
         c. Recommended data file cache:
              Minimum = 8 MB
              Default = 32 MB
              Recommendation = combined size of all ESS*.PAG files if possible; otherwise as large as possible given the available RAM, up to a maximum of 2 GB.
              NOTE: this cache is not used if the database uses buffered rather than direct I/O (check the "Storage" tab). Since all Planning databases are buffered, and most customers use buffered I/O for native Essbase applications too, this cache setting is usually not relevant.
         d. Recommended data cache:
              Minimum = 3 MB
              Default = 3 MB
              Recommendation = 0.125 * combined size of all ESS*.PAG files if possible; otherwise as large as possible given the available RAM.
    A good indication of the health of the caches is the "Hit ratio" for each cache on the Statistics tab in EAS: 1.0 is the best possible; lower means lower performance. (A MaxL sketch for setting these caches appears at the end of this reply.)
    4. Check system resources:
    Recommended virtual memory setting (NT systems): 2 to 3 times the available RAM (1.5 times the RAM on older systems).
    Recommended disk space:
    A minimum of double the combined total of all .IND and .PAG files. You need double because a restructure requires twice the usual storage space while it is in progress.
    Please see the below document for reference:
         Improving the Performance of Business Rules and Calculation Scripts (Doc ID 855821.1)
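    For anyone who prefers scripting to EAS, the same cache settings can be applied in MaxL. A minimal sketch, assuming a database called Sample.Basic and purely illustrative sizes (use the sizing guidance above for real values; the new sizes take effect when the database restarts):
         alter database Sample.Basic set index_cache_size 256mb;
         alter database Sample.Basic set data_file_cache_size 1024mb;
         alter database Sample.Basic set data_cache_size 512mb;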
    -Regards,
    Priya

  • Calc scripts are running slow (all of a sudden)

    All of a sudden, for the past few days, we are noticing that all our calc scripts have been running very slow.
    The same scripts used to run much faster earlier.
    Has anybody seen this kind of scenario?
    We did a RAM upgrade on the EAS server and have restarted all services.
    Other than that, nothing has changed in our system.
    Thanks.

    It can be quite common for calcs to slow down over time, but there are some things to do to mitigate this.
    1. Are you using Intelligent Calc? All things being equal (a very broad statement in Essbase, since things are never equal), more user activity could affect how many blocks are marked dirty. This is probably not your issue, because a properly written calc wouldn't slow down much for this reason. I mention it because I have seen an installation where the calc was 'Calc All' and they used Intelligent Calc to set the scope of the calc. (bad, very bad)
    2. Do you perform DB restructures? (either explicitly by restructuring, or by exporting level 0, clearing, importing level 0, and then aggregating; see the MaxL sketch below) If this is not done on a regular basis (where "regular" depends on the usage of the cube), you could be experiencing fragmentation, which increases the size of the database and with it the run times.
    3. Have you just added another fiscal year to the database? More data means a bigger database.
    A RAM upgrade on the EAS server shouldn't affect calc times (unless Essbase services are also running on the EAS server, in which case there might be something to it).
    Most of these (and other) issues can be mitigated by applying proper scope to your calcs (FIX statements).
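    If fragmentation does turn out to be the culprit, the restructure can be scripted too. A minimal MaxL sketch, assuming a database named Sample.Basic (a forced restructure defragments the .PAG/.IND files but can take a while on a large database):
         /* force a full restructure to remove fragmentation */
         alter database Sample.Basic force restructure;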
    What environment are you running in? Windows or Unix?
    New application?
    What kind of time increases are we talking about here?
    Robert

  • Email alert when calc scripts are done

    Hi - I have Essbase 6.5 and wanted to know how to set up an email alert when one of my calcs finishes running.
    Thanks

    Hi CLAU,
    We use the second method suggested by Glenn, except instead of BLAT we use SendMail.
    Here is a sample batch file which calls a MaxL script (which can run your calc script) and then emails the log. (A hypothetical sketch of what the MaxL script itself might contain follows the batch file.)
    Hope this helps.
    -Dornakal
    www.dornakal.blogspot.com
    Batch file to call the MaxL script and send an email about the status of the job:
    Rem ******************************************************************************
    Rem Object Type: Batch File
    Rem Object Name: BatchFileName.bat
    Rem Script Date: 01-27-2009
    Rem Created by: Dornakal
    Rem Purpose: This script loads the following data into Cube
    Rem Changed By:
    Rem Change Date:
    Rem Description:
    Rem ******************************************************************************
    Rem This starts the log file
    echo "Start of log" > E:\Logs\Dataload.log
    Rem This calls MaxL script to load data
    essmsh E:\Scripts\MaxL\DataLoad.mxl >> E:\Logs\Dataload.log
    Rem Send mail about the status of the job
    sendmail -b E:\Logs\Dataload.log -s "Subject of the mail (data load status)" -f senders_email -r receivers_email -X HQSMTP.yourcompany.net
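    The contents of DataLoad.mxl are not shown above, so purely as a hypothetical illustration, a minimal MaxL script of that shape (server, credentials, and application/database names are placeholders) might be:
         /* hypothetical DataLoad.mxl: log in, load data, run a calc, capture output */
         login admin password on localhost;
         spool on to 'E:\Logs\DataLoad.out';
         import database Sample.Basic data from data_file 'E:\Data\load.txt' on error abort;
         execute calculation 'Sample'.'Basic'.'CalcAll';
         spool off;
         logout;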

  • Performance issue when a Direct I/O option is selected

    Hello Experts,
    One of my customers has a performance issue when the Direct I/O option is selected: they report an increase in memory usage with Direct I/O compared to the Buffered I/O option.
    There are two BSO applications on the server. With Buffered I/O they experienced a high level of read and write I/Os; Direct I/O reduces the read and write I/Os but dramatically increases memory usage.
    Other Information -
    a) Environment Details
    HSS - 9.3.1.0.45, AAS - 9.3.1.0.0.135, Essbase - 9.3.1.2.00 (64-bit)
    OS: Microsoft Windows x64 (64-bit) 2003 R2
    b) What is the memory usage when Buffered I/O and Direct I/O is used? How about running calculations, database restructures, and database queries? Do these processes take much time for execution?
    Application 1: Buffered 700MB, Direct 5GB
    Application 2: Buffered 600MB to 1.5GB, Direct 2GB
    Calculation times may increase from 15 minutes to 4 hours; the same goes for restructures.
    c) What is the current Database Data cache; Data file cache and Index cache values?
    Application 1: Buffered (Index 80MB, Data 400MB), Direct (Index 120MB; Data File 4GB, Data 480MB).
    Application 2: Buffered (Index 100MB, Data 300MB), Direct (Index 700MB, Data File 1.5GB, Data 300MB)
    d) What is the total size of the ess0000x.pag files and ess0000x.ind files?
    Application 1: Page File 20GB, Index 1.7GB.
    Application 2: Page 3GB, index 700MB.
    Any suggestions on how to improve the performance when Direct I/O is selected? Any performance documents relating to above scenario would be of great help.
    Thanks in advance.
    Regards,
    Sudhir

    Sudhir,
    Do you work at a help desk or are you a consultant? You ask such a varied range of questions that I suspect the former. If you work at a help desk, don't you have a next level of support that could help you? If you are a consultant, I suggest getting together with another consultant who knows more. You might also want to close some of your questions; you have 24 open. And perhaps give points to those that helped you.
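    For anyone comparing the two modes on their own system: the I/O access mode is a per-database setting and can be switched back without rebuilding anything. A MaxL sketch, assuming a database named Sample.Basic (the pending setting takes effect when the database restarts):
         alter database Sample.Basic set io_access_mode buffered;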

  • Performance issues when creating a Report / Query in Discoverer

    Hi forum,
    Hope you can help; this involves a performance issue when creating a report/query.
    I have a Discoverer report that currently takes less than 5 seconds to run. After I add a condition to bring back Batch Status = 'Posted', we cancelled the query after it reached 20 minutes, as this is way too long. If I remove the condition, the query time goes back to less than 5 seconds.
    Please see attached the SQL Inspector Plan:
    Before Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    AND-EQUAL
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N2
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_N1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    After Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    TABLE ACCESS FULL GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX RANGE SCAN GL.GL_JE_HEADERS_N1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    INDEX UNIQUE SCAN GL.GL_CODE_COMBINATIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    Is there anything I can do in Discoverer Desktop / Administration to avoid this problem?
    Many thanks,
    Lance

    Hi Rod,
    I've tried the condition (Batch Status||'' = 'Posted') as you suggested, but the query time is still over 20 mins. To test, I changed it to (Batch Status||'' = 'Unposted') and the query returned within seconds again.
    I’ve been doing some more digging and have found the database view that is linked to the Journal Batches folder. See below.
    I think the problem is with the column using DECODE. When querying the column in TOAD, the value 'P' is returned, but in Discoverer the condition is applied to the decoded value 'Posted'. I'm not too sure how DECODE works, but I think this could be causing the full table scans. How do we get around this? (A possible workaround is sketched after the view snippet below.)
    Lance
    DECODE( JOURNAL_BATCH1.STATUS,
    '+', 'Unable to validate or create CTA',
    '+*', 'Was unable to validate or create CTA',
    '-','Invalid or inactive rounding differences account in journal entry',
    '-*', 'Modified invalid or inactive rounding differences account in journal entry',
    '<', 'Showing sequence assignment failure',
    '<*', 'Was showing sequence assignment failure',
    '>', 'Showing cutoff rule violation',
    '>*', 'Was showing cutoff rule violation',
    'A', 'Journal batch failed funds reservation',
    'A*', 'Journal batch previously failed funds reservation',
    'AU', 'Showing batch with unopened period',
    'B', 'Showing batch control total violation',
    'B*', 'Was showing batch control total violation',
    'BF', 'Showing batch with frozen or inactive budget',
    'BU', 'Showing batch with unopened budget year',
    'C', 'Showing unopened reporting period',
    'C*', 'Was showing unopened reporting period',
    'D', 'Selected for posting to an unopened period',
    'D*', 'Was selected for posting to an unopened period',
    'E', 'Showing no journal entries for this batch',
    'E*', 'Was showing no journal entries for this batch',
    'EU', 'Showing batch with unopened encumbrance year',
    'F', 'Showing unopened reporting encumbrance year',
    'F*', 'Was showing unopened reporting encumbrance year',
    'G', 'Showing journal entry with invalid or inactive suspense account',
    'G*', 'Was showing journal entry with invalid or inactive suspense account',
    'H', 'Showing encumbrance journal entry with invalid or inactive reserve account',
    'H*', 'Was showing encumbrance journal entry with invalid or inactive reserve account',
    'I', 'In the process of being posted',
    'J', 'Showing journal control total violation',
    'J*', 'Was showing journal control total violation',
    'K', 'Showing unbalanced intercompany journal entry',
    'K*', 'Was showing unbalanced intercompany journal entry',
    'L', 'Showing unbalanced journal entry by account category',
    'L*', 'Was showing unbalanced journal entry by account category',
    'M', 'Showing multiple problems preventing posting of batch',
    'M*', 'Was showing multiple problems preventing posting of batch',
    'N', 'Journal produced error during intercompany balance processing',
    'N*', 'Journal produced error during intercompany balance processing',
    'O', 'Unable to convert amounts into reporting currency',
    'O*', 'Was unable to convert amounts into reporting currency',
    'P', 'Posted',
    'Q', 'Showing untaxed journal entry',
    'Q*', 'Was showing untaxed journal entry',
    'R', 'Showing unbalanced encumbrance entry without reserve account',
    'R*', 'Was showing unbalanced encumbrance entry without reserve account',
    'S', 'Already selected for posting',
    'T', 'Showing invalid period and conversion information for this batch',
    'T*', 'Was showing invalid period and conversion information for this batch',
    'U', 'Unposted',
    'V', 'Journal batch is unapproved',
    'V*', 'Journal batch was unapproved',
    'W', 'Showing an encumbrance journal entry with no encumbrance type',
    'W*', 'Was showing an encumbrance journal entry with no encumbrance type',
    'X', 'Showing an unbalanced journal entry but suspense not allowed',
    'X*', 'Was showing an unbalanced journal entry but suspense not allowed',
    'Z', 'Showing invalid journal entry lines or no journal entry lines',
    'Z*', 'Was showing invalid journal entry lines or no journal entry lines', NULL ),
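    One possible workaround, sketched under the assumption that an index exists on GL_JE_BATCHES.STATUS: filter on the raw status code rather than on the decoded label, so the optimizer does not have to evaluate the DECODE for every batch row. In Discoverer Administration this could be done by exposing the raw STATUS column (or a calculated item on it) in the Journal Batches folder. The alias and column come from the view above; the selected columns are illustrative:
         -- filter on the underlying code ('P' decodes to 'Posted' in the view above)
         SELECT JOURNAL_BATCH1.NAME, JOURNAL_BATCH1.STATUS
         FROM   GL.GL_JE_BATCHES JOURNAL_BATCH1
         WHERE  JOURNAL_BATCH1.STATUS = 'P';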

  • Oracle Retail 13 - Performance issues when open, save, approving worksheets

    Hi Guys,
    Recently we started facing performance issues when working with Oracle Retail 13 worksheets from the Java GUI on client desktops.
    We run Oracle Retail 13.1 on Oracle Database 11g R1 and Application Server 10g at the latest release.
    Issues:
    - Opening, saving, or approving worksheets with approximately 9,000 items takes up to 15 minutes.
    - Smaller worksheets also take around 10 minutes just to open.
    - Even just opening multiple worksheets takes "ages", up to 10-15 minutes.
    Questions:
    - Is it expected performance for such worksheets?
    - What is your experience with Oracle Retail 13 in terms of performance while working with worksheets - how much time does it normally take to open, edit, and save a worksheet?
    - What are the average expected times for such operations?
    Any feedback and hints would be much appreciated.
    Cheers!!

    Hi,
    I guess you mean Order/Buyer worksheets?
    This is not normal; it should be quicker, a matter of seconds to at most a minute.
    Database side tuning is where I would look for clues.
    And the obvious question: remember any changes to anything that may have caused the issue? Are the table and index statistics freshly gathered?
    Best regards, Erik Ykema
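    As a first concrete check on Erik's statistics question, the optimizer statistics can be regathered with DBMS_STATS. A sketch, assuming the Retail schema is named RMS13 (adjust the owner to your installation):
         -- gather fresh table and index statistics for the whole schema
         BEGIN
           DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'RMS13', cascade => TRUE);
         END;
         /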

  • Performance Issues when editing large PDFs

    We are using Adobe 9 and X Professional and are experiencing performance issues when attempting to edit large PDF files (Windows 7 OS). When editing PDFs that are 200+ pages, we are seeing pregnant pauses (that feel like lockups), slow open times, and slow printing.
    Are there any tips or tricks with regard to working with these large documents that would improve performance?

    You said "edit." If you are talking about actual editing, that should be done in the original and a new PDF created. Acrobat is not a very good editing tool and should only be used for minor, critical edits.
    If you are talking about simply using the PDF, a lot depends on the structure of the PDF. If it is full of graphics, it will be slow. You can improve performance by using the PDF Optimizer to reduce graphic resolution and such. You may very likely have a bloated PDF that is causing the problem, and optimizing the structure should help.
    Be sure to work on a copy.

  • iPhone 5 with iOS 6.1.2 uses cellular data when connected to Wi-Fi, every hour to the minute, even when no apps are running, iCloud is off, Ads are off, and Diagnostics are off. Running on the AT&T cellular network.

    I have 3 iPhone 5s with iOS 6.1.2 that use cellular data when connected to Wi-Fi: every hour to the minute, even when no apps are running, iCloud is off, Ads are off, and Diagnostics are off. Running on the AT&T cellular network. The usage log:
    Date   Time      To/From  Type                Direction  Msg/KB
    03/14  01:30 PM  phone    Internet/MEdia Net  Sent       78 KB
    03/14  01:02 PM  phone    Internet/MEdia Net  Sent       4397 KB
    03/14  12:48 PM  phone    Internet/MEdia Net  Sent       19520 KB
    03/14  12:30 PM  phone    Internet/MEdia Net  Sent       19503 KB
    03/14  11:30 AM  phone    Internet/MEdia Net  Sent       2427 KB
    03/14  10:30 AM  phone    Internet/MEdia Net  Sent       359 KB
    03/14  09:30 AM  phone    Internet/MEdia Net  Sent       1396 KB
    03/14  09:25 AM  phone    Internet/MEdia Net  Sent       1 KB
    03/14  08:25 AM  phone    Internet/MEdia Net  Sent       30 KB
    03/14  07:25 AM  phone    Internet/MEdia Net  Sent       16 KB
    03/14  06:25 AM  phone    Internet/MEdia Net  Sent       45 KB
    03/14  05:25 AM  phone    Internet/MEdia Net  Sent       15 KB
    03/14  04:25 AM  phone    Internet/MEdia Net  Sent       14 KB
    03/14  03:25 AM  phone    Internet/MEdia Net  Sent       21 KB
    03/14  02:25 AM  phone    Internet/MEdia Net  Sent       23 KB
    03/14  01:25 AM  phone    Internet/MEdia Net  Sent       16 KB
    03/14  12:25 AM  phone    Internet/MEdia Net  Sent       23 KB

    Hello All Who Read This!
    I just bought an iPhone 5 on the 21st of March and ate through 850 MB in 12 days. I read up on the issue of data consumption looking for tips to conserve data; I did not know there was a problem.
    This morning I found out that I was leaking data - very little, but it accumulates daily. I am in Barcelona, Spain. I do not have a contract; I am on the prepaid Yoigo carrier.
    I happened upon an article/blog by Robert Parks at snnyc.com, "problems persist in ios 6.1.2".
    Although his situation did not relate to mine, I tried a few tests on my own to see if I could stop the data leak without turning off my cellular data and possibly push notifications, and I think I found a solution that might help someone:
    In iOS 6.1.3, go to Settings/Mail, Contacts, Calendars, scroll down to Calendars, and turn off "New Invitation Alerts" and "Shared Calendar Alerts".
    My cellular network data seems to be behaving; I only see a change in MB used when I open an app that uses data.
    All The Best

  • My iPad mini is lagging too much after the iOS 7 upgrade. Please, Apple, solve this issue. Games which ran smoothly on iOS 6 lag too much. This is the worst experience, Apple

    My iPad mini is lagging too much after the iOS 7 upgrade. Please, Apple, solve this issue. Games which ran smoothly on iOS 6 lag too much. This is the worst experience, Apple.

    Settings>General>Reset>Reset all settings.
    *Note:* Settings, preferences, network settings, etc *will* be reset. *NO data or media will be lost*
    and/or
    Settings>iCloud>Documents and Data (turn it OFF)
    If you want to voice your opinion, please do so here:
    http://www.apple.com/feedback/

  • How to improve a performance issue when using the BRM LDB

    Hi All,
    I am facing a performance issue when retrieving data from BKPF and the respective BSEG table. I see that there are around 60 lakh (6 million) records for the fiscal period, and populating the final internal table with values from these tables takes a very long time.
    When I tried to use the BRM LDB with SAP Query/QuickViewer, it was the same issue.
    Please suggest how I can improve the performance.
    Thanks in advance
    Chakradhar

    Moderator message - Please see Please Read before Posting in the Performance and Tuning Forum before posting - post locked
    Rob

  • Performance Issues when running 1.5.0_9 with servers

    Hi. We have an application in Java. We run it using 1.5.0_9 on Win2003 and have no problems when running it on single-CPU Intel PCs and servers. However, we are finding that some higher-spec servers run the application significantly slower - around 30%. It seems to be either a problem with the Enterprise edition of Win2k3 or perhaps with AMD Opteron CPUs.
    Does anyone know of any known issues with Win2k3 Enterprise, AMD Opteron CPUs, or simply any dual-CPU technology?
    Thanks
    Edited by: gingerdazza on Jul 2, 2008 1:21 AM

    I'm able to recreate the problem where a section of the screen goes black with the following config:
    JDK 1.7.0_06
    2 graphics cards
    3 monitors
    Create a new JavaFX application project (Hello World) in NetBeans 7.2. Once it is running, position the window so it spans two monitors, ensure the monitors are on separate graphics cards, and then hover the mouse over the Hello World button: one half of the window goes black. Positioning the window so it doesn't span monitors draws the window correctly.
    Has anyone experienced this issue?
    Thanks.

  • Essbase performance issue

    Hi all,
    We have encountered an Essbase performance issue for which we don't know the root cause.
    We have configured a server to run Essbase with an 8-core CPU and 16 GB RAM. We found that an Essbase calculation uses only up to 80% CPU and about 8 GB RAM. I also checked the I/O rate at the same time, but the disk load is not very heavy. We wonder what kind of resource the Essbase calculation engine is waiting on; it does not appear to be CPU bound, memory bound, or I/O bound.
    Do you think it would help to keep the whole Essbase database (around 30 GB) on a RAM-based disk drive to speed up I/O performance?
    Thanks if you have some ideas for us to investigate.
    Edited by: hyperion planning user on Jun 2, 2009 12:27 AM
    Edited by: hyperion planning user on Jun 2, 2009 12:36 AM

    I'm confused -- is it CPU bound or not?
    You write:
    We found that the Essbase calculation can use up to 80% CPU and about 8GB RAM only.
    Do you mean 80% of all eight of your CPUs? That sure sounds CPU-bound to me. In fact, I wish (within reason) that most of my Essbase calculations worked that way - it would mean that I have the disk caches tuned to their utmost efficiency.
    This means you're getting data from disk almost as fast as is possible.
    You're not going to be able to get everything into memory for two reasons:
    1) 30 GB of .IND and .PAG/.DAT files isn't going to fit into Essbase's addressable memory space. See: using RAM disk to speed up Essbase calculation and rollup
    2) Even when the database is nice and small and you can stick the whole thing in a cache, uncompressed, Essbase is still "smart" and will keep a portion of it on disk during calcs. This doesn't make sense in isolation, but empirically you can monitor disk usage during a calc on a database that is, in theory, entirely enclosed in the cache, and see the disk getting hit. This may be related to Essbase's general housekeeping - I don't know. In any case, this is generally not a real-world case, unless you're running your business on my Very Favorite Database In The Whole Wide World - Sample.Basic.
    Or are you saying that you will define a real RAM drive (and it would help if you really could allocate real RAM, and not an OS-managed sort-of-RAM-sort-of-DASD situation) and point Essbase there? That is sort of risky, isn't it? How will you flush it to real DASD for backup? Exports? (See the MaxL sketch below.)
    Regards,
    Cameron Lackpour
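    On the backup question: if a RAM drive is used, the usual safety net is a scheduled level-0 export to persistent disk. A minimal MaxL sketch, with the server, credentials, and application/database names as placeholders:
         /* periodic flush of the RAM-drive database to real storage */
         login admin password on localhost;
         export database Plan.Fcst level0 data to data_file '/backup/fcst_level0.txt';
         logout;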

  • Essbase Performance Issues on Sun Solaris 10

    We have a new Hyperion 11.1.1.3 environment with Essbase sitting on a Solaris box. We are running a calculation script called CALCALL against the "FINSTMT" database; it is the default calculation for a database in Essbase (it runs the CALC ALL command). We are running this same calc against the same database outline and data set across the environments to benchmark performance.
    The script in the new environment should run faster, but it runs slower, and the server is basically sleeping. Can anyone recommend configurations within the app or for the OS - things like semaphores, shared memory, etc.? Also, does anyone have suggestions or ideas to tweak Essbase performance on a Solaris 10 machine and/or UNIX? What should I do in the essbase.cfg file?
    Mike

    I can't help you with Solaris tuning, but here are some things to look at.
    1. Is the essbase.cfg file the same on both servers? You might have parallel calculation turned on in one and not the other (a sketch of the relevant settings follows below). Caches could also be set differently.
    2. Are the database caches set the same? This could impact performance as well.
    3. Are you doing an apples-to-apples comparison? Is one database loaded and recalculated many times while the other is not (or restructured or reloaded)?
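    On point 1, the settings most likely to differ are the parallel-calculation ones. An illustrative essbase.cfg fragment to compare between the two servers (the values are examples, not recommendations; CALCPARALLEL sets the number of calculation threads and CALCTASKDIMS the number of sparse dimensions used to identify parallel tasks):
         CALCPARALLEL 4
         CALCTASKDIMS 2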

  • How to get around a performance issue when dealing with a lot of data

    Hello All,
    This is an academic question really, I'm not sure what I'm going to do with my issue, but I have some options.  I was wondering if anyone would like to throw in their two cents on what they would do.
    I have a report where the users want to see all agreements and all conditions related to the updating of rebates, plus the affected invoices. From a technical perspective that means ENT6038-KONV-KONP-KONA-KNA1; these are the tables I have to hit. The problem is that when they retroactively update rebate conditions, they can hit thousands of invoices, which blossoms out to thousands of conditions... you see the problem. I simply have too much data to grab; it times out.
    I've tried everything around the code.  If you have a better way to get price conditions and agreement numbers off of thousands of invoices, please let me know what that is.
    I have a couple of options.
    1) Use shared memory to preload the data for the report. This would work, but I'm not going to know what data needs to be loaded until report run time - they put in a date, so I simply can't preload everything. I don't like this option much.
    2) Write a function module to do this work. When the user clicks the button to get this particular data, it will launch the FM in the background and e-mail them the results. As you know, a background job won't time out. So far this is my favoured option.
    Any other ideas?
    Oh... nope, BI is not an option; we don't have it. I know, I'm not happy about it. We do have a data warehouse, but the prospect of working with that group makes me wince.

    My two cents - firstly, I totally agree with Derick that it's probably a good idea to go back to the business and justify the requirement in regards to reporting and "whether any user can meaningfully process all those results in an aggregate". But having dealt with customers across industries over a long period of time, it would probably be a bit fanciful to expect them to change their requirements too much, as in my experience they neither understand (too much) technology nor want to hear about technical limitations of a system. They want what they want, if possible yesterday!
    So, about dealing with performance issues within ABAP: I'm sure you are already using efficient programming techniques like hashed internal tables with unique keys and accessing table rows via field symbols, but what I was going to suggest is to look at using [Extracts|http://help.sap.com/saphelp_nw04/helpdata/en/9f/db9ed135c111d1829f0000e829fbfe/content.htm] - see the sketch after this reply. I've had to deal with this a couple of times in the past when handling massive amounts of data, and I found extracts to be very efficient in regards to performance. A good point to remember when using extracts - and I quote from SAP Help - is that "The size of an extract dataset is, in principle, unlimited. Extracts larger than 500KB are stored in operating system files. The practical size of an extract is up to 2GB, as long as there is enough space in the filesystem."
    Hope this helps,
    Cheers,
    Sougata.
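    For what it's worth, a minimal sketch of the extract pattern Sougata describes, with hypothetical field names and literals just to show the shape (a real report would fill the fields from the ENT6038/KONV/KONP/KONA/KNA1 reads before each EXTRACT):
         REPORT z_extract_sketch.

         DATA: gv_knuma TYPE c LENGTH 10,  " agreement number (illustrative)
               gv_kbetr TYPE p DECIMALS 2. " condition rate (illustrative)

         " HEADER fields form the default sort key of the extract dataset
         FIELD-GROUPS: header, conditions.
         INSERT gv_knuma INTO header.
         INSERT gv_kbetr INTO conditions.

         START-OF-SELECTION.
           " fill the fields from your table reads, then EXTRACT each record;
           " an extract beyond roughly 500KB is transparently held in an OS file
           gv_knuma = '0000000001'.
           gv_kbetr = '100.00'.
           EXTRACT conditions.

           SORT.   " sorts the extract by the HEADER fields
           LOOP.   " loops over the (possibly file-backed) extract dataset
             AT NEW gv_knuma.
               WRITE: / 'Agreement:', gv_knuma.
             ENDAT.
             WRITE: / 'Rate:', gv_kbetr.
           ENDLOOP.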
