Calc scripts running for a very long time

Hi All,
I recently migrated the objects from Production to the Test region. We have 5 applications, and each application has a set of calc scripts.
In the Test region they run for a very long time, whereas in Production they finish much faster: each calc script in TEST is taking about 10 times longer than in Production.
No dimensions were added and no scripts were updated; there is no difference in objects between TEST and PROD.
Please suggest why there is this difference.
Thanks
Mahesh

The obvious first question is whether the hardware is different. You would expect prod to be a more powerful server and therefore to perform better. I'm seeing a lot of virtualized test servers (who knows, really, what power the box has) paired with physical prod servers, and that can make a huge difference in performance.
It makes benchmarking tough -- yes, you can see how long something takes relative to another process, but there isn't any way to know how it will perform in production until you sneak it over there and benchmark it. It can be a real PITA for Planning.
And yes, the theory is that dev and prod are similar so that the above isn't an issue, but that seems to be more theoretical than actual.
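(A quick sanity check, if you have shell access to both boxes: capture the basic specs on each server and diff the two files. This is just a sketch assuming a Linux host with /proc; the output file name is made up.)

```shell
#!/bin/sh
# Sketch: dump basic hardware facts on each server into a file,
# then diff the prod and test copies to spot capacity differences.
{
  echo "cpus=$(nproc)"
  grep MemTotal /proc/meminfo
} > hw-profile.txt
cat hw-profile.txt
```

Run it on both servers and compare the two hw-profile.txt files; a large gap in CPU count or memory usually explains a large gap in calc times.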
Regards,
Cameron Lackpour

Similar Messages

  • Report script taking a very long time to export in ASO

    Hi All,
    My report script is taking a very long time to execute and finally fails with a timed-out message.
    I'm working on ASO cubes with 14 dimensions, and I need to export all data across all dimensions for a single version.
    The data volume is very large and the member count in each dimension is also huge, which is making the export difficult.
    Any suggestions?
    Thanks

    Here is a link that addresses several ways to optimize your report script. I use report scripts for Level 0 exports in an ASO environment as well, although the majority of our dimensions are attribute dimensions.
    These are the most effective solutions we have implemented to improve our exports via report scripts:
    1. Make sure your report script is written in the order in which the Report Extractor retrieves data.
    2. Suppress zero and missing data.
    3. Use the LINK command within reports for dimensions that are really big and pull at Level 0.
    4. Use symmetric reports.
    5. Break the export out into multiple reports.
    You may also want to consider some additional solutions outlined in this link:
    1. The MDX optimization commands
    2. Back-end system settings
    http://download.oracle.com/docs/cd/E12825_01/epm.111/esb_dbag/drpoptim.htm
    I hope this helps. Posting your report script would also help other users provide feedback.
    Thanks
    Edited by: ronnie on Jul 14, 2011 9:25 AM
    Edited by: ronnie on Jul 14, 2011 9:53 AM

  • Launching an ASCP plan takes a very long time

    Hi,
    I launched a constrained ASCP plan and it is taking a very long time. I launched it on Friday and it still has not finished as of Monday: Memory Based Snapshot & Snapshot Delete Worker are still running, and Loader Worker With Direct Load Option is still in the pending phase. MSC: Share Plan Partitions has been set to Yes.
    When I run query below :
    select table_name,
           partition_name
    from   all_tab_partitions
    where  table_name like 'MSC_NET_RES_INST%'
    OR     table_name like 'MSC_SYSTEM_ITEMS%'
    order by substr(partition_name,instr(partition_name,'_',-1,1)+1);
    The results are:
    MSC_SYSTEM_ITEMS          SYSTEM_ITEMS_0
    MSC_NET_RES_INST_AVAIL    NET_RES_INST_AVAIL_0
    MSC_SYSTEM_ITEMS          SYSTEM_ITEMS_1
    MSC_SYSTEM_ITEMS          SYSTEM_ITEMS__21
    MSC_NET_RES_INST_AVAIL    NET_RES_INST_AVAIL__21
    MSC_NET_RES_INST_AVAIL    NET_RES_INST_AVAIL_999999
    MSC_SYSTEM_ITEMS          SYSTEM_ITEMS_999999
    Please help me understand how to improve performance when launching the plan. Is changing MSC: Share Plan Partitions to No the only way to improve plan-run performance?
    Thanks & Regards,
    Yolanda

    Hi Yolanda,
    a) Does this mean the plan was working fine earlier and you are only facing this issue recently? If so, what have you changed on the server side, or have you applied any recent patches?
    b) If you have never completed a plan run:
    I suggest running data collection in complete refresh mode for one organization that has relatively little data. You can also modify the plan options to reduce the planning calculation load, e.g.
    - disable pegging
    - remove any demand schedule / supply schedule / global forecast etc.
    - enable only a single organization with a relatively small demand and supply picture
    - disable forecast spreading
    Once one plan run completes, expand your collection scope to the other organizations and re-enable the settings above.
    There are many factors to consider for a performance issue: server configuration, hardware configuration, number of demands, etc. You can raise an SR in parallel while working on the above points.
    Thanks,
    D

  • When we run the $CRS_HOME/root.sh script, it hangs for a very long time

    Hi,
    During Oracle Clusterware installation, when we run the $CRS_HOME/root.sh script…
    bash-3.00# /export/home/oracle/product/10.2.0/crs/root.sh
    WARNING: directory '/export/home/oracle/product/10.2.0' is not owned by root
    WARNING: directory '/export/home/oracle/product' is not owned by root
    WARNING: directory '/export/home/oracle' is not owned by root
    WARNING: directory '/export/home' is not owned by root
    WARNING: directory '/export' is not owned by root
    Checking to see if Oracle CRS stack is already configured
    Setting the permissions on OCR backup directory
    Setting up NS directories
    Oracle Cluster Registry configuration upgraded successfully
    WARNING: directory '/export/home/oracle/product/10.2.0' is not owned by root
    WARNING: directory '/export/home/oracle/product' is not owned by root
    WARNING: directory '/export/home/oracle' is not owned by root
    WARNING: directory '/export/home' is not owned by root
    WARNING: directory '/export' is not owned by root
    Successfully accumulated necessary OCR keys.
    Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
    node <nodenumber>: <nodename> <private interconnect name> <hostname>
    node 1: rac1 rac1-priv rac1
    node 2: rac2 rac2-priv rac2
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    Now formatting voting device: /dev/rdsk/c0d1s1
    Format of 1 voting devices complete.
    Startup will be queued to init within 30 seconds.
    It hangs at this point for a very long time. I pressed Ctrl-C out of it and re-ran it, with the same result: it has been stopped at that last line for hours now. Any ideas?
    Thanks
    ANup
    Edited by: user485641 on Apr 25, 2009 7:05 PM

    What OS? What Oracle version?
    What do you find in the cluster log?

  • Remote client copy (SCC9) runs for a very long time!

    A remote client copy (SCC9) runs for a very long time!
    How can the process be made quicker?
    (e.g. using the Oracle imp and exp tools, if it can be done so that SAP understands which data has been copied and is now in a different location, for developers)

    scn001 wrote:
    remote client copy (SCC9) runs a very long time!
    > how to do it quickly process?
    > (eg use imp and exp-oracle tool, as it can be done to understand what the SAP data has been copied and are now in a different location, for Developers)
    Hi,
    You can export the client as well, but it will also take a long time, depending on your client size. Please note that client copy operations should be performed with the standard SAP client management tools, such as client export/import or remote copy.
    Ask SAP support about this first. Technically you can choose many ways to copy a SAP client, but as far as I know SAP will not support you (for example, with errors during the client export/import or problems caused by the copy operation) if you use any third-party tool for client copy purposes.
    Best regards,
    Orkun Gedik
    Edited by: Orkun Gedik on Jun 30, 2011 10:57 AM

  • [SOLVED] systemd-tmpfiles-clean takes a very long time to run

    I've been having an issue for a while with systemd-tmpfiles-clean.service taking a very long time to run. I've tried to just ignore it, but it's really bothering me now.
    Measuring by running:
    # time systemd-tmpfiles --clean
    systemd-tmpfiles --clean 11.63s user 110.37s system 10% cpu 19:00.67 total
    I don't seem to have anything funky in any tmpfiles.d:
    # ls /usr/lib/tmpfiles.d/* /run/tmpfiles.d/* /etc/tmpfiles.d/* | pacman -Qo -
    ls: cannot access /etc/tmpfiles.d/*: No such file or directory
    error: No package owns /run/tmpfiles.d/kmod.conf
    /usr/lib/tmpfiles.d/gvfsd-fuse-tmpfiles.conf is owned by gvfs 1.20.1-2
    /usr/lib/tmpfiles.d/lastlog.conf is owned by shadow 4.1.5.1-9
    /usr/lib/tmpfiles.d/legacy.conf is owned by systemd 212-3
    /usr/lib/tmpfiles.d/libvirt.conf is owned by libvirt 1.2.4-1
    /usr/lib/tmpfiles.d/lighttpd.conf is owned by lighttpd 1.4.35-1
    /usr/lib/tmpfiles.d/lirc.conf is owned by lirc-utils 1:0.9.0-71
    /usr/lib/tmpfiles.d/mkinitcpio.conf is owned by mkinitcpio 17-1
    /usr/lib/tmpfiles.d/nscd.conf is owned by glibc 2.19-4
    /usr/lib/tmpfiles.d/postgresql.conf is owned by postgresql 9.3.4-1
    /usr/lib/tmpfiles.d/samba.conf is owned by samba 4.1.7-1
    /usr/lib/tmpfiles.d/slapd.conf is owned by openldap 2.4.39-1
    /usr/lib/tmpfiles.d/sudo.conf is owned by sudo 1.8.10.p2-1
    /usr/lib/tmpfiles.d/svnserve.conf is owned by subversion 1.8.8-1
    /usr/lib/tmpfiles.d/systemd.conf is owned by systemd 212-3
    /usr/lib/tmpfiles.d/systemd-nologin.conf is owned by systemd 212-3
    /usr/lib/tmpfiles.d/tmp.conf is owned by systemd 212-3
    /usr/lib/tmpfiles.d/uuidd.conf is owned by util-linux 2.24.1-6
    /usr/lib/tmpfiles.d/x11.conf is owned by systemd 212-3
    How do I debug why it is taking so long? I've looked in man 8 systemd-tmpfiles and on Google, hoping to find some sort of --debug option, but there seems to be none.
    Is it somehow possible to get a list of the directories that it looks at when it runs?
    Does anyone have suggestions on how else to fix this?
    Anyone else have this issue?
    Thanks,
    Gary
    Last edited by garyvdm (2014-05-08 18:57:43)

    Thank you very much falconindy. SYSTEMD_LOG_LEVEL=debug helped me find my issue.
    The cause of the problem was thousands of directories in /var/tmp/ created by a test suite with a broken clean up method. systemd-tmpfiles-clean was recursing through these, but not deleting them.
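    (For anyone hitting the same thing: `SYSTEMD_LOG_LEVEL=debug systemd-tmpfiles --clean` shows each path as it is aged, and you can count the directories it would have to walk yourself with find. A sketch against a scratch directory, so it is safe to run anywhere; the directory names are made up.)

```shell
#!/bin/sh
# Sketch: count directories under a tmp-like path, the same tree walk
# that tmpfiles aging has to do. Thousands here means a slow clean run.
scratch=$(mktemp -d)
mkdir -p "$scratch/a" "$scratch/b" "$scratch/c"
count=$(find "$scratch" -mindepth 1 -type d | wc -l)
echo "$count"
rm -rf "$scratch"
```

    Pointing find at /var/tmp instead of the scratch directory shows how much work the cleaner faces on a real system.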

  • Query running for a very long time

    Hi, I have a sales query which is running for a very long time.
    It is built on a MultiProvider. The MultiProvider is fed from a cube and a DSO.
    It is taking a very long time to run.
    I need to do some performance tuning; can you please suggest something?

    Hi Ravi,
    First, maintain BW statistics for that query, then go to SE11 and look at table RSDDSTAT. Check how many records were selected from the database and how many records were transferred.
    Based on these statistics, create aggregates (use the propose-aggregates option driven by the statistics tables). After running the query, check in RSRT whether the data is coming from the aggregates or not.
    Do not exclude any selections; try to include them as query restrictions.
    Try to create secondary indexes.
    Let us know the result.

  • Inventory aged reports are taking a very long time to run

    We are using the standard delivered extractors for Inventory. We have built an aging report and it is taking a very long time to run as more and more data is added. We put the inventory in buckets: 0-30, 31-60, ..., >365 days. We age based on a batch date the user enters. The problem is that the report has to go through every record to recalculate, because the key figures are non-cumulative.
    Any ideas/suggestions on how to make this more efficient? A new design?

    Hi MM,
    We can take a snapshot of monthly data from the query and store it in a DSO at month level.
    We used an APD on the query and then stored the results in DSO1 (write-optimized) -> DSO2 (standard) -> cube -> report based on the snapshot.
    From the new query, calculate the age.
    Rgds
    SVU

  • Calculations on new server is taking a very long time

    Hello,
    We are upgrading from a 9.2 to a 9.3 system. The 9.3 system is on completely different servers than the 9.2 system. We have migrated a couple of applications, and the calcs on the 9.3 system are taking a very long time. For instance, a calc that used to take 1 hour on the 9.2 system now takes 7. On a different application, a calc that used to take 15 minutes now takes an hour. The applications we put on 9.3 are identical to the applications on 9.2 (we used the migration utility). Additionally, I have ensured that the essbase.cfg files have the same cache settings. What else can we do to get the performance of the 9.3 system to be equal to, if not better than, the 9.2 system?
    I appreciate any help.

    I would recommend that you run perfmon.exe (if using Windows) to look at the CPU and Disk I/O utilization.
    On a well tuned system, you will see a "toggling" of max CPU and max I/O as each keeps the other busy and takes relatively small breaks in activity.
    My suspicion is that the I/O is the bottleneck on the new system, maybe it's using an external array that is shared, perhaps? If this is the case, the CPU utilization will be relatively low and the I/O will be relatively high (if you are familiar with the concept of duty cycle, the I/O duty cycle will be well more than 50% if it's the bottleneck, much less than 50% if the CPU is the bottleneck, and about even if they are well balanced).
    The other major possibility (aside from a weak disk configuration) is that a RAID drive is in fault mode and the controller is rebuilding the set. The remaining possibilities are fringe cases compared to these two.
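    (The duty-cycle rule of thumb above reduces to simple arithmetic on sampled busy/total time; a toy sketch, with invented sample numbers, just to make the comparison concrete.)

```shell
#!/bin/sh
# Toy duty-cycle calculation: busy time over total sample time, as a percent.
# With these invented samples the I/O side is the bottleneck (>50%), not CPU.
cpu_busy=60;  cpu_total=200
io_busy=150;  io_total=200
echo "cpu=$(( 100 * cpu_busy / cpu_total ))% io=$(( 100 * io_busy / io_total ))%"
```

    Substitute the busy/total counters from perfmon (or your platform's equivalent) to see which side is saturated.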

  • Explain plan generation itself taking a very long time - what to do?

    Hi,
    I am trying to generate an explain plan for a query which runs for more than 2 hours. I generate it with "explain plan for select ...", but that itself is taking a very long time.
    Kindly suggest how to reduce the time for the explain plan. Secondly, why does the explain plan itself take a very long time?
    thanks & regards
    PKP

    Just guessing here, but I've experienced this behaviour when I did two explains within a second. This is because a plan is identified by a statement_id or, if you don't provide a statement_id, by the timestamp (of DATE datatype). So if you execute two explain plans within the same second (using a script), the table contains two sets of data with the same primary key. The hierarchical query that is executed to produce the nicely indented text will expand exponentially in this case.
    Maybe it's time to clean up your plan_table?
    Hope this helps.
    Regards,
    Rob.
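    (Following up on Rob's suggestion: clearing out old rows, or tagging each explain with its own statement_id, avoids the duplicate-key expansion he describes. A sketch only, assuming the default PLAN_TABLE; 'my_query' is a made-up tag.)

```sql
-- Sketch: wipe stale plan rows, then explain with an explicit statement_id
-- so two explains in the same second cannot collide ('my_query' is invented).
DELETE FROM plan_table;
COMMIT;

EXPLAIN PLAN SET STATEMENT_ID = 'my_query' FOR
SELECT * FROM dual;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE', 'my_query'));
```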

  • Firefox takes very long time to open and often crashes for some websites

    I have used Firefox 3.6 with Windows XP, Vista, and now Windows 7. I face the same problem on all of them after about a week of use: Firefox takes a very long time to open, approximately 1 minute, at the first click. After restarting the computer and clicking again it becomes ready to use, but meanwhile, while you are working, Firefox opens in another window, perhaps for the first click. What is this, please? I have reinstalled Firefox but it has not fixed the problem.
    == Troubleshooting information ==
    How can Firefox be ready for use at the first click?

    Open Activity Monitor and kill this process - rapportd.
    Reinstalling OS X Without Erasing the Drive
    Boot to the Recovery HD: Restart the computer and after the chime press and hold down the COMMAND and R keys until the menu screen appears. Alternatively, restart the computer and after the chime press and hold down the OPTION key until the boot manager screen appears. Select the Recovery HD and click on the downward pointing arrow button.
    Repair the Hard Drive and Permissions: Upon startup select Disk Utility from the main menu. Repair the Hard Drive and Permissions as follows.
    When the recovery menu appears select Disk Utility and press the Continue button. After Disk Utility loads select the Macintosh HD entry from the the left side list.  Click on the First Aid tab, then click on the Repair Disk button. If Disk Utility reports any errors that have been fixed, then re-run Repair Disk until no errors are reported. If no errors are reported click on the Repair Permissions button. Wait until the operation completes, then quit Disk Utility and return to the main menu.
    Reinstall OS X: Select Reinstall OS X and click on the Continue button.
    Note: You will need an active Internet connection. I suggest using Ethernet if possible because it is three times faster than wireless.
    Alternatively, see:
    Reinstall OS X Without Erasing the Drive
    Choose the version you have installed now:
    OS X Yosemite- Reinstall OS X
    OS X Mavericks- Reinstall OS X
    OS X Mountain Lion- Reinstall OS X
    OS X Lion- Reinstall Mac OS X

  • Discoverer report is taking a very long time

    Hi All,
    I need help on below discoverer issue.
    The Discoverer report is taking a very long time to retrieve rows on export when it is run for India, and it is required for month end. For some reason only 250 rows are retrieved at a time and retrieval is slow, so it is taking 10 minutes to bring back 10,000 rows.
    Regards
    Kumar

    Please post the details of the application release, database version and OS along with the discoverer version.
    I need help on below discoverer issue.
    discoverer report is taking a very long time for rows to be retrieved on export when it is run for India and it is required for month end. For some reason only 250 rows are retrieved at a time and retrieval is slow so it is taking 10 minutes to bring back 10,000 rows.
    Please see these links.
    https://forums.oracle.com/forums/search.jspa?threadID=&q=Discoverer+AND+Long+AND+Time&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
    https://forums.oracle.com/forums/search.jspa?threadID=&q=Discoverer+AND+Performance&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
    https://forums.oracle.com/forums/search.jspa?threadID=&q=Discoverer+AND+Slow&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
    Thanks,
    Hussein

  • Brand New Installation of Snow Leopard Takes A Very Long Time To Log In

    Hello All
    I installed Snow Leopard onto a friend's computer this past weekend and I have run into a strange issue. The computer in question is a Black MacBook 2.16 GHz with 2GB of RAM. The first thing I did was to back up the computer to an external hard drive using Time Machine. Next, I formatted the drive and zeroed out all of the data. I then installed Mac OS X 10.6 onto the computer. During the initial boot process, I restored the applications, home directory, and misc. files from the Time Machine backup.
    The first login took an extremely long time. I figured that the Mac was sorting out the newly restored files and home directory and that this was probably normal. Then I ran all of the software updates to get the machine completely current. When I rebooted, it again took a really long time to log in. I thought maybe the machine was sorting out and installing the new updates (even though it has never taken this long on my MacBook). So I started up from the Snow Leopard install CD, ran Disk Utility, and repaired the disk and permissions. Upon another reboot (I timed it this time) it, again, took a very long time ... *8.5 minutes!*
    I'm at my wit's end and am thinking about rebuilding the machine again to see if a fresh build is better. As far as I can tell, the system is working perfectly once I get logged in; no spinning beach ball, hangs, kernel panics or what have you. Also, the machine didn't have this issue before I rebuilt it. Has anybody ever heard of this before? If so, is there a nice fix for it?
    Thank you all for reading and have a good day.

    Check /System/Library/StartupItems & /Library/StartupItems on the Black MacBook. Any items in either folder are not part of the Snow Leopard install & may be causing the slow starts. Also check System Preferences > Accounts > Login Items, especially if the log in screen comes up quickly & most of the delay is after entering the account password. It is normal to see the iTunes Helper app login item, but any others may be causing the delay, especially if they are not fully compatible with Snow Leopard or are apps that require Rosetta to start up before they will run. The same goes for any items in the "Others" section of System Preferences.
    Message was edited by: R C-R

  • I am trying to do a full Time Machine backup to a new external disk. The backup starts and says "Time remaining about 4 days." That seems like a very long time, but the real problem is that the computer "logs off" after a few hours, and the backup stops

    I am trying to do a full Time Machine Backup to a new external disk. The backup starts, and it says "Time remaining about 4 days." That seems like a very long time, but the real problem is that the computer "logs off" after a few hours, and the backup stops. The system preferences are set to "Never" for Computer sleep and Display sleep. The computer does not ordinarily log off automatically, but it has done this twice since I started the Time Machine backup.

    If you have more than one user account, these instructions must be carried out as an administrator.
    Launch the Console application in any of the following ways:
    ☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
    ☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
    ☞ Open LaunchPad. Click Utilities, then Console in the icon grid.
    Make sure the title of the Console window is All Messages. If it isn't, select All Messages from the SYSTEM LOG QUERIES menu on the left. If you don't see that menu, select
    View ▹ Show Log List
    from the menu bar.
    Enter the word "Starting" (without the quotes) in the String Matching text field. You should now see log messages with the words "Starting * backup," where * represents any of the words "automatic," "manual," or "standard." Note the timestamp of the last such message. Clear the text field and scroll back in the log to that time. Select the messages timestamped from then until the end of the backup, or the end of the log if that's not clear. Copy them (command-C) to the Clipboard. Paste (command-V) into a reply to this message.
    If all you see are messages that contain the word "Starting," you didn't clear the search box.
    If there are runs of repeated messages, post only one example of each. Don't post many repetitions of the same message.
    When posting a log extract, be selective. Don't post more than is requested.
    Please do not indiscriminately dump thousands of lines from the log into this discussion.
    Some personal information, such as the names of your files, may be included — anonymize before posting.
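    (The manual Console search above can also be scripted; a sketch against a sample log file, where the file contents and host name are invented for illustration.)

```shell
#!/bin/sh
# Sketch: pull Time Machine "Starting ... backup" lines out of a log file,
# the same filter the Console instructions above apply by hand.
log=$(mktemp)
cat > "$log" <<'EOF'
May  1 10:00:01 mac com.apple.backupd[123]: Starting automatic backup
May  1 10:00:02 mac com.apple.backupd[123]: Backing up to /Volumes/TM
May  1 12:00:01 mac com.apple.backupd[124]: Starting manual backup
EOF
grep -E 'Starting (automatic|manual|standard) backup' "$log"
rm -f "$log"
```

    Point the grep at your exported system log instead of the sample file to find the timestamps of the last backup attempts.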

  • EHP4 for ERP 6.0 upgrade: phase MAIN_SHDRUN/DDIC_UPG runs for a very long time

    Dear all,
    I just tried to upgrade a fresh EHP4-ready ERP 6.0 system (Linux x86_64 - Oracle) to EHP4 for ERP 6.0.
    I am going through the phase MAIN_SHDRUN/DDIC_UPG, but it is taking a very long time.
    I checked the server resource usage; it is very low.
    I checked my ERP 6.0 system and also the shadow system (background jobs, SM50); there seems to be no activity on either.
    What is the MAIN_SHDRUN/DDIC_UPG phase about?
    Is it normal for this phase to take several hours?
    What should I check to ensure that the upgrade process is still running?
    How can I estimate the time needed for this EHP4 upgrade?
    I installed that fresh EHP4-ready ERP 6.0 system in only around 3-4 hours.
    Regards,
    Fendhy.

    Dear All,
    I have the same issue with an ERP 6.0 EHP4 update: the "MAIN_SHDRUN/DDIC_UPG" phase is still running and 24 hours have passed (on this phase alone).
    This is a new installation, on Production. I had no issues with DEV and QA, where the same phase took only 50 minutes.
    Everything is the same between DEV and PRD except that Production is on an AIX HACMP cluster and uses the latest EHPi installer plus the latest SPAM (version 41).
    - There are no errors or warnings in the /usr/sap/SID/EHPI/abap/log directory
    - The phase is running, writing and reading files in the installation directory
    - No automatic cluster fail-over mechanisms are enabled
    - The cluster has two nodes: DB+ASCS on one node and the CI on the other
    - The shadow instance is running; I can log in; there are no errors in SM21 or ST22 and no running or cancelled background jobs
    - I can see 15% to 35% CPU "waits" on the CI node where EHPi is running, but 50% of the CPU is idle
    - On the same CI node, 70% to 100% disk busy waits can be seen on the disks where the EHPi installation directory is mounted
    - On the DB node, CPU utilization and disk utilization are low
    What could be the reason? Any ideas? How should I diagnose this?
    Thanks,
    P.D
