Launched ASCP plan runs a very long time

Hi,
I launched a constrained ASCP plan and it is taking a very long time. I launched the plan on Friday and it still has not finished as of Monday: Memory Based Snapshot & Snapshot Delete Worker are still running, and Loader Worker With Direct Load Option is still in the pending phase. The profile MSC: Share Plan Partitions is set to Yes.
When I run the query below:
select table_name,
       partition_name
from   all_tab_partitions
where  table_name like 'MSC_NET_RES_INST%'
OR     table_name like 'MSC_SYSTEM_ITEMS%'
order by substr(partition_name,instr(partition_name,'_',-1,1)+1);
The results are:
TABLE_NAME                 PARTITION_NAME
MSC_SYSTEM_ITEMS           SYSTEM_ITEMS_0
MSC_NET_RES_INST_AVAIL     NET_RES_INST_AVAIL_0
MSC_SYSTEM_ITEMS           SYSTEM_ITEMS_1
MSC_SYSTEM_ITEMS           SYSTEM_ITEMS__21
MSC_NET_RES_INST_AVAIL     NET_RES_INST_AVAIL__21
MSC_NET_RES_INST_AVAIL     NET_RES_INST_AVAIL_999999
MSC_SYSTEM_ITEMS           SYSTEM_ITEMS_999999
Please advise how to improve performance when launching the plan. Is changing MSC: Share Plan Partitions to No the only way to improve plan run performance?
Thanks & Regards,
Yolanda

Hi Yolanda,
a) Does this mean the plan was working fine earlier and you are only facing this issue recently? If so, what has changed on the server side, and have you applied any recent patches?
b) If the plan has never completed even once,
I suggest running data collection in complete refresh mode for one organization that has relatively little data. You can also modify the plan options to reduce the planning calculation load, for example:
- disable pegging
- remove any demand schedule / supply schedule / global forecast etc
- enable only a single organization with a relatively small demand and supply picture
- disable forecast spread
Once one plan run completes, expand your collection scope to the other organizations and re-enable the settings mentioned above.
There are many factors to consider for a performance issue, such as server configuration, hardware configuration, number of demands, etc. You can raise an SR in parallel while working on the points above.
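If the hang is partition-related, a quick sanity check is to see whether any plan partitions are actually free. This is only a sketch; the MSC_PLAN_PARTITIONS table and its FREE_FLAG column are from the ASCP data model as I remember it, so please verify the names in your release:

```sql
-- Sketch: list plan partitions and whether each is free (free_flag = 1).
-- Table and column names assumed from the ASCP data model; verify in your release.
select plan_id,
       partition_number,
       free_flag
from   msc_plan_partitions
order by partition_number;
```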
Thanks,
D

Similar Messages

  • Calc scripts running a very long time

    Hi All,
    Recently I migrated the objects from Production to the Test region. We have 5 applications, and each application has a set of calc scripts.
    In the Test region they are running a really long time, whereas in Production they run in much less time.
    In the TEST region each calc script is taking 10 times longer than in Production.
    No dimension was added and no script was updated; there is no difference in objects between TEST and PROD.
    Please suggest why there is this difference.
    Thanks
    Mahesh

    The obvious first question would be whether the hardware is different. You would expect prod to be a more powerful server and therefore perform better. I'm seeing a lot of virtualized test servers (who knows, really, what power the box has) and real prod servers. That can make a huge difference in performance.
    It makes benchmarking tough -- yes, you can see how long something will take relative to another process, but there isn't any way to know how it will perform in production until you sneak it over there and benchmark it. It can be a real PITA for Planning.
    And yes, the theory is that dev and prod are similar so that the above isn't an issue, but that seems to be a more theoretical than actual kind of thing.
    Regards,
    Cameron Lackpour

  • QT 7.6.7 won't launch or takes a VERY long time to launch

    Sometimes Quicktime won't launch at all. Sometimes it launches after I open Internet Explorer 32-bit and open a QT movie link from the browser. Sometimes it launches after 30 minutes. It is very unpredictable. I am not sure what is wrong with my Windows Vista 64-bit system. I have tried to repair the installation as well as uninstall and reinstall and reregister my Pro license for QT.
    Has anyone else come across this problem before?
    Sincerely,
    markerline
    Windows Vista 64-bit Workstation (HP), Quicktime 7.6.7 Pro

    Could be the firewall, I suppose. I'm running Norton 360 3.0 with the antivirus and firewall being set by that program; the Windows firewall is off. I heard there was a security issue with QuickTime prior to the 7.6.7 update.
    I double checked and I am not experiencing this type of problem on my Windows 7 64-bit machine which is a notebook.

  • Explain plan generating it self taking very long time !!! What to do ?

    Hi,
    I am trying to generate an explain plan for a query that runs for more than 2 hours. I generate it with "explain plan for select ...", but the explain plan itself is taking a very long time.
    Kindly suggest how to reduce the time for the explain plan. Secondly, why is the explain plan itself taking a very long time?
    thanks & regards
    PKP

    Just guessing here, but I've experienced this behaviour when I did two explains within a second. This is because a plan is identified by a statement_id or, if you don't provide one, by the timestamp (of DATE datatype). So if you execute two explain plans within the same second (using a script), the table holds two sets of rows with the same primary key. The hierarchical query that has to be executed to produce the nicely indented text will expand exponentially in this case.
    Maybe it is time to clean up your plan_table?
    Hope this helps.
    Regards,
    Rob.
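    To make Rob's suggestion concrete, here is a sketch of tagging each explain with an explicit statement_id and keeping plan_table small (the id string and the dual query are just placeholders for your own):

```sql
-- Tag the explain so two runs in the same second no longer share a key.
EXPLAIN PLAN SET STATEMENT_ID = 'slow_qry_01'
FOR SELECT * FROM dual;  -- substitute your long-running query

-- Display only that tagged plan.
SELECT * FROM TABLE(dbms_xplan.display('PLAN_TABLE', 'slow_qry_01'));

-- Periodically clear out old rows so the hierarchical display query stays fast.
DELETE FROM plan_table;
COMMIT;
```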

  • Demand Planning Job taking a very long time

    Hi,
    We run our demand planning job once every month. The job is taking a very long time (around 60,000 secs) to complete. When I check the job log, the system passes some steps and then just waits at specific steps with the message
    Step 059 started (program /SAPAPO/TS_BATCH_RUN, variant ZADAU_OFF1, user ID APP-BATCH)
    SQL: 07.01.2008 09:54:28 APP-BATCH
    TRUNCATE TABLE "/BI0/0600000002"
    SQL-END: 07.01.2008 09:54:28 00:00:00
    SQL: 07.01.2008 09:54:30 APP-BATCH
    TRUNCATE TABLE "/BI0/0600000002"
    SQL-END: 07.01.2008 09:54:30 00:00:00
    It waits for around 2 hours with the same repeated message before it goes to the next step. Not sure what the issue is. Our functional guys have tried changing the macros, etc., to optimize performance, but it is still the same issue. Any quick help would be greatly appreciated.
    Thanks in advance.
    Regards,
    PK

    Hi SB,
    No, we are not managing InfoCube key figures in the planning book. Appreciate your time.
    Regards,

  • Remote client copy (SCC9) runs a very long time!

    A remote client copy (SCC9) runs a very long time!
    How can the process be made quicker?
    (E.g., can the Oracle exp and imp tools be used, so that the copied SAP data can be made available in a different location for developers?)

    scn001 wrote:
    > A remote client copy (SCC9) runs a very long time!
    > How can the process be made quicker?
    > (E.g., can the Oracle exp and imp tools be used, so that the copied SAP data can be made available in a different location for developers?)
    Hi,
    You can export the client as well, but it will take a long time too, depending on your client size. Please note that the client copy operation should be performed with standard SAP client management tools, such as client export/import or remote copy.
    Ask this question of SAP support first. Technically, you can choose many ways to copy an SAP client, but as far as I know SAP will not support you (for errors you face during the client export/import or problems related to the copy operation) if you use any 3rd-party tool for client copy purposes.
    Best regards,
    Orkun Gedik
    Edited by: Orkun Gedik on Jun 30, 2011 10:57 AM

  • [SOLVED] systemd-tmpfiles-clean takes a very long time to run

    I've been having an issue for a while with systemd-tmpfiles-clean.service taking a very long time to run. I've tried to just ignore it, but it's really bothering me now.
    Measuring by running:
    # time systemd-tmpfiles --clean
    systemd-tmpfiles --clean 11.63s user 110.37s system 10% cpu 19:00.67 total
    I don't seem to have anything funky in any tmpfiles.d:
    # ls /usr/lib/tmpfiles.d/* /run/tmpfiles.d/* /etc/tmpfiles.d/* | pacman -Qo -
    ls: cannot access /etc/tmpfiles.d/*: No such file or directory
    error: No package owns /run/tmpfiles.d/kmod.conf
    /usr/lib/tmpfiles.d/gvfsd-fuse-tmpfiles.conf is owned by gvfs 1.20.1-2
    /usr/lib/tmpfiles.d/lastlog.conf is owned by shadow 4.1.5.1-9
    /usr/lib/tmpfiles.d/legacy.conf is owned by systemd 212-3
    /usr/lib/tmpfiles.d/libvirt.conf is owned by libvirt 1.2.4-1
    /usr/lib/tmpfiles.d/lighttpd.conf is owned by lighttpd 1.4.35-1
    /usr/lib/tmpfiles.d/lirc.conf is owned by lirc-utils 1:0.9.0-71
    /usr/lib/tmpfiles.d/mkinitcpio.conf is owned by mkinitcpio 17-1
    /usr/lib/tmpfiles.d/nscd.conf is owned by glibc 2.19-4
    /usr/lib/tmpfiles.d/postgresql.conf is owned by postgresql 9.3.4-1
    /usr/lib/tmpfiles.d/samba.conf is owned by samba 4.1.7-1
    /usr/lib/tmpfiles.d/slapd.conf is owned by openldap 2.4.39-1
    /usr/lib/tmpfiles.d/sudo.conf is owned by sudo 1.8.10.p2-1
    /usr/lib/tmpfiles.d/svnserve.conf is owned by subversion 1.8.8-1
    /usr/lib/tmpfiles.d/systemd.conf is owned by systemd 212-3
    /usr/lib/tmpfiles.d/systemd-nologin.conf is owned by systemd 212-3
    /usr/lib/tmpfiles.d/tmp.conf is owned by systemd 212-3
    /usr/lib/tmpfiles.d/uuidd.conf is owned by util-linux 2.24.1-6
    /usr/lib/tmpfiles.d/x11.conf is owned by systemd 212-3
    How do I debug why it is taking so long? I've looked in man 8 systemd-tmpfiles and on Google, hoping to find some sort of --debug option, but there seems to be none.
    Is it somehow possible to get a list of the directories that it looks at when it runs?
    Does anyone have any suggestions on how else to fix this?
    Does anyone else have this issue?
    Thanks,
    Gary
    Last edited by garyvdm (2014-05-08 18:57:43)

    Thank you very much falconindy. SYSTEMD_LOG_LEVEL=debug helped me find my issue.
    The cause of the problem was thousands of directories in /var/tmp/ created by a test suite with a broken clean up method. systemd-tmpfiles-clean was recursing through these, but not deleting them.
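    For anyone else landing here, a small sketch of the two checks that found this (the count helper is my own, not from the original posts):

```shell
# Rerun the cleaner with debug logging to see every path it visits
# (needs root; guarded so it is skipped where systemd-tmpfiles is absent).
if command -v systemd-tmpfiles >/dev/null 2>&1; then
  SYSTEMD_LOG_LEVEL=debug systemd-tmpfiles --clean 2>&1 | head -n 20
fi

# Count the immediate entries under a directory; thousands under /var/tmp is
# exactly the kind of leftover that made the clean take 19 minutes.
count_entries() { find "$1" -mindepth 1 -maxdepth 1 | wc -l; }
count_entries /var/tmp
```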

  • When we run $CRS_HOME/root.sh scripts-This hangs for a very long time

    Hi,
    At the time of Oracle Clusterware installation, when we run the $CRS_HOME/root.sh script…
    bash-3.00# /export/home/oracle/product/10.2.0/crs/root.sh
    WARNING: directory '/export/home/oracle/product/10.2.0' is not owned by root
    WARNING: directory '/export/home/oracle/product' is not owned by root
    WARNING: directory '/export/home/oracle' is not owned by root
    WARNING: directory '/export/home' is not owned by root
    WARNING: directory '/export' is not owned by root
    Checking to see if Oracle CRS stack is already configured
    Setting the permissions on OCR backup directory
    Setting up NS directories
    Oracle Cluster Registry configuration upgraded successfully
    WARNING: directory '/export/home/oracle/product/10.2.0' is not owned by root
    WARNING: directory '/export/home/oracle/product' is not owned by root
    WARNING: directory '/export/home/oracle' is not owned by root
    WARNING: directory '/export/home' is not owned by root
    WARNING: directory '/export' is not owned by root
    Successfully accumulated necessary OCR keys.
    Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
    node <nodenumber>: <nodename> <private interconnect name> <hostname>
    node 1: rac1 rac1-priv rac1
    node 2: rac2 rac2-priv rac2
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    Now formatting voting device: /dev/rdsk/c0d1s1
    Format of 1 voting devices complete.
    Startup will be queued to init within 30 seconds.
    This hangs for a very long time; I Ctrl-C'd out of it and re-ran it. Same result: it has now been stopped at the last line for hours. Any ideas?
    Thanks
    ANup
    Edited by: user485641 on Apr 25, 2009 7:05 PM

    What OS? What Oracle version?
    What do you find in the cluster log?

  • Query running for a very long time

    Hi, I have a sales query which is running for a very long time.
    It is built on a MultiProvider; the MultiProvider is fed from a cube and a DSO.
    I need to do some performance tuning. Can you please suggest an approach?

    Hi Ravi,
    First maintain BW statistics for that query, and after that go to SE11 and look at table RSDDSTAT: check how many records were selected from the database and how many records were transferred.
    Based on these statistics, create aggregates (use "propose aggregates" from the statistics tables). After running the query, check in RSRT whether the data is coming from the aggregates or not.
    Do not exclude any selections; try to include them in the query restrictions.
    Try to create secondary indexes.
    Let us know the result.

  • Inventory aged reports are taking a very long time to run

    We are using standard delivered extractors for Inventory. We have built an aged report, and it is taking a very long time to run as more and more data is added. We put the inventory in buckets 0-30, 31-60, ..., >365 days. We age based on a batch date the user enters. The problem is that the report has to go through every record to recalculate, because the key figures are non-cumulative.
    Any ideas/suggestions on how to make this more efficient? A new design?

    Hi MM,
    We can take a snapshot of monthly data from the query and store it in a DSO at month level.
    We used an APD on the query and then stored the data in DSO1 (write-optimized) -> DSO2 (standard) -> cube -> report based on the snapshot.
    From the new query, calculate the age.
    Rgds
    SVU

  • ODI Planning metadata load takes very long time

    Hi,
    I am using ODI for Hyperion Planning metadata load.
    We are facing performance issues, as the process is taking a very long time.
    The process uses "LKM File to SQL" and "IKM SQL to Hyperion Planning"
    The number of rows to process in the file is around 70000. The file is generated from DRM. The ODI integration process takes around 2 hours to process and load the file to Planning. Even if we add 1 new row to the file and everything else in the file remains the same, the process takes that long.
    So, the whole process takes around 3 hours to load to Planning for all dimensions.
    I tried increasing the fetch rows to 200 in source but there is no significant increase in performance. The Heap size is set to maximum of 285 MB in odiparams.bat.
    How can the performance be improved?
    Can I use different LKM or change any other setting?
    Thanks,
    RS

    Hi John,
    In my current implementation, the dimension hierarchies are maintained in DRM.
    So, business directly makes changes in DRM and exports the hierarchies in a text file.
    I agree that loading 70000 records on a regular basis is odd, but it makes it easier for the business to retain control of their hierarchies, and maintenance is easy in DRM.
    On bulk loading to DB table and loading to Planning, I have 2 questions:
    1. Do you think that "LKM SQL to SQL" [Oracle to Planning] will have significant improvement in performance over "LKM File to SQL" [File to Planning], as we are still using "Sunopsis Memory engine" as staging area?
    2. I checked your blog; there you have suggested using the Sunopsis Memory Engine for "LKM SQL to SQL".
    Is it mandatory to use the Sunopsis Memory Engine as the staging area, or can we use another user-defined staging area (Oracle tables)?
    Cheers,
    RS

  • Full Time Machine backup to a new external disk: "Time remaining about 4 days," and the computer logs off after a few hours, stopping the backup

    I am trying to do a full Time Machine Backup to a new external disk. The backup starts, and it says "Time remaining about 4 days." That seems like a very long time, but the real problem is that the computer "logs off" after a few hours, and the backup stops. The system preferences are set to "Never" for Computer sleep and Display sleep. The computer does not ordinarily log off automatically, but it has done this twice since I started the Time Machine backup.

    If you have more than one user account, these instructions must be carried out as an administrator.
    Launch the Console application in any of the following ways:
    ☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
    ☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
    ☞ Open LaunchPad. Click Utilities, then Console in the icon grid.
    Make sure the title of the Console window is All Messages. If it isn't, select All Messages from the SYSTEM LOG QUERIES menu on the left. If you don't see that menu, select
    View ▹ Show Log List
    from the menu bar.
    Enter the word "Starting" (without the quotes) in the String Matching text field. You should now see log messages with the words "Starting * backup," where * represents any of the words "automatic," "manual," or "standard." Note the timestamp of the last such message. Clear the text field and scroll back in the log to that time. Select the messages timestamped from then until the end of the backup, or the end of the log if that's not clear. Copy them (command-C) to the Clipboard. Paste (command-V) into a reply to this message.
    If all you see are messages that contain the word "Starting," you didn't clear the search box.
    If there are runs of repeated messages, post only one example of each. Don't post many repetitions of the same message.
    When posting a log extract, be selective. Don't post more than is requested.
    Please do not indiscriminately dump thousands of lines from the log into this discussion.
    Some personal information, such as the names of your files, may be included — anonymize before posting.
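    On those OS X releases the same filtering can be done from Terminal instead of Console. A sketch, assuming the classic log location /var/log/system.log and the Time Machine daemon name backupd:

```shell
# Pull Time Machine's "Starting ... backup" lines out of the system log
# rather than scrolling through Console (log path assumed for older OS X).
tm_backup_starts() { grep "backupd" "$1" | grep "Starting"; }
tm_backup_starts /var/log/system.log | tail -n 5
```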

  • 0HR_PT_3 init is taking a very long time.

    Hi Friends,
    I am trying to extract HR data (quotas) to BW. I did the initialization and it has been running for a very long time, but no records are getting transferred. Is there any config or setting I should apply before I do the init for this DataSource, or is there an OSS note I should refer to first?
    Please let me know; your help is appreciated.
    Thanks in advance.
    BN

    Hi BN,
    We have done the setting on the R/3 side for the time frame 01.01.2010 to 01.01.2012, and then I ran my init in the background, expecting it to take some time. The init load completed successfully with 60k records to PSA for those 2 years, but it took 16 long hours. The business wants the data for the last four years; does that mean it will take even longer to load the data?
    Loading time depends on how much data you have. If you have data between 2008 and 2010 (in your case) it takes more time; if you have less data it will complete in less time.
    All this is in the Dev system, hence we have smaller volumes of data. I am wondering how much more time it will take in Production.
    You will certainly have much more data in your production system. To plan for this you can use full repair loads:
    i) First do an init without data transfer; this sets up the daily delta.
    ii) Load your historical data by calendar year (try to load 6 months of data in one load).
    Another question: if I want to extract records which fall outside the configured time frame, how can I extract those records to BW? One of the OSS notes says that only records falling within this time frame will be extracted to BW.
    If you want the entire data set, then there is no need to give selections while loading.
    Let's consider the example below, which may suit your case.
    I think you have already loaded data from 2010 to 2012, and now you want to load the data with a timestamp before 2010. Then follow these steps:
    Fill the setup tables for all the data (without any selections).
    In RSA3, check whether you have data for the specified period.
    If you have data in RSA3, then you can load it using a full repair IP.
    Say you want to load data that falls before 12.31.2007; then give the selections in the IP as From: 01.01.1990 and To: 12.31.2007. (In the IP selections you will find "use conversion routine"; use this option and the dates will be picked up in the right format.)
    Regards,
    Venkatesh

  • Firefox takes very long time to open and often crashes for some websites

    I have used Firefox 3.6 with Windows XP, Vista, and now Windows 7. I face the same problem on all of them after about a week of use: Firefox takes a very long time to open, approximately 1 minute, on the first click. After restarting the computer, when I click again it becomes ready to use, but in the meantime, while you are working, Firefox opens another window, perhaps for that first click. What is this, please? I have reinstalled Firefox but it has not fixed the problem.
    == Troubleshooting information ==
    How can Firefox be ready for use at the first click?

    Open Activity Monitor and kill this process - rapportd.
    Reinstalling OS X Without Erasing the Drive
    Boot to the Recovery HD: Restart the computer and after the chime press and hold down the COMMAND and R keys until the menu screen appears. Alternatively, restart the computer and after the chime press and hold down the OPTION key until the boot manager screen appears. Select the Recovery HD and click on the downward pointing arrow button.
    Repair the Hard Drive and Permissions: Upon startup select Disk Utility from the main menu. Repair the Hard Drive and Permissions as follows.
    When the recovery menu appears select Disk Utility and press the Continue button. After Disk Utility loads select the Macintosh HD entry from the the left side list.  Click on the First Aid tab, then click on the Repair Disk button. If Disk Utility reports any errors that have been fixed, then re-run Repair Disk until no errors are reported. If no errors are reported click on the Repair Permissions button. Wait until the operation completes, then quit Disk Utility and return to the main menu.
    Reinstall OS X: Select Reinstall OS X and click on the Continue button.
    Note: You will need an active Internet connection. I suggest using Ethernet if possible because it is three times faster than wireless.
    Alternatively, see:
    Reinstall OS X Without Erasing the Drive
    Choose the version you have installed now:
    OS X Yosemite- Reinstall OS X
    OS X Mavericks- Reinstall OS X
    OS X Mountain Lion- Reinstall OS X
    OS X Lion- Reinstall Mac OS X

  • Discoverer report is taking a very long time

    Hi All,
    I need help on below discoverer issue.
    The Discoverer report is taking a very long time for rows to be retrieved on export when it is run for India, and it is required for month end. For some reason only 250 rows are retrieved at a time, and retrieval is slow, so it is taking 10 minutes to bring back 10,000 rows.
    Regards
    Kumar

    Please post the details of the application release, database version and OS along with the discoverer version.
    > I need help on below discoverer issue.
    > discoverer report is taking a very long time for rows to be retrieved on export when it is run for India and it is required for month end. For some reason only 250 rows are retrieved at a time and retrieval is slow so it is taking 10 minutes to bring back 10,000 rows.
    Please see these links.
    https://forums.oracle.com/forums/search.jspa?threadID=&q=Discoverer+AND+Long+AND+Time&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
    https://forums.oracle.com/forums/search.jspa?threadID=&q=Discoverer+AND+Performance&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
    https://forums.oracle.com/forums/search.jspa?threadID=&q=Discoverer+AND+Slow&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
    Thanks,
    Hussein
