Why does a checkpoint take a very long time in some cases?

In our project we have a checkpoint thread, and we have found that a checkpoint sometimes takes a very long time, while in most cases it completes in only a few milliseconds.
The amount of updated data is about the same as in the fast checkpoint cases.
What factors affect checkpoint time?

Hello,
Checkpoints write data from the log file(s) to the actual database files. That means it is affected primarily by the amount of data in the log that has not yet been checkpointed. This is usually how much data has been committed since the last checkpoint. It could also be affected by other disk activity on the same partition from any other aspect of the system.
Regards,
George
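A way to narrow down which factor is at play is to time a forced checkpoint and watch disk activity on the partition that holds the log and database files while it runs. A minimal sketch, assuming a Linux host with sysstat installed; the last two commands apply only if the environment is Berkeley DB (an assumption, though the log-to-database checkpoint model described above matches it), and /path/to/env is a placeholder:

    # Watch utilisation of the disk holding the log and database files
    # while a slow checkpoint is in progress (Ctrl+C to stop):
    iostat -x 1

    # If the environment is Berkeley DB (assumption), force and time a
    # checkpoint, then inspect the transaction/checkpoint statistics:
    time db_checkpoint -1 -h /path/to/env
    db_stat -h /path/to/env -t | grep -i checkpoint

If the forced checkpoint is fast when little has been committed and slow right after large commits, the un-checkpointed log volume is the dominant factor; if iostat shows the disk saturated by other processes during the slow runs, contention from other disk activity is the more likely culprit.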

Similar Messages

  • ASCP plan launch takes a very long time

    Hi,
    I launched a constrained ASCP plan and it is taking a very long time. I launched the plan on Friday and it still has not finished as of Monday: Memory Based Snapshot & Snapshot Delete Worker are still running, and Loader Worker With Direct Load Option is still in the pending phase. MSC: Share Plan Partitions has been set to Yes.
    When I run the query below:
    select table_name,
           partition_name
    from   all_tab_partitions
    where  table_name like 'MSC_NET_RES_INST%'
    OR     table_name like 'MSC_SYSTEM_ITEMS%'
    order by substr(partition_name,instr(partition_name,'_',-1,1)+1);
    The results are:
    TABLE_NAME              PARTITION_NAME
    MSC_SYSTEM_ITEMS        SYSTEM_ITEMS_0
    MSC_NET_RES_INST_AVAIL  NET_RES_INST_AVAIL_0
    MSC_SYSTEM_ITEMS        SYSTEM_ITEMS_1
    MSC_SYSTEM_ITEMS        SYSTEM_ITEMS__21
    MSC_NET_RES_INST_AVAIL  NET_RES_INST_AVAIL__21
    MSC_NET_RES_INST_AVAIL  NET_RES_INST_AVAIL_999999
    MSC_SYSTEM_ITEMS        SYSTEM_ITEMS_999999
    Please help me improve the performance of the plan run. Is changing MSC: Share Plan Partitions to No the only way to improve plan run performance?
    Thanks & Regards,
    Yolanda

    Hi Yolanda,
    a) Does this mean that the plan was working fine earlier and you are only facing this issue recently? If so, what have you changed on the server side, or have you applied any recent patches?
    b) If the plan has never completed even once, I suggest running data collection in complete refresh mode for one organization that has relatively little data. You can also modify the plan options to reduce the planning calculation load, for example:
    - disable pegging
    - remove any demand schedule / supply schedule / global forecast, etc.
    - enable only a single organization with a relatively small demand and supply picture
    - disable forecast spread
    Once one plan run has completed, expand your collection scope to the other organizations and re-enable the settings mentioned above.
    There are many factors to consider for a performance issue, such as server configuration, hardware configuration, and the number of demands, so you can raise an SR in parallel while working on the points above.
    Thanks,
    D

  • Why does it take a very long time [if ever] to change or adjust the tempo of a loop in the audio track when it is set to adjust regions to locators?

    Why is it that it takes a very long time [if ever] to change or adjust the tempo of a loop in the audio track when I set it to adjust regions to locators? As of now the spinning wheel of death is still very much spinning. Thanks for any help.
    Is there another way to adjust the tempo of loops faster, or any other technique?

    No clue why the final processes have suddenly started to take so long. Two things I'd try: a) capture from an older tape to see if some problem with the new tape is at fault.  And b) check the health of your RAM and the hard drive.
    The red frame sounds a bit like a glitch we used to have in OnLocation (actually in its predecessor HDV Rack) which was caused by a partial GOP. But that was a product of HDV Rack recording from the live video stream. It turned out that HDV cameras intentionally interrupt the data stream for an instant upon starting to record--specifically to avoid recording a partial GOP to tape. So my gut says that the tape has partial GOPs at the points where you stopped/started recording.

  • Why is it that Photoshop Elements takes a very long time to open a png file?

    Even though I wait for a very long time (minutes)  for Photoshop Elements 11 to open a PNG file, the computer's process monitor shows nothing discernible going on while I am waiting.
    This is on a Windows 7 machine with an Intel I7 4 core processor.

    I killed Firefox and Thunderbird and tried it again.  Using a stopwatch, it took 5 minutes 54 seconds to open the file.  In the interim the application windows were frozen.  The process monitor showed Photoshop not responding.  The thread analysis showed a thread waiting on splwow64.exe. 
    splwow64.exe allows 32-bit applications to connect with the 64-bit printer spooler service on x64 Windows builds.
    I should also point out that when I exit Photoshop, it does not seem to die. I cannot restart it in the normal way; the starter software just goes away when I click on PE 11. I finally go into the task monitor and kill the PhotoshopElements process, and then I can restart Photoshop Elements. Although, a long time after I had exited PE, while I was writing this post, I tried opening PE the normal way and it worked. Many of the items I mention are probably red herrings.
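    If splwow64.exe really is what the application thread is stuck waiting on, the 64-bit print spooler bridge is worth a look. One hedged thing to try (not something confirmed in this thread) is restarting the Spooler service from an elevated command prompt before opening the file again:

        net stop spooler
        net start spooler

    If the PNG then opens quickly, a stuck print job or a misbehaving printer driver is the likely cause.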

  • After installing Lion, my Time Machine backups to a Time Capsule take a very long time, particularly indexing the backup. Does anyone know why, and is there a fix?

    I have installed Lion over Snow Leopard and have noticed a marked increase in the time it takes my Time Capsule/Time Machine to back up. It seems to spend a very long time indexing the backup. Does anyone know why and, more importantly, is there a "fix"?

    The first index with Lion takes a very long time; it could take over 10 hours. You just have to wait. You can see the progress by opening Console and entering backupd in the search box.
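    If you prefer Terminal over Console, the same backupd messages can be followed from the system log (a sketch for Lion-era OS X, where Time Machine logs to /var/log/system.log):

        # Follow indexing/backup progress live:
        tail -f /var/log/system.log | grep backupd
        # Or review the most recent messages:
        grep backupd /var/log/system.log | tail -n 50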

  • Why do UPD processes execute report RSM13000 for a very long time?

    Hi,
    When I use SM50 or SM51 to check my CRM system (version 5.0), I find some UPD processes executing report RSM13000 for a very long time, and some DIA processes executing report SAPMSSY1 for a very long time as well. They occupy a lot of the system's work processes. How does this happen? What is the reason, and how can I solve it? Thanks for your help!
    Many thanks and Best regards,
    Long

    Please check the configuration of your system. From your reply it seems that the work processes are not sufficient in the system.
    High wait time and response time indicate a problem in the system: either you have a lot of load on the system or your CPUs are overloaded.
    Please check the system status in ST06 and look at the CPU and memory utilisation.
    SM66 shows the long-running online and background jobs.
    ST03N shows the workload and wait times for all transactions.
    ST02 may help in checking the memory parameters.
    Hope this helps you resolve the issue.
    Thanks
    Amit

  • Why does it take a very long time for iTunes to back up my iPhone?

    I am using an iPhone 5S with iOS 8.
    When I sync it with iTunes, step 2 of 7 ("Backing up") takes a very long time; it took more than an hour.
    Has something gone wrong?

    Try unchecking "Automatically fill free space with songs".

  • Why is it taking a very long time to migrate files from a MacBook to an iMac?

    It is taking a very long time (24 hours) to migrate files from a MacBook to a new iMac. Is this normal for around 40GB of data?

    Here is some info on various methods:
    http://support.apple.com/kb/HT6025
    And if you're using Mavericks:
    http://support.apple.com/kb/HT5872
    And more good info:
    http://pondini.org/OSX/Setup.html
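    For rough context (back-of-the-envelope only): 40 GB in 24 hours works out to roughly 40,000 MB / 86,400 s, or about 0.5 MB/s. Even a Wi-Fi migration normally sustains several MB/s, and FireWire or gigabit Ethernet considerably more, so 40 GB should take on the order of an hour or two. A transfer this slow usually points to the connection method rather than the amount of data.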

  • Firefox takes a very long time to open and often crashes on some websites

    I have used Firefox 3.6 with Windows XP, Vista, and now Windows 7. I face the same problem on all of them after about a week of use: Firefox takes a very long time to open, approximately 1 minute, at the first click. After refreshing the computer, when I click again it becomes ready to use, but meanwhile, while you are working, Firefox opens in another window, perhaps for the first click. What is this, please? I have reinstalled Firefox but that has not fixed the problem.
    == Troubleshooting information ==
    How can Firefox be ready for use at the first click?

    Open Activity Monitor and kill this process - rapportd.
    Reinstalling OS X Without Erasing the Drive
    Boot to the Recovery HD: Restart the computer and after the chime press and hold down the COMMAND and R keys until the menu screen appears. Alternatively, restart the computer and after the chime press and hold down the OPTION key until the boot manager screen appears. Select the Recovery HD and click on the downward pointing arrow button.
    Repair the Hard Drive and Permissions: Upon startup select Disk Utility from the main menu. Repair the Hard Drive and Permissions as follows.
    When the recovery menu appears select Disk Utility and press the Continue button. After Disk Utility loads select the Macintosh HD entry from the left side list. Click on the First Aid tab, then click on the Repair Disk button. If Disk Utility reports any errors that have been fixed, then re-run Repair Disk until no errors are reported. If no errors are reported click on the Repair Permissions button. Wait until the operation completes, then quit Disk Utility and return to the main menu.
    Reinstall OS X: Select Reinstall OS X and click on the Continue button.
    Note: You will need an active Internet connection. I suggest using Ethernet if possible because it is three times faster than wireless.
    Alternatively, see:
    Reinstall OS X Without Erasing the Drive
    Choose the version you have installed now:
    OS X Yosemite- Reinstall OS X
    OS X Mavericks- Reinstall OS X
    OS X Mountain Lion- Reinstall OS X
    OS X Lion- Reinstall Mac OS X
    Note: You will need an active Internet connection. I suggest using Ethernet if possible because it is three times faster than wireless.

  • Why does the iPhone take a very long time to restore?

    I just wanted to restore my iPhone 4, and it is taking a very, very long time and still has not finished restoring. Is this normal?

    Yes

  • Non-self-contained movie huge & takes a very very long time to export

    This is for FCE 4.
    I have a 1.5 hour movie, and when I export it to QuickTime (not QuickTime Conversion) it takes WAY too long.
    I have "make self-contained movie" unchecked, so I thought creating a reference movie would be very quick. Why is it taking so long, and why is the resulting file huge?
    My source footage is DV, and only minor editing and effects were used. However, the entire movie was cropped & resized. Is that why?

    Ah I think I figured it out.
    I did another test project with the same source DV file, but didn't do ANY editing to it.
    It took 2 minutes to export a 70mb reference mov file from a 7 minute DV sequence.
    Then I cropped & resized the sequence by changing parameters in the "motion" tab.
    Then it took 20 minutes to export a much larger reference mov file from the same 7 minute DV sequence.
    Then I clicked "render all" after selecting every render option in the drop-down menu.
    Then it took less than 1 minute to export a 70mb reference mov file from the same 7 minute DV sequence.
    So I guess I was having problems because I didn't render "everything": I hadn't selected "FULL" in the render drop-down menu. I thought only RED segments in the timeline needed to be rendered prior to export, but apparently "FULL" must be selected as well, so that the entire timeline is nice and purple (or some shade of blue).
    Also, a cropped and resized video is very large in size (GB) if it is exported as a reference file only without prior rendering. * Can someone else confirm that this is the normal behavior in FCE? *
    Oh well, lesson learned.. This movie is going to take a very long time to export, and result in a very large file. But I can't use it without cropping & resizing it!

  • [SOLVED] systemd-tmpfiles-clean takes a very long time to run

    I've been having an issue for a while with systemd-tmpfiles-clean.service taking a very long time to run. I've tried to just ignore it, but it's really bothering me now.
    Measuring by running:
    # time systemd-tmpfiles --clean
    systemd-tmpfiles --clean 11.63s user 110.37s system 10% cpu 19:00.67 total
    I don't seem to have anything funky in any tmpfiles.d:
    # ls /usr/lib/tmpfiles.d/* /run/tmpfiles.d/* /etc/tmpfiles.d/* | pacman -Qo -
    ls: cannot access /etc/tmpfiles.d/*: No such file or directory
    error: No package owns /run/tmpfiles.d/kmod.conf
    /usr/lib/tmpfiles.d/gvfsd-fuse-tmpfiles.conf is owned by gvfs 1.20.1-2
    /usr/lib/tmpfiles.d/lastlog.conf is owned by shadow 4.1.5.1-9
    /usr/lib/tmpfiles.d/legacy.conf is owned by systemd 212-3
    /usr/lib/tmpfiles.d/libvirt.conf is owned by libvirt 1.2.4-1
    /usr/lib/tmpfiles.d/lighttpd.conf is owned by lighttpd 1.4.35-1
    /usr/lib/tmpfiles.d/lirc.conf is owned by lirc-utils 1:0.9.0-71
    /usr/lib/tmpfiles.d/mkinitcpio.conf is owned by mkinitcpio 17-1
    /usr/lib/tmpfiles.d/nscd.conf is owned by glibc 2.19-4
    /usr/lib/tmpfiles.d/postgresql.conf is owned by postgresql 9.3.4-1
    /usr/lib/tmpfiles.d/samba.conf is owned by samba 4.1.7-1
    /usr/lib/tmpfiles.d/slapd.conf is owned by openldap 2.4.39-1
    /usr/lib/tmpfiles.d/sudo.conf is owned by sudo 1.8.10.p2-1
    /usr/lib/tmpfiles.d/svnserve.conf is owned by subversion 1.8.8-1
    /usr/lib/tmpfiles.d/systemd.conf is owned by systemd 212-3
    /usr/lib/tmpfiles.d/systemd-nologin.conf is owned by systemd 212-3
    /usr/lib/tmpfiles.d/tmp.conf is owned by systemd 212-3
    /usr/lib/tmpfiles.d/uuidd.conf is owned by util-linux 2.24.1-6
    /usr/lib/tmpfiles.d/x11.conf is owned by systemd 212-3
    How do I debug why it is taking so long? I've looked in man 8 systemd-tmpfiles and on Google, hoping to find some sort of --debug option, but there seems to be none.
    Is it somehow possible to get a list of the directories that it looks at when it runs?
    Does anyone have any suggestions on how else to fix this?
    Anyone else have this issue?
    Thanks,
    Gary
    Last edited by garyvdm (2014-05-08 18:57:43)

    Thank you very much falconindy. SYSTEMD_LOG_LEVEL=debug helped me find my issue.
    The cause of the problem was thousands of directories in /var/tmp/ created by a test suite with a broken clean up method. systemd-tmpfiles-clean was recursing through these, but not deleting them.
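    For reference, the debug run that makes this kind of problem visible looks roughly like the following (SYSTEMD_LOG_LEVEL is a standard systemd environment variable; the find command is just a quick way to confirm a directory pile-up like the one described above):

        # Re-run the clean-up with debug logging to see which paths it walks:
        SYSTEMD_LOG_LEVEL=debug systemd-tmpfiles --clean 2>&1 | less

        # Quick check for a directory pile-up in /var/tmp:
        find /var/tmp -mindepth 1 -maxdepth 1 -type d | wc -l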

  • TDMS Shell - DB Export from source/sender system taking a VERY long time

    We're trying to build a TDMS Receiver system using the TDMS Shell technique. We've run into a situation wherein the initial  DB Export from source/sender system is taking a VERY long time.
    We are on ECC 6.0, running on AIX 6.1 and DB UDB v9.7. We're executing the DB export from sapinst, per instructions. Our DB export parallelizes, then the parallel processes one by one whittle away to just one remaining, and there we find out that the export is at that point single-threaded, and exporting table BSIS.
    BSIS is an FI transactional data table. We're wondering why is the DB export trying to get BSIS and its contents out??? Isn't the DB export in TDMS Shell technique only supposed to get SAP essential only config and master data, and NOT transactional data?
    Our BSIS table is nearly 700 GB in size by itself. That export has been running for nearly a week now, with no end in sight.
    What are we doing wrong? We suspect we may have missed something, but we really don't think we have. We also suspect that the EXCLUSION table in the TDMS Shell technique may be the KEY to this whole thing: it is supposed to automatically exclude very large tables, but in this case it most certainly missed excluding BSIS for some reason.
    Anyway, we're probably going to fire up an OSS Message with SAP Support to help us address this perplexing issue. Just thought we'd throw it out there to the board to see if anyone else somewhere has run into similar circumstances and challenges.  In the meantime, any feedback and/or advice would be dearly appreciated. Cheers,

    Hello
    Don't be bothered about the other TPL file, DDLDB6_LRG.TPL; we are only concerned with DDLDB6.TPL.
    Please answer the following questions to help me analyze the situation:
    1) What is the current size of the export dump?
    2) Since when has the export been running?
    3) What is the size of the source DB? Do you have a huge amount of custom developments?
    4) Did you try to use table splitting?
    5) Do you suspect that there may be other transaction tables (like BSIS) which have been exported completely?
    6) Did you update the SAP kernel of your source system to the latest version before starting the Shell package?
    7) Were the DB statistics updated during the Shell run, or were they already updated before starting Shell?
    8) Is your system a distributed system, i.e. are the central instance and database instance on different servers?
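    One way to see up front which tables will dominate an export like this is to list the largest tables directly in the database. A sketch for the DB2/UDB 9.7 release mentioned above; it assumes the SYSIBMADM.ADMINTABINFO administrative view is available (its size columns are reported in KB), and the query can itself take a while on a large database:

        SELECT TABSCHEMA, TABNAME,
               (DATA_OBJECT_P_SIZE + INDEX_OBJECT_P_SIZE + LONG_OBJECT_P_SIZE
                + LOB_OBJECT_P_SIZE + XML_OBJECT_P_SIZE) / 1024 AS SIZE_MB
        FROM   SYSIBMADM.ADMINTABINFO
        ORDER  BY SIZE_MB DESC
        FETCH FIRST 20 ROWS ONLY;

    Any transactional giants near the top of that list (such as the 700 GB BSIS mentioned above) are candidates to verify against the Shell exclusion list before starting the export.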

  • Notebook Taking a very long time to power off

    Hi All,
    My notebook (HP ENVY 17-j005tx Notebook) shuts Windows down in only a couple of seconds, however it can take up to 5 minutes to actually power off...
    At first I thought some service or program was taking a long time to shut down, so I installed the Windows 8.1 SDK Performance Toolkit to try to identify a culprit, ran the command "xbootmgr -trace shutdown -traceFlags BASE+DIAG+LATENCY -noPrepReboot", and successfully generated a trace file.
    I noticed the notebook had rebooted in just a few seconds.
    Anyhow, I proceeded to analyse the trace file using the wpa.exe program and could not find any program or service taking much time to shut down!
    So I understand now that the Windows OS is shutting down nicely, though the notebook is taking a very long time to power off.
    Anyone have any suggestions? I do have NVIDIA driver 337.88 installed, and I did find another thread here where they suspected this driver version has not been tested by HP; after they applied it, their notebook shut down slower than usual. Is this something to look at?
    I'm currently downloading the sp63414.exe NVIDIA driver for my notebook. Should I uninstall my current driver and apply HP's?
    Jim
    HP ENVY 17-j005tx Notebook, HP ENVY Recline 27-k001a, HP ProLiant MicroServer Gen8 G2020T, HP MediaSmart EX495 Server, HP MediaVault 2020, HP ENVY 120 AiO Printer

    Thanks visruth.
    What I tried was to run "msconfig" and boot into Safe Mode; after rebooting and then shutting down, shutdown was back to normal again.
    I then reran "msconfig", disabled all non-Microsoft services, shut down, and again everything powered off as normal (quickly after shutdown).
    So I rebooted, re-enabled all services again using "msconfig", and then disabled the following services, choosing these specifically because I'm not so sure why they should be running and, for some, don't know what they are used for...
    I rebooted and then shut down, and shutdown and power off were normal...
    It seems one of these services is causing the power off to take several minutes...
    I suspect the Bluetooth services, or GSService, or IconMan_R, or possibly the Netgear Genie...
    Funnily enough, I did uninstall Netgear Genie a few weeks ago, and I see that for some unknown reason the service is still there... Hmm.
    Jim
    HP ENVY 17-j005tx Notebook, HP ENVY Recline 27-k001a, HP ProLiant MicroServer Gen8 G2020T, HP MediaSmart EX495 Server, HP MediaVault 2020, HP ENVY 120 AiO Printer
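    For anyone trying to narrow a list like this down, the running services and the binaries behind them can be listed from a command prompt, which makes the non-Microsoft entries easy to spot (a hedged sketch using standard Windows tooling; it only lists services and changes nothing):

        wmic service where "State='Running'" get Name,DisplayName,PathName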

  • 0HR_PT_3 init is taking a very long time.

    Hi Friends,
    I am trying to extract HR data (quotas) to BW. I ran the initialization and it has been running for a very long time, but no records are getting transferred. Is there any configuration or setting I should do before I run the init for this DataSource, or is there any OSS note I should refer to before this?
    Please let me know; your help is appreciated.
    Thanks in advance.
    BN

    Hi BN,
    We have set the time frame on the R/3 side to 01.01.2010 to 01.01.2012, and then I ran my init in the background, expecting it to take some time. The init load completed successfully with 60k records to PSA for those two years, but it took 16 long hours. The business wants the data for the last four years; does that mean it will take even longer to load the data?
    Loading time depends on how much data you have. If you have data between 2008 and 2010 (in your case) it will take more time; if you have less data it will complete in less time.
    All this is in the Dev system, hence we have smaller volumes of data. I am wondering how much more time it will take in Production.
    For sure, you will have much more data in your production system. To plan for this you can go for full repair loads:
    i) First do an init without data transfer; this sets up the daily delta.
    ii) Load your historical data by calendar year (try to load 6 months of data in one load).
    Another question: if I want to extract records which fall outside the time frame configuration, how can I extract those records to BW? One of the OSS notes says that ONLY records which fall in this time frame will be extracted to BW.
    If you want the entire data set, there is no need to give selections while loading.
    Let's consider the example below, which may suit your case.
    I think you have already loaded data from 2010 to 2012 and now want to load the data with timestamps earlier than 2010. Then follow the steps below:
    Fill the setup tables for all the data (without any selections).
    In RSA3, check whether you have data for the specified period.
    If you have data in RSA3, you can load it using a full repair IP (InfoPackage).
    Say you want to load data which falls before 12.31.2007; then give the selections in the IP as From: 01.01.1990 and To: 12.31.2007. (In the IP selections you will find "use conversion routine"; use this option and it will pick the dates in the right format.)
    Regards,
    Venkatesh
