R&R backup takes too long

Hello,
I have a Lenovo T410 with R&R 4.3 and a backup scheduled every day at 3:30 am. The laptop is in sleep mode at that time; it wakes up, and after the backup it goes back to sleep. For a long time everything worked perfectly.
A few days ago the whole backup process suddenly started taking 4-5 hours instead of the expected few minutes. The backup size is still around 3 GB each day (incremental), saved to an eSATA HDD. I noticed because my laptop is awake when I arrive at work in the morning, when it should be sleeping.
Does anyone know why the backup process takes so long?
And one more question: in the preferences I have set a maximum of 7 incremental backups. I expected that each week I would get a new full backup, but the setting only means that older backups are deleted. Is there a way to make R&R take a full backup every X days?
Thanks a lot.

The first R&R backup is a base (level 0) full C: drive backup, compressed. Let's say the compressed size is 15 GB.
Then come incrementals 1, 2, 3, 4, 5, 6, 7 at 3 GB each (per max = 7).
Then the merge of incrementals occurs: level 0 (15 GB) is merged into level 1 (3 GB) to make the new level 0, and the old levels 2-7 are renamed to levels 1-6. Yes, this takes many hours; it requires reading all 15 GB plus 3 GB into a new work area.
Look in the file C:\SWShare\RR.log for a line like the following:
RescueRecovery: Running: C:\Program Files (x86)\Lenovo\Rescue and Recovery\br_funcs.exe merge destination="D:\RRbackups\SZ\CC5F1FA34DEF30F6-B487057ED9063ADC" drive=C: pw=0 uuid=0 level=2
Engine.log:
Verbatim: MergeIncrementals(DriveLetter='C:', Destination='D:\RRbackups\SZ\CC5F1FA34DEF30F6-B487057ED9063ADC', Local='0', High level='2')
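That merge-and-rename rotation can be sketched in a few lines of Python (the size model and function name are my assumptions for illustration, not R&R internals):

```python
# Sketch of the R&R chain rotation: when the number of incrementals
# exceeds the configured maximum, level 0 absorbs the oldest incremental
# and the remaining levels shift down by one.
MAX_INCREMENTALS = 7

def rotate(chain):
    """chain[0] is the full base image size in GB; chain[1:] are incrementals.
    Returns (new_chain, gb_read). The merge must read both images in full,
    which is the slow part described above."""
    if len(chain) - 1 <= MAX_INCREMENTALS:
        return chain, 0.0          # still room, no merge needed
    gb_read = chain[0] + chain[1]  # e.g. 15 GB + 3 GB read end to end
    # The real merged size depends on how many base blocks the incremental
    # overwrote; assume it stays roughly the size of the base.
    return [chain[0]] + chain[2:], gb_read

chain = [15.0] + [3.0] * 8         # base + 8 incrementals: one too many
chain, gb_read = rotate(chain)
assert gb_read == 18.0             # both images were read in full
assert len(chain) == 1 + MAX_INCREMENTALS
```

The point of the sketch: once the chain is full, every nightly run pays for reading the full base plus the oldest incremental, which matches the multi-hour merge described above.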
===
To avoid the merge, set max incrementals = 32. I assume you have plenty of free space on the HDD?
Launch of R&R is done from the Windows Task Scheduler: Control Panel > Administrative Tools > Task Scheduler Library. Find the subfolder TVT and the entry Launch RNR. You can modify the schedule to choose a weekly backup on, say, Mon/Wed/Sat.
==
A full backup is only created the first time. I use R&R to take a weekly backup and use other software (e.g. www.nero.com) for my daily backup of critical files. You might want to run the R&R backup weekly and perhaps use Windows file backup daily.
You will have a headache when max = 32 is reached: what do you do then? Since you back up daily, you will hit 32 within the first month. Read my other post, "RNR Max=32, what happens?"

Similar Messages

  • Cisco Works LMS 3.2 backup takes too long to finish

    Hello guys,
    We have a problem with a scheduled LMS 3.2 backup. It takes more than 12 hours to finish. You can see in the log that the backup is scheduled for 18:00 and waits until 22:07 to start doing anything. We can't figure out why there is such a delay. It started to happen after the system reported a lack of free space on disk. We freed up the space, but then faced this problem with the long duration. Before this problem occurred, the backup used to take 45 minutes to 1 hour.
    Backup to 'C:/BACKUP' started at: [Mon Apr 11 18:01:26 2011]
    Apps file : D:\\PROGRA~1\\CSCOpx\\backup\\manifest\\campus\\properties\\datafiles.txt
    [Mon Apr 11 22:07:07 2011] Archiving the contents of the following directories into C:\BACKUP\2\campus\filebackup.tar
    [Mon Apr 11 22:07:07 2011] D:\PROGRA~1\CSCOpx\campus\etc\cwsi\DeviceDiscovery.properties
    D:\PROGRA~1\CSCOpx\campus\etc\cwsi\ANIServer.properties
    D:\PROGRA~1\CSCOpx\campus\etc\cwsi\ut.properties
    D:\PROGRA~1\CSCOpx\campus\etc\cwsi\discoverysnmp.conf
    D:\PROGRA~1\CSCOpx\campus\etc\cwsi\datacollectionsnmp.conf
    Do you have any ideas what went wrong and how to fix it?
    Thanks,
    Marija

    This is a catch-22: you need to apply a patch to fix a backup problem, yet you want to back up before applying a patch. This is where you will have to assume some risk and apply the patch without a backup.
    Go to Common Services -> Device and Credentials -> Device Mgmt and export the entire DCR (All devices) to a CSV file. Make sure you browse to somewhere outside the application space; I usually place the file at c:\. Also make sure you check the "include credentials" box so you also have all your device passwords. This is your D/R plan: if things fail, you will have your device list to import in an emergency.
    Or you can try the suggestion of clearing DFM to mitigate potential loss so you can get a good backup of the rest of LMS. Just be sure to view your notifications and cut/paste them into Notepad so they can be rebuilt. You will lose history and local customizations to Polling and Thresholds and DDV.
    # net stop crmdmgt
    kill any sm_* / brstart processes still running
    # cd ...\CSCOpx\bin
    # perl dbRestoreOrig.pl dsn=dfmInv  dmprefix=INV
    # perl dbRestoreOrig.pl dsn=dfmFh   dmprefix=FH
    # perl dbRestoreOrig.pl dsn=dfmEpm  dmprefix=EPM
    Use Explorer to go to ...\CSCOpx\objects\smarts\local\repos\icf and delete all .rps files, then perform the backup.

  • When I back up my library, the pictures do not go to my external HD as albums/pastes, they are copied all together, so I have to arrange one by one to get all organized. It takes too long. Is there any option to organize it automatically? Thanks


    If you back up correctly to a correctly formatted drive (Mac OS Extended (Journaled)), then everything can be restored exactly as it was prior to the backup.
    How exactly are you "backing up"?
    LN

  • Ipod takes too long to connect

    I just bought an 80 GB 5.5-gen iPod last week and everything was great, but now when I plug it into the PC it takes too long to detect it as a disk drive, and once I open iTunes it takes too long to show the info I have stored; sometimes it only shows the music folder (no movies, TV shows, podcasts, etc.). Is something wrong with my iPod?
    Maybe it is because of the songs and videos I already have... but I have more than 60 GB free.
    It has also crashed twice while using the search feature; it just resets by itself.
      Windows XP Pro, 1.5 GHz, 512 MB RAM

    Alright, so I just found the answer to part of my question. Thinking back to when I first got my iPod Touch, I remembered that backups weren't this long. Then I thought about when things started getting worse, and I figured out that it started when I put too many apps on my iPod. So I removed all of them, and my backup was the fastest I've seen in a while. Then I put back only the ones I really wanted, and everything seems fine... for now.

  • Sometimes my computer takes too long to connect to new website. I am running a pretty powerful work program at same time, what is the best solution? Upgrading speed from cable network, is it a hard drive issue? do I need to "clean out" the computer?

    Many times my computer takes too long to connect to a new website. I have wireless internet (Time Capsule) and I am running a pretty powerful real-time financial work program at the same time. What is the best solution? Upgrading speed from the cable network? Is it a hard drive issue? Do I only need to "clean out" the computer? Or all of the above... not too computer savvy. It is a MacBook Pro, OS X 10.6.8 (late 2010).

    Almost certainly none of the above!  Try each of the following in this order:
    Select 'Reset Safari' from the Safari menu.
    Close down Safari;  move <home>/Library/Caches/com.apple.Safari/Cache.db to the trash; restart Safari.
    Change the DNS servers in your network settings to use the OpenDNS servers: 208.67.222.222 and 208.67.220.220
    Turn off DNS pre-fetching by entering the following command in Terminal and restarting Safari:
              defaults write com.apple.safari WebKitDNSPrefetchingEnabled -boolean false

  • Accessing BKPF table takes too long

    Hi,
    Is there another way to write a faster, more optimized SQL query to access the table BKPF? Or are there smaller tables that contain the same data?
    I'm using this:
       select bukrs gjahr belnr budat blart
       into corresponding fields of table i_bkpf
       from bkpf
       where bukrs eq pa_bukrs
       and gjahr eq pa_gjahr
       and blart in so_DocTypes
       and monat in so_monat.
    The report is taking too long and is eating up a lot of resources.
    Any helpful advice is highly appreciated. Thanks!

    Hi max,
    I also tried using BUDAT in the where clause of my sql statement, but even that takes too long.
        select bukrs gjahr belnr budat blart monat
         appending corresponding fields of table i_bkpf
         from bkpf
         where bukrs eq pa_bukrs
         and gjahr eq pa_gjahr
         and blart in so_DocTypes
         and budat in so_budat.
    I also tried accessing the table per day, but that didn't work either...
       while so_budat-low le so_budat-high.
         select bukrs gjahr belnr budat blart monat
         appending corresponding fields of table i_bkpf
         from bkpf
         where bukrs eq pa_bukrs
         and gjahr eq pa_gjahr
         and blart in so_DocTypes
         and budat eq so_budat-low.
         so_budat-low = so_budat-low + 1.
       endwhile.
    I think our BKPF table contains a very large set of data. Is there any other table besides BKPF from which we could get all accounting document numbers in a given period?
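    As an aside, the day-by-day approach tried above can be sketched outside ABAP. The following Python/sqlite3 sketch (the table and column names are invented for illustration) splits one date-range query into per-day queries and checks that both forms return the same rows:

```python
import sqlite3
from datetime import date, timedelta

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE docs (belnr INTEGER, budat TEXT)")  # invented schema
con.executemany(
    "INSERT INTO docs VALUES (?, ?)",
    [(n, (date(2013, 1, 1) + timedelta(days=n % 5)).isoformat()) for n in range(100)],
)
# An index on the date column is what actually makes per-day access cheap;
# without it, chunking just repeats the same full scan many times.
con.execute("CREATE INDEX idx_budat ON docs (budat)")

def fetch_by_day(lo, hi):
    """Fetch rows one day at a time, like the WHILE loop in the post."""
    rows, day = [], lo
    while day <= hi:
        rows += con.execute(
            "SELECT belnr, budat FROM docs WHERE budat = ?", (day.isoformat(),)
        ).fetchall()
        day += timedelta(days=1)
    return rows

chunked = fetch_by_day(date(2013, 1, 1), date(2013, 1, 5))
whole = con.execute(
    "SELECT belnr, budat FROM docs WHERE budat BETWEEN '2013-01-01' AND '2013-01-05'"
).fetchall()
assert sorted(chunked) == sorted(whole)  # same result set either way
```

    In the BKPF case, whether each per-day SELECT is cheap similarly depends on whether an index covers the date column; that is worth checking with your Basis/DBA team before chunking further.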

  • Report Takes Too Long Time

    Hi!
    I am in trouble. The following is the query:
    SELECT inv_no, inv_name, inv_desc, i.cat_id, cat_name, i.sub_cat_id,
    sub_cat_name, asset_cost, del_date, i.bl_id, gen_desc bl_desc, p.prvcode, prvdesc, cur_loc,
    pldesc, i.pmempno, pmname, i.empid, empname
    FROM inv_reg i,
    cat_reg c,
    sub_cat_reg s,
    gen_desc_reg g,
    ploc p,
    province r,
    pmaster m,
    iemp_reg e
    WHERE i.sub_cat_id = s.sub_cat_id
    AND i.cat_id = s.cat_id
    AND s.cat_id = c.cat_id
    AND i.bl_id = g.gen_id
    AND i.cur_loc = p.plcode
    AND p.prvcode = r.prvcode
    AND i.pmempno = m.pmempno(+)
    AND i.empid = e.empid(+)
    &wc
    order by prvdesc, pldesc, cat_name, sub_cat_name, inv_no
    and this query returns 32,000 records.
    When I run this query in Reports 10g, it takes 10 to 20 minutes to generate the report.
    How can I optimize it?

    Hi Waqas Attari
    Pls study & try this ....
    When your query takes too long ...
    hope it helps....
    Regards,
    Abdetu...

  • OPM process execution: PROCESS_PARAMETERS takes too long to complete

    PROCESS_PARAMETERS are inserted every 15 minutes using the gme_api_pub packages. Sometimes it takes too long to complete the batch, i.e. the completion of the request: it takes about 5-6 hours, while at other times it takes only 15-20 minutes. This happens at regular intervals... if anybody can guide me I will be thankful to him/her.
    thanks in advance.
    regds,
    Shailesh

    Generally the slowest part of the process is the extraction itself...
    Check your source system and see how long the processes are taking, and whether there are delays, locks, or dumps in the database... If your source is R/3 or ECC, transactions like SM37, SM21, and ST22 can help monitor this activity...
    Consider running fewer processes in parallel if you have too many and see delays in jobs... Also consider indexing some of the tables in the source system to expedite the extraction; make sure there are no heavy processes or interfaces running in the source system at the same time you're trying to load... Check with your Basis guys for activity peaks and plan accordingly...
    In BW, also check SM21 for database errors or delays...
    Just some ideas...

  • Web application deployment takes too long?

    Hi All,
    We have a WLS 10.3.5 clustering environment with one admin server and two managed servers. When we try to deploy a sizable web application, it takes about 1 hour to finish, which seems too long. Here is the output from the system log of one of the two managed servers. Could anyone tell me whether this is normal? If not, how can I improve it?
    Thanks in advance,
    John
    +####<Feb 29, 2012 12:11:03 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535463373> <BEA-149059> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] is transitioning from STATE_NEW to STATE_PREPARED on server Pinellas1tMS3.>+
    +####<Feb 29, 2012 12:11:05 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <9baa7a67b5727417:26f76f6c:135ca05cff2:-8000-00000000000000b0> <1330535465664> <BEA-149060> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] successfully transitioned from STATE_NEW to STATE_PREPARED on server Pinellas1tMS3.>+
    +####<Feb 29, 2012 12:11:06 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535466493> <BEA-149059> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] is transitioning from STATE_PREPARED to STATE_ADMIN on server Pinellas1tMS3.>+
    +####<Feb 29, 2012 12:11:06 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535466493> <BEA-149060> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] successfully transitioned from STATE_PREPARED to STATE_ADMIN on server Pinellas1tMS3.>+
    +####<Feb 29, 2012 12:11:06 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535466809> <BEA-149059> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] is transitioning from STATE_ADMIN to STATE_ACTIVE on server Pinellas1tMS3.>+
    +####<Feb 29, 2012 12:11:06 PM EST> <Info> <Deployer> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330535466809> <BEA-149060> <Module copyrequest of application copyrequest [Version=COPYREQUEST0002bb] successfully transitioned from STATE_ADMIN to STATE_ACTIVE on server Pinellas1tMS3.>+
    +####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442300> <BEA-320143> <Scheduled 1 data retirement tasks as per configuration.>+
    +####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442301> <BEA-320144> <Size based data retirement operation started on archive HarvestedDataArchive>+
    +####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442301> <BEA-320145> <Size based data retirement operation completed on archive HarvestedDataArchive. Retired 0 records in 0 ms.>+
    +####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442301> <BEA-320144> <Size based data retirement operation started on archive EventsDataArchive>+
    +####<Feb 29, 2012 1:00:42 PM EST> <Info> <Diagnostics> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330538442301> <BEA-320145> <Size based data retirement operation completed on archive EventsDataArchive. Retired 0 records in 0 ms.>+
    +####<Feb 29, 2012 1:10:23 PM EST> <Info> <Cluster> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <weblogic.cluster.MessageReceiver> <<WLS Kernel>> <> <> <1330539023098> <BEA-003107> <Lost 2 unicast message(s).>+
    +####<Feb 29, 2012 1:10:36 PM EST> <Info> <Cluster> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[ACTIVE] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330539036105> <BEA-000111> <Adding Pinellas1tMS2 with ID -9071779833610528123S:entwl2t-vm:[7005,7005,-1,-1,-1,-1,-1]:entwl2t-vm:7005,entwl3t-vm:7007:Pinellas1tDomain:Pinellas1tMS2 to cluster: Pinellas1tCluster1 view.>+
    +####<Feb 29, 2012 1:11:24 PM EST> <Info> <Cluster> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[STANDBY] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330539084375> <BEA-000128> <Updating -9071779833610528123S:entwl2t-vm:[7005,7005,-1,-1,-1,-1,-1]:entwl2t-vm:7005,entwl3t-vm:7007:Pinellas1tDomain:Pinellas1tMS2 in the cluster.>+
    +####<Feb 29, 2012 1:11:24 PM EST> <Info> <Cluster> <entwl3t-vm.co.pinellas.fl.us> <Pinellas1tMS3> <[STANDBY] ExecuteThread: '4' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1330539084507> <BEA-000128> <Updating -9071779833610528123S:entwl2t-vm:[7005,7005,-1,-1,-1,-1,-1]:entwl2t-vm:7005,entwl3t-vm:7007:Pinellas1tDomain:Pinellas1tMS2 in the cluster.>+

    Hi John,
    There may be some circumstances, for example when there are many files in the WEB-INF folder and the JSPs don't use TLDs.
    I don't think a 1-hour deployment is normal; it should be much faster.
    Since you are using 10.3.5, I suggest you install the corresponding patch:
    1. Download patch 10118941 (p10118941_1035_Generic.zip).
    2. Uncompress the file p10118941_1035_Generic.zip
    3. Copy the required files (patch-catalog_XXXXX.xml, CIRF.jar ) to the Patch Download Directory (typically, this folder is <WEBLOGIC_HOME>/utils/bsu/cache_dir).
    4. Rename the file patch-catalog_XXXXX.xml into patch-catalog.xml .
    5. Start Smart Update from <WEBLOGIC_HOME>/utils/bsu/bsu.sh .
    6. Select "Work Offline" mode.
    7. Go to File->Preferences, and select "Patch Download Directory".
    8. Click "Manage Patches" on the right panel.
    9. You will see the patch in the panel below (Downloaded Patches)
    10. Click "Apply button" of the downloaded patch to apply it to the target installation and follow the instructions on the screen.
    11. Add "-Dweblogic.jsp.ignoreTLDsProcessingInWebApp=true" to the Java options to ignore additional findTLDs cost.
    12. Restart servers.
    Hope this helps.
    Thanks,
    Cris

  • Finishing Backup takes too much time.

    Finishing the backup takes too much time on my system, sometimes 30 minutes or even 1 hour! And sometimes right after a backup finishes (before the icon in the menu bar stops), it starts another backup (usually 10 MB or something small!), or the estimate is wrong.
    Sometimes I see my system taking backups for hours and hours (with just small stops!),
    and the data transfer rate is reduced, even though I set it to maximum speed in the Time Capsule.

    And here is the log with timestamps, from Console:
    4/6/09 2:31:24 PM /System/Library/CoreServices/backupd[2141] Starting standard backup
    4/6/09 2:31:34 PM /System/Library/CoreServices/backupd[2141] Mounted network destination using URL: afp://[email protected]/Sina's%20Time%20Capsule
    4/6/09 2:31:34 PM /System/Library/CoreServices/backupd[2141] Backup destination mounted at path: /Volumes/Sina's Time Capsule
    4/6/09 2:31:39 PM /System/Library/CoreServices/backupd[2141] Disk image /Volumes/Sina's Time Capsule/Sina’s MacBook_0017f2347181.sparsebundle mounted at: /Volumes/Backup of Sina’s MacBook
    4/6/09 2:31:39 PM /System/Library/CoreServices/backupd[2141] Backing up to: /Volumes/Backup of Sina’s MacBook/Backups.backupdb
    4/6/09 2:39:45 PM /System/Library/CoreServices/backupd[2141] No pre-backup thinning needed: 862.6 MB requested (including padding), 205.56 GB available
    4/6/09 2:46:52 PM /System/Library/CoreServices/backupd[2141] Bulk setting Spotlight attributes failed.
    4/6/09 2:48:07 PM /System/Library/CoreServices/backupd[2141] Unable to rebuild path cache for source item. Partial source path:
    4/6/09 2:48:07 PM /System/Library/CoreServices/backupd[2141] Unable to rebuild path cache for source item. Partial source path:
    4/6/09 2:48:07 PM /System/Library/CoreServices/backupd[2141] Unable to rebuild path cache for source item. Partial source path:
    4/6/09 2:50:06 PM /System/Library/CoreServices/backupd[2141] Unable to rebuild path cache for source item. Partial source path:
    4/6/09 2:50:06 PM /System/Library/CoreServices/backupd[2141] Unable to rebuild path cache for source item. Partial source path:
    4/6/09 2:57:21 PM /System/Library/CoreServices/backupd[2141] Bulk setting Spotlight attributes failed.
    4/6/09 3:03:57 PM /System/Library/CoreServices/backupd[2141] Unable to rebuild path cache for source item. Partial source path:
    4/6/09 3:03:58 PM /System/Library/CoreServices/backupd[2141] Unable to rebuild path cache for source item. Partial source path:
    4/6/09 3:03:58 PM /System/Library/CoreServices/backupd[2141] Unable to rebuild path cache for source item. Partial source path:
    4/6/09 3:12:55 PM /System/Library/CoreServices/backupd[2141] Copied 29426 files (162.6 MB) from volume Macintosh Disk.
    4/6/09 3:17:36 PM /System/Library/CoreServices/backupd[2141] No pre-backup thinning needed: 678.9 MB requested (including padding), 205.56 GB available
    4/6/09 3:37:18 PM /System/Library/CoreServices/backupd[2141] Bulk setting Spotlight attributes failed.
    4/6/09 3:37:57 PM /System/Library/CoreServices/backupd[2141] Copied 2362 files (100.3 MB) from volume Macintosh Disk.
    4/6/09 3:42:00 PM /System/Library/CoreServices/backupd[2141] Starting post-backup thinning

  • RPURMP00 program takes too long

    Hi Guys,
    Need some help on this one, guys. Not getting anywhere with this issue.
    I am running RPURMP00 (the program to create a third-party remittance posting run), and while running it in test mode for 1 employee it takes too long.
    I ran this in the background during off hours, but it takes 19,000+ seconds to run and then cancels.
    The long text message is "No entry in table T51R6_FUNDINFO (Remittance detail table for all entities) for key 0002485844" and "Job cancelled after system exception ERROR_MESSAGE".
    I checked the program, found a nested loop within it (include RPURMP02), and decided to debug it with a breakpoint.
    It short-dumped; here are the ST22 message and source code extract.
    ----- Message -----
    "Time limit exceeded."
    "The program "RPURMP00" has exceeded the maximum permitted runtime without
    interruption and has therefore been terminated."
    ----- Source code extract (include RPURMP02) -----
      172 *&---------------------------------------------------------------------*
      173 *&      Form  get_advice_info
      174 *&---------------------------------------------------------------------*
      175 *       text
      176 *----------------------------------------------------------------------*
      177 *  -->  p1        text
      178 *  <--  p2        text
      179 *----------------------------------------------------------------------*
      180 FORM get_advice_info .
      181
      182 * get information for advice form only if vendor sub-group and
      183 * employee detail is maintained
      184   IF ( NOT t51rh-lifsg IS INITIAL ) AND
      185      ( NOT t51rh-hrper IS INITIAL ).
      186
      187 *   get remittance items employee number
      188     SELECT * FROM t51r4 WHERE remky = t51r5-remky. "#EC CI_GENBUFF "SAM0632658
      189 *     get payroll seqno determined by PERNR and RDATN
    >>>>>       SELECT * FROM t51r8 WHERE pernr = t51r4-pernr
      191                             AND rdatn = t51r5-rdatn
      192                             ORDER BY PRIMARY KEY. "#EC CI_GENBUFF
      193         EXIT.
      194       ENDSELECT.
    Has anyone ever come across this situation? Any input from anyone on this?
    Regards.
    CJ

    Hi,
    What is your SAP version?
    Have you checked whether there are any OSS notes on performance?
    Regards,
    Atish

  • AME CS6 rendering with AE and Pr takes too long

    Hi Guys,
    Need some help here. I rendered a 30-second mp4 video (1920 x 1080 HD, 25 fps) without scripting in AME, and it took 4 hours!
    Why does it take so long? I rendered a 2-minute video in the same format with scripting, and it took less than 30 minutes.
    I'm using After Effects and Premiere Pro, both CS6, with Dynamic Link in AME.
    What seems to be wrong in my current settings?
    Any help would be appreciated.
    Thanks!

    This may be a waste of time, but it won't take a minute and is something you should always do whenever things go strangely wrong: trash the preferences, assuming you haven't done it already.
    Many weird things happen as a result of corrupt preferences which can create a vast range of different symptoms, so whenever FCP X stops working properly in any way, trashing the preferences should be the first thing you do using this free app.
    http://www.digitalrebellion.com/prefman/
    Shut down FCP X, open PreferenceManager and in the window that appears:-
    1. Ensure that only  FCP X  is selected.
    2. Click Trash
    The job is done instantly and you can re-open FCP X.
    There is absolutely no danger in trashing preferences and you can do it as often as you like.
    The preferences are kept separately from FCP X and if there aren't any when FCP X opens it automatically creates new ones  .  .  .  instantly.

  • TM  backup takes too much space

    I restored the boot disk (Macintosh HD) from TM's last backup.
    After the restore, the first new backup takes too much space on my external drive!
    Is that expected?
    Macintosh HD:
    Capacity: 250.66 GB
    Available: 130.01 GB
    Message was edited by: tamias

    Thank you, I understand.
    I have 3 partitions and I just restored the system partition, the other two were normal.
    Macintosh HD:
    Capacity: 250.66 GB
    Available: 131.05 GB
    Writable: Yes
    File System: Journaled HFS+
    BSD Name: disk0s2
    Mount Point: /
    Work:
    Capacity: 374.11 GB
    Available: 160.36 GB
    Writable: Yes
    File System: Journaled HFS+
    BSD Name: disk0s3
    Mount Point: /Volumes/Work
    Extra:
    Capacity: 73.3 GB
    Available: 31.64 GB
    Writable: Yes
    File System: Journaled HFS+
    BSD Name: disk0s4
    Mount Point: /Volumes/Extra
    I have had to recover the system partition three times.
    I'm not sure whether the drive is failing after 1 year of operation. During the day the computer works fine, but every morning after waking, when the disk is cold, all apps freeze after 2-3 minutes. It seems the system does not see the HDD.
    After a forced shutdown and restart, the system can sometimes boot normally and without file-system errors, but often it does not load.
    After several attempts to boot the system it is still loaded, but disk utility shows :
    2009-02-04 13:46:25 +0800: Verifying volume “Macintosh HD”
    Starting verification tool: 2009-02-04 13:46:25 +0800
    2009-02-04 13:47:53 +0800:
    2009-02-04 13:47:53 +0800: Performing live verification.
    2009-02-04 13:47:53 +0800: Checking Journaled HFS Plus volume.
    2009-02-04 13:47:53 +0800: Checking Extents Overflow file.
    2009-02-04 13:47:53 +0800: Checking Catalog file.
    2009-02-04 13:47:53 +0800: 2009-02-04 13:47:53 +0800: Incorrect block count for file indexr3_db
    2009-02-04 13:47:53 +0800: 2009-02-04 13:47:53 +0800: 2009-02-04 13:47:53 +0800: (It should be 2080 instead of 2264)
    2009-02-04 13:47:53 +0800: 2009-02-04 13:47:53 +0800: Incorrect block count for file indexr3_repo
    2009-02-04 13:47:53 +0800: 2009-02-04 13:47:53 +0800: 2009-02-04 13:47:53 +0800: (It should be 8932 instead of 9180)
    2009-02-04 13:47:53 +0800: Checking multi-linked files.
    2009-02-04 13:47:53 +0800: Checking Catalog hierarchy.
    2009-02-04 13:47:53 +0800: Checking Extended Attributes file.
    2009-02-04 13:47:53 +0800: Checking volume bitmap.
    2009-02-04 13:47:53 +0800: Checking volume information.
    2009-02-04 13:47:53 +0800: 2009-02-04 13:47:53 +0800: The volume Macintosh HD needs to be repaired.
    Repair cannot restore the system partition. I restore Macintosh HD from the last backup and it works well deep into the night!
    Maybe the temperature sensor does not work, or is it the hard drive?
    Thank you, I understand.
    I have 3 partitions and I just restored the system partition, the other two were normal.
    Macintosh HD:
    Capacity: 250.66 GB
    Available: 131.05 GB
    Writable: Yes
    File System: Journaled HFS+
    BSD Name: disk0s2
    Mount Point: /
    Work:
    Capacity: 374.11 GB
    Available: 160.36 GB
    Writable: Yes
    File System: Journaled HFS+
    BSD Name: disk0s3
    Mount Point: /Volumes/Work
    Extra:
    Capacity: 73.3 GB
    Available: 31.64 GB
    Writable: Yes
    File System: Journaled HFS+
    BSD Name: disk0s4
    Mount Point: /Volumes/Extra
    I must have three times to recover the system partition.
    I'm not sure that the drive for 1 year of operation is out of order. During the day the computer works fine, but every morning after wake up,
    when disc is cold ,
    and after 2 - 3 minutes go to freezing all apps. It seems that does not see the HDD.
    After a forced shutdown and restart the system sometimes can boot normally and without error in the file system, but often does not load.
    After several attempts to boot the system it is still loaded, but disk utility shows :
    2009-02-04 13:46:25 +0800: Verifying volume “Macintosh HD”
    Starting verification tool: 2009-02-04 13:46:25 +0800
    2009-02-04 13:47:53 +0800:
    2009-02-04 13:47:53 +0800: Performing live verification.
    2009-02-04 13:47:53 +0800: Checking Journaled HFS Plus volume.
    2009-02-04 13:47:53 +0800: Checking Extents Overflow file.
    2009-02-04 13:47:53 +0800: Checking Catalog file.
    2009-02-04 13:47:53 +0800: 2009-02-04 13:47:53 +0800: Incorrect block count for file indexr3_db
    2009-02-04 13:47:53 +0800: 2009-02-04 13:47:53 +0800: 2009-02-04 13:47:53 +0800: (It should be 2080 instead of 2264)
    2009-02-04 13:47:53 +0800: 2009-02-04 13:47:53 +0800: Incorrect block count for file indexr3_repo
    2009-02-04 13:47:53 +0800: 2009-02-04 13:47:53 +0800: 2009-02-04 13:47:53 +0800: (It should be 8932 instead of 9180)
    2009-02-04 13:47:53 +0800: Checking multi-linked files.
    2009-02-04 13:47:53 +0800: Checking Catalog hierarchy.
    2009-02-04 13:47:53 +0800: Checking Extended Attributes file.
    2009-02-04 13:47:53 +0800: Checking volume bitmap.
    2009-02-04 13:47:53 +0800: Checking volume information.
    2009-02-04 13:47:53 +0800: 2009-02-04 13:47:53 +0800: The volume Macintosh HD needs to be repaired.
    Repair can not restore the system partition. I restore the Macintosh HD from last backup and it works well deep into the night!
    Maybe temperature sensor does not work, or is a hard drive?
    WDC WD7500AAKS
    Total Capacity : 698.6 GB (750,156,374,016 Bytes)

  • My Query takes too long ...

    Hi,
    Environment: DB 10g, OS: Red Hat Linux, DB size about 80 GB.
    My query takes too long, about 5 days to get results. Can you please help me rewrite this query in a better way?
    declare
      x          number;
      y          date;
      start_date date;
      mdn        varchar2(12);
      topup      varchar2(50);
    begin
      -- Note: the cursor query must be enclosed in parentheses or this block
      -- will not compile.
      for first_bundle in (
        select min(date_time_of_event) date_time_of_event,
               account_identifier,
               top_up_profile_name
        from   bundlepur
        where  account_profile = 'Basic'
        and    account_identifier = '665004664'
        and    in_service_result_indicator = 0
        and    network_cause_result_indicator = 0
        and    date_time_of_event >= to_date('16/07/2013', 'dd/mm/yyyy')
        group by account_identifier, top_up_profile_name
        order by date_time_of_event
      )
      loop
        select sum(units_per_tariff_rum2), max(date_time_of_event)
        into   x, y
        from   old_lte_cdr
        where  account_identifier = (select first_bundle.account_identifier from dual)
        and    date_time_of_event >= (select first_bundle.date_time_of_event from dual)
        -- no more than a month
        and    date_time_of_event < (select add_months(first_bundle.date_time_of_event, 1) from dual)
        -- finished his bundle, then bought a new one
        and    date_time_of_event < (select min(date_time_of_event)
                                     from   old_lte_cdr
                                     where  date_time_of_event > (select first_bundle.date_time_of_event + 1/24 from dual)
                                     and    in_service_result_indicator = 26);
        select first_bundle.account_identifier, first_bundle.top_up_profile_name,
               first_bundle.date_time_of_event
        into   mdn, topup, start_date
        from   dual;
        insert into consumed1 values (x, topup, mdn, start_date, y);
      end loop;
      commit;
    end;

    > where account_identifier=(select first_bundle.account_identifier from dual)
    Why are you doing this?  It's a completely unnecessary subquery.
    Just do this:
    where account_identifier = first_bundle.account_identifier
    Same for all your other FROM DUAL subqueries.  Get rid of them.
    More importantly, don't use a cursor for loop.  Just write one big INSERT statement that does what you want.
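    To make the advice concrete, here is one possible shape for that single INSERT. This is only a sketch: it assumes the same tables and columns as the original block, the column order matches the original VALUES list (sum, profile name, account, start date, max date), and it has not been run against the real schema, so treat it as a starting point rather than a drop-in replacement.

    ```sql
    -- Sketch: one INSERT ... SELECT replacing the cursor loop (untested, schema assumed)
    INSERT INTO consumed1
    SELECT SUM(c.units_per_tariff_rum2),        -- was X
           fb.top_up_profile_name,              -- was TOPUP
           fb.account_identifier,               -- was MDN
           fb.date_time_of_event,               -- was START_DATE
           MAX(c.date_time_of_event)            -- was Y
    FROM  (SELECT MIN(date_time_of_event) AS date_time_of_event,
                  account_identifier,
                  top_up_profile_name
           FROM   bundlepur
           WHERE  account_profile = 'Basic'
           AND    account_identifier = '665004664'
           AND    in_service_result_indicator = 0
           AND    network_cause_result_indicator = 0
           AND    date_time_of_event >= TO_DATE('16/07/2013', 'dd/mm/yyyy')
           GROUP BY account_identifier, top_up_profile_name) fb
    JOIN  old_lte_cdr c
      ON  c.account_identifier = fb.account_identifier
    WHERE c.date_time_of_event >= fb.date_time_of_event
      -- no more than a month
      AND c.date_time_of_event < ADD_MONTHS(fb.date_time_of_event, 1)
      -- finished his bundle, then bought a new one
      AND c.date_time_of_event < (SELECT MIN(date_time_of_event)
                                  FROM   old_lte_cdr
                                  WHERE  date_time_of_event > fb.date_time_of_event + 1/24
                                  AND    in_service_result_indicator = 26);
    COMMIT;
    ```

    With the DUAL subqueries gone and the loop replaced by a join, the optimizer can choose a set-based plan (hash join, index range scan) instead of re-running three scalar subqueries per row, which is usually where the days of runtime go.
    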

  • Sql Query takes too long to enter into the first line

    Hi Friends,
    I am using SQL Server 2008. I am running a query to fetch data from the database. The first time I run it after executing "DBCC FREEPROCCACHE" to clear the cache, it takes too long (7 to 9 seconds) to enter the first
    line of the stored procedure. Once it enters the first statement of the SP, it fetches the data within a second, so I don't think the problem is the SQL query itself. Kindly let me know if you know the reason behind this.
    Sample Example:
    CREATE PROC Sp_Name
    AS
    BEGIN
      PRINT GETDATE()
      -- SQL statements for fetching data
      PRINT GETDATE()
    END
    In the above example, there is no difference between the first date and the second date.
    Please help me troubleshoot this problem.
    Thanks & Regards,
    Rajkumar.R

     i am running first time after executed the "DBCC FREEPROCCACHE" query for clear cache memory, it takes too long (7 to 9 second)
    In addition to Manoj's reply:
    DBCC FREEPROCCACHE clears the procedure cache, so all stored procedures must be newly compiled on the first call.
    Olaf Helper
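    You can confirm that the 7-9 seconds is compile time rather than execution time. The sketch below (procedure name taken from the sample above; do not run DBCC FREEPROCCACHE on a production server) uses SET STATISTICS TIME, which reports "SQL Server parse and compile time" separately from execution time in the Messages tab.

    ```sql
    -- Sketch: separate compile time from execution time for the first call
    DBCC FREEPROCCACHE;      -- clears all cached plans; test servers only
    SET STATISTICS TIME ON;

    EXEC dbo.Sp_Name;        -- first call: large "parse and compile time" expected
    EXEC dbo.Sp_Name;        -- second call: plan is cached, compile time near 0 ms

    SET STATISTICS TIME OFF;
    ```

    If the first call shows its delay under compile time, the fix is usually to simplify the procedure (fewer statements to compile) or simply to stop clearing the plan cache, since that cache exists precisely to pay the compile cost only once.
    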
