Asynchronous restart

Hi,
We have an asynchronous interface from ECC to PI, and then, through a BPM, synchronous messages are sent from PI to an external system.
The problem is that when PI is down, messages accumulate in ECC, and when PI is restarted,
they are all sent to PI, and then to the external system, at once.
Question: how can we configure the retry of messages in ECC while PI is down?
Question: how can I reduce the flow of messages so that fewer messages are sent to the external site at once?
Thank you.

Question: how can we configure the retry of messages in ECC while PI is down?
There must be a scheduling mechanism at your sender that sends messages to PI periodically. When PI is down, the ECC RFC destination call to PI will fail. You can use the report RFC_VERIFY_DESTINATION to check whether the destination is reachable. If it is not, you can increase the time until the next run of the schedule.
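A minimal sketch of such a check in ABAP (the destination name 'PI_XI_DEST' and the surrounding report are assumptions, not from this thread): before the scheduled job triggers the outbound interface, ping the PI destination and skip the run if it is unreachable.

DATA lv_dest TYPE rfcdest VALUE 'PI_XI_DEST'.   " hypothetical RFC destination pointing to PI

CALL FUNCTION 'RFC_PING'
  DESTINATION lv_dest
  EXCEPTIONS
    communication_failure = 1
    system_failure        = 2
    OTHERS                = 3.

IF sy-subrc <> 0.
  " PI is not reachable: send nothing in this run and, if desired,
  " stretch the interval of the next scheduled run.
  WRITE: / 'Destination', lv_dest, 'not reachable - postponing send.'.
  RETURN.
ENDIF.

" ... trigger the outbound interface for the queued documents here ...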
Regards,
Prateek

Similar Messages

  • Is it possible to get a response to an asynchronous RFC call through a wait loop?

    Hi Experts,
    The requirement is that I queue up all the requests from a web service (one queue per plant, for example), which I set in the HTTP sender URL (I mean the queue name), and I post the records through an RFC call.
    1. If the RFC call is synchronous and the R/3 system is down, will the XI queue keep the call and keep retrying it until the R/3 system is up?
    2. Otherwise, if the RFC call is asynchronous, is it possible to introduce a wait loop in BPM that listens to the RFC until I get some response (I think this is not possible)?
    Need your suggestions on this.
    The requirement demands that no single request from the web service be lost; in case of an R/3 failure they should be queued up and retried. Forget about errors caused by exceptions in the RFC; those will be handled manually.
    Thanks for your response in advance.

    Hi Rajesh,
    You can queue and restart only asynchronous messages. You could build a business process in which you repeat sending a message until it succeeds or a maximum number of tries is reached, but that is not a good idea, because you will get a timeout at the HTTP sender, and asynchronous messages don't have a response.
    Maybe XI can solve your task in the standard: in transaction SMQR you can configure exactly what you describe. All asynchronous messages are queued and restarted if the receiver is not reachable, and you can configure the number of retries and the interval between them.
    Regards,
    Udo

  • Fault in initializing asynchronous process

    Hi Guys,
    As many posts and articles suggest, asynchronous processes are better because of their flexibility. I was very happy with it until I encountered the following problem.
    We do fault handling on all of our processes, using both catch blocks and the fault policy framework. For asynchronous processes, we use a separate invoke to return the fault. Recently, we noticed a problem: for some reason, every Sunday when the production BPEL server restarts, a few BPEL processes are turned retired and off. We have already raised a service request for this on Metalink. But this causes another very serious issue:
    The calling BPEL instance sends a request to initiate the asynchronous process. As the process has been turned off and retired by accident, the initialization fails. However, because it is asynchronous, the calling process is not notified, which means no fault is raised in the calling process. This causes a lot of trouble because we are not notified of the problem. It was never an issue before because we did not have processes turned off and retired.
    This can easily be verified with a simple test case. Create a synchronous process A and an asynchronous process B, and let A invoke B. Then turn off and retire B. A's instances still execute successfully without reporting the problem calling B. If B were a synchronous process, A would always report a fault in this situation.
    Of course, the domain log file records this ORABPEL-02106 problem, but we don't want to check the log file all the time and want to be notified when it happens. Is there a way to catch such an exception/fault in the calling process?
    Thanks in advance!!!
    Steven

    Hi Sridhar,
    Thanks for your advice. I think I will do the following:
    1. I will keep all processes asynchronous, with deliveryPersistPolicy left at the default.
    2. I will add a cron job that scans the server whenever it restarts to detect any default version of a BPEL process with a retired or off status. If there are any, I'll correct them.
    3. I will implement a program to re-submit the failed invoke. But I cannot make it an alert program because I cannot get the detailed fault through IInvokeHandle (please correct me if I am wrong).
    By the way, the "ClassCastException" defect is fixed in 10.1.3.5; I guess we can upgrade to that too :-)
    Thanks,
    Steven

  • Application errors occurred in ccBPM not restartable?

    hello experts,
    We are using ccBPM for some of our integration scenarios and synchronously posting data to SAP R/3. When a BAPI returns an application error (e.g. material locked by a user), we are unable to restart these messages from SXMB_MONI, so we have to manually edit and retransfer XML files. This is very cumbersome and error-prone. We have informed the implementation partner, but they say that application-specific errors cannot be restarted.
    We have gone through the documentation http://help.sap.com/saphelp_nw04/helpdata/en/ea/2f053d39177608e10000000a114084/content.htm and it says :
    "Application error (restart possible)
    An application error occurred during message processing. For asynchronous processing, this status only occurs in the receiver system; for synchronous processing, it occurs in all systems involved."
    We have also gone through the blog /people/shabarish.vijayakumar/blog/2006/11/02/fault-message-types--a-demo-part-1 by Shabarish Vijayakumar. We want the same result, but we are using a ccBPM send step to invoke a standard BAPI via the RFC adapter instead of an ABAP server proxy.
    Can someone please help us out? Users are complaining that the old R/3 LSMW uploads were much more feature-rich and user-friendly.
    thanks & regards,
    Harsha.

    Hi Uri,
    Thanks for the suggestion. I assume this will require changes in the BPM. Why can't we simply catch the BAPI return code 'E' and mark the message as an application error, which is restartable? (A sketch of this idea follows below.)
    This is a very basic requirement and I am sure there must be a way of achieving what we want, but I don't have enough knowledge of XI to do it.
    Is there any place we can find a how-to guide?
    thanks & regards,
    Harsha.
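
    A minimal sketch of the idea above, as a wrapper function module around the BAPI call (the function name and interface shown here are hypothetical, not from the thread): if the BAPI's RETURN table contains an error, raise an RFC exception so that the caller should see a failed, restartable call instead of a technically successful one with the error hidden in RETURN.

    FUNCTION z_call_bapi_restartable.
      " Hypothetical interface (maintained in SE37): TABLES return STRUCTURE bapiret2,
      " EXCEPTIONS application_error. Remote-enabled, called by the RFC adapter.
      " ... call the real BAPI here and fill the RETURN table ...

      " If the BAPI reported an application error, raise the RFC exception
      " so the calling system does not treat the message as processed.
      READ TABLE return WITH KEY type = 'E' TRANSPORTING NO FIELDS.
      IF sy-subrc = 0.
        RAISE application_error.
      ENDIF.

      " Otherwise commit as usual, e.g. via BAPI_TRANSACTION_COMMIT.
    ENDFUNCTION.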

  • JMS queue losing messages on server restart

    Hi ,
    We are having a problem with a queue that loses pending messages when the server is restarted.
    We have the following setup on Oracle WebLogic version 9.2 MP1:
    The JMS server is backed by a DB-based persistent store, and we have a message-driven bean that processes the messages in the queue asynchronously.
    The messages delivered to the queue have delivery mode 'persistent', and the MDB uses transactionType=MessageDriven.MessageDrivenTransactionType.CONTAINER.
    What is strange is that we lose pending messages in the queue when the server is restarted, even though, checking the xxx_WLSSTORE table in the DB, I can see that the message was persisted.
    If any of you have seen this situation, kindly let me know.
    thanks in advance.
    Rgds
    roy

    You can follow this article to troubleshoot pending-message issues:
    http://weblogic-wonders.com/weblogic/2011/01/10/working-with-jms-and-the-standard-issues-in-jms/
    -Faisal

  • Re: What is the difference between synchronous and asynchronous processing

    Hi ,
          What is the difference between synchronous and asynchronous processing in the session method and the call transaction method? Please give one example.
    Thanks
    Arief .S

    Synchronous data processing means that the program calling the update task waits for the update work process to finish the update before it continues processing.
    In an asynchronous update, the calling program does not wait for the update work process to finish the update and continues as normal.
    A BDC done with sessions is always synchronous.
    A BDC with CALL TRANSACTION is asynchronous by default,
    unless you define it explicitly, e.g.
    CALL TRANSACTION 'XXXX' ...... UPDATE 'S'.
    (If you do not specify the UPDATE option, it defaults to 'A'.)
    The update method matters when one transaction locks data that is required by a subsequent transaction: the subsequent transaction will fail if the data is still locked by the previous one. An example: you create sales orders for the same material in succession with asynchronous update; quite likely some of the transactions will fail because the material is locked.
    For large volumes of data, CALL TRANSACTION is faster, but it has no restart capability. Suppose 100 out of 1000 transactions fail: you would have to run the BDC program again, excluding the ones that were successful. With the session method, however, you can reprocess the error transactions in SM35. So if you are sure that errors will not occur, use CALL TRANSACTION; otherwise use the session method.
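    As a minimal sketch (the transaction code 'VA01' and the way lt_bdcdata is filled are placeholders, not from this thread), the UPDATE option is passed like this:

    DATA: lt_bdcdata TYPE TABLE OF bdcdata,
          lt_msgs    TYPE TABLE OF bdcmsgcoll.

    " ... fill lt_bdcdata with the screens and field values to process ...

    " UPDATE 'S' = synchronous update: the program waits for the update work
    " process, so locks from this call are released before the next call.
    " UPDATE 'A' (the default) would continue immediately.
    CALL TRANSACTION 'VA01' USING lt_bdcdata
                            MODE 'N'              " no screen display
                            UPDATE 'S'
                            MESSAGES INTO lt_msgs.

    IF sy-subrc <> 0.
      " evaluate lt_msgs to see which messages the transaction raised
    ENDIF.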

  • Message does not come back in sequential asynchronous scenario

    I have a scenario like this; all messages are executed asynchronously, called from SPROXY:
    Request
    clientZET029 --> NumberRangeRequest_Out --> XI --> NumberRangeRequest_In --> clientOER030
    Response
    clientOER030 --> NumberRangeConfirmation_Out --> XI --> NumberRangeConfirmation_In --> clientZET029
    When I try to run the whole scenario I get the error "XI Error NO_RECEIVER_CASE_ASYNC.RCVR_DETERMINATION".
    I execute the NumberRangeRequest_Out service from SPROXY on ZET029 and expect a return message from OER030; the request arrives at OER030, but the confirmation does not come back and ends in error. And, as you can see, the sender of all messages is ZET029, which I think is the problem.
    here my sxmb_moni logs
    Processed successfully               ZET029     NumberRangeRequest_Out               NumberRangeRequest_Out               SENDER     PROXY     IENGINE
    Processed successfully               ZET029     NumberRangeRequest_Out          OER030     NumberRangeRequest_In               CENTRAL     IENGINE     IENGINE
    Processed successfully               ZET029                         OER030     NumberRangeRequest_In               RECEIVERIENGINE     PROXY
    System Error - Manual Restart Possible     ZET029     NumberRangeConfirmation_Out                                   CENTRAL     IENGINE     
    Processed successfully               ZET029     NumberRangeConfirmation_Out          NumberRangeConfirmation_Out          SENDER     PROXY     IENGINE
    Thanks

    I can't understand how, in an asynchronous scenario, the response is supposed to come back to the sender.

  • Acknowledgement in asynchronous scenario: is it sync or async?

    Dear community,
    In an asynchronous scenario with a fault message type defined for the inbound interface, the handover of a message from XI to the receiver consists of two steps:
    Step 1: XI sends a message to the receiver.
    Step 2: The receiver acknowledges positively or negatively.
    Will steps 1 and 2 be handled synchronously or asynchronously in async scenarios?
    The background of this question is resource-related: if the processing of a message were synchronous, these two steps would block resources, which would be problematic at higher volumes.
    Logically it seems this should all be asynchronous, but I could not find a detailed spec and I need to know for sure.
    Thanks a lot in advance !
    Jochen

    Hi Jochen,
    If you want to have the result of a message at the receiver, you have two alternatives:
    Synchronous:
    + You get the errors at once
    + You know very soon what happened
    - Bad performance, because processing may be blocked for minutes
    - No connection (receiver offline) leads to an error
    Asynchronous with acknowledgement:
    - You only sometimes get an error acknowledgement
    - The calling program has no direct information
    + Much better performance
    + The queuing mechanism allows automatic and manual restarts
    Usually you try to build asynchronous scenarios where possible, but it depends on the requirements.
    Regards,
    Udo

  • Restarting Queue not allowed in PI

    Hi Felix,
    You can use
    RSQOWKEX --> QOUT Scheduler: Execution of Registered Outbound Queues
    or
    RSQIWKEX --> Standard QIN Scheduler: Execution of Registered Inbound Queues
    These are available in PI 7.1.
    Please see this Blog
    /people/sap.india5/blog/2006/01/03/xi-asynchronous-message-processing-understanding-xi-queues-part-i
    http://help.sap.com/saphelp_nw04/helpdata/en/25/bcfa40badbf46fe10000000a1550b0/content.htm

    Hi Felix,
    If you double-click on any queue entry, you can see all the message entries inside that queue.
    Now click on the SYSFAIL entry; it takes you to the corresponding failed message in SXMB_MONI.
    Then cancel the message.
    Automatic cancel:
    Here you go:
    In case of bulk error messages, you can either cancel them manually by selecting the set of messages in SXMB_MONI and pressing "Cancel Processing of Messages With Error" (Ctrl+F8), or use report RSXMB_CANCEL_MESSAGES, which is the tool of choice for mass data handling, i.e. for cancelling a large number of messages. This report handles messages that are already in an error state.
    SE38 -> RSXMB_CANCEL_MESSAGES
    If you cancel a message using RSXMB_CANCEL_MESSAGES, it is removed from the queue.
    RSXMB_CANCEL_NOT_REST_MESSAGES cancels XI messages with errors that cannot be restarted.
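    As a minimal sketch (whether you supply selection-screen values or run it interactively depends on your system), the mass-cancel report can also be started from your own program or a periodic job instead of SE38:

    " Start the mass-cancel report with its own selection screen and
    " return to the calling program afterwards.
    SUBMIT rsxmb_cancel_messages
      VIA SELECTION-SCREEN
      AND RETURN.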
    We cannot cancel messages that are in pending status, but there is a way to bypass holding messages: go to SMQ2, select the queue, select the holding message LUW and choose the option to save the LUW. The saved LUWs can be restored later from transaction SMQ3.
    You may also find this blog interesting:
    /people/stefan.grube/blog/2006/04/27/how-to-deal-with-stuck-eoio-messages-in-the-xi-30-adapter-framework
    URLs of help.sap.com
    1) http://help.sap.com/saphelp_nw04s/helpdata/en/f5/d347ddec72274ca2abb0d7682c800b/frameset.htm
    2) http://help.sap.com/saphelp_nw04s/helpdata/en/41/b715015ffc11d5b3ea0050da403d6a/frameset.htm
    3) http://help.sap.com/saphelp_nw04s/helpdata/en/0e/80553b4d53273de10000000a114084/frameset.htm 

  • Restarting ccBPM.

    All
    we have a ccBPM process with lots of steps.
    1. receive with collection pattern.
    2. mapping transformation.
    3. synchronous call.
    4. mapping transformation
    5. asynchronous call.
    6. mapping transformation
    7. send the output.
    If there is an error in step 4 (or even in step 3, if the target system is down), we see it in the workflow view as well as in SXMB_MONI.
    How do we restart the ccBPM from the point of error? I tried to restart the failed step in SXMB_MONI, and it would not let me, saying a step with this status is not allowed to be restarted.
    Is there a standard way to restart a ccBPM when it is stuck in error?
    Thanks.

    Raj,
    I tried SXMB_MONI_BPE, but there is a problem with the way restart after error works.
    In my example, the step 2 mapping has an issue (a bug), which causes the step 3 synchronous call to the external system to come back with a message in error status. The step 4 mapping then fails because of this.
    When I try to reflow this ccBPM (after fixing the step 2 mapping), it tries to restart from step 4, fails again, and does not start from the beginning.
    Is there a workaround for this, or not?
    Thanks.

  • Ecatt synchronous and asynchronous update

    Hi,
    I want some details about eCATT. In the UI Control tab of the test configuration there are around six start modes. What is the difference between
    "process in background synchronous local", "process in background synchronous not local" and "process in background asynchronous update"?
    Regards,
    Charumathi.B

    Hi Kumar,
    Synchronous data processing means that the program calling the update task waits for the update work process to finish the update before it continues processing.
    In an asynchronous update, the calling program does not wait for the update work process to finish the update and continues as normal.
    A BDC done with sessions is always synchronous.
    A BDC with CALL TRANSACTION is asynchronous by default,
    unless you define it explicitly, e.g.
    CALL TRANSACTION 'XXXX' ...... UPDATE 'S'.
    (If you do not specify the UPDATE option, it defaults to 'A'.)
    The update method matters when one transaction locks data that is required by a subsequent transaction: the subsequent transaction will fail if the data is still locked by the previous one. An example: you create sales orders for the same material in succession with asynchronous update; quite likely some of the transactions will fail because the material is locked.
    For large volumes of data, CALL TRANSACTION is faster, but it has no restart capability. Suppose 100 out of 1000 transactions fail: you would have to run the BDC program again, excluding the ones that were successful. With the session method, however, you can reprocess the error transactions in SM35. So if you are sure that errors will not occur, use CALL TRANSACTION; otherwise use the session method.
    Please also check this link for differences between call transaction and batch input method
    http://help.sap.com/saphelp_47x200/helpdata/en/fa/097015543b11d1898e0000e8322d00/frameset.htm
    Hope this will help.
    Regards,
    Ferry Lianto
    Please reward points if helpful.

  • Synchronous and Asynchronous streams

    Hey,
    I have a middleware application that communicates with a (local or, alternatively, remote) backend via streams. At least that is how I am planning it right now; nothing is implemented yet, I am only thinking over the design. There is a requirement that communication between the middleware and the backend supports synchronous and asynchronous messages.
    The synchronous messages I understand: I send a message and keep the stream listening for a response. But some messages can be asynchronous, and if I keep the stream listening for them, that will tie up resources. What would be the solution for such asynchronous messages?
    Is this a solution to the problem: I have a single stream (wrapped in a thread); it sends messages, irrespective of whether the response will be synchronous or asynchronous, and it sleeps after doing so. A question here: are the wait() methods suited to this? Is this the solution? Would it tie up resources?
    Is there a better solution that someone knows of?

    Hi Kumar,
    Synchronous data processing means that the program calling the update task waits for the update work process to finish the update before it continues processing.
    In an asynchronous update, the calling program does not wait for the update work process to finish the update and continues as normal.
    A BDC done with sessions is always synchronous.
    A BDC with CALL TRANSACTION is asynchronous by default,
    unless you define it explicitly, e.g.
    CALL TRANSACTION 'XXXX' ...... UPDATE 'S'.
    (If you do not specify the UPDATE option, it defaults to 'A'.)
    The update method matters when one transaction locks data that is required by a subsequent transaction: the subsequent transaction will fail if the data is still locked by the previous one. An example: you create sales orders for the same material in succession with asynchronous update; quite likely some of the transactions will fail because the material is locked.
    For large volumes of data, CALL TRANSACTION is faster, but it has no restart capability. Suppose 100 out of 1000 transactions fail: you would have to run the BDC program again, excluding the ones that were successful. With the session method, however, you can reprocess the error transactions in SM35. So if you are sure that errors will not occur, use CALL TRANSACTION; otherwise use the session method.
    Please also check this link for differences between call transaction and batch input method
    http://help.sap.com/saphelp_47x200/helpdata/en/fa/097015543b11d1898e0000e8322d00/frameset.htm
    Hope this will help.
    Regards,
    Ferry Lianto
    Please reward points if helpful.

  • Synchronous and asynchronous mode

    Hi all,
          When should we use synchronous and asynchronous mode in BDC?
    cheers

    Hi Kumar,
    Synchronous data processing means that the program calling the update task waits for the update work process to finish the update before it continues processing.
    In an asynchronous update, the calling program does not wait for the update work process to finish the update and continues as normal.
    A BDC done with sessions is always synchronous.
    A BDC with CALL TRANSACTION is asynchronous by default,
    unless you define it explicitly, e.g.
    CALL TRANSACTION 'XXXX' ...... UPDATE 'S'.
    (If you do not specify the UPDATE option, it defaults to 'A'.)
    The update method matters when one transaction locks data that is required by a subsequent transaction: the subsequent transaction will fail if the data is still locked by the previous one. An example: you create sales orders for the same material in succession with asynchronous update; quite likely some of the transactions will fail because the material is locked.
    For large volumes of data, CALL TRANSACTION is faster, but it has no restart capability. Suppose 100 out of 1000 transactions fail: you would have to run the BDC program again, excluding the ones that were successful. With the session method, however, you can reprocess the error transactions in SM35. So if you are sure that errors will not occur, use CALL TRANSACTION; otherwise use the session method.
    Please also check this link for differences between call transaction and batch input method
    http://help.sap.com/saphelp_47x200/helpdata/en/fa/097015543b11d1898e0000e8322d00/frameset.htm
    Hope this will help.
    Regards,
    Ferry Lianto
    Please reward points if helpful.

  • How to clear JMS pending messages without restarting WebLogic

    Hi All,
    We are using JMS in our WebLogic setup for an asynchronous backend. We have a JMS-polling OSB service that processes the JMS messages for the async target.
    For some reason messages are piling up in the queue and the poll service is not able to send the messages to the target.
    Is there any way we can clear the queue without restarting WebLogic?
    Any help on this would be much appreciated.
    Thanks,

    Try it through WLST:
    # connect to the admin server and switch to the server runtime tree
    connect('weblogic', 'weblogic', 't3://HOST:PORT')
    serverRuntime()
    # navigate to the queue's runtime MBean
    # (server, JMS server, module and queue names below are examples)
    cd('/JMSRuntime/ManagedSrv1.jms/JMSServers/MyAppJMSServer/Destinations/MyAppJMSModule!QueueNameToClear')
    # an empty selector deletes all messages in the queue
    cmo.deleteMessages('')
    To use Java/WLST to delete JMS Message refer below link
    How to purge/delete message from weblogic JMS queue - Stack Overflow
    Cheers,
    Sahil

  • [SOLVED] Duplicity getting 'stuck' when attempting to restart a backup

    Hey all,
    After a backup attempt failed partway through (first-time full backup) with this error:
    AsyncScheduler: scheduling task for asynchronous execution
    No handlers could be found for logger "paramiko.transport"
    AsyncScheduler: a previously scheduled task has failed; propagating the result immediately
    AsyncScheduler: task execution done (success: False)
    Backend error detail: Traceback (most recent call last):
    File "/usr/bin/duplicity", line 1391, in <module>
    with_tempdir(main)
    File "/usr/bin/duplicity", line 1384, in with_tempdir
    fn()
    File "/usr/bin/duplicity", line 1354, in main
    full_backup(col_stats)
    File "/usr/bin/duplicity", line 500, in full_backup
    globals.backend)
    File "/usr/bin/duplicity", line 399, in write_multivol
    (tdp, dest_filename, vol_num)))
    File "/usr/lib/python2.7/site-packages/duplicity/asyncscheduler.py", line 151, in schedule_task
    return self.__run_asynchronously(fn, params)
    File "/usr/lib/python2.7/site-packages/duplicity/asyncscheduler.py", line 215, in __run_asynchronously
    with_lock(self.__cv, wait_for_and_register_launch)
    File "/usr/lib/python2.7/site-packages/duplicity/dup_threading.py", line 100, in with_lock
    return fn()
    File "/usr/lib/python2.7/site-packages/duplicity/asyncscheduler.py", line 207, in wait_for_and_register_launch
    check_pending_failure() # raise on fail
    File "/usr/lib/python2.7/site-packages/duplicity/asyncscheduler.py", line 191, in check_pending_failure
    self.__failed_waiter()
    File "/usr/lib/python2.7/site-packages/duplicity/dup_threading.py", line 201, in caller
    value = fn()
    File "/usr/lib/python2.7/site-packages/duplicity/asyncscheduler.py", line 183, in <lambda>
    (waiter, caller) = async_split(lambda: fn(*params))
    File "/usr/bin/duplicity", line 398, in <lambda>
    async_waiters.append(io_scheduler.schedule_task(lambda tdp, dest_filename, vol_num: put(tdp, dest_filename, vol_num),
    File "/usr/bin/duplicity", line 296, in put
    backend.put(tdp, dest_filename)
    File "/usr/lib/python2.7/site-packages/duplicity/backends/sshbackend.py", line 191, in put
    raise BackendException("sftp put of %s (as %s) failed: %s" % (source_path.name,remote_filename,e))
    BackendException: sftp put of /tmp/duplicity-9_jssu-tempdir/mktemp-uqQRIn-42 (as duplicity-full.20120507T060530Z.vol41.difftar.gpg) failed: Server connection dropped:
    BackendException: sftp put of /tmp/duplicity-9_jssu-tempdir/mktemp-uqQRIn-42 (as duplicity-full.20120507T060530Z.vol41.difftar.gpg) failed: Server connection dropped:
    the process keeps getting stuck when I try to restart it. There's almost no CPU use and no useful error messages. Here's the output (the version and parameters are included) with -v debug:
    Using archive dir: /root/.cache/duplicity/1498ef4c9557164d69523e5aec4c78ce
    Using backup name: 1498ef4c9557164d69523e5aec4c78ce
    Import of duplicity.backends.sshbackend Succeeded
    Import of duplicity.backends.rsyncbackend Succeeded
    Import of duplicity.backends.botobackend Succeeded
    Import of duplicity.backends.webdavbackend Succeeded
    Import of duplicity.backends.hsibackend Succeeded
    Import of duplicity.backends.localbackend Succeeded
    Import of duplicity.backends.ftpsbackend Succeeded
    Import of duplicity.backends.giobackend Failed: No module named gio
    Import of duplicity.backends.ftpbackend Succeeded
    Import of duplicity.backends.cloudfilesbackend Succeeded
    Import of duplicity.backends.u1backend Succeeded
    Import of duplicity.backends.tahoebackend Succeeded
    Import of duplicity.backends.imapbackend Succeeded
    Import of duplicity.backends.gdocsbackend Succeeded
    Main action: inc
    ================================================================================
    duplicity 0.6.18 (February 29, 2012)
    Args: /usr/bin/duplicity --use-agent -vdebug --encrypt-key YYYYYYYY --sign-key XXXXXXX --full-if-older-than 60D --exclude /mnt/backup/bfd-training-data --exclude /mnt/backup/fsarchiver --exclude /mnt/backup/media --exclude /mnt/backup/usapparel --asynchronous-upload /mnt/backup scp://[email protected]//backups/homeserver
    Linux homeserver 3.0.29-1-lts #1 SMP PREEMPT Mon Apr 23 09:41:11 CEST 2012 x86_64 AMD Sempron(tm) Processor 3400+
    /usr/bin/python2 2.7.3 (default, Apr 24 2012, 00:00:54)
    [GCC 4.7.0 20120414 (prerelease)]
    ================================================================================
    Using temporary directory /tmp/duplicity-OjWCPd-tempdir
    Registering (mkstemp) temporary file /tmp/duplicity-OjWCPd-tempdir/mkstemp-GZ2ieI-1
    Temp has 490532864 available, backup will use approx 60293120.
    Local and Remote metadata are synchronized, no sync needed.
    41 files exist on backend
    2 files exist in cache
    Extracting backup chains from list of files: ['duplicity-full.20120507T060530Z.manifest.part',
    'duplicity-full-signatures.20120507T060530Z.sigtar.part',
    'duplicity-full.20120507T060530Z.vol1.difftar.gpg', 'duplicity-full.20120507T060530Z.vol2.difftar.gpg',
    'duplicity-full.20120507T060530Z.vol3.difftar.gpg', 'duplicity-full.20120507T060530Z.vol4.difftar.gpg',
    'duplicity-full.20120507T060530Z.vol5.difftar.gpg', 'duplicity-full.20120507T060530Z.vol6.difftar.gpg',
    'duplicity-full.20120507T060530Z.vol7.difftar.gpg', 'duplicity-full.20120507T060530Z.vol8.difftar.gpg',
    'duplicity-full.20120507T060530Z.vol9.difftar.gpg', 'duplicity-full.20120507T060530Z.vol10.difftar.gpg',
    'duplicity-full.20120507T060530Z.vol11.difftar.gpg', 'duplicity-full.20120507T060530Z.vol12.difftar.gpg',
    'duplicity-full.20120507T060530Z.vol13.difftar.gpg', 'duplicity-full.20120507T060530Z.vol14.difftar.gpg',
    'duplicity-full.20120507T060530Z.vol15.difftar.gpg', 'duplicity-full.20120507T060530Z.vol16.difftar.gpg',
    'duplicity-full.20120507T060530Z.vol17.difftar.gpg', 'duplicity-full.20120507T060530Z.vol18.difftar.gpg',
    'duplicity-full.20120507T060530Z.vol19.difftar.gpg', 'duplicity-full.20120507T060530Z.vol20.difftar.gpg',
    'duplicity-full.20120507T060530Z.vol21.difftar.gpg', 'duplicity-full.20120507T060530Z.vol22.difftar.gpg',
    'duplicity-full.20120507T060530Z.vol23.difftar.gpg', 'duplicity-full.20120507T060530Z.vol24.difftar.gpg',
    'duplicity-full.20120507T060530Z.vol25.difftar.gpg', 'duplicity-full.20120507T060530Z.vol26.difftar.gpg',
    'duplicity-full.20120507T060530Z.vol27.difftar.gpg', 'duplicity-full.20120507T060530Z.vol28.difftar.gpg',
    'duplicity-full.20120507T060530Z.vol29.difftar.gpg', 'duplicity-full.20120507T060530Z.vol30.difftar.gpg',
    'duplicity-full.20120507T060530Z.vol31.difftar.gpg', 'duplicity-full.20120507T060530Z.vol32.difftar.gpg',
    'duplicity-full.20120507T060530Z.vol33.difftar.gpg', 'duplicity-full.20120507T060530Z.vol34.difftar.gpg',
    'duplicity-full.20120507T060530Z.vol35.difftar.gpg', 'duplicity-full.20120507T060530Z.vol36.difftar.gpg',
    'duplicity-full.20120507T060530Z.vol37.difftar.gpg', 'duplicity-full.20120507T060530Z.vol38.difftar.gpg',
    'duplicity-full.20120507T060530Z.vol39.difftar.gpg', 'duplicity-full.20120507T060530Z.vol40.difftar.gpg',
    'duplicity-full.20120507T060530Z.vol41.difftar.gpg']
    File duplicity-full.20120507T060530Z.manifest.part is not part of a known set; creating new set
    File duplicity-full-signatures.20120507T060530Z.sigtar.part is not part of a known set; creating new set
    Ignoring file (rejected by backup set) 'duplicity-full-signatures.20120507T060530Z.sigtar.part'
    File duplicity-full.20120507T060530Z.vol1.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol2.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol3.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol4.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol5.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol6.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol7.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol8.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol9.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol10.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol11.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol12.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol13.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol14.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol15.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol16.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol17.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol18.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol19.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol20.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol21.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol22.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol23.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol24.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol25.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol26.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol27.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol28.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol29.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol30.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol31.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol32.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol33.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol34.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol35.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol36.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol37.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol38.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol39.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol40.difftar.gpg is part of known set
    File duplicity-full.20120507T060530Z.vol41.difftar.gpg is part of known set
    Found backup chain [Sun May 6 23:05:30 2012]-[Sun May 6 23:05:30 2012]
    Last full backup left a partial set, restarting.
    Last full backup date: Sun May 6 23:05:30 2012
    Collection Status
    Connecting with backend: SftpBackend
    Archive dir: /root/.cache/duplicity/1498ef4c9557164d69523e5aec4c78ce
    Found 0 secondary backup chains.
    Found primary backup chain with matching signature chain:
    Chain start time: Sun May 6 23:05:30 2012
    Chain end time: Sun May 6 23:05:30 2012
    Number of contained backup sets: 1
    Total number of contained volumes: 41
    Type of backup set: Time: Num volumes:
    Full Sun May 6 23:05:30 2012 41
    No orphaned or incomplete backup sets found.
    RESTART: Volumes 41 to 42 failed to upload before termination.
    Restarting backup at volume 41.
    Registering (mktemp) temporary file /tmp/duplicity-OjWCPd-tempdir/mktemp-Fx_YTQ-2
    ^CRemoving still remembered temporary file /tmp/duplicity-OjWCPd-tempdir/mktemp-Fx_YTQ-2
    Removing still remembered temporary file /tmp/duplicity-OjWCPd-tempdir/mkstemp-GZ2ieI-1
    INT intercepted...exiting.
    Does anyone have experience with this? It's a fairly good-sized backup (well, for me at least), so I'd really hate to have to start over every time one fails!
    Thanks,
    Scott

    OK, solved-ish. There were actually three issues:
    1. There's a problem in the new paramiko SSH internals in 0.6.18 that was preventing it from uploading more than about 1 GB at a time, possibly because it reuses the same SSH connection, which some web hosts will kick off after a certain time or amount of data.
    2. The --num-retries parameter wasn't being read by the SSH backend.
    3. There's a bug in some encryption validation code that had to be removed before a 'resume backup' operation would work correctly (it won't read the GPG encryption key correctly).
    So I reverted to 0.6.17 and removed a previously applied patch from the code found here.
    So far it seems to be working. The dev who answered me on Launchpad said that there's a memory leak in 0.6.17, so watch out when doing any backups over ~100 GB or so.
    Scott
