Error delete archive job - BROKEN_URI_EXISTS

Hello!
I'm having trouble removing archiving jobs that are stuck in status Warning. I created a new home path, assigned a new folder in the Archive Path Properties, and restarted the job, which worked, but I cannot remove a background job in status Warning with the error BROKEN_URI_EXISTS. In XML DAS Administration, clicking Unassign produces this error:
Error while unassigning archive path /px1/xi_af_msg/ from archive store ARCHIVE MESSAGES; check application log of back-end system: java.lang.Exception: 598 _ASSIGN_ARCHIVE_STORES: _ASSIGN_ARCHIVE_STORES: I/O-Error while deleting collection 3iwjupqcfmi6jcuzaaaaazzg2g: Archive store returned following response: 598 I/O Error java.io.IOException: Error while deleting collection //sapmnt/PX1/global/xi/px1/xi_af_msg/2014/07/3iwjupqcfmi6jcuzaaaaazzg2g/ java.io.IOException: Error while deleting collection //sapmnt/PX1/global/xi/px1/xi_af_msg/2014/07/3iwjupqcfmi6jcuzaaaaazzg2g/
But under //sapmnt/PX1/global/xi/px1/xi_af_msg/2014/ there is no folder 07 and no files on the file system.
Please tell me how to stop and delete the jobs with status Warning.
Best regards,
Rinaz

Hi Rinaz,
Check whether your PI system is on the Support Package level referenced in this note: 1624448 - BROKEN_URI_EXISTS error when editing archiving job.
Regards.
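
In addition to checking the Support Package level, note that the unassign fails because the archive store tries to delete a collection folder that no longer exists on the file system. Purely as an illustration and an assumption on my part (not an official SAP procedure): recreating the missing, empty folder before retrying the unassign sometimes lets the delete step complete. A minimal Python sketch using the path from the error text above:

    # Illustrative sketch only, not an official SAP procedure: if the archive store
    # fails because the collection folder was removed from disk, recreating the empty
    # directory may let the delete/unassign finish. The path is taken from the error
    # text above; test on a non-production system first.
    from pathlib import Path

    collection = Path("//sapmnt/PX1/global/xi/px1/xi_af_msg/2014/07/3iwjupqcfmi6jcuzaaaaazzg2g")

    if not collection.exists():
        collection.mkdir(parents=True, exist_ok=True)   # parents=True also recreates the missing 07 folder
        print(f"Recreated empty collection folder: {collection}")
    else:
        print("Collection folder already exists; the I/O error has another cause.")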

Similar Messages

  • Error Deleting/Archiving processed files

    Hi all,
    I'm working with a File (XML) to IDoc scenario. I have configured the sender file channel to read files and delete them after processing. SAP XI can read the XML files and send them to my SAP R/3 system, but in communication channel monitoring I get the message "Could not delete file 'p:.xml' after processing".
    I have also tried the archiving mode and get a similar error; in this case the file is archived twice (I get two files in the archiving directory) and the original file is not deleted from the source directory, so SAP XI keeps reading and archiving it over and over.
    Does anyone know any solution?
    Thanks.
    Antonio.

    Hi Antonio,
    Can you check whether those files have read-only attributes?
    If so, the adapter cannot delete (or archive) them from the directory.
    Regards,
    michal
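
    Following up on the read-only hint: a quick, generic way to spot files the adapter's operating-system user cannot remove is to scan the source directory for entries without write permission (the directory itself must also be writable for deletes to work). A minimal Python sketch; the directory path is a placeholder, not taken from the original post:

      # Illustrative sketch: list files the current OS user could not delete.
      # Replace SOURCE_DIR with the directory the sender file adapter polls.
      import os

      SOURCE_DIR = "/path/to/xi/source/dir"   # placeholder

      # Deleting a file requires write access on its parent directory.
      if not os.access(SOURCE_DIR, os.W_OK):
          print(f"Directory itself is not writable: {SOURCE_DIR}")

      for name in os.listdir(SOURCE_DIR):
          path = os.path.join(SOURCE_DIR, name)
          if os.path.isfile(path) and not os.access(path, os.W_OK):
              print(f"Read-only (cannot be deleted by this user): {path}")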

  • ABAP error in deleting archived messages

    Hello all,
    We seem to be having a problem when deleting archived messages.
    The archiving background jobs runs periodically (SAP_BC_XMB_ARCHIVE....).
    The first part works fine ARV_BC_XMB_WRP...., and i can view the list of the message ids that have been archived in files.
    The second bit ARV_BC_XMB_DEL... fails due to a runtime error "NO_DDIC_TYPE":
    check for dictionary type
      rel_name = me->get_relative_name( ).
      if me->is_ddic_type( ) = ABAP_FALSE or rel_name is initial.
        raise no_ddic_type.
      endif.
    Has anyone come across the above error?
    Object BC_XMB has been activated, but I have been unable to fill it.
    Any input will much appreciated.
    Thanks,
    Dimitris

    Hi Samanay,
    The fields included in the infostructure are Message ID, Pipeline ID, Message Version and Time Stamp.
    The values for Offset processing and for File name processing are both K.
    The key fields are Message ID, Pipeline ID and Message Version.
    I appreciate the effort.
    Dimitris

  • Archive jobs / Delete job

    I have some questions regarding Archiving / Deleting..
    1) What are the implications of not setting the 'DELETION' category parameters in 'Specific Configuration'?
    2) How do I delete the messages that are archived? Will 'Delete job' handle this also?
    3) Also, I archived some messages using the archiving job. When I try to look up a message using 'Archived XML Messages (Search Using Archive)', I get a list of archived messages, but when I click on any of them I get a "Could not find message in archive" message.

    hi,
    - did you activate SAP_BC_XMB info structure?
    - did you do all the configuration?
    http://help.sap.com/saphelp_nw04/helpdata/en/0e/80553b4d53273de10000000a114084/content.htm
    BTW
    you can also fill activated information structure retrospectively for already existing archives:
    http://help.sap.com/saphelp_nw04/helpdata/en/5c/11afaad55711d2b1f80000e8a5b9a5/content.htm
    Regards,
    michal
    XI FAQ - Frequently Asked Questions: /people/michal.krawczyk2/blog/2005/06/28/xipi-faq-frequently-asked-questions

  • Archive Job error

    Hi friends,
    I have completed the archiving configuration. In Define Interfaces for Archiving I entered one interface (the one I want to archive) and a retention period of 1 day under asynchronous XML messages, and scheduled an archiving job for the same day. After all this, I triggered one successful message. The job ARV_BC_XMB_WRP* gets cancelled with the error message "Error when accessing the archive data", and I cannot see any archive file in the physical path given in the configuration.
    Where does this message get archived?
    Could anyone tell me what the problem is and how to correct it?
    thanx,
    kumar

    Hi Sumit,
    Thanks for your reply.
    I got the message ID from the table, found the cancelled message in Moni, and it was archived when the job ran today.
    Cancelled messages are archived only if I maintain the following entry in Integration Engine Configuration -> Specific Configuration:
    Category: RUNTIME, Parameter: PERSIST_ARCH_MANUAL_CHANGES, Current value: 1
    But in this case every cancelled message gets archived, irrespective of the interfaces listed in Define Interfaces for Archiving, whereas I need to archive cancelled messages only for the interfaces defined there.
    To do this, I selected the "Manually cancelled Msgs" checkbox for the interface in Define Interfaces for Archiving, but it is not working.
    Please help me out on this again.
    Thanks,
    Kumar.

  • Chain of archive jobs ie write, delete & store

    Hi All,
    We are archiving technical objects. Currently the archiving write job is scheduled periodically, but after that the delete job and store job are executed manually. We are not using the standard functionality of chaining these jobs because the data volume is high, the write job creates a large number of archive files, and chaining the deletion and store jobs could lead to a large number of background jobs running simultaneously.
    Is there any way to limit the number of these jobs running simultaneously, e.g. to a maximum of 5 or 10? I have looked at program RSARCHD, but I do not understand how to schedule it so that it runs periodically after the write job. Please also let me know whether it can be used for the store job as well.
    Thanks in advance.
    Ankit

    Hi Ankit,
    I am not aware of a technique for limiting the number of simultaneous jobs, but (if you have chosen to start the delete jobs automatically) there is a way to start the delete jobs only after the write job is complete; otherwise a delete job normally starts as soon as an archive file reaches its maximum size or number of objects and a new archive file is created.
    To activate this, go to transaction AOBJ, double-click on the object (I assume you are trying to archive IDOC), and check the 'Do Not Start Before End of Write Phase' checkbox. Test this on a QA system to see whether it gives satisfactory results before trying it in production.
    hope this helps,
    Naveen

  • Dump during archive job runs -  TSV_TNEW_PAGE_ALLOC_FAILED (snote 1017000)

    Hi All,
    I'm currently having problems with the deletion/archiving process on one of my XI boxes. It has been configured to use the switch procedure, and the error happens during the TABLE_SWITCH step.
    Due to this error I get one dump per day, and the other jobs related to this task are failing.
        The dump says:
               TSV_TNEW_PAGE_ALLOC_FAILED
    I found note 1017000 for this issue and applied it successfully.
        According to the note, next step should be:
                 "To stop the archiving process, start the deletion process for the failed
                  archiving runs manually directly from the archive administration."
    but I'm not able to do that. I go to XMS_SATA with object name BC_XMB and try to run the "Delete" action, but I cannot continue because there is no archive file to select.
        Anyone with the same issue?
    Thanks in advance.

    Hi Prateek,
    Thanks for your help, but I don't see how this is related to XI; what I read is about IS-OIL and IS-MINE. I already found a note that helps with this issue, but I'm stuck on one of the steps and cannot cancel the copy process.
        Regards,
            Encinas

  • HELP! SAP Execute Delete Archiving Session failure

    Dear All,
    Could anyone give me opinions on SAP problems?
    When we archived object "FI_DOCUMNT" and deleted "Files" in transaction "SARA", the AIX file system "/Oarcle/R3P/oraarch" became full, SAP and the database stopped, and the delete step of one archiving session failed and cannot be retried. The SAP error message is below:
    "Text CCR 110000570022007002 ID 0001 language EN not found"
    Message no. TD600
    Diagnosis: you want to read a text which does not exist in the database (or update memory)
    system response: Reading could not be carried out.
    Procedure: you need to create this text:
    1.Initialization (module INIT_TEXT)
    2.Save (module SAVE_TEXT)
    In SAP -> Overview of Archiving, the icon of file "000092-0031FI_DOCUMNT" is a lightning bolt (THUNDER), not a green light.
    I captured screen for reference.
    [http://picasaweb.google.com.hk/lbiceman/SAP_error?feat=directlink]
    Please tell me how to delete the archiving session after the archive job has completed.

    Hi Alan,
    I have not faced this particular situation, so the following is just a thought; please try it with caution.
    Delete the cancelled job from the job overview; this may clear the lightning (THUNDER) icon on the deletion status of the archive file. You should then be able to manually re-run the delete job by going to SARA (object FI_DOCUMNT) -> Delete and choosing the file for which the deletion was incomplete.
    Hope this works,
    Naveen

  • XI Archiving: Restarting a terminated archived job.

    Hi All,
    Due to some constraints, our PI server (development) has only 9.5 GB of disk space for the archive directory. We have 300,000 messages flagged for archiving, and it seems the archiving job terminates with a DATASET_WRITE_ERROR, which we assume is caused by insufficient disk space in the archive folders.
    After moving the previous archive files out of the directory to reclaim disk space, we restarted the archive job. Will restarting allow the job to pick up from where it was terminated, or will it "restart" from the beginning?
    Has anyone experienced something similar before?

    Dear Lugman,
    the behaviour of the new archiving job depends on the configuration of archiving object BC_XMB (transaction AOBJ). If the option "Do Not Start Before End of Write Phase" is set (the default), the next archiving job will start from the very beginning; in this case all archive files written by the previous job can be deleted. If the option is deselected, the new archiving job will continue with the last archive file, and all files that were written successfully before the error occurred are safe.
    In general you might also want to increase the retention period temporarily. This reduces the number of messages to be archived in a single run and gives you better control over the data volume written to file.
    Best regards,
    Harald Keimer
    XI Development Support
    SAP AG, Walldorf

  • ERROR VNIC creation job failed

    Hello All,
    I have brought Oracle VM x86 Manager under Ops Center 12c control. When I try to create a new virtual machine it throws the error 'ERROR VNIC creation job failed'. Can anybody shed some light on this issue?
    Thanks in advance.
    The detailed log is:
    44:20 PM IST ERROR Exception occurred while running the task
    44:20 PM IST ERROR java.io.IOException: VNIC creation job failed
    44:20 PM IST ERROR VNIC creation job failed
    44:20 PM IST ERROR com.sun.hss.services.virtualization.guestservice.impl.OvmCreateVnicsTask.doRun(OvmCreateVnicsTask.java:116)
    44:20 PM IST ERROR com.sun.hss.services.virtualization.guestservice.impl.OvmAbstractTask.run(OvmAbstractTask.java:560)
    44:20 PM IST ERROR sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    44:20 PM IST ERROR sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    44:20 PM IST ERROR sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    44:20 PM IST ERROR java.lang.reflect.Method.invoke(Method.java:597)
    44:20 PM IST ERROR com.sun.scn.jobmanager.common.impl.TaskExecutionThread.run(TaskExecutionThread.java:194)
    Regards,
    george

    Hi friends,
    I managed to find the answer. Internally it has some indexes at the database level, and the indexes are still maintained in the shadow tables; those all need to be deleted. With our Basis team's help I have successfully deleted them and recreated the indexes.
    As Soorejkv said, SAP note 1283322 will help you understand the scenarios.
    Thank you all.
    Regards
    Ram

  • "unable to get printer status" and "server-error-not-accepting-jobs"

    I've got an Intel Mac Pro with Tiger. When I accepted the latest update from Apple, I lost all my defined printers! I've re-added a local USB printer and a networked Laserjet - fine. But I also added a Phaser 850 color printer (networked) which used to work fine. It connects ok, and opens the printer queue window, but when it tries to print, it says "Unable to get printer status (server-error-not-accepting-jobs)". The queue stays active (not stopped) but the jobs don't move. I've tried deleting the printer and re-making it - same deal. What's going on?
    Mike

    Hi Mike,
    One thing I really dislike about OS X is the rocket science needed just to print!
    Might try these two...
    Mac OS X: About the Reset Printing System feature ...
    http://support.apple.com/kb/HT1341?viewlocale=en_US
    Might try Printer Setup Repair 5.1...
    http://www.fixamac.net/software/index.html

  • Schedule archiving jobs

    Hi,
    We are trying to schedule periodically the archiving jobs for several objects.
    To be more specific, we are starting with the MM_EKKO archiving object and would like to chain the three steps: preprocessing, then write, then delete.
    Chaining delete after write is easy: in "Technical Settings" we just have to tick "Start Automatically" in the "Delete Jobs" parameters.
    But how do we automatically chain the write step after the preprocessing step?
    Obviously, I tried the usual options in SM36 and used the "after job" parameter. But what job name can I enter, since the preprocessing job name is generated automatically by SAP and changes with every run (the date and time are included in the name, e.g. ARV_MM_EKKO_SUB20110406163238)?
    Do I need to use an external third-party scheduler to do this?

    Serge,
    I have been researching your query and have put this question to some archiving experts at my end. Hopefully we will come up with a foolproof solution.
    In the meantime, if you have a sandbox or can do some research, please try the following:
    1. In SARA, under the TECHNICAL SETTINGS tab go to the VERIFY ARCHIVE FILES box.
    2. There, select the following: Before Deleting, Before Reading and Before Reloading.
    3. I think this BEFORE READING box could be a solution.
    4. And for deletion check the START AUTOMATICALLY option.
    Please try this and let us know. If it solves the problem, great; if not, I will reply back after hearing from the experts.
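
    Regarding the original question of chaining the write step after a preprocessing job whose name contains a generated timestamp: purely as an illustration of what an external scheduler script would do, here is a minimal Python sketch of the "poll until the generated job is finished, then start the next step" pattern. list_running_jobs() and start_write_job() are hypothetical hooks you would implement against your own job-monitoring interface (e.g. SM37/XBP or a third-party tool); this is an assumption, not an SAP-delivered mechanism.

      # Illustrative sketch only (not SAP standard): the pattern an external scheduler
      # could use to chain the write step after a preprocessing job whose generated name
      # starts with a known prefix. The two helper functions are hypothetical and must be
      # implemented against your own job-monitoring interface.
      import time

      JOB_PREFIX = "ARV_MM_EKKO_SUB"   # generated preprocessing job names start with this
      POLL_SECONDS = 60

      def list_running_jobs(prefix):
          """Hypothetical hook: return names of active jobs starting with prefix."""
          raise NotImplementedError

      def start_write_job():
          """Hypothetical hook: trigger the MM_EKKO write step."""
          raise NotImplementedError

      def chain_write_after_preprocessing():
          # Wait until no preprocessing job matching the prefix is still running,
          # then start the write step exactly once.
          while list_running_jobs(JOB_PREFIX):
              time.sleep(POLL_SECONDS)
          start_write_job()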

  • Switch or Simple procedure to  Delete the job?

    Hi
    I want to delete all the data that is less than 6 months old. I have changed the status from DEL to ARCH for the data that needs to be archived. I am really confused about which delete procedure to select, simple or switch.
    In my database there are more than 500,000 messages to be deleted.
    The simple deletion procedure deletes all XML messages flagged for deletion or archiving from the database tables, record by record.
    That is, it says it will also delete the records that are flagged for archiving.
    With the switch procedure, the table is copied to another table; I want to reduce the space used in my database.
    Kindly suggest how I can achieve my goal.
    Thanks
    Jayaraman

    Hi,
    Messages with status ARCH will not be deleted by the deletion job; only messages with status DEL are deleted by the deletion job RSXMB_DELETE_MESSAGES.
    The message status can be checked in the ITFACTION field of table SXMSPMAST.
    The type of deletion procedure, simple or table switch, has no effect on which messages are selected for deletion.
    Messages with the flag ARCH can be deleted only by the archive job. The archive job has two steps: it first archives those messages to the file system (RSXMB_ARCHIVE_MESSAGES) and then deletes them (RSXMB_DELETE_ARCHIVED_MESSAGES).
    Please check SAP Note 872388 - Troubleshooting Archiving and Deletion in PI for further assistance.
    Thanks,
    Francis

  • How to delete archive logs on the standby database....in 9i

    Hello,
    We are planning to setup a data guard (Maximum performance configuration ) between two Oracle 9i databases on two different servers.
    The archive logs on the primary server are deleted via an RMAN job based on a policy; I am just wondering how I should delete the archive logs that are shipped to the standby.
    Is putting a cron job on the standby to delete archive logs that are, say, 2 days old the proper approach, or is there a built-in Data Guard option that would automatically delete archive logs that are no longer needed or are two days old?
    thanks,
    C.

    From 10g there is an option to purge archive logs via a deletion policy once they have been applied on the standby; check this note:
    Configure RMAN to purge archivelogs after applied on standby [ID 728053.1]
    Since you are still on 9i, you need to schedule an RMAN job or a shell script to delete the archives.
    Before deleting archives:
    1) Check whether all the archives have been applied.
    2) Then remove all archives completed before 'sysdate-2':
    RMAN> delete archivelog all completed before 'sysdate-2';
    Adjust this as per your requirement.
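
    Purely as an illustration of the cron-job approach described above (the directory and file-name pattern are placeholders, not taken from the original post), a minimal Python sketch that a scheduled job on the standby could run after step 1 has confirmed the logs were applied:

      # Illustrative sketch: delete archive logs older than 2 days from the standby's
      # archive destination. ARCHIVE_DIR and PATTERN are placeholder assumptions;
      # verify that the logs have been applied on the standby before deleting anything.
      import glob
      import os
      import time

      ARCHIVE_DIR = "/u01/oradata/STBY/arch"   # placeholder: your log_archive_dest on the standby
      PATTERN = "*.arc"                        # placeholder: your archive log naming pattern
      MAX_AGE_SECONDS = 2 * 24 * 3600          # "completed before 'sysdate-2'"

      cutoff = time.time() - MAX_AGE_SECONDS
      for path in glob.glob(os.path.join(ARCHIVE_DIR, PATTERN)):
          if os.path.getmtime(path) < cutoff:
              print(f"Deleting {path}")
              os.remove(path)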

  • System error when archiving a photo

    Hi,
    I can store an employee picture successfully in R/3 (transaction OAOH). I can also see it there via Extract -> Display all facsimiles in PA30 for that employee.
    But I can't see the picture from ESS; only an empty frame appears. The URL of the picture is <R3 hostname>:8000, but the ITS service, i.e. the content server, is <R3 hostname>:8001. If I use 8001, the picture appears. I also can't store any picture from ESS: "System error when archiving a photo".
    I can delete the picture in table TOAHR from ESS by removing the employee picture, even though I can't see it.
    We have two cluster instances: 1) sapprddb 0 (the message server runs here) and 2) sapprdapp 1 (the R/3 application server).
    Please help...
    Thanks and Regards,
    Sekhar

    Hi,
    Thanks for your reply, but I have already gone through those steps. I can see the picture from R/3 and store it there; the problem is in ESS (portal). I can't store it from ESS because the address it stores to is <R3 hostname>:8000/fp/bc/contentserver..., whereas it should be <R3 hostname>:8001/fp/bc/contentserver...
    Please suggest where the portal takes this value from for the employee picture, so that I can set it to 8001. All other iViews are working fine. Telnet to 8000 is not possible, but 8001 is OK.
    Thanks,
    Sekhar
