Regarding force archiving

Dear all,
When I need to take a backup of the latest archive log file, which of the two commands below is better?
1. ALTER SYSTEM ARCHIVE LOG CURRENT;
2. ALTER SYSTEM SWITCH LOGFILE;
Do both commands give the same result?
Regards,
Charan

Hi Charan;
alter system switch logfile --> switches to the next logfile, irrespective of whether the database is in ARCHIVELOG or NOARCHIVELOG mode. If it is in ARCHIVELOG mode, the switch also causes the redo log that was switched out to be archived.
alter system archive log current --> switches the current log and archives it, along with all other unarchived logs. It can be issued only against a database running in ARCHIVELOG mode.
Source: Diff between switch logfile and archive log current
http://download.oracle.com/docs/cd/B10501_01/server.920/a96519/backup.htm
Regards,
Helios
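
A minimal SQL*Plus sketch of the difference, assuming a test database that is open and running in ARCHIVELOG mode (the v$ queries are just one way to observe the effect):
-- Requests a log switch only; archiving of the old log happens asynchronously via ARCn.
ALTER SYSTEM SWITCH LOGFILE;
-- Switches the current log AND waits until it, plus any older unarchived logs, have been archived.
ALTER SYSTEM ARCHIVE LOG CURRENT;
-- Observe the result: which online logs are marked ARCHIVED, and the highest archived sequence.
SELECT group#, sequence#, status, archived FROM v$log ORDER BY sequence#;
SELECT MAX(sequence#) AS last_archived_seq FROM v$archived_log;
Because ARCHIVE LOG CURRENT waits for the archiving to finish, it is usually the safer choice right before taking a backup of the latest archive log.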

Similar Messages

  • Appropriate forum to put queries regarding data archiving

    Hi All,
    Which one is the appropriate forum to put queries regarding data archiving?
    Thanks in advance.
    Vithalprasad

    Yes, you can use this forum; there is also a separate forum for Data Transfers.
    Regards,
    Altaf Shaikh

  • Forcefully archiving a logfile

    Can anyone help me with a command to forcefully archive my redo log file? I ran into trouble with my archiving process stopping, without my knowledge, on a database running in ARCHIVELOG mode. Now I get the error
    Connected.
    SQL> startup force
    ORACLE instance started.
    Total System Global Area 612368384 bytes
    Fixed Size 1249380 bytes
    Variable Size 276828060 bytes
    Database Buffers 327155712 bytes
    Redo Buffers 7135232 bytes
    Database mounted.
    ORA-16038: log 1 sequence# 53 cannot be archived
    ORA-19809: limit exceeded for recovery files
    ORA-00312: online log 1 thread 1: 'E:\ORADATA\ILEAD\REDO01.LOG'
    When I tried
    alter system archive log group 1
    I got
    SQL> alter system archive log group 1;
    alter system archive log group 1
    ERROR at line 1:
    ORA-16014: log 1 sequence# 53 not archived, no available destinations
    ORA-00312: online log 1 thread 1: 'E:\ORADATA\ILEAD\REDO01.LOG'
    The database is in Mounted Mode.
    Please help.
    I am running Oracle 10gR2 on Windows 2003 Server Enterprise Edition SP1.

    Why, then, would I have an entry in the alert log saying
    Thu Jul 24 13:49:12 2008
    Errors in file e:\admin\ilead\bdump\ilead_arc1_3896.trc:
    ORA-19809: limit exceeded for recovery files
    ORA-19804: cannot reclaim 47716352 bytes disk space from 2147483648 limit
    ARC1: Error 19809 Creating archive log file to 'E:\FLASH_RECOVERY_AREA\ILEAD\ARCHIVELOG\2008_07_24\O1_MF_1_53_U_.ARC'
    ARC1: Failed to archive thread 1 sequence 53 (19809)
    ARCH: Archival stopped, error occurred. Will continue retrying
    Thu Jul 24 13:49:12 2008
    Errors in file e:\admin\ilead\bdump\ilead_arc1_3896.trc:
    ORA-16038: log 1 sequence# 53 cannot be archived
    ORA-19809: limit exceeded for recovery files
    ORA-00312: online log 1 thread 1: 'E:\ORADATA\ILEAD\REDO01.LOG'
    Thu Jul 24 13:49:47 2008
    ARC0: Archiving not possible: No primary destinations
    ARC0: Failed to archive thread 1 sequence 53 (4)
    ARCH: Archival stopped, error occurred. Will continue retrying
    Thu Jul 24 13:49:47 2008
    Errors in file e:\admin\ilead\bdump\ilead_arc0_3892.trc:
    ORA-16014: log 1 sequence# 53 not archived, no available destinations
    ORA-00312: online log 1 thread 1: 'E:\ORADATA\ILEAD\REDO01.LOG'
    Thu Jul 24 13:50:47 2008
    ARC1: Archiving not possible: No primary destinations
    ARC1: Failed to archive thread 1 sequence 53 (4)
    Thu Jul 24 13:51:36 2008
    sorry for the long post.
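
    The errors above all point at a full flash recovery area (ORA-19809/ORA-19804). A minimal sketch of the usual way out, assuming 10gR2 defaults and an instance that is at least mounted; the 4G value is only a placeholder:
    -- Check how full the flash recovery area is.
    SELECT name, space_limit, space_used, space_reclaimable FROM v$recovery_file_dest;
    SELECT * FROM v$flash_recovery_area_usage;
    -- Option 1: raise the limit so ARCn can write archive logs again.
    ALTER SYSTEM SET db_recovery_file_dest_size = 4G SCOPE=BOTH;
    -- Option 2: free space by backing up and deleting archived logs from RMAN:
    --   RMAN> backup archivelog all delete input;
    -- Then retry the archiving and open the database.
    ALTER SYSTEM ARCHIVE LOG ALL;
    ALTER DATABASE OPEN;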

  • Exchange 2010 Migration of 200 Users/Force Archiving

    Exchange 2010 Outlook 2007
    Here's my situation.  We are migrating 200 mailboxes from our current domain which will be absorbed into another company/organization.  The new organization is putting limits on our users mailboxes of 100 Mb or 5000 items in order to make a smooth
    transition from our Exchange environment prior to them absorbing all of our mailboxes.  Of the 200 users a quarter of them are way over that limit.  We have sent out numerous emails informing the users (mostly offsite) to archive all of their emails
    and clear out the "deleted"/"sent" items.  Obviously no one has listened to our requests to get their mailboxes down to the limit the new organization has set for us.  I just recently migrated all of the users mailboxes to .pst's
    via a powershell script, but as you know, that only copies the data.  Is there a way via powershell to move or force all of the users data (sent/deleted/inbox) to an archive thus freeing up their mailboxes for the upcoming migration?  The new org
    will not approve the migration of any mailboxes over the 100 Mb/5000 item limit.  I'm not limiting a solution to powershell -- I will take any solution that is Microsoft-centric.  I will not be able to use third party tools or solutions due to the
    nature of my job. 

    Hi,
    Since the target domain has set a policy that limits the migration, I suggest asking the target domain to disable or expand the limitation for a while to complete the migration, if that is possible.
    Thanks
    Mavis
    Mavis Huang
    TechNet Community Support

  • Regarding data Archiving

    Hi All,
    My client needs to archive data in SAP 8.8 every year, having only 1 year completed. Can we archive data that is less than 3 years old?
    Thanks in Advance

    There will not be any technical problem if the archiving is done as per the instructions set by SAP (I suggest you go through the Data Archiving session from SAP). What I meant was that any data which is less than 3 years old may still be considered new data, and if it is archived you are left with no way to modify anything in it, as it becomes read-only. If the client is looking for extra space on the server through data archiving, you can suggest other steps as well. Again, as I said, it is the client's call after all.
    Regards
    johnson

  • Force archiving of error and access log

    Does anybody know of a way to force the archiving and restart of the errors and access logs in Directory Server 5.2 P4?

    eacardu wrote:
    Does anybody know of a way to force the archiving and restart of the errors and access logs in Directory Server 5.2 P4?
    You could create a script:
    cp /ds path/logs/access /archive/access.timestamp  # archive
    > /ds path/logs/access                             # clears the file
    cp /ds path/logs/errors /archive/errors.timestamp  # archive
    > /ds path/logs/errors                             # clears the file

  • Regarding workflow archiving

    Hi all,
    We are in the process of starting workflow archiving, and the scenario is as below:
    we are adding a dedicated application server to the existing setup due to the huge data volume,
    and planning to run the archiving jobs on that particular server only.
    We ran a couple of test jobs; the initial job triggers 2 jobs: 1 SUB job (ARV_IDOC_SUB, program RSARDISP)
    and 1 Write job (ARV_IDOC_WRI, program RSEXARCA).
    We are able to move the SUB job to the desired server, but not the Write job, which gets triggered
    after the SUB job.
    This Write job does the actual archiving and consumes more time and space.
    Is there any option to move only this Write job to the desired server? We also need the SUB job, which in turn updates
    the infostructure.
    Please suggest.
    Thanks
    Thirumalai

    Hi Thirumalai,
    You cannot assign archiving jobs directly to a particular server, but you can assign them to a server group. You can create a server group containing the app servers on which you would like to run the archiving jobs and then, in transaction SARA->Customizing->Cross-Archiving Object Customizing->Technical Settings, provide the server group name. I believe this technique only works for jobs released through SARA.
    On the question of updating infostructures: why do you need to run a job to update the infostructure repeatedly? Once the infostructure is activated, it will automatically be updated during the delete job.
    Hope this helps,
    Naveen

  • Regarding data archiving for SD objects.

    Hi All,
    Can you please help me to learn data archiving of SD objects.
    I am very new to SDN and this is my first thread.

    Hi Hariprasad,
    You can find a lot of information relating to archiving SD data in SAP Help.
    Here is the link you need
    Archiving SD data: http://help.sap.com/saphelp_erp2005vp/helpdata/en/d1/90963466880c30e10000009b38f83b/frameset.htm
    Hope this helps
    Cheers!
    Samanjay

  • Pb regarding SAP Archives

    Hi,
    I've a problem on R/3 4.6c production server.
    Sequence of archive is 100014 and name of file is PRDARCHARC00014_0544984730.001.
    The parameter log_archive_format is ARC%S_%R.%T.
    I think Oracle doesn't manage to build the %S with 6 digits and drops the first digit of the sequence.
    Is there a way to solve that ?
    Thank you,
    Alexandre

    Hi!
    Directly taken from the Oracle 10g online documentation. Take care of case sensitivity:
    LOG_ARCHIVE_FORMAT is applicable only if you are using the redo log in ARCHIVELOG mode. Use a text string and variables to specify the default filename format when archiving redo log files. The string generated from this format is appended to the string specified in the LOG_ARCHIVE_DEST parameter.
    The following variables can be used in the format:
    %s log sequence number
    %S log sequence number, zero filled
    %t thread number
    %T thread number, zero filled
    %a activation ID
    %d database ID
    %r resetlogs ID that ensures unique names are constructed for the archived log files across multiple incarnations of the database
    Using uppercase letters for the variables (for example, %S) causes the value to be fixed length and padded to the left with zeros. An example of specifying the archive redo log filename format follows:
    LOG_ARCHIVE_FORMAT = 'log%t_%s_%r.arc'
    Neither LOG_ARCHIVE_DEST nor LOG_ARCHIVE_FORMAT have to be complete file or directory specifiers themselves; they only need to form a valid file path after the variables are substituted into LOG_ARCHIVE_FORMAT and the two parameters are concatenated together.
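
    If the goal is simply to stop the sequence number from being truncated, one possible change is sketched below; it is only an assumption that this fits the SAP naming conventions (BRARCHIVE expectations should be checked first), and LOG_ARCHIVE_FORMAT is a static parameter, so it needs an spfile change plus an instance restart:
    -- Lowercase %s is not padded to a fixed width, so a 6-digit sequence is written out in full.
    ALTER SYSTEM SET log_archive_format = 'ARC%s_%r.%t' SCOPE=SPFILE;
    -- After the restart, verify:
    SHOW PARAMETER log_archive_format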

  • Question regarding flash archives

    Quick question to all
    Is there a maximum size a flash archive can be when being used by jumpstart over NFS?

    For Solaris10 11/06 Release:
    "The default copy method that is used when you create a Solaris Flash archive is the cpio utility. Individual file sizes cannot be over 4 Gbytes. If you have large individual files, you can create an archive with the pax copy method. The flarcreate command with the -L pax option uses the pax utility to create an archive without limitations on individual file sizes. Individual file sizes can be greater than 4 Gbytes."
    http://docs.sun.com/app/docs/doc/819-6398
    This info for other versions/releases will be in install guide.
    John

  • To Karl Petersen: Regarding Another Archived Question

    Hello Mr. Petersen:
    I have been reading your posts on this topic, and I have a question regarding combining many frames into a QT movie. I do not have QT Pro, and am trying to do this through iMovie 6. Here is the deal:
    I have a series (about 2800) of frames stored in a folder on my hard drive, which I would like to make into a .mov file. They are currently BNP files. I imported them in order into iMovie in the timeline, and now they are showing up individually as five-second clips. I will attach a screenshot of the timeline. When double-clicking on each clip, I can't adjust the time. I am trying to make each clip into one frame of a movie.
    Thank you so much.
    C h r i s t o p h
    Here is the photo of the timeline:

    Duration Slider? When I double click on the image I can't change the time. Is there a different way?
    C h r i s t o p h

  • Skipping archive logs

    Hi All,
    I have a question regarding oracle archive log configuration .
    My DB is : Ora10gR2
    Unix : HPUX
    To support Data Guard functionality, the DBA has put the DB in ARCHIVELOG mode with force logging = YES.
    1* select LOG_MODE,FORCE_LOGGING from v$database
    SQL> /
    LOG_MODE FORCE_LOGGING
    ARCHIVELOG YES
    Now I have a table called PARAMETER into which I need to load around 700 million records. Since the DB is in force logging mode, this will generate a lot of redo and the load will take a long time as well.
    Is there any option to keep a table in NOLOGGING mode, even if the DB is in force logging mode?
    Thanks

    am_73798 wrote:
    Is there any option to keep a table in NOLOGGING mode, even if the DB is in force logging mode?
    Hi,
    No, there is no option to keep a table in NOLOGGING mode if the DB is in force logging mode.
    Regards
    Anurag
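
    A small sketch that illustrates the point; the table name comes from the post above, and the NO FORCE LOGGING statement is shown only to make clear where the override lives (disabling it would break the Data Guard setup, so it is normally not an option):
    -- Accepted, but redo is still generated in full while FORCE LOGGING is enabled.
    ALTER TABLE parameter NOLOGGING;
    -- The override sits at database level; only this would make NOLOGGING effective again:
    -- ALTER DATABASE NO FORCE LOGGING;
    SELECT log_mode, force_logging FROM v$database;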

  • Archive Log vs Full Backup Concept

    Hi,
    I just need some clarification on how backups and archive logs work. Let's say that starting at 1PM I have archive logs 1,2,3,4,5 and then I perform a full backup at 6PM.
    Then I resume generating archive logs at 6PM to get logs 6,7,8,9,10. I then stop at 11PM.
    If my understanding is correct, the archive logs should allow me to restore oracle to a point in time anywhere between 1PM and 11PM. But if I only have the full backup then I can only restore to a single point, which is 6PM. Is my understanding correct?
    Do the archive logs only get applied to the datafiles when the backup occurs or only when a restore occurs? It doesn't seem like the archive logs get applied on the fly.
    Thanks in advance.
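
    Roughly, yes: with the 6PM full backup plus archived logs 6-9 you could, for example, bring the database back to 9PM. A minimal RMAN sketch of such a point-in-time restore, with a purely hypothetical date/time standing in for "9PM":
    RMAN> startup force mount;
    RMAN> run {
            set until time "to_date('2011-11-06 21:00:00','yyyy-mm-dd hh24:mi:ss')";
            restore database;
            recover database;
          }
    RMAN> alter database open resetlogs;
    The archived logs are not applied to the datafiles on the fly; they are only read back during RECOVER (or by a standby database), which is what the reply below walks through in detail.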

    thelok wrote:
    Thanks for the great explanation! So I can do a point in time restore from any time since the datafiles have last been written (or from when I have the last set of backed up datafiles plus the archive logs). From what you are saying, I can force the datafiles to be written from the redo logs (by doing a checkpoint with "alter system archive log current" or "backup database plus archivelog"), and then I can delete all the archive logs that have an SCN less than the checkpoint SCN on the datafiles. Is this true? This would be for the purposes of preserving disk space.
    Hi,
    See this example. I hope this explain your doubt.
    # My current date is 06-11-2011 17:15
    # I not have backup of this database
    # My retention policy is to have 1 backup
    # I start listing  archive logs.
    RMAN> list archivelog all;
    using target database control file instead of recovery catalog
    List of Archived Log Copies
    Key     Thrd Seq     S Low Time            Name
    29      1    8       A 29-10-2011 12:01:58 +HR/dbhr/archivelog/2011_10_31/thread_1_seq_8.399.766018837
    30      1    9       A 31-10-2011 23:00:30 +HR/dbhr/archivelog/2011_11_03/thread_1_seq_9.409.766278025
    31      1    10      A 03-11-2011 23:00:23 +HR/dbhr/archivelog/2011_11_04/thread_1_seq_10.391.766366105
    32      1    11      A 04-11-2011 23:28:23 +HR/dbhr/archivelog/2011_11_06/thread_1_seq_11.411.766516065
    33      1    12      A 05-11-2011 23:28:49 +HR/dbhr/archivelog/2011_11_06/thread_1_seq_12.413.766516349
    ## See I have archive logs from time "29-10-2011 12:01:58" until "05-11-2011 23:28:49" but I dont have any backup of database.
    # So I perfom backup of database including archive logs.
    RMAN> backup database plus archivelog delete input;
    Starting backup at 06-11-2011 17:15:21
    ## Note above that RMAN forces an archive of the current log; the archivelog generated here would be usable only with a previous backup.
    ## That is not my case... I don't have a backup of the database.
    current log archived
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=159 devtype=DISK
    channel ORA_DISK_1: starting archive log backupset
    channel ORA_DISK_1: specifying archive log(s) in backup set
    input archive log thread=1 sequence=8 recid=29 stamp=766018840
    input archive log thread=1 sequence=9 recid=30 stamp=766278027
    input archive log thread=1 sequence=10 recid=31 stamp=766366111
    input archive log thread=1 sequence=11 recid=32 stamp=766516067
    input archive log thread=1 sequence=12 recid=33 stamp=766516350
    input archive log thread=1 sequence=13 recid=34 stamp=766516521
    channel ORA_DISK_1: starting piece 1 at 06-11-2011 17:15:23
    channel ORA_DISK_1: finished piece 1 at 06-11-2011 17:15:38
    piece handle=+FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171521_0.268.766516525 tag=TAG20111106T171521 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:16
    channel ORA_DISK_1: deleting archive log(s)
    archive log filename=+HR/dbhr/archivelog/2011_10_31/thread_1_seq_8.399.766018837 recid=29 stamp=766018840
    archive log filename=+HR/dbhr/archivelog/2011_11_03/thread_1_seq_9.409.766278025 recid=30 stamp=766278027
    archive log filename=+HR/dbhr/archivelog/2011_11_04/thread_1_seq_10.391.766366105 recid=31 stamp=766366111
    archive log filename=+HR/dbhr/archivelog/2011_11_06/thread_1_seq_11.411.766516065 recid=32 stamp=766516067
    archive log filename=+HR/dbhr/archivelog/2011_11_06/thread_1_seq_12.413.766516349 recid=33 stamp=766516350
    archive log filename=+HR/dbhr/archivelog/2011_11_06/thread_1_seq_13.414.766516521 recid=34 stamp=766516521
    Finished backup at 06-11-2011 17:15:38
    ## RMAN finish backup of Archivelog and Start Backup of Database
    ## My backup start at "06-11-2011 17:15:38"
    Starting backup at 06-11-2011 17:15:38
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting full datafile backupset
    channel ORA_DISK_1: specifying datafile(s) in backupset
    input datafile fno=00001 name=+HR/dbhr/datafile/system.386.765556627
    input datafile fno=00003 name=+HR/dbhr/datafile/sysaux.396.765556627
    input datafile fno=00002 name=+HR/dbhr/datafile/undotbs1.393.765556627
    input datafile fno=00004 name=+HR/dbhr/datafile/users.397.765557979
    input datafile fno=00005 name=+BFILES/dbhr/datafile/bfiles.257.765542997
    channel ORA_DISK_1: starting piece 1 at 06-11-2011 17:15:39
    channel ORA_DISK_1: finished piece 1 at 06-11-2011 17:16:03
    piece handle=+FRA/dbhr/backupset/2011_11_06/nnndf0_tag20111106t171539_0.269.766516539 tag=TAG20111106T171539 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:24
    Finished backup at 06-11-2011 17:16:03
    ## And it finishes at "06-11-2011 17:16:03", so I can recover my database from this time.
    ## I will need the archivelogs (transactions) which were generated during the backup of the database.
    ## Note that during the backup some blocks are copied before others; the datafiles are in an inconsistent SCN state.
    ## To make them consistent I need to apply the archivelogs, which have all the transactions recorded.
    ## Starting another backup of archived log generated during backup.
    Starting backup at 06-11-2011 17:16:04
    ## So RMAN automatically forces another "checkpoint" after the backup finishes,
    ## archiving the current log, because this archivelog has all the transactions needed to bring the database to a consistent state.
    current log archived
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting archive log backupset
    channel ORA_DISK_1: specifying archive log(s) in backup set
    input archive log thread=1 sequence=14 recid=35 stamp=766516564
    channel ORA_DISK_1: starting piece 1 at 06-11-2011 17:16:05
    channel ORA_DISK_1: finished piece 1 at 06-11-2011 17:16:06
    piece handle=+FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171604_0.272.766516565 tag=TAG20111106T171604 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
    channel ORA_DISK_1: deleting archive log(s)
    archive log filename=+HR/dbhr/archivelog/2011_11_06/thread_1_seq_14.414.766516565 recid=35 stamp=766516564
    Finished backup at 06-11-2011 17:16:06
    ## Note: I can recover my database from time "06-11-2011 17:16:03" (full backup finished)
    ## until "06-11-2011 17:16:04" (last archivelog generated); that is my recovery window in this scenario.
    ## Listing Backup I have:
    ## Archive Logs in backupset before backup full start - *BP Key: 40*
    ## Backup Full database in backupset - *BP Key: 41*
    ##  Archive Logs in backupset after backup full stop - *BP Key: 42*
    RMAN> list backup;
    List of Backup Sets
    ===================
    BS Key  Size       Device Type Elapsed Time Completion Time
    40      196.73M    DISK        00:00:15     06-11-2011 17:15:37
            *BP Key: 40*   Status: AVAILABLE  Compressed: NO  Tag: TAG20111106T171521
            Piece Name: +FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171521_0.268.766516525
      List of Archived Logs in backup set 40
      Thrd Seq     Low SCN    Low Time            Next SCN   Next Time
      1    8       766216     29-10-2011 12:01:58 855033     31-10-2011 23:00:30
      1    9       855033     31-10-2011 23:00:30 896458     03-11-2011 23:00:23
      1    10      896458     03-11-2011 23:00:23 937172     04-11-2011 23:28:23
      1    11      937172     04-11-2011 23:28:23 976938     05-11-2011 23:28:49
      1    12      976938     05-11-2011 23:28:49 1023057    06-11-2011 17:12:28
      1    13      1023057    06-11-2011 17:12:28 1023411    06-11-2011 17:15:21
    BS Key  Type LV Size       Device Type Elapsed Time Completion Time
    41      Full    565.66M    DISK        00:00:18     06-11-2011 17:15:57
            *BP Key: 41*   Status: AVAILABLE  Compressed: NO  Tag: TAG20111106T171539
            Piece Name: +FRA/dbhr/backupset/2011_11_06/nnndf0_tag20111106t171539_0.269.766516539
      List of Datafiles in backup set 41
      File LV Type Ckp SCN    Ckp Time            Name
      1       Full 1023422    06-11-2011 17:15:39 +HR/dbhr/datafile/system.386.765556627
      2       Full 1023422    06-11-2011 17:15:39 +HR/dbhr/datafile/undotbs1.393.765556627
      3       Full 1023422    06-11-2011 17:15:39 +HR/dbhr/datafile/sysaux.396.765556627
      4       Full 1023422    06-11-2011 17:15:39 +HR/dbhr/datafile/users.397.765557979
      5       Full 1023422    06-11-2011 17:15:39 +BFILES/dbhr/datafile/bfiles.257.765542997
    BS Key  Size       Device Type Elapsed Time Completion Time
    42      3.00K      DISK        00:00:02     06-11-2011 17:16:06
            *BP Key: 42*   Status: AVAILABLE  Compressed: NO  Tag: TAG20111106T171604
            Piece Name: +FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171604_0.272.766516565
      List of Archived Logs in backup set 42
      Thrd Seq     Low SCN    Low Time            Next SCN   Next Time
      1    14      1023411    06-11-2011 17:15:21 1023433    06-11-2011 17:16:04
    ## Here is where what I am trying to explain makes sense.
    ## As I don't have a backup of the database older than my last backup, all archivelogs generated before that full backup are useless.
    ## Deleting what is obsolete in my environment, RMAN chooses backupset 40 (i.e. all archived logs generated before my full backup).
    RMAN> delete obsolete;
    RMAN retention policy will be applied to the command
    RMAN retention policy is set to redundancy 1
    using channel ORA_DISK_1
    Deleting the following obsolete backups and copies:
    Type                 Key    Completion Time    Filename/Handle
    *Backup Set           40*     06-11-2011 17:15:37
      Backup Piece       40     06-11-2011 17:15:37 +FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171521_0.268.766516525
    Do you really want to delete the above objects (enter YES or NO)? yes
    deleted backup piece
    backup piece handle=+FRA/dbhr/backupset/2011_11_06/annnf0_tag20111106t171521_0.268.766516525 recid=40 stamp=766516523
    Deleted 1 objects
    In the above example, I could have run "delete archivelog all" before starting the backup, because those logs would not be needed, but to show the example I followed this unnecessary way (back up the archivelogs and delete them afterwards).
    Regards,
    Levi Pereira

  • PI 7.11  AAE- Archiving and deletion

    Dear All,
    I just want to confirm my understanding regarding AAE archiving and deletion.
    We have 98% of interfaces using only AAE in our landscape.
    I have set up XMLDAS archiving for AAE (after fixing quite a few issues), which archives the AAE messages to the local file system. So no issues there.
    Now my requirements are,
    1) I would like to keep only 7 days of data in the database and 100 days of data in the file system
    2) I do not want to set up any rules for Archiving/Deletion. That means I would like to archive all messages older than 7 days to the file system.
    The steps are ( My understanding )
      1) Set the persistence duration in NWA to 7 days, so only 7 days of data will be stored in table BC_MSG.
      2) Set up the archival job in background processing through RWB without ANY RULES, so all processed messages older than 7 days will be archived to the file system. The archive job comes with the default delete procedure as well, hence archived messages will be deleted from table BC_MSG.
      3) Deactivate the delete job. If I have an active delete job in background processing, it will delete all messages older than 7 days, which I don't want. The deletion of messages from BC_MSG will be taken care of as part of the archival job.
    Is my understanding correct?
    Thanks
    Rajesh

    Hello, does anyone know what they did to delete the files from the file system after 100 days?
    We have the same problem: we deleted some messages manually from the OS (AIX), but they still appear in the RWB, and when we click on them we get an error:  ...... java.lang.Exception: XML DAS GET command failed with ErrorCode: 598; ErrorText:  598 GET: Error while accessing resource; response from archive store: 598 I/O Error java.io.FileNotFoundException: /usr/sap/PID/javaarch/archive/usr/sap/pid/x
    Is there any way to delete messages without generating errors / inconsistencies?

  • How to use the FOR ALL ENTRIES clause while fetching data from archived tables

    How do I use the FOR ALL ENTRIES clause while fetching data from archived tables using the FM
    '/PBS/SELECT_INTO_TABLE'?
    I need to fetch data from an Archived table for all the entries in an internal table.
    Kindly provide some inputs for the same.
    thanks n Regards
    Ramesh

    Hi Ramesh,
    I have a query regarding accessing archived data through PBS.
    I have archived SAP FI data ( Object FI_DOCUMNT) using SAP standard process through TCODE : SARA.
    Now please tell me, can I access this archived data through the PBS add-on FM '/PBS/SELECT_INTO_TABLE'?
    Do I need to do something else to access data archived through the SAP standard process or not? If yes, then please tell me, as I am not able to get the data using the above FM.
    The call to the above FM is as follows :
    CALL FUNCTION '/PBS/SELECT_INTO_TABLE'
      EXPORTING
        archiv           = 'CFI'
        OPTION           = ''
        tabname          = 'BKPF'
        SCHL1_NAME       = 'BELNR'
        SCHL1_VON        = belnr-low
        SCHL1_BIS        = belnr-low
        SCHL2_NAME       = 'GJAHR'
        SCHL2_VON        = gjahr-low
        SCHL2_BIS        = gjahr-low
        SCHL3_NAME       = 'BUKRS'
        SCHL3_VON        = bukrs-low
        SCHL3_BIS        = bukrs-low
    *   SCHL4_NAME       =
    *   SCHL4_VON        =
    *   SCHL4_BIS        =
        CLR_ITAB         = 'X'
    *   MAX_ZAHL         =
      TABLES
        i_tabelle        = t_bkpf
    *   SCHL1_IN         =
    *   SCHL2_IN         =
    *   SCHL3_IN         =
    *   SCHL4_IN         =
      EXCEPTIONS
        EOF              = 1
        OTHERS           = 2.
    It gives me the following error :
    Index for table not supported ! BKPF BELNR.
    Please help ASAP.
    Thanks and Regards
    Gurpreet Singh
