Clarification in Archiving

Hi,
We are currently on 4.6C and are planning to carry out archiving for the first time.
I would like to know whether there is any transaction code for archiving other than SARA, and whether this is the job of functional, Basis, or technical consultants.
Thanks
Suresh.

Hello SS,
Data Archiving – a service provided by mySAP Technology – removes mass data from the database that the system no longer needs online, but which must still be accessible at a later date if required. Archiving objects are used to write documents to archive files, which can be stored on other media.
Data in the database can only be archived using archiving objects, which describe the data structure and context.
Financial Accounting documents are archived using the archiving object FI_DOCUMNT. It includes the document header, company code-dependent postings, change documents, SAPscript texts, and other elements.
Integration: The SAP Data Archiving concept is based on the Archive Development Kit (ADK). The ADK provides the technical basis for the archiving transaction (SARA). To call the archiving transaction, choose Tools → Administration → Management → Data Archiving, or call it directly from the application component. If Archive Administration is called from the application component, application-specific parameters, such as programs and archiving objects, are activated automatically.
Archiving objects for each application component are predefined in the system. Their structures are described in the application-specific sections.
Features: The archiving procedure is divided into three main steps:
Creation of archive files: The data to be archived is written sequentially to a newly created archive file.
Storage of archive files: The newly created archive files can then be moved to a storage system or copied to tape. Removal to an external storage system can be triggered manually or automatically.
Deletion from the database: The delete program reads the data from the archive files and then deletes it from the database.
You can schedule archiving programs as background tasks or run them during normal online operations.
There is no other transaction code for archiving. It is primarily a technical/Basis job, with functional consultants providing input on which data to archive from their respective modules, where needed.
Hope I have been able to help you. Please assign points.
Rgds
Manish

Similar Messages

  • Archiving in SAP

    Hello All,
I need a small clarification regarding archiving. Suppose on a particular day we stopped archiving by removing the server details in OAC0; is there a way to collect all the unarchived documents from that day and archive them on some other day?

    Hi,
I don't think you should have any problems if you stop archiving and commence it at a later date.
    It very well depends on which archiving object you are using and what kind of documents you are archiving.
    Perhaps if you provide more details I can give you some more feedback.
    Regards,
    Chandra

  • PM Notification archiving -clarification -reg

    HI,
While planning for PM notification archiving, in the preprocessing step of transaction SARA we don't find any reference date.
We need to archive notifications that were changed more than two years ago.
We can take the details from tables like QMEL and input them in SARA, but going forward we need to schedule background jobs for notification archiving.
Do we need to do customization somewhere to get a date option in the preprocessing step?
Currently we have only:
    notification type
    notification number
    equipment
    functional location
whereas in the case of PM orders (object PM_ORDER) we have the DLFL date or change date as a reference.
Please suggest.
    regards,
    Madhu Kiran

    Hi,
    For PM Orders and Notifications two flags need to be set: deletion flag and deletion indicator.
    For PM Order, the pre-processing variant can set both: deletion flag and deletion indicator.
For PM Notifications, the pre-processing variant can only set the second one: the deletion indicator. Hence, you can either manually set the deletion flag or create a custom program to set this deletion flag for you. Then SAP will analyse (in the pre-processing program) those notifications flagged for deletion and set the deletion indicator if possible.
    After this, the write archiving program will pick up those notifications with deletion indicator set and archive them.
    Hope this makes sense now.

  • Archive and Install Time? Clarification?

    I have several symptoms going on simultaneously that lead me to think that I need to try an Archive and Install.
    1. Mail loses all settings and data on a daily basis.
    2. Trying to install Quicken via CD that is known to be good gives bad disk error message.
    3. Trying to transfer files from CD-R burned on another PB running same OS gives file error message and won't allow transfer of some files.
    4. Found an iPhoto file that was consuming almost 80 gigs of disk space. How and why?
    5. During start-up, when the gray Apple start-up screen appears, there is an irregular gray line about 3 inches long that appears above the Apple. It isn't present when I start up from the Install Disk, and disappears when the desktop appears.
    Repaired permissions, repaired disk using disk utility, performed long hardware test. Am I missing something?
If I try an Archive and Install, will that preserve my data and files? System settings, yes, but what about my stuff? I suppose that means I need to reinstall any third-party software?

Daniel is almost right. What an Archive and Install does is to move your entire existing system into a Previous System Folder at the root directory of the hard drive. It then installs a fresh copy of OS X. All your files are preserved. If you use the option to preserve user and preference settings they will be moved into the new system. All your installed applications that are in the Applications folder will be moved into the new system. However, some applications store information in the /Library/Application Support/ folder. These files will remain in the Previous System Folder and will have to be moved to the new system manually.
    If you do an archive and install be sure you first repair the hard drive. Do not attempt an archive and install unless the hard drive has been verified as OK.

  • Archiving-Need clarification

Hi, I would like to know what exactly the difference is between archiving a material ledger document and a material ledger index. There are two different objects available for these. What is the difference, and if I archive the material ledger index, will the ledger document be impacted?

    hi,
    check these links...
    http://www.sap-img.com/basis/delete-remove-of-idocs-from-r3.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/0b/5d193ad0337142e10000000a11402f/content.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/b4/fb3340284e7a56e10000000a1550b0/content.htm

  • Clarification on Data Guard(Physical Standyb db)

    Hi guys,
    I have been trying to setup up Data Guard with a physical standby database for the past few weeks and I think I have managed to setup it up and also perform a switchover. I have been reading a lot of websites and even Oracle Docs for this.
    However I need clarification on the setup and whether or not it is working as expected.
    My environment is Windows 32bit (Windows 2003)
    Oracle 10.2.0.2 (Client/Server)
    2 Physical machines
    Here is what I have done.
    Machine 1
    1. Create a primary database using standard DBCA, hence the Oracle service(oradgp) and password file are also created along with the listener service.
    2. Modify the pfile to include the following:-
    oradgp.__db_cache_size=436207616
    oradgp.__java_pool_size=4194304
    oradgp.__large_pool_size=4194304
    oradgp.__shared_pool_size=159383552
    oradgp.__streams_pool_size=0
    *.audit_file_dest='M:\oracle\product\10.2.0\admin\oradgp\adump'
    *.background_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\bdump'
    *.compatible='10.2.0.3.0'
    *.control_files='M:\oracle\product\10.2.0\oradata\oradgp\control01.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control02.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control03.ctl'
    *.core_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='oradgp'
    *.db_recovery_file_dest='M:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=21474836480
    *.fal_client='oradgp'
    *.fal_server='oradgs'
    *.job_queue_processes=10
    *.log_archive_dest_1='LOCATION=E:\ArchLogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=oradgp'
    *.log_archive_dest_2='SERVICE=oradgs LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradgs'
    *.log_archive_format='ARC%S_%R.%T'
    *.log_archive_max_processes=30
    *.nls_territory='IRELAND'
    *.open_cursors=300
    *.pga_aggregate_target=203423744
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=612368384
    *.standby_file_management='auto'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\udump'
    *.service_names=oradgp
The locations on the hard disk are all available and archived redo logs are created (e:\archlogs).
    3. I then add the necessary (4) standby logs on primary.
    4. To replicate the db on the machine 2(standby db), I did an RMAN backup as:-
RMAN> run
{allocate channel d1 type disk format='M:\DGBackup\stby_%U.bak';
backup database plus archivelog delete input;}
    5. I then copied over the standby~.bak files created from machine1 to machine2 to the same directory (M:\DBBackup) since I maintained the directory structure exactly the same between the 2 machines.
    6. Then created a standby controlfile. (At this time the db was in open/write mode).
    7. I then copied this standby ctl file to machine2 under the same directory structure (M:\oracle\product\10.2.0\oradata\oradgp) and replicated the same ctl file into 3 different files such as: CONTROL01.CTL, CONTROL02.CTL & CONTROL03.CTL
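(For reference, the standby controlfile in step 6 is typically created with a command like the one below; a sketch, with a hypothetical output path:
-- output path is hypothetical
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS 'M:\DGBackup\stby.ctl';)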
    Machine2
    8. I created an Oracle service called the same as primary (oradgp).
    9. Created a listener also.
    9. Set the Oracle Home & SID to the same name as primary (oradgp) <<<-- I am not sure about the sid one.
    10. I then copied over the pfile from the primary to standby and created an spfile with this one.
    It looks like this:-
    oradgp.__db_cache_size=436207616
    oradgp.__java_pool_size=4194304
    oradgp.__large_pool_size=4194304
    oradgp.__shared_pool_size=159383552
    oradgp.__streams_pool_size=0
    *.audit_file_dest='M:\oracle\product\10.2.0\admin\oradgp\adump'
    *.background_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\bdump'
    *.compatible='10.2.0.3.0'
    *.control_files='M:\oracle\product\10.2.0\oradata\oradgp\control01.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control02.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control03.ctl'
    *.core_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='oradgp'
    *.db_recovery_file_dest='M:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=21474836480
    *.fal_client='oradgs'
    *.fal_server='oradgp'
    *.job_queue_processes=10
    *.log_archive_dest_1='LOCATION=E:\ArchLogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=oradgs'
    *.log_archive_dest_2='SERVICE=oradgp LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradgp'
    *.log_archive_format='ARC%S_%R.%T'
    *.log_archive_max_processes=30
    *.nls_territory='IRELAND'
    *.open_cursors=300
    *.pga_aggregate_target=203423744
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=612368384
    *.standby_file_management='auto'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\udump'
    *.service_names=oradgs
    log_file_name_convert='junk','junk'
11. Use RMAN to restore the db as:-
    RMAN> startup mount;
    RMAN> restore database;
    Then RMAN created the datafiles.
    12. I then added the same number (4) of standby redo logs to machine2.
13. Also added a tempfile. Though the temp tablespace was created as part of the restore via RMAN, I think the actual file (temp01.dbf) didn't get created, so I manually created the tempfile.
    14. Ensuring the listener and Oracle service were running and that the database on machine2 was in MOUNT mode, I then started the redo apply using:-
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    It seems to have started the redo apply as I've checked the alert log and noticed that the sequence# was all "YES" for applied.
    ****However I noticed that in the alert log the standby was complaining about the online REDO log not being present****
    So copied over the REDO logs from the primary machine and placed them in the same directory structure of the standby.
    ########Q1. I understand that the standby database does not need online REDO Logs but why is it reporting in the alert log then??########
    I wanted to enable realtime apply so, I cancelled the recover by :-
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    and issued:-
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
    This too was successful and I noticed that the recovery mode is set to MANAGED REAL TIME APPLY.
    Checked this via the primary database also and it too reported that the DEST_2 is in MANAGED REAL TIME APPLY.
Also performed a log switch on primary and it got transported to the standby and was applied (YES).
    Also ensured that there are no gaps via some queries where no rows were returned.
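(One such check, as a sketch: the standard V$ARCHIVE_GAP view should return no rows when the standby has no gap:
SQL> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;)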
    15. I now wanted to perform a switchover, hence issued:-
    Primary_SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
    All the archivers stopped as expected.
    16. Now on machine2:
    Stdby_SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
    17. On machine1:
    Primary_Now_Standby_SQL>SHUTDOWN IMMEDIATE;
    Primary_Now_Standby_SQL>STARTUP MOUNT;
    Primary_Now_Standby_SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
    17. On machine2:
    Stdby_Now_Primary_SQL>ALTER DATABASE OPEN;
    Checked by switching the logfile on the new primary and ensured that the standby received this logfile and was applied (YES).
    However, here are my questions for clarifications:-
    Q1. There is a question about ONLINE REDO LOGS within "#" characters.
    Q2. Do you see me doing anything wrong in regards to naming the directory structures? Should I have renamed the dbname directory in the Oracle Home to oradgs rather than oradgp?
    Q3. When I enabled real time apply does that mean, that I am not in 'MANAGED' mode anymore? Is there an un-managed mode also?
    Q4. After the switchover, I have noticed that the MRP0 process is "APPLYING LOG" status to a sequence# which is not even the latest sequence# as per v$archived_log. By this I mean:-
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
    MRP0 APPLYING_LOG 1 47 452 1024000
    but :
    SQL> select max(sequence#) from v$archived_log;
    46
    Why is that? Also I have noticed that one of the sequence#s is NOT applied but the later ones are:-
    SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
    42 NO
    43 YES
    44 YES
    45 YES
    46 YES
    What could be the possible reasons why sequence# 42 didn't get applied but the others did?
After reading several documents I am confused at this stage, because I have read that you can set up standby databases using 'standby' logs; is there another method without using standby logs?
    Q5. The log switch isn't happening automatically on the primary database where I could see the whole process happening on it own, such as generation of a new logfile, that being transported to the standby and then being applied on the standby.
    Could this be due to inactivity on the primary database as I am not doing anything on it?
    Sorry if I have missed out something guys but I tried to put in as much detail as I remember...
    Thank you very much in advance.
    Regards,
    Bharath

    Parameters:
    Missing on the Primary:
    DB_UNIQUE_NAME=oradgp
    LOG_ARCHIVE_CONFIG=DG_CONFIG=(oradgp, oradgs)
    Missing on the Standby:
    DB_UNIQUE_NAME=oradgs
    LOG_ARCHIVE_CONFIG=DG_CONFIG=(oradgp, oradgs)
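As a sketch, assuming an SPFILE is in use, these could be set like this (DB_UNIQUE_NAME is a static parameter, so it needs SCOPE=SPFILE and a restart):
-- on the Primary; use DB_UNIQUE_NAME='oradgs' on the Standby
SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(oradgp,oradgs)' SCOPE=BOTH;
SQL> ALTER SYSTEM SET DB_UNIQUE_NAME='oradgp' SCOPE=SPFILE;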
    You said: Also added a tempfile though the temp tablespace was created per the restore via RMAN, I think the actual file (temp01.dbf) didn't get created, so I manually created the tempfile.
    RMAN should have also added the temp file. Note that as of 11g RMAN duplicate for standby will also add the standby redo log files at the standby if they already existed on the Primary when you took the backup.
    You said: ****However I noticed that in the alert log the standby was complaining about the online REDO log not being present****
That is just the weird error that the RDBMS returns when the database tries to find the online redo log files. You see it at the start of the MRP because it tries to open them, and if it gets the error it will create them based on their file definition in the controlfile, combined with LOG_FILE_NAME_CONVERT if they are in a different place from the Primary.
    Your questions (Q1 answered above):
    You said: Q2. Do you see me doing anything wrong in regards to naming the directory structures? Should I have renamed the dbname directory in the Oracle Home to oradgs rather than oradgp?
    Up to you. Not a requirement.
    You said: Q3. When I enabled real time apply does that mean, that I am not in 'MANAGED' mode anymore? Is there an un-managed mode also?
    You are always in MANAGED mode when you use the RECOVER MANAGED STANDBY DATABASE command. If you use manual recovery "RECOVER STANDBY DATABASE" (NOT RECOMMENDED EVER ON A STANDBY DATABASE) then you are effectively in 'non-managed' mode although we do not call it that.
    You said: Q4. After the switchover, I have noticed that the MRP0 process is "APPLYING LOG" status to a sequence# which is not even the latest sequence# as per v$archived_log. By this I mean:-
    Log 46 (in your example) is the last FULL and ARCHIVED log hence that is the latest one to show up in V$ARCHIVED_LOG as that is a list of fully archived log files. Sequence 47 is the one that is current in the Primary online redo log and also current in the standby's standby redo log and as you are using real time apply that is the one it is applying.
    You said: What could be the possible reasons why sequence# 42 didn't get applied but the others did?
42 was probably a gap. Select the FAL columns as well and it will probably say 'YES' for FAL. We do not update the Primary's controlfile every time we resolve a gap. Try the same command on the standby and you will see that 42 was indeed applied. Redo can never be applied out of order, so the max(sequence#) from v$archived_log where applied = 'YES' tells you that every sequence before that number has been applied.
    You said: After reading several documents I am confused at this stage because I have read that you can setup standby databases using 'standby' logs but is there another method without using standby logs?
Yes. If you do not have standby redo log files on the standby, then we write directly to an archived log, which means potentially large data loss at failover and no real time apply. That was the old 9i method for ARCH. Don't do that. Always have standby redo logs (SRLs).
    You said: Q5. The log switch isn't happening automatically on the primary database where I could see the whole process happening on it own, such as generation of a new logfile, that being transported to the standby and then being applied on the standby.
    Could this be due to inactivity on the primary database as I am not doing anything on it?
Log switches on the Primary happen when the current log gets full, when a log switch has not happened for the number of seconds you specified in the ARCHIVE_LAG_TARGET parameter, or when you say ALTER SYSTEM SWITCH LOGFILE (or use one of the other methods for switching log files). The heartbeat redo will eventually fill up an online log file, but it is about 13 bytes, so you do the math on how long that would take :^)
You are shipping redo with ASYNC, so we send the redo as it is committed; there is no wait for the log switch. And we are in real time apply, so there is no wait for the log switch to apply that redo. In theory you could create an online log file large enough to hold an entire day's worth of redo and never switch for the whole day, and the standby would still be caught up with the primary.
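(As a sketch, to force a switch at least every 30 minutes, or to switch by hand:
SQL> ALTER SYSTEM SET ARCHIVE_LAG_TARGET=1800 SCOPE=BOTH;
SQL> ALTER SYSTEM SWITCH LOGFILE;)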

  • IPhoto/Photo Stream clarification

    Forgive me if this has already been asked.
    What I need clarification on is
    Here is my scenario.
    I have Photo Stream enabled on all my devices(iphone, ipad & iMac)
When I take a picture it goes into the Photo Stream album on all devices (which essentially means the photos are in the cloud).
In iPhoto, there is an album called xxxx 2012 photo stream, where xxxx is the month.
    So is it safe to say that the photos are stored in my iPhoto library in the monthly photo stream album only?
    Or do I have to turn on the automatic download setting in iPhoto to have the photos saved in iPhoto library?
This is where I am getting confused. Given the above scenario, where the photos are only in the photo stream, is it possible they could be lost if for some reason all the photos in the photo stream get deleted?
I guess my main question is: are the photos in Photo Stream just sitting in the cloud until I manually move them into my iPhoto library? I just took a picture with my iPhone and the picture is in Photo Stream and the Camera Roll. So if it is in the Camera Roll, does that mean the photo will be saved to a permanent location in iPhoto?
As you can see I am quite confused with the workflow when using Photo Stream. Apple has made the process very convoluted.
    Thanks and I look forward to your reply.
    ...Bruce

    you need to read up on PhotoStream
    http://www.apple.com/icloud/setup/
PS is temporary - the lesser of the last 1000 photos or the last 30 days - once either of these is passed the photos go away unless you import them to iPhoto - it is NOT a web archive
In iPhoto, there is an album called xxxx 2012 photo stream, where xxxx is the month.
    So is it safe to say that the photos are stored in my iPhoto library in the monthly photo stream album only?
    Or do I have to turn on the automatic download setting in iPhoto to have the photos saved in iPhoto library?
There are two ways to import to iPhoto - manually and automatically - manually you drag photos from PS to iPhoto - automatically you check the import automatically box in the iPhoto preferences and the photos are automatically imported into an event named MMM YYYY Photo Stream - no albums are automatically created or used
As you can see I am quite confused with the workflow when using Photo Stream. Apple has made the process very convoluted.
    It actually is very simple if you read the instructions
    LN

  • Need a clarification in File to File scenario...

Hi friends,
I need a clarification from you all…
Normally in a file-to-file scenario, the input file is picked from a directory location and, after processing, the output file is placed in the output directory location specified in the receiver communication channel.
In the sender communication channel for the scenario mentioned above, if the processing mode is set to delete, the input file will be "deleted" as soon as the file is successfully read by the adapter, but this does not ensure that file processing is complete and the output file has been created.
But can we create a scenario which ensures that the input file is deleted from the input folder only if the complete process is successful and the output is generated…
    Thanks in advance,
    Vardhani...

    Dear Experts,
    I need your help on the below issue i am facing
In a File-to-RFC scenario, I am using the following beans on the sender file adapter to write the RFC response to a file:
Number - Module Name - Module Type - Module Key
1 - AF_Modules/RequestResponseBean - Local Enterprise Bean - 1
2 - localejbs/CallSapAdapter - Local Enterprise Bean - 2
3 - AF_Modules/ResponseOnewayBean - Local Enterprise Bean - 3
I need to delete the incoming file after processing.
    Problem 1:
When I use QOS Best Effort, the input file is neither deleted nor archived after processing, and the file sender communication channel shows a null pointer exception.
    Problem 2:
When I use QOS Exactly Once, the file is getting deleted and archived, but sometimes I get an error in MONI that the message already exists, like GUID ALREADY EXISTS.
    Appreciate your immediate help.
    Thanks and Regards.
    Sravya.

  • Can't find older mail from archived install

    I recently upgraded the OS on my iBook G4 to Tiger - and performed an archived install just to be safe. After installation was successful, most of my preferences transferred with the upgrade seamlessly (Calendar, Address Book items, bookmarks etc.)
But, when I opened Mail, although my mail server preferences transferred okay, I discovered that most of my earlier mail from the last year or so is missing. And for most of the messages that are actually appearing in my inbox, Mail is saying "X Message has not been downloaded from the server. You need to take this account online in order to download it." But, I am online and I can retrieve and send mail just fine.
Anyone have any idea how I might be able to recover my older mail? (I checked in the "Previous System" archived folder and sadly, there is no Library/Mail folder in there.) If worse comes to worst, I did back up all of my files on another drive before I upgraded, but I'm curious if maybe I'm just doing something wrong and my old Mail is currently buried somewhere on my laptop. Any help would be appreciated. Thanks.

    The conversion from Mail 1.x to Mail 2.x is broken. Take a look at the following thread to better understand the problem:
    Help! "You need to take this account online in order to download it."
    More specifically, if this is a POP account, the following procedure should allow you to fix the Inbox problem. A similar procedure should allow you to fix other mailboxes that might also be affected:
    1. Quit Mail if it’s running.
    2. Make a backup copy of the ~/Library/Mail folder, just in case something goes wrong while trying to fix the problem. You can do this in the Finder by dragging the folder to the Desktop while holding the Option (Alt) key down, for example. This is where all your mail is stored.
    3. Create a new folder on the Desktop and name it however you wish (e.g. Inbox Old). It doesn’t need to have an .mbox extension.
    4. In the Finder, go to ~/Library/Mail/POP-username@mailserver/INBOX.mbox/.
    5. Move the files mbox and Incoming_Mail out of INBOX.mbox, into the Inbox Old folder just created on the Desktop. These files contain all the messages that were in the mailbox before the upgrade to Tiger, and maybe even some messages that had been deleted. mbox is the most important. Incoming_Mail may or may not be present.
    6. Move any strangely-named Messages-T0x... folders to the Desktop (not into the Inbox Old folder). These folders are to be deleted after fixing the problem. They are temporary folders created during an import or an indexing process, and Mail should have deleted them when done. Their presence is a clear indication that something didn’t work as expected. If you’ve been using Mail after the conversion and have already tried to fix the problem by rebuilding the mailbox or something like that, they might contain messages that are neither in Messages proper nor in the mbox file, so keep them around until the problem is fixed.
    7. Move everything else within INBOX.mbox, except the Messages folder, to the Trash.
    The result of the above should be that INBOX.mbox contains the proper Messages folder only, and the Inbox Old folder on the Desktop contains the mbox and Incoming_Mail (if it exists) files only. Now, proceed as follows:
    8. Open Mail.
    9. The account’s Inbox should properly display in Mail as many messages as *.emlx files are in ~/Library/Mail/POP-username@mailserver/INBOX.mbox/Messages/. If that’s not the case, select the mailbox in Mail and do Mailbox > Rebuild.
    10. In Mail, do File > Import Mailboxes, choose Other as the data format, and follow the instructions to import the Inbox Old folder that’s on the Desktop.
    As a result of doing the above, some messages may be duplicated now. Andreas Amann’s Mail Scripts has a Remove Duplicates script that you may find useful.
    Do with the imported mail whatever you wish. You may move the messages anywhere you want and get rid of the imported mailboxes afterwards.
    If all is well and you don’t miss anything, the files on the Desktop can be deleted, although you may want to keep them for a while, just in case.
    Take a look at the following article (also referenced in the thread I mentioned at the beginning of this post) to learn what you might have done before upgrading to minimize the risk of this happening, and what you may do after fixing the problem to avoid similar issues from happening in the future. DON’T do now what the article suggests, though, as that would make things worse in the current situation:
    Overstuffed mailbox is unexpectedly empty
    Ask for any clarifications or if you need further assistance.
    Note: For those not familiarized with the ~/ notation, it refers to the user’s home folder. That is, ~/Library is the Library folder within the user’s home folder, i.e. /Users/username/Library.

  • Clarification on Incomplete recovery

    Hi Friends,
Just a clarification on two things about incomplete recovery.
    Undo tablespace (datafiles):
I lost the current undo tablespace datafile, but I have datafiles from backup, archived logs, the current control file, and the online redo logs. Can I still perform complete recovery?
Because complete recovery means: restore datafiles from backup + roll forward (using the online redo logs, which are available) and roll back (using the current undo datafile; the current file is not available, only the backup undo datafiles).
So in this case how will the roll forward happen? Is this an incomplete recovery?
    Control Files:
My control files got lost, but I have the controlfile trace file, dbf files (including undo), archived logs, and online redo logs. In this case I won't lose any data, but since I am recreating the control file I will perform OPEN RESETLOGS (a new incarnation). So is this considered an incomplete recovery (though my data is intact and all committed, the DB gets a new incarnation)?
So what is complete recovery?
The DB should not get a new incarnation? (or) The DB should have only committed data? (or) Both?
    Regards,
    DB

    839396 wrote:
    Hi Friends,
Just a clarification on two things about incomplete recovery.
    Undo tablespace (datafiles):
I lost the current undo tablespace datafile, but I have datafiles from backup, archived logs, the current control file, and the online redo logs. Can I still perform complete recovery?
Because complete recovery means: restore datafiles from backup + roll forward (using the online redo logs, which are available) and roll back (using the current undo datafile; the current file is not available, only the backup undo datafiles).
So in this case how will the roll forward happen? Is this an incomplete recovery?
If you have all the backup files and archive logs and you have lost the undo tablespace datafile, you would still be doing a complete recovery. What makes you think that it would be an incomplete recovery?
    >
    Control Files:
My control files got lost, but I have the controlfile trace file, dbf files (including undo), archived logs, and online redo logs. In this case I won't lose any data, but since I am recreating the control file I will perform OPEN RESETLOGS (a new incarnation). So is this considered an incomplete recovery (though my data is intact and all committed, the DB gets a new incarnation)?
So what is complete recovery?
The DB should not get a new incarnation? (or) The DB should have only committed data? (or) Both?
Read the reply of Rp!
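To illustrate the undo case, here is a minimal RMAN sketch of a complete recovery of a lost undo datafile, assuming it is file #3 (a hypothetical number). Note there is no SET UNTIL and no RESETLOGS, which is what makes it complete: roll forward comes from the archived and online redo, and the undo content itself is rebuilt from that redo, since changes to undo blocks are logged too.
# file number 3 is hypothetical; check V$DATAFILE for the real undo datafile number
RMAN> STARTUP MOUNT;
RMAN> RESTORE DATAFILE 3;
RMAN> RECOVER DATAFILE 3;
RMAN> ALTER DATABASE OPEN;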
    Aman....

  • Clarification about  Database_Buffer_cache workings

    Hi All,
Clarification about Database Buffer Cache workings (this statement is from my course material):
1. The information read from disk is read a block at a time, not a row at a time, because a database block is the smallest addressable storage space on disk.
Before answering, please check whether my above statement is correct or not, because I got it from my course material.
If I am querying:
select * from emp;
does the server process bring in the whole blocks belonging to the EMP table, or just the rows themselves?
    Thank you,
    Regards,
    DB

Both happen: the LGWR may call the DBWR to write dirty blocks from the buffer cache to disk. Dirty in this context means that the blocks in the buffer cache have been modified and not yet written to disk, i.e. their content differs from the on-disk image. Conversely, the DBWR can also call the LGWR to write redo records from the redo log buffer (in memory) to the redo log files on disk.
    To understand why both is possible, you need to understand the mechanics how Oracle does recovery, in particular REDO and UNDO and how they play together. The excellent book "Oracle Core" from Jonathan Lewis describes this in detail.
    I'll try to sketch each of the two cases. I am aware that this is only an overview which leaves out many details. For a complete description please look at the various Oracle books and documentation that cover this topic.
    1. LGWR posts DBWR to write blocks to disk
As you probably know, any modifications done by DML (which modify data blocks) are recorded in the redo. In case of recovery this redo can be used to bring the data blocks to the last committed state before failure by re-applying modifications that are recorded in the redo. Redo is written into redo log files and the redo log files are used in a round-robin fashion. As the log files are used in a round-robin fashion, old redo data is overwritten at some point in time - thus the corresponding redo records are no longer available in a recovery scenario (they may be in the archived redo logs, which may however not exist if your database runs in NOARCHIVELOG mode; and even if your database runs in ARCHIVELOG mode, the archived redo log files may not be accessible to the instance without manual intervention by the DBA).
So before overwriting a redo log file, the Oracle instance must ensure that the redo records being overwritten will not be needed in a potential instance recovery (which the instance is supposed to do automatically, without any action by the DBA, after instance failure, e.g. due to a power outage). The way to ensure this is to have the DBWR write all modifications to disk that are protected by the redo records being overwritten (i.e. all data blocks where the first modification that has not yet been written to disk is older than a certain time) - this is called a "Thread checkpoint".
    2. DBWR posts LGWR to write redo records to disk
Oracle uses a write-ahead protocol (see http://en.wikipedia.org/wiki/Write-ahead_logging). This means that for any modification, the corresponding redo records must be written to disk before the actual modification to the data blocks is written to disk (into the data files). The purpose of this, I believe, is to ensure that for any data block modification that makes it to disk, the corresponding UNDO information can be restored (from redo) in case of recovery, in order to reverse uncommitted changes in a recovery scenario.
Before writing a data block to disk, the DBWR must thus make sure that all redo for modifications affecting this block has already been written to disk by the LGWR. If this is not the case, the DBWR will post the LGWR and only write the data block to the datafile once the redo has been written to the redo log file by the LGWR.
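To tie this back to the original question about blocks versus rows, a quick sketch, assuming the classic SCOTT.EMP demo table, showing that many rows share a single block, which the server process reads and caches as one unit:
-- assumes the classic SCOTT.EMP demo table
SQL> SELECT DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) AS block_no, COUNT(*) AS rows_in_block FROM emp GROUP BY DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID);
The classic EMP table's 14 rows typically all fit in one block, so a single block read brings in every row.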

  • How to set no archive log in MS SQL

    Dear Gurus,
I already did the support package upgrade and need to run SGEN. But before that I need to set the archive log mode in the MS SQL 2005 database.
Kindly help me: how do I set "no archive log" in MS SQL? Then I can run SGEN. After that, how do I revert back to "archive log mode"?
All the postings talk about the archive log with Oracle.
    Thanks
    /Shah

    Hi Shah,
In MS SQL Server, the transaction log is where log records are written.
For example, initially you might allocate 10 GB to the transaction log and set a limit based on your requirement, say 11 GB. That means it won't grow beyond 11 GB.
If you take a transaction log backup, the data present in the 10 GB file is freed up, but the size of the file remains the same.
This can be truncated by shrinking the log file:
    1. Open SQL Mgmt studio.
2. Right-click the DB (SID) > Tasks > Shrink > Files.
    3. Choose "Log" in the file type and "Log File Name" in the Filename column.
    4. Shrink Action should be "Release Unused Space"
    5. Then Click Ok. The unused space will be released.
The transaction log can be switched off by changing the recovery model to "SIMPLE":
1. Click on the DB (SID).
2. Properties > Options > Recovery Model.
3. If the recovery model is set to Simple, the transaction log won't be written.
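For reference, the equivalent T-SQL; a sketch, where [SID] and the logical log file name SID_log are placeholders for your actual database and log file names:
ALTER DATABASE [SID] SET RECOVERY SIMPLE;   -- switch the transaction log off
DBCC SHRINKFILE (SID_log, 1024);            -- release unused space; target size in MB
-- ... run SGEN ...
ALTER DATABASE [SID] SET RECOVERY FULL;     -- switch back afterwards
BACKUP DATABASE [SID] TO DISK = 'D:\Backup\SID_full.bak';  -- hypothetical path; a full backup restarts the log chain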
    Hope this would help you. Revert for any other clarification.
    Regards,
    Kamesh

  • Sun Messages Journaled by Exchange are unable to be Journal Archived

    Question: XXX Please read the problem description before the question XXX -
    Is it possible to automatically change the "Message-ID" in each message, from Message-id? How would that be done?
    Please respond with any clarifications or questions, comments, solutions.
    Thanks
    Vlad
The customer is currently running Sun Java(tm) System Messaging Server 7.3-11.01 64bit (built Sep 1 2009).
    Here is the situation:
In order to archive Sun Mail Server traffic, the Sun messaging system BCCs every message to an Exchange Journal. Archiving software picks up the messages from the Exchange journal mailbox and archives them.
    Here is the problem at hand:
Sun messages journaled by Exchange are unable to be Journal Archived by the archiving product.
•     An issue has been identified with the current format of the sunmail envelope structure regarding the case sensitivity of the Message-id header name.
    Root Cause
•     When the header name is spelled Message-id, the Exchange-journaled Sun message cannot be Journal Archived.
    •     Message-ID is recognized by Archiving Software, and Archiving Software archives the message properly.
    Workaround
•     Manually change the Message-id header name to upper case: Message-ID.
    •     This enables the message to be Journal Archived
    Current Status
    Customer is looking to automate this manual workaround and has requested a Product Enhancement which will automate the process.
    When we compared the two envelopes, we see the following:
    SunMail envelope msg:
    Sender: <smtp:[email protected]>
    Message-id: <[email protected]>
    Recipients:
    "jdoe" <smtp:[email protected]>
    Exchange 2003 envelope msg:
    Sender: "XXXX Bhatia" <smtp:[email protected]>
    Message-ID: <[email protected]>
    Recipients:
    "zz_Oscar XXXXXX" <smtp:[email protected]>,
    "'[email protected]'" <smtp [email protected]>
Notice the difference regarding the Message-ID header. It appears that we are looking for Message-Id: or Message-ID:, not Message-id.

DamnGoodSE wrote:
Is it possible to automatically change the "Message-ID" in each message, from Message-id?
To my knowledge, no. I attempted to modify the header using addheader/deleteheader sieve operations; however, sieve filtering considers the header name to be case-insensitive:
    <snip RFC3028>
    However, when a string represents the name of a header, the
    comparator is never user-specified. Header comparisons are always
    done with the "i;ascii-casemap" operator, i.e., case-insensitive
    comparisons, because this is the way things are defined in the
    message specification [IMAIL].
    </snip>
How would that be done?
Why can this not be fixed at the archiving application's end? Have you approached the vendor to ask why they use case-sensitive header name comparisons?
    Regards,
    Shane.

  • Is there any t code in SAP to display archived shipping data

    Hi All
We have an issue with unarchiving a shipping document. Our Basis team has unzipped the file from the path where it was archived and provided display access. When I cross-check in transaction SARI they are unzipped, and in SAP the document is still in status archived; I am not able to view it with VT03N.
For archived billing documents, once they are unzipped, the document will not open in VF03 but we can display it in VF07.
Please let us know how to view this shipping data in SAP.
Is there any transaction code in SAP to display archived shipping data (like VF07 for archived billing documents)?
    Your kind help would be highly appreciated.
    Thank you
    Rajendra Prasad

    Hello,
Once a shipment document is archived, you can't display it with transaction VT03N. As you have pointed out, transactions SARI or SARE will help in displaying the archived shipment documents from the archive server (you have to select archiving object = SD_VTTK and choose the archive infostructure from the display option).
VF07 displays archived billing documents; we call VF07 an archive-enabled transaction.
I have gone through OSS note 590656 mentioned by Eduardo Hinojosa; with this enhancement of VT03N (its respective program) you should be able to display archived shipment documents. This OSS note should help you.
Let me know if you require further clarification on this.
    -Thanks,
    Ajay

  • Archiving SAP ISU FICA Documents

    Hi
We are planning to archive SAP print documents. Just wondering where the archiving of FI-CA documents best fits in: before print document archiving, or after meter reading document archiving (when all the other objects are archived)?
    What are the pros and cons of each approach?
    Any Idea would be helpful.
I know that from a process perspective we can do it anywhere, but from SAP best practice or other utilities' standards, what is the recommended approach?
    Thanks

    Hi Ankur,
    FICA documents are of 2 types: One is the clearing document and the other is the main document which is being cleared. (Here the main document is the corresponding FICA posting of an invoice). The archiving object FI_MKKDOC (for archiving FICA documents) archives both of these FICA documents separately. When you run the archiving program, you get two radio buttons:
    1. ''Only Pure Clearing/ Stat. Documents''
    2. ''Other documents (Invoice, Credit Memos, and so on)''.
    There is a dependency within this object that the clearing documents should be archived before archiving the main document.
Hence, regarding the statement you mentioned from the program documentation of FI_MKKDOC, ''You cannot archive the invoice until you have archived the payment'': it refers to the dependency between clearing and main FICA documents within the same object FI_MKKDOC; it does not refer to a dependency between the object FI_MKKDOC and the object ISU_PRDOCL (the object for print document line item archiving).
    Now coming to the dependency of different objects, unless and until the print document line items (object ISU_PRDOCL) and print document header items (object ISU_PRDOCH) are archived, the main documents of object FI_MKKDOC cannot be archived.
    Hence as per the standard SAP process, the sequence of archiving should be first ISU_PRDOCL, then ISU_PRDOCH and only then object FI_MKKDOC (i.e first the print documents and then the FICA documents).
Again, within FI_MKKDOC, archive the ''Only Pure Clearing/Stat. Documents'' first and only then archive the ''Other Documents''.
    Hope this helps. Please let me know if you need any further clarifications.
    Regards,
    Megha

Maybe you are looking for

  • Itunes doesnt support the G4 anymore? any way around this?

    I just learned that my G4 cant upgrade to 10.5 and thus I cant supposedly use the itunes store anymore, anybody have infos on how to deal with this without having to buy a new PC or laptop? thanks

  • PI to SOAP webservice performace issue

    Dear All, I have seen message expired error/no response error from one of IDOC--> PI--> SOAP Webservice sceanrio when PI have sending bulk of messages request to SOAP webserver at a time. This scenario has already implemented using BPM because of com

  • Replace the Hot Spare that Shared between Raid 1 and raid 5

    I have ProLiant ML370 G6  with Smart Array P410i Controller on System Board  I have two Logical Drive (Logical Drive 1 - Mirroring (RAID 1) And (Logical Drive 2 - Distributed Data Guarding (RAID 5) The logical Drive 1- Mirroring (RAID 1) Have two Phy

  • F.5D - very urgent

    SAP Note 185787 suggest 1. Run report SAPF180A several times per period, e.g. once a week. It may also be useful that you run posting report SAPF180 more frequently (after SAPF180A). Or it might be better to execute SAPF180 once at the period-end. 2.

  • Best Video Editing Tool For MBP

    Hey Fellas recently i switched from a windows pc to MBP. I have a small business of making motion movies for various clients of mine. On windows based platform i used "Pinnacle Studio 15" for editing my videos And it worked fine there. Now as i switc