C000000* Folder in log archive directory

Hi,
We use DB2 9.5 + ECC 6.0 in our landscape.
We see folders named C0000<number> created in the log_archive path, with some log files inside them.
We wanted to know the following:
Are these folders created after the log archive files in the directory reach a particular size?
Are these folders created after a restart, etc.?
Please assist.

Hi Balaji,
Yes, the C<number> folders represent the log chains.
Please see page 11 of the DB admin guide:
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/50c7d3ef-5a59-2a10-a3ab-fb3b0887f479?quicklink=index&overridelayout=true
The log chain with the highest number contains the active log files.
For example:
If we restore from a backup and roll forward only to a point in time, instead of all the way to the latest logs, the old log file management would simply drop the logs left over after that point in time. With the new log file management they are kept as a generation of log files: the log chain number is increased and new logs are then created under the new chain number.
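For illustration, the archive path then typically looks something like this (a hypothetical listing; the exact layout depends on your LOGARCHMETH1 setting and the instance/database names):
ls /db2/<SID>/log_archive/db2<sid>/<SID>/NODE0000/
# C0000000  C0000001  C0000002        <- one subfolder per log chain
ls /db2/<SID>/log_archive/db2<sid>/<SID>/NODE0000/C0000002/
# S0000123.LOG  S0000124.LOG          <- the highest chain receives the new log files
So the folders are not tied to a particular archive size or to restarts; a new C<number> folder normally appears only when the log chain changes, for example after a point-in-time recovery.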
regards,
Paul

Similar Messages

  • FTPAdapter - logical directory name - file not moved to archive directory

    I created a simple FTP service to read a file from a remote inbound directory and archive it to an "archive" directory using logical directories. I supplied the input and archive directories. The process reads the file from the input directory but doesn't move it to the archive directory. In the opmn logs I see the following message:
    File Adapter::Outbound> Since file could not be copied to specified archive directory, file : CUST__20081113002951.xml is being copied to a default archive directory :/apps/oracle/product/10.1.3.1/OracleAS_1/j2ee/home/fileftp/defaultArchive/
    I checked a) the directory permissions - this is the FTP user's home directory, so it has all the bits set (rwxr-xr-x); I even tried rwxrwxrwx, but same issue
    b) there is enough space on the box
    c) I can manually move the files around as the same user.
    Secondly, the files under the default archive directory are being created as root. Not sure why; our server is running as the "oracle" user.
    We are on 10.1.3.4.
    Any idea how to troubleshoot this?

    Just a thought. You are trying to archive to an FTP user's home directory. I assume that you want to archive remotely (on the source server)? If so, you need to specify UseRemoteArchive="true" in the WSDL file for the adapter. If you forget that, the adapter archives locally on the SOA Suite server, and perhaps there the directories are indeed missing or have the wrong rights.
    If you are using remote archiving and it doesn't work, have you tried to log in with an FTP client (as the FTP adapter user) and upload a file to the archive folder? That is what the FTP adapter will do.
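    For example, a quick manual test with a command-line FTP client (host and path below are placeholders):
    ftp <remote-ftp-host>        # log in with the same user the FTP adapter uses
    cd /path/to/archive
    put test.xml                 # if this fails, the adapter's remote archiving will fail too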
    If you are using local archiving, check all the parent directories and make sure that they are fine as well as the target directory. Also look into the file ownership issue: the files should not be created as root if everything really runs as oracle! Perhaps someone accidentally started something as root?
    Good luck!

  • Archive directory is full

    Hello Experts,
    I'm very new to SAP.
    Please tell me how to check at the database level that the archive directory is full, and what command to use to bring the system up again, since it hangs when the redo logs are not deleted from the system.
    Thanks in advance.

    Hi,
    As suggested by other SDN members, the offline redo logs need to be backed up and then deleted to make space for new logs.
    You can check disk utilization with the following command:
    df -g | grep oraarc
    If the above command shows that it is more than 90% full, move your archive logs to another folder (e.g. /oracle/<SID>/arc_backup), since this is the faster option to make your system available again; a backup to tape takes comparatively longer.
    Once space becomes available in the /oracle/<SID>/oraarc directory, the system automatically comes out of the hung state and users can continue their work. You do not need to stop or start your database.
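    For example (a rough sketch; adjust the paths and file pattern to your layout, and remember the moved logs still need to be backed up to tape, e.g. with brarchive):
    df -g /oracle/<SID>/oraarc                                 # how full is the archive file system?
    mkdir -p /oracle/<SID>/arc_backup
    mv /oracle/<SID>/oraarc/*arch* /oracle/<SID>/arc_backup/   # free space quickly
    # later: brarchive -sd    (save the offline redo logs to tape and delete them)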
    Regards,
    -Pankaj Kapote

  • Why is my new mail going to an "Inbox" folder in the Archive folder?

    I've discovered that my new mail is no longer going to the regular Inbox. It's now landing in a new "Inbox" folder inside the Archive folder. Why? How can I reset this back to the regular new-mail Inbox?
    I tried renaming the folder in the directory, but mail still defaults there. I've checked to make sure I don't have a filter doing this; there is no filter for this account.

    Use C:\YourWebSite (where C: is your hard drive).
    Setting up your Local Site and Project Files -
    http://www.adobe.com/devnet/dreamweaver/articles/first_cs4_website_pt1.html
    Nancy O.
    Alt-Web Design & Publishing
    Web | Graphics | Print | Media  Specialists
    http://alt-web.com/
    http://twitter.com/altweb
    http://alt-web.blogspot.com

  • Reg:File adapter archive Directory

    Dear team,
    Our requirement is to read a CSV file from a directory and archive the file in the archive folder specified in the file adapter.
    If any exception is caught, we need to read the archived file from the archive directory, rename it with the source file name and place it back in the source directory.
    On the receive activity we are able to get the source file name and source file directory:
    <receive name="Receive1" createInstance="yes"
             variable="Receive1_Read_InputVariable" partnerLink="fileRead"
             portType="ns1:Read_ptt" operation="Read">
      <bpelx:property name="jca.file.FileName" variable="srcFileName"/>
      <bpelx:property name="jca.file.Directory" variable="srcDrFolder"/>
    </receive>
    How can we get the archive file name and archive file directory from the receive activity so that we can store them in local variables?
    Please do help.
    Thanks

    Hi,
    Another way you can accomplish your scenario: instead of deleting or archiving at the beginning, just move the file from the inbound to the archive location after the business flow completes.
    In case of an error, the file will remain in its original position, since the move operation happens at the end.
    First read the file using a read operation; then, at the end, create a file adapter with a sync read operation and change the entries in the generated .jca file as in the sample below.
    Sample .jca file:
    <endpoint-interaction portType="SynchRead_ptt" operation="SynchRead">
      <interaction-spec className="oracle.tip.adapter.file.outbound.FileIoInteractionSpec">
        <!-- The properties below are placeholders except Type; they are overridden at runtime -->
        <property name="SourcePhysicalDirectory" value="srcdir"/>
        <property name="SourceFileName" value="abc.txt"/>
        <property name="TargetPhysicalDirectory" value="targetdir"/>
        <property name="TargetFileName" value="abc.txt"/>
        <property name="Type" value="MOVE"/>
      </interaction-spec>
    </endpoint-interaction>
    Then, in your BPEL flow, add these properties to the invoke for the sync read:
    <bpelx:inputProperty name="jca.file.SourceFileName" variable="varInputFileName"/>
    <bpelx:inputProperty name="jca.file.TargetFileName" variable="varArchiveFileName"/>
    <bpelx:inputProperty name="jca.file.SourceDirectory" variable="varInputDirectory"/>
    <bpelx:inputProperty name="jca.file.TargetDirectory" variable="varArchiveDirectory"/>
    Thanks,
    Durga

  • Archive directory on overload

    Hi,
    I've configured a sender file adapter.
    It archives messages and adds a timestamp.
    Files are successfully put on the Integration Engine and archived.
    However, the files are not moved from the input directory, but only copied.
    So the files stay in the input directory. They are not processed again by the Integration Engine, BUT the archiving doesn't stop: it just keeps archiving the same file again and again.
    Because of the timestamp this means we get a new file each time.
    This causes an overload on the archive share (a file of 2 KB produced 16 GB on the archive).
    Is this an adapter setting that I have to change, or does it mean that the PI user who does the moving and archiving does not have sufficient rights on the directory?
    Thx
    Robert

    Hi
    Normally, if you set the processing mode to Archive, the files are moved from the source to the archive directory.
    Also, are you getting any error in the channel?
    Since this is not happening in your case, please check whether the user ID you are using in the file channel has the rights to delete the file once it is processed. To check this, log in to the FTP server with that ID and see whether you can delete the file.
    If nothing works, you can have a batch job clear the source folder, for example as sketched below.
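    A cleanup job along these lines could run periodically (a sketch; the directory, file pattern and retention are assumptions):
    # remove already-processed files older than one day from the source directory
    find /interfaces/inbound -type f -name '*.xml' -mtime +1 -exec rm -f {} \;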
    Regards,
    Srinivas

  • Archive directory user rights?

    Hi all: I recently migrated our GW2014 SP1 server over to new hardware. The migration went smoothly and our domain and post office seem happy. My users are reporting an 8201 error when starting their client, and I have traced it to a user-rights issue with our archive folder on the server. I have given full rights to everyone as a temporary measure, but I want to get the proper rights set up. BTW, the archive directory is on an NSS volume.
    So, what are the proper user rights to the archive directory? Thanks much, Chris.

    Hi Chris,
    They need Read, Write, File Scan, Create, Erase, Modify, Delete - all except Access Control and Supervisor.
    Hope that helps.
    Cheers,

  • Archive directory full. RMAN backup failed.

    My archive directory ran out of space due to an RMAN issue; /tmp was 100% full. Late last night I had no choice but to move the archive logs to another directory (/backups/arch). I also changed log_archive_dest to this location. However, RMAN keeps looking at the previous archive directory. Any ideas how I can make RMAN pick up the archives from this new location?
    Oracle version : 10.2.0.4
    HP-UX OS
    Thank you.

    Solved it. Had to use:
    catalog start with '<new_directory_of_archivelogs>';
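    For reference, the full sequence in this situation typically looks roughly like this (a sketch; the path is an example). The crosscheck/delete removes the records of logs no longer present in the old location, and catalog registers the logs found in the new one:
    rman target /
    RMAN> crosscheck archivelog all;
    RMAN> delete noprompt expired archivelog all;
    RMAN> catalog start with '/backups/arch' noprompt;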

  • The file structure online redo log, archived redo log and standby redo log

    I have read some Oracle documentation on file structure and settings in a Data Guard environment, but I still have some doubts. What is the best file structure or setup in Oracle 10.2.0.4 on UNIX for a Data Guard environment with 4 primary databases and 4 physical standby databases? Based on the Oracle documents, there are 3 kinds of redo logs: online redo logs, archived redo logs and standby redo logs. The basic settings are:
    1. Online redo logs --- These must exist on the primary database and on a logical standby database. They are not strictly necessary on a physical standby database, because a physical standby is not open and does not generate redo. However, if online redo logs are not set up on the physical standby, how can it work after a failover, when the standby is switched to the primary role? In my standby databases, online redo logs have been set up.
    2. Archived redo logs --- Obviously the primary database and the logical and physical standby databases all need these. The primary uses them to archive log files and ship them to the standby; the standby uses them to receive redo and apply it to the database.
    3. Standby redo logs --- The documentation says a standby redo log is similar to an online redo log, except that it is used to store redo data received from another database. A standby redo log is required if you want to implement the maximum protection and maximum availability data protection modes, real-time apply, or cascaded destinations. So it seems the standby redo log should only be set up on the standby database, not on the primary. Is my understanding correct? When I reviewed the current redo log settings in my environment, I found that standby redo log directories and files have been set up on both the primary and the standby databases. I would like to get more information and education from the experts: what is the best setup on the primary and standby databases?

    FZheng:
    Thanks for your input. It is clear that we need all 3 types of redo logs on both databases; you answered my question.
    But I have another one. The Oracle documentation says that if you have configured a standby redo log on one or more standby databases, you should ensure the size of the current standby redo log file on each standby database exactly matches the size of the current online redo log file on the primary database. It also says that at log switch time, if there are no available standby redo log files that match the size of the new current online redo log file on the primary database, the primary database will shut down.
    My current Data Guard environment is set up as follows: on the primary DB, the online redo log group size is 512M and the standby redo log group size is 500M; on the standby DB, the online redo log group size is 500M and the standby redo log group size is 750M.
    This was set up by someone I don't know. Is this setting OK, or should I change the standby redo logs on the standby DB to 512M to exactly match the redo log size on the primary?
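    To verify the sizes, a quick check on the primary and on each standby helps (a sketch):
    sqlplus / as sysdba
    SQL> select thread#, group#, bytes/1024/1024 as size_mb from v$log order by group#;
    SQL> select thread#, group#, bytes/1024/1024 as size_mb from v$standby_log order by group#;
    If the standby redo logs differ from the primary's online log size, drop and re-create them with a matching size, e.g. alter database add standby logfile thread 1 group 10 size 512m (add a file specification if you are not using Oracle-managed files).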

  • Archived directory getting filled regularly : causing issues to Prod system

    Hi Team,
    In my EP system, the archive directory is filling up regularly and hence causing a space crunch.
    As of now, we are manually deleting those archived logs on a weekly basis.
    I checked the log configuration and found that the severity is set to Error.
    So I need your valuable inputs to resolve this issue.
    Thanks in Advance
    Regards
    Sandeep

    Thanks all, and apologies for the late reply.
    Hi Sujith,
    I am not talking about ora/<sid>/saparch.
    My issue is with usr/sap/<SID>/JCOO/j2ee/cluster/server*/log/archive.
    Hi Steven,
    Yeah, whatever you said might be right.
    If I set the ArchiveOldFiles option to OFF, what happens?
    I might be wrong (correct me):
    --> Does that just stop moving the old files from the log directory to the archive directory?
    --> If that is the case, then my log directory will suffer a space crunch, right?
    Regards
    Sandy
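    Until the root cause is addressed, the manual weekly deletion can be replaced by a scheduled housekeeping job, for example (a sketch; instance name and retention period are assumptions):
    # delete archived server log files older than 14 days
    find /usr/sap/<SID>/<instance>/j2ee/cluster/server*/log/archive -type f -mtime +14 -exec rm -f {} \;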

  • Archiving directory:unknown

    Hi SAP Basis Guru,
    In our production system (R/3 Enterprise), when accessing transaction DB12 and clicking on the redo log backup archive directory status, it shows "unknown".
    Please help.
    Dilip

    Hi SBK,
    At OS level the permissions on saparch are set to full.
    When we click on the archive directory status, it takes the path /oracle/P01/saparch/P01arch and shows the status as unknown, and the permissions for both saparch and P01arch are full.
    We can see the contents of other files through AL11, but the paths /oracle/P01/saparch and /oracle/P01/saparch/P01arch are not there (configured) in AL11.
    Please suggest.
    Regards,
    Dilip

  • Creating a folder in web application directory?

    Hi
    I have a web application which contains some JSPs and servlets, and I am running it on Apache Tomcat 4.1. The name of my application is MyApp, which contains all the servlets and JSPs.
    In order to deploy the application I have placed the MyApp folder in the 'webapps' folder of Tomcat. Now in one of my servlets, 'DirectoryCreator', I am trying to create a folder called 'Directory' inside the MyApp folder. The problem is that I don't want to give an absolute path to the File class constructor. The class files of my servlets are in the classes folder, i.e. MyApp\WEB-INF\classes.
    I have tried:
    File f = new File("/Directory");
    f.mkdir();
    but this creates the Directory folder on my C drive.
    Please tell me how I can avoid giving the absolute path.
    Thanks.

    If the code that creates the directory is in a servlet, you can use:
    // getRealPath("/") returns the actual path on your machine of the base directory
    // of the webapp; getRealPath("/SomeDirectory/SomeFile") returns the real path of SomeFile
    // (requires java.io.File)
    String path = getServletContext().getRealPath("/");
    File f = new File(path, "Directory");
    if (!f.exists())
        f.mkdir();
    If the code is in a .jsp file, simply replace getServletContext() with the implicit application object.

  • How do I add a folder to my Archive folder in Mail?

    On previous versions of Mail (on my Mac, NOT on my iPhone), I was able to move whole folders into Archive, but I can't seem to do that anymore. When I move a folder onto the Archive folder, nothing happens.
    After looking around, it looks like I can only move single messages or groups of messages into the Archive folder, but that can't be right, can it?
    I want to keep my archived emails sorted by the folders they already live in... can someone please help me do that?

    The only way you can now add additional folders to the 'Archive' folder/mailbox is to create the new folders using the specific mail application.
    If you have Apple Mail pointing to a Gmail account, you will need to access the Gmail web interface and create the folder there. Likewise for Hotmail, Yahoo! Mail, etc.
    Not sure if this will work for iCloud mail accounts.
    I have an @sky.com account: I accessed the Sky webmail interface, created the new folders (Archive/*****), reopened Apple Mail, and there they were under the Archive folder :-)

  • How to monitor available disk space in archive directory

    I need to monitor the available disk space of the archive directory during a long-running PL/SQL program which does a lot of inserts and updates. The program runs in a background session and should regularly check the available space. If less than a customizable number of MB is free, the program must terminate itself. How can I access this information from the file system? (I'm using 8.1.6 and the application will run on both AIX and NT.)

    The directory itself does not take space (it actually does, but in most cases you can neglect its size); it is the files in it that take space. Also, the more files you create, the more space gets allocated from the media that holds them. And finally, it is the media (hard drive, floppy, or other), not the directory, that has free space. That is how every OS I know about works.
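    Since PL/SQL in 8.1.6 has no supplied package for reading file-system free space, one common workaround is to have an OS job feed that number into a table which the background program checks between batches, for example (a sketch for the UNIX side; the table name, path and credentials are assumptions):
    # run from cron every few minutes; records the free space (MB) of the archive file system
    # arch_free_space is a hypothetical one-row table (free_mb NUMBER, checked_at DATE)
    FREE_MB=`df -Pk /oracle/<SID>/oraarch | awk 'NR==2 {print int($4/1024)}'`
    printf "update arch_free_space set free_mb = %s, checked_at = sysdate;\ncommit;\nexit\n" $FREE_MB | sqlplus -s monitor/monitor_pw
    The PL/SQL program can then simply select free_mb from this table at regular intervals and terminate itself cleanly once the value drops below the configured threshold.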

  • Regds : Read archive Directory

    I want to read the archive directory name of the sender file adapter at runtime in the message mapping.
    Please suggest how I can achieve this.

    You want to read the archive directory name of the sender file adapter at runtime in the message mapping?
    Why?
    For a given sender file channel, the archive directory is constant. One sender file channel can be used in only one sender agreement. That means for a given combination of communication component, interface and namespace there is exactly one communication channel, and hence one archive directory. These three values can be accessed in a UDF using:
    map = container.getTransformationParameters();
    headerField = (String) map.get(StreamTransformationConstants.INTERFACE_NAMESPACE);
    /* similarly for the communication component and the interface */
    http://help.sap.com/saphelp_nw04/helpdata/EN/43/09b16006526e72e10000000a422035/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/EN/78/b4ea10263c404599ec6edabf59aa6c/content.htm
    In other words: for this combination of communication component, interface and namespace, this is the archive directory; for that combination, that is the archive directory.
    Now we have 2 options:
    1. Hard-code in the UDF which archive directory belongs to which combination.
    2. Store these values in a database table (combination -> archive directory) and access it in the message mapping via a database lookup.
    There is another option: write an adapter module which populates the archive directory in the SOAP header of the message in the Adapter Engine, and access it in a UDF, similar to http://help.sap.com/saphelp_nw04/helpdata/EN/78/b4ea10263c404599ec6edabf59aa6c/content.htm
    However, this option is not recommended: it adds performance overhead, and for a given communication channel the archive directory does not change (it would have to be edited manually), or at least very rarely in production (we don't change it every day).
