Managing & Auditing Database Physical Files

Hi all,
Apologies if this is the wrong forum for this issue.
I have a problem monitoring and auditing Oracle files and disk space on our Windows servers.
We have a server farm of around 20 Windows servers, used as globally centralized test servers.
Each server has different versions of Oracle DB, and each may host more than 10 databases/instances (small test DBs).
So we monitor around 200+ databases.
We run a daily cold backup for each DB and keep at least one week of retention files.
So we have around 5 x 10 x 200 backup zip files.
We also monitor the physical list of DB home folders and their corresponding .dbf files and timestamps.
What I did is map all the hard drives from each server on my laptop so I can read and audit all the physical files.
I want a centralized list of the files associated with each database in spreadsheets, updated periodically.
My question is: how can I run a recursive listing command that lists all files and subdirectories and saves the output to a .csv or Excel file?
Something like "ls -lt > list.log" on Linux?
Thanks
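For the recursive listing question, the idea is exactly an "ls -R"/find redirected to a file. A minimal sketch that builds a CSV inventory (the demo directory and file names below are invented for illustration), with the Windows cmd and PowerShell equivalents noted in comments:

```shell
# Sketch: build a CSV inventory of every file under a directory tree.
# Demo tree (stand-in for a mapped Oracle home drive):
mkdir -p /tmp/dbaudit_demo/ORCL1 /tmp/dbaudit_demo/ORCL2
touch /tmp/dbaudit_demo/ORCL1/system01.dbf /tmp/dbaudit_demo/ORCL2/users01.dbf

# Header line, then one CSV row per file: full path, size in bytes, mtime.
printf 'path,bytes,modified\n' > /tmp/dbaudit_list.csv
find /tmp/dbaudit_demo -type f -printf '%p,%s,%TY-%Tm-%Td %TH:%TM\n' >> /tmp/dbaudit_list.csv

# Rough Windows equivalents, run against each mapped drive:
#   cmd:        dir X:\ /s /b > list.txt
#   PowerShell: Get-ChildItem X:\ -Recurse -File |
#                 Select-Object FullName,Length,LastWriteTime |
#                 Export-Csv list.csv -NoTypeInformation
```

The PowerShell variant is probably the most direct fit here, since Export-Csv produces a file that opens straight into Excel for the centralized spreadsheet.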

Please see the threads referenced in these links.
https://forums.oracle.com/forums/search.jspa?threadID=&q=Images+AND+HRMS&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
https://forums.oracle.com/forums/search.jspa?threadID=&q=Images+AND+HR&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
Thanks,
Hussein

Similar Messages

  • Detach Database: physical file permission

Hi All. I am using a Windows account to log in, which is a sysadmin of the SQL Server.
When I detach a database, the physical file permissions have only one entry: my domain ID with full control.
When another Windows domain account, also a sysadmin, tries to attach the database again, it FAILS with access denied...
How can I avoid this?

When you detach a DB, the ACLs are reset and permission is given to the logged-in Windows account user: full control on the DB files, as explained here:
https://msdn.microsoft.com/en-us/library/ms189128.aspx/html. You can work around your problem by using "EXECUTE AS Domain\[Attaching User]" before detaching the DB with sp_detach_db
(this will retain FULL CONTROL permission for the attaching user).
However, starting from SQL 2005, you can use a more graceful method of moving files (on the same server) than attach/detach, as shown below:
Take the DB offline
Move the physical files to the new location
Run "ALTER DATABASE DBNAME MODIFY FILE (NAME = logicalfilename, FILENAME = 'physicalfilepath')" to provide the new path
Bring the DB online
However, if you are moving the DB to another server, then backup/restore is best. You can use backup compression (SQL 2008 R2 Standard Edition or higher supports it), which will move much faster than uncompressed MDF and LDF files.
    Satish Kartan www.sqlfood.com
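The four-step move above can be sketched in T-SQL (a sketch only; "MyDb", the logical file names, and the D:\NewPath target are placeholder assumptions, not values from this thread):

```sql
-- 1. Take the DB offline
ALTER DATABASE MyDb SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- 2. Move MyDb.mdf / MyDb_log.ldf to D:\NewPath\ at the OS level, then:
-- 3. Point each logical file at its new physical path
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_data, FILENAME = 'D:\NewPath\MyDb.mdf');
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_log,  FILENAME = 'D:\NewPath\MyDb_log.ldf');
-- 4. Bring the DB online
ALTER DATABASE MyDb SET ONLINE;
```

Unlike detach/attach, this sequence never resets the file ACLs to a single user, which is why it avoids the access-denied problem described above.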

  • Error message: "Unable to open the physical file" when trying to attach AdventureWorks database

    I have searched the internet and this forum and have not found an answer...
I am trying to install the AdventureWorks database into my single instance of MS SQL Server Express 2005.  I am logged into my machine as an administrator and into SQL Server 2005 Express as 'sa'.  I attempt to run the following script:
    exec sp_attach_db @dbname = N'AdventureWorks',
    @filename1 = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\AdventureWorks_Data.mdf',
    @filename2 = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\AdventureWorks_log.ldf'
    The error message I get back is:
    Msg 5120, Level 16, State 101, Line 1
    Unable to open the physical file "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\AdventureWorks_Data.mdf". Operating system error 5: "5(Access is denied.)".
    The folder "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data" and all the files in it are read-write.  I am 100.0000% certain the files "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\AdventureWorks_Data.mdf" and "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\AdventureWorks_Log.ldf" exist!  They are the result of running the installation program AdventureWorksDB.msi, which I downloaded from: http://www.codeplex.com/MSFTDBProdSamples/Release/ProjectReleases.aspx?ReleaseId=4004.
What do I have to do to install the AdventureWorks database?

Hello,
To help you, could you please give some more information?
- the operating system (XP/Vista), the edition (Pro/Home...) and the service pack
- usually, the installer installs both files in C:\Program Files\Microsoft Sql Server\Samples. Has the location changed, or have you moved the files?
- could you check with the file explorer that the 2 files are read-write and not read-only (find each file, right-click on it, choose Properties, and on the first tab page uncheck the read-only checkbox if it is checked)?
- do you have SQL Server Management Studio Express Edition (at least SP1)?
If not, download it and use it to attach:
in the Object Explorer,
click on your instance to expand it,
right-click on Databases,
in the context menu, click on Attach,
in the new form, click on Add,
you arrive at a second form: find your file, click on it, and OK.
It's the simplest way to attach (sp_attach_db is complicated to type),
and the error messages are sometimes clearer in SSMSEE than in sqlcmd.
Also try to attach (through SSMSEE or sqlcmd) using Windows authentication.
NB: I hope you are not trying to attach AdventureWorks on a remote instance on a remote computer (that would explain "access denied").
We are waiting for your feedback to help you more efficiently.
Have a nice day

  • Oracle Enterprise Manager 11g Database Control log file location

Does anyone know where the log file is for Oracle Enterprise Manager 11g Database Control? I am trying to set up EUS proxy authentication and running into issues when searching for the proxy user in my DB. The search returns no results and I am not sure where to look. Please note that my setup does work for EUS with shared or exclusive schema. I need Oracle DB proxy authentication for my application, which requires an EUS setup with proxy permissions. So please advise if you know where the Database Control log file is, or if there is any logging I can turn on (e.g. DB audit) to see what search parameters Database Control passes to my DB.
    Thanks,

    Be patient...wait for report.
    If after some time report does not appear, go to Advisor Central and find it there.
    :p

  • A file activation error occurred. The physical file name may be incorrect while creating database

    Hi Experts,
I am trying to create a database on my local system, and while creating it I get the error below. Can anyone please help me with this?
    Msg 5105, Level 16, State 2, Line 1
    A file activation error occurred. The physical file name ' C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\NewData\Demo_dynamic.mdf' may be incorrect. Diagnose and correct additional errors, and retry the operation.
    Msg 1802, Level 16, State 1, Line 1
    CREATE DATABASE failed. Some file names listed could not be created. Check related errors. 
Awaiting your response!
    Niraj Sevalkar

I created a database earlier in 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA',
but now I have created a NewData folder and am trying to create the database there. I also checked the path you suggested above, and the NewData folder exists; please see the snapshot below.
(The snapshot shows both folders: the DATA folder with its files, and the NewData folder, which has no files.)
    Niraj Sevalkar

  • Physical file of the "Database" object keep growing

    Hello,
    My Application is using BDB. Version 4.3.29.
We are using the Database object to save our byte-array data to BDB (see the code below).
The written data is initialized by: byte[] Data = new byte[size].
And we always use the same key for the insert operation. (Per the BDB API it should overwrite the existing entry; correct me if I'm wrong.)
The data size may vary, i.e. it can grow or shrink.
After monitoring the physical file that sits on disk, we have noticed that it only grows and never decreases in size, even when the written data has length 0.
Please explain the issue. What are we doing wrong? Or what additional parameters should be used in order for the physical file to change its size, i.e. decrease it?
    Thank you in advance.
    Shira Faigenbaum
Opening the BDB:
Database m_dataBase = m_DBEnv().openDatabase(transaction, tablePath + File.separator + m_name, null, m_DatabaseConfig);
The code that we have for the PUT operation:
DatabaseEntry keyDbt = new DatabaseEntry(new byte[4]);
keyDbt.setSize(4);
keyDbt.setUserBuffer(4, true);
keyDbt.setRecordNumber(DATA_KEY);
DatabaseEntry dataDbt = new DatabaseEntry(data);
dataDbt.setSize(data.length);
dataDbt.setUserBuffer(data.length, true);
synchronized (transaction) {
    m_dataBase.put(transaction.getTransaction(), keyDbt, dataDbt);
}

    Hi,
It's quite a lot of code, but here is the program.
    thanks
    Shira
public class BDBsizeNotReducingExxample {
     Environment m_dbEnv;
     private Database m_dataBase = null;
     private byte[] m_serializationBuffer = new byte[256];
     private Object m_data = null;

     public static void main(String[] args) {
          BDBsizeNotReducingExxample DbsizeNotReducingExxample = new BDBsizeNotReducingExxample();
          DbsizeNotReducingExxample.opendbEnv();
          DbsizeNotReducingExxample.openDB();
          DbsizeNotReducingExxample.saveData();
     }

     private void opendbEnv() {
          try {
               EnvironmentConfig ec = new EnvironmentConfig();
               ec.setCacheSize(16 * 1024 * 1024);
               ec.setErrorStream(new LogOutputStream());
               ec.setVerboseDeadlock(true);
               ec.setVerboseWaitsFor(true);
               ec.setVerboseRecovery(true);
               ec.setMaxLockers(1024);
               ec.setMaxLocks(100000);
               ec.setMaxLockObjects(100000);
               ec.setTxnMaxActive(20);
               ec.setLogRegionSize(600 * 1024);
               ec.setLockDetectMode(LockDetectMode.DEFAULT);
               ec.setAllowCreate(true);
               ec.setTransactional(true);
               ec.setRunRecovery(true);
               ec.setInitializeCache(true);
               ec.setInitializeLocking(true);
               ec.setInitializeLogging(true);
               File f = new File("/Application/Persistency");
               // open the environment; try to open the database without fatal recovery first
               try {
                    m_dbEnv = new Environment(f, ec);
               } catch (RunRecoveryException dbrre_1) {
                    ec.setRunFatalRecovery(true);
                    m_dbEnv = new Environment(f, ec);
               }
               // archive and checkpoint at gatherer start-up
               CheckpointConfig cpc = new CheckpointConfig();
               cpc.setForce(true);
               m_dbEnv.checkpoint(cpc);
          } catch (Exception e) {
               PersistencyUtils.handleFatalException(e);
          }
     }

     private class LogOutputStream extends OutputStream {
          public void write(int b) throws IOException {
               // some class just to print the errors
          }
     }

     private void openDB() {
          DatabaseEntry keyDbt = new DatabaseEntry(new byte[4]);
          keyDbt.setSize(4);
          keyDbt.setUserBuffer(4, true);
          keyDbt.setRecordNumber(0);
          DatabaseEntry dataDbt = new DatabaseEntry();
          dataDbt.setReuseBuffer(false);
          try {
               String tablePath = "/Application/Persistency/";
               DirUtil.mkdirs(new File(tablePath));
               PersistencyUtils.verifyFreeDiskSpace();
               Transaction transaction = m_dbEnv.beginTransaction(null, null);
               // DatabaseConfig uses btree
               m_dataBase = m_dbEnv.openDatabase(transaction, tablePath + "/Data", null, PersistencyUtils.getDatabaseConfig());
               if (m_dataBase.get(transaction, keyDbt, dataDbt, null) != OperationStatus.SUCCESS) {
                    dataDbt = null;
               }
               transaction.commit();
          } catch (Exception e) {
               PersistencyUtils.handleFatalException(e);
          }
          if (dataDbt != null) {
               byte[] data = dataDbt.getData();
               if (data != null) {
                    m_serializer.deserialize(m_data, data, 0, data.length);
               }
          }
     }

     private void saveData() {
          int length = m_serializer.serialize(m_data, m_serializationBuffer, 0);
          // this function changes the size of m_serializationBuffer, increasing and decreasing it
          adjustSerializationBufferToTheRealLength(m_serializationBuffer, length);
          DatabaseEntry keyDbt = new DatabaseEntry(new byte[4]);
          keyDbt.setSize(4);
          keyDbt.setUserBuffer(4, true);
          keyDbt.setRecordNumber(0);
          DatabaseEntry dataDbt = new DatabaseEntry(m_serializationBuffer);
          dataDbt.setSize(length);
          dataDbt.setUserBuffer(length, true);
          synchronized (currentTransaction) {
               m_dataBase.put(transaction.getTransaction(), keyDbt, dataDbt);
          }
     }
}

  • Marketing Encyclopedia System : Where are the physical files located?

    Hello All,
I have a Published Content item (Oracle Marketing > Encyclopedia > Publish). A PDF file was attached under a particular category/content type. The same shows up in 'My Published Items' in My Channels. The file opens fine when I click the link under the category to which I attached this item.
Can anyone help me find where this file (the PDF that I attached to the item) is stored in the system? Does it reside as a BLOB in a database table, or does the file reside somewhere on the UNIX box? Any pointers on getting to the physical location of the attached files would be greatly appreciated.
    TIA.

    Hi,
Portal Application components do not have any physical files. They are available as packages in the database. You can see them in the Manage Component screen. Clicking on the package body link will show you the whole package. You can copy this into a .SQL file and compile it later whenever you want, in the schema where the application is built.
    Hope that helps.
    Thanks,
    Sharmila

  • Two TT instances  with same physical  files

Hi All,
Can we have two TimesTen instances sharing the same set of physical files (checkpoint and log files)?
Node A
TimesTen instance1 running
Node B
TimesTen instance2 running
Both TimesTen instances, instance1 and instance2, with shared physical files (checkpoint and log files).
Please tell me whether the above configuration is possible or not.

    Can you please elaborate on the background to this question? Do you want to 'share' the files such that both instances can use them concurrently or is this related to some form of shared disk failover/HA mechanism?
    It is absolutely not possible to concurrently access the same set of datastore files from two different instances and in fact TimesTen goes to some lengths to prevent this. If you managed to circumvent this protection the result would be a totally corrupted database.
    It is possible to use shared disk failover with TimesTen though this is not at all an optimal way to implement HA with TimesTen. TimesTen replication is the recommended HA mechanism to use.
    Chris

  • Where are the Physical Files of Portal App (Forms,Reports,etc)

    Hi..
    We are developing our portal using Oracle Portal..
    We use the Portal form,report,link,url,dynamic page etc in our development.
    The case is,
    Our QA want to restrict the access to the file (version control).
    So, they want to do a backup/copy of the development files (form,report,dynamic page etc..) and if there are changes to the portal, they want to use the backup as the development files.
The problem is,
we are not sure where the files are.
We don't know where the forms, reports, and dynamic pages are stored, or what they are (.jsp? .cvf? .java? .fmb? .rdf?)
Can anybody help?
P/S: FYI, we also use JSP, so we can see those physical files, e.g. index.jsp, login.jsp... but for the Portal forms, reports, and links, where are their physical files?
    Thanks.

    Hi,
Portal Application components do not have any physical files. They are available as packages in the database. You can see them in the Manage Component screen. Clicking on the package body link will show you the whole package. You can copy this into a .SQL file and compile it later whenever you want, in the schema where the application is built.
    Hope that helps.
    Thanks,
    Sharmila

  • PHYSICAL FILES FOR ABAP PROGRAM

hi friends,
I would like to know: are there any physical files at OS level for ABAP programs? For example, when we create a customized sales report in ABAP, does SAP also create a corresponding copy at OS level, and if yes, in which file system?
We have ECC 5.0 on AIX and use Oracle 9i.
thanks in advance.
regards.

The code you write in ABAP is not stored at OS level (at least not in an ABAP-stack system); it is contained in the database. As for the 'copies' your management wishes (for some ambiguous reason) to have: there is no need. The code you write is versioned, so every change made to it is automatically documented when you press 'Save'.
That was the answer to a 'basic' question, but you have made me very curious now: would you mind explaining why you would want a 'copy' of your custom code at OS level? Any special reason?
Edited by: Mylène Dorias on Jun 9, 2010 1:37 PM typo

  • Job output is no longer available in the database control file

I'm running my RMAN backup with Oracle Enterprise Manager Grid Control 10.2.0.1,
in nocatalog mode.
    When I look for the job output in the "View Backup Report" section, it says
    Job output is no longer available in the database control file.
    Here is my rman setup:
    RMAN> show all;
    using target database control file instead of recovery catalog
    RMAN configuration parameters are:
    CONFIGURE RETENTION POLICY TO REDUNDANCY 7;
    CONFIGURE BACKUP OPTIMIZATION OFF; # default
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
    CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 2;
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE 5 G;
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/opt/oracle/app/oracle/product/10.2.0/db_1/dbs/snapcf_mus_prod.f'; # default
Does anyone know why I cannot see my RMAN job output?

show parameter control_file_record_keep_time;
NAME                           TYPE     VALUE
control_file_record_keep_time  integer  7
The backup shown is usually only the last one, from the day before.
If I go to the Job tab in OEM and then click on the job detail, I can see the detail of my RMAN backup. But I cannot see the backup detail in the View Backup Report section of the database. It shows the error message: Job output is no longer available in the database control file.
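RMAN job records live in the control file and are aged out after CONTROL_FILE_RECORD_KEEP_TIME days (7 in the output above), which would explain the missing report. One possible mitigation, sketched below, is to raise the parameter; it is dynamic, and the value 30 is only an example, not a recommendation from this thread:

```sql
-- Keep reusable records (including RMAN job/backup records) in the
-- control file for 30 days instead of the default 7.
ALTER SYSTEM SET control_file_record_keep_time = 30 SCOPE = BOTH;
```

Note this grows the control file; using a recovery catalog instead of nocatalog mode is the other way to keep job history longer.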

  • Unable to Initialize Volume Manager from a Configuration File

    I'd like to reattach a D1000 to a rebuilt system. The array contains a raid 5 partition that was built with Solaris Volume Manager (Solaris 9). Since it is the same system all controller/target/slice ids have not changed. I was able to restore the Volume Manager configuration files (/etc/lvm) from a tape backup and followed the instructions provided in the Solaris Volume Manager Administration Guide: How to Initialize Solaris Volume Manager from a Configuration File <http://docs.sun.com/db/doc/816-4519/6manoju60?a=view>.
    All of the state database replicas for this partition are contained on the disks within the array so I began by creating new database replicas on a local disk.
    I then copied the /etc/md.cf file to /etc/md.tab
    # more /etc/lvm/md.tab
    # metadevice configuration file
    # do not hand edit
    d0 -r c1t10d0s0 c1t11d0s0 c1t12d0s0 c1t8d0s0 c1t9d0s0 c2t10d0s0 c2t11d0s0 c2t12d0s0 c2t8d0s0 c2t9d0s0 -k -i 32b -h hsp000
    hsp000 c1t13d0s0 c2t13d0s0
I then tested the syntax of the md.tab file (this output is actually from my second attempt).
    # /sbin/metainit -n -a
    d0: RAID is setup
    metainit: <hostname>: /etc/lvm/md.tab line 4: hsp000: hotspare pool is already setup
    Not seeing any problems I then attempted to recreate the d0 volume, but it fails with the error below:
    # /sbin/metainit -a
    metainit: <hostname>: /etc/lvm/md.tab line 3: d0: devices were not RAIDed previously or are specified in the wrong order
    metainit: <hostname>: /etc/lvm/md.tab line 4: hsp000: hotspare pool is already setup
    Any suggestions on how to reinitialize this volume would be appreciated.
    Thanks, Doug

    You have UserPrincipalName column heading in the csv file so this should be your cmdlet.
    import-csv C:\temp\sharedMailboxCreationTest.csv | ForEach-Object {New-Mailbox -shared  -Name $_.Name  -Alias $_.Alias  -OrganizationalUnit $_.OrganizationalUnit -UserPrincipalName $_.UserPrincipalName -Database $_.Database}

  • Unable to open the physical file "D:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\abc.mdf". Operating system error 2: "2(The system cannot find the file specified.)".

hi,
I am running the command below to move SQL Server mdf and ldf files from one drive to another (C drive to D drive),
but I am getting the error below:
Unable to open the physical file "D:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\abc.mdf". Operating system error 2: "2(The system cannot find the file specified.)".
    use master
    DECLARE @DBName nvarchar(50)
    SET @DBName = 'CMP_143'
    DECLARE @RC int
    EXEC @RC = sp_detach_db @DBName
    DECLARE @NewPath nvarchar(1000)
    --SET @NewPath = 'E:\Data\Microsoft SQL Server\Data\';
    SET @NewPath = 'D:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\';
    DECLARE @OldPath nvarchar(1000)
    SET @OldPath = 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\';
    DECLARE @DBFileName nvarchar(100)
    SET @DBFileName = @DBName + '.mdf';
    DECLARE @LogFileName nvarchar(100)
    SET @LogFileName = @DBName + '_log.ldf';
    DECLARE @SRCData nvarchar(1000)
    SET @SRCData = @OldPath + @DBFileName;
    DECLARE @SRCLog nvarchar(1000)
    SET @SRCLog = @OldPath + @LogFileName;
    DECLARE @DESTData nvarchar(1000)
    SET @DESTData = @NewPath + @DBFileName;
    DECLARE @DESTLog nvarchar(1000)
    SET @DESTLog = @NewPath + @LogFileName;
    DECLARE @FILEPATH nvarchar(1000);
    DECLARE @LOGPATH nvarchar(1000);
    SET @FILEPATH = N'xcopy /Y "' + @SRCData + N'" "' + @NewPath + '"';
    SET @LOGPATH = N'xcopy /Y "' + @SRCLog + N'" "' + @NewPath + '"';
    exec xp_cmdshell @FILEPATH;
    exec xp_cmdshell @LOGPATH;
    EXEC @RC = sp_attach_db @DBName, @DESTData, @DESTLog
    go
Can anyone please help with how to set the DB offline? Currently I stopped the SQL Server service from services.msc and started the SQL Server Agent.
Should I stop both services to move the files from one drive to another?
Note: I tried the solution below but it didn't work:
ALTER DATABASE <DBName> SET OFFLINE WITH ROLLBACK IMMEDIATE
Update:
Now I am getting this message:
    Msg 15010, Level 16, State 1, Procedure sp_detach_db, Line 40
    The database 'CMP_143' does not exist. Supply a valid database name. To see available databases, use sys.databases.
    (3 row(s) affected)
    (3 row(s) affected)
    Msg 5120, Level 16, State 101, Line 1
    Unable to open the physical file "D:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\CMP_143.mdf". Operating system error 2: "2(The system cannot find the file specified.)".

First, you should have checked the database mdf/ldf names and location by using this command:
Use CMP_143
Go
Sp_helpfile
It looks like your database CMP_143 was successfully detached, but the mdf/ldf location or name was different, which is why the files did not get copied to the target location.
The database was already detached; that's why taking it offline failed:
    Msg 15010, Level 16, State 1, Procedure sp_detach_db, Line 40
    The database 'CMP_143' does not exist. Supply a valid database name. To see available databases, use sys.databases.
EXEC @RC = sp_attach_db @DBName, @DESTData, @DESTLog
The attach step is failing because there is no mdf file at that path:
    Msg 5120, Level 16, State 101, Line 1
    Unable to open the physical file "D:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\CMP_143.mdf". Operating system error 2: "2(The system cannot find the file specified.)"
Solution:
Search the OS for the physical files (mdf/ldf), copy them to the target location, and then re-run sp_attach_db with the right location and names of the mdf/ldf.

  • Problem about space management of archived log files

    Dear friends,
    I have a problem about space management of archived log files.
my database is Oracle 10g Release 1 running in archivelog mode. I use OEM (web-based) to configure all the backup and recovery settings.
I configured the "Flash Recovery Area" to do backup and recovery automatically. My daily backup is scheduled every night at 2:00am, and my backup setting is "disk settings" -> "compressed backup set". The following is the RMAN script:
Daily Script:
run {
allocate channel oem_disk_backup device type disk;
recover copy of database with tag 'ORA$OEM_LEVEL_0';
backup incremental level 1 cumulative copies=1 for recover of copy with tag 'ORA$OEM_LEVEL_0' database;
}
The retention policy is the second choice, "Retain backups that are necessary for a recovery to any time within the specified number of days (point-in-time recovery)", with a recovery window of 1 day.
I assigned enough space for the flash recovery area: my database is about 2G and I assigned 20G.
Now here is the problem. According to the Oracle online manual, Oracle can manage the flash recovery area automatically; that is, when the space is full it can delete obsolete archived log files. But in fact it never works: whenever the space fills up, the database hangs! Besides, the status of the archived log files is very strange. For example, the "obsolete" status can change from "yes" to "no", and then from "no" back to "yes". I really have no idea about this. I know Oracle usually keeps archived files somewhat longer than the retention policy requires, but I don't know why the obsolete status changes by itself. I could write a scheduled job to delete obsolete archived files every day, but I want to understand the reason; my goal is to back up all the files on disk and let Oracle manage them automatically.
There is also another problem related to archive mode. I have two Oracle 10g (Release 1) databases: db1 is more than 20G, db2 is about 2G. Both have the same backup and recovery policy, except that I assigned a larger flash recovery area to db1. Both are in archivelog mode, and almost nobody accesses them except the scheduled backup job and my occasional administration through OEM. The strange thing is that the smaller database, db2, produces many more archived log files than the bigger one, and the same goes for the size of the flashback logs for point-in-time recovery. (I enabled flashback logging for fast database point-in-time recovery, with a flashback retention time of 24 hours.) I also found that the memory utilization of the smaller database is higher: it stays above 99% nearly all the time, while the bigger one stays around 97% (I enabled "Automatic Shared Memory Management" on both). CPU and queue lengths on both databases are very low, and I am nearly sure no one has hacked the databases. So I really have no idea why the same backup and recovery policy gives such different results, especially why the smaller database produces more redo logs than the bigger one. Does anyone happen to know the reason, or how I should investigate it?
By the way, I found that the web-based OEM does not reflect the correct database status when the database shuts down abnormally. For example, if the database hangs because the flash recovery area is full, then after I assign more flash recovery area space and restart the database, OEM usually still shows the wrong status; I must restart OEM manually for it to reflect the current database status. Does anyone know in which situations I should restart OEM to get the correct database status?
Sorry for the long message; I just wanted to describe everything in detail to ease diagnosis.
    any hint will be greatly appreciated!
    Sammy

Thank you very much. In fact, at my site Oracle never managed the archived files automatically, although I tried my best. In the end I set up a daily job to check the archived files and delete them.
Thanks again.
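A daily cleanup of that sort is usually run from RMAN itself; a minimal sketch, assuming disk backups and the retention policy configured earlier in the thread:

```
RMAN> CROSSCHECK ARCHIVELOG ALL;              # mark records whose files are gone as EXPIRED
RMAN> DELETE NOPROMPT EXPIRED ARCHIVELOG ALL; # drop those stale records
RMAN> DELETE NOPROMPT OBSOLETE;               # remove backups/logs outside the retention policy
```

NOPROMPT makes the deletes safe to schedule unattended; without it RMAN asks for confirmation.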

  • Filestream Creation Unable to Open Physical File Operating System Error 259

    Hey Everybody,
I have run out of options supporting a customer who gets an error when creating a database with a filestream.  The error displayed is "unable to open physical file, operating system error 259 (No more data is available)".  We're using a pretty
standard creation SQL script that we haven't had issues with for other customers:
    -- We are going to create our data paths for the filestreams.  
    DECLARE @data_path nvarchar(256);
    SET @data_path = (SELECT SUBSTRING(physical_name, 1, CHARINDEX(N'master.mdf', LOWER(physical_name)) - 1)
                      FROM master.sys.master_files
                      WHERE database_id = 1 AND file_id = 1);
    -- At this point, we should be able to create our database.  
EXECUTE ('CREATE DATABASE AllTables
ON PRIMARY
    (NAME = AllTables_data
    ,FILENAME = ''' + @data_path + 'AllTables_data.mdf''
    ,SIZE = 10MB
    ,FILEGROWTH = 15%)
FILEGROUP FileStreamAll CONTAINS FILESTREAM DEFAULT
    (NAME = FSAllTables
    ,FILENAME = ''' + @data_path + 'AllTablesFS'')
LOG ON
    (NAME = AllTables_log
    ,FILENAME = ''' + @data_path + 'AllTables_log.ldf''
    ,SIZE = 5MB
    ,FILEGROWTH = 5MB)');
GO
We are using SQL Server 2014 Express.  Filestreams were enabled during the SQL Server installation.  The instance was created successfully and we are able to connect to the database through SSMS. The user's drive is encrypted with Sophos.
    We have tried the following:
    1. Increasing the permissions of the SQL Server server to have full access to the folders.
    2. Attempted a restore of a blank database and it failed.
    There doesn't seem to be any knowledge base articles on this particular error and I am not sure what else I can do to resolve this.  Thanks in advance for any help!

    Hi Ryan,
1) SQL Server (any version) can't store FILESTREAM data on encrypted drives. Please see a similar scenario in the following link:
https://ask.sqlservercentral.com/questions/115761/filestream-and-encrypted-drives.html
2) I don't think there is any problem with permissions on the folder if the user can create a database in the same folder, but I am not too sure. Also see the article by
Jacob on configuring FILESTREAM for SQL Server, which describes how to configure the FILESTREAM access level and create a FILESTREAM-enabled database.
    Hope this helps,
    Thanks
    Bhanu 
