Is a backup of the application file system also required?
Hi,
Basic question.
Suppose the database has crashed and we have restored it from a cold backup taken two days ago.
How will the Oracle Applications file system stay in sync with the database?
Do we need to back up the application file system as well?
Thanks,
Kishore
Hi,
It's also worth mentioning that your concurrent logs in the application file system will be out of sync with the records in the database.
If you restore the database from a backup taken a couple of days ago, the records in FND_CONCURRENT_REQUESTS will be a couple of days old, but users will have been running requests since the backup was made, creating files in $APPLCSF/log/<context>. The danger is that when you restart the environment and users start running concurrent requests, request IDs will be issued again and the new output will be appended to the end of the existing log files.
So it is best to find the maximum request number in the table and remove any log and output files newer than that, to avoid confusion.
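That cleanup can be sketched roughly as follows. The request id, file names, and scratch directory below are all invented for illustration; a real run would first query SELECT MAX(request_id) FROM fnd_concurrent_requests and target the actual $APPLCSF/log/<context> directory:

```shell
# Demo only: MAX_REQ and the file names are made up.
MAX_REQ=150                 # pretend the restored DB's highest request id is 150
LOGDIR=$(mktemp -d)         # stand-in for $APPLCSF/log/<context>
touch "$LOGDIR/l100.req" "$LOGDIR/l150.req" "$LOGDIR/l200.req"

for f in "$LOGDIR"/l*.req; do
  req=$(basename "$f" .req)   # l200.req -> l200
  req=${req#l}                # l200 -> 200
  if [ "$req" -gt "$MAX_REQ" ]; then
    rm -f "$f"                # request never ran against the restored DB
  fi
done

ls "$LOGDIR"                  # l100.req and l150.req survive
```

The same loop would apply to the corresponding output files under $APPLCSF/out.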
HTH
J
Similar Messages
-
Backup of application file system in oracle apps 11i
What are the ways to take a backup of the application file system in Oracle Applications 11i?
Can anyone advise on this?
Application :11.5.10.2
OS : RHEL 3.0
Check the following threads:
Apps Backup
Re: Apps Backup
Backup Oracle Applications (11.5.10.2)
Re: Backup Oracle Applications (11.5.10.2)
Best Backup Strategy
Re: Best Backup Strategy
System Backup
system backup
Recommended backup and recovery strategy for EBS
Re: Recommended backup and recovery strategy for EBS -
Application file system deleted R12.1.3
Hi,
In one of our development R12.1.3 instances (DB 11.2.0.3), the application file system was accidentally deleted. Please help us get the EBS instance back up.
1. We have a backup taken on 27th Oct (both database and application).
2. We applied 8/10 patches during this period, along with a couple of rounds of testing.
3. The application file system was accidentally deleted on 4th Nov. There are no issues on the database node.
I want to continue using the same database with the old application file system.
Is it possible (given that we applied patches within this period)?
Please share your thoughts on whether it can be done.
Regards,
Djay
Hi Djay,
You may restore the application binary files from your backup and then re-apply the patches. The necessary application-tier binaries will be regenerated, and the corresponding database-level changes will be re-applied.
Note: when re-applying a patch that already exists, adpatch warns that the patch has already been applied and asks whether you wish to re-apply it. Just say yes and proceed.
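As a sanity check before and after re-applying, you can look at what the database already has recorded. A minimal sketch, assuming the standard AD_BUGS patch-tracking table; the bug number is a placeholder:

```sql
-- Run as the APPS user; 1234567 is a placeholder patch/bug number.
SELECT bug_number, creation_date
  FROM ad_bugs
 WHERE bug_number = '1234567';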
Also, let us know exactly which files were deleted so that we can assist you better.
Thanks &
Best Regards, -
SAP Backups ; R/3 & file system
Hello,
What is the difference between an SAP R/3 backup and an SAP file system backup?
Which files are backed up in each case?
Which backup covers the Oracle DB files (data files, control files, redo logs, SPFILE)?
Which backup covers the OS files (SUSE, in this case)?
Thanks
There is no separate concept of an "SAP file system backup"; it simply means an OS-level file system backup.
An SAP R/3 backup backs up all the sapdata (database) files, control files, and redo logs.
An OS-level file system backup can back up any files, based on your requirements and rotation scheme.
You can use both backups for restores. An SAP R/3 backup takes care of everything, both backup and restoration,
but with an OS-level backup you must first know which files need to be backed up and what exactly all the directories contain.
Regards,
Nick Loy -
Shared Application File System.
After moving to shared application tier in production
In $IAS_ORACLE_HOME/Apache/Apache/conf I see the security.conf file has version security_ux_ias1022.conf 115.25 2009/04/23 10:04:40 mmanku ship $
but in $CONFIG_TOP/Apache/Apache/conf the security.conf file has the following version: security_ux_ias1022.conf 115.29 2009/12/21 05:56:02 sbandla ship $
Which top does Oracle use?
Why does the shared IAS home have a lower version than the CONFIG_TOP?
Why doesn't the unused one get deleted?
Hi;
Please check below notes which could be helpful for your issue:
Explanation of Context Variables for Shared Application File System in R12 and 11i [ID 1070152.1]
Sharing the Application Tier File System in Oracle Applications Release 11i [ID 233428.1]
If that does not help, I suggest raising an SR while you wait for responses from other forum users here.
Regards,
Helios -
Shared and non-shared application file system in R12
Hi,
We have Oracle Apps 11.5.10.2 in a multi-node architecture, with node 1 hosting the DB, CCM, and admin tiers, and node 2 hosting the forms and web servers, on Windows 2003.
Some of our products are in Shared status (installation), such as AD, AU, etc., and I found that R12 currently does not support the shared file system infrastructure on application tier server nodes running on Windows platforms.
Metalink note:384248.1
What will be the impact of this if we plan an R12 upgrade?
Thanks!
Shekhar
Hi Shekhar,
<p>
I'm slightly confused by your question. Shared application tier file system isn't supported on Windows platforms in 11i, either. From Note 233428.1: <br>
<blockquote>
The shared file system infrastructure is not supported <b>on application tier server nodes running Windows platforms</b> or using Oracle9iAS releases earlier than 1.0.2.2.2. Application tier nodes must be running on the same operating system to implement a shared file system.
</blockquote>
<p>
Are you perhaps referring to the licensing status of your Applications products (i.e. either Installed, Shared, or Not Installed) from the Installed Products report in OAM? If so, this should have no impact on your upgrade to R12; those statuses should remain the same when you upgrade. "Shared" in this case relates only to the licensing status of the product, implying that the product is not explicitly licensed (i.e. Installed status), but its functionality is required for use with other Installed products.
<p>
Regards,<br>
John P. -
Dear Hussein,
I have the DB on one node and the application on a second node (12.1.1). I don't know why, but suddenly the ownership of all the application files changed; it was applprod:dba and now it shows applprod:others, and the file permissions have changed as well, even though no activity was done on this server.
Please suggest
Regards
Bilal
Bilal,
Change the ownership of the files at the OS level, update the group in the application context file, and run AutoConfig.
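The OS-level part might look like the sketch below. The owner, group, and paths are examples only; a real run (as root) would target your actual APPL_TOP and finish with adautocfg.sh, and the runnable demo here only exercises the permission-restoring chmod on a scratch directory:

```shell
# A real run would be something like:
#   chown -R applprod:dba /u01/app/applprod     # example owner and path
#   then update the group in the context file and run adautocfg.sh
# Demo on a scratch directory:
DEMO=$(mktemp -d)
touch "$DEMO/adstrtal.sh"
chmod 000 "$DEMO/adstrtal.sh"        # simulate clobbered permissions

# owner rwx, group read (+execute on directories), nothing for others
chmod -R u+rwX,g+rX,o-rwx "$DEMO"

[ -r "$DEMO/adstrtal.sh" ] && echo "owner can read again"
```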
Regards,
Hussein -
R12 shared application file system with Rac cloning help
Hello,
We have R12 with two application servers (shared file system) and two DB servers (on RAC), and the same setup on a test environment. We are using Linux servers. Can anyone help me with the cloning steps, shared file system to shared file system and RAC to RAC? I am waiting for your replies.
Thanks
Mehmood
Hi,
Please refer to:
Note: 783188.1 - Certified RAC Scenarios for E-Business Suite Cloning
Note: 559518.1 - Cloning Oracle E-Business Suite Release 12 RAC-Enabled Systems with Rapid Clone
Note: 799735.1 - Rapid Clone Documentation Resources, Release 11i and 12
Note: 756050.1 - Troubleshooting Autoconfig issues with Oracle Applications RAC Databases
Thanks,
Hussein -
Reg: shared application file system
hi
I am new as an Apps DBA. I have a single-node installation, and now I want to move to a shared application tier file system from that single node. Is it possible to do that? Please advise.
Regards
D
Hi,
It is possible -- Please refer to the following documents for details.
Note: 384248.1 - Sharing The Application Tier File System in Oracle E-Business Suite Release 12
Note: 233428.1 - Sharing the Application Tier File System in Oracle Applications 11i
Regards,
Hussein -
RAC backup with RMAN...put backup on diff file system
hello all,
I have not worked much with SAN storage. One of my clients has implemented 9i RAC. Now he wants to add two more disks to the SAN storage (RAID is implemented), which the Sun engineers will do, but before that I have to take a full database backup (an 80 GB database) through RMAN. My problem, or confusion, is that the database is on Sun SAN storage, and I have to put the full RMAN database backup on the local hard disk of node 1 of the RAC. Is this possible, since the SAN storage is a raw file system (as I guess) and I am putting the backup on the local system?
Please help me out; I have to do this within a couple of days.
Please also tell me the procedure for changing the backup path in RMAN, if the above is possible.
It's urgent.
Thanks and Regards!!
Pankaj Rawat
Two things:
1) You will not have any problems taking an RMAN backup with RAC and raw devices; neither makes your backups any different.
2) Based on your post, you are not very confident in your RMAN skills, and this is your real problem. What is a must for you: take the backup, copy it to another machine, and try to restore from it. Note that you should NOT look at your original database during the restore or take any files from it (not even init.ora or the spfile). If you have not done this and do not have an exact procedure, consider your backup useless. This is a conservative approach, but believe me, it is worth it when your SAN engineers screw up your storage. And they warned you. ;-) -
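For the path part of the question above: RMAN just needs a FORMAT clause or a configured disk channel. A minimal, hedged sketch; the directory is an example, and it must have room for the ~80 GB database plus archived logs:

```
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/u01/backup/ora_%U';
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;

# or, as a one-off without changing the stored configuration:
RMAN> BACKUP DATABASE FORMAT '/u01/backup/ora_%U';
```

%U expands to a unique backup-piece name, so pieces from different channels do not collide.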
Does Time Machine backup open applications/files?
I recently lost my calendars (in Entourage, Ical, Mobileme, etc.), and wanted to restore from a time machine backup.
I tried to restore the Entourage identity from a day or two before only to see that the date in Time Machine for the file was way old.
I suspected that because I leave the app open all the time, particularly overnight when my backups run at 11 pm, the file never got backed up. Does this sound right?
Ical, on the other hand (which is synched to mobile me and entourage), even though I rarely open it and use it, seemed to have the updated data.
How does Time Machine deal with files that are in use?
Entourage needs some special handling. I don't use it, so I can't tell you exactly how, but if you search this forum for Entourage you should find a post or two (by Kappy and/or Baltwo, I think) with good advice.
-
I just upgraded my MacBook Pro to Mountain Lion. Now I want to install Windows, but during installation using Boot Camp it says Windows can't be installed on a FAT partition, and I couldn't change it to NTFS. Also, how can I partition my hard disk after the Windows installation? If I partition the hard disk before installing Windows, it won't let me install Windows, saying the startup disk can't be partitioned. Help would be appreciated.
Thanks
bkchoo wrote:
Now I wish to upgrade from Mountain Lion 10.8.5 to the latest version of OS X, Yosemite, but I am worried about whether, after upgrading, I can still use the Windows XP installation that currently runs under Boot Camp on Mountain Lion 10.8.5.
Yosemite does not allow a new installation of XP, but an existing XP installation is self-contained and will continue to work. Does your XP installation use a FAT partition? You may be better off using NTFS and then upgrading to Yosemite.
Will the Yosemite upgrade process delete my Windows XP partition or break my current Boot Camp setup? Will I still have the option to boot OS X or Windows XP (pressing the 'alt/option' key during boot) after upgrading to Yosemite?
If you have made any resizing attempts on the Windows XP partition, Yosemite will cause problems; otherwise the Yosemite upgrade leaves XP untouched.
Are there any specific reasons to upgrade to Yosemite? -
Oracle DB and File system backup configuration
Hi,
As I understand from the help documents and guides, brbackup, brarchive, and brrestore are the tools used for backing up and restoring the Oracle database and the file system. We have TSM (Tivoli Storage Manager) in our infrastructure for managing backups centrally. Before configuring backups with TSM, I want to test the backup/restore configuration locally, i.e. storing the backup on the local file system and then restoring from there. Our backup strategy is a full online backup on the weekends and incremental backups on weekdays. Given this, the following are the things I want to test.
1. Full online backup (to local file system)
2. Incremental online backup (to local file system)
3. Restore (from local file system)
I found the help documents to be very generic and couldn't find specific information on a comprehensive configuration to achieve this. Can someone help with an end-to-end configuration?
We are using SAP Portal 7.0 (NW2004s) with Oracle 10g database hosted on AIX server.
Helpful answers will be rewarded
Regards,
Chandra
Thanks for your feedback. I am almost clear on this issue now, except for one point that needs to be confirmed: do you mean that on Linux or UNIX we can, if required, set "direct to disk" at the OS level, whereas on Windows it is "direct to disk" by default and we do not need to set it manually?
And I have a further question. If a database is stored on a SAN disk, say a volume from a disk array, and the disk array can take block-level snapshots of a disk, suppose we need to implement an online backup of the database. The steps are: ALTER TABLESPACE ... BEGIN BACKUP, ALTER SYSTEM SUSPEND, then take a snapshot of the volume that stores all the database files, including data files, redo logs, archived redo logs, control files, the server parameter file, network parameter files, and the password file. Do you think this backup is consistent or not? Please note that we do not flush the file system cache before these steps; let's assume the SAN cache is flushed automatically. Can I consider it consistent because the redo writes are synchronous? -
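For reference, the snapshot sequence described in the question above can be written out as SQL. ALTER DATABASE BEGIN/END BACKUP is the 10g+ shorthand for putting every tablespace into backup mode at once; on older releases you would loop over tablespaces with ALTER TABLESPACE ... BEGIN BACKUP:

```sql
ALTER DATABASE BEGIN BACKUP;        -- all tablespaces into hot-backup mode
ALTER SYSTEM SUSPEND;               -- quiesce database I/O before the snapshot
-- ... trigger the SAN snapshot of the volume here ...
ALTER SYSTEM RESUME;
ALTER DATABASE END BACKUP;
ALTER SYSTEM ARCHIVE LOG CURRENT;   -- so the snapshot can be rolled forward
```

Archiving the current log after END BACKUP ensures the redo needed to make the snapshot consistent is on disk outside the snapshot window.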
Hi
Setting backup-storage with the following configuration is not generating backup files under the specified location. We are pumping a huge volume of data, and the data (a few GB) is not getting backed up to the file system. Can you let me know what I am missing here?
Thanks
sunder
<distributed-scheme>
<scheme-name>distributed-Customer</scheme-name>
<service-name>DistributedCache</service-name>
<!-- <thread-count>5</thread-count> -->
<backup-count>1</backup-count>
<backup-storage>
<type>file-mapped</type>
<directory>/data/xx/backupstorage</directory>
<initial-size>1KB</initial-size>
<maximum-size>1KB</maximum-size>
</backup-storage>
<backing-map-scheme>
<read-write-backing-map-scheme>
<scheme-name>DBCacheLoaderScheme</scheme-name>
<internal-cache-scheme>
<local-scheme>
<scheme-ref>blaze-binary-backing-map</scheme-ref>
</local-scheme>
</internal-cache-scheme>
<cachestore-scheme>
<class-scheme>
<class-name>com.xxloader.DataBeanInitialLoadImpl
</class-name>
<init-params>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>{cache-name}</param-value>
</init-param>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>com.xx.CustomerProduct
</param-value>
</init-param>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>CUSTOMER</param-value>
</init-param>
</init-params>
</class-scheme>
</cachestore-scheme>
<read-only>true</read-only>
</read-write-backing-map-scheme>
</backing-map-scheme>
<autostart>true</autostart>
</distributed-scheme>
<local-scheme>
<scheme-name>blaze-binary-backing-map</scheme-name>
<high-units>{back-size-limit 1}</high-units>
<unit-calculator>BINARY</unit-calculator>
<expiry-delay>{back-expiry 0}</expiry-delay>
<cachestore-scheme></cachestore-scheme>
</local-scheme>
Hi
We did try out with the following configuration
<near-scheme>
<scheme-name>blaze-near-HeaderData</scheme-name>
<front-scheme>
<local-scheme>
<eviction-policy>HYBRID</eviction-policy>
<high-units>{front-size-limit 0}</high-units>
<unit-calculator>FIXED</unit-calculator>
<expiry-delay>{back-expiry 1h}</expiry-delay>
<flush-delay>1m</flush-delay>
</local-scheme>
</front-scheme>
<back-scheme>
<distributed-scheme>
<scheme-ref>blaze-distributed-HeaderData</scheme-ref>
</distributed-scheme>
</back-scheme>
<invalidation-strategy>present</invalidation-strategy>
<autostart>true</autostart>
</near-scheme>
<distributed-scheme>
<scheme-name>blaze-distributed-HeaderData</scheme-name>
<service-name>DistributedCache</service-name>
<partition-count>200</partition-count>
<backing-map-scheme>
<partitioned>true</partitioned>
<read-write-backing-map-scheme>
<internal-cache-scheme>
<external-scheme>
<high-units>20</high-units>
<unit-calculator>BINARY</unit-calculator>
<unit-factor>1073741824</unit-factor>
<nio-memory-manager>
<initial-size>1MB</initial-size>
<maximum-size>50MB</maximum-size>
</nio-memory-manager>
</external-scheme>
</internal-cache-scheme>
<cachestore-scheme>
<class-scheme>
<class-name>
com.xx.loader.DataBeanInitialLoadImpl
</class-name>
<init-params>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>{cache-name}</param-value>
</init-param>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>com.xx.bean.HeaderData</param-value>
</init-param>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>SDR.TABLE_NAME_XYZ</param-value>
</init-param>
</init-params>
</class-scheme>
</cachestore-scheme>
</read-write-backing-map-scheme>
</backing-map-scheme>
<backup-count>1</backup-count>
<backup-storage>
<type>off-heap</type>
<initial-size>1MB</initial-size>
<maximum-size>50MB</maximum-size>
</backup-storage>
<autostart>true</autostart>
</distributed-scheme>
With this configuration, the process's resident main memory consumption is about 15 GB.
When we changed this configuration to
<near-scheme>
<scheme-name>blaze-near-HeaderData</scheme-name>
<front-scheme>
<local-scheme>
<eviction-policy>HYBRID</eviction-policy>
<high-units>{front-size-limit 0}</high-units>
<unit-calculator>FIXED</unit-calculator>
<expiry-delay>{back-expiry 1h}</expiry-delay>
<flush-delay>1m</flush-delay>
</local-scheme>
</front-scheme>
<back-scheme>
<distributed-scheme>
<scheme-ref>blaze-distributed-HeaderData</scheme-ref>
</distributed-scheme>
</back-scheme>
<invalidation-strategy>present</invalidation-strategy>
<autostart>true</autostart>
</near-scheme>
<distributed-scheme>
<scheme-name>blaze-distributed-HeaderData</scheme-name>
<service-name>DistributedCache</service-name>
<partition-count>200</partition-count>
<backing-map-scheme>
<partitioned>true</partitioned>
<read-write-backing-map-scheme>
<internal-cache-scheme>
<external-scheme>
<high-units>20</high-units>
<unit-calculator>BINARY</unit-calculator>
<unit-factor>1073741824</unit-factor>
<nio-memory-manager>
<initial-size>1MB</initial-size>
<maximum-size>50MB</maximum-size>
</nio-memory-manager>
</external-scheme>
</internal-cache-scheme>
<cachestore-scheme>
<class-scheme>
<class-name>
com.xx.loader.DataBeanInitialLoadImpl
</class-name>
<init-params>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>{cache-name}</param-value>
</init-param>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>com.xx.bean.HeaderData</param-value>
</init-param>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>SDR.TABLE_NAME_XYZ</param-value>
</init-param>
</init-params>
</class-scheme>
</cachestore-scheme>
</read-write-backing-map-scheme>
</backing-map-scheme>
<backup-count>1</backup-count>
<backup-storage>
<type>file-mapped</type>
<initial-size>1MB</initial-size>
<maximum-size>100MB</maximum-size>
<directory>/data/xxcache/blazeload/backupstorage</directory>
<file-name>{cache-name}.store</file-name>
</backup-storage>
<autostart>true</autostart>
</distributed-scheme>
Note that the backup storage is file-mapped:
<backup-storage>
<type>file-mapped</type>
<initial-size>1MB</initial-size>
<maximum-size>100MB</maximum-size>
<directory>/data/xxcache/blazeload/backupstorage</directory>
<file-name>{cache-name}.store</file-name>
</backup-storage>
We still see that the process's resident main memory consumption is 15 GB, and we also see that the /data/xxcache/blazeload/backupstorage folder is empty.
We wanted to check where the backup storage maintains its information; we would like to offload it to a flat file.
Any pointers in this regard would be appreciated.
Thanks
sunder -
Shared Application Tier File System in Oracle Applications R12
Hi
Recently I came to an environment where we have 3 nodes with a shared application tier file system: one is the admin & DB node, and the other two are application nodes (Apache and Forms) with load balancing. Since it is a shared application tier file system, APPL_TOP is NFS-mounted on the two application nodes. But I see that the admin node's INST_TOP is also mounted on these two application nodes. Could you please let me know what this is for, or for what reasons it might be kept like this? My question is that the admin node and the two application nodes already have INST_TOPs locally on the nodes themselves, and the best practice is to have the $INST_TOP directory located on each server's local mount directory.
Re: Shared Application File System on Linux
Other than that, why is the admin node's INST_TOP mounted on these two application nodes? You would have to check with the admins who implemented it initially, but the best practice is to have the $INST_TOP directory on a server-local mount point.