SAP/ORACLE File Systems Disappeared in Local Zone
I created some 20 to 30 file systems on metadevices for an SAP/Oracle installation on my 6320 SAN, intended for a local zone.
I ran newfs on all of them and then mounted them in the global zone at their appropriate mount points.
I then ran zonecfg with the following input:
zonepath: /export/zones/zsap21
autoboot: true
pool:
inherit-pkg-dir:
dir: /lib
inherit-pkg-dir:
dir: /platform
inherit-pkg-dir:
dir: /sbin
inherit-pkg-dir:
dir: /opt/sfw
inherit-pkg-dir:
dir: /usr
fs:
dir: /oracle
special: /oracle_zsap21
raw not specified
type: lofs
options: []
fs:
dir: /oracle/stage/920_64
special: /oracle/stage/920_64_zsap21
raw not specified
type: lofs
options: []
fs:
dir: /temp
special: /temp_zsap21
raw not specified
type: lofs
options: []
fs:
dir: /usr/local
special: /usr/local_zsap21
raw not specified
type: lofs
options: []
fs:
dir: /usr/sap
special: /usr/sap_zsap21
raw not specified
type: lofs
options: []
fs:
dir: /usr/sap/install
special: /usr/sap/install_zsap21
raw not specified
type: lofs
options: []
fs:
dir: /usr/sap/trans
special: /usr/sap/trans_zsap21
raw not specified
type: lofs
options: []
fs:
dir: /export/home/zsap21
special: /export/home_zsap21
raw not specified
type: lofs
options: []
fs:
dir: /oracle/FS1
special: /oracle/FS1
raw not specified
type: lofs
options: []
fs:
dir: /oracle/FS1/920_64
special: /oracle/FS1/920_64
raw not specified
type: lofs
options: []
fs:
dir: /oracle/FS1/mirrlogA
special: /oracle/FS1/mirrlogA
raw not specified
type: lofs
options: []
fs:
dir: /oracle/FS1/oraarch
special: /oracle/FS1/oraarch
raw not specified
type: lofs
options: []
fs:
dir: /oracle/FS1/origlogA
special: /oracle/FS1/origlogA
raw not specified
type: lofs
options: []
fs:
dir: /oracle/FS1/origlogB
special: /oracle/FS1/origlogB
raw not specified
type: lofs
options: []
fs:
dir: /oracle/FS1/saparch
special: /oracle/FS1/saparch
raw not specified
type: lofs
options: []
fs:
dir: /oracle/FS1/sapdata1
special: /oracle/FS1/sapdata1
raw not specified
type: lofs
options: []
***********more available but I truncated it here**********************
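For reference, fs entries like those above would typically have been added with zonecfg subcommands along these lines (a sketch using the first entry from the listing; the prompts and values are illustrative):

```
zonecfg -z zsap21
zonecfg:zsap21> add fs
zonecfg:zsap21:fs> set dir=/oracle
zonecfg:zsap21:fs> set special=/oracle_zsap21
zonecfg:zsap21:fs> set type=lofs
zonecfg:zsap21:fs> end
zonecfg:zsap21> commit
```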
I successfully installed and configured the zone.
I then logged into the local zone and installed a new version of Oracle and a working SAP instance. I know that it worked because I was able to log into it.
Upon rebooting the server and the local zone, I lost access to all of my file systems in the Oracle/SAP zone.
It's almost as if the global zone has not mounted its file systems over the local zone's mount points. In the local zone I see no files in the directories where I would expect Oracle or SAP files.
I either mis-configured the zonecfg or missed a step somewhere.
I suspect that my file system contents are still around somewhere waiting to be hooked up with a zone's mount point.
In the local zone, df -k shows all of the file systems (as if the local zone knows which file systems belong to it), but they all report the same size and free space (probably from the global zone's root).
Any thoughts appreciated.
Atis
Do you have mount point directories within the zone path for these mounts?
You have to add a directory entry for each mount point in the zone's root directory, i.e. <zone_path>/root/oracle, <zone_path>/root/usr/sap, and so on.
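As a sketch of that fix: the loop below pre-creates the mount-point directories under the zone's root before the zone boots. The zone path is a stand-in (/tmp/zsap21-demo) so the commands can be tried anywhere; on the system described above it would be /export/zones/zsap21, and the directory list is taken from the zonecfg output.

```shell
# Stand-in zone path; on the real system: /export/zones/zsap21
ZONEPATH=/tmp/zsap21-demo

# Each lofs "dir" from the zonecfg needs a directory under <zonepath>/root
# so the global zone has somewhere to mount it when the zone boots.
for d in oracle oracle/stage/920_64 temp usr/local usr/sap \
         usr/sap/install usr/sap/trans export/home/zsap21; do
    mkdir -p "$ZONEPATH/root/$d"
done

ls "$ZONEPATH/root"
```

On the real system, booting the zone again (zoneadm -z zsap21 boot) should then attach the lofs mounts to these directories.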
Similar Messages
-
Configure Windows file system repository in local portal system
Hi
can any one give the step by step explanation for Configuring windows file system repository in local portal system .
Thanks
Kirankumar M
Hi Kiran,
Go through these docs; they will help you.
https://www.sdn.sap.com/irj/sdn/wiki?path=/display/KMC/File%2bSystem%2bRepository%2b-%2bW2KSecurityManager
https://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/wlg/284
http://help.sap.com/saphelp_nw04s/helpdata/en/e3/92322ab24e11d5993800508b6b8b11/frameset.htm
Anyway, you have to:
1. Create a mapping in Windows via Tools -> Map Network Drive (say V:\)
2. Do the configuration in your portal (network path and so on) as described in the third link.
3. Log on to the portal with admin permissions,
then
System Administration -> System Configuration ->Knowledge Management -> Content Management ->Repository Managers -> File System Repository
Configure a new repository manager for your windows system.
Regards
Manisha -
SAP GoLive : File System Response Times and Online Redologs design
Hello,
An SAP GoingLive verification session has just been performed on our SAP production environment.
SAP ECC6
Oracle 10.2.0.2
Solaris 10
As usual, we received database configuration instructions, but I'm a little skeptical about two of them:
1/
We have been told that our file system read response times "do not meet the standard requirements".
The following datafile has been flagged as having too high an average read time per block:
File name: /oracle/PMA/sapdata5/sr3700_10/sr3700.data10
Blocks read: 67534 - Avg. read time: 23 ms - Total read time: 1553282 ms
I'm surprised that an average read time of 23ms is considered a high value. What are exactly those "standard requirements" ?
2/
We have been asked to increase the size of the online redo logs, which are already quite large (54 MB).
We have nightly BW loads that generate "Checkpoint not complete" messages.
I've read in sap note 79341 that :
"The disadvantage of big redo log files is the lower checkpoint frequency and the longer time Oracle needs for an instance recovery."
Frankly, I have problems understanding this sentence.
Frequent checkpoints mean more redo log file switches, which means more archived redo log files generated, right?
But how is it that frequent checkpoints should decrease the time necessary for recovery?
Thank you.
Any useful help would be appreciated.
Hello,
>> I'm surprised that an average read time of 23ms is considered a high value. What are exactly those "standard requirements" ?
The recommended ("standard") values are published at the end of SAP note 322896.
23 ms does seem a little high to me; for example, we see roughly 4 to 6 ms on our productive system (with SAN storage).
>> Frequent checkpoints means more redo log file switches, means more archive redo log files generated. right?
Correct.
>> But how is it that frequent chekpoints should decrease the time necessary for recovery ?
A checkpoint occurs on every log switch of the online redo log files. On a checkpoint event, three things happen in an Oracle database:
Every dirty block in the buffer cache is written to the datafiles
The latest SCN is written (updated) into the datafile headers
The latest SCN is also written to the controlfiles
If your redo log files are larger, checkpoints happen less often, and the dirty buffers are not written to the datafiles (unless free space is needed in the buffer cache). So if your instance crashes, you need to apply more redo to the datafiles to reach a consistent state (roll forward). If you have smaller redo log files, more log switches occur, so the SCNs in the datafile headers (and the corresponding data) are closer to the newest SCN; hence the recovery is faster.
But this concept does not entirely match reality, because Oracle implements algorithms to reduce the DBWR workload at checkpoint time.
There are also several parameters (depending on the Oracle version) that ensure a required recovery time is met, for example FAST_START_MTTR_TARGET.
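The trade-off described here can be sanity-checked with back-of-the-envelope arithmetic. The redo rate below is an assumed figure, not from the thread; it just illustrates why larger logs mean fewer checkpoints (and fewer "Checkpoint not complete" messages), at the cost of more redo to replay after a crash.

```shell
# Assumed redo generation rate during the nightly BW load (illustrative only)
REDO_MB_PER_SEC=2

for LOG_MB in 54 500; do
    # A checkpoint is triggered at each log switch, i.e. once per filled log
    echo "${LOG_MB} MB logs: switch/checkpoint about every $((LOG_MB / REDO_MB_PER_SEC)) s"
done
```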
Regards
Stefan -
OC 11gR1 Update 3 doesn't show ZFS file systems created in a brownfield zone
The subject is a pretty concise description. I have several brownfield Solaris 10U10 containers running on M5000s, and I have delegated three zpools to each container for use by Oracle. Below is the relevant output from zonecfg export for one of these containers. They were all built in the same manner, then placed under management by OC. (I wish I'd been able to build them as greenfield with Ops Center, but there just wasn't enough time to learn how to configure Ops Center the way I needed to use it.)
set name=Oracle-DB-Instance
set type=string
set value="Oracle e-Business Suite PREPROD"
end
add dataset
set name=PREPRODredoPOOL
end
add dataset
set name=PREPRODarchPOOL
end
add dataset
set name=PREPRODdataPOOL
end
The problem is, none of the file systems built on these delegated pools in the container appear in the Ops Center File System Utilization charts. Does anyone have a suggestion for how to get OC to monitor the file systems in the zone?
Here's the output from zfs list within the zone described by the zonecfg output above:
[root@acdpreprod ~]# zfs list
NAME USED AVAIL REFER MOUNTPOINT
PREPRODarchPOOL 8.91G 49.7G 31K none
PREPRODarchPOOL/d05 8.91G 49.7G 8.91G /d05
PREPRODdataPOOL 807G 364G 31K none
PREPRODdataPOOL/d02 13.4G 36.6G 13.4G /d02
PREPRODdataPOOL/d03 782G 364G 782G /d03
PREPRODdataPOOL/d06 11.4G 88.6G 11.4G /d06
PREPRODredoPOOL 7.82G 3.93G 31K none
PREPRODredoPOOL/d04 7.82G 3.93G 7.82G /d04
None of the file systems in the delegated datasets appear in Ops Center for this zone. Are there any suggestions for how I can correct this?
Do you mean adopt the zone? That requires the zone to be halted, and it also says something about copying all file systems to the pool created for the zone. Of the 12 zones I have (four on each of three M5000s), seven are already in "production" status, and four of those seven now support 24x7 worldwide operations. A do-over is not an option here.
-
RAC 10gr2 using ASM: for RMAN, a cluster file system or a local directory?
The environment is a two-node RAC using ASM. I have to determine which design is better for backup and recovery with RMAN. The backups are going to be saved to disk only; the database is transactional only and small in size.
I am not sure how to create a cluster file system, or whether it is better to use a local directory. Also, what is the benefit of a recovery catalog, given that it is optional?
I very much appreciate your advice and recommendation, Terry
Arf,
I am new to RAC. I analyzed Alejandro's script. His main connection is to the first instance; then, through SQL*Plus, he connects to the second instance. He exits the second instance and starts the RMAN backup of the database. Therefore the backup of the database is done from the first instance.
I do not see where he runs setenv again to switch to the second instance and run RMAN against it. It looks to me as if the backup is only done for the first instance, not the second. I may be wrong, but I do not see the second instance's backup.
Kindly, I request your assistance on the steps/connection to backup the second instance. Thank you so much!! Terry -
Hi,
I want to know about file management in Oracle 10g.
Please advise me which is better: OMF or manual file management.
Regards,
Mark
OMF is not a file system, but a better way to manage the Oracle database's files. It is the basis of ASM in 10g, which provides a way to manage not just database files; in 11g, ACFS extends this to non-database files as well. On 9i you are limited to OMF, but on 10g you should be using ASM to get far more benefits than just having the files created and managed by Oracle.
HTH
Aman.... -
Microsoft Windows [Version 6.1.7600]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.
C:\Windows\system32>chkdsk
The type of the file system is NTFS.
WARNING! F parameter not specified.
Running CHKDSK in read-only mode.
CHKDSK is verifying files (stage 1 of 3)...
4 percent complete. (148991 of 343808 file records processed)
Attribute record (128, "") from file record segment 154351
is corrupt.
4 percent complete. (157441 of 343808 file records processed)
Attribute record (128, "") from file record segment 165044
is corrupt.
Attribute record (128, "") from file record segment 170460
is corrupt.
5 percent complete. (171904 of 343808 file records processed)
Attribute record (128, "") from file record segment 176325
is corrupt.
Attribute record (128, "") from file record segment 176329
is corrupt.
Attribute record (128, "") from file record segment 176849
is corrupt.
343808 file records processed.
File verification completed.
3390 large file records processed.
Errors found. CHKDSK cannot continue in read-only mode.
C:\Windows\system32>
Well, actually HP notebook BIOSes offer some diagnostics a user can run: HDD test, memory check, battery check; that's all.
You can also download and use HD Tune (freeware). But the hard drive test you can run from the BIOS also works very well. I suggest that one, because when it detects something, HP won't oppose your repair request.
Click on Kudos if I helped you. -
Df error for lofs file system in local zone.
I have a zone running an Oracle DB instance. We have exported the SAN file systems from the global zone as follows:
fs:
dir: /oradb
special: /oradb
raw not specified
type: lofs
options: []
from global zone
#df -h | grep oradb
/dev/dsk/emcpower174c 17G 5.1G 11G 31% /oradb/archa
/dev/dsk/emcpower177c 58G 3.3G 54G 6% /oradb/index1
/dev/dsk/emcpower172c 9.9G 610M 9.2G 7% /oradb/redob
/dev/dsk/emcpower176c 58G 30G 27G 53% /oradb/index2
/dev/dsk/emcpower180c 58G 35G 23G 61% /oradb/data1
The problem is that from the local zone, if I cd to /oradb/data1 and then run df -h ., I get the following error. Is there any way to get the usage (df output) for a lofs file system from within the local zone itself?
local_zone# df -h .
df: Could not find mount point for .
local_zone # pwd
/orad/data1
local_zone# df -h /oradbdb/data1
df: Could not find mount point for /oradbdb/data1
local_zone#
Do you have a mount point within the zone path for these oradb mounts?
You have to add a directory entry for /oradb in the zone's root directory, i.e. <zone_path>/root/oradb
Best practices for ZFS file systems when using live upgrade?
I would like feedback on how to lay out ZFS file systems to deal with files that change constantly during the Live Upgrade process. For the rest of this post, let's assume I am building a very active FreeRADIUS server with log files that are constantly updated and must be preserved in any boot environment during the LU process.
Here is the ZFS layout I have come up with (swap, home, etc. omitted):
NAME USED AVAIL REFER MOUNTPOINT
rpool 11.0G 52.0G 94K /rpool
rpool/ROOT 4.80G 52.0G 18K legacy
rpool/ROOT/boot1 4.80G 52.0G 4.28G /
rpool/ROOT/boot1/zones-root 534M 52.0G 20K /zones-root
rpool/ROOT/boot1/zones-root/zone1 534M 52.0G 534M /zones-root/zone1
rpool/zone-data 37K 52.0G 19K /zones-data
rpool/zone-data/zone1-runtime 18K 52.0G 18K /zones-data/zone1-runtime
There are two key components here:
1) The ROOT file system - This stores the / file systems of the local and global zones.
2) The zone-data file system - This stores the data that will be changing within the local zones.
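Under those assumptions, the split could be created with ZFS commands along these lines (dataset names are taken from the listing above; this is a sketch, not verified against a live system):

```
# Data that must survive Live Upgrade lives outside rpool/ROOT:
zfs create -o mountpoint=/zones-data rpool/zone-data
zfs create rpool/zone-data/zone1-runtime   # inherits /zones-data/zone1-runtime
```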
Here is the configuration for the zone itself:
<zone name="zone1" zonepath="/zones-root/zone1" autoboot="true" bootargs="-m verbose">
<inherited-pkg-dir directory="/lib"/>
<inherited-pkg-dir directory="/platform"/>
<inherited-pkg-dir directory="/sbin"/>
<inherited-pkg-dir directory="/usr"/>
<filesystem special="/zones-data/zone1-runtime" directory="/runtime" type="lofs"/>
<network address="192.168.0.1" physical="e1000g0"/>
</zone>
The key components here are:
1) The local zone / is shared in the same file system as global zone /
2) The /runtime file system in the local zone is stored outside of the global rpool/ROOT file system in order to maintain data that changes across the live upgrade boot environments.
The system (local and global zone) will operate like this:
The global zone is used to manage zones only.
Application software that has constantly changing data will be installed in the /runtime directory within the local zone. For example, FreeRadius will be installed in: /runtime/freeradius
During a live upgrade the / file system in both the local and global zones will get updated, while /runtime is mounted untouched in whatever boot environment that is loaded.
Does this make sense? Is there a better way to accomplish what I am looking for? Is this setup going to cause any problems?
What I would really like is to not have to worry about any of this and just install the application software wherever the software supplier defaults to. It would be great if the system somehow magically knew to leave my changing data alone across boot environments.
Thanks in advance for your feedback!
--Jason
Hello "jemurray".
Have you read this document? (page 198)
http://docs.sun.com/app/docs/doc/820-7013?l=en
Then the solution is:
01.- Create an alternate boot environment
a.- In a new rpool
b.- In the same rpool
02.- Upgrade this new environment
03.- I've seen that you have the "radius" zone as a sparse zone (is that right?), so when you upgrade the alternate boot environment you will upgrade that zone at the same time.
This may sound easy, but you should be careful; please try it in a development environment first.
Good luck -
New zone and inherited file system mount point error
Hi - would anyone be able to help with the following error please. I've tried to create a new zone that has the following inherited file system:
inherit-pkg-dir:
dir: /usr/local/var/lib/sudo
But when I try to install it fails with:
root@tdukunxtest03:~ 532$ zoneadm -z tdukwbprepz01 install
A ZFS file system has been created for this zone.
Preparing to install zone <tdukwbprepz01>.
ERROR: cannot create zone <tdukwbprepz01> inherited file system mount point </export/zones/tdukwbprepz01/root/usr/local/var/lib>
ERROR: cannot setup zone <tdukwbprepz01> inherited and configured file systems
ERROR: cannot setup zone <tdukwbprepz01> file systems inherited and configured from the global zone
ERROR: cannot create zone boot environment <tdukwbprepz01>
I added this because, unknown to me, when I installed sudo from Sunfreeware in the global zone, it required access to /usr/local/var/lib/sudo (sudo itself installs in /usr/local). And when I tried to run any sudo command in the new zone, it gave this:
sudo ls
Password:
sudo: Can't open /usr/local/var/lib/sudo/tdgrunj/8: Read-only file system
Thanks - Julian.
I think I've just found the answer to my problem: I'd already inherited /usr, and as sudo from Sunfreeware installs in /usr/local, I guess this is never going to work. I can only think to try the sudo version from the Solaris Companion DVD, or whatever it's called.
-
How to check if a file system in the global zone is shared with zones
Hello,
Would you by any chance know how to detect whether a file system in the global zone is shared with any zones created from it (either read-only or read-write)?
Thanks in advance.
Regards
Frank
Usually they're shared via lofs, so try 'df -kZ -F lofs'.
Darren -
Hi,
We are implementing HA NW2004s; the SCS is going to be in an active-passive cluster. Suppose we have four systems:
Host A -- SCS (active)/DB (active)
Host B -- SCS (inactive)/DB (active)
Host C -- CI
Host D -- DI
Please find the <a href="https://weblogs.sdn.sap.com/weblogs/images/251750294/SAP.JPG">image</a> attached. Based on the diagram above, I have the following questions:
1) What are all the file systems?
2) Which of them are soft links?
3) Which file systems are local, and which are shared?
Also, can someone suggest a cluster solution that can be used to cluster the SCS instance in SAP NW2004s Java?
The CI and DI instances require the files available in /sapmnt. If /sapmnt fails on Host A, how does failover to Host B ensure that the /sapmnt content is available for CI and DI?
Can anyone please provide some answers to these questions?
Hello,
Here is the rough idea for HA from my experience.
The important thing is that the binaries required to run SAP
are to be accessible before and after switchover.
In that respect, the choice of file system doesn't matter,
but SAP may recommend certain file systems on Linux;
please refer to the SAP installation guide.
I always use ReiserFS or ext3.
For soft links, I recommend referring to the SAP installation guide.
In your configuration, the files related to SCS and the DB are the key.
Again, those files must be accessible from both Host A and Host B.
The easiest way is to share these files via NFS or another shared file system,
so that both nodes can access them,
and let the clustering software mount and unmount those directories.
DB binaries, data and log are to be placed in shared storage subsystem.
(ex. /oracle/*)
SAP binaries, profiles and so on to be placed in shared storage as well.
(ex. /sapmnt/*)
You may want to place the binaries on local disk to make sure they are
always accessible at the OS level, even if the connection to the storage
subsystem is lost.
In that case you have to sync the binaries on both nodes manually.
The easiest way is just to put them on shared storage and mount them!
Furthermore, you can use the sapcpe mechanism to sync the necessary binaries
from /sapmnt to /usr/sap/<SID>.
As for your last question: /sapmnt should be located on the storage subsystem;
just don't let the storage go down!
Hi,
Has anyone tried to mount an NFS share into a zone? I am unable to mount it via lofs or manually.
root@NFS_SERVER # share -F nfs -o rw /stage
root@LDOM1 # mount -F nfs -o rw NFS_SERVER:/stage /stage
Then I try to mount /stage into the local zone, which doesn't seem to work:
root@LDOM1 # zoneadm -z localzone boot
cannot verify fs /stage: NFS mounted file-system.
A local file-system must be used.
zoneadm: zone localzone failed to verify
root@LDOM1 #
Thanks.
I'm not sure what you're trying to do. It sounds like you're trying to mount an NFS filesystem in the global zone and then reference it via the zone configuration as though it were a local filesystem. If so, I can't comment on whether that will work, because I have no experience with that approach. What I have done successfully is to mount NFS filesystems via /etc/vfstab or the automounter inside the zone. That works just fine, and it is done using exactly the same procedures you'd use to mount NFS filesystems in the global zone.
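For the /etc/vfstab route, an entry inside the zone might look like the following (server and path are taken from the example above; the layout follows the standard vfstab columns, and the options are illustrative):

```
# device to mount   device to fsck  mount point  FS type  fsck pass  mount at boot  options
NFS_SERVER:/stage   -               /stage       nfs      -          yes            rw
```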
-
Does anyone know of any tools (SAP or third-party) to protect SAP log files (system logs, security audit logs, etc.) from alteration by an authorized user (e.g., someone with SAP_ALL)? We are looking for an audit-friendly method to protect log files such that someone with SAP_ALL privileges (via Firefighter or special SAP userid (DDIC, SAP*)) can't perform actions and then cover up their tracks by deleting log entries etc. For example, we're wondering if any tools exist that enable the automatic export of the log files to a protected area (that's inaccessible to users with SAP privileges)? We'd certainly appreciate any advice or insight as to how to resolve this issue.
Regards,
Gail
For anyone who is interested, I wanted to pass along what we did (this was in response to an audit finding):
First, SAP_ALL access is restricted to monitored Firefighter accounts (we already had that in place). Recognizing that users with SAP_ALL and super-user access at the UNIX level (i.e., our Basis Team) can still circumvent pretty much any measure we take (e.g., can disable alerts in CCMS, delete batch jobs, deactivate Security Audit Log filters, delete Security Audit Log files, etc.), at least the actions would be captured via FF (although they could disable that as well) or other utilities at the UNIX level. And the more things the person has to disable/deactivate, the more likely it becomes that someone would notice that something was amiss.
Our company was already using SPLUNK to capture logs from other (non-SAP) systems, so we decided to leverage that to capture and retain certain SAP Security Audit Log entries. We created a batch job in SAP that runs a custom program at 5-minute intervals to extract records from the Security Audit Log files into a UNIX file (the program includes some logic that uses timestamps in the UNIX file to determine which records to extract). The UNIX file is monitored by the UNIX tail -f command, which is spawned by a Perl script. The output from tail -f is then piped to a file on a central syslog server via the logger command in the script. Finally, a SPLUNK process, which monitors syslog entries, extracts the information into SPLUNK in real time.
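The forwarding step of that pipeline can be sketched as follows. This is a local simulation with made-up record text: on the real server the input comes from tail -f on the extract file, and each line goes to logger(1) (and on to syslog/SPLUNK) rather than a local file.

```shell
AUDIT_FILE=/tmp/sap_audit_extract.log
FORWARDED=/tmp/sap_audit_forwarded.log

# Two fake audit records standing in for the 5-minute extract
printf 'AU1 logon failed for DDIC\nAU2 audit filter deactivated\n' > "$AUDIT_FILE"

: > "$FORWARDED"
while read -r line; do
    # On the real server: logger -p local0.info -t sap_audit "$line"
    echo "sap_audit: $line" >> "$FORWARDED"
done < "$AUDIT_FILE"

cat "$FORWARDED"
```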
This process is not bulletproof, as our Basis Team (with SU privileges at the UNIX level) could disable the Perl script or delete/change entries within the UNIX file. All we can really do is make it difficult for them to cover their tracks...
Gail