ASM + RAW
We have the following MICCP configuration:
* Two IBM AIX machines with 10gR2 RAC
* ASM with raw devices
* We are backing up using RMAN to an ASM disk group
Concerns with this configuration:
** We have not implemented tape backup.
** With our limited knowledge, we believe we need to convert the database to a cooked (conventional) filesystem to back it up to tape.
We want Oracle to give us some best practices for ASM+RAW covering:
* Backups to tape
* Cloning
* Shadow backups
* DR
* Storage migration path (faster migration from one storage array to another)
Reason:
* As of now, we are not sure about using ASM+RAW for future projects.
With our limited knowledge, what we know is, we need to convert the database to COOKED
You don't need to convert your database to any filesystem in order to take tape backups. If your MML (media management layer) can be integrated with RMAN - meaning you use the Oracle tape backup agent provided by your media manager vendor - then you can use RMAN to back up a database sitting on RAW + ASM to tape by allocating SBT channels. Several tape backup solutions are available on the market, and now you can also use Oracle's own OSB (Oracle Secure Backup) for this. Read the Backup & Recovery guide for more on RMAN and SBT:
http://download-east.oracle.com/docs/cd/B19306_01/backup.102/b14191/toc.htm
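As a sketch of what allocating an SBT channel looks like (the channel name and the PARMS ENV string are purely illustrative and vendor-specific; your media manager documentation gives the real values):

```text
RUN {
  # PARMS comes from your MML vendor; this ENV value is illustrative only
  ALLOCATE CHANNEL t1 DEVICE TYPE sbt
    PARMS 'ENV=(NB_ORA_POLICY=oracle_db)';
  BACKUP DATABASE PLUS ARCHIVELOG;
  RELEASE CHANNEL t1;
}
```

The database files stay on ASM/raw throughout; RMAN reads them through the Oracle kernel and streams the backup pieces straight to the media manager.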
Daljit Singh
Similar Messages
-
Question(s) related to ASM, Raw devices and performance
Good morning,
I was recently getting acquainted with ASM. Since I was doing this in "play" VM boxes, it was not possible to draw any conclusions about any performance improvements.
I'd like to know what performance improvements forum members may have experienced after migrating a production environment from a non-ASM setup to an ASM setup.
It would seem that, since ASM is a "form" of raw device access, the increase in performance should be noticeable. Was that the case for those who migrated to it? Was the performance improvement worth it, or did it only make managing the database easier?
Thank you for your contributions on the subject,
John.
ASM uses disk groups to store datafiles; an ASM disk group is a collection of disks that ASM manages as a unit. Within a disk group, ASM exposes a file system interface for Oracle database files. The content of files stored in a disk group is evenly distributed, or striped, to eliminate hot spots and to provide uniform performance across the disks. The performance is comparable to that of raw devices.
You can add or remove disks from a disk group while a database continues to access files from the disk group. When you add or remove disks from a disk group, ASM automatically redistributes the file contents and eliminates the need for downtime when redistributing the content.
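The online add/drop workflow described above can be sketched in SQL*Plus (the disk group and disk names here are made up for illustration):

```sql
-- Add a disk; ASM rebalances extents online, with no downtime
ALTER DISKGROUP data ADD DISK '/dev/raw/raw8' NAME data_0008;

-- Drop a disk; ASM first migrates its extents onto the remaining disks
ALTER DISKGROUP data DROP DISK data_0003;

-- Watch the rebalance progress
SELECT operation, state, est_minutes FROM v$asm_operation;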
I hope the links below will help you:
http://download.oracle.com/docs/cd/B28359_01/server.111/b31107/asmcon.htm
http://oracleinstance.blogspot.com/2009/12/rac-file-system-options-basic-concept.html
http://www.dbasupport.com/oracle/ora10g/ASM01.shtml
also you will get more information from book:
Oracle Automatic Storage Management: Under-the-Hood & Practical Deployment Guide (Osborne ORACLE Press Series)
Nitin Vengurlekar (Author), Murali Vallath (Author), Rich Long (Author)
http://www.amazon.com/Oracle-Automatic-Storage-Management-Under/dp/0071496076 -
11510 to RAC ASM RAW Conversion
Hi,
We are using 11.5.10 in a multi-node environment, and the customer has now asked me to convert it to RAC with ASM on raw devices. In my present architecture, the database (10gR1) runs on HP Itanium and the application (11.5.10) runs on HP Tru64. Please point me to any good note for implementing this. I have an RMAN cold backup of my database.
Thanks in Advance,
Panneer.
Metalink Note 220970.1 could be a starting point; see the section "What is the optimal migration path to be used while migrating the E-Business Suite to RAC?"
C. -
How to convert files on raw devices in ASM to a non-ASM file system
Hi all,
I have a problem.
Is it possible to migrate ASM raw files to a non-ASM file system?
If possible, please describe how.
If not, please tell me why not.
Thanks in advance
Regards
Krishna
Hi,
I totally agree with Mahir. And I just want to share one thing:
Use %U to generate guaranteed unique names:
For backupsets, %U means: %u_%p_%c
For an image copy of a datafile, %U means: data-D-%d_id-%I_TS-%N_FNO-%f_%u
For an image copy of an archived redolog, %U means: arch-D_%d-id-%I_S-%e_T-%h_A-%a_%u
For an image copy of a control file, %U means: cf-D_%d-id-%I_%u
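For example (the +FRA disk group name is illustrative), %U is typically used in FORMAT clauses like:

```text
# Backupset pieces get guaranteed-unique %u_%p_%c names
BACKUP AS BACKUPSET DATABASE FORMAT '+FRA/%U';

# Image copy of one datafile; %U expands to the data-D-..._FNO-... form
BACKUP AS COPY DATAFILE 4 FORMAT '+FRA/%U';
```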
Thank you -
Installing Oracle Database with ASM on Oracle VM for SPARC
We're installing Solaris 11 and Oracle VM for SPARC so we can install Oracle Database with ASM. There is a requirement when creating the database that the raw disk have the same owner as the database owner. Every time we try to change the owner, it still shows that the owner is root.
Any ideas?
Hi,
Please let me know where you are allocating the ASM raw disks for the guest domain from.
I hope you are changing the disk permissions using chown -R.
Also confirm the permissions using the command # ls -lL /dev/rdsk
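As a tiny illustration of why the -L flag matters here (using throwaway files in /tmp rather than real device links):

```shell
#!/bin/sh
# Demo: 'ls -lL' dereferences a symlink and reports the permissions of
# the target, which is what you want for /dev/rdsk device links.
mkdir -p /tmp/lsdemo && cd /tmp/lsdemo
touch target
chmod 660 target            # mimic a raw-device mode
ln -sf target link          # mimic a /dev/rdsk symlink
ls -l  link                 # shows the link itself (mode lrwxrwxrwx)
ls -lL link                 # shows the target's mode (-rw-rw----)
```

Without -L you only see the symlink's own (always wide-open) mode, which can hide a wrong owner on the underlying device node.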
Regards
AB -
Raw device owners change after reboot the server
The raw device owners change after rebooting the server. I have to adjust them manually like:
chown oracle:oinstall /dev/raw/raw*
Any idea how to make this permanent across server bounces?
My OS is RHEL4 & RDBMS 10.2.0.1.
I got my answer.
New to Linux. Need suggestions...
How can I create a new file? I want to create the file oracle.permissions.
Should I use
touch <filename> or some other command?
Second, I want to put in these entries for raw devices 3, 6, 7, 10, 11, etc.
Should it work if placed like this in the permissions directory?
# ASM
raw/raw[3671011]:oracle:dba:0660 -
A basic question: Is it a MUST to have ASM installed to implement RAC in 10gR2? Please explain.
Thanks.
For Linux, you can use both OCFS and ASM.
For the OCR and voting disks you can use OCFS, and for the datafiles you can use ASM.
But ASM is the better option, as it has more features.
http://decipherinfosys.wordpress.com/2007/11/12/ocfs-asm-raw-devices-and-regular-filesystem/
Regards
Rajesh -
100% CPU Usage Overhead running EM DBConsole 11g on OEL-5.2
After upgrading to OEL-5.2 and relinking all Oracle binaries, my old Oracle 11g installation, installed several months earlier on OEL-5.1, had been working well, including Enterprise Manager Database Console working nicely as always with respectable performance. Unfortunately, it lasted just a few days.
Yesterday I decided to uninstall 11g completely and perform a new clean installation (software and database) with the same configuration options and settings as before, including EM dbconsole, all configured using dbca. After the installation completed (EM was started automatically by dbca), oracle continued to consume 80-85% CPU time. Within a few more minutes CPU utilization rose to 99%, due to a single client process (always the same PID) - "oracleorcl (LOCAL=NO)". For the first ten minutes I didn't worry too much, since I always enable Automatic Management in dbca. But after two hours I started to worry. The process was still running, consuming a sustained 99% of CPU power. No other system activity, no database activity, no disk activity at all!
I was really puzzled, since I had installed and reinstalled 11g at least 20 times on OEL-5.0 and 5.1, experimenting with ASM, raw devices, loopback devices and various combinations of installation options, but had never experienced such behaviour. It took me 3 minutes to log in to EM dbconsole, as it was almost unusable, performing far too slowly. After three hours the CPU temperature was nearly 60 degrees celsius. I decided to shut down EM, and after that everything became quiet. Oracle was running normally. When I started EM again, the problem was back. With tracing enabled, it filled a 350 MB trace file in just 20 minutes. Reinstalling the software and database once again didn't help. Whenever EM is up, the 99% CPU usage overhead persists.
Here is a roughly 23-minute session summary report taken from EM dbconsole's Performance page. The trace file is too big to list here, but it shows the same.
Host CPU: 100%
Active Sessions: 100%
The details for the selected 5-minute interval (the last one) are shown as follows:
TOP SESSIONS: SYSMAN, Program: OMS
Activity: 100%
TOP MODULES: OEM.CacheModeWaitPool, Service: orcl
Activity: 100%
TOP CLIENT: Unnamed
Activity: 99.1%
TOP ACTIONS: Unnamed (OEM.CacheModeWaitPool) (orcl)
Activity: 100%
TOP OBJECTS: SYSMAN.MGMT_JOB_EXEC_SUMMARY (Table)
Activity: 100%
TOP PL/SQL: SYSMAN.MGMT_JOB_ENGINE.INSERT_EXECUTION
PL/SQL Source: SYSMAN.MGMT_JOB_ENGINE
Line Number: 7135
Activity: 100%
TOP SQL: SELECT EXECUTION_ID, STATUS, STATUS_DETAIL FROM MGMT_JOB_EXEC_SUMMARY
WHERE JOB_ID = :B3 AND TARGET_LIST_INDEX = :B2 AND EXPECTED_START_TIME = :B1;
Activity: 100%
STATISTICS SUMMARY
cca 23 minutes session
with no other system activity
                     Total        Per Execution   Per Row
Executions           105,103      1               10,510.30
Elapsed Time (sec)   1,358.95     0.01            135.90
CPU Time (sec)       1,070.42     0.01            107.04
Buffer Gets          85,585,518   814.30          8,558,551.80
Disk Reads           2            <0.01           0.20
Direct Writes        0            0.00            0.00
Rows                 10           <0.01           1
Fetches              105,103      1.00            10,510.30
----------------------------------------
Wow!!! Note: no disk, no database activity!
Has anyone experienced this or similar behaviour after a clean 11g installation on OEL-5.2? If not, does anyone have a clue what the hell is going on?
Thanks in advance.
Hi Tommy,
I didn't want to experiment further with the already working OEL-5.2, oracle and dbconsole on this machine, especially not after googling the problem and finding out that I am not alone in this world. There are two other threads on the OTN forums (Database General) showing the same problem, even on 2GB machines:
DBConsole eating a CPU
11g stuck. 50-100% CPU after fresh install
So, I took another, smaller machine I have free at home (1GB RAM, 2.2GHz Pentium 4, three 80GB disks), which I use to experiment with new software releases (this is the machine on which I installed 11g for the first time when it was released, on OEL-5.0, and I can recall that everything was OK with EM). This is what I did:
1. I installed OEL-5.0 on the machine, adjusted the linux and kernel parameters, and performed a full 11g installation. Database and EM dbconsole worked nicely with acceptable performance. Without activity in the database, %CPU = zero !!! The whole system was perfectly quiet.
2. Since everything was OK, I shut down EM and oracle, and performed the full upgrade to OEL-5.2. When the upgrade finished, I restarted the system, relinked all oracle binaries, and started oracle and EM dbconsole. Both worked perfectly again, just as before the upgrade. I repeated restarting the database and dbconsole several times, always with the same result - it really rocks. Without database activity, %CPU = zero.
3. Using dbca, I dropped the database and created a new one with the same configuration options. Wow! I was in trouble again. Half an hour after the creation of the database, %CPU rose to 99%. That's it.
The crucial question here is: what is there in OEL-5.2, not present in 5.0, that causes the dbca/em scripts to stumble at the time of EM agent configuration?
Here are the outputs you requested, taken 30 minutes after starting the database and EM dbconsole (sustained 99% CPU utilization). Note that this is just a 1GB machine.
Kernel command line: ro root=LABEL=/ elevator=deadline rhgb quiet
[root@localhost ~]# cat /proc/meminfo
MemTotal: 1034576 kB
MemFree: 27356 kB
Buffers: 8388 kB
Cached: 609660 kB
SwapCached: 18628 kB
Active: 675376 kB
Inactive: 287072 kB
HighTotal: 130304 kB
HighFree: 260 kB
LowTotal: 904272 kB
LowFree: 27096 kB
SwapTotal: 3148700 kB
SwapFree: 2940636 kB
Dirty: 72 kB
Writeback: 0 kB
AnonPages: 328700 kB
Mapped: 271316 kB
Slab: 21136 kB
PageTables: 14196 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 3665988 kB
Committed_AS: 1187464 kB
VmallocTotal: 114680 kB
VmallocUsed: 5860 kB
VmallocChunk: 108476 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 4096 kB
[root@localhost ~]# cat /proc/slabinfo
slabinfo - version: 2.1
# name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
rpc_buffers 8 8 2048 2 1 : tunables 24 12 8 : slabdata 4 4 0
rpc_tasks 8 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
rpc_inode_cache 6 7 512 7 1 : tunables 54 27 8 : slabdata 1 1 0
ip_conntrack_expect 0 0 96 40 1 : tunables 120 60 8 : slabdata 0 0 0
ip_conntrack 68 68 228 17 1 : tunables 120 60 8 : slabdata 4 4 0
ip_fib_alias 7 113 32 113 1 : tunables 120 60 8 : slabdata 1 1 0
ip_fib_hash 7 113 32 113 1 : tunables 120 60 8 : slabdata 1 1 0
fib6_nodes 22 113 32 113 1 : tunables 120 60 8 : slabdata 1 1 0
ip6_dst_cache 13 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
ndisc_cache 1 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
RAWv6 4 5 768 5 1 : tunables 54 27 8 : slabdata 1 1 0
UDPv6 9 12 640 6 1 : tunables 54 27 8 : slabdata 2 2 0
tw_sock_TCPv6 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
request_sock_TCPv6 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
TCPv6 1 3 1280 3 1 : tunables 24 12 8 : slabdata 1 1 0
jbd_1k 0 0 1024 4 1 : tunables 54 27 8 : slabdata 0 0 0
dm_mpath 0 0 28 127 1 : tunables 120 60 8 : slabdata 0 0 0
dm_uevent 0 0 2460 3 2 : tunables 24 12 8 : slabdata 0 0 0
dm_tio 0 0 16 203 1 : tunables 120 60 8 : slabdata 0 0 0
dm_io 0 0 20 169 1 : tunables 120 60 8 : slabdata 0 0 0
jbd_4k 1 1 4096 1 1 : tunables 24 12 8 : slabdata 1 1 0
scsi_cmd_cache 10 10 384 10 1 : tunables 54 27 8 : slabdata 1 1 0
sgpool-128 36 36 2048 2 1 : tunables 24 12 8 : slabdata 18 18 0
sgpool-64 33 36 1024 4 1 : tunables 54 27 8 : slabdata 9 9 0
sgpool-32 34 40 512 8 1 : tunables 54 27 8 : slabdata 5 5 0
sgpool-16 35 45 256 15 1 : tunables 120 60 8 : slabdata 3 3 0
sgpool-8 60 60 128 30 1 : tunables 120 60 8 : slabdata 2 2 0
scsi_io_context 0 0 104 37 1 : tunables 120 60 8 : slabdata 0 0 0
ext3_inode_cache 4376 8216 492 8 1 : tunables 54 27 8 : slabdata 1027 1027 0
ext3_xattr 165 234 48 78 1 : tunables 120 60 8 : slabdata 3 3 0
journal_handle 8 169 20 169 1 : tunables 120 60 8 : slabdata 1 1 0
journal_head 684 1008 52 72 1 : tunables 120 60 8 : slabdata 14 14 0
revoke_table 18 254 12 254 1 : tunables 120 60 8 : slabdata 1 1 0
revoke_record 0 0 16 203 1 : tunables 120 60 8 : slabdata 0 0 0
uhci_urb_priv 0 0 28 127 1 : tunables 120 60 8 : slabdata 0 0 0
UNIX 56 112 512 7 1 : tunables 54 27 8 : slabdata 16 16 0
flow_cache 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
cfq_ioc_pool 0 0 92 42 1 : tunables 120 60 8 : slabdata 0 0 0
cfq_pool 0 0 96 40 1 : tunables 120 60 8 : slabdata 0 0 0
crq_pool 0 0 44 84 1 : tunables 120 60 8 : slabdata 0 0 0
deadline_drq 140 252 44 84 1 : tunables 120 60 8 : slabdata 3 3 0
as_arq 0 0 56 67 1 : tunables 120 60 8 : slabdata 0 0 0
mqueue_inode_cache 1 6 640 6 1 : tunables 54 27 8 : slabdata 1 1 0
isofs_inode_cache 0 0 368 10 1 : tunables 54 27 8 : slabdata 0 0 0
hugetlbfs_inode_cache 1 11 340 11 1 : tunables 54 27 8 : slabdata 1 1 0
ext2_inode_cache 0 0 476 8 1 : tunables 54 27 8 : slabdata 0 0 0
ext2_xattr 0 0 48 78 1 : tunables 120 60 8 : slabdata 0 0 0
dnotify_cache 2 169 20 169 1 : tunables 120 60 8 : slabdata 1 1 0
dquot 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
eventpoll_pwq 1 101 36 101 1 : tunables 120 60 8 : slabdata 1 1 0
eventpoll_epi 1 30 128 30 1 : tunables 120 60 8 : slabdata 1 1 0
inotify_event_cache 1 127 28 127 1 : tunables 120 60 8 : slabdata 1 1 0
inotify_watch_cache 23 92 40 92 1 : tunables 120 60 8 : slabdata 1 1 0
kioctx 135 135 256 15 1 : tunables 120 60 8 : slabdata 9 9 0
kiocb 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
fasync_cache 0 0 16 203 1 : tunables 120 60 8 : slabdata 0 0 0
shmem_inode_cache 553 585 436 9 1 : tunables 54 27 8 : slabdata 65 65 0
posix_timers_cache 0 0 88 44 1 : tunables 120 60 8 : slabdata 0 0 0
uid_cache 5 59 64 59 1 : tunables 120 60 8 : slabdata 1 1 0
ip_mrt_cache 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
tcp_bind_bucket 32 203 16 203 1 : tunables 120 60 8 : slabdata 1 1 0
inet_peer_cache 1 59 64 59 1 : tunables 120 60 8 : slabdata 1 1 0
secpath_cache 0 0 32 113 1 : tunables 120 60 8 : slabdata 0 0 0
xfrm_dst_cache 0 0 384 10 1 : tunables 54 27 8 : slabdata 0 0 0
ip_dst_cache 6 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
arp_cache 2 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
RAW 2 7 512 7 1 : tunables 54 27 8 : slabdata 1 1 0
UDP 3 7 512 7 1 : tunables 54 27 8 : slabdata 1 1 0
tw_sock_TCP 3 30 128 30 1 : tunables 120 60 8 : slabdata 1 1 0
request_sock_TCP 4 30 128 30 1 : tunables 120 60 8 : slabdata 1 1 0
TCP 43 49 1152 7 2 : tunables 24 12 8 : slabdata 7 7 0
blkdev_ioc 3 127 28 127 1 : tunables 120 60 8 : slabdata 1 1 0
blkdev_queue 23 24 956 4 1 : tunables 54 27 8 : slabdata 6 6 0
blkdev_requests 137 161 172 23 1 : tunables 120 60 8 : slabdata 7 7 0
biovec-256 7 8 3072 2 2 : tunables 24 12 8 : slabdata 4 4 0
biovec-128 7 10 1536 5 2 : tunables 24 12 8 : slabdata 2 2 0
biovec-64 7 10 768 5 1 : tunables 54 27 8 : slabdata 2 2 0
biovec-16 7 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
biovec-4 8 59 64 59 1 : tunables 120 60 8 : slabdata 1 1 0
biovec-1 406 406 16 203 1 : tunables 120 60 8 : slabdata 2 2 300
bio 564 660 128 30 1 : tunables 120 60 8 : slabdata 21 22 204
utrace_engine_cache 0 0 32 113 1 : tunables 120 60 8 : slabdata 0 0 0
utrace_cache 0 0 32 113 1 : tunables 120 60 8 : slabdata 0 0 0
sock_inode_cache 149 230 384 10 1 : tunables 54 27 8 : slabdata 23 23 0
skbuff_fclone_cache 20 20 384 10 1 : tunables 54 27 8 : slabdata 2 2 0
skbuff_head_cache 86 210 256 15 1 : tunables 120 60 8 : slabdata 14 14 0
file_lock_cache 22 40 96 40 1 : tunables 120 60 8 : slabdata 1 1 0
Acpi-Operand 1147 1196 40 92 1 : tunables 120 60 8 : slabdata 13 13 0
Acpi-ParseExt 0 0 44 84 1 : tunables 120 60 8 : slabdata 0 0 0
Acpi-Parse 0 0 28 127 1 : tunables 120 60 8 : slabdata 0 0 0
Acpi-State 0 0 44 84 1 : tunables 120 60 8 : slabdata 0 0 0
Acpi-Namespace 615 676 20 169 1 : tunables 120 60 8 : slabdata 4 4 0
delayacct_cache 233 312 48 78 1 : tunables 120 60 8 : slabdata 4 4 0
taskstats_cache 12 53 72 53 1 : tunables 120 60 8 : slabdata 1 1 0
proc_inode_cache 622 693 356 11 1 : tunables 54 27 8 : slabdata 63 63 0
sigqueue 8 27 144 27 1 : tunables 120 60 8 : slabdata 1 1 0
radix_tree_node 6220 8134 276 14 1 : tunables 54 27 8 : slabdata 581 581 0
bdev_cache 37 42 512 7 1 : tunables 54 27 8 : slabdata 6 6 0
sysfs_dir_cache 4980 4992 48 78 1 : tunables 120 60 8 : slabdata 64 64 0
mnt_cache 36 60 128 30 1 : tunables 120 60 8 : slabdata 2 2 0
inode_cache 1113 1254 340 11 1 : tunables 54 27 8 : slabdata 114 114 81
dentry_cache 11442 18560 136 29 1 : tunables 120 60 8 : slabdata 640 640 180
filp 7607 10000 192 20 1 : tunables 120 60 8 : slabdata 500 500 120
names_cache 19 19 4096 1 1 : tunables 24 12 8 : slabdata 19 19 0
avc_node 14 72 52 72 1 : tunables 120 60 8 : slabdata 1 1 0
selinux_inode_security 814 1170 48 78 1 : tunables 120 60 8 : slabdata 15 15 0
key_jar 14 30 128 30 1 : tunables 120 60 8 : slabdata 1 1 0
idr_layer_cache 170 203 136 29 1 : tunables 120 60 8 : slabdata 7 7 0
buffer_head 38892 39024 52 72 1 : tunables 120 60 8 : slabdata 542 542 0
mm_struct 108 135 448 9 1 : tunables 54 27 8 : slabdata 15 15 0
vm_area_struct 11169 14904 84 46 1 : tunables 120 60 8 : slabdata 324 324 144
fs_cache 82 177 64 59 1 : tunables 120 60 8 : slabdata 3 3 0
files_cache 108 140 384 10 1 : tunables 54 27 8 : slabdata 14 14 0
signal_cache 142 171 448 9 1 : tunables 54 27 8 : slabdata 19 19 0
sighand_cache 127 135 1344 3 1 : tunables 24 12 8 : slabdata 45 45 0
task_struct 184 246 1360 3 1 : tunables 24 12 8 : slabdata 82 82 0
anon_vma 3313 5842 12 254 1 : tunables 120 60 8 : slabdata 23 23 0
pgd 84 84 4096 1 1 : tunables 24 12 8 : slabdata 84 84 0
pid 237 303 36 101 1 : tunables 120 60 8 : slabdata 3 3 0
size-131072(DMA) 0 0 131072 1 32 : tunables 8 4 0 : slabdata 0 0 0
size-131072 0 0 131072 1 32 : tunables 8 4 0 : slabdata 0 0 0
size-65536(DMA) 0 0 65536 1 16 : tunables 8 4 0 : slabdata 0 0 0
size-65536 2 2 65536 1 16 : tunables 8 4 0 : slabdata 2 2 0
size-32768(DMA) 0 0 32768 1 8 : tunables 8 4 0 : slabdata 0 0 0
size-32768 9 9 32768 1 8 : tunables 8 4 0 : slabdata 9 9 0
size-16384(DMA) 0 0 16384 1 4 : tunables 8 4 0 : slabdata 0 0 0
size-16384 6 6 16384 1 4 : tunables 8 4 0 : slabdata 6 6 0
size-8192(DMA) 0 0 8192 1 2 : tunables 8 4 0 : slabdata 0 0 0
size-8192 5 5 8192 1 2 : tunables 8 4 0 : slabdata 5 5 0
size-4096(DMA) 0 0 4096 1 1 : tunables 24 12 8 : slabdata 0 0 0
size-4096 205 205 4096 1 1 : tunables 24 12 8 : slabdata 205 205 0
size-2048(DMA) 0 0 2048 2 1 : tunables 24 12 8 : slabdata 0 0 0
size-2048 260 270 2048 2 1 : tunables 24 12 8 : slabdata 135 135 0
size-1024(DMA) 0 0 1024 4 1 : tunables 54 27 8 : slabdata 0 0 0
size-1024 204 204 1024 4 1 : tunables 54 27 8 : slabdata 51 51 0
size-512(DMA) 0 0 512 8 1 : tunables 54 27 8 : slabdata 0 0 0
size-512 367 464 512 8 1 : tunables 54 27 8 : slabdata 58 58 0
size-256(DMA) 0 0 256 15 1 : tunables 120 60 8 : slabdata 0 0 0
size-256 487 495 256 15 1 : tunables 120 60 8 : slabdata 33 33 0
size-128(DMA) 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
size-128 2242 2490 128 30 1 : tunables 120 60 8 : slabdata 83 83 0
size-64(DMA) 0 0 64 59 1 : tunables 120 60 8 : slabdata 0 0 0
size-32(DMA) 0 0 32 113 1 : tunables 120 60 8 : slabdata 0 0 0
size-64 1409 2950 64 59 1 : tunables 120 60 8 : slabdata 50 50 0
size-32 3596 3842 32 113 1 : tunables 120 60 8 : slabdata 34 34 0
kmem_cache 145 150 256 15 1 : tunables 120 60 8 : slabdata 10 10 0
[root@localhost ~]# slabtop -d 5
Active / Total Objects (% used) : 97257 / 113249 (85.9%)
Active / Total Slabs (% used) : 4488 / 4488 (100.0%)
Active / Total Caches (% used) : 101 / 146 (69.2%)
Active / Total Size (% used) : 15076.34K / 17587.55K (85.7%)
Minimum / Average / Maximum Object : 0.01K / 0.16K / 128.00K
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
25776 25764 99% 0.05K 358 72 1432K buffer_head
16146 15351 95% 0.08K 351 46 1404K vm_area_struct
15138 7779 51% 0.13K 522 29 2088K dentry_cache
9720 9106 93% 0.19K 486 20 1944K filp
7714 7032 91% 0.27K 551 14 2204K radix_tree_node
5070 5018 98% 0.05K 65 78 260K sysfs_dir_cache
4826 4766 98% 0.01K 19 254 76K anon_vma
4824 3406 70% 0.48K 603 8 2412K ext3_inode_cache
3842 3691 96% 0.03K 34 113 136K size-32
2190 2174 99% 0.12K 73 30 292K size-128
1711 1364 79% 0.06K 29 59 116K size-64
1210 1053 87% 0.33K 110 11 440K inode_cache
1196 1147 95% 0.04K 13 92 52K Acpi-Operand
1170 814 69% 0.05K 15 78 60K selinux_inode_security
936 414 44% 0.05K 13 72 52K journal_head
747 738 98% 0.43K 83 9 332K shmem_inode_cache
693 617 89% 0.35K 63 11 252K proc_inode_cache
676 615 90% 0.02K 4 169 16K Acpi-Namespace
609 136 22% 0.02K 3 203 12K biovec-1
495 493 99% 0.25K 33 15 132K size-256
480 384 80% 0.12K 16 30 64K bio
440 399 90% 0.50K 55 8 220K size-512
312 206 66% 0.05K 4 78 16K delayacct_cache
303 209 68% 0.04K 3 101 12K pid
290 290 100% 0.38K 29 10 116K sock_inode_cache
[root@localhost ~]# cat /etc/sysctl.conf
# Kernel sysctl configuration file for Red Hat Linux
# Controls IP packet forwarding
net.ipv4.ip_forward=0
# Controls source route verification
net.ipv4.conf.default.rp_filter=1
# Do not accept source routing
net.ipv4.conf.default.accept_source_route=0
# Oracle
net.ipv4.ip_local_port_range=1024 65000
net.core.rmem_default=4194304
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=262144
net.ipv4.tcp_rmem=4096 65536 4194304
net.ipv4.tcp_wmem=4096 65536 4194304
# Keepalive Oracle
net.ipv4.tcp_keepalive_time=3000
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_keepalive_probes=15
net.ipv4.tcp_retries2=3
net.ipv4.tcp_syn_retries=2
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
net.ipv4.tcp_window_scaling=0
# Oracle
fs.file-max = 6553600
fs.aio-max-nr=3145728
kernel.shmmni=4096
kernel.sem=250 32000 100 142
kernel.shmmax=2147483648
kernel.shmall=3279547
kernel.msgmnb=65536
kernel.msgmni=2878
kernel.msgmax=8192
kernel.exec-shield=0
# Controls the System Request debugging functionality of the kernel
kernel.sysrq=1
kernel.panic=60
kernel.core_uses_pid=1
[root@localhost ~]# free | grep Swap
Swap: 3148700 319916 2828784
[root@localhost ~]# cat /etc/fstab | grep "/dev/shm"
tmpfs /dev/shm tmpfs size=1024M 0 0
[root@localhost ~]# df | grep "/dev/shm"
tmpfs 1048576 452128 596448 44% /dev/shm
NON-DEFAULT DB PARAMETERS:
db_block_size 8192
memory_target 633339904 /* automatic memory management */
open_cursors 300
processes 256
disk_asynch_io TRUE
filesystemio_options SETALL -
Issues with setting appropriate ownership for file system
Hi All,
We are using the ACFS file system. For some of the mount points we change ownership as required in the rc.local file, so that all permissions remain intact when the server restarts. But the permissions are not being taken into account; I guess the ASM disks are mounted only after rc.local is executed. Is there anywhere else we can put scripts to change the ownership of the ACFS mount points, so that proper Unix permissions are set up when the disks are mounted?
Thanks & Regards,
Vikas Krishna
To configure raw devices if you are using Red Hat Enterprise Linux 4.0:
To confirm that raw devices are enabled, enter the following command:
# chkconfig --list
Scan the output for raw devices. If you do not find raw devices, then use the following command to enable the raw device service:
# chkconfig --level 345 rawdevices on
After you confirm that the raw devices service is running, you should change the default ownership of raw devices. When you restart a Red Hat Enterprise Linux 4.0 system, ownership and permissions on raw devices revert by default to the root user. If you are using raw devices with this operating system for your Oracle Clusterware files, then you need to override this default.
To ensure correct ownership of these devices when the operating system is restarted, create a new file in the /etc/udev/permissions.d directory, called oracle.permissions, and enter the raw device permissions information. Using the example device names discussed in step 5 of the previous section, the following is an example of the contents of /etc/udev/permissions.d/oracle.permissions:
# OCR
raw/raw[12]:root:oinstall:0640
# Voting Disks
raw/raw[3-5]:oracle:oinstall:0640
# ASM
raw/raw[67]:oracle:dba:0660
After creating the oracle.permissions file, the permissions on the raw devices are set automatically the next time the system is restarted. To set permissions to take effect immediately, without restarting the system, use the chown and chmod commands:
chown root:oinstall /dev/raw/raw[12]
chmod 640 /dev/raw/raw[12]
chown oracle:oinstall /dev/raw/raw[3-5]
chmod 640 /dev/raw/raw[3-5]
chown oracle:dba /dev/raw/raw[67]
chmod 660 /dev/raw/raw[67]
http://download.oracle.com/docs/cd/B19306_01/rac.102/b28759/preparing.htm#CHDGEEDC
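The same permissions file can be generated with a here-document. A runnable sketch (it writes to a scratch directory by default; on a real RHEL 4 system, run as root with PERMDIR pointed at /etc/udev/permissions.d):

```shell
#!/bin/sh
# Sketch: write the oracle.permissions file from the steps above.
# PERMDIR defaults to a scratch directory so this can be run unprivileged.
PERMDIR="${PERMDIR:-/tmp/udev-demo}"
mkdir -p "$PERMDIR"

cat > "$PERMDIR/oracle.permissions" <<'EOF'
# OCR
raw/raw[12]:root:oinstall:0640
# Voting Disks
raw/raw[3-5]:oracle:oinstall:0640
# ASM
raw/raw[67]:oracle:dba:0660
EOF

# Show what was written
cat "$PERMDIR/oracle.permissions"
```

The quoted 'EOF' delimiter keeps the bracket patterns from being expanded by the shell, so the file lands on disk exactly as udev expects it.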
Edited by: Babu Baskar on Apr 18, 2010 1:33 PM -
Sun Cluster.. Why?
What are the advantages of installing RAC 10.2.0.3 on a Sun Cluster.? Are there any benefits?
From Oracle 10g onward, there is no burning requirement for Sun Cluster (or any third-party cluster) as long as you are using all Oracle technologies for your Oracle RAC database. You can use Oracle RAC with ASM for shared storage, and that does not require any third-party cluster. Bear in mind that
You may need to install Sun Cluster in the following scenarios:
1) If there is an application running within the cluster alongside the Oracle RAC database that you want to configure for HA, Sun Cluster provides easy-to-use cluster resources to manage and monitor the application. This can also be achieved with Oracle Clusterware, but you will have to write your own cluster resource for that.
2) If you want to use a cluster file system such as QFS, then you will need to install Sun Cluster. If the cluster is only running the Oracle RAC database, then you can rely on Oracle technologies such as ASM or raw devices without installing Sun Cluster.
3) Any certification conflicts.
Any correction is welcome..
-Harish Kumar Kalra -
Oracle 9i Standalone to 10g RAC ??
We have Oracle 9i on Solaris 9 with about 2 TB of data/index on SAN. We want to migrate this to 2 new boxes running Solaris 10 and Oracle 10g RAC -- storage SAN. What should be the best approach considering:
1. Fastest copying method for data/index from existing system (Solaris 9/Oracle 9i -> Solaris 10/Oracle 10g -- all on separate machines).
2. No return to 9i; once on 10g, stay standalone and/or RAC.
3. If going from 10g standalone to RAC creates problems, stay on 10g standalone and try later.
4. If ASM creates any issues, stay on non-ASM/raw files and try later.
Thanks.
Hi Sairam,
Due to the size of our Production environments we have decided not to pursue the MCOD option.
E.g.: take a QA environment which consists of SRM and BW, where we have multiple nodes with both installed on each node. If I were to go the MCOD route, these instances could connect to one large Oracle file system using different schemas.
What I'm trying to establish is: in that one RAC, is it possible to have the multiple database instances on each node (SRM, BW, etc.) connect to multiple Oracle file systems, i.e. SRM connects to its own OFS and BW to its own, instead of to the one large shared OFS?
Hope this makes sense! Easier if I could upload a diagram!
Regards,
Chengappa -
Oracle Streams on a Rac Environment
Hi
I have some questions about setting up Streams in a RAC environment. I would appreciate a quick response as I need the answers by tomorrow. Any help would be greatly appreciated. Here are the questions:
1> Do we have to create a capture process for each active instance, or will only one capture process do?
2> If yes, does each one need a separate queue?
3> How will the apply process access multiple capture processes, and how will propagation take place?
4> Can only 2 tables in the source be replicated instead of the entire database?
5> If we use a push job and both the primary and secondary go down, how can we move to the third instance and use it?
6> If the instance goes down, do we have to restart the capture process again?
7> What is best suited for RAC with respect to Streams - ASM or raw files?
Regards
Shweta
Streams in a 9iR2 RAC environment mines only from archive logs, not online redo logs. This restriction is lifted in 10g RAC. If you choose to go the downstream capture route in 10g, then you can only mine from archive logs in 10gR1.
Having said the above here are my answers:
1> Do we have to create a capture process for each active instance, or will only one capture process do?
You can run multiple capture processes, each on a different instance in the RAC. Unless you have a requirement to do so, a single capture process should suffice. The in-memory queue should also be on the same instance the capture process is running from.
2> If yes, does each one need a separate queue?
YES
3> How will the apply process access multiple capture processes, and how will propagation take place?
Propagation is from a source queue to the destination queue. If the destination is a single instance database, then you can direct propagations for all of your capture(s) into a single apply queue. If the destination is also RAC then you can run multiple apply processes on each node and apply changes for specific set of tables. Maintenance would be something to think about here along with what happens when one node goes down.
4> can only 2 tables in the source be replicated instead of the entire database?
YES. Streams is flexible to let you decide what level you want to replicate.
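As a sketch of what table-level capture looks like (the queue, schema, and table names below are made up for illustration), rules for just the tables you want are added with DBMS_STREAMS_ADM rather than database-wide rules:

```sql
BEGIN
  -- Capture DML changes for one table instead of the whole database
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.employees',            -- illustrative
    streams_type => 'capture',
    streams_name => 'capture_stream',
    queue_name   => 'strmadmin.streams_queue', -- illustrative
    include_dml  => TRUE,
    include_ddl  => FALSE);
END;
/
```

Run the same call once per table you want replicated; everything else in the database is left alone.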
5> If we use a push job and both the primary and secondary go down, how can we move to the third instance and use it?
In theory propagation is a push job. There are certain things you need to configure correctly. If done, then you can move the entire streams configuration to any of the surviving node(s).
6> If the instance goes down do we have to restart the capture process once again?
In 9iR2 you have to restart the streams processes. In 10g the streams processes automatically migrate and restart at the new "owning" instance. In both versions, Queue ownership is transferred automatically to the surviving instance.
7> Which is best suited for RAC with respect to Streams: ASM or raw files?
Streams is independent of the storage system you use. I cannot think of any correlation here. -
Certification Questions about SAP financials
Hi,
I'm planning to enroll in the SAP Certified Application Associate - Financial Accounting module.
Please advise on the following and any additional details:
1. What are the prescribed books for this module?
2. What are the procedures to take certification exam?
3. What is the minimum passing score of this exam?
4. How many attempts at this exam are allowed?
5. Is there a time limit, in years, for finishing this certification?
6. How may I take practice tests through SAP.com?
Thanks in advance.
Amy
Hello Rachel,
please find my answers to your questions below.
Q) I've read SAP supports only OCFS2 and not ASM.
What is the reason?
ans:) Please find following link
http://decipherinfosys.wordpress.com/2007/11/12/ocfs-asm-raw-devices-and-regular-filesystem/
OCFS was created to enable Oracle Real Application Clusters (RAC) users to run the clustered database without having to deal with raw devices. The file system was designed to store database-related files, such as data files, control files, redo logs, and archive logs.
With OCFS2, one can store not only database-related files on a shared disk, but also Oracle binaries and configuration files (a shared Oracle home), making management of RAC even easier.
Q) I've read CRS and Oracle RDBMS must be installed on a shared CFS disk.
Is there a possibility to install them locally on all nodes of the cluster?
ans:) Oracle CRS software must be installed on shared CFS, as this is mandatory for CRS release 10.2; a second reason for CRS to be on shared CFS is so that it is visible/accessible to all the nodes.
Oracle RDBMS should also be installed on shared CFS so that it is visible/accessible to all nodes, including the SAP nodes.
Q)In different documentation, I saw the oracle directories recommanded by SAP were the following :
/oracle/<SAPSID>/102_64 for the $ORACLE_HOME and /oracle/<SAPSID>/ for the datafiles ...
Can we change these directories ?
ans:) Those directories are defined by SAP, and as the database creation is done by SAP during installation, SAP expects those directories.
Changing them is not advisable.
Hope I have answered your questions. -
Does NFS support Oracle RAC with TAF ?
Hi,
Does NFS support Oracle RAC with the TAF feature?
If yes, please point to some valid document to prove the same.
Regards
Sumit
TAF has nothing to do with the underlying storage architecture and is therefore 'supported' with your choice of ASM, raw devices, cluster file system, and NFS. NFS is supported on some platforms. For more details, check the Certify tab in Metalink. Some information can also be found here.
TAF will be available on any RAC installation, but not all drivers support it (jdbc-thin, for example)
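For reference, client-side TAF is configured purely in the tnsnames.ora entry, independent of storage; the hostnames and service name below are placeholders:

```
RACDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
      (LOAD_BALANCE = yes))
    (CONNECT_DATA =
      (SERVICE_NAME = racdb)
      (FAILOVER_MODE =
        (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 20)(DELAY = 5))))
```

With TYPE = SELECT, in-flight queries resume on a surviving node after a failover; METHOD = BASIC defers the backup connection until failover time.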
Bjoern -
We are planning to build RAC with Oracle 10gR2 on RedHat 4 Itanium.
I would like to know whether there are any particular issues when building RAC on Itanium.
Also, I would like to know whether ASM must be used when clustering with CRS only.
We intend to use raw devices, but are not sure yet.
Any advice would be appreciated.
From 10g onward, you can install just CRS without any third-party clusterware.
That said, people in the field seem uneasy about it, so CRS-only installations are not common.
ASM is optional; you may use it or not.
So far, ASM is not widely used.