Benefits of ASM vs. NAS

I'm looking for some good articles and/or opinions on the arguments of using ASM instead of (or, rather, on top of) a standard NAS solution. What benefits does ASM provide in addition to NAS? What are the drawbacks? I'm specifically looking for information on a single-instance implementation (I know the additional benefits when dealing with RAC).
From my experience, ASM makes the DBA's job a bit easier and may provide better striping for a database (vs. standard files), but it has drawbacks: administering yet another instance, not being able to maintain ASM files with standard command-line tools (like cp, dd, etc.), and having another layer of confusion when something goes wrong.
Thanks.

ASM is Oracle's storage management layer, designed specifically for serving Oracle databases.
When it comes to administration and management, in my opinion a DBA will have more control over, and responsibility for, ASM than over NAS (or SAN, or any centralized file server for that matter, where a system administrator has more control). What I mean is: for tuning, deciding on devices, operating system, stripe size, RAID type, creating LVMs and disk groups, backup, adding more space/scalability (dynamically), rebalancing disk groups, and directory/file names (OFA compliance), a DBA can do all of this without the intervention of a system (or network) administrator.
Control and Responsibility with regard to Administration and Management (a rough figure):
ASM: DBA 80% & SA 20%
NAS: DBA 30% & SA 70%
You can manage it with Oracle's native tools, for example:
- You can use Oracle Enterprise Manager to manage ASM.
- Use RMAN to back it up and recover.
On the other hand, ASM is more complex than NAS and carries more administrative overhead.
If ASM serves only a few Oracle databases/instances (clients), you might not find it advantageous; it becomes more beneficial when ASM serves more databases, or if you plan to increase the number of databases in the future.
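To make the tooling points above concrete, here is a minimal sketch (the +DATA path and database name ORCL are hypothetical); ASM files are browsed and backed up through Oracle tools rather than cp or dd:

```shell
# Browse ASM storage much like a filesystem (run as the ASM/Grid owner)
asmcmd ls +DATA/ORCL/DATAFILE

# Back up a database whose files live in ASM; RMAN is the supported
# file-level access path, since cp/dd cannot read ASM files directly
rman target / <<'EOF'
BACKUP DATABASE PLUS ARCHIVELOG;
EOF
```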

Similar Messages

  • Asm Benefits

Dear All Gurus,
I have a question to clarify a doubt: why do we need to implement ASM when we already get all of these features with RAID 10? Why should any organization implement ASM? Can you please elaborate on the benefits of ASM versus RAID levels? It would help me a lot.
Regards,
Merri

    If you currently have RAID10 then you can specify external redundancy when creating disk groups.
    Benefits of ASM are:
    You will have control of what disks are in what volume groups and there will be less reliance on System Administrators as compared to filesystems and raw devices.
    Automatic rebalancing when you add a disk.
    You will save on licensing if you plan to use clustered file systems. ASM is free !!
    Support from Oracle - one stop shop for any Oracle issues rather than relying on SA's or storage teams to troubleshoot issues.
    Makes administration easier, especially compared to raw devices.
    ASM can co-exist with filesystems or raw devices but I would find this pointless for the most part although there may be some exceptions.
    With 11gR2 you also have ACFS where your OCR, voting disk and binaries can sit on ACFS, although personally I would have binaries on filesystems on each host.
    Disadvantages:
    You can't backup using OS commands as can be done at filesystem level. You would need to use RMAN which could be considered an advantage in all honesty.
    I'm sure there are more of each.
    With regards to RAID level it really depends on what you want to achieve and how much you are willing to spend.
As a general rule, redo logs should be on your fastest disks.
Any data which will be accessed less frequently, or which will be archived, can sit on slower disks with a lower (cheaper) RAID level to save money. So you can have different disk groups on different types of RAID if this is part of your requirements.
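For example, the external-redundancy and tiering points could look like this (the disk paths and group names are made up for illustration):

```shell
# With hardware RAID10 underneath, EXTERNAL REDUNDANCY tells ASM not to
# mirror again; a second group on cheaper disks holds colder data.
sqlplus / as sysasm <<'EOF'
CREATE DISKGROUP fastdata EXTERNAL REDUNDANCY
  DISK '/dev/oracleasm/disks/FAST1', '/dev/oracleasm/disks/FAST2';

CREATE DISKGROUP archdata EXTERNAL REDUNDANCY
  DISK '/dev/oracleasm/disks/SLOW1', '/dev/oracleasm/disks/SLOW2';
EOF
```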

  • Which is better to install Oracle 11g database based on ASM or Filesystem

We will install 2 sets of Oracle 11.2.0.3 on Red Hat Linux 5.6 and configure Data Guard for them -- one will be a primary DB server, the other a physical standby DB server. The Oracle DB storage is based on a SAN disk array of 6TB. Now there are two options to manage the DB datafiles:
1. Install Oracle ASM
2. Create a traditional OS filesystem
Which is better? In the past, our 10g Data Guard environment was not based on Oracle ASM.
Some think that if we adopt Oracle ASM, the shortcomings are:
1. There is one more instance, which will consume more memory and resources.
2. The ASM file system cannot be seen at the OS level with commands such as "df", so the disk-utilization monitoring job will be more difficult; at least it cannot be supervised at the OS level.
3. The DB must have a daily incremental backup (Mon-Sat) to a local backup drive, and the backup job must be done by RMAN rather than a user-managed script.
Can anyone offer some advice? Thanks very much in advance.

user5969983 wrote:
We will install 2 sets of Oracle 11.2.0.3 on Red Hat Linux 5.6 and configure Data Guard for them -- one will be a primary DB server, the other a physical standby DB server. The Oracle DB storage is based on a SAN disk array of 6TB. Now there are two options to manage the DB datafiles:
1. Install Oracle ASM
2. Create a traditional OS filesystem
Which is better? In the past, our 10g Data Guard environment was not based on Oracle ASM.
ASM provides a host of new features in data management and performance -- to the extent that you can rip out the entire existing storage system and replace it with a brand new storage system, without a single second of database downtime.
Someone thinks that if we adopt Oracle ASM, the shortcomings are:
1. There is one more instance, which will consume more memory and resources.
Not really relevant on 64-bit hardware, which removes limitations such as 4GB of addressable memory. On the CPU side... heck, my game PC at home has an 8-core 64-bit CPU. Single-die and dual-core CPUs belong to the distant past. Arguing that an ASM instance has overhead would be silly, and it totally ignores the wide range of real and tangible benefits that ASM provides.
2. The ASM file system cannot be seen at the OS level with commands such as "df", so the disk-utilization monitoring job will be more difficult; at least it cannot be supervised at the OS level.
That is A Very Good Thing (tm). Managing database storage from the OS level is flawed in many ways.
3. The DB must have a daily incremental backup (Mon-Sat) to a local backup drive, and the backup job must be done by RMAN rather than a user-managed script.
RMAN supports ASM fully.
I have stopped using cooked file systems for Oracle -- I prefer ASM first and foremost. The only exceptions are tiny servers with a single root disk that must hold the kernel, database software, and database datafiles. (Currently these are mostly Oracle XE systems in my case, configured that way because XE does not support ASM; using XE is a pure cost decision.)
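On point 2, a sketch of how disk-group utilization is monitored instead of "df" (these are the standard ASM views and asmcmd command; formatting of the output is up to you):

```shell
# Summarize disk groups from the ASM instance
asmcmd lsdg

# Or query the same information via SQL
sqlplus -s / as sysasm <<'EOF'
SELECT name, total_mb, free_mb,
       ROUND(100 * free_mb / total_mb, 1) AS pct_free
FROM   v$asm_diskgroup;
EOF
```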

  • RAC with ASM and without ASM

    Hi all,
We are planning to install an 11g RAC instance, active/active, and we are using SAN storage with RAID 10.
I know ASM is a nice feature, but it needs more maintenance in the future -- this is what I see from the manuals and training, for patching and so on, because it is maintained as an instance.
Why do I need ASM when I have a SAN and can control mirroring, etc., there?
I need a solid answer here: why should I use this feature when it can already be covered by another facility like the SAN?
    Best Regards,

What I have found in a RAC world is that there is maintenance no matter which way you go. A cluster file system will require upgrades, patches, etc. RAW volumes will require extra effort in allocation and will increase the number of files in the database. ASM requires an additional instance on each node to maintain, which is quite simple, and rolling patches in ASM are slowly becoming a reality. I have found that managing RAW volumes is more trouble than maintaining the ASM instances, and the added benefits of ASM outweigh the maintenance for sure. Cluster file system maintenance is pretty much a wash.
As for ASM being widely used: the most recent RAC clusters I have built (the last 3) have all been on ASM -- 1 on HP-UX and 2 on Linux (Red Hat and Oracle Enterprise Linux) -- and all the future clusters coming up that I know of are going to be ASM as well. While it may be true that a lot of existing RAC environments have not yet gone to ASM, almost all new RAC environments are; it is certainly taking hold. If you look at the effort of moving a large database to ASM from RAW volumes or a cluster file system, it can appear to be a lot of work, and that is true; but in the long run my experience with ASM has been positive, so I would not hesitate to recommend that new RAC clusters be built with ASM, and that existing clusters have a migration plan in place. Some cluster file systems, like Veritas, GPFS, etc., involve additional cost, whereas ASM does not, so moving existing clusters can save money. RAW volume management may not fall on the DBA, but someone has to manage all those volumes at the SAN level, and that is additional management; it just may not sit with the DBA.
    Just my additional 2 cents worth.
    Hope this helps.

  • Why do people love NAS?

    Usually people buy NAS for simplicity, flexibility, and support.
    Of course you can take a computer and connect a hard drive and share it out, and when you do that you've got a NAS too, so it comes down to doing whatever meets your requirements.
    It's a bit like how you can buy a bunch of components and build a computer, or you can go buy a Dell - same result, one is usually a bit more work but may be a better solution depending on your needs.

DAS, read as Direct Attached Storage, is definitely an easier solution than NAS or SAN, and may have performance benefits if done correctly. I will add a big caveat to that and say that the DAS solution in question, to get the performance, would be multiple external drives, probably in a good quality chassis (dual power supplies, etc.) and feeding into a good quality hardware RAID card or two. A single USB drive, even USB 3.0, isn't going to cut it with multiple connections reading and writing data.
One of the benefits of decentralized storage - NAS or SAN - is the flexibility you get from it with virtual server systems, especially with multiple hosts... Oh, the host server died. Migrate the VMs over to another in the pool. Done. Oh, wait, there was directly attached storage on that host, guess that server won't be able to be used until...

  • ASM on RHEL6 using udev

    Hi,
    I'm installing Oracle RAC 11.2.0.3 on RHEL6.3. Since the asmlib RPMs are not provided any more (for RHEL6) I've looked into udev and thanks to another forum member I am now able to use either a whole disk or a partition.
    My question now is (since both ways seem to be supported): which method is the better one, what are the advantages and disadvantages of these methods?
    Thanks,
    Werner

    Hi Friend,
    Good Query.
ASMLib is an optional set of tools and a kernel driver that sits between ASM and the hardware, as well as an application library used by the Oracle database software to access ASM disks. It is a support library for the ASM feature of Oracle 10g and 11g single-instance database servers as well as RAC installations. ASM and regular database instances can use ASMLib as an alternative interface for disk access. ASMLib has three components: the oracleasm kernel driver, the oracleasm-support utilities, and the oracleasmlib library.
    Oracle recommends using ASM with ASMLib together for better manageability and persistent device naming. Note that Oracle makes no claims that ASM with ASMLib delivers performance benefits over ASM without ASMLib.
    Advantages
    • Perceived better manageability.
    • Well documented and recommended by Oracle.
    • Some Oracle DBAs and SysAdmins are trained in how to use ASM with ASMLib and are comfortable with this environment.
    • Optimized for database applications via direct and async I/O provided by the ASMLib kernel driver.
    Thanks
    LaserSoft
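Since the question is specifically about udev on RHEL6, here is a sketch of a rules file; the WWID, disk name, and grid/asmadmin ownership are assumptions for illustration (obtain the real WWID with /sbin/scsi_id --whitelisted --device=/dev/sdb1):

```
# /etc/udev/rules.d/99-oracle-asmdevices.rules (sketch)
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29f000000000000000000000001", NAME="asm-disk1", OWNER="grid", GROUP="asmadmin", MODE="0660"
```

After editing, `udevadm trigger` (or a reboot) applies the rule; pointing ASM_DISKSTRING at /dev/asm-disk* then lets ASM discover the device under a stable name with the right ownership, which is the main job ASMLib used to do.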

  • Benefits of Partitioning a Database running with ASM

    Hi All,
I am trying to understand the additional benefits of partitioning tables/tablespaces in a database which is running on ASM.
Further, I wish to understand the approach followed by other teams for partitioning tables, and the benefits they reaped from using ILM.
    Regards,
    Vinod Dudeja

I would make sure you set all the kernel parameters correctly, per the install guide.
So is ASM already running and you are creating the DB, or are you trying to create an ASM instance?
    Also found this on Metalink:
    Applies to:
    Oracle Server - Enterprise Edition - Version: 9.2.0.4
    SUSE \ UnitedLinux x86-64
Red Hat Enterprise Linux Advanced Server x86-64 (AMD Opteron Architecture)
    Linux x86-64
    Symptoms
    When trying to increase the SGA to approach half available RAM with an Oracle 64bit version on a Linux 64bit operating system, even though shmmax is set to match half the amount of RAM, you get the following error when trying to start the instance:
    SQL> startup nomount
    ORA-27102: out of memory
    Linux-x86_64 Error: 28: No space left on device
    Changes
    shmall is set to default at 2097152
    Cause
    shmall is number of pages of shared memory to be made available system-wide.
    Solution
    Set shmall equal to shmmax divided by the page size.
    FYI: The page size can be seen using the following command:
    $ getconf PAGE_SIZE
    Errors
    ORA-27102 out of memory
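The Solution above is simple arithmetic and can be scripted; the values read here are whatever the system currently has:

```shell
# shmall is measured in pages; derive it from shmmax (bytes) / page size
PAGE_SIZE=$(getconf PAGE_SIZE)          # typically 4096 on x86-64
SHMMAX=$(cat /proc/sys/kernel/shmmax)   # bytes
SHMALL=$(( SHMMAX / PAGE_SIZE ))
echo "kernel.shmall = $SHMALL"          # put this in /etc/sysctl.conf, then run: sysctl -p
```

For example, with shmmax = 17179869184 (16GB) and a 4096-byte page size, shmall comes out to 4194304 pages.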

  • Clusterware with ASM on NFS / NAS

Are there notes / experiences on using an NFS / NAS that provides the "LUNs" or "disks" for ASM in a Grid Infrastructure install?
Oracle GI would be installed on the local nodes, but the shared storage that I am looking for is NFS / NAS (e.g. NetApp). Presumably ASMLib might not be required (even if the nodes are Oracle / Red Hat Linux)?
    Hemant K Chitale

    Hi Hemant,
Hemant K Chitale wrote:
Modifying ASM_DISKSTRING via an ALTER SYSTEM command isn't an option until and unless you already have ASM installed and running. 11gR2 Clusterware installation attempts to install and configure ASM (at least for OCR and Voting Disks). So it becomes a "chicken and egg" situation!
You can configure ASM_DISKSTRING using the OUI at the "Create ASM Disk Group" step, via the "Change Discovery Path" button.
With devices created (by "dd") on an NFS mountpoint, oracleasm doesn't CREATEDISK unless I use losetup. Fine, I had the same issue testing 10gR2 ASM (although I have been told that 11gR2 does not support losetup).
I believe we will not need to use losetup anymore.
I installed Oracle Clusterware on 2 hosts without a storage system, using NFS (it's for test only).
    Creating zero-padded files.
    dd if=/dev/zero of=/oracle/asm/devices/nfsdisk1 bs=8192 count=131072
    131072+0 records in
    131072+0 records out
    1073741824 bytes (1.1 GB) copied, 27.9333 seconds, 38.4 MB/s
    dd if=/dev/zero of=/oracle/asm/devices/nfsdisk2 bs=8192 count=131072
    131072+0 records in
    131072+0 records out
    1073741824 bytes (1.1 GB) copied, 19.9333 seconds, 50.4 MB/s
    dd if=/dev/zero of=/oracle/asm/devices/nfsdisk3 bs=8192 count=131072
    131072+0 records in
    131072+0 records out
    1073741824 bytes (1.1 GB) copied, 17.9333 seconds, 52.4 MB/s
#  chown oracle.asmadmin /oracle/asm/devices/nfsdisk*
#  chmod 0660 /oracle/asm/devices/nfsdisk*
Configuring NFS
    ### NFS Server
    # cat /etc/exports
    /oracle/asm/devices     129.10.10.0/24(rw,sync,no_wdelay,insecure_locks,no_root_squash)
    ### Mounting NFS on both servers "/etc/fstab"
server-nas:/oracle/asm/devices  /oracle/asm/disks  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0  0 0
At the "Create ASM Disk Group" step I used "Redundancy Normal" and changed "Change Discovery Path" to "/oracle/asm/disks/*".
The pre-req checks on the ASM devices will fail; I checked "Ignore":
    Error Message:PRVF-7039 : File system exists on location "/oracle/asm/disks/nfsdisk1"
    Cause: Existing file system found on the specified location.
    Action: Ensure that the specified location does not have an existing file system.
    Error Message:PRVF-7039 : File system exists on location "/oracle/asm/disks/nfsdisk2"
    Cause: Existing file system found on the specified location.
    Action: Ensure that the specified location does not have an existing file system.
    Error Message:PRVF-7039 : File system exists on location "/oracle/asm/disks/nfsdisk3"
    Cause: Existing file system found on the specified location.
Action: Ensure that the specified location does not have an existing file system.
Output of root.sh:
    # NODE1
    ASM created and started successfully.
    Disk Group DATA created successfully.
    clscfg: -install mode specified
    Successfully accumulated necessary OCR keys.
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    CRS-4256: Updating the profile
    Successful addition of voting disk 948b607b2b554ffebf5d89ea1e68beff.
    Successful addition of voting disk 4ac78fefc7e94f78bf6749fa48edbe0e.
    Successful addition of voting disk 1a1931d2fcff4fa4bf9c8c4817c86b79.
    Successfully replaced voting disk group with +DATA.
    CRS-4256: Updating the profile
    CRS-4266: Voting file(s) successfully replaced
    ##  STATE    File Universal Id                File Name Disk group
    1. ONLINE   948b607b2b554ffebf5d89ea1e68beff (/oracle/asm/disks/nfsdisk1) [DATA]
    2. ONLINE   4ac78fefc7e94f78bf6749fa48edbe0e (/oracle/asm/disks/nfsdisk2) [DATA]
    3. ONLINE   1a1931d2fcff4fa4bf9c8c4817c86b79 (/oracle/asm/disks/nfsdisk3) [DATA]
    Located 3 voting disk(s).
    CRS-2672: Attempting to start 'ora.asm' on 'holanda'
    CRS-2676: Start of 'ora.asm' on 'holanda' succeeded
    CRS-2672: Attempting to start 'ora.DATA.dg' on 'holanda'
    CRS-2676: Start of 'ora.DATA.dg' on 'holanda' succeeded
    ACFS-9200: Supported
    ACFS-9200: Supported
    CRS-2672: Attempting to start 'ora.registry.acfs' on 'holanda'
    CRS-2676: Start of 'ora.registry.acfs' on 'holanda' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
The installation ended successfully and the cluster is up.
    Regards,
    Levi Pereira

  • Asm disk on NFS/NAS in linux

    Hi,
Currently I have one disk group of 1.5TB on SAN, in Linux. Now there is a requirement to increase the space on the server, and the space can only be added from an NFS/NAS drive. I am planning to create ASM disks using the 'dd' command.
I have the queries below:
1. Is it feasible to keep ASM disks on both SAN and NAS drives?
2. The asm_diskstring parameter is not set, as we are using multipath and have created raw devices; how do we set the parameter to detect the ASM disks?
3. Can anyone suggest a size for one disk on NAS in Linux?
4. I tried with losetup, and with this I can create up to 8 disks. If I create a disk of 250GB using the 'dd' command, is there any risk?
Please advise.
    Thanks,

user8379622 wrote:
Can anyone suggest the ideal size of an ASM disk on NFS (in Linux)? I have to create a disk of 1TB.
There is no "ideal" size -- there is a size limit, though (2TB, I think).
I wouldn't mix SAN and NAS disks in the same ASM disk group. Also keep in mind that NAS introduces a whole new set of underlying dependencies that need to be satisfied in order for ASM to mount those disks and for the database instance to use them. Not exactly the best of ideas for a robust system, where keeping the number of moving parts to a minimum is required.
IP is also a horribly slow medium for a storage protocol. It can have a significant negative performance impact.

  • Benefits of table partitioning on ASM controlled storage?

    Hi everyone,
I understand that, as the documentation describes, table partitioning for the purpose of striping is a very good strategy for high-performing OLTP databases: you can explicitly assign partitions to different database files residing on different disks. A pretty clear benefit.
What about ASM-controlled storage? Does partitioning your tables still help here, for the purpose of striping? Or is it even better to abandon the ASM option for the sake of a reliable striping benefit?
    Thanks in advance for any helpful thought or real life report...

    My situation is the following:
    I am a database developer/architect. In my organisation we have a DBA team, which has no knowledge of the application.
    So, I am not in a position to test. I am supposed to order/require this or that from our DBA, in this case ASM or NO ASM. Partitioning of tables is my job.
    That's why I am asking if anybody else has experience with this.
To our app: our challenge, when we go into production, will not be the mass of data but the concurrent use of it by up to 10,000 users. This is why I am considering a partitioning solution suitable to the usage profile.
    So does anybody have experience from similar scenarios? Is it possible to trace, what ASM actually does, e.g. does it stripe the tables across the drives optimally?

  • DNFS with ASM over dNFS with file system - advantages and disadvantages.

    Hello Experts,
    We are creating a 2-node RAC. There will be 3-4 DBs whose instances will be across these nodes.
    For storage we have 2 options - dNFS with ASM and dNFS without ASM.
    The advantages of ASM are well known --
    1. Easier administration for DBA, as using this 'layer', we know the storage very well.
    2. automatic re-balancing and dynamic reconfiguration.
3. Striping and mirroring (though we are not using mirroring in our environment; external redundancy is provided at the storage level).
    4. Less (or no) dependency on storage admin for DB file related tasks.
    5. Oracle also recommends to use ASM rather than file system storage.
Advantages of dNFS (Oracle Direct NFS Client):
    1. Oracle bypasses the OS layer, directly connects to storage.
2. Better performance, as the user's data does not need to pass through the OS kernel's NFS stack.
    3. It load balances across multiple network interfaces in a similar fashion to how ASM operates in SAN environments.
    Now if we combine these 2 options , how will be that configuration in terms of administration/manageability/performance/downtime in future in case of migration.
    I have collected some points.
Points in favor of NOT having ASM:
1. ASM is an extra layer on top of storage, so if using dNFS this layer could be removed if there are no performance benefits.
2. Store the data in a file system rather than ASM.
3. Striping will be provided at the storage level (not very sure about this).
4. External redundancy is being used at the storage level, so it may be better to remove ASM.
Points for HAVING ASM with dNFS:
1. If we remove ASM, then the DBA has little or no control over storage. He can't even see how much free space is left at the physical level.
2. The striping option is there to gain performance benefits.
3. Multiplexing has benefits over mirroring when it comes to recovery.
(e.g., suppose one database is created with only 1 control file, as external mirroring is in place at the storage level, and another database is created with 2 copies, multiplexed at the Oracle level; if an rm command is issued to remove that file, there will definitely be a time difference in restoring the file.)
4. We are now familiar and comfortable with ASM.
    I have checked MOS also but could not come to any conclusion, Oracle says --
    "Please also note that ASM is not required for using Direct NFS and NAS. ASM can be used if customers feel that ASM functionality is a value-add in their environment. " ------How to configure ASM on top of dNFS disks in 11gR2 (Doc ID 1570073.1)
    Kindly advise which one I should go with. I would love to go with ASM but If this turned out to be a wrong design in future, I want to make sure it is corrected in the first place itself.
    Regards,
    Hemant

I agree: having ASM on NFS is going to give little benefit whilst adding complexity. The NAS carries out mirroring and striping in hardware, whereas ASM does it in software.
I would recommend dNFS only if NFS performance isn't acceptable, as dNFS introduces an additional layer with potential bugs! When I first used dNFS in 11gR1, I came across lots of bugs and worked with Oracle Support to have them all resolved. I recommend having a read of this Metalink note:
Required Diagnostic for Direct NFS Issues and Recommended Patches for 11.1.0.7 Version (Doc ID 840059.1)
Most of the fixes have been rolled into 11gR2, and I'm not sure what the state of play is on 12c.
    Hope this helps
    ZedDBA
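Whichever combination is chosen, it is worth verifying that Direct NFS is actually in use; a sketch against the standard dNFS views (empty results mean the database silently fell back to kernel NFS):

```shell
sqlplus -s / as sysdba <<'EOF'
SELECT svrname, dirname FROM v$dnfs_servers;
SELECT filename FROM v$dnfs_files;
EOF
```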

  • Oracle 11g benefits

    Hello Guys
The Oracle 11g new-features notes found on the web are very generic; if I want to convince my client to switch from Oracle 10g to Oracle 11g, that information will not be useful.
If you have migrated to 11g and worked on it, please reply with practical reasons from a DBA's point of view: how will a customer benefit from migrating from Oracle 10g to 11g?
For example, if I mention the ADR feature, it doesn't mean anything to him. Active Data Guard may be one such feature, but will it have any benefit where Data Guard is already implemented? And will it give the application side any advantage?
Say, for example, I am discussing this with the application team head: how will he benefit from migrating to 11g? Which features will benefit him, and how? Will those features be appealing enough to justify the migration effort?
thanks
cheers

    In my opinion, the best way to convince them is to do an assessment for their particular environment, based on the specifics.
    For example, I've done such one for a database and here are some of the points:
1. Almost 30% (as reported by Grid Control) of frequently executed queries are on tables like XX.XXXX and YY.YYYYY. These tables are relatively small, and statements and functions on them are good candidates for modification to use:
    Query Results Cache and PL/SQL Function Result Cache (Oracle 11G Enterprise Edition option, no extra cost needed).
    2. From time to time we experience sudden execution plan changes and get worse plans. This degrades applications' performance orders of magnitude. Oracle 11g has a new built-in feature called "SQL Plan Management". As described in the docs:
    "SQL plan management prevents performance regressions resulting from sudden changes to the execution plan of a SQL statement by providing components for capturing, selecting, and evolving SQL plan information. If you are performing a database upgrade that installs a new optimizer version, it can result in plan changes for a small percentage of SQL statements, with most of the plan changes resulting in either no performance change or improvement. However, certain plan changes may cause performance regressions.
    With SQL plan management, the optimizer automatically manages execution plans and ensures that only known or verified plans are used. When a new plan is found for a SQL statement, the plan is not used until it has been verified by the database to have comparable or better performance than the current plan. This means if you seed SQL plan management with your current (pre-11g) execution plan, which will become the SQL plan baseline for each statement, the optimizer uses these plans after the upgrade. If the 11g optimizer determines that a different plan should be used, the new plan is queued for verification and will not be used until it has been confirmed to have comparable or better performance than the current plan."
    I don't expect this to be perfect but at least it will decrease the number of such problematic periods.
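A minimal sketch of seeding SQL Plan Management around an upgrade (the parameter and view names are the documented ones; the workload step is whatever is representative for you):

```shell
sqlplus -s / as sysdba <<'EOF'
-- Capture baselines while running the representative workload...
ALTER SYSTEM SET optimizer_capture_sql_plan_baselines = TRUE;
-- ...workload runs here...
ALTER SYSTEM SET optimizer_capture_sql_plan_baselines = FALSE;

-- Baselines are used by default; verify what was captured
SELECT sql_handle, plan_name, enabled, accepted
FROM   dba_sql_plan_baselines;
EOF
```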
    3. 10g Recovery Manager doesn't support parallelism per datafile. In our environment we have several 30+GB datafiles and since our storage system is not very fast for such type of operations, non-parallel restore of a single datafile will take much more time compared to the same activity on 11g. 11g supports RMAN datafile restore in parallel.
    4. Several optimizations in optimizer work. One of them is better handling of full scans:
    We've done some tests 10g/11g and were surprised because of this direct path read/db file scattered reads thing. Our tests are:
    Two databases - one 11g and one 10g with same sga_target,sga_max_size(500MB) and pga_aggregate_target(200MB), everything else is as it is set during install by DBCA. These two databases run on one host.
    The storage is a NAS(6 discs) managed by 11g ASM (one normal redundancy diskgroup ) which is used by the databases.
    I imported a 3GB table on every database and gathered stats. To not go in detail as of now, the results are:
    three consecutive runs on 11g.
    average execution time - 50 seconds
    three consecutive runs on 10g.
    average execution time - 125 seconds
    The statement is:
    select /*+ full(t) nocache(t) */ count(1) from zz.zzz t;
    and there is no other activity on the host and the storage.
    Same number of logical and physical reads during 11g and 10g runs.
    During the above runs while run on:
    11g, iostat reports ~14MB/s per disk
    10g, iostat reports ~5MB/s per disk
    After tracing these sessions we figured out that on 11g direct path read is used, while 10g uses db file scattered read.
5. We can use AUDIT_SYS_OPERATIONS in conjunction with XML,EXTENDED for auditing purposes. 10.2.0.4 has a bug in this area that is fixed from 11.1.0.6 onwards.
    6. Database Resident Connection Pooling(Concepts guide 11g documentation)
    "Database Resident Connection Pooling (DRCP) provides a connection pool in the database server for typical Web application usage scenarios. DRCP pools dedicated servers, which comprise of a server foreground combined with a database session, to create pooled servers.
    A Web application typically acquires a database connection, uses the connection for a short period, and then releases the connection. DRCP enables multiple Web application threads and processes to share the pooled servers for their connection needs.
    DRCP complements middle-tier connection pools that share connections between threads in a middle-tier process. DRCP also enables you to share database connections across multiple middle-tier processes. These middle-tier processes may belong to the same or different middle-tier host.
    DRCP enables a significant reduction in key database resources that are required to support a large number of client connections. DRCP reduces the amount of memory required for the database server and boosts the scalability of both the database server and the middle-tier. The pool of readily available servers also reduces the cost of re-creating client connections.
    DRCP is especially useful for architectures with multi-process, single-threaded application servers, such as PHP and Apache servers, that cannot do middle-tier connection pooling. The database can scale to tens of thousands of simultaneous connections with DRCP."
    It will help us to improve dedicated server usage(currently we use shared server).
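Enabling DRCP is essentially a one-call sketch (DBMS_CONNECTION_POOL is the documented package; clients then request a pooled server with (SERVER=POOLED) in their connect string, or an easy-connect string like host/service:POOLED):

```shell
sqlplus -s / as sysdba <<'EOF'
EXEC DBMS_CONNECTION_POOL.START_POOL();
SELECT connection_pool, status, maxsize FROM dba_cpool_info;
EOF
```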
7. A lot of bug fixes for bugs that don't have 10g patches, are fixed only in 11g, and affect us -- examples: the XML,EXTENDED with AUDIT_SYS_OPERATIONS bug, FGA bugs, the job system on a logical standby database, "ORA-12569: TNS:packet checksum failure" during sqlplus login, and so on.
    It's for sure that there are much more such convincing points, but a several days assessment will make it possible to identify the most important ones from performance/availability/security perspective for their environment. As it is often the case - the devil is in the details. It's all about the way they use the database.

  • About the error: "The account is not authorized to login from this station" when you access NAS devices from Windows 10 Technical Preview (build 9926)

    Scenario:
    With the release of Windows 10 Technical Preview (build 9926), some users may encounter the error message “The account is not authorized to login from this station” when trying to access remote files saved on NAS storage. In addition, the following entry may also be found in Event Viewer:
    Rejected an insecure guest logon.
    This event indicates that the server attempted to log the user on as an unauthenticated guest but was denied by the client. Guest logons do not support standard security features such as signing and encryption. As a result,
    guest logons are vulnerable to man-in-the-middle attacks that can expose sensitive data on the network. Windows disables insecure guest logons by default. Microsoft does not recommend enabling insecure guest logons.
    Background:
    The error message is due to a security-related change to remote file access that we made in Windows 10 Technical Preview (build 9926).
    Previously, remote file access included a way of connecting to a file server without a username and password, termed “guest access”.
    With guest access authentication, the user does not need to send a user name or password.
    The security change is intended to address a weakness when using guest access.  While the server may be fine not distinguishing among clients for files (and, you can imagine in the home scenario that it doesn’t
    matter to you which of your family members is looking at the shared folder of pictures from your last vacation), this can actually put you at risk elsewhere.  Without an account and password, the client doesn’t end up with a secure connection to the server. 
    A malicious server can put itself in the middle (also known as the Man-In-The-Middle attack), and trick the client into sending files or accepting malicious data.  This is not necessarily a big concern in your home, but can be an issue when you take your
    laptop to your local coffee shop and someone there is lurking, ready to compromise your automatic connections to a server that you can’t verify.  Or when your child goes back to the dorm at the university. The change we made removes the ability to connect
    to NAS devices with guest access, but the error message which is shown in build 9926 does not clearly explain what happened. We are working on a better experience for the final product which will help people who are in this situation. 
    As a Windows Insider you’re seeing our work in progress; we’re sorry for any inconvenience it may have caused.
    Suggestion:
    You may see some workarounds (e.g. a registry change that restores the ability to connect with guest access).
    We do NOT recommend making that change as it leaves you vulnerable to the kinds of attacks this change was meant to protect you from.
    The recommended solution is to add an explicit account and password on your NAS device, and use that for the connections.  It is a one-time inconvenience,
    but the long term benefits are worthwhile.  If you are having trouble configuring your system, send us your feedback via the Feedback App and post your information here so we can document additional affected scenarios.
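    For example, after creating an account on the NAS, the share can be mapped with explicit credentials from a command prompt (the drive letter, server, share, and account names below are placeholders for your own setup):

    ```shell
    REM Sketch (Windows cmd, hypothetical names): map the NAS share with an
    REM explicit account instead of guest access; you will be prompted for the password.
    net use Z: \\MyNAS\Share /user:MyNAS\nasuser /persistent:yes
    ```

    The /persistent:yes switch re-establishes the mapping at sign-in, so the one-time setup carries over between sessions.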
    Alex Zhao
    TechNet Community Support

    Hi RPMM,
    HomeGroup works great in Windows 10 Technical Preview (build 9926). After I joined my Windows 10 Technical Preview (build 9926) machine to the HomeGroup, I can access the shares smoothly.
    Alex Zhao
    TechNet Community Support

  • How do you install Lightroom on a NAS drive?

    I want to install Lightroom and all of its data on my NAS drive (as my main drive contains no data on it whatsoever).
    I also do not have a 'Pictures', 'Videos', or 'Music' folder on this drive either.
    Any help would be appreciated.
    Thank you,
    Keith

    Keith, first I would recommend installing applications on the same drive/partition that contains your operating system. If you are using an SSD for your operating system, then much of the benefit of the SSD is lost when running applications from a traditional hard drive.
    What version of Photoshop Lightroom are you wanting to install on this drive?  Is Lightroom included as part of a membership or do you own a license?

  • Need for multiple ASM disk groups on a SAN with RAID5??

    Hello all,
    I've successfully installed Clusterware and ASM on a 5-node system, and I'm trying to use asmca (11gR2 on RHEL5) to configure the disk groups.
    I have a SAN that was previously used for a 10g ASM RAC setup, so I'm reusing the candidate volumes that ASM has found.
    I noticed that in the previous incarnation, several disk groups had been created, for example:
    ASMCMD> ls
    DATADG/
    INDEXDG/
    LOGDG1/
    LOGDG2/
    LOGDG3/
    LOGDG4/
    RECOVERYDG/
    Now, this is all on a SAN, which basically has two pools of drives, each set up in a RAID5 configuration. Pool #1 contains ASM volumes named ASM1 - ASM32; each of these logical volumes is about 65 GB.
    Pool #2 has volumes ASM33 - ASM48, each of which is about 16 GB in size.
    I used ASM33 from pool #2 by itself to contain my cluster voting disk and OCR.
    My question is: with this type of setup, would creating as many disk groups as listed above really do any good for performance? With all of this on a SAN, with logical volumes on top of a couple of sets of RAID5 disks, would divisions at the disk group level with external redundancy accomplish anything?
    I was thinking of starting with about half of the ASM1 - ASM31 'disks' to create one large DATADG disk group, which would house all of the database instances' data, indexes, etc. I'd keep the remaining large candidate disks for later growth.
    I was going to use the pool of smaller disks (except the one already dedicated to cluster needs) as a decently sized RECOVERYDG to house logs, the flashback area, etc. This pool appears to be separate from pool #1, so there may be some speed benefit there.
    But really, is there any need to separate the disk groups on a SAN with two pools of RAID5 logical volumes?
    If so, can someone give me some ideas why, links to this info, etc.?
    Thank you in advance,
    cayenne

    The best practice is to use two disk groups: one for data and the other for the flash/fast recovery area. There is really no need to have a disk group for each type of file; in fact, the more disks in a disk group (up to a point, in my experience), the better for performance and space management. However, there are times when multiple disk groups are appropriate (not saying this is one of them, just FYI), such as for backup/recovery and life-cycle management.
    Typically you will still get benefit from double striping, i.e. having the SAN's RAID groups present multiple LUNs to ASM, and then having ASM use those LUNs in disk groups; I saw this in my own testing. Start off with a minimum of 4 LUNs per disk group, and add LUNs in pairs, as this provided optimal performance (at least in my testing).
    You should also have a set of standard LUN sizes to present to ASM so things are consistent across your enterprise; the sizing is typically based on database size. For example:
    300GB LUN: database > 10TB
    150GB LUN: database 1TB to 10 TB
    50GB LUN: database < 1TB
    As a database grows beyond a threshold, the larger LUNs are swapped in and the previous ones are swapped out. With thin provisioning it is a little different, since you only need to resize the ASM LUNs. I'd also recommend having at least two of each standard LUN size ready to go in case you need space in an emergency; even with capacity management, you never know when something will consume space too quickly.
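    The sizing tiers above can be sketched as a small policy function (the tier boundaries are the example figures from this post; treat them as placeholders for your own standards):

    ```python
    # Sketch of the standard-LUN-size policy described above. The thresholds
    # (50/150/300 GB LUNs for <1 TB, 1-10 TB, >10 TB databases) are the example
    # figures, not a universal rule.
    def standard_lun_gb(db_size_tb: float) -> int:
        if db_size_tb < 1:
            return 50        # database < 1 TB
        elif db_size_tb <= 10:
            return 150       # database 1 TB to 10 TB
        return 300           # database > 10 TB

    # A database that grows past a threshold moves to the next LUN size,
    # e.g. growing from 0.8 TB to 2 TB moves from 50 GB to 150 GB LUNs.
    print(standard_lun_gb(0.8), standard_lun_gb(2), standard_lun_gb(12))  # 50 150 300
    ```

    Keeping the tiers in one function makes it easy to enforce the "consistent LUN sizes across the enterprise" rule when provisioning new disk groups.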
    ASM is all about space savings, performance, and management :-).
    Hope this helps.
