Decision on File system management in Oracle+SAP

Hi All,
In my production system we used to have /oracle/SID/sapdata1 and /oracle/SID/sapdata2. Initially there were many datafiles assigned to the tablespace PSAPSR3, a few with AUTOEXTEND ON and a few with AUTOEXTEND OFF. As per my understanding, DB02 shows the information only tablespace-wise: it will report AUTOEXTEND ON as soon as at least one of the datafiles has AUTOEXTEND ON. In PSAPSR3 all the datafiles with AUTOEXTEND ON are from sapdata1, which has only 50 GB left. All the files with AUTOEXTEND OFF are from sapdata2, which has 900 GB of space left.
Now the questions are:
1. Do I need to request additional space for sapdata1? Some of its datafiles are close to their autoextend limits and that much space is not left in the file system (sapdata1), so how will they extend? DB growth is 100 GB per month.
2. We usually add a 10 GB datafile to the tablespace, with autoextend up to 30 GB.
Can we add another datafile, this time from sapdata2, with AUTOEXTEND ON, and will the rest be taken care of automatically?
Please suggest.
Regards,
Vicky

Hi Vicky,
As you have 100 GB/month growth, my suggestions here would be:
1) Add 2 more mount points, sapdata3 and sapdata4, with around 1 TB of space.
   This distributes the data across 4 partitions for better performance.
2) As sapdata1 holds the datafiles with autoextend ON, you need to extend that file system to at least 500 GB, so that whenever data is written to datafiles under sapdata1 they have room to grow via the autoextend feature. Without sufficient disk space this can lead to space problems, and transactions may fail with dumps.
3) No need to change anything on sapdata2, as you already have 900 GB of free space there.
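For question 2: yes, once the existing files fill up, Oracle allocates new extents in any datafile of the tablespace that still has room, so a new AUTOEXTEND ON file on sapdata2 will be used automatically. A minimal SQL sketch (the path, file number and sizes are hypothetical; on an SAP system you would normally do this with BR*Tools/brspace):

-- check which PSAPSR3 files autoextend, and up to what limit
SELECT file_name, bytes/1024/1024/1024 AS size_gb,
       autoextensible, maxbytes/1024/1024/1024 AS max_gb
  FROM dba_data_files
 WHERE tablespace_name = 'PSAPSR3';

-- add a 10 GB file on sapdata2 that can autoextend to 30 GB
ALTER TABLESPACE psapsr3
  ADD DATAFILE '/oracle/SID/sapdata2/sr3_99/sr3.data99'
  SIZE 10240M AUTOEXTEND ON NEXT 100M MAXSIZE 30720M;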
Hope this helps.
Regards,
Deepak Kori

Similar Messages

  • File system management when sharing photos/music with other users in Mac OS

    I'm a new convert from the Windows world. I'm trying to set up multiple accounts for each user within my household. However, I'd like us all to share the same set of folders for photos within iPhoto and music in iTunes. Is it best to just use the "Shared" folder instead of keeping them in the standard home directory and trying to share those folders with other users? What is the best approach to this? I would imagine most people do this for managing their music, if not photos as well. Just trying to learn best practice on file system management before I copy over gigs of files from my old PC.
    Thanks much,
    champlir

    champlir
    Welcome to the Apple Discussions. This might help with iPhoto.
    There are two ways to share, depending on what you mean by 'share'.
    If you want the other user to be able to see the pics, but not add to, change or alter your library, then enable Sharing in your iPhoto (Preferences -> Sharing), leave iPhoto running and use Fast User Switching to open the other account. In that account, enable 'Look For Shared Libraries'. Your Library will appear in the other source pane.
    Remember iPhoto must be running in both accounts for this to work.
    If you want the other user to have the same access to the library as you: to be able to add, edit, organise, keyword etc. then:
    Quit iPhoto in both accounts
    Move the iPhoto Library Folder to an external HD set to ignore permissions. You could also use a Disk Image or even partition your Hard Disk.
    In each account in turn: Hold down the option (or alt) key and launch iPhoto. From the resulting dialogue, select 'Choose Library' and navigate to the new library location. From that point on, this will be the default library location. Both accounts will have full access to the library, in fact, both accounts will 'own' it.
    However, there is a catch with this system, and it is a significant one. iPhoto is not a multi-user app; it does not have the code to negotiate two users simultaneously writing to the database, and trying will cause database corruption. So: only one user at a time, and back up, back up, back up.
    Finally: if you're comfortable in the Terminal, and understand file permissions, ACLs etc., some folks have reported success using the process outlined here. (Note this page refers to 10.4, but it should also work on 10.5.) If you're not comfortable with the Terminal, and don't know an ACL from the ACLU, then you're best doing something else... Oh, and the warning about simultaneous users still applies.
    Regards
    TD

  • Need help with file system creation for Oracle DB installation

    Hello,
    I am new to the Solaris/Unix system landscape. I have a Sun Enterprise 450 with an 18 GB hard drive. It has Solaris 9 on it and no other software at this time. I am planning on adding 2 more hard drives, 18 GB and 36 GB, to accommodate an Oracle DB.
    I recently went through the Solaris intermediate sysadmin training, so I know the basic stuff but am not fully confident to carry out the task on my own.
    I would appreciate it if someone could help me with the sequence of steps that I need to perform to
    1. recognize the new hard drives in the system,
    2. format,
    3. partition. What is the normal strategy for partitioning? My current thinking is to have the 36 GB and 18 GB drives as data drives. This is where I am a little bit lost. Can I make an entire 36 GB drive one slice for data? I am not quite sure how this is done in real life; need your help.
    4. creating the file system to store the database files.
    Any help would be appreciated.
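    A rough sketch for steps 1-4 on Solaris 9 (device names are hypothetical; check your own format output before running newfs):
    # 1. have the kernel create device nodes for the new disks
    devfsadm -c disk              # or a reconfiguration boot: reboot -- -r
    # 2./3. label and partition; one big slice (e.g. s0) spanning a whole
    #       data disk is a perfectly normal real-life layout
    format
    # 4. build and mount a UFS file system for the database files
    newfs /dev/rdsk/c1t1d0s0
    mkdir -p /u02/oradata
    mount /dev/dsk/c1t1d0s0 /u02/oradata
    # add a matching /etc/vfstab entry so it mounts at boot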

    Hello,
    Here is a rough idea for HA from my experience.
    The important thing is that the binaries required to run SAP
    must be accessible before and after switchover.
    In this respect the file system type doesn't matter much,
    but SAP may recommend certain file systems on Linux;
    please refer to the SAP installation guide.
    I always use reiserfs or ext3fs.
    For soft links I recommend you refer to the SAP installation guide.
    In your configuration the files related to SCS and DB are the key.
    Again, those files must be accessible both from hostA and from hostB.
    The easiest way is to share these files via NFS or another shared file system
    so that both nodes can access them,
    and let the clustering software mount and unmount those directories.
    DB binaries, data and logs should be placed in the shared storage subsystem.
    (ex. /oracle/*)
    SAP binaries, profiles and so on should be placed in shared storage as well.
    (ex. /sapmnt/*)
    You may want to place the binaries on local disk to make sure the binaries
    are always accessible at the OS level, even if the connection to the storage
    subsystem is lost.
    In that case you have to sync the binaries on both nodes manually.
    The easiest way is just to put them on shared storage and mount them!
    Furthermore you can use the sapcpe function to sync the necessary binaries
    from /sapmnt to /usr/sap/<SID>.
    For your last question: /sapmnt should be located on the storage subsystem -
    just don't let the storage go down!

  • SC 3.0 file system failover for Oracle 8i/9i

    I'm an Oracle DBA for our company, and we have been using shared NFS mounts successfully for the archivelog space on our production 8i 2-node OPS Oracle databases. From each node, both archivelog areas are always available. This is the setup recommended by Oracle for OPS and RAC.
    Our SA team is now wanting to change this to a file system failover configuration instead. And I do not find any information from Oracle about it.
    The SA request states:
    "The current global filesystem configuration on (the OPS production databases) provides poor performance, especially when writing files over 100MB. To prevent an impact to performance on the production servers, we would like to change the configuration ... to use failover filesystems as opposed to the globally available filesystems we are currently using. ... The failover filesystems would be available on only one node at a time, arca on the "A" node and arcb on the "B" node. in the event of a node failure, the remaining node would host both filesystems."
    My question is, does anyone have experience with this kind of configuration with 8iOPS or 9iRAC? Are there any issues with the auto-moving of the archivelog space from the failed node over to the remaining node, in particular when the failure occurs during a transaction?
    Thanks for your help ...
    -j

    The problem with your setup of NFS cross mounting a filesystem (which could have been a recommended solution in SC 2.x for instance versus in SC 3.x where you'd want to choose a global filesystem) is the inherent "instability" of using NFS for a portion of your database (whether it's redo or archivelog files).
    Before this goes up in flames, let me speak from real world experience.
    Having run HA-OPS clusters in the SC 2.x days, we used either private archive log space, or HA archive log space. If you use NFS to cross mount it (either hard, soft or auto), you can run into issues if the machine hosting the NFS share goes out to lunch (either from RPC errors or if the machine goes down unexpectedly due to a panic, etc). At that point, we had only two options: bring the original machine hosting the share back up if possible, or force a reboot of the remaining cluster node to clear the stale NFS mounts so it could resume DB activities. In either case any attempt at failover will fail because you're trying to mount an actual physical filesystem on a stale NFS mount on the surviving node.
    We tried to work this out using many different NFS options, we tried to use automount, we tried to use local_mountpoints then automount to the correct home (e.g. /filesystem_local would be the phys, /filesystem would be the NFS mount where the activity occurred) and anytime the node hosting the NFS share went down unexpectedly, you'd have a temporary hang due to the conditions listed above.
    If you're implementing SC 3.x, use hasp and global filesystems to accomplish this if you must use a single common archive log area. Isn't it possible to use local/private storage for archive logs or is there a sequence numbering issue if you run private archive logs on both sides - or is sequencing just an issue with redo logs? In either case, if you're using rman, you'd have to back up the redologs and archive log files on both nodes, if memory serves me correctly...
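    If you do go the private/local archive destination route, the per-instance setup is small (a sketch with hypothetical instance names and paths; 9i spfile syntax shown, on 8i you would set the equivalent parameter in each instance's init.ora). Each OPS/RAC instance archives its own redo thread with its own sequence numbers, so private areas don't collide; recovery and RMAN just need access to both sets:
    -- node A's instance archives to its local /arca, node B's to /arcb
    ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/arca' SID = 'PROD1';
    ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/arcb' SID = 'PROD2';
    -- RMAN must then back up archived logs from both nodes, e.g. by
    -- allocating a channel connected to each instance.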

  • Export a 500 GB database to a 100 GB file system space in Oracle 10g

    Hi All,
    Please let me know the procedure to export a 500 GB database to a 100 GB file system space.

    user533548 wrote:
    Hi Linda,
    The database version is 10g and the OS is Linux. Can we use the FILESIZE parameter for the export? Please advise on this.
    FILESIZE will limit the size of each file in case you specify multiple dump files. You can also specify multiple dump directories (in different file systems) when giving multiple dump files.
    For instance:
    dumpfile=dump_dir1:file1,dump_dir2:file2,dump_dir3:file3...
    Nicolas.
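    A worked sketch of that suggestion (directory names, paths and sizes are hypothetical; each DIRECTORY object must point at a different file system):
    CREATE DIRECTORY dump_dir1 AS '/u01/export';
    CREATE DIRECTORY dump_dir2 AS '/u02/export';
    CREATE DIRECTORY dump_dir3 AS '/u03/export';
    -- then, from the OS prompt: no single dump file exceeds 25 GB, and with
    -- %U Data Pump creates further files round-robin across the directories
    expdp system/<password> full=y filesize=25G \
      dumpfile=dump_dir1:full1_%U.dmp,dump_dir2:full2_%U.dmp,dump_dir3:full3_%U.dmp \
      logfile=dump_dir1:expfull.log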

  • Export BLOB to file system folder in Oracle 8i

    Hi Folks,
    I want to export the doc and zip files from an Oracle BLOB column to a folder on my desktop / network. Can anyone please suggest the easiest way?
    Thanks

    For 8i, I think you will have to rely on Java.
    http://www.oracle-base.com/articles/8i/ExportBlob.php
    For 9i and above, you can use UTL_FILE built in.
    http://www.oracle-base.com/articles/9i/ExportBlob9i.php
    Cheers
    Sarma.
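    To make the 9i+ approach concrete, here is a minimal sketch (table, column, directory and file names are hypothetical; note the file lands on the database server, from where you would copy it to your desktop):
    DECLARE
      l_blob   BLOB;
      l_file   UTL_FILE.FILE_TYPE;
      l_buf    RAW(32767);
      l_amount INTEGER := 32767;
      l_pos    INTEGER := 1;
      l_len    INTEGER;
    BEGIN
      SELECT doc_blob INTO l_blob FROM documents WHERE doc_id = 1;
      l_len := DBMS_LOB.GETLENGTH(l_blob);
      -- EXP_DIR must be a DIRECTORY object the database can write to
      l_file := UTL_FILE.FOPEN('EXP_DIR', 'doc1.zip', 'wb', 32767);
      WHILE l_pos <= l_len LOOP
        DBMS_LOB.READ(l_blob, l_amount, l_pos, l_buf);   -- read next chunk
        UTL_FILE.PUT_RAW(l_file, l_buf, TRUE);           -- write and flush
        l_pos := l_pos + l_amount;
      END LOOP;
      UTL_FILE.FCLOSE(l_file);
    END;
    /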

  • Source is a file system and targets are two SAP receiver systems - scenario??

    Hi All
    Can you please guide me on how to do the below scenario?
    The sender is an FTP server; the data will be picked up from there.
    On the receiver side there are two business systems (SAP), and BAPI is being used here.
    For one it is working fine, but how can I configure it so the same data flows to both business systems?
    Please help.
    I will be thankful if you can post a step-by-step document.
    Thanks
    Regards
    Priya

    Howdy,
    Try the instructions in the wiki below:
    http://wiki.sdn.sap.com/wiki/display/SI/StepbyStepguidetoExplainEnhancedReceiver+Determination
    In your case it's really just the Integration Directory side of the scenario as your Data/Message Types will be identical.
    Cheers
    Alex

  • Is it possible to change the file system owner in Oracle EBS R12.1.3

    We are on R12.1.3 running on Linux 5. The OS owners of the EBS are orcrp2ebs and apcrp2ebs; because of the 9-character length, ps -ef shows the uid instead of the complete username. So we are planning to change the owner names to 8 characters.
    What post-steps do we need to take care of from the EBS database and application side after changing the owners at the Unix level?

    953216 wrote:
    We are on R12.1.3 running on Linux 5. The OS owners of the EBS are orcrp2ebs and apcrp2ebs; because of the 9-character length, ps -ef shows the uid instead of the complete username. So we are planning to change the owner names to 8 characters.
    What post-steps do we need to take care of from the EBS database and application side after changing the owners at the Unix level?
    - Stop the application services
    - Change the ownership from the OS
    - Change the value of "s_dbuser" and "s_dbgroup" context variables in the database context file
    - Run AutoConfig on the database tier node
    - Change the value of "s_appsuser" and "s_appsgroup" in the application context file
    - Run AutoConfig on the application tier node
    - Start the application services
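    A sketch of the verification/AutoConfig part of that list (the context variable names are the standard ones quoted above; the paths are typical R12 defaults, adjust to your install):
    # application tier, as the renamed applmgr user, after sourcing the APPS env
    grep -E 's_appsuser|s_appsgroup' $CONTEXT_FILE    # confirm the new owner
    $ADMIN_SCRIPTS_HOME/adautocfg.sh                  # run AutoConfig
    # database tier, as the renamed oracle user, after sourcing the DB env
    $ORACLE_HOME/appsutil/scripts/$CONTEXT_NAME/adautocfg.sh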
    Thanks,
    Hussein

  • Create a file system directory, not an Oracle directory

    I want to create a directory in the file system, for example /test/directory1.
    I want to create a directory tree, for example:
    /test/directory1
    /test/dir2
    /test/dir2/folder
    as with a Linux mkdir.
    Is it possible with SQL?
    thanks
    Greetings

    I need Java (JServer) installed in Oracle, no?
    To load it into the database, with this:
    declare
      f  file_type;
      fz file_type;
      r  number;
    begin
    -- get a handle for the "tmp" directory
    f := file_pkg.get_file('/tmp');
    -- create a new temporary directory where the zip archive is being
    -- extracted into ... make the filename unique using TIMESTAMP
    fz := f.create_dir(
      'zipdir_temp_'||user||'_'||to_char(systimestamp, 'YYYYMMDD_HH24MISS.SSSS')
    );
    -- DOIT:
    -- extract the zipfile; the -qq switch is very important here - otherwise
    -- the OS process will not come back
    r := os_command.exec('unzip -o -qq [PATH/TO/ZIPFILE] -d '||fz.file_path);
    -- if the result is 0 (=success) load the contents of the temporary directory
    -- (recursively) with ONE (!) SQL INSERT command
    if r = 0 then
      insert into document_table
      select
        seq_documents.nextval id,
        e.file_path,
        e.file_name,
        file_pkg.get_file(e.file_path).get_content_as_clob('iso-8859-1') content
      from table(file_pkg.get_recursive_file_list(fz)) e;
    end if;
    -- finally delete the temporary directory and its contents
    fz := fz.delete_recursive();
    end;
    /
    sho err
    And how do I install JServer??
    Edited by: rafa298 on 05-may-2011 2:22
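    On the JServer question: the Java VM is installed into the database with scripts shipped under $ORACLE_HOME (a sketch; it is resource-hungry, so review the installation guide's shared_pool/java_pool prerequisites first):
    -- as SYS in SQL*Plus; '?' expands to $ORACLE_HOME
    @?/javavm/install/initjvm.sql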

  • SAP/ORACLE File Systems Disappeared in Local Zone

    I created some 20-30 file systems as metadevices for a SAP/Oracle installation on my 6320 SAN for a local zone.
    I did a newfs on all of them and then mounted them in the global zone at their appropriate mount points.
    I then did a zonecfg with the following input.
    zonepath: /export/zones/zsap21
    autoboot: true
    pool:
    inherit-pkg-dir:
    dir: /lib
    inherit-pkg-dir:
    dir: /platform
    inherit-pkg-dir:
    dir: /sbin
    inherit-pkg-dir:
    dir: /opt/sfw
    inherit-pkg-dir:
    dir: /usr
    fs:
    dir: /oracle
    special: /oracle_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/stage/920_64
    special: /oracle/stage/920_64_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /temp
    special: /temp_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /usr/local
    special: /usr/local_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /usr/sap
    special: /usr/sap_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /usr/sap/install
    special: /usr/sap/install_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /usr/sap/trans
    special: /usr/sap/trans_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /export/home/zsap21
    special: /export/home_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1
    special: /oracle/FS1
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/920_64
    special: /oracle/FS1/920_64
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/mirrlogA
    special: /oracle/FS1/mirrlogA
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/oraarch
    special: /oracle/FS1/oraarch
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/origlogA
    special: /oracle/FS1/origlogA
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/origlogB
    special: /oracle/FS1/origlogB
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/saparch
    special: /oracle/FS1/saparch
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/sapdata1
    special: /oracle/FS1/sapdata1
    raw not specified
    type: lofs
    options: []
    ***********more available but I truncated it here**********************
    I successfully installed and configured the zone.
    I then logged into the local zone and installed a new version of Oracle and a working SAP instance. I know that it worked because I was able to log into it.
    Upon rebooting the server and the local zone I have lost access to all of my file systems in the Oracle/SAP zone.
    It's almost as if the global zone has not mounted its file systems over the local zones mount points. In the local zone I can see no files in the directories where I would expect Oracle or SAP files.
    I either mis-configured the zonecfg or missed a step somewhere.
    I suspect that my file system contents are still around somewhere waiting to be hooked up with a zone's mount point.
    In the local zone a df -k shows all of the file systems (as if the local zone knows which file systems belong to it), but they all show the same size and free space (probably from the root file system).
    Any thoughts appreciated.
    Atis

    Do you have a mount point within the zone path for these oradb mounts?
    You have to add a directory entry for /oradb in the zone's root directory, i.e. <zonepath>/root/oradb
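    A sketch of the repair, using the zone name and paths from the post above (the lofs file systems are re-attached at zone boot, and each configured dir needs an existing directory under <zonepath>/root):
    zoneadm -z zsap21 halt
    # create any missing mount-point directories under the zone root, e.g.:
    mkdir -p /export/zones/zsap21/root/oradb
    # confirm the fs resources really are in the zone's configuration
    zonecfg -z zsap21 info fs
    zoneadm -z zsap21 boot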

  • Oracle eSSO - File System Synchronization

    I'm having a lot of trouble just setting up a simple file system synchronization for Oracle Enterprise Single Sign-On v10.1.0.2 or v10.1.0.4 (I've tried both). I've tried it many ways, but the agent never seems to be visible to the admin console. Here is the simplest example:
    On a single test machine, install the Admin Console. Use the Admin Console to create a new set of Global Agent Settings. Leave everything to default except under Synchronization, click Manage Synchronizers. Create a new synchronizer with synch type 'File System'. Go to the Required subkey and enter a network share under 'Server'. In this example, I created a share on the same local machine. The share gives 'Full Control' to 'Everyone' and 'Domain Users' groups. Save these settings in the Admin Console. Then Go to Tools > Generate Customized MSI in the Admin Console, exporting the newly created Global Agent Settings to the customized agent installer MSI. Then on the same test machine, install the agent by using the customized MSI. During the agent installation, choose 'Custom' and enable installation of Extensions > Synchronization Manager and File System synchronization type.
    So, that's what I've done. When I then go back to the Admin Console, connect to the repository (the share set up on the local machine) and select 'Configure SSO Support', the result is that it creates a folder called 'People' and two override objects: ENTLIST and ADMINOVERRIDE. That's it. No matter how many agents I install, on this computer or elsewhere on the network, 1) the admin console does not seem to see them, and 2) the ENTLIST of customized applications never gets pushed to the agents. In fact the ENTLIST shown in the Repository within the Admin Console does not seem to update when the apps list in the admin console is updated (even if I press 'Refresh' in the repository).
    Can anyone help? Is it something silly like the order in which I've done things, or did I miss a step somewhere?

    Hi,
    I have the same problem, but I use an Active Directory in the backend :o(
    I'm able to configure the admin console (connection to AD, creating a global agent for my AD server). When I install the ESSO-Agent on the same machine (where the administrative console is already installed)... it's quite easy, because the following "option" is available: "Export Apps to Agent"!!! Using this, there is no problem.
    On the other hand, when I install the ESSO-Agent on the user's workstation... it's really another story :o( For the moment, I'm not able to get the credentials (pre-configured applications) from Active Directory. I think that the (Global Agent) settings have been successfully pushed into AD, but I'm not able to get them from the user's workstation.
    What are the steps to realize this operation (get the pre-configured applications from AD)?
    I always get a connection window which requires the following three parameters: username, password and user path.
    Because AD is my backend server, I thought that the following "values" were correct...., but that's not the case, and I'm sure that I'm not connected to AD:
    username: cn=toto toto,cn=users,dc=name,dc=com
    password: mypwd
    user path: dc=name,dc=com
    I hope that you have resolved your problem, and that someone could help me :o)
    Thanks a lot.
    c u

  • Oracle RAC binaries on vxfs shared file system

    Hi,
    Is it possible to install the Oracle binaries on a VxFS cluster file system for Oracle RAC under Sun Cluster? Because as far as I know, we cannot use a VxFS cluster file system for our Oracle datafiles.
    TIA

    The above post is incorrect. You can have a cluster (global) file system using VxVM and VxFS. You do not need to have VxVM/CVM for this. A cluster file system using VxVM+VxFS can be used for Oracle binaries but cannot be used for Oracle RAC data files where they are updated from both nodes simultaneously.
    If further clarification is needed, please post.
    Tim
    ---

  • CE71SR5 install failure (Start SAP Secure Store in the File System)

    Hi!
    When I install CE71SR5 Composition Platform, I met a problem; below is the log information. Awaiting help, thanks.
    An error occurred while processing service SAP NetWeaver CE Developer Edition > SAP NetWeaver CE Development System (last error reported by the step: Cannot create the secure store. SOLUTION: See output of log file SecureStoreCreate.log):
    INFO: Loading tool launcher...
    INFO: [OS: Windows XP] [VM vendor: SAP AG] [VM version: 5.1.021] [VM type: SAP Java Server VM]
    INFO: Main class to start: "com.sap.security.core.server.secstorefs.SecStoreFS"
    INFO: Loading 8 JAR files: [C:\usr\sap\CE1\SYS\global\security\lib\tools\iaik_jce.jar, C:\usr\sap\CE1\SYS\global\security\lib\tools\iaik_jsse.jar, C:\usr\sap\CE1\SYS\global\security\lib\tools\iaik_smime.jar, C:\usr\sap\CE1\SYS\global\security\lib\tools\iaik_ssl.jar, C:\usr\sap\CE1\SYS\global\security\lib\tools\w3c_http.jar, C:\Program Files\sapinst_instdir\CE71_DEV_ADA\INSTALL\install\lib\sap.comtcexceptionimpl.jar, C:\Program Files\sapinst_instdir\CE71_DEV_ADA\INSTALL\install\lib\sap.comtcloggingjavaimpl.jar, C:\Program Files\sapinst_instdir\CE71_DEV_ADA\INSTALL\install\lib\sap.comtcsecsecstorefsjavacore.jar]
    INFO: Start SAP Secure Store in the File System - Copyright (c) 2003 SAP AG
    FATAL: Main class "com.sap.security.core.server.secstorefs.SecStoreFS" cannot be started: java.lang.ExceptionInInitializerError
        at javax.crypto.Cipher.getInstance(DashoA12275)
        at javax.crypto.Cipher.getInstance(DashoA12275)
        at iaik.security.provider.IAIK.a(Unknown Source)
        at iaik.security.provider.IAIK.addAsJDK14Provider(Unknown Source)
        at iaik.security.provider.IAIK.addAsJDK14Provider(Unknown Source)
        at com.sap.security.core.server.secstorefs.Crypt.<clinit>(Crypt.java:85)
        at com.sap.security.core.server.secstorefs.SecStoreFS.setSID(SecStoreFS.java:175)
        at com.sap.security.core.server.secstorefs.SecStoreFS.handleCreate(SecStoreFS.java:836)
        at com.sap.security.core.server.secstorefs.SecStoreFS.main(SecStoreFS.java:1306)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:585)
        at com.sap.engine.offline.OfflineToolStart.main(OfflineToolStart.java:161)
    Caused by: java.lang.SecurityException: Cannot set up certs for trusted CAs
        at javax.crypto.SunJCE_b.<clinit>(DashoA12275)
        ... 14 more
    Caused by: java.lang.SecurityException: Cannot locate policy or framework files!
        at javax.crypto.SunJCE_b.i(DashoA12275)
        at javax.crypto.SunJCE_b.g(DashoA12275)
        at javax.crypto.SunJCE_r.run(DashoA12275)
        at java.security.AccessController.doPrivileged(Native Method)
        ... 15 more
    FATAL: com.sap.engine.offline.OfflineToolStart will abort now with exitcode 2.
    You may now:
    choose Retry to repeat the current step.
    choose View Log to get more information about the error.
    stop the task and continue with it later.
    Log files are written to C:\Program Files/sapinst_instdir/CE71_DEV_ADA/INSTALL.

    I have the same problem.
    Although I tried it using JDK 1.5.0_16, the same error occurred.
    I also used jce_policy-1_5_0.zip.
    I will appreciate your answer.
    I have resolved the problem by copying the
    local_policy.jar
    US_export_policy.jar
    to
    sapinst_instdir\CE71_DEV_ADA\INSTALL\sapjvm\sapjvm_5\jre\lib\security
    Edited by: wang zhen on Aug 28, 2008 11:08 AM
    Edited by: wang zhen on Aug 29, 2008 9:53 AM

  • dNFS with ASM vs. dNFS with a file system - advantages and disadvantages

    Hello Experts,
    We are creating a 2-node RAC. There will be 3-4 DBs whose instances will be across these nodes.
    For storage we have 2 options - dNFS with ASM and dNFS without ASM.
    The advantages of ASM are well known --
    1. Easier administration for the DBA, as through this 'layer' we know the storage very well.
    2. Automatic rebalancing and dynamic reconfiguration.
    3. Striping and mirroring (though we are not using this option in our env; external redundancy is provided at the storage level).
    4. Less (or no) dependency on the storage admin for DB-file-related tasks.
    5. Oracle also recommends using ASM rather than file system storage.
    Advantages of dNFS (Direct NFS) ---
    1. Oracle bypasses the OS NFS client and connects directly to the storage.
    2. Better performance, as I/O does not have to pass through the OS kernel NFS layer and its caching.
    3. It load-balances across multiple network interfaces in a similar fashion to how ASM operates in SAN environments.
    Now if we combine these 2 options, how will that configuration behave in terms of administration/manageability/performance/downtime in case of a future migration?
    I have collected some points.
    In favor of NOT having ASM --
    1. ASM is an extra layer on top of the storage, so if using dNFS this layer should be removed, as there are no performance benefits.
    2. Store the data in a file system rather than ASM.
    3. Striping will be provided at the storage level (not very sure about this).
    4. External redundancy is being used at the storage level, so it's better to remove ASM.
    Points for HAVING ASM with dNFS --
    1. If we remove ASM then the DBA has little or no control over the storage. He can't even see how much free space is left at the physical level.
    2. The striping option is there to gain performance benefits.
    3. Multiplexing has benefits over mirroring when it comes to recovery.
    (e.g., suppose one database is created with only 1 controlfile, since external mirroring is in place at the storage level, and another database is created with 2 copies (multiplexed at the Oracle level); if an rm command removes that file, there will definitely be a difference in the time to restore it.)
    4. We are now familiar and comfortable with ASM.
    I have checked MOS also but could not come to any conclusion; Oracle says --
    "Please also note that ASM is not required for using Direct NFS and NAS. ASM can be used if customers feel that ASM functionality is a value-add in their environment. " ------How to configure ASM on top of dNFS disks in 11gR2 (Doc ID 1570073.1)
    Kindly advise which one I should go with. I would love to go with ASM, but if that turns out to be a wrong design later, I want to make sure it is corrected in the first place.
    Regards,
    Hemant

    I agree; having ASM on NFS is going to give little benefit whilst adding complexity. The NAS will carry out mirroring and striping in hardware, whereas ASM does it in software.
    I would recommend dNFS only if NFS performance isn't acceptable, as dNFS introduces an additional layer with potential bugs! When I first used dNFS in 11gR1, I came across lots of bugs and worked with Oracle Support to have them all resolved. I recommend having a read of this MetaLink note:
    Required Diagnostic for Direct NFS Issues and Recommended Patches for 11.1.0.7 Version (Doc ID 840059.1)
    Most of the fixes have been rolled into 11gR2, and I'm not sure what the state of play is in 12c.
    Hope this helps
    ZedDBA
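    For reference, running dNFS without ASM needs just two pieces: an oranfstab describing the filer, and the dNFS ODM library linked in. A hypothetical 11gR2 sketch (server name, addresses and exports are made up):
    # /etc/oranfstab (or $ORACLE_HOME/dbs/oranfstab)
    server: nas01
    local: 192.168.10.1
    path: 192.168.10.101
    local: 192.168.11.1
    path: 192.168.11.101
    export: /vol/oradata mount: /u02/oradata
    # link in the Direct NFS ODM library (database shut down first)
    cd $ORACLE_HOME/rdbms/lib && make -f ins_rdbms.mk dnfs_on
    -- after restart, verify from SQL*Plus that dNFS is in use:
    -- SELECT svrname, dirname FROM v$dnfs_servers;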

  • Base64 Encoding of PDF file existing in file system

    Hello,
    I need your help in finding the best way to read a PDF file that is in the file system of the Oracle application server, from the Oracle database server.
    From a stored procedure in the database, I need to read the PDF file and convert its data to Base64 text.
    I need to use PL/SQL. What are your recommendations? Any sample?
    Thanks.

    Did you try google?
    http://www.google.com/#sclient=psy&hl=en&source=hp&q=oracle+read+pdf+file&aq=0v&aqi=g-v1&aql=&oq=&pbx=1&bav=on.2,or.r_gc.r_pw.&fp=e5b130cc10bf5fa1
    G.
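    For completeness, a minimal PL/SQL sketch (directory and file names are hypothetical, and it assumes the application server's file system is visible to the database server, e.g. via an NFS mount exposed as a DIRECTORY object). It reads the PDF as a BFILE and Base64-encodes it chunk-wise; the chunk size is a multiple of 3 so no '=' padding appears mid-stream:
    DECLARE
      l_bfile  BFILE := BFILENAME('PDF_DIR', 'report.pdf');
      l_b64    CLOB;
      l_raw    RAW(32767);
      l_chunk  PLS_INTEGER := 23760;   -- divisible by 3
      l_pos    INTEGER := 1;
      l_len    INTEGER;
      l_piece  VARCHAR2(32767);
    BEGIN
      DBMS_LOB.CREATETEMPORARY(l_b64, TRUE);
      DBMS_LOB.FILEOPEN(l_bfile, DBMS_LOB.FILE_READONLY);
      l_len := DBMS_LOB.GETLENGTH(l_bfile);
      WHILE l_pos <= l_len LOOP
        l_raw   := DBMS_LOB.SUBSTR(l_bfile, l_chunk, l_pos);  -- next chunk
        l_piece := UTL_RAW.CAST_TO_VARCHAR2(UTL_ENCODE.BASE64_ENCODE(l_raw));
        DBMS_LOB.WRITEAPPEND(l_b64, LENGTH(l_piece), l_piece);
        l_pos := l_pos + l_chunk;
      END LOOP;
      DBMS_LOB.FILECLOSE(l_bfile);
      -- l_b64 now holds the Base64 text (with RFC line breaks); return or
      -- store it as needed
    END;
    /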
