Configure a DNS server to use SCAN for Solaris VMs

Grid Version: 11.2.0.3
Planning a two-node RAC cluster.
I am creating a RAC setup with Solaris VMs on my desktop. Although it is a VM setup, I don't want to use the /etc/hosts file for name resolution, because I want to fully test SCAN, i.e. all SCAN VIPs rotating in round-robin fashion when nslookup is executed.
Can I make one of my two RAC nodes a DNS server?
If anyone has any document on Linux/Solaris, please let me know.

Hi,
Technically yes, but it would not really help you: if the node hosting the DNS server dies, your cluster will no longer function correctly.
However, for a test cluster you don't need DNS at all. You can simply stay with one address for the SCAN, which is entirely sufficient (for test, not production). There is no need to go through a DNS setup; that would be overkill for this VBox setup (see the sketch below).
Regards
Sebastian
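
For a single-address SCAN on a test cluster, an /etc/hosts entry on each node is all that is needed; a minimal sketch (all names and addresses below are made up for illustration):

    # /etc/hosts on each cluster node (test setup only)
    192.168.56.101  rac1   rac1.example.com
    192.168.56.102  rac2   rac2.example.com
    192.168.56.111  rac1-vip
    192.168.56.112  rac2-vip
    192.168.56.121  rac-scan    # single SCAN address; no DNS required

With a real DNS server you would instead define three A records for the same SCAN name and verify the rotation with repeated nslookup rac-scan calls.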

Similar Messages

  • Problem configuring SOA suite to use OID for authentication

    We are in the process of rebuilding our environment to use the full SOA suite with our OID server for authentication (was previously just BPEL using AD directly), and have encountered several problems (below). We have rebuilt the OID server, and reinstalled the SOA suite into a clean ORACLE_HOME to no avail.
    We first rebuilt the OID server using the following steps (derived from Oracle® Internet Directory Administrator's Guide):
    1)     Create the Import and Export profiles for AD synchronization. We did this using the Directory Integration and Provisioning Server Administration tool under “Active Directory Configuration”
    2)     Modify the map file to specify the correct OU mappings between AD and OID.
    3)     Update the profile with the new map file using “dipassistant.bat mp”
    4)     Bootstrap the import profile using “dipassistant.bat bootstrap”
    5)     Start a new instance of the Integration server (odisrv) running on config set 1 (the config set containing the Active Directory import/export profiles) using “oidctl”
6)     Set the Import profile to Enable. The OID server does not export changes to AD in our current configuration, so the Export profile is left disabled (and not bootstrapped).
    At this point it appears that the AD synchronizes correctly into our new OID server.
    Next we installed the SOA suite:
    1)     We ran “irca.bat” on our database server to create the ORABPEL, ORAESB, and ORAWSM schemas and associated integration repository structure.
    2)     After launching the SOA suite installer, we selected Advanced Install.
    3)     On the next screen, we selected J2EE Server, Web Server, and SOA Suite.
    4)     We then provided the credentials for our Oracle database, and the passwords for ORABPEL, ORAESB, and ORAWSM.
5)     We configured our new AS instance as an administration instance, but did not opt to use a separate HTTP server, and did not make this instance part of an OAS cluster topology.
    And finally, we configured our new SOA suite instance to use OID for authentication (using the instructions in Oracle® BPEL Process Manager Administrator's Guide section 2.1.3):
    1)     Used the configure_oid.bat command to seed OID with required users only.
    2)     Logged into the OracleAS Control Console
    3)     Chose the oc4j_soa instance, then Administration->Security->Identity Management
    4)     Configured the OID server using a non-ssl connection and the cn=orcladmin account.
    5)     When prompted, chose to reconfigure all applications in the oc4j_soa instance to OID, but not to use SSO for any of them.
    6)     Copied the contents of ORACLE_HOME\j2ee\home\config\jazn.xml to ORACLE_HOME\j2ee\oc4j_soa\config\jazn.xml
    7)     Restarted the application server.
    After this procedure, we encountered the following issues:
    1)     The BPEL console appears to authenticate users correctly out of OID, but no users have access to the default domain, including bpeladmin and oc4jadmin. All users receive a similar access denied message when attempting to log into the BPEL Admin Console.
    2)     We cannot upload a BPEL process to our new server via JDeveloper’s standard BPEL deployment mechanisms. The connection appears to be working properly and passes all tests, but on uploading a process we get a Java AccessDeniedException. ESB appears to be functioning properly, and accepts uploaded projects without issue.

    Bassman,
We recently configured our SOA Suite to use OID and SSO. We had the same issues you are having, and we found the resolutions in a blog post by Jaas Poot (http://blog.jpoot.com/category/oracle-appserver/oid-ldap/). For the BPEL domain access, this involved going to the data-sources.xml file and changing the database passwords from the indirect placeholders (->pwForOrabpel for the orabpel schema, ->pwForOraesb for the oraesb schema) to the real passwords; the blog explains more about this (a sketch follows below).
The blog also covers the JDeveloper deployment issue, and another issue we encountered where we couldn't access the BPEL Admin console. All of these were resolved by following the steps in the blog.
    Hope this helps
    Candace
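
For reference, the indirect password entries in data-sources.xml look roughly like this before and after the change (a sketch only; the connection details and the real password are made up, and exact attributes vary by OC4J version):

    <!-- before: indirect password resolved through the jazn security store -->
    <connection-factory factory-class="oracle.jdbc.pool.OracleDataSource"
        user="orabpel" password="->pwForOrabpel"
        url="jdbc:oracle:thin:@//dbhost:1521/orcl"/>

    <!-- after: the real schema password -->
    <connection-factory factory-class="oracle.jdbc.pool.OracleDataSource"
        user="orabpel" password="welcome1"
        url="jdbc:oracle:thin:@//dbhost:1521/orcl"/>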

  • Configuring RMAN in oracle 10g database for solaris OS

Hi,
Could you send me a step-by-step configuration of RMAN in Oracle 10g for the Solaris OS?
    BR
    Durga

    Check the following:
    Backup and Recovery Quick Start Guide
    Backup and Recovery Basics
    Backup and Recovery Advanced Users Guide
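
Since RMAN setup is the same on Solaris as on any other OS once you can connect to the target database, a minimal disk-based configuration might look like this (a sketch; the /backup path is made up):

    $ rman target /
    RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
    RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
    RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/backup/%U';
    RMAN> BACKUP DATABASE PLUS ARCHIVELOG;

The guides listed above explain the reasoning behind each setting.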

  • How could you install 10g R2 Db EE using containers for Solaris?

Do you know if there is a particular procedure for installing it inside a Solaris container?
First of all, what is a Solaris container?
Could you please give me a reference to read about it?
    Thank you.
    Paola
    : (
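
For background: a Solaris container (zone) is an isolated OS environment running inside a single Solaris instance, and Oracle can be installed inside a non-global zone much as on a standalone host. A minimal sketch of creating one on Solaris 10 (zone name and path are made up):

    # zonecfg -z orazone
    zonecfg:orazone> create
    zonecfg:orazone> set zonepath=/zones/orazone
    zonecfg:orazone> exit
    # zoneadm -z orazone install
    # zoneadm -z orazone boot
    # zlogin -C orazone    (attach to the console to complete system identification)

The System Administration Guide: Solaris Containers-Resource Management and Solaris Zones covers this in detail.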

Hello Madrid!
My Oracle hero! You are always a leader for us, the DBAs who have just one passion: talking about Oracle databases.
Well, it is going to be hard, because I am not only installing Oracle inside a Solaris container but also migrating from Sybase to Oracle!
But again I am acting as a leader, and being a woman makes it more difficult. I hope I can see you later in a course again, maybe at the end of this year, because I am not going to be in Mexico City; I am now studying Sybase in order to convert all the DBs to Oracle. I have already installed successfully with your recommendations. You are perfect.
See you later, my favorite OCM. I insist, what a beautiful last name you have.
Paola.
8) (wearing glasses)

  • Error while configuring iscsi client node using udev for RAC on linux

    Hi All,
I am setting up iSCSI on Oracle Enterprise Linux, referring to the document "Build Your Own Oracle RAC Cluster on Oracle Enterprise Linux and iSCSI" posted on OTN by Jeffrey Hunter.
After configuring the iscsi service on the iscsi client node, the symbolic link iscsi under the /dev folder is not created when the iscsi and udev services are started.
I created the following UNIX shell script, /etc/udev/scripts/iscsidev.sh, on the iscsi client node:
#!/bin/sh
# FILE: /etc/udev/scripts/iscsidev.sh
BUS=${1}
HOST=${BUS%%:*}

[ -e /sys/class/iscsi_host ] || exit 1

file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"
target_name=$(cat ${file})

# This is not an open-iscsi drive
if [ -z "${target_name}" ]; then
    exit 1
fi

# Check if QNAP drive
check_qnap_target_name=${target_name%%:*}
if [ $check_qnap_target_name = "iqn.2004-04.com.qnap" ]; then
    target_name=`echo "${target_name%.*}"`
fi

echo "${target_name##*.}"
The following message is seen in /var/log/messages:
May 2 17:10:44 rashida1 udevd-event[15897]: run_program: exec of program '/etc/udev/scripts/iscsidev.sh' failed
Please find below the output of the iscsidev.sh script when executed manually:
    =========================================
    [root@rashida1 scripts]# sh iscsidev.sh
    cat: /sys/class/iscsi_host/host/device/session*/iscsi_session*/targetname: No such file or directory
    =========================================
    Contents of directory : /sys/class/iscsi_host/host*
    ================================================================
    [root@rashida1 scripts]# ls /sys/class/iscsi_host/host*
    /sys/class/iscsi_host/host1:
    device hwaddress initiatorname ipaddress netdev subsystem uevent
    /sys/class/iscsi_host/host2:
    device hwaddress initiatorname ipaddress netdev subsystem uevent
    =================================================================
    If I supply an argument to the script, it returns proper result as shown below:
    =============================
    [root@rashida1 scripts]# sh iscsidev.sh 1
    crs1
    [root@rashida1 scripts]# sh iscsidev.sh 2
    data1
    =============================
But this input/argument value is supposed to be passed internally by the udev daemon, and its value will keep varying.
Please advise how I should proceed.
    Regards.
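
For reference, in Jeffrey Hunter's guide the argument is supplied by udev itself through the %b (bus id) substitution in a rules file; roughly like this (a sketch - double-check the match keys against the article):

    # FILE: /etc/udev/rules.d/55-openiscsi.rules
    KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b", SYMLINK+="iscsi/%c/part%n"

If the rules file is missing or its PROGRAM path doesn't match the script location, the script is never called with an argument, which matches the exec failure logged in /var/log/messages.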

I modified the iscsidev.sh on OEL6 as below and it worked for me (the braced block logs each invocation to /tmp/udev_getlun.log for debugging, and the script's only stdout is the final symlink name):
#!/bin/bash
BUS=${1}
HOST=${BUS%%:*}
LID=`echo ${BUS} | awk -F":" '{print $NF}'`

[ -e /sys/class/iscsi_host ] || exit 1

if [ -f /sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname ]
then
    file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"
else
    file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session/session*/targetname"
fi
target_name=$(cat ${file})

# This is not an open-iscsi drive
if [ -z "${target_name}" ]; then
    exit 1
fi

# Check if QNAP drive
check_qnap_target_name=${target_name%%:*}
if [ "$check_qnap_target_name" = "iqn.2004-04.com.qnap" ]; then
    target_name=`echo "${target_name%.*}"`
fi

LUN=`echo $target_name | awk -F":" '{print $NF}'`

# Log this invocation for debugging
{
echo `date` $0 $* ${LUN}_${LID}
} >> /tmp/udev_getlun.log

echo ${LUN}_${LID}
Now restart the iscsi service:
service iscsi stop
service iscsi start
Wait for 2-3 minutes, check the /var/log/messages file, and do a listing using ls -ltr /dev/iscsi/*
Reboot the server and do the checks again.
At this stage, you will see the list of persistent devices.
Please confirm whether this resolves your issue.
    Thanks
    Mohan

  • How to use VNC for Solaris 10

Hello, I am completely green when it comes to Unix/Solaris. I am trying to use VNC from the Solaris 10 Companion DVD. I have it installed. I followed the easy-to-use steps from this site: http://www.salixtraining.co.uk/index_files/vncsol10.htm
But the problem is I don't know how to use it now that I have it installed.
What I am trying to do is install an e-ticketing software program on my test Solaris server. I am stuck on getting PHP to work correctly, and I am trying to get the company whose product I am using to connect to my server to investigate. I want them to be able to connect remotely and to see and access the server, to find out if they can help.
Can anyone help? I really need step-by-step instructions. Assume that I don't know anything!
    Thanks in advance,
    Aaron

Hello, here are the instructions I follow to install VNC on my systems running Solaris Express; you can adapt this to your own install. I don't use the one included on the Companion CD; I download the one from RealVNC instead. I hope this helps:
    VNC
    INSTALL VNC (RealVNC) 4.1.2 - Solaris Express 11 b71 Sparc
    a)
File to use: vnc-4_1_2-sparc_solaris.tar.gz
# gunzip vnc-4_1_2-sparc_solaris.tar.gz
# tar -xvf vnc-4_1_2-sparc_solaris.tar
# cd vnc-4_1_2-sparc_solaris
    b)
    Install it by running:
    # ./vncinstall /usr/bin /usr/share/man
Note: this will also install man pages into /usr/share/man/man1
If you want to use the Java VNC viewer (open a browser and type pc_name:display#):
Copy the files from the java directory to a suitable installation directory
such as /usr/local/vnc/classes (vncserver will read this path! - don't change it):
    # mkdir -p /usr/local/vnc/classes
    # cp java/* /usr/local/vnc/classes
    c)
Add these two lines to .profile (under the user) - this will make it always connect with screen 1:
vncserver -kill :1
vncserver -depth 24 -geometry 1024x768 (or any other combination you want, like 1280x1024, etc.)
Note: don't use this if you want to connect to a different screen each time.
    d)
Create a /.vnc directory first; we will create the xstartup file inside it.
File /.vnc/xstartup:
#!/bin/sh
# xrdb $HOME/.Xresources      (this line is not really needed)
xsetroot -solid grey
gnome-session          (add a selection for KDE or CDE if needed; see the notes below)
Here is how to create the xstartup file:
echo "#!/bin/sh
xrdb \$HOME/.Xresources
xsetroot -solid grey
#/usr/dt/bin/dtsession     (CDE, GNOME, or other sessions:
gnome-session               comment out with '#' the one you don't want to use)
" > xstartup
# chmod 744 xstartup     (to make it executable)
Sample used for the user (use the same file for the other users) - it will be created the first time you use it; just change/add to the following sample:
The first login will ask you to create a password, or run /export/home/<name>/.vnc/passwd to add a login password.
You may need to change ownership on the following three if you create them under a different user:
/.vnc folder
/.vnc/xstartup
/.vnc/passwd
using # chown <name> .vnc (and xstartup and passwd)
To check whether the VNC process is running:
# ps -ef | grep Xvnc
To stop the process (the hard way):
# kill <pid>
To kill a display:
# vncserver -kill :<display#>
To connect from another system, just type hostIP:screen#, e.g. 192.168.1.20:1
    murilloa

Can a non-RAC database use SCAN?

I have a 4-node 11gR2 cluster which hosts all different kinds of versions (10g, 11gR1 & 11gR2), both RAC and non-RAC.
I know for sure that I can use SCAN with pre-11gR2 RAC databases.
For standalone DBs it seems to work intermittently, but is this supported? The reason I ask is that there is no documentation on how to configure or troubleshoot SCAN for non-RAC databases.
    Appreciate your help
    Ravi

I was trying to find some information about the issue, but I think SCAN is really designed for RAC databases.
You don't really need to use the SCAN for what you just described:
you can update the client tnsnames with the VIP addresses of all nodes with FAILOVER=ON.
Example:
(DESCRIPTION=
  (ADDRESS_LIST=
    (ADDRESS=(PROTOCOL=TCP)(HOST=node1-vip)(PORT=1521))
    (ADDRESS=(PROTOCOL=TCP)(HOST=node2-vip)(PORT=1521))
    (ADDRESS=(PROTOCOL=TCP)(HOST=node3-vip)(PORT=1521))
    (ADDRESS=(PROTOCOL=TCP)(HOST=node4-vip)(PORT=1521))
    (FAILOVER = ON)
  )
  (CONNECT_DATA=(SERVER = DEDICATED)(SERVICE_NAME=DB.WORLD))
)
However, you can try this (I never tried it, but I think it should work):
- See the document http://download.oracle.com/docs/cd/E11882_01/install.112/e10813/undrstnd.htm#BEICFAIC
"... If you do not set LOCAL_LISTENER, then the Database Agent automatically keeps the database associated with the Grid home's node listener updated..."
So don't set LOCAL_LISTENER in your DB.
- Update the client tnsnames with the SCAN if the version of the database is 11.2.0.1 (see the sketch below)
- Try to see if it works fine
- Move your DB to another node and try again
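
For comparison, a SCAN-based client entry needs only the single SCAN name, since the SCAN listeners redirect the connection to whichever node currently runs the service (a sketch; the SCAN hostname is made up):

(DESCRIPTION=
  (ADDRESS=(PROTOCOL=TCP)(HOST=rac-scan.example.com)(PORT=1521))
  (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=DB.WORLD))
)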

  • SCTP API for Solaris 10

    I've been trying to use SCTP for Solaris 10, and it seems that the API is very far behind the current SCTP API. Does anyone know if there's an update to SCTP for Solaris, or if one is planned at some point? Thanks.
    - Jon

    Don't go back any farther though.
    I've heard reports that prior to Solaris 9, the included in.tftpd lacks certain extensions that are required for PXEGRUB in Solaris 10 to work properly.
    Darren

  • How to recognize patch code I in scan for ISIS macro of ODC

    Hi all,
I am using the SCAN for ISIS macro in my ODC for autocommit, in which only 3 patch codes are available: patch II, patch III and patch T.
But I want to detect patch I with ODC.
Is it possible? If yes, what changes should I make in the macro?
Any help is highly appreciated.
    Thanks in advance.


  • Issue with Authentication using JAAS for coherence

    Hi,
I have configured the security framework using JAAS for a storage-enabled node. I am using a keystore for authenticating the users. Below is the code used for authentication:
    Subject subject;
    try {
        subject = Security.login(sUsername, sPassword.toCharArray());
    } catch (Throwable t) {
        subject = null;
        log("Authentication error:");
        log(t);
    }
    if (subject != null) {
        for (Iterator iter = subject.getPrincipals().iterator(); iter.hasNext(); ) {
            Principal principal = (Principal) iter.next();
            log("Principal: " + principal.getName());
        }
    }
    Security.runAs(subject, new PrivilegedAction() {
        public Object run() {
            NamedCache cache = CacheFactory.getCache(CACHE_NAME);
            boolean flag = true;
            while (flag) { }
            return null;
        }
    });
I am calling the above class in the callback handler which is defined in the Coherence operational descriptor:
<security-config>
    <enabled system-property="tangosol.coherence.security">true</enabled>
    <login-module-name>TestCoherence</login-module-name>
    <access-controller>
        <class-name>com.tangosol.net.security.DefaultController</class-name>
        <init-params>
            <init-param id="1">
                <param-type>java.io.File</param-type>
                <param-value>config/keystore.jks</param-value>
            </init-param>
            <init-param id="2">
                <param-type>java.io.File</param-type>
                <param-value>config/permissions.xml</param-value>
            </init-param>
        </init-params>
    </access-controller>
    <callback-handler>
        <class-name>Test</class-name>
    </callback-handler>
</security-config>
I am using the following command-line parameters to bring up the storage-enabled node:
-Dtangosol.coherence.security.permissions="$CONFIG_PATH/permissions.xml"
-Dtangosol.coherence.security.keystore="$CONFIG_PATH/keystore.jks"
-Djava.security.auth.login.config="$CONFIG_PATH/login.config"
-Dtangosol.coherence.security=true
Now, as long as the callback handler thread is alive, the storage-enabled node stays up. As soon as the callback handler thread dies, the storage-enabled node stops with the following error:
    Exception in thread "main" java.lang.SecurityException: Authentication failed: Error initializing keystore
    at com.tangosol.coherence.component.net.security.Standard.loginSecure(Standard.CDB:36)
    at com.tangosol.coherence.component.net.security.Standard.getTempSubject(Standard.CDB:11)
    at com.tangosol.coherence.component.net.security.Standard.checkPermission(Standard.CDB:18)
    at com.tangosol.coherence.component.net.Security.checkPermission(Security.CDB:11)
    at com.tangosol.coherence.component.util.SafeCluster.ensureService(SafeCluster.CDB:6)
    at com.tangosol.coherence.component.net.management.Connector.startService(Connector.CDB:25)
    at com.tangosol.coherence.component.net.management.gateway.Remote.registerLocalModel(Remote.CDB:8)
    at com.tangosol.coherence.component.net.management.gateway.Local.registerLocalModel(Local.CDB:8)
    at com.tangosol.coherence.component.net.management.Gateway.register(Gateway.CDB:1)
    at com.tangosol.coherence.component.util.SafeCluster.ensureRunningCluster(SafeCluster.CDB:50)
    at com.tangosol.coherence.component.util.SafeCluster.start(SafeCluster.CDB:2)
    at com.tangosol.net.CacheFactory.ensureCluster(CacheFactory.java:948)
    at com.tangosol.net.DefaultConfigurableCacheFactory.ensureService(DefaultConfigurableCacheFactory.java:748)
    at com.tangosol.net.DefaultCacheServer.start(DefaultCacheServer.java:140)
    at com.tangosol.net.DefaultCacheServer.main(DefaultCacheServer.java:61)
Please let me know where I should pass the credentials to the default cache server for authentication, or whether I should change the authentication implementation here.
    Thanks in advance,
    Bhargav

    Bhargav,
Rather than trying to loop forever in a callback handler, try this:
import com.tangosol.net.DefaultCacheServer;
import com.tangosol.net.security.Security;
import javax.security.auth.Subject;
import javax.security.auth.login.LoginContext;
import java.security.PrivilegedExceptionAction;

public class SecureCacheServer {
    public static void main(final String[] args) throws Exception {
        // Authenticate against the "Coherence" entry in the JAAS login configuration
        LoginContext lc = new LoginContext("Coherence");
        lc.login();
        Subject subject = lc.getSubject();
        // Run the cache server as the authenticated subject
        Security.runAs(subject, new PrivilegedExceptionAction() {
            public Object run() throws Exception {
                DefaultCacheServer.main(args);
                return null;
            }
        });
    }
}
Then when you start your cache server, just use the SecureCacheServer class above rather than DefaultCacheServer.
As the main method of DefaultCacheServer is running in a PrivilegedExceptionAction, Coherence will use this identity anywhere it needs to do anything secured.
I hope the code above compiles OK, as it is a modified version of the code I really use.
    Hope this helps
    JK
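
For completeness, the LoginContext("Coherence") call above expects a matching entry in the login.config file passed via -Djava.security.auth.login.config. A minimal sketch (the keystore login module name follows the Coherence documentation, but verify it against your release):

Coherence {
    com.tangosol.security.KeystoreLogin required
        keyStorePath="${user.dir}${/}config${/}keystore.jks";
};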

Does the JIT for Solaris work for Linux?

Can I use the JIT for Solaris under the Linux platform? What are the implications and what problems might I encounter? I need to know what kind of problems could come up, since the project will be used as a prototype for the main project.
OR is there a JIT for Linux supported by Sun? I think I saw a link somewhere on java.sun.com that gave me the option of downloading a JIT for Linux, but it said "Sun doesn't support it" beside it, so I didn't download it.
    thanks

"we are using Linux 7.2 and Java 1.4"
OK, then you're using the HotSpot virtual machine.
How do I explain this as simply as possible? HotSpot has a "built-in JIT", in that it does not interpret the byte codes directly, but runs them after translating snippets of byte code into native code. I.e. the VM IS the JIT.
There's no separate JIT you can use to "speed up HotSpot". And anyway, a JIT doesn't "compile" the code in any permanent way - it has to redo the translation each time the program is loaded.
However, you do have a point - there have been several reports of a slowdown between JDK 1.3.1 and 1.4.0, and 1.4.1 hasn't improved much. I'm waiting for 1.4.2.
    Things you can try in the meantime:
* Download the full JDK, and use the "java -server" option to use the server VM (see the sketch below). If you're running a server-type application (i.e. not a GUI), this might improve things.
    * Profile the application using any of the available Java profiling tools (do a Google search - there are free and commercial profilers).
    There are a thousand possible reasons why your application is slow - anything from excessive memory allocation to excessive locking to poorly chosen algorithms. A profiler may help you narrow things down a little.
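
Trying the server VM is a one-line change to how the application is launched; a hedged example (the class name and heap sizes are made up for illustration):

    java -server -Xms256m -Xmx256m com.example.MyApp

The server compiler optimizes more aggressively at the cost of slower startup, so it tends to pay off for long-running, non-GUI workloads.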

  • Proper Configuration of DNS server for our new branch office

    Hi All,
Our company will set up a new branch office with a routed network link to our HO. In the HO, we have two domain controllers configured as AD and DNS, just for failover scenarios.
How should we configure the DNS server of the third domain controller, which we will place in the new branch office? What would be the proper settings for an AD-integrated DNS server to achieve successful replication and communication with the two DCs located in the HO?

    Hi,
If you have multiple DCs in that site, I would recommend using one of the partner DCs' IP addresses as the preferred DNS server, with the secondary DNS IP pointing to itself. Don't use loopback addresses; configure it with actual IP addresses.
If you have only one server in the branch office, point it to itself as primary DNS and to the HO DCs as secondary and tertiary.
Make sure that all clients in your branch site point to the branch DC as their primary DNS server.
    Regards,
    Rafic

  • OK to use fdisk/100% "SOLARIS System" partition for RAID6 Virtual Drive?

Solaris newb here - I am configuring an x4270 with sixteen 135 GB drives. The basic approach is:
    D0, D1: RAID 1 (Boot volume, Solaris, Oracle Software)
    D2-D13: RAID 6 (Oracle dB files)
    D14, D15: global spares
After configuring the RAIDs with the WebBIOS Utility, I am now trying to format/partition the RAID 6 virtual drive, which shows up as 1.327 TB 'Optimal' in the MegaRAID Storage Manager. After hunting around the ether for advice on how to do this, I came across http://docs.oracle.com/cd/E23824_01/html/821-1459/disksxadd-50.html#disksxadd-54639
"Creating a Solaris fdisk Partition That Spans the Entire Drive"
which is painfully simple: after 'format', just do an 'fdisk' and accept the default 100% "SOLARIS System" partition. After doing this, partition> print and prtvtoc show the following:
    partition> print
    Current partition table (original):
    Total disk cylinders available: 59125 + 2 (reserved cylinders)
Part  Tag         Flag  Cylinders  Size     Blocks
  0   unassigned  wm    0          0        (0/0/0)      0
  1   unassigned  wm    0          0        (0/0/0)      0
  2   backup      wu    0 - 59124  1.33TB   (59125/0/0)  2849529375
  3   unassigned  wm    0          0        (0/0/0)      0
  4   unassigned  wm    0          0        (0/0/0)      0
  5   unassigned  wm    0          0        (0/0/0)      0
  6   unassigned  wm    0          0        (0/0/0)      0
  7   unassigned  wm    0          0        (0/0/0)      0
  8   boot        wu    0 - 0      23.53MB  (1/0/0)      48195
  9   unassigned  wm    0          0        (0/0/0)      0
    # prtvtoc /dev/dsk/c0t1d0s2
    * /dev/dsk/c0t1d0s2 partition map
    * Dimensions:
    * 512 bytes/sector
    * 189 sectors/track
    * 255 tracks/cylinder
    * 48195 sectors/cylinder
    * 59127 cylinders
    * 59125 accessible cylinders
    * Flags:
    * 1: unmountable
    * 10: read-only
* Unallocated space:
*   First Sector   Sector Count   Last Sector
*   48195          2849481180     2849529374
*
* Partition  Tag  Flags   First Sector   Sector Count   Last Sector   Mount Directory
      2       5    01         0           2849529375     2849529374
      8       1    01         0                48195          48194
    My question: is there anything inherently wrong with this default partitioning? Database is for OLTP & fairly small (<200 GB), with about 140 GB being LOB images.
    Thanks,
    Barry

    First off, RAID-5 or RAID-6 is fine for database performance unless you have some REALLY strict and REALLY astronomical performance requirements. Requirements that someone with lots of money is willing to pay to meet.
    You're running a single small x86 box with only onboard storage.
    So no, you're not operating in that type of environment.
    Here's what I'd do, based upon a whole lot of experience with Solaris 10 and not so much with Solaris 11, and also assuming this box is going to be around for a good long time as an Oracle DB server:
1. Don't use SVM for your boot drives. Use the onboard RAID controller to make TWO 2-disk RAID-1 mirrors. Use these for TWO ZFS root pools.
Why two? Because if you use live upgrade to patch the OS, you want to create a new boot environment in a separate ZFS pool. If you use live upgrade to create new boot environments in the same ZFS pool, you wind up with a ZFS clone/snapshot hell. If you use two separate root pools, each new boot environment is a pool-to-pool actual copy that gets patched, so there are no ZFS snapshot/clone dependencies between the boot environments. Those snapshot/clone dependencies can cause a lot of problems with full disk drives if you wind up with a string of boot environments, and at best they can be a complete pain in the buttocks to clean up - assuming live upgrade doesn't mess up the clones/snapshots so badly you CAN'T clean them up (yeah, it has been known to do just that...).
You do your first install with a ZFS rpool, then create rpool2 on the other mirror. Each time you do an lucreate to create a new boot environment from the current boot environment, create the new boot environment in the rpool that ISN'T the one the current boot environment is located in. That makes for ZERO ZFS dependencies between boot environments (at least in Solaris 10, although with separate rpools I don't see how that could change...), and there's no software written that can screw up a dependency that doesn't exist. (A sketch of the command cycle follows at the end of this reply.)
    2. Create a third RAID-1 mirror either with the onboard RAID controller or ZFS, Use those two drives for home directories. You do NOT want home directories located on an rpool within a live upgrade boot environment. If you put home directories inside a live upgrade boot environment, 1) that can be a LOT of data that gets copied, 2) if you have to revert back to an old boot environment because the latest OS patches broke something, you'll also revert every user's home directory back.
    3. That leaves you 10 drives for a RAID-6 array for DB data. 8 data and two parity. Perfect. I'd use the onboard RAID controller if it supports RAID-6, otherwise I'd use ZFS and not bother with SVM.
    This also assumes you'd be pretty prompt in replacing any failed disks as there are no global spares. If there would be significant time before you'd even know you had a failed disk (days or weeks), let alone getting them replaced, I'd rethink that. In that case, if there were space I'd probably put home directories in the 10-disk RAID-6 drive, using ZFS to limit how big that ZFS file system could get. Then use the two drives freed up for spares.
    But if you're prompt in recognizing failed drives and getting them replaced, you probably don't need to do that. Although you might want to just for peace of mind if you do have the space in the RAID-6 pool.
    And yes, using four total disks for two OS root ZFS pools seems like overkill. But you'll be happy when four years from now you've had no problems doing OS upgrades when necessary, with minimal downtime needed for patching, and with the ability to revert to a previous OS patch level with a simple "luactivate BENAME; init 6" command.
    If you have two or more of these machines set up like that in a cluster with Oracle data on shared storage you could then do OS patching and upgrades with zero database downtime. Use lucreate to make new boot envs on each cluster member, update each new boot env, then do rolling "luactivate BENAME; init 6" reboots on each server, moving on to the next server after the previous one is back and fully operational after its reboot to a new boot environment.
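
The two-pool live upgrade cycle described in point 1 looks roughly like this on Solaris 10 (a sketch; boot environment names, pool names, and the patch path are made up):

    # lucreate -n be2 -p rpool2                 (copy the current BE into the other root pool)
    # luupgrade -t -n be2 -s /var/tmp/patches   (apply patches to the inactive BE)
    # luactivate be2
    # init 6                                    (reboot into the patched BE)

The next cycle targets rpool again, so the two pools alternate and no boot environment is ever a ZFS clone of another.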

  • How to configure DHCP on linux jumpstart for solaris installation

I have configured jumpstart on Linux and am able to install Solaris on Sun SPARCs using the rarp and bootparams files. Now I am trying to use Linux DHCP for Solaris clients. I have done the DHCP setup on Linux using this doc: http://www.sun.com/bigadmin/content/submitted/setup_dhcp.jsp.
But when I try to boot the Sun SPARC client with the 'boot net:dhcp - install' command, it fails with the error "panic - boot: Could not mount filesystem.
Program terminated". The exports file is OK and the NFS service is also running.
Please help me with this issue.
Thanks in advance.
Shashi

    Darren,
    Thanks for the response.
I tried to install client60001dev (SPARC client) from server60060pxe (Linux jumpstart) as follows.
client60001dev is able to get the IP address from the server60060pxe DHCP, and then the boot file as well, but after that the client does not issue any NFS queries.
    {0} ok boot net:dhcp - install
    Boot device: /pci@1f,4000/network@1,1:dhcp File and args: - install
    Using Onboard Transceiver - Link Up.
    Timeout waiting for BOOTP/DHCP reply. Retrying ...
    Timeout waiting for BOOTP/DHCP reply. Retrying ...
    2aa00
    Server IP address: xx.xx.xx.119
    Client IP address: xx.xx.xx.111
    Subnet Mask : 255.255.255.0
    Using Onboard Transceiver - Link Up.
    panic - boot: Could not mount filesystem.
    Program terminated
tcpdump on server60060pxe:
03:16:12.292836 IP server60060pxe.42445 > client60001dev.20759: UDP, length 516
03:16:12.303646 IP client60001dev.20759 > server60060pxe.42445: UDP, length 4
03:16:12.303669 IP server60060pxe.42445 > client60001dev.20759: UDP, length 516
03:16:12.314479 IP client60001dev.20759 > server60060pxe.42445: UDP, length 4
03:16:12.314501 IP server60060pxe.42445 > client60001dev.20759: UDP, length 516
03:16:12.325313 IP client60001dev.20759 > server60060pxe.42445: UDP, length 4
03:16:12.325347 IP server60060pxe.42445 > client60001dev.20759: UDP, length 516
03:16:12.336158 IP client60001dev.20759 > server60060pxe.42445: UDP, len
/var/log/messages on server60060pxe:
Feb 26 03:15:35 server60060pxe dhcpd: DHCPDISCOVER from 08:00:20:fe:4a:23 via eth0.369
Feb 26 03:15:35 server60060pxe dhcpd: DHCPOFFER on xx.xx.xx.111 to 08:00:20:fe:4a:23 via eth0.369
Feb 26 03:16:08 server60060pxe dhcpd: Dynamic and static leases present for 139.185.168.111.
Feb 26 03:16:08 server60060pxe dhcpd: Remove host declaration client60001dev or remove 139.185.168.111
Feb 26 03:16:08 server60060pxe dhcpd: from the dynamic address pool for xx.xx.xx/24
Feb 26 03:16:08 server60060pxe dhcpd: DHCPREQUEST for xx.xx.xx.111 (xx.xx.xx.119) from 08:00:20:fe:4a:23 via eth0.369
Feb 26 03:16:08 server60060pxe dhcpd: DHCPACK on xx.xx.xx.111 to 08:00:20:fe:4a:23 via eth0.369
Feb 26 11:16:09 server60060pxe in.tftpd[10266]: RRQ from xx.xx.xx.111 filename 8BB9A86F
Feb 26 03:22:00 server60060pxe kernel: eth0.369: dev_set_promiscuity(master, -1)
Feb 26 03:22:00 server60060pxe kernel: device eth0 left promiscuous mode
Feb 26 03:22:00 server60060pxe kernel: device eth0.369 left promiscuous mode
    Shashi
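
The "Could not mount filesystem" panic right after the boot file loads usually means the client never received the Sun vendor options that tell it where the NFS root is. For reference, the ISC dhcpd side looks roughly like this (a sketch; the option codes follow the Sun vendor-option convention from the bigadmin doc, and the boot path here is made up, so verify both against that doc):

    option space SUNW;
    option SUNW.root-mount-options     code 1 = text;
    option SUNW.root-server-ip-address code 2 = ip-address;
    option SUNW.root-server-hostname   code 3 = text;
    option SUNW.root-path-name         code 4 = text;

    host client60001dev {
        hardware ethernet 08:00:20:fe:4a:23;
        fixed-address xx.xx.xx.111;
        option SUNW.root-server-ip-address xx.xx.xx.119;
        option SUNW.root-server-hostname "server60060pxe";
        option SUNW.root-path-name "/export/install/Solaris_10/Tools/Boot";
    }

Also note the dhcpd warning in the log: the same address is offered both statically and from the dynamic pool; removing it from the dynamic range avoids the "Dynamic and static leases present" complaint.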

  • I would appreciate it if someone could advise me as to the optimum resolution, dimensions and dpi for actual photographic slides that I am scanning for use in a Keynote Presentation, that will be projected in a large auditorium.  I realize that most proje

    I would appreciate it if someone could advise me as to the optimum resolution, dimensions and dpi for actual photographic slides that I am scanning for use in a Keynote Presentation, that will be projected in a large auditorium. I realize that most projectors in auditoriums that I will be using have 1024 x 1200 pixels, and possibly 1600 x 1200. There is no reference to this issue in the Keynote Tutorial supplied by Apple, and I have never found a definitive answer to this issue online (although there may be one).
Here’s my question: when scanning my photographic slides, what setting, from 72 dpi to 300 dpi, would give the best image quality while using space most efficiently?
                Here’s what two different photo slide scanning service suppliers have told me: 
    Supplier No. 1 tells me that they can scan slides to a size of 1544 x 1024 pixels, at 72 dpi, which will be 763 KB, and they refer to this as low resolution (a JPEG). However, I noticed when I looked at these scanned slides, the size of the slides varied, with a maximum of 1.8 MB. This supplier says that the dpi doesn’t matter when it comes to the quality of the final digital image, that it is the dimensions that matter.  They say that if they scanned a slide to a higher resolution (2048 x 3072), they would still scan it at 72 dpi.
Supplier No. 2: They tell me that in order to have a high quality image made from a photographic slide (starting with a 35 mm slide, in all cases), I need to have a “1280 pixel dimension slide, a JPEG, at 300 dpi, that is 8 MB per image.” However, this supplier also offers, on its list of services, a “Standard Resolution JPEG” (4 MB file/image – 3088 x 2048), as well as a “High Resolution JPEG” (8 MB file/image – 3088 x 2048).
    I will be presenting my Keynotes with my MacBook Pro, and will not have a chance to try out the presentations in advance, since the lecture location is far from my home, so that is not an option. 
    I do not want to use up more memory than necessary on my laptop.  I also want to have the best quality image. 
    One more question: When scanning images myself, on my own scanner, for my Keynote presentations, would I be better off scanning them as JPEGs or TIFFs? I have been told that a TIFF is better because it is less compressed. 
    Any enlightenment on this subject would be appreciated.
    Thank you.

    When it comes to Keynote, I try and start with a presentation that's 1680 x 1050 preset or something in that range.  Most projectors that you'll get at a conference won't project much higher than that and if they run at a lower resolution, it's better to have the device downsize your Keynote.  Anything is better than having the projector try and upsize your presentation... you work hard to make it look good, and it's mangled by some tired Epson projector.
    As far as slides go, scan them in at 150 dpi or better, and make them at least the dimensions of your presentation.  Keynote is really only wanting 72dpi, but I do them at 150, just in case I need to print out the presentation as a handout later, and having the pix at 150 dpi gives me a little help with their quality on a printer.
    You'd probably have to drop in the 150 versions again if you output the Keynote to .pdf or Word or something, but at least you have the option.
    And Gary's right (above) go ahead and scan them as TIFFs.  Sooner or later you'll want to do something else with these slides (like make something for an iPad or the like) and having them as TIFFs keeps your presentation looking good.
    Finally, and this is a big one, get to the location for your presentation ahead of time if you can, and plug the laptop in and see what you get.  There's always connection problems. Don't let the AV bonehead tell you everything will work just fine ('... I don't have any adapters for a Mac...') .  See it for yourself... you're the one that's standing up there.  Unless it's your boss, then you better be really sure it works.
