Adding Samba to an existing NFS cluster

Hello!
Currently we are running a cluster consisting of two nodes which provide two NFS servers (one per node). Both nodes are connected to a 6140 storage array. If one of the nodes fails, the zpool is mounted on the other node, and everything works fine. The NFS servers are started in the global zone. Here is the config of the current cluster:
# clrs status
=== Cluster Resources ===
Resource Name        Node Name            State     Status Message
fileserv01            clnode01            Online    Online - LogicalHostname online.
                      clnode02            Offline   Offline
fileserv01-hastp-rs   clnode01            Online    Online
                      clnode02            Offline   Offline
fileserv01-nfs-rs     clnode01            Online    Online - Service is online.
                      clnode02            Offline   Offline
fileserv02            clnode02            Online    Online - LogicalHostname online.
                      clnode01            Offline   Offline - LogicalHostname offline.
fileserv02-hastp-rs   clnode02            Online    Online
                      clnode01            Offline   Offline
fileserv02-nfs-rs     clnode02            Online    Online - Service is online.
                      clnode01            Offline   Offline - Completed successfully.

Now we want to add two Samba servers which additionally share the same zpools. From what I read in the "Sun Cluster Data Service for Samba Guide for Solaris OS" this can only be done using zones, because I cannot run two Samba services on the same host.
At this point I am stuck. We have to fail over both NFS and Samba services together if a failure occurs (because they share the same data), so they have to be in the same resource group. But this will not work, because the list of nodes for each service is different ("clnode01, clnode02" vs. "clnode01:zone1, clnode02:zone2").
I can mount the zpool from the global zone using "mount -F lofs", but I think I need a real dependency for the storage.
My question is: How can we accomplish this setup?

Hi,
You can run multiple Samba services on the same host, or in your case the global zone. For example, suppose you had smb1 and smb2: you could run smb1 within your fileserv01 RG and smb2 within your fileserv02 RG.
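As a minimal sketch of how two instances can coexist on one node (all paths, names and the IP below are illustrative, not from the guide): give each instance its own configuration tree on its zpool and bind each smbd to its own logical hostname address, so each RG starts its own daemon with -s:
# cat /fileserv01pool/samba/lib/smb.conf
[global]
   netbios name = FILESERV01
   interfaces = 192.168.10.101
   bind interfaces only = yes
   pid directory = /fileserv01pool/samba/var
   lock directory = /fileserv01pool/samba/var
   log file = /fileserv01pool/samba/logs/log.%m
# /usr/sfw/sbin/smbd -s /fileserv01pool/samba/lib/smb.conf -D
# /usr/sfw/sbin/smbd -s /fileserv02pool/samba/lib/smb.conf -D
Because each smbd reads a different smb.conf and binds a different address, both instances can run side by side on whichever node currently masters the RGs.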
The only restriction is if you also require winbind, as only one instance of winbind is possible per global or non-global zone. In that case, if deploying smb1 and smb2 as above and winbind is also required, you would also need a scalable RG to ensure that winbindd gets started on each node, as sketched below.
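A rough sketch of that extra scalable RG (the Maximum_primaries/Desired_primaries properties are the standard way to make an RG scalable; treat the exact registration step as an assumption and use the agent's winbind register utility):
# clresourcegroup create -p Maximum_primaries=2 -p Desired_primaries=2 winbind-rg
# ... register the winbind resource into winbind-rg with the SUNWscsmb register script ...
# clresourcegroup online -M winbind-rg
With both properties set to 2, the RG is mastered on both nodes at once, so winbindd runs on each node regardless of where the failover RGs currently live.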
Please see http://docs.sun.com/app/docs/doc/819-3063/gdewn?a=view, in particular "Restrictions for multiple Samba instances that require Winbind"; also please note the "note" within that section, which indicates that "... you may also use global as the zone name ...".
Alternatively, please post the restriction you found in the docs.
Regards
Neil

Similar Messages

  • Set samba on NFS cluster

    Hi,
I have a two-node cluster running HA NFS on top of several UFS and ZFS file systems. Now I'd like to have Samba share those file systems using the same logical hostname. In this case, do I need to create a separate Samba RG as the user guide suggests, or can I just use the same RG as the NFS and only create a Samba resource through samba_register?
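For reference, judging from the SC[SUNWscsmb.samba.start] tags in the agent's logs it looks GDS-based, so I assume the registration boils down to something like this (resource and dependency names here are only placeholders; samba_register generates the real command):
# clresourcetype register SUNW.gds
# clresource create -g nfs-rg -t SUNW.gds \
-p Start_command="/opt/SUNWscsmb/samba/bin/start_samba ..." \
-p Resource_dependencies=nfs-hastp-rs \
samba-rs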
    Thanks,

Thanks Neil for spending time on this issue on the weekend. I should have thought about these two things when I did the testing Friday.
I just corrected the config file and re-ran the test. It still failed. I noticed it complained about the faultmonitor user as well as the RUN_NMBD variable.
For the first complaint, I did verify the fmuser as the manual suggested, through the smbclient command.
    # hostname
    test2
# /usr/sfw/sbin/smbd -s /global/ufs/fs1/samba/test10/lib/smb.conf -D
# smbclient -s /global/ufs/fs1/samba/test10/lib/smb.conf -N -L test10
    Anonymous login successful
    Domain=[WINTEST] OS=[Unix] Server=[Samba 3.0.28]
    Sharename Type Comment
    testshare Disk
    IPC$ IPC IPC Service (Samba 3.0.28)
    Anonymous login successful
    Domain=[WINTEST] OS=[Unix] Server=[Samba 3.0.28]
    Server Comment
    Workgroup Master
    # smbclient -s /global/ufs/fs1/samba/test10/lib/smb.conf '//test10/scmondir' -U test10/fmuser%samba -c 'pwd;exit'
    Domain=[ANSYS] OS=[Unix] Server=[Samba 3.0.28]
    Current directory is \\test10\scmondir\
    # pkill -TERM smbd
For the RUN_NMBD complaint, I recall somebody posted about it on this forum, but his solution was not recognized.
    Start_command output,
# ksh -x /opt/SUNWscsmb/samba/bin/start_samba -R 'samba-rs' -G 'nfs-rg' -X 'smbd nmbd' -B '/usr/sfw/bin' -S '/usr/sfw/sbin' -C '/global/ufs/fs1/samba/test10' \
-L '/global/ufs/fs1/samba/test10/logs' -U test10/fmuser%samba -M 'scmondir' -P '/usr/sfw/lib' -H test10 \
2> /dev/null
+ + /bin/pwd
PWD=/var/adm
    + + basename /opt/SUNWscsmb/samba/bin/start_samba
    MYNAME=start_samba
    + + /usr/bin/awk -F_ {print $1}
    + /usr/bin/echo start_samba
    parm1=start
    + + /usr/bin/awk -F_ {print $2}
    + /usr/bin/echo start_samba
    parm2=samba
    + /opt/SUNWscsmb/bin/control_samba -R samba-rs -G nfs-rg -X smbd nmbd -B /usr/sfw/bin -S /usr/sfw/sbin -C /global/ufs/fs1/samba/test10 -L /global/ufs/fs1/samba/test10/logs -U test10/fmuser%samba -M scmondir -P /usr/sfw/lib -H test10 start samba
    + print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
    + validate_common
    + print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
    + rc=0
    + [ ! -d /usr/sfw/bin ]
    + debug_message Validate - samba bin directory /usr/sfw/bin exists
    + print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
    + [ ! -d /usr/sfw/sbin ]
    + debug_message Validate - samba sbin directory /usr/sfw/sbin exists
    + print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
    + [ ! -d /global/ufs/fs1/samba/test10 ]
    + debug_message Validate - samba configuration directory /global/ufs/fs1/samba/test10 exists
    + print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
    + [ ! -f /global/ufs/fs1/samba/test10/lib/smb.conf ]
    + debug_message Validate - smbconf /global/ufs/fs1/samba/test10/lib/smb.conf exists
    + print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
    + [ ! -x /usr/sfw/bin/nmblookup ]
    + debug_message Validate - nmblookup /usr/sfw/bin/nmblookup exists and is executable
    + print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
    + /usr/sfw/bin/nmblookup -h
    + 1> /dev/null 2>& 1
    + [ 1 -eq 0 ]
    + + /usr/bin/awk {print $2}
    + /usr/sfw/bin/nmblookup -V
    VERSION=3.0.28
    + + /usr/bin/cut -d. -f1
    + /usr/bin/echo 3.0.28
    SAMBA_VERSION=3
    + + /usr/bin/cut -d. -f2
    + /usr/bin/echo 3.0.28
    SAMBA_RELEASE=0
    + + /usr/bin/cut -d. -f3
    + /usr/bin/echo 3.0.28
    SAMBA_UPDATE=28
    + debug_message Validate - Samba version <3.0.28> is being used
    + print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
    + + stripfunc 3
    SAMBA_VERSION=3
    + + stripfunc 0
    SAMBA_RELEASE=0
    + + stripfunc 28
    SAMBA_UPDATE=28
    + rc_validate_version=0
    + [ -z 3.0.28 ]
    + [ 3 -lt 2 ]
    + [ 3 -eq 2 -a 0 -le 2 -a 28 -lt 2 ]
    + [ 0 -gt 0 ]
    + debug_message Function: validate_common - End
    + print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
    + return 0
    + rc1=0
    + validate_samba
    + print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
    + rc=0
    + [ ! -d /global/ufs/fs1/samba/test10/logs ]
    + debug_message Validate - Samba log directory /global/ufs/fs1/samba/test10/logs exists
    + print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
    + [ ! -x /usr/sfw/sbin/smbd ]
    + debug_message Validate - smbd /usr/sfw/sbin/smbd exists and is executable
    + print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
    + [ ! -x /usr/sfw/sbin/nmbd ]
    + debug_message Validate - nmbd /usr/sfw/sbin/nmbd exists and is executable
    + print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
    + /usr/bin/grep \[ /global/ufs/fs1/samba/test10/lib/smb.conf
    + /usr/bin/cut -d[ -f2
    + /usr/bin/grep scmondir
    + /usr/bin/cut -d] -f1
    + [ -z scmondir ]
    + debug_message Validate - Faultmonitor resource scmondir exists
    + print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
    + [ ! -x /usr/sfw/bin/smbclient ]
    + debug_message Validate - smbclient /usr/sfw/bin/smbclient exists and is executable
    + print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
    + + /usr/bin/echo test10/fmuser%samba
    + /usr/bin/cut -d% -f1
    + /usr/bin/awk BEGIN { FS="\\" } {print $NF}
    USER=test10/fmuser
    + /usr/bin/getent passwd test10/fmuser
    + [ -z  ]
    + syslog_tag
    + print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
    + scds_syslog -p daemon.error -t SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs -m Validate - Couldn't retrieve faultmonitor-user <%s> from the nameservice test10/fmuser
    + rc=1
    + + /usr/bin/tr -s [:lower:] [:upper:]
    + /usr/bin/echo YES
    Bad string
    RUN_NMBD=
    + [  = YES -o  = NO ]
    + syslog_tag
    + print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
    + scds_syslog -p daemon.error -t SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs -m Validate - RUN_NMBD=%s is invalid - specify YES or NO
    + rc=1
    + debug_message Function: validate_samba - End
    + print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
    + return 1
    + rc2=1
    + rc=1
    + [ 0 -eq 0 -a 1 -eq 0 ]
    + [ 1 -eq 0 ]
    + rc=1
    + exit 1
    + rc=1
    + exit 1
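The empty RUN_NMBD above comes from the tr call that printed "Bad string" in the trace. It can be reproduced outside the agent; the second command is an assumption that the XPG4 tr handles the POSIX character classes where /usr/bin/tr does not:
# /usr/bin/echo YES | /usr/bin/tr -s '[:lower:]' '[:upper:]'
Bad string
# /usr/bin/echo YES | /usr/xpg4/bin/tr -s '[:lower:]' '[:upper:]'
YES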
    Messages file output,
    # tail -f /var/adm/messages
    Nov 21 11:14:28 test2 Cluster.RGM.rgmd: [ID 529407 daemon.notice] resource group nfs-rg state on node test3 change to RG_OFFLINE
    Nov 21 15:52:57 test2 syslogd: going down on signal 15
    Nov 21 15:53:32 test2 SC[SUNW.nfs:3.2,nfs-rg,nfs-ufs-fs2-rs,nfs_probe]: [ID 903370 daemon.debug] Command share -F nfs -o sec=sys,rw /localfs/ufs/fs2/data > /var/run/.hanfs/.run.out.2024 2>&1 failed to run: share -F nfs -o sec=sys,rw /localfs/ufs/fs2/data > /var/run/.hanfs/.run.out.2024 2>&1 exited with status 0.
    Nov 21 15:53:32 test2 SC[SUNW.nfs:3.2,nfs-rg,nfs-zfs-rs,nfs_probe]: [ID 903370 daemon.debug] Command share -F nfs -o sec=sys,rw /CLSzpool/export > /var/run/.hanfs/.run.out.2026 2>&1 failed to run: share -F nfs -o sec=sys,rw /CLSzpool/export > /var/run/.hanfs/.run.out.2026 2>&1 exited with status 0.
    Nov 21 15:56:18 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 327132 daemon.error] Validate - Couldn't retrieve faultmonitor-user <test10/fmuser> from the nameservice
    Nov 21 15:56:18 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 287111 daemon.error] Validate - RUN_NMBD= is invalid - specify YES or NO
    Nov 23 04:20:40 test2 cl_eventlogd[1156]: [ID 848580 daemon.info] Restarting on signal 1.
    Nov 23 04:20:40 test2 last message repeated 2 times
    Nov 23 04:21:43 test2 SC[SUNW.nfs:3.2,nfs-rg,nfs-ufs-fs2-rs,nfs_probe]: [ID 903370 daemon.debug] Command share -F nfs -o sec=sys,rw /localfs/ufs/fs2/data > /var/run/.hanfs/.run.out.2024 2>&1 failed to run: share -F nfs -o sec=sys,rw /localfs/ufs/fs2/data > /var/run/.hanfs/.run.out.2024 2>&1 exited with status 0.
    Nov 23 04:21:43 test2 SC[SUNW.nfs:3.2,nfs-rg,nfs-zfs-rs,nfs_probe]: [ID 903370 daemon.debug] Command share -F nfs -o sec=sys,rw /CLSzpool/export > /var/run/.hanfs/.run.out.2026 2>&1 failed to run: share -F nfs -o sec=sys,rw /CLSzpool/export > /var/run/.hanfs/.run.out.2026 2>&1 exited with status 0.
    Nov 24 09:11:50 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Function: validate_common - Begin
    Nov 24 09:11:50 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Method: control_samba - Begin
    Nov 24 09:11:50 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Validate - samba bin directory /usr/sfw/bin exists
    Nov 24 09:11:50 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Validate - samba sbin directory /usr/sfw/sbin exists
    Nov 24 09:11:50 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Validate - samba configuration directory /global/ufs/fs1/samba/test10 exists
    Nov 24 09:11:50 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Validate - smbconf /global/ufs/fs1/samba/test10/lib/smb.conf exists
    Nov 24 09:11:50 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Validate - nmblookup /usr/sfw/bin/nmblookup exists and is executable
    Nov 24 09:11:51 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Validate - Samba version <3.0.28> is being used
    Nov 24 09:11:51 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Function: validate_common - End
    Nov 24 09:11:51 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Function: validate_samba - Begin
    Nov 24 09:11:51 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Validate - Samba log directory /global/ufs/fs1/samba/test10/logs exists
    Nov 24 09:11:51 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Validate - smbd /usr/sfw/sbin/smbd exists and is executable
    Nov 24 09:11:51 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Validate - nmbd /usr/sfw/sbin/nmbd exists and is executable
    Nov 24 09:11:51 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Validate - Faultmonitor resource scmondir exists
    Nov 24 09:11:51 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Validate - smbclient /usr/sfw/bin/smbclient exists and is executable
    Nov 24 09:11:51 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 327132 daemon.error] Validate - Couldn't retrieve faultmonitor-user <test10/fmuser> from the nameservice
    Nov 24 09:11:51 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 287111 daemon.error] Validate - RUN_NMBD= is invalid - specify YES or NO
    Nov 24 09:11:51 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Function: validate_samba - End
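The fmuser failure is also visible in the trace: the user string is split on backslash (FS="\\"), not on "/", so the lookup is for the literal name test10/fmuser. A quick check along those lines (the second lookup is an assumption about how the account might actually be resolvable):
# /usr/bin/getent passwd test10/fmuser
# /usr/bin/getent passwd fmuser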
    Thanks again,
    Jon

  • Issue in adding Space to the existing Virtual Machine from added repository

    Hi,
I'm facing an issue adding space to an existing Virtual Machine (Guest OS) from an added repository.
    Environment details :
    VM Server : /OVS => 130GB
    /OVS/935970F2CC2D4B4391701397517F1001/ => 512 GB
    Things I have done :
•     I created a VM (Guest OS) in the VM Server with 120 GB.
•     After creating the VM (Guest OS), when I tried adding a VIRTUAL DISK of size 150 GB, I got the error “Maximum available disk space is only 10GB”.
    My query :
•     Will I be able to add space to the existing VM from the added repository (/OVS/935970F2CC2D4B4391701397517F1001/), whose system.img is stored in path /OVS/running_pool/34_rhel/?
    Kindly help me out in this.
    Thanks in advance.
    -- Sri

    Hi all,
I checked with Oracle on the above and got the info that currently we can only utilise the space available in the existing repo and cannot extend to an additional repo.
The workaround is: clone it to the other repo, or use a symbolic link.
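A sketch of the symbolic-link route (file names are illustrative, and it assumes the added repo has the usual running_pool layout): create the new virtual disk image in the added repository, then link it into the VM's directory so the guest config can reference it by its usual path:
# dd if=/dev/zero of=/OVS/935970F2CC2D4B4391701397517F1001/running_pool/34_rhel_data.img bs=1M count=0 seek=153600
# ln -s /OVS/935970F2CC2D4B4391701397517F1001/running_pool/34_rhel_data.img /OVS/running_pool/34_rhel/data.img
seek=153600 with bs=1M creates a sparse 150 GB file, which can then be attached to the guest like any other .img.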
    Thanks,
    Sri.

  • Adding Index server to existing farm, "Server Farm Product and Patch Status" returned 34 Missing Locally required products and patches

Once I connected my new server to my farm's config db, it returned all of the following as missing locally. I stripped out any redundancies and headings, and I'm left with 43. I'm looking for an efficient strategy. Should I start with the lowest version number and work my way up? The current DB version is 14.0.7015.1000. IIRC, SP2 is cumulative, so can I ignore the first two (SP1 and the hotfix), install SP2, and then the language packs etc. on top?
    Sorted by version:
    Microsoft SharePoint 2010 Service Pack 1 (SP1) (14.0.6029.1000)
    Hotfix for Microsoft SharePoint Server 2010 (KB2775353) 64-Bit Edition (14.0.6105.5000)
    Service Pack 2 for Microsoft SharePoint 2010 (KB2687453) 64-Bit Edition (14.0.7015.1000)
    Service Pack 2 for Microsoft 2010 Server Language Pack (KB2687462) 64-Bit Edition (14.0.7015.1000)
    Microsoft Office Server Proof (English) 2010 (14.0.7015.1000)
    Microsoft Office Server Proof (French) 2010 (14.0.7015.1000)
    Microsoft Office Server Proof (Russian) 2010 (14.0.7015.1000)
    Microsoft Office Server Proof (Spanish) 2010 (14.0.7015.1000)
    Microsoft SharePoint Portal (14.0.7015.1000)
    Microsoft User Profiles (14.0.7015.1000)
    Microsoft SharePoint Portal English Language Pack (14.0.7015.1000)
    Microsoft Shared Components (14.0.7015.1000)
    Microsoft Shared Coms English Language Pack (14.0.7015.1000)
    Microsoft Slide Library (14.0.7015.1000)
    Microsoft InfoPath Forms Services (14.0.7015.1000)
    Microsoft InfoPath Form Services English Language Pack (14.0.7015.1000)
    Microsoft Word Server (14.0.7015.1000)
    Microsoft Word Server English Language Pack (14.0.7015.1000)
    PerformancePoint Services for SharePoint (14.0.7015.1000)
    PerformancePoint Services in SharePoint 1033 Language Pack (14.0.7015.1000)
    Microsoft Visio Services English Language Pack (14.0.7015.1000)
    Microsoft Visio Services Web Front End Components (14.0.7015.1000)
    Microsoft Excel Services Components (14.0.7015.1000)
    Microsoft Document Lifecycle Components (14.0.7015.1000)
    Microsoft Excel Services English Language Pack (14.0.7015.1000)
    Microsoft Search Server 2010 Core (14.0.7015.1000)
    Microsoft Search Server 2010 English Language Pack (14.0.7015.1000)
    Microsoft Document Lifecycle Components English Language Pack (14.0.7015.1000)
    Microsoft Slide Library English Language Pack (14.0.7015.1000)
    Microsoft SharePoint Server 2010 (14.0.7015.1000)
    Microsoft Access Services Server (14.0.7015.1000)
    Microsoft Access Services English Language Pack (14.0.7015.1000)
    Microsoft Web Analytics Web Front End Components (14.0.7015.1000)
    Microsoft Web Analytics English Language Pack (14.0.7015.1000)
    Microsoft Excel Mobile Viewer Components (14.0.7015.1000)
    Recommendations?
    Thanks,
    Scott

    Thanks guys. I was able to get through all of the patches except for
    Language Pack for SharePoint, Project Server and Office Web Apps 2010 - English   missing locally
Language Pack for SharePoint, Project Server and Office Web Apps 2010 - Spanish/Español   missing locally
    This was my process:
    Config Wizard:
    Adding Index server to existing farm, "Server Farm Product and Patch Status" returned 34 Missing Locally required products and patches.
    SKIP installing the following two, as SP2 is cumulative
    Microsoft SharePoint 2010 Service Pack 1 (SP1) (14.0.6029.1000) (officeserver2010sp1-kb2460045-x64-fullfile-en-us.exe)
    Hotfix for Microsoft SharePoint Server 2010 (KB2775353) 64-Bit Edition (14.0.6105.5000)
    install SP2 oserversp2010-kb2687453-fullfile-x64-en-us.exe
    install oslpksp2010-kb2687462-fullfile-x64-en-us.exe
    Got "There are no products affected by this package installed on this system."
    SO!
Uninstalled SharePoint 2010 Server.
Will try to install again, without skipping #2, and reorder installation of oslpksp2010-kb2687462-fullfile-x64-en-us.exe
    Retry (the long way):
    Run SharePointServer.exe, get Missing Locally...
    Install oslpksp2010-kb2687462-fullfile-x64-en-us.exe SUCCESSFUL
    Install officeserver2010sp1-kb2460045-x64-fullfile-en-us.exe SUCCESSFUL
    Reboot
    Install oserversp2010-kb2687453-fullfile-x64-en-us.exe SUCCESSFUL
    Rerun Config
    STILL MISSING
    Download oslpksp2010-kb2687462-fullfile-x64-es-es.exe, run, "there are no products affected..."
    Uninstall.
    Re-retried, only got through the first couple:
    Run SharePointServer.exe, get Missing Locally...
    Install SP1 officeserver2010sp1-kb2460045-x64-fullfile-en-us.exe, SUCCESSFUL
    Install English Lang Pack oslpksp2010-kb2687462-fullfile-x64-en-us.exe, "There are no products affected by this package installed on this system."
    Install Spanish Lang Pack oslpksp2010-kb2687462-fullfile-x64-es-es.exe,
    Install SP2 oserversp2010-kb2687453-fullfile-x64-en-us.exe, run Config Wizard
    I'm now downloading 462150_intl_x64_zip.exe, going to try and install it as step three.
    Any suggestions greatly appreciated.
    Thanks,
    Scott

  • InDesign CC 2014 version adding pages to an existing document keeps crashing, why?

    InDesign CC 2014 version adding pages to an existing document keeps crashing, why?

Because there's a problem in the file. See "Remove minor corruption by exporting".

  • Adding portal to an existing domain

Hi, I have a JSP/Servlet web application and I want to make it a portal application. I was following the WebLogic Portal Development Guide 7.0. I found the section "Adding portal to an existing domain" very incomplete or confusing, and I cannot do it. Is there any better document for this?

Judging from the dates between these two entries and no answers, I'd
    guess that there isn't anyone from BEA monitoring this group? I am in a
    similar situation. What I want to understand is how one can "portalize" an
    existing web app. I understand there are implications to some of the
    presentation, but specifically, I want to deploy a Servlet based app that
    uses EJBs and deploy this as an EAR in the same domain as the portal
    (perhaps under the /applications directory). I can make everything work
    except the webflow where I reference the Servlet.
    Specifically, I have a presentation page that is defined to be my servlet.
    When I try to build a webflow URL referencing this event, I get errors from
    the portal because it presumes it may be located somewhere else or in a
    different namespace. It leads me to believe that I can't reference a
    servlet unless it is under the /framework or /portal directories under the
    portal web-app directory. I'm convinced there must be a way to reference
    other web components that aren't specifically part of the portal web app,
    but can't figure out how to propagate the webflow element to the actual
    servlet I want.
    Any ideas from the community at large? This must be something other
    people have wrestled with since I can't imagine that only new apps will be
    put into portals.
    Thanks,
Mark
"Sara" wrote:
I'm facing similar problems too. I have an ear file in an existing domain that I want to use to create a portal. The documentation on this is quite confusing, and the steps require so many manual steps that it is bound to be error-prone.
What's BEA's recommended way to expose an existing WebLogic application as a portal?
    Thanks in advance
-Sara
"David Zhang" wrote:
Hi, I have a JSP/Servlet web application and I want to make it a portal application. I was following the WebLogic Portal Development Guide 7.0. I found the section "Adding portal to an existing domain" very incomplete or confusing, and I cannot do it. Is there any better document for this?

  • Adding replication to an existing database.

    I am looking at adding replication to an existing database. This database has lots of existing containers many of them are 1G in size. How will this be handled when the master and client initially start? Will the master just start sending to the client? Is there another option?

    Yes. This, and the alternatives, are described in the Ref Guide, on the page entitled "Initializing a new site".
    Alan Bram
    Oracle

I was adding songs to an existing playlist and now I can't save the changes to the playlist. What do I need to do to save the changes? All my music now has the "+" sign in front of it

I was adding songs to an existing playlist and now I can't save the changes to the playlist. What do I need to do to save the changes? All my music now has the "+" sign in front of it.

  • Adding phone layout to existing desktop site - No scratch page created

Trying to make a phone version of a site created in Muse, but no scratch page is created in the desktop plan after adding the phone layout. Ideas?

I'm following the Adobe help tutorial http://helpx.adobe.com/muse/tutorials/creating-mobile-layout-designs-muse.html#id_25691
Under the heading "Adding Phone Layout to Existing Muse Site", #3 starts referring to the "scratch page".

  • Query on adding new node to existing 2 node cluster

    Hi experts
    current environment
    Oracle VM 2 Node RAC cluster
    Oracle 11G 11.2.0.3.0
    Oracle 11g rel 2 GI
    on current 2 node cluster we have GI and RAC db configured
    Nodes: vmorarac1,vmorarac2
we shut down vmorarac1 to clone it to vmorarac5
    on new node I have changed the hostname to vmorarac5
In /etc/sysconfig/network-scripts/ifcfg-eth0 I changed the IP to the new IP, and the same for /etc/sysconfig/network-scripts/ifcfg-eth1:
    152.144.199.210,152.144.199.211
made changes to /etc/hosts on vmorarac1/2 for the 2 new IP addresses assigned to vmorarac5:
    152.144.199.171 vmorarac1.pbi.global.pvt vmorarac1
    192.168.2.30 vmorarac1-priv.pbi.global.pvt vmorarac1-priv
    152.144.199.184 vmorarac1-vip.pbi.global.pvt vmorarac1-vip
    152.144.199.172 vmorarac2.pbi.global.pvt vmorarac2
    192.168.2.31 vmorarac2-priv.pbi.global.pvt vmorarac2-priv
    152.144.199.185 vmorarac2-vip.pbi.global.pvt vmorarac2-vip
    152.144.199.210 vmorarac2.pbi.global.pvt vmorarac2
    192.168.2.32 vmorarac2-priv.pbi.global.pvt vmorarac2-priv
    152.144.199.211 vmorarac2-vip.pbi.global.pvt vmorarac2-vip
    152.144.199.201 vmorarac-scan.pbi.global.pvt
My query is: is it all OK to reboot the new node to reflect the changes as below and change the hostname to vmorarac2?
    152.144.199.210 vmorarac2.pbi.global.pvt vmorarac2
    192.168.2.32 vmorarac2-priv.pbi.global.pvt vmorarac2-priv
    152.144.199.211 vmorarac2-vip.pbi.global.pvt vmorarac2-vip
    hostname change in /etc/sysconfig/network
    please let me know
On the new node vmorarac5, there is already software for the RAC DB and GI cloned from vmorarac1.
addnode.sh is used to add a node to an existing cluster, but as part of the prerequisite configuration, is there any config step missing?
    thanks

    The only thing you want "cloned" is the base OS install... The GI home directory, the oraInventory and a whole lot of other stuff is configured at addNode time or cluster install time. I would say that proceeding with your current plans will corrupt your clusterware. You really need to STOP LISTENING to the SA's and START reading the documentation on how to add a node to a cluster. Simply changing the hostname IS NOT a supported "feature".
    Start with a CLEAN OS. Configure shared devices, networks, /etc/hosts, memory and packages.
    Follow the instructions for addNode.
    Doing this any other way has the potential to corrupt your entire cluster.
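For reference, once vmorarac5 has a clean OS with matching devices, networks and packages, the documented 11.2 flow is roughly this (a sketch, run as the GI owner from an existing node; follow the docs for the details):
$ cluvfy stage -pre nodeadd -n vmorarac5
$ $GRID_HOME/oui/bin/addNode.sh "CLUSTER_NEW_NODES={vmorarac5}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={vmorarac5-vip}"
Then run the root scripts that addNode.sh prints on the new node, and extend the database home and instance afterwards (addNode.sh from the DB home, then dbca).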

  • Question on adding new server to existing pool in OVM 2.2 environment

    I have an existing iscsi based server pool with one server attached to the pool. I am adding a second ovm server. I have configured the iscsi and multipath modules and can see the repository lun.
I then added the server to the pool via ovm manager, but the server is not fully participating, as the agent did not create a mount point for /OVS, so I can't see the repos directories from the new server. A "df" does not show the device as it does on the first server.
    The manager gui does show the server as part of the pool even though we received an error.
    In the ovm manager log, we see the following messages.
    Check prerequisites to add server (ovm2.highlinehosting.com) to server pool (san_pool1) succeed
    During adding servers ([ovm2.domain.com]) to server pool (san_pool1), Cluster setup failed: (OVM-1011 OVM Manager communication with ovm1.domain.com for operation HA Setup for Oracle VM Agent 2.2.0 failed: errcode=00000, errmsg=Unexpected error: <Exception: ha_check_cpu_compatibility failed:<Exception: CPU not compatible! {'ovm1.domain.com': 'vendor_id=GenuineIntel;cpu_family=6;model=44', 'ovm2.highlinehosting.com': 'vendor_id=GenuineIntel;cpu_family=6;model=46'}> )
    I'm not sure why it's complaining about cpu compatibility as we are not using any HA features at this time, and both servers are identical in make and model. They are both equipped with the same cpu type and speed.
    I checked ocfs2 network connectivity by executing the command "nc -zv nodename 7777", which ran successfully from both nodes. The manager was able to propagate a valid cluster.conf file to both servers. I restarted the ovs-agent service on the new server.
    Are there any other configuration steps that I need to take on the new server before trying to add it to the pool? I do need to tread carefully as I have live guests running in the pool.
    Thx

    Bob Weinmann wrote:
If I have to go the route of using a separate pool for each server, do you have any suggestions on how I would be able to access the repos of server1 from server2 if server1 were to go down hard? I don't mean to access it as a repos for running the vms, but just to be able to copy the vm files over to the running server.

This would be possible if you were using NFS for your storage, but not if you're using FC or iSCSI, as the LUN is formatted with the OCFS2 cluster ID of each pool, so it probably wouldn't be able to be mounted by another pool. Your best bet is to upgrade to Oracle VM 3.0 so that you can create a pool that contains both servers: you won't have live migration, but HA will still work just fine.

  • Adding 2 disks to existing production ASM disk group - (urgent)

    Environment: AIX
    ORACLE: 10.2.0.4 RAC ( 2 node)
We want to add 2 disks (/dev/asm_disk65 & /dev/asm_disk66) to the existing production ASM disk group (DATA_GRP01).
    SQL> /
    DISK_GROUP_NAME DISK_FILE_PATH DISK_FILE_NAME DISK_FILE_FAIL_GROUP
    DATA_GRP01 /dev/asm_disk14 DATA_GRP01_0006 DATA_GRP01_0006
    DATA_GRP01 /dev/asm_disk15 DATA_GRP01_0007 DATA_GRP01_0007
    DATA_GRP01 /dev/asm_disk13 DATA_GRP01_0005 DATA_GRP01_0005
    DATA_GRP01 /dev/asm_disk3 DATA_GRP01_0010 DATA_GRP01_0010
    DATA_GRP01 /dev/asm_disk12 DATA_GRP01_0004 DATA_GRP01_0004
    DATA_GRP01 /dev/asm_disk11 DATA_GRP01_0003 DATA_GRP01_0003
    DATA_GRP01 /dev/asm_disk10 DATA_GRP01_0002 DATA_GRP01_0002
    DATA_GRP01 /dev/asm_disk4 DATA_GRP01_0011 DATA_GRP01_0011
    DATA_GRP01 /dev/asm_disk1 DATA_GRP01_0001 DATA_GRP01_0001
    DATA_GRP01 /dev/asm_disk9 DATA_GRP01_0016 DATA_GRP01_0016
    DATA_GRP01 /dev/asm_disk0 DATA_GRP01_0000 DATA_GRP01_0000
    DATA_GRP01 /dev/asm_disk7 DATA_GRP01_0014 DATA_GRP01_0014
    DATA_GRP01 /dev/asm_disk5 DATA_GRP01_0012 DATA_GRP01_0012
    DATA_GRP01 /dev/asm_disk8 DATA_GRP01_0015 DATA_GRP01_0015
    DATA_GRP01 /dev/asm_disk2 DATA_GRP01_0009 DATA_GRP01_0009
    DATA_GRP01 /dev/asm_disk16 DATA_GRP01_0008 DATA_GRP01_0008
    DATA_GRP01 /dev/asm_disk6 DATA_GRP01_0013 DATA_GRP01_0013
    [CANDIDATE] /dev/asm_disk65
    [CANDIDATE] /dev/asm_disk66
We issued:
ALTER DISKGROUP DATA_GRP01 ADD DISK '/dev/asm_disk65' name DATA_GRP01_0017 REBALANCE POWER 5;
ALTER DISKGROUP DATA_GRP01 ADD DISK '/dev/asm_disk66' name DATA_GRP01_0018 REBALANCE POWER 5;
    SQL> ALTER DISKGROUP DATA_GRP01 ADD DISK '/dev/asm_disk65' name DATA_GRP01_0017 REBALANCE POWER 5;
    ALTER DISKGROUP DATA_GRP01 ADD DISK '/dev/asm_disk65' name DATA_GRP01_0017 REBALANCE POWER 5
    ERROR at line 1:
    ORA-15032: not all alterations performed
    ORA-15075: disk(s) are not visible cluster-wide
    Node1:
    crw-rw---- 1 oracle dba 38, 68 Nov 18 02:33 /dev/asm_disk65
    crw-rw---- 1 oracle dba 38, 13 Nov 18 02:53 /dev/asm_disk13
    crw-rw---- 1 oracle dba 38, 10 Nov 18 03:00 /dev/asm_disk10
    crw-rw---- 1 oracle dba 38, 3 Nov 18 03:00 /dev/asm_disk3
    crw-rw---- 1 oracle dba 38, 7 Nov 18 03:00 /dev/asm_disk7
    crw-rw---- 1 oracle dba 38, 2 Nov 18 03:00 /dev/asm_disk2
    crw-rw---- 1 oracle dba 38, 8 Nov 18 03:00 /dev/asm_disk8
    crw-rw---- 1 oracle dba 38, 15 Nov 18 03:00 /dev/asm_disk15
    crw-rw---- 1 oracle dba 38, 14 Nov 18 03:00 /dev/asm_disk14
    crw-rw---- 1 oracle dba 38, 12 Nov 18 03:00 /dev/asm_disk12
    crw-rw---- 1 oracle dba 38, 11 Nov 18 03:00 /dev/asm_disk11
    crw-rw---- 1 oracle dba 38, 5 Nov 18 03:00 /dev/asm_disk5
    crw-rw---- 1 oracle dba 38, 4 Nov 18 03:00 /dev/asm_disk4
    crw-rw---- 1 oracle dba 38, 69 Nov 18 03:39 /dev/asm_disk66
    crw-rw---- 1 oracle dba 38, 9 Nov 18 04:39 /dev/asm_disk9
    crw-rw---- 1 oracle dba 38, 6 Nov 18 06:36 /dev/asm_disk6
    crw-rw---- 1 oracle dba 38, 16 Nov 18 06:36 /dev/asm_disk16
    Node 2 :
    crw-rw---- 1 oracle dba 38, 68 Nov 15 17:59 /dev/asm_disk65
    crw-rw---- 1 oracle dba 38, 69 Nov 15 17:59 /dev/asm_disk66
    crw-rw---- 1 oracle dba 38, 8 Nov 17 19:55 /dev/asm_disk8
    crw-rw---- 1 oracle dba 38, 7 Nov 17 19:55 /dev/asm_disk7
    crw-rw---- 1 oracle dba 38, 5 Nov 17 19:55 /dev/asm_disk5
    crw-rw---- 1 oracle dba 38, 3 Nov 17 19:55 /dev/asm_disk3
    crw-rw---- 1 oracle dba 38, 2 Nov 17 19:55 /dev/asm_disk2
    crw-rw---- 1 oracle dba 38, 10 Nov 17 19:55 /dev/asm_disk10
    crw-rw---- 1 oracle dba 38, 4 Nov 17 21:04 /dev/asm_disk4
    crw-rw---- 1 oracle dba 38, 12 Nov 17 21:45 /dev/asm_disk12
    crw-rw---- 1 oracle dba 38, 14 Nov 17 22:21 /dev/asm_disk14
    crw-rw---- 1 oracle dba 38, 15 Nov 17 23:06 /dev/asm_disk15
    crw-rw---- 1 oracle dba 38, 11 Nov 17 23:18 /dev/asm_disk11
    crw-rw---- 1 oracle dba 38, 9 Nov 18 04:39 /dev/asm_disk9
    crw-rw---- 1 oracle dba 38, 6 Nov 18 04:41 /dev/asm_disk6
    crw-rw---- 1 oracle dba 38, 16 Nov 18 06:20 /dev/asm_disk16
    crw-rw---- 1 oracle dba 38, 13 Nov 18 06:20 /dev/asm_disk13
    node1 SQL> select GROUP_NUMBER,NAME,STATE from v$asm_diskgroup;
    GROUP_NUMBER NAME STATE
    1 DATA_GRP01 MOUNTED
    2 FB_GRP01 MOUNTED
    SQL> select GROUP_NUMBER,NAME,PATH,HEADER_STATUS,MOUNT_STATUS from v$asm_disk order by name;
    GROUP_NUMBER NAME PATH HEADER_STATUS MOUNT_STATUS
    1 DATA_GRP01_0000 /dev/asm_disk0 MEMBER CACHED
    1 DATA_GRP01_0001 /dev/asm_disk1 MEMBER CACHED
    1 DATA_GRP01_0002 /dev/asm_disk10 MEMBER CACHED
    1 DATA_GRP01_0003 /dev/asm_disk11 MEMBER CACHED
    1 DATA_GRP01_0004 /dev/asm_disk12 MEMBER CACHED
    1 DATA_GRP01_0005 /dev/asm_disk13 MEMBER CACHED
    1 DATA_GRP01_0006 /dev/asm_disk14 MEMBER CACHED
    1 DATA_GRP01_0007 /dev/asm_disk15 MEMBER CACHED
    1 DATA_GRP01_0008 /dev/asm_disk16 MEMBER CACHED
    1 DATA_GRP01_0009 /dev/asm_disk2 MEMBER CACHED
    1 DATA_GRP01_0010 /dev/asm_disk3 MEMBER CACHED
    1 DATA_GRP01_0011 /dev/asm_disk4 MEMBER CACHED
    1 DATA_GRP01_0012 /dev/asm_disk5 MEMBER CACHED
    1 DATA_GRP01_0013 /dev/asm_disk6 MEMBER CACHED
    1 DATA_GRP01_0014 /dev/asm_disk7 MEMBER CACHED
    1 DATA_GRP01_0015 /dev/asm_disk8 MEMBER CACHED
    1 DATA_GRP01_0016 /dev/asm_disk9 MEMBER CACHED
    0 /dev/asm_disk65 MEMBER IGNORED
    0 /dev/asm_disk66 MEMBER CLOSED
Please guide us on the next step; we are running out of space. The disks were added under a non-existing group name (group number zero in the query above). How do we drop the disks and add them again?
Is there a query to drop a disk by path name, like: alter diskgroup <disk group name> drop disk '<path of the disk>'; -> is that right?
    Thanks in advance.
    Sakthi
    Edited by: SAKTHIVEL on Nov 18, 2010 10:08 PM

1. just use one alter diskgroup command, rebalance would be executed just one time
2. on node 2, do you see the disks as on node 1?
select GROUP_NUMBER,NAME,PATH,HEADER_STATUS,MOUNT_STATUS from v$asm_disk order by name;
3. try this:
ALTER DISKGROUP DATA_GRP01
  ADD DISK '/dev/asm_disk65' name DATA_GRP01_0017,
           '/dev/asm_disk66' name DATA_GRP01_0018
  REBALANCE POWER 5;
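On the drop question from the original post: DROP DISK takes the ASM disk name (NAME in v$asm_disk), not the device path. A sketch, assuming the disk really was added under a name:
SQL> ALTER DISKGROUP DATA_GRP01 DROP DISK DATA_GRP01_0017 REBALANCE POWER 5;
The two candidates showing group number 0 were never fully added to DATA_GRP01, so there is nothing to drop for them; ADD DISK ... FORCE can reuse a disk whose header still says MEMBER, but on a production system verify with Support before forcing anything.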

  • Existing SQL Cluster: Easiest Method to Move to new Datacenter

    Although there are a few questions similar to mine, I didn't find anything with these special circumstances, so I'll try a new thread:
We have an existing SQL 2008 cluster with shared storage in a data center, and we're moving it to a new data center. It's quite a bit of data (400GB+), and I'm looking for a way to move it over without extended downtime. Is there perhaps a way to add a third node to the cluster that's read-only but syncs the data? As a systems administrator (this company just lost their DBA, so I'm filling the role for now), I may be approaching this the wrong way, but anyway.
    Any ideas?
    Thanks!

    Hi VintageDon,
According to your description: personally, if you choose to use log shipping after adding a third node, you will need to manually transfer things like logins, linked servers, SQL jobs, etc. Even if you can fail over the databases and recover them, this still causes SQL Server downtime because of the amount of data. Microsoft added live migration to Hyper-V in Windows Server 2008 R2 and Windows Server 2012. The management feature helps reduce planned downtime and provides a foundation for the dynamic data center by allowing you to move virtual machines (VMs) between Hyper-V hosts with no downtime at all.
A memory synchronization process occurs where the target VM is updated with the user's changes to the source VM. After the memory is synchronized, the user is cut over to the VM running on the target Hyper-V host. The VM on the new host can immediately access its virtual hard disk (VHD) files stored on Cluster Shared Volumes (CSVs). For more information, see:
    http://windowsitpro.com/windows-server-2012/windows-server-2012-shared-storage-live-migration
    Regards,
    Sofiya Li
    TechNet Community Support

Hi, I have a problem with adding one song to an existing album in my iTunes. How do I do it?

The song is always added separately, like a new album; if I rename it to the existing album, it always stays separate.

    Take a look at my article on Grouping tracks into Albums, in particular the topic Tried that, there are STILL too many covers!.
    tt2

  • Adding new fields to existing cube

    Hi all,
We have a cube with data; now we have added 2 new fields to the DataSource, and we want to add two more fields in the cube to correspond to the new fields. What would be the impact on the data that is already in the cube? Can I go ahead and add the fields in the cube and start extracting data? What I want to know is: what would be the implications in such a scenario?
    Any insight or help would be appreciated and fully rewarded.
    Thanks,
    Mav

    Hi,
At least from the experience I had, there were no performance issues. The only issue would be the time taken to transport: the larger the amount of data, the longer the transport time.
You can continue extracting data for those 2 fields from that point on. There is no need to empty the data from the cube and reload again. The only thing is that past data for these 2 fields may not be mapped to the existing data in the cube. If you want that, dump the cube and reload. Otherwise the data will be mapped from the time you start extracting the new fields.
    Cheers,
    Kedar
