Setting up Samba on an NFS cluster

Hi,
I have a two-node cluster running HA NFS on top of several UFS and ZFS file systems. Now I'd like to have Samba share those file systems using the same logical hostname. In this case, do I need to create a separate Samba RG as the user guide suggests, or can I just use the same RG as the NFS service and only create a Samba resource through samba_register?
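For reference, here is what I had in mind for reusing the group. A sketch only; the variable names are my assumption about the samba_config layout shipped with SUNWscsmb, so check /opt/SUNWscsmb/util/samba_config on your nodes:
# sketch: point the agent's config at the existing HA-NFS group, then register
RS=samba-rs        # new Samba resource (example name)
RG=nfs-rg          # existing HA-NFS resource group, reused
# ... remaining samba_config variables as shipped ...
# then run the register script that reads this config
/opt/SUNWscsmb/util/samba_register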
Thanks,

Thanks, Neil, for spending time on this issue over the weekend. I should have thought about these two things when I did the testing on Friday.
I just corrected the config file and re-ran the test. It still failed. I noticed it complained about the fault monitor user as well as the RUN_NMBD variable.
For the first complaint, I did verify the fmuser through the smbclient command as the manual suggested.
# hostname
test2
# /usr/sfw/sbin/smbd -s /global/ufs/fs1/samba/test10/lib/smb.conf -D
# smbclient -s /global/ufs/fs1/samba/test10/lib/smb.conf -N -L test10
Anonymous login successful
Domain=[WINTEST] OS=[Unix] Server=[Samba 3.0.28]
Sharename Type Comment
testshare Disk
IPC$ IPC IPC Service (Samba 3.0.28)
Anonymous login successful
Domain=[WINTEST] OS=[Unix] Server=[Samba 3.0.28]
Server Comment
Workgroup Master
# smbclient -s /global/ufs/fs1/samba/test10/lib/smb.conf '//test10/scmondir' -U test10/fmuser%samba -c 'pwd;exit'
Domain=[ANSYS] OS=[Unix] Server=[Samba 3.0.28]
Current directory is \\test10\scmondir\
# pkill -TERM smbd
For RUN_NMBD, I recall somebody posted about this on the forum, but his solution was not recognized.
Start command output:
# ksh -x /opt/SUNWscsmb/samba/bin/start_samba -R 'samba-rs' -G 'nfs-rg' -X 'smbd nmbd' -B '/usr/sfw/bin' -S '/usr/sfw/sbin' -C '/global/ufs/fs1/samba/test10' \
-L '/global/ufs/fs1/samba/test10/logs' -U test10/fmuser%samba -M 'scmondir' -P '/usr/sfw/lib' -H test10
+ + /bin/pwd 2> /dev/null
PWD=/var/adm
+ + basename /opt/SUNWscsmb/samba/bin/start_samba
MYNAME=start_samba
+ + /usr/bin/awk -F_ {print $1}
+ /usr/bin/echo start_samba
parm1=start
+ + /usr/bin/awk -F_ {print $2}
+ /usr/bin/echo start_samba
parm2=samba
+ /opt/SUNWscsmb/bin/control_samba -R samba-rs -G nfs-rg -X smbd nmbd -B /usr/sfw/bin -S /usr/sfw/sbin -C /global/ufs/fs1/samba/test10 -L /global/ufs/fs1/samba/test10/logs -U test10/fmuser%samba -M scmondir -P /usr/sfw/lib -H test10 start samba
+ print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
+ validate_common
+ print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
+ rc=0
+ [ ! -d /usr/sfw/bin ]
+ debug_message Validate - samba bin directory /usr/sfw/bin exists
+ print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
+ [ ! -d /usr/sfw/sbin ]
+ debug_message Validate - samba sbin directory /usr/sfw/sbin exists
+ print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
+ [ ! -d /global/ufs/fs1/samba/test10 ]
+ debug_message Validate - samba configuration directory /global/ufs/fs1/samba/test10 exists
+ print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
+ [ ! -f /global/ufs/fs1/samba/test10/lib/smb.conf ]
+ debug_message Validate - smbconf /global/ufs/fs1/samba/test10/lib/smb.conf exists
+ print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
+ [ ! -x /usr/sfw/bin/nmblookup ]
+ debug_message Validate - nmblookup /usr/sfw/bin/nmblookup exists and is executable
+ print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
+ /usr/sfw/bin/nmblookup -h
+ 1> /dev/null 2>& 1
+ [ 1 -eq 0 ]
+ + /usr/bin/awk {print $2}
+ /usr/sfw/bin/nmblookup -V
VERSION=3.0.28
+ + /usr/bin/cut -d. -f1
+ /usr/bin/echo 3.0.28
SAMBA_VERSION=3
+ + /usr/bin/cut -d. -f2
+ /usr/bin/echo 3.0.28
SAMBA_RELEASE=0
+ + /usr/bin/cut -d. -f3
+ /usr/bin/echo 3.0.28
SAMBA_UPDATE=28
+ debug_message Validate - Samba version <3.0.28> is being used
+ print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
+ + stripfunc 3
SAMBA_VERSION=3
+ + stripfunc 0
SAMBA_RELEASE=0
+ + stripfunc 28
SAMBA_UPDATE=28
+ rc_validate_version=0
+ [ -z 3.0.28 ]
+ [ 3 -lt 2 ]
+ [ 3 -eq 2 -a 0 -le 2 -a 28 -lt 2 ]
+ [ 0 -gt 0 ]
+ debug_message Function: validate_common - End
+ print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
+ return 0
+ rc1=0
+ validate_samba
+ print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
+ rc=0
+ [ ! -d /global/ufs/fs1/samba/test10/logs ]
+ debug_message Validate - Samba log directory /global/ufs/fs1/samba/test10/logs exists
+ print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
+ [ ! -x /usr/sfw/sbin/smbd ]
+ debug_message Validate - smbd /usr/sfw/sbin/smbd exists and is executable
+ print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
+ [ ! -x /usr/sfw/sbin/nmbd ]
+ debug_message Validate - nmbd /usr/sfw/sbin/nmbd exists and is executable
+ print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
+ /usr/bin/grep \[ /global/ufs/fs1/samba/test10/lib/smb.conf
+ /usr/bin/cut -d[ -f2
+ /usr/bin/grep scmondir
+ /usr/bin/cut -d] -f1
+ [ -z scmondir ]
+ debug_message Validate - Faultmonitor resource scmondir exists
+ print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
+ [ ! -x /usr/sfw/bin/smbclient ]
+ debug_message Validate - smbclient /usr/sfw/bin/smbclient exists and is executable
+ print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
+ + /usr/bin/echo test10/fmuser%samba
+ /usr/bin/cut -d% -f1
+ /usr/bin/awk BEGIN { FS="\\" } {print $NF}
USER=test10/fmuser
+ /usr/bin/getent passwd test10/fmuser
+ [ -z  ]
+ syslog_tag
+ print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
+ scds_syslog -p daemon.error -t SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs -m Validate - Couldn't retrieve faultmonitor-user <%s> from the nameservice test10/fmuser
+ rc=1
+ + /usr/bin/tr -s [:lower:] [:upper:]
+ /usr/bin/echo YES
Bad string
RUN_NMBD=
+ [  = YES -o  = NO ]
+ syslog_tag
+ print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
+ scds_syslog -p daemon.error -t SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs -m Validate - RUN_NMBD=%s is invalid - specify YES or NO
+ rc=1
+ debug_message Function: validate_samba - End
+ print SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs
+ return 1
+ rc2=1
+ rc=1
+ [ 0 -eq 0 -a 1 -eq 0 ]
+ [ 1 -eq 0 ]
+ rc=1
+ exit 1
+ rc=1
+ exit 1
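For what it's worth, both validation failures can be reproduced outside the agent. A minimal sketch, assuming fmuser is a local account, using the same commands the trace shows:
# the validate step strips only a backslash-separated domain from FMUSER,
# so with '/' the whole string reaches getent and the lookup comes back empty
/usr/bin/echo 'test10/fmuser%samba' | /usr/bin/cut -d% -f1 |
    /usr/bin/awk 'BEGIN { FS="\\" } {print $NF}'      # prints test10/fmuser
/usr/bin/getent passwd test10/fmuser                   # no output -> validate error
/usr/bin/getent passwd fmuser                          # a plain user name should resolve
# the RUN_NMBD derivation pipes YES through tr; if the next command prints
# "Bad string" instead of YES, RUN_NMBD comes back empty, matching the second error
/usr/bin/echo YES | /usr/bin/tr -s '[:lower:]' '[:upper:]'
If tr turns out to be the culprit, /usr/xpg4/bin/tr does handle the character classes; whether the agent can be made to use it is something I have not verified.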
Messages file output:
# tail -f /var/adm/messages
Nov 21 11:14:28 test2 Cluster.RGM.rgmd: [ID 529407 daemon.notice] resource group nfs-rg state on node test3 change to RG_OFFLINE
Nov 21 15:52:57 test2 syslogd: going down on signal 15
Nov 21 15:53:32 test2 SC[SUNW.nfs:3.2,nfs-rg,nfs-ufs-fs2-rs,nfs_probe]: [ID 903370 daemon.debug] Command share -F nfs -o sec=sys,rw /localfs/ufs/fs2/data > /var/run/.hanfs/.run.out.2024 2>&1 failed to run: share -F nfs -o sec=sys,rw /localfs/ufs/fs2/data > /var/run/.hanfs/.run.out.2024 2>&1 exited with status 0.
Nov 21 15:53:32 test2 SC[SUNW.nfs:3.2,nfs-rg,nfs-zfs-rs,nfs_probe]: [ID 903370 daemon.debug] Command share -F nfs -o sec=sys,rw /CLSzpool/export > /var/run/.hanfs/.run.out.2026 2>&1 failed to run: share -F nfs -o sec=sys,rw /CLSzpool/export > /var/run/.hanfs/.run.out.2026 2>&1 exited with status 0.
Nov 21 15:56:18 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 327132 daemon.error] Validate - Couldn't retrieve faultmonitor-user <test10/fmuser> from the nameservice
Nov 21 15:56:18 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 287111 daemon.error] Validate - RUN_NMBD= is invalid - specify YES or NO
Nov 23 04:20:40 test2 cl_eventlogd[1156]: [ID 848580 daemon.info] Restarting on signal 1.
Nov 23 04:20:40 test2 last message repeated 2 times
Nov 23 04:21:43 test2 SC[SUNW.nfs:3.2,nfs-rg,nfs-ufs-fs2-rs,nfs_probe]: [ID 903370 daemon.debug] Command share -F nfs -o sec=sys,rw /localfs/ufs/fs2/data > /var/run/.hanfs/.run.out.2024 2>&1 failed to run: share -F nfs -o sec=sys,rw /localfs/ufs/fs2/data > /var/run/.hanfs/.run.out.2024 2>&1 exited with status 0.
Nov 23 04:21:43 test2 SC[SUNW.nfs:3.2,nfs-rg,nfs-zfs-rs,nfs_probe]: [ID 903370 daemon.debug] Command share -F nfs -o sec=sys,rw /CLSzpool/export > /var/run/.hanfs/.run.out.2026 2>&1 failed to run: share -F nfs -o sec=sys,rw /CLSzpool/export > /var/run/.hanfs/.run.out.2026 2>&1 exited with status 0.
Nov 24 09:11:50 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Function: validate_common - Begin
Nov 24 09:11:50 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Method: control_samba - Begin
Nov 24 09:11:50 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Validate - samba bin directory /usr/sfw/bin exists
Nov 24 09:11:50 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Validate - samba sbin directory /usr/sfw/sbin exists
Nov 24 09:11:50 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Validate - samba configuration directory /global/ufs/fs1/samba/test10 exists
Nov 24 09:11:50 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Validate - smbconf /global/ufs/fs1/samba/test10/lib/smb.conf exists
Nov 24 09:11:50 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Validate - nmblookup /usr/sfw/bin/nmblookup exists and is executable
Nov 24 09:11:51 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Validate - Samba version <3.0.28> is being used
Nov 24 09:11:51 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Function: validate_common - End
Nov 24 09:11:51 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Function: validate_samba - Begin
Nov 24 09:11:51 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Validate - Samba log directory /global/ufs/fs1/samba/test10/logs exists
Nov 24 09:11:51 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Validate - smbd /usr/sfw/sbin/smbd exists and is executable
Nov 24 09:11:51 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Validate - nmbd /usr/sfw/sbin/nmbd exists and is executable
Nov 24 09:11:51 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Validate - Faultmonitor resource scmondir exists
Nov 24 09:11:51 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Validate - smbclient /usr/sfw/bin/smbclient exists and is executable
Nov 24 09:11:51 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 327132 daemon.error] Validate - Couldn't retrieve faultmonitor-user <test10/fmuser> from the nameservice
Nov 24 09:11:51 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 287111 daemon.error] Validate - RUN_NMBD= is invalid - specify YES or NO
Nov 24 09:11:51 test2 SC[SUNWscsmb.samba.start]:nfs-rg:samba-rs: [ID 702911 daemon.debug] Function: validate_samba - End
Thanks again,
Jon

Similar Messages

  • Problems setting up an NFS server

    Hi everybody,
    I just completed my first arch install. :-)
    I have a desktop and a laptop, and I installed Arch on the desktop (the laptop runs Ubuntu 9.10). I had a few difficulties here and there, but I now have the system up and running, and I'm very happy.
    I have a problem setting up an NFS server. With Ubuntu everything was working, so I'm assuming that the Ubuntu machine (client) is set up correctly. I'm trying to troubleshoot the Arch box (server) now.
    I followed this wiki article: http://wiki.archlinux.org/index.php/Nfs
    Now, I have these problems:
    - when I start the daemons, I get:
    [root@myhost ~]# /etc/rc.d/rpcbind start
    :: Starting rpcbind [FAIL]
    [root@myhost ~]# /etc/rc.d/nfs-common start
    :: Starting rpc.statd daemon [FAIL]
    [root@myhost ~]# /etc/rc.d/nfs-server start
    :: Mounting nfsd filesystem [DONE]
    :: Exporting all directories [BUSY] exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "192.168.1.1/24:/home".
    Assuming default behaviour ('no_subtree_check').
    NOTE: this default has changed since nfs-utils version 1.0.x
    [DONE]
    :: Starting rpc.nfsd daemon [FAIL]
    - If I mount the share on the client with "sudo mount 192.168.1.20:/home /media/desktop", IT IS mounted but I can't browse it because I have no privileges to access the home directory for the user.
    my /etc/exports looks like this:
    # /etc/exports: the access control list for filesystems which may be exported
    # to NFS clients. See exports(5).
    /home 192.168.1.1/24(rw,sync,all_squash,anonuid=99,anongid=99))
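    For reference, a cleaned-up version of that export line, assuming the whole 192.168.1.0/24 network is intended (note the stray second ')' above; no_subtree_check also silences the exportfs warning):
    /home 192.168.1.0/24(rw,sync,no_subtree_check,all_squash,anonuid=99,anongid=99)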
    /etc/conf.d/nfs-common.conf:
    # Parameters to be passed to nfs-common (nfs clients & server) init script.
    # If you do not set values for the NEED_ options, they will be attempted
    # autodetected; this should be sufficient for most people. Valid alternatives
    # for the NEED_ options are "yes" and "no".
    # Do you want to start the statd daemon? It is not needed for NFSv4.
    NEED_STATD=
    # Options to pass to rpc.statd.
    # See rpc.statd(8) for more details.
    # N.B. statd normally runs on both client and server, and run-time
    # options should be specified accordingly. Specifically, the Arch
    # NFS init scripts require the --no-notify flag on the server,
    # but not on the client e.g.
    # STATD_OPTS="--no-notify -p 32765 -o 32766" -> server
    # STATD_OPTS="-p 32765 -o 32766" -> client
    STATD_OPTS="--no-notify"
    # Options to pass to sm-notify
    # e.g. SMNOTIFY_OPTS="-p 32764"
    SMNOTIFY_OPTS=""
    # Do you want to start the idmapd daemon? It is only needed for NFSv4.
    NEED_IDMAPD=
    # Options to pass to rpc.idmapd.
    # See rpc.idmapd(8) for more details.
    IDMAPD_OPTS=
    # Do you want to start the gssd daemon? It is required for Kerberos mounts.
    NEED_GSSD=
    # Options to pass to rpc.gssd.
    # See rpc.gssd(8) for more details.
    GSSD_OPTS=
    # Where to mount rpc_pipefs filesystem; the default is "/var/lib/nfs/rpc_pipefs".
    PIPEFS_MOUNTPOINT=
    # Options used to mount rpc_pipefs filesystem; the default is "defaults".
    PIPEFS_MOUNTOPTS=
    /etc/hosts.allow:
    nfsd: 192.168.1.0/255.255.255.0
    rpcbind: 192.168.1.0/255.255.255.0
    mountd: 192.168.1.0/255.255.255.0
    Any help would be very appreciated!

    Thanks, I finally got it working.
    I realized that even though both machines had the same group, my Ubuntu machine (client) group has GID 1000, while the Arch one has GID 1001. I created a group that has GID 1001 on the client, and now everything is working.
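    A sketch of that fix on the client side (the group name here is made up; 1001 is the GID from the Arch box):
    # on the Ubuntu client: create a group with the matching GID and join it
    sudo groupadd -g 1001 archshare
    sudo usermod -aG archshare youruser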
    I'm wondering why my Arch username and group both have 1001 rather than 1000 (which I suppose would be the default number for the first user created).
    Anyway, thanks again for your inputs.

  • Can the DB lookup option be set up using a SQL cluster

    Could you please confirm whether the DB lookup feature in ICM can be set up using a SQL cluster?
    If yes, could you please confirm whether this is the right configuration?
    Registry : \\CORPVCLUCSQL899\MSSQLP899\ETN_ICMContactLookup=(ETNICMLookup,l000kOnly)
    ICM configuration\Database look up explorer :
    \\MSSQLP899\DEBN_3701
    SQL Cluster server name : CORPVCLUCSQL899
    Instance name : MSSQLP899
    Table name :
    DEBN_3701

    I tried various combinations when updating the path, but am still getting the same error. Please find the DB worker log below. Yes, we are aware that it supports only a minimal feature set.
    17:15:55:093 ra-dbw Add of ScriptTable complete. externaldb (5000).
    17:15:55:093 ra-dbw Trace: Invalid pathname for externaldb: \\CORPVCLUCSQL899\MSSQLP899\DEBN_3701
    17:15:55:094 ra-dbw Trace: Handling delete for ScriptTable 5000 (ScriptTableInfo::QueueUpdate (BadPath))
    17:15:55:094 ra-dbw Trace: Action=4 type=0 count=0
    17:15:55:094 ra-dbw Trace: Action=1 type=127 count=1
    17:15:55:094 ra-dbw Trace: Action=5 type=0 count=0
    17:15:55:094 ra-dbw Trace: Transaction complete.
    17:15:55:094 ra-dbw Add of ScriptTableColumn complete. ID 5000.
    17:15:55:094 ra-dbw Trace: Invalid pathname for externaldb: \\CORPVCLUCSQL899\MSSQLP899\DEBN_3701
    17:15:55:094 ra-dbw Trace: ScriptTable externaldb (ID 5000) disconnected.
    17:18:24:502 ra-dbw Trace: Action=4 type=0 count=0
    17:18:24:504 ra-dbw Trace: Action=3 type=126 count=1
    17:18:24:505 ra-dbw Trace: Action=5 type=0 count=0
    17:18:24:506 ra-dbw Trace: Transaction complete.
    17:18:24:506 ra-dbw Update of ScriptTable complete. externaldb (5000).
    17:18:24:506 ra-dbw Trace: Invalid pathname for externaldb: \\MSSQLP899\DEBN_3701
    17:18:24:506 ra-dbw Trace: Handling delete for ScriptTable 5000 (ScriptTableInfo::QueueUpdate (BadPath))
    17:18:24:506 ra-dbw Trace: ScriptTable externaldb (ID 5000) disconnected.
    17:34:21:436 ra-dbw Trace: Action=4 type=0 count=0
    17:34:21:437 ra-dbw Trace: Action=3 type=126 count=1
    17:34:21:437 ra-dbw Trace: Action=5 type=0 count=0
    17:34:21:437 ra-dbw Trace: Transaction complete.
    17:34:21:437 ra-dbw Update of ScriptTable complete. externaldb (5000).
    17:34:21:438 ra-dbw Trace: Invalid pathname for externaldb: \\151.110.200.190\MSSQLP899\DEBN_3701
    17:34:21:438 ra-dbw Trace: Handling delete for ScriptTable 5000 (ScriptTableInfo::QueueUpdate (BadPath))
    17:34:21:438 ra-dbw Trace: ScriptTable externaldb (ID 5000) disconnected.
    17:36:40:954 ra-dbw Unable to parse SQL login info string.
    17:38:18:725 ra-dbw Unable to parse SQL login info string.
    17:45:54:366 ra-dbw SQL login info updated.
    17:46:55:926 ra-dbw Trace: Action=4 type=0 count=0
    17:46:55:926 ra-dbw Trace: Action=3 type=126 count=1
    17:46:55:927 ra-dbw Trace: Action=5 type=0 count=0
    17:46:55:927 ra-dbw Trace: Transaction complete.
    17:46:55:927 ra-dbw Update of ScriptTable complete. externaldb (5000).
    17:46:55:928 ra-dbw Trace: Invalid pathname for externaldb: \\151.110.200.190\ETN_ICMContactLookup
    17:46:55:928 ra-dbw Trace: Handling delete for ScriptTable 5000 (ScriptTableInfo::QueueUpdate (BadPath))
    17:46:55:928 ra-dbw Trace: ScriptTable externaldb (ID 5000) disconnected.
    18:02:54:302 ra-dbw SQL login info updated.
    18:51:57:417 ra-dbw Trace: Action=4 type=0 count=0
    18:51:57:417 ra-dbw Trace: Action=3 type=126 count=1
    18:51:57:418 ra-dbw Trace: Action=5 type=0 count=0
    18:51:57:418 ra-dbw Trace: Transaction complete.
    18:51:57:419 ra-dbw Update of ScriptTable complete. externaldb (5000).
    18:51:57:419 ra-dbw Trace: Invalid pathname for externaldb: \\ETN_ICMContactLookup\DEBN_3701
    18:51:57:419 ra-dbw Trace: Handling delete for ScriptTable 5000 (ScriptTableInfo::QueueUpdate (BadPath))
    18:51:57:420 ra-dbw Trace: ScriptTable externaldb (ID 5000) disconnected.
    18:59:46:937 ra-dbw Trace: Action=4 type=0 count=0
    18:59:46:937 ra-dbw Trace: Action=3 type=126 count=1
    18:59:46:937 ra-dbw Trace: Action=5 type=0 count=0
    18:59:46:938 ra-dbw Trace: Transaction complete.
    18:59:46:938 ra-dbw Update of ScriptTable complete. externaldb (5000).
    18:59:46:939 ra-dbw Trace: Invalid pathname for externaldb: \\ICMLOOKUP\ETN_ICMContactLookup\DEBN_3701
    18:59:46:939 ra-dbw Trace: Handling delete for ScriptTable 5000 (ScriptTableInfo::QueueUpdate (BadPath))
    18:59:46:939 ra-dbw Trace: ScriptTable externaldb (ID 5000) disconnected.
    19:00:28:945 ra-dbw Trace: Action=4 type=0 count=0
    19:00:28:945 ra-dbw Trace: Action=3 type=126 count=1
    19:00:28:945 ra-dbw Trace: Action=5 type=0 count=0
    19:00:28:946 ra-dbw Trace: Transaction complete.
    19:00:28:946 ra-dbw Update of ScriptTable complete. externaldb (5000).
    19:00:28:946 ra-dbw Trace: Invalid pathname for externaldb: \\ICMLOOKUP\ETN_ICMContactLookup
    19:00:28:946 ra-dbw Trace: Handling delete for ScriptTable 5000 (ScriptTableInfo::QueueUpdate (BadPath))
    19:00:28:947 ra-dbw Trace: ScriptTable externaldb (ID 5000) disconnected.

  • Adding Samba to an existing NFS-Cluster

    Hello!
    Currently we are running a cluster consisting of two nodes which provide two NFS servers (one for each node). Both nodes are connected to a 6140 storage array. If one of the nodes fails, the zpool is mounted on the other node, and all works fine. The NFS servers are started in the global zone. Here is the config of the current cluster:
    # clrs status
    === Cluster Resources ===
    Resource Name        Node Name            State     Status Message
    fileserv01            clnode01            Online    Online - LogicalHostname online.
                          clnode02            Offline   Offline
    fileserv01-hastp-rs   clnode01            Online    Online
                          clnode02            Offline   Offline
    fileserv01-nfs-rs     clnode01            Online    Online - Service is online.
                          clnode02            Offline   Offline
    fileserv02            clnode02            Online    Online - LogicalHostname online.
                          clnode01            Offline   Offline - LogicalHostname offline.
    fileserv02-hastp-rs   clnode02            Online    Online
                          clnode01            Offline   Offline
    fileserv02-nfs-rs     clnode02            Online    Online - Service is online.
                          clnode01            Offline   Offline - Completed successfully.
    Now we want to add two Samba servers which additionally share the same zpools. From what I read in the "Sun Cluster Data Service for Samba Guide for Solaris OS", this can only be done using zones, because I cannot run two Samba services on the same host.
    At this point I am stuck. We have to fail over both NFS and Samba services if a failure occurs (because they share the same data), so they have to be in the same resource group. But this will not work because the list of nodes for each service is different ("clnode01, clnode02" vs. "clnode01:zone1, clnode02:zone2").
    I can mount the zpool from the global zone using "mount -F lofs", but I think I need a real dependency for the storage.
    My question is: How can we accomplish this setup?

    Hi,
    You can run multiple Samba services on the same host or global zone in your case. For example, suppose you had smb1 and smb2, you could run smb1 within your fileserv01 RG and smb2 within your fileserv02 RG.
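    As a minimal sketch (all paths hypothetical), two instances come down to two independent configuration trees, each started with its own smb.conf:
    # one smbd per logical host, each with its own config tree
    /usr/sfw/sbin/smbd -s /global/fs1/samba/smb1/lib/smb.conf -D
    /usr/sfw/sbin/smbd -s /global/fs2/samba/smb2/lib/smb.conf -D
    # each smb.conf needs its own netbios name, interfaces/bind interfaces only,
    # pid directory and lock directory so the two instances don't collide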
    The only restriction is if you also require winbind, as only one instance of winbind is possible per global or non-global zone. In that case, if deploying smb1 and smb2 as above and winbind is also required, you would also need a scalable RG to ensure that winbindd gets started on each node.
    Please see http://docs.sun.com/app/docs/doc/819-3063/gdewn?a=view in particular Restrictions for multiple Samba instances that require Winbind, also please note the "note" within that section, which indicates that "... you may also use global as the zone name ..."
    Alternatively, please post the restriction from the docs.
    Regards
    Neil

  • NFS Cluster

    Hi gurus,
    Is it supported to configure more than one resource group that uses the NFS resource?
    That is, to have more than one instance of NFS services.
    Thanks in advance,
    AB

    Yes, absolutely it is supported. Doing so allows you to balance NFS services across the cluster. HOWEVER, there is one rule you must obey: you can only share an NFS mount point from one cluster node at any one time.
    So, suppose you had a set of home directories that you wanted to share from /failover/export/home, then rather than sharing that from both cluster nodes (in two resource groups), you would break the shares up into two pieces. For example, /failover/export/home/a_to_m and /failover/export/home/n_to_z. That way, you can share one from one node and the other from the other node using two separate resource groups.
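    A rough sketch of that layout (resource, group, and path names are assumptions; HA-NFS reads its share commands from dfstab.<resource> under the group's Pathprefix):
    # two failover groups, each owning one half of the home directories
    clresourcegroup create -p Pathprefix=/failover/admin1 nfs-rg-a
    clresourcegroup create -p Pathprefix=/failover/admin2 nfs-rg-b
    echo 'share -F nfs -o rw /failover/export/home/a_to_m' >> /failover/admin1/SUNW.nfs/dfstab.nfs-rs-a
    echo 'share -F nfs -o rw /failover/export/home/n_to_z' >> /failover/admin2/SUNW.nfs/dfstab.nfs-rs-b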
    One other rule that Solaris Cluster users need to be aware of: You cannot share an HA-NFS mount point from within the cluster to another cluster node. If you do, you risk deadlocks.
    Hope that helps,
    Tim
    ---

  • [Solved] Help Setting Samba Share In Qemu With XP Guest

    Hello all,
    After getting around all the quirks of setting up an XP guest inside qemu, the only (though pretty important) thing I can't solve is setting up a samba share between the Arch host and the XP guest.
    I tried whatever I could think of that's relevant, with no success:
    Qemu is launched with:
    qemu-system-x86_64 -machine type=pc,accel=kvm -cpu host -boot d -m 1024 -net user,smb={my home folder} -net nic,model=virtio -rtc base=localtime -drive file=XP.qcow2,if=virtio -spice port=5900,disable-ticketing,image-compression=off,jpeg-wan-compression=never,zlib-glz-wan-compression=never,playback-compression=off -vga qxl -global qxl-vga.vram_size=67108864 -device virtio-serial-pci -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent -balloon virtio
    I installed samba, added a user, and started the service according to the wiki.
    I allowed the samba ports in UFW (using the CIFS rule).
    Whenever I try accessing the share inside my XP guest (\\10.0.2.4\qemu), I usually end up with a target not found error, or at times, a password prompt that will not accept the samba user/password.
    Anything else I forgot to do?
    Thanks, Adam.
    * EDIT *
    I somehow missed the section regarding the guest OS being able to access the host OS's normal samba shares at 10.0.2.2.
    Default samba share for home folder works as expected.
    Last edited by adam777 (2013-07-15 16:05:44)

    Thanks.
    I tried toying with the options a bit more, with no success.
    I then decided to try Spice+QXL anyway, and am very happy with it.
    Aside from the lengthy compilation of the Qemu version that supports Spice from the AUR, once I got it set up, installing the guest was a breeze.
    Using the Spice guest tools for windows, everything was set up very conveniently (virtio for network, hdd, qxl driver, mouse integration etc.).
    Responsiveness is much better compared to previous attempts with "vga std", "vga cirrus" and the normal adapter.
    Bottom line, I'm sticking with Spice+QXL, seems to work best for me.

  • Is it possible to set up a bigger cluster with different AGs failing over within subsets of the nodes

    Can SQL Server 2012 AlwaysOn support this scenario?
    Set up a 12-node Windows Server failover cluster (A B C ... L)
    and create 8 availability groups, letting each fail over among 3 nodes of the cluster, e.g.:
    AG1 failover in A B  C nodes
    AG2 failover in B C D nodes
    AG3 failover in C D E nodes
    etc.
    If this is supported, I guess it's then possible to say that any node in the Windows cluster can at the same time be primary for some AGs and secondary for other AGs (if we don't worry about performance).

    Microsoft supporting it is one thing. You and your team supporting it is another. Highly available systems need to be kept as simple as possible, so long as they meet your recovery objectives and service level agreements. The question here is this: do all databases on those Availability Groups have the same recovery objectives and service level agreements? This should dictate how you architect your solution.
    Edwin Sarmiento SQL Server MVP | Microsoft Certified Master
    Blog | Twitter | LinkedIn
    SQL Server High Availability and Disaster Recovery Deep Dive Course

  • How to set multiple values in a cluster with a local variable?

    Hello all,
    Ok, I've made my way through LabVIEW for Everyone, and have some basic concepts down. I can see that with a cluster, if acting directly upon it, you can unbundle, change values, rebundle, etc.
    I'm trying something a bit more complex...and just not sure how to get started on this.
    I have a drop down menu ring. I have set this up as a typedef, with 4 values.  I have used this typedef 7 times, plus some LED bools, in a cluster. I have made this cluster a typedef.
    So, in the main vi I'm starting to design, I've set up an example (posted here)....and in it, I have two instantiations of the cluster typedef, Left Selector and Right Selector.
    I have dropped into this vi a copy of the menu ring typedef (same typedef as in the clusters, same values)....called 'reset all tubes'.
    I'm trying now to figure out how, with an event on the change of value for 'reset all tubes', I can start with the left selector and change all the tubes (these are the menu ring selectors) to the same value as what has been selected with the 'reset all tubes' menu ring.
    I've created a local variable for the left selector. It is set to read values. (I'll be doing the same with the right one too, but just starting with the left).
    In examples I've seen that directly access a cluster, you can unbundle the cluster...loop through and change the values...maybe pull all of the 'tubes' out into an array and move through that to update the values. And when you bundle or unbundle the cluster...you can see the values, etc., when you stretch them out on the block diagram.
    With the local variable..I can't seem to 'stretch' it out like I was expecting..so I can access the values for the 'tubes'...and set them all to the value of the 'reset all tubes' ring menu value.
    Can someone put me on the path to the best way to do this....or is there a structure or component I'm missing here? Am I on the right track to begin with?
    This would seem pretty basic to me, but I'm just missing something here on how to start...
    Thank you in advance,
    cayenne
    Attachments:
    example_select_dropdown.ctl (5 KB)
    public_selector_cluster.ctl (6 KB)
    laptop_test_public.vi (12 KB)

    nathand wrote:
    You can't do this with a for loop the way the cluster is structured, but why make it so complicated?  Just bundle the new value into the cluster as shown below:
    If you do want to use a for loop, consider restructuring your cluster.  Group the ring and a boolean into a cluster, then drop 7 of those into the selector cluster.  Then you can use "cluster to array" and "array to cluster" since all elements in the outer cluster will be of the same type.
    Also, be careful using rings as type definitions.  You probably want to use an enumeration instead.  The items in a ring do not update when you update the type definition because they're considered to be cosmetic; the items in an enumeration type definition do update, because the items in an enumeration are considered part of the data type.
    Oh my goodness!!
    That was MUCH easier than I was trying to make it!!
    I was attempting to do this reset in an incremental fashion, as later I'm going to need to change values in this cluster by individual elements....and was using this as a way to try to understand how to iterate between cluster elements and change the values of each one.
    This solution works much better for this 'reset' all solution.
    One question on this....with regard to the enum vs the ring menu.
    I was actually going to go with the enum; however, I could not find a way to make it LOOK like the ring menu...with the arrow to the side so a user would know to click it to get the menu list of choices.
    I see with the enum, that you can remove the increment/decrement indicator.....and if the user clicks the control, it will indeed pop up a menu of choices...but there is no 'arrow' on that control to indicate to the user that it is a menu choice there.....
    Is there a way to make an enum look like a ring with the "drop down menu" control look?
    Again, thank you !!
    This helps!
    C

  • NFS cluster node crashed

    Hi all, we have a 2-node cluster running Solaris 10 11/06 and Sun Cluster 3.2.
    Recently, we were asked to nfs mount on node 1 of the cluster, a directory from an external Linux host (ie node 1 of the cluster is the nfs client; the linux server is the nfs server).
    A few days later, early on a Sunday morning, the linux server developed a high load and was very slow to log into. Around the same time, node 1 of the cluster rebooted. Was this reboot of node 1 a coincidence? I'm not sure.
    Anyone got ideas/suggestions about this situation (eg the slow response of the nfs linux server caused node 1 of the cluster to reboot; the external nfs mount is a bad idea)?
    Stewart

    Hi,
    your assumption sounds very unreasonable. But without any hard facts like
    - the panic string
    - contents of /var/adm/messages at time of crash
    - configuration information
    - etc.
    it is impossible to tell.
    Regards
    Hartmut

  • Setting up multi-Mac cluster for Compressor is not working

    Hi
    I have a Mac Pro 8-core Intel Xeon (master) and a MacBook Pro Intel Core 2 Duo (slave).
    I am using FCS 3 .
    I tried to set up a cluster for both of them. Everything is fine up to the point when it starts sharing jobs between computers. The MacBook Pro doesn't take any action. I can see in Batch Monitor that the job is segmented for all cores (we have 10 cores in total; if I assign all of them to the cluster I get 20 segments in Batch Monitor), but all jobs go to my master.
    I called AppleCare but they told me to reinstall Compressor; this doesn't help.
    What should I do?

    Thank you for closing your thread with a workable solution. Hardly anyone does that around here. It is a valuable and gracious act.
    congratulations on figuring it out on your own. I do not believe any of us would have suggested you do a complete reinstallation of both the MacOS and FCS to solve that problem because you did not include the single most vital factor: you had migrated your FCS system to a new Macintosh using Migration Assistant.
    bogiesan

  • Setting umask on NFS mounts under OS X 10.6

    I don't see any up-to-date information on setting umasks with 10.6. We use NFS SAN storage for several folders, mounted using the new NFS Mounts feature of Disk Utility, and the SAN and Mac are playing nicely for UID and GID mapping, even using our Active Directory LDAP groups and accounts.
    However, it creates everything as 755 (umask 0022, the system default) and I'd like to change that umask to 002 to create 775s instead. Nothing for 10.4 or 10.5 seems to apply here (NSUMask, etc.) and it doesn't appear to be an NFS Mount option, either.
    NFSv4 and ACLs are not really a good option for me: since it's a multiprotocol SAN environment, we want to keep NFSv3 permission bits for UNIX/Linux/Mac and ACLs for Windows. Any ideas how to set the default umask, or at least a way to set the umask for each mount?

    xzi wrote:
    Yes, I tried that; it does not appear to work on 10.6, that's my point.
    I'm sorry, but I just can't figure out "etc". All I can do is keep asking if you have tried this or that. Have you tried both variations of that config file, and did neither work? Have you tried the instructions in this document, which seems updated for Snow Leopard?
    Have you considered that you may have already tried the correct solution, but simply didn't do it correctly? Have you tried logging out and back in? Rebooting?

  • How do I set up a NFS server?

    Running 10.5 on a MacBook. Trying to create an NFS server to mount an ISO to a proprietary UNIX client, AIX 5.2. Looked under Sharing and Directory Utility. Not finding any documentation which indicates how this is done.
    On UNIX, I would start NFS services, add the directory to the export list with appropriate permissions (which client can access it, etc.) and it would be good to go (assuming name resolution/routing etc.). Any insight would be helpful.

    You're over my head with your UNIX knowledge, but I wonder if the 10.4.x method still works in 10.5.x?
    http://mactechnotes.blogspot.com/2005/09/mac-os-x-as-nfs-server.html
    Nope, I guess not, they removed NetInfoManager in 10.5.
    Few know as much as this guy about OSX & he has a little utility to do it, but requires 10.6+...
    http://www.williamrobertson.net/documents/nfs-mac-linux-setup.html
    Maybe some clues you haven't seen yet anyway???

  • Installing and setting up Real Application Cluster

    We installed Oracle 9i R2 and then inserted the RAC CD. The CD did not open any front end for installing and configuring RAC. Please let me know how to start the setup and configure RAC on Windows XP.

    Maybe searching the RAC CD and starting the setup manually will work?

  • Problems Setting up NFS

    Hi.
    I've been having problems setting up NFS on a LAN at home.  I've read through all the Arch NFS-related threads and looked at (and tried some of) the suggestions, but without success.
    What I'm trying to do is set up an NFS share on a wireless laptop to copy one user's files from the laptop to a wired workstation.  The laptop is running Libranet 2.8.1. The workstation is running Arch Linux with the most up-to-date (as of today) software, including NFS.
    I have the portmap, nfslock, and nfsd daemons running via rc.conf, set to run in that order.
    I've re-installed nfs-utils and set up hosts.deny as follows:
    # /etc/hosts.deny
    ALL: ALL
    # End of file
    and hosts.allow as follows:
    # /etc/hosts.allow
    ALL: 192.168.0.0/255.255.255.0
    ALL: LOCAL
    # End of file
    In other words, I want to allow access on the machine to anything running locally or on the LAN.
    I have the shared directories successfully exported from the laptop (192.168.0.3), but I have not been able to mount the exported directories on the workstation (192.168.0.2).
    The mount command I'm using at the workstation:
    mount -t nfs -o rsize=8192,wsize=8192 192.168.0.3:/exports /exports
    There is an /exports directory on both machines at root with root permissions. There is an /etc/exports file on the NFS server machine that gives asynchronous read/write permissions to 192.168.0.2 for the /exports directory.
    I receive the following message:
    mount: RPC: Program not registered
    This appears to be a hosts.allow/hosts.deny type problem, but I can't figure out what I need to do to make RPC work properly.
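    One way to narrow that down is to query the laptop's portmapper from the workstation (a sketch; rpcinfo and showmount ship with the NFS utilities):
    rpcinfo -p 192.168.0.3     # should list nfs and mountd program numbers
    showmount -e 192.168.0.3   # asks mountd on the server for the export list
    # if rpcinfo answers but showmount is refused, suspect hosts.allow/hosts.deny
    # on the server rather than the daemons themselves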
    Some additional information, just in case it's helpful:
    This is the result if running rpcinfo:
       program vers proto   port
        100000    2   tcp    111  portmapper
        100000    2   udp    111  portmapper
        100024    1   udp  32790  status
        100024    1   tcp  32835  status
        100003    2   udp   2049  nfs
        100003    3   udp   2049  nfs
        100003    4   udp   2049  nfs
        100003    2   tcp   2049  nfs
        100003    3   tcp   2049  nfs
        100003    4   tcp   2049  nfs
        100021    1   udp  32801  nlockmgr
        100021    3   udp  32801  nlockmgr
        100021    4   udp  32801  nlockmgr
        100021    1   tcp  32849  nlockmgr
        100021    3   tcp  32849  nlockmgr
        100021    4   tcp  32849  nlockmgr
        100005    3   udp    623  mountd
        100005    3   tcp    626  mountd
    Running ps shows that the following processes are running:
    portmap
    rpc.statd
    nfsd
    lockd
    rpciod
    rpc.mountd
    Thanks for any help here!
    Regards,
    Win

    Hi ravster.
    Thanks for the suggestions. I had read the Arch NFS HowTo and did, as you suggested, get rpcinfo for the server machine. The Libranet NFS server is configured rather differently since the NFS daemons are built into the default kernel.
    Since I needed to complete the transfer immediately, I gave up on setting up NFS for now and used netcat (!) instead to transfer the files from the Libranet laptop to the Arch workstation. The laptop will be converted to Arch Linux soon, so I think I'll be able to clear things up then. I haven't had any problems setting up NFS with homogeneous (i.e., all Arch or all Libranet) computer combinations.
    Thanks again.
    Win

  • New files and folders on a Linux client mounting a Windows 2012 Server for NFS share do not inherit Owner and Group when SetGID bit set

    Problem statement:
    When I mount a Windows NFS service file share using UUUA, set the Owner and Group, and set the SetGID bit on the parent folder in a hierarchy, new files and folders inside and underneath the parent folder do not inherit the Owner and Group permissions of the parent.
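    The expected behaviour can be checked from the Linux client with a couple of commands (a sketch; the group name is hypothetical):
    # mark the parent setgid, then see whether a new file inherits the group
    mkdir parent && chgrp shared parent && chmod g+s parent
    touch parent/child
    ls -ln parent/child    # with grpid semantics the GID should match 'shared'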
    I am given to understand from this Microsoft KnowledgeBase article (http://support.microsoft.com/kb/951716/en-gb) that the problem is due to the Windows implementation of NFS Services not supporting the Solaris System V or BSD grpid "semantics".
    However, the article says the same functionality can be achieved by using ACE inheritance in conjunction with changing the registry setting for "KeepInheritance" to enable inheritance propagation of the permissions by the Windows NFS Services.
    1. The precise location of the "KeepInheritance" DWORD key appears to have "moved" in Windows Server 2012 from a Services path to a Software path. Is this documented somewhere? And after enabling it (or creating it in the previous location), the feature seems non-functional. Is there a method to file a bug with Microsoft for this feature?
    2. All of the references demonstrating how to set an ACE to achieve the same result currently either lead to broken links on Microsoft technical websites, or are vague and circumreferential rather than explicit. There are no plain examples. Can an example be provided?
    3. Is UUUA compatible with the method of setting an ACE to achieve this result, or must the Linux client mount be "mapped" using an authentication source? And could that be done with the new flat-file passwd and group files in c:\windows\system32\drivers\etc, and is there an example available?
    Scenario:
    Windows Server 2012 Standard
    File Server (Role)
    +- Server for NFS (Role) << -- installed
    General --
    Folder path: F:\Shares\raid-6-array
    Remote path: fs4:/raid-6-array
    Protocol: NFS
    Authentication --
    No server authentication
    +- No server authentication (AUTH_SYS)
    ++- Enable unmapped user access
    +++- Allow unmapped user access by UID/GID
    Share Permissions --
    Name: linux_nfs_client.host.edu
    Permissions: Read/Write
    Root Access: Allowed
    Encoding: ANSI
    NTFS Permissions --
    Type: Allow
    Principal: BUILTIN\Administrators
    Access: Full Control
    Applies to: This folder only
    Type: Allow
    Principal: NT AUTHORITY\SYSTEM
    Access: Full Control
    Applies to: This folder only
    -- John Willis, Facebook: John-Willis, Skype: john.willis7416

    I'm making some "major" progress on this problem.
    1. Apparently the "semantics" issue to honor SGID or grpid in NFS on the server side or the client side has been debated for some time. It also existed as of 2009 between Solaris nfs server and Linux nfs clients. The Linux community defaulted to declaring
    it a "Server" side issue to avoid "Race" conditions between simultaneous access users and the local file system daemons. The client would have to "check" for the SGID and reformulate its CREATE request to specify the Secondary group it would have to "notice"
    by which time it could have changed on the server. SUN declined to fix it.. even though there were reports it did not behave the same between nfs3 vs nfs4 daemons.. which might be because nfs4 servers have local ACL or ACE entries to process.. and a new local/nfs
    "inheritance" scheme to worry about honoring.. that could place it in conflict with remote access.. and push the responsibility "outwards" to the nfs client.. introducing a race condition, necessitating "locking" semantics.
    This article covers that discovery and no resolution - http://thr3ads.net/zfs-discuss/2009/10/569334-CR6894234-improved-sgid-directory-compatibility-with-non-Solaris-NFS-clients
    2. A much older Microsoft Knowledge Base article had explicit examples of using Windows ACEs and inheritance to "mitigate" the issue.. basically the nfs client "cannot" update an ACE to make it "Inheritable" [-but-] a Windows-side Admin or Windows User [-can-] update or promote an existing ACE to "Inheritable"
    Here are the pertinent statements -
    "In Windows Services for UNIX 2.3, you can use the KeepInheritance registry value to set inheritable ACEs and to make sure that these ACEs apply to newly created files and folders on NFS shares."
    "Note About the Permissions That Are Set by NFS Clients
    The KeepInheritance option only applies ACEs that have inheritance enabled. Any permissions that are set by an NFS client will only apply to that file or folder, so the resulting ACEs created by an NFS client will not have inheritance set."
    "So
    If you want a folder's permissions to be inherited to new subfolders and files, you must set its permissions from the Windows NFS server because the permissions that are set by NFS clients only apply to the folder itself."
    http://support.microsoft.com/default.aspx?scid=kb;en-us;321049
    3. I have set up a Windows 2008r2 NFS server and mounted it with a Redhat Enterprise Linux 5 release 10 x86_64 server [Oct 31, 2013] and so far this does appear to be the case.
    4. In order to mount and then switch user to a non-root user to create subdirectories and files, I had to mount the NFS share (after enabling Anonymous AUTH_SYS mapping); this is not a good thing, but it was because I have been using UUUA - Unmapped Unix User Access Mapping, which makes no attempt to "map" a Unix UID/GID set by the NFS client to a Windows user account.
    To verify the inheritance of additional ACEs on new subdirectories and files created by a non-root Unix user, on the Windows NFS server I used right-click Properties, the Security tab, then Advanced to list all the ACEs, and looked at the far column reflecting whether each applied to [This folder only, This folder and subdirectories, or This folder, subdirectories and files].
    5. All new subdirectories and files created by the non-root user had a [Non-Inheritance] ACE created for them.
    6. I turned a [Non-Inheritance] ACE into an [Inheritance] ACE by selecting it then clicking [Edit] and using the Drop down to select [This folder, subdirs and files] then I went back to the NFS client and created more subdirs and files. Then back to the
    Windows NFS server and checked the new subdirs and folders and they did Inherit the Windows NFS server ACE! - However the UID/GID of the subdirs and folders remained unchanged, they did not reflect the new "Effective" ownership or group membership.
    7. I "believe" because I was using UUUA and working "behind" the UID/GID presentation layer for the NFS client, it did not update that presentation layer. It might do that "if" I were using a Mapping mechanism and mapped UID/GID to Windows User SIDs and
    Group SIDs. Windows 2008r2 no longer has a "simple" Mapping server, it does not accept flat text files and requires a Schema extension to Active Directory just to MAP a windows account to a UID/GID.. a lot of overhead. Windows Server 2012 accepts flat text
    files like /etc/passwd and /etc/group to perform this function and is next on my list of things to see if that will update the UID/GID based on the Windows ACE entries. Since the Local ACE take precedence "over" Inherited ACEs there could be a problem. The
    Inheritance appears to be intended [only] to retain Administrative rights over user created subdirs and files by adding an additional ACE at the time of creation.
    8. I did verify from the NFS client side in Linux that "Even though" the UID/GID seem to reflect the local non-root user should not have the ability to traverse or create new files, the "phantom" NFS Server ACEs are in place and do permit the function..
    reconciling the "view" with "reality" appears problematic, unless the User Mapping will update "effective" rights and ownership in the "view"
    -- John Willis, Facebook: John-Willis, Skype: john.willis7416
