EBS 7.4 with ZFS and Zones

The EBS 7.4 product claims to support ZFS and zones, yet it fails to explain how to recover systems running this type of configuration.
Has anyone out there been able to recover a server using the EBS software that is running ZFS file systems in both the global zone and sub-zones? (NB: the server's system file store, / /usr /var, is UFS for all zones.)
Edited by: neilnewman on Apr 3, 2008 6:42 AM


Similar Messages

  • EBS with ZFS and Zones

    I will post this one again in desperation; I have had a SUN support call open on this subject for some time now, but with no results.
    If I can't get a straight answer soon, I will be forced to port the application over to Windows, a desperate measure.
    Has anyone managed to recover a server and a zone that uses ZFS filesystems for the data partitions?
    I attempted a restore of the server and then the client zone, but it appears to corrupt my ZFS file systems.
    The steps I have taken are listed below:
    Built a server and created a zone, added a ZFS filesystem to this zone and installed the EBS 7.4 client software into the zone, making the host server the EBS server.
    Completed a backup.
    Destroyed the zone and host server.
    Installed the OS and re-created a zone with the same configuration.
    Added the ZFS filesystem and made this available within the zone.
    Installed EBS and carried out a complete restore.
    Logged into the zone and installed the EBS client software then carried out a complete restore.
    After a server reload this leaves the ZFS filesystem corrupt:
    status: One or more devices could not be used because the label is missing
    or invalid. There are insufficient replicas for the pool to continue
    functioning.
    action: Destroy and re-create the pool from a backup source.
    see: http://www.sun.com/msg/ZFS-8000-5E
    scrub: none requested
    config:
    NAME        STATE     READ WRITE CKSUM
    p_1         UNAVAIL      0     0     0  insufficient replicas
      mirror    UNAVAIL      0     0     0  insufficient replicas
        c0t8d0  FAULTED      0     0     0  corrupted data
        c2t1d0  FAULTED      0     0     0  corrupted data

    I finally got a solution to the issue, thanks to a SUN tech guy rather than a member of the EBS support team.
    The whole issue revolves around the file /etc/zfs/zpool.cache, which needs to be backed up prior to carrying out a restore.
    Below is a full set of steps to recover a server using EBS 7.4 that has zones installed and uses ZFS:
    Instructions On How To Restore A Server With A Zone Installed
    Using the server's control guide, re-install the OS from CD, configuring the system disk to the original sizes; do not patch at this stage.
    Create the zpools and the ZFS file systems that existed for both the global and non-global zones.
    Carry out a restore as follows:
    If you don't have a bootstrap printout, read the backup tape to get the backup indexes:
    cd /usr/sbin/nsr
    Use scanner -B -im <device> to get the ssid number and record number, e.g.:
    scanner -B -im /dev/rmt/0hbn
    cd /usr/sbin/nsr
    Enter: ./mmrecov
    You will be prompted for the SSID number followed by the file and record number.
    All of this information is on the Bootstrap report.
    After the index has been recovered:
    Stop the backup daemons with: /etc/rc2.d/S95networker stop
    Copy the original res file to res.org and then copy res.R to res.
    Start the backup daemons with: /etc/rc2.d/S95networker start
    Now run: nsrck -L7 to reconstruct the indexes.
    You should now have your backup indexes intact and be able to perform standard restores.
    If the system is using ZFS:
    cp /etc/zfs/zpool.cache /etc/zfs/zpool.cache.org
    To restore the whole system:
    Shut down any sub-zones.
    cd /
    Run /usr/sbin/nsr/nsrmm -m to mount the tape.
    Enter: recover
    At the Recover prompt enter: force
    Now enter: add * (to restore the complete server; this will list out all the files in the backup library selected for restore)
    Now enter: recover to start the whole system recovery, and ensure the backup tape is loaded into the server.
    If the system is using ZFS:
    cp /etc/zfs/zpool.cache.org /etc/zfs/zpool.cache
    Reboot the server
    The non-global zone should now be bootable; use zoneadm -z <zonename> boot.
    Start an X session onto the non-global zone and carry out a selective restore of all the ZFS file systems. A condensed sketch of the whole sequence follows below.
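    As a recap, here is the whole sequence condensed into one hedged shell sketch. It only uses the commands quoted above, assumes NetWorker's default /nsr/res location, and uses /dev/rmt/0hbn and <zonename> as placeholders for your own tape device and zone name:

    # 1. recover the backup indexes from the bootstrap
    cd /usr/sbin/nsr
    ./scanner -B -im /dev/rmt/0hbn    # note the ssid, file and record numbers
    ./mmrecov                         # supply those numbers when prompted

    # 2. swap in the recovered resource database and rebuild the indexes
    /etc/rc2.d/S95networker stop
    mv /nsr/res /nsr/res.org          # keep the original res
    cp -rp /nsr/res.R /nsr/res
    /etc/rc2.d/S95networker start
    ./nsrck -L7

    # 3. preserve the ZFS pool cache, restore everything, put the cache back
    cp /etc/zfs/zpool.cache /etc/zfs/zpool.cache.org
    cd /
    /usr/sbin/nsr/nsrmm -m            # mount the tape
    recover                           # at the prompt: force, add *, recover
    cp /etc/zfs/zpool.cache.org /etc/zfs/zpool.cache
    reboot
    zoneadm -z <zonename> boot        # then selectively restore the zone's ZFS data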

  • Striping with ZFS and SAN(AMS500 HDS)

    Is there a way to do striping with zfs?
    I have a SAN (an HDS AMS500), and I want to do striping on LUNs from the storage. I don't want any RAID-5 (because I already have that on the disks in the HDS storage).
    I only want to stripe 2 LUNs (25GB each) that come from different controllers.
    Is striping those 2 LUNs with zfs the best way to do it?
    thanks.

    In theory, yes. So I would have expected that to work.
    Of course, zfs is supposed to be a "smart" volume manager.
    You're not supposed to have to control factors like striping and concatenation.
    You just tell it what storage is available and it works out the best way to lay it out.
    So there's always a chance it's trying to do something "smart" which doesn't turn out to be what you want.
    You might have to take this up on the opensolaris zfs forums.
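    For what it's worth, a plain stripe (RAID-0) in ZFS is just a pool built from bare top-level vdevs, with no mirror or raidz keyword. A minimal sketch, assuming the two LUNs appear as c2t0d0 and c3t0d0 (substitute your own device names from /dev/dsk):

    # each bare device becomes a top-level vdev; ZFS stripes writes across them
    zpool create datapool c2t0d0 c3t0d0

    # verify the layout: both LUNs should sit directly under the pool
    zpool status datapool

    Note there is no redundancy at the ZFS level here, so losing either LUN loses the pool; that may be acceptable given the RAID-5 protection inside the AMS500.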

  • Lucreate not working with ZFS and non-global zones

    I replied to this thread: Re: lucreate and non-global zones as to not duplicate content, but for some reason it was locked. So I'll post here... I'm experiencing the exact same issue on my system. Below is the lucreate and zfs list output.
    # lucreate -n patch20130408
    Creating Live Upgrade boot environment...
    Analyzing system configuration.
    No name for current boot environment.
    INFORMATION: The current boot environment is not named - assigning name <s10s_u10wos_17b>.
    Current boot environment is named <s10s_u10wos_17b>.
    Creating initial configuration for primary boot environment <s10s_u10wos_17b>.
    INFORMATION: No BEs are configured on this system.
    The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
    PBE configuration successful: PBE name <s10s_u10wos_17b> PBE Boot Device </dev/dsk/c1t0d0s0>.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <patch20130408>.
    Source boot environment is <s10s_u10wos_17b>.
    Creating file systems on boot environment <patch20130408>.
    Populating file systems on boot environment <patch20130408>.
    Temporarily mounting zones in PBE <s10s_u10wos_17b>.
    Analyzing zones.
    WARNING: Directory </zones/APP> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/APP-patch20130408>.
    WARNING: Device <tank/zones/APP> is shared between BEs, remapping to <tank/zones/APP-patch20130408>.
    WARNING: Directory </zones/DB> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/DB-patch20130408>.
    WARNING: Device <tank/zones/DB> is shared between BEs, remapping to <tank/zones/DB-patch20130408>.
    Duplicating ZFS datasets from PBE to ABE.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b> on <rpool/ROOT/s10s_u10wos_17b@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b@patch20130408> on <rpool/ROOT/patch20130408>.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b/var> on <rpool/ROOT/s10s_u10wos_17b/var@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b/var@patch20130408> on <rpool/ROOT/patch20130408/var>.
    Creating snapshot for <tank/zones/DB> on <tank/zones/DB@patch20130408>.
    Creating clone for <tank/zones/DB@patch20130408> on <tank/zones/DB-patch20130408>.
    Creating snapshot for <tank/zones/APP> on <tank/zones/APP@patch20130408>.
    Creating clone for <tank/zones/APP@patch20130408> on <tank/zones/APP-patch20130408>.
    Mounting ABE <patch20130408>.
    Generating file list.
    Finalizing ABE.
    Fixing zonepaths in ABE.
    Unmounting ABE <patch20130408>.
    Fixing properties on ZFS datasets in ABE.
    Reverting state of zones in PBE <s10s_u10wos_17b>.
    Making boot environment <patch20130408> bootable.
    Population of boot environment <patch20130408> successful.
    Creation of boot environment <patch20130408> successful.
    # zfs list
    NAME                                           USED  AVAIL  REFER  MOUNTPOINT
    rpool                                         16.6G   257G   106K  /rpool
    rpool/ROOT                                    4.47G   257G    31K  legacy
    rpool/ROOT/s10s_u10wos_17b                    4.34G   257G  4.23G  /
    rpool/ROOT/s10s_u10wos_17b@patch20130408      3.12M      -  4.23G  -
    rpool/ROOT/s10s_u10wos_17b/var                 113M   257G   112M  /var
    rpool/ROOT/s10s_u10wos_17b/var@patch20130408   864K      -   110M  -
    rpool/ROOT/patch20130408                       134M   257G  4.22G  /.alt.patch20130408
    rpool/ROOT/patch20130408/var                  26.0M   257G   118M  /.alt.patch20130408/var
    rpool/dump                                    1.55G   257G  1.50G  -
    rpool/export                                    63K   257G    32K  /export
    rpool/export/home                               31K   257G    31K  /export/home
    rpool/h                                       2.27G   257G  2.27G  /h
    rpool/security1                               28.4M   257G  28.4M  /security1
    rpool/swap                                    8.25G   257G  8.00G  -
    tank                                          12.9G   261G    31K  /tank
    tank/swap                                     8.25G   261G  8.00G  -
    tank/zones                                    4.69G   261G    36K  /zones
    tank/zones/DB                                 1.30G   261G  1.30G  /zones/DB
    tank/zones/DB@patch20130408                   1.75M      -  1.30G  -
    tank/zones/DB-patch20130408                   22.3M   261G  1.30G  /.alt.patch20130408/zones/DB-patch20130408
    tank/zones/APP                                3.34G   261G  3.34G  /zones/APP
    tank/zones/APP@patch20130408                  2.39M      -  3.34G  -
    tank/zones/APP-patch20130408                  27.3M   261G  3.33G  /.alt.patch20130408/zones/APP-patch20130408

    I replied to this thread: Re: lucreate and non-global zones as to not duplicate content, but for some reason it was locked. So I'll post here...
    The thread was locked because you were not replying to it.
    You were hijacking that other person's discussion from 2012 to ask your own new question.
    You have now properly asked your question, and people can pay attention to you and not confuse you with that other person.
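    For anyone comparing their own runs against this output: when lucreate does fail partway, it can leave a half-built ABE behind, and the usual Live Upgrade inspection and cleanup commands are worth knowing. A hedged sketch (the BE name is the one from this thread; substitute your own):

    lustatus                     # list boot environments and their states
    lumount patch20130408 /mnt   # mount the ABE to inspect its contents
    luumount patch20130408       # unmount it again
    ludelete patch20130408       # remove a bad ABE before retrying lucreate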

  • Solaris 10 6/06 ZFS and Zones, not quite there yet...

    I was all excited to get ZFS working in our environment, but alas, a warning appeared in the docs, which drained my excitement:
    http://docs.sun.com/app/docs/doc/817-1592/6mhahuous?a=view
    Essentially it says that ZFS should not be used for non-global zone root file systems. I was hoping to do this and make it easy: global zone root UFS, and another disk all ZFS where all non-global whole-root zones would live.
    One can only do so much with only 4 drives that require mirroring! (x4200's, not utilizing an array)
    Sigh.. Maybe in the next release (I'll assume ZFS will be supported to be 'bootable' by then...)
    Dave

    I was all excited to get ZFS working in our environment, but alas, a warning appeared in the docs, which drained my excitement:
    http://docs.sun.com/app/docs/doc/817-1592/6mhahuous?a=view
    essentially it says that ZFS should not be used for non-global zone root file systems..
    Yes. If you can live with the warning it gives (you may not be able to upgrade the system), then you can do it. The problem is that the installer packages (which get run during an upgrade) don't currently handle ZFS.
    Sigh.. Maybe in the next release (I'll assume ZFS will be supported to be 'bootable' by then...
    Certainly one of the items needed for bootable ZFS is awareness in the installer. So yes, it should be fixed by the time full support for ZFS root filesystems is released. However, last I heard, full root ZFS support was being targeted for update 4, not update 3.
    Darren
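    For reference, the layout Dave describes (UFS root in the global zone, all non-global zones on a separate ZFS pool) is mechanically straightforward; the warning above concerns upgrade support, not whether it works. A minimal sketch with hypothetical pool, disk, and zone names:

    # mirrored pool on the two non-root drives
    zpool create zonepool mirror c0t2d0 c0t3d0
    zfs create zonepool/zones

    # whole-root zone with its zonepath on the pool (zonepath must be mode 700)
    zonecfg -z web01 'create -b; set zonepath=/zonepool/zones/web01; commit'
    chmod 700 /zonepool/zones/web01
    zoneadm -z web01 install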

  • Live Upgrade fails on cluster node with zfs root zones

    We are having issues using Live Upgrade in the following environment:
    -UFS root
    -ZFS zone root
    -Zones are not under cluster control
    -System is fully up to date for patching
    We also use Live Upgrade with the exact same system configuration on other nodes, except the zones are UFS root, and Live Upgrade works fine there.
    Here is the output of a Live Upgrade:
    bash-3.2# lucreate -n sol10-20110505 -m /:/dev/md/dsk/d302:ufs,mirror -m /:/dev/md/dsk/d320:detach,attach,preserve -m /var:/dev/md/dsk/d303:ufs,mirror -m /var:/dev/md/dsk/d323:detach,attach,preserve
    Determining types of file systems supported
    Validating file system requests
    The device name </dev/md/dsk/d302> expands to device path </dev/md/dsk/d302>
    The device name </dev/md/dsk/d303> expands to device path </dev/md/dsk/d303>
    Preparing logical storage devices
    Preparing physical storage devices
    Configuring physical storage devices
    Configuring logical storage devices
    Analyzing system configuration.
    Comparing source boot environment <sol10> file systems with the file
    system(s) you specified for the new boot environment. Determining which
    file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    The device </dev/dsk/c0t1d0s0> is not a root device for any boot environment; cannot get BE ID.
    Creating configuration for boot environment <sol10-20110505>.
    Source boot environment is <sol10>.
    Creating boot environment <sol10-20110505>.
    Creating file systems on boot environment <sol10-20110505>.
    Preserving <ufs> file system for </> on </dev/md/dsk/d302>.
    Preserving <ufs> file system for </var> on </dev/md/dsk/d303>.
    Mounting file systems for boot environment <sol10-20110505>.
    Calculating required sizes of file systems for boot environment <sol10-20110505>.
    Populating file systems on boot environment <sol10-20110505>.
    Checking selection integrity.
    Integrity check OK.
    Preserving contents of mount point </>.
    Preserving contents of mount point </var>.
    Copying file systems that have not been preserved.
    Creating shared file system mount points.
    Creating snapshot for <data/zones/img1> on <data/zones/img1@sol10-20110505>.
    Creating clone for <data/zones/img1@sol10-20110505> on <data/zones/img1-sol10-20110505>.
    Creating snapshot for <data/zones/jdb3> on <data/zones/jdb3@sol10-20110505>.
    Creating clone for <data/zones/jdb3@sol10-20110505> on <data/zones/jdb3-sol10-20110505>.
    Creating snapshot for <data/zones/posdb5> on <data/zones/posdb5@sol10-20110505>.
    Creating clone for <data/zones/posdb5@sol10-20110505> on <data/zones/posdb5-sol10-20110505>.
    Creating snapshot for <data/zones/geodb3> on <data/zones/geodb3@sol10-20110505>.
    Creating clone for <data/zones/geodb3@sol10-20110505> on <data/zones/geodb3-sol10-20110505>.
    Creating snapshot for <data/zones/dbs9> on <data/zones/dbs9@sol10-20110505>.
    Creating clone for <data/zones/dbs9@sol10-20110505> on <data/zones/dbs9-sol10-20110505>.
    Creating snapshot for <data/zones/dbs17> on <data/zones/dbs17@sol10-20110505>.
    Creating clone for <data/zones/dbs17@sol10-20110505> on <data/zones/dbs17-sol10-20110505>.
    WARNING: The file </tmp/.liveupgrade.4474.7726/.lucopy.errors> contains a
    list of <2> potential problems (issues) that were encountered while
    populating boot environment <sol10-20110505>.
    INFORMATION: You must review the issues listed in
    </tmp/.liveupgrade.4474.7726/.lucopy.errors> and determine if any must be
    resolved. In general, you can ignore warnings about files that were
    skipped because they did not exist or could not be opened. You cannot
    ignore errors such as directories or files that could not be created, or
    file systems running out of disk space. You must manually resolve any such
    problems before you activate boot environment <sol10-20110505>.
    Creating compare databases for boot environment <sol10-20110505>.
    Creating compare database for file system </var>.
    Creating compare database for file system </>.
    Updating compare databases on boot environment <sol10-20110505>.
    Making boot environment <sol10-20110505> bootable.
    ERROR: unable to mount zones:
    WARNING: zone jdb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/jdb3-sol10-20110505 does not exist.
    WARNING: zone posdb5 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/posdb5-sol10-20110505 does not exist.
    WARNING: zone geodb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/geodb3-sol10-20110505 does not exist.
    WARNING: zone dbs9 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs9-sol10-20110505 does not exist.
    WARNING: zone dbs17 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs17-sol10-20110505 does not exist.
    zoneadm: zone 'img1': "/usr/lib/fs/lofs/mount /.alt.tmp.b-tWc.mnt/global/backups/backups/img1 /.alt.tmp.b-tWc.mnt/zoneroot/img1-sol10-20110505/lu/a/backups" failed with exit code 111
    zoneadm: zone 'img1': call to zoneadmd failed
    ERROR: unable to mount zone <img1> in </.alt.tmp.b-tWc.mnt>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
    ERROR: Unable to remount ABE <sol10-20110505>: cannot make ABE bootable
    ERROR: no boot environment is mounted on root device </dev/md/dsk/d302>
    Making the ABE <sol10-20110505> bootable FAILED.
    ERROR: Unable to make boot environment <sol10-20110505> bootable.
    ERROR: Unable to populate file systems on boot environment <sol10-20110505>.
    ERROR: Cannot make file systems for boot environment <sol10-20110505>.
    Any ideas why it can't mount that "backups" lofs filesystem into /.alt? I am going to try removing the lofs from the zone configuration and try again. But if that works, I still need to find a way to use LOFS filesystems in the zones while using Live Upgrade.
    Thanks

    I was able to successfully do a Live Upgrade with Zones with a ZFS root in Solaris 10 update 9.
    When attempting to do a "lumount s10u9c33zfs", it gave the following error:
    ERROR: unable to mount zones:
    zoneadm: zone 'edd313': "/usr/lib/fs/lofs/mount -o rw,nodevices /.alt.s10u9c33zfs/global/ora_export/stage /zonepool/edd313-s10u9c33zfs/lu/a/u04" failed with exit code 111
    zoneadm: zone 'edd313': call to zoneadmd failed
    ERROR: unable to mount zone <edd313> in </.alt.s10u9c33zfs>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: No such file or directory: error unmounting <rpool1/ROOT/s10u9c33zfs>
    ERROR: cannot mount boot environment by name <s10u9c33zfs>
    The solution in this case was:
    zonecfg -z edd313
    info                 ;# display current settings
    remove fs dir=/u05   ;# remove the filesystem linked to a "/global/" filesystem in the GLOBAL zone
    verify               ;# check the change
    commit               ;# commit the change
    exit
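    A hedged follow-up sketch: if the zone still needs that filesystem after the upgrade, the removed fs resource can be added back once the new boot environment is activated. The dir value is the one removed above; the special (global-zone source) path is illustrative, so substitute your real one:

    zonecfg -z edd313
    add fs
    set dir=/u05
    set special=/global/ora_export/stage   ;# global-zone path to loop-mount
    set type=lofs
    end
    verify
    commit
    exit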

  • More Major Issues with ZFS + Dedup

    I'm having more problems - this time, very, very serious ones, with ZFS and deduplication. Deduplication is basically making my iSCSI targets completely inaccessible to the clients trying to access them over COMSTAR. I have two commands right now that are completely hung:
    1) zfs destroy pool/volume
    2) zfs set dedup=off pool
    The first command I started four hours ago, and it has barely removed 10G of the 50G that were allocated to that volume. It also seems to periodically cause the ZFS system to stop responding to any other I/O requests, which in turn causes major issues on my iSCSI clients. I cannot kill or pause the destroy command, and I've tried renicing it, to no avail. If anyone has any hints or suggestions on what I can do to overcome this issue, I'd very much appreciate that. I'm open to suggestions that will kill the destroy command, or will at least reprioritize it such that other I/O requests have precedence over this destroy command.
    Thanks,
    Nick

    To add some more detail, I've been reviewing iostat and zpool iostat for a couple of hours, and am seeing some very, very strange behavior. There seem to be three distinct patterns going on here.
    The first is extremely heavy writes. Using zpool iostat, I see write bandwidth in the 15MB/s range sustained for a few minutes. I'm guessing this is when ZFS is allowing normal access to volumes and when it is actually removing some of the data for the volume I tried to destroy. This only lasts for two to three minutes at a time before progressing to the next pattern.
    The second pattern is characterized by heavy, heavy read access - several thousand read operations per second, and several MB/s bandwidth. This also lasts for five or ten minutes before proceeding to the third pattern. During this time there is very little, if any, write activity.
    The third and final pattern is characterized by absolutely no write activity (0s in both the write ops/sec and the write bandwidth columns) and very, very small read activity. By small read activity, I mean 100-200 read ops per second, and 100-200K read bandwidth per second. This lasts for 30 to 40 minutes, and then the pattern proceeds back to the first one.
    I have no idea what to make of this, and I'm out of my league in terms of ZFS tools to figure out what's going on. This is extremely frustrating because all of my iSCSI clients are essentially dead right now - this destroy command has completely taken over my ZFS storage, and it seems like all I can do is sit and wait for it to finish, which, at this rate, will be another 12 hours.
    Also, during this time, if I look at the plain iostat command, I see that the read ops for the physical disk and the actv are within normal ranges, as are asvc_t and %w. %b, however, is pegged at 99-100%.
    Edited by: Nick on Jan 4, 2011 10:57 AM
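    For anyone hitting the same dedup-destroy stall, the observations above come from commands of roughly this shape (the 5-second interval is arbitrary, and "pool" is a placeholder for your pool name):

    zpool iostat pool 5   # per-pool read/write ops and bandwidth, repeated every 5s
    iostat -xn 5          # per-device columns including actv, asvc_t, %w and %b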

  • ZFS mount points and zones

    folks,
    a little history: we've been running cluster 3.2.x with failover zones (using the containers data service) where the zoneroot is installed on a failover zpool (using HAStoragePlus). it's worked ok, but could be better; the real problems surround the lack of agents that work in this config (we're mostly an oracle shop). we've been using the joost manifests inside the zones, which are ok and have worked, but we wouldn't mind giving the oracle data services a go - and the more than a little painful patching processes in the current setup...
    we've started to look at failover applications amongst zones on the nodes, so we'd have something like node1:zone and node2:zone as potentials and the apps failing between them on 'node' failure and switchover. this way we'd actually be able to use the agents for oracle (DB, AS and EBS).
    with the current cluster we create various ZFS volumes within the pool (such as oradata) and through the zone boot resource have it mounted where we want inside the zone (in this case $ORACLE_BASE/oradata) with the global zone having the mount point of /export/zfs/<instance>/oradata.
    is there a way of achieving something like this with failover apps inside static zones? i know we can set the volume mountpoint to be what we want but we rather like having the various oracle zones all having a similar install (/app/oracle etc).
    we haven't looked at zone clusters at this stage if for no other reason than time....
    or is there a better way?
    thanks muchly,
    nelson

    i must be missing something...any ideas what and where?
    nelson
    devsun012~> zpool import Zbob
    devsun012~> zfs list|grep bob
    Zbob          56.9G  15.5G    21K  /export/zfs/bob
    Zbob/oracle   56.8G  15.5G  56.8G  /export/zfs/bob/oracle
    Zbob/oratab   1.54M  15.5G  1.54M  /export/zfs/bob/oratab
    devsun012~> zpool export Zbob
    devsun012~> zoneadm -z bob list -v
      ID NAME  STATUS   PATH            BRAND   IP
       1 bob   running  /opt/zones/bob  native  shared
    devsun013~> zoneadm -z bob list -v
      ID NAME  STATUS   PATH            BRAND   IP
      16 bob   running  /opt/zones/bob  native  shared
    devsun012~> clrt list|egrep 'oracle_|HA'
    SUNW.HAStoragePlus:6
    SUNW.oracle_server:6
    SUNW.oracle_listener:5
    devsun012~> clrg create -n devsun012:bob,devsun013:bob bob-rg
    devsun012~> clrslh create -g bob-rg -h bob bob-lh-rs
    devsun012~> clrs create -g bob-rg -t SUNW.HAStoragePlus \
    root@devsun012 > -p FileSystemMountPoints=/app/oracle:/export/zfs/bob/oracle \
    root@devsun012 > bob-has-rs
    clrs: devsun013:bob - Entry for file system mount point /export/zfs/bob/oracle is absent from global zone /etc/vfstab.
    clrs: (C189917) VALIDATE on resource bob-has-rs, resource group bob-rg, exited with non-zero exit status.
    clrs: (C720144) Validation of resource bob-has-rs in resource group bob-rg on node devsun013:bob failed.
    clrs: (C891200) Failed to create resource "bob-has-rs".
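    A hedged hint on the failure above: FileSystemMountPoints expects a matching vfstab entry, which ZFS datasets don't have. For a whole ZFS pool, SUNW.HAStoragePlus has a separate Zpools extension property that imports the pool on whichever node hosts the resource group; a sketch using the names from this thread (verify against the SC 3.2 HAStoragePlus docs before relying on it):

    devsun012~> clrs create -g bob-rg -t SUNW.HAStoragePlus \
                -p Zpools=Zbob bob-has-rs

    The datasets then mount at their own ZFS mountpoints, so mapping /app/oracle inside each zone would still need zfs set mountpoint or a lofs mount into the zone.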

  • Live upgrade - solaris 8/07 (U4) , with non-global zones and SC 3.2

    Dears,
    I need to use Live Upgrade for SC 3.2 with non-global zones, Solaris 10 U4 to Solaris 10 10/09 (latest release), and update the cluster to 3.2 U3.
    I don't know where to start; I've read lots of documents, but couldn't find one complete document covering the whole process.
    I know that upgrade for Solaris 10 with non-global zones has been supported since my Solaris 10 release, but I am not sure if it's supported with SC.
    Appreciate your help

    Hi,
    I am not sure whether this document:
    http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach
    has been on the list of docs you found already.
    If you click on the download link, it won't work. But if you use the Tools icon in the upper right-hand corner and click on attachments, you'll find the document. Its content is solely based on configurations with ZFS as root and zone root, but should have valuable information for other deployments as well.
    Regards
    Hartmut

  • Confused about ZFS filesystems created with Solaris 11 Zone

    Hello.
    Installing a blank Zone in Solaris *10* with "zonepath=/export/zones/TESTvm01" just creates one zfs filesystem:
    zfs list
    ...
    rzpool/export/zones/TESTvm01  4.62G  31.3G  4.62G  /export/zones/TESTvm01
    Doing the same steps with Solaris *11* will create more filesystems:
    zfs list
    ...
    rpool/export/zones/TESTvm05                         335M  156G    32K  /export/zones/TESTvm05
    rpool/export/zones/TESTvm05/rpool                   335M  156G    31K  /rpool
    rpool/export/zones/TESTvm05/rpool/ROOT              335M  156G    31K  legacy
    rpool/export/zones/TESTvm05/rpool/ROOT/solaris      335M  156G   310M  /export/zones/TESTvm05/root
    rpool/export/zones/TESTvm05/rpool/ROOT/solaris/var 24.4M  156G  23.5M  /export/zones/TESTvm05/root/var
    rpool/export/zones/TESTvm05/rpool/export             62K  156G    31K  /export
    rpool/export/zones/TESTvm05/rpool/export/home        31K  156G    31K  /export/home
    I don't understand why Solaris 11 is doing that. Just one FS (like in Solaris 10) would be better for my setup. I want to configure all created volumes by myself.
    Is it possible to deactivate this automatic "feature"?

    There are several reasons that it works like this, all guided by the simple idea "everything in a zone should work exactly like it does in the global zone, unless that is impractical." By having this layout we get:
    * The same zfs administrative practices within a zone that are found in the global zone. This allows, for example, compression, encryption, etc. of parts of the zone.
    * beadm(1M) and pkg(1) are able to create boot environments within the zone, thus making it easy to keep the global zone software in sync with non-global zone software as the system is updated (equivalent of patching in Solaris 10). Note that when Solaris 11 updates the kernel, core libraries, and perhaps other things, a new boot environment is automatically created (for the global zone and each zone) and the updates are done to the new boot environment(s). Thus, you get the benefits that Live Upgrade offered without the severe headaches that sometimes come with Live Upgrade.
    * The ability to have a separate /var file system. This is required by policies at some large customers, such as the US Department of Defense via the DISA STIG.
    * The ability to perform a p2v of a global zone into a zone (see solaris(5) for examples) without losing the dataset hierarchy or properties (e.g. compression, etc.) set on datasets in that hierarchy.
    When this dataset hierarchy is combined with the fact that the ZFS namespace is virtualized in a zone (a feature called "dataset aliasing"), you see the same hierarchy in the zone that you would see in the global zone. Thus, you don't have confusing output from df saying that / is mounted on / and such.
    Because there is integration between pkg, beadm, zones, and zfs, there is no way to disable this behavior. You can remove and optionally replace /export with something else if you wish.
    If your goal is to prevent zone administrators from altering the dataset hierarchy, you may be able to accomplish this with immutable zones (see zones admin guide or file-mac-profile in zonecfg(1M)). This will have other effects as well, such as making all or most of the zone unwritable. If needed, you can add fs or dataset resources which will not be subject to file-mac-profile and as such will be writable.
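    To illustrate that last point, a minimal sketch of delegating an extra dataset that stays zone-managed (and writable) even under file-mac-profile; the dataset name is hypothetical:

    # in the global zone: create the dataset to delegate
    zfs create rpool/export/zones/TESTvm05data

    # hand it to the zone as a dataset resource
    zonecfg -z TESTvm05
    add dataset
    set name=rpool/export/zones/TESTvm05data
    end
    commit
    exit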

  • Cloning a ZFS rooted zone does a copy rather than snapshot and clone?

    Solaris 10 05/08 and 10/08 on SPARC
    When I clone an existing zone that is stored on a ZFS filesystem, the system creates a copy rather than taking a ZFS snapshot and clone as the documentation suggests:
    Using ZFS to Clone Non-Global Zones and Other Enhancements
    Solaris 10 6/06 Release: When the source zonepath and the target zonepath both reside on ZFS and are in the same pool,
    zoneadm clone now automatically uses the ZFS clone feature to clone a zone. This enhancement means that zoneadm
    clone will take a ZFS snapshot of the source zonepath and set up the target zonepath.
    Currently I have a ZFS root pool for the global zone; the boot environment is s10u6:
    rpool                             10.4G  56.5G    94K  /rpool
    rpool/ROOT                        7.39G  56.5G    18K  legacy
    rpool/ROOT/s10u6                  7.39G  56.5G  6.57G  /
    rpool/ROOT/s10u6/zones             844M  56.5G    27K  /zones
    rpool/ROOT/s10u6/zones/moetutil    844M  56.5G   844M  /zones/moetutil
    My first zone is called moetutil and is up and running. I create a new zone ready to clone the original one:
    -bash-3.00# zonecfg -z newzone 'create; set autoboot=true; set zonepath=/zones/newzone; add net; set address=192.168.0.10; set physical=ce0; end; verify; commit; exit'
    -bash-3.00# zoneadm list -vc
      ID NAME      STATUS      PATH             BRAND   IP
       0 global    running     /                native  shared
       - moetutil  installed   /zones/moetutil  native  shared
       - newzone   configured  /zones/newzone   native  shared
    Now I clone it:
    -bash-3.00# zoneadm -z newzone clone moetutil
    Cloning zonepath /zones/moetutil...
    I'm expecting to see:
    -bash-3.00# zoneadm -z newzone clone moetutil
    Cloning snapshot rpool/ROOT/s10u6/zones/moetutil@SUNWzone1
    Instead of copying, a ZFS clone has been created for this zone.
    What am I missing?
    Thanks
    Mark

    Hi Mark,
    Sorry, I don't have an answer but I'm seeing the exact same behavior - also with S10u6. Please let me know if you get an answer.
    Thanks!
    Dave
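    As a hedged aside while this is being sorted out: the documented fast path is just a ZFS snapshot plus clone of the source zonepath dataset, so it can be approximated by hand to confirm snapshot/clone works in the pool (dataset names from this thread; the snapshot name is arbitrary):

    zfs snapshot rpool/ROOT/s10u6/zones/moetutil@clonesrc
    zfs clone rpool/ROOT/s10u6/zones/moetutil@clonesrc rpool/ROOT/s10u6/zones/newzone

    This only copies the bits; zoneadm clone also resets the new zone's identity (sys-unconfig style), so treat this as an experiment, not a supported substitute for the clone subcommand.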

  • Start-up / shut-down (and other) problems with ZFS over iSCSI

    Hi,
    I've had a limited time in which to try some concepts on an evaluation x4500 server. This is unfortunately my last day, so by the time anyone can reply I will probably not be able to do further tests, but maybe someone will be able to reproduce the issues.
    I'm using the evaluation server to export some iSCSI targets, and I'm connecting to them from my laptop, which has a fresh installation of SXDE 01/08. I was able to attach to the targets with
    iscsiadm add static-config \
    iqn.1986-03.com.sun:02:f4281081-c3fc-e448-c8f0-943d8861c9e8,192.168.15.62
    iscsiadm add static-config \
    iqn.1986-03.com.sun:02:371c6fe7-f0c8-e02c-c865-baef90fb71ce,192.168.15.62
    iscsiadm add static-config \
    iqn.1986-03.com.sun:02:ee74d5de-3046-6295-f35e-afe60e13db23,192.168.15.62
    iscsiadm modify discovery -s enable
    This made the appropriate entries appear in /dev/dsk so I could then set up a simple zpool and zfs filesystem with:
    zpool create -m none dPool c3t010000144FA70C1400002A0047C2BF73d0 \
    c3t010000144FA70C1400002A0047C2BF81d0 c3t010000144FA70C1400002A0047C2BF8Bd0
    zfs create -o mountpoint=/data -o sharenfs=on dPool/data
    The general concept I'm testing is having a ZFS-based server using an IP SAN as a growable source of storage, and making the data available to clients over NFS/CIFS or other services. In principle this solution should also allow failover to another server, since all the ZFS data and metadata is in the IP SAN, not on the server. Although not done in this example it should also be possible to run raidz across multiple iSCSI disk arrays. However, it's not been a bed of roses. I've had a lot of errors in dmesg like the following, which I think are causing zfs/zpool commands to stall at times:
    Feb 28 14:30:39 F4060 iscsi: [ID 866572 kern.warning] WARNING: iscsi connection(ffffff014f6b6b78) protocol error - received an unsupported opcode:0x41
    Feb 28 14:30:41 F4060 iscsi: [ID 158826 kern.warning] WARNING: iscsi connection(10) login failed - failed to receive login response
    Feb 28 14:30:41 F4060 scsi_vhci: [ID 734749 kern.warning] WARNING: vhci_scsi_reset 0x1
    Feb 28 14:30:41 F4060 iscsi: [ID 339442 kern.notice] NOTICE: iscsi connection failed to set socket option TCP_NODELAY, SO_RCVBUF or SO_SNDBUF
    Feb 28 14:30:41 F4060 iscsi: [ID 933263 kern.notice] NOTICE: iscsi connection(13) unable to connect to target iqn.1986-03.com.sun:02:ee74d5de-3046-6295-f35e-afe60e13db23
    Feb 28 14:30:41 F4060 iscsi: [ID 339442 kern.notice] NOTICE: iscsi connection failed to set socket option TCP_NODELAY, SO_RCVBUF or SO_SNDBUF
    Feb 28 14:30:41 F4060 iscsi: [ID 933263 kern.notice] NOTICE: iscsi connection(7) unable to connect to target iqn.1986-03.com.sun:02:f4281081-c3fc-e448-c8f0-943d8861c9e8
    Does anyone know why a Solaris iSCSI target would send an unsupported opcode (0x41) to a Solaris iSCSI initiator? Surely they should be talking the same language!
    The main problems however are with shutdown and start-up. On occasions, I suspect that the ordering of ZFS, iSCSI and network services gets a bit out of sync. On one occasion the laptop even refused to complete the shutdown because it was reporting a continuous stream of console messages like
    Feb 27 18:26:37 F4060 iscsi: [ID 933263 kern.notice] NOTICE: iscsi connection(13) unable to connect to target iqn.1986-03.com.sun:02:ee74d5de-3046-6295-f35e-afe60e13db23
    Feb 27 18:26:37 F4060 iscsi: [ID 933263 kern.notice] NOTICE: iscsi connection(10) unable to connect to target iqn.1986-03.com.sun:02:371c6fe7-f0c8-e02c-c865-baef90fb71ce
    Feb 27 18:26:37 F4060 iscsi: [ID 933263 kern.notice] NOTICE: iscsi connection(7) unable to connect to target iqn.1986-03.com.sun:02:f4281081-c3fc-e448-c8f0-943d8861c9e8
    I also get these on start-up, where it looks like ZFS tries to load the zpool configuration before iSCSI has found the disks, and even worse, iSCSI is starting up before nwamd has time to do its network auto-magic, and complains that the devices are unavailable.
    If these problems sorted themselves out after everything came up, I wouldn't really mind some temporary complaints in the log file, but what I get after a reboot is a working zpool but an unmounted ZFS filesystem! Here is what I have today:
    bash-3.2# zpool status
      pool: dPool
    state: ONLINE
    scrub: scrub completed with 0 errors on Thu Feb 28 14:33:43 2008
    config:
            NAME                                     STATE     READ WRITE CKSUM
            dPool                                    ONLINE       0     0     0
              c3t010000144FA70C1400002A0047C2BF73d0  ONLINE       0     0     0
              c3t010000144FA70C1400002A0047C2BF81d0  ONLINE       0     0     0
              c3t010000144FA70C1400002A0047C2BF8Bd0  ONLINE       0     0     0
    errors: No known data errors
    bash-3.2# zfs list
    NAME         USED  AVAIL  REFER  MOUNTPOINT
    dPool        480M  28.9G     1K  none
    dPool/data   480M  28.9G   480M  /data
    This all looks fine, and you can see that I was even able to scrub the pool data with no problems. But where are the 480MB of data I have put in the /data mountpoint:
    bash-3.2# ls /data
    bash-3.2# df -h /data
    Filesystem             size   used  avail capacity  Mounted on
    /dev/dsk/c1d0s0         15G   4.4G    11G    30%    /
    As you can see, /data is unmounted, causing df to revert to the / filesystem containing the empty /data mountpoint, instead of showing the zpool mount.
    Since zfs is supposed to take care of its own mounts rather than using vfstab, I can't use "mount /data" to force this to mount. The only workaround I've found is to export and import the zpool. Then I get the filesystem to reappear:
    bash-3.2# df -h /data
    Filesystem             size   used  avail capacity  Mounted on
    dPool/data              29G   480M    29G     2%    /data
    Does anyone know if these are known issues with snv_79b, and is there a fix available or in the works?
    TIA,
    Graham
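    A hedged note for anyone finding this later: when a pool imports cleanly but its filesystems stay unmounted (as above), a lighter workaround than a full export/import cycle is often to ask ZFS to redo its mounts once the iSCSI devices are present:

    zfs mount -a   # mount every dataset with a mountpoint set
    zfs share -a   # re-share anything with sharenfs=on

    Whether this helps depends on why the mount was skipped at boot, so treat it as a thing to try rather than a guaranteed fix.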


  • Integrating Oracle EBS and ApEx Application with Responsibilities and SSO

    Good day all.
    I am looking forward to getting somebody's help; the trouble I am facing is described below:
    a) I am currently working on SSO with EBS. I mean, my users can connect and work perfectly.
    b) ApEx is configured as an Application Partner with SSO, and the application we built (it's called PR-Auto)
    is working well under the SSO platform. I mean I am able to login using user TEST and password TEST in
    both applications (EBS and PR-Auto).
    c) The thing is that I need to call PR-Auto from one responsibility in EBS;
    Following my setup for the responsibility:
    - I have created a function:
    Name: APEX_FA_PR
    Properties:
    Function type: SSWA plsql
    Web HTML :apps.apex_launcher.launch_fa_pr
    Web Host agent: pls/apex
    - I have created a menu, application and responsibility using the function APEX_FA_PR.
    - I have created a launcher package:
    create or replace package apex_launcher is
      procedure launch_fa_pr;
    end apex_launcher;

    create or replace package body apex_launcher as
      procedure launch_fa_pr as
      begin
        /* 110 is the ID of PR-Auto, my app */
        /* 5 is my home page */
        f(p => '110:5');
      end;
    end apex_launcher;
    d) New responsibility shows on EBS menu page.
    e) Click on responsibility, and the page shows 'redirecting to login server for authentication', but
    nothing happens, page goes blank with this url:
    http://fahorromex37.fahorro.com.mx:8004/pls/apex/wwv_flow_custom_auth_sso.process_success?urlc=v1.2~42
    03F9A8A1D696097BEA96499E6B6845E80C14A56DF724C3FFF879578FC734C5E1DEEA9129A4117E62A3676A409528E8EB927AA55
    0EA7B208C34F5A3FDB4472679EDE448F8971966BE9BADD22207FE90BDBA2800E6529F3967A18DEC76DCC17DE21D96A65CA2C424
    319F159CC78ED78E8B99F69F1BA8297A1EECF6AD137A6C3896E1C4E8D5F93874A9A08887D3F95058D33F667D7B785FF0A065B53
    891B8B393DFD24530BD0720150F05DE63F0CD5AFD86F0267BAF4C9CAE8C5AA693B4E488B3776BF43450FD412167B402C962BABE
    A54707043AFA6FBB168B29EDB3BE120FFE0C30683D53283B036E781ABF1A5F7374ADF83463D57D2EE958765B0501CE2B0F4E3DF
    24845A54A1CF02526FA39EF60644ED5A0D9D2A05EBFAD3BD01007D0817135989A4B97D68C92C6E2BA767CFDB0AF188054024BB1
    EFFA7DEC8699BBA7485A349D87BA1C15475927E52110DF56FCC3FD560D2CBBA1C0D7D9D3ADFCDB975CD2
    the address of my application pr-auto is http://fahorromex37.fahorro.com.mx:8004/pls/apex/f?p=110
    f) DBA teams follow instructions from the following documentation
    "Integrating Oracle E-Business Suite Release 11i with Oracle Internet Directory and Oracle Single Sign-On"
    and "Note 261914.1 Integrating Oracle E-Business Suite Release 11i with Oracle Internet Directory and
    Oracle Single Sign-On"
    g) We are using:
    DB: Oracle9i Enterprise Edition Release 9.2.0.6.0 - Production
    SO: Linux 2.6.9-42.ELsmp
    ApEx: 3.0.1.00.07
    Any help will be greatly appreciated.
    J.O.

    Many Thanks Daniel for your prompt reply.
    Tried to understand the white paper and your thread, but I am still facing a problem: I am able to call the ApEx page, but now I want to pass the session Id, which is where I am stuck.
    My three functions:
    CREATE OR REPLACE FUNCTION SYMAPEX.apex_authorise (
      p_username IN VARCHAR2
    , p_password IN VARCHAR2) RETURN BOOLEAN
    AS
    BEGIN
      IF apex_validate_hash (p_username, p_password) THEN
        RETURN TRUE;
      END IF;
      RETURN (FND_WEB_SEC.validate_login@VCSDEV2_QA (p_username, p_password) = 'Y');
    END apex_authorise;

    CREATE OR REPLACE FUNCTION SYMAPEX.apex_generate_hash (
      p_string IN VARCHAR2
    , p_offset IN NUMBER DEFAULT 0) RETURN VARCHAR2
    IS
    BEGIN
      IF p_string IS NULL THEN
        RETURN NULL;
      END IF;
      RETURN RAWTOHEX(UTL_RAW.cast_to_raw(
        DBMS_OBFUSCATION_TOOLKIT.MD5(input_string => p_string || ':' ||
          TO_CHAR(SYSDATE - (p_offset/24*60*60), 'YYYYMMDD HH24MISS'))));
    END apex_generate_hash;

    CREATE OR REPLACE FUNCTION SYMAPEX.apex_validate_hash (
      p_string IN VARCHAR2
    , p_hash IN VARCHAR2
    , p_delay IN NUMBER DEFAULT 5) RETURN BOOLEAN
    IS
    BEGIN
      FOR i IN 0..p_delay LOOP
        IF p_hash = apex_generate_hash (p_string, i) THEN RETURN TRUE; END IF;
      END LOOP;
      RETURN FALSE;
    END apex_validate_hash;
    My launch procedure:
    CREATE OR REPLACE PACKAGE BODY OAE_PKG1 AS
      PROCEDURE LaunchOAE1 (application IN NUMBER DEFAULT 101
      , page IN NUMBER DEFAULT 111
      , request IN VARCHAR2 DEFAULT NULL
      , item_names IN VARCHAR2 DEFAULT NULL
      , item_values IN VARCHAR2 DEFAULT NULL)
      AS
      BEGIN
        OWA_UTIL.mime_header('text/html', false);
        OWA_COOKIE.send
          (name => 'APEX_APPS_' || application,
           value => FND_GLOBAL.user_name || ':' || apex_generate_hash@QA_VCSDEV2(FND_GLOBAL.user_name),
           domain => '.orvcsd01.symprod',
           path => '/');
        OWA_UTIL.redirect_url('http://orvcsd01.symprod.com:7780' || '/pls/apex/f?p=' || application || ':' || page || '::' || request || ':::' || item_names || ':' || item_values);
      END LaunchOAE1;
    END OAE_PKG1;
    My on-load, before-header process:
    DECLARE
      c OWA_COOKIE.cookie;
      a wwv_flow_global.vc_arr2;
    BEGIN
      c := OWA_COOKIE.get('APEX_APPS_101');
      a := htmldb_util.string_to_table(c.vals(1));
      :P111_USERNAME := a(1);
      :P111_PASSWORD := a(2);
      IF :P111_PASSWORD IS NOT NULL THEN
        wwv_flow_custom_auth_std.login(
          P_UNAME      => :P111_USERNAME,
          P_PASSWORD   => :P111_PASSWORD,
          P_SESSION_ID => v('APP_SESSION'),
          P_FLOW_PAGE  => :APP_ID || ':111');
      END IF;
    END;
    I am doing custom authentication and calling the apex_authorise function there.
    Although I am able to call ApEx and validate the application server password, the moment I try using cookies to pass my application session details on to ApEx (so that users would not have to log in twice), I am getting the error.
    Second question:
    Do we have any other method of passing the session to ApEx from the application server, other than cookies?
    Please suggest.
    Thanks.
    Ravijeet

  • Solaris 10 zone configuration with sysidcfg and dhcp and hostname

    Hi
    Excuse me if I look like a n00b... it's probably because I'm a n00b.
    I've been struggling in the dark for more than 2 days now and I'm wondering if I'm thinking about this all wrong...
    I have stand-alone server where I need to run zones. I want to create zones and automagically configure them at boot (read: by running a script). So here's what I need...
    A zone
    starting from unconfigured state
    whose hostname is not the same as the zone name
    using corporate DHCP to get its IP address
    with DNS config coming from the DHCP server
    registering its address the DNS
    with a preconfigured root password
    (I don't own the corporate DHCP or DNS servers, I can't put my own DHCP or DNS servers on the network.)
    I would like to create the zone, throw some config at it, then boot the zone and walk away. I am using zones with exclusive-IP. I can construct the zones and manually configure them once they're started to have DHCP, my own name, a registered IP address with DNS and everything else I have specified above. But I don't want to do it manually...
    Sysidcfg seems to do some of what I want, but not entirely.
    In sysidcfg I can set the root_password, the primary interface using DHCP, and the DNS server. I can't set a hostname in sysidcfg AND configure it for DHCP. So the hostname is not what I want it to be after the zone is started and ready to go. The DHCP server is providing the DNS configuration; Solaris does not seem to honour it, but I'll ignore that for the moment.
    I have tried various combinations of using sysidcfg, /etc/nodename, /etc/hostname.<interface> and /etc/dhcp.<interface>, but I can't find any combination that actually works.
    I can write to the zone storage's /etc/nodename to set the nodename; that works. But it does not match the DHCP address, so I get prompted for a new name service because it can't find a DNS entry for the name.
    I can write to the zone storage's /etc/hostname.<interface> and /etc/dhcp.<interface> (to get the system to register its name with the DNS server after getting its DHCP address), but then I get a system with no root password and no DNS configuration, even though they are set in the sysidcfg file.
    I can write a script that gets part of the way using sysidcfg and /etc/... files, then boots the zone and then runs a bunch of voodoo via zlogin commands to fix all the stuff that couldn't be done 'properly', but that's not a 'boot and walk away' environment. I can write a script that uses sysidcfg and hacks around with other files in /etc (like nsswitch.conf, resolv.conf), but that just feels like a dirty hack to fix something that wasn't done properly in the first place.
    So where am I going wrong and how do I do it right (within the constraints defined)? Why can't I configure, boot and walk away?
    Thanks
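    For context, a minimal sysidcfg of the shape being described; the root_password value is a placeholder for a crypt(3C) hash, and the name_service block is the part that, per this discussion, may be ignored when DHCP supplies DNS:

    system_locale=C
    terminal=vt100
    timezone=US/Eastern
    security_policy=NONE
    root_password=<crypted-hash>
    network_interface=PRIMARY {dhcp protocol_ipv6=no}
    name_service=DNS {domain_name=example.com
                      name_server=192.0.2.53}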

    Thanks abrante
    Thanks for your response!
    I don't think the config is messed up after the installation. I think the installation is fine, it's just not what I want :-)
    I'm trying to decouple the zonename from the system name and get DNS registrations working. After installation, a DHCP client can get its hostname from DNS, but I'm trying to do it the other way around. I want the DHCP client to specify its own hostname, get an address from the DHCP server and then register its hostname with DNS. If the system gets its name from DNS/DHCP, then I have to configure those to provide the system name, and I don't own the DHCP/DNS infrastructure. These zones are for a development/QA environment, so we create and reconfigure these frequently. Hence the need to specify the system name within the zone and register that name in the DNS.
    I have tried fiddling with the PARAM_REQUEST_LIST but it does not seem to be working as I expect. :-$ Removing 12 did not help with setting the hostname from the system. DNS does not have a registered name for this system anyway, so even if it tried to get a name for this system, it would get nothing.
    I also do want DHCP to change the DNS server and domain name, but this does not happen even though my dhcpagent includes 6 and 15 in the PARAM_REQUEST_LIST. I still have to set them in the sysidcfg file because they are always ignored in Solaris (S10u8 with 10_Recommended 30-Jul-2010).
    As stated, I know I can hack around with the system after it has booted. But I'm trying to configure the system before it starts and let it take care of itself and not have to touch it. Frankly I'm surprised that sysidcfg does not allow you to set a hostname when you are using DHCP, that the default DHCP configuration does not register the system name with the DNS server, and that the DNS config from the DHCP response is ignored. Even a sys-unconfiged system requires DNS configuration during initial boot, when I know that the DHCP response contains DNS information.
    FYI: Windows systems using DHCP work as expected in this respect by default, i.e. set system name, use DHCP --> system gets address from corporate DHCP, DNS settings are set from DHCP information, DNS registration is made for the system name.
    I'm working around this at the moment... I call my zone by the system name I want, I hardcode the DNS settings in the sysidcfg file and I create the hostname.<nic> and dhcp.<nic> files in the zone storage to get the system to register its name with DNS, then boot.
    Edited by: cydonian on Aug 19, 2010 7:45 PM

  • Best Practices to configure the connectivity with Bank and EBS

    Hi All,
    We are working on a requirement where Middleware(SOA) has to read the file from Oracle E-biz server and send it to BANK.
    For the connectivity setup between Oracle E-biz and Middleware (SOA), and between Middleware (SOA) and the BANK, do we need to use oracle as the user, or can we use any user to communicate between E-biz and the Bank?
    Could you please share best practices/documents if you have any on how we should establish the connectivity and set up the workflow with the Bank and E-Biz.
    Thanks & Regards
    Narendra

    Hi Narendra,
    Oracle User and the FTP user are 2 different users.
    I'm assuming you'll be reading the file from R12 through File Adapter and writing it to Bank using FTP Adapter.
    The Oracle user is able to log in to R12, do some operations, submit some concurrent programs/requests based on responsibilities, and generate the file to be transferred (in my case it was generated by running a Concurrent Request). The file so generated should be placed at a location from where the File Adapter can read it within the BPEL process. Now to read the file, the user that is used is a SOA server user (again, different from the R12 user). This is the same user that you use to log in to your SOA server physical box. Hence, to be able to read the file, your file should have appropriate privileges (we set that as 777) so that it can be read by the SOA process (using the SOA user).
    The FTP user, on the other hand, is the user that allows connection to the Bank's FTP server. This has absolutely no connection with R12. The Bank hosting the FTP server must give you the FTP user details that you'll use inside your FTP JNDI configuration on WebLogic. When you deploy and run your process (you don't deploy the adapter), it picks up the connection details from the FTP JNDI properties that you defined in WebLogic.
    Hence both users can be different, and I don't think any best practices are required or exist for this.
    Regards,
    Neeraj Sehgal
