Mount options for ZFS filesystem on Solaris 10

Do you have any recommendations
about mount options for SAP on Oracle
with the data on a ZFS filesystem?
We also need the recommended block (record) sizes.
We assume that the filesystem holding the datafiles should use an 8 kB block size
and the one holding offline redo logs the default (128 kB).
But what about the ONLINE REDO LOGS?
Best regards
Andy
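
For reference, a minimal sketch of how the per-dataset record sizes above could be set (the pool/dataset names oradata, oralog and oraarch are hypothetical; the values only mirror the assumptions in the question, not a confirmed recommendation):

  # 8 kB records for the Oracle datafile filesystems
  zfs set recordsize=8k oradata/sapdata
  # leave the offline redo / archive log filesystem at the 128 kB default
  zfs inherit recordsize oraarch/saparch
  # online redo logs are written sequentially; 128 kB is often suggested, but verify this
  zfs set recordsize=128k oralog/origlog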

Sun Czech installed new production HW for a Czech customer with ZFS filesystems holding the data, redo and archive log files.
Now we have a performance problem, and currently there is no SAP recommendation
for the ZFS file system.
The new HW, which by benchmark has about twice the power, shows worse response times than the
old hardware.
a) There is a bug in Solaris 10 - ZFS buffers, once allocated, are not released
    (generally we do not want to use buffering at all, to prevent double
     buffering).
b) The ZFS buffers take about 20 GB of the 32 GB total memory on the DB server,
and we are not able to define a large shared pool and DB cache. (It may be possible
to set a parameter in /etc/system to reduce the maximum size of the ZFS buffers to e.g. 4 GB.)
c) We are looking for proven ZFS mount options to enable asynchronous/concurrent I/O for the database filesystems.
d) There is no proven, clear answer on support for ZFS/Solaris/Oracle/SAP.
SAP says it is an Oracle problem, Oracle no longer certifies filesystems as of January 2007
and says to ask your OS provider, and Sun looks happy, but performance
keeps going down, which is not so funny for a system with a 1 TB DB growing by over 30 GB
per month.
Andy
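
The /etc/system tunable mentioned in point b) would be a sketch along these lines (the 4 GB value is only an example; size it to leave room for the SGA):

  * /etc/system - cap the ZFS ARC (file cache) at 4 GB, value in bytes
  set zfs:zfs_arc_max = 4294967296

A reboot is needed for /etc/system changes to take effect.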

Similar Messages

  • Mounting /tmp as a filesystem on Solaris 10?

    Hi,
    Does anyone know if it's possible to mount /tmp as a filesystem in Solaris 10? My server is running Veritas Volume Manager 5.0 and the boot drive is encapsulated. After I change vfstab to mount /tmp as a standard filesystem, the server goes into single user mode whenever I reboot. It complains that it is unable to find the volume device for the /tmp filesystem. I also notice that the diskgroup where the /tmp volume resides is not imported so it seems like Veritas is not initialized at this point.
    On a side note, I have been able to mount /tmp as a filesystem in the past in Solaris 9.
    Any help would be appreciated.
    Thanks.

    dxchea wrote:
    Hi,
    Does anyone know if it's possible to mount /tmp as a filesystem in Solaris 10?
    That's the default. I'm guessing you mean to mount it as a UFS filesystem instead?
    My server is running Veritas Volume Manager 5.0 and the boot drive is encapsulated. After I change vfstab to mount /tmp as a standard filesystem, the server goes into single user mode whenever I reboot. It complains that it is unable to find the volume device for the /tmp filesystem. I also notice that the diskgroup where the /tmp volume resides is not imported so it seems like Veritas is not initialized at this point.
    How did you create the filesystem you want to use for /tmp? What does your vfstab entry look like? Almost certainly the error is there.
    On a side note, I have been able to mount /tmp as a filesystem in the past in Solaris 9.
    /tmp is a filesystem by default on all versions of Solaris. I think you're trying to say something else here.
    Darren
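
    For illustration, the kind of vfstab entry being asked about might look like this (the Veritas disk group and volume names are hypothetical):

      #device to mount          device to fsck             mount point  FS type  fsck pass  mount at boot  options
      /dev/vx/dsk/tmpdg/tmpvol  /dev/vx/rdsk/tmpdg/tmpvol  /tmp         ufs      2          yes            -

    If the disk group is not yet imported at that point in boot, the mount fails and the system drops to single-user mode, which matches the symptom described above.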

  • DskPercent not returned for ZFS filesystems?

    Hello.
    I'm trying to monitor the space usage of some ZFS filesystems on a Solaris 10 10/08 (137137-09) Sparc system with SNMP. I'm using the Systems Management Agent (SMA) agent.
    To monitor the stuff, I added the following to /etc/sma/snmp/snmpd.conf and restarted svc:/application/management/sma:default:
    # Bug in SMA?
    # Reporting - NET-SNMP, Solaris 10 - has a bug parsing config file for disk space.
    # -> http://forums.sun.com/thread.jspa?threadID=5366466
    disk /proc 42%  # Dummy value; will incorrectly be ignored...
    disk / 5%
    disk /tmp 10%
    disk /apps 4%
    disk /data 3%
    Now I'm checking what I get via SNMP:
    --($ ~)-- snmpwalk -v2c -c public 10.0.1.26 dsk
    UCD-SNMP-MIB::dskIndex.1 = INTEGER: 1
    UCD-SNMP-MIB::dskIndex.2 = INTEGER: 2
    UCD-SNMP-MIB::dskIndex.3 = INTEGER: 3
    UCD-SNMP-MIB::dskIndex.4 = INTEGER: 4
    UCD-SNMP-MIB::dskPath.1 = STRING: /
    UCD-SNMP-MIB::dskPath.2 = STRING: /tmp
    UCD-SNMP-MIB::dskPath.3 = STRING: /apps
    UCD-SNMP-MIB::dskPath.4 = STRING: /data
    UCD-SNMP-MIB::dskDevice.1 = STRING: /dev/md/dsk/d200
    UCD-SNMP-MIB::dskDevice.2 = STRING: swap
    UCD-SNMP-MIB::dskDevice.3 = STRING: apps
    UCD-SNMP-MIB::dskDevice.4 = STRING: data
    UCD-SNMP-MIB::dskMinimum.1 = INTEGER: -1
    UCD-SNMP-MIB::dskMinimum.2 = INTEGER: -1
    UCD-SNMP-MIB::dskMinimum.3 = INTEGER: -1
    UCD-SNMP-MIB::dskMinimum.4 = INTEGER: -1
    UCD-SNMP-MIB::dskMinPercent.1 = INTEGER: 5
    UCD-SNMP-MIB::dskMinPercent.2 = INTEGER: 10
    UCD-SNMP-MIB::dskMinPercent.3 = INTEGER: 4
    UCD-SNMP-MIB::dskMinPercent.4 = INTEGER: 3
    UCD-SNMP-MIB::dskTotal.1 = INTEGER: 25821143
    UCD-SNMP-MIB::dskTotal.2 = INTEGER: 7150560
    UCD-SNMP-MIB::dskTotal.3 = INTEGER: 0
    UCD-SNMP-MIB::dskTotal.4 = INTEGER: 0
    UCD-SNMP-MIB::dskAvail.1 = INTEGER: 13584648
    UCD-SNMP-MIB::dskAvail.2 = INTEGER: 6471520
    UCD-SNMP-MIB::dskAvail.3 = INTEGER: 0
    UCD-SNMP-MIB::dskAvail.4 = INTEGER: 0
    UCD-SNMP-MIB::dskUsed.1 = INTEGER: 11978284
    UCD-SNMP-MIB::dskUsed.2 = INTEGER: 679040
    UCD-SNMP-MIB::dskUsed.3 = INTEGER: 0
    UCD-SNMP-MIB::dskUsed.4 = INTEGER: 0
    UCD-SNMP-MIB::dskPercent.1 = INTEGER: 47
    UCD-SNMP-MIB::dskPercent.2 = INTEGER: 9
    UCD-SNMP-MIB::dskPercent.3 = INTEGER: 0
    UCD-SNMP-MIB::dskPercent.4 = INTEGER: 0
    UCD-SNMP-MIB::dskPercentNode.1 = INTEGER: 9
    UCD-SNMP-MIB::dskPercentNode.2 = INTEGER: 0
    UCD-SNMP-MIB::dskPercentNode.3 = INTEGER: 0
    UCD-SNMP-MIB::dskPercentNode.4 = INTEGER: 0
    UCD-SNMP-MIB::dskErrorFlag.1 = INTEGER: noError(0)
    UCD-SNMP-MIB::dskErrorFlag.2 = INTEGER: noError(0)
    UCD-SNMP-MIB::dskErrorFlag.3 = INTEGER: noError(0)
    UCD-SNMP-MIB::dskErrorFlag.4 = INTEGER: noError(0)
    UCD-SNMP-MIB::dskErrorMsg.1 = STRING:
    UCD-SNMP-MIB::dskErrorMsg.2 = STRING:
    UCD-SNMP-MIB::dskErrorMsg.3 = STRING:
    UCD-SNMP-MIB::dskErrorMsg.4 = STRING:
    As expected, dskPercent.1 and dskPercent.2 (i.e. / and /tmp) returned good values. But why did Solaris/SNMP/??? return 0 for dskPercent.3 (/apps) and dskPercent.4 (/data)? Those directories are on two ZFS filesystems which are on separate zpools:
    --($ ~)-- zpool list
    NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
    apps  39.2G  20.4G  18.9G    51%  ONLINE  -
    data   136G   121G  15.2G    88%  ONLINE  -
    --($ ~)-- zfs list apps data
    NAME   USED  AVAIL  REFER  MOUNTPOINT
    apps  20.4G  18.3G    20K  /apps
    data   121G  13.1G   101K  /data
    Or is it supposed to be that way? I'm pretty much confused, because I found some blog posting by a guy called asyd at http://sysadmin.asyd.net/home/en/blog/asyd/zfs+snmp. Copying from there:
    snmpwalk -v2c -c xxxx katsuragi.global.asyd.net UCD-SNMP-MIB::dskTable
    UCD-SNMP-MIB::dskPath.1 = STRING: /
    UCD-SNMP-MIB::dskPath.2 = STRING: /home
    UCD-SNMP-MIB::dskPath.3 = STRING: /data/pkgsrc
    UCD-SNMP-MIB::dskDevice.1 = STRING: /dev/dsk/c1d0s0
    UCD-SNMP-MIB::dskDevice.2 = STRING: data/home
    UCD-SNMP-MIB::dskDevice.3 = STRING: data/pkgsrc
    UCD-SNMP-MIB::dskTotal.1 = INTEGER: 1017935
    UCD-SNMP-MIB::dskTotal.2 = INTEGER: 0
    UCD-SNMP-MIB::dskTotal.3 = INTEGER: 0
    UCD-SNMP-MIB::dskAvail.1 = INTEGER: 755538
    UCD-SNMP-MIB::dskAvail.2 = INTEGER: 0
    UCD-SNMP-MIB::dskAvail.3 = INTEGER: 0
    UCD-SNMP-MIB::dskPercent.1 = INTEGER: 21
    UCD-SNMP-MIB::dskPercent.2 = INTEGER: 18
    UCD-SNMP-MIB::dskPercent.3 = INTEGER: 5
    What I find confusing are his dskPercent.2 and dskPercent.3 outputs - for him, dskPercent is returned for what seem to be directories on ZFS filesystems.
    Because of that I'm wondering how it is supposed to be - should Solaris return dskPercent values for ZFS?
    Thanks a lot,
    Alexander

    I don't have the ability to test out my theory, but I suspect that you are using the default mount created for the zpools you've created (apps & data) as opposed to specific ZFS file systems, which is what the asyd blog shows.
    Remember, the elements reported on in the asyd blog ARE zfs file systems; they are not just directories. They are indeed mountpoints, and it is reporting the utilization of those zfs file systems in the pool ("data") on which they are constructed. In the case of /home, the administrator has specifically set the mountpoint of the ZFS file system to be /home instead of the default /data/home.
    To test my theory, instead of using your zpool default mount point, try creating a zfs file system under each of your pools and using that as the entry point for your application to write data into the zpools. I suspect you will be rewarded with the desired result: reporting of "disk" (really, pool) percent usage.
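
    A minimal sketch of that suggestion, assuming the pool names "apps" and "data" from your output (the dataset names appdata and content are made up):

      zfs create apps/appdata
      zfs set mountpoint=/apps/appdata apps/appdata
      zfs create data/content
      zfs set mountpoint=/data/content data/content

    Point the application at those mountpoints and add them to snmpd.conf in place of the pool-level mounts.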

  • What is the best way to backup ZFS filesystem on solaris 10?

    Normally, in a Linux environment, I'd use mondorescue to create images (full & incremental) so they can be easily restored (in full, or individual files/folders) to a similar new server in case of disaster.
    I'd like to know the best way to back up a ZFS filesystem to SAN storage and to restore it from there with minimal downtime. Preferably with tools already available on Solaris 10.
    Thanks.

    The plan is to back up the whole OS and configuration files.
    Two servers are to be backed up.
    server A zpool:
    - rootpool
    - usr
    - usrtmp
    server B zpool:
    - rootpool
    - usr
    - usrtmp
    If we were to cut hardware cost, would it be possible to back up to a Samba share?
    any suggestions?
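
    One approach using only the built-in tools would be ZFS snapshots plus zfs send/receive; a minimal sketch, assuming the pool is named rootpool as above and the SAN or Samba target is mounted at /backup:

      # take a recursive, consistent snapshot of the whole pool
      zfs snapshot -r rootpool@weekly1
      # stream it (with all descendant filesystems) to a file on the backup mount
      zfs send -R rootpool@weekly1 > /backup/serverA-rootpool-weekly1.zfs
      # restore later into a pool, rolling it back to the snapshot state
      # zfs receive -Fd rootpool < /backup/serverA-rootpool-weekly1.zfs

    Incrementals can then be taken with "zfs send -i" between two snapshots, which keeps each transfer small.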

  • Oracle VM Server 2.2.1 - Extra mount options for /OVS

    Hi,
    we would like to know how to add extra mount options (rw,hard,intr,tcp,rsize=32768,wsize=32768,timeo=600) to /OVS in Oracle VM 2.2.1.
    In version 2.1.5 and lower it was possible by adding those options together with the UUID in /etc/ovs/repositories.options.
    How do we manage mount options in Oracle VM 2.2.1? I've tried the same approach without success.
    Best regards and thanks in advance,
    Marc Caubet

    Marc Caubet wrote:
    Since we have seen some I/O problems which cause high CPU wait for some VMs (those which contain Postgres databases with high I/O activity), we finally decided to apply those NFS options to see if it solves the problem. If it doesn't work we will analyze which options can improve the performance.
    Marc, why not just use the OVS NFS mount for the system/root partition and mount an NFS volume from within the guest itself? That way, you can set all the parameters you need. Also, the netfront drivers (I'm told) are slightly more efficient than the blockfront drivers, so you could even see a performance improvement.
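
    A sketch of mounting the data volume directly inside the guest with the options mentioned above (the server name nfssrv and the export path /export/pgdata are hypothetical):

      mount -t nfs -o rw,hard,intr,tcp,rsize=32768,wsize=32768,timeo=600 nfssrv:/export/pgdata /var/lib/pgsql

    Putting the equivalent line in the guest's /etc/fstab makes it persistent, and the options are then fully under your control rather than the OVS repository's.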

  • [HAL] Mount options for removable devices

    I need my USB drives automounted with noatime mount options. Following the guidelines at http://wiki.archlinux.org/index.php/HAL, I created a file /usr/share/hal/fdi/policy/10osvendor/10-custom-mount-options.fdi with the following content:
    <?xml version="1.0" encoding="UTF-8"?>
    <deviceinfo version="0.2">
      <device>
        <match key="block.is_volume" bool="true">
          <match key="@block.storage_device:storage.hotpluggable" bool="true">
            <merge key="volume.policy.mount_option.noatime" type="bool">true</merge>
          </match>
          <match key="@block.storage_device:storage.removable" bool="true">
            <merge key="volume.policy.mount_option.noatime" type="bool">true</merge>
          </match>
        </match>
      </device>
    </deviceinfo>
    And then restarted HAL. I even tried a reboot. But my drives still won't mount with noatime. Any help is much appreciated.

    [:bump:]

  • Any way to specify options for scheduled filesystem checks?

    Today I ran e2fsck -fD on my netbook's ext4 partitions, and was surprised to see the machine's cumulative boot and login time drop by about ten seconds. Evidently the "rarely necessary" nature of the -D option doesn't mean it shouldn't be done once in a while.
    So I'm wondering if there's a way to make the e2fsck runs scheduled by tune2fs use -D. Is it possible to specify the parameters through tune2fs, or some other way? The man page doesn't say; nor does there appear to be anything about it in e2fsck.conf.
    Last edited by Gullible Jones (2012-08-06 02:43:28)

    This should be possible via the configuration file e2fsck.conf (/etc/e2fsck.conf) - it has its own man page. If you want to change the options for your root filesystem, I guess you would have to put such a config file into your initramfs.
    An alternative would be to boot from a live CD.
    Greetings
    matse

  • Mount point for SMB filesystem

    Hi folks
    Using the Finder I've connected to a Windows box drive "D" and get an icon on the desktop. This contains GIS data. I'm using an open-source application, GRASS, and want to use that data. GRASS opens a dialogue box which wants an input of the form /<filesystempath>/<filename>. It doesn't accept drag and drop. But I can't find where the Windoze drive "D" has been mounted! In Linux it would have been under /mnt, but that doesn't exist here.
    Any ideas?
    Hugh
    PS I've tried using Spotlight to find "D"... no go.
      Mac OS X (10.4.8)  

    Sorted!
    The remote filesystem is mounted under /Volumes.
    The other solution I've found is to create a directory:
    mkdir /datapoint
    Then mount the SMB filesystem on that directory:
    mount_smbfs -W <WORKGROUPNAME> //<username>@<windoze-computername>/<nameofshare> /datapoint
    You will be asked for <username>'s password,
    and then the remote data can be accessed using the directory /datapoint.
      Mac OS X (10.4.8)  

  • Mounting options for iMac G5

    I need to attach an iMac G5 to a mounting arm. The rep that sells the mounts said that his will attach to the standard flat-screen interface called VESA. The VESA interface is either 75mm or 100mm. Has anyone tried to attach an iMac G5 to one of these?
    Thanks

    Thanks.
    In another thread I found that the Apple Store sells a mounting bracket that replaces the stand. This bracket has VESA compliant holes at 100mm.
    'http://store.apple.com/1-800-MY-APPLE/WebObjects/AppleStore?productLearnMore=M9755G/A'

  • Does SAP support Solaris 10 ZFS filesystem when using DB2 V9.5 FP4?

    Hi,
    I'm installing NW7 (BI usage). SAPINST has failed in the step "ABAP LOAD" due to the DB2 error message
    "Unsupported file system type zfs for Direct I/O". It appears my Unix admin must have decided to set these filesystems up as ZFS on this new server.
    I have several questions requiring your expertise.
    1) Does SAP support ZFS filesystems on Solaris 10 (SPARC hardware)? I cannot find any reference in SDN or the Service Marketplace. Any reference will be much appreciated.
    2) How can I confirm my sapdata filesystems are ZFS?
    3) What actions do you recommend for me to resolve the SAPINST errors? Should I follow the note "Note 995050 - DB6: NO FILE SYSTEM CACHING for Tablespaces" to disable Direct I/O for all DB2 tablespaces? I have seen Markus Doehr's forum thread "Re: DB2 on Solaris x64 - ZFS as filesystem possible?", but it does not state exactly how he overcame the error.
    regards
    Benny

    Hi Frank,
    Thanks for your input.
    I have also found that the command "zfs list" will display any ZFS filesystems.
    We have also gone back to UFS, as the ZFS deployment schedule does not meet this particular SAP BW implementation timeline.
    Has anyone come across any SAP statement that states NW7 can be deployed with ZFS for the DB2 database on the Solaris SPARC platform? If not, I'll open an OSS message.
    regards
    Benny
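
    For questions 2) and 3), a short sketch (the filesystem path and tablespace name are hypothetical; the ALTER TABLESPACE clause is the one the referenced note covers - verify it against Note 995050 before using it):

      # Solaris: show the filesystem type of a sapdata mount point
      df -n /db2/BWP/sapdata1
      # DB2 9.5: stop requesting Direct I/O (file system caching) for a tablespace
      db2 "ALTER TABLESPACE PSAPSR3 NO FILE SYSTEM CACHING"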

  • Mount options

    We have a filesystem mounted via NFS on two systems; both are automounted. When we run a bdf command the filesystem doesn't appear, but if we try to access the filesystem we can. We would like to know if there's any recommendation about the mount options for filesystems.

    The directory is mounted by automount when you access it.
    That is exactly how automount works.
    This is why you cannot see it in the output of the "df" command when you are not using it.
    The recommended NFS mount options could be as follows.
    If you use a NetApp filer as the NFS server,
    this technical white paper could be helpful:
    http://www.netapp.com/us/library/technical-reports/tr_3442.html
    -<snip>-
    Correct NFS mount options are important to provide optimal performance and system stability.
    Linux®: rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,suid,timeo=600
    Solaris™: rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,suid,[forcedirectio or llock]
    AIX, HP/UX: rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,suid
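
    As an example of how the Solaris options above would look in /etc/vfstab (the filer name and export path are hypothetical):

      #device to mount      device to fsck  mount point  FS type  fsck pass  mount at boot  options
      filer01:/vol/oradata  -               /oradata     nfs      -          yes            rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,suid,forcedirectio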

  • USB mount options in kde

    Is it possible to use custom mount options for removable (USB) drives in KDE4?
    I'm mostly interested in the -o flush option, so that the actual write to the disk occurs while copying, not only when I unmount the device.
    P.S. Editing fstab is not a solution! Maybe there is a way to avoid HAL and automount using udev?

    http://wiki.archlinux.org/index.php/Ude … SB_devices
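
    The wiki describes udev rules for this; a rough, untested sketch of the idea (the device match and mount point are examples only - treat the wiki page as the authoritative reference):

      # /etc/udev/rules.d/10-usb-flush.rules
      ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[b-z][0-9]", ENV{ID_BUS}=="usb", \
        RUN+="/bin/mount -o flush,noatime /dev/%k /media/usbstick"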

  • BTRFS : Subvolumes, mount options

    Hello,
    I took a look at btrfs, read the wiki and some guides, and I have a few questions:
    1) In every guide I read, even on the official wiki, the root and home subvolumes are created separately. I don't understand why; since the two are "linked", it would make more sense to create the home subvolume inside the root subvolume. From what I understand it wouldn't make snapshots harder, because subvolume children are ignored when snapshotting. Moreover, since subvolume children are automatically mounted, we can mount the whole system in one command, so only one line in fstab mounts everything. So why not create the subvolumes directly in the root subvolume? The only drawback I can see is that you can't set different mount options for the children.
    2) The same goes for the autodefrag option: wherever I read about btrfs and SSDs, the autodefrag option is always there. I understand defragmentation for a spinning device but fail to see its usefulness on SSDs. Moreover, defragmentation is really bad for SSDs as it shortens their lifespan. So the way I see it, defragmentation and autodefrag only have drawbacks for SSDs. What am I missing?
    3) For the same SSD lifespan concerns I plan to not use inode_cache and space_cache, by using the nospace_cache option. So my options would be:
    rw,noatime,compress=lzo,ssd,discard,nospace_cache
    Does that seem optimal?
    Thanks in advance,
    Nolhian
    Last edited by Nolhian (2014-04-29 16:36:37)

    If /home is nested it may lead to confusion if you later want to mount home-snapshot-1 to /home. Keeping things separate, while not required, is still a good idea.
    My mount options for my SSD-based root subvolume are: "defaults,noatime,discard,ssd,subvolid=0". I don't know where you've seen autodefrag as a standard SSD option; like you say, it doesn't make much sense for SSDs.
    Can't comment on the caches. There are a few threads on the mailing list that may be of interest to you, though (I haven't read them myself).
    http://www.mail-archive.com/linux-btrfs … 24827.html
    http://www.mail-archive.com/linux-btrfs … 30739.html
    http://www.mail-archive.com/linux-btrfs … 07498.html
    There's a lot more.
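
    For the first point, a sketch of the separate-subvolume layout most guides use (the device and subvolume names are examples):

      # create the subvolumes on the top-level volume
      mount /dev/sda2 /mnt
      btrfs subvolume create /mnt/@
      btrfs subvolume create /mnt/@home
      # /etc/fstab entries mounting them separately, so each can get its own options
      /dev/sda2  /      btrfs  rw,noatime,compress=lzo,ssd,subvol=@      0 0
      /dev/sda2  /home  btrfs  rw,noatime,compress=lzo,ssd,subvol=@home  0 0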

  • Linux RHEL 6 Oracle 11.2.0.3 RAC NFS Mount Options

    Hi
    Does anyone have any details on what the mount options should be for the following, please?
    Oracle RAC 11.2.0.3
    RHEL 6.3
    NFS being used
    I need to determine the mount options for both BINARIES and DATABASE FILES (temp, control, data, redo).
    I know there must be a document out there that doesn't make the process as complicated as possible.
    Thanks FORUM

    Hi,
    Please check Oracle Support note 359515.1 for details.
    Cheers,
    SAM L.
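
    For reference, NFS mounts for RAC database files on Linux typically take this shape (the server name and paths are hypothetical; treat note 359515.1 as the authoritative source for the exact option list):

      # /etc/fstab - database files (data, redo, control, temp)
      nfssrv:/export/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0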

  • Confused about ZFS filesystems created with Solaris 11 Zone

    Hello.
    Installing a blank zone in Solaris 10 with "zonepath=/export/zones/TESTvm01" just creates one ZFS filesystem:
    "zfs list
    ...
    rzpool/export/zones/TESTvm01 4.62G 31.3G 4.62G /export/zones/TESTvm01"
    Doing the same steps with Solaris 11 will create more filesystems:
    "zfs list
    ...
    rpool/export/zones/TESTvm05 335M 156G 32K /export/zones/TESTvm05
    rpool/export/zones/TESTvm05/rpool 335M 156G 31K /rpool
    rpool/export/zones/TESTvm05/rpool/ROOT 335M 156G 31K legacy
    rpool/export/zones/TESTvm05/rpool/ROOT/solaris 335M 156G 310M /export/zones/TESTvm05/root
    rpool/export/zones/TESTvm05/rpool/ROOT/solaris/var 24.4M 156G 23.5M /export/zones/TESTvm05/root/var
    rpool/export/zones/TESTvm05/rpool/export 62K 156G 31K /export
    rpool/export/zones/TESTvm05/rpool/export/home 31K 156G 31K /export/home"
    I don't understand why Solaris 11 does that. Just one FS (like in Solaris 10) would be better for my setup. I want to configure all created volumes myself.
    Is it possible to deactivate this automatic "feature"?

    There are several reasons that it works like this, all guided by the simple idea "everything in a zone should work exactly like it does in the global zone, unless that is impractical." By having this layout we get:
    * The same zfs administrative practices within a zone that are found in the global zone. This allows, for example, compression, encryption, etc. of parts of the zone.
    * beadm(1M) and pkg(1) are able to create boot environments within the zone, thus making it easy to keep the global zone software in sync with non-global zone software as the system is updated (equivalent of patching in Solaris 10). Note that when Solaris 11 updates the kernel, core libraries, and perhaps other things, a new boot environment is automatically created (for the global zone and each zone) and the updates are done to the new boot environment(s). Thus, you get the benefits that Live Upgrade offered without the severe headaches that sometimes come with Live Upgrade.
    * The ability to have a separate /var file system. This is required by policies at some large customers, such as the US Department of Defense via the DISA STIG.
    * The ability to perform a p2v of a global zone into a zone (see solaris(5) for examples) without losing the dataset hierarchy or properties (e.g. compression, etc.) set on datasets in that hierarchy.
    When this dataset hierarchy is combined with the fact that the ZFS namespace is virtualized in a zone (a feature called "dataset aliasing"), you see the same hierarchy in the zone that you would see in the global zone. Thus, you don't have confusing output from df saying that / is mounted on / and such.
    Because there is integration between pkg, beadm, zones, and zfs, there is no way to disable this behavior. You can remove and optionally replace /export with something else if you wish.
    If your goal is to prevent zone administrators from altering the dataset hierarchy, you may be able to accomplish this with immutable zones (see zones admin guide or file-mac-profile in zonecfg(1M)). This will have other effects as well, such as making all or most of the zone unwritable. If needed, you can add fs or dataset resources which will not be subject to file-mac-profile and as such will be writable.
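
    A sketch of that last suggestion, setting an immutable-zone profile (the zone name TESTvm05 is taken from your example; see zonecfg(1M) for the available profile values):

      zonecfg -z TESTvm05
      zonecfg:TESTvm05> set file-mac-profile=fixed-configuration
      zonecfg:TESTvm05> commit
      zonecfg:TESTvm05> exit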
