Large number of Transport errors on ZFS pool

This is sort of a continuation of the thread
Issues with HBA and ZFS
but since it is a separate question I thought I'd start a new one.
Because of a bug in Solaris 11.1, I had to downgrade to Solaris 10 U11. I am using an LSI 9207-8i HBA (SAS2308 chipset). My pools show no errors, but I consistently see errors when reading from the disks; they are always Retryable or Reset. On the whole the system functions, but as I started testing I noticed a large number of errors in iostat.
bash-3.2# iostat -exmn
extended device statistics ---- errors ---
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b s/w h/w trn tot device
0.1 0.2 1.0 28.9 0.0 0.0 0.0 41.8 0 1 0 0 1489 1489 c0t5000C500599DDBB3d0
0.0 0.7 0.2 75.0 0.0 0.0 21.2 63.4 1 1 0 1 679 680 c0t5000C500420F6833d0
0.0 0.7 0.3 74.6 0.0 0.0 20.9 69.8 1 1 0 0 895 895 c0t5000C500420CDFD3d0
0.0 0.6 0.4 75.5 0.0 0.0 26.7 73.7 1 1 0 1 998 999 c0t5000C500420FB3E3d0
0.0 0.6 0.4 75.3 0.0 0.0 18.3 68.7 0 1 0 1 877 878 c0t5000C500420F5C43d0
0.0 0.0 0.2 0.7 0.0 0.0 0.0 2.1 0 0 0 0 0 0 c0t5000C500420CE623d0
0.0 0.6 0.3 76.0 0.0 0.0 20.7 67.8 0 1 0 0 638 638 c0t5000C500420CD537d0
0.0 0.6 0.2 74.9 0.0 0.0 24.6 72.6 1 1 0 0 638 638 c0t5000C5004210A687d0
0.0 0.6 0.3 76.2 0.0 0.0 20.0 78.4 1 1 0 1 858 859 c0t5000C5004210A4C7d0
0.0 0.6 0.2 74.3 0.0 0.0 22.8 69.1 0 1 0 0 648 648 c0t5000C500420C5E27d0
0.6 43.8 21.3 96.8 0.0 0.0 0.1 0.6 0 1 0 14 144 158 c0t5000C500420CDED7d0
0.0 0.6 0.3 75.7 0.0 0.0 23.0 67.6 1 1 0 2 890 892 c0t5000C500420C5E1Bd0
0.0 0.6 0.3 73.9 0.0 0.0 28.6 66.5 1 1 0 0 841 841 c0t5000C500420C602Bd0
0.0 0.6 0.3 73.6 0.0 0.0 25.5 65.7 0 1 0 0 678 678 c0t5000C500420D013Bd0
0.0 0.6 0.3 76.5 0.0 0.0 23.5 74.9 1 1 0 0 651 651 c0t5000C500420C50DBd0
0.0 0.6 0.7 70.1 0.0 0.1 22.9 82.9 1 1 0 2 1153 1155 c0t5000C500420F5DCBd0
0.0 0.6 0.4 75.3 0.0 0.0 19.2 58.8 0 1 0 1 682 683 c0t5000C500420CE86Bd0
0.0 0.0 0.2 0.7 0.0 0.0 0.0 1.9 0 0 0 0 0 0 c0t5000C500420F3EDBd0
0.1 0.2 1.0 26.5 0.0 0.0 0.0 41.9 0 1 0 0 1511 1511 c0t5000C500599E027Fd0
2.2 0.3 133.9 28.2 0.0 0.0 0.0 4.4 0 1 0 17 1342 1359 c0t5000C500599DD9DFd0
0.1 0.3 1.1 29.2 0.0 0.0 0.2 34.1 0 1 0 2 1498 1500 c0t5000C500599DD97Fd0
0.0 0.6 0.3 75.6 0.0 0.0 22.6 71.4 0 1 0 0 677 677 c0t5000C500420C51BFd0
0.0 0.6 0.3 74.8 0.0 0.1 28.6 83.8 1 1 0 0 876 876 c0t5000C5004210A64Fd0
0.6 43.8 18.4 96.9 0.0 0.0 0.1 0.6 0 1 0 5 154 159 c0t5000C500420CE4AFd0
Mar 12 2013 17:03:34.645205745 ereport.fs.zfs.io
nvlist version: 0
     class = ereport.fs.zfs.io
     ena = 0x114ff5c491a00c01
     detector = (embedded nvlist)
     nvlist version: 0
          version = 0x0
          scheme = zfs
          pool = 0x53f64e2baa9805c9
          vdev = 0x125ce3ac57ffb535
     (end detector)
     pool = SATA_Pool
     pool_guid = 0x53f64e2baa9805c9
     pool_context = 0
     pool_failmode = wait
     vdev_guid = 0x125ce3ac57ffb535
     vdev_type = disk
     vdev_path = /dev/dsk/c0t5000C500599DD97Fd0s0
     vdev_devid = id1,sd@n5000c500599dd97f/a
     parent_guid = 0xcf0109972ceae52c
     parent_type = mirror
     zio_err = 5
     zio_offset = 0x1d500000
     zio_size = 0xf1000
     zio_objset = 0x12
     zio_object = 0x0
     zio_level = -2
     zio_blkid = 0x452
     __ttl = 0x1
     __tod = 0x513fa636 0x26750ef1
I know these drives are not all bad, and I have confirmed that they are all running the latest firmware and the correct sector size, 512 bytes (ashift=9). I suspect some sort of compatibility issue with this new HBA but have no way of verifying that. Does anyone have any suggestions?
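For reference, here is roughly how I have been watching the errors accumulate (a sketch; the device name is one of the worst offenders from the iostat output above, so substitute your own):

    # per-device soft/hard/transport error counters plus SCSI sense data
    iostat -En c0t5000C500599DD97Fd0

    # full fault-management error log; the transport problems show up as
    # ereport.fs.zfs.io and ereport.io.scsi.* events like the one quoted above
    fmdump -eV | more

    # watch the trn counter grow in near real time
    while true; do
        iostat -exmn | grep c0t5000C500599DD97Fd0
        sleep 10
    done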
Edited by: 991704 on Mar 12, 2013 12:45 PM

There must be something small I am missing. We have another system configured nearly the same way (same server and HBA, different drives) and it works fine. I've gone through the recommended storage practices guide. The only item I have not been able to verify is:
"Confirm that your controller honors cache flush commands so that you know your data is safely written, which is important before changing the pool's devices or splitting a mirrored storage pool. This is generally not a problem on Oracle/Sun hardware, but it is good practice to confirm that your hardware's cache flushing setting is enabled."
How can I confirm this? As far as I know these HBAs are plain HBAs: no battery backup, no on-board memory. The 9207 doesn't even offer RAID.
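The closest thing I can think of to a direct check is the ZFS-level flush tunable plus a synchronous-write timing test; a minimal sketch, assuming a root shell (zfs_nocacheflush is the tunable documented in the ZFS tuning guides, and the scratch dataset is created just for the test):

    # 0 = ZFS issues SYNCHRONIZE CACHE commands to the disks (flushing on)
    # 1 = cache flushing has been disabled
    echo zfs_nocacheflush/D | mdb -k

    # rough end-to-end check: force synchronous writes on a scratch dataset
    # and compare against the default sync setting; if sync=always is not
    # dramatically slower, flushes may not be reaching the platters
    # (assumes a ZFS version that supports the sync property)
    zfs create SATA_Pool/flushtest
    zfs set sync=always SATA_Pool/flushtest
    ptime dd if=/dev/zero of=/SATA_Pool/flushtest/f bs=8k count=10000
    zfs destroy SATA_Pool/flushtest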
Edited by: 991704 on Mar 15, 2013 12:33 PM

Similar Messages

  • Submit a large number of tasks to a thread pool (more than 10,000)

    I want to submit a large number of tasks to a thread pool (more than 10,000).
    Since a thread pool takes a Runnable as input, I have to create as many Runnable objects as there are tasks; because the number of tasks is very large, this causes a memory overflow and my application crashes.
    Can you suggest some way to overcome this problem?

    Ravi_Gupta wrote:
    I have to serve them infinitely depending upon the choice of the user.
    Take a look at my code (code of MyCustomRunnable is already posted)

    Here is your code again, with my comments inline:

        public void start(Vector<String> addresses) {
            searching = true; // What is this for? Is it a kind of comment?

            Vector<MyCustomRunnable> runnables = new Vector<MyCustomRunnable>(1, 1);
            for (String address : addresses) {
                try {
                    runnables.addElement(new MyCustomRunnable(address));
                } catch (IOException ex) {
                    ex.printStackTrace();
                }
            }
            // Why does MyCustomRunnable throw an IOException? Why is it using up
            // resources when it hasn't started? Why build this vector at all?

            //ThreadPoolExecutor pool = new ThreadPoolExecutor(100, 100, 50000L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue());
            ExecutorService pool = Executors.newFixedThreadPool(100);
            // You have 100 CPUs? Wow! I can only assume your operations are
            // blocking on a Socket connection most of the time.

            boolean interrupted = false;
            Vector<Future<String>> futures = new Vector<Future<String>>(1, 1);
            // You don't save much by reusing your vector here.

            for (int i = 1; !interrupted; i++) {
                // You are looping here until the thread is interrupted. Why?
                // Are you trying to generate load on a remote server?
                System.out.println("Cycle: " + i);
                for (MyCustomRunnable runnable : runnables) {
                    // Change the name of your Runnable, as it clearly does much
                    // more than that. Typically a Runnable is executed once and
                    // does not create resources in its constructor nor have a
                    // cleanup method.
                    futures.addElement((Future<String>) pool.submit(runnable));
                    // Again, it is unclear why you would use a Vector rather
                    // than a List here.
                }
                for (Future<String> future : futures) {
                    try {
                        future.get();
                    } catch (InterruptedException ex) {
                        interrupted = true;
                        // If you want this to break the loop, put the try/catch
                        // outside the loop.
                        ex.printStackTrace();
                    } catch (ExecutionException ex) {
                        ex.printStackTrace();
                        // If you are generating a load test, you may want to
                        // record this kind of failure, e.g. count them.
                    }
                }
                futures.clear();
                try {
                    Thread.sleep(60000);
                    // Why do you sleep even if you have been interrupted? For
                    // better timing, you should sleep before checking whether
                    // your futures have finished.
                } catch (InterruptedException e) {
                    searching = false; // again, this does nothing
                    System.out.println("Thread pool terminated..................");
                    //return; // remove this commented-out return; it is dangerous
                    break; // why have two ways of breaking the loop?
                           // why not interrupted = true here?
                }
            }
            searching = false;
            System.out.println("Shut downing pool");
            pool.shutdownNow();
            try {
                for (MyCustomRunnable runnable : runnables)
                    runnable.close(); // release resources associated with it
            } catch (IOException e) {
                // Put the try/catch inside the loop: you may want to ignore the
                // exception, but if one close() fails, the rest of the resources
                // won't get cleaned up.
            }
        }

    The above code serves the tasks indefinitely until terminated by the user. I had created a large number of Runnable and Future objects, and they remain in memory until the user terminates the operation, which might be the cause of the memory overflow.
    It could also be the size of the resources each runnable holds. Have you tried increasing your maximum memory? e.g. -Xmx512m

  • Large number of transport requests released from Dev

    Hello Gurus,
    Our Dev landscape consists of three clients. We do development in one client and the testing team tests in another. All these clients are maintained on a transport route, and it takes around 10 minutes for a transport to arrive after release; when the testing team fails a test, we correct the error, create a fresh transport, and release it again. I understand that at some point we are going to reach the maximum number of transports.
    Normally, if the testing team fails the test twice, I create a test role in the testing client and ask the user to test it. When everything goes well, I merge it into the original role. But all of this takes time.
    Please advise if there is any other way to cut this short.
    Regards,
    MA

    Hello,
    >
    Salman123 wrote:
    > I understand at some point we are going to reach Maximum number of transports.
    The number range for transports is alphanumeric (over 40 million free numbers are available). Have a look at SAP note [106911|https://websmp230.sap-ag.de/sap%28bD1kZSZjPTAwMQ==%29/bc/bsp/spn/sapnotes/index2.htm?numm=10691] for details.
    >
    Salman123 wrote:
    > Please advise if there is any other way also to cut this short
    You could copy customizing transport requests via transaction SCC1. But personally I would not recommend this.
    Cheers
    Joerg

  • Error when sending a message to a large number of recipients

    I want to send a message to a large number of recipients, but I get an error.
    The error is:
    << Messaging is sending a large number of SMS messages. Do you want to allow this app to continue sending messages? >>
    Deny ....... Allow
    Please help me.

    @AB2
    You will have to contact Google's Android division on this issue, not Sony. 
    You have to see this pop-up as a feature that protects you: some people don't have unlimited text messages, and there are apps that might start sending large numbers of texts. 
    You can see this on other forums 
    http://forums.androidcentral.com/google-nexus-4/227096-messaging-sending-large-amount-messages.html
    http://android.stackexchange.com/questions/38461/pop-up-message-when-sending-large-amounts-of-sms-me...
    https://code.google.com/p/android/issues/detail?id=36617
    Is or was your phone locked to a carrier/network branded? if it is or was, perhaps your carrier network could fix this. 
    "I'd rather be hated for who I am, than loved for who I am not." Kurt Cobain (1967-1994)

  • Oracle Error 01034 After attempting to delete a large number of rows

    I ran a command to delete a large number of rows from a table in an Oracle database (Oracle 10g / Solaris). The database files are located on the /dbo partition. Before the command, disk space utilization was at 84%; now it is at 100%.
    SQL Command I ran:
    delete from oss_cell_main where time < '30 jul 2009'
    If I try to connect to the database now I get the following error:
    ORA-01034: ORACLE not available
    df -h returns the following:
    Filesystem size used avail capacity Mounted on
    /dev/md/dsk/d6 4.9G 5.0M 4.9G 1% /db_arch
    /dev/md/dsk/d7 20G 11G 8.1G 59% /db_dump
    /dev/md/dsk/d8 42G 42G 0K 100% /dbo
    I tried to get the space back by deleting all the data in the table oss_cell_main:
    drop table oss_cell_main purge
    But there was no change in the df output.
    I have tried solving this myself but could not find sufficiently directed information. Even pointing me to the right documentation would be highly appreciated. I have already looked at the following:
    du -h:
    8K     ./lost+found
    1008M ./system/69333
    1008M ./system
    10G ./rollback/69333
    10G ./rollback
    27G ./data/69333
    27G ./data
    1K ./inx/69333
    2K ./inx
    3.8G ./tmp/69333
    3.8G ./tmp
    150M ./redo/69333
    150M ./redo
    42G .
    I think its the rollback folder that has increased in size immensely.
    SQL> show parameter undo
    NAME TYPE VALUE
    undo_management string AUTO
    undo_retention integer 10800
    undo_tablespace string UNDOTBS1
    select * from dba_tablespaces where tablespace_name = 'UNDOTBS1'
    TABLESPACE_NAME  BLOCK_SIZE  INITIAL_EXTENT  MIN_EXTENTS  MAX_EXTENTS  MIN_EXTLEN
    UNDOTBS1               8192           65536            1   2147483645       65536
    STATUS  CONTENTS  LOGGING  FOR  EXTENT_MAN  ALLOCATIO  PLU  SEGMEN  DEF_TAB_  RETENTION    BIG
    ONLINE  UNDO      LOGGING  NO   LOCAL       SYSTEM     NO   MANUAL  DISABLED  NOGUARANTEE  NO
    Note: I can reconnect to the database for short periods by restarting it. After some restarts it does connect, but only for a few minutes, which is not long enough to run exp.

    Check the alert log for errors.
    Select file_name, bytes from dba_data_files order by bytes;
    Try to shrink some datafiles to get space back.
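    A sketch of that from the shell, once the instance stays up long enough (the undo datafile path below is an example; take the real name from the query, and the resize target must stay above the file's allocated high-water mark):

        sqlplus / as sysdba <<'EOF'
        -- largest files first; the undo datafile is probably at the top
        SELECT file_name, bytes/1024/1024 AS mb
          FROM dba_data_files
         ORDER BY bytes DESC;

        -- example: shrink an oversized undo datafile
        ALTER DATABASE DATAFILE '/dbo/rollback/undotbs01.dbf' RESIZE 4G;
        EOF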

  • Internal Error 500 started appearing even after setting a large number for postParametersLimit

    Hello,
    I adopted a CF9 web application, and we're receiving an Internal 500 Error on submit from a form that has line items for an RMA.
    The server originally had only Cumulative Hot Fix 1 on it, so I installed Cumulative Hot Fix 4, hoping I could then adjust the postParametersLimit variable in neo-runtime.xml. I tried setting that number extremely high (the last try was 40000), and I'm still getting the error. I've also tried putting a <cfabort> on the first line of the cfm file that is being called, but I still get the 500 error.
    As I mentioned, it's an RMA form, and if the RMA has only a few lines, say up to 20 or 25, it will work.
    I've tried increasing the following all at the same time:
    postParameterSize to 1000 MB
    Max size of post data 1000MB
    Request throttle Memory 768MB
    Maximum JVM Heap Size - 1024 MB
    Enable HTTP Status Codes - unchecked
    Here's some extra background on the situation. This is what happened before I got the server:
    The CF server is installed as a virtual machine and was originally part of a domain that was exposed to the internet and the internal network. The CF Admin was exposed to the internet.
    AT THIS TIME THE RMA FORM WORKED PROPERLY, EVEN WITH A LARGE NUMBER OF LINE ITEMS.
    The CF Server was hacked, so they did the following:
    They took a snapshot of the CF Server
    Unjoined it from the domain and put it in the DMZ.
    The server can no longer make outbound connections to the internet; inbound connections are allowed through SSL
    Installed cumulative hot fix 1 and hot fix APSB13-13
    Changed the Default port for SQL on the SQL Server.
    This is when the RMA form stopped working and I inherited the server. Yeah!
    Any ideas on what I can try next, or why this would have suddenly stopped working after the above changes were made on the server?
    Thank you

    Start from the beginning. Return to the default values, and see what happens. To do so, proceed as follows.
    Temporarily shut ColdFusion down. Create a back-up of the file neo-runtime.xml, just in case.
    Now, open the file in a text editor and revert postParametersLimit and postSizeLimit to their respective default values, namely,
    <var name='postParametersLimit'><number>100.0</number></var>
    <var name='postSizeLimit'><number>100.0</number></var>
    That is, 100 parameters and 100 MB, respectively. (Note that there is no postParameterSize! If you had included that element in the XML, remove it.)
    Restart ColdFusion. Test and tell.

  • Large number of errors on 6500 when using Apple Macs

    Hi,
    We are getting a large number of errors on a 6500:
    ETHC-5-PORT FROM STP - PORT LEAVING BRIDGE - PORT JOINING BRIDGE
    The connected devices are Apple Macs running GigE.
    Anyone seen this before?
    Cheers
    Scott

    Hi Scott,
    Can you check the speed and duplex settings on the machines, as well as on the switch ports to which the Apple Macs are connected? I would suggest configuring them manually if they are currently set to auto/auto.
    Regards,
    Ankur

  • ZFS pool I/O failures

    Hello,
    I've been using an external SAS/SATA tray, connected to a T5220 over a SAS cable, as storage for a media library. The weekly scrub cron job failed last week with all disks reporting I/O failures:
    zpool status
      pool: media_NAS
    state: SUSPENDED
    status: One or more devices are faulted in response to IO failures.
    action: Make sure the affected devices are connected, then run 'zpool clear'.
       see: http://www.sun.com/msg/ZFS-8000-HC
    scan: scrub in progress since Thu Apr 30 09:43:00 2015
        2.34T scanned out of 9.59T at 14.7M/s, 143h43m to go
        0 repaired, 24.36% done
    config:
            NAME        STATE     READ WRITE CKSUM
            media_NAS   UNAVAIL  10.6K    75     0  experienced I/O failures
              raidz2-0  UNAVAIL  21.1K    10     0  experienced I/O failures
                c6t0d0  UNAVAIL    212     6     0  experienced I/O failures
                c6t1d0  UNAVAIL    216     6     0  experienced I/O failures
                c6t2d0  UNAVAIL    225     6     0  experienced I/O failures
                c6t3d0  UNAVAIL    217     6     0  experienced I/O failures
                c6t4d0  UNAVAIL    202     6     0  experienced I/O failures
                c6t5d0  UNAVAIL    189     6     0  experienced I/O failures
                c6t6d0  UNAVAIL    187     6     0  experienced I/O failures
                c6t7d0  UNAVAIL    219    16     0  experienced I/O failures
                c6t8d0  UNAVAIL    185     6     0  experienced I/O failures
                c6t9d0  UNAVAIL    187     6     0  experienced I/O failures
    The console outputs this repeated error:
    SUNW-MSG-ID: ZFS-8000-FD, TYPE: Fault, VER: 1, SEVERITY: Major
    EVENT-TIME: 20
    PLATFORM: SUNW,SPARC-Enterprise-T5220, CSN: -, HOSTNAME: t5220-nas
    SOURCE: zfs-diagnosis, REV: 1.0
    EVENT-ID: e935894e-9ab5-cd4a-c90f-e26ee6a4b764
    DESC: The number of I/O errors associated with a ZFS device exceeded acceptable levels.
    AUTO-RESPONSE: The device has been offlined and marked as faulted. An attempt will be made to activate a hot spare if available.
    IMPACT: Fault tolerance of the pool may be compromised.
    REC-ACTION: Use 'fmadm faulty' to provide a more detailed view of this event. Run 'zpool status -x' for more information. Please refer to the associated reference document at http://sun.com/msg/ZFS-8000-FD for the latest service procedures and policies regarding this diagnosis.
    Chassis | major: Host detected fault, MSGID: ZFS-8000-FD
    /var/adm/messages has an error message for each disk in the data pool, this being the error for sd7:
    May  3 16:24:02 t5220-nas scsi: [ID 107833 kern.warning] WARNING: /pci@0/pci@0/pci@9/scsi@0/disk@2,0 (sd7):
    May  3 16:24:02 t5220-nas       Error for Command: read(10)   Error Level: Fatal
    May  3 16:24:02 t5220-nas scsi: [ID 107833 kern.notice]         Requested Block: 1815064264   Error Block: 1815064264
    I have tried rebooting the system and running zpool clear, as the ZFS link in the console errors suggests. Sometimes the system reboots fine; other times it requires issuing a break from the LOM, because the shutdown command is still trying after more than an hour. The console usually prints more messages as the reboot completes, basically saying the faulted hardware has been restored and no additional action is required. A scrub is recommended in the console message. When I check the pool status, the previously suspended scrub picks up where it left off:
    zpool status
      pool: media_NAS
    state: ONLINE
    scan: scrub in progress since Thu Apr 30 09:43:00 2015
        5.83T scanned out of 9.59T at 165M/s, 6h37m to go
        0 repaired, 60.79% done
    config:
            NAME        STATE     READ WRITE CKSUM
            media_NAS   ONLINE       0     0     0
              raidz2-0  ONLINE       0     0     0
                c6t0d0  ONLINE       0     0     0
                c6t1d0  ONLINE       0     0     0
                c6t2d0  ONLINE       0     0     0
                c6t3d0  ONLINE       0     0     0
                c6t4d0  ONLINE       0     0     0
                c6t5d0  ONLINE       0     0     0
                c6t6d0  ONLINE       0     0     0
                c6t7d0  ONLINE       0     0     0
                c6t8d0  ONLINE       0     0     0
                c6t9d0  ONLINE       0     0     0
    errors: No known data errors
    Then, after an hour or two, all the disks go back into an I/O error state. I thought it might be the SAS controller card, the PCI slot, or maybe the cable, so I first tried the other PCI slot in the riser card (I don't have another cable available). Now the system is back online and again trying to complete the previous scrub:
    zpool status
      pool: media_NAS
    state: ONLINE
    scan: scrub in progress since Thu Apr 30 09:43:00 2015
        5.58T scanned out of 9.59T at 139M/s, 8h26m to go
        0 repaired, 58.14% done
    config:
            NAME        STATE     READ WRITE CKSUM
            media_NAS   ONLINE       0     0     0
              raidz2-0  ONLINE       0     0     0
                c6t0d0  ONLINE       0     0     0
                c6t1d0  ONLINE       0     0     0
                c6t2d0  ONLINE       0     0     0
                c6t3d0  ONLINE       0     0     0
                c6t4d0  ONLINE       0     0     0
                c6t5d0  ONLINE       0     0     0
                c6t6d0  ONLINE       0     0     0
                c6t7d0  ONLINE       0     0     0
                c6t8d0  ONLINE       0     0     0
                c6t9d0  ONLINE       0     0     0
    errors: No known data errors
    the zfs file systems are mounted:
    bash# df -h|grep media
    media_NAS               14T   493K   6.3T     1%    /media_NAS
    media_NAS/archive       14T   784M   6.3T     1%    /media_NAS/archive
    media_NAS/exercise      14T    42G   6.3T     1%    /media_NAS/exercise
    media_NAS/ext_subs      14T   3.9M   6.3T     1%    /media_NAS/ext_subs
    media_NAS/movies        14T   402K   6.3T     1%    /media_NAS/movies
    media_NAS/movies/bluray    14T   4.0T   6.3T    39%    /media_NAS/movies/bluray
    media_NAS/movies/dvd    14T   585K   6.3T     1%    /media_NAS/movies/dvd
    media_NAS/movies/hddvd    14T   176G   6.3T     3%    /media_NAS/movies/hddvd
    media_NAS/movies/mythRecordings    14T   329K   6.3T     1%    /media_NAS/movies/mythRecordings
    media_NAS/music         14T   347K   6.3T     1%    /media_NAS/music
    media_NAS/music/flac    14T    54G   6.3T     1%    /media_NAS/music/flac
    media_NAS/mythTV        14T    40G   6.3T     1%    /media_NAS/mythTV
    media_NAS/nuc-celeron    14T   731M   6.3T     1%    /media_NAS/nuc-celeron
    media_NAS/pictures      14T   5.1M   6.3T     1%    /media_NAS/pictures
    media_NAS/television    14T   3.0T   6.3T    33%    /media_NAS/television
    but the format command is not seeing any of the disks:
    format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
           0. c1t0d0 <SEAGATE-ST9146803SS-0006 cyl 65533 alt 2 hd 2 sec 2187>
              /pci@0/pci@0/pci@2/scsi@0/sd@0,0
           1. c1t1d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
              /pci@0/pci@0/pci@2/scsi@0/sd@1,0
           2. c1t2d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
              /pci@0/pci@0/pci@2/scsi@0/sd@2,0
           3. c1t3d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>  solaris
              /pci@0/pci@0/pci@2/scsi@0/sd@3,0
    Before I moved the card to the other slot in the riser card, format saw each disk in the ZFS pool. I am not sure why the disks are not seen by format while the ZFS pool still seems to be available to the OS. The disks in the attached tray were set up for Solaris using the Sun StorageTek RAID Manager: they were passed to Solaris as 2TB raid0 components, and format saw them as available 2TB disks. Any suggestions on how to proceed if the scrub completes with the SAS card in the new I/O slot? Should I force a reconfigure of devices on the next reboot? If the disks fault out again with I/O errors in this slot, the next steps would be to try a new SAS card and/or cable. Does that sound reasonable?
    Thanks,

    Was the system (and the ZFS pool) online when you moved the card? That might explain why the disks are confused. This system is clearly experiencing some higher-level problem, like a bad card or cable, because disks generally don't all fall over at the same time. I would let the scrub finish, if possible, and then shut the system down. Bring the system to single-user mode and review the zpool import data around the device enumeration. If the device info looks sane, then import the pool; this should re-read the device info. If the device info is still not available during the zpool import scan, then you need to look at a higher level.
    Thanks, Cindy
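    A sketch of that sequence from the shell (pool name taken from the output above; this assumes the scrub has finished and you can take the box down):

        # let the scrub finish, then shut down cleanly to single-user mode
        zpool status media_NAS
        shutdown -y -g0 -i s

        # with no arguments, zpool import just scans and shows how the
        # pool's devices enumerate, without importing anything
        zpool import

        # if the device paths look sane, import the pool to re-read
        # the device info
        zpool import media_NAS

        # if devices changed slots, a reconfiguration boot re-enumerates them
        reboot -- -r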

  • Need to blacklist (block) a large number of domains and email addresses

    I tried to import a large number of domains into a transport rule (new/set-transportrule -name xxx -domainis $csv -quarantine $true).
    I added a few domains manually and tested to make sure the rule was working. It was. Then I ran the cmdlet to import a small number of domains (4). That also worked. Then I ran the cmdlet on a large number of domains. The cmdlet completed with no error message (once I got the list of domains well under 4096 bytes). I waited a day to see if the domains would show up in the rule, but the imported domains never appeared.
    Is there a better solution for blocking large numbers of domains and email addresses, or is a transport rule the only option?


  • Ramifications of a large number of connections

    Our application uses direct driver connections. We are thinking about increasing the number of connections in /etc/odbc.ini to a very large number.
    (1) Are there any ramifications in increasing this attribute?
    (2) I read that each connection takes 1 semaphore. Does TT use all the semaphores when odbc.ini is first read or as needed?
    TimesTen 6.0.4
    RedHat Linux
    Thanks,
    Linda

    Hi Linda
    The 'Connections' attribute tells TimesTen the maximum number of connections expected to the data store. TimesTen will then pre-allocate one semaphore for each connection at the time the data store is loaded.
    If you exceed this value (e.g. you set Connections to 200 and actually make 300 concurrent connections), you won't get an error and TimesTen won't crash, but connections #201 through #300 will perform sub-optimally.
    So I'd recommend you 1) work out what your maximum number of connections is, 2) set the kernel parameters relating to semaphores appropriately (see the section of the install guide relevant to your OS for how to do this), 3) set the value in your odbc.ini, and 4) reload your data store.
    Keep in mind that you are allowed ~2000 connections to a data store, and this figure must also include internal connections (like the replication agent, cache agent, etc.).
    If you anticipate making a great number of connections, it is probably better to consider implementing a connection pool. There is documentation on this in the guides, and some examples of how to achieve it can be found in the demo directory.
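    For step 2 on RedHat, the semaphore limits live in /etc/sysctl.conf; a sketch with illustrative values only (size SEMMNS to comfortably exceed your Connections setting plus the internal agents):

        # kernel.sem = SEMMSL  SEMMNS  SEMOPM  SEMMNI
        echo "kernel.sem = 250 32000 100 128" >> /etc/sysctl.conf
        sysctl -p    # apply without rebooting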

  • Mail - how to send to large number of recipients?

    How can I send an email to a large number of recipients that I hold addresses for in a document without a) adding them to my contacts, and b) so that they can't all see each others addresses?

    I thought about using BCC, but it seems a little messy although it would do the job.
    Messy how?  That's exactly what it's for.  In fact, there's no other way to hide addresses from other recipients of the message.  The only other way to do it would be to script automated sending of one message per address, and that would get quite messy!
    What is the maximum number of recipients that Mail can handle?
    There's no limit enforced by Mail, AFAIK.  The limits come from the SMTP server you're using.
    One of the issues I had when using Outlook on Windows for this (copying and pasting from a text document) was that if there was an invalid or non-existent address in the list, it would just return an error, and I was never quite sure whether the whole thing hadn't sent or just the dodgy address(es) had failed.
    In Mail, you'll just get a bounce message for any addresses that don't exist.  The one exception to that is addresses on the same server that you're sending from...  often, the server will simply reject the attempt to send to an address that it knows doesn't exist.  I'm not sure what kind of message the server returns in that case, though, and suspect it depends on the server.  It's been a while since I've seen such a problem.

  • Transport error while exporting SWF to Enterprise

    Hi all,
    Old scenario: Server OS Windows 2003, BO XI 3.1; client OS Windows XP with all BO client tools and Xcelsius 2008. The system works fine in all respects.
    New scenario: Server OS upgraded to Windows 2008 without any modifications to the running applications.
    All client tools are able to connect to the BO application, including Designer, Deski, Import Wizard and WebI Rich Client, with the exception of Xcelsius 2008.
    In Xcelsius 2008 with SP1, while trying to export to the Business Objects platform, after entering username, password and authentication (any), it waits 3-4 seconds and pops up "Transport Error: Communication Failure". The same error occurs while importing; the application is not getting a response from the server.
    Please guide. Thanks,
    Kishore

    Sorry for the delayed update. The issue was resolved after un-checking the option to auto-assign a port to the CMS and assigning the port number manually.

  • 401 Unauthorized:HTTP transport error while calling external WSDL from BPEL

    Hi,
    I have a simple BPEL process which calls a Siebel WSDL file through a partner link.
    I compiled and deployed the BPEL process successfully, but when I initiate the process, the instance errors out at the invoke activity.
    The error goes like this:
    "<remoteFault xmlns="http://schemas.oracle.com/bpel/extension"><part name="summary"><summary>
    exception on JaxRpc invoke: HTTP transport error: javax.xml.soap.SOAPException:
    java.security.PrivilegedActionException: javax.xml.soap.SOAPException: Bad response: 401 Unauthorized</summary>
    </part></remoteFault>"
    Can anybody help with this issue? Could you please suggest a possible solution?
    Thank you.

    We have not provided any authentication credentials to XMLSpy; we just loaded the Siebel WSDL file and sent a SOAP request with a customer id, and we received a proper response with all the desired customer data.
    In the same way, we tried to call the same Siebel WSDL file from the BPEL process through a partner link, and the instance errored out with the 401 Unauthorized HTTP transport error.
    Do you think the problem is on the Siebel application side?
    We have deployed our BPEL process to a SOA server which uses HTTP port 7777.
    As per my investigation, a port matching the above HTTP port number may need to be opened on the Siebel side (whose WSDL file we are calling).
    Do you have any ideas on this? Please let us know more details.

  • BW Transport Error

    Hi All,
    I get the following transport error when trying to transport 0CUSTOMER from BW Dev to Quality:
    Number range interval for characteristic 0CITY is missing
    Message no. BRAIN049
    Diagnosis
    The interval for number range object BIM0000093 of characteristic 0CITY is missing. The interval is created automatically, when you run the system in the BW clients.
    Procedure
    Execute the corresponding action again in the BW clients, so that the interval is created automatically. Then you can drag numbers for the characteristic 0CITY to other clients.
    I get a similar error for two other attributes of 0CUSTOMER (Name and Street). Some of the attributes have been made navigational. All are activated in Development.
    Is there something I am missing?
    George.

    Hi Roberto,
    For City, Name and Street, the number range object entries (for SIDs) in the Development system are:
    No From number  To number  Number status   Ext
    01 0000000001   2000000000   0000000057
    01 0000000001   2000000000   0000000196
    01 0000000001   2000000000   0000000107
    As per the error message and your reply, I went to SNRO, entered the objects BIM0000093, BIM0000249 and BIM0000318 for the three InfoObjects, and clicked the 'Number ranges' button. It appears that these number range objects do not exist.
    George.

  • HTTP transport error in NetBeans 6.5

    Hello,
    I am consuming a web service using JAX-RPC. I am trying to log onto a web interface through my program, but I am getting the following error:
    Feb 6, 2009 1:10:54 PM ttsample.Engine Login
    SEVERE: null
    java.rmi.RemoteException: HTTP transport error: java.net.UnknownHostException: http; nested exception is:
    Logged in...
    HTTP transport error: java.net.UnknownHostException: http
    at ttsample.TtsoapcgiPortType_Stub.databaseLogon(TtsoapcgiPortType_Stub.java:819)
    at ttsample.Engine.Login(Engine.java:77)
    at ttsample.Main.main(Main.java:30)
    Caused by: HTTP transport error: java.net.UnknownHostException: http
    at com.sun.xml.rpc.client.http.HttpClientTransport.invoke(HttpClientTransport.java:140)
    at com.sun.xml.rpc.client.StreamingSender._send(StreamingSender.java:96)
    at ttsample.TtsoapcgiPortType_Stub.databaseLogon(TtsoapcgiPortType_Stub.java:802)
    ... 2 more
    I searched the web and found that this is a proxy setting problem.
    I had configured Apache earlier to use the local host that runs the web client.
    When I change the HTTP proxy host and port number and try to log into the web client, it gives me a 'page not found' error. So I reverted the proxy settings to the defaults, restarted the system, tried to log into the web client again, and it worked.
    I don't know where I am going wrong. I want to consume the WSDL and also log into the web client, but I am getting the above error.
    Anyone's help will be greatly appreciated.
    Thank You,
    Sravanthi.
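    If it really is a proxy problem, one thing worth trying is passing the proxy settings directly to the JVM rather than relying on the system settings; a sketch (the proxy host and port are placeholders, and the main class is taken from the stack trace above):

        java -Dhttp.proxyHost=proxy.example.com \
             -Dhttp.proxyPort=8080 \
             -Dhttp.nonProxyHosts="localhost|sal006.salnetwork.com" \
             ttsample.Main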

    Where did you get the address: http://sal006.salnetwork.com:82/bin/lccountrycodes.cgi (shouldn't the 82 be 83 instead?)
