Using Xserve with Sun StorageTek Disk Array

Hi
Is it possible to connect an Xserve with a Fibre Channel Card installed to a Sun StorageTek 6140 Disk Array?
I don't see any devices in /dev. Is the Xserve limited to the Xserve RAID or can it be used with other storage solutions? Do I need to purchase Xsan to do this (e.g. partition, format and mount a device from the Sun disk array)?
Xserve (Intel)   Mac OS X (10.4.9)  
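For reference, a quick way to check from the Xserve whether the array's LUNs are visible at all before worrying about Xsan (these are standard OS X command-line tools; output and report-section names vary by OS version, so treat this only as a starting point):
diskutil list                   # any LUN that OS X recognizes shows up as /dev/diskN
ls /dev/disk* /dev/rdisk*       # block and raw device nodes, if any were created
system_profiler -listDataTypes  # find the Fibre Channel report section for this release
ioreg -l | grep -i fibre        # look for the FC card and attached targets in the I/O Registry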

Oh my, I should retire...
When I read that page I somehow confused all that X* stuff... :o/
I was wondering if it is possible to connect other storage solutions than the Xserve RAID (like the Sun StorageTek ones) directly (or via switch) to an Xserve (the pizza box server).
Of course you can connect any DAS storage device that presents itself as a raw device on the device channel in use (fc, scsi, ...).
But reading the specs on that special Sun Raid I'd say that it falls into another category: SAN storage. And therefore it needs OS support/drivers which seem to be available only for Solaris, Linux and Win.
How exactly do I partition, format and mount the virtual device under Mac OS X Server?
Disk Utility.app? In CLI: diskutil
I want to try this procedure:
Connect the Sun disk array to the Xserve via the copper cables.
(Perhaps reboot the Xserve.)
Partition with pdisk.
mkfs.xfs (or something similar, Mac OS-like) /dev/sda(x)
mount /dev/sda1 /Users/
diskutil
see man diskutil
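If the LUN does show up as a plain local disk, a rough diskutil sequence might look like the following (assuming it appears as disk2 and using a made-up volume name; the exact partitionDisk keywords and format names vary between OS X releases, so check man diskutil first):
diskutil list
diskutil partitionDisk disk2 GPTFormat JHFS+ SunArray 100%   # partition and format as journaled HFS+
diskutil mountDisk disk2                                     # normally mounts under /Volumes/SunArray
mount | grep SunArray                                        # confirm the mount point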
But as said I guess you can't access this kind of SAN storage device.
Am I completely wrong? It's the procedure in Linux, so it can't be very different in BSD (say: Mac OS X)...
Same procedure as every year: install the driver then drink.
-Ralph

Similar Messages

  • ESXi 4.0 with Sun StorageTek 2500 array: server is getting crashed

    Hi all,
    I have configured ESXi 4.0 on an IBM server, followed by creating a virtual host on that server. The server was working fine until I connected it to the storage array.
    My setup looks like this:
    IBM server installed with ESXi 4.0 (Linux version), followed by virtual host creation; the virtual host is Windows 2008 64-bit. Once all this setup was done, I installed a piece of software called the CAM module on this server. This module is to manage my Sun StorageTek 2500 array family. Suddenly I see my ESXi server (a licensed one) is crashing. The following is the error message displayed:
    Vmware ESXI release build_164009 x86_64
    #PF Exception(14) in world 5354:sfcbd in 0X418028840C99 addr 0X410008ebdd84
    Press escape to enter local debugger
    I got a workaround from one of the forums for a similar problem: "increasing the swap size may solve this". I tried that as well, but the issue still persists. The swap size is set to twice the RAM (my RAM is 4 GB, with a 2x dual-core AMD processor and 80 GB of HDD capacity).
    Can anyone help me with this?
    With regards,
    Sri

    Hello,
    if it's that urgent, why do you post in a user-to-user forum instead of getting in touch with the tech support of VMware, IBM, Microsoft and maybe Sun?
    Michael

  • Dell disks with Sun's storage array

    Hi,
    Does anyone know if it's possible to use Dell's disk drives with Sun's StorEdge family of disk arrays? Specifically, the drive is a 300 GB 10K U320 SCSI disk
    http://www1.ca.dell.com/content/topics/reftopic.aspx/gen/ccare/en/ccare_confirm?c=ca&cs=CADHS1&l=en&s=gen&~section=default
    and the disk array is a Sun StorageTek 3320 SCSI
    http://www.sun.com/storagetek/disk_systems/workgroup/3320/
    Any help or pointers will be appreciated.
    thanks
    Nilesh


  • Core dump using iostream with Sun Studio 8

    I'm running on Solaris 9 using the C++ compiler from Sun Studio 8, and encountered a very strange problem.
    My application failed with a core and here is the stack.
    [1] t_splay(0x3774b470, 0x387a0ec0, 0x389aec60, 0x39e5ef1f, 0x3774b470, 0x1), at 0xfc347930
    [2] t_delete(0x387a0ec0, 0x80, 0x39be9748, 0x20, 0x383ccd20, 0x0), at 0xfc347698
    [3] mallocunlocked(0x80, 0x0, 0x21b08, 0xfc3bc000, 0x10, 0x20), at 0xfc346d40
    [4] malloc(0x80, 0xfbaa7400, 0x1, 0x759c40, 0x0, 0x0), at 0xfc346b74
    [5] operator new(0x80, 0x0, 0xd3a18, 0x759c74, 0x0, 0x8345fc), at 0x760c10
    [6] strstreambuf::overflow(0xf41f6e28, 0x29, 0xf41f6e28, 0x39fe0e88, 0x39fe0e88, 0x8345fc), at 0x75bb1c
    [7] streambuf::xsputn(0xf41f6e28, 0xfee00bc0, 0x1, 0x0, 0x81010100, 0x1), at 0x75ab94
    [8] unsafe_ostream::outstr(0xf41f6e78, 0xfee00bbf, 0x0, 0x0, 0xf41f6e7c, 0xfffffffe), at 0x757bac
    =>[9] unsafe_ostream::operator<<(this = 0xf41f6e78, _s = 0xfee00bbf ")"), line 1211 in "iostream.h"
    [10] ostream::operator<<(this = 0xf41f6e74, _s = 0xfee00bbf ")"), line 1350 in "iostream.h"
    [11] CorInterfaceEntity::ifState(this = 0x1bc3de78, newState = MISMATCH_OF_INSTALLED_AND_EXPECTED, needToSendEvent = false
    Can somebody help me by giving a direction for investigation?
    Is there perhaps a known problem with iostream on Sun Studio 8 running on Solaris 9?
    Thanks for any tips.
    Yaakov Berkovitch
    [email protected]

    But sorry for my insistence, but do you think that Purify and/or runtime checking are not able to detect any corruption of the heap?
    They can detect some kinds of corruption, such as some uses of an invalid pointer. But a wild pointer that changes the value of data that is part of your program cannot be detected by RTC or Purify. Those programs can't know whether that change is intentional.
    BTW, we are not using any volatile declaration, and we are using the rwtools library packaged with the compiler, and are not using the STD library.
    If non-local data is shared among threads, it should be declared volatile. For example, suppose you have
    x = foo;
    ... // code not changing foo
    y = foo;
    If foo is not marked volatile, the compiler is allowed to assume its value hasn't changed, and assign to y the same value assigned to x. If foo was changed by another thread, y will not have the current value of foo. The "volatile" declaration says that the variable's value might change without any obvious reason, and the compiler should generate code to load a fresh value each time it is referenced.
    As for rwtools: OK, you are not running into a std::string synchronization bug.
    Also, about the compiler option -xarch=v8: it is probably not relevant for us because we are running Solaris 9, so the relevant compiler option is probably -xarch=v9. Do you advise us to use this option?
    I think you misunderstand. The -xarch values refer to the kind of processor your program will run on.
    The -xarch=v8 option generates 32-bit code that will run on all SPARC chips, including the chips found in very old SPARCstations. This option is the default for compilers prior to Sun Studio 9 (which ships this week).
    The -xarch=v8plus option generates 32-bit code that runs only on UltraSPARC chips, found in Ultra workstations. These are the only kinds of workstations shipped by Sun since about 1996. Unless you need to support the ancient SPARCstations, we recommend compiling with -xarch=v8plus, to get smaller and faster code.
    The -xarch=v9 option generates 64-bit code that runs only on UltraSPARC chips, and only on Solaris 7 or later. Unless your program requires a very large address space, you generally don't want to generate 64-bit code. On SPARC, 64-bit code is larger and slower than 32-bit code. (Type "long" and all pointers are 64 bits instead of 32 bits.)
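    For illustration, typical compile lines might look like this (the source file name is just borrowed from the stack trace above and is purely illustrative; the flags are the ones discussed here):
    CC -xarch=v8plus -O -c CorInterfaceEntity.cc   # 32-bit UltraSPARC code, the recommended choice here
    CC -xarch=v9 -O -c CorInterfaceEntity.cc       # 64-bit code; only if you really need the larger address space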
    Also, I want to ask you two new questions:
    1) I read in another discussion,
    http://forum.sun.com/thread.jsp?forum=5&thread=18124&message=47854#47854
    that another memory manager can be more efficient in a multi-threaded environment (libmtmalloc.so), and also that an alternate threads library (/usr/lib/lwp) reduced CPU usage. Do you advise us to use this alternate library?
    The libmtmalloc library usually has better performance in MT programs than the default version of malloc. It can also result in more memory fragmentation. In that case, the larger working set can sometimes have a large negative effect, more than offsetting the MT efficiency. You have to experiment to see whether it is appropriate for your particular program. If you are running into heap corruption, the pattern of corruption will probably be different with libmtmalloc than with the default malloc. The differences might provide a clue to what is wrong.
    The alternative "T2" threads library was introduced in Solaris 8 as an option.
    In Solaris 9 it is the default, so you are already using it.
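    As a sketch, the libmtmalloc experiment mentioned above is usually tried in one of two ways (the application name "app" is just a placeholder):
    CC -mt -o app app.o -lmtmalloc       # link against libmtmalloc instead of the default malloc
    LD_PRELOAD=libmtmalloc.so.1 ./app    # or interpose it at run time without relinking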
    2) We are using the Rogue Wave library shipped with the compiler. Is it an up-to-date version? Can we assume that it is a good choice, or would it be preferable to move to the STD library?
    I assume you mean Rogue Wave Tools.h++. As explained in the compiler docs, this version of Tools is obsolete, and has not been supported for many years. We continue to provide it for customers who used it before the introduction of the C++ Standard Library in 1998, and who don't want to change their code. We do not recommend it for use in new code.

  • Can I use Tomahawk with Sun's JSF implementation?

    I would like to know whether I can use Tomahawk-1.1.3 with Sun's JSF implementation, or whether I have to use the JSF implementation from MyFaces?
    Thanks
    Zhong

    If the components are written per the specification, they should be portable, and as such should run on either implementation.

  • How to use KAWT with "Sun Java Wireless Toolkit 2.3 Beta"?

    Hi!
    I'm new to developing Java for mobile devices, so all of this is pretty confusing for me. I started by installing Sun's "Wireless Toolkit 2.3 Beta" and it works fine, but now I want to use AWT classes, so I started to look it up and that's how I found out about kawt. I followed the tutorial at http://www.kawt.de/ and I was able to use it with Java Wireless Toolkit 1.0.4_02, so that it compiled fine and was runnable.
    Then I tried the same thing in v2.3, but I got an error that looked like this: "Uncaught exception java/lang/NoClassDefFoundError: awtDemo: /awt/event/ActionListener: Cannot create class in system package." when I tried to run it. It compiled fine when I pressed the build button, which wouldn't have happened if I hadn't installed it correctly. So I'm wondering if someone knows where to find a tutorial for installing kawt for "Sun Java Wireless Toolkit 2.3 Beta", or if anyone knows what might be wrong?
    I'd welcome any help
    Thanks!

    If using the zip install of DSEE, you need to use your own java container to host DSEE. Try downloading the latest Tomcat (http://tomcat.apache.org) and deploying your dscc in it.

  • How to use Webui with Sun RI?

    Hi All,
    I am using the Sun RI for my JSF application. Please tell me how to configure and use the WebUI taglibs in my application (I am using Eclipse as my IDE), or please give me some useful links describing the topic.
    Thanks
    Renju Thomas.

    Thanks BalusC, thanks for the reply even though my question had little info.
    I am asking about the tag library used by Sun Studio Creator; it comes with a rich set of components.
    How can I use the same taglib in my application? My dev environment is Eclipse.
    If anybody knows the jars and configuration required for that, please give me a reply.
    Thanks
    Renju Thomas.

  • Unable to use JProfiler with SUN Access Manager (in Weblogic Server)

    When I was trying to profile with JProfiler, I got the following exception during the WebLogic startup:
    <Dec 12, 2005 6:30:49 PM IST> <Warning> <HTTP> <BEA-101247> <Application: '/opt/SUNWam', Module: 'amcommon': Public ID references the old version of the Servlet DTD. You must change the public ID in web.xml file to "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN".>
    Dec 12, 2005 6:30:54 PM com.sun.xml.rpc.server.http.JAXRPCContextListener contextInitialized
    INFO: JAXRPCSERVLET12: JAX-RPC context listener initializing
    <Dec 12, 2005 6:30:56 PM IST> <Warning> <Net> <BEA-000905> <Could not open connection with host: ctsblrsun14.gmacfs.com and port: 8001.>
    <Dec 12, 2005 6:30:56 PM IST> <Warning> <Net> <BEA-000905> <Could not open connection with host: ctsblrsun14.gmacfs.com and port: 8001.>
    <Dec 12, 2005 6:30:56 PM IST> <Warning> <Net> <BEA-000905> <Could not open connection with host: ctsblrsun14.gmacfs.com and port: 8001.>
    <Dec 12, 2005 6:30:56 PM IST> <Warning> <Net> <BEA-000905> <Could not open connection with host: ctsblrsun14.gmacfs.com and port: 8001.>
    <Dec 12, 2005 6:30:57 PM IST> <Error> <HTTP> <BEA-101216> <Servlet: "LoginLogoutMapping" failed to preload on startup in Web application: "amserver".
    javax.servlet.ServletException
    at weblogic.servlet.internal.ServletStubImpl.createServlet(ServletStubImpl.java:919)
    at weblogic.servlet.internal.ServletStubImpl.createInstances(ServletStubImpl.java:883)
    at weblogic.servlet.internal.ServletStubImpl.prepareServlet(ServletStubImpl.java:822)
    at weblogic.servlet.internal.WebAppServletContext.preloadServlet(WebAppServletContext.java:3335)
    at weblogic.servlet.internal.WebAppServletContext.preloadServlets(WebAppServletContext.java:3292)
    at weblogic.servlet.internal.WebAppServletContext.preloadServlets(WebAppServletContext.java:3278)
    at weblogic.servlet.internal.WebAppServletContext.preloadResources(WebAppServletContext.java:3261)
    at weblogic.servlet.internal.WebAppServletContext.setStarted(WebAppServletContext.java:5951)
    at weblogic.servlet.internal.WebAppModule.start(WebAppModule.java:862)
    at weblogic.j2ee.J2EEApplicationContainer.start(J2EEApplicationContainer.java:2127)
    at weblogic.j2ee.J2EEApplicationContainer.activate(J2EEApplicationContainer.java:2168)
    at weblogic.j2ee.J2EEApplicationContainer.activate(J2EEApplicationContainer.java:2115)
    at weblogic.management.deploy.slave.SlaveDeployer$Application.setActivation(SlaveDeployer.java:3082)
    at weblogic.management.deploy.slave.SlaveDeployer.setActivationStateForAllApplications(SlaveDeployer.java:1751)
    at weblogic.management.deploy.slave.SlaveDeployer.resume(SlaveDeployer.java:359)
    at weblogic.management.deploy.DeploymentManagerServerLifeCycleImpl.resume(DeploymentManagerServerLifeCycleImpl.java:229)
    at weblogic.t3.srvr.SubsystemManager.resume(SubsystemManager.java:131)
    at weblogic.t3.srvr.T3Srvr.resume(T3Srvr.java:966)
    at weblogic.t3.srvr.T3Srvr.run(T3Srvr.java:361)
    at weblogic.Server.main(Server.java:32)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:324)
    at com.jprofiler.agent.Agent$_E.run(Unknown Source)
    at java.lang.Thread.run(Thread.java:534)
    Caused by: java.lang.NullPointerException
    at com.sun.identity.authentication.UI.LoginLogoutMapping.init(LoginLogoutMapping.java:71)
    at weblogic.servlet.internal.ServletStubImpl$ServletInitAction.run(ServletStubImpl.java:1028)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
    at weblogic.servlet.internal.ServletStubImpl.createServlet(ServletStubImpl.java:904)
    ... 25 more
    >

    This is an SSL handshake problem of WebSphere - it has nothing to do with AM.
    WebSphere's JDK does not trust the signer / cert of AM's deployment container.
    Either configure a truststore (or use an existing WebSphere truststore) where you import the cert of the signing CA of your AM DC's cert.
    The other option is to import the mentioned cert into the cacerts file of the IBM JDK - but be aware that this might get lost when applying a WebSphere fixpack/refreshpack.
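    As a rough example of that import (alias, certificate file and keystore path are placeholders; the IBM JDK's cacerts location and password will differ on your install):
    keytool -import -trustcacerts -alias am-signing-ca \
            -file /tmp/am_ca.cer \
            -keystore $JAVA_HOME/jre/lib/security/cacerts \
            -storepass changeit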
    BTW, what have you configured for server.port, server.host and server.protocol in your AMConfig.properties?
    If you have not changed those settings, the agent will use the port/protocol specified there to communicate with AM.
    -Bernhard

  • Use BootCamp with HP Recovery Disk

    I want to use Boot Camp on my MacBook. I own an HP desktop. I have the recovery disks that came with the computer (XP operating system). I have also bought a Vista upgrade from Microsoft. I tried to use the recovery disks to install Windows on my second partition, but I always receive the "invalid CD, press any key to continue" message.
    Is there a way to extract the Windows operating system from the recovery disks, or can I use the Vista upgrade to install Windows on my second partition?

    Forget about using those HP installation disks. That is not going to work. Please read *[_This Section_|http://docs.info.apple.com/article.html?path=Mac/10.5/en/11889.html]* which indicates which version(s) of windows will work with Boot Camp. In short you will need a FULL RETAIL version of either Windows XP or Vista.
    can i use the vista upgrade to install windows on my second partition?
    There is a way to do that however it is not recommended by Apple. I am not sure if it is even ethical to do so. It appears to be a simple process. Please *[_Click Here|http://www.winsupersite.com/showcase/winvista_upgradeclean.asp]* to see that process. I have not used this process and have no further information if you run into issues during installation.
    Axel F.

  • Using TM with 2 backup disks in 2 different locations?

    I've been using TM for a while now and have moved country since I first started using it. I originally have a disk to back everything up but couldn't take it with me initially when I moved... so I bought another disk when I got to my new place and started another TM session from scratch... now I have all my old gear back I want to use the old disk again to back up to as well as the newer one... I know this could get a little complicated knowing which one was backed up when...
    Anyway, I want to use one at home and one at the office (just in case I get robbed or whatever)... what do I need to know before I plug my old BU disk in? I'm just afraid it might do something weird to my very old data...
    Thanks
    Owen

    biggo wrote:
    I've been using TM for a while now and have moved country since I first started using it. I originally have a disk to back everything up but couldn't take it with me initially when I moved... so I bought another disk when I got to my new place and started another TM session from scratch... now I have all my old gear back I want to use the old disk again to back up to as well as the newer one... I know this could get a little complicated knowing which one was backed up when...
    Anyway, I want to use one at home and one at the office (just in case I get robbed or whatever)... what do I need to know before I plug my old BU disk in? I'm just afraid it might do something weird to my very old data...
    As Dan says, that will work fine in most cases. All you have to do is tell TM via +Change Disk.+ You'll have two separate, independent sets of backups.
    But, if you go several days (at least 10) and have a large volume of changes between "swaps," TM may do a full backup instead of an incremental one. That's probably what happened to Dan, and isn't a problem if your TM disk is big enough, and you do not have the +Warn when old backups are deleted+ box checked in TM Preferences > Options.

  • Problems using dbms_scheduler with attached network disks

    Oracle 11.2.0.1
    I created a job which runs .bat files.
    BEGIN
      dbms_scheduler.create_job(
            job_name   => 'JOB_FILES',
            job_type   => 'EXECUTABLE',
            job_action => 'e:\dump\abc.bat',
            auto_drop  => FALSE,
            enabled    => TRUE);
    END;
    /
    Content of abc.bat is
    "E:\Oracle\product\11.2.0\dbhome_1\BIN\7za a e:\dump\den3_arch_171020131420.zip x:\mtr\"
    I'm trying to make an archive of the files in some folder. In this example the folder is "x:\mtr\" where "x" is an attached network disk.
    When I run this job, it creates a null archive and raises the error
    ORA-27369: job of type EXECUTABLE failed with exit code: Incorrect function.
    If in "abc.bat" I change the network disk "x" to a server disk with an existing folder, everything works properly (it archives all files).
    If I execute "abc.bat" from a cmd window, everything works properly (it archives all files).
    I've created a Java source:
    create or replace and compile java source named "OSCommand" as
    import java.io.*;
    public class OSCommand {
        // Runs an OS command and waits for it to finish; returns "0" on success,
        // or the exception message if the command could not be started.
        public static String Run(String Command) {
            try {
                Process p = Runtime.getRuntime().exec(Command);
                p.waitFor();
                return "0";
            } catch (Exception e) {
                System.out.println("Error running command: " + Command +
                                   "\n" + e.getMessage());
                return e.getMessage();
            }
        }
    }
    and a PL/SQL call specification (the return type must match the Java method's String return):
    CREATE OR REPLACE FUNCTION OSCommand_Run(Command IN VARCHAR2)
    RETURN VARCHAR2 IS
    LANGUAGE JAVA
    NAME 'OSCommand.Run(java.lang.String) return java.lang.String';
    And it behaves just like dbms_scheduler...
    Maybe the user needs some special grants?

    user617289 wrote:
    That's right. How can I resolve this issue?
    When all else fails, Read The Fine Manual:
    DBMS_SCHEDULER

  • Sun StorageTek Data Replicator prerequisites

    Hello All
    My client is looking for a SAN-to-SAN replication solution between their primary and DR sites. We are proposing 2 x StorageTek 6140 with the Sun StorageTek Data Replicator software.
    Kindly let me know what type of link is required to get 0% latency. They want to get replicated data at both ends without any delay.
    Kindly help in this regard.
    Ali Jafri

    Hi defunkt0r
    I don't know if there is a so-called "whitepaper" best-practice guide for what you're wanting to do, but here is what I would do.
    From what I understand, it should be as simple as unmounting the volume on the old host, moving the HBAs to the new box with the correct drivers etc., connecting the 6140 up to the new box, bringing up the data host machine and importing the disk / mounting it in the new box.
    This part is correct; don't forget you'll also need to modify your switch zoning to reflect the new WWNs of your HBAs.
    You'll also need to log into CAM and make sure those new WWNs of those HBAs are seen. Once you can confirm the new WWNs in CAM, don't forget to set up your host initiator type correctly, i.e. Windows, Linux, VMware.
    Once these steps are done you'll be good to go and can power up your host.
    Good luck
    David
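    As a quick sketch of how one might read the HBA port WWNs on the new box before updating the zoning and the CAM initiator entries (pick the line that matches the data host OS; paths are standard, output formats differ):
    fcinfo hba-port                          # Solaris 10: lists the port WWN of each HBA
    cat /sys/class/fc_host/host*/port_name   # Linux: WWPNs as seen by the kernel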

  • Oracle recommend configuring a Sun StorageTek 6140 with 15 hard disks

    Hi,
    How would Oracle recommend configuring a Sun StorageTek 6140 with 15 hard disks for optimal use with multiple Solaris servers running Oracle instances connected via a SAN? Should the storage array be configured as a single RAID 5 device and then LUNs created for the different servers? Or should each Oracle instance have its own dedicated hard disks?
    Also might we see better performance if we used Solid State Devices for the ZFS Intent Log (ZIL) and/or L2ARC on ZFS, instead of using the UFS file system straight to the SAN?
    Regards
    NM

    I don't think Oracle would recommend any particular way. I would consider the following, but your mileage may vary, so testing is very important:
    RAID 1 (10) for logs
    RAID 5 for tables
    But if you have many servers accessing the same LUNs/vdisks then contention may be a problem. Maybe consider consolidating all your servers onto a single Oracle server (for protection, a second server with Oracle Solaris Cluster).
    ASM is the best method for managing disks and data placement. But if you like to see files/directories, then go for UFS with direct I/O enabled. ZFS is fantastic if you want to use snapshots/clones/compression, etc., but I think UFS is faster.
    As for cache, Oracle 11gR2 has support for storing objects specifically in flash, so look at using the Sun F20 card with 96 GB of flash.
    HTH
    Andy
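    A hedged sketch of the two layouts mentioned above (device names are placeholders, and whether separate ZIL/L2ARC devices actually help depends entirely on the workload, so test first):
    zpool create dbpool mirror c2t0d0 c2t1d0 log c3t0d0 cache c3t1d0   # ZFS pool with SSD slog (ZIL) and L2ARC devices
    mount -F ufs -o forcedirectio /dev/dsk/c2t2d0s0 /u02               # or UFS with direct I/O enabled, the alternative above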

  • Any experience/suggestions with Oracle on Sun Storagetek 2540 SAN via NFS?

    Hi,
    Currently Oracle 11g is up and running on local drives, and we intend to move it to a Sun StorageTek 2540 SAN with a Sun 4140 server (acting as the NFS server) via NFS. So as far as the database server is concerned, it would be an NFS mount to the NFS server.
    There's a new feature in Oracle 11g, Direct NFS Client, which is supposed to be more effective and more efficient than relying on the O/S handling of the NFS protocol. But the white paper ONLY mentions the NFS NAS scenario, not SAN. I think it would work with SAN as well, because as far as the database server is concerned, it's communicating with the NFS server via the NFS protocol. What do you think? Any experience/suggestions?
    In terms of StorageTek 2540 SAN support, Oracle's official answer would be "it's not supported unless the storage vendor and the storage device are listed in the Oracle Storage Compatibility Program list..." However, the OSCP was decommissioned in 1/2007, and the StorageTek 2540 was introduced AFTER that date.
    I think it MIGHT work given the Sun/Oracle alliance.
    What do you think?
    Input/suggestion is appreciated,
    Helen

    My 3-year-old Linux installation on my laptop, which is my NFS client most of the time, uses UDP by default (kernel 2.4.19).
    Anyway the key is that the NFS client, or better, the RPC implementation on the client is intelligent enough to detect a failed TCP connection and tries to reestablish it with the same IP address. Now once the cluster has failed over the logical IP the reconnect will be successful and NFS traffic continues as if nothing bad had happened. This only(!) works if the NFS mount was done with the "hard" option. Only this makes the client retry the connection.
    Other "dumb" TCP based applications might not retry and thus would need manual intervention.
    Regarding UFS or PxFS, it does not make a difference. NFS does not know the difference. It shares a mount point.
    Hope that helped.
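    For what it's worth, a hedged example of the kind of Solaris mount options usually suggested for Oracle datafiles over NFS, including the "hard" option discussed above (server and export names are placeholders; check the Oracle documentation for your exact platform before relying on this):
    mount -F nfs -o hard,rw,bg,vers=3,proto=tcp,rsize=32768,wsize=32768,forcedirectio nfssrv:/export/oradata /oradata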

  • Xserve + 3rd party disk array

    So now that the XRAID has been EOL'ed, I have a question that's quite possibly silly. Is the Promise disk array the only compatible array for the Xserve, or are there alternatives (such as Dell, HP or other brands)? And if so, what models, interfaces, ...?
    Thanks in advance.

    Hi
    Donald has you covered for the rest but this question:
    Did the XRAID connect directly to an Xserve
    Yes. Essentially you use a Mac. It could be an Xserve, but it could equally be any desktop model that meets the minimum qualifying spec for OS X Server. Anything from a G4 MDD to a high-spec Mac Pro will do, as long as the model decided upon has a free PCI slot. An Apple FC PCI card and two FC cables, and that's it. Basically, think of it as a large external drive attached to a host using Fibre Channel cables. It does not even have to be the Server software; the client software will do equally well. If you have enough PCI slots (3 or more) you could have 3 or more XRAIDs connected to it.
    Tony

Maybe you are looking for

  • How do I display a PDF file in Adobe from h:commandLink or h:outputLink

    I am looking to add a simple "Help" link on my header. The idea is to add an h:outputLink or h:commandLink component and when it is clicked, an existing PDF file will be displayed in the default (Adobe) viewer. Can anyone give me a simple example of

  • Imovie events out of order

    Since I've upgraded to iMovie in Mavericks, all of my events are not in order according to the dates the clips were taken. They're all out of order. Is there an easy way to have the events in order according to the date the video clips were taken?

  • Ios 8.0.2 list of issues on my iphone 5s

    I have the following problems: 1. Random capitalizations, as you can see above. 2. Shortcuts in Safari cause a crash. I have "em" and "emm" as 2 different shortcuts, but just by typing that, Safari crashes. 3. All of my apps crash. Even Settings and

  • Internal order discription in source doc

    Hi. Is it possible to have the description of the internal order appear on the source document (Cheque Payment Voucher, Cash Payment Voucher or Material Requisition Note)? If so, please advise on that... Thanks. K

  • Finder bug in icon view. can anyone help me?

    So, I'm experiencing a weird bug since I've installed OS X Mountain Lion. When I set 'show items as icons' in Finder, the fonts used to label the files and folders are most of the time kinda messed up. (They're not the normal clean black fonts, but th