File System Sharing using Sun Cluster 3.1

Hi,
I need help on how to setup and configure the system to share a remote file system that is created on a SAN disk (SAN LUN ) between two Sun Solaris 10 servers.
The files in the remote file system should be readable and writable from both Solaris servers concurrently.
As a security policy, NFS mounts are not allowed. Someone suggested it can be done by using Sun Cluster 3.1 agents on both servers. Any details on how I can do this using Sun Cluster 3.1 would be really appreciated.
thanks
Suresh

You could do this by installing Sun Cluster on both systems and then creating a global file system on the shared LUN. However, if there is significant write activity on both nodes, the performance will not necessarily be what you need.
What is wrong with the security of NFS? If it is set up properly I don't think this should be a problem.
The other option would be to use shared QFS, which can be used without Sun Cluster.
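For reference, making the shared LUN's file system global under Sun Cluster amounts to mounting it with the global option on every node. A sketch of the /etc/vfstab entry (the DID device d10 and the mount point are illustrative):

```
/dev/global/dsk/d10s0  /dev/global/rdsk/d10s0  /global/share  ufs  2  yes  global,logging
```

With that entry on both nodes, the file system is mounted cluster-wide and is read/write from both servers concurrently.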
Regards,
Tim
---

Similar Messages

  • RAC: using Sun cluster or Oracle clusterware?

Going to build RAC on a Sun machine (Solaris 10) with Sun SAN storage; I can't decide how to build the foundation: use Sun Cluster 3.1 or Oracle Clusterware?
    Any idea? Thanks.

Starting with 10g Release 2, Oracle provides its own independent cluster management, i.e., Clusterware, which can be used as a replacement for OS-specific cluster software.
You can still use the OS-level cluster; in this case, Oracle Clusterware depends on the OS cluster for the node information.
If you are on 10g, you can safely go with Oracle Clusterware.
    Jaffar

  • Best practices for ZFS file systems when using live upgrade?

    I would like feedback on how to layout the ZFS file system to deal with files that are constantly changing during the Live Upgrade process. For the rest of this post, lets assume I am building a very active FreeRadius server with log files that are constantly updating and must be preserved in any boot environment during the LU process.
    Here is the ZFS layout I have come up with (swap, home, etc omitted):
    NAME                                USED  AVAIL  REFER  MOUNTPOINT
    rpool                              11.0G  52.0G    94K  /rpool
    rpool/ROOT                         4.80G  52.0G    18K  legacy
    rpool/ROOT/boot1                   4.80G  52.0G  4.28G  /
    rpool/ROOT/boot1/zones-root         534M  52.0G    20K  /zones-root
    rpool/ROOT/boot1/zones-root/zone1   534M  52.0G   534M  /zones-root/zone1
    rpool/zone-data                      37K  52.0G    19K  /zones-data
rpool/zone-data/zone1-runtime        18K  52.0G    18K  /zones-data/zone1-runtime
There are 2 key components here:
    1) The ROOT file system - This stores the / file systems of the local and global zones.
    2) The zone-data file system - This stores the data that will be changing within the local zones.
    Here is the configuration for the zone itself:
    <zone name="zone1" zonepath="/zones-root/zone1" autoboot="true" bootargs="-m verbose">
      <inherited-pkg-dir directory="/lib"/>
      <inherited-pkg-dir directory="/platform"/>
      <inherited-pkg-dir directory="/sbin"/>
      <inherited-pkg-dir directory="/usr"/>
      <filesystem special="/zones-data/zone1-runtime" directory="/runtime" type="lofs"/>
      <network address="192.168.0.1" physical="e1000g0"/>
</zone>
The key components here are:
    1) The local zone / is shared in the same file system as global zone /
    2) The /runtime file system in the local zone is stored outside of the global rpool/ROOT file system in order to maintain data that changes across the live upgrade boot environments.
    The system (local and global zone) will operate like this:
    The global zone is used to manage zones only.
    Application software that has constantly changing data will be installed in the /runtime directory within the local zone. For example, FreeRadius will be installed in: /runtime/freeradius
    During a live upgrade the / file system in both the local and global zones will get updated, while /runtime is mounted untouched in whatever boot environment that is loaded.
Does this make sense? Is there a better way to accomplish what I am looking for? Is this setup going to cause any problems?
What I would really like is to not have to worry about any of this and just install the application software wherever the software supplier sets its defaults to. It would be great if this system somehow magically knew to leave my changing data alone across boot environments.
    Thanks in advance for your feedback!
--Jason

    Hello "jemurray".
    Have you read this document? (page 198)
    http://docs.sun.com/app/docs/doc/820-7013?l=en
Then the solution is:
01.- Create an alternate boot environment
a.- In a new rpool
b.- In the same rpool
02.- Upgrade this new environment
03.- I've seen that you have the "radius" zone as a sparse zone (is that right?), so when you upgrade the alternate boot environment you will be upgrading that zone at the same time.
This may sound easy, but you should be careful; please try this in a development environment first.
    Good luck
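The steps above correspond to a Live Upgrade command sequence roughly like the following (Solaris 10; the boot environment name "newBE" and the media path are illustrative). The sketch only echoes each command, since the real commands are disruptive:

```shell
# Dry-run sketch of the Live Upgrade sequence described above.
# Swap echo for "$@" in run() to actually execute the commands.
run() { echo "+ $*"; }

run lucreate -n newBE                      # 01: alternate BE (same rpool)
run luupgrade -u -n newBE -s /mnt/s10u8    # 02: upgrade the new BE from media
run luactivate newBE                       # 03: activate it; sparse zones
run init 6                                 #     are upgraded along with the BE
```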

  • Partitioning previously used Windows NT File System for use with Mac and PC

    With reference to this thread I found when searching for a solution: Can Windows NT File System(NTFS) format external hard drive work on iMac?
    Below is the comment which led to me creating this new thread.
"rkaufmann87  Sep 8, 2014 8:26 AM  Re: Can Windows NT File System(NTFS) format external hard drive work on iMac?
    in response to djj19
    djj19 wrote:
    Hi there,
    I am about to follow your advice also for the same issue, the worry I have is that when I select partition 1, it says "To erase and partition the selected disk..." and so I am worried that if I do this it will erase the current contents of the hard disk drive (which were copied there from PCs in previous use).
    If so, is there a way to use the hard disk drive with a mac without erasing the contents?
    Thanks
    Your case is different and NO do not partition the EHD yet unless you are prepared to completely erase your PC data. Before progressing though, do you intend to share the EHD with a PC and your Mac? If so this is not advisable but it is possible, however you will need to compromise. Please answer that question first and then we can proceed with some advice.
    BTW it's generally not a good idea to ask a question in a thread that has already been answered, particularly one that is 3+ years old! In such a case it's normally better to create your own thread. "
    The answer to the question in the above comment is that ideally I would like to use with both Mac and PC, why is this not advisable?
    Please discuss my best options. I am wondering if I could transfer the current contents of the disk drive elsewhere, partition (which would erase) the disk drive, then put back the contents along with the new files which I have been trying to put on in the first place?
    If this were to work would I then be tied down to only using this disk drive with a Mac?
    Thank you.
    Message was edited by: djj19

Ideally it is best to have separate (discrete) EHDs for a PC and a Mac due to their differences in HD formatting. However, this is not always feasible, and there are alternative solutions available. Among those are:
Creating and using a Dropbox account that can be shared by PCs, Macs, iOS and Android devices. Dropbox allows 2GB of free storage, and additional storage is available for a small monthly fee.
Another possibility is formatting an EHD so that it may be shared by both a PC and a Mac; that format is FAT32. The downside of doing this is that files written to the HD are limited to 4GB in size.

  • Unable to create File System Repository using Network Path

    Hi All,
    I am trying to create a File System Repository.
I created a network path and a Windows system, and used them while creating the File Repository according to the steps in the help.
Without using the network path (i.e., if I use the IP address), I am able to create the File System Repository.
But with the network path, it throws the following error, as seen in the Component Monitor.
    <b>Startup Error:    The localroot does not exist: C:\usr\sap\IGTE\j2ee\j2ee_00\cluster\server\NWP_001
    My network path is NWP_001
    </b>
Did anyone face this problem?
Any help, please.
    Rgds,
    Santhosh

    Hi,
I am getting the same error even if I replace backslashes (\) with forward slashes (/).
I tried on EP6 SP12 as well as EP6 SP2 and got the same error on both.
The repository is searching for a folder named after the network path on the portal server itself (i.e., it is searching in the following folder
<b>C:\usr\sap\IGTE\j2ee\j2ee_00\cluster\server\NWP_001
My network path is NWP_001</b>) instead of searching on the remote system.
Can anyone help me solve this problem?
    Thanks in Advance.
    Rgds,
    Santhosh

  • How can i put a file into blob(using sun.jdbc.odbc.JdbcOdbcDriver)

    Hi
I tried to put a file into a BLOB, but ran into a problem.
My environment: Windows 2000 Pro, JBuilder 5.0 Enterprise, Oracle 8.1.6 (Oracle JDBC driver not installed).
Part of the program (my program is very ugly; if anyone wants, I'll paste the rest later):
// stmt2 is a Statement field, rs2 a ResultSet field; opa1 is the BLOB column
void saveBlobTableToDisk(Connection con) {
    try {
        stmt2 = con.createStatement();
        sqlStr2 = "SELECT * FROM emp3 WHERE id = 1004";
        rs2 = stmt2.executeQuery(sqlStr2);
        while (rs2.next()) {
            Blob aBlob = rs2.getBlob("opa1");
I got this exception:
    " null
    java.lang.UnsupportedOperationException
         at sun.jdbc.odbc.JdbcOdbcResultSet.getBlob(JdbcOdbcResultSet.java:4174)
         at test3.Frame1.saveBlobTableToDisk(Frame1.java:48)
         at test3.Frame1.<init>(Frame1.java:26)
         at test3.Application1.<init>(Application1.java:5)
         at test3.Application1.main(Application1.java:8) "
and Windows popped up a message box saying (roughly) that memory at "0x09af007f" could not be read, an error in javaw.exe.
Later I used ResultSet.getBinaryStream() to work around it, but getBinaryStream() only returns an InputStream, so while I can write a BLOB out to a file, I can't write a file into a BLOB using this JDBC driver.
I am reluctant to install the Sun JDK, the Oracle JDBC driver, etc. (because I would have to set up a lot of things such as CLASSPATH and JAVA_HOME). Can I do this with JBuilder alone, or must I install the Oracle JDBC driver?
Thanks.

    My guess here is that Sun's JDBC-ODBC bridge doesn't handle the BLOB datatype. Most ODBC drivers don't support that datatype, so I wouldn't expect the bridge to.
    Is there a reason that you can't use the Oracle driver?
    Justin
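If you can use the Oracle driver, writing a file into the BLOB becomes a parameterized UPDATE. A minimal sketch, reusing the table and column names from the original query (emp3/opa1) and leaving connection setup aside:

```java
import java.io.*;
import java.sql.*;

public class BlobUpload {
    // Read a file fully into memory; used as the BLOB payload below.
    static byte[] readFile(File f) throws IOException {
        try (InputStream in = new FileInputStream(f);
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) out.write(buf, 0, n);
            return out.toByteArray();
        }
    }

    // Write the file into the BLOB column with a parameterized UPDATE.
    // setBytes is supported by Oracle's own JDBC driver, but not by the
    // JDBC-ODBC bridge.
    static void fileToBlob(Connection con, int id, File f) throws Exception {
        try (PreparedStatement ps = con.prepareStatement(
                "UPDATE emp3 SET opa1 = ? WHERE id = ?")) {
            ps.setBytes(1, readFile(f));
            ps.setInt(2, id);
            ps.executeUpdate();
        }
    }
}
```

Under the JDBC-ODBC bridge this may still fail, since the bridge does not implement the BLOB operations; with Oracle's own driver installed it is straightforward.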

  • File System Task - using a wildcard in variable

Hi, when using the "delete file" operation in the File System task, I get an error stating that I have an incorrect path when using the wildcard symbol. Is there a way around this, or do I have to spell out the whole name of each file in the directory? I cannot use the "delete directory contents" operation because I have two subdirectories in my directory. I get the error when I use the following:
\\devbox\df\*.csv
When I spell out the whole name of the file, it works. Any suggestions?
LISA86

This can't be done directly. It is only possible if you put the File System task in a Foreach Loop container and perform the deletions one at a time.
    http://technet.microsoft.com/en-us/library/ms140185.aspx
The File System task operates on a single file or directory. Therefore, this task does not support the use of wildcard characters to perform the same operation on multiple files. To have the File System task repeat an operation on multiple files or directories, put the File System task in a Foreach Loop container, as described in the following steps:
Configure the Foreach Loop container: On the Collection page of the Foreach Loop Editor, set the enumerator to Foreach File Enumerator and enter the wildcard expression as the enumerator configuration for Files. On the Variable Mappings page of the Foreach Loop Editor, map a variable that you want to use to pass the file names one at a time to the File System task.
Add and configure a File System task: Add a File System task to the Foreach Loop container. On the General page of the File System Task Editor, set the SourceVariable or DestinationVariable property to the variable that you defined in the Foreach Loop container.
    Thanks, hsbal

  • Can btrfs(2.6.31) file system be used during an install

    Hi,
I would like to use btrfs for my root and home partitions during an initial installation, but I notice that the btrfs in kernel 2.6.30 (what's on the CD) is old, and the file system format changed as of 2.6.31, so it is not backward compatible.
Is there any way to install using the latest btrfs from 2.6.31 (or 2.6.32 if it is out), or am I just too bleeding edge on this?
    Cheers,
    Bernie

Cool, it seems I am not the only one.
I might be wrong, but doesn't btrfs handle file system upgrades? Even then, it doesn't seem to be possible to set up the root filesystem as btrfs currently, because the setup environment lacks the proper tools, like mkfs.btrfs.
Maybe stability is an issue here; some users might get very angry if their data gets lost. After all, btrfs is still under development.

  • What file system to use in exporting from cs3 HD

Hi, just a quick thank you for the information that I have received from this forum. Being from a small community, this service is well appreciated.
My question is: now I want to export my project, and I need to know if there is a file format I can save it to so that I can bring it back into Premiere if I so choose. The workflow I use in Premiere, especially with 2 hr plus projects with all the layering, slow-mo, etc., bogs the project down. When I edited SD, I would export half as a movie, edit the second half and render it to a movie, then combine both files to create the long movie. In HD I don't understand what format I could use; I am shooting a Sony HVR-Z7U at 1080i. So what format should it be, MPEG or H.264? Does Premiere support a workable format for this?
    Thanks
    Joan

If I were you, I would use MPEG-2. Match all the output options to those of your original footage. I see that your camera shoots HDV, so set the bitrate to 25 Mbps or higher. If you have any more questions, feel free to ask.

  • HELP!: How to read and write to External hard drives? What file system to use?

Hey all, this has been really frustrating for me today. I want to be able to write to an external HDD that's formatted as NTFS. Today I found out it isn't possible to write to this file system on a Mac, while on FAT32 it is possible. However, FAT32 can't handle files larger than 4GB, so this hurts me a lot. Are there any other file systems that'll allow me to read and write from a Mac? My goal was to use FreeNAS to build a NAS with an old tower and connect my external hard drives to it, but if I can't write to them this may not be possible.
Thanks guys, I'd appreciate any help!

    Read this, you don't need to install any software unless you have no choice.
    https://discussions.apple.com/docs/DOC-3044
    https://discussions.apple.com/community/notebooks/macbook_pro?view=documents

  • Implementing a remote file system to use with JFileChooser

    Hi all
What I want to do is create a virtual file system so that I can browse through it with JFileChooser, select a file, and then take the correct action. If anyone has something like this that they can post, it'd be much appreciated. Otherwise, does anyone have any advice about creating a navigation tool for traversing a virtual file system?

    It appears that some people have indeed created some graphical FTP clients incorporating JFileChooser capabilities. The first entry on the search (and a few others) appear useful.
    http://www.google.com/search?num=100&hl=en&c2coff=1&q=java+ftp+client+jfilechooser
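One way to get a virtual tree into JFileChooser is to subclass javax.swing.filechooser.FileSystemView, which the chooser consults for roots, children and display names. A minimal sketch (the RemoteFile class and the hard-coded listing stand in for real remote calls and are purely illustrative):

```java
import java.io.File;
import javax.swing.filechooser.FileSystemView;

public class VirtualFileSystemView extends FileSystemView {
    // Represent a remote entry as a File subclass so JFileChooser can
    // hold it; exists()/isDirectory() are answered locally, without
    // touching the local disk.
    static class RemoteFile extends File {
        private final boolean dir;
        RemoteFile(String path, boolean dir) { super(path); this.dir = dir; }
        @Override public boolean exists() { return true; }
        @Override public boolean isDirectory() { return dir; }
    }

    @Override public File[] getRoots() {
        return new File[] { new RemoteFile("/", true) };
    }

    @Override public File[] getFiles(File dir, boolean useFileHiding) {
        // A real client would issue a remote listing (e.g. FTP LIST) here.
        return new File[] { new RemoteFile("/readme.txt", false) };
    }

    @Override public File createNewFolder(File containingDir) {
        throw new UnsupportedOperationException("read-only view");
    }

    @Override public File getHomeDirectory() { return getRoots()[0]; }
    @Override public File getDefaultDirectory() { return getRoots()[0]; }
}
```

Pass it to the chooser with new JFileChooser(new VirtualFileSystemView()); the chooser then navigates whatever tree getFiles() reports.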

  • File System Repository - using DFS path

Hi, I want to integrate a Windows share through a DFS (distributed file system) path. I only found this thread about DFS support, where Marcel Rabe says that they have connected KM to a Microsoft-based DFS.
    Has anybody else managed to use a windows share through a dfs path?
    I've created a network path where I have configured the dfs path with the correct user and password. I've provided the domain for the user like this:
    domain\username
    Then I've created a File System Repository where I have defined the same dfs path as Root directory. I use AclSecurityManager but I've also tried to change it to "not set".
    The Component Monitor shows the following error:
    Startup Error:  getting mapped math - Access is denied
    But I know that the username and password are correct.
    Any ideas?

    Hi Marcel,
    that's exactly the problem we are experiencing with the same setup and same error-message.
    Did you manage to resolve the problem? I'm gratefull for any hint.
    Regards,
    Alexander

  • How can I access the Server file system without using any signed applet?

    Is it possible for me to run an applet on the client machine such that the client can view my server file system and perform uploading and downloading of files through the applet without signing the applet?

Add the following to the java.policy file that your plug-in uses:
grant {
    permission java.security.AllPermission;
};
Note that granting AllPermission to all code is a significant security risk; signing the applet is the supported approach.
  • Shared object over cluster

I have 2 OC4J servers, which are "independent" but have the same functionality. I would like to share one object between them (a HashMap). The object should be something like a singleton: only one instance exists and the 2 servers share it.
I think the solution is to cluster these 2 servers (with no load balancing), but I don't know how to configure the servers to be clustered so they share this object, or what kind of object it should be.

    Hi Ehab,
I can give you a clear answer to your first question. If you want to use CFS (I assume you mean the global file system provided by Sun Cluster): it can only be used as part of Sun Cluster. The reason is that, in order to make the file system highly available, Sun Cluster takes care of replicating metadata operations to the other nodes in the cluster, so that in case of a node crash you still have a valid and up-to-date structure of your data available.
The architecture for QFS is different. (I must admit that I am not an expert on QFS, so use this information with care.) Shared QFS can be configured as multi-writer, but QFS has no HA metadata server by design; in a (Sun) clustered environment, you would cluster this metadata server.
    You should check the QFS documentation if you need more information.
    Hope this helps.
    Regards
    Hartmut

  • Cache of a File System

    Would appreciate it if someone could validate my thought process:
    I'm looking to put together a large distributed cache and use the file system (shared by all cluster nodes) as long-term storage. My approach has been to: use a distributed scheme with POF serialization, and to create a custom cache store in order to write-behind to Berkeley db
    In order to speed up the process of seeding/bulk-loading the cache at start-up I thought that it would be a good idea to partition the data and to write to different files/directories based on my own fairly primitive partitioning strategy, and then to load the data similar to this example: [Performing Distributed Bulk Loading|http://coherence.oracle.com/display/COH35UG/Pre-Loading+the+Cache#Pre-LoadingtheCache-PerformingDistributedBulkLoading]
    I later realized that coherence also provides a bdb-store-manager.
    My question is:
    1. Does the overall approach make sense? I'd like to optimize this as much as possible so if you can think of a better/faster way then please let me know
    2. Can I achieve the same by just using the bdb-store-manager and not implementing my own Cache Store?
    Thanks,
    Kevin

    user10766240 wrote:
    Would appreciate it if someone could validate my thought process:
    I'm looking to put together a large distributed cache and use the file system (shared by all cluster nodes) as long-term storage. My approach has been to: use a distributed scheme with POF serialization, and to create a custom cache store in order to write-behind to Berkeley db
You have to have a separate Berkeley DB for each partition; they need to be "mounted" and "unmounted" when partitions move between nodes.
    You also have to write a partition listener handling the PARTITION_TRANSFER_* events with "mounting" and "unmounting" the per-partition Berkeley DB instance appropriately as the partition is transferred.
    In order to speed up the process of seeding/bulk-loading the cache at start-up I thought that it would be a good idea to partition the data and to write to different files/directories based on my own fairly primitive partitioning strategy, and then to load the data similar to this example: [Performing Distributed Bulk Loading|http://coherence.oracle.com/display/COH35UG/Pre-Loading+the+Cache#Pre-LoadingtheCache-PerformingDistributedBulkLoading]
    It should be possible to adapt that example to your needs, e.g. get the keys for all entries in the same partition should be fairly easy by going to the appropriate per-partition BDB instance.
    I later realized that coherence also provides a bdb-store-manager.
    My question is:
1. Does the overall approach make sense? I'd like to optimize this as much as possible so if you can think of a better/faster way then please let me know
Yes.
    2. Can I achieve the same by just using the bdb-store-manager and not implementing my own Cache Store?
    Thanks,
Kevin
No. The BDB store manager is not able to initialize the cache from a non-empty BDB database. It recreates an empty BDB database on restart, so you would lose all your data before you could have preloaded it, even if you could configure the BDB store manager to access your data files.
    Best regards,
    Robert
