Best practices for additional storage devices in a clustered container

I am wondering what is considered best practice for adding more storage devices. I have a Solaris Cluster 3.2 setup with a failover resource group containing an SCZBT resource and an HASP resource for the zone root path. Now I have to add more storage devices (for Oracle) to the zone. The additional storage devices are in the same metaset as the zone root. I see the following options:
-> Add it to the zone (simple, but not managed by HASP)
-> Add it to the existing HASP resource for the RG and lofs-mount it in the zone (managed by HASP, but needs a lofs mount)
Are there other options, and what is considered best practice?
Fritz

Hi Fritz,
no matter which option you take, it should result in lofs mounts.
Personally, I would not mix the concepts: you started with HASP, so I would continue with HASP.
To achieve this you have to take three steps:
1. add the file systems to the HASP resource,
2. create the mount points in the local zone,
3. add the file systems to the Mounts variable in the SCZBT's parameter file.
If you do not want to reboot the zone, you can mount the loopback file systems manually, but the safest option (you will not mess up with typos in the mount points and realize it later) would be to disable and re-enable the sczbt resource, as sketched below.
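For illustration, the three steps could look like the following, run from the global zone. This is only a sketch: hasp-rs, sczbt-rs, myzone and the paths are placeholder names for this setup, and the exact syntax of the Mounts variable should be verified against the sczbt README:

  # 1. add the new file system to the HAStoragePlus resource (+= appends to the list property)
  clresource set -p FilesystemMountPoints+=/failover/oradata hasp-rs
  # 2. create the mount point inside the local zone
  zlogin myzone mkdir -p /oradata
  # 3. add the file system to the Mounts variable in the sczbt parameter file,
  #    e.g. Mounts="/oradata:/failover/oradata:lofs" (check the README for the
  #    exact format), then bounce the resource so the lofs mount is picked up
  clresource disable sczbt-rs
  clresource enable sczbt-rs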
Cheers
Detlef

Similar Messages

  • Seeking advice on Best Practices for XML Storage Options - XMLTYPE

    Sparc64
    11.2.0.2
    During OOW12 I tried to attend every XML session I could. There was one where a Mr. Drake was explaining something about not using a CLOB
    as the storage for the XML, and that "it will break your application."
    We're moving forward with storing the industry-standard invoice in an XMLType column, but I'm now concerned that our table definition is not what was advised:
    --i've dummied this down to protect company assets
      CREATE TABLE "INVOICE_DOC"
       (     "INVOICE_ID" NUMBER NOT NULL ENABLE,
         "DOC" "SYS"."XMLTYPE"  NOT NULL ENABLE,
         "VERSION" VARCHAR2(256) NOT NULL ENABLE,
         "STATUS" VARCHAR2(256),
         "STATE" VARCHAR2(256),
         "USER_ID" VARCHAR2(256),
         "APP_ID" VARCHAR2(256),
         "INSERT_TS" TIMESTAMP (6) WITH LOCAL TIME ZONE,
         "UPDATE_TS" TIMESTAMP (6) WITH LOCAL TIME ZONE,
          CONSTRAINT "FK_####_DOC_INV_ID" FOREIGN KEY ("INVOICE_ID")
                 REFERENCES "INVOICE_LO" ("INVOICE_ID") ENABLE
       ) SEGMENT CREATION IMMEDIATE
    INITRANS 20  
    TABLESPACE "####_####_DATA"
       XMLTYPE COLUMN "DOC" STORE AS BASICFILE CLOB  (
      TABLESPACE "####_####_DATA" ENABLE STORAGE IN ROW CHUNK 16384 RETENTION
      NOCACHE LOGGING
      STORAGE(INITIAL 81920 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT))
    XMLSCHEMA "http://mycompanynamehere.com/xdb/Invoice###.xsd" ELEMENT "Invoice" ID #####"
    What is a best practice for this type of table? Yes, we intend to register the schema against an XSD.
    Any help/advice would be appreciated.
    -abe

    Hi,
    I suggest you read this paper : Oracle XML DB : Choosing the Best XMLType Storage Option for Your Use Case
    It is available on the XML DB home page along with other documents you may be interested in.
    To sum up, the storage method you need depends on the requirement, i.e. how XML data is accessed.
    >> There was one where a Mr. Drake was explaining something about not using clob as an attribute to storing the xml and that "it will break your application." <<
    I think the message Mark Drake wanted to convey is that CLOB storage is now deprecated and shouldn't be used anymore (though still supported for backward compatibility).
    The default XMLType storage starting with version 11.2.0.2 is now Binary XML, a post-parse binary format that optimizes both storage size and data access (via XQuery), so you should at least use it instead of BASICFILE CLOB.
    Schema-based Binary XML is also available, it adds another layer of "awareness" for Oracle to manage instance documents.
    To use this feature, the XML schema must be registered with "options => dbms_xmlschema.REGISTER_BINARYXML".
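    For example, a minimal sketch of both steps (all names are hypothetical, the XSD is assumed to already sit in the XDB repository at /public/Invoice.xsd, and the DBMS_XMLSCHEMA parameter defaults are worth double-checking for your exact version):
      BEGIN
        DBMS_XMLSCHEMA.registerSchema(
          schemaurl => 'http://mycompanynamehere.com/xdb/Invoice.xsd',
          schemadoc => XDBURIType('/public/Invoice.xsd').getClob(),
          gentypes  => FALSE,  -- object types are only needed for object-relational storage
          gentables => FALSE,
          options   => DBMS_XMLSCHEMA.REGISTER_BINARYXML);
      END;
      /
      -- schema-based binary XML storage instead of BASICFILE CLOB:
      CREATE TABLE invoice_doc (
        invoice_id NUMBER NOT NULL,
        doc        XMLTYPE NOT NULL
      )
        XMLTYPE COLUMN doc STORE AS SECUREFILE BINARY XML
        XMLSCHEMA "http://mycompanynamehere.com/xdb/Invoice.xsd" ELEMENT "Invoice";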
    The other common approach for schema-based XML is Object-Relational storage.
    BTW, you may want to post in the dedicated forum next time: {forum:id=34}
    Mark Drake is one of the regulars there, along with Marco Gralike, whom you've probably seen at OOW too.
    Edited by: odie_63 on 18 oct. 2012 21:55

  • Best Practices for ASA 5500 Device Monitoring

    I have looked high and low and am unable to find anything on this topic. I am hoping that somebody here may be able to share some insight into what are considered the best practices for monitoring ASAs--specifically the 5510 with the Sec+ license.
    My current monitoring application keeps reporting issues with outbound interface buffers being too high, but there are not any performance issues and I believe the thresholds are just set absurdly low.
    Thank you in advance for any assistance.

    Hi James,
    You probably won't be able to find any all-encompassing documentation for these types of best practices that cover all scenarios. The better method would be to define exactly what items you'd like to monitor and we can provide some guidance on how to best get that working for you.
    -Mike

  • What is the best practice for inserting (unique) rows into a table containing key columns constraint where source may contain duplicate (already existing) rows?

    My final data table has a unique key constraint across two columns. I insert data into this table from a daily capture table (which also contains the two columns that make up the key in the final data table, but without any constraint on them). I don't want to insert rows from daily capture which already exist in the final data table (based on the two key columns). Currently, what I do is select * into a #temp table from the join of the daily capture and final data tables on these two key columns. Then I delete the rows in the daily capture table which match the #temp table. Then I insert the remaining rows from daily capture into the final data table.
    Would it be possible to simplify this process by using an INSTEAD OF trigger on the final table and just insert directly from the daily capture table? What would that look like?
    What is the best practice for inserting unique (new) rows and ignoring duplicate rows (rows that already exist in both the daily capture and final data tables) in my particular operation?
    Rich P

    Please follow basic Netiquette and post the DDL we need to answer this. Follow industry and ANSI/ISO standards in your data. You should follow ISO-11179 rules for naming data elements. You should follow ISO-8601 rules for displaying temporal data. We need
    to know the data types, keys and constraints on the table. Avoid dialect in favor of ANSI/ISO Standard SQL. And you need to read and download the PDF for: 
    https://www.simple-talk.com/books/sql-books/119-sql-code-smells/
    >> My final data table contains a two key columns unique key constraint. [unh? one two-column key or two one column keys? Sure wish you posted DDL] I insert data into this table from a daily capture table (which also contains the two columns that make
    up the key in the final data table but are not constrained (not unique) in the daily capture table). <<
    Then the "capture table" is not a table at all! Remember the fist day of your RDBMS class? A table has to have a key.  You need to fix this error. What ETL tool do you use? 
    >> I don't want to insert rows from daily capture which already exists in final data table (based on the two key columns). <<
    MERGE statement; Google it. And do not use temp tables. 
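    To make that concrete, a minimal MERGE sketch (table and column names are hypothetical, since no DDL was posted; note the source must be de-duplicated on the key columns, or the insert can still violate the unique constraint):
      MERGE INTO Final_Data AS F
      USING (SELECT DISTINCT key_col_1, key_col_2, payload_col
               FROM Daily_Capture) AS D
         ON F.key_col_1 = D.key_col_1
        AND F.key_col_2 = D.key_col_2
      WHEN NOT MATCHED BY TARGET THEN
        INSERT (key_col_1, key_col_2, payload_col)
        VALUES (D.key_col_1, D.key_col_2, D.payload_col);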
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL

  • Best practices for attaching iSCSI devices

    Hello all,
    My environment: Nexus 5010 with 2148T FEX switches in the racks.
    Can (or should) I use the 2148T as the switch between clustered servers and the iSCSI SAN?  The SAN is a Dell MD3000i connected to Dell R710 servers, Win Ent 2008 R2 Server.
    Plans are to use two vlans so I have dual data paths to the SAN.  Being new to the Nexus, I'm not sure what the best practice/configuration is for that.
    Any suggestions on reading materials or best practice configurations would be appreciated!
    Thanks...
    Ted


  • Best practices for hardening Nexus devices

    We recently implemented Nexus 7010s, 5020s, and 2248s in our data center. Now we would like to harden them from a security perspective; are there any best practices available for hardening Nexus devices?

    Hi Carl,
    I don't think a specific hardening document has been released for NX-OS yet, but you can refer to the generic IOS-based hardening guide and check which of its recommendations are supported on NX-OS:
    http://www.cisco.com/en/US/tech/tk648/tk361/technologies_tech_note09186a0080120f48.shtml
    Hope to Help !!
    Ganesh.H
    Remember to rate the helpful post

  • Best practice for video storage?

    What is the best way to store videos on external drives? I am a new FCP user coming from Sony Vegas. In Vegas I could import the .mts files without needing the complete file structure that FCP requires when ingesting to ProRes. Because Vegas worked this way, I store the .mts files in folders dated by shoot. Now I'm faced with finding another program to convert to ProRes, since FCP needs the complete file structure to ingest media. Another issue is that I have different types of shoots on a single flash card, so I can no longer just dump the .mts files into dated folders. What is the best way of splitting up the flash card while keeping the information FCP needs for ingesting media?
    The obvious solution would be to dump the videos to my external drive as soon as I'm done shooting, but most likely that's something I'll forget to do.
    Does anyone have advice on storing videos when one card holds two or more types of footage from different events?

    I have a tutorial that covers this...
    http://library.creativecow.net/ross_shane/tapeless-workflow_fcp-7/1

  • Best Practice for CQ Updates in complex installations (clustering, replication)?

    Hi everybody,
    we are planning a production setup of CQ 5.5 with an authoring cluster replicating to 4 publisher instances. We were wondering what the best update process looks like in a scenario like this. Let's say we need to install the latest CQ 5 update (which we actually do):
    Do we need to do this on every single instance, or can replication be utilized to distribute updates?
    If updating a cluster - same question: one instance at a time? Just one, and the cluster does the rest?
    The question is really: can update packages (official or custom) be automatically distributed to multiple instances? If yes, is there a "best practice" way to do this?
    Thanks for any help on this!
    Henning

    Hi Henning,
    The CQ 5.5 service packs are distributed as CRX packages. You can replicate these packages, and on the publish instances they are unpacked and installed.
    In a cluster the situation is different: you have only one repository. So when you have installed the service pack on one node, the new versions of bundles and other stuff are unpacked to the repository (most likely to /libs). Then the magic (essentially the JcrInstaller) takes care that the bundles are extracted and started.
    I would not recommend activating the service pack in a production environment, because then all publish instances will be updated at the same time. And as a restart is required, you might encounter downtime. Of course you can make it work if you play with the replication agents :-)
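    If you do go instance by instance, one way to push a package to a single instance from the command line is the CRX package manager's HTTP interface; a sketch, where host, port, credentials and the file name are placeholders:
      # upload and install a package on one publish instance
      curl -u admin:admin -F file=@cq5-update-pkg.zip -F install=true \
        http://publish-host:4503/crx/packmgr/service.jsp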
    cheers,
    Jörg

  • Best practice for using both devices

    Hello,
    I currently have a MacBook Pro and a Mac Pro that I use for my editing.  My questions are as follows:
    * What is the best way to import/export photos for use on both devices?
    * Is there a way to sync catalogs on both computers so they update on their own?
    * What is the best way to maintain the integrity of the photos (edits) when moving them from computer to computer?
    I appreciate any help that is available!
    Thanks,
    Chris

    There have been a number of threads on this issue, that probably give more detail, but this is what I would do
    Put the photos and catalog on an external HD, and move that back and forth between the MacBook Pro and the Mac Pro. Then you have one catalog, and one set of photos (plus backups somewhere else) that are being used.
    There is no catalog sync facility in Lightroom.
    If you need to travel light and can't bring an external disk, then the photos and catalog must be on an internal disk. In that case you can move photos and catalog from one computer to the other (when you get home) using the external HD and the Lightroom command File > Export as Catalog, followed by physically transferring the catalog and photos to the other computer, followed by File > Import from Another Catalog.

  • Best Practices for data storage in portal database

    Hi,
    I need to store some structured data which is related to the portal only. This data may grow day by day and could reach a huge volume at some point in time.
    I am thinking about which is the best way to handle it from a maintenance point of view:
    I think I can store it in R/3 as a custom table, which is easy to maintain and to read/write using RFC.
    The other option is to store it in the portal database (dictionary), which may not be easy to handle.
    Suggestions, please?
    Thanks.

    The best way is to maintain the (growing) data in R/3 and use JCA to write a simple portal application (JSPDynPage or Web Dynpro) to get the data back to the portal.
    Using JCA won't noticeably affect portal performance.
    It's not advisable to store large amounts of data in the dictionary.
    Regards,
    N.

  • IOS Update Best Practices for Business Devices

    We're trying to figure out some best practices for doing iOS software updates to business devices.  Our devices are scattered across 24 hospitals and parts of two states. Going forward there might be hundreds of iOS devices at each facility.  Apple has tools for doing this in a smaller setting with a limited network, but to my knowledge, nothing (yet) for a larger implementation.  I know configurator can be used to do iOS updates.  I found this online:
    https://www.youtube.com/watch?v=6QPbZG3e-Uc
    I'm thinking the approach to take for the time being would be to have a mobile sync station setup with configurator for use at each facility.  The station would be moved throughout the facility to perform updates to the various devices.  Thought I'd see if anyone has tried this approach, or has any other ideas for dealing with device software updates.  Thanks in advance. 

    Hi Bonesaw1962,
    We've had our staff and students run iOS updates OTA via Settings -> Software Update. In the past, we put a DNS block on Apple's update servers to prevent users from updating iOS (like last fall when iOS 7 was first released). By blocking mesu.apple.com, the iPads weren't able to check for or install any iOS software updates. We waited until iOS 7.0.3 was released before we removed the block to mesu.apple.com, at which point we told users if they wanted to update to iOS 7 they could do so OTA. We used our MDM to run reports periodically to see how many people updated to iOS 7 and how many stayed on iOS 6. As time went on, just about everyone updated on their own.
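    For reference, one way to implement that kind of block on an internal BIND resolver is to serve an empty zone for the update host (a sketch; the empty-zone file path is distribution-specific):
      # make mesu.apple.com unresolvable inside the network, then reload the resolver
      printf 'zone "mesu.apple.com" { type master; file "/etc/bind/db.empty"; };\n' >> /etc/named.conf
      rndc reload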
    If you go this route (depending on the number of devices you have), you may want to take a look at Caching Server 2 to help with the network load https://www.apple.com/osx/server/features/#caching-server . From Apple's website, "When a user on your network downloads new software from Apple, a copy is automatically stored on your server. So the next time other users on your network update or download that same software, they actually access it from inside the network."
    I wish there was a way for MDMs to manage iOS updates, but unfortunately Apple hasn't made this feature available to MDM providers. I've given this feedback to our Apple SE, but haven't heard if it is being considered or not. Keeping fingers crossed.
    Hope this helps. Let us know what you decide on and keep us posted on the progress. Good luck!!
    ~Joe

  • Best practices for dealing with Exceptions on storage members

    We recently encountered an issue where one of our DistributedCaches was terminating itself and restarting due to a RuntimeException being thrown from our code (see below). As usual, the issue was in our own code, and we have updated it to not throw a RuntimeException under any circumstances.
    I would like to know if there are any best practices for Exception handling, other than catching Exceptions and logging them. Should we always trap Exceptions and ensure that they do not bubble back up to code that is running from the Coherence jar? Is there a way to configure Coherence so that our DistributedCaches do not terminate even when custom Filters and such throw RuntimeExceptions?
    thanks, Aidan
    Exception below:
    2010-02-09 12:40:39.222/88477.977 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=48): An exception (java.lang.RuntimeException) occurred reading Message AggregateFilterRequest Type=31 for Service=DistributedCache{Name=StyleCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=1021, BackupCount=1, AssignedPartitions=201, BackupPartitions=204}
    2010-02-09 12:40:39.222/88477.977 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=48): Terminating DistributedCache due to unhandled exception: java.lang.RuntimeException

    Bob - Here is the full stacktrace:
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47): An exception (java.lang.RuntimeException) occurred reading Message AggregateFilterRequest Type=31 for Service=DistributedCache{Name=StyleCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=1021, BackupCount=1, AssignedPartitions=205, BackupPartitions=204}
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47): Terminating DistributedCache due to unhandled exception: java.lang.RuntimeException
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47):
    java.lang.RuntimeException: java.lang.ClassNotFoundException: com.edmunds.vehicle.Style$PublicationState
         at com.edmunds.common.coherence.EdmundsEqualsFilter.readExternal(EdmundsEqualsFilter.java:84)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readAsObjectArray(PofBufferReader.java:3328)
         at com.tangosol.io.pof.PofBufferReader.readObjectArray(PofBufferReader.java:2168)
         at com.tangosol.util.filter.ArrayFilter.readExternal(ArrayFilter.java:243)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readAsObjectArray(PofBufferReader.java:3328)
         at com.tangosol.io.pof.PofBufferReader.readObjectArray(PofBufferReader.java:2168)
         at com.tangosol.util.filter.ArrayFilter.readExternal(ArrayFilter.java:243)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2599)
         at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:348)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:4)
         at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
         at com.tangosol.coherence.component.net.message.requestMessage.distributedCacheRequest.partialRequest.FilterRequest.read(FilterRequest.CDB:8)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$AggregateFilterRequest.read(DistributedCache.CDB:4)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:117)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
         at java.lang.Thread.run(Thread.java:619)
    Caused by: java.lang.ClassNotFoundException: com.edmunds.vehicle.Style$PublicationState
         at java.lang.Class.forName0(Native Method)
         at java.lang.Class.forName(Class.java:169)
         at com.edmunds.common.coherence.EdmundsEqualsFilter.readExternal(EdmundsEqualsFilter.java:82)
         ... 25 more
    2010-02-09 13:04:23.122/90182.743 Oracle Coherence GE 3.4.2/411 <Info> (thread=Main Thread, member=47): Restarting Service: StyleCache
    Our code was doing something simple like:
    catch (Exception e) {
        throw new RuntimeException(e);
    }
    Would using the ensureRuntimeException call do anything for us here?
    Edited by: aidanol on Feb 12, 2010 11:41 AM

  • Networking "best practice" for setting up a farm

    Hi all.
    We would like to set an OracleVM farm, and I have a question about "best practice" for
    configuring the network. Some background:
    - The hardware I have is comprised of machines with 4 gig-eth NICs each.
    - The storage will be coming primarily from a backend NAS appliance (Netapp, FWIW).
    - We have already allocated a separate VLAN for management.
    - We would like to have HA capable VMs using OCFS2 (on top of NFS.)
    I'm trying to decide between 2 possible configurations. The first would keep physical separation
    between the mgt/storage networks and the DomU networks. The second would just trunk
    everything together across all 4 NICs, something like:
    Config 1:
    - eth0 - management/cluster-interconnect
    - eth1 - storage
    - eth2/eth3 => bond0 - 8021q trunked, bonded interfaces for DomUs
    Config 2:
    - eth0/1/2/3 => bond0
    Do people have experience or recommendation about the best configuration?
    I'm attracted to the first option (perhaps naively) because CI/storage would benefit
    from dedicated bandwidth and this configuration might also be more secure.
    Regards,
    Robert.

    user1070509 wrote:
    >> Option #4 (802.3ad) looks promising, but I don't know if this can be made to work across separate switches. <<
    It can, if your switches support cross-switch trunking. Essentially, 802.3ad (also known as LACP, or EtherChannel on Cisco devices) requires your switch to be properly configured to allow trunking across the interfaces used for the bond. I know that the high-end Cisco and Juniper switches do support LACP across multiple switches. In the Cisco world, this is called MEC (Multichassis EtherChannel).
    If you're using low-end commodity-grade gear, you'll probably need to use active/passive bonds if you want to span switches. Alternatively, you could use one of the balance algorithms for some bandwidth increase. You'd have to run your own testing to determine which algorithm is best suited for your workload.
    The Linux Foundation's Net:Bonding article has some great information on bonding in general, particularly on the various bonding methods for high availability:
    http://www.linuxfoundation.org/en/Net:Bonding
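    To make that concrete, here is a minimal sketch of an 802.3ad bond on an OEL/RHEL-style host (interface names and addresses are placeholders, and the switch ports must be configured for LACP):
      # /etc/sysconfig/network-scripts/ifcfg-bond0
      DEVICE=bond0
      BONDING_OPTS="mode=802.3ad miimon=100"
      IPADDR=192.168.10.10
      NETMASK=255.255.255.0
      BOOTPROTO=none
      ONBOOT=yes
      # /etc/sysconfig/network-scripts/ifcfg-eth2 (and the same for eth3)
      DEVICE=eth2
      MASTER=bond0
      SLAVE=yes
      BOOTPROTO=none
      ONBOOT=yes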

  • Best practice for server configuration for iTunes U

    Hello all, I'm completely new to iTunes U, never heard of this until now and we have zero documentation on how to set it up. I was given the task to look at best practice for setting up the server for iTunes U, and I need your help.
    *My first question*: Can anyone explains to me how iTunes U works in general? My brief understanding is that you design/setup a welcome page for your school with sub categories like programs/courses, and within that you have things like lecture audio/video files and students can download/view them on iTunes. So where are these files hosted? Is it on your own server or is it on Apple's server? Where & how do you manage the content?
    *2nd question:* We have two Xserve(s) sitting in our server room ready to roll, my question is what is the best method to configure them so it meets our need of "high availability in active/active mode, load balancing, and server scaling". Originally I was thinking about using a 3rd party load balancing device to meet these needs, but I was told there is no budget for it so this is not going to happen. I know there is IP Failover but one server has to sit in standby mode which is a waste. So the most likely scenario is to setup DNS round robin and put both xserves in active/active. My question now is (this maybe related to question 1), say that all the content data like audio/video files are stored by us, (We are going to link a portion of our SAN space to Xserve for storage), if we are going with DNS round robin and put the 2 servers in Active/Active mode, can both servers access a common shared network space? or is this not possible and each server must have its own storage space? And therefore I must use something like RSYNC to make sure contents on both servers are identical? Should I use XSAN or is RSYNC good enough?
    Since I have no experience with iTunes U whatsoever, I hope you understand my questions, any advice and suggestion are most welcome, thanks!

    Raja Kondar wrote:
    >> wht is the Best Practice for having server pool i.e
    1) having a single large serverpool consisting of "n" number of guest vm
    2) having a multiple small serverpool consisting of less of number of guest vm <<
    I prefer option 1, as this gives me the greatest amount of resources available. I don't have to worry about resources in smaller pools. It also means there are more resources across the pool for HA purposes. Not sure if this is Official Best Practice, but it is a simpler configuration.
    Keep in mind that a server pool should probably have up to 20 servers in it: OCFS2 starts to strain after that.

  • Best Practice for Recording to HDD's.

    I want some answers from people who have been using Logic for a while and have experience with this topic. I have a 2010 MBP 13" with one FireWire 800 port. After looking for a cheap quality interface I purchased a Saffire Pro 14, which has a 6-pin FireWire connector and I assume is FireWire 400. The interface also doesn't have an extra FireWire port, and the manual says not to daisy-chain because of latency and CPU spikes. Honestly I haven't noticed any problems with my HDD I/O in Logic; when I skip around it peaks, but never any errors. I formatted an old 160 GB SATA 1 drive and hooked it up over USB, and I noticed that when I skip around it doesn't peak (get red) anymore. I want to know people's experience with daisy-chaining FireWire devices and using USB HDDs.
    When I invest more money, would it be OK to stick with my interface for now, get a Glyph drive, and daisy-chain my interface on at the end? Should I stick with USB storage? Should I stick with the internal 5400 RPM drive? Should I upgrade it to 7200 RPM (my original idea)? In I.T. we call it best practices: what's the best practice for storing your audio/Logic files on a Mac when running Logic?

    Please don't just tell me FireWire is faster than USB... I'm an I.T. Professional (wishing I went to school for recording). I know how computers work; I build them and fix them all day. I want to know from people's personal experience what they run and their suggestions. Thanks so much!
