Best Practice for VOD Remote Storage

Hi All,
We have a couple of servers that host FLV videos for on-demand playback.  The file storage, however, is on a separate server, and all the stored VOD content is accessed via a UNC share, i.e. \\servername\share\foo\bar.  The servers have a fast connection to the file server; however, whenever we try to play videos from the FMS servers, the first time a video is requested it plays back jerky and jumpy.  Most of the time, though, the second time the video is played, it plays fine.  Also, if you try to seek ahead to a spot that no one has played back before, it will be jerky the first time, but fine the second time.
I'm assuming that the first time through, the FMS server caches the content, and then the second time it serves it from the cache.  Since the servers have a very fast connection between them, what can be done to solve this?
Also, would it be better to have an Origin server host the file content locally, and then use Edge servers to connect to and serve the files?  Would this alleviate the problem at all?  Thank you for any help.
Best,
-Costin

We're probably going to have to define 'fast connection' a bit here.  It sounds like either the rate of requests FMS needs to make, or their size, is causing an issue getting sufficient data flow to FMS to avoid stuttering on the desired file.  There are a few ways that you can measure this.  You can go the route of measuring the network I/O from your UNC server location, or, if you're running an FMIS file adaptor, you can plug in and log how quickly your reads return.  At the end of it, though, it comes down to the numbers: how quickly we're able to answer FMS's requests for data, and demonstrating whether or not enough is flowing through here.
Asa
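
To put a rough number on the raw read path before digging into FMS itself, one quick first test is to time a sequential read of one of the FLVs over the UNC share. The sketch below is a generic Java timing loop, not anything FMS-specific; the path and buffer size are placeholders, so treat the result only as an upper bound on what the share can deliver.

    import java.io.FileInputStream;
    import java.io.IOException;

    public class UncReadBenchmark {
        public static void main(String[] args) throws IOException {
            // Placeholder path: point this at a real FLV on the UNC share.
            String path = args.length > 0 ? args[0] : "\\\\servername\\share\\foo\\bar\\sample.flv";
            byte[] buffer = new byte[64 * 1024]; // 64 KB sequential reads
            long bytes = 0;
            long start = System.nanoTime();
            try (FileInputStream in = new FileInputStream(path)) {
                int n;
                while ((n = in.read(buffer)) > 0) {
                    bytes += n;
                }
            }
            double seconds = (System.nanoTime() - start) / 1e9;
            // Report sustained throughput; compare this against the FLV's bitrate.
            System.out.printf("Read %d bytes in %.2f s = %.2f MB/s%n",
                    bytes, seconds, bytes / seconds / (1024 * 1024));
        }
    }

If the sustained figure is comfortably above the FLV's bitrate but the first play is still jumpy, the bottleneck is more likely the pattern of many small, seek-driven reads over the share than raw bandwidth, which would also be consistent with the second play being smooth once the file is already cached locally on the FMS box.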

Similar Messages

  • Best practice for using remote control under limited rights?

Hi. We are getting ready to take admin rights away from our users and make them standard users. We plan to utilize Zen for most of our in-scope applications so that we can allow users to install supported software. There is usually no problem in that case because Zen can elevate to System access during the install. However, we know that there are applications out there that a user may want to install that are not packaged in Zen. Also, in the event that a system setting needs to be changed, we will have to have a method for supporting this. In either case, the user will call our help desk. Unfortunately, the user will not have enough rights to do the install or system change even if the help desk associate remote controls the PC. What is the best practice to handle this situation in a Netware/Zenworks environment where users only have limited access?
    I was thinking of three possibilities:
1.) The obvious one is to send a technician over to log in using local admin credentials to install the software or perform the change. (Drawback: not very efficient, because a desktop tech would have to get over to the user's PC to perform the work.)
2.) Have the help desk engineer log out of the machine through remote control, then log back in as local admin to install the software or perform the change. (Drawback: not very convenient, and time consuming.)
3.) Have the help desk engineer use the "run as" command, or even create a Zen application object that could be executed to provide temporary rights for installing software or making system changes. Aaron Margosis of Microsoft writes about this quite a bit in his "Non-Admin" WebLog (see its Table of Contents post). (Drawback: some software or settings will not work properly using this technique.)
    The last one that I didn't list was creating a new application object. I did not factor this one in because this isn't always applicable to system changes and we really don't want to be making app objects for every out of scope app that exists in the user community. We typically only make them for widely used and supported apps.
    Your feedback is appreciated.
    Thanks

    Originally Posted by spond
    Joshbilsky,
    how about
    4) use the remote execute option to remotely launch an app as admin?
    Shaun Pond
    That's probably an option that we will make available. I wasn't sure how some things will work under the SYSTEM context vs local admin.

  • Best practice for presenting different storage vendors arrays..?

    Hi, can't seem to find a specific answer in writing on this.
    Typical FlexPod already deployed, with different vsans on different n5ks.
    Uplinks from UCS / NetApp are FCoE.
    So I now want to present some normal FC (not FCoE, nexsan) storage to the same blades on the UCS system via the n5ks.
My main questions are: can I use the same VSANs/vHBAs as for the NetApp storage, or should I create different VSANs and vHBAs for different storage vendors?
If multiple VSANs are recommended, is trunking multiple VSANs over FCoE from UCS to FC fine? I'm sure it is, but I just want some experience on it.
    Thanks!

Thanks guys for the input, interesting to see there is a difference of opinion on this topic. It makes me feel happier that I'm not being too stupid and that no one has posted back a link to a best-practice guide on the subject that I've blatantly missed! (Still time for that, no doubt!)
Like I've said, I haven't found anything specific in writing on this, but keeping vendors on separate VSANs/HBAs feels like the best way to do it anyway, with only a little extra config in the grand scheme of things.

  • Best Practices for many Remote Objects?

    Large Object Model doing JDBC over RMI
Please read the following and provide suggestions for a recommended approach, or literature that covers this area well.
    N-Tiered Architecture
    JSP/Servlet (MVC) - Database Access Layer - Database
    Applets - JSp/Servlet Engine - Database Access Layer - Database
    Application Layer - Application Layer - Database Access Layer - Database
I have an object model developed using Torque (a JDBC object-relational modeling framework) for over 100 tables (to be over 160) that I am commencing to enable over RMI. I have got several remote methods up and running. For some of the simple methods, starting up has been easy. Going forward I foresee issues.
    Each table has a wrapper or data object and a peer object that have setters/getters and special methods as desired. The majority of these classes are extended from Base Objects that have basic common functionality for retrieving, creating, and manipulating with a database using SQL.
I have started building a Remote Interface and an implementation class that invoke the necessary methods and classes within the Object Model to successfully pull from or update the database. Additionally, the methods will need to return objects that represent non-primitive serializable data objects and collections of objects.
Going forward, client applications, servlets, and JSPs will be using the database in more complex and comprehensive ways over RMI. Here are a couple of things I am concerned about.
1) When to use java.rmi.server.codebase for class loading? In my implementation several of the remote methods will return objects (e.g. Party, Country, CountryList, AccountList). These objects are themselves composed of other objects that are not part of the client JVM. For all remote methods that return non-primitive objects, must you include the classes in the codebase for the client to operate upon them? Couldn't this become pointless, as you would end up with abstract and extended classes all residing within the codebase? In practice, do people generally build very thin proxy objects for the peer/data objects that hold just the basic table elements and sets?
2) Server versioning/identity - Going forward, more server classes will be enabled via RMI. Every time one wants to add methods to an available interface, must you update the interface, create new implementation classes, and redistribute the Remote Interface and stubs/skeletons to client apps again? Is there some sort of list lookup that a client can do to see which operations are available remotely at present (not just at initialization), since over time more or fewer methods might be available?
    Any help is greatly appreciated.

    More on Why other approaches would be better?
I have implemented some proxy objects for the remote Data Objects produced by Torque. To ease the pain, I have also constructed a proxy builder that takes the table schema and builds a proxy object, an interface for the proxy, and methods to copy between the Torque Data Object (which only lives on the server) and the proxy object (accessible by client and/or server).
The generated methods are usable in the object implementing the Remote Interface but are themselves not remotable. Only the server would use these methods. Clients can only receive primitives, proxy objects, or collections of proxy objects.
This seems to be fairly light currently. I had to jump through hoops to use Torque and enable remote apps to use the proxy objects. What would be the scaling issues that will come up? Why would EJBs, with containers and all the concerns such as CMP vs. BMP, be a better approach?
Methods can be updated to do several operations via Torque and return appropriately (transactions). In this implementation the client (servlet, mini app, or app) needs the remote stub and the proxy objects (100 or so) to stand in for the Torque-generated Data Objects: a much smaller and lighter set of classes, based on common JDK classes, instead of the Torque classes (and the necessary abstracts/objects/interfaces/exceptions for Torque).
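
    For what it's worth, the thin-proxy approach from question 1 usually ends up looking something like the sketch below: the remote interface traffics only in small serializable value objects, and the Torque peers/data objects never leave the server. All names here (PartyProxy, PartyService) are invented for illustration, not taken from the actual model, and both types are shown in one file for brevity.

        import java.io.Serializable;
        import java.rmi.Remote;
        import java.rmi.RemoteException;
        import java.util.List;

        /** Thin, serializable stand-in for the server-side Torque data object. */
        public class PartyProxy implements Serializable {
            private static final long serialVersionUID = 1L;
            private int partyId;
            private String name;
            private String countryCode;

            public int getPartyId() { return partyId; }
            public void setPartyId(int partyId) { this.partyId = partyId; }
            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
            public String getCountryCode() { return countryCode; }
            public void setCountryCode(String countryCode) { this.countryCode = countryCode; }
        }

        /** Remote interface returns only primitives, proxies, or collections of proxies.
            (In its own source file this would be public.) */
        interface PartyService extends Remote {
            PartyProxy findParty(int partyId) throws RemoteException;
            List<PartyProxy> findPartiesByCountry(String countryCode) throws RemoteException;
        }

    The server-side implementation copies fields from the Torque data object into the proxy before returning, so clients never need the Torque classes (or codebase entries for them). On the versioning question, a client can at least discover what is currently bound in the registry at runtime with java.rmi.Naming.list("//host:1099"), though it still needs the matching interface class to actually invoke anything.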

  • Best Practice for configuring ZS3 storage

    Hi Team
Can someone help me with the best way to configure a data profile for a ZS3-2 storage array that is mainly going to be used for an Oracle database and EBU application.
If we are planning to install the LDoms OS on a storage LUN, then what data profile would be the best one?
    Regards
    AB

Without knowing more, I would do the following, being very conservative:
Create two pools, with pool0 on controller0 and pool1 on controller1. This is to take advantage of both controllers' DRAM, for better performance. Regardless of RAIDZ or RAID 1+0, I would do this.
I am assuming both trays are the same HDDs (900 GB?).
Make the first pool RAID 1+0, with 2 write SSDs and one read SSD. I would use this pool for redo logs and database files, mounting via OISP or dNFS.
Make the second pool RAIDZ, with 2 write SSDs and one read SSD. I would place my RMAN mount here, and any binary mounts and archive logs. Maybe DB files too, if using partitioning. HCC is a great technology to use if your application will benefit from it.
Now for the reality check!
The 9-times-out-of-10 method...
Realistically... just use RAIDZ on both pools. In reality RAID 1+0 generates more IOPS in the array and leads to more DB I/O wait time! It's true!
If you are headed to ECO I'll sit down and explain. Short version: RAIDZ does fewer writes, which means better database performance.
    Erik

  • Best practices for accessing remote management

So, I've been looking into consolidating and moving our servers and such to a colocation datacenter. A problem for me that arises from moving is: what do we do about our remote access? In a private office environment, I haven't ever opened up VMware vCenter to the open internet, nor have I ever opened up DRAC/iLO past our firewall to the net. I've always just had all that management stuff hanging out on its own subnet/VLAN, and I haven't ever bothered with giving remote access to anyone, really. (Well, I did once set up a Windows box to allow me to RDP in and opened up the firewall for that RDP so I could then access that management VLAN from that PC.) Moving to a colocation facility makes me wonder, what does everyone else do for this? Would one have a VPN configured on a router in their colo space and remote in that way, and if the...

  • Best Practices for Setting up a Windows 2012 R2 STD Domain Controller in a Remote Site

    So I'm looking for an article or writeup similar to the "Adding Domain Controllers in Remote Sites" TechNet article but for Windows Server 2012 STD R2.  Here is my scenario:
1.  I want to set up the domain controller at Site A, where the primary domain controller is located.  The primary domain controller is Windows Server 2008 R2.
2.  Once the DC is set up, I plan on leaving it on our network for a few days before shipping it to remote Site B for installation.
    Other key items:
    1.  The remote Site B will have a different IP range than Site A but will be connected to Site A via a single VPN tunnel.  All the DCs that replicate with each other are on the same domain. 
    2.  The 2012 DC that I setup for Site B (same domain in same forest) will be a DHCP, DNS, and WSUS server all replicating to the primary DC at Site A
    Questions:
1.  What items can I set up while it's at Site A without affecting or conflicting with the existing network and domain controller?  Can I set up a scope once the DHCP role is added?
2.  All of our DCs replicate through Sites and Services; do I have to manually add this to our primary DC for the new DC going to remote Site B, or does this happen automatically when I promote the DC?
All in all, I'm just looking for a list of Best Practices for 2012 or a step-by-step guide.  Any help would be appreciated.

    Hi,
    Thanks for your posting.
    When you install AD DS in the hub or staging site, disconnect the installed domain controller, and then ship the computer to the remote site, you are disconnecting a viable domain controller from the replication topology.
For more detailed information, please refer to:
    Best Practices for Adding Domain Controllers in Remote Sites
    http://technet.microsoft.com/en-us/library/cc794962(v=ws.10).aspx
    Regards.
    Vivian Wang

  • Best Practice for SRST deployment at a remote site

What is the best practice for an SRST deployment at a remote site? Should a separate router, such as a 3800 Series, be deployed for telephony in addition to another router for data? Is there a need for two different devices?

    Hi Brian,
This is typically done all on one ISR router at the remote site :) There are two flavors of SRST. Here is the feature comparison:
    SRST Fallback
This feature enables routers to provide call-handling support for Cisco Unified IP phones if they lose connection to remote primary, secondary, or tertiary Cisco Unified Communications Manager installations or if the WAN connection is down. When Cisco Unified SRST functionality is provided by Cisco Unified CME, provisioning of phones is automatic and most Cisco Unified CME features are available to the phones during periods of fallback, including hunt-groups, call park and access to Cisco Unity voice messaging services using SCCP protocol. The benefit is that Cisco Unified Communications Manager users will gain access to more features during fallback without any additional licensing costs.
Comparison of Cisco Unified SRST and Cisco Unified CME in SRST Fallback Mode
Cisco Unified CME in SRST Fallback Mode:
    • First supported with Cisco Unified CME 4.0: Cisco IOS Software 12.4(9)T
    • IP phones re-home to Cisco Unified CME if Cisco Unified Communications Manager fails. CME in SRST allows IP phones to access some advanced Cisco Unified CME telephony features not supported in traditional SRST
    • Support for up to 240 phones
    • No support for Cisco VG248 48-Port Analog Phone Gateway registration during fallback
    • Lack of support for alias command
    • Support for Cisco Unity® unified messaging at remote sites (Distributed Exchange or Domino)
    • Support for features such as Pickup Groups, Hunt Groups, Basic Automatic Call Distributor (BACD), Call Park, softkey templates, and paging
    • Support for Cisco IP Communicator 2.0 with Cisco Unified Video Advantage 2.0 on same computer
    • No support for secure voice in SRST mode
    • More complex configuration required
    • Support for digital signal processor (DSP)-based hardware conferencing
    • E-911 support with per-phone emergency response location (ERL) assignment for IP phones (Cisco Unified CME 4.1 only)
Cisco Unified SRST:
    • Supported since Cisco Unified SRST 2.0 with Cisco IOS Software 12.2(8)T5
    • IP phones re-home to SRST router if Cisco Unified Communications Manager fails. SRST allows IP phones to have basic telephony features
    • Support for up to 720 phones
    • Support for Cisco VG248 registration during fallback
    • Support for alias command
    • Lack of support for features such as Pickup Groups, Hunt Groups, Call Park, and BACD
    • No support for Cisco IP Communicator 2.0 with Cisco Unified Video Advantage 2.0
    • Support for secure voice during SRST fallback
    • Simple, one-time configuration for SRST fallback service
    • No per-phone emergency response location (ERL) assignment for SCCP Phones (E911 is a new feature supported in SRST 4.1)
    http://www.cisco.com/en/US/prod/collateral/voicesw/ps6788/vcallcon/ps2169/prod_qas0900aecd8028d113.html
These SRST hardware-based restrictions are very similar to the number of supported phones with CME. Here is the actual breakdown:
Cisco 880 SRST Series Integrated Services Router: up to 4 phones
Cisco 1861 Integrated Services Router: up to 8 phones
Cisco 2801 Integrated Services Router: up to 25 phones
Cisco 2811 Integrated Services Router: up to 35 phones
Cisco 2821 Integrated Services Router: up to 50 phones
Cisco 2851 Integrated Services Router: up to 100 phones
Cisco 3825 Integrated Services Router: up to 350 phones
Cisco Catalyst® 6500 Series Communications Media Module (CMM): up to 480 phones
Cisco 3845 Integrated Services Router: up to 730 phones
*The number of phones supported by SRST has been changed to multiples of 5 starting with Cisco IOS Software Release 12.4(15)T3.
From this excellent doc:
    http://www.cisco.com/en/US/prod/collateral/voicesw/ps6788/vcallcon/ps2169/data_sheet_c78-485221.html
    Hope this helps!
    Rob

  • Seeking advice on Best Practices for XML Storage Options - XMLTYPE

    Sparc64
    11.2.0.2
    During OOW12 I tried to attend every xml session I could. There was one where a Mr. Drake was explaining something about not using clob
    as an attribute to storing the xml and that "it will break your application."
We're moving forward with storing the industry standard invoice in an xmltype column, but I'm now concerned that our table definition is not what was advised:
    --i've dummied this down to protect company assets
      CREATE TABLE "INVOICE_DOC"
       (     "INVOICE_ID" NUMBER NOT NULL ENABLE,
         "DOC" "SYS"."XMLTYPE"  NOT NULL ENABLE,
         "VERSION" VARCHAR2(256) NOT NULL ENABLE,
         "STATUS" VARCHAR2(256),
         "STATE" VARCHAR2(256),
         "USER_ID" VARCHAR2(256),
         "APP_ID" VARCHAR2(256),
         "INSERT_TS" TIMESTAMP (6) WITH LOCAL TIME ZONE,
         "UPDATE_TS" TIMESTAMP (6) WITH LOCAL TIME ZONE,
          CONSTRAINT "FK_####_DOC_INV_ID" FOREIGN KEY ("INVOICE_ID")
                 REFERENCES "INVOICE_LO" ("INVOICE_ID") ENABLE
       ) SEGMENT CREATION IMMEDIATE
    INITRANS 20  
    TABLESPACE "####_####_DATA"
       XMLTYPE COLUMN "DOC" STORE AS BASICFILE CLOB  (
      TABLESPACE "####_####_DATA" ENABLE STORAGE IN ROW CHUNK 16384 RETENTION
      NOCACHE LOGGING
      STORAGE(INITIAL 81920 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT))
    XMLSCHEMA "http://mycompanynamehere.com/xdb/Invoice###.xsd" ELEMENT "Invoice" ID #####
    What is a best practice for this type of table?  Yes, we intend on registering the schema against an xsd.
    Any help/advice would be appreciated.
-abe

    Hi,
I suggest you read this paper: Oracle XML DB: Choosing the Best XMLType Storage Option for Your Use Case
    It is available on the XML DB home page along with other documents you may be interested in.
    To sum up, the storage method you need depends on the requirement, i.e. how XML data is accessed.
There was one where a Mr. Drake was explaining something about not using clob as an attribute to storing the xml and that "it will break your application."
I think the message Mark Drake wanted to convey is that CLOB storage is now deprecated and shouldn't be used anymore (though still supported for backward compatibility).
The default XMLType storage starting with version 11.2.0.2 is now Binary XML, a post-parse binary format that optimizes both storage size and data access (via XQuery), so you should at least use it instead of the BASICFILE CLOB.
    Schema-based Binary XML is also available, it adds another layer of "awareness" for Oracle to manage instance documents.
    To use this feature, the XML schema must be registered with "options => dbms_xmlschema.REGISTER_BINARYXML".
    The other common approach for schema-based XML is Object-Relational storage.
BTW... you may want to post here next time, in the dedicated forum: {forum:id=34}
Mark Drake is one of the regular users there, along with Marco Gralike, whom you've probably also seen at OOW.
    Edited by: odie_63 on 18 oct. 2012 21:55

  • What is best practice for remotely managing bank of switches over POTS

I need to be able to have a back door into several Catalyst switches and an ASA.
What is the best practice for accessing them remotely?

Just place a modem into any console port. Ideally you use a terminal server, but one is not always really needed.

  • Best practice for remote topic subscription with HA

    I'd like to create an orchestrator EJB, in cluster A, that must persist some data to a database of record and then publish a business event to a topic. I have two durable subscribers, MDBs on clusters B & C, that need to receive the events and perform some persistence on their side.
              I'd like HA so that a failure in any managed server would not interrupt the system. I can live with at least once delivery, but at the same time I'd like to minimize the amount of redundant message processing.
              The documentation gets a little convoluted when dealing with clustering. What is the best practice for accomplishing this task? Has anyone successfully implemented a similar solution?
              I'm using Weblogic 8.1 SP5, but I wouldn't mind hearing solutions for later versions as well.

A managed server failure makes that server's JMS servers unavailable, which, in turn, makes the JMS server's messages unavailable until either (A) the JMS server is migrated or (B) the managed server is restarted.
              For more discussion, see my post today on topic "distributed destinations failover - can't access messages from other node". Also, you might be interested in the circa 8.1 migration white-paper on dev2dev: http://dev2dev.bea.com/pub/a/2004/05/ClusteredJMS.html
              Tom

  • Best practices for dealing with Exceptions on storage members

We recently encountered an issue where one of our DistributedCaches was terminating itself and restarting due to a RuntimeException being thrown from our code (see below). As usual, the issue was in our own code, and we have updated it to not throw a RuntimeException under any circumstances.
    I would like to know if there are any best practices for Exception handling, other than catching Exceptions and logging them. Should we always trap Exceptions and ensure that they do not bubble back up to code that is running from the Coherence jar? Is there a way to configure Coherence so that our DistributedCaches do not terminate even when custom Filters and such throw RuntimeExceptions?
    thanks, Aidan
    Exception below:
    2010-02-09 12:40:39.222/88477.977 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=48): An exception (java.lang.RuntimeException) occurred reading Message AggregateFilterRequest Type=31 for Service=DistributedCache{Name=StyleCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=1021, BackupCount=1, AssignedPartitions=201, BackupPartitions=204}
    2010-02-09 12:40:39.222/88477.977 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=48): Terminating DistributedCache due to unhandled exception: java.lang.RuntimeException

    Bob - Here is the full stacktrace:
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47): An exception (java.lang.RuntimeException) occurred reading Message AggregateFilterRequest Type=31 for Service=DistributedCache{Name=StyleCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=1021, BackupCount=1, AssignedPartitions=205, BackupPartitions=204}
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47): Terminating DistributedCache due to unhandled exception: java.lang.RuntimeException
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47):
    java.lang.RuntimeException: java.lang.ClassNotFoundException: com.edmunds.vehicle.Style$PublicationState
         at com.edmunds.common.coherence.EdmundsEqualsFilter.readExternal(EdmundsEqualsFilter.java:84)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readAsObjectArray(PofBufferReader.java:3328)
         at com.tangosol.io.pof.PofBufferReader.readObjectArray(PofBufferReader.java:2168)
         at com.tangosol.util.filter.ArrayFilter.readExternal(ArrayFilter.java:243)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readAsObjectArray(PofBufferReader.java:3328)
         at com.tangosol.io.pof.PofBufferReader.readObjectArray(PofBufferReader.java:2168)
         at com.tangosol.util.filter.ArrayFilter.readExternal(ArrayFilter.java:243)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2599)
         at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:348)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:4)
         at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
         at com.tangosol.coherence.component.net.message.requestMessage.distributedCacheRequest.partialRequest.FilterRequest.read(FilterRequest.CDB:8)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$AggregateFilterRequest.read(DistributedCache.CDB:4)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:117)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
         at java.lang.Thread.run(Thread.java:619)
    Caused by: java.lang.ClassNotFoundException: com.edmunds.vehicle.Style$PublicationState
         at java.lang.Class.forName0(Native Method)
         at java.lang.Class.forName(Class.java:169)
         at com.edmunds.common.coherence.EdmundsEqualsFilter.readExternal(EdmundsEqualsFilter.java:82)
         ... 25 more
2010-02-09 13:04:23.122/90182.743 Oracle Coherence GE 3.4.2/411 <Info> (thread=Main Thread, member=47): Restarting Service: StyleCache
Our code was doing something simple like:
    catch(Exception e){
        throw new RuntimeException(e);
}
Would using the ensureRuntimeException call do anything for us here?
    Edited by: aidanol on Feb 12, 2010 11:41 AM
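
    As a rough sketch of the "trap everything in our own code" idea, here is what a defensive custom filter might look like. The class and property names are invented (this is not a drop-in replacement for EdmundsEqualsFilter); Filter, PortableObject, PofReader/PofWriter, and CacheFactory.log are the standard Coherence APIs being assumed.

        import com.tangosol.io.pof.PofReader;
        import com.tangosol.io.pof.PofWriter;
        import com.tangosol.io.pof.PortableObject;
        import com.tangosol.net.CacheFactory;
        import com.tangosol.util.Filter;
        import java.io.IOException;

        public class SafeEqualsFilter implements Filter, PortableObject {
            // Invented fields; a real filter would carry whatever state it needs.
            private String propertyName;
            private Object expectedValue;

            public SafeEqualsFilter() {
                // no-arg constructor required for POF deserialization
            }

            public SafeEqualsFilter(String propertyName, Object expectedValue) {
                this.propertyName = propertyName;
                this.expectedValue = expectedValue;
            }

            public boolean evaluate(Object target) {
                try {
                    Object actual = extractProperty(target, propertyName);
                    return expectedValue == null ? actual == null : expectedValue.equals(actual);
                } catch (Exception e) {
                    // Log and treat the entry as a non-match instead of letting the
                    // exception bubble up and terminate the cache service.
                    CacheFactory.log("SafeEqualsFilter failed on " + target + ": " + e,
                            CacheFactory.LOG_ERR);
                    return false;
                }
            }

            private Object extractProperty(Object target, String name) throws Exception {
                // Placeholder: naive reflective getter lookup, e.g. "publicationState" -> getPublicationState().
                String getter = "get" + Character.toUpperCase(name.charAt(0)) + name.substring(1);
                return target.getClass().getMethod(getter).invoke(target);
            }

            public void readExternal(PofReader in) throws IOException {
                propertyName = in.readString(0);
                expectedValue = in.readObject(1);
            }

            public void writeExternal(PofWriter out) throws IOException {
                out.writeString(0, propertyName);
                out.writeObject(1, expectedValue);
            }
        }

    As I understand it, Base.ensureRuntimeException(e) only guarantees you rethrow a RuntimeException with the original cause attached; by itself it would not stop the service from terminating. Note also that the failure in the stack trace above happened while deserializing the filter (ClassNotFoundException for Style$PublicationState), which no try/catch inside evaluate() can reach; that class has to be on the storage members' classpath.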

  • Best practices for setting up users on a small office network?

    Hello,
I am setting up a small office and am wondering what the best practices/steps are to set up and manage the admin, user logins, and sharing privileges for the below setup:
    Users: 5 users on new iMacs (x3) and upgraded G4s (x2)
    Video Editing Suite: Want to connect a new iMac and a Mac Pro, on an open login (multiple users)
All machines are to be able to connect to the network, peripherals, and external hard drive. Also, I would like to set up drop boxes to easily share files between the computers (I was thinking of using the external hard drive for this).
    Thank you,


  • Best practice for integrating a 3 point metro-e in to our network.

    Hello,
We have just started to integrate a new 3-point metro-e WAN connection to our main school office. We are moving from point-to-point T-1s to 10 MB metro-e. At the main office we have a 50 MB circuit going out to 3 other sites at 10 MB each. For two of the remote sites we have purchased new routers, which should be straightforward configurations. We are having an issue connecting the main office with the 3rd site.
    At the main office we have a Catalyst 4006 and at the 3rd site we are trying to connect to a catalyst 4503.
I have attached configurations from both the main office and the 3rd remote site, as well as a basic diagram of how everything physically connects. These configurations are not working; we feel that it is a gateway-type problem, but have reached no great solutions. We have tried posting to a different forum, but so far have been unable to find a solution that helps.
The problem I am having is on the remote side. I can reach the remote catalyst from the main site, but I cannot reach the devices on the other side of the remote catalyst; however, the remote catalyst can see devices on its side as well as devices at the main site.
We have also tried trunking the ports on both sides and using encapsulation dot1q, but when we do this the 3rd site is able to pick up a DHCP address from the main office, and we do not feel that is correct. But it works; is this not creating a large broadcast domain?
    If you have any questions or need further configuration data please let me know.
    The previous connection was a T1 connection through a 2620 but this is not compatible with metro-e so we are trying to connect directly through the catalysts.
    The other two connection points will be connecting through cisco routers that are compatible with metro-e so i don't think I'll have problems with those sites.
Any and all help is greatly welcome, as this is our 1st metro-e project and we want to make sure we are following best practices for this type of integration.
    Thank you in advance for your help.
    Jeff

Jeff, from your config it seems your main site and remote site are not adjacent in EIGRP.
Try adding a network statement for the 171.0 link and forming a neighborship between the main and remote sites so that the L3 routing works.
    Upon this you should be able to reach the remote site hosts.
    HTH-Cheers,
    Swaroop

  • Networking "best practice" for setting up a farm

    Hi all.
We would like to set up an OracleVM farm, and I have a question about "best practice" for
    configuring the network. Some background:
    - The hardware I have is comprised of machines with 4 gig-eth NICs each.
    - The storage will be coming primarily from a backend NAS appliance (Netapp, FWIW).
    - We have already allocated a separate VLAN for management.
    - We would like to have HA capable VMs using OCFS2 (on top of NFS.)
    I'm trying to decide between 2 possible configurations. The first would keep physical separation
    between the mgt/storage networks and the DomU networks. The second would just trunk
    everything together across all 4 NICs, something like:
    Config 1:
    - eth0 - management/cluster-interconnect
    - eth1 - storage
    - eth2/eth3 => bond0 - 8021q trunked, bonded interfaces for DomUs
    Config 2:
    - eth0/1/2/3 => bond0
    Do people have experience or recommendation about the best configuration?
    I'm attracted to the first option (perhaps naively) because CI/storage would benefit
    from dedicated bandwidth and this configuration might also be more secure.
    Regards,
    Robert.

    user1070509 wrote:
Option #4 (802.3ad) looks promising, but I don't know if this can be made to work across separate switches.
It can, if your switches support cross-switch trunking. Essentially, 802.3ad (also known as LACP or EtherChannel on Cisco devices) requires your switch to be properly configured to allow trunking across the interfaces used for the bond. I know that the high-end Cisco and Juniper switches do support LACP across multiple switches. In the Cisco world, this is called MEC (Multichassis EtherChannel).
If you're using low-end commodity-grade gear, you'll probably need to use active/passive bonds if you want to span switches. Alternatively, you could use one of the balance algorithms for some bandwidth increase. You'd have to run your own testing to determine which algorithm is best suited for your workload.
    The Linux Foundation's Net:Bonding article has some great information on bonding in general, particularly on the various bonding methods for high availability:
    http://www.linuxfoundation.org/en/Net:Bonding
