Best practices for dealing with Exceptions on storage members

We recently encountered an issue where one of our DistributedCaches was terminating itself and restarting due to a RuntimeException being thrown from our code (see below). As usual, the issue was in our own code, and we have updated it to not throw a RuntimeException under any circumstances.
I would like to know if there are any best practices for Exception handling, other than catching Exceptions and logging them. Should we always trap Exceptions and ensure that they do not bubble back up to code that is running from the Coherence jar? Is there a way to configure Coherence so that our DistributedCaches do not terminate even when custom Filters and such throw RuntimeExceptions?
thanks, Aidan
Exception below:
2010-02-09 12:40:39.222/88477.977 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=48): An exception (java.lang.RuntimeException) occurred reading Message AggregateFilterRequest Type=31 for Service=DistributedCache{Name=StyleCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=1021, BackupCount=1, AssignedPartitions=201, BackupPartitions=204}
2010-02-09 12:40:39.222/88477.977 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=48): Terminating DistributedCache due to unhandled exception: java.lang.RuntimeException

Bob - Here is the full stacktrace:
2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47): An exception (java.lang.RuntimeException) occurred reading Message AggregateFilterRequest Type=31 for Service=DistributedCache{Name=StyleCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=1021, BackupCount=1, AssignedPartitions=205, BackupPartitions=204}
2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47): Terminating DistributedCache due to unhandled exception: java.lang.RuntimeException
2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47):
java.lang.RuntimeException: java.lang.ClassNotFoundException: com.edmunds.vehicle.Style$PublicationState
     at com.edmunds.common.coherence.EdmundsEqualsFilter.readExternal(EdmundsEqualsFilter.java:84)
     at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
     at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
     at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
     at com.tangosol.io.pof.PofBufferReader.readAsObjectArray(PofBufferReader.java:3328)
     at com.tangosol.io.pof.PofBufferReader.readObjectArray(PofBufferReader.java:2168)
     at com.tangosol.util.filter.ArrayFilter.readExternal(ArrayFilter.java:243)
     at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
     at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
     at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
     at com.tangosol.io.pof.PofBufferReader.readAsObjectArray(PofBufferReader.java:3328)
     at com.tangosol.io.pof.PofBufferReader.readObjectArray(PofBufferReader.java:2168)
     at com.tangosol.util.filter.ArrayFilter.readExternal(ArrayFilter.java:243)
     at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
     at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
     at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
     at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2599)
     at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:348)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:4)
     at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
     at com.tangosol.coherence.component.net.message.requestMessage.distributedCacheRequest.partialRequest.FilterRequest.read(FilterRequest.CDB:8)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$AggregateFilterRequest.read(DistributedCache.CDB:4)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:117)
     at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
     at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
     at java.lang.Thread.run(Thread.java:619)
Caused by: java.lang.ClassNotFoundException: com.edmunds.vehicle.Style$PublicationState
     at java.lang.Class.forName0(Native Method)
     at java.lang.Class.forName(Class.java:169)
     at com.edmunds.common.coherence.EdmundsEqualsFilter.readExternal(EdmundsEqualsFilter.java:82)
     ... 25 more
2010-02-09 13:04:23.122/90182.743 Oracle Coherence GE 3.4.2/411 <Info> (thread=Main Thread, member=47): Restarting Service: StyleCache

Our code was doing something simple like:

    catch (Exception e) {
        throw new RuntimeException(e);
    }

Would using the ensureRuntimeException call do anything for us here?
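For context, here is roughly what the ensureRuntimeException variant would look like (a minimal sketch, assuming com.tangosol.util.Base from the Coherence jar; the class-name lookup is a hypothetical stand-in for what our filter's readExternal actually does):

    import com.tangosol.util.Base;

    public class FilterClassResolver {
        // Hypothetical helper standing in for the lookup inside
        // EdmundsEqualsFilter.readExternal().
        public static Class resolve(String sClassName) {
            try {
                return Class.forName(sClassName);
            } catch (ClassNotFoundException e) {
                // Wraps the checked exception in a RuntimeException
                // (a Coherence WrapperException) rather than newing one up
                // ourselves; the service thread still sees a RuntimeException
                // either way, so this alone presumably would not prevent the
                // termination above.
                throw Base.ensureRuntimeException(e);
            }
        }
    }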
Edited by: aidanol on Feb 12, 2010 11:41 AM

Similar Messages

  • Best practice for dealing with Recordsets

    Hi all,
    I'm wondering what best practice is for dealing with data retrieved via JDBC as Recordsets, without involving third-party products such as Hibernate etc. I've been told NOT to use Recordsets throughout my applications, since they hold resources and are expensive. I'm wondering which collection type is best to convert Recordsets into. The apps I'm building are web-based, using JSPs as the presentation layer, with beans and servlets.
    Many thanks
    Erik

    There is no requirement that DAOs have a direct mapping to database tables. One of the advantages of the DAO pattern is that the business layer isn't directly aware of the persistence layer. If the joined data is used in the business code as if it were an unnormalized table, then you might want to provide a DAO for the joined data. If the joined data provides a subsidiary object within some particular object, you might add the access method to the DAO for the outer object.
    eg:
    In a user permissioning system where:
    1 user has many userRoles
    1 role has many userRoles
    1 role has many rolePermissions
    1 permission has many rolePermissions
    ie. there is a many-to-many relationship between users and roles, and between roles and permissions.
    The administrator needs to be able to add and delete permissions for roles and roles for users, so the CRUD for the rolePermissions table is probably most useful in the RoleDAO, and the CRUD for the userRoles table in the UserDAO. DAOs can also call each other.
    During operation the system needs to be able to get all permissions for a user at login, so the UserDAO should provide a readPermissions method that does a rather complex join across the user, userRole, rolePermission and permission tables, as sketched below.
    Note that if the system I just described were done with LDAP, a hierarchical database or an object database, the userRoles and rolePermissions tables wouldn't even exist; these are RDBMS artifacts, since relational databases don't understand many-to-many relationships. This is a good reason to avoid providing DAOs that give access to those tables.
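    For illustration, that readPermissions join might look something like this in plain JDBC (a minimal sketch; the table and column names are hypothetical):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.HashSet;
        import java.util.Set;
        import javax.sql.DataSource;

        public class UserDAO {
            private final DataSource ds;

            public UserDAO(DataSource ds) {
                this.ds = ds;
            }

            // One query across the four tables collects every permission
            // the user holds through any of their roles.
            public Set<String> readPermissions(long userId) throws SQLException {
                String sql =
                    "SELECT DISTINCT p.name " +
                    "FROM app_user u " +
                    "JOIN user_role ur ON ur.user_id = u.id " +
                    "JOIN role_permission rp ON rp.role_id = ur.role_id " +
                    "JOIN permission p ON p.id = rp.permission_id " +
                    "WHERE u.id = ?";
                Connection con = ds.getConnection();
                try {
                    PreparedStatement ps = con.prepareStatement(sql);
                    ps.setLong(1, userId);
                    ResultSet rs = ps.executeQuery();
                    Set<String> permissions = new HashSet<String>();
                    while (rs.next()) {
                        permissions.add(rs.getString(1));
                    }
                    return permissions;
                } finally {
                    con.close();
                }
            }
        }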

  • What is best practice for dealing with Engineering Spare Parts?

    Hello All,
    I am after some advice regarding the process for handling engineering spare parts in PM. (We run ECC 5)
    Our current process is as follows:
    All materials are set up as HIBE's
    Each material is batch managed
    The Batch field is used for the Bin location
    We are now looking to roll out PM to a site that has in excess of 50,000 spare parts and want to make sure we use best practice for handling them. We are now considering using a basic WM setup to handle the movement of parts.
    Please can you provide me with some feedback on what you feel the best practice is for dealing with these parts?
    We are looking to set up a solution that will allow us to generate pick lists etc. and implement a scanning solution to move parts in and out of stores.
    Regards
    Chris

    Hi,
    I hope all of the 50,000 spare parts are maintained as stock items.
    1. Based on the usage of those spare parts, try to define safety stock & define MRP as "Reorder Point Planning". By this, you can avoid petty cash purchases.
    2. By keeping the spare parts (at least the critical components) in stock, planned maintenance as well as unplanned maintenance will not get delayed.
    3. By doing GI (goods issue) based on a reservation, the quantity can be tracked against the order & equipment.
    As this question is MM & WM related, those forums can give better clarity on this.
    Regards,
    Maheswaran.

  • Best practice for dealing with Recordsets, JDBC and JSP?

    I've spent the last three years developing web apps using JSP, Struts and Kodo JDO for persistence. All of the content for the apps was created as Java objects using model classes and saved to an Oracle db. Thus, data retrieved from the db came back as instances of the model classes and was then put into Struts form beans, etc.
    I changed jobs last month and am now having to use Servlets with JDBC to retrieve records from db tables and return them as Recordsets. Oh, and I can't use Struts in my JSPs either. I'm beginning to think that I had it easy at my previous job, but maybe that's just because I was used to it.
    So here are my problems/questions:
    I have two tables with a one-to-many relationship that I need to retrieve data from, show in a jsp and eventually be able to update.
    So here's what I am doing:
    a) In a servlet, I use a SQL statement to join the tables and retrieve the results into a Recordset.
    b) I created a class with a bunch of String attributes to copy the Recordset data into, one Recordset row per instance of the bean, and then close the Recordset.
    c) I then add the beans to an ArrayList and save the ArrayList into the session.
    d) Then, in the JSP, I retrieve the ArrayList from the session and iterate over each bean instance, printing the data out to the jsp. There are some logic statements to determine when not to print redundant data caused by the one-to-many join.
    e) I have not written the code to update the data yet but was planning on having separate jsps for updating the (one) table and the (many) table.
    Would most of you do something similar? Would you use one SQL statement to retrieve all of the data for display and use logic to avoid printing the redundant part of the data? Or would you have used separate SQL queries, one for each table? Would you have saved the results into something other than an instance of a bean class that represents one record in the Recordset? Would you have had a bean class with attributes other than Strings - like a collection attribute to hold the results from the "many" table? The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before.
    Your help/opinion will be greatly appreciated!

    "Would you use one SQL statement to retrieve all of the data for display?" Yes.
    "...and use logic to avoid printing the redundant part of the data?" No.
    I believe in minimising the number of queries. If it is a simple one-to-many join on a db table, then one query is better than one + n queries.
    However, I prefer to store the objects in a bean class with attributes other than Strings - ie one object, with a collection attribute to hold the related "many" records.
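    As a rough sketch of that (hypothetical Order/OrderLine names; one joined query, plain JDBC):

        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.ArrayList;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;

        public class OrderMapper {
            public static class Order {
                public long id;
                public String customer;
                public final List<OrderLine> lines = new ArrayList<OrderLine>();
            }

            public static class OrderLine {
                public long id;
                public String item;
            }

            // Expects a ResultSet from a query like:
            //   SELECT o.id, o.customer, l.id AS line_id, l.item
            //   FROM orders o LEFT JOIN order_lines l ON l.order_id = o.id
            //   ORDER BY o.id
            // The repeated parent columns are folded into one Order per id,
            // so no "don't print the redundant data" logic is needed in the JSP.
            public static List<Order> map(ResultSet rs) throws SQLException {
                Map<Long, Order> byId = new LinkedHashMap<Long, Order>();
                while (rs.next()) {
                    long orderId = rs.getLong("id");
                    Order order = byId.get(orderId);
                    if (order == null) {
                        order = new Order();
                        order.id = orderId;
                        order.customer = rs.getString("customer");
                        byId.put(orderId, order);
                    }
                    long lineId = rs.getLong("line_id");
                    if (!rs.wasNull()) { // LEFT JOIN row with no child
                        OrderLine line = new OrderLine();
                        line.id = lineId;
                        line.item = rs.getString("item");
                        order.lines.add(line);
                    }
                }
                return new ArrayList<Order>(byId.values());
            }
        }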
    Does the fact you are not using Struts mean that you have to use scriptlet code? (shudder)
    Or are you using JSTL, or other custom tags?
    How about tools like Ant? Junit testing?
    "The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before." Anything different takes adjusting to. Sounds like you know what you're doing for the most part. I agree: in terms of best practices, what you have described so far sounds like a step backwards from what you were previously doing.
    However, I wouldn't go complaining about it too loudly, too quickly. If you're new on the block, there's nothing like making a pain of yourself and complaining how backwards the work they have done is to put your new workmates' backs up.
    Look on it as a challenge. Maybe discuss it quietly with a team leader, to see if they understand how much easier/better/less error prone such approaches can be?
    Struts, cumbersome as it can be, definitely has the advantage of pushing you to follow good MVC practice.
    Good luck,
    evnafets

  • Question: Best practices for dealing with multiple AM configurations

    Hello all,
    I have a project using ADF Business Components and ADF Faces. I would like to set up multiple configurations for the Application Modules to support the following scenarios:
    1). Local testing and debugging - using a connection defined in JDeveloper and AM Pooling turned off.
    2). Testing and debugging on an application server - using a JDBC Data Source and AM Pooling turned off
    3). Production deployment - using a JDBC Data Source and AM Pooling turned on.
    It is no problem to create multiple AM configurations to reflect this scenario. In order for the web part of the application to use the correct configurations, the DataBindings.cpx file must specify the correct ones. I was thinking to have 3 different DataBindings.cpx files and to change the CpxFileName context-param in the web.xml file as needed.
    My questions:
    1). Does this make sense as an approach? It should be better than having to change a single AM configuration every time I deploy or test. Is there an easy way to keep multiple DataBindings.cpx files in sync, given that we may add new pages from time to time? Alternatively, can we do some type of "include" processing to include just the dataControlUsages section into a common DataBindings.cpx file?
    2). How would you manage the build-and-deploy process? For the most part, in JDev we would be using configuration #1. The only time to switch to configuration #2 or #3 would be to build an EAR file for deployment. Is this something that it would make sense to accomplish with ANT? I'm not an ANT expert at all. The ANT script would have "build-test-ear" and "build-prod-ear" targets which would swap in the correct web.xml file, recompile everything, build the EAR, then put the development web.xml file back. I'm relatively sure this is possible... comments?
    3). Is there some other recommended approach?
    I appreciate any insights from experience, or even just ideas or thoughts that I can test out.
    Best regards,
    John

    Hi K,
    Sorry for the long long delay in responding I've been traveling - and thanks for the e-mail tickler too...
    To answer your question in short, I do think that ANT is the right way to go; there is an extra ANT task called XMLTask that I was able to download and play with, and it seems it would make this manipulation of the cpx file (or the xcfg file, for that matter) pretty straightforward. I don't have any code to post; it's just in the conceptual stage for me right now. I didn't see anything magical in JDev 11 TP3 that solved this problem for me either.
    Having said all of that, it's more complicated than it might appear. In addition to the DataBindings.cpx file (stores, among other things, which AM configuration to use for each data control), it's certainly possible to programmatically access an AM (specifying the configuration either directly in the code or via a properties file/etc). I'm not sure what the most common use case for AM configurations is, but in my case, I have a Test configuration and a Prod configuration. The Test config, among other things, disables AM pooling. When I am developing/testing, I always use the Test config; in Production, I always use the Prod config. Perhaps the best way for me to do this would be to have an "Active" config and use ANT tasks to copy either Test or Prod to "Active." However, our Subversion repository is going to have a few complaints about this.
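    For what it's worth, the programmatic route looks roughly like this (a sketch only; "model.AppModule" and "AppModuleTest" are hypothetical names standing in for the AM definition and one of its named configurations):

        import oracle.jbo.ApplicationModule;
        import oracle.jbo.client.Configuration;

        public class AmConfigDemo {
            public static void main(String[] args) {
                // The second argument picks the named configuration
                // (e.g. Test vs Prod) for this application module.
                ApplicationModule am = Configuration.createRootApplicationModule(
                        "model.AppModule", "AppModuleTest");
                try {
                    // ... use the AM ...
                } finally {
                    Configuration.releaseRootApplicationModule(am, true);
                }
            }
        }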
    John

  • Best practice to deal with computer or departed employee

    Dear All,
        I would like to inquire about the best practice for dealing with the computer and computer account of a departed employee. Should they be disabled, reset, deleted, or just kept as they are until needed by another user?
    Regards,
    Hiam

    Ultimately your needs for their identities and equipment after they leave are what dictate how you should design this policy.
    First off, I recommend disabling the account immediately following the employee's departure. This prevents the user from using their credentials to log on again. Personally, I have a "Disabled Users" OU in Active Directory. When I disable accounts, I move them there for easy future retrieval.
    It is possible the user may return, or if they have access to certain systems you may need the account again. I would keep the accounts for a specific amount of time (e.g. 6 months, but this depends on your needs) and then delete them after this period.
    If the employee knows the passwords to any shared accounts (not a good idea, though many organizations have these) or has accounts in other systems that do not use Active Directory authentication, immediately change the passwords to these accounts as well, following the employee's departure.
    If the employee had administrative access to their computer (not a good idea, though it is the reality in most cases), you should disable the computer account and remove it from the network. This will prevent the employee from remotely accessing the machine until you are able to rebuild or inspect it for unapproved changes.
    Ask the user's manager, team members, and subordinates if there are any files that the employee would have stored on their computer. Back these up as necessary.
    Most likely you will reuse the computer for another employee. For best results you should use an image so you can re-image their machine and not have to worry whether they had installed any unwanted software (backdoors, viruses, illegal software, etc).
    Hope this helps.
    Jason Warren
    @jaspnwarren
    jasonwarren.ca
    habaneroconsulting.com/Insights

  • What's best strategy for dealing with 40+ hours of footage

    We have been editing a documentary with 45+ hours of footage and presently have captured roughly 230 GB. Needless to say, it's a lot of files. What's the best strategy for dealing with so much captured footage? It's almost impossible to remember it all, and labeling it while logging seems inadequate, as it is difficult to actually read comments in dozens and dozens of folders.
    Just looking for suggestions on how to deal with this problem for this and future projects.
    G5 Dual Core 2.3   Mac OS X (10.4.6)   2.5 g ram, 2 internal sata 2 250gb

    Ditto, ditto, ditto on all of the previous posts. I've done four long form documentaries.
    First I listen to all the sound bites and digitize only the ones that I think I will need. I will take in much more than I use, but I like to transcribe bites from the non-linear timeline. It's easier for me.
    I had so many interviews in the last doc that I gave each interviewee a bin. You must decide how you want to organize the sound bites. Do you want a bin for each interviewee, or do you want to do it by subject? That will depend on your documentary and subject matter.
    I then have b-roll bins. Sometimes I base them on location and sometimes I base them on subject matter. This last time I based them on location because I would have a good idea of what was in each bin by remembering where and when it was shot.
    Perhaps you weren't at the shoot and do not have this advantage. It's crucial that you organize your b-roll bins in a way that makes sense to you.
    I then have music bins and bins for my voice over.
    Many folks recommend that you work in small sequences and nest. This is a good idea for long form stuff. That way you don't get lost in the timeline.
    I also make a "used" bin. Once I've used a shot, I pull it out of the bin and put it "away". That keeps me from repeatedly looking at footage that I've already used.
    The previous posts are right. If you've digitized 45 hours of footage, you've put in too much. It's time to start deleting some media. Remember that when you hit the edit suite, you should be on the downhill slide. You should have a script and a clear idea of where you're going.
    I don't have enough fingers to count the number of times that I've had producers walk into my edit suite with a bunch of raw tape and tell me that they "want to make something cool." They generally have no idea where they're going and end up wondering why the process is so hard.
    Refine your story and base your clip selections on that story.
    Good luck
    Dual 2 GHz Power Mac G5   Mac OS X (10.4.8)  

  • Best practices for working with large placed bitmap images?

    Hey all,
    I need some advice on the best way to approach building these files. I've been working on some banners that are very large: 3 x 7 feet.
    Each banner has a simple vector graphic treatment at the top and bottom (rectangle with a different colored rule on top, and vector logo) and a small amount of text, just a URL and a headline. The headline is type (not converted to outlines) and usually has some other effect applied to it, say a drop shadow or outer glow. Under these graphics is a full-bleed image. The placed images need to be 150ppi at actual size, so they're honking big, sometimes up to 2GB. Once the layouts are approved, they have to go to a vendor for output.
    The Illustrator docs are really large, and I've read in other threads how to combat that (PDF compatibility, raster settings). But even still, does anyone have any insight into the best way to deal with these things? The dimensions are large, and then the images are large, and it just makes for lots of looking at the spinning ball of death...
    If it were me, I'd build them in InDe, but the vector graphics need to be edited for each one, and I so don't like to do that in InDe unless forced. To me, it's still ultimately a page layout app, not a drawing app. (Old school here.)
    FYI, our machines are all MBPs with 8G ram and the latest Intel Core 2 Duo chips, 2.66 and 2.8GHz. If we keep the files local (as opposed to working on the server) it should be fairly zippy... No?
    Any advice is appreciated, thanks!

    You can get into memory trouble with very large placed PDF files. TIFFs too.
    This has to do with the preview, which contains much more information than you need for working.
    On the other hand, if you place EPSs and take care not to turn on Overprint Preview, you can get away with huge files.
    If you do turn on Overprint Preview, your machine will slow down a lot and the file may become totally unmanageable.
    Compare this to InDesign, where you can control the quality of the preview. A hi-res preview will slow you down, and most often you don't need it anyway.
    I was working (in Illie) the other day on much larger files than you mention – displays for whole walls – and had some considerable trouble until I reverted to the old EPS format. They say it's dying but it ain't dead yet.

  • Best practice for developing with CRM 2013 (On Premises)

    Hello all.  I'm just starting to work with CRM, and I have some questions that hopefully will be simple for the seasoned developers.  It's mostly just some best practice or general how-to questions for the group.
    - When creating a new Visual Studio CRM Project I can connect to my CRM Instance and create new WebResources which deploy to the CRM instance just fine, but how can I pull all the existing items that are in the CRM Solution into the Visual Studio CRM project? Or do I need to export the solution to a ZIP, expand it with SolutionPackager.exe, then copy these into my Visual Studio project to get it into sync?
    - When multiple developers are working on changes, is it best to keep everything in a Visual Studio project as I mentioned above, or is it better for everyone to have their own instance of CRM to code with, so they can export/import solutions as needed and then have these solutions manually merged before moving into a common Test/QA environment?
    - When modifying the submenu on a CRM form, is it suggested to use Ribbon Workbench, or is it better/easier to just export the solution, expand it with SolutionPackager.exe, modify the ribbondiff and anything else required for the change, package it back up, then reimport to CRM? I've heard from some that Ribbon Workbench has some limitations, but being green I wasn't sure what those limitations might be or if it'd be best to just manually make these changes. Or is there any way to have a copy of the ribbondiff in Visual Studio and deploy this without having to repackage the Solution and import the ZIP?
    I think that's it for now :) Thanks for any advice or suggestions. I really want to start learning the ins and outs of CRM and how all the pieces fit together. Also, can someone direct me to some documentation or books that might give more insight on developing for CRM 2013 or 2015 (moving to this soon)?
    Thanks for your time.

    Hi Sam
    Also interested in best practice around this area - especially recommended development routes, unit testing, continuous integration etc - it would be great if you posted here if you find any good articles etc. At the moment we tend to just push changes
    onto a live system as and when appropriate and I'd prefer to move away from that...
    Thanks
    Stuart

  • What is best practice for integration with freight forwarders?

    Hello,
    We are looking into the possibilities for automatically exchanging data with one of our freight forwarders. We will send them our shipment information and they will send back shipment status and date information, including some additional information like the house bill. Sending the shipment data from our SAP (ECC 6) system is no issue; we have done that before. What is new to us is receiving back the status updates from the forwarder. Is there a kind of best practice for where to store this information on the shipment (or in a separate table) and which standard function module or BADI to use for this?
    We are using ECC 6.0 sales and distribution, but no transportation management or SCM modules.
    We would like to hear the experiences of people who have done this type of integration with their forwarders.
    Regards,
    Ed

    SAP has added SAP TM 8.10 as a separate package which is also integrated with R/3. This means a separate server is required if SAP TM needs to be implemented, and it will take care of your expectations. For more information on this, search in Google; you will find a couple of documents on this topic.
    G. Lakshmipathi

  • Best practice when dealing with long clips

    I have some long clips that I want to use more than once with different in and out points. I know this can't be done, so is the best practice to make duplicates of the clips and set different edit points, or to somehow roughly split the large clip into smaller clips and name them accordingly?

    Not certain what you think you can't do. Take a long clip and open it in the Viewer. Set In and Out points. Drop that into the Timeline. Now you can move along in the Viewer clip and set new Ins and Outs and drop that into the Timeline. Clips in the Timeline are created from the Ins and Outs you set in the Viewer.
    Is that what you want to do? If it is, I don't see where making copies of the clip would help you.
    Later, if you want to match up a clip in the Timeline to that master clip, just use Match Clip (find) in the Timeline to find where it correlates to your main clip.
    You can have FCE automatically create subclips at camera cut points by using DV Start/Stop Detect, if that is what you're looking for.

  • Best practices for start with itunesu

    Hi people, I'm from Barcelona, Spain. We are starting to deploy iTunes U for our university. We have a public and a private site; the public site works with Public Site Manager.
    I'm looking for the best way to handle videos and feeds. I have tried Podcast Producer 2, Feeder and Podcast Maker.
    Podcast Maker is a cool option, but its XML does not have the "elements" for iTunes U.
    Feeder is the best option; it produces a perfect XML for iTunes U.
    With Podcast Producer 2, I don't understand the difference between RSS feeds and Atom feeds. The workflow produces an iPod version and an Apple TV version; in Public Site Manager, when I add the RSS feed I can see an iPod version, Apple TV version or audio version, but with an Atom feed only an iPod version. Why?
    I create a workflow with the name of each course.
    Podcast Producer has no "elements" for iTunes U (category, order, etc...).
    The name of the author is the same as the username in Podcast Producer.
    Do you need to edit the XML file (UUIDNumber_offeed.cache) to add iTunes U elements?
    Is there any poll of software for publishing to iTunes U?
    Sorry for my English.
    Thanks a lot


  • Best Practices for VidConf with external parties

    We currently use Sony Video conference units for our end units. On the backend, we use a Cisco 3515 for multipoint conferences.
    We now need to be able to do video conferences with external parties using our MCU.
    Since there are many ways of doing this, are there solutions that work better than others? Basically I need to publish a doc that says 'here is what you will need. This port open on your firewall, a public IP address that is not NATd....'
    Any help would be appreciated.

    David,
    Unfortunately, Cisco has not (currently) chosen to implement H.460.x in their H.323 infrastructure solutions, so a Cisco GK could not be used in combination with a Tandberg SBC (session border controller). However, we currently use Cisco gatekeepers for everything but firewall traversal. For traversal, the Tandberg GK functions as the inside portion of the traversal solution and proxies for all my codecs (a mix of six different Tandberg and Polycom product lines)--even those that are H.460-capable. Combined with the SBC is a Cisco MCM (25xx router) sitting on the public side of the firewall and neighbored to the session border controller.
    The "public" GK is for the entities who write custom policies in their firewalls or set their codecs on the public side of their security. If not H.460 compliant, they have to register with a GK, they can't register directly with the SBC, hence, the public GK which is simply neighbored to the SBC.
    I would encourage you to closely look at the Polycom V2IU as well. It has come a LONG way since it was introduced a couple of years ago.
    Personally, I still don't feel like the V2IU has as much flexibility nor does it implement a dialing methodology best suited for converging networks. It is an ALG (App Layer GW) that has been tweaked to support H.323 traversal, so I don't think it will ever truly match the Tandberg Expressway solution apples-apples, but it is dramatically less expensive and thus worth considering.
    We tested both the Tandberg and Polycom traversal solutions with our internal CAC (call admission and control infrastructure), which is made up of multiple Cisco GK products in a fully meshed neighboring scenario prior to purchase of a traversal solution, and the Cisco products interoperated with both traversal solutions.
    Cisco did propose a Layer 3 solution, but we felt it best to pursue something based within the H.323 umbrella standard.
    If you want to talk more, please email me at [email protected] w/ direct contact info and I'll be happy to assist you in any way I can.
    Greg

  • Question: Best Strategy for Dealing with Large Files > 1GB

    Hi Everyone,
    I have to build a UCM system for large files > 1GB.
    What will be the best way to upload them (applet, checkin form, webdav)?
    Also what will be the best way to download them (applet, web form, webdav)?
    Any tips will be greatly appreciated
    Tal.

    Not sure what the official best practice is, but I prefer to get the file onto the server's file system first (file copy) and check it in from that path. This would require a customization / calling a custom service.
    Boris
    Edited by: user8760096 on Sep 3, 2009 4:01 AM

  • Best practices for DAO with relationships

    Suppose I have a relationship structure similar to this:
    School -> 1:M -> Classrooms -> 1:M -> Students -> 1:M -> Textbooks
    Should I have 4 DAOs? (SchoolDAO, ClassroomDAO, StudentDAO, TextBookDAO)...
    How would be the best way to insert a Student?
    Would I insert the Student from the ClassroomDAO (because I need to know the ClassroomID in order to insert a Student)? Or would it be better design-wise to insert a Student in the StudentDAO, and assume the correct ClassroomID has been retrieved before the insert?
    I guess this sounds a lot like an ORM tool -- we've tried to use Hibernate in the past, but we're working against a large legacy database and we've had lots of trouble hooking Hibernate up to it. We're trying to construct a nice DAO layer because the app is lacking a good one -- but I'd like to make sure we're following the best practices.
    Thanks, Kevin

    kp5150 wrote:
    "Suppose I have a relationship structure similar to this:
    School -> 1:M -> Classrooms -> 1:M -> Students -> 1:M -> Textbooks
    Should I have 4 DAOs? (SchoolDAO, ClassroomDAO, StudentDAO, TextBookDAO)..."
    Just noting that that doesn't seem realistic. Either the students own the books or the school does. If the first, then a school system wouldn't track them. If the second, then you need ownership in the database that reflects that.
    And students are not part of a classroom but rather part of a class. Classes are held/scheduled in classrooms.
    Additionally, best practices generally dictate that table names should not be plural unless they contain sets (plural of set) data.
    "How would be the best way to insert a Student? Would I insert the Student from the ClassroomDAO (because I need to know the ClassroomID in order to insert a Student)? Or would it be better design-wise to insert a Student in the StudentDAO, and assume the correct ClassroomID has been retrieved before the insert?"
    Probably irrelevant. You could do one or the other, or even both. In one case you have a class to which a student is added. In the other you have a student and add them to a class.
    "I guess this sounds a lot like an ORM tool -- we've tried to use Hibernate in the past but we're working against a large legacy database and we've had lots of trouble hooking Hibernate up to it. We're trying to construct a nice DAO layer because the app is lacking a good one -- but I'd like to make sure we're following the best practices."
    Huh? If you have an unrealistic datamodel (like the above), then that is the cause of the problems, not a tool.
    But in a generic sense if you have the following
    A -> 1:M -> B -> 1:M -> C -> 1:M -> D
    Then that is easy for tools to handle.
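    As a generic sketch of the insert question (hypothetical student/class tables and column names; either DAO could own this method, since both sides need the foreign key either way):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;
        import javax.sql.DataSource;

        public class StudentDAO {
            private final DataSource ds;

            public StudentDAO(DataSource ds) {
                this.ds = ds;
            }

            // The caller passes in the classId it already holds; the DAO
            // does not care how that id was retrieved.
            public long insert(long classId, String name) throws SQLException {
                String sql = "INSERT INTO student (class_id, name) VALUES (?, ?)";
                Connection con = ds.getConnection();
                try {
                    PreparedStatement ps = con.prepareStatement(
                            sql, Statement.RETURN_GENERATED_KEYS);
                    ps.setLong(1, classId);
                    ps.setString(2, name);
                    ps.executeUpdate();
                    ResultSet keys = ps.getGeneratedKeys();
                    keys.next();
                    return keys.getLong(1); // the new student's generated id
                } finally {
                    con.close();
                }
            }
        }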
