Best Practices for VidConf with external parties

We currently use Sony video conferencing units as our endpoints. On the back end, we use a Cisco 3515 for multipoint conferences.
We now need to be able to hold video conferences with external parties using our MCU.
Since there are many ways of doing this, are there solutions that work better than others? Basically I need to publish a doc that says 'here is what you will need: this port open on your firewall, a public IP address that is not NATed....'
Any help would be appreciated.

David,
Unfortunately, Cisco has not (currently) chosen to implement H.460.x in their H.323 infrastructure solutions, so a Cisco GK cannot be used in combination with a Tandberg SBC (session border controller). However, we currently use Cisco gatekeepers for everything but firewall traversal. For traversal, the Tandberg GK functions as the inside portion of the traversal solution and proxies for all my codecs (a mix of six different Tandberg and Polycom product lines)--even those that are H.460-capable. Paired with the SBC is a Cisco MCM (a 25xx router) sitting on the public side of the firewall and neighbored to the session border controller.
The "public" GK is for entities who write custom policies in their firewalls or place their codecs on the public side of their security. If they are not H.460 compliant, they cannot register directly with the SBC and must register with a gatekeeper; hence the public GK, which is simply neighbored to the SBC.
I would encourage you to closely look at the Polycom V2IU as well. It has come a LONG way since it was introduced a couple of years ago.
Personally, I still don't feel the V2IU has as much flexibility, nor does it implement a dialing methodology best suited to converging networks. It is an ALG (application-layer gateway) that has been tweaked to support H.323 traversal, so I don't think it will ever truly match the Tandberg Expressway solution apples to apples, but it is dramatically less expensive and thus worth considering.
Prior to purchasing a traversal solution, we tested both the Tandberg and Polycom traversal solutions against our internal CAC (call admission control) infrastructure, which is made up of multiple Cisco GK products in a fully meshed neighboring scenario, and the Cisco products interoperated with both traversal solutions.
Cisco did present a proposal based on a Layer 3 solution, but we felt it best to pursue something based within the H.323 umbrella of standards.
If you want to talk more, please email me at [email protected] w/ direct contact info and I'll be happy to assist you in any way I can.
Greg

Similar Messages

  • Best Practice for using Aperture with external HD

    Greetings, I currently have a fairly large Aperture library, about 210 GB; it's managed by Aperture and stored on my internal MacBook Pro hard drive.  Not good, I know.  I just received my new MBP with Thunderbolt and also purchased the new Promise Pegasus 12 TB RAID storage device.  My question to the community is this: what is the best way to set up Aperture to take advantage of the external RAID storage while still being able to keep some photos on my hard drive so I can work when away from the Pegasus device?
    Any thoughts are appreciated.
    Regards,
    Scott 

    publish/subscribe, right?
    lots of subscribers, big messages == lots of network traffic.
    it's a wide open question, no?

  • What is best practice for dealing with Engineering Spare Parts?

    Hello All,
    I am after some advice regarding the process for handling engineering spare parts in PM. (We run ECC 5)
    Our current process is as follows:
    All materials are set up as HIBEs
    Each material is batch managed
    The Batch field is used for the Bin location
    We are now looking to roll out PM to a site that has in excess of 50,000 spare parts and want to make sure we use best practice for handling them. We are now considering using a basic WM setup to handle the movement of parts.
    Please can you provide me with some feedback on what you feel the best practice is for dealing with these parts?
    We are looking to set up a solution that will allow us to generate pick lists etc. and implement a scanning solution to move parts in and out of stores.
    Regards
    Chris

    Hi,
    I hope all the 50,000 spare parts are maintained as stock items.
    1. Based on the usage of those spare parts, try to define safety stock and set the MRP type to "Reorder Point Planning". This way, you can avoid petty cash purchases.
    2. By keeping the spare parts (at least the critical components) in stock, neither planned nor unplanned maintenance will get delayed.
    3. By doing goods issue (GI) against a reservation, the quantity can be tracked against the order and the equipment.
    As this question is MM and WM related, those forums can give better clarity on it.
    Regards,
    Maheswaran.

  • Best practice for dealing with Recordsets

    Hi all,
    I'm wondering what best practice is for dealing with data retrieved via JDBC as RecordSets without involving third-party products such as Hibernate etc. I've been told NOT to use RecordSets throughout my applications since they take up resources and are expensive. I'm wondering which collection type is best to convert RecordSets into. The apps I'm building are web-based, using JSPs as the presentation layer, beans and servlets.
    Many thanks
    Erik

    There is no requirement that DAOs have a direct mapping to database tables. One of the advantages of the DAO pattern is that the business layer isn't directly aware of the persistence layer. If the joined data is used in the business code as if it were an unnormalized table, then you might want to provide a DAO for the joined data. If the joined data provides a subsidiary object within some particular object, you might add the access method to the DAO for the outer object.
    eg:
    In a user permissioning system where:
    1 user has many userRoles
    1 role has many userRoles
    1 role has many rolePermissions
    1 permission has many rolePermissions
    ie. there is a many to many relationship between users and roles, and between roles and permissions.
    The administrator needs to be able to add and delete permissions for roles and roles for users, so the CRUD for the rolePermissions table is probably most useful in the RoleDAO, and the CRUD for the userRoles table in the UserDAO. DAOs can also call each other.
    During operation the system needs to be able to get all permissions for a user at login, so the UserDAO should provide a readPermissions method that does a rather complex join across the user, userRole, rolePermission and permission tables.
    Note that if the system I just described were done with LDAP, a hierarchical database or an object database, the userRoles and rolePermissions tables wouldn't even exist; these are RDBMS artifacts, since relational databases don't understand many-to-many relationships. This is a good reason to avoid providing DAOs that give access to those tables.
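    As a minimal sketch of that readPermissions idea -- assuming a javax.sql.DataSource, and with table and column names that are purely illustrative, not prescribed by the pattern -- it might look something like this:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;
    import javax.sql.DataSource;

    public class UserDAO {
        private final DataSource dataSource;

        public UserDAO(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        // The "rather complex join" across user, userRole, rolePermission and
        // permission tables, exposed as a single DAO method.
        public List<String> readPermissions(long userId) throws SQLException {
            String sql =
                "SELECT DISTINCT p.name "
                + "FROM users u "
                + "JOIN user_roles ur ON ur.user_id = u.id "
                + "JOIN role_permissions rp ON rp.role_id = ur.role_id "
                + "JOIN permissions p ON p.id = rp.permission_id "
                + "WHERE u.id = ?";
            List<String> permissions = new ArrayList<String>();
            Connection con = dataSource.getConnection();
            try {
                PreparedStatement ps = con.prepareStatement(sql);
                ps.setLong(1, userId);
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    permissions.add(rs.getString("name"));
                }
                rs.close();
                ps.close();
            } finally {
                con.close();
            }
            return permissions;
        }
    }

    The calling servlet or business code then works with a plain List of permission names and never sees the ResultSet, which also speaks to the earlier question about not passing RecordSets around.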

  • Best practices for dealing with Exceptions on storage members

    We recently encountered an issue where one of our DistributedCaches was terminating itself and restarting due to a RuntimeException being thrown from our code (see below). As usual, the issue was in our own code and we have updated it to not throw a RuntimeException under any circumstances.
    I would like to know if there are any best practices for Exception handling, other than catching Exceptions and logging them. Should we always trap Exceptions and ensure that they do not bubble back up to code that is running from the Coherence jar? Is there a way to configure Coherence so that our DistributedCaches do not terminate even when custom Filters and such throw RuntimeExceptions?
    thanks, Aidan
    Exception below:
    2010-02-09 12:40:39.222/88477.977 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=48): An exception (java.lang.RuntimeException) occurred reading Message AggregateFilterRequest Type=31 for Service=DistributedCache{Name=StyleCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=1021, BackupCount=1, AssignedPartitions=201, BackupPartitions=204}
    2010-02-09 12:40:39.222/88477.977 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=48): Terminating DistributedCache due to unhandled exception: java.lang.RuntimeException

    Bob - Here is the full stacktrace:
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47): An exception (java.lang.RuntimeException) occurred reading Message AggregateFilterRequest Type=31 for Service=DistributedCache{Name=StyleCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=1021, BackupCount=1, AssignedPartitions=205, BackupPartitions=204}
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47): Terminating DistributedCache due to unhandled exception: java.lang.RuntimeException
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47):
    java.lang.RuntimeException: java.lang.ClassNotFoundException: com.edmunds.vehicle.Style$PublicationState
         at com.edmunds.common.coherence.EdmundsEqualsFilter.readExternal(EdmundsEqualsFilter.java:84)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readAsObjectArray(PofBufferReader.java:3328)
         at com.tangosol.io.pof.PofBufferReader.readObjectArray(PofBufferReader.java:2168)
         at com.tangosol.util.filter.ArrayFilter.readExternal(ArrayFilter.java:243)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readAsObjectArray(PofBufferReader.java:3328)
         at com.tangosol.io.pof.PofBufferReader.readObjectArray(PofBufferReader.java:2168)
         at com.tangosol.util.filter.ArrayFilter.readExternal(ArrayFilter.java:243)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2599)
         at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:348)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:4)
         at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
         at com.tangosol.coherence.component.net.message.requestMessage.distributedCacheRequest.partialRequest.FilterRequest.read(FilterRequest.CDB:8)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$AggregateFilterRequest.read(DistributedCache.CDB:4)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:117)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
         at java.lang.Thread.run(Thread.java:619)
    Caused by: java.lang.ClassNotFoundException: com.edmunds.vehicle.Style$PublicationState
         at java.lang.Class.forName0(Native Method)
         at java.lang.Class.forName(Class.java:169)
         at com.edmunds.common.coherence.EdmundsEqualsFilter.readExternal(EdmundsEqualsFilter.java:82)
         ... 25 more
    2010-02-09 13:04:23.122/90182.743 Oracle Coherence GE 3.4.2/411 <Info> (thread=Main Thread, member=47): Restarting Service: StyleCache
    Our code was doing something simple like:
        catch (Exception e) {
            throw new RuntimeException(e);
        }
    Would using the ensureRuntimeException call do anything for us here?
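    (As a rough sketch, one way to keep exceptions from custom filter logic out of the service thread is a wrapper like the one below; the class name and logging calls are illustrative, not our actual code. Note this only covers exceptions thrown while evaluating entries; the ClassNotFoundException in the stack trace above happens during deserialization and is addressed by putting com.edmunds.vehicle.Style$PublicationState on the storage members' classpath.)

    import com.tangosol.net.CacheFactory;
    import com.tangosol.util.Filter;

    // For use in a distributed query the wrapper would also need to be
    // serializable (e.g. POF); that part is omitted from this sketch.
    public class SafeFilter implements Filter {
        private final Filter delegate;

        public SafeFilter(Filter delegate) {
            this.delegate = delegate;
        }

        public boolean evaluate(Object target) {
            try {
                return delegate.evaluate(target);
            } catch (RuntimeException e) {
                // Log and treat the entry as non-matching rather than letting the
                // exception bubble up and terminate the cache service thread.
                CacheFactory.log("Filter evaluation failed: " + e, CacheFactory.LOG_ERR);
                return false;
            }
        }
    }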
    Edited by: aidanol on Feb 12, 2010 11:41 AM

  • Best practice for dealing with Recordsets, JDBC and JSP?

    I've spent the last three years developing web apps using JSP, Struts and Kodo JDO for persistence. All of the content for the apps was created as Java objects using model classes and saved to an Oracle db. Thus, data retrieved from the db was as instances of the model classes and then put into Struts form beans, etc.
    I changed jobs last month and am now having to use servlets with JDBC to retrieve records from db tables and return them in Recordsets. Oh, and I can't use Struts in my JSPs either. I'm beginning to think that I had it easy at my previous job, but maybe that's just because I was used to it.
    So here are my problems/questions:
    I have two tables with a one to many relationship that I need to retrieve data from, show in a jsp and be able to update eventually.
    So here's what I am doing:
    a) In a servlet, I use a SQL statement to join the tables and retrieve the results into a Recordset.
    b) I created a class with a bunch of String attributes to copy the Recordset data into, one Recordset row per instance of the bean, and then close the Recordset.
    c) I then add the beans to an ArrayList and save the ArrayList into the session.
    d) Then, in the JSP, I retrieve the ArrayList from the session and iterate over each bean instance, printing the data out to the jsp. There are some logic statements to determine when not to print redundant data caused by the one to many join.
    e) I have not written the code to update the data yet but was planning on having separate jsps for updating the (one) table and the (many) table.
    Would most of you do something similar? Would you use one SQL statement to retrieve all of the data for display and use logic to avoid printing the redundant part of the data? Or would you have used separate SQL queries, one for each table? Would you have saved the results into something other than an instance of a bean class that represents one record in the RecordSet? Would you have had a bean class with attributes other than Strings - like had a collection attribute to hold the results from the "many" table? The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before.
    Your help/opinion will be greatly appreciated!

    > Would you use one SQL statement to retrieve all of the data for display
    Yes.
    > and use logic to avoid printing the redundant part of the data
    No.
    I believe in minimising the number of queries. If it is a simple one-many join on a db table, then one query is better than one + n queries.
    However I prefer to store the objects in a bean class with attributes other than Strings - i.e. one object, with a collection attribute to hold the related "many" records.
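    As a minimal sketch of that approach (one query for the one-to-many join, mapped into one parent bean per parent row with a collection of child beans; the Order/OrderItem names, tables and columns are made up purely for illustration):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    class Order {
        private final long id;
        private final String customerName;
        private final List<OrderItem> items = new ArrayList<OrderItem>();

        Order(long id, String customerName) {
            this.id = id;
            this.customerName = customerName;
        }

        public long getId() { return id; }
        public String getCustomerName() { return customerName; }
        public List<OrderItem> getItems() { return items; }
    }

    class OrderItem {
        private final long id;
        private final String description;

        OrderItem(long id, String description) {
            this.id = id;
            this.description = description;
        }

        public long getId() { return id; }
        public String getDescription() { return description; }
    }

    public class OrderDAO {
        // One query for the one-to-many join; each parent row is created once
        // and its child rows are collected into the parent's item list.
        public List<Order> findOrdersWithItems(Connection con) throws SQLException {
            String sql =
                "SELECT o.order_id, o.customer_name, i.item_id, i.description "
                + "FROM orders o LEFT JOIN order_items i ON i.order_id = o.order_id "
                + "ORDER BY o.order_id";
            Map<Long, Order> ordersById = new LinkedHashMap<Long, Order>();
            PreparedStatement ps = con.prepareStatement(sql);
            try {
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    long orderId = rs.getLong("order_id");
                    Order order = ordersById.get(orderId);
                    if (order == null) {
                        order = new Order(orderId, rs.getString("customer_name"));
                        ordersById.put(orderId, order);
                    }
                    long itemId = rs.getLong("item_id");
                    if (!rs.wasNull()) { // LEFT JOIN: a parent may have no items
                        order.getItems().add(new OrderItem(itemId, rs.getString("description")));
                    }
                }
                rs.close();
            } finally {
                ps.close();
            }
            return new ArrayList<Order>(ordersById.values());
        }
    }

    The JSP then just iterates over the orders and, inside that loop, over each order's items, with no logic needed to skip redundant parent data.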
    Does the fact you are not using Struts mean that you have to use scriptlet code? (shudder)
    Or are you using JSTL, or other custom tags?
    How about tools like Ant? Junit testing?
    > The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before.
    Anything different takes adjusting to. Sounds like you know what you're doing for the most part. I agree, in terms of best practices what you have described so far sounds like a step backwards from what you were previously doing.
    However I wouldn't go complaining about it too loudly, too quickly. If you're new on the block there's nothing like making a pain of yourself and complaining about how backwards the existing work is to put your new workmates' backs up.
    Look on it as a challenge. Maybe discuss it quietly with a team leader, to see if they understand how much easier/better/less error prone such approaches can be?
    Struts, cumbersome as it can be, definitely has the advantage of pushing you to follow good MVC practice.
    Good luck,
    evnafets

  • Best practice for developing with CRM 2013 (On Premises)

    Hello all.  I'm just starting to work with CRM, and I have some questions that hopefully will be simple for the seasoned developers.  It's mostly just some best practice or general how-to questions for the group.
    - When creating a new Visual Studio CRM Project I can connect to my CRM Instance and create new WebResources which deploy to the CRM instance just fine, but how can I pull all the existing items that are in the CRM Solution into the Visual Studio CRM project?
     Or do I need to export the solution to a ZIP, expand it with SolutionPackager.exe, then copy these into my Visual Studio project to get it into sync?
    - When multiple developers are working on changes, is it best to keep everything in a Visual Studio project as I mentioned above, or is it better for everyone to have their own instance of CRM to code with, so they can export/import solutions as needed and these solutions can then be manually merged before moving into a common Test/QA environment?
    - When modifying the submenu on a CRM form, is it suggested to use Ribbon Workbench, or is it better/easier to just export the solution, expand it with SolutionPackager.exe, modify the ribbondiff and anything else required for the change, package it back up, then reimport to CRM? I've heard from some that Ribbon Workbench has some limitations, but being green I wasn't sure what those limitations might be, or if it'd be best to just make these changes manually. Or is there any way to have a copy of the ribbondiff in Visual Studio and deploy this without having to repackage the solution and import the ZIP?
    I think that's it for now :) Thanks for any advice or suggestions. I really want to start learning the ins and outs of CRM and how all the pieces fit together. Also, can someone direct me to some documentation or books that might give more insight on developing for CRM 2013 or 2015 (moving to this soon)?
    Thanks for your time.

    Hi Sam
    Also interested in best practice around this area - especially recommended development routes, unit testing, continuous integration etc. - it would be great if you posted here if you find any good articles. At the moment we tend to just push changes onto a live system as and when appropriate, and I'd prefer to move away from that...
    Thanks
    Stuart

  • What is best practice for integration with freight forwarders?

    Hello,
    We are looking into the possibilities for automatically exchanging data with one of our freight forwarders. We will send them our shipment information and they will send back shipment status and date information, including some additional information like the house bill. Sending the shipment data from our SAP (ECC 6) system is no issue; we have done that before. What is new to us is receiving back the status updates from the forwarder. Is there a kind of best practice for where to store this information on the shipment (or in a separate table), and what standard function module or BAdI to use for this?
    We are using ECC 6.0 sales and distribution, but no transportation management or SCM modules.
    We would like to hear the experiences of people who have done this type of integration with their forwarders.
    Regards,
    Ed

    SAP has added SAP TM 8.10 as a separate package which is also integrated with R/3, which means a separate server is required if SAP TM needs to be implemented; that will take care of your expectations. For more information on this, search Google and you will find a couple of documents on the topic.
    G. Lakshmipathi

  • Question: Best practices for dealing with multiple AM configurations

    Hello all,
    I have a project using ADF Business Components and ADF Faces. I would like to set up multiple configurations for the Application Modules to support the following scenarios:
    1). Local testing and debugging - using a connection defined in JDeveloper and AM Pooling turned off.
    2). Testing and debugging on an application server - using a JDBC Data Source and AM Pooling turned off
    3). Production deployment - using a JDBC Data Source and AM Pooling turned on.
    It is no problem to create multiple AM configurations to reflect this scenario. In order for the web part of the application to use the correct configurations, the DataBindings.cpx file must specify the correct ones. I was thinking of having 3 different DataBindings.cpx files and changing the CpxFileName context-param in the web.xml file as needed.
    My questions:
    1). Does this make sense as an approach? It should be better than having to change a single AM configuration every time I deploy or test. Is there any easy way to keep multiple DataBindings.cpx files in sync, given that we may add new pages from time to time? Alternatively, can we do some type of "include" processing to include just the dataControlUsages section into a common DataBindings.cpx file?
    2). How would you manage the build-and-deploy process? For the most part, in JDev we would be using configuration #1. The only time to switch to configuration #2 or #3 would be to build an EAR file for deployment. Is this something that it would make sense to accomplish with ANT? I'm not an ANT expert at all. The ANT script would have "build-test-ear" and "build-prod_ear" targets which would swap in a correct web.xml file, recompile everything, build the EAR, then put the development web.xml file back. I'm relatively sure this is possible... comments?
    3). Is there some other recommended approach?
    I appreciate any insights from experience, or even just ideas or thoughts that I can test out.
    Best regards,
    John

    Hi K,
    Sorry for the long long delay in responding I've been traveling - and thanks for the e-mail tickler too...
    To answer your question in short, I do think that ANT is the right way to go; there is an extra ANT task called XMLTask that I was able to download and play with, and it seems it would make this manipulation of the cpx file (or the xcfg file, for that matter) pretty straightforward. I don't have any code to post; it's just in the conceptual stage for me right now. I didn't see anything magical in JDev 11 TP3 that solved this problem for me either.
    Having said all of that, it's more complicated than it might appear. In addition to the DataBindings.cpx file (which stores, among other things, which AM configuration to use for each data control), it's certainly possible to programmatically access an AM (specifying the configuration either directly in the code or via a properties file/etc). I'm not sure what the most common use case for AM configurations is, but in my case, I have a Test configuration and a Prod configuration. The Test config, among other things, disables AM pooling. When I am developing/testing, I always use the Test config; in Production, I always use the Prod config. Perhaps the best way for me to do this would be to have an "Active" config and use ANT tasks to copy either Test or Prod to "Active." However, our Subversion repository is going to have a few complaints about this.
    John

  • Best practices for DAO with relationships

    Suppose I have a relationship structure similar to this:
    School -> 1:M -> Classrooms -> 1:M -> Students -> 1:M -> Textbooks
    Should I have 4 DAOs? (SchoolDAO, ClassroomDAO, StudentDAO, TextBookDAO)...
    What would be the best way to insert a Student?
    Would I insert the Student from the ClassroomDAO (because I need to know the ClassroomID in order to insert a Student)? Or would it be better design-wise to insert a Student in the StudentDAO, and assume the correct ClassroomID has been retrieved before the insert?
    I guess this sounds a lot like an ORM tool -- we've tried to use Hibernate in the past but we're working against a large legacy database and we've had lots of trouble hooking hibernate up to it. We're trying to construct a nice DAO layer because the app is lacking a good one -- but I'd like to make sure we're following the best practices.
    Thanks, Kevin

    kp5150 wrote:
    > Suppose I have a relationship structure similar to this:
    > School -> 1:M -> Classrooms -> 1:M -> Students -> 1:M -> Textbooks
    > Should I have 4 DAOs? (SchoolDAO, ClassroomDAO, StudentDAO, TextBookDAO)...
    Just noting that that doesn't seem realistic. Either the students own the books or the school does.
    If the first then a school system wouldn't track them. If the second then you need ownership in the database that reflects that.
    And students are not part of a class room but rather part of a class. Classes are held/scheduled in class rooms.
    Additionally best practices generally dictate that table names should not be plural unless they contain sets (plural of set) data.
    > What would be the best way to insert a Student?
    > Would I insert the Student from the ClassroomDAO (because I need to know the ClassroomID in order to insert a Student)? Or would it be better design-wise to insert a Student in the StudentDAO, and assume the correct ClassroomID has been retrieved before the insert?
    Probably irrelevant. You could do one or the other or even both. In one case you have a class to which a student is added. In the other you have a student and add them to a class.
    > I guess this sounds a lot like an ORM tool -- we've tried to use Hibernate in the past but we're working against a large legacy database and we've had lots of trouble hooking hibernate up to it. We're trying to construct a nice DAO layer because the app is lacking a good one -- but I'd like to make sure we're following the best practices.
    Huh?
    If you have an unrealistic data model (like the above) then that is the cause of problems, not a tool.
    But in a generic sense if you have the following
    A -> 1:M -> B -> 1:M -> C -> 1:M -> D
    Then that is easy for tools to handle.
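    A minimal sketch of the two interchangeable options for the insert (all names here are made up for illustration):

    class Student {
        // name, date of birth, etc. omitted for brevity
    }

    // Option 1: the class(room) DAO owns the operation and takes the student.
    interface ClassroomDAO {
        void addStudent(long classroomId, Student student);
    }

    // Option 2: the student DAO does the insert and takes the already-known
    // classroom id as a parameter.
    interface StudentDAO {
        void insert(Student student, long classroomId);
    }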

  • Best practices for working with large placed bitmap images?

    Hey all,
    I need some advice on the best way to approach building these files. I've been working on some banners that are very large: 3 x 7 feet.
    Each banner has a simple vector graphic treatment at the top and bottom (a rectangle with a different colored rule on top, and a vector logo) and a small amount of text, just a URL and a headline. The headline is type (not converted to outlines) and usually has some other effect applied to it, say a drop shadow or outer glow. Under these graphics is a full-bleed image. The placed images need to be 150 ppi at actual size, so they're honking big, sometimes up to 2 GB. Once the layouts are approved, they have to go to a vendor for output.
    The Illustrator docs are really large, and I've read in other threads how to combat that (PDF compatibility, raster settings). But even still, does anyone have any insight into the best way to deal with these things? The dimensions are large, and then the images are large, and it just makes for lots of looking at the spinning ball of death...
    If it were me, I'd build them in InDe, but the vector graphics need to be edited for each one, and I so don't like to do that in InDe unless forced. To me, it's still ultimately a page layout app, not a drawing app. (Old school here.)
    FYI, our machines are all MBPs with 8G ram and the latest Intel Core 2 Duo chips, 2.66 and 2.8GHz. If we keep the files local (as opposed to working on the server) it should be fairly zippy... No?
    Any advice is appreciated, thanks!

    You can get into memory trouble with very large placed pdf files. Tiffs too.
    This has to do with the preview, which contains much more information than you need to work with.
    On the other hand if you place EPSs and take care not to turn on overprint preview you can get away with huge files.
    If you do turn on overprint preview your machine will slow down a lot and the file may become totally unmanageable.
    Compare this to InDesign, where you can control the quality of the preview. A hi-res preview will slow you down and most often you don't need it anyway.
    I was working (in Illie) the other day on much larger files than you mention – displays for whole walls – and had considerable trouble until I reverted to the old EPS format. They say it's dying but it ain't dead yet.

  • Best practices for start with itunesu

    Hi people, I'm from Barcelona, Spain. We are starting to deploy iTunes U for our university. We have a public and a private site; the public site works with Public Site Manager.
    I'm looking for the best way to handle videos and feeds. I have tried Podcast Producer 2, Feeder and Podcast Maker.
    Podcast Maker is a cool option, but its XML does not have the elements for iTunes U.
    Feeder is the best option; it produces a perfect XML for iTunes U.
    With Podcast Producer 2, I don't understand the difference between RSS feeds and Atom feeds. The workflow produces an iPod version and an Apple TV version; in Public Site Manager, when I add the RSS feed I can see an iPod version, an Apple TV version or an audio version, but with the Atom feed only an iPod version. Why?
    I create a workflow with the name of the course.
    Podcast Producer does not have the elements for iTunes U (category, order, etc.), and the name of the author is the same as the username in Podcast Producer.
    Do you need to edit the XML file (UUIDNumber_offeed.cache) to add the iTunes U elements?
    Is there any recommended software for publishing to iTunes U?
    Sorry for my English.
    Thanks a lot

  • Best Practices, Project Files with External Drives

    I've found bits of discussion here and there, but I would like some further clarity.
    For performance and stability, is it best to have the Final Cut Pro Documents directory on an external hard drive and the project file on the desktop?
    Ideally I would like to have all the files for each client project on a separate Hard Drive for working and storage purposes. Is this generally what others are doing? I've had some editors not do this, and I end up searching around for clips and project elements on the main hard drive, which makes me nervous that I will miss something for archiving.
    Also, with some critical projects I've been making a second copy of the external hard drive at various stages of the project in case of drive failure. I guess a swappable RAID system might be an option, but I really like the plug-and-play of FireWire, not to mention they're cheap.
    Any insights into how others are managing projects, files and storage would be appreciated.
    Thanks,

    I keep all my projects on the internal drive (main system drive) and all the media on the externals or secondary internal drive. I back up the project files each night to one of the externals (the one with the media of the project I am currently working on) and a USB Thumb Drive. I do this redundant backup as the project file is gold...without this your project is lost. Tapes you can always recapture.
    When I start a project, I designate the scratch drive. That way, all the captures and renders go to that drive. If I change projects I am working on from day to day, then I go into my settings and change the scratch drive, so that I keep things where they belong....and so they don't end up scattered across my system.
    Backing up my drives? Cloning the media drives? Unnecessary IMHO. If you lose a drive, you still have the project file and you simply re-capture. And I have never lost a media drive yet...
    But, if you want to be safe about it, get a RAID and make it the kind of RAID that duplicates the media across several drives. Not sure what that RAID type is called.
    Shane

  • [solved] Best Practice for SSDs with crypto on it regarding TRIM

    Hi,
    I was doing some research on this matter but did not find much information that was more useful than confusing.
    The setup is the following: I have an SSD (Crucial, Marvell-Controller) with two partitions: a small one for /boot and a bigger one for the rest, which is a LUKS-Container. I left some unpartitioned space at the end of the SSD. I'd like to *not* enable TRIM on the LUKS-device for security purposes.
    I was wondering now:
    I read that TRIM reduces the write amplification of garbage collection. But shouldn't garbage collection not write at all, and just erase cells with no information on them?
    If TRIM helps keeping up performance: do I need to explicitly trim the unpartitioned space? Or does this area behave like the spare area?
    If TRIM of the unpartitioned space is necessary: what's the most elegant way to do so?
    If someone could shed a little light on this matter, that would help a lot and would be greatly appreciated.
    Last edited by Ovion (2015-03-09 19:20:13)

    I have the same setup. Crucial SSD, LUKS, TRIM (cron.weekly fstrim). A full hexdump (minus gigabytes of random data) looks like this: https://bpaste.net/raw/505157 (tell me about it)
    There is no issue with security. At least, none I care about. So the attacker can see how much free space there is [and where]. The where part is important since lots of small files give a different picture [lots of small free spaces in between] than a single very large file would [no free space in between], assuming there is no fragmentation worth mentioning [which Linux filesystems are usually good at]. So an attacker could probably make some guesses about your amount of data and file sizes. On the other hand I don't see how that's important, in ecryptfs you get this kind of info for free, and I have all sorts of files in all sorts of sizes either way, so it's not a big secret.
    The question is, when all it takes to crack your encryption setup is a keylogger or a $5 encryption wrench, does it really matter?
    Don't use TRIM on your SSD if you don't want to (there are lots of reasons not to TRIM... like better data recovery chances if you delete something by accident). But don't fool yourself thinking it's somehow really important for your security...
    As for unpartitioned space, if it ever was in use before, you need to trim it once. Create a partition on it, then blkdiscard the partition, then delete the partition. That way it's good until it's "in use" again because you dd all over it or had it resynced in a RAID.
    Apart from that, TRIM does all the things you said (reduce write amplification, performance, etc. etc.) but it's not like the SSD can't take it if you're not writing 24/7 because it's a database server burning up or something.
    Last edited by frostschutz (2015-03-09 19:19:16)

  • Best practice for handling local external assets in Air?

    When setting up a project (as3 mobile not flex framework ideally), where and how might one place their runtime-loaded application assets?
    Especially, does anyone have example code for referencing local files such that it works across android, iOS and the local debugger/local playback?
    Thanks,
    Scott

    Just have a folder to collect your assets and reference them with a relative path, because you're going to attach the files and folders while packaging, and that is what your app refers to.
