Question: Best Strategy for Dealing with Large Files > 1GB

Hi Everyone,
I have to build a UCM system for large files > 1GB.
What will be the best way to upload them (applet, checkin form, webdav)?
Also what will be the best way to download them (applet, web form, webdav)?
Any tips will be greatly appreciated
Tal.

Not sure what the official best practice is, but I prefer to get the file onto the server's file system first (file copy) and check it in from that path. This would require a customization / calling a custom service.
Boris
Edited by: user8760096 on Sep 3, 2009 4:01 AM
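For the file-copy step, a minimal sketch using NIO channels might look like the following (the class name and paths are placeholders; the subsequent check-in from the staged path would still go through a custom service, which is not shown):

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;

public class LargeFileStager {

    // Copies a large file to a staging path on the content server's file
    // system using FileChannel.transferTo, which avoids pulling the whole
    // file into the JVM heap.
    public static void stage(File source, File target) throws IOException {
        FileInputStream in = new FileInputStream(source);
        FileOutputStream out = new FileOutputStream(target);
        try {
            FileChannel inChannel = in.getChannel();
            FileChannel outChannel = out.getChannel();
            long size = inChannel.size();
            long position = 0;
            // transferTo may move fewer bytes than requested, so loop until done
            while (position < size) {
                position += inChannel.transferTo(position, size - position, outChannel);
            }
        } finally {
            in.close();
            out.close();
        }
    }
}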

Similar Messages

  • What's best strategy for dealing with 40+ hours of footage

    We have been editing a documentary with 45+ hours of footage and presently have captured roughly 230 GB. Needless to say, it's a lot of files. What's the best strategy for dealing with so much captured footage? It's almost impossible to remember it all, and labeling it while logging seems inadequate as it is difficult to actually read comments in dozens and dozens of folders.
    Just looking for suggestions on how to deal with this problem for this and future projects.
    G5 Dual Core 2.3   Mac OS X (10.4.6)   2.5 g ram, 2 internal sata 2 250gb

    Ditto, ditto, ditto on all of the previous posts. I've done four long form documentaries.
    First I listen to all the sound bites and digitize only the ones that I think I will need. I will take in much more than I use, but I like to transcribe bites from the non-linear timeline. It's easier for me.
    I had so many interviews in the last doc that I gave each interviewee a bin. You must decide how you want to organize the sound bites. Do you want a bin for each interviewee, or do you want to do it by subject? That will depend on your documentary and subject matter.
    I then have b-roll bins. Sometimes I base them on location and sometimes I base them on subject matter. This last time I based them on location because I would have a good idea of what was in each bin by remembering where and when it was shot.
    Perhaps you weren't at the shoot and don't have this advantage. It's crucial that you organize your b-roll bins in a way that makes sense to you.
    I then have music bins and bins for my voice over.
    Many folks recommend that you work in small sequences and nest. This is a good idea for long form stuff. That way you don't get lost in the timeline.
    I also make a "used" bin. Once I've used a shot I pull it out of its bin and put it "away." That keeps me from repeatedly looking at footage that I've already used.
    The previous posts are right. If you've digitized 45 hours of footage you've put in too much. It's time to start deleting some media. Remember that when you hit the edit suite, you should be on the downhill slide. You should have a script and a clear idea of where you're going.
    I don't have enough fingers to count the number of times that I've had producers walk into my edit suite with a bunch of raw tape and tell me that they "want to make something cool." They generally have no idea where they're going and end up wondering why the process is so hard.
    Refine your story and base your clip selections on that story.
    Good luck
    Dual 2 GHz Power Mac G5   Mac OS X (10.4.8)  

  • Question: Best practices for dealing with multiple AM configurations

    Hello all,
    I have a project using ADF Business Components and ADF Faces. I would like to set up multiple configurations for the Application Modules to support the following scenarios:
    1). Local testing and debugging - using a connection defined in JDeveloper and AM Pooling turned off.
    2). Testing and debugging on an application server - using a JDBC Data Source and AM Pooling turned off
    3). Production deployment - using a JDBC Data Source and AM Pooling turned on.
    It is no problem to create multiple AM configurations to reflect this scenario. In order for the web part of the application to use the correct configurations, the DataBindings.cpx file must specify the correct ones. I was thinking of having 3 different DataBindings.cpx files and changing the CpxFileName context-param in the web.xml file as needed.
    My questions:
    1). Does this make sense as an approach? It should be better than having to change a single AM configuration every time I deploy or test. Is there an easy way to keep multiple DataBindings.cpx files in sync, given that we may add new pages from time to time? Alternatively, can we do some type of "include" processing to include just the dataControlUsages section into a common DataBindings.cpx file?
    2). How would you manage the build-and-deploy process? For the most part, in JDev we would be using configuration #1. The only time to switch to configuration #2 or #3 would be to build an EAR file for deployment. Is this something that it would make sense to accomplish with ANT? I'm not an ANT expert at all. The ANT script would have "build-test-ear" and "build-prod_ear" targets which would swap in a correct web.xml file, recompile everything, build the EAR, then put the development web.xml file back. I'm relatively sure this is possible... comments?
    3). Is there some other recommended approach?
    I appreciate any insights from experience, or even just ideas or thoughts that I can test out.
    Best regards,
    John

    Hi K,
    Sorry for the long long delay in responding I've been traveling - and thanks for the e-mail tickler too...
    To answer your question in short, I do think that ANT is the right way to go; there is an extra ANT task called XMLTask that I was able to download and play with, and it seems it would make this manipulation of the cpx file (or the xcfg file, for that matter) pretty straightforward. I don't have any code to post; it's just in the conceptual stage for me right now. I didn't see anything magical in JDev 11 TP3 that solved this problem for me either.
    Having said all of that, it's more complicated than it might appear. In addition to the DataBindings.cpx file (stores, among other things, which AM configuration to use for each data control), it's certainly possible to programmatically access an AM (specifying the configuration either directly in the code or via a properties file/etc). I'm not sure what the most common use case for AM configurations is, but in my case, I have a Test configuration and a Prod configuration. The Test config, among other things, disables AM pooling. When I am developing/testing, I always use the Test config; in Production, I always use the Prod config. Perhaps the best way for me to do this would be to have an "Active" config and use ANT tasks to copy either Test or Prod to "Active." However, our Subversion repository is going to have a few complaints about this.
    John
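    For the "specify the configuration directly in the code" route mentioned above, a minimal sketch might look like this (the AM definition and configuration names are placeholders, and choosing the name from a system property is just one of several options):

    import oracle.jbo.ApplicationModule;
    import oracle.jbo.client.Configuration;

    public class AmConfigSelector {

        // Picks the AM configuration name from a system property so the same
        // code can run against, say, a Test configuration locally and a Prod
        // configuration on the server, instead of hard-coding one of them.
        public static ApplicationModule create() {
            String configName = System.getProperty("am.config", "AppModuleTest");
            return Configuration.createRootApplicationModule(
                    "model.services.AppModule", configName);
        }

        public static void release(ApplicationModule am) {
            Configuration.releaseRootApplicationModule(am, true);
        }
    }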

  • Best strategy for dealing with a mixed library and missing photos

    Looking at the package contents of my iPhoto Library, it appears that I have a "mixed" library (i.e. part managed and part referenced). I'm not sure how this happened--I originally created this library about three years ago when I first bought my Mac with iLife '06, then later upgraded to iLife '08 which I'm currently still using. Maybe the default setting was to have a referenced library in iPhoto '06? Don't know. I don't recall ever changing this setting.
    Anyway, I was starting to manually recover some missing photos but it just occurred to me that iPhoto is simply updating the aliases to point to the photos I'm telling it to use, and not actually copying the photos into the library (I'm guessing that only happens when you import a photo). This is not what I want. I want my library to be completely managed. I understand that, as my current Preferences settings dictate, any new photos I import now will be copied into the iPhoto Library. But for the existing photos that I am able to recover, I don't want to have to keep the originals outside of iPhoto.
    I read in another topic about an application called AliasHerder and I'm thinking of trying that. The problem is, I no longer have original copies of all of the missing photos. I can recreate the folder structure for some of them but not all. I'm not sure what this will do to my iPhoto Library (I have emailed the vendor for clarification). I'm wondering if I'll still end up with a mixed library containing aliases that point to a non-existent file. Perhaps someone who has some experience with the tool could enlighten me.
    Based on suggestions given to me in previous posts I made, I tried both rebuilding my iPhoto Library and using the iPhoto Library Manager tool. Neither of these produced the results I had hoped for. Am I better off just starting off from scratch and creating a whole new library? And if so, what's the best way to get all of the existing photos from my current library to the new one? Do I export them out and then import them into the new library? Is there any way to salvage events, albums, etc. from my existing library or will I have to recreate these in the new library?
    Thanks in advance!

    First check iPhoto's Advanced preference pane to make sure you're running a "managed" library.
    Since you don't have the "source" photos available to relink to, your best bet, IMO, would be to continue using your current library and delete the missing photos from it whenever you come across them in day-to-day use. If you click on the thumbnail of a missing photo, drag it to the iPhoto trash and empty it, that will delete it from the library. This way you will retain all of your organizational efforts, i.e. albums, books, keywords, etc.
    When you rebuilt with iPhoto Library Manager what did the resulting library contain? If it contained your photos without the missing photos you could use it. You would retain your albums, keywords, faces, places and other metadata. You will lose any keepsakes, i.e. books, slideshows, cards, etc.
    Or, you could start over as follows:
    Creating a new library while preserving the Events from the original library.
    1 - Move the existing library folder to the desktop
    2 - Open the library package like this.
    3 - Launch iPhoto and, when asked, select the option to create a new library.
    4 - Drag the Originals folder from the iPhoto Library on the desktop into the open iPhoto window.
    You will end up with all your photos (no missing photos) in the same events as in the original library. But there will be no albums, metadata or keepsakes. It's for this reason I made the original suggestion of continuing with your current library above.
    OT
    TIP: For insurance against the iPhoto database corruption that many users have experienced I recommend making a backup copy of the Library6.iPhoto (iPhoto.Library for iPhoto 5 and earlier versions) database file and keep it current. If problems crop up where iPhoto suddenly can't see any photos or thinks there are no photos in the library, replacing the working Library6.iPhoto file with the backup will often get the library back. By keeping it current I mean backup after each import and/or any serious editing or work on books, slideshows, calendars, cards, etc. That insures that if a problem pops up and you do need to replace the database file, you'll retain all those efforts. It doesn't take long to make the backup and it's good insurance.
    I've created an Automator workflow application (requires Tiger or later), iPhoto dB File Backup, that will copy the selected Library6.iPhoto file from your iPhoto Library folder to the Pictures folder, replacing any previous version of it. There are versions that are compatible with iPhoto 5, 6, 7 and 8 libraries and Tiger and Leopard. Just put the application in the Dock and click on it whenever you want to backup the dB file. iPhoto does not have to be closed to run the application, just idle. You can download it at Toad's Cellar. Be sure to read the Read Me pdf file.
    NOTE: The new rebuild option in iPhoto 09 (v. 8.0.2), "Rebuild the iPhoto Library Database from automatic backup", makes this tip obsolete.

  • Best practice for dealing with Recordsets

    Hi all,
    I'm wondering what is best practice for dealing with data retrieved via JDBC as Recordsets without involving third-party products such as Hibernate etc. I've been told NOT to use RecordSets throughout my applications since they take up resources and are expensive. I'm wondering which collection type is best to convert RecordSets into. The apps I'm building are web-based, using JSPs as the presentation layer, beans and servlets.
    Many thanks
    Erik

    There is no requirement that DAOs have a direct mapping to database tables. One of the advantages of the DAO pattern is that the business layer isn't directly aware of the persistence layer. If the joined data is used in the business code as if it were an unnormalized table, then you might want to provide a DAO for the joined data. If the joined data provides a subsidiary object within some particular object, you might add the access method to the DAO for the outer object.
    eg:
    In a user permissioning system where:
    1 user has many userRoles
    1 role has many userRoles
    1 role has many rolePermissions
    1 permission has many rolePermissions
    ie. there is a many to many relationship between users and roles, and between roles and permissions.
    The administrator needs to be able to add and delete permissions for roles and roles for users, so the CRUD for the rolePermissions table is probably most useful in the RoleDAO, and the CRUD for the userRoles table in the UserDAO. DAOs can also call each other.
    During operation the system needs to be able to get all permissions for a user at login, so the UserDAO should provide a readPermissions method that does a rather complex join across the user, userRole, rolePermission and permission tables.
    Note that if the system I just described were done with LDAP, a hierarchical database or an object database, the userRoles and rolePermissions tables wouldn't even exist; these are RDBMS artifacts, since relational databases don't understand many-to-many relationships. This is a good reason to avoid providing DAOs that give access to those tables.
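    As an illustration, a readPermissions method along those lines might look something like this (table and column names are invented for the example):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    public class UserDAO {

        private final Connection connection;

        public UserDAO(Connection connection) {
            this.connection = connection;
        }

        // Reads all permission names for a user by joining across the
        // user/userRole/rolePermission/permission tables, so the business
        // layer never sees the link tables directly.
        public List<String> readPermissions(long userId) throws SQLException {
            String sql =
                "SELECT DISTINCT p.name " +
                "FROM users u " +
                "JOIN user_roles ur ON ur.user_id = u.id " +
                "JOIN role_permissions rp ON rp.role_id = ur.role_id " +
                "JOIN permissions p ON p.id = rp.permission_id " +
                "WHERE u.id = ?";
            PreparedStatement stmt = connection.prepareStatement(sql);
            try {
                stmt.setLong(1, userId);
                ResultSet rs = stmt.executeQuery();
                List<String> permissions = new ArrayList<String>();
                while (rs.next()) {
                    permissions.add(rs.getString("name"));
                }
                rs.close();
                return permissions;
            } finally {
                stmt.close();
            }
        }
    }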

  • Best practices for working with large placed bitmap images?

    Hey all,
    I need some advice on the best way to approach building these files. I've been working on some banners that are very large: 3 x 7 feet.
    Each banner has a simple vector graphic treatment at the top and bottom (a rectangle with a different colored rule on top, and a vector logo) and a small amount of text, just a URL and a headline. The headline is type (not converted to outlines) and usually has some other effect applied to it, say a drop shadow or outer glow. Under these graphics is a full-bleed image. The placed images need to be 150 ppi at actual size, so they're honking big, sometimes up to 2GB. Once the layouts are approved, they have to go to a vendor for output.
    The Illustrator docs are really large, and I've read in other threads how to combat that (PDF compatibility, raster settings). But even still, does anyone have any insight into the best way to deal with these things? The dimensions are large, and then the images are large, and it just makes for lots of looking at the spinning ball of death...
    If it were me, I'd build them in InDe, but the vector graphics need to be edited for each one, and I so don't like to do that in InDe unless forced. To me, it's still ultimately a page layout app, not a drawing app. (Old school here.)
    FYI, our machines are all MBPs with 8G ram and the latest Intel Core 2 Duo chips, 2.66 and 2.8GHz. If we keep the files local (as opposed to working on the server) it should be fairly zippy... No?
    Any advice is appreciated, thanks!

    You can get into memory trouble with very large placed pdf files. Tiffs too.
    This has to do with the preview, which contains much more information than you need to work with.
    On the other hand if you place EPSs and take care not to turn on overprint preview you can get away with huge files.
    If you do turn on overprint preview your machine will slow down a lot and the file may become totally unmanageable.
    Compare this to InDesign, where you can control the quality of the preview. A hi-res preview will slow you down, and most often you don't need it anyway.
    I was working (in Illie) the other day on much larger files than you mention – displays for whole walls – and had some considerable trouble until I reverted to the old EPS format. They say it's dying, but it ain't dead yet.

  • What is best practice for dealing with Engineering Spare Parts?

    Hello All,
    I am after some advice regarding the process for handling engineering spare parts in PM. (We run ECC 5)
    Our current process is as follows:
    All materials are set up as HIBE's
    Each material is batch managed
    The Batch field is used for the Bin location
    We are now looking to roll out PM to a site that has in excess of 50,000 spare parts and want to make sure we use best practice for handling the spare parts. We are now considering using a basic WM setup to handle the movement of parts.
    Please can you provide me with some feedback on what you feel the best practice is for dealing with these parts?
    We are looking to set up a solution that will allow us to generate pick lists etc. and implement a scanning solution to move parts in and out of stores.
    Regards
    Chris

    Hi,
    I hope all the 50,000 spare parts are maintained as stock items.
    1. Based on the usage of those spare parts, try to define safety stock & define MRP as "Reorder Point Planning". By this, you can avoid petty cash purchases.
    2. By keeping the spare parts (at least the critical components) in stock, planned maintenance as well as unplanned maintenance will not get delayed.
    3. By doing GI based on reservation, qty can be tracked against the order & equipment.
    As this question is MM & WM related, they can give better clarity on this.
    Regards,
    Maheswaran.

  • Dealing with large files, again

    Ok, so I've looked into using BufferedReaders and can't get my head round them; or more specifically, I can't work out how to apply them to my code.
    I have inserted a section of my code below, and want to change it so that I can read in large files (of over 5 million lines of text). I am reading the data into different arrays and then processing them. Obviously, when reading in such large files, my arrays are filling up and failing.
    Can anyone suggest how to read the file into a buffer, deal with a set amount of data, process it, empty the arrays, then read in the next lot?
    Any ideas?
    void readV2(){
        String line;
        int i = 0, lineNo = 0;
        try {
            // Create input stream
            FileReader fr = new FileReader(inputFile);
            BufferedReader buff = new BufferedReader(fr);
            while ((line = buff.readLine()) != null) {
                if (line.substring(0, 2).equals("V2")) {
                    lineNo = lineNo + 1;
                    IL[i] = Integer.parseInt(line.substring(8, 15).trim());
                    // Other processing here
                    NoOfPairs = NoOfPairs + 1;
                } // end if
                else {
                    break;
                } // end else
            } // end while
            buff.close();
            fr.close();
        } // end try
        catch (IOException e) {
            log.append("IOException error in readESSOV2XY" + e + newline);
            proceed = false;
        } // end catch IOException
        catch (ArrayIndexOutOfBoundsException e) {
            arrayIndexOutOfBoundsError(lineNo);
        } // end catch ArrayIndexOutOfBoundsException
        catch (StringIndexOutOfBoundsException e) {
            stringIndexOutOfBoundsError(e.getMessage(), lineNo);
        } // end catch StringIndexOutOfBoundsException
    } // end readV2
    Many thanks for any help!
    Tim

    Yeah, ok, so that seems simple enough. But once I have read part of the file into my program, I need to call another method to deal with the data I have read in and write it out to an output file. How do I get my file reader to "remember" where I am up to in the file I'm reading? An obvious way, but possibly not too good technically, would be to set a counter and, when I go back to the file reader, skip that number of lines in the input file. This just doesn't seem too efficient, which is critical when it comes to dealing with such large files (i.e. several million lines long).
    I think you might need to change the way you are thinking about streams. The objective of a stream is to read and process data at the same time.
    I would recommend that you re-think your algorithm : instead of reading the whole file, then doing your processing - think about how you could read a line and process a line, then read the next line, etc...
    By working on just the pieces of data that you have just read, you can process huge files with almost no memory requirements.
    As a rule of thumb, if you ever find yourself creating huge arrays to hold data from a file, chances are pretty good that there is a better way. Sometimes you need to buffer things, but very rarely do you need to buffer such huge pieces.
    - K
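    For reference, here is a minimal sketch of that read-a-line, process-a-line shape, adapted from the readV2 code above (the output handling and the field positions are only illustrative):

    import java.io.BufferedReader;
    import java.io.BufferedWriter;
    import java.io.FileReader;
    import java.io.FileWriter;
    import java.io.IOException;

    public class V2StreamProcessor {

        // Reads the input one line at a time, processes each "V2" record
        // immediately and writes the result out, so no large arrays are ever
        // held in memory.
        public void process(String inputFile, String outputFile) throws IOException {
            BufferedReader in = new BufferedReader(new FileReader(inputFile));
            BufferedWriter out = new BufferedWriter(new FileWriter(outputFile));
            try {
                String line;
                while ((line = in.readLine()) != null) {
                    if (!line.startsWith("V2")) {
                        break;
                    }
                    int value = Integer.parseInt(line.substring(8, 15).trim());
                    // ... per-record processing goes here ...
                    out.write(String.valueOf(value));
                    out.newLine();
                }
            } finally {
                in.close();
                out.close();
            }
        }
    }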

  • Tips for dealing with large channel count on cRio

    Hello, I have a very simple application that takes an analog input using an AI module (9205) from a thermistor and based on the value of the input it sends out a true/false signal using a digital out module (9477). Each cRio chassis will have close to 128 channels, the code being exactly the same for each channel.
    I wonder if anyone has any tips for how I can do this so that I don't have to copy and paste each section of code 128 times. Obviously this would be a nightmare if the code ever had to be changed. I'm sure there is a way to make a function or a class but being new to graphical programming I can't think of a good way to do this. I looked for a way to dynamically select a channel but can't seem to find anything, if I could select the channel dynamically I'm guessing I can create a subvi and do it that way. Any tips or help would be greatly appreciated.

    There isn't a way to dynamically choose a channel at runtime. In order for the VI to compile successfully, the compiler must be able to statically determine which channel is being read or written in the I/O Node at compile time. However, that doesn't mean you can't write a reusable subvi. If you right click on the FPGA I/O In terminal of the I/O Node and create a constant or control, you should be able to reuse the same logic for all of your channels. The attached screen shot should illustrate the basics of what this might look like. If you right click the I/O control/constant and select "Configure I/O Type...", you can configure the interface the I/O Item must support in order for it to be selectable from the control. While this helps single-source some of the logic, you will still eventually need 128 I/O constants somewhere in your FPGA VI hierarchy.
    I should also mention that if each channel being read from the 9205 is contained in a separate subVI or I/O Node, you will also incur some execution time overhead due to the scanning nature of the module.  You mentioned you are reading temperature signals so the additional execution time may not be that important to you.  If it is, you may want to look at the IO Sample Method.  You can find more information and examples on how to use this method in the LV help.  Using the IO Sample Method does allow you to dynamically choose a channel at runtime and is generally more efficient for high channel counts.  However, it's also a lot more complicated to use than the I/O Node.
    You also mentioned concerns about the size of arrays and the performance implications of using a single for loop to iterate across your data set. That's the classic design trade-off when dealing with FPGAs. If you want to perform as much in parallel as possible, you'll need to store all 128 data points from the 9205 modules at once, process the data in parallel using 128 instances of the same circuit, and then output a digital value based on the result. If you're using fixed-point data types, that's 128 x 26 bits for just the I/O data from the 9205. While this will yield the fastest execution times, the resulting VI may be too large to fit on your target. Conversely, you could use the IO Sample Method to read each channel one at a time, process the data using the same circuit, and then output a digital value. This strategy will use the least amount of logic on the FPGA but will also take the longest to execute. Of course, there are all sorts of options you could create in between these two extremes. Without knowing more about your requirements, it's hard to advise which end of the spectrum you should shoot for. Anyway, hopefully this will give you some ideas on where to get started.
    Attachments:
    IO Constant.JPG (31 KB)

  • Best practice for dealing with Recordsets, JDBC and JSP?

    I've spent the last three years developing web apps using JSP, Struts and Kodo JDO for persistence. All of the content for the apps was created as Java objects using model classes and saved to an Oracle db. Thus, data retrieved from the db came back as instances of the model classes and was then put into Struts form beans, etc.
    I changed jobs last month and am now having to use servlets with JDBC to retrieve records from db tables and return them as Recordsets. Oh, and I can't use Struts in my JSPs either. I'm beginning to think that I had it easy at my previous job, but maybe that's just because I was used to it.
    So here are my problems/questions:
    I have two tables with a one to many relationship that I need to retrieve data from, show in a jsp and be able to update eventually.
    So here's what I am doing:
    a) In a servlet, I use a SQL statement to join the tables and retrieve the results into a Recordset.
    b) I created a class with a bunch of String attributes to copy the Recordset data into, one Recordset row per instance of the bean, and then close the Recordset.
    c) I then add the beans to an ArrayList and save the ArrayList into the session.
    d) Then, in the JSP, I retrieve the ArrayList from the session and iterate over each bean instance, printing the data out to the jsp. There are some logic statements to determine when not to print redundant data caused by the one to many join.
    e) I have not written the code to update the data yet but was planning on having separate jsps for updating the (one) table and the (many) table.
    Would most of you do something similar? Would you use one SQL statement to retrieve all of the data for display and use logic to avoid printing the redundant part of the data? Or would you have used separate SQL queries, one for each table? Would you have saved the results into something other than an instance of a bean class that represents one record in the RecordSet? Would you have had a bean class with attributes other than Strings - like a collection attribute to hold the results from the "many" table? The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before.
    Your help/opinion will be greatly appreciated!

    "Would you use one SQL statement to retrieve all of the data for display" - Yes.
    "and use logic to avoid printing the redundant part of the data" - No.
    I believe in minimising the number of queries. If it is a simple one-to-many join on a db table, then one query is better than one + n queries.
    However I prefer to store the objects in a bean class with attributes other than strings - ie one object, with a collection attribute to hold the related "many" records, as in the sketch below.
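    For what it's worth, a minimal sketch of that kind of bean, plus the mapping loop that collapses the joined rows (the table, column and class names here are invented for the example):

    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // One "parent" record plus its related "many" rows from the joined query.
    public class OrderBean {

        private final String orderId;
        private final String customerName;
        private final List<String> lineDescriptions = new ArrayList<String>();

        public OrderBean(String orderId, String customerName) {
            this.orderId = orderId;
            this.customerName = customerName;
        }

        public String getOrderId() { return orderId; }
        public String getCustomerName() { return customerName; }
        public List<String> getLineDescriptions() { return lineDescriptions; }

        // Collapses the one-to-many join into one bean per parent row:
        // repeated parent columns are folded into a single bean and each
        // child row is added to that bean's collection.
        public static List<OrderBean> fromJoinedResultSet(ResultSet rs) throws SQLException {
            Map<String, OrderBean> byId = new LinkedHashMap<String, OrderBean>();
            while (rs.next()) {
                String id = rs.getString("order_id");
                OrderBean order = byId.get(id);
                if (order == null) {
                    order = new OrderBean(id, rs.getString("customer_name"));
                    byId.put(id, order);
                }
                order.getLineDescriptions().add(rs.getString("line_description"));
            }
            return new ArrayList<OrderBean>(byId.values());
        }
    }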
    Does the fact you are not using Struts mean that you have to use scriptlet code? (shudder)
    Or are you using JSTL, or other custom tags?
    How about tools like Ant? Junit testing?
    "The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before."
    Anything different takes adjusting to. Sounds like you know what you're doing for the most part. I agree, in terms of best practices what you have described so far sounds like a step backwards from what you were previously doing.
    However, I wouldn't go complaining about it too loudly, too quickly. If you're new on the block, there's nothing like making a pain of yourself by complaining about how backwards the work they have done is to put your new workmates' backs up.
    Look on it as a challenge. Maybe discuss it quietly with a team leader, to see if they understand how much easier/better/less error prone such approaches can be?
    Struts, cumbersome as it can be, definitely has the advantage of pushing you to follow good MVC practice.
    Good luck,
    evnafets

  • Best Strategy for importing a WAR file

    I would like to import a WAR file into JDeveloper and run it against the embedded OC4J app server. What is the best strategy (the approach which minimizes the amount of cutting and pasting)?
    I tried to use the File/Import menu selection, but that didn't seem to operate the way that I expected. I also tried using "Existing Source" from the same menu selection with "index.jsp", but JDeveloper didn't seem to like that either (it complained that it couldn't read "index.jsp" from within a zip file). So I had to resort to "New/Web Tier/JSP Page" and then cutting and pasting the source. After that, JDeveloper was happy (no "not under the root server directory" errors). But I would like to have a more streamlined approach, because these WAR files have hundreds of files.
    Thanks, Norman

    Did you try creating a project with the "Project from an Existing WAR file" option?
    Create an empty workspace
    File | New
    Select General
    Select Projects
    Select "Project from an Existing WAR file" option
    raghu
    JDev Team

  • Strategies for dealing with large forms

    I've created a dynamic form that is about 30 pages long. Although performance in reader is OK, when editing, every change takes about 10 seconds to be digested by Designer. I've tried editing parts and then copying them across, but formatting information seems to be lost when pasting.
    What are the practical limits to how long a form can be, and are there any suggestions for how to deal with longer forms?
    Many thanks
    Alex

    Performance may be okay in Reader 8, but a 30-page dynamic form will probably be unusable in Reader 7. I have done work with very large forms before, and I have maintained each page in a separate XDP and then brought them together once all the kinks were worked out. I don't know why you would be losing formatting information.
    There is a technique that I have used where you use dynamic subforms to show only one page of the form at a time. That approach allows a very large form to be used in version 7 with acceptable response time. However, it's a very complex approach, and I have come to think that it's too complex for comfort. The more complex your approach is the more likely it is to fail in a future version of Reader.

  • Best practices for dealing with Exceptions on storage members

    We recently encountered an issue where one of our DistributedCaches was terminating itself and restarting due to a RuntimeException being thrown from our code (see below). As usual, the issue was in our own code and we have updated it to not throw a RuntimeException under any circumstances.
    I would like to know if there are any best practices for Exception handling, other than catching Exceptions and logging them. Should we always trap Exceptions and ensure that they do not bubble back up to code that is running from the Coherence jar? Is there a way to configure Coherence so that our DistributedCaches do not terminate even when custom Filters and such throw RuntimeExceptions?
    thanks, Aidan
    Exception below:
    2010-02-09 12:40:39.222/88477.977 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=48): An exception (java.lang.RuntimeException) occurred reading Message AggregateFilterRequest Type=31 for Service=DistributedCache{Name=StyleCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=1021, BackupCount=1, AssignedPartitions=201, BackupPartitions=204}
    2010-02-09 12:40:39.222/88477.977 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=48): Terminating DistributedCache due to unhandled exception: java.lang.RuntimeException

    Bob - Here is the full stacktrace:
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47): An exception (java.lang.RuntimeException) occurred reading Message AggregateFilterRequest Type=31 for Service=DistributedCache{Name=StyleCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=1021, BackupCount=1, AssignedPartitions=205, BackupPartitions=204}
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47): Terminating DistributedCache due to unhandled exception: java.lang.RuntimeException
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47):
    java.lang.RuntimeException: java.lang.ClassNotFoundException: com.edmunds.vehicle.Style$PublicationState
         at com.edmunds.common.coherence.EdmundsEqualsFilter.readExternal(EdmundsEqualsFilter.java:84)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readAsObjectArray(PofBufferReader.java:3328)
         at com.tangosol.io.pof.PofBufferReader.readObjectArray(PofBufferReader.java:2168)
         at com.tangosol.util.filter.ArrayFilter.readExternal(ArrayFilter.java:243)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readAsObjectArray(PofBufferReader.java:3328)
         at com.tangosol.io.pof.PofBufferReader.readObjectArray(PofBufferReader.java:2168)
         at com.tangosol.util.filter.ArrayFilter.readExternal(ArrayFilter.java:243)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2599)
         at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:348)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:4)
         at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
         at com.tangosol.coherence.component.net.message.requestMessage.distributedCacheRequest.partialRequest.FilterRequest.read(FilterRequest.CDB:8)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$AggregateFilterRequest.read(DistributedCache.CDB:4)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:117)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
         at java.lang.Thread.run(Thread.java:619)
    Caused by: java.lang.ClassNotFoundException: com.edmunds.vehicle.Style$PublicationState
         at java.lang.Class.forName0(Native Method)
         at java.lang.Class.forName(Class.java:169)
         at com.edmunds.common.coherence.EdmundsEqualsFilter.readExternal(EdmundsEqualsFilter.java:82)
         ... 25 more
    2010-02-09 13:04:23.122/90182.743 Oracle Coherence GE 3.4.2/411 <Info> (thread=Main Thread, member=47): Restarting Service: StyleCache
    Our code was doing something simple like:
    catch (Exception e) {
        throw new RuntimeException(e);
    }
    Would using the ensureRuntimeException call do anything for us here?
    Edited by: aidanol on Feb 12, 2010 11:41 AM
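    One pattern for the "trap exceptions in custom code" side of this is to wrap a custom filter so that anything it throws during evaluation is logged rather than propagated into the service thread. A minimal sketch follows (the class name is illustrative; it would not have helped with the deserialization failure in the stack trace above, and a real filter would also need to be made POF-serializable):

    import java.util.logging.Level;
    import java.util.logging.Logger;

    import com.tangosol.util.Filter;

    // Delegating filter that logs RuntimeExceptions from the wrapped filter
    // and treats the entry as a non-match instead of letting the exception
    // reach the DistributedCache service thread.
    public class SafeFilter implements Filter {

        private static final Logger LOG = Logger.getLogger(SafeFilter.class.getName());

        private final Filter delegate;

        public SafeFilter(Filter delegate) {
            this.delegate = delegate;
        }

        public boolean evaluate(Object target) {
            try {
                return delegate.evaluate(target);
            } catch (RuntimeException e) {
                LOG.log(Level.WARNING, "Filter " + delegate + " failed", e);
                return false;
            }
        }
    }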

  • After Effects CS5-best settings for rendering many large files after multiple passes with the clone

    I have a moderate level Quad core PC,specs below, with 24Gb of RAM.
    I am processing sequences of timelapse photos brought in from Adobe RAW as either a raw sequence at the original 5K res (Canon 5D2) or as 4K 16bit TIFFs. There are typically between 400-500 shots in each sequence.
    I use clone brush in AE CS5 to remove 'stray objects' such as birds, people or vehicles from frames.
    I can successfully process these if I only have to clone out the odd object from a sequence, but occasionally I need to remove something on multiple frames. This might be a car that drove into shot and stayed for, say, 200 frames before driving away again.
    The clone tool lets me paint out the arrival and then the parked vehicle by successively using the previous frame as the source. However, at some point, even with say 16-20 GB set as the AE memory, the program will get choked and start to crawl. This knocks on to the final render, when I will get outrageous times for rendering even single frames, say 3 hours per frame etc.
    I assume that AE is having to go back through many multiple source frames, as the cloning that I have done is continually using the previous 'already cleaned up' frame as the source. So frame 150 is using frame 149, which used frame 148, etc.
    Observing the Resource Monitor as AE renders, I see not much processor usage, typically 20-30%, but maxed-out memory. Lots of reading of files off disk, usually apparently 'random' Windows system files, but also lots of reads from the timelapse source files as well.
    I did get single render times down to 1 hour but then did more tweaking and haven't got back to anything better than 2.5 hours per frame since!
    Is there a better way of achieving what I want?
    My PC specs and set up: HP xw8600 with x5450 Quad core processor.
    24 GB RAM, usually set as 16 GB for AE (3 cores) and 1 core with max 3 GB for other progs. Sometimes I have set this as high as 20 GB for AE.
    Running Vista Business (yes I know Win7 would be better - but hey, it's yet more money!).
    Nvidia GTX460 with 1 GB memory and the CUDA 'hack', which works well with PP CS5.
    Windows Page File set to about 20% more than Windows usually sets it to, on either separate SATA drive or dual with the Windows C drive.
    Source files on external USB 3 drive (pretty fast).
    AE render files on RAID 0 internal 7200 drive.
    AE output files to another external USB3 drive.
    Any suggestions for improving my set up would be appreciated, as I'm getting old watching AE render my projects!

    Is there a better way of achieving what I want?
    Don't use AE? Just kidding, but your own analysis very much covers the facts already. Yes, it's AE going back and forth in time and yes, it's the holding of those frames in memory which is chewing up your memory. Depending on the situation you may look into patching up your disturbances with masked still images rather than cloning across the sequence, but beyond that I don't see much potential to make things more efficient. It's just AE trying to be über-smart with caching and then making a mess. The only other thing is the RAW import, which may consume unnecessary memory. Batching the files in Photoshop and only using PSDs or TIFFs in AE may squeeze out an extra GB of RAM. On that note also consider doing the cleanup in PS. After all, PS Extended does support video/image sequences to some degree...
    Mylenium

  • Best way to work with large file?

    Ok, I have a huge file (200 MB) that is all sequences of numbers separated by commas. I wish to know if a specific sequence exists in this file. What is the best way of doing that check? I can't load the whole file into a StringBuffer since it is too large.

    Well, it's not necessarily too large. You might be able to load it all into memory at once.
    But you're right. It's not a good idea.
    What you can do is use a StreamTokenizer set to split on commas.
    Conceptually, you need to have a simple state machine that keeps track of how many consecutive numbers from that sequence you've read. As soon as you hit a number that's not the next one in the sequence, you reset to the beginning of the sequence.
    It's not hard to write it yourself, but there might be a simpler way--one of the IO classes or regex classes might have something for processing a stream that way.
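    Here is a minimal sketch of that idea, with one twist: rather than resetting a partial match, it keeps a sliding window of the last few numbers read and compares it to the target after each token, which avoids the corner cases of a reset-based matcher. It assumes the values are whole numbers that fit in a long, and the file name is a placeholder:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.Scanner;

    public class SequenceSearch {

        // Streams the comma-separated numbers and keeps only the last
        // target.length values in memory, comparing that sliding window
        // against the target after every token.
        public static boolean contains(String path, long[] target) throws IOException {
            Scanner scanner = new Scanner(new BufferedReader(new FileReader(path)));
            scanner.useDelimiter("[,\\s]+");
            Deque<Long> window = new ArrayDeque<Long>();
            try {
                while (scanner.hasNextLong()) {
                    window.addLast(scanner.nextLong());
                    if (window.size() > target.length) {
                        window.removeFirst();
                    }
                    if (window.size() == target.length && matches(window, target)) {
                        return true;
                    }
                }
                return false;
            } finally {
                scanner.close();
            }
        }

        private static boolean matches(Deque<Long> window, long[] target) {
            int i = 0;
            for (long value : window) {
                if (value != target[i++]) {
                    return false;
                }
            }
            return true;
        }

        public static void main(String[] args) throws IOException {
            System.out.println(contains("numbers.txt", new long[] {3, 14, 15, 92}));
        }
    }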
