Strategies for dealing with large forms

I've created a dynamic form that is about 30 pages long. Although performance in Reader is OK, when editing, every change takes about 10 seconds to be digested by Designer. I've tried editing parts and then copying them across, but formatting information seems to be lost when pasting.
What are the practical limits to how long a form can be, and are there any suggestions for how to deal with longer forms?
Many thanks
Alex

Performance may be okay in Reader 8, but a 30-page dynamic form will probably be unusable in Reader 7. I have worked with very large forms before, maintaining each page in a separate XDP and then bringing them together once all the kinks were worked out. I don't know why you would be losing formatting information.
There is a technique I have used where dynamic subforms show only one page of the form at a time. That approach allows a very large form to be used in version 7 with acceptable response time. However, it is a very complex approach, and I have come to think it's too complex for comfort: the more complex your approach is, the more likely it is to fail in a future version of Reader.

Similar Messages

  • Tips for dealing with large channel count on cRIO

    Hello, I have a very simple application that takes an analog input from a thermistor using an AI module (9205) and, based on the value of the input, sends out a true/false signal using a digital out module (9477). Each cRIO chassis will have close to 128 channels, the code being exactly the same for each channel.
    I wonder if anyone has any tips for how I can do this so that I don't have to copy and paste each section of code 128 times. Obviously this would be a nightmare if the code ever had to be changed. I'm sure there is a way to make a function or a class, but being new to graphical programming I can't think of a good way to do this. I looked for a way to dynamically select a channel but can't seem to find anything; if I could select the channel dynamically, I'm guessing I could create a subVI and do it that way. Any tips or help would be greatly appreciated.

    There isn't a way to dynamically choose a channel at runtime. In order for the VI to compile successfully, the compiler must be able to statically determine which channel is being read or written in the I/O Node at compile time. However, that doesn't mean you can't write a reusable subVI. If you right-click on the FPGA I/O In terminal of the I/O Node and create a constant or control, you should be able to reuse the same logic for all of your channels. The attached screenshot should illustrate the basics of what this might look like. If you right-click the I/O control/constant and select "Configure I/O Type...", you can configure the interface the I/O Item must support in order for it to be selectable from the control. While this helps single-source some of the logic, you will still eventually need 128 I/O constants somewhere in your FPGA VI hierarchy.
    I should also mention that if each channel being read from the 9205 is contained in a separate subVI or I/O Node, you will also incur some execution time overhead due to the scanning nature of the module.  You mentioned you are reading temperature signals so the additional execution time may not be that important to you.  If it is, you may want to look at the IO Sample Method.  You can find more information and examples on how to use this method in the LV help.  Using the IO Sample Method does allow you to dynamically choose a channel at runtime and is generally more efficient for high channel counts.  However, it's also a lot more complicated to use than the I/O Node.
    You also mentioned concerns about the size of arrays and the performance implications of using a single for loop to iterate across your data set. That's the classic design trade-off when dealing with FPGAs. If you want to perform as much in parallel as possible, you'll need to store all 128 data points from the 9205 modules at once, process the data in parallel using 128 instances of the same circuit, and then output a digital value based on the result. If you're using fixed-point data types, that's 128 x 26 bits for just the I/O data from the 9205. While this will yield the fastest execution times, the resulting VI may be too large to fit on your target. Conversely, you could use the IO Sample Method to read each channel one at a time, process the data using the same circuit, and then output a digital value. This strategy will use the least amount of logic on the FPGA but will also take the longest to execute. Of course, there are all sorts of options you could create in between these two extremes. Without knowing more about your requirements, it's hard to advise which end of the spectrum you should shoot for. Anyway, hopefully this will give you some ideas on where to get started.
    Attachments:
    IO Constant.JPG (31 KB)

  • Question: Best Strategy for Dealing with Large Files > 1GB

    Hi Everyone,
    I have to build a UCM system for large files > 1GB.
    What will be the best way to upload them (applet, checkin form, webdav)?
    Also what will be the best way to download them (applet, web form, webdav)?
    Any tips will be greatly appreciated
    Tal.

    Not sure what the official best practice is, but I prefer to get the file onto the server's file system first (file copy) and check it in from that path. This would require a customization / calling a custom service.
    Boris

  • What's best strategy for dealing with 40+ hours of footage

    We have been editing a documentary with 45+ hours of footage and presently have captured roughly 230 GB. Needless to say, it's a lot of files. What's the best strategy for dealing with so much captured footage? It's almost impossible to remember it all, and labeling it while logging it seems inadequate, as it's difficult to actually read comments in dozens and dozens of folders.
    Just looking for suggestions on how to deal with this problem for this and future projects.
    G5 Dual Core 2.3   Mac OS X (10.4.6)   2.5 g ram, 2 internal sata 2 250gb

    Ditto, ditto, ditto on all of the previous posts. I've done four long-form documentaries.
    First I listen to all the sound bites and digitize only the ones that I think I will need. I will take in much more than I use, but I like to transcribe bites from the non-linear timeline. It's easier for me.
    I had so many interviews in the last doc that I gave each interviewee a bin. You must decide how you want to organize the sound bites. Do you want a bin for each interviewee, or do you want to do it by subject? That will depend on your documentary and subject matter.
    I then have b-roll bins. Sometimes I base them on location and sometimes I base them on subject matter. This last time I based them on location because I would have a good idea of what was in each bin by remembering where and when it was shot.
    Perhaps you weren't at the shoot and do not have this advantage. It's crucial that you organize your b-roll bins in a way that makes sense to you.
    I then have music bins and bins for my voice over.
    Many folks recommend that you work in small sequences and nest. This is a good idea for long-form stuff. That way you don't get lost in the timeline.
    I also make a "used" bin. Once I've used a shot I pull it out of the bin and put it "away." That keeps me from repeatedly looking at footage that I've already used.
    The previous posts are right. If you've digitized 45 hours of footage you've put in too much. It's time to start deleting some media. Remember that when you hit the edit suite, you should be on the downhill slide. You should have a script and a clear idea of where you're going.
    I don't have enough fingers to count the number of times that I've had producers walk into my edit suite with a bunch of raw tape and tell me that they "want to make something cool." They generally have no idea where they're going and end up wondering why the process is so hard.
    Refine your story and base your clip selections on that story.
    Good luck
    Dual 2 GHz Power Mac G5   Mac OS X (10.4.8)  

  • Premiere Pro 2.0 slows when dealing with larger video files

    I'm having issues with Premiere Pro 2.0 slowing to a crawl and taking 60-90 seconds to come back to life when dealing with larger .avi's (8+ mins). When I try to play a clip on the timeline, drag the slider over said clip, or play from a title into said clip on the timeline, Premiere hangs. The clips in question are all rendered, and the peak file has been generated for each different clip as well. This is a new problem; the last time I was working with a larger clip (45+ mins, captured from a Hi-8 cam), I had no problems. Now, I experience this slowdown with all longer clips, although I've only dealt with footage captured from a Hi-8 cam and also a mini-DV cam. This problem has made Premiere nearly unusable. I'm desperate at this point.
    System:
    CPU: P4 HT 2.4ghz
    Ram: 2x 1gb DDR
    Video: ATI Radeon 9000 Series
    Scratch Disk: 250gb WD My Book - USB 2.0 (I suspect this might be part of the problem)
    OS: XP Pro SP2
    I'm not on my machine right now, and I can definitely provide more information if needed.
    Thanks in advance.

    Aside from some other issues, I found that USB was just not suited for editing to/from, even on a much faster machine than the one you list.
    FW-400 was only slightly better. It took FW-800 before I could actually use the externals for anything more than storage, i.e. no editing, just archiving.
    eSATA would be even better/faster.
    Please see Harm's ARTICLES on hardware, before you begin investing.
    Good luck,
    Hunt
    [Edit] Oops, I see that Harm DID link to his articles. Missed that. Still, it is worth mentioning again.
    Also, as an aside, PrPro 2.0 has no problem on my workstation when working with several 2 hour DV-AVI's, even when these are edited to/from FW-800 externals.

  • Can ui:table deal with a large table?

    I have found that h:dataTable can do pagination because its data source is just a DataModel, but ui:table's data source is a data provider, which looks somewhat complex and confusing.
    I have a large table and I want to load the data on demand, so I tried to implement a provider. But I soon found that ui:table may always load all of the data from the provider.
    In TableRowGroup.java, there is code such as:
    provider.getRowKeys(rowCount, null);
    Passing null makes the provider load all of the data.
    So ui:table can NOT deal with a large table!?
    thx.
    fan

    But ui:table just uses the TableDataProvider interface, and TableDataProvider is a wrapper for the CachedRowSet.
    There are two layers between the ui:table component and the database table: the RowSet layer and the Data Provider layer. The RowSet layer makes the connection to the database, executes the queries, and manages the result set. The Data Provider layer provides a common interface for accessing many types of data, from rowsets, to Array objects, to Enterprise JavaBeans objects.
    Typically, the only time that you work with the RowSet object is when you need to set query parameters. In most other cases, you should use the Data Provider to access and manipulate the data.
    What can a CachedRowSet (or CachedRowSetDataProvider?) do? Check out the API that I pointed you to to see what you can do with a CachedRowSet.
    Does the Table cache the data itself? Maybe this way is convenient for filtering and ordering? Thx. I do not know the answers to these questions.
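    Since the thread above circles around the CachedRowSet API, here is a minimal Java sketch (assuming the standard javax.sql.rowset classes, not the Creator-specific CachedRowSetDataProvider) of how the RowSet layer can fetch rows in fixed-size pages instead of materializing the whole table; the JDBC URL, credentials, query, and column names are placeholders.

    import javax.sql.rowset.CachedRowSet;
    import javax.sql.rowset.RowSetProvider;

    public class PagedRowSetDemo {
        public static void main(String[] args) throws Exception {
            // A disconnected rowset that pulls rows in fixed-size pages
            // instead of loading the whole table at once.
            CachedRowSet crs = RowSetProvider.newFactory().createCachedRowSet();
            crs.setUrl("jdbc:mydb://localhost/mydb");   // placeholder connection info
            crs.setUsername("user");
            crs.setPassword("secret");
            crs.setCommand("SELECT id, name FROM big_table WHERE status = ?");
            crs.setInt(1, 1);        // query parameter, set on the RowSet layer
            crs.setPageSize(100);    // rows fetched per round trip
            crs.execute();

            do {
                while (crs.next()) {
                    System.out.println(crs.getInt("id") + " " + crs.getString("name"));
                }
            } while (crs.nextPage());  // pull the next chunk on demand

            crs.close();
        }
    }

    Whether ui:table actually drives the provider this way is exactly the question above; the sketch only shows that the rowset underneath is capable of chunked fetching.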

  • XSU: Dealing with large tables / large XML files

    Hi,
    I'm trying to generate an XML file from a "large" table (about 7 million lines, 512 Mbytes of storage) by means of XSU. I run into "java.lang.OutOfMemoryError" even after raising the heap size up to 1 Gbyte (option -Xmx1024m on the java command line).
    For the moment, I'm involved in an evaluation process. But in the near future, our applications are likely to deal with large amounts of XML data (typically hundreds of Mbytes of storage, which means possibly Gbytes of XML code), both in updating/inserting data and in producing XML streams from existing data in a relational DB.
    Any ideas about memory issues regarding XSU? Should we consider using XMLType instead of "classical" relational tables loaded/unloaded by means of XSU?
    Any hint appreciated.
    Regards,
    /Hervi QUENIVET
    P.S. our environment is Linux red hat 7.3 and Oracle 9.2.0.1 server

    Hi,
    Try to split the XML before you process it. You can take a look at the XMLDocumentSplitter explained in Building Oracle XML Applications by Steve Muench.
    The other alternative is to write your own SAX handler and send the chunks of XML for insert.
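    To make the SAX suggestion concrete, here is a rough Java sketch of a handler that rebuilds small ROWSET documents of at most 1000 ROW elements and hands each chunk off for insert. It is not tied to XSU itself: the element names follow XSU's canonical ROWSET/ROW output, the insertChunk() hook is a hypothetical placeholder for your insert routine, and attribute handling and XML escaping are omitted for brevity.

    import java.io.File;
    import javax.xml.parsers.SAXParser;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    // Streams a huge <ROWSET><ROW>...</ROW>...</ROWSET> document and passes it
    // on in fixed-size chunks instead of building one giant document in memory.
    public class ChunkedRowHandler extends DefaultHandler {
        private static final int CHUNK_SIZE = 1000;          // rows per insert batch
        private final StringBuilder chunk = new StringBuilder("<ROWSET>");
        private final StringBuilder current = new StringBuilder();
        private int rowsInChunk = 0;
        private boolean inRow = false;

        @Override
        public void startElement(String uri, String local, String qName, Attributes atts) {
            if ("ROW".equals(qName)) { inRow = true; current.setLength(0); }
            if (inRow) current.append('<').append(qName).append('>');
        }

        @Override
        public void characters(char[] ch, int start, int len) {
            if (inRow) current.append(ch, start, len);
        }

        @Override
        public void endElement(String uri, String local, String qName) {
            if (inRow) current.append("</").append(qName).append('>');
            if ("ROW".equals(qName)) {
                chunk.append(current);
                inRow = false;
                if (++rowsInChunk == CHUNK_SIZE) flushChunk();
            }
        }

        @Override
        public void endDocument() {
            if (rowsInChunk > 0) flushChunk();               // last partial chunk
        }

        private void flushChunk() {
            insertChunk(chunk.append("</ROWSET>").toString());
            chunk.setLength(0);
            chunk.append("<ROWSET>");
            rowsInChunk = 0;
        }

        protected void insertChunk(String xmlDoc) {
            // Placeholder: hand the small document to your insert routine here.
            System.out.println("inserting a chunk of up to " + CHUNK_SIZE + " rows");
        }

        public static void main(String[] args) throws Exception {
            SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
            parser.parse(new File(args[0]), new ChunkedRowHandler());
        }
    }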

  • What is best practice for dealing with Engineering Spare Parts?

    Hello All,
    I am after some advice regarding the process for handling engineering spare parts in PM. (We run ECC 5)
    Our current process is as follows:
    All materials are set up as HIBE's
    Each material is batch managed
    The Batch field is used for the Bin location
    We are now looking to roll out PM to a site that has in excess of 50,000 spare parts and want to make sure we use best practice for handling the spare parts. We are now considering using a basic WM setup to handle the movement of parts.
    Please can you provide me with some feedback on what you feel the best practice is for dealing with these parts?
    We are looking to set up a solution that will allow us to generate pick lists etc. and implement a scanning solution to move parts in and out of stores.
    Regards
    Chris

    Hi,
    I hope all the 50000 spare parts are maintained as stock items.
    1. Based on the usage of those spare parts, try to define safety stock & define MRP as "Reorder Point Planning". By this, you can avoid petty cash purchases.
    2. By keeping the spare parts (at least the critical components) in stock, Planned Maintenance as well as unplanned maintenance will not get delayed.
    3. By doing GI based on reservation, qty can be tracked against the order & equipment.
    As this question is MM & WM related, they can give better clarity on this.
    Regards,
    Maheswaran.

  • Best practice for dealing with Recordsets

    Hi all,
    I'm wondering what best practice is for dealing with data retrieved via JDBC as RecordSets without involving third-party products such as Hibernate etc. I've been told NOT to use RecordSets throughout my applications, since they take up resources and are expensive. I'm wondering which collection type is best to convert RecordSets into. The apps I'm building are web-based, using JSPs as the presentation layer, plus beans and servlets.
    Many thanks
    Erik

    There is no requirement that DAOs have a direct mapping to database tables. One of the advantages of the DAO pattern is that the business layer isn't directly aware of the persistence layer. If the joined data is used in the business code as if it were an unnormalized table, then you might want to provide a DAO for the joined data. If the joined data provides a subsidiary object within some particular object, you might add the access method to the DAO for the outer object.
    eg:
    In a user permissioning system where:
    1 user has many userRoles
    1 role has many userRoles
    1 role has many rolePermissions
    1 permission has many rolePermissions
    ie. there is a many to many relationship between users and roles, and between roles and permissions.
    The administrator needs to be able to add and delete permissions for roles and roles for users, so the CRUD for the rolePermissions table is probably most useful in the RoleDAO, and the CRUD for the userRoles table in the UserDAO. DAOs can also call each other.
    During operation the system needs to be able to get all permissions for a user at login, so the UserDAO should provide a readPermissions method that does a rather complex join across the user, userRole, rolePermission and permission tables.
    Note that if the system I just described were done with LDAP, a hierarchical database or an object database, the userRoles and rolePermissions tables wouldn't even exist; these are RDBMS artifacts, since relational databases don't understand many-to-many relationships. This is a good reason to avoid providing DAOs that give access to those tables.
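    As an illustration of the usual alternative to passing RecordSets around, here is a small Java sketch of a readPermissions-style DAO method that copies rows into a plain List of value objects and closes every JDBC resource before returning, so nothing database-related leaks into the JSP layer. The table and column names are made up to match the user/role/permission example above.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;
    import javax.sql.DataSource;

    // Simple value object handed to the web layer instead of a live ResultSet.
    class Permission {
        private final String name;
        Permission(String name) { this.name = name; }
        public String getName() { return name; }
    }

    class UserDAO {
        private final DataSource dataSource;
        UserDAO(DataSource dataSource) { this.dataSource = dataSource; }

        // Copies the rows into a List and releases all JDBC resources before
        // returning, so no connection or cursor is held while a JSP renders.
        public List<Permission> readPermissions(long userId) throws SQLException {
            String sql =
                "SELECT p.name FROM permission p " +
                "JOIN role_permission rp ON rp.permission_id = p.id " +
                "JOIN user_role ur ON ur.role_id = rp.role_id " +
                "WHERE ur.user_id = ?";
            List<Permission> result = new ArrayList<Permission>();
            Connection con = dataSource.getConnection();
            try {
                PreparedStatement ps = con.prepareStatement(sql);
                try {
                    ps.setLong(1, userId);
                    ResultSet rs = ps.executeQuery();
                    try {
                        while (rs.next()) {
                            result.add(new Permission(rs.getString("name")));
                        }
                    } finally { rs.close(); }
                } finally { ps.close(); }
            } finally { con.close(); }
            return result;
        }
    }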

  • What is the best way to deal with a large TIFF file in OS X Lion?

    I'm working with a large TIFF file (an engineering drawing), but Preview couldn't handle it (it becomes unresponsive).
    What is the best way to deal with a large TIFF file in OS X Lion (viewing only, or simple editing)?
    Thx,
    54n9471

    Use an iPad and this app http://itunes.apple.com/WebObjects/MZStore.woa/wa/viewSoftware?id=400600005&mt=8

  • Dealing with large volumes of data

    Background:
    I recently "inherited" support for our company's "data mining" group, which amounts to a number of semi-technical people who have received introductory level training in writing SQL queries and been turned loose with SQL Server Management
    Studio to develop and run queries to "mine" several databases that have been created for their use.  The database design (if you can call it that) is absolutely horrible.  All of the data, which we receive at defined intervals from our
    clients, is typically dumped into a single table consisting of 200+ varchar(x) fields.  There are no indexes or primary keys on the tables in these databases, and the tables in each database contain several hundred million rows (for example one table
    contains 650 million rows of data and takes up a little over 1 TB of disk space, and we receive weekly feeds from our client which adds another 300,000 rows of data).
    Needless to say, query performance is terrible, since every query ends up being a table scan of 650 million rows of data.  I have been asked to "fix" the problems.
    My experience is primarily in applications development.  I know enough about SQL Server to perform some basic performance tuning and write reasonably efficient queries; however, I'm not accustomed to having to completely overhaul such a poor design
    with such a large volume of data.  We have already tried to add an identity column and set it up as a primary key, but the server ran out of disk space while trying to implement the change.
    I'm looking for any recommendations on how best to implement changes to the table(s) housing such a large volume of data.  In the short term, I'm going to need to be able to perform a certain amount of data analysis so I can determine the proper data
    types for fields (and whether any existing data would cause a problem when trying to convert the data to the new data type), so I'll need to know what can be done to make it possible to perform such analysis without the process consuming entire days to analyze
    the data in one or two fields.
    I'm looking for reference materials / information on how to deal with the issues, particularly when a large volumn of data is involved.  I'm also looking for information on how to load large volumes of data to the database (current processing of a typical
    data file takes 10-12 hours to load 300,000 records).  Any guidance that can be provided is appreciated.  If more specific information is needed, I'll be happy to try to answer any questions you might have about my situation.

    I don't think you will find a single magic bullet to solve all the issues. The main point is that there will be no shortcut for major schema and index changes. You will need at least 120% free space to create a clustered index and facilitate major schema changes.
    I suggest an incremental approach to address your biggest pain points. You mention it takes 10-12 hours to load 300,000 rows, which suggests there may be queries involved in the process that require full scans of the 650 million row table. Perhaps some indexes targeted at improving that process would be a good first step.
    What SQL Server version and edition are you using? You'll have more options with Enterprise (partitioning, row/page compression).
    Regarding the data types, I would take a best guess at the proper types and run a query with TRY_CONVERT (assuming SQL 2012) to determine counts of rows that conform or not for each column. Then create a new table (using SELECT INTO) that has strongly typed columns for those columns that are not problematic, plus the others that cannot easily be converted, and then drop the old table and rename the new one. You can follow up later to address column data corrections and/or transformations.
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com
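    As a rough illustration of the TRY_CONVERT idea (assuming SQL Server 2012+ and Microsoft's JDBC driver), the following Java sketch counts how many values in one candidate column would fail conversion to the proposed type; the connection string, table, and column names are placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ColumnTypeCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder connection string for the staging database.
            String url = "jdbc:sqlserver://localhost;databaseName=Staging;integratedSecurity=true";
            // TRY_CONVERT returns NULL where the value cannot be converted,
            // so the SUM counts the rows that would break an int conversion.
            String sql =
                "SELECT COUNT(*) AS total, " +
                "       SUM(CASE WHEN TRY_CONVERT(int, order_qty) IS NULL " +
                "                 AND order_qty IS NOT NULL THEN 1 ELSE 0 END) AS bad_rows " +
                "FROM dbo.BigImportTable";
            Connection con = DriverManager.getConnection(url);
            try {
                Statement st = con.createStatement();
                ResultSet rs = st.executeQuery(sql);
                if (rs.next()) {
                    System.out.println("rows: " + rs.getLong("total")
                            + ", not convertible to int: " + rs.getLong("bad_rows"));
                }
                rs.close();
                st.close();
            } finally {
                con.close();
            }
        }
    }

    Run against one or two columns at a time, this gives the per-column conformance counts the answer describes without yet touching the table's schema.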

  • Best practices for working with large placed bitmap images?

    Hey all,
    I need some advice on the best way to approach building these files. I've been working on some banners that are very large: 3 x 7 feet.
    Each banner has a simple vector graphic treatment at the top and bottom (a rectangle with a different colored rule on top, and a vector logo) and a small amount of text, just a URL and a headline. The headline is type (not converted to outlines) and usually has some other effect applied to it, say a drop shadow or outer glow. Under these graphics is a full-bleed image. The placed images need to be 150 ppi at actual size, so they're honking big, sometimes up to 2 GB. Once the layouts are approved, they have to go to a vendor for output.
    The Illustrator docs are really large, and I've read in other threads how to combat that (PDF compatibility, raster settings). But even still, does anyone have any insight into the best way to deal with these things? The dimensions are large, and then the images are large, and it just makes for lots of looking at the spinning ball of death...
    If it were me, I'd build them in InDe, but the vector graphics need to be edited for each one, and I so don't like to do that in InDe unless forced. To me, it's still ultimately a page layout app, not a drawing app. (Old school here.)
    FYI, our machines are all MBPs with 8G ram and the latest Intel Core 2 Duo chips, 2.66 and 2.8GHz. If we keep the files local (as opposed to working on the server) it should be fairly zippy... No?
    Any advice is appreciated, thanks!

    You can get into memory trouble with very large placed pdf files. Tiffs too.
    This has to do with the preview, which contains much more information than you need to work with.
    On the other hand, if you place EPSs and take care not to turn on overprint preview, you can get away with huge files.
    If you do turn on overprint preview, your machine will slow down a lot and the file may become totally unmanageable.
    Compare this to InDesign, where you can control the quality of the preview. A hi-res preview will slow you down, and most often you don't need it anyway.
    I was working (in Illie) the other day on much larger files than you mention (displays for whole walls) and had some considerable trouble until I reverted to the old EPS format. They say it's dying but it ain't dead yet.

  • Dealing with large files, again

    Ok, so I've looked into using BufferedReaders and can't get my head round them; or more specifically, I can't work out how to apply them to my code.
    I have inserted a section of my code below, and want to change it so that I can read in large files (of over 5 million lines of text). I am reading the data into different arrays and then processing them. Obviously, when reading in such large files, my arrays are filling up and failing.
    Can anyone suggest how to read the file into a buffer, deal with a set amount of data, process it, empty the arrays, then read in the next lot?
    Any ideas?
    void readV2(){
        String line;
        int i = 0, lineNo = 0;
        try {
            // Create input stream
            FileReader fr = new FileReader(inputFile);
            BufferedReader buff = new BufferedReader(fr);
            while ((line = buff.readLine()) != null) {
                if (line.substring(0, 2).equals("V2")) {
                    lineNo = lineNo + 1;
                    IL[i] = Integer.parseInt(line.substring(8, 15).trim());
                    // Other processing here
                    NoOfPairs = NoOfPairs + 1;
                } //end if
                else {
                    break;
                } //end else
            } //end while
            buff.close();
            fr.close();
        } //end try
        catch (IOException e) {
            log.append("IOException error in readESSOV2XY" + e + newline);
            proceed = false;
        } //end catch IOException
        catch (ArrayIndexOutOfBoundsException e) {
            arrayIndexOutOfBoundsError(lineNo);
        } //end catch ArrayIndexOutOfBoundsException
        catch (StringIndexOutOfBoundsException e) {
            stringIndexOutOfBoundsError(e.getMessage(), lineNo);
        } //end catch StringIndexOutOfBoundsException
    } //end readV2
    Many thanks for any help!
    Tim

    Yeah, ok, so that seems simple enough. But once I have read part of the file into my program, I need to call another method to deal with the data I have read in and write it out to an output file. How do I get my file reader to "remember" where I am up to in the file I'm reading? An obvious way, but possibly not too good technically, would be to set a counter and, when I go back to the file reader, skip that number of lines in the input file. This just doesn't seem too efficient, which is critical when it comes to dealing with such large files (i.e. several million lines long).
    I think you might need to change the way you are thinking about streams. The objective of a stream is to read and process data at the same time.
    I would recommend that you re-think your algorithm: instead of reading the whole file and then doing your processing, think about how you could read a line and process a line, then read the next line, etc...
    By working on just the pieces of data that you have just read, you can process huge files with almost no memory requirements.
    As a rule of thumb, if you ever find yourself creating huge arrays to hold data from a file, chances are pretty good that there is a better way. Sometimes you need to buffer things, but very rarely do you need to buffer such huge pieces.
    - K
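    To make the read-a-line/process-a-line advice concrete, here is a minimal Java sketch of a streaming version of the earlier readV2 loop: it processes each line as it is read and writes the result straight to an output file, so memory use stays flat no matter how long the input is. The process() method is a hypothetical stand-in for the real per-record logic.

    import java.io.BufferedReader;
    import java.io.BufferedWriter;
    import java.io.FileReader;
    import java.io.FileWriter;
    import java.io.IOException;

    public class StreamingProcessor {
        public static void main(String[] args) throws IOException {
            BufferedReader in = new BufferedReader(new FileReader(args[0]));
            BufferedWriter out = new BufferedWriter(new FileWriter(args[1]));
            try {
                String line;
                while ((line = in.readLine()) != null) {
                    if (line.startsWith("V2")) {      // same record filter as readV2()
                        out.write(process(line));
                        out.newLine();
                    }
                }
            } finally {
                in.close();
                out.close();
            }
        }

        // Stand-in for the real processing; assumes the record is long enough.
        private static String process(String line) {
            return line.substring(8, 15).trim();
        }
    }

    Because each line is consumed and written out immediately, there is no array to fill up and no need to remember a position in the file between passes.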

  • How do I pause an iCloud restore for app with large amounts of data?

    I am using an iPhone app which is holding 10 GB of data (media files).
    Unfortunately, although all data was backed up, my iPhone 4 was faulty and needed to be replaced with a new handset. On restore, the 10 GB of data takes a very long time to restore over Wi-Fi. If the restore is interrupted when I go to work or take the dog for a walk (I reached the halfway point during the night), I end up, of course, on 3G for a short period of time.
    The next time I am in a Wi-Fi zone, the app starts restoring again right from the beginning.
    How does anyone restore an app with large amounts of data or pause a restore?

    You can use classifications but there is no auto feature to archive like that on web apps.
    In terms of the blog, Like I have said to everyone that has posted about blog preview images:
    http://www.prettypollution.com.au/business-catalyst-blog
    Just one example of an image at the start of the blog post rendering out, not hard at all.

  • Dealing with large amounts of data

    Hi
    I am new to using Flex and BlazeDS. I can see in the FAQ that binary data transfer from the server to the Flex app is more efficient. My question is: is there a way to build a Flex databound control (e.g. a datagrid) that binds to a SQL query, web service, or remoting call on the server side and then displays an infinite amount of data as the user pages or scrolls, or does the developer have to write code from scratch to deal with paging, or with an infinite scrollbar, by asking the server for chunks of data at a time?

    You have to write your own paginating grid. It's easy to do: just make a canvas, throw a grid and some buttons on it, and when the user clicks to the next page, make a request to the server and, when you have the result, set it as the new data model for the grid.
    I would discourage you from returning an infinite amount of data to the user; provide search functionality plus pagination.
    Hope that helps.
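    For the server side of such a paginating grid, a BlazeDS remoting destination is just a plain Java class declared in remoting-config.xml; below is a hedged sketch of a service method that returns one page plus a total count so the Flex client can build its pager. The class, method, and data are purely illustrative, and a real implementation would run a LIMIT/OFFSET-style query instead of fabricating rows.

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative remoting service: the Flex grid asks for one page at a time
    // instead of the whole result set.
    public class ProductService {

        public static class Page {
            public List<String> rows = new ArrayList<String>();
            public int totalCount;            // lets the client compute the page count
        }

        public Page getPage(int pageIndex, int pageSize) {
            Page page = new Page();
            page.totalCount = countProducts();
            int offset = pageIndex * pageSize;
            // Fabricated rows keep the sketch self-contained; swap in a paged query.
            for (int i = offset; i < Math.min(offset + pageSize, page.totalCount); i++) {
                page.rows.add("product-" + i);
            }
            return page;
        }

        private int countProducts() {
            return 1234;                      // placeholder for SELECT COUNT(*)
        }
    }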
