Issue seeking & scrubbing through a very large FLV

BACKGROUND
I am working with a Flash developer on a project where we
need to play back, scrub, and seek through a 2-hour video. The
developer is controlling a locally stored 3.3 GB FLV file using the
NetStream class. The FLV specs are: 640x480 at 29.97 fps, encoded with
key frames every 12 frames to support roughly half-second seek accuracy.
PROBLEMS
It feels like there is some caching mechanism fighting
against our seek performance past the first few minutes of the
video.
1) The first time we seek to any point past the first few
minutes, there is a long one-time delay. If we go to the 30-minute
mark, it takes 10-15 seconds to cue the video. However, after that
one delay, subsequent seeks from 0-30 minutes are immediate.
2) If we seek or scrub past ~5300 seconds, go back to the
start, and then try to go past the 5300 mark, the seek delay happens
again. So after "warming the movie up" we can quickly seek through
the first half until we attempt to pass the magic 5300-second mark.
I could live with problem #1, but #2 is a deal breaker. Here
are some more data points:
TESTS
1) SMALLER FLV: If I compress the movie to 2.75 GB, I can seek
further in before running into the above problem (6300 seconds).
2) LARGER BUFFER: Setting a large pre-load buffer solved the
performance issues, but there is no way we have enough RAM to
preload the entire video. (We tried, and it crashed.)
3) MEMORY OBSERVATIONS: When we seeked to 30 minutes, we did
NOT notice a jump in memory usage while waiting for the video to
cue.
Does anyone have any ideas on this one? My gut is telling me
that Flash is building some kind of timecode lookup table on the
fly and that it is arbitrarily limited in size. (Notably, 5300
seconds into the 3.3 GB file and 6300 seconds into the 2.75 GB file
both work out to roughly the same ~2.4 GB byte offset, which hints
that the limit is on byte position rather than on time.)
pete

That's a tough one. Acrobat is not designed for tiling PDF files to create another PDF, and that's really what you're asking. There is the option to print to a PDF and turn on the Poster feature. If you were on Windows, where there is a real Adobe PDF printer driver, you could probably use that feature. But for various reasons (too complicated to describe here), that driver was withdrawn on the Macintosh.
If you have a copy of Adobe InDesign, and if you installed an Adobe PDF 9 PPD file (see description below), it could be done in a somewhat awkward way. InDesign allows you to place PDF files, so you would need to make a page of the proper size and place your large PDF.
Then after installing the Adobe PDF 9 PPD file, you could choose File > Print. Then choose to print a PostScript file to the Adobe PDF 9.0 PPD file. In the Setup panel, you'd choose a Letter size page. Then you'd choose the Tile option at the bottom and set the Overlap amount.
Then you'd save the PostScript file and process through Distiller.
My blog post below describes how to find and install the Adobe PDF 9.0 PPD file:
http://indesignsecrets.com/creating-postscript-files-in-snow-leopard-for-older-print-workflows.php

Similar Messages

  • Best technology to navigate through a very large XML file in a web page

    Hi!
    I have a very large XML file that needs to be displayed in my web page, perhaps as a tree structure. Visitors should be able to drill down to nodes at any depth and access the child elements or text elements of those nodes.
    I thought about using a DOM parser with Java but dropped that idea, as the DOM would be stored in memory and is hence space-consuming. SAX doesn't work for me either, since every time a user clicks on one of the nodes my SAX parser re-parses the whole document for that node, which is time-consuming.
    Could anyone please tell me the best technology and best parser to use for very large XML files?

    Thank you for your suggestion. I have a question, though. If I use a relational database and access it for EACH and EVERY click the user makes, wouldn't that take a long time to populate the page with data? Isn't an XML store more efficient here? Please reply.
    You have the choice of reading a small number of records (10 children per element?) from a database, or parsing multiple megabytes. Reading 10 records from a database should take maybe 100 milliseconds (1/10 of a second). I have written a web application that reads several hundred records and returns them with acceptable response time, and I am no expert. To parse an XML file of many megabytes... you have already tried this, so you know how long it takes, right? If you haven't tried it then you should. It's possible to waste a lot of time considering alternatives -- the term is "analysis paralysis". Speculating on how fast something might be doesn't get you very far.
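    To make the database option concrete, a per-click child lookup might look like the following sketch; the nodes(id, parent_id, name) schema and the column names are assumptions for illustration:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.ArrayList;
        import java.util.List;

        // Loads only the children of one node per click, instead of
        // re-parsing the whole XML document on every request.
        public class TreeDao {
            private final Connection con;

            public TreeDao(Connection con) { this.con = con; }

            public List<String> childrenOf(long parentId) throws SQLException {
                String sql = "SELECT name FROM nodes WHERE parent_id = ? ORDER BY id";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setLong(1, parentId);
                    try (ResultSet rs = ps.executeQuery()) {
                        List<String> names = new ArrayList<>();
                        while (rs.next()) {
                            names.add(rs.getString("name"));
                        }
                        return names;
                    }
                }
            }
        }

    With an index on parent_id, each click touches only the handful of rows it actually needs.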

  • Hello, I am having issues opening very large files I created, one being 1,241,776 KB. I have PS 12.1, 64-bit. I am not sure if the issues I am having are because of the PS version I have, and whether or not I have to upgrade?

    I think more likely, it's a memory / scratch disk issue.  1.25 gigabytes is a very big image file!!
    Nancy O.

  • HT4623 Does anyone have an issue with your iPhone 5 when trying to play music? Mine won't; on the screen you see the player run through songs very quickly but never playing... Help!

    Try this: start a song, tap the screen, and under the left side of the scrubber bar that appears, tap the icon to stop shuffle.

  • Timeline issues - when I drag media back, the timeline becomes very large and out of whack!  What is this in CS4?

    You are in the wrong forum!
    Premiere Pro CS4 & Earlier

  • Scrubbing through cached comp very sluggish

    I have a comp that has been RAM Previewed and consequently, the entire status bar above the timeline is green.
    However if I take my playhead and drag through the timeline, the screen updates very very slowly (perhaps 1~2fps).  But I can RAM Preview at full speed.
    Any ideas? I thought once it's cached, and you aren't changing any parameters, it should scrub through because it's all loaded in RAM, yes? I swear I've done this a billion times in the past.
    48GB Mac with 10GB free for other stuff. 200GB HDD Global Cache (although nothing in the status bar is blue. It's all green)

    Yup! Expressions is what's killing it. I have several LinesCreator 3D lines which uses copious amount of code to do its magic.
    Shake 'n Bake those expressions!

  • Very Large Spacing Issue In Both Mail And Address Book!!

    Hello,
    I need your help to resolve this problem. Initially I had a problem with garbled fonts in my Mail application. I used FontNuke in an attempt to resolve the problem. The result of using FontNuke was that the fonts were un-garbled, but now I have very large spacing in the Mail and Address Book applications. Also, the circles on the left side of the Mail application (that normally indicate the number of unread messages you have) are very large!! The same thing happens with the red star on the Mail icon that indicates the number of unread messages you have.
    I would really appreciate any help that you can provide!
    Thanks,
    SydCam

    Thanks for following up so quickly. I took your advice and checked my fonts. I opened Font Book, got rid of all the "duplicate" fonts, and made sure that I had all of the required fonts. I then rebooted, and when I went back to Mail and the Address Book I still had the same problem. There are still very large spaces in my email, both in the header and the body. The circle to the left that indicates how many unread emails you have is still oversized. I wish there was a way I could show you a screen capture so you could see exactly what I'm referring to.
    Obviously there is something that I'm missing, but I just don't know what it is. I would greatly appreciate any help that you (or anyone else) can provide!
    Thanks in advance!
    SydCam

  • Sorting very large file

    Hi,
    I have a very large file (1.3 GB, more than 10 million records) that I need to sort. What is the best possible way to do that without using a database?
    Thanks,
    Gary

    Just a suggestion:
    If you create a sorted red/black tree of record numbers (i.e. offsets in the data file), you could get an equivalent sort as follows (probably without a huge amount of memory...):
    Here's the strategy:
    Create a comparator that compares two Long objects. It performs a seek in the data file, retrieves the records pointed to by those two Longs, and compares them using whatever record comparator you were going to use.
    Create a sorted list based on that comparator (a tree-based list will probably be best)
    Run a for loop going from 0 to the total records in the data file, create Long objects for each value, and insert that value into the list.
    When you are done, the list will contain the record numbers in sorted order.
    Create an iterator for the list
    For each element in the list, grab the corresponding data file record from the source file and write it to the destination file.
    voila
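    To make that concrete, here is a minimal Java sketch of the comparator idea. The fixed 128-byte record length and the raw byte-wise comparison are placeholder assumptions; plug in your real record layout and comparator:

        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.io.UncheckedIOException;
        import java.util.Arrays;
        import java.util.Comparator;

        // Compares two record offsets by seeking into the data file and
        // comparing the records they point to.
        class OffsetComparator implements Comparator<Long> {
            static final int RECORD_LEN = 128; // assumed fixed record size

            private final RandomAccessFile file;

            OffsetComparator(RandomAccessFile file) { this.file = file; }

            @Override
            public int compare(Long a, Long b) {
                // Byte-wise comparison stands in for the real record comparator.
                return Arrays.compare(read(a), read(b));
            }

            private byte[] read(long offset) {
                try {
                    byte[] buf = new byte[RECORD_LEN];
                    file.seek(offset);
                    file.readFully(buf);
                    return buf;
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            }
        }

    The offsets 0, 128, 256, ... can then be kept in a TreeSet<Long> built with this comparator (Java's TreeSet is a red/black tree). One caveat: a TreeSet silently drops entries that compare as equal, so if duplicate records are possible a sorted list is safer.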
    Some optimizations may be possible (and highly desirable, actually):
    When you read records from the file, place them into an intermediate cache (linked list?) of limited length (say 1000 records). When a record is requested, scan the cache for that record ID - if found, move it to the top of the list, then return the associated data. If it is not found, read from the file, then add to the top of the list. If the list has more than X records, remove the last item from the list.
    The upper pieces of the sort tree are going to get hammered, so having a cache like this is a good idea.
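    In Java, a cache like this comes almost for free from an access-ordered LinkedHashMap; a minimal sketch with the 1000-record cap suggested above (the byte[] value type matches the read sketch earlier):

        import java.util.LinkedHashMap;
        import java.util.Map;

        // Limited-length record cache: each lookup moves the entry to the
        // front, and the least-recently-used entry is evicted past the cap.
        class RecordCache extends LinkedHashMap<Long, byte[]> {
            private static final int MAX_ENTRIES = 1000;

            RecordCache() {
                super(16, 0.75f, true); // true = access order (LRU behavior)
            }

            @Override
            protected boolean removeEldestEntry(Map.Entry<Long, byte[]> eldest) {
                return size() > MAX_ENTRIES;
            }
        }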
    Another possibility: instead of inserting the records in order (i.e. 0 through X), it may be desirable to insert them in pseudo-random order - I seem to remember a really nice algorithm for doing this (without repeating any records) in an issue of Dr. Dobb's from a while back. You need a sequence of random numbers between 0 and X without repeats (like dealing from a card deck).
    If your data is already in relatively sorted order, the cache idea will not provide significant benefit without randomizing the inputs.
    One final note: try the algorithm without the cache first and see how bad it is. The OS's disk caching may provide sufficient performance. If you encapsulate the data file (i.e. an object that has a getRecordBytes(int recNumber) method), then it will be easy to implement the cache later if needed.
    I hope this provides some food for thought!
    - K

  • Unable to copy very large file to eSATA external HDD

    I am trying to copy a VMWare Fusion virtual machine, 57 GB, from my MacBook Pro's laptop hard drive to an external eSATA hard drive, which is attached through an ExpressPort adapter. VMWare Fusion is not running and the external drive has lots of room. Disk Utility finds no problems with either drive. I have excluded both the external disk and the folder on my laptop hard drive that contains my virtual machine from my Time Machine backups. At about the 42 GB mark, an error message appears:
    The Finder cannot complete the operation because some data in "Windows1-Snapshot6.vmem" could not be read or written. (Error code -36)
    After I press OK to remove the dialog, the copy does not continue, and I cannot cancel the copy. I have to force-quit the Finder to make the copy dialog go away before I can attempt the copy again. I've tried rebooting between attempts, still no luck. I have tried a total of 4 times now, exact same result at the exact same place, 42 GB / 57 GB.
    Any ideas?

    Still no breakthrough from Apple. They're telling me to terminate the VMWare processes before attempting the copy, but had they actually read my description of the problem first, they would have known that I already tried this. Hopefully they'll continue to investigate.
    From a correspondence with Tim, a support representative at Apple:
    Hi Tim,
    Thank you for getting back to me, I got your message. Although it is true that at the time I ran the Capture Data program there were some VMWare-related processes running (PIDs 105, 106, 107 and 108), this was not the case when the issue occurred earlier. After initially experiencing the problem, this possibility had occurred to me, so I took the time to terminate all VMWare processes using the Activity Monitor before again attempting to copy the files, including the processes mentioned by your engineering department. I documented this in my posting to Apple's forum as follows: (quote is from my post of Feb 19, 2008, 1:28pm, to the thread "Unable to copy very large file to eSATA external HDD", relevant section in >bold print<)
    Thanks for the suggestions. I have since tried this operation with 3 different drives through two different interface types. Two of the drives are identical - 3.5" 7200 RPM 1TB Western Digital WD10EACS (WD Caviar SE16) in external hard drive enclosures, and the other is a smaller USB2 100GB Western Digital WD1200U0170-001 external drive. I tried the two 1TB drives through eSATA - ExpressPort and also over USB2. I have tried the 100GB drive only over USB2 since that is the only interface on the drive. In all cases the result is the same. All 3 drives are formatted Mac OS Extended (Journaled).
    I know the files work on my laptop's hard drive. They are a VMWare virtual machine that works just fine when I use it every day. >Before attempting the copy, I shut down VMWare and terminated all VMWare processes using the Activity Monitor for good measure.< I have tried the copy operation both through the finder and through the Unix command prompt using the drive's mount point of /Volumes/jfinney-ext-3.
    Any more ideas?
    Furthermore, to prove that there were no file locks present on the affected files, I moved them to a different location on my laptop's HDD and renamed them, which would not have been possible if there had been interference from vmware-related processes. So, that's not it.
    Your suggestion to compress the files before copying them to the external drive may serve as a temporary workaround, but it is not a solution. This VM will grow over time to the point where even the compressed version is larger than the 42 GB maximum, and compressing and uncompressing files of this size will take me a lot of time. Could you please continue to pursue this issue and identify the underlying cause?
    Thank you,
    - Jeremy

  • I support a very large school district currently running Firefox 3.6. What will happen at end of life date? We're in the middle of online testing this week.

    I run the test center for a very large school district with over 120k students. We've got a currently deployed base of 54k client machines using Firefox 3.6. We haven't upgraded for multiple reasons, the most important of which are removing the students' ability to use Private Browsing, and dealing with plugin updates for the non-digital-natives (read: dumber than a bag of hammers) who make up the majority of the client base.
    We're testing ESR now, but just found out that end of life for 3.6 is tomorrow, 4/24. We are currently in the middle of statewide online testing. The question is, what will happen tomorrow when the browser goes end of life? The ESR wiki mentions that "an update to the current version of Desktop Firefox will be offered through the Application Update Service"
    So the main question is, are my students/teachers going to get a popup telling them they have to update the browser if we have the updates already turned off? If so, can I turn it off remotely using SCCM, because it will cause all kinds of havoc.
    Please advise asap, and thanks in advance.

    We had to do some serious gymnastics to remove at least most of the ability to use Private Browsing. We removed it from the GUI, but unfortunately, if they know the hotkey, they can still bring it up. Security has some serious headaches with this, as by law they have to be able to track where students go, and private browsing removes their ability to do the forensic work they're required to do. Not a very well-thought-out feature from Mozilla, in my opinion, but it is what it is. Successive versions have made it even more difficult to remove even the GUI portion.
    We do plan to release ESR due to the aforementioned security issues, but testing has been slow.
    But thanks for the reply. I think we can turn off the updates if it isn't already done.
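    For reference, the update checks can be locked off centrally with a lock file pushed to each install directory (SCCM can deploy both files). A minimal sketch for the 3.6/ESR era; the file names are the conventional ones, so verify them against your deployment before relying on this during testing:

        // defaults/pref/local-settings.js -- tells Firefox to load the lock file
        pref("general.config.obscure_value", 0);
        pref("general.config.filename", "mozilla.cfg");

        // mozilla.cfg (the first line must be a comment) -- lock updates off
        lockPref("app.update.enabled", false);
        lockPref("app.update.auto", false);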

  • While playing music on my iPhone 5, I am now unable to play, pause, or skip songs while my phone is locked. I can still change the volume and scrub through songs. This has always worked for me until now. I have not changed any settings

    While playing music on my iPhone 5, I am now unable to play, pause, or skip songs while my phone is locked. I can still change the volume and scrub through songs. This has always worked for me until now. As far as I know, I have not changed any settings. This happens while my phone is locked, both on the slide-up toolbar/menu interface and on the normal screen before I slide to unlock.

    I skimmed through this very long post, but from what I read this sounds much more like a coverage issue with your network than a device issue. I have not seen or heard of this issue before, I do not have this issue personally, and I use the music app quite frequently while working in the yard.
    The DND feature should not be a factor at all if you're sure it's turned off.  I always recommend restarting your phone by holding down the home button + the lock button for 10-12 seconds until the Apple logo appears.  Once the phone reboots, do a test to see if this has helped to resolve the issue.
    If not, go to Settings > General > Reset > Reset Network Settings.  Then, test it out again.  Post back if neither of these solutions solve your issue.

  • Very Large Business Application (VLBA)

    CVLBA Research Center
    The Center for Very Large Business Applications (CVLBA) is a joint research initiative of the Otto-von-Guericke University Magdeburg and the Technische Universität München, initiated by SAP in 2006. The decision by SAP AG to fund research with these two university partners stems from the cooperation experience and knowledge transfer gained by jointly running the SAP University Alliances Project. Fifteen researchers in the CVLBA teams at Otto-von-Guericke University Magdeburg and Technische Universität München focus their research on bridging the gap between ERP research and ERP development.
    Defining Very Large Business Applications
    The first step in exploring issues of VLBA was to establish a common working definition to be used by all partners. According to this definition, a VLBA is essentially characterized – in contrast to a single business application – by its strategic importance for an organization, by its spatial, organizational, cultural, or technical unrestrictedness, and by its capability to be implemented by application systems or system landscapes.
    The goal of the foundation of VLBA is a sound definition that conceives of a VLBA as both an application (instantiation level; VLBA in the narrower sense) and a research framework (meta level; VLBA in the broader sense). You can find more details in the text attached to this thread. We are looking forward to your feedback on this highly relevant topic, in particular regarding sustainability, legacy systems, and cultural diversity!

    Definition of Very Large Business Application
    The first step in exploring issues of a Very Large Business Application (VLBA) was to establish a common working definition to be used by all partners. A VLBA is a business application that has strategic importance within an organization. Significant features of a VLBA are:
    (1) A VLBA supports one or more processes, at least one of which is a business process. Consequently, a VLBA directly affects business success, and the organization depends on it strategically, because changing or retiring the system entails large financial, organizational, and personnel-related costs.
    (2) A VLBA does not have any spatial, organizational, cultural, or technical limits.
    (3) VLBAs can be implemented through application systems as well as through system landscapes. It is significant that they support a (universal organizational) business process.
    Far-reaching automation of internal processes should be achieved through the application of the most modern technologies. Supply Chain Management (SCM) and Customer Relationship Management (CRM) systems are instances of this kind of software, as far as they fulfill all defined requirements. VLBAs are similar to a Business Information System in that they can support several Business Application Fields and, in this case, are based on several types of Business Application Systems.
    VLBA as a Field of Research
    Along with the first definition, the concept of 'VLBA' was classified and integrated into the existing conceptual world of business informatics, using UML. This allows distinguishing 'VLBA' from other related topics and clearly defines its scope in the scientific world of business informatics. In particular, a VLBA can be regarded as a special system landscape on the one hand and as a research area on the other hand. Present-day heterogeneous, organically grown system landscapes, as usually found in business practice, suffer from the symptom of spaghetti integration. Therefore, it seems practical to raise principles of Software Engineering to the level of system landscapes and to establish a design theory in the sense of System Landscape Engineering.
    Authors: Lars Krüger (CVLBA Magdeburg) & Bastian Grabski (CVLBA Magdeburg)

  • After Effects has started crashing when I scrub through timeline.

    I've been working on an animation for several weeks and now, starting yesterday, whenever I scrub through the timeline the program crashes on me and gives several error messages. It seems to have started after I added a large image layer with motion blur enabled that moves pretty quickly. But even if I turn off motion blur for the whole comp, the issue still occurs.
    Error messages are as follows:
    After Effects error: crash occurred while invoking format plug-in “PNG”.
    After Effects error: crash occurred while invoking rendering plug-in “AE_OpenGL”.
    After Effects error: Crash in progress. Last logged message was: <6356> <ASL.ResourceUtils.GetLanguageIDFromRegistry> <0> Unable to obtain the User 'Language' registry key at: Software\Adobe\After Effects\10.5\ Defaulting to 'en_US'.
    After Effects can’t continue: sorry, After Effects has crashed. For After Effects Help and Support, go to http://www.adobe.com/support/aftereffects. If you still can’t resolve the issue, please contact Adobe Technical Support (2).
    ( 0 :: 42 )
    Before quitting, you have one chance to save your project (don’t use the same name as the original).
    I am using After Effects version 10.5.0.253.
    I have tried several troubleshooting steps, such as updating my display drivers, disabling OpenGL in Preferences > Previews, moving the Extensions folder out of my install, recreating my preferences, purging the cache, and even just uninstalling and reinstalling After Effects, all to no avail.
    Additional Details:
    Windows 7 Professional SP1 64-Bit
    CPU: Intel Core i7-2600 @ 3.40Ghz
    RAM: 8.00 GB
    GPU: NVIDIA GeForce GT 630 with Driver ver. 320.49
    Any help from the community would be greatly appreciated. I'm usually good at being able to research and resolve my own issues, but this one has me stumped and my boss is on my case about the deadline.
    Thank you.

    While I did have OpenGL disabled in Preferences > Previews, it seems I still had Fast Previews set to Adaptive Resolution - OpenGL Off. I've switched this to just Off and now seem to be back on track. I believe I didn't have an understanding of what "turn off OpenGL" truly meant. I'll continue working and report back whether or not I have any more issues related to this.
    What would have caused this issue to appear in the first place? I've been working in AE 5.5 for about 2 years now doing a multitude of different projects, but this is the first time I've run up against this issue. Is my hardware just simply underpowered?
    Edit: Spelling

  • Can't hear .wav as I scrub through frames

    I used to create Flash cartoons with characters lip-synced to
    sounds. I did it by placing the sound on its own layer and then
    drawing each frame's mouth position to match the sound in that
    frame.
    I synced it by scrubbing (moving) through the frames, and I
    could hear each discrete sound for each frame. Now I hear nothing,
    which makes it very hard to lip sync! How can I hear the pieces of
    the .wav in each frame?

    No, it's not muted. The sound plays when I hit enter, but I
    just can't hear it as I'm scrubbing through. I'm on a different
    computer, and I wonder if it's some sort of settings issue. It just
    functions differently than it did where I was before.
    The sound plays when I hit enter, but I can't stop it before
    it's finished.
    The sound does not play unless I start it on or before the
    first frame.
    I can't hear any sound when dragging through the frames on
    the timeline.

  • Very-large-scale searching in J2EE

    I'm looking to solve a very-large-scale searching problem. I am creating a site
    where users can search a table with five million records, filtering and sorting
    independently on ten different columns. For example, the table might be five million
    customers, and the user might choose "S*" for the last name, and sort ascending
    on street name.
    I have read up on a number of patterns to solve this problem, but anticipate some
    performance issues. I'll explain below:
    1) "Page-by-Page Iterator" or "Value List Handler"
    In this pattern, it appears that all records that match the search criteria are
    retrieved from the database and cached on the application server. The client (JSP)
    can then access small pieces of the cached results at a time. Issues with this
    include:
    - If the customer record is 1KB, then wide search criteria (i.e. last name =
    S*) will cause 1 GB transfer from the database server to app server, and then
    1GB being stored on the app server, cached, waiting for the user (each user!)
    to ask for the next 10 or 100 records. This is inefficient use of network and
    memory resources.
    - 99% of the data transferred from the database server will not be used ... most
    users flip through a couple of pages and then choose a record or start a new search
    2) Requery the database each time and ask for a subset
    I haven't seen this formalized into a pattern yet, but the basic idea is this:
    If a client asks for records 1-100 first (i.e. page 1), only fetch that many
    records from the db. If the user asks for the next page, requery the database
    and use the JDBC API's ResultSet.absolute(int row) to start at record 101. Issue:
    The query is re-performed, causing the Oracle server to do another costly "execute"
    (bad on 5M records with sorting).
    To solve this, I've been trying to enhance the second strategy above by caching
    the ResultSet object in a stateful session bean. Unfortunately, this causes a
    "ResultSet already closed" SQLException, although I ensure that the Connection,
    PreparedStatement, and ResultSet are all stored in the EJB and not closed. I've
    seen this on newsgroups ... it appears that WebLogic is forcing the Connection
    closed. If this is how J2EE and pooled connections work, then that's fine ...
    there's nothing I can really do about it.
    Another idea is to use "explicit cursors" in Oracle. I haven't fully explored
    it yet, but it wouldn't be a great solution as it would be using Oracle-specific
    functionality (we are trying to be db-agnostic).
    More information:
    - BEA WebLogic Server 8.1
    - JDBC: Oracle's thin driver provided with WLS 8.1
    - Platform: Sun Solaris 5.8
    - Oracle 9i
    Any other ideas on how I can solve this issue?

    Michael McNeil wrote:
    [snip - full question quoted above]
    Hi. Fancy SQL to the rescue! If the table has a unique key, you can simply send a
    query per page, with iterative SQL that selects the next N rows beyond what was
    selected last time. Eg:
    Let variable X be the highest key value you've seen so far. Initially it would
    be the lowest possible value.
    select * from mytable M
    where ... -- application-specific qualifications...
    and M.key > X
    and 100 > (select count(*) from mytable MM where MM.key > X and MM.key < M.key and ...)
    In English, this says: select all the qualifying rows higher than what I last saw, but
    only those that have fewer than 100 qualifying rows between the last one I saw and them (ie:
    the next 100).
    When processing this query, remember the highest key value you see, and use it for the
    next query.
    Joe
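    A minimal JDBC sketch of Joe's idea (table and column names are illustrative; note the unique key doubles as the sort order here, so sorting on another user-chosen column would need the key comparisons rewritten against that column plus a tie-breaker):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class KeysetPager {
            // Fetches one page of at most 100 rows past lastKey and returns the
            // highest key seen, or -1 when no more rows qualify. Start with
            // lastKey = Long.MIN_VALUE and loop until -1 comes back.
            static long fetchPage(Connection con, long lastKey) throws SQLException {
                String sql =
                    "SELECT id, last_name FROM customers C " +
                    "WHERE C.last_name LIKE 'S%' AND C.id > ? " +
                    "AND 100 > (SELECT COUNT(*) FROM customers CC " +
                    "           WHERE CC.last_name LIKE 'S%' " +
                    "           AND CC.id > ? AND CC.id < C.id) " +
                    "ORDER BY C.id";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setLong(1, lastKey);
                    ps.setLong(2, lastKey);
                    long highest = -1;
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            highest = rs.getLong("id"); // remember for the next page
                            // ... render rs.getString("last_name") ...
                        }
                    }
                    return highest;
                }
            }
        }

    Each page is an independent, stateless query, so nothing has to be cached in a stateful session bean and the "ResultSet already closed" problem disappears.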
