Working with VERY LARGE tables - is it possible to bypass row counting?

Hello!
For working with large result sets, ADF provides the `Range Paging` access mode for view objects, described in section 27.1.5 of the Developer's Guide for Forms/4GL Developers.
It works well, but by default it computes the total row count to enable paging. In some cases the query `select count(1) from (SELECT ...)` can take a very, very long time.
But if a view object doesn't know the row count (for example, if we override the getEstimatedRowCount() method), the paging controls don't appear in the user interface.
However, I believe it should be possible to display just two paging links, Prev and Next, without knowing the row count. Is there a way to do it?
Thanks in advance,
Ilya Rodionov.

Hi Ilya,
while you wait for Frank to dig up the right sample, you can read this thread:
Re: ADF BC: Performance issue with getEstimatedRowCount (ER?)
There we discuss exactly this issue.
Timo
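
For reference, here's a minimal sketch of the override Ilya mentions, assuming a custom ViewObjectImpl subclass (the class name and the fixed estimate are made up). Returning a constant estimate skips the expensive count query while keeping the paging controls visible:

    import oracle.jbo.server.ViewObjectImpl;

    // Hypothetical implementation class for the large table's view object.
    public class LargeTableViewImpl extends ViewObjectImpl {
        // Made-up upper bound; any positive value keeps the paging controls alive.
        private static final long ROW_COUNT_ESTIMATE = 1000000L;

        @Override
        public long getEstimatedRowCount() {
            // Skip the "select count(1) from (SELECT ...)" round trip entirely.
            return ROW_COUNT_ESTIMATE;
        }
    }

The trade-off is that the last pages the UI offers may be empty or out of range, which is exactly what a Prev/Next-only control would avoid.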

Similar Messages

  • JRockit for applications with very large heaps

    I am using JRockit for an application that acts as an in-memory database, storing a large amount of data in RAM (50 GB). Out of the box we got about a 25% performance increase compared to the HotSpot JVM (great work, guys). Once the server starts up, almost all of the objects live in the old generation, with a smaller number in the nursery. The operation we are trying to optimize needs to visit essentially every object in RAM, and we want to optimize for throughput (total time to run the operation, not worrying about GC pauses).
    Currently we are using huge pages, -XXaggressive, and -XX:+UseCallProfiling, and we give the application 50 GB of RAM for both the max and min heap size. I tried adjusting the TLA size to be larger, which seemed to degrade performance. I also tried a few other GC schemes, including singlepar, which also had negative effects (we are currently using the default, which optimizes for throughput).
    I used JRMC to profile the operation, and here are the results I found interesting:
    liveset 30%
    heap fragmentation 2.5%
    GC pause time average 600 ms
    GC pause time max 2.5 s
    It did 4 young-generation collections, which were very fast, and then 2 old-generation collections, each about 2.5 s (the entire operation takes 45 s).
    For the long old-generation collections, about 50% of the time was spent in mark and 50% in sweep. Drilling into sub-level 2, 1.3 seconds were spent in objects and 1.1 seconds in external compaction.
    Heap usage: although 50 GB is committed, usage fluctuates between 20 GB and 32 GB. To give you an idea of what is stored in the heap, about 50% of it is char[] and another 20% is int[] and long[].
    My question: are there any other flags I could try that might improve performance, or anything I should look at more closely in JRMC to help tune this application? Are there any specific tips for applications with large heaps? We could also double or even triple the memory if that would improve performance, but we noticed that larger heaps did not always help.
    Thanks in advance for any help you can provide.

    Any suggestions for using JRockit with very large heaps?
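
    Not JRockit-specific, but while experimenting with flags it can help to watch heap use and collector counts from inside the application. A minimal sketch using the standard java.lang.management API (which, as far as I know, JRockit also exposes):

        import java.lang.management.GarbageCollectorMXBean;
        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryMXBean;

        public class GcWatch {
            public static void main(String[] args) {
                // Current heap usage in megabytes.
                MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
                System.out.println("heap used: "
                        + mem.getHeapMemoryUsage().getUsed() / (1024 * 1024) + " MB");
                // Per-collector counts and accumulated collection time.
                for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                    System.out.println(gc.getName() + ": " + gc.getCollectionCount()
                            + " collections, " + gc.getCollectionTime() + " ms total");
                }
            }
        }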

  • iPod touch (4th gen) running iOS 5 boots up with very large icons, impossible to navigate; how do I get back to the standard-sized home screen?

    iPod touch (4th gen) running iOS 5 boots up with very large icons, impossible to navigate; I need to return to the standard-sized home screen.

    Triple-click the Home button, then go to Settings > General > Accessibility and turn Zoom off. If problems persist, see:
    iPhone: Configuring accessibility features (including VoiceOver and Zoom)

  • Deleting rows from very large table

    Hello,
    I need to delete rows from a large table, but not all of them, so I can't use TRUNCATE. The delete condition is based on one column, something like this:
    delete from very_large_table where col1=100;
    There's a valid B-tree index on col1, but the delete still runs very slowly. Is there anything that can help delete the rows faster?
    Thanks in advance.
    A.

    Your manager doesn't agree to you running an EXPLAIN PLAN? What is his objection? Sounds like the prototypical pointy-haired boss.
    Take a look at these:
    -- do_explain.sql
    spool explain.txt
    -- do EXPLAIN PLAN on target queries with current index definitions
    truncate table plan_table;
    set echo on
    explain plan for
    <insert query here>
    set echo off
    @get_explain.sql
    spool off

    -- get_explain.sql
    set linesize 120
    set pagesize 70
    column operation   format a25
    column options     format a15
    column object_name format a20
    column opt         format a6
    select lpad(' ', level) || operation "OPERATION",
           options "OPTIONS",
           decode(to_char(id), '0', 'COST = ' || nvl(to_char(position), 'n/a'), object_name) "OBJECT NAME",
           cardinality "rows",
           substr(optimizer, 1, 6) "OPT"
    from   plan_table
    start  with id = 0
    connect by prior id = parent_id;
    There are probably newer, better ways, but this should work with all living versions of Oracle, and it's something I've kept in my back pocket for several years now. It doesn't actually execute the query or DML in question; it just runs an EXPLAIN PLAN on it.
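
    Back on the original delete question, one common pattern is to delete in ROWNUM-capped batches so each pass generates limited undo and commits often. A hedged sketch in JDBC (connection details, credentials, and batch size are made up; the table and predicate are from the post):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public class BatchDelete {
            public static void main(String[] args) throws SQLException {
                String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL"; // hypothetical
                try (Connection con = DriverManager.getConnection(url, "scott", "tiger")) {
                    con.setAutoCommit(false);
                    // ROWNUM caps each pass, keeping undo small and locks short-lived.
                    String sql = "delete from very_large_table where col1 = ? and rownum <= 10000";
                    try (PreparedStatement ps = con.prepareStatement(sql)) {
                        ps.setInt(1, 100);
                        int deleted;
                        do {
                            deleted = ps.executeUpdate();
                            con.commit(); // commit after every batch
                        } while (deleted > 0);
                    }
                }
            }
        }

    If most of the table qualifies for deletion, it is usually faster still to create a new table containing only the rows to keep and swap it in.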

  • Best data structure for dealing with very large CSV files

    Hi, I'm writing an object that stores data from a very large CSV file. The idea is that you initialize the object with the CSV file, and it then has lots of methods to make manipulating and working with the CSV file simpler: operations like copying a column, eliminating rows, performing some equation on all values in a certain column, etc. Also a method for printing back to a file.
    However, the CSV files will probably be in the 10 MB range, maybe larger, so simply loading everything into an array isn't possible, as it produces an OutOfMemoryError.
    Does anyone have a data structure they could recommend that can store the large amount of data required and is easily writable? I've currently been using a RandomAccessFile, but it is awkward to write to, as well as needing an external file which would have to be cleaned up after the object is removed (something very hard to guarantee occurs).
    Any suggestions would be greatly appreciated.
    Message was edited by:
    ninjarob

    How much internal storage ("RAM") is in the computer where your program should run? I think I have 640 MB in mine, and I can't believe loading 10 MB of data would be prohibitive, not even if the size doubles when the data comes into Java variables.
    If the data size turns out to be prohibitive of loading into memory, how about a relational database?
    Another thing you may want to consider is more object-oriented (in the sense of domain-oriented) analysis and design. If the data is concerned with real-life things (persons, projects, monsters, whatever), row and column operations may be fine for now, but future requirements could easily make you prefer something else (for example, a requirement to sort projects by budget or monsters by proximity to the hero).
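
    If loading the whole file really is prohibitive, a third option is to stream it and never hold more than one row in memory. A minimal sketch (file name and column index are hypothetical, and the naive split assumes no quoted commas):

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;

        public class CsvColumnSum {
            public static void main(String[] args) throws IOException {
                double sum = 0;
                try (BufferedReader in = new BufferedReader(new FileReader("data.csv"))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        String[] cells = line.split(",");    // naive: no quoted fields
                        sum += Double.parseDouble(cells[2]); // hypothetical numeric column
                    }
                }
                System.out.println("column 3 total = " + sum);
            }
        }

    Column-level rewrites (copy a column, drop rows) work the same way: read a row, transform it, write it to a new file, and rename at the end.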

  • Any suggestions on calculating with very large or small numbers?

    It seems that double values carry about 17 decimal digits (roughly 1 part in 1e17) of precision.
    Is there a way in iPhone calculations to get more precision for very large and small numbers, like 10e80 and so forth? I know that's more than the entire number of atoms in the universe, but still.
    I tried "long double" but that didn't seem to make any difference.
    Just a limitation?
    Thanks,
    doug

    Hmmm... maybe I was just having a problem with my formatted string, then?
    I was using the NSString %g format, which is supposed to print in exponential notation if the number is greater than 1e4 or less than 1e-4, or something like that.
    But I was not getting any exponents greater than 1e17, and then I was apparently getting overflows, because the numbers were coming out with negative mantissas.
    All the variables involved were double...
    How did you "look at" z?
    Thanks,
    doug
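
    For what it's worth, the distinction this thread is circling is precision versus range: a double reaches about 1.8e308 before overflowing, but carries only about 15-17 significant digits. The limits come from IEEE 754 rather than the language, so here is the same behavior sketched in Java:

        public class DoubleLimits {
            public static void main(String[] args) {
                double z = 1e80;              // far inside double's range (max ~1.8e308)
                System.out.printf("%g%n", z); // prints 1.00000e+80, no overflow
                // Precision, not range, is the ~17-digit limit:
                System.out.println(0.1 + 0.2); // 0.30000000000000004
            }
        }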

  • Problem in compilation with very large number of method parameters

    I have a Java file which I created using WSDL2Java. Since the actual WSDL has a complex type with a large number of elements (around 600), the resulting Java file has a method that takes 600 parameters of various types. When I try to compile it with javac at the command prompt, it says "Too many parameters" and doesn't compile. The same file compiles successfully in JBuilder X. The only way I could compile successfully at the command prompt was by reducing the number of parameters to around 250, but unfortunately that's not a workable solution. Does Sun specify any upper bound on the number of parameters that can be passed to a method?

    > ... a method that takes 600 parameters ...
    Not compatible with the spec; see Method Descriptors in the JVM specification (the limit is 255 parameter slots, with long and double each counting as two).
    > When I try to compile it using javac at command prompt, it says "Too many parameters" and doesn't compile.
    As it should.
    > The same is compiling successfully using JBuilder X.
    If JBuilder produces a class file, that class file may very well be invalid.
    > The only way I could compile successfully at command prompt is by reducing the number of parameters to around 250
    Which is what the spec says.
    > but unfortunately that it's not a workable solution.
    Pass an array of objects; an array is just one object.
    > Does Sun specify any upper bound on number of parameters that can be passed to a method?
    Yes.
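
    A sketch of the "array of objects" suggestion, with hypothetical names throughout: group the ~600 values into one holder object so the method needs a single parameter, well under the 255-slot limit.

        import java.util.HashMap;
        import java.util.Map;

        // Hypothetical holder for the ~600 WSDL elements.
        public class RequestFields {
            private final Map<String, Object> values = new HashMap<>();

            public RequestFields set(String name, Object value) {
                values.put(name, value);
                return this; // allow chained calls
            }

            public Object get(String name) {
                return values.get(name);
            }
        }

        // The service method then takes one parameter instead of 600.
        class Service {
            String process(RequestFields request) {
                return String.valueOf(request.get("customerName")); // hypothetical field
            }
        }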

  • Need help with "Very large content bounds" error...

    Hey guys,
    I've been having an issue with Adobe Muse [V7.0, Build 314, CL 778523] - one of the widgets I tried from the Exchange library seemed to bug out and created a large content box.
    This resulted in this error:
    Assert: "Very large content bounds (W:532767.1999999997 H:147446.49743999972) detected in BoxUtils::childBounds"
    Does anyone know how I could fix this issue?

    Hi there -
    Your file has been repaired and emailed back to you. Please let us know if you run into any other issues.
    Thanks!
    -Sam

  • What is the best way to work with a large amount of data?

    Assume you have a big text file that you need to store and work with; what would be the best way to do that?
    I will be adding / removing a lot of elements so I really want to stay away from array resizing overhead.
    I don't want to step on any toes here, but would binary trees / linked lists in C++ be a better solution?

    > Assume you have a big text file that you need to store and work with, what would be the best way to do that?
    Parse it into a database. Then use SQL to manipulate it.
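
    A minimal sketch of that suggestion, assuming an embedded H2 database on the classpath (the JDBC URL, file name, and table are all made up); lines are batch-inserted so the file streams through without being held in memory:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.Statement;

        public class LoadTextFile {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection("jdbc:h2:./scratch", "sa", "");
                     BufferedReader in = new BufferedReader(new FileReader("big.txt"))) {
                    try (Statement st = con.createStatement()) {
                        st.execute("create table if not exists lines(txt varchar(4000))");
                    }
                    con.setAutoCommit(false);
                    try (PreparedStatement ps =
                             con.prepareStatement("insert into lines values (?)")) {
                        String line;
                        int n = 0;
                        while ((line = in.readLine()) != null) {
                            ps.setString(1, line);
                            ps.addBatch();
                            if (++n % 1000 == 0) ps.executeBatch(); // flush periodically
                        }
                        ps.executeBatch(); // flush the remainder
                        con.commit();
                    }
                }
            }
        }

    After that, adding and removing elements becomes INSERT and DELETE statements, and there is no array-resizing overhead at all.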

  • How can I reduce save times when working with a large format file that is over 5 GB?

    I am working on a 3' x 10' banner at 300 dpi in Photoshop, and even basic tool operations take forever; saving takes 10 minutes or more. I have a PowerBook with CS6, and I've already given Photoshop the highest memory allocation in its preferences, so I'm putting up with hours of work for one small assignment. Why does it take so long, and is there a better way to save time and still make the banner look larger than life, even at 200 dpi? I would appreciate any answers. Thank you

    station_two wrote:
    a 10' banner does not even remotely need to be at 300 ppi...
    A printer worth their salt will know.
    Station two is correct.
    For future reference here are two links to good articles about viewing distance and ppi:
    http://www.digitalphotopro.com/technique/workflow/the-right-resolution
    http://www.northlight-images.co.uk/article_pages/print_viewing_distance.html

  • My iMessage isn't working when I send messages to people with iPhones that have iMessage; it works with very few people, and now it's starting to stop working with those people too. Help please.

    My iMessage doesn't work between my friends and me. I have a new iPhone 5. My messages only work with some people; with others, iMessage doesn't work in the first place, and now it's starting to stop working with more people. Help please.
    And yes, we all have iMessage on.

    Hi there,
    I see that you are having iMessage problems. Some possible causes:
    - Your friends' iMessage is not activated (they have to activate it with their Apple ID)
    - Your iMessage is not activated
    Here are some links to other people's iMessage problems:
    https://discussions.apple.com/message/17301880#17301880
    https://discussions.apple.com/message/20142285#20142285
    https://discussions.apple.com/message/18988318#18988318
    Hope that helps!

  • Composite primary key on very large tables...

    Hello
    So I'm building a database where one of the tables will eventually have more than 1 billion rows, and I was wondering about the primary key. The table will have 3 columns (sample_id, object_id, value), so instead of creating a surrogate key I was thinking of creating a composite primary key on those 3 columns. People will query either for sample_id = X or object_id = Y. I've created a composite index on (object_id, sample_id, value), and query times have been fast, under 2-3 s per object_id, although building that index takes some time (7-8 hrs). What would be the pros/cons of a composite PK vs a unique index? I plan to do massive bulk uploads (50M records at a time), so I'll disable the constraints before loading. These records will also be loaded in order, so would a clustered table be appropriate?
    thanks
    steve

    Hi,
    As Steve correctly said, partition the table.
    Create a unique index on a single column; then, even if your query does not use the indexed column, you can add a HINT to make it use the index.
    If you are joining this table with other tables, you can use the USE_HASH hint, which can improve performance.
    Hope it helps.
    Thanks
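
    Sketching the load flow Steve describes in JDBC, with hypothetical names throughout (table samples, constraint samples_pk, connection details) and a made-up data source standing in for the real feed: disable the constraint, batch-insert, re-enable.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.Statement;

        public class BulkLoad {
            public static void main(String[] args) throws Exception {
                String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL"; // hypothetical
                try (Connection con = DriverManager.getConnection(url, "scott", "tiger")) {
                    con.setAutoCommit(false);
                    try (Statement ddl = con.createStatement()) {
                        ddl.execute("alter table samples disable constraint samples_pk");
                    }
                    try (PreparedStatement ps = con.prepareStatement(
                            "insert into samples (sample_id, object_id, value) values (?, ?, ?)")) {
                        for (long i = 0; i < 1_000_000; i++) { // stand-in for the real source
                            ps.setLong(1, i);
                            ps.setLong(2, i % 1000);
                            ps.setDouble(3, Math.random());
                            ps.addBatch();
                            if (i % 10_000 == 0) ps.executeBatch();
                        }
                        ps.executeBatch();
                    }
                    try (Statement ddl = con.createStatement()) {
                        // re-enabling validates the data, which can itself take a while
                        ddl.execute("alter table samples enable constraint samples_pk");
                    }
                    con.commit();
                }
            }
        }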

  • Cache purging is not working with event polling table

    hi all,
    I am facing a problem with event polling tables. These are the steps I followed:
    1. I created a table in the back end to serve as the event polling table
    2. I created an ODBC connection to import the tables
    3. I imported the tables along with the event polling table
    4. I made the table an event polling table using Utilities
    5. I ran the report in the presentation catalog and checked the cache entries in the Administration Tool (in Cache)
    6. In the back end I manually inserted a db_name, catalog_name, schema_name and table_name into the event polling table
    But cache purging is not happening.
    Are there any other steps I have to follow, or did I make a mistake while implementing the event polling table?
    Can anyone tell me how to make use of an event polling table? I have followed this link:
    http://gerardnico.com/wiki/dat/obiee/event_table_
    Thanks

    Hi,
    The steps from the link I gave are correct. Is the table name specified correctly, i.e. does it match the table in the physical layer?
    Test it with one table: run a request containing that table, then insert one row manually into the polling table, verify that the cache entry is deleted, and check that you can find the corresponding trace in the NQQuery.log file.
    If the problem still exists, check the log details for the exact error. If you are getting an error, Gerard lists some common errors and their fixes in the same blog.
    Hope you follow etiquette.
    By,
    KK

  • Pivot table with very large number of columns

    Hello,
    here is the situation:
    One table contains raw data; from this table I feed another with extracted information (3 fields); I have to turn the content into a pivot table:
    Ro --- Co --- Va
    A      A      1
    A      B      1
    A      C      2
    B      A      11
    Turned into:
         A    B     C ...
    A    1    1     2
    B    11   null  null
    To do this I do a query like:
    select r, sum(decode(c,'A',Va)) COLA, sum(decode(c,'B',Va)) COLB, sum(decode(c,'C',Va)) COLC, ..., sum(decode(c,'XYZ',Va)) COLXYZ from table group by r
    The statement is generated by a script (CFMX), and it works until I reach a query that tries to have 672 values for c, which means 672 columns...
    Oracle doesn't like that: ORA-01467: sort key too long
    I like this approach because it gets the result fast.
    I have tried a different solution at the CFMX level for that specific query, but I got a timeout (querying the table with a loop on co within a loop on ro).
    Is there any workaround?
    I am using Oracle 9i.
    Thank you!

    insert into extracted_data select c, r, v, p from full_data where <specific_clause>
    The values for C come from a query: select distinct c from extracted_data
    and it is the same for R.
    R and C are varchar2(3999).
    I suppose that I can split on the first letter of the C column, as in (the 'X-value-n' literals stand for the actual distinct values in each letter bucket):
    SELECT alpha_a.r, alpha_a.cola1, . . ., alpha_b.colb1, . . ., alpha_z.colz1, . . .
    FROM (SELECT r, SUM(DECODE(c, 'A-value-1', va)) cola1, . . .,
                 SUM(DECODE(c, 'A-value-n', va)) colan
          FROM table
          WHERE c like 'A%'
          GROUP BY r) alpha_a,
         (SELECT r, SUM(DECODE(c, 'B-value-1', va)) colb1, . . .
          FROM table
          WHERE c like 'B%'
          GROUP BY r) alpha_b,
         . . .
         (SELECT r, SUM(DECODE(c, 'Z-value-1', va)) colz1, . . .
          FROM table
          WHERE c like 'Z%'
          GROUP BY r) alpha_z
    WHERE alpha_a.r = alpha_b.r and alpha_a.r = alpha_c.r . . . and alpha_a.r = alpha_z.r
    I would have up to 27 select statements joined... and I still have to check that I don't hit the limit within any one of them.
    "In real life":
    select GRPW.r, GRPW.W0, GRPC.C0, GRPC.C1 from
    (select r, sum(decode(C, 'Wall, unspecified', cases)) W0
     from tmp_maqueje where upper(C) like 'W%' group by r) GRPW,
    (select r,
            sum(decode(C, 'Ceramic tiles, indoors', cases)) C0,
            sum(decode(C, 'Cement surface, outdoors (Concrete/cement block, see Structural element, A11)', cases)) C1
     from tmp_maqueje where upper(C) like 'C%' group by r) GRPC
    where GRPW.r = GRPC.r
    order by GRPW.r, GRPW.W0, GRPC.C0, GRPC.C1
    Message was edited by:
    maquejp
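
    Since the statement is generated by a script anyway, here is the generation step sketched in Java instead of CFMX (table and column names taken from the post, everything else made up). Escaping quotes matters because the c values are free text:

        import java.util.Arrays;
        import java.util.List;

        public class PivotSqlBuilder {
            // Builds the one-DECODE-per-column pivot query described in the post.
            static String build(List<String> cValues) {
                StringBuilder sql = new StringBuilder("select r");
                int i = 0;
                for (String c : cValues) {
                    sql.append(",\n  sum(decode(c, '")
                       .append(c.replace("'", "''")) // escape single quotes in the value
                       .append("', va)) COL").append(i++);
                }
                sql.append("\nfrom extracted_data group by r");
                return sql.toString();
            }

            public static void main(String[] args) {
                System.out.println(build(Arrays.asList("A", "B", "C")));
            }
        }

    It will still hit ORA-01467 past a few hundred columns, so the letter-split above (or several narrower queries stitched together on the client side) is still needed.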

  • Error when creating index with parallel option on very large table

    I am getting a
    "7:15:52 AM ORA-00600: internal error code, arguments: [kxfqupp_bad_cvl], [7940], [6], [0], [], [], [], []"
    error when creating an index with the parallel option, which is strange because this has not been a problem until now. We just hit 60 million rows in a 45-column table, and I wonder if we've hit a bug.
    Version: 10.2.0.4
    O/S: Linux
    As a test I removed the parallel option, and several of the indexes were created with no problem, but many still threw the same error... strange. Do I need a patch update of some kind?

    This is most certainly a bug.
    From metalink it looks like bug 4695511 - fixed in 10.2.0.4.1
