Using Build to animate? Presenting statistical (=large) tables

Hi there,
I am using Keynote to present the results of statistical analyses. Most of the time, this results in large tables which aren't quite as audience-friendly as I would like them to be.
So, I use an object (circle or square) to highlight the part of the table I am speaking about. Using build in / build out, I am able to switch between different parts of the table. It works quite well.
But I am wondering if it is possible to let the highlighting object move around the screen in a few steps. I expect that to look a little cleaner than building out one object while the new one builds in.
Thanks in advance for your suggestions,
Rense Nieuwenhuis

There isn't a way to move an object along a path -- the best you could do would be to duplicate the object multiple times, and have these build in/build out at several points along the path between highlighted parts.
PowerMac G5   Mac OS X (10.4.4)  

Similar Messages

  • Using dbms_metadata to get definition of large table

    Hi,
    I have a very large table definition (the table has partitions and sub-partitions) and I want to create that table in another database.
    I am trying the spooled query below, but it's not retrieving the complete definition of the table. Are there any other settings I should use?
    spool t.sql
    set pagesize 0
    set long 2000000
    select dbms_metadata.get_ddl('TABLE', 'MY_TABLE', 'MY_SCHEMA') from dual;  -- table/owner names are placeholders
    spool off
    I am using Oracle 10g R2.
    Thanks,
    Pankaj

    Thank you all for your replies. Here are my observations after applying the suggested settings:
    With setting
    ==================
    set longchunksize 2000000
    set long 2000000
    set lines 2000
    set pages 0
    -sh-3.00$ cat t.sql | wc
    10819 23419 21635001
    =================
    With setting
    set long 100000
    set pages 0
    -sh-3.00$ cat t.sql | wc
    3452 7326 279269
    Result => neither setting worked for me :(
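    For reference, a minimal SQL*Plus sketch that addresses the two usual truncation causes at once: the LONG fetch size and line wrapping in the spool file (MY_TABLE and MY_SCHEMA are placeholders):

        set long 2000000
        set longchunksize 2000000
        set linesize 32767   -- wide enough that the spooled DDL is never wrapped
        set trimspool on     -- drop trailing blanks from each spooled line
        set pagesize 0
        spool t.sql
        select dbms_metadata.get_ddl('TABLE', 'MY_TABLE', 'MY_SCHEMA') from dual;
        spool off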

  • Efficiently Querying Large Table

    I have to join a recordset of 14k records against a very large table: billions of rows. Even a count on the large table does not return any result after 15 minutes.
    I tried a PL/SQL procedure that stores the first recordset in a temp table and then prepares two cursors: one on the temp table and the other on the large table.
    However, the PL/SQL procedure runs for a long time and then gives up with this error:
    SQL> exec match;
    ERROR:
    ORA-01041: internal error. hostdef extension doesn't exist
    BEGIN match; END;
    ERROR at line 1:
    ORA-03113: end-of-file on communication channel
    Is there a way I can query more efficiently?
    - Using chunks of records from the large table at a time: how would I do that? (ROWID?)
    - Or just ask the DBA to partition the table, though the whole table would still need to be queried.
    The temp table is:
    CREATE TABLE test AS
    SELECT a.mon_ord_no, a.mo_type_id, a.p2a_pbu_id, a.creation_date, b.status_date_time,
           a.expiry_date, a.current_mo_status_desc_id, a.amount,
           a.purchaser_name, a.recipent_name, a.mo_id_type_id,
           a.mo_redeemed_by_id, a.recipient_type, c.pbu_id, c.txn_seq_no, c.txn_date_time
      FROM mon_order a, mo_status b, host_txn_log c
     WHERE a.mon_ord_no = b.mon_ord_no
       AND a.mon_ord_no = c.mon_ord_no
       AND b.status_date_time = c.txn_date_time
       AND b.status_desc_id = 7
       AND a.current_mo_status_desc_id = 7
       AND a.amount IS NOT NULL
       AND a.amount > 0
     ORDER BY b.status_date_time;
    and the PL/SQL Procedure is:
    CREATE OR REPLACE PROCEDURE match
    IS
      -- declared in the original posting but never used
      deleted INTEGER := 0;
      counter INTEGER := 0;
      CURSOR v_table IS
        SELECT DISTINCT pbu_id, txn_seq_no, create_date
          FROM host_v
         WHERE status = 4;
      v_table_record v_table%ROWTYPE;
      CURSOR temp_table (v_pbu_id NUMBER, v_txn_seq_no NUMBER, v_create_date DATE) IS
        SELECT * FROM test
         WHERE pbu_id = v_pbu_id
           AND txn_seq_no = v_txn_seq_no
           AND creation_date = v_create_date;
      temp_table_record temp_table%ROWTYPE;
    BEGIN
      OPEN v_table;
      LOOP
        FETCH v_table INTO v_table_record;  -- was "voucher_table_record", which is never declared
        EXIT WHEN v_table%NOTFOUND;
        OPEN temp_table (v_table_record.pbu_id, v_table_record.txn_seq_no, v_table_record.create_date);
        LOOP
          FETCH temp_table INTO temp_table_record;
          EXIT WHEN temp_table%NOTFOUND;  -- was %FOUND, which would exit as soon as a row matched
          -- delete the matching rows; the original compared record fields to each
          -- other instead of naming the table's columns
          DELETE FROM test
           WHERE pbu_id = v_table_record.pbu_id
             AND txn_seq_no = v_table_record.txn_seq_no
             AND creation_date = v_table_record.create_date;
        END LOOP;
        CLOSE temp_table;
      END LOOP;
      CLOSE v_table;
    END match;
    /

    Many thanks,
    I can get the explain plan for the SQL statement, but I am not sure how to get it for the PL/SQL. Which section of the PL/SQL do I get the explain plan for? I am using SQL Navigator.
    I can create the cursor with the join, and if the delete statement is not needed, then there is no requirement for the procedure itself. Should I just run the query as a SQL statement?
    You have not said what I should do with the ROWID.
    Regards
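    A minimal set-based sketch, assuming the same table and column names as the procedure above; a single DELETE avoids both the row-by-row fetching and the long-held open cursors:

        DELETE FROM test t
         WHERE EXISTS (SELECT 1
                         FROM host_v v
                        WHERE v.status = 4
                          AND v.pbu_id = t.pbu_id
                          AND v.txn_seq_no = t.txn_seq_no
                          AND v.create_date = t.creation_date);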

  • Optimum resolution, dimensions and dpi for scanning photographic slides for use in a Keynote presentation

    I would appreciate it if someone could advise me as to the optimum resolution, dimensions and dpi for actual photographic slides that I am scanning for use in a Keynote Presentation, that will be projected in a large auditorium. I realize that most projectors in auditoriums that I will be using have 1024 x 1200 pixels, and possibly 1600 x 1200. There is no reference to this issue in the Keynote Tutorial supplied by Apple, and I have never found a definitive answer to this issue online (although there may be one).
    Here’s my question: when scanning my photographic slides, what setting, from 72 dpi to 300 dpi, would give the best image quality while using storage space most efficiently?
    Here’s what two different photo slide scanning service suppliers have told me:
    Supplier No. 1 tells me that they can scan slides to a size of 1544 x 1024 pixels, at 72 dpi, which will be 763 KB, and they refer to this as low resolution (a JPEG). However, I noticed when I looked at these scanned slides, the size of the slides varied, with a maximum of 1.8 MB. This supplier says that the dpi doesn’t matter when it comes to the quality of the final digital image, that it is the dimensions that matter.  They say that if they scanned a slide to a higher resolution (2048 x 3072), they would still scan it at 72 dpi.
    Supplier No. 2: They tell me that in order to have a high quality image made from a photographic slide (starting with a 35 mm slide, in all cases), I need to have a “1280 pixel dimension slide, a JPEG, at 300 dpi, that is 8 MB per image.” However, this supplier also offers, on its list of services, a “Standard Resolution JPEG” (4 MB file/image – 3088 x 2048), as well as a “High Resolution JPEG” (8 MB file/image – 3088 x 2048).
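    For reference, the relationship behind both suppliers' numbers is simply pixels = inches x dpi, and once a slide is scanned, only the pixel dimensions matter on screen; the dpi tag in the file header is metadata for printing. A rough worked example (frame size approximate):

        pixels = frame size (inches) x scanner dpi
        A 35 mm frame is about 1.42 in x 0.95 in, so scanning at roughly
        2160 dpi yields about 3072 x 2048 pixels -- the size of Supplier
        No. 2's files. Whether the header then says 72 dpi or 300 dpi
        changes nothing about how the image fills a 1024 x 768 projector.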
    I will be presenting my Keynotes with my MacBook Pro, and will not have a chance to try out the presentations in advance, since the lecture location is far from my home, so that is not an option. 
    I do not want to use up more memory than necessary on my laptop.  I also want to have the best quality image. 
    One more question: When scanning images myself, on my own scanner, for my Keynote presentations, would I be better off scanning them as JPEGs or TIFFs? I have been told that a TIFF is better because it is less compressed. 
    Any enlightenment on this subject would be appreciated.
    Thank you.

    When it comes to Keynote, I try and start with a presentation that's 1680 x 1050 preset or something in that range.  Most projectors that you'll get at a conference won't project much higher than that and if they run at a lower resolution, it's better to have the device downsize your Keynote.  Anything is better than having the projector try and upsize your presentation... you work hard to make it look good, and it's mangled by some tired Epson projector.
    As far as slides go, scan them in at 150 dpi or better, and make them at least the dimensions of your presentation.  Keynote is really only wanting 72dpi, but I do them at 150, just in case I need to print out the presentation as a handout later, and having the pix at 150 dpi gives me a little help with their quality on a printer.
    You'd probably have to drop in the 150 versions again if you output the Keynote to .pdf or Word or something, but at least you have the option.
    And Gary's right (above) go ahead and scan them as TIFFs.  Sooner or later you'll want to do something else with these slides (like make something for an iPad or the like) and having them as TIFFs keeps your presentation looking good.
    Finally, and this is a big one, get to the location for your presentation ahead of time if you can, and plug the laptop in and see what you get.  There are always connection problems. Don't let the AV bonehead tell you everything will work just fine ('... I don't have any adapters for a Mac...').  See it for yourself... you're the one that's standing up there.  Unless it's your boss, then you'd better be really sure it works.

  • How do I open and use a large table from Word in Pages?

    I upgraded my MBP from Snow Leopard to Mountain Lion a couple days ago.  I knew that my old Word application wouldn't work, but several Apple people assured me that Pages could handle my old docs, including tables.
    So, I purchased and installed Pages '09 this morning.  I opened the table I use all the time -- and most of it is missing!  Apparently, Pages doesn't handle large tables.
    I need help - desperately!  This table contains all my medical expenses for the year, so I have to have it.
    Thanks

    J,
    Besides needing to have your Table Object Floating, you should know that the maximum number of rows in Pages Tables is 999.
    If you continue to have problems opening your Word document and its table in Pages, try one of the free Office Clones, LibreOffice, OpenOffice, IBM Lotus Symphony, etc.
    I'd bet that at least one of those free apps will work if Pages doesn't. By the way, have you tried viewing your Word document in Quick Look? To do that, click on the filename in Finder and hit the Spacebar key. You won't be able to do anything but view the document in Quick Look, but it would give you confidence that your file is OK.
    Jerry

  • Using workspaces with large tables

    Hello
    I've got a few large tables (6-10 GB+) that will have around 500k new rows added on a daily basis as part of an overnight batch job. No rows are ever updated, only inserted or deleted and then re-inserted. I want to change the process that adds the new rows from an overnight batch to a near real-time process, i.e. a queue will be populated with requests to rebuild the content of these tables for specific parent IDs, and a process will consume those requests throughout the day rather than going through the whole list in one go.
    I need to provide views of the data as of a point in time, i.e. what was the content of the tables at close of business yesterday, and for this I am considering using workspaces.
    I need to keep at least 10 days' worth of data, and I was planning to partition the table and drop one partition every day. If I use workspaces, I can see that Oracle creates a view in place of the original table and creates a versioned table with the _LT suffix - this is the table name returned by DBMS_WM.GetPhysicalTableName. Would it be considered bad practice to drop partitions from this physical table as I would do with a non-version-enabled table? If so, what would be the best method for dropping off old data?
    Thanks in advance
    David
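    A minimal sketch of the Workspace Manager calls being described, with placeholder names (MY_TABLE, MY_SCHEMA); treat the exact signatures as assumptions to verify against your version's documentation:

        -- Version-enable the table: Oracle renames it to MY_TABLE_LT and
        -- puts a view called MY_TABLE in its place.
        EXECUTE DBMS_WM.EnableVersioning('MY_TABLE');
        -- Report which physical table now holds the versioned rows.
        SELECT DBMS_WM.GetPhysicalTableName('MY_SCHEMA', 'MY_TABLE') FROM dual;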

    I've just spotted the workspace manager forum, I'll post there. :-)

  • How to rotate a large table

    I'm building a long technical report with Pages'08.
    There are many tables in this report.
    This document is in portrait format.
    In the middle of this document I have a particularly large
    table which can't be read if I try to stay on a portrait
    presentation of this table: too many columns (15).
    Hence I'd like to find an easy way to either rotate
    the page or the table so as to be able to use larger
    columns.
    I discovered that Pages '08 doesn't permit putting a single
    page in landscape format. I also abandoned the idea of
    using 3 different documents (part 1 in portrait,
    part 2 in landscape, part 3 in portrait again):
    the paragraph numbering has to run continuously,
    and I have to make a table of contents at the end of
    this technical report.
    What is the most efficient way to manage this large
    table?
    Word lets me do this, but unfortunately it also
    makes me spend too much time on other simple and basic functions.
    Is Pages '09 better at this basic and frequent need (at least for my job)?
    <pre>--------
    As long as you'll see students making graphics with pen on paper,
    you'll see the missing keystone of the software empire.
    dan</pre>

    Peggy wrote:
    You can rotate a floating table, but it can be a problem if you need to edit the table. It will auto-rotate to portrait for editing, but it can be difficult to see or get to the outside edges. I find it easiest to copy & paste the table into a landscape document, then copy it back after editing.
    Thank you for the nice hint.
    I finally chose to work on a temporary document in A3 format,
    and keep it open so as to be able to quickly copy my table into the
    main document every time I update it.
    During this copy operation I noticed an annoying problem:
    as the text column in my main document is slightly narrower than my table,
    Pages decides to shrink it every time, and I can't recover its
    original size (which I painstakingly tuned in my A3 document).
    Hence all the cell contents are partially hidden.
    The button:
    Inspector > Metrics > Original Size
    is disabled.
    Do you know how to circumvent this bad habit Pages has of resizing my
    imported table?
    <pre>--------
    As long as you'll see students making graphics with pen on paper,
    you'll see the missing keystone of the software empire.
    dan</pre>

  • OutOfMemory error when trying to display large tables

    We use JDeveloper 10.1.3. Our project uses ADF Faces + EJB3 Session Facade + TopLink.
    We have a large table (over 100K rows) which we try to show to the user via an ADF Read-only Table. We build the page by dragging the facade findAllXXX method's result onto the page and choosing "ADF Read-only Table".
    The problem is that during execution we get an OutOfMemory error. The Facade method attempts to extract the whole result set and to transfer it to a List. But the result set is simply too large. There's not enough memory.
    Initially, I was under the impression that the table iterator would be running queries that automatically fetch just a chunk of the db table data at a time. Sadly, this is not the case. Apparently, all the data gets fetched. And then the iterator simply iterates through a List in memory. This is not what we needed.
    So, I'd like to ask: is there a way for us to show a very large database table inside an ADF Table? And when the user clicks on "Next", to have the iterator automatically execute queries against the database and fetch the next chunk of data, if necessary?
    If that is not possible with ADF components, it looks like we'll have to either write our own component or simply use the old code that we have which supports paging for huge tables by simply running new queries whenever necessary. Alternatively, each time the user clicks on "Next" or "Previous", we might have to intercept the event and manually send range information to a facade method which would then fetch the appropriate data from the database. I don't know how easy or difficult that would be to implement.
    Naturally, I'd prefer to have that functionality available in ADF Faces. I hope there's a way to do this. But I'm still a novice and I would appreciate any advice.

    Hi Shay,
    We do use search pages and we do give the users the opportunity to specify search criteria.
    The trouble comes when the search criteria are not specific enough and the result set is huge. Transferring the whole result set into memory would be disastrous, especially for servers used by hundreds of users simultaneously. So, we'll have to limit the number of rows fetched at a time. We should do this either by setting the Maximum Rows option for the TopLink query (or using rownum <= XXX inside the SQL), or by using a data provider that supports paging.
    I don't like the first approach very much because I don't have a good recipe for calculating the optimum number of Maximum Rows for each query. By specifying some average number of, say, 500 rows, I risk fetching too many rows at once and I also risk filling the TopLink cache with objects that are not necessary. I can use methods like query.dontMaintainCache() but in my case this is a workaround, not a solution.
    I would prefer fetching relatively small chunks of data at a time and not limiting the user to a certain number of maximum rows. Furthermore, this way I won't fetch large amounts of data at the very beginning and I won't be forced to turn off the caching for the query.
    Regarding the "ADF Developer's Guide", I read there that "To create a table using a data control, you must bind to a method on the data control that returns a collection. JDeveloper allows you to do this declaratively by dragging and dropping a collection from the Data Control Palette."
    So, it looks like I'll have to implement a collection which, in turn, implements the paging functionality that I need. Is the TopLink object you are referring to some type of collection? I know that I can specify a collection class that TopLink should use for queries through the query.useCollectionClass(...) method. But if TopLink doesn't provide the collection I need, I will have to write that collection myself. I still haven't found the section in the TopLink documentation that says what types of Collections are natively provided by TopLink. I can see other collections like oracle.toplink.indirection.IndirectList, for example. But I have not found a specific discussion on large result sets with the exception of Streams and Cursors and I feel uneasy about maintaining cursors between client requests.
    And I completely agree with you about reading the docs first and doing the programming afterwards. Whenever time permits, I always do that. I have already read the "ADF Developer's Guide" with the exception of chapters 20 and 21. And I switched to the "TopLink Developer's Guide" because it seems that we must focus on the model. Unfortunately, because of the circumstances, I've spent a lot of time reading and not enough time practicing what I read. So, my knowledge is kind of shaky at the moment and perhaps I'm not seeing things that are obvious to you. That's why I tried using this forum -- to ask the experts for advice on the best method for implementing paging. And I'm thankful to everyone who replied to my post so far.

  • Using Hierarchy levels in Webi charts and tables

    Gurus,
    I am building a webi report that incorporates our HR employee structure.
    The Webi report is told to bring back data for levels 1 and 2 of the hierarchy for the employee ID requesting the data.
    scenario
    Employee A
         Employee A1
              Emp A1a
              Emp A1b
         Employee B1
              Emp B1a
              Emp b1b
    If report runs for A, then return A, A1, B1.
    If reports runs for A1, then return A1, A1a, A1b
    etc.
    If I use the hierarchy in the table of results for A1, I can get everything looking nice.  However, if I then run for A, I get one line in the table: A's data.
    No hierarchy is present in the table.  Sometimes the table is blank.
    The same is true when running for B1.
    We are running the initial 4.0 SP4 build.  Has anyone seen this? Do you know if it's corrected in later builds?
    thanks
    jim

    This is BW data. 
    No need to restrict in Webi; the BW data is only 2 levels.  It is just a question of which 2 levels...
    In the example above:
    The whole hierarchy is 3 levels; however, when the user authorization runs, BW is asked to return levels 1 and 2 based on the user ID.  So technically it is sometimes levels 1 and 2, sometimes levels 2 and 3, and sometimes only level 3 (if the lowest-level user ID is run).
    I am thinking the level selector does not know how to handle this movement up and down the hierarchy.
    jim

  • Performance of large tables with ADT columns

    Hello,
    We are planning to build a large table (1 billion+ rows) with one of the columns being an Advanced Data Type (ADT) column. The ADT column will be based on a TYPE with approximately 250 attributes.
    We are using Oracle 10g R2
    Can you please tell me the following:
    1. How will Oracle store the data in the ADT column?
    2. Will the entire ADT record fit in one block?
    3. Is it still possible to partition a table on an attribute that is part of the ADT?
    4. How will the performance be affected if Oracle does a full table scan of such a table?
    5. How much space will Oracle take, if any, for storing a NULL in an ADT?
    I think we can create indexes on the attribute of the ADT column. Please let me know if this is not true.
    Thanks for your help.
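    A minimal sketch of the "column object" setup being discussed, with hypothetical names; the last statement is the kind of attribute index the poster asks about:

        CREATE TYPE order_info_t AS OBJECT (
          attr1 NUMBER,
          attr2 VARCHAR2(30)
          -- ... up to ~250 attributes in the scenario above
        );
        /
        CREATE TABLE big_orders (
          id   NUMBER PRIMARY KEY,
          info order_info_t
        );
        -- Index on one attribute of the column object; note the table alias.
        CREATE INDEX big_orders_attr1_ix ON big_orders b (b.info.attr1);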

    I agree with D.Morgan that an object type with 250 attributes is doubtful.
    I don't like object tables (tables with "row objects") either.
    But your table is a relational table with an object column ("column object").
    C.J.Date in An introduction to Database Systems (2004, page 885) says:
    "... object/relational systems ... are, or should be, basically just relational systems
    that support the relational domain concept (i.e., types) properly - in other words, true relational systems,
    meaning in particular systems that allow users to define their own types."
    1. How will Oracle store the data in the ADT column?...
    For some answers see:
    “OR(DBMS) or R(DBMS), That is the Question”
    http://www.quest-pipelines.com/pipelines/plsql/tips.htm#OCTOBER
    and (of course):
    "Oracle® Database Application Developer's Guide - Object-Relational Features" 10g Release 2 (10.2)
    http://download-uk.oracle.com/docs/cd/B19306_01/appdev.102/b14260/adobjadv.htm#i1006903
    Regards,
    Zlatko

  • Large tables truncated or withheld from webhelp

    I'm running into a major issue trying to include a large table in my WebHelp build.  I'm using RoboHelp 8 in Word.  When I include a large table (6 columns x 180 rows), the table is either truncated or withheld from the compiled WebHelp.
    I've tried several things to resolve it, but they all end in the same result.  I've tried importing the table from its original Word file.  I've tried breaking it up into many smaller tables.  I've tried building a new table in Word, then copying the data in.  Oddly enough, if I build the table blank and compile, the table appears.  But once I copy data into the table, it disappears.
    RoboHelp seems unable to process the table: when I've broken the single table into several smaller tables, it chokes and doesn't include the table, or even put the topic in the TOC, even though it is in the source file.
    Any ideas?  I've not been able to find anything in the forums or anywhere else online. 
    Many thanks!

    Can you tell us what you mean by "using RoboHelp in Word"? Do you mean you are using it as your editor, or that you are using the RoboHelp for Word application? If the latter, is there a reason why you can't use the RoboHelp HTML application? This is much more suited to producing WebHelp. Personally I wouldn't touch the HTML that Word creates with a bargepole.

  • Large table

    Hi, I will have a large table with 1 million records inserted every day. The table schema has a couple of CLOBs. We are going to keep 1 year's data, so the table will eventually have about 365 million records. I have the following questions.
    1) Can this amount of data be handled by Oracle?
    2) Is there anything I can do for performance tuning, like partitioning, etc.?
    3) Any other advice for large tables? Any recommended online documents/articles?
    Thanks,
    Zhe
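    A minimal sketch of the usual approach for this insert-and-age-out pattern, assuming a date column and 11g-style interval partitioning (on 10g you would pre-create one range partition per day); all names are hypothetical:

        CREATE TABLE event_log (
          event_id   NUMBER,
          event_date DATE NOT NULL,
          payload    CLOB
        )
        PARTITION BY RANGE (event_date)
        INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
        (PARTITION p_start VALUES LESS THAN (DATE '2011-01-01'));

        -- Ageing out a day's data is then a quick metadata operation rather
        -- than a huge DELETE:
        --   ALTER TABLE event_log DROP PARTITION <partition_name>;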

    Philippe Florent wrote:
    You work with RAC and Linux. Do you think this OS is mature enough to build an HA solution? I got as many opinions as people I asked. I don't expect a definitive answer of course, just thoughts...
    Well, I am a bit biased, as I have used Linux since the pre-1.0 kernel days. :-)
    We have 5 RAC clusters - all running x86_64 h/w and Linux as the o/s. Linux today is equivalent to any other *nix system, from HP-UX to Solaris, and takes no backseat in terms of performance, flexibility and scalability. There seems to me to be a lot of FUD around Linux. Amongst the old *nix hands, it is seen as a snotty-nosed and immature new kid on the block. This is helped along by marketing propaganda from companies like Microsoft.
    Last year local Oracle asked me to do a Q&A session with one of their customers as they were looking at RAC and wanted answers from a user/customer perspective and not Oracle sales. Halfway through it we got to talking operating systems - and I was surprised to still find this idea that Linux is a hobbyist/immature type operating system and not comparable to a "real' o/s like Solaris or HP-UX for example.
    There's a single fact that I use to try and counter this idea about Linux not being ready for running big commercial systems. The question: "What o/s do the vast majority of the 500 largest and fastest computer systems on this planet use?" The answer: Linux (http://www.top500.org/stats/list/35/osfam).
    455 of the world's fastest 500 computer clusters use Linux. That is 91%, up from around 84% this time last year. Not one of these runs HP-UX. Only 5 run Microsoft Windows, 2 run OpenSolaris, and 19 run AIX.
    So if Linux is the predominant choice of o/s for these large clusters, why would it not be "good enough" for a (many times smaller) corporate cluster? In these large clusters, everything is magnified: performance, administration, support, flexibility, robustness and stability, costs, scalability, etc. And 91% of the time, Linux was selected to address these requirements and concerns.
    So I like to reverse the question and instead ask: "What are your reasons for not using Linux, and are they valid ones?" :-)

  • Large table, primary key constraint

    I have migrated a table from 8i to 9i that is over 300 million rows. I migrated the table to the 9i database without constraints or indexes.
    I have successfully created a composite index of two columns, t1 varchar2(512), t2 varchar2(32). This index took nearly 16 hours to create.
    I am now trying to create a primary key based on that index with the following sql:
    alter table table1
    add constraint table1_t1_t2_pk primary key(t1,t2)
    using index table1_t1_t2_idx
    nologging;
    This process has taken over 24 hours and is well into the second day. Studio reports it will take an additional 15 hours to create.
    My questions are these:
    1. Is my syntax okay?
    2. I thought that by creating a primary key on an existing index, another index would not be created, and that it would be faster this way. Why is it taking a lot longer to create than the index it is based upon?
    3. Is there a more efficient method (other than parallel query) to create this index/constraint on such a large table? What happens when I go to production and need to recreate this index after a failure? I have never had to do this before. I can't be down for 48 hours to create an index. What other alternatives do I have?
    The table is partitioned…

    Is INDEX table1_t1_t2_idx UNIQUE? If it's not, that might explain why building the primary key constraint takes longer.
    I think the USING INDEX clause with an existing index is intended mainly to let different UNIQUE constraints share the same index. In your situation I think you would be better off just building the primary key constraint.
    Cheers, APC
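    A minimal sketch of that suggestion, assuming (t1, t2) really is unique; a UNIQUE index built up front (optionally in parallel) lets the constraint validate against it instead of redoing the work:

        -- Build the index unique first; NOLOGGING/PARALLEL speed up the build.
        CREATE UNIQUE INDEX table1_t1_t2_idx ON table1 (t1, t2)
          NOLOGGING PARALLEL 8;
        -- The primary key can then piggyback on the existing unique index.
        ALTER TABLE table1
          ADD CONSTRAINT table1_t1_t2_pk PRIMARY KEY (t1, t2)
          USING INDEX table1_t1_t2_idx;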

  • Item Classes - large tables - query timeouts!!!

    Hi,
    I've been building a series of Item Classes in the Discoverer 4.1 Admin Tool on a reasonably large set of tables, but some of the Item Classes time out before the list of values is retrieved.
    Discoverer continually tells me to go and set values for the Query Governor, but I've already done this (15-minute warning, 60-minute timeout).
    Does the Admin Tool have another place to configure the Query Governor, other than Tool -> Privileges -> Query Governor tab?
    I've been able to fool the database by running a SQL*Plus query first (resulting in a cached result set) for the smaller tables, but the Item Classes based on the larger table still will not return.
    I've reviewed the 'LOV that works more like the LOV in Oracle Forms' thread, and the Database Function and Custom Folder solutions would work, but I'd prefer to fix the problem at its source. Is it possible Discoverer requires an index on every column used as an item class?
    Can anyone help me?
    thanks,
    Lance

    Have a look at the timeout value specified in the registry key:
    \\HKEY_CURRENT_USER\Software\ORACLE\Discoverer\Database\ItemClassDelay
    By default it is 20 seconds.
    Metalink Note refers:
    http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=1040079.6
    You're probably safer using your lookup tables to generate LOVs anyway, since you have no guarantee that an LOV generated from a particular field in your table will contain all of the possible values available in the lookup table.

  • Postgres' LIMIT .. OFFSET for large table

    Hi!
    I have a really large table (some millions of rows) which I'd like to present on a web page. I let the user choose a limit, say 25 lines per page, and present some buttons to go one page forward or backwards.
    Some years ago, I have done this using PostgreSQL. There's an easy way to do it using LIMIT .. OFFSET. In Oracle, there's no such functionality.
    Currently, my 'workaround' looks like this (a bit more complex in reality):
    SELECT *
      FROM (SELECT ROW_NUMBER() OVER (ORDER BY MSG_RCV_TIME DESC) AS ROWNO,
                   TO_CHAR(MSG_RCV_TIME) AS MSG_RCV
              FROM MSG_TABLE
             ORDER BY MSG_RCV_TIME DESC)
     WHERE ROWNO BETWEEN 1 AND 10;
    This gives back 10 rows, which does the job. The problem is: it takes AGES! The web server runs into a timeout before even printing one line. First, Oracle has to read all x*1'000'000 rows just to sort out the ones it doesn't need. That can't be the solution, can it?
    In this forum, I have read a few notes about PARTITION, CURSOR and such things, but I didn't really get what the use of it is.
    Any hints on that? This forum is based on Oracle, too (I hope), and it's fast. There must be a solution for this.
    Btw, the table I am talking about is being filled by syslog-ng, and it currently grows by 200MB per day (and it's still in the testing phase). I expect some hundred million lines to be present later.
    Thanks a lot in advance
    André
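    A minimal sketch of the classic ROWNUM top-N pagination (bind names are illustrative). Because the ROWNUM filter sits inside the query, Oracle can stop fetching after :last_row rows instead of numbering the whole table, and with a descending index on MSG_RCV_TIME it may avoid the sort entirely:

        SELECT *
          FROM (SELECT q.*, ROWNUM AS rn
                  FROM (SELECT TO_CHAR(MSG_RCV_TIME) AS MSG_RCV
                          FROM MSG_TABLE
                         ORDER BY MSG_RCV_TIME DESC) q
                 WHERE ROWNUM <= :last_row)   -- e.g. 10
         WHERE rn >= :first_row;              -- e.g. 1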

    "See Tom Kyte's site for this."
    Cool. Didn't know this one. How is he checking the performance of the queries?
    "The one comment in there that I entirely agree with is that such large result sets are meaningless to the human eye, so I would question exactly what you are trying to achieve. As Tom rightly says, nobody is ever going to scroll down to rows 999001 - 999010, even if they could."
    Of course not. But you see, as an example, that if you type just one word into Google's mask, it returns loads of pages. As soon as you see that your query was not really a good one, you try with more specific words, and it returns fewer pages. That's exactly what my GUI is going to do. First, it gives you an overview; then, it lets you refine the search.
    Anyway: as soon as I limit the output in the innermost query, I doubt it's useful. Say I limit the number of rows to browse through to 1000, but syslog-ng is producing 2000 rows per minute - you'll miss the rows you were maybe looking for.
    It's essential to be able to see all the records. I don't mind if nobody ever looks at pages 200'000 to 1'000'000.
    Thanks again for the great link.
    André (who really starts to like Oracle and its community)

Maybe you are looking for

  • Is there any way to package a project to move it to another computer?

    I've got FCE4 projects on one computer but I need to be able to open and work on them on my MBP as well. Currently, I've got the projects in Dropbox, but when I open them, the source media files all need to be reconnected. I guess I could painstaking

  • Material master upload through flat file using the BAPI_MATERIAL_SAVEDATA

    Hi Guys, I need to upload the material master using the BAPI, I need to update the all the views in the material master, Could any one can help please? I using EXCEL file is input file and suggest me the, excel file format, if could you suggest it wo

  • ROW TO COLUMN in Oracle 10G

    Hi all, I have requirement to display the values in a single column base upon the empid ex: select * from emp where empid=1000 then actual result will come as empid mgr dept sal 1000 10    10    1000 1000 20    20    2000 2000 10   10    1500 2000 20

  • Line item number per invoice

    Hi Gurus, i need to display the line item number in the report as per the FB03 tcode per a document i have a query on a multi provider which includes 0FI_AP_4(FIAP) and 0FI_TX_4(FI TAX) cubes iam not getting the correct lineitem value as per FB03 tco

  • How do with change table cells from staticText1 to button1 in run time?

    I have two question: first: I think change table's cells from staticText to button in run time? how do? second: I think change table column's order in run time?how do? ex: =============change before=========== name age wtu 22 =============chnage afte