Lookup from a large table?

Hi, All
Very new to APEX. If I have a very large table of securities and want the user to be able to look up a security in a number of different ways (name, CUSIP or ticker), is there a way to do this in APEX? When they are faced with entering a "security id", they need a way to find and retrieve the correct one.
Thanks

Hi
Yes, this is very simple.
Essentially, you create a number of items where they can enter the details.
In your case, let's call them P1_SEC_ID, P1_NAME, P1_CUSIP and P1_TICKER.
Next create a button called GO (or whatever you want) that branches back to the same page.
Now create a report region with a source something like this...
SELECT *
FROM my_big_table
WHERE INSTR(UPPER(sec_id),UPPER(:P1_SEC_ID)) > 0
OR    INSTR(UPPER(name),UPPER(:P1_NAME)) > 0
OR    INSTR(UPPER(cusip),UPPER(:P1_CUSIP)) > 0
OR    INSTR(UPPER(ticker),UPPER(:P1_TICKER)) > 0
Next, make the report region conditional on the request value being 'GO' and this should work for you.
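One refinement worth considering (just a sketch, reusing the same item names; this is not from the original reply): if you want every criterion the user actually entered to apply together, skip each filter when its item is blank instead of ORing them:
SELECT *
FROM my_big_table
WHERE (:P1_SEC_ID IS NULL OR INSTR(UPPER(sec_id), UPPER(:P1_SEC_ID)) > 0)
AND   (:P1_NAME   IS NULL OR INSTR(UPPER(name),   UPPER(:P1_NAME))   > 0)
AND   (:P1_CUSIP  IS NULL OR INSTR(UPPER(cusip),  UPPER(:P1_CUSIP))  > 0)
AND   (:P1_TICKER IS NULL OR INSTR(UPPER(ticker), UPPER(:P1_TICKER)) > 0)
With the OR version, a single matching field is enough; with this AND version, blank items are ignored and every filled item must match.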
Cheers
Ben

Similar Messages

  • 4 lookups against single large table

I need to do a 4-column lookup against a large table (1 million rows) that contains 4 different record types. The first lookup will match on columns A, B, C, and D. If no match is found, I try again with columns A, B, C, and '99' in column D. If no match, I try again with columns A, B, D, and '99' in column C. Finally, if there is no match in any of the above, I use column A, '99' in B, '99' in C, '99' in D. I will retrieve 2 columns from the lookup table.
    My thought is that breaking this sequence out into 4 different tables/ lookups would be most efficient. The other option would be to write a script that handled this logic in a single transform with an in-memory table.  My concern is that the size of the table would be too large to load into memory.
    Any ideas/suggestions would be appreciated.  
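(One way to express this tiered fallback in a single pass - a sketch only, with hypothetical names: a lookup table lookup_tab with key columns a, b, c, d, result columns r1 and r2, and bind variables :a, :b, :c, :d for the incoming row - is to rank the candidate matches and keep the best one:
SELECT r1, r2
  FROM ( SELECT r1, r2,
                CASE
                  WHEN b = :b   AND c = :c   AND d = :d   THEN 1  -- full match on A, B, C, D
                  WHEN b = :b   AND c = :c   AND d = '99' THEN 2  -- fall back on D
                  WHEN b = :b   AND c = '99' AND d = :d   THEN 3  -- fall back on C
                  WHEN b = '99' AND c = '99' AND d = '99' THEN 4  -- generic '99' row
                END AS match_rank
           FROM lookup_tab
          WHERE a = :a
            AND (b = :b OR b = '99')
            AND (c = :c OR c = '99')
            AND (d = :d OR d = '99')
          ORDER BY match_rank )
 WHERE match_rank IS NOT NULL
   AND ROWNUM = 1;
An index on (a, b, c, d) lets this probe a handful of rows instead of doing four separate lookups.)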

Hi all, Jamie, glad you're looking at this post. I have a spreadsheet containing about 26K rows and a warehouse fact table containing 10 million rows (after being filtered). I want to pull back only the fact table rows that match the ID field in the spreadsheet,
and thus extend the 26K rows in the spreadsheet with the fields from the fact table. Pretty basic join, ID = ID. To make it interesting/challenging I'm using Excel 2010 32-bit: no issue with 26K rows, but there is no way I can even browse 10 million records without a timeout.
It seems PQ can use SQL functions, so I was wondering if that might be an option to do the work on the server, i.e. fn_ReturnRow(@ID). I haven't seen an example of this being used anywhere, so perhaps you, Chris Webb or one of the other PQ gurus could lend a hand?
This may also help the original poster with some more functions to return rows that would normally make PQ choke, hopefully with minimal M. And by the way, is there a way to use SQL stored procedures in PQ? Cheers
    Geoff Fane

  • Retrieve data from a large table from ORACLE 10g

I am working on a Microsoft Visual Studio project that requires retrieving data from a large table in an Oracle 10g database and exporting the data to the hard drive.
The problem here is that I am not able to connect to the database directly because of a license issue, but I can use a third-party API to retrieve data from the database. This API has sufficient privilege/license permission on the database to perform retrieval of data. So I am not able to use DTS/SSIS or another tool that imports data by connecting to the database directly.
My approach here is to first retrieve the data using the API into a .NET DataTable and then dump the records from it to the hard drive in a specific format (perhaps an Excel file or another SQL Server database).
When I try to retrieve the data from a large table having over 13 lakh (about 1.3 million) records (3-4 GB) into a DataTable using the Visual Studio project, I get an Out of Memory exception.
But is there any better way to retrieve the records chunk by chunk and do the export without losing the state of the data in the table?
Any help on this problem will be highly appreciated.
    Thanks in advance...
    -Jahedur Rahman
    Edited by: Jahedur on May 16, 2010 11:42 PM

Girish... Thanks for your reply... But I am sorry for the confusion. Let me explain...
    1."export the data into another media into the hard drive."
What do you mean by this line, i.e. "another media into hard drive"?
    ANS: Sorry...I just want to write the data in a file or in a table in SQL server database.
    2."I am not able to connect to the database directly because of license issue"
Huh?? I have never heard of a user not being able to connect to the DB because of a license. What error/message are you getting?
ANS: My company uses a 3rd-party application that uses Oracle 10g. My company is licensed to use the 3rd-party application (app + database is a package) and did not purchase an Oracle license to use the database directly. So I cannot connect to the database directly.
3. I am not sure which API you are talking about, but I am running an application with a Visual Studio data grid or similar kind of control, in which I can select (with a select query) as many rows as I need; no issue.
    ANS: This API is provided by the 3rd party application vendor. I can pass a query to it and it returns a datatable.
    4."better way to retrieve the records chunk by chunk and do the export without loosing the state of the data in the table?"
    ANS: As I get a system error (out of memory) when I select all rows in a datatable at a time, I wanted to retrieve the data in multiple phases.
    E.g: 1 to 20,000 records in 1st phase
    20,001 to 40,000 records in 2nd phase
40,001 to ...... records in 3rd phase
    and so on...
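(For reference, one way to fetch such phases from Oracle 10g with a plain query - a sketch only, assuming the source table has an indexed unique key column, here called ID - is to number the rows once and bind the phase boundaries:
SELECT *
  FROM ( SELECT t.*, ROW_NUMBER() OVER (ORDER BY id) AS rn
           FROM big_table t )
 WHERE rn BETWEEN :lo AND :hi;   -- phase 1: 1-20000, phase 2: 20001-40000, ...
Each phase still reads up to :hi rows, so if that gets too slow, remembering the last ID fetched and restarting the next query from WHERE id > :last_id is usually cheaper.)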
    Please let me know if this does not clarify your confusions... :)
    Thanks...
    -Jahedur Rahman
    Edited by: user13114507 on May 12, 2010 11:28 PM

  • Deleting rows from very large table

    Hello,
    I need to delete rows from a large table, but not all of them, so I can't use truncate. The delete condition is based on one column, something like this:
    delete from very_large_table where col1=100;
There's an index (valid, B-tree) on col1, but the delete still runs very slowly. Is there anything that can help delete the rows faster?
    Txh in adv.
    A.

Your manager doesn't agree to your running an EXPLAIN PLAN? What is his objection? Sounds like the prototypical 'pointy-haired boss'.
    Take a look at these:
-- do_explain.sql
spool explain.txt
-- do EXPLAIN PLAN on target queries with current index definitions
truncate table plan_table;
set echo on
explain plan for
<insert query here>
set echo off
@get_explain.sql
spool off
-- get_explain.sql
set linesize 120
set pagesize 70
column operation   format a25
column query_plan  format a35
column options     format a15
column object_name format a20
column order       format a12
column opt         format a6
select lpad(' ', level) || operation "OPERATION",
       options "OPTIONS",
       decode(to_char(id), '0', 'COST = ' || nvl(to_char(position), 'n/a'), object_name) "OBJECT NAME",
       cardinality "rows",
       substr(optimizer, 1, 6) "OPT"
from   plan_table
start  with id = 0
connect by prior id = parent_id;
    There are probably newer, better ways, but this should work with all living versions of Oracle and is something I've had in my back pocket for several years now. It's not actually executing the query or dml in question, just running an explain plan on it.
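On 9i and later there is indeed an easier way to read the same information: DBMS_XPLAN. A quick sketch, using the delete from the question above purely as an example:
explain plan for
delete from very_large_table where col1 = 100;
select * from table(dbms_xplan.display);
The plan will show whether the index on col1 is actually being used, which is usually the first thing to check when a filtered delete is slow.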

  • SELECTing from a large table vs small table

I posted a question a few months back about the comparison between INSERTing into a large table vs a small table (fewer rows), in terms of time taken.
The general consensus seemed to be that it would be the same, except for the time taken to update the index (which will be negligible).
1. But now, following the same logic, I'm confused about why SELECTing from a large table should be more time-consuming ("expensive") than SELECTing from a small table
(SELECTing using an index).
My understanding of how Oracle works internally is this:
It will first locate the ROWID from the B-tree that stores the index.
(This operation is O(log N), based on the B-tree.)
The ROWID essentially contains the file pointer offset of the location of the data on disk.
And Oracle simply reads the data from the location it deduced from the ROWID.
But then the only variable I see is searching the B-tree, which should take O(log N) time for comparison (N = number of rows).
Am I correct above?
2. Also, I read that tables are partitioned for performance reasons. I read about various partition mechanisms, but I cannot figure out how partitioning can result in a performance improvement.
Can somebody please help?

user597961 wrote:
... why SELECTing from a large table should be more expensive than SELECTing from a small table (SELECTing using an index) ... Am I correct above? ... cannot figure out how partitioning can result in performance improvement ...
It's not going to be that simple. Before your first step (locate the ROWID from the index), Oracle will first evaluate various access plans - potentially thousands of them - and choose the one that it thinks will be best. This evaluation will be based on the number of rows it anticipates having to retrieve, whether or not all of the requested data can be retrieved from the index alone (without even going to the data segment), and so on. For each consideration it makes, you start with "all else being equal". Then figure there will be dozens, if not hundreds or thousands, of these "all else being equal". Then once the plan is selected and the rubber meets the road, we have to contend with the fact that "all else" is hardly ever equal.
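(One small way to see how little the table size itself adds to an indexed lookup - a sketch, assuming you have comparable indexes on both tables; the table names here are placeholders - is to compare the index height in the dictionary. An index on a vastly larger table is typically only a level or two deeper:
select index_name, blevel, leaf_blocks, num_rows
  from user_indexes
 where table_name in ('BIG_TABLE', 'SMALL_TABLE');
BLEVEL is the number of branch levels walked per lookup, which is the O(log N) part of the cost; the bigger differences usually come from how many rows and table blocks have to be visited afterwards.)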

  • How to efficiently select random rows from a large table ?

    Hello,
    The following code will select 5 rows out of a random set of rows from the emp (employee) table
select *
  from (
        select ename, job
          from emp
         order by dbms_random.value()
       )
 where rownum <= 5;
My concern is that the inner select will cause a table scan in order to assign a random value to each row. This code, when used against a large table, can be a performance problem.
Is there an efficient way of selecting random rows from a table without having to do a table scan? (I am new to Oracle, so it is possible that I am missing a very simple way to perform this task.)
    thank you for your help,
    John.
    Edited by: 440bx on Jul 10, 2010 6:18 PM

Have a look at the SAMPLE clause of the SELECT statement. The number in parentheses is a percentage of the table.
    SQL> create table t as select * from dba_objects;
    Table created.
    SQL> explain plan for select * from t sample (1);
    Explained.
    SQL> @xp
    PLAN_TABLE_OUTPUT
    Plan hash value: 2767392432
    | Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT    |      |   725 | 70325 |   289   (1)| 00:00:04 |
    |   1 |  TABLE ACCESS SAMPLE| T    |   725 | 70325 |   289   (1)| 00:00:04 |
    8 rows selected.
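If you need an exact number of rows rather than a percentage, one common compromise (a sketch only - it assumes the table is large enough that the 1% sample still contains at least 5 rows) is to sample a small slice first and randomize only within it, so the full table is never sorted:
select *
  from ( select ename, job
           from emp sample (1)
          order by dbms_random.value )
 where rownum <= 5;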

  • Java.lang.OutOfMemory error while retrieving data from a large table

    Hi,
I am trying to fetch data using "executeQuery()" into a ResultSet from the database. But since the data in that table is large, I am receiving a "java.lang.OutOfMemory" error. To resolve that, I used "setMaxRows()" on my Statement object. This resolved the error, but I don't receive the entire data. If I call "executeQuery()" again, I receive the same data. I don't even know a filtering criterion whereby I can filter the data for each "executeQuery()".
How can I resolve this problem?
Thanx in advance
--Chaitanya

    Either use some criteria you develop related to one of the keys on the table or use some sort of record limiting method.
Note that the method of limiting will vary depending on the database you are using. You will have to look at the documentation.
    For example I am told this will work in MySQL to get 200 records starting at record 100.
    SELECT * FROM myTable ORDER BY whatever ASC LIMIT 100,200
Because you are running out of memory, I assume the table is large.
I am not sure what impact the above will have on performance, because if the ORDER BY is not based on an index at the server level, all the records will be selected and sorted before the records are limited.
I would make sure you have an appropriate index.
If you use the advanced search over the user forums for "resultset paging", and possibly the database you are using, you should be able to get some ideas.
    I hope this makes sense to you.
    rykk

  • How to extract data from a large table!

    Hi all,
I am working on a survey application using JSP, where users log in to the application and fill out a survey form. Users' responses are stored in a database table as a numeric value ranging from 1-10. Now I want to create some standard reports based on these records. The problem I foresee is that every survey has almost 15,000 user records, so we end up with millions of records in the table. While creating reports, if I search the database for these millions of records and then do calculations/computations on the values, it will turn into a mess. Please suggest the best approach to handle this situation.
    Thanks

That's something for a database forum.
But basically: use good indexes!
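One common pattern beyond indexes (a sketch only, with hypothetical names: a responses table keyed by survey_id and question_id with a numeric response column) is to aggregate once and let the reports read the summary instead of recomputing over millions of raw rows:
create materialized view survey_summary as
select survey_id, question_id,
       count(*)      as responses,
       avg(response) as avg_score,
       min(response) as min_score,
       max(response) as max_score
  from responses
 group by survey_id, question_id;
The reports then query survey_summary, and the view is refreshed on whatever schedule the reporting allows.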

  • Fetching data from a large table

    hi
I am trying to fetch data from a table with 100 million rows. There are 5 conditions (ANDed) in the WHERE clause. Of the five columns, there is a composite non-unique index on 2 of them; the remaining three columns hold data with low cardinality. I just want to know what type of index is suggested for my query so that I can fetch the data immediately, i.e. should I go for a composite index on all 5 columns? Secondly, there is also continuous insertion on the table; if I create an index on all 5 columns, how will it affect my inserts? I am using 8i.
    Thanx
    Tarun

    This forum is for posting feedback about the OTN site.
    The best place to get an answer to your question is a Database forum, perhaps the PL/SQL forum.
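As a general note (a sketch only - the right column order depends on your data, and the names here are placeholders): the usual starting point is a composite index that leads with the most selective columns and puts the low-cardinality columns last, or leaves them out entirely:
create index big_tab_lookup_ix
    on big_tab (sel_col1, sel_col2, low_col1, low_col2, low_col3);
Every extra indexed column adds some maintenance cost to each insert, so with continuous insertion it is worth testing whether the two selective columns alone already narrow the result enough.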

  • Deleting few records from a large table

    Hi,
I have a table with roughly 1.5 million rows. I ran a script to delete 16 duplicate records and it took more than 20 minutes.
    Any help is appreciated
    Thanks
    Racle

Since the DELETE statement logs the records being deleted in case the statement needs to be rolled back, it may take a while to delete. Also, do the columns in your WHERE clause have indexes on them? If not, your system may have a hard time finding the records that need to be deleted. The TRUNCATE TABLE command also removes records much faster than the DELETE statement, but it doesn't keep a log of the records being deleted, so use it with caution!
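For reference, the classic Oracle pattern for removing exact duplicates (a sketch only, assuming dup_col1 and dup_col2 are the columns that define a duplicate) keeps one ROWID per group and deletes the rest:
delete from my_table t
 where t.rowid not in ( select min(rowid)
                          from my_table
                         group by dup_col1, dup_col2 );
On 1.5 million rows this still has to visit every row once, but it is a single set-based pass rather than a row-by-row script.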

  • Need to delete specific Months Data from SQL Server Table

    Greetings Everyone,
So I have one table which contains 5 years of data. Now the business wants to keep just one year of data, plus data from the quarter months, i.e. (Jan, Mar, June, Sep and December). I need to do this in a stored procedure. How can I achieve this using a month lookup table?
    Thank you in advance
    R

    Hi Devin,
In a production environment, you should be doubly cautious about the data. I have no idea why you're about to remove data that is only a few years old. In one of the applications I used to support, the data retention policy was to keep raw data for the latest month,
while the older data was rolled up as max, min, average and so on and stored in another table. That's a good example of data retention.
In your case I still suggest you keep the older data in another table. If the data size is so huge that it violates your storage threshold, rolling up the data and storing the aggregates would be a good option.
Anyway, if you don't care about the older data, you can just delete it with code like the below.
    DELETE
    FROM yourTable
    WHERE YEAR(dateColumn) < YEAR(CURRENT_TIMESTAMP) OR (MONTH(dateColumn) not in (1,3,6,9,12) AND YEAR(dateColumn) = YEAR(CURRENT_TIMESTAMP))
In some cases, removing data from a very large table with DELETE performs badly; TRUNCATE is a faster option.
In your case, if necessary, you can reference the draft code below.
    SELECT * INTO tableTemp FROM yourTable WHERE YEAR(dateColumn) = YEAR(CURRENT_TIMESTAMP) AND MONTH(dateColumn) IN(1,3,6,9,12)
TRUNCATE TABLE yourTable;
    INSERT INTO yourTable SELECT * FROM tableTemp
As you mentioned, you need to do the deletion in a stored procedure (SP). Can you post your table DDL with sample data and specify your requirement details so that I can help compose your SP?
    If you have any question, feel free to let me know.
    Best regards,
    Eric Zhang

  • Performance during joining large tables

    Hi,
I have to maintain a report which gets data from many large tables, as below. Currently it uses a join statement to join all 8 tables, and performance is very slow.
    SELECT
        into corresponding fields of table equip
        FROM caufv
                  join afih on afih~aufnr = caufv~aufnr
                  join iloa on iloa~iloan = afih~iloan
                  join iflos  on iflos~tplnr = iloa~tplnr
                  join iflotx on iflos~tplnr = iflotx~tplnr
                  join vbak on vbak~aufnr = caufv~aufnr
                  join equz on equz~equnr = afih~equnr
                  join equi on equi~equnr = equz~equnr
                  join vbap on vbak~vbeln = vbap~vbeln
        WHERE
Please suggest another way; I'm a newbie in ABAP. I tried using FOR ALL ENTRIES IN but it did not work. I would very much appreciate it if you could leave me some sample lines of code.
    Thanks,

Hi,
I suggest you not use an inner join across so many (8) tables, especially such huge ones. Instead, use FOR ALL ENTRIES wherever possible. But before using FOR ALL ENTRIES, check that the base table is not initial (empty); and if it is not possible to avoid inner joins entirely, try to minimise them - use an inner join only between header and item tables.
Hope this helps you solve your problem. Feel free to ask if you have any doubts.
    Regards,
    Vijay

  • Efficiently Querying Large Table

I have to query a recordset of 14k records against (join with) a very large table: billions of rows - even a count of the table does not return any result after 15 minutes.
    I tried a plsql procedure to store the first recordset in a temp table and then preparing two cursors: one on the temp table and the other on the large table.
    However, the plsql procedure runs for a long time and just gives up with this error:
    SQL> exec match;
    ERROR:
    ORA-01041: internal error. hostdef extension doesn't exist
    BEGIN match; END;
    ERROR at line 1:
    ORA-03113: end-of-file on communication channel
    Is there is way through which I can query more efficiently?
- Using chunks of records from the large table at a time - how do I do that? (ROWID?)
    - Or just ask the dba to partition the table - but the whole table would still need to be queried.
    The temp table is:
    CREATE TABLE test AS SELECT a.mon_ord_no, a.mo_type_id, a.p2a_pbu_id, a.creation_date,b.status_date_time,
    a.expiry_date, a.current_mo_status_desc_id, a.amount,
    a.purchaser_name, a.recipent_name, a.mo_id_type_id,
    a.mo_redeemed_by_id, a.recipient_type, c.pbu_id, c.txn_seq_no, c.txn_date_time
    FROM mon_order a, mo_status b, host_txn_log c
    where a.mon_ord_no = b.mon_ord_no
    and a.mon_ord_no = c.mon_ord_no
    and b.status_date_time = c.txn_date_time
    and b.status_desc_id = 7
    and a.current_mo_status_desc_id = 7
    and a.amount is not null
    and a.amount > 0
    order by b.status_date_time;
    and the PL/SQL Procedure is:
    CREATE OR REPLACE PROCEDURE MATCH
    IS
    --DECLARE
    deleted INTEGER :=0;
    counter INTEGER :=0;
    CURSOR v_table IS
         SELECT DISTINCT pbu_id, txn_seq_no, create_date
         FROM host_v
         WHERE status = 4;
    v_table_record v_table%ROWTYPE;
    CURSOR temp_table (v_pbu_id NUMBER, v_txn_seq_no NUMBER, v_create_date DATE) IS
         SELECT * FROM test
         WHERE pbu_id = v_pbu_id
         AND txn_seq_no = v_txn_seq_no
         AND creation_date = v_create_date;
    temp_table_record temp_table%ROWTYPE;
    BEGIN
    OPEN v_table;
    LOOP
FETCH v_table INTO v_table_record;
    EXIT WHEN v_table%NOTFOUND;
    OPEN temp_table (v_table_record.pbu_id, v_table_record.txn_seq_no, v_table_record.create_date);
    LOOP
    FETCH temp_table INTO temp_table_record;
EXIT WHEN temp_table%NOTFOUND;
     DELETE FROM test WHERE pbu_id = v_table_record.pbu_id AND
               txn_seq_no = v_table_record.txn_seq_no AND
               creation_date = v_table_record.create_date;
    END LOOP;
    CLOSE temp_table;
    END LOOP;
    CLOSE v_table;
    END MATCH;
    /
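For comparison, the nested cursor loops above can usually be collapsed into one set-based statement (a sketch based on the same tables and columns, untested against the real data):
DELETE FROM test t
 WHERE EXISTS ( SELECT 1
                  FROM host_v v
                 WHERE v.status      = 4
                   AND v.pbu_id      = t.pbu_id
                   AND v.txn_seq_no  = t.txn_seq_no
                   AND v.create_date = t.creation_date );
A single statement like this lets the optimizer pick a join method instead of issuing one indexed probe per fetched row.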

    Many thanks,
I can get the explain plan for the SQL statement, but I am not sure how to get it for the PL/SQL. For which section of the PL/SQL do I get the explain plan? I am using SQL Navigator.
I can create the cursor with the join, and if the delete statement is not needed, then there is no requirement for the procedure itself. Should I just run the query as a SQL statement?
You have not said what I should do with the ROWID.
    Regards

  • XSU: Dealing with large tables / large XML files

    Hi,
I'm trying to generate an XML file from a "large" table (about 7 million rows, 512 MBytes of storage) by means of XSU. I run into "java.lang.OutOfMemoryError" even after raising the heap size up to 1 GByte (option -Xmx1024m on the java command line).
For the moment, I'm involved in an evaluation process. But in the near future, our applications are likely to deal with large amounts of XML data (typically hundreds of MBytes of storage, which means possibly GBytes of XML code), both when updating/inserting data and when producing XML streams from existing data in a relational DB.
    Any ideas about memory issues regarding XSU? Should we consider to use XMLType instead of "classical" relational tables loaded/unloaded by means of XSU?
    Any hint appreciated.
    Regards,
    /Hervi QUENIVET
    P.S. our environment is Linux red hat 7.3 and Oracle 9.2.0.1 server

Try to split the XML before you process it. You can take a look at the XMLDocumentSplitter explained in "Building Oracle XML Applications" by Steve Muench.
The other alternative is to write your own SAX handler and send the chunks of XML for insert.
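One server-side alternative worth evaluating (a sketch only, not from the thread) is DBMS_XMLGEN, which can emit a query result in fixed-size chunks so that no single CLOB ever has to hold all 7 million rows:
declare
  ctx   dbms_xmlgen.ctxhandle;
  chunk clob;
begin
  ctx := dbms_xmlgen.newcontext('select * from big_table');
  dbms_xmlgen.setmaxrows(ctx, 10000);   -- rows per chunk
  loop
    chunk := dbms_xmlgen.getxml(ctx);
    exit when dbms_xmlgen.getnumrowsprocessed(ctx) = 0;
    -- write "chunk" to a file, queue or staging table here
  end loop;
  dbms_xmlgen.closecontext(ctx);
end;
/
Whether this is acceptable depends on whether the consumers can work with a sequence of smaller documents instead of one large one.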

  • How do I open and use a large table from Word in Pages?

    I upgraded my MBP from Snow Leopard to Mountain Lion a couple days ago.  I knew that my old Word application wouldn't work, but several Apple people assured me that Pages could handle my old docs, including tables.
So, I purchased and installed Pages '09 this morning. I opened the table I use all the time - and most of it is missing! Apparently, Pages doesn't handle large tables.
    I need help - desperately!  This table contains all my medical expenses for the year, so I have to have it.
    Thanks

    J,
    Besides needing to have your Table Object Floating, you should know that the maximum number of rows in Pages Tables is 999.
    If you continue to have problems opening your Word document and its table in Pages, try one of the free Office Clones, LibreOffice, OpenOffice, IBM Lotus Symphony, etc.
    I'd bet that at least one of those free apps will work if Pages doesn't. By the way, have you tried viewing your Word document in Quick Look? To do that, click on the filename in Finder and hit the Spacebar key. You won't be able to do anything but view the document in Quick Look, but it would give you confidence that your file is OK.
    Jerry
