Google-style autosuggest on a table with millions of rows

Hi All,
I'm exploring ways of implementing a "Google-style autosuggest" on a table with no fewer than 30 million rows. It has a field with an address (varchar), and I'd like to make an Ajax call while the user is typing that suggests a few addresses to the user.
I was thinking about using CONTAINS + fuzzy... but I'm not sure whether it will be fast enough or whether it will return the right results.
Any suggestions ?
thanks

2 million rows with XML type data
This link may be of interest to you.
HTH
Girish Sharma
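For reference, here is a minimal sketch of what the CONTAINS + fuzzy idea could look like with Oracle Text. The table name, column name, and bind variable are made up, the fuzzy parameters would need tuning, and for as-you-type matching a wildcard/prefix term is often combined with (or used instead of) fuzzy:
-- Hypothetical table: addresses(id NUMBER, address VARCHAR2(400))
CREATE INDEX addr_txt_idx ON addresses (address)
  INDEXTYPE IS CTXSYS.CONTEXT;
-- Return up to 10 suggestions for the text typed so far (:typed),
-- fuzzy-matching to tolerate misspellings:
SELECT address
  FROM (SELECT address
          FROM addresses
         WHERE CONTAINS(address, 'fuzzy(' || :typed || ', 60, 100, weight)', 1) > 0
         ORDER BY SCORE(1) DESC)
 WHERE ROWNUM <= 10;
Whether this is fast enough at 30 million rows only a test will tell, and the CONTEXT index has to be kept synchronized as addresses change.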

Similar Messages

  • DELETE QUERY FOR A TABLE WITH MILLION ROWS

    Hello,
    I have a requirement where I have to compare 2 tables - both having around a million rows - and delete data based on a single column.
    DELETE FROM TABLE_A WHERE COLUMN_A NOT IN
    (SELECT COLUMN_A FROM TABLE_B)
    COLUMN_A has an index defined on it in both tables. Still, the delete is taking a long time. What is the best way to achieve this? Any workaround?
    thanks

    How many rows are you deleting from this table? If the percentage is large, then the better option is:
    1) Create a new table containing the rows you want to keep, i.e. where COLUMN_A IN
    (SELECT COLUMN_A FROM TABLE_B)
    2) TRUNCATE table_A
    3) Insert into table_A (select * from the new table)
    4) If you have any constraints, they may need to be disabled first.
    thanks
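    A rough sketch of those steps with the thread's table names (indexes, grants, and triggers are elided; note the kept rows are the ones whose COLUMN_A is IN TABLE_B):
    -- 1) Materialize the rows to keep
    CREATE TABLE table_a_keep AS
      SELECT * FROM table_a
       WHERE column_a IN (SELECT column_a FROM table_b);
    -- 2) Empty the original table cheaply
    TRUNCATE TABLE table_a;
    -- 3) Put the kept rows back with a direct-path insert
    INSERT /*+ APPEND */ INTO table_a SELECT * FROM table_a_keep;
    COMMIT;
    DROP TABLE table_a_keep;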

  • Best way to refresh 5 million row table

    Hello,
    I have a table with 5 million rows that needs to be refreshed every 2 weeks.
    Currently I am dropping and re-creating the table, which takes a very long time and gives a tablespace-related warning at the end of execution. It does create the table with the actual number of rows, but I am not sure why I get the tablespace warning at the end.
    Any help is greatly appreciated.
    Thanks.

    Can you please post your query?
    1. What is the size of the temporary tablespace?
    2. Is your query performing any sorts?
    Monitor the TEMP tablespace usage after executing your SQL query with:
    SELECT TABLESPACE_NAME, BYTES_USED, BYTES_FREE
    FROM V$TEMP_SPACE_HEADER;

  • Problem with BULK COLLECT with million rows - Oracle 9.0.1.4

    We have a requirement where we are supposed to load 58 million rows into a FACT table in our DATA WAREHOUSE. We initially planned to use Oracle Warehouse Builder but, due to performance reasons, decided to write custom code. We wrote a custom procedure which opens a simple cursor, reads all 58 million rows from the SOURCE table and, in a loop, processes the rows and inserts the records into a TARGET table. The logic works fine, but it took 20 hrs to complete the load.
    We then tried to leverage the BULK COLLECT, FORALL and PARALLEL options and modified our PL/SQL code completely to reflect these. Our code looks very simple.
    1. We declared PL/SQL BINARY_INTEGER-indexed tables to store the data in memory.
    2. We used BULK COLLECT in the FETCH of the data.
    3. We used a FORALL statement while inserting the data.
    We did not introduce any of our transformation logic yet.
    We tried with 600,000 records first and it completed in 1 min 29 sec with no problems. We then doubled the number of rows to 1.2 million and the program crashed with the following error:
    ERROR at line 1:
    ORA-04030: out of process memory when trying to allocate 16408 bytes (koh-kghu
    call ,pmucalm coll)
    ORA-06512: at "VVA.BULKLOAD", line 66
    ORA-06512: at line 1
    We got the same error even with 1 million rows.
    We do have the following configuration:
    SGA - 8.2 GB
    PGA
    - Aggregate Target - 3GB
    - Current Allocated - 439444KB (439 MB)
    - Maximum allocated - 2695753 KB (2.6 GB)
    Temp Table Space - 60.9 GB (Total)
    - 20 GB (Available approximately)
    I think we have more than enough memory to process 1 million rows!!
    Also, sometimes the same program results in the following error:
    SQL> exec bulkload
    BEGIN bulkload; END;
    ERROR at line 1:
    ORA-03113: end-of-file on communication channel
    We did not even attempt the full load. Also, we are not using the PARALLEL option yet.
    Are we hitting a bug here? Or is PL/SQL not capable of mass loads? I would appreciate any thoughts on this.
    Thanks,
    Haranadh
    Following is the code:
    set echo off
    set timing on
    create or replace procedure bulkload as
    -- SOURCE --
    TYPE src_cpd_dt IS TABLE OF ima_ama_acct.cpd_dt%TYPE;
    TYPE src_acqr_ctry_cd IS TABLE OF ima_ama_acct.acqr_ctry_cd%TYPE;
    TYPE src_acqr_pcr_ctry_cd IS TABLE OF ima_ama_acct.acqr_pcr_ctry_cd%TYPE;
    TYPE src_issr_bin IS TABLE OF ima_ama_acct.issr_bin%TYPE;
    TYPE src_mrch_locn_ref_id IS TABLE OF ima_ama_acct.mrch_locn_ref_id%TYPE;
    TYPE src_ntwrk_id IS TABLE OF ima_ama_acct.ntwrk_id%TYPE;
    TYPE src_stip_advc_cd IS TABLE OF ima_ama_acct.stip_advc_cd%TYPE;
    TYPE src_authn_resp_cd IS TABLE OF ima_ama_acct.authn_resp_cd%TYPE;
    TYPE src_authn_actvy_cd IS TABLE OF ima_ama_acct.authn_actvy_cd%TYPE;
    TYPE src_resp_tm_id IS TABLE OF ima_ama_acct.resp_tm_id%TYPE;
    TYPE src_mrch_ref_id IS TABLE OF ima_ama_acct.mrch_ref_id%TYPE;
    TYPE src_issr_pcr IS TABLE OF ima_ama_acct.issr_pcr%TYPE;
    TYPE src_issr_ctry_cd IS TABLE OF ima_ama_acct.issr_ctry_cd%TYPE;
    TYPE src_acct_num IS TABLE OF ima_ama_acct.acct_num%TYPE;
    TYPE src_tran_cnt IS TABLE OF ima_ama_acct.tran_cnt%TYPE;
    TYPE src_usd_tran_amt IS TABLE OF ima_ama_acct.usd_tran_amt%TYPE;
    src_cpd_dt_array src_cpd_dt;
    src_acqr_ctry_cd_array      src_acqr_ctry_cd;
    src_acqr_pcr_ctry_cd_array     src_acqr_pcr_ctry_cd;
    src_issr_bin_array      src_issr_bin;
    src_mrch_locn_ref_id_array     src_mrch_locn_ref_id;
    src_ntwrk_id_array      src_ntwrk_id;
    src_stip_advc_cd_array      src_stip_advc_cd;
    src_authn_resp_cd_array      src_authn_resp_cd;
    src_authn_actvy_cd_array      src_authn_actvy_cd;
    src_resp_tm_id_array      src_resp_tm_id;
    src_mrch_ref_id_array      src_mrch_ref_id;
    src_issr_pcr_array      src_issr_pcr;
    src_issr_ctry_cd_array      src_issr_ctry_cd;
    src_acct_num_array      src_acct_num;
    src_tran_cnt_array      src_tran_cnt;
    src_usd_tran_amt_array      src_usd_tran_amt;
    j number := 1;
    CURSOR c1 IS
    SELECT
    cpd_dt,
    acqr_ctry_cd ,
    acqr_pcr_ctry_cd,
    issr_bin,
    mrch_locn_ref_id,
    ntwrk_id,
    stip_advc_cd,
    authn_resp_cd,
    authn_actvy_cd,
    resp_tm_id,
    mrch_ref_id,
    issr_pcr,
    issr_ctry_cd,
    acct_num,
    tran_cnt,
    usd_tran_amt
    FROM ima_ama_acct ima_ama_acct
    ORDER BY issr_bin;
    BEGIN
    OPEN c1;
    FETCH c1 bulk collect into
    src_cpd_dt_array ,
    src_acqr_ctry_cd_array ,
    src_acqr_pcr_ctry_cd_array,
    src_issr_bin_array ,
    src_mrch_locn_ref_id_array,
    src_ntwrk_id_array ,
    src_stip_advc_cd_array ,
    src_authn_resp_cd_array ,
    src_authn_actvy_cd_array ,
    src_resp_tm_id_array ,
    src_mrch_ref_id_array ,
    src_issr_pcr_array ,
    src_issr_ctry_cd_array ,
    src_acct_num_array ,
    src_tran_cnt_array ,
    src_usd_tran_amt_array ;
    CLOSE C1;
    FORALL j in 1 .. src_cpd_dt_array.count
    INSERT INTO ima_dly_acct (
         CPD_DT,
         ACQR_CTRY_CD,
         ACQR_TIER_CD,
         ACQR_PCR_CTRY_CD,
         ACQR_PCR_TIER_CD,
         ISSR_BIN,
         OWNR_BUS_ID,
         USER_BUS_ID,
         MRCH_LOCN_REF_ID,
         NTWRK_ID,
         STIP_ADVC_CD,
         AUTHN_RESP_CD,
         AUTHN_ACTVY_CD,
         RESP_TM_ID,
         PROD_REF_ID,
         MRCH_REF_ID,
         ISSR_PCR,
         ISSR_CTRY_CD,
         ACCT_NUM,
         TRAN_CNT,
         USD_TRAN_AMT)
     VALUES (
          src_cpd_dt_array(j),
          src_acqr_ctry_cd_array(j),
          null,
          src_acqr_pcr_ctry_cd_array(j),
          null,
          src_issr_bin_array(j),
          null,
          null,
          src_mrch_locn_ref_id_array(j),
          src_ntwrk_id_array(j),
          src_stip_advc_cd_array(j),
          src_authn_resp_cd_array(j),
          src_authn_actvy_cd_array(j),
          src_resp_tm_id_array(j),
          null,
          src_mrch_ref_id_array(j),
          src_issr_pcr_array(j),
          src_issr_ctry_cd_array(j),
          src_acct_num_array(j),
          src_tran_cnt_array(j),
          src_usd_tran_amt_array(j));
    COMMIT;
    END bulkload;
    SHOW ERRORS
    -----------------------------------------------------------------------------

    Do you have a unique key available in the rows you are fetching?
    A cursor with 20 million rows that is as wide as all the columns you want to work with is a lot of memory for the server to use at once. You may be able to do this with parallel processing (DOP over 8) and a lot of memory for the warehouse box (and the box you are extracting the data from)... but is this the most efficient (and thereby fastest) way to do it?
    What if you used a cursor to select a unique key only, and then during the cursor loop fetched each record, transformed it, and inserted it into the target?
    It's a different way to do a lot at once, but it cuts down on the overall memory overhead for the process.
    I know this isn't as elegant as a single insert to do it all at once, but sometimes trimming a process down so it takes fewer resources at any given moment is much faster than trying to do the whole thing at once.
    My solution is probably biased by transaction systems, so I would be interested in what the data warehouse community thinks of this.
    For example:
    source table my_transactions (tx_seq_id number, tx_fact1 varchar2(10), tx_fact2 varchar2(20), tx_fact3 number, ...)
    select a cursor of tx_seq_id only (even at 20 million rows this is not much)
    you could then either use a for loop or even bulk collect into a plsql collection or table
    then process individually like this:
    procedure process_a_tx(p_tx_seq_id in number)
    is
      rTx my_transactions%rowtype;
    begin
      select * into rTx from my_transactions where tx_seq_id = p_tx_seq_id;
      -- modify values as needed
      insert into my_target(a, b, c) values (rTx.tx_fact1, rTx.tx_fact2, rTx.tx_fact3);
      commit;
    exception
      when others then
        rollback;
        -- write to a log, or raise an exception
    end process_a_tx;
    procedure collect_tx
    is
      cursor cTx is
        select tx_seq_id from my_transactions;
    begin
      for rTx in cTx loop
        process_a_tx(rTx.tx_seq_id);
      end loop;
    end collect_tx;
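    Another option the thread does not show: keep BULK COLLECT/FORALL but fetch in bounded chunks with the LIMIT clause, so memory use stays flat regardless of how many rows the cursor returns. A sketch against the poster's tables, abbreviated to three of the sixteen columns (the procedure name and batch size are made up):
    create or replace procedure bulkload_chunked as
      type t_cpd_dt   is table of ima_ama_acct.cpd_dt%type   index by binary_integer;
      type t_issr_bin is table of ima_ama_acct.issr_bin%type index by binary_integer;
      type t_acct_num is table of ima_ama_acct.acct_num%type index by binary_integer;
      l_cpd_dt   t_cpd_dt;
      l_issr_bin t_issr_bin;
      l_acct_num t_acct_num;
      cursor c1 is
        select cpd_dt, issr_bin, acct_num from ima_ama_acct;
    begin
      open c1;
      loop
        -- at most 10,000 rows are held in memory at any time
        fetch c1 bulk collect into l_cpd_dt, l_issr_bin, l_acct_num limit 10000;
        exit when l_cpd_dt.count = 0;
        forall j in 1 .. l_cpd_dt.count
          insert into ima_dly_acct (cpd_dt, issr_bin, acct_num)
          values (l_cpd_dt(j), l_issr_bin(j), l_acct_num(j));
        commit;  -- or commit once after the loop, depending on restart requirements
      end loop;
      close c1;
    end bulkload_chunked;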

  • Fetch only tables with 10 million or more rows

    Hi all,
    How can I fetch only the tables that have more than 10 million rows with this PL/SQL? I got it from some other site I can't remember.
    Can somebody help me with this please? Your help is greatly appreciated.
    declare
      counter number;
    begin
      for x in (select segment_name, owner
                  from dba_segments
                 where segment_type = 'TABLE'
                   and owner = 'KOMAKO') loop
        execute immediate 'select count(*) from ' || x.owner || '.' || x.segment_name into counter;
        dbms_output.put_line(rpad(x.owner, 30, ' ') || '.' || rpad(x.segment_name, 30, ' ') || ' : ' || counter || ' row(s)');
      end loop;
    end;
    Thank you,
    gg

    1) This code appears to work. Of course, there seems to be no need to select from DBA_SEGMENTS when DBA_TABLES would be more straightforward. And, of course, you'd have to do something when the count exceeded 10 million.
    2) If you are using the cost-based optimizer (CBO) and your statistics are reasonably accurate and you can tolerate a degree of staleness/ approximation in the row counts, you could just select the NUM_ROWS column from DBA_TABLES.
    Justin
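    A sketch of the statistics-based variant (row counts are only as fresh as the last statistics gathering, so treat them as approximate):
    SELECT owner, table_name, num_rows
      FROM dba_tables
     WHERE owner = 'KOMAKO'
       AND num_rows >= 10000000
     ORDER BY num_rows DESC;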

  • Delete from 95 million rows table ...

    Hi folks, I need to delete from a 95-million-row regular table. What would be my best options? I have tried CTAS using parallel, but it failed after 1+ hrs... that was due to a bad query, but I am checking whether there is any other way to achieve this.
    Thanks in advance.

    user8604530 wrote:
    Hi folks, I need to delete from a 95-million-row regular table. What would be my best options? I have tried CTAS using parallel, but it failed after 1+ hrs... that was due to a bad query, but I am checking whether there is any other way to achieve this.
    Thanks in advance.
    How many rows are in the table BEFORE the DELETE?
    How many rows are in the table AFTER the DELETE?
    How do I ask a question on the forums?
    SQL and PL/SQL FAQ
    Handle:     user8604530
    Status Level:     Newbie
    Registered:     Mar 10, 2010
    Total Posts:     64
    Total Questions:     26 (22 unresolved)
    I extend to you my condolences since you rarely get your questions answered.

  • MINUS on a 4 million row table - how to enhance speed?

    I have a query like this:
    select col1, col2, col3
    from table1
    MINUS
    select col1, col2, col3
    from EXTERNAL_TABLE
    table1 has approximately 4 million records and the external file has roughly 4 million rows.
    The MINUS takes around 25 mins here. How can I speed it up?
    Thanks in Advance!!!

    To make something go faster, you first need to know what makes it slow.
    Simple, actually - to solve a problem we need to know what the problem is. Can't answer a question without knowing what the question is.
    Reading a total of 8 million rows means a lot more I/O than usual... so that is likely the culprit for the slow performance. If that is the case, you will need to find a way to reduce the time it takes to perform all that I/O. Or do less I/O.
    But you need to pop the hood and take a look at just what is causing what you think is slow performance. (It may not even be slow - it may be as fast as it can go given the hardware and other limitations that are imposed on Oracle and this MINUS process.)
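    One way to "pop the hood", as suggested above, is to look at the execution plan (a sketch using the thread's table names):
    EXPLAIN PLAN FOR
    SELECT col1, col2, col3 FROM table1
    MINUS
    SELECT col1, col2, col3 FROM external_table;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    The plan shows whether the time goes into the full scans of the two sources or into the sort that MINUS performs.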

  • Importing huge (10 million rows) tables into HANA

    Dear All,
    I need to import huge tables (40,000,000 records) into a HANA system, in order to develop a demo system for a customer.
    I'm planning to use CSV files.
    Does someone have experience with a task like this? Is there a limit on CSV file size?
    Many thanks
    Leopoldo Capasso

    Check out this blog for best practices on loading high-volume data from flat files:
    http://scn.sap.com/community/hanainmemory/blog/2013/04/08/bestpracticesforsaphanadataloads
    Hope this helps.
    Rama
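    For reference, a minimal sketch of a HANA server-side CSV import (the path, schema, table, and tuning values are made up; see the blog above for the details):
    IMPORT FROM CSV FILE '/usr/sap/trans/data/demo_table.csv'
    INTO "DEMO"."DEMO_TABLE"
    WITH RECORD DELIMITED BY '\n'
         FIELD DELIMITED BY ','
         THREADS 4
         BATCH 10000;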

  • Schema Design for 10^6+ rows table (Indicator Column / Bitmap Join Index?)

    Hi all,
    I read the following suggestion for a SELECT with LEFT OUTER JOIN in a DB2 consulting company's paper, for a 10-million-row table:
    SELECT columns
    FROM ACCTS A LEFT JOIN OPT1 O1
    ON      A.ACCT_NO = O1.ACCT_NO
    AND     A.FLAG1 = 'Y'
    LEFT JOIN OPT2 O2
    ON      A.ACCT_NO = O2.ACCT_NO
    AND     A.FLAG2 = 'Y'
    WHERE A.ACCT_NO = 1
    For DB2, according to the paper, the following is true: iff A.FLAG1 <> 'Y', then no table or index access on OPT1 is done. Same for A.FLAG2/OPT2.
    I recreated the situation for ORACLE with the following script and came to some really interesting questions:
    DROP TABLE maintbl CASCADE CONSTRAINTS;
    DROP TABLE opt1 CASCADE CONSTRAINTS;
    DROP TABLE opt2 CASCADE CONSTRAINTS;
    CREATE TABLE maintbl (
    id INTEGER NOT NULL,
    dat VARCHAR2 (2000 CHAR),
    opt1 CHAR (1),
    opt2 CHAR (1),
    CONSTRAINT CK_maintbl_opt1 CHECK(opt1 IN ('Y', 'N')) INITIALLY IMMEDIATE ENABLE VALIDATE,
    CONSTRAINT CK_maintbl_opt2 CHECK(opt2 IN ('Y', 'N')) INITIALLY IMMEDIATE ENABLE VALIDATE,
    CONSTRAINT PK_maintbl PRIMARY KEY(id)
    );
    CREATE TABLE opt1 (
    maintbl_id INTEGER NOT NULL,
    adddat1 VARCHAR2 (100 CHAR),
    adddat2 VARCHAR2 (100 CHAR),
    CONSTRAINT PK_opt1 PRIMARY KEY(maintbl_id),
    CONSTRAINT FK_opt1_maintbltable FOREIGN KEY(maintbl_id) REFERENCES maintbl(id)
    );
    CREATE TABLE opt2 (
    maintbl_id INTEGER NOT NULL,
    adddat1 VARCHAR2 (100 CHAR),
    adddat2 VARCHAR2 (100 CHAR),
    CONSTRAINT PK_opt2 PRIMARY KEY(maintbl_id),
    CONSTRAINT FK_opt2_maintbltable FOREIGN KEY(maintbl_id) REFERENCES maintbl(id)
    );
    INSERT ALL
    WHEN 1 = 1 THEN
    INTO maintbl (id, opt1, opt2, dat) VALUES (nr, is_even, is_odd, maintbldat)
    WHEN is_even = 'N' THEN
    INTO opt1 (maintbl_id, adddat1, adddat2) VALUES (nr, adddat1, adddat2)
    WHEN is_even = 'Y' THEN
    INTO opt2 (maintbl_id, adddat1, adddat2) VALUES (nr, adddat1, adddat2)
    SELECT LEVEL AS nr,
    CASE WHEN MOD(LEVEL, 2) = 0 THEN 'Y' ELSE 'N' END AS is_even,
    CASE WHEN MOD(LEVEL, 2) = 1 THEN 'Y' ELSE 'N' END AS is_odd,
    TO_CHAR(DBMS_RANDOM.RANDOM) AS maintbldat,
    TO_CHAR(DBMS_RANDOM.RANDOM) AS adddat1,
    TO_CHAR(DBMS_RANDOM.RANDOM) AS adddat2
    FROM DUAL
    CONNECT BY LEVEL <= 100;
    COMMIT;
    SELECT * FROM maintbl
    LEFT OUTER JOIN opt1 ON maintbl.id = opt1.maintbl_id AND maintbl.opt1 = 'Y'
    LEFT OUTER JOIN opt2 ON maintbl.id = opt2.maintbl_id AND maintbl.opt2 = 'Y'
    WHERE id = 1;
    Explain plan for "Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi":
    http://i.imgur.com/f0AiA.png
    As one can see, the DB uses a view to index-access the opt tables iff the indicator column maintbl.opt1 = 'Y' in the main table.
    Explain plan for "Oracle Database 11g Express Edition Release 11.2.0.2.0 - Production":
    http://i.imgur.com/iKfj8.png
    As one can see, the DB does NOT use the view; instead it uses a pretty useless CASE statement.
    Now my questions:
    1) What does the optimizer do in 11.2 XE?!?
    2) In general: Do you suggest this table setup? Does your yes/no suggestion depend on the row count in the tables? Of course I see the problem with incorrectly updated columns and would NEVER do it if there were another truly relational solution with comparable performance.
    3) Is there a way to avoid performance issues if I don't use an indicator column in ORACLE? Is this what a [Bitmap Join Index|http://docs.oracle.com/cd/E11882_01/server.112/e25789/indexiot.htm#autoId14] is for?
    Thanks in advance and happy discussing,
    Blama

    Fair enough. I've included a cut-down set of SQL below.
    CREATE TABLE DIMENSION_DATE (
    DATE_ID NUMBER,
    CALENDAR_DATE DATE,
    CONSTRAINT DATE_ID
    PRIMARY KEY
    (DATE_ID)
    );
    CREATE UNIQUE INDEX DATE_I1 ON DIMENSION_DATE
    (CALENDAR_DATE, DATE_ID);
    CREATE TABLE ORDER_F (
    ORDER_ID VARCHAR2(40 BYTE),
    SUBMITTEDDATE_FK NUMBER,
    COMPLETEDDATE_FK NUMBER
    );
    -- Then I add the first bitmap index, which works:
    CREATE BITMAP INDEX SUBMITTEDDATE_FK ON ORDER_F
    (DIMENSION_DATE.DATE_ID)
    FROM ORDER_F, DIMENSION_DATE
    WHERE ORDER_F.SUBMITTEDDATE_FK = DIMENSION_DATE.DATE_ID;
    -- Then attempt the next one:
    CREATE BITMAP INDEX completeddate_fk
    ON ORDER_F(b.date_id)
    FROM ORDER_F, DIMENSION_DATE b
    WHERE ORDER_F.completeddate_fk = b.date_id;
    -- which results in:
    -- ORA-01408: such column list already indexed

  • Analyse a partitioned table with more than 50 million rows

    Hi,
    I have a partitioned table with more than 50 million rows. It was last analyzed on 1/25/2007. Do I need to analyze it again? (Queries on this table run very slowly.)
    If I do need to analyze it, what is the best way? Use DBMS_STATS and schedule a job?
    Thanks

    A partitioned table has global statistics as well as partition (and subpartition if the table is subpartitioned) statistics. My guess is that you mean to say that the last time that global statistics were gathered was in 2007. Is that guess accurate? Are the partition-level statistics more recent?
    Do any of your queries actually use global statistics? Or would you expect that every query involving this table would specify one or more values for the partitioning key and thus force partition pruning to take place? If all your queries are doing partition pruning, global statistics are irrelevant, so it doesn't matter how old and out of date they are.
    Are you seeing any performance problems that are potentially attributable to stale statistics on this table? If you're not seeing any performance problems, leaving the statistics well enough alone may be the most prudent course of action. Gathering statistics would only have the potential to change query plans. And since the cost of a query plan regressing is orders of magnitude greater than the benefit of a different query performing faster (at least for most queries in most systems), the balance of risks would argue for leaving the stats alone if there is no problem you're trying to solve.
    If your system does actually use global statistics and there are performance problems that you believe are potentially attributable to stale global statistics and your partition level statistics are accurate, you can gather just global statistics on the table probably with a reasonably small sample size. Make sure, though, that you back up your existing statistics just in case a query plan goes south. Ideally, you'd also have a test environment with identical (or nearly identical) data volumes that you could use to verify that gathering statistics doesn't cause any problems.
    Justin
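    A sketch of the back-up-then-gather-global approach described above (the schema, table, and sample size are illustrative):
    BEGIN
      -- 1) Back up the current statistics so plans can be restored if one regresses
      DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCOTT', stattab => 'STATS_BACKUP');
      DBMS_STATS.EXPORT_TABLE_STATS(ownname => 'SCOTT', tabname => 'BIG_PART_TAB',
                                    stattab => 'STATS_BACKUP');
      -- 2) Gather only global statistics, with a modest sample size
      DBMS_STATS.GATHER_TABLE_STATS(ownname          => 'SCOTT',
                                    tabname          => 'BIG_PART_TAB',
                                    granularity      => 'GLOBAL',
                                    estimate_percent => 5);
    END;
    /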

  • General Scenario- Adding columns into a table with more than 100 million rows

    I was asked/given a scenario: what issues do you encounter when you try to add new columns to a table with more than 200 million rows? How do you overcome them?
    Thanks in advance.
    svk

    For such a large table, it is better to add the new column to the end of the table to avoid any performance impact, as RSingh suggested.
    Also, avoid using any default on the newly added column, or SQL Server will have to fill in 200 million fields with this default value. If you need a default, add an empty (nullable) column first and populate it in small batches (otherwise you lock up the whole table), as sketched below. Add the default after all the rows have a value for the new column.
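    A sketch of that batched approach in T-SQL (the table, column, and batch size are made up):
    -- Adding the column nullable and without a default is a metadata-only change
    ALTER TABLE big_table ADD new_col INT NULL;
    GO
    -- Backfill in small batches so no single transaction locks the whole table
    WHILE 1 = 1
    BEGIN
        UPDATE TOP (10000) big_table
           SET new_col = 0
         WHERE new_col IS NULL;
        IF @@ROWCOUNT = 0 BREAK;   -- finished when nothing is left to update
    END;
    GO
    -- Only now attach the default, so it applies to future inserts only
    ALTER TABLE big_table ADD CONSTRAINT df_big_table_new_col DEFAULT 0 FOR new_col;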

  • Issue in adding a NOT NULL constraint on a 250 GB table with 50 million rows

    Guys,
    I need to add a NOT NULL constraint on 2 columns of a table with 50 million rows and ~250 GB in size. These 2 columns are newly added, and I have already updated them to a non-null value for every row.
    Adding the NOT NULL constraints on these 2 columns after that takes 1 hour to complete. Is there any way to speed this up? I don't want to use the ENABLE NOVALIDATE option - or rather, I can't use that option.

    user445775 wrote:
    Guys,
    I need to add a NOT NULL constraint on 2 columns of a table with 50 million rows and ~250 GB in size. These 2 columns are newly added, and I have already updated them to a non-null value for every row.
    Adding the NOT NULL constraints on these 2 columns after that takes 1 hour to complete. Is there any way to speed this up? I don't want to use the ENABLE NOVALIDATE option - or rather, I can't use that option.
    And what's wrong with it taking an hour? Presumably this is a one-time operation, and it doesn't really interfere with anything else.
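    If the hour does matter, one commonly suggested approach (untested here, so verify on your version) is to let the validation scans run in parallel by giving the table a temporary parallel degree; the names are illustrative:
    ALTER SESSION ENABLE PARALLEL DDL;
    ALTER TABLE big_tab PARALLEL 8;   -- temporary degree for the validation scans
    ALTER TABLE big_tab MODIFY (col1 CONSTRAINT col1_nn NOT NULL);
    ALTER TABLE big_tab MODIFY (col2 CONSTRAINT col2_nn NOT NULL);
    ALTER TABLE big_tab NOPARALLEL;   -- restore the original setting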

  • How to export a table with half a million rows?

    I need to export a table that has 535,000 rows. I tried to export to Excel and it exported only 65,535 rows. I tried to export to a text file and it said it was using the clipboard (?) and 65,000 rows was the maximum. Surely there has to be a way to export the entire table. I've been able to import much bigger csv files than this - millions of rows.

    What version of Access are you using? Are you attempting to copy and paste records, or are you using Access' export functionality from the menu/ribbon? I'm using Access 2010 and just exported a million-record table to both a text file and to Excel (.xlsx format). Excel 2003 (using the .xls 97-2003 format) does have a limit of 65,536 rows, but the later .xlsx format does not.
    -Bruce

  • Get min and max for a column from table with 24 million rows.

    What is the best way to re-write the following query in a procedure for the table which has around 24 million rows?
    SELECT MIN(ft_src_ref_id), MAX(ft_src_ref_id )
    INTO gn_Min_ID, gn_Max_ID
    from UI_PURGE_FT;
    Thanks

    Which of the following plans is better? Can you please briefly explain?
    Also, I need to gather statistics (recursive calls, db block gets, consistent gets, etc.) - how can I get those?
    Thanks
    1.
    Execution Plan
    Plan hash value: 3702745568
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 6 | 13991 (2)| 00:02:48 |
    | 1 | SORT AGGREGATE | | 1 | 6 | | |
    | 2 | INDEX FAST FULL SCAN | UI_PURGE_FT_PK | 23M| 136M| 13991 (2)| 00:02:48 |
    2.
    Execution Plan
    Plan hash value: 1974183109
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | | 2 (0)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | 6 | | |
    | 2 | INDEX FULL SCAN (MIN/MAX)| UI_PURGE_FT_PK | 23M| 136M| 2 (0)| 00:00:01 |
    | 3 | SORT AGGREGATE | | 1 | 6 | | |
    | 4 | INDEX FULL SCAN (MIN/MAX)| UI_PURGE_FT_PK | 23M| 136M| 2 (0)| 00:00:01 |
    | 5 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
    ---------------------------------------------------------------------------------------------
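    For what it's worth: plan 2 is the better one. An INDEX FULL SCAN (MIN/MAX) reads only the ends of the index (cost 2) instead of fast-full-scanning all 23M entries (cost 13991). That plan shape typically comes from splitting the two aggregates into scalar subqueries, e.g. (a sketch):
    SELECT (SELECT MIN(ft_src_ref_id) FROM ui_purge_ft) AS min_id,
           (SELECT MAX(ft_src_ref_id) FROM ui_purge_ft) AS max_id
      FROM dual;
    For the recursive calls / db block gets / consistent gets figures, SQL*Plus AUTOTRACE prints them after each statement:
    SET AUTOTRACE TRACEONLY STATISTICS
    -- run the query here
    SET AUTOTRACE OFF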

  • Inserting 10 million rows into a table hangs

    Hi, through Toad I am using a simple for loop to insert 10 million rows into a table, by saying
    for i in 1 ......10000000
    insert.................
    It hangs for a long time.
    Is there a better way to insert the rows into the table?
    I have to test for performance, and I also have to insert 50 million rows into its child table.
    Practically speaking, when the code moves to production it will have this many rows (maybe more); that's why I have to test with these many rows.
    Please suggest a better way to do this.
    Regards
    raj

    Must be a 'hardware thing'.
    My ancient desktop (Pentium IV, 1.8 GHz, 512 MB), running XE, needs:
    MHO%xe> desc t
    Name                                      Null?    Type
    N                                                  NUMBER
    A                                                  VARCHAR2(10)
    B                                                  VARCHAR2(10)
    MHO%xe> insert /*+ APPEND */ into t
      2  with my_data as (
      3  select level n, 'abc' a, 'def' b from dual
      4  connect by level <= 10000000
      5  )
      6  select * from my_data;
    10000000 rows created.
    Elapsed: 00:04:09.71
    MHO%xe> drop table t;
    Table dropped.
    Elapsed: 00:00:31.50
    MHO%xe> create table t (n number, a varchar2(10), b varchar2(10));
    Table created.
    Elapsed: 00:00:01.04
    MHO%xe> insert into t
      2  with my_data as (
      3  select level n, 'abc' a, 'def' b from dual
      4  connect by level <= 10000000
      5  )
      6  select * from my_data;
    10000000 rows created.
    Elapsed: 00:02:44.12
    MHO%xe> drop table t;
    Table dropped.
    Elapsed: 00:00:09.46
    MHO%xe> create table t (n number, a varchar2(10), b varchar2(10));
    Table created.
    Elapsed: 00:00:00.15
    MHO%xe> insert /*+ APPEND */ into t
      2   with my_data as (
      3   select level n, 'abc' a, 'def' b from dual
      4   connect by level <= 10000000
      5   )
      6   select * from my_data;
    10000000 rows created.
    Elapsed: 00:01:03.89
    MHO%xe> drop table t;
    Table dropped.
    Elapsed: 00:00:27.17
    MHO%xe> create table t (n number, a varchar2(10), b varchar2(10));
    Table created.
    Elapsed: 00:00:01.15
    MHO%xe> insert into t
      2  with my_data as (
      3  select level n, 'abc' a, 'def' b from dual
      4  connect by level <= 10000000
      5  )
      6  select * from my_data;
    10000000 rows created.
    Elapsed: 00:01:56.10
    Yeah, 'cached' it a bit (of course ;) )
    But the APPEND hint seems to knock about 50 sec off anyway (using NO indexes at all) on my 'configuration'.
