Fast searching among 70 million records in a database

Hi All,
Could you please give me some idea of how I can do fast searching among 70 million records in a database? I have tried Lucene but was unable to get the desired result.
-Roy D

Lucene? What's that? Don't know, but it reminds me of Lucille ;-)
sings a certain bluesy song
To OP:
Could you please give me some idea of how I can do fast searching among 70 million records in a database?
First you need to give us a more clear idea of what's going on.
Can you post your execution plan and describe your table(s), indexes, database-version etc.?
See: [How to post a SQL statement tuning request|http://forums.oracle.com/forums/thread.jspa?threadID=863295&tstart=0]

Similar Messages

  • How to make this faster? Reading millions of records from a txt file

    Hi there,
    I've got an issue. There is a txt file containing 2 million records, and I also have another file containing over 10,000 numbers. I need to compare these 10,000 numbers with the 2 million records: if any record contains a number that belongs to the 10,000-number set, I retrieve that record and keep it. Later on, when I finish the comparison, I'll write all the resulting records into a txt file.
    What kind of data structure shall I use to keep the records and numbers? How can I make the comparison quicker? Any idea will do!
    Thanks!

    If I were to do it, I would insert both the records and the numbers into the db, then run an SQL statement joining the two tables to get the results, and finally write the result set out to another text file (a rough sketch of this is below).
    Just my opinion - not sure if this is faster.
    Message was edited by:
    clarenceloh
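    A minimal sketch of that approach, assuming both files have already been loaded (e.g. via SQL*Loader or external tables) into hypothetical tables BIG_RECORDS, with a REC_NUM column holding the number found in each record, and NUM_SET, with a NUM column:
    -- keep only the records whose number appears in the 10,000-number set;
    -- with a hash join (or an index on NUM_SET.NUM) this is a single pass over BIG_RECORDS
    SELECT r.*
    FROM   big_records r
    WHERE  EXISTS (SELECT NULL
                   FROM   num_set n
                   WHERE  n.num = r.rec_num);
    The result can then be spooled to the output text file. If the job has to stay outside the database, the equivalent idea in memory is to put the 10,000 numbers into a hash set and stream the 2 million records past it, keeping each match.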

  • How to DELETE millions of records. How to make it fast.

    Hi
    I need to delete about 134 million records from some tables.
    How can I make it faster? Any tricks, any settings?
    I am using Oracle 9i on a Linux box.
    If I use TRUNCATE, does it also drop the objects defined on the table, such as constraints, indexes etc.?
    Thanks,
    Kuldeep

    hi
    SQL> create table te as select * from all_objects;
    Table created.
    SQL> create index te_ind on te ( owner);
    Index created.
    SQL> truncate table te;
    Table truncated.
    SQL> select index_name , status from user_indexes where table_name = 'TE';
    INDEX_NAME                     STATUS
    TE_IND                         VALID
    SQL> create table ti as select * from all_objects;
    Table created.
    SQL> create index ti_ind on ti ( owner);
    Index created.
    SQL> drop table ti;
    Table dropped.
    SQL> select index_name , status from user_indexes where table_name = 'TI';
    no rows selected
    SQL>
    regards
    Taj

  • Deleting records in millions - any ways to speed it up?

    Hello Friends,
    I have a table with millions of records. As I am doing testing, I have to delete millions of records and then load them again using ETL tools.
    Using a simple delete is taking hours - are there any ways to increase the speed of the deletion?
    I can't use truncate as I have to delete only conditional records (and those are in the millions).
    Any idea?
    thanks/kumar

    kumar73 wrote:
    I have a table with millions of records. As I am doing testing, I have to delete millions of records and then load them again using ETL tools.
    Using a simple delete is taking hours - are there any ways to increase the speed of the deletion?
    I can't use truncate as I have to delete only conditional records (and those are in the millions).
    There are two basic ways to increase performance in such a case.
    Decrease the workload by doing less work. This can be achieved by disabling the indexes - the delete process then does not have to maintain the indexes - and by disabling triggers (which can be dangerous if not done correctly, as these may contain specific delete processing logic). In other words, reduce the overheads of a delete to a bare minimum.
    Use a bigger truck to carry the workload. A faster truck means a faster server - that's usually not doable. But using a "+bigger truck+" often is. With a bigger truck you can shift more workload at the same time - in this case it means using more processes to do the delete instead of a single process. You can use Oracle's Parallel Query feature, or you can write your own PL/SQL and SQL code to perform the delete in parallel.
    You also need to consider what the delete will do to the space allocated to that table. Despite deleting millions of rows, very little free space may become available (depending on the pctfree and pctused settings of the table) - which means that adding millions of rows back to the table via an ETL process may seriously bump up the space footprint of that table.
    I would personally almost never use a delete in such a case - DML on large volumes of data is expensive; DDL is not. And partitioning is often a non-negotiable option in order to deal effectively with large volumes of data within the constraints of the system's resources, and within the runtimes dictated by business requirements. (A rough DDL sketch follows.)
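    A rough sketch of the DDL route for such a one-off mass delete (BIG_TAB and the keep-condition are placeholders; indexes, constraints, grants and statistics have to be recreated on the new table afterwards):
    -- 1. copy only the rows you want to KEEP (often far fewer than the rows to delete)
    CREATE TABLE big_tab_keep NOLOGGING PARALLEL 4 AS
      SELECT * FROM big_tab WHERE keep_flag = 'Y';   -- hypothetical keep-condition
    -- 2. swap the tables
    DROP TABLE big_tab;
    RENAME big_tab_keep TO big_tab;
    -- 3. recreate indexes, constraints and grants, then gather statistics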

  • Having millions of records in a table, how can we reduce the execution time?

    We have developed a report that takes about eighteen hours to run as a monthly background job, because the tables contain millions of records and the program also uses loops. Could you please help me with how to read the records a million at a time, in parallel execution, to reduce the run time?

    Moderator message - Welcome to SCN.
    Please search the forums before asking a question.
    Also, please read "The Forum Rules of Engagement" before posting!  HOT NEWS!!, How to post code in SCN, and some things NOT to do..., and [Asking Good Questions in the Forums to get Good Answers|/people/rob.burbank/blog/2010/05/12/asking-good-questions-in-the-forums-to-get-good-answers] before posting again.
    Thread locked.
    Rob

  • Sort desc + millions of records

    hello,
    just need some assistance with the filters and sorting them in 'desc' order.
    here is my script:
    select acct_no,
         tran_date,
         rdg_date,
         meter_rdg
    from(     
    select t1.acct_no
    ,(select max(tran_date) --------------tran_date
    from t2
         where acct_no = t1.acct_no) tran_date
    ,(select max(rdg_date) ---------------rdg_date
         from t3
         where acct_no = t1.acct_no) rdg_date
    ,(select meter_rdg ------------------meter_rdg
         from t3
         where acct_no = t1.acct_no
         and rdg_date = (select max(rdg_date)
                   from t3
                   where acct_no = t1.acct_no)
    and rownum = 1) meter_rdg
    ,(select max(curr_rdg_date) -----------curr_rdg_date
         from t4
    where acct_no = t1.acct_no) curr_rdg_date
    ,(select curr_meter_rdg -------------curr_meter_rdg
    from t4
    where acct_no = t1.acct_no
    and curr_rdg_date = (select max(rdg_date)
    from t4
    where acct_no = t1.acct_no)
    and rownum = 1)curr_meter_rdg
    from t1
    where acct_status = 'D')
    --filters...
    where      tran_date between to_date('xx-xx-xxxx','mm-dd-yyyy')
              and to_date('xx-xx-xxxx','mm-dd-yyyy')
    order by tran_date desc
    When I take out the "date between" clause, my query is quite fast: I can query "MILLIONS" of single records in 5-10 seconds. If I put in the "where tran_date between...", the query runs for 5+ minutes. In addition to this, I would like to sort tran_date in "descending" order, and then it crawls... so slow...
    Can anyone give me an idea on how to sort "tran_date" descending?
    I'm kind of a newbie programmer, so any suggestion would be highly appreciated. :) Thank you.

    First, try to write your query so that it is readable, and with some modifications to avoid the subqueries:
    select acct_no,
           tran_date,
           rdg_date,
           meter_rdg
    from  (select t1.acct_no,
                  max(t2.tran_date)                                                  as tran_date,
                  max(t3.rdg_date)                                                   as rdg_date,
                  max(t3.meter_rdg) keep (dense_rank last order by t3.rdg_date)      as meter_rdg,
                  max(t4.curr_rdg_date)                                              as curr_rdg_date,
                  max(t4.curr_meter_rdg) keep (dense_rank last order by t4.rdg_date) as curr_meter_rdg
            from t1, t2, t3, t4
            where t1.acct_status = 'D'
            and   t1.acct_no=t2.acct_no(+)
            and   t1.acct_no=t3.acct_no(+)
            and   t1.acct_no=t4.acct_no(+)
            group by t1.acct_no)
    where tran_date between to_date('xx-xx-xxxx','mm-dd-yyyy')
    and to_date('xx-xx-xxxx','mm-dd-yyyy')
    order by tran_date desc
    Then work on the explain plan...
    Nicolas.
    /*obviously not tested*/

  • Best way to insert millions of records into the table

    Hi,
    From a performance point of view, I am looking for suggestions on the best way to insert millions of records into a table.
    Please also guide me on how to implement it in an easy way that gives better performance.
    Thanks,
    Orahar.

    Orahar wrote:
    Its Distributed data. No. of clients and N no. of Transaction data fetching from the database based on the different conditions and insert into another transaction table which is like batch process.
    Sounds contradictory.
    If the source data is already in the database, it is centralised.
    In that case you ideally do not want the overhead of shipping that data to a client, the client processing it, and the client shipping the results back to the database to be stored (inserted).
    It is much faster and more scalable for the client to instruct the database (via a stored proc or package) what to do, and for that code (running on the database) to process the data.
    For a stored proc, the same principle applies. It is faster for it to instruct the SQL engine what to do (via an INSERT..SELECT statement) than to pull the data from the SQL engine using a cursor fetch loop and then push that data back to the SQL engine using an insert statement.
    An INSERT..SELECT can also be done as a direct path insert. This introduces some limitations, but is faster than a normal insert.
    If the data processing is too complex for an INSERT..SELECT, then pulling the data into PL/SQL, processing it there, and pushing it back into the database is the next best option. This should be done using bulk processing though in order to optimise the data transfer process between the PL/SQL and SQL engines.
    Other performance considerations are the constraints on the insert table, the triggers, the indexes and so on. Make sure that data integrity is guaranteed (e.g. via PKs and FKs) and optimal (e.g. FK columns should be indexed). Using triggers - well, that may not be the best approach (for example, using a trigger to assign a sequence value when it can be done faster in the insert SQL itself). Personally, I avoid using triggers - I would rather have that code residing in a PL/SQL API for manipulating data in that table.
    The type of table also plays a role. Make sure that the decision about the table structure - hashed, indexed, partitioned, etc. - is the optimal one for the data structure that is to reside in that table. (A rough sketch of the INSERT..SELECT route is below.)
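    A hedged sketch of the direct-path, parallel INSERT..SELECT described above (SRC_TXN, TGT_TXN and the filter are made-up names):
    ALTER SESSION ENABLE PARALLEL DML;
    INSERT /*+ APPEND PARALLEL(t 4) */ INTO tgt_txn t
    SELECT s.*                        -- or the transformed column list
    FROM   src_txn s
    WHERE  s.status = 'READY';        -- hypothetical batch condition
    COMMIT;                           -- needed before the session can query tgt_txn again
    If the transformation is too complex for plain SQL, the fallback is PL/SQL bulk processing (BULK COLLECT ... FORALL), as sketched further down this page.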

  • What is the best approach to insert millions of records?

    Hi,
    What is the best approach to inserting millions of records into a table?
    If an error occurs while inserting, how can I know which record failed?
    Thanks & Regards,
    Sunita

    Hello 942793
    There isn't a best approach if you do not provide us with the requirements and the environment...
    It depends on what "best" means for you.
    Questions:
    1.) Can you disable the Constraints / unique Indexes on the table?
    2.) Is there a possibility to run parallel queries?
    3.) Do you need to know the rows which can not be inserted if the constraints are enabled? Or it is not necessary?
    4.) Do you need it to be fast, or do you have time to do it?
    What does "best approach" mean for you?
    Regards,
    David
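    To question 3, a hedged sketch of one way to see exactly which rows failed when loading from PL/SQL, using FORALL ... SAVE EXCEPTIONS (source_tab, target_tab and the 10,000-row batch size are assumptions; the set-based alternative is DML error logging, shown further down this page):
    DECLARE
      TYPE t_rows IS TABLE OF target_tab%ROWTYPE;
      l_rows     t_rows;
      dml_errors EXCEPTION;
      PRAGMA EXCEPTION_INIT(dml_errors, -24381);   -- ORA-24381: error(s) in array DML
      CURSOR c_src IS SELECT * FROM source_tab;
    BEGIN
      OPEN c_src;
      LOOP
        FETCH c_src BULK COLLECT INTO l_rows LIMIT 10000;
        EXIT WHEN l_rows.COUNT = 0;
        BEGIN
          FORALL i IN 1 .. l_rows.COUNT SAVE EXCEPTIONS
            INSERT INTO target_tab VALUES l_rows(i);
        EXCEPTION
          WHEN dml_errors THEN
            FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
              DBMS_OUTPUT.put_line('batch row '
                || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX || ' failed: '
                || SQLERRM(-SQL%BULK_EXCEPTIONS(j).ERROR_CODE));
            END LOOP;
        END;
        COMMIT;
      END LOOP;
      CLOSE c_src;
    END;
    /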

  • Very slow sorting of 48 million records

    Hi All,
    I am working on a project to calculate calls, inbound and outbound at a telecom company.
    We get 12 million call records every day, i.e. Calling_number_From, Call_start_time, Calling_number_to and Call_end_time.
    We then split each of these records into 4 records using UNION ALL, which means we have 48 million records to process, and then we order by call_time.
    This order by takes hours to run. Please advise on ideas to improve performance.
    The table has Parallel_degree 10.
    We are on Oracle 10g.
    We split each call into four records, i.e.:
    Each call will have incoming number and outgoing number.
    Incoming call
    Main_number   Call_time         Count_calls
    999           Call_start_time   +1
    999           Call_end_time     -1
    Outgoing call
    Main_number   Call_time         Count_calls
    888           Call_start_time   +1
    888           Call_end_time     -1
    Then we sort Column_call_time in ascending order and check the maximum simultaneous incoming calls, outgoing calls and total active calls for each Main_number in one hour. That is the reason we need the sort.
    Do you guys know any other algorithm to do the same?
    Is there any way to sort 48 million rows faster?
    Below is the query.
    SELECT did_qry.PART_TS,
    did_qry.P_NUMBER ,
    TO_CHAR(did_qry.call_time,'HH24')
    ||':00-'
    ||TO_CHAR(DID_QRY.CALL_TIME,'HH24')
    ||':59' HOUR_RANGE,
    FLAG,
    HOUR_CHANGE,
    DECODE(HOUR_CHANGE,'HC',DID_QRY.ACTIVE_CALLS+1,DECODE(DID_QRY.ACTIVE_CALLS,0,1,DID_QRY.ACTIVE_CALLS)) ACTIVE_CALLS,
    DECODE(HOUR_CHANGE,'HC',DID_QRY.IO_CALLS_CNT+1,DECODE(DID_QRY.IO_CALLS_CNT,0,1,DID_QRY.IO_CALLS_CNT)) io_calls
    FROM
    (SELECT PART_TS,
    P_NUMBER,
    did,
    call_time,
    flag ,
    hour_change,
    SUM(act) over ( partition BY P_NUMBER order by rownum ) active_calls ,
    SUM(io_calls) over ( partition BY P_NUMBER,flag order by rownum ) io_calls_cnt
    FROM
    (select TRUNC(H.PART_TS) PART_TS,
    TPILOT.P_NUM P_NUMBER,
    h.orig_num did,
    h.Call_start_ts call_time,
    'IN' flag,
    1 act,
    1 io_calls,
    'NA' hour_change
    from CALL_REC H,
    DISCONN_CD DCODE,
    P_DID TPILOT
    where H.PART_TS >=to_date('17-02-2011 23:59:59','DD-MM-YYYY HH24:MI:SS')
    and H.PART_TS <to_date('19-02-2011 00:00:00','DD-MM-YYYY HH24:MI:SS')
    AND DCODE.EFF_START_DT <= H.PART_TS
    AND DCODE.EFF_END_DT > H.PART_TS
    AND dcode.CDR_C_CDE =h.A_I_ID
    AND dcode.CDR_B_CDE =h.R_C_ID
    AND dcode.AB_DIS_IND ='N'
    AND RECORD_TYP_ID ='00000000'
    AND tpilot.EFF_START_DT <= h.PART_TS
    AND tpilot.EFF_END_DT > h.PART_TS
    and TPILOT.D_NUM =H.TERM_NUM
    UNION ALL
    select TRUNC(H.PART_TS) PART_TS,
    tpilot.P_NUM P_NUMBER,
    h.term_num did,
    h.PART_TS call_time,
    'IN' flag,
    -1 act,
    -1 io_calls,
    DECODE(greatest(TO_CHAR(h.Call_start_ts,'HH12'),TO_CHAR(h.PART_TS,'HH12')), least(TO_CHAR(h.Call_start_ts,'HH12'),TO_CHAR(h.PART_TS,'HH12')),'NC',DECODE(greatest(TO_CHAR(h.Call_start_ts,'HH12'),TO_CHAR(h.PART_TS,'HH12')), TO_CHAR(h.PART_TS,'HH12'),'HC','NC')) hour_change
    from CALL_REC H,
    DISCONN_CD DCODE,
    P_DID tpilot
    where H.PART_TS >=to_date('17-02-2011 23:59:59','DD-MM-YYYY HH24:MI:SS')
    and H.PART_TS <to_date('19-02-2011 00:00:00','DD-MM-YYYY HH24:MI:SS')
    AND DCODE.EFF_START_DT <= h.PART_TS
    AND DCODE.EFF_END_DT > h.PART_TS
    AND dcode.CDR_C_CDE =h.A_I_ID
    AND dcode.CDR_B_CDE =h.R_C_ID
    AND dcode.AB_DIS_IND ='N'
    AND RECORD_TYP_ID ='00000000'
    and TPILOT.EFF_START_DT <= H.PART_TS
    and TPILOT.EFF_END_DT > H.PART_TS
    and TPILOT.D_NUM =H.TERM_NUM
    UNION ALL
    SELECT TRUNC(H.PART_TS) PART_TS,
    pilot.P_NUM P_NUMBER,
    h.orig_num did,
    h.Call_start_ts call_time,
    'OUT' flag,
    1 act,
    1 io_calls,
    'NA' hour_change
    FROM CALL_REC H,
    DISCONN_CD DCODE,
    P_DID PILOT
    where H.PART_TS >=to_date('17-02-2011 23:59:59','DD-MM-YYYY HH24:MI:SS')
    and H.PART_TS <to_date('19-02-2011 00:00:00','DD-MM-YYYY HH24:MI:SS')
    AND DCODE.EFF_START_DT <= H.PART_TS
    AND DCODE.EFF_END_DT > H.PART_TS
    AND dcode.CDR_C_CDE =h.A_I_ID
    AND dcode.CDR_B_CDE =h.R_C_ID
    AND dcode.AB_DIS_IND ='N'
    AND RECORD_TYP_ID ='00000000'
    AND pilot.EFF_START_DT <= h.PART_TS
    and PILOT.EFF_END_DT > H.PART_TS
    and PILOT.D_NUM =H.ORIG_NUM
    UNION ALL
    SELECT TRUNC(h.PART_TS) PART_TS,
    pilot.P_NUM P_NUMBER,
    h.term_num did,
    h.PART_TS call_time,
    'OUT' flag,
    -1 act,
    -1 io_calls,
    DECODE(greatest(TO_CHAR(h.Call_start_ts,'HH12'),TO_CHAR(h.PART_TS,'HH12')), least(TO_CHAR(h.Call_start_ts,'HH12'),TO_CHAR(h.PART_TS,'HH12')),'NC',DECODE(greatest(TO_CHAR(h.Call_start_ts,'HH12'),TO_CHAR(h.PART_TS,'HH12')), TO_CHAR(h.PART_TS,'HH12'),'HC','NC')) hour_change
    FROM CALL_REC H,
    DISCONN_CD DCODE,
    P_DID pilot
    WHERE H.PART_TS >=to_date('17-02-2011 23:59:59','DD-MM-YYYY HH24:MI:SS')
    and H.PART_TS <to_date('19-02-2011 00:00:00','DD-MM-YYYY HH24:MI:SS')
    AND DCODE.EFF_START_DT <= h.PART_TS
    AND DCODE.EFF_END_DT > h.PART_TS
    AND dcode.CDR_C_CDE =h.A_I_ID
    AND dcode.CDR_B_CDE =h.R_C_ID
    AND dcode.AB_DIS_IND ='N'
    AND RECORD_TYP_ID ='00000000'
    AND pilot.EFF_START_DT <= h.PART_TS
    AND pilot.EFF_END_DT > h.PART_TS
    AND pilot.D_NUM =h.orig_num
    ORDER BY 2,4,6 ASC
    ) DID_QRY
    )

    Explain Plan
    Plan hash value: 616103529
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
    | 0 | SELECT STATEMENT | | 204M| 12G| | 759K (1)| 02:31:49 | | | | | |
    | 1 | WINDOW SORT | | 204M| 12G| 33G| 759K (1)| 02:31:49 | | | | | |
    | 2 | WINDOW SORT | | 204M| 12G| 33G| 759K (1)| 02:31:49 | | | | | |
    | 3 | COUNT | | | | | | | | | | | |
    | 4 | PX COORDINATOR | | | | | | | | | | | |
    | 5 | PX SEND QC (ORDER) | :TQ10005 | 204M| 12G| | 5919K(100)| 19:44:00 | | | Q1,05 | P->S | QC (ORDER) |
    | 6 | VIEW | | 204M| 12G| | 5919K(100)| 19:44:00 | | | Q1,05 | PCWP | |
    | 7 | SORT ORDER BY | | 204M| 22G| 55G| 22449 (76)| 00:04:30 | | | Q1,05 | PCWP | |
    | 8 | PX RECEIVE | | | | | | | | | Q1,05 | PCWP | |
    | 9 | PX SEND RANGE | :TQ10004 | | | | | | | | Q1,04 | P->P | RANGE |
    | 10 | BUFFER SORT | | 204M| 12G| | | | | | Q1,04 | PCWP | |
    | 11 | UNION-ALL | | | | | | | | | Q1,04 | PCWP | |
    |* 12 | HASH JOIN | | 51M| 6052M| | 5612 (4)| 00:01:08 | | | Q1,04 | PCWP | |
    | 13 | BUFFER SORT | | | | | | | | | Q1,04 | PCWC | |
    | 14 | PX RECEIVE | | 13 | 754 | | 5 (0)| 00:00:01 | | | Q1,04 | PCWP | |
    | 15 | PX SEND BROADCAST | :TQ10000 | 13 | 754 | | 5 (0)| 00:00:01 | | | | S->P | BROADCAST |
    | 16 | MERGE JOIN CARTESIAN| | 13 | 754 | | 5 (0)| 00:00:01 | | | | | |
    | 17 | INDEX FULL SCAN | IDX_PK_PBX_PILOT_DID | 2 | 68 | | 1 (0)| 00:00:01 | | | | | |
    | 18 | BUFFER SORT | | 7 | 168 | | 4 (0)| 00:00:01 | | | | | |
    |* 19 | TABLE ACCESS FULL | VOIP_ABNORM_DISCONN_CD | 7 | 168 | | 2 (0)| 00:00:01 | | | | | |
    | 20 | PX BLOCK ITERATOR | | 7874K| 495M| | 5546 (3)| 00:01:07 | 1 | 3 | Q1,04 | PCWC | |
    |* 21 | TABLE ACCESS FULL | HIQ_EVENT_T | 7874K| 495M| | 5546 (3)| 00:01:07 | 1 | 3 | Q1,04 | PCWP | |
    |* 22 | HASH JOIN | | 51M| 5516M| | 5612 (4)| 00:01:08 | | | Q1,04 | PCWP | |
    | 23 | BUFFER SORT | | | | | | | | | Q1,04 | PCWC | |
    | 24 | PX RECEIVE | | 13 | 754 | | 5 (0)| 00:00:01 | | | Q1,04 | PCWP | |
    | 25 | PX SEND BROADCAST | :TQ10001 | 13 | 754 | | 5 (0)| 00:00:01 | | | | S->P | BROADCAST |
    | 26 | MERGE JOIN CARTESIAN| | 13 | 754 | | 5 (0)| 00:00:01 | | | | | |
    | 27 | INDEX FULL SCAN | IDX_PK_PBX_PILOT_DID | 2 | 68 | | 1 (0)| 00:00:01 | | | | | |
    | 28 | BUFFER SORT | | 7 | 168 | | 4 (0)| 00:00:01 | | | | | |
    |* 29 | TABLE ACCESS FULL | VOIP_ABNORM_DISCONN_CD | 7 | 168 | | 2 (0)| 00:00:01 | | | | | |
    | 30 | PX BLOCK ITERATOR | | 7874K| 413M| | 5546 (3)| 00:01:07 | 1 | 3 | Q1,04 | PCWC | |
    |* 31 | TABLE ACCESS FULL | HIQ_EVENT_T | 7874K| 413M| | 5546 (3)| 00:01:07 | 1 | 3 | Q1,04 | PCWP | |
    |* 32 | HASH JOIN | | 51M| 5516M| | 5612 (4)| 00:01:08 | | | Q1,04 | PCWP | |
    | 33 | BUFFER SORT | | | | | | | | | Q1,04 | PCWC | |
    | 34 | PX RECEIVE | | 13 | 754 | | 5 (0)| 00:00:01 | | | Q1,04 | PCWP | |
    | 35 | PX SEND BROADCAST | :TQ10002 | 13 | 754 | | 5 (0)| 00:00:01 | | | | S->P | BROADCAST |
    | 36 | MERGE JOIN CARTESIAN| | 13 | 754 | | 5 (0)| 00:00:01 | | | | | |
    | 37 | INDEX FULL SCAN | IDX_PK_PBX_PILOT_DID | 2 | 68 | | 1 (0)| 00:00:01 | | | | | |
    | 38 | BUFFER SORT | | 7 | 168 | | 4 (0)| 00:00:01 | | | | | |
    |* 39 | TABLE ACCESS FULL | VOIP_ABNORM_DISCONN_CD | 7 | 168 | | 2 (0)| 00:00:01 | | | | | |
    | 40 | PX BLOCK ITERATOR | | 7874K| 413M| | 5546 (3)| 00:01:07 | 1 | 3 | Q1,04 | PCWC | |
    |* 41 | TABLE ACCESS FULL | HIQ_EVENT_T | 7874K| 413M| | 5546 (3)| 00:01:07 | 1 | 3 | Q1,04 | PCWP | |
    |* 42 | HASH JOIN | | 51M| 6052M| | 5612 (4)| 00:01:08 | | | Q1,04 | PCWP | |
    | 43 | BUFFER SORT | | | | | | | | | Q1,04 | PCWC | |
    | 44 | PX RECEIVE | | 13 | 754 | | 5 (0)| 00:00:01 | | | Q1,04 | PCWP | |
    | 45 | PX SEND BROADCAST | :TQ10003 | 13 | 754 | | 5 (0)| 00:00:01 | | | | S->P | BROADCAST |
    | 46 | MERGE JOIN CARTESIAN| | 13 | 754 | | 5 (0)| 00:00:01 | | | | | |
    | 47 | INDEX FULL SCAN | IDX_PK_PBX_PILOT_DID | 2 | 68 | | 1 (0)| 00:00:01 | | | | | |
    | 48 | BUFFER SORT | | 7 | 168 | | 4 (0)| 00:00:01 | | | | | |
    |* 49 | TABLE ACCESS FULL | VOIP_ABNORM_DISCONN_CD | 7 | 168 | | 2 (0)| 00:00:01 | | | | | |
    | 50 | PX BLOCK ITERATOR | | 7874K| 495M| | 5546 (3)| 00:01:07 | 1 | 3 | Q1,04 | PCWC | |
    |* 51 | TABLE ACCESS FULL | HIQ_EVENT_T | 7874K| 495M| | 5546 (3)| 00:01:07 | 1 | 3 | Q1,04 | PCWP | |
    Predicate Information (identified by operation id):
    12 - access("H"."ATTEMPT_INDICATOR_ID"=TO_NUMBER("DCODE"."CDR_COLUMN_18_CODE") AND "DCODE"."CDR_COLUMN_19_CODE"="H"."RELEASE_CAUSE_ID" AND
    "TPILOT"."DID_NUM"="H"."TERM_NUM")
    filter("DCODE"."EFF_START_DT"<="H"."RETENTION_TS" AND "DCODE"."EFF_END_DT">"H"."RETENTION_TS" AND "TPILOT"."EFF_START_DT"<="H"."RETENTION_TS" AND
    "TPILOT"."EFF_END_DT">"H"."RETENTION_TS")
    19 - filter("DCODE"."ABNORM_DISCONN_IND"='N')
    21 - filter("HIQ_RECORD_TYPE_ID"='00000000' AND ("H"."RETENTION_TS">=TO_DATE('2009-01-01 23:59:59', 'yyyy-mm-dd hh24:mi:ss') AND
    "H"."RETENTION_TS"<TO_DATE('2011-02-14 23:59:59', 'yyyy-mm-dd hh24:mi:ss') OR "H"."CALL_RLSE_TS">=TIMESTAMP'2009-01-01 23:59:59' AND
    "H"."RETENTION_TS">=TO_DATE('2008-12-31 23:59:59', 'yyyy-mm-dd hh24:mi:ss')))
    22 - access("H"."ATTEMPT_INDICATOR_ID"=TO_NUMBER("DCODE"."CDR_COLUMN_18_CODE") AND "DCODE"."CDR_COLUMN_19_CODE"="H"."RELEASE_CAUSE_ID" AND
    "TPILOT"."DID_NUM"="H"."TERM_NUM")
    filter("DCODE"."EFF_START_DT"<="H"."RETENTION_TS" AND "DCODE"."EFF_END_DT">"H"."RETENTION_TS" AND "TPILOT"."EFF_START_DT"<="H"."RETENTION_TS" AND
    "TPILOT"."EFF_END_DT">"H"."RETENTION_TS")
    29 - filter("DCODE"."ABNORM_DISCONN_IND"='N')
    31 - filter("HIQ_RECORD_TYPE_ID"='00000000' AND ("H"."RETENTION_TS">=TO_DATE('2009-01-01 23:59:59', 'yyyy-mm-dd hh24:mi:ss') AND
    "H"."RETENTION_TS"<TO_DATE('2011-02-14 23:59:59', 'yyyy-mm-dd hh24:mi:ss') OR "H"."CALL_RLSE_TS">=TIMESTAMP'2009-01-01 23:59:59' AND
    "H"."RETENTION_TS">=TO_DATE('2008-12-31 23:59:59', 'yyyy-mm-dd hh24:mi:ss')))
    32 - access("H"."ATTEMPT_INDICATOR_ID"=TO_NUMBER("DCODE"."CDR_COLUMN_18_CODE") AND "DCODE"."CDR_COLUMN_19_CODE"="H"."RELEASE_CAUSE_ID" AND
    "PILOT"."DID_NUM"="H"."ORIG_NUM")
    filter("DCODE"."EFF_START_DT"<="H"."RETENTION_TS" AND "DCODE"."EFF_END_DT">"H"."RETENTION_TS" AND "PILOT"."EFF_START_DT"<="H"."RETENTION_TS" AND
    "PILOT"."EFF_END_DT">"H"."RETENTION_TS")
    39 - filter("DCODE"."ABNORM_DISCONN_IND"='N')
    41 - filter("HIQ_RECORD_TYPE_ID"='00000000' AND ("H"."RETENTION_TS">=TO_DATE('2009-01-01 23:59:59', 'yyyy-mm-dd hh24:mi:ss') AND
    "H"."RETENTION_TS"<TO_DATE('2011-02-14 23:59:59', 'yyyy-mm-dd hh24:mi:ss') OR "H"."CALL_RLSE_TS">=TIMESTAMP'2009-01-01 23:59:59' AND
    "H"."RETENTION_TS">=TO_DATE('2008-12-31 23:59:59', 'yyyy-mm-dd hh24:mi:ss')))
    42 - access("H"."ATTEMPT_INDICATOR_ID"=TO_NUMBER("DCODE"."CDR_COLUMN_18_CODE") AND "DCODE"."CDR_COLUMN_19_CODE"="H"."RELEASE_CAUSE_ID" AND
    "PILOT"."DID_NUM"="H"."ORIG_NUM")
    filter("DCODE"."EFF_START_DT"<="H"."RETENTION_TS" AND "DCODE"."EFF_END_DT">"H"."RETENTION_TS" AND "PILOT"."EFF_START_DT"<="H"."RETENTION_TS" AND
    "PILOT"."EFF_END_DT">"H"."RETENTION_TS")
    49 - filter("DCODE"."ABNORM_DISCONN_IND"='N')
    51 - filter("HIQ_RECORD_TYPE_ID"='00000000' AND ("H"."RETENTION_TS">=TO_DATE('2009-01-01 23:59:59', 'yyyy-mm-dd hh24:mi:ss') AND
    "H"."RETENTION_TS"<TO_DATE('2011-02-14 23:59:59', 'yyyy-mm-dd hh24:mi:ss') OR "H"."CALL_RLSE_TS">=TIMESTAMP'2009-01-01 23:59:59' AND
    "H"."RETENTION_TS">=TO_DATE('2008-12-31 23:59:59', 'yyyy-mm-dd hh24:mi:ss')))

  • How can I read millions of records and write them as a *.csv file

    I have to return some set of column values (based on the current date) from the database - it could be millions of records. DBMS_OUTPUT can accommodate only 20000 records. (I am retrieving through a procedure using a cursor.)
    I should write these values to a file with the extension .csv (comma-separated file). I thought of using UTL_FILE, but I heard there is some restriction on the number of records even in UTL_FILE.
    If so, what is the restriction? Is there any other way I can achieve it? (BLOB or CLOB??)
    Please help me in solving this problem.
    I have to write to the .csv file the values from the cursor, which I have concatenated with ","; at the moment the procedure returns the values to the screen (using DBMS_OUTPUT, temporarily) and I have to redirect the output to .csv.
    The .csv should be in some physical directory, and I have to upload (ftp) the file from the directory to the website.
    Please help me out.

    Jimmy,
    Make sure that utl_file is properly installed, make sure that the utl_file_dir parameter is set in the init.ora file and that the database has been re-started so that it will take effect, make sure that you have sufficient privileges granted directly, not through roles, including privileges to the file and directory that you are trying to write to, add the exception block below to your procedure to narrow down the source of the exception, then test again. If you still get an error, please post a cut and paste of the exact code that you run and any messages that you received.
    exception
        when utl_file.invalid_path then
            raise_application_error(-20001,
           'INVALID_PATH: File location or filename was invalid.');
        when utl_file.invalid_mode then
            raise_application_error(-20002,
          'INVALID_MODE: The open_mode parameter in FOPEN was
           invalid.');
        when utl_file.invalid_filehandle then
            raise_application_error(-20002,
            'INVALID_FILEHANDLE: The file handle was invalid.');
        when utl_file.invalid_operation then
            raise_application_error(-20003,
           'INVALID_OPERATION: The file could not be opened or
            operated on as requested.');
        when utl_file.read_error then
            raise_application_error(-20004,
           'READ_ERROR: An operating system error occurred during
            the read operation.');
        when utl_file.write_error then
            raise_application_error(-20005,
                'WRITE_ERROR: An operating system error occurred
                 during the write operation.');
        when utl_file.internal_error then
            raise_application_error(-20006,
                'INTERNAL_ERROR: An unspecified error in PL/SQL.');
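    For the writing itself, a minimal sketch of a UTL_FILE loop (the directory object EXPORT_DIR, the query and the column list are assumptions). As far as I know there is no limit on the number of rows written, only on the line length - up to 32767 bytes per line when FOPEN is called with that max_linesize:
    DECLARE
      l_file UTL_FILE.FILE_TYPE;
    BEGIN
      -- EXPORT_DIR must be a location the database is allowed to write to
      l_file := UTL_FILE.FOPEN('EXPORT_DIR', 'output.csv', 'w', 32767);
      FOR r IN (SELECT col1, col2, col3
                FROM   some_table
                WHERE  run_date = TRUNC(SYSDATE)) LOOP
        UTL_FILE.PUT_LINE(l_file, r.col1 || ',' || r.col2 || ',' || r.col3);
      END LOOP;
      UTL_FILE.FCLOSE(l_file);
    EXCEPTION
      WHEN OTHERS THEN
        IF UTL_FILE.IS_OPEN(l_file) THEN
          UTL_FILE.FCLOSE(l_file);
        END IF;
        RAISE;
    END;
    /
    The finished file then sits in that directory on the database server, from where it can be ftp'd to the website as described.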

  • 0FI_AR_4 Initialization - millions of records

    Hi,
    We are planning to initialize the 0FI_AR_4 datasource, for which there are millions of records available in the source system.
    While checking in the Quality system we have realised that it takes hours to extract the data for just a single fiscal period, and in the Production system we have data for the last 4 years (about 40 million records).
    The trace results (ST05) say that most of the time is spent fetching data from the BKPF_BSID / BKPF_BSAD view.
    I can see an index available on tables BSID/BSAD - Index 5 - "Index for BW extraction" - which is not yet created on the database.
    This index has 2 fields - BUKRS & CPUDT.
    I am not sure whether this index will help in extracting the data.
    What can be done to improve the performance of this extraction so that the initialization of 0FI_AR_4 can be completed in a reasonable time?
    Appreciate your inputs, experts.
    Regards,
    Vikram.

    We are planning to change the existing FI_AR line item load from a current-fiscal-year full load to delta. As of now, FI_AR_4 is loaded full from R/3 for certain company codes and fiscal year/period 2013001 - 2013012. Now the business wants historical data, and going forward the extractor should bring only the changes (delta).
    We would like to perform the steps below:
    1. Initialisation without data transfer on comp_code and FY/period 1998001 - 9999012
    2. Repair full loads for all the historical data, fiscal year/period wise, like 1998001-1998012, 1999001-1999012, ... current year 2013001 - 2013011, till PSA
    3. Load these to the DSO
    4. Activate the requests
    5. Now do a delta load from R/3 to BW till PSA for the new selection 1998001 - 9999012
    6. Load till the DSO
    7. Activate the load
    Please let me know if the above steps will bring in all the data for FI_AR_4 line items and will not miss any data once I do the delta load after the repair full loads.
    Thanks

  • How to use search term2 in customer master record

    hi
    How can I use search term 2 in the customer master record? Can anyone tell me, please?
    thanks
    monica

    Hi,
    Search Term 2
    Label used for search helps.
    Only uppercase letters are stored in this field. Your entries are converted automatically to uppercase letters.
    There are two of these fields for search terms. These fields can be used independently of each other.
    Procedure
    You can use your own criteria for entering the search term.
    Example
    You can enter the main part of the name or an organizational ID.
    For example, for the company "Hechinger & Sons", you could enter "Hechinger" as the first search term.
    The second search term could then be the name ID you use within your company, to help you identify your data later.
    Please check out the following link:
    http://help.sap.com/saphelp_47x200/helpdata/EN/01/a9b331455711d182b40000e829fbfe/frameset.htm
    Hope this helps.
    Please assign points as a way to say thanks.
    Regards,

  • Need help / advice: managing millions of records daily - please help me :)

    Hi all,
    I have only 2 years of experience as an Oracle DBA. I need advice from the experts :)
    To begin: the company I work for has decided to save about 40 million records daily into our Oracle database, in our one table (a user table). These records should be imported daily from csv or xml feeds into that one table.
    This is a project that needs:
    - A study of the performance
    - A study of what is required in terms of hardware
    As a leader in the market, Oracle is the only DBMS that could support this size of data, but what is the limit of Oracle in this case? Can Oracle support and manage 40 million records daily, and for many years? We need all the data in this table; we cannot assume that after a period we no longer need the history. We need to keep all data, without purging the history, for many years. You can imagine: 40 million records daily, for many years!!!
    Then we need to consolidate from this table different views (or maybe materialized views) for each department and business inside the company - another project that needs study!
    My questions are (using Oracle Database 10g Enterprise Edition Release 10.2.0.1.0):
    1- Can Oracle support and perfectly manage 40 million records daily, and for many years?
    2- Study the performance: which solutions and techniques could I use to improve the performance of:
    - Daily loading of 40 million records from csv or xml file/files?
    - Daily consolidating / managing different views / materialized views from this big table?
    3- What is required in terms of hardware? Features / technologies (maybe clusters...)?
    I hope the experts can help and advise me! Thank you very much for your attention :)

    1- Can Oracle support and perfectly manage 40 million records daily, and for many years?
    Yes.
    2- Study the performance: which solutions and techniques could I use to improve the performance?
    Send me your email and I can send you a performance tuning methodology PDF. You can see my email on my profile.
    - Daily loading of 40 million records from csv or xml files?
    Direct load.
    - Daily consolidating / managing different views / materialized views from this big table?
    You can use table partitions, one partition for each day (a rough sketch is below).
    Regards,
    Francisco Munoz Alvarez
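    A rough sketch of the partition-per-day idea (table name and columns are made up; on 10.2 each partition has to be added explicitly, since interval partitioning only arrives in 11g):
    -- range-partitioned target table, one partition per day
    CREATE TABLE daily_feed (
      feed_date  DATE,
      record_id  NUMBER,
      payload    VARCHAR2(4000)
    )
    PARTITION BY RANGE (feed_date) (
      PARTITION p20080101 VALUES LESS THAN (TO_DATE('02-01-2008','DD-MM-YYYY'))
    );
    -- before each daily load, add that day's partition
    ALTER TABLE daily_feed ADD PARTITION p20080102
      VALUES LESS THAN (TO_DATE('03-01-2008','DD-MM-YYYY'));
    -- then load the day's csv with a direct-path load, e.g.
    --   sqlldr userid=... control=daily_feed.ctl direct=true
    -- or through an external table:
    --   INSERT /*+ APPEND */ INTO daily_feed SELECT ... FROM daily_feed_ext;
    Local indexes and the departmental materialized views can then be maintained per partition, and the daily load never has to touch the historical data.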

  • How to update millions of records in a table

    I have a table which contains millions of records.
    I want to update and commit every so many records (say every 10,000 records). I don't want to do it in one stroke, as I may end up with rollback segment issue(s). Any suggestions please!
    Thanks in advance

    Group your updates.
    1.) Look for a good grouping criterion in your table; an index on it is recommended.
    2.) Create a PL/SQL cursor with the grouping criterion in the where clause.
    cursor cur_updt (p_crit_id number) is
    select * from large_table
    where crit_id > p_crit_id;
    3.) Now you can commit all your updates in batches inside a serial loop (a fleshed-out sketch follows).
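    One way to flesh out step 3, as a hedged sketch (the table, the criteria and the SET clause are placeholders). Note that committing inside a loop while the driving cursor is still open can run into ORA-01555 (snapshot too old), so a single set-based UPDATE is still preferable whenever undo/rollback space allows it:
    DECLARE
      CURSOR cur_updt IS
        SELECT rowid AS rid FROM large_table
        WHERE  status = 'OLD';            -- hypothetical group criteria
      TYPE t_rid IS TABLE OF ROWID;
      l_rids t_rid;
    BEGIN
      OPEN cur_updt;
      LOOP
        FETCH cur_updt BULK COLLECT INTO l_rids LIMIT 10000;
        EXIT WHEN l_rids.COUNT = 0;
        FORALL i IN 1 .. l_rids.COUNT
          UPDATE large_table
          SET    status = 'NEW'           -- hypothetical update
          WHERE  rowid = l_rids(i);
        COMMIT;                           -- commit every 10,000 rows
      END LOOP;
      CLOSE cur_updt;
    END;
    /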

  • What's the best way to delete 2.4 million records from a table?

    We have two tables: one is the production table and the other is a temp table whose data we want to insert into the production table. The temp table has 2.5 million records, and the production table has billions of records. What we want to do is simply delete the records that already exist in the production table and then insert the remaining records from the temp table into the production table.
    Can anyone guide me on the best way to do this?
    Thanks,
    Waheed.

    Waheed Azhar wrote:
    The production table is live and data is being appended to it on a random basis. If I go and insert the data from temp to the prod table, a PK violation exception occurs because a record we are going to insert from temp already exists in prod.
    If you really just want to insert the records and don't want to update the matching ones, and you're already on 10g, you could use the "DML error logging" facility of the INSERT command, which logs all failed records but succeeds for the remaining ones.
    You can create a suitable exception table using the DBMS_ERRLOG.CREATE_ERROR_LOG procedure and then use the "LOG ERRORS INTO" clause of the INSERT command. Note that you can't use the "direct-path" insert mode (APPEND hint) if you expect to encounter UNIQUE CONSTRAINT violations, because these can't be logged and they cause the direct-path insert to fail. Since this is a "live" table you probably don't want to use direct-path insert anyway.
    See the manuals for more information: http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_9014.htm#BGBEIACB
    Sample taken from 10g manuals:
    CREATE TABLE raises (emp_id NUMBER, sal NUMBER
       CONSTRAINT check_sal CHECK(sal > 8000));
    EXECUTE DBMS_ERRLOG.CREATE_ERROR_LOG('raises', 'errlog');
    INSERT INTO raises
       SELECT employee_id, salary*1.1 FROM employees
       WHERE commission_pct > .2
       LOG ERRORS INTO errlog ('my_bad') REJECT LIMIT 10;
    SELECT ORA_ERR_MESG$, ORA_ERR_TAG$, emp_id, sal FROM errlog;
    ORA_ERR_MESG$               ORA_ERR_TAG$         EMP_ID SAL
    ORA-02290: check constraint       my_bad               161    7700
    (HR.SYS_C004266) violated
    If the number of rows in the temp table is not too large and you have a suitable index on the large table for the lookup, you could also try to use a NOT EXISTS clause in the insert command:
    INSERT INTO <large_table>
    SELECT ...
    FROM TEMP A
    WHERE NOT EXISTS (
    SELECT NULL
    FROM <large_table> B
    WHERE B.<lookup> = A.<key>
    );
    But you need to check the execution plan, because a hash join using a full table scan on the <large_table> is probably something you want to avoid.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
