How to Load 100 Million Rows into a Partitioned Table

Dear all,
I am working on a VLDB application.
I have a table with 5 columns,
for example: A, B, C, D, DATE_TIME.
I created a range-partitioned table (daily partitions) on the column DATE_TIME,
and also created a number of indexes, for example:
an index on A
a composite index on (DATE_TIME, B, C)
REQUIREMENT
I need to load approximately 100 million records into this table every day (loaded via SQL*Loader or from a temp table: INSERT INTO orig SELECT * FROM temp) ...
QUESTION
Since the table is indexed, I am not able to use the SQL*Loader feature DIRECT=TRUE.
So what is the best available way to load the data into this table?
Note: please remember that I can't drop and re-create the indexes daily due to the huge data volume.

Actually, this is a simpler issue than you seem to think it is.
Q. What is the most expensive and slow operation on a database server?
A. I/O. The more I/O, the more latency there is, the longer the wait times are, the bigger the response times are, etc.
So how do you deal with VLTs? By minimizing I/O. For example, using direct loads/inserts (see the SQL APPEND hint) means less I/O, as we write only to empty data blocks. Doing one pass through the data (e.g. applying transformations as part of the INSERT and not afterwards via UPDATEs) means less I/O. Applying proper filter criteria. Etc.
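A minimal sketch of such a direct-path insert, reusing the ORIG and TEMP table names from the question (the column list is an assumption):
INSERT /*+ APPEND */ INTO orig
SELECT a, b, c, d, date_time   -- any transformations belong here, in the same pass
FROM   temp;
COMMIT;   -- a direct-path insert must be committed before the session can query the table again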
Okay, what do you do when you cannot minimize I/O any further? In that case, you need to look at processing that I/O volume in parallel. Instead of serially reading and writing 100 million rows, you (for example) use 10 processes that each read and write 10 million rows. I/O bandwidth is there to be used. It is extremely unlikely that a single process can fully utilise the available I/O bandwidth. So use more processes, each processing a chunk of data, to use more of that available I/O bandwidth.
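For example, a parallel direct-path insert (a sketch; the degree of 10 simply mirrors the example above and is not a recommendation):
ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ APPEND PARALLEL(orig, 10) */ INTO orig
SELECT /*+ PARALLEL(temp, 10) */ * FROM temp;
COMMIT;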
Lastly, think DDL before DML when dealing with VLTs. For example, a CTAS to create a new data set, followed by a partition exchange to make that new data set part of the destination table, is a lot faster than deleting that partition's data directly and then running an INSERT to refresh that partition's data.
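A sketch of that approach, assuming locally partitioned indexes on the destination table (the partition and staging table names are illustrative):
-- build the new data set in one direct-path pass
CREATE TABLE orig_stage NOLOGGING AS
SELECT * FROM temp;
-- create indexes on orig_stage matching the local index structure of ORIG, then:
ALTER TABLE orig
  EXCHANGE PARTITION p_20090917 WITH TABLE orig_stage
  INCLUDING INDEXES WITHOUT VALIDATION;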
That, in a nutshell, is about it - think I/O and think of ways to use it as effectively as possible. With VLTs and VLDBs one cannot afford to waste I/O.

Similar Messages

  • Performance issue while loading 20 million rows

    Hi all,
    A load of 20 million rows was done into a table (which contains 173 columns) using SQL*Loader,
    and direct=true was used.
    Database: 9i
    OS: SunOS 5.10, Sun V890, 16 CPUs, 32GB RAM
    Elapsed time: 4 hours
    The same volume was then tried, into the same table (but with the columns increased from 173 to 500), with the following details:
    Database: Oracle 10.2.0.4.0 64-bit
    OS: SunOS 5.10, Sun Fire V6800, 24 CPUs and 54GB RAM
    Elapsed time: 6:06 hours
    Please tell me what could be the problem and how I can minimize the loading time.
    Thanks in advance
    Anji

    Hi Burleson,
    The AWR snapshot is as follows.
    DB Name         DB Id    Instance     Inst Num Release     RAC Host
    REVACC        1015743016 REVACC              1 10.2.0.4.0  NO  P4061AFMAP
                  Snap Id      Snap Time      Sessions Curs/Sess
    Begin Snap:       342 16-Sep-09 19:30:53        38       2.7
      End Snap:       343 16-Sep-09 20:30:07        36       2.6
       Elapsed:               59.24 (mins)
       DB Time:              195.22 (mins)
    Cache Sizes
    ~~~~~~~~~~~                       Begin        End
                   Buffer Cache:     9,184M     9,184M  Std Block Size:        16K
               Shared Pool Size:       992M       992M      Log Buffer:    10,560K
    Load Profile
    ~~~~~~~~~~~~                            Per Second       Per Transaction
                      Redo size:          1,097,030.72        354,485,330.18
                  Logical reads:             23,870.31          7,713,251.27
                  Block changes:              8,894.16          2,873,984.09
                 Physical reads:                740.82            239,382.82
                Physical writes:              1,003.32            324,203.27
                     User calls:                 28.54              9,223.18
                         Parses:                242.99             78,517.09
                    Hard parses:                  0.03                  8.55
                          Sorts:                  0.60                193.91
                         Logons:                  0.01                  3.45
                       Executes:                215.63             69,676.00
                   Transactions:                  0.00
      % Blocks changed per Read:   37.26    Recursive Call %:    99.65
    Rollback per transaction %:    0.00       Rows per Sort:  7669.57
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                Buffer Nowait %:   99.99       Redo NoWait %:  100.00
                Buffer  Hit   %:   96.93    In-memory Sort %:  100.00
                Library Hit   %:   99.97        Soft Parse %:   99.99
             Execute to Parse %:  -12.69         Latch Hit %:   99.52
    Parse CPU to Parse Elapsd %:   91.82     % Non-Parse CPU:   99.64
    Shared Pool Statistics        Begin    End
                 Memory Usage %:   44.50   44.46
        % SQL with executions>1:   85.83   84.78
      % Memory for SQL w/exec>1:   87.15   86.65
    Top 5 Timed Events                                         Avg %Total
    ~~~~~~~~~~~~~~~~~~                                        wait   Call
    Event                                 Waits    Time (s)   (ms)   Time Wait Class
    CPU time                                         11,632          99.3
    db file scattered read              320,585         210      1    1.8   User I/O
    SQL*Net more data from client        99,234         164      2    1.4    Network
    log file parallel write               5,750         149     26    1.3 System I/O
    db file parallel write              144,502         142      1    1.2 System I/O
    Time Model Statistics                  DB/Inst: REVACC/REVACC  Snaps: 342-343
    -> Total time in database user-calls (DB Time): 11713.1s
    -> Statistics including the word "background" measure background process
       time, and so do not contribute to the DB time statistic
    -> Ordered by % or DB time desc, Statistic name
    Statistic Name                                       Time (s) % of DB Time
    DB CPU                                               11,631.9         99.3
    sql execute elapsed time                              3,131.4         26.7
    parse time elapsed                                       53.7           .5
    hard parse elapsed time                                   1.2           .0
    connection management call elapsed time                   0.3           .0
    hard parse (sharing criteria) elapsed time                0.1           .0
    sequence load elapsed time                                0.1           .0
    repeated bind elapsed time                                0.0           .0
    PL/SQL execution elapsed time                             0.0           .0
    DB time                                              11,713.1          N/A
    background elapsed time                                 613.1          N/A
    background cpu time                                     454.5          N/A
    Wait Class                              DB/Inst: REVACC/REVACC  Snaps: 342-343
    -> s  - second
    -> cs - centisecond -     100th of a second
    -> ms - millisecond -    1000th of a second
    -> us - microsecond - 1000000th of a second
    -> ordered by wait time desc, waits desc
                                                                      Avg
                                           %Time       Total Wait    wait     Waits
    Wait Class                      Waits  -outs         Time (s)    (ms)      /txn
    User I/O                      562,302     .0              304       1  51,118.4
    System I/O                    166,468     .0              295       2  15,133.5
    Network                       201,009     .0              165       1  18,273.5
    Application                        60     .0                5      82       5.5
    Configuration                     313     .0                4      12      28.5
    Other                           1,266     .0                3       2     115.1
    Concurrency                     9,305     .0                2       0     845.9
    Commit                             60     .0                1      21       5.5
    Wait Events                            DB/Inst: REVACC/REVACC  Snaps: 342-343
    -> s  - second
    -> cs - centisecond -     100th of a second
    -> ms - millisecond -    1000th of a second
    -> us - microsecond - 1000000th of a second
    -> ordered by wait time desc, waits desc (idle events last)
                                                                       Avg
                                                 %Time  Total Wait    wait     Waits
    Event                                 Waits  -outs    Time (s)    (ms)      /txn
    db file scattered read              320,585     .0         210       1  29,144.1
    SQL*Net more data from clien         99,234     .0         164       2   9,021.3
    log file parallel write               5,750     .0         149      26     522.7
    db file parallel write              144,502     .0         142       1  13,136.5
    db file sequential read             207,780     .0          93       0  18,889.1
    enq: RO - fast object reuse              60     .0           5      82       5.5
    write complete waits                    135     .0           3      23      12.3
    control file parallel write           2,501     .0           3       1     227.4
    rdbms ipc reply                         189     .0           2      12      17.2
    control file sequential read         13,694     .0           2       0   1,244.9
    buffer busy waits                     8,499     .0           1       0     772.6
    log file sync                            60     .0           1      21       5.5
    direct path write                     8,290     .0           1       0     753.6
    SQL*Net message to client           100,882     .0           1       0   9,171.1
    log file switch completion               13     .0           0      38       1.2
    os thread startup                         2     .0           0     174       0.2
    direct path read                     25,646     .0           0       0   2,331.5
    log buffer space                        161     .0           0       1      14.6
    latch free                                7     .0           0      24       0.6
    latch: object queue header o            180     .0           0       1      16.4
    log file single write                    11     .0           0       9       1.0
    SQL*Net more data to client             893     .0           0       0      81.2
    latch: cache buffers chains             767     .0           0       0      69.7
    row cache lock                           36     .0           0       2       3.3
    LGWR wait for redo copy                 793     .0           0       0      72.1
    reliable message                         60     .0           0       1       5.5
    latch: cache buffers lru cha             11     .0           0       1       1.0
    db file single write                      1     .0           0      10       0.1
    log file sequential read                 10     .0           0       1       0.9
    latch: session allocation                18     .0           0       1       1.6
    latch: redo writing                       4     .0           0       0       0.4
    latch: messages                           7     .0           0       0       0.6
    latch: row cache objects                  1     .0           0       0       0.1
    latch: checkpoint queue latc              1     .0           0       0       0.1
    PX Idle Wait                         13,996  100.5      27,482    1964   1,272.4
    SQL*Net message from client         100,881     .0      23,912     237   9,171.0
    Streams AQ: qmn slave idle w            126     .0       3,442   27316      11.5
    Streams AQ: qmn coordinator             255   50.6       3,442   13497      23.2
    Streams AQ: waiting for time              1  100.0         545  544885       0.1
    class slave wait                          2     .0           0       2       0.2
    Background Wait Events                 DB/Inst: REVACC/REVACC  Snaps: 342-343
    -> ordered by wait time desc, waits desc (idle events last)
                                                                       Avg
                                                 %Time  Total Wait    wait     Waits
    Event                                 Waits  -outs    Time (s)    (ms)      /txn
    log file parallel write               5,750     .0         149      26     522.7
    db file parallel write              144,502     .0         142       1  13,136.5
    control file parallel write           2,501     .0           3       1     227.4
    direct path write                     8,048     .0           1       0     731.6
    control file sequential read          3,983     .0           0       0     362.1
    os thread startup                         2     .0           0     174       0.2
    direct path read                     25,646     .0           0       0   2,331.5
    log buffer space                        161     .0           0       1      14.6
    events in waitclass Other               924     .0           0       0      84.0
    log file single write                    11     .0           0       9       1.0
    db file single write                      1     .0           0      10       0.1
    log file sequential read                 10     .0           0       1       0.9
    db file sequential read                   2     .0           0       5       0.2
    latch: cache buffers chains              42     .0           0       0       3.8
    latch: redo writing                       4     .0           0       0       0.4
    buffer busy waits                         2     .0           0       0       0.2
    rdbms ipc message                    77,540   24.8      54,985     709   7,049.1
    pmon timer                            1,185  100.0       3,456    2916     107.7
    Streams AQ: qmn slave idle w            126     .0       3,442   27316      11.5
    Streams AQ: qmn coordinator             255   50.6       3,442   13497      23.2
    smon timer                              112     .0       3,374   30125      10.2
    Streams AQ: waiting for time              1  100.0         545  544885       0.1
    Operating System Statistics             DB/Inst: REVACC/REVACC  Snaps: 342-343
    Statistic                                       Total
    AVG_BUSY_TIME                                 161,850
    AVG_IDLE_TIME                                 187,011
    AVG_IOWAIT_TIME                                     0
    AVG_SYS_TIME                                    9,653
    AVG_USER_TIME                                 152,083
    BUSY_TIME                                   3,887,080
    IDLE_TIME                                   4,491,132
    IOWAIT_TIME                                         0
    SYS_TIME                                      234,325
    USER_TIME                                   3,652,755
    LOAD                                               11
    OS_CPU_WAIT_TIME                                9,700
    RSRC_MGR_CPU_WAIT_TIME                              0
    VM_IN_BYTES                                57,204,736
    VM_OUT_BYTES                                        0
    PHYSICAL_MEMORY_BYTES                  56,895,045,632
    NUM_CPUS                                           24
    Service Statistics                     DB/Inst: REVACC/REVACC  Snaps: 342-343
    -> ordered by DB Time
                                                                 Physical    Logical
    Service Name                      DB Time (s)   DB CPU (s)      Reads      Reads
    SYS$USERS                            11,931.9     11,848.9  2,608,446 ##########
    REVACC                                    0.0          0.0          0          0
    SYS$BACKGROUND                            0.0          0.0     25,685     34,096
    Service Wait Class Stats                DB/Inst: REVACC/REVACC  Snaps: 342-343
    -> Wait Class info for services in the Service Statistics section.
    -> Total Waits and Time Waited displayed for the following wait
       classes:  User I/O, Concurrency, Administrative, Network
    -> Time Waited (Wt Time) in centisecond (100th of a second)
    Service Name
    User I/O  User I/O  Concurcy  Concurcy     Admin     Admin   Network   Network
    Total Wts   Wt Time Total Wts   Wt Time Total Wts   Wt Time Total Wts   Wt Time
    SYS$USERS
       525903     24088      9259       161         0         0    201012     16511
    REVACC
            0         0         0         0         0         0         0         0
    SYS$BACKGROUND
        36410      6292        46        35         0         0         0         0
    I will provide the entire report if this is not sufficient.
    Thanks
    Anji

  • How to load 1 million rows?

    Hi,
    I want to load 1 million rows into one of my tables through SQL*Loader; is it possible to load that many? Also, I have a table named SALES which contains 1 million rows; how could I write those rows out to Notepad or any other text editor, or even Excel?
    thanks in advance ...

    Hello Alok,
    You can load/import from anywhere, that's not a problem.
    To make one input file for SQL*Loader, you can use an editor that can save 1 million rows.
    OR
    You can make small files using Excel or some text editor, and then merge these files at the OS level; for example, in Unix you can do:
    cat *.txt > merge.txt
    Then you can use merge.txt as the input to SQL*Loader.
    My question still remains unanswered: from which package do you want to import?
    If it is Access, then you can export directly to a text/csv file.
    If you are using Oracle, then you can use export, the COPY command, or a db link!
    Cheers,
    Sudhanshu
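    For the second half of the question - writing the SALES rows out to a text file - one option is to spool them from SQL*Plus. A minimal sketch (the column names are assumptions):
    SET HEADING OFF PAGESIZE 0 FEEDBACK OFF LINESIZE 200
    SPOOL sales.txt
    SELECT sale_id || ',' || sale_date || ',' || amount FROM sales;
    SPOOL OFF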

  • How to load 100 CSV file to DSO / InfoCube

    Hi
    How can I load 100 CSV files of transactional data into a DSO / InfoCube?

    Hi,
    You can achieve this by writing a .bat (batch file) program in Windows.
    The .bat program will convert the .xls files to .CSV format.
    In the process chain you can use the OS COMMAND option while creating the process chain.
    It will first trigger the .bat file, and then it will trigger the Excel files and the InfoPackage.
    I am also working on the same scenario now.
    Please find below the docs on how to create a process chain using an OS command.
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/e0c41c52-2638-2e10-0b89-ffa4e2f10ac0?QuickLink=index&…
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/3034bf3d-9dc4-2d10-9e93-d4ba01505ed9?QuickLink=index&…
    Thanks,
    Phani.

  • "how to load a text file to oracle table"

    Hi to all,
    Can anybody help me with "how to load a text file to oracle table"? This is the first time I am doing it; please give me the steps.
    Regards
    MKhaleel

    Usage: SQLLDR keyword=value [,keyword=value,...]
    Valid Keywords:
    userid -- ORACLE username/password
    control -- Control file name
    log -- Log file name
    bad -- Bad file name
    data -- Data file name
    discard -- Discard file name
    discardmax -- Number of discards to allow (Default all)
    skip -- Number of logical records to skip (Default 0)
    load -- Number of logical records to load (Default all)
    errors -- Number of errors to allow (Default 50)
    rows -- Number of rows in conventional path bind array or between direct path data saves (Default: Conventional path 64, Direct path all)
    bindsize -- Size of conventional path bind array in bytes (Default 256000)
    silent -- Suppress messages during run (header, feedback, errors, discards, partitions)
    direct -- use direct path (Default FALSE)
    parfile -- parameter file: name of file that contains parameter specifications
    parallel -- do parallel load (Default FALSE)
    file -- File to allocate extents from
    skip_unusable_indexes -- disallow/allow unusable indexes or index partitions (Default FALSE)
    skip_index_maintenance -- do not maintain indexes, mark affected indexes as unusable (Default FALSE)
    commit_discontinued -- commit loaded rows when load is discontinued (Default FALSE)
    readsize -- Size of Read buffer (Default 1048576)
    external_table -- use external table for load; NOT_USED, GENERATE_ONLY, EXECUTE
    (Default NOT_USED)
    columnarrayrows -- Number of rows for direct path column array (Default 5000)
    streamsize -- Size of direct path stream buffer in bytes (Default 256000)
    multithreading -- use multithreading in direct path
    resumable -- enable or disable resumable for current session (Default FALSE)
    resumable_name -- text string to help identify resumable statement
    resumable_timeout -- wait time (in seconds) for RESUMABLE (Default 7200)
    PLEASE NOTE: Command-line parameters may be specified either by position or by keywords. An example of the former case is 'sqlldr scott/tiger foo'; an example of the latter is 'sqlldr control=foo userid=scott/tiger'. One may specify parameters by position before but not after parameters specified by keywords. For example, 'sqlldr scott/tiger control=foo logfile=log' is allowed, but 'sqlldr scott/tiger control=foo log' is not, even though the position of the parameter 'log' is correct.
    SQLLDR USERID=GROWSTAR/[email protected] CONTROL=D:\PFS2004.CTL LOG=D:\PFS2004.LOG BAD=D:\PFS2004.BAD DATA=D:\PFS2004.CSV
    SQLLDR USERID=GROWSTAR/[email protected] CONTROL=D:\CLAB2004.CTL LOG=D:\CLAB2004.LOG BAD=D:\CLAB2004.BAD DATA=D:\CLAB2004.CSV
    SQLLDR USERID=GROWSTAR/[email protected] CONTROL=D:\GROW\DEACTIVATESTAFF\DEACTIVATESTAFF.CTL LOG=D:\GROW\DEACTIVATESTAFF\DEACTIVATESTAFF.LOG BAD=D:\GROW\DEACTIVATESTAFF\DEACTIVATESTAFF.BAD DATA=D:\GROW\DEACTIVATESTAFF\DEACTIVATESTAFF.CSV
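    For completeness, a minimal control file sketch for loading a comma-separated text file (the table and column names are assumptions):
    LOAD DATA
    INFILE 'D:\data\emp.txt'
    INTO TABLE emp
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    (empno, ename, sal)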

  • How to load Matrix report data into basic table data using ODI

    Hi,
    How can I load matrix report data into basic table data using Oracle Data Integrator?
    Requirement Description:
    Following is the matrix report data:
    JOB         DEPT10    DEPT20
    ANALYST               6000
    CLERK       1300      1900
    It needs to be converted into the format below:
    JOB         Dept      Salary
    ANALYST     DEPT10
    ANALYST     DEPT20    6000
    CLERK       DEPT10    1300
    CLERK       DEPT20    1900
    Thanks for your help in advance. Let me know if any further explanation is required.

    Your list seems to be a little restrictive; you can do a lot more with ODI procedures.
    Create a new procedure and add a step. In the 'command on source' tab, set the technology and schema as per your source database. Use the unpivot functionality as described in the link; but rather than using 'SELECT *', please use the appropriate column names and alias them, e.g.:
    SELECT job     AS job,
           deptsal AS deptsal,
           saldesc AS saledesc
    FROM pivoted_data
    UNPIVOT (
      deptsal                                 --<-- unpivot_clause
      FOR saldesc                             --<-- unpivot_for_clause
      IN (d10_sal, d20_sal, d30_sal, d40_sal) --<-- unpivot_in_clause
    )
    Then in your 'command on target' tab, set the technology and schema to your target db, and put in your INSERT statement, e.g.:
    INSERT INTO job_sales
      (job,
       deptsal,
       saledesc)
    VALUES
      (:job,
       :deptsal,
       :saledesc)
    This way you are using bind variables from the source to load data into the target.
    Obviously, if the source and target tables are in the same database, then you can have it all in one statement in the 'command on target':
    INSERT INTO job_sales
      (job,
       deptsal,
       saledesc)
    SELECT job     AS job,
           deptsal AS deptsal,
           saldesc AS saledesc
    FROM pivoted_data
    UNPIVOT (
      deptsal                                 --<-- unpivot_clause
      FOR saldesc                             --<-- unpivot_for_clause
      IN (d10_sal, d20_sal, d30_sal, d40_sal) --<-- unpivot_in_clause
    )
    Also set the log counter to 'Insert' on the tab where your INSERT statement is, so you know how many rows you insert into the table.
    Hope this helps.
    BUT remember that the UNPIVOT feature only came out in Oracle 11g.

  • How to delete a particular row in ALV table

    Hi,
    How can I delete a particular row in an ALV table based on some condition (by checking the value of one of the columns in the row)?
    Thanks
    Bala Duvvuri

    Hello Bala,
    Can you please be a bit more clear as to how you intend to delete the rows from your ALV? By the way deleting rows from an ALV is no different from deleting rows from a normal table. Suppose you have enabled selection property in ALV & then select multiple rows and click up on a button to delete the rows then below would be the coding: (Also keep in mind that you would have to maintain the Selection property of the context node that you are binding to your ALV to 0..n)
    DATA: lr_table_settings TYPE REF TO if_salv_wd_table_settings,
          lr_config         TYPE REF TO cl_salv_wd_config_table.
      lr_table_settings ?= lr_config.
    * Set the ALV selection to multiple selection with no lead selection
      lr_table_settings->set_selection_mode( value = cl_wd_table=>e_selection_mode-multi_no_lead ).
    Next delete the selected rows in the action triggered by the button:
    METHOD onactiondelete_rows .
      DATA:  wd_node TYPE REF TO if_wd_context_node,
             lt_node1 TYPE ig_componentcontroller=>elements_node,
             wa_temp  TYPE REF TO if_wd_context_element,
             lt_temp  TYPE wdr_context_element_set,
             row_number TYPE i VALUE 0.
      wd_node = wd_context->get_child_node( name = 'NODE' ).
      CALL METHOD wd_node->get_selected_elements
        RECEIVING
          set = lt_temp.
      LOOP AT lt_temp INTO wa_temp.
        wd_node->remove_element( EXPORTING element = wa_temp ).
      ENDLOOP.
      CALL METHOD wd_node->get_static_attributes_table
        EXPORTING
          from  = 1
          to    = 2147483647
        IMPORTING
          table = lt_node1.
      wd_node->bind_table( new_items = lt_node1 ).
    ENDMETHOD.
    If this isn't your requirement, please do let me know so that I can try to come up with another analysis.
    Regards,
    Uday

  • How to find newly updated rows in a table

    Hi..
    How can I find the newly updated rows in a table?
    Thanks in advance
    pal

    Another good approach would be to add a LAST_UPDATED column to your table, reflecting the time each row was last updated - see the sketch below.
    G
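    A minimal sketch of such a column maintained by a trigger (the table name is an assumption):
    ALTER TABLE my_table ADD (last_updated DATE);
    CREATE OR REPLACE TRIGGER my_table_biu
    BEFORE INSERT OR UPDATE ON my_table
    FOR EACH ROW
    BEGIN
      :NEW.last_updated := SYSDATE;  -- stamp every inserted or updated row
    END;
    /
    -- then, for example, rows touched in the last day:
    SELECT * FROM my_table WHERE last_updated > SYSDATE - 1;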

  • How to get count of rows for a table?

    Hi,
    How can I get the count of rows in a table? And secondly, how can I access a particular cell in a table?
    Regards,
    Devashish

    Hi Devashish,
    WdContext.node<Your_node_name>().size() will give you the number of rows.
    This should be the node that is bound to the table's dataSource property.
    WdContext.node<Your_node_name>().get<node_name>ElementAt(index_value); will select the row at that particular index.
    You can access an attribute of a particular row as:
    WdContext.node<Your_node_name>().get<node_name>ElementAt(index_value).get<attribute_name>();
    Hope this helps,
    Best Regards,
    Nibu.

  • How to list all the rows from the table VBAK

    Friends,
    How can I list all the rows from the table VBAK? A SELECT query and the output list would be appreciated.

    Hi,
    If you want to select all the rows from VBAK, write:
    DATA: itab TYPE TABLE OF vbak,
          wa   LIKE LINE OF itab.
    SELECT * FROM vbak INTO TABLE itab.
    itab is the internal table with row type VBAK.
    LOOP AT itab INTO wa.
      WRITE: / wa-field1.
    ENDLOOP.

  • Migration of million rows from remote table using merge

    I need to migrate (using MERGE) almost 15 million rows from a remote database table that has a unique index on (DAY, Key), with the following data setup.
    DAY1 -- Key1 -- NKey11
    DAY1 -- Key2 -- NKey12
    DAY2 -- Key1 -- NKey21
    DAY2 -- Key2 -- NKey22
    DAY3 -- Key1 -- NKey31
    DAY3 -- Key2 -- NKey32
    In my database, I have to merge all these 15 million rows into a table having a unique index on (Key); there is no DAY column in the destination table.
    First, it would be executed for DAY1, and the merge command will insert the following two rows. For DAY2, it would update those two rows with the NKey2 values (one for each Key), and so on.
    Key1 -- NKey11
    Key2 -- NKey12
    I am looking for the best possible approach. Please note that I cannot make any changes at the remote database.
    Right now, I am using the following statement, which is taking a huge amount of time for DAY2 onwards (mainly the update).
    MERGE INTO destination D
      USING (SELECT /*+ DRIVING_SITE(A) */ DAY, Key, NKey
                   FROM source@dblink A WHERE DAY = v_day) S
      ON (D.Key = S.Key)
    WHEN MATCHED THEN
       UPDATE SET D.NKey = S.NKey
    WHEN NOT MATCHED THEN
       INSERT (D.Key, D.NKey) VALUES (S.Key, S.NKey)
    LOG ERRORS INTO err$_destination REJECT LIMIT UNLIMITED;

    MERGE INTO destination D
      USING (SELECT /*+ DRIVING_SITE(A) */ DAY, Key, NKey
                   FROM source@dblink A WHERE DAY = v_day) S
      ON (D.Key = S.Key)
    WHEN MATCHED THEN
       UPDATE SET D.NKey = S.NKey
    WHEN NOT MATCHED THEN
       INSERT (D.Key, D.NKey) VALUES (S.Key, S.NKey)
    LOG ERRORS INTO err$_destination REJECT LIMIT UNLIMITED;
    The first remark I have to emphasize here is that the hint /*+ DRIVING_SITE(A) */ is silently ignored, because in the case of insert/update/delete/merge the driving site is always the site where the insert/update/delete is done.
    http://jonathanlewis.wordpress.com/2008/12/05/distributed-dml/#more-809
    "Right now, I am using the following one which is taking huge time for DAY2 and so on (mainly update)."
    The second remark is that you've realised that your MERGE is taking time, but you didn't trace it to see where that time is being spent. For that you can either use the 10046 trace event or, as a first step, get the execution plan of your MERGE statement.
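    A minimal sketch of capturing that 10046 trace for the session running the MERGE (the trace file identifier is illustrative; the file is written to user_dump_dest):
    ALTER SESSION SET tracefile_identifier = 'merge_day2';
    ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
    -- run the MERGE statement here
    ALTER SESSION SET EVENTS '10046 trace name context off';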
    LOG ERRORS INTO err$_destination REJECT LIMIT UNLIMITED;
    The third remark is related to the DML error logging: be aware that unique keys will prevent the DML error logging from working correctly.
    http://hourim.wordpress.com/?s=DML+error
    And finally, I advise you to look at the following blog article I wrote about enhancing an insert/select over a db-link:
    http://hourim.wordpress.com/?s=insert+select
    Mohamed Houri
    www.hourim.wordpress.com

  • How to load data to dimensions and fact table is it simialr to sql server

    How do you load data into dimension and fact tables in a data warehouse environment? Is it similar in SQL Server and Oracle?
    I have never migrated or loaded data into a data warehouse server.
    I have an interview and am really worried they may ask something related to the data warehouse side.
    Could you please, if you don't mind, provide me some steps and some dimension and fact table info, just an example,
    at least for my knowledge.
    Thank you very much for the helpful info.

    Some discussions in previous forums should help you:
    http://forums.sdn.sap.com/thread.jspa?threadID=2019448
    http://forums.sdn.sap.com/thread.jspa?threadID=1908902
    In the SAP tutorial, you can see a sample example of making fact tables.
    http://help.sap.com/businessobject/product_guides/boexir32SP1/en/xi321_ds_tutorial_en.pdf
    Arun

  • General Scenario- Adding columns into a table with more than 100 million rows

    I was asked/given a scenario: what issues do you encounter when you try to add new columns to a table with more than 200 million rows, and how do you overcome them?
    Thanks in advance.
    svk

    For such a large table, it is better to add the new column to the end of the table to avoid any performance impact, as RSingh suggested.
    Also avoid using a default on the newly created column, or SQL Server will have to fill 200 million fields with this default value. If you need one, add an empty (nullable) column and populate it using small batches (otherwise you lock up the whole table). Add the default after all the rows have a value for the new column.
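    A sketch of that batched approach in T-SQL (the table name, column, value, and batch size are all assumptions):
    ALTER TABLE big_table ADD new_col INT NULL;  -- nullable: no data movement, so this is fast
    WHILE 1 = 1
    BEGIN
        UPDATE TOP (10000) big_table
        SET    new_col = 0
        WHERE  new_col IS NULL;       -- small batches keep locks and the transaction log manageable
        IF @@ROWCOUNT = 0 BREAK;
    END
    ALTER TABLE big_table ADD CONSTRAINT df_new_col DEFAULT (0) FOR new_col;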

  • How to load a XML file into a table

    Hi,
    I've been working with Oracle for many years, but for the first time I have been asked to load an XML file into a table.
    As an example, I found this on the web, but it doesn't work.
    Can someone tell me why? I hoped this example could help me.
    the file acct.xml is this:
    <?xml version="1.0"?>
    <ACCOUNT_HEADER_ACK>
    <HEADER>
    <STATUS_CODE>100</STATUS_CODE>
    <STATUS_REMARKS>check</STATUS_REMARKS>
    </HEADER>
    <DETAILS>
    <DETAIL>
    <SEGMENT_NUMBER>2</SEGMENT_NUMBER>
    <REMARKS>rp polytechnic</REMARKS>
    </DETAIL>
    <DETAIL>
    <SEGMENT_NUMBER>3</SEGMENT_NUMBER>
    <REMARKS>rp polytechnic administration</REMARKS>
    </DETAIL>
    <DETAIL>
    <SEGMENT_NUMBER>4</SEGMENT_NUMBER>
    <REMARKS>rp polytechnic finance</REMARKS>
    </DETAIL>
    <DETAIL>
    <SEGMENT_NUMBER>5</SEGMENT_NUMBER>
    <REMARKS>rp polytechnic logistics</REMARKS>
    </DETAIL>
    </DETAILS>
    <HEADER>
    <STATUS_CODE>500</STATUS_CODE>
    <STATUS_REMARKS>process exception</STATUS_REMARKS>
    </HEADER>
    <DETAILS>
    <DETAIL>
    <SEGMENT_NUMBER>20</SEGMENT_NUMBER>
    <REMARKS> base polytechnic</REMARKS>
    </DETAIL>
    <DETAIL>
    <SEGMENT_NUMBER>30</SEGMENT_NUMBER>
    </DETAIL>
    <DETAIL>
    <SEGMENT_NUMBER>40</SEGMENT_NUMBER>
    <REMARKS> base polytechnic finance</REMARKS>
    </DETAIL>
    <DETAIL>
    <SEGMENT_NUMBER>50</SEGMENT_NUMBER>
    <REMARKS> base polytechnic logistics</REMARKS>
    </DETAIL>
    </DETAILS>
    </ACCOUNT_HEADER_ACK>
    For the two tags HEADER and DETAILS I have the table:
    create table xxrp_acct_details(
      status_code    number,
      status_remarks varchar2(100),
      segment_number number,
      remarks        varchar2(100)
    );
    Beforehand, I created a directory:
    create directory test_dir as 'c:\esterno'; -- where I have my acct.xml
    And now, can you give me a script for loading the data by using XMLTABLE?
    I've tried this, but it doesn't work:
    DECLARE
    acct_doc xmltype := xmltype( bfilename('TEST_DIR','acct.xml'), nls_charset_id('AL32UTF8') );
    BEGIN
    insert into xxrp_acct_details (status_code, status_remarks, segment_number, remarks)
    select x1.status_code,
            x1.status_remarks,
            x2.segment_number,
            x2.remarks
    from xmltable(
      '/ACCOUNT_HEADER_ACK/HEADER'
      passing acct_doc
      columns header_no      for ordinality,
              status_code    number        path 'STATUS_CODE',
              status_remarks varchar2(100) path 'STATUS_REMARKS'
    ) x1,
    xmltable(
      '$d/ACCOUNT_HEADER_ACK/DETAILS[$hn]/DETAIL'
      passing acct_doc as "d",
              x1.header_no as "hn"
      columns segment_number number        path 'SEGMENT_NUMBER',
              remarks        varchar2(100) path 'REMARKS'
    ) x2;
    END;
    This should allow me to get something like this:
    select * from xxrp_acct_details;
    STATUS_CODE  STATUS_REMARKS     SEGMENT_NUMBER  REMARKS
    100 check 2 rp polytechnic
    100 check 3 rp polytechnic administration
    100 check 4 rp polytechnic finance
    100 check 5 rp polytechnic logistics
    500 process exception 20 base polytechnic
    500 process exception 30
    500 process exception 40 base polytechnic finance
    500 process exception 50 base polytechnic logistics
    but I get:
    Error report:
    ORA-06550: line 19, column 11:
    PL/SQL: ORA-00932: inconsistent datatypes: expected - got NUMBER
    ORA-06550: line 4, column 2:
    PL/SQL: SQL Statement ignored
    06550. 00000 -  "line %s, column %s:\n%s"
    *Cause:    Usually a PL/SQL compilation error.
    and if I try to change the script without using the column HEADER_NO to keep track of the header rank inside the document:
    DECLARE
    acct_doc xmltype := xmltype( bfilename('TEST_DIR','acct.xml'), nls_charset_id('AL32UTF8') );
    BEGIN
    insert into xxrp_acct_details (status_code, status_remarks, segment_number, remarks)
    select x1.status_code,
            x1.status_remarks,
            x2.segment_number,
            x2.remarks
    from xmltable(
      '/ACCOUNT_HEADER_ACK/HEADER'
      passing acct_doc
      columns status_code    number        path 'STATUS_CODE',
              status_remarks varchar2(100) path 'STATUS_REMARKS'
    ) x1,
    xmltable(
      '/ACCOUNT_HEADER_ACK/DETAILS'
      passing acct_doc
      columns segment_number number        path 'SEGMENT_NUMBER',
              remarks        varchar2(100) path 'REMARKS'
    ) x2;
    END;
    I get this message:
    Error report:
    ORA-19114: error during parsing the XQuery expression: 
    ORA-06550: line 1, column 13:
    PLS-00201: identifier 'SYS.DBMS_XQUERYINT' must be declared
    ORA-06550: line 1, column 7:
    PL/SQL: Statement ignored
    ORA-06512: at line 4
    19114. 00000 -  "error during parsing the XQuery expression: %s"
    *Cause:    An error occurred during the parsing of the XQuery expression.
    *Action:   Check the detailed error message for the possible causes.
    My Oracle version is 10gR2 Express Edition.
    I need a script for loading XML files into a table as soon as possible. Please give me a simple example that is easy to understand and that works on 10gR2 Express Edition.
    Thanks in advance!

    The reason your first SQL statement
    select x1.status_code,
            x1.status_remarks,
            x2.segment_number,
            x2.remarks
    from xmltable(
      '/ACCOUNT_HEADER_ACK/HEADER'
      passing acct_doc
      columns header_no      for ordinality,
              status_code    number        path 'STATUS_CODE',
              status_remarks varchar2(100) path 'STATUS_REMARKS'
    ) x1,
    xmltable(
      '$d/ACCOUNT_HEADER_ACK/DETAILS[$hn]/DETAIL'
      passing acct_doc as "d",
              x1.header_no as "hn"
      columns segment_number number        path 'SEGMENT_NUMBER',
              remarks        varchar2(100) path 'REMARKS'
    ) x2
    returns the error you noticed
    PL/SQL: ORA-00932: inconsistent datatypes: expected - got NUMBER
    is because Oracle is expecting XML to be passed in. At the moment I forget whether it requires a certain format or not, but it is simply expecting the value to be wrapped in simple XML.
    Your query actually runs as-is on 11.1, as Oracle changed how that functionality worked when 11.1 was released. Your query runs slowly, but it does run.
    As you are dealing with groups, is there any way the input XML can be modified to be like
    <ACCOUNT_HEADER_ACK>
    <ACCOUNT_GROUP>
    <HEADER>....</HEADER>
    <DETAILS>....</DETAILS>
    </ACCOUNT_GROUP>
      <ACCOUNT_GROUP>
      <HEADER>....</HEADER>
      <DETAILS>....</DETAILS>
      </ACCOUNT_GROUP>
    </ACCOUNT_HEADER_ACK>
    so that it is easier to associate a HEADER/DETAILS combination?  If so, it would make parsing the XML much easier.
    Assuming the answer is no, here is one hack to accomplish your goal:
    select x1.status_code,
            x1.status_remarks,
            x3.segment_number,
            x3.remarks
    from xmltable(
      '/ACCOUNT_HEADER_ACK/HEADER'
      passing acct_doc
      columns header_no      for ordinality,
              status_code    number        path 'STATUS_CODE',
              status_remarks varchar2(100) path 'STATUS_REMARKS'
    ) x1,
    xmltable(
      '$d/ACCOUNT_HEADER_ACK/DETAILS'
      passing acct_doc as "d",
      columns detail_no      for ordinality,
              detail_xml     xmltype       path 'DETAIL'
    ) x2,
    xmltable(
      'DETAIL'
      passing x2.detail_xml
      columns segment_number number        path 'SEGMENT_NUMBER',
              remarks        varchar2(100) path 'REMARKS') x3
    WHERE x1.header_no = x2.detail_no;
    This follows the approach you started with.  Table x1 creates a row for each HEADER node and table x2 creates a row for each DETAILS node.  It assumes there is always a one and only one association between the two.  I use table x3, which is joined to x2, to parse the many DETAIL nodes.  The WHERE clause then joins each header row to the corresponding details row and produces the eight rows you are seeking.
    There is another approach that I know of, and that would be using XQuery within the XMLTable.  It should require using only one XMLTable but I would have to spend some time coming up with that solution and I can't recall whether restrictions exist in 10gR2 Express Edition compared to what can run in 10.2 Enterprise Edition for XQuery.

  • How to find number of rows in a table

    I have one table which contains millions of records. How can I know the number of rows in that table without using COUNT(*)?
    I tried USER_TABLES, looking at the column NUM_ROWS, but it is not showing the number of rows. Please send me a solution for this.
    regards,
    singh

    "Ok, that only was to show simply that the max option might not be an option to reduce execution time."
    Yes, but I/O variances have a tendency to really skew the observed elapsed execution time - making execution time alone a poor choice for determining which SQL will perform better than another.
    Both MAX(ROWNUM) and COUNT(*) result in the same amount of I/O - as both use the exact same execution plan, I/O-wise. In this example, a full table scan (FTS).
    SQL> create table testtab nologging as select * from all_objects where rownum < 10001;
    Table created.
    -- warmed up the buffer cache with a couple of SELECTs against TESTAB in order
    -- to discard PIOs from the results
    SQL> set autotrace on
    SQL> select count(*) from testtab;
    COUNT(*)
    10000
    Execution Plan
    Plan hash value: 2656308840
    | Id | Operation | Name | Rows | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 35 (9)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | | |
    | 2 | TABLE ACCESS FULL| TESTTAB | 9262 | 35 (9)| 00:00:01 |
    Note
    - dynamic sampling used for this statement
    Statistics
    0 recursive calls
    0 db block gets
    131 consistent gets
    0 physical reads
    0 redo size
    223 bytes sent via SQL*Net to client
    238 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    SQL> select max(rownum) from testtab;
    MAX(ROWNUM)
    10000
    Execution Plan
    Plan hash value: 2387991791
    | Id | Operation | Name | Rows | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 35 (9)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | | |
    | 2 | COUNT | | | | |
    | 3 | TABLE ACCESS FULL| TESTTAB | 9262 | 35 (9)| 00:00:01 |
    Note
    - dynamic sampling used for this statement
    Statistics
    0 recursive calls
    0 db block gets
    131 consistent gets
    0 physical reads
    0 redo size
    225 bytes sent via SQL*Net to client
    238 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    So seeing that we have the exact same baseline for both queries, and that PIO does not influence the results, we time a 1000 executions of both.
    SQL> declare
    2 cnt number;
    3 begin
    4 for i in 1..1000
    5 loop
    6 select count(*) into cnt from testtab;
    7 end loop;
    8 end;
    9 /
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:03.19
    SQL>
    SQL> declare
    2 cnt number;
    3 begin
    4 for i in 1..1000
    5 loop
    6 select max(rownum) into cnt from testtab;
    7 end loop;
    8 end;
    9 /
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:15.87
    SQL>
    This shows that what makes the MAX() more expensive is just that - determining the MAX(). For each row, Oracle has to call its internal MAX() function with two values - the current max result and the new value. This function then returns the new max value. This overhead per row adds up to a significant overhead in execution time - making the MAX() approach 5x slower than the COUNT() approach.
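    As for the original question: USER_TABLES.NUM_ROWS is only populated (as an approximation) once optimizer statistics have been gathered on the table. A sketch, reusing the TESTTAB table from above:
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'TESTTAB');
    SELECT num_rows FROM user_tables WHERE table_name = 'TESTTAB';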
