Partition merge for huge data

The application has a very large table with more than 5 billion rows. The plan is to keep only a few years of data in that table, which is expected to reduce the data volume by 20 to 25%.
Because the table is so large, we plan to gather the required data into partitions, merge those partitions into a single partition, and then exchange that partition into a table that is not partitioned.
My question is: how efficient is this partition merge with a huge data volume?
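For reference, a minimal sketch of the merge-then-exchange approach described above. All object names are hypothetical; it assumes a range-partitioned table BIG_TABLE with adjacent partitions P2019 and P2020, and a non-partitioned staging table BIG_TABLE_STAGE with the same column and index layout:
-- Merging partitions physically rewrites the rows of both source partitions,
-- so its cost scales with the volume of data being merged.
ALTER TABLE big_table
  MERGE PARTITIONS p2019, p2020 INTO PARTITION p2019_2020
  UPDATE INDEXES;
-- The exchange itself is only a data-dictionary swap, so it is fast
-- regardless of how many rows the partition holds.
ALTER TABLE big_table
  EXCHANGE PARTITION p2019_2020 WITH TABLE big_table_stage
  INCLUDING INDEXES WITHOUT VALIDATION;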

Similar Messages

  • Is Multimaster Timesten replication a good option for huge data volumes?

    Hi,
    There are 3 TimesTen nodes in our production setup. There will be around 5 million rows in each node initially, which will gradually increase to about 10 million. Once our application moves to production, there will be around 50-70 transactions per second on each node, which need to be replicated to the other nodes.
    Initially we thought of going with Active-Standby-Subscriber replication. However, in that case, if the active and the standby node both go down, it becomes a site failure. So is an Active-Active (multi-master replication) configuration a good option? Will data collisions happen when replication runs in both directions?
    Thanks in advance.
    Nithya

    Multi-master replication is rarely a good idea. You will get data collisions unless you rigorously partition the workload. Conflict detection and resolution is not adequate to guarantee consistency over time. Recovery back to a consistent state after a failure is complex and error prone. I'd strongly advise against a multi-master setup, especially for a high volume system.
    You seem to be concerned that 2 out of the 3 systems may fail resulting in a site outage. The likelihood of that is small if you have set things up with separate power etc. With the A/S pair based approach you would still have query capability if the two master systems failed. The chances of all 3 systems failing is not that much less than of just 2 failing in reality I would say (depending on the reason for the failure).
    Chris

  • Internal table for huge data

    HI,
    I have to populate an internal table with a huge number of records.
    What type of internal table is suitable for this?
    Regards,
    Ram

    Hi ram,
    As long as you do not have any complex read requirements or nested loops, it should be fine to use a standard internal table.
    Regards,
    Ravi

  • Power BI for huge data

    Hi,
    We are using Power BI for natural language query and reporting. We have to build it for around 200 million records, which is far larger than the 250 MB data size limit specified for Power BI. Is there any way to work with this volume of data in Power BI? For reference, when we tried it with 54 million records the data in the Excel file was 1.8 GB, and it did not work. Could anyone please tell us how Power BI can work with this much data?
    Thanks and regards,
    Arvind

    Hi Arvind,
    Currently Power BI does have a limitation of 250 MB, so you'll need to think about ways to shrink the data stored in the workbook. One practical way is normalization. For example, last week I got a huge 26 GB XML file from Stack Overflow, with a schema like:
    date, post content, list of tags, other columns that I am not interested in
    I wrote simple code to break it down into three csv files:
    date id, date
    date id, tag id
    tag id, tag name
    Basically I got rid of the data I am not interested in (e.g. post content, comments, etc.) and normalized the rest, replacing duplicate strings with IDs. Hope that helps :-)
    Samuel
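    The same normalization idea can also be sketched in SQL, assuming the raw rows have already been landed in a staging table; STG_POSTS and its columns below are hypothetical names, not from the thread:
    -- Small dimension of distinct tags: repeated tag strings become compact IDs.
    CREATE TABLE dim_tag AS
      SELECT ROW_NUMBER() OVER (ORDER BY tag_name) AS tag_id, tag_name
        FROM (SELECT DISTINCT tag_name FROM stg_posts);
    -- Small dimension of distinct dates.
    CREATE TABLE dim_date AS
      SELECT ROW_NUMBER() OVER (ORDER BY post_date) AS date_id, post_date
        FROM (SELECT DISTINCT post_date FROM stg_posts);
    -- Narrow fact table holding only the two IDs; the wide text columns are dropped.
    CREATE TABLE fact_post_tag AS
      SELECT d.date_id, t.tag_id
        FROM stg_posts s
        JOIN dim_date d ON d.post_date = s.post_date
        JOIN dim_tag  t ON t.tag_name  = s.tag_name;
    Loading only the narrow fact table plus the two small dimensions into the workbook keeps the model far smaller than the original denormalized extract.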

  • Merge for bulk data

    Hi all,
    I want to insert bulk data from an external table into a database table. The program compiles successfully, but after executing it no data is inserted into the database. Please help me.
    External table: bck_hotel
    HOTEL_CODE     NUMBER
    HOTEL_NAME     VARCHAR2(100)
    HOTEL_TYPE     VARCHAR2(100)
    HOTEL_ADDRESS VARCHAR2(100)
    HOTEL_NUMBER     NUMBER
    HOTEL_FACILITY     VARCHAR2(100)
    HOTEL1     VARCHAR2(100)
    LATITUDE     NUMBER
    LONGITUDE     NUMBER
    Database table: hotel
    HOTEL_CODE     NUMBER
    HOTEL_NAME     VARCHAR2(100)
    HOTEL_TYPE     VARCHAR2(100)
    HOTEL_ADDRESS     VARCHAR2(100)
    HOTEL_NUMBER     NUMBER
    HOTEL_FACILITY      VARCHAR2(100)
    Code:
    DECLARE
      -- Collection declarations (assumed here; the original post omits them)
      TYPE t_num_tab IS TABLE OF NUMBER;
      TYPE t_txt_tab IS TABLE OF VARCHAR2(100);
      v_hotel_code     t_num_tab;
      v_hotel_name     t_txt_tab;
      v_hotel_type     t_txt_tab;
      v_hotel_address  t_txt_tab;
      v_hotel_number   t_num_tab;
      v_hotel_facility t_txt_tab;
      CURSOR cur_hotels IS
        SELECT hotel_code, hotel_name, hotel_type, hotel_address, hotel_number, hotel_facility
          FROM bck_hotels;
    BEGIN
      OPEN cur_hotels;
      LOOP
        FETCH cur_hotels BULK COLLECT
          INTO v_hotel_code, v_hotel_name, v_hotel_type, v_hotel_address, v_hotel_number, v_hotel_facility
          LIMIT 1000;
        EXIT WHEN v_hotel_code.COUNT = 0;  -- missing in the original, so the loop never ends
        FORALL i IN 1 .. v_hotel_code.COUNT
          MERGE INTO hotels tgt
          USING (SELECT v_hotel_code(i) AS hotel_code, v_hotel_name(i) AS hotel_name,
                        v_hotel_type(i) AS hotel_type, v_hotel_address(i) AS hotel_address,
                        v_hotel_number(i) AS hotel_number, v_hotel_facility(i) AS hotel_facility
                   FROM dual) src
          ON (src.hotel_code = tgt.hotel_code)
          WHEN MATCHED THEN UPDATE
            SET tgt.hotel_name = src.hotel_name, tgt.hotel_type = src.hotel_type,
                tgt.hotel_address = src.hotel_address, tgt.hotel_number = src.hotel_number,
                tgt.hotel_facility = src.hotel_facility
          WHEN NOT MATCHED THEN
            INSERT (tgt.hotel_code, tgt.hotel_name, tgt.hotel_type,
                    tgt.hotel_address, tgt.hotel_number, tgt.hotel_facility)
            VALUES (src.hotel_code, src.hotel_name, src.hotel_type,
                    src.hotel_address, src.hotel_number, src.hotel_facility);
      END LOOP;
      CLOSE cur_hotels;
      COMMIT;  -- missing in the original, so the merged rows are never made permanent
    END;

    Hello,
    I wonder why you are using BULK COLLECT when the same can be accomplished by a simple MERGE statement.
    The following can help:
    MERGE INTO hotels tgt
    USING (SELECT hotel_code,
                  hotel_name,
                  hotel_type,
                  hotel_address,
                  hotel_number,
                  hotel_facility
             FROM bck_hotel) src
    ON (src.hotel_code = tgt.hotel_code)
    WHEN MATCHED THEN
      UPDATE
         SET tgt.hotel_name     = src.hotel_name,
             tgt.hotel_type     = src.hotel_type,
             tgt.hotel_address  = src.hotel_address,
             tgt.hotel_number   = src.hotel_number,
             tgt.hotel_facility = src.hotel_facility
    WHEN NOT MATCHED THEN
      INSERT (tgt.hotel_code,
              tgt.hotel_name,
              tgt.hotel_type,
              tgt.hotel_address,
              tgt.hotel_number,
              tgt.hotel_facility)
      VALUES (src.hotel_code,
              src.hotel_name,
              src.hotel_type,
              src.hotel_address,
              src.hotel_number,
              src.hotel_facility);
    Is it not true?
    Regards,
    P.
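    One practical addition when running such a MERGE straight from a large external table (an aside, not part of the original reply): DML error logging lets bad rows be skipped and captured instead of failing the whole statement. A sketch, reusing the tables above:
    -- One-time setup; by default this creates an error log table named ERR$_HOTELS.
    EXEC DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'HOTELS');
    -- Same MERGE as above, with the error-logging clause appended:
    MERGE INTO hotels tgt
    USING (SELECT hotel_code, hotel_name, hotel_type, hotel_address,
                  hotel_number, hotel_facility
             FROM bck_hotel) src
    ON (src.hotel_code = tgt.hotel_code)
    WHEN MATCHED THEN UPDATE
      SET tgt.hotel_name = src.hotel_name, tgt.hotel_type = src.hotel_type,
          tgt.hotel_address = src.hotel_address, tgt.hotel_number = src.hotel_number,
          tgt.hotel_facility = src.hotel_facility
    WHEN NOT MATCHED THEN
      INSERT (tgt.hotel_code, tgt.hotel_name, tgt.hotel_type,
              tgt.hotel_address, tgt.hotel_number, tgt.hotel_facility)
      VALUES (src.hotel_code, src.hotel_name, src.hotel_type,
              src.hotel_address, src.hotel_number, src.hotel_facility)
    LOG ERRORS INTO err$_hotels ('hotel load') REJECT LIMIT UNLIMITED;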

  • Best Practice for loading data into BW: CSV vs XML?

    Hi Everyone,
    I would like to get some of your thoughts on what file format would be best or most efficient for pushing data into BW: CSV or XML?
    Also, what are the advantages/disadvantages?
    Appreciate your thoughts.

    XML is used only for small data volumes - it is more that it is easier to do it with XML than to build an application for the same purpose, provided the usage is low.
    Flat files are used for HUGE (non-SAP) data loads, and for those the choice of data format would definitely be flat files.
    Also, XML files are transformed into a flat-file-like format, with each tag referring to a field, so the size of an XML file grows large depending on the number of fields.
    Arun

  • Range partition the table ( containing huge data ) by month

    There is a table with huge data, around 9 GB. This needs to be range partitioned by month
    to improve performance.
    Can anyone suggest the best option to implement partitioning for this?

    I have a lot of tables like this. My main tip is to never assign 'MAXVALUE' for your last partition, because it will give you major headaches when you need to add a partition for a future month.
    Here is an example of one of my tables. Lots of columns are omitted, but this is enough to illustrate the partitioning.
    CREATE TABLE "TSER"."TERM_DEPOSITS"
    ( "IDENTITY_CODE" NUMBER(10), "ID_NUMBER" NUMBER(25),
    "GL_ACCOUNT_ID" NUMBER(14) NOT NULL ,
    "ORG_UNIT_ID" NUMBER(14) NOT NULL ,
    "COMMON_COA_ID" NUMBER(14) NOT NULL ,
    "AS_OF_DATE" DATE,
    "ISO_CURRENCY_CD" VARCHAR2(15) DEFAULT 'USD' NOT NULL ,
    "IDENTITY_CODE_CHG" NUMBER(10)
    CONSTRAINT "TERM_DEPOSITS"
    PRIMARY KEY ("IDENTITY_CODE", "ID_NUMBER", "AS_OF_DATE") VALIDATE )
    TABLESPACE "DATA_TS" PCTFREE 10 INITRANS 1 MAXTRANS 255
    STORAGE ( INITIAL 0K BUFFER_POOL DEFAULT)
    LOGGING PARTITION BY RANGE ("AS_OF_DATE")
    (PARTITION "P2008_06" VALUES LESS THAN (TO_DATE(' 2008-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    TABLESPACE "DATA_TS_PART1" PCTFREE 10 INITRANS 1
    MAXTRANS 255 STORAGE ( INITIAL 1024K BUFFER_POOL DEFAULT)
    LOGGING NOCOMPRESS , PARTITION "P2008_07" VALUES LESS THAN (TO_DATE(' 2008-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS',
    'NLS_CALENDAR=GREGORIAN'))
    TABLESPACE "DATA_TS_PART2" PCTFREE 10 INITRANS 1 MAXTRANS 255
    STORAGE ( INITIAL 1024K BUFFER_POOL DEFAULT) LOGGING NOCOMPRESS )
    PARALLEL 3
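    To illustrate the point about avoiding MAXVALUE: with the layout above, adding a future month is a single, cheap statement (the partition and tablespace names below just follow the same pattern):
    ALTER TABLE "TSER"."TERM_DEPOSITS"
      ADD PARTITION "P2008_08"
      VALUES LESS THAN (TO_DATE(' 2008-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
      TABLESPACE "DATA_TS_PART1";
    If the last partition had been created with MAXVALUE instead, adding a new month would require splitting that top partition, which is a much heavier operation.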

  • When I tried to Mail Merge for data, it is not exporting any data.

    HI,
    EBS-12.1.3
    DB-11gR1
    OS - RHEL 5.6
    [With my login user and the SysAdmin login user] When I enter the "People -> Enter and Maintain" form and then press the "Export" button, there is an error alert:
    Function is not available to this responsibility. Change Responsibilities or Connect to the System Administrator
    I added the function "HR ADI Seeded Integrator Form Functions" to the "AE HRMS Manager" responsibility. That works, and the Export Data icon is now enabled.
    Problem:
    But the problem is that when I try to Mail Merge the data, it does not export any data.
    ====================================================================
    Steps
    1. Move to "People -> Recruitment" and then "Request Recruitment Letter".
    2. Enter the new request as:
    Letter Name "App. Letter Contract Site",
    Automatic or Manual = Manual.
    Select the Name from the LOVs for the Request for Detail block.
    3. Press the "Export Data" icon.
    4. The Integrator page appears with my custom integrator name "Appointment Letter - Contact Site".
    5. Select "Word 2003" from the Viewer list; Reporting is checked.
    6. Review the following entries:
    Integrator Appointment Letter - Contact Site
    Viewer Word 2003
    Reporting Yes
    Layout App. Letter Contract Site
    Content XXHR_MBE_APP_LET_CONT_SITE_V
    Session Date 2011/08/02
    Mapping XXHR_MBE_APP_LET_CONT_SITE_V Mapping
    7. Press "Create Document" Button.
    8. It will open the Excel 2003 and then Word 2003. But no data down download from the Form.
    9. It open the Mail Merge Letter but no Data is Display.
    ===========================================================
    Note:
    a. I am following the Steps from the Link:"http://apps2fusion.com/at/38-ss/351-generate-recruitment-letters-web-adi".
    b. From the "Desktop Integrator Manager", "Oracle Web ADI", "HRMS Web ADI", it works fine and downloads the data.
    ===========================================================
    Thanks
    Vishwa

    Please try the solution in ("Function not available to this responsibility" Error While Clicking On Forms Personalisation [ID 1263970.1]) and see if it helps.
    Thanks,
    Hussein

  • I used a partitioned HDD for time machine, using a partition already containing other data files. I am now no longer able to view that partition in Finder. Disk Utility shows it in grey and "not mounted". Any suggestions of how to access the files?

    I used a partitioned HDD for time machine, using a partition already containing other data files. I am now no longer able to view that partition in Finder. Disk Utility shows it in grey and "not mounted". Any suggestions of how to access the files? Does using time machine mean that that partition is no longer able to be used as it used to be?
    HDD is a Toshiba 1TB, partitioned into two 500GB partitions.
    OS X version 10.9.2

    Yes, sharing a TM disk is a bad idea, and disks are cheap enough so that you don't need to.
    Now, have you tried to repair the disk yet?

  • Method for Downloading Huge Data from SAP system

    Hi All,
    We need to download huge data from one SAP system and then migrate it into another SAP system.
    Is there any better method than downloading the data through SE11/SE16? Please advise.
    Thanks
    pabi

    I have already done several system mergers, and we usually do not have the need to download data.
    SAP can talk to SAP, with RFC and by using ALE/IDoc communication,
    so it is possible to send e.g. material master data with BD10 per IDoc from system A to system B.
    You can define that the IDocs are collected, which means they are saved to a file on the application server of the receiving system.
    You can then use LSMW and read this file, with several hundred thousand IDocs, as the source.
    Field mapping is easy if you use the IDoc method for import, because then you have a 1:1 field mapping.
    So you only need to focus on the few fields where the values change from the old to the new system.

  • Using a partitioned cache with off-heap storage for backup data

    Hi,
    Is it possible to define a partitioned cache (with primary data on the heap) with off-heap storage for the backup data?
    I think it could be worthwhile to do so, as backup data have a different access pattern.
    If so, what are the impacts of such off-heap storage for backup data?
    In particular, what are the impacts on performance?
    Thanks.
    Regards,
    Dominique

    Hi,
    It seems that using a scheme for the backup store is broken in the latest version of Coherence; I got an exception using your setup.
    2010-07-24 12:21:16.562/7.969 Oracle Coherence GE 3.6.0.0 <Error> (thread=DistributedCache, member=1): java.lang.NullPointerException
         at com.tangosol.net.DefaultConfigurableCacheFactory.findSchemeMapping(DefaultConfigurableCacheFactory.java:466)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage$BackingManager.isPartitioned(PartitionedCache.java:10)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.instantiateBackupMap(PartitionedCache.java:24)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.setCacheName(PartitionedCache.java:29)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ServiceConfig$ConfigListener.entryInserted(PartitionedCache.java:17)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:266)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:226)
         at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
         at com.tangosol.util.ObservableHashMap.dispatchEvent(ObservableHashMap.java:229)
         at com.tangosol.util.ObservableHashMap$Entry.onAdd(ObservableHashMap.java:270)
         at com.tangosol.util.SafeHashMap.put(SafeHashMap.java:244)
         at com.tangosol.coherence.component.util.ServiceConfig$Map.put(ServiceConfig.java:43)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$StorageIdRequest.onReceived(PartitionedCache.java:45)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.java:11)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.java:33)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.java:3)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.java:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.java:42)
         at java.lang.Thread.run(Thread.java:619)
    Tracing in the debugger has shown that the problem is in the PartitionedCache$Storage#setCacheName(String) method: it calls instantiateBackingMap(String) before setting the __m_CacheName field.
    It is broken in 3.6.0b17229
    PS: using an asynchronous wrapper around disk-based backup storage should reduce the performance impact.

  • Table Maintenance Generator for a Huge Data Table.

    Hi Experts,
    I have created a table maintenance generator for a custom table which has approximately 80,000,000 records.
    Now, when I run it, it ends in a short dump saying "STORAGE_PARAMETERS_WRONG_SET".
    The Basis team has reported that the tcode is running a sequential read on the table, and they say that is the reason for this dump.
    Is there any limitation that table maintenance can't be created for a table with huge data?
    Or should the program be modified to accommodate the read from the table in the case of this many entries?
    Please advice.
    regards,
    Kevin.

    Hi,
      I think this is because of a memory overflow.
      You can create two screens for this: in one screen (the overview screen), restrict the data selection.
      In the detail screen, display the data.
    With regards,
    Vamsi

  • Get table partition name dynamically for given date range

    Dear All,
    Could you please tell me how to get the partition name dynamically for a given date range?
    Thank you.

    SQL> select table_name,
                partition_name,
                to_date (
                   trim (
                      '''' from regexp_substr (
                                   extractvalue (
                                      dbms_xmlgen.getxmltype (
                                         'select high_value from all_tab_partitions where table_name='''
                                         || table_name
                                         || ''' and table_owner = '''
                                         || table_owner
                                         || ''' and partition_name = '''
                                         || partition_name
                                         || ''''),
                                      '//text()'),
                                   -- pattern added in this reconstruction: it picks out the quoted date literal in HIGH_VALUE
                                   '''[^'']*''')),
                   'syyyy-mm-dd hh24:mi:ss')
                   high_value_in_date_format
           from all_tab_partitions
          where table_name = 'SALES' and table_owner = 'SH';
    TABLE_NAME                     PARTITION_NAME                 HIGH_VALUE_IN_DATE_FORMAT
    SALES                          SALES_1995                     01-JAN-96               
    SALES                          SALES_1996                     01-JAN-97               
    SALES                          SALES_H1_1997                  01-JUL-97               
    SALES                          SALES_H2_1997                  01-JAN-98               
    SALES                          SALES_Q1_1998                  01-APR-98               
    SALES                          SALES_Q2_1998                  01-JUL-98               
    SALES                          SALES_Q3_1998                  01-OKT-98               
    SALES                          SALES_Q4_1998                  01-JAN-99               
    SALES                          SALES_Q1_1999                  01-APR-99               
    SALES                          SALES_Q2_1999                  01-JUL-99               
    SALES                          SALES_Q3_1999                  01-OKT-99               
    SALES                          SALES_Q4_1999                  01-JAN-00               
    SALES                          SALES_Q1_2000                  01-APR-00               
    SALES                          SALES_Q2_2000                  01-JUL-00               
    SALES                          SALES_Q3_2000                  01-OKT-00               
    SALES                          SALES_Q4_2000                  01-JAN-01               
    SALES                          SALES_Q1_2001                  01-APR-01               
    SALES                          SALES_Q2_2001                  01-JUL-01               
    SALES                          SALES_Q3_2001                  01-OKT-01               
    SALES                          SALES_Q4_2001                  01-JAN-02               
    SALES                          SALES_Q1_2002                  01-APR-02               
    SALES                          SALES_Q2_2002                  01-JUL-02               
    SALES                          SALES_Q3_2002                  01-OKT-02               
    SALES                          SALES_Q4_2002                  01-JAN-03               
    SALES                          SALES_Q1_2003                  01-APR-03               
    SALES                          SALES_Q2_2003                  01-JUL-03               
    SALES                          SALES_Q3_2003                  01-OKT-03               
    SALES                          SALES_Q4_2003                  01-JAN-04               
    28 rows selected.
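    To answer the original question (finding which partitions cover a given date range), the conversion above can be wrapped and filtered; a rough sketch against the same SH.SALES example:
    WITH part_bounds AS (
      SELECT table_name, partition_name,
             to_date(trim('''' from regexp_substr(
                       extractvalue(dbms_xmlgen.getxmltype(
                         'select high_value from all_tab_partitions where table_name='''
                         || table_name || ''' and table_owner = ''' || table_owner
                         || ''' and partition_name = ''' || partition_name || ''''),
                         '//text()'),
                       '''[^'']*''')),
                     'syyyy-mm-dd hh24:mi:ss') AS high_value_date
        FROM all_tab_partitions
       WHERE table_name = 'SALES' AND table_owner = 'SH'
    )
    SELECT partition_name
      FROM part_bounds
     WHERE high_value_date >  DATE '2000-01-01'
       AND high_value_date <= DATE '2001-01-01';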

  • Migration of huge data from norm tables to denorm tables for performance

    We are planning to move the NORM tables to DENORM tables in an Oracle DB for a client, for performance reasons. Any idea on the design/approach we can use to migrate this HUGE data (2 billion records / 5 TB of data) in a window of 5 to 10 hrs (or less than that)?
    We have developed SQL that is one single query which contains multiple instances of the same table and lots of joins. Will that be helpful?

    Jonathan Lewis wrote:
    Lother wrote:
    We are planning to move the NORM tables to DENORM tables in an Oracle DB for a client, for performance reasons. Any idea on the design/approach we can use to migrate this HUGE data (2 billion records / 5 TB of data) in a window of 5 to 10 hrs (or less than that)? We have developed SQL that is one single query which contains multiple instances of the same table and lots of joins. Will that be helpful?
    Unfortunately, the fact that you have to ask these questions of the forum tells us that you don't have the skill to determine whether or not the exercise is needed at all. How have you proved that denormalisation is necessary (or even sufficient) to solve the client's performance problems if you have no idea about how to develop a mechanism to restructure the data efficiently?
    Regards
    Jonathan Lewis
    Your brutal honesty is certainly correct. Another thing that is concerning to me is that it's possible that he's planning on denormalizing tables that are normalized for a reason. What good is a system that responds like a data warehouse but has questionable data integrity? I didn't even know where to begin with asking that question though.
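    Setting aside whether denormalisation is justified at all, the mechanical part of such a move is usually done with direct-path, parallel set operations rather than row-by-row code. A minimal sketch with purely hypothetical table names (not from this thread):
    -- Build the denormalised copy with a parallel, nologging, direct-path CTAS.
    CREATE TABLE orders_denorm PARALLEL 8 NOLOGGING AS
    SELECT /*+ PARALLEL(o 8) PARALLEL(c 8) */
           o.order_id, o.order_date, o.amount,
           c.customer_name, c.region
      FROM orders o
      JOIN customers c ON c.customer_id = o.customer_id;
    -- If the target already exists, a direct-path insert does the same job:
    -- ALTER SESSION ENABLE PARALLEL DML;
    -- INSERT /*+ APPEND PARALLEL(t 8) */ INTO orders_denorm t
    -- SELECT ... ;
    Indexes and constraints are normally created after the load, and whether a 5 to 10 hour window is achievable depends mainly on I/O bandwidth and the degree of parallelism the hardware can sustain.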

  • AppleScript for mail merge with Excel data?

    Is there any approach for an AppleScript that lets you use Excel data as source for some kind of mail merge operations? If not, would anyone here be interested if someone (I?) would take a closer look at some kind of a (AppleScript Studio) solution for this?

    I'm not sure I get the question.
    If it is "could you use AppleScript to create mail merge for Pages with Excel?", the answer is, Yes. Both Pages and Excel are scriptable.
    If the question is "has anyone done it yet?", I don't know, but it could be a fun exercise.
    If the question is "would there be a market, if someone (you?) wrote and released such a script?", I cannot tell, but personally I would probably write a hack that worked only for my own needs - if I had them. The overhead with usability, testing, different address formats, languages, different Excel sheet formats, and so on, makes me guess that the investment wouldn't pay off for a generic solution. Especially considering both Pages and MS Office already have their own mail merge functions.
