Want to perform incremental data replication

I have five vdisks in the DC SAN (EVA 8400) which contain a large amount of data. This data has to be replicated to the newly deployed DR SAN. We therefore took a tape backup from the DC SAN and want to restore it on the DR SAN before starting replication, so that replication only has to transfer the incremental data changed after the tape restore. But in HP CA (Continuous Access) I have not found any option to start an incremental replication; it always starts from block zero into a newly auto-created vdisk. Please advise on whether incremental data replication is possible.

Actually, I have got it to work...
I used the methods ACTIVATE_TAB_PAGE & TRANSFER_DATA_TO_SUBSCREEN in the BADI LE_SHP_TAB_CUST_ITEM.
It's working fine now.
Thanks for the response.

Similar Messages

  • Performance problem in data replication program written in java

    Dear all,
    I need your valuable ideas on improving the logic below for replicating data from DB2 to Oracle 9i. We have huge tables in DB2 that must be replicated to the Oracle side, and for one table this is taking a lot of time. The whole application is written in Java. The current logic is: set the soft-delete flag for a specific set of records in the Oracle table (the first update sets them all to 'Y'), then read all records from the DB2 table and set only those records back to 'N' in the Oracle table, so that records deleted in DB2 end up soft-deleted on the Oracle side. The DB2 query joins three tables and takes nearly 1 minute. We are updating the Oracle table in batches of 100,000 rows. Updating 610,275 records in batch mode takes 2.25 hours, which has to be reduced to under 1 hour; the first update (setting everything to 'Y') plus the second update driven by the DB2 query takes 2.85 hours in total.
    Do you have any clever idea to reduce this time? Kindly help us; we are in a critical situation now. Even a new approach to the replication logic is welcome.

    hi,
    Just remove the joins and use FOR ALL ENTRIES instead. After the first SELECT, if sy-subrc = 0, use DELETE ADJACENT DUPLICATES FROM itab COMPARING <key fields> (it improves performance) and then write the next SELECT statement.
    Some tips:
    Always check that the driving internal table is not empty when using FOR ALL ENTRIES.
    Avoid combining FOR ALL ENTRIES with JOINs.
    Try to avoid joins and use FOR ALL ENTRIES instead.
    Try to restrict joins to one level only, i.e. only two tables.
    Avoid using SELECT *.
    Avoid having multiple SELECTs from the same table in the same object.
    Try to minimize the number of variables to save memory.
    The sequence of fields in the WHERE clause should follow the primary/secondary index (if any).
    Avoid creating new indexes as far as possible.
    Avoid operators like <>, >, < and LIKE '%...' in WHERE clause conditions.
    Avoid SELECT/SELECT SINGLE statements inside loops.
    Try to use BINARY SEARCH in READ TABLE; ensure the table is sorted before using BINARY SEARCH.
    Avoid using aggregate functions (SUM, MAX, etc.) in SELECTs, as well as GROUP BY and HAVING.
    Avoid using ORDER BY in SELECTs.
    Avoid nested SELECTs.
    Avoid nested loops over internal tables.
    Try to use field symbols.
    Try to avoid INTO CORRESPONDING FIELDS OF.
    Avoid SELECT DISTINCT; use DELETE ADJACENT DUPLICATES instead.
    Go through the following Document
    Check the following Links
    Re: performance tuning
    Re: Performance tuning of program
    http://www.sapgenie.com/abap/performance.htm
    http://www.thespot4sap.com/Articles/SAPABAPPerformanceTunin
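    Coming back to the soft-delete logic described in the question: a single set-based UPDATE driven by a staging table of the DB2 keys is usually much faster than updating the Oracle table in 100,000-row batches. A rough sketch only, with hypothetical names (stg_db2_keys, target_tab, delete_flag):
    -- First pass: flag every row as soft-deleted
    UPDATE target_tab t
       SET t.delete_flag = 'Y';
    -- Second pass: un-flag the rows that still exist in DB2
    -- (stg_db2_keys holds the keys extracted from the 3-table DB2 join)
    UPDATE target_tab t
       SET t.delete_flag = 'N'
     WHERE EXISTS (SELECT 1
                     FROM stg_db2_keys s
                    WHERE s.record_id = t.record_id);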

  • I want to run a many-to-one replication scenario

    Hi,
    I want to run a many-to-one replication scenario. For that I have created two tables on the source DB (and inserted values into them) and one table on the target DB, in which I want the consolidated data from those two tables. Below are the table details.
    Source DB,
    CREATE TABLE GGS.CLIENT_INFO
    ( CLIENT_ID varchar2(10) not null,
    CLIENT_NAME varchar2(50) not null,
    CLIENT_ADD varchar2(50),
    CONSTRAINT CLIENTID_PK PRIMARY KEY (CLIENT_ID));
    CREATE TABLE GGS.ACCOUNT_INFO
    ( ACCOUNT_NO varchar2(15) not null,
    BANK_NAME varchar2(50) not null,
    CLIENT_ID varchar2(10) not null,
    CONSTRAINT ACCTNO_PK PRIMARY KEY (ACCOUNT_NO));
    alter table GGS.ACCOUNT_INFO add CONSTRAINT FK_CLIENTINFO FOREIGN KEY (CLIENT_ID) REFERENCES GGS.CLIENT_INFO(CLIENT_ID);
    Target DB,
    CREATE TABLE GGS1.CLIENT_ACCOUNT_INFO
    ( CLIENT_ID varchar2(10) not null,
    ACCOUNT_NO varchar2(15) not null,
    CLIENT_NAME varchar2(50) not null,
    CLIENT_ADD varchar2(50),
    BANK_NAME varchar2(50) not null,
    CONSTRAINT CLIENT_ACCOUNT_PK PRIMARY KEY (CLIENT_ID, ACCOUNT_NO));
    When I start the Replicat process, it gives the error below:
    *"Oracle GoldenGate Delivery for Oracle, CLACTDEL.prm: OCI Error ORA-01400: cannot insert NULL into ("GGS1"."CLIENT_ACCOUNT_INFO"."CLIENT_NAME") (status = 1400), SQL <INSERT INTO "GGS1"."CLIENT_ACCOUNT_INFO" ("CLIENT_ID","ACCOUNT_NO") VALUES (:a0,:a1)>."*
    Note: I am inserting data from the two source tables into one table on the target side using the OGG capture/replicate process.
    Please help me resolve the above error.
    Regards,
    Shital

    For one, do not use the GoldenGate database user as your source and target schema owner. Why? What happens in a bidirectional setup? To prevent ping-ponging of data, operations performed by the Replicat user should be ignored. That's what keeps an update applied on the target from being re-applied on the original source, then captured and sent back to the target again, and so on.
    Without knowing your setup, what did you do for ADD TRANDATA and supplemental logging at the database level (only needed on the source)? What did you do for the initial load and synchronization? What are your parameter files?
    The error shown so far - cannot insert NULL - applies anywhere in Oracle whenever you try to insert a record with a NULL value into a column that has a NOT NULL constraint. You can see that for yourself by opening a SQL*Plus session and trying the insert. You are inserting two column values, while your own table definition shows you need at least four values (to account for all of the NOT NULL constraints).
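    You can reproduce the failure in a SQL*Plus session against the target table defined above; the values below are made up purely for illustration:
    -- Fails with ORA-01400: only 2 of the 4 NOT NULL columns are supplied
    INSERT INTO GGS1.CLIENT_ACCOUNT_INFO (CLIENT_ID, ACCOUNT_NO)
    VALUES ('C001', 'A001');
    -- Succeeds: every NOT NULL column gets a value
    INSERT INTO GGS1.CLIENT_ACCOUNT_INFO
      (CLIENT_ID, ACCOUNT_NO, CLIENT_NAME, BANK_NAME)
    VALUES ('C001', 'A001', 'Test Client', 'Test Bank');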

  • Data Replication issue for R/3 on i5

    QUESTION:
    1) Does the data in the following two files need to be replicated when production operation runs on the stand-by system?
    DBSTATIDB4 Index Sizes in the Database (statistical data)
    DBSTATTDB4 Table Sizes in the Database (statistical data)
    2) If we don't need the data in those two files while production operation is underway on the stand-by system, please let us know of any considerations you may have.
    BACKGROUND INFO:
    Our customer has two R/3 systems on i5, configured as a production and a stand-by system, and replicates the R/3 production data to the stand-by system using a data replication tool called MIMIX. Sometimes we encounter a problem where the data of the following files gets out of synchronization.
    Files that cannot be replicated:
    DBSTATIDB4     F@Index Sizes in the Database (statistical data)
    DBSTATTDB4     F@Table Sizes in the Database (statistical data)
    The reason is the new DB function called fast-delete, introduced with V5R3. The problem happens because this function executes ENDJRN/STRJRN, which disturbs the MIMIX replication operation.
    To avoid this problem, we'd like to make sure that it is acceptable to exclude these files from synchronization, and we need more information about these files.

    Hello Yuhko,
    first of all, thank you very much for being the first user/customer to use this new and great forum!
    We will move lots of further customers into this forum soon.
    Fortunately, yes, you can ignore these 2 tables if you like - they are not really relevant to the health of your SAP system. But are you really on the latest PTF levels of V5R3? I know this error and it should have been fixed in the meantime - perhaps you need a newer MIMIX version as well; you should at least check.
    Then you could configure this "crazy" job to run not twice a day but only once a week - that even makes your system faster ... This can be done in ST03, in the collector configuration. It is RSORATDB or RSDB_TDB in TCOLL, depending on your SAP release.
    But again:
    If you want to exclude these 2 tables from replication, you are fine as well - but I would run this job even more rarely for better performance.
    Regards

  • Essbase 7.1 - Incremental data load in ASO

    Hi,
    Is there an incremental data loading feature in ASO version 7.1? Let's say I have the following data in the ASO cube:
    P1 G1 A1 100
    Now, I get the following 2 rows as per the incremental data from relational source:
    P1 G1 A1 200
    P2 G1 A1 300
    So, once I load these rows using a rule file with the "override existing values" option, will I have the following data set in ASO:
    P1 G1 A1 200
    P2 G1 A1 300
    I know there is a data load buffer concept in ASO 7.1, and that this is the only way to improve data load performance. But I just wanted to check whether we can implement incremental loading in ASO or not.
    And one more thing: can two load rules run in parallel to load data into ASO cubes? As per my understanding, when we start loading data, the cube is locked for any other insert/update. Please correct me if I'm wrong!
    Thanks!

    Hi,
    I think features such as incremental data loads became available in version 9.3.1.
    The "What's New" document for Essbase 9.3.1 contains:
    Incrementally Loading Data into Aggregate Storage Databases
    The aggregate storage database model has been enhanced with the following features:
    - An aggregate storage database can contain multiple slices of data.
    - Incremental data loads complete in a length of time that is proportional to the size of the incremental data.
    - You can merge all incremental data slices into the main database slice, or merge all incremental data slices into a single data slice while leaving the main database slice unchanged.
    - Multiple data load buffers can exist on a single aggregate storage database. To save time, you can load data into multiple data load buffers at the same time.
    - You can atomically replace the contents of a database or the contents of all incremental data slices.
    - You can control the share of resources that a load buffer is allowed to use and set properties that determine how missing and zero values, and duplicate values, in the data sources are processed.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Performance issue on Date filter

    In my WHERE condition I wanted a date filter on the sale date. I want all sales after 9/1/2014.
    CASE 1
    If I explicitly use a date literal like
    SaleDate > '2014-09-01 00:00:00:000', I get the result in 2 seconds.
    CASE 2
    Since I may need to use this date value again, I created a date table with a single column "date" and loaded the value '2014-09-01 00:00:00:000'.
    So now my WHERE clause is
    SaleDate > (Select date from dateTable)
    When I run this, the result does not show up even after 10 minutes. Both date types are datetime. I am baffled. Why does this query not return a result?

    As mentioned by Erland, for the optimizer the two situations are very different. With a literal, the optimizer can properly estimate the number of qualifying rows and adapt the query plan appropriately. With a scalar subquery, the value is unknown at compile time, and the optimizer will use heuristics to accommodate any value. In this case, the selection of all rows more recent than September 1st, 2014 is probably a small percentage of the table.
    I can't explain why the optimizer or engine goes awry, because the subquery's result is a scalar and shouldn't result in such a long runtime. If you are unlucky, the optimizer expanded the query and actually joins the two tables. That would make the indexes on table dateTable relevant, as well as the distribution and cardinality of dateTable's row values. If you want to know, you would have to inspect the (actual) query plan.
    In general, I don't think your approach is a smart thing to do. I don't know why you want to have your selection date in a table (as opposed to a parameter of a stored procedure), but if you want to stick to it, maybe you should break the query up into something like the following. The optimizer would still have to use heuristics (instead of more reliable estimates), but some unintended consequences could disappear.
    Declare @min_date datetime
    Set @min_date = (SELECT date FROM dateTable)
    SELECT ...
    FROM ...
    WHERE SaleDate > @min_date
    If you use a parameter (or an appropriate query hint), you will probably get performance close to your first case.
    Gert-Jan
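    One example of such a query hint is OPTION (RECOMPILE), which makes the optimizer compile the plan using the actual value of the variable. A rough sketch only, assuming the sales data lives in a table called dbo.Sales with a SaleAmount column (both names are assumptions):
    DECLARE @min_date datetime;
    SET @min_date = (SELECT date FROM dateTable);
    SELECT SaleDate, SaleAmount        -- placeholder column list
    FROM   dbo.Sales                   -- assumed table name
    WHERE  SaleDate > @min_date
    OPTION (RECOMPILE);                -- plan is built with the sniffed value of @min_date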

  • Issue in data replication for one particular table

    Hi,
    We have implemented Streams in our test environment and are testing the business functionalities. We have an issue with data replication for only one custom table; all other tables replicate properly with no issue. When we do a 100-row update, data replication does not happen for that particular table.
    Issue to simulate:
    Update one row -- replication successful.
    Update 100 rows -- after 3-4 hrs nothing has happened.
    Please let me know if any of you have come across a similar issue.
    Thanks,
    Anand

    Extreme slowness on the apply site is usually due to locks, library cache locks, or oversized segments in the Streams technical tables left after a failure during a heavy insert. These tables are scanned with full table scans, and scanning millions of empty blocks hundreds of times results in a very big loss of performance, but not to the extent you are describing. In your case it sounds more like a form of lock.
    We need more info on this table: LOB segments? Tablespace in ASSM?
    Is the table partitioned, and do you have a job that drops partitions? Most interesting are the system waits and, above all, the apply server session waits. Given the time frame, I would rather look for a lock or a library cache lock caused by a drop partition or concurrent updates. While you are performing the update, you can query DBA_DDL_LOCKS, DBA_KGLLOCK and DBA_LOCK_INTERNAL to check that you are not caught in a library cache lock.
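    While the 100-row update is running, a quick look at the first of those views shows whether a session is stuck on a DDL / library cache lock. A rough sketch only; the schema name is a placeholder:
    -- Sessions holding or waiting for DDL / library cache locks on your objects
    SELECT session_id, owner, name, type, mode_held, mode_requested
    FROM   dba_ddl_locks
    WHERE  owner = 'YOUR_SCHEMA'      -- replace with the schema of the custom table
    ORDER  BY session_id;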

  • Data Replication Basics missing

    Hi Experts,
    I'm on MDG 6.1.
    When I go through the blogs and forums, I understand that data replication can be configured as follows:
    Step 1)
    Perform the ALE settings.
    Step 2)
    In DRFIMG:
    - define the o/b implementation
    - define the replication model (attach the o/b implementation here)
    Step 3)
    In MDGIMG:
    - when you define the business activity, you mention the o/b implementation created above under the "BOType" column.
    Is this all, or did I miss anything else?
    I have read in a forum that you also need to mention this in BRF+ -> non-user-agent step -> Pattern 04 -> Data Replication.
    Does this mean that if I don't mention this step in BRF+, the replication will not work?
    By doing all of the above, will a create and a change be replicated as and when they occur?
    Are these the same steps to execute replication for all reuse objects, or do they differ?
    Please help.
    Regards
    Eva

    While configuring data replication, can you tell me in which cases we should use BRF+ -> Pattern 04 -> Data Replication (the non-user-agent step) and in which cases we should not?
    From the previous post I understood that data replication works "without" a non-user-agent step in BRF+ (Pattern 04 -> Data Replication),
    i.e. Approvals -> Activation -> Complete triggers data replication automatically?
    Please, can any expert answer my two questions above?
    Regards
    Eva

  • Incremental Data load in SSM 7.0

    Hello all,
    I once raised a thread on SDN about how to automate data loads into SSM 7.0:
    Periodic data load for a model in SSM
    Now my new requirement is not to upload the whole data set again, but only the new data (data arriving after the previous data load). Is there a way to do an incremental data load in SSM 7.0? Loading all of the fact data again and again will hurt the performance of the SSM system. Is there a workaround in case there is no solution?
    Thanks
    Vijay

    Vijay,
    In your PAS model you can build a procedure to remove data and then load the data for the correct time period.
    In PAS, to remove data but not the variable definitions from the database:
    Removing data for a particular variable
    REMOVE DATA SALES
    or, if only particular areas within it are affected:
    SELECT product P1
    SELECT customer C1
    REMOVE SELECTED SALES
    or remove all data
    REMOVE DATA * SURE
    or just a time period
    REMOVE DATA SALES BEFORE Jan 2008
    Then you would construct or modify your load procedure to load the data for the new period:
    SET PERIOD {date range}
    Regards.
    Bpb

  • HT3231 I have completed migration - is it possible to merge the two accounts now, or is there a shortcut to share information? I don't want to log in and out each time I want to see my old data.

    I have completed the migration. Is it possible to merge the two accounts now, or is there a shortcut to share information? I don't want to log in and out each time I want to see my old data.

    No, the camera connection kit only supports the copying of photos and videos to the Photos app, it doesn't support copying content off the iPad. For your second camera instead of the SD reader part of the kit, does the iPad to camera cable not work with it for direct transfer to the iPad ?
    For Lightroom and Nikon software again no - you can only install apps that are available in the iTunes app store on your computer and the App Store app on the iPad, 'normal' PC and Mac (OS X) software are not compatible with the iPad (iOS). There are some apps that perform fairly basic editing functions that are available in the store, but nothing as sophisticated as, for example, Lightroom.

  • How to use incremental data load in OWB? can CDC be used?

    hi,
    I am using Oracle 10g Release 2 and OWB 10g Release 1.
    I want to know how I can implement incremental data loads in OWB.
    Does OWB have such a built-in feature, like Informatica does?
    Can I use the CDC concept for this? Is it viable and compatible with my environment?
    What could be the other possible ways?

    Hi ,
    As such, the current version of OWB does not provide functionality to directly use the CDC feature. You have to come up with your own strategy for incremental loading. For example, use the update dates if available on your source systems, or use CDC packages to pick up the changed data from your source systems.
    rgds
    mahesh
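    A common hand-rolled pattern for the update-date approach is to keep the last successful load timestamp in a small control table and filter the source on it. A rough sketch only, with hypothetical names (etl_control, src_orders, ORDERS_MAP):
    -- Pull only rows changed since the last successful load
    SELECT s.*
    FROM   src_orders s
    WHERE  s.last_update_date >
           (SELECT NVL(MAX(c.last_load_date), DATE '1900-01-01')
            FROM   etl_control c
            WHERE  c.mapping_name = 'ORDERS_MAP');
    -- After a successful load, advance the watermark
    UPDATE etl_control
    SET    last_load_date = SYSDATE
    WHERE  mapping_name = 'ORDERS_MAP';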

  • I want to achieve digitally triggered data acquisition and am using AIO lines for triggering, connected to PFI lines, but there is no response.

    How can I achieve digital data acquisition using AO signals as the trigger signal?

    Greetings,
    I am not sure that I have a clear understanding of your question. It appears that you want to perform a triggered acquisition. You mentioned both analog input (AI) and analog output (AO). Which type of operation are you performing? Are you using digital or analog triggering? Also, it would be helpful if you could mention what hardware and software you are using.
    Spencer S.

  • MDG-F 7.0 Data Replication using SOA

    Hello, 
    There has been an issue since MDG-F 7.0 was started. The issue is related to data replication results. Once master record data is successfully replicated into ECC SAP 6.0, ECC does not send anything back to MDG-F for a status update. We intend to do some custom development work to fill the gap using email notification, but we don't have a clear picture of what needs to be done.
    The requirement is that once master data is successfully or unsuccessfully updated in ECC, we want MDG-F to generate an email notification telling the requester what happened, and we want the status to be updated per master record 1, 2, 3..., per change request 1, 2, 3, and per Edition.
    Any thought?
    Thanks,
    LUO

    Hello Luo
    You can configure a custom email notification after a successful update of master data in ECC.
    MDG-M: How To Send an Email Notification to the Requestor at Time of Material Activation
    Another way out is - since you are using SOA - to send the message back to the receiver system. Check with your PI consultant; you can configure the messages after a successful data update.
    Kiran

  • MDG-F 7.0 future planned data replication

    Hello,
    From what I have learned, the valid-from date in an Edition can be used to control the timing of data replication. In my data prototype in MDG-F 7.0, it is clear that a GL change request behaves that way. In other words, if the valid-from date is August 1, 2014 (today = 07/15/2014), the GL change request won't arrive in ECC until August 1, 2014, regardless of the replication-timing choice I have made.
    Will the same behavior occur for cost center change requests and profit center change requests?
    Thanks,
    Luo

    Hello Luo
    Yes, you can do that, but the real problem I faced is that on the edition screen you get the list of all the editions, and users get confused by this.
    The best way is to connect your DEV/Quality systems with the respective MDG Dev and Quality systems, use automatic replication, and perform the testing in each system. Once you get the approval, you can create the master data in production a day before go-live. This has worked well so far in my case.
    Kiran

  • Difference between Data Replication and Data Synchronization?

    Hi Brian: Sorry, the Data Replication task wizard does not support ODBC sources. I don't think there's an easy way to truncate the target tables using a Data Synchronization task, but you should be able to read data from an ODBC source and write it to your target. The main difference between the two task wizards: Data Replication allows you to copy multiple tables from the same source and do incremental loads (only copying new or updated data on subsequent runs of the same task); Data Synchronization allows you to transform data with field expressions and lookups. Cheers, Josh

    I typically use Data Replication where I truncate the target, but it doesn't appear that this works for ODBC connections. However, Data Synchronization does work for ODBC connections - if I'm already truncating the target, is this effectively the same as a Data Synchronization task?
