New table in replicated setup

Hi,
I'm having two datastores, A and B. The whole datastore is replicated in a two-way replication scheme.
My question is: what are the steps to be taken if I'm adding a new table to this setup?
After adding the new table in both the datastores, replication is not happening for the new table.
Do I need to do a duplicate operation on one of the datastores after creating the new table in the other?
Regards
Pratheej

Hi Pratheej,
Please confirm my understanding that you are using legacy replication (CREATE REPLICATION as opposed to CREATE ACTIVE STANDBY PAIR). For legacy datastore-level replication, when you create a new table in a replicated datastore it is created as 'EXCLUDED' (i.e. as if it had existed at the time you created the replication scheme but you had explicitly EXCLUDED it). Hence, to get that table into replication you need to 'ALTER REPLICATION ... INCLUDE ...'.
If you are adding multiple tables you can do them all in one 'hit', so in steps 5 and 7 you would do all the tables at the same time; hence only one duplicate is needed regardless of the number of tables being added.
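As a minimal sketch of that INCLUDE step (the scheme, element and table names here are hypothetical, and the exact syntax should be checked against the TimesTen Replication Guide):
ALTER REPLICATION ttrep.repscheme
  ALTER ELEMENT ds_element DATASTORE
  INCLUDE TABLE pratheej.newtab;
This is typically run with the replication agent stopped; the duplicate then refreshes the other datastore.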
If you were to use ACTIVE STANDBY PAIR replication then you could do this much more easily using DDL Replication.
Chris

Similar Messages

  • Oracle stream - Downstream new table setup

    Hi,
I want to add a new table to my existing Oracle Streams setup. Below are the steps. Is this OK?
    1) stop apply/capture
2) Add a new rule to the existing capture, which internally will also call DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION (I guess):
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'NIG.BUILD_VIEWS',
    streams_type    => 'CAPTURE',
    streams_name    => 'NIG_CAPTURE',
    queue_name      => 'STRMADMIN.NIG_Q',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'PNID.LOUDCLOUD.COM');
END;
/
3) Import (which will instantiate the table):
    impdp system DIRECTORY=DBA_WORK_DIRECTORY DUMPFILE=nig_part2_srm_expdp_%U.dmp exclude=grant,statistics,ref_constraint
    4) start apply/capture
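For reference, a minimal sketch of steps 1 and 4, assuming the capture name above; the apply name NIG_APPLY is hypothetical:
BEGIN
  DBMS_CAPTURE_ADM.STOP_CAPTURE(capture_name => 'NIG_CAPTURE');
  DBMS_APPLY_ADM.STOP_APPLY(apply_name => 'NIG_APPLY');
END;
/
-- ... add the table rules (step 2) and import (step 3), then:
BEGIN
  DBMS_APPLY_ADM.START_APPLY(apply_name => 'NIG_APPLY');
  DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'NIG_CAPTURE');
END;
/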

    Have you applied this way? What was the result?
    regards

  • "Program terminated in remote system NONE: Logon failed" when adding a new table

    Hello,
I set up a replication flow from an SAP ERP 6.0 EHP7 with SAP ASE 16.0 source to an SAP HANA 1.0 rev 82 target.
I am using a standalone SLT system, NetWeaver 7.0 with DMIS 2011_1_731 (SP 1 to 7).
I have 100 tables to replicate.
I succeeded in setting up replication for 57 tables. These tables are replicating properly.
Whenever I try to add a new one (with the LTRC transaction, Data Provisioning -> Start Replication), the new table is marked as 'Failed' after a little while.
When I press the 'Show Error Log' button I get an obscure error message:
"Program terminated in remote system NONE: Logon failed"
I do not understand this message. I checked both the SAP ASE source and the SAP HANA target. I am still able to connect to both source and target.
Can you please tell me how to troubleshoot this error?
    Thanks in advance,
    Christian

    First thank you for answering my questions. I really appreciate your answers.
    I rechecked the documentation.
    "Application Operations Guide SAP Landscape Transformation Replication Server Document Version: 2.3 – 2014-07-08"
    Page 29 - 30
    3.5.2.2 Data Transfer Jobs
This section explains the relationship between the number of data transfer jobs and the number of available background work processes.
Data transfer and data transformation processing on the SLT server system is accomplished by the background work processes of the underlying SAP NetWeaver ABAP application server. Each job occupies 1 background work process in the SAP LT Replication Server system. For each configuration, the parameter Data Transfer Jobs restricts the maximum number of data load jobs for each mass transfer ID (MT_ID). In total, a mass transfer ID (MT_ID) requires at least 4 background jobs to be available:
• One monitoring job (master job)
• One master controller job
• At least one data load job
• One additional job either for the migration objects definition, access plan calculation or to change configuration settings in the Configuration & Monitoring Dashboard
    Example
    If you set the parameter Data Transfer Jobs to 04 in a configuration “SCHEMA1”, a mass transfer ID 001 is assigned. As a result, the following jobs should be in the system:
• 1 Master controller job: /1LT/IUC_REP_CNTR_001
• At most 4 parallel jobs for MT_ID 001: /1LT/IUC_LOAD_MT_001_001/~002/~003/~004
    When configuring your data load or replication scenario, consider the following:
• Do not define more data transfer jobs than the number of available application server background work processes. If all available background work processes are already occupied by jobs, any other job will have to wait until a free work process becomes available. This can lead to long wait times until a new activity (for example creating triggers) can start, and can also result in significantly increased latency times for data replication.
• The number of dialog work processes in the source system corresponds 1:1 with the number of data transfer jobs in the SAP LT Replication Server system.
• Besides the work processes allocated by the data transfer jobs, you need to provide additional available work processes for controller and monitoring jobs, the migration objects definition, access plan calculation, configuration changes, and so on.
    Sizing for SAP LT Replication Server involves determining how many work processes are required to perform the initial load of data into the target system within an acceptable timeframe, and accomplish the change capturing and the transfer of data changes to the target system within expected latency times.
    Ensure that you add enough additional work processes to allow other required SAP LT Replication Server jobs to run.
    Finally, you map the number of required application server work processes to their system resource consumption (CPU, memory, disc space) using the formulas provided by the SLT Sizing Guide.
    With the simple formula below, you can calculate the number of required application server work processes (WPs) on the SLT Server for each active SLT configuration.
    The number of required work processes can be determined by adding
• the number of required data transfer jobs,
• plus one background work process for the Central Master (Monitoring) Job (only one per system!),
• plus one background work process for the Master Controller Job,
• plus 3-5 additional empty background work processes (recommended per configuration),
• plus approx. 3 dialog work processes (recommended for each configuration).
    Note: A lack of available free application server work processes can negatively affect the data load or data replication processes.
To summarize everything, the number of 'Data Transfer Jobs' must be set depending on the number of source tables; it does not have to equal the actual number of tables.
Assume that for my 100 tables I use 10 'Data Transfer Jobs':
- The number of work processes on the SLT server would be 20. I used the simple formula from the documentation:
    10 data transfer jobs ,
    + 1 background work process for Central Master (Monitoring) Job (only one per system!),
    + 1 background work process for Master Controller Job,
    + 5 additional empty background work processes (recommended per configuration),
    + 3 dialog work processes (recommended for each configuration).
- The number of dialog processes on the source server would be 10 (equal to the number of 'Data Transfer Jobs')
Am I correct?
    Regards,
    Christian

  • New table/column in publication breaks replication

    Hi, 
    SQL 2008R2 
I added a control table to a database that is being replicated to a different server. The table, called [__Updated], has one column called [DateUpdated] of type datetime2. I manually created the table in the subscriber, added the new table/column to the list of articles and ran the replication.
It falls over with the error shown below; any ideas?
    2015-02-24 16:23:34.32 [95%] Generating schema scripts for article 'AAA'
    2015-02-24 16:23:34.32 [95%] Generating schema scripts for article '__Updated'
    2015-02-24 16:23:34.33 [95%] The replication agent had encountered an exception.
    2015-02-24 16:23:34.33 Source: Unknown
    2015-02-24 16:23:34.33 Exception Type: Microsoft.SqlServer.Management.Smo.FailedOperationException
    2015-02-24 16:23:34.33 Exception Message: Script failed for Table 'dbo.__Updated'.
    2015-02-24 16:23:34.33 Message Code: Not Applicable
    2015-02-24 16:23:34.33
    Exact version is: 
    SELECT @@VERSION
    Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
    Jun 28 2012 08:36:30
    Copyright (c) Microsoft Corporation
    Standard Edition (64-bit) on Windows NT 6.2 <X64> (Build 9200: ) (Hypervisor)

Replication is considered to be a mature technology, which means there are few changes. Oracle publishing is gone and updatable subscriptions are gone. Other than that it is the same.

  • How to add new tables in Streams for Schema level replication ( 10.2.0.3 )

    Hi,
I am in the process of setting up Oracle Streams schema-level replication on version 10.2.0.3. I am able to set up replication for one table properly. Now I want to add 10 more new tables for schema-level replication. A few questions regarding this:
1. If I create the new tables in the source, do I have to create the tables in the target database manually, or do I have to do an export/import with STREAMS_INSTANTIATION=Y?
2. Can you tell me the Metalink note ID to read more on this topic?
    thanks & regards
    parag

The same capture and apply processes can be used to replicate other tables. The following steps should meet your need:
    Say table NEW is the new table to be added with owner SANTU
    downstr_cap is the capture process which is already running
    downstr_apply is the apply process which is already there
    1. Now stop the apply process
    2. Stop the capture process
3. Add the new table to the capture process using a positive (+ve) rule:
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'SANTU.NEW',
    streams_type    => 'capture',
    streams_name    => 'downstr_cap',
    queue_name      => 'strmadmin.DOWNSTREAM_Q',
    include_dml     => true,
    include_ddl     => true,
    source_database => '<name of the source database>',
    inclusion_rule  => true);
END;
/
4. Take an export of the new table with the OBJECT_CONSISTENT=Y option
5. Import the table at the destination with the STREAMS_INSTANTIATION=Y option
    6. Start the apply process
    7. Start the capture process
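A minimal sketch of steps 4 to 7, assuming the names used above (passwords, file names and the source database name are placeholders):
# Step 4, at the source (shell):
exp userid=strmadmin/<pwd> tables=SANTU.NEW object_consistent=y file=new.dmp
# Step 5, at the destination (shell):
imp userid=strmadmin/<pwd> tables=SANTU.NEW streams_instantiation=y file=new.dmp
-- Steps 6 and 7, from SQL*Plus:
BEGIN
  DBMS_APPLY_ADM.START_APPLY(apply_name => 'downstr_apply');
  DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'downstr_cap');
END;
/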

  • How to find the new tables and columns in a schema

hi.. good morning to all...
I have a schema ABC which owns some objects.
Some days ago I made another schema XYZ which was a replica of the ABC schema.
Since then, some new tables, new columns in the existing tables (with or without default values) and comments on the columns have been added in the new schema, i.e. the XYZ schema.
Now I have to find the extra things which are present in the new schema. I need to find the new tables, the new columns in the existing tables, their default values and their descriptions.
Can you please help me find them?
I am guessing that I have to write a SQL query with a MINUS clause, but I am not able to write it and also don't know where I should execute it.
Please help. Thanks in advance.

And moreover, when I am executing the query to get the desired result, it throws an "illegal use of long datatype" error pointing to the b.data_default part of my query:
select a.table_name, a.column_name, b.data_default, a.comments
from all_col_comments a, dba_tab_columns b
where a.TABLE_NAME=b.TABLE_NAME
and a.OWNER=b.OWNER
and a.OWNER='XYZ'
minus
select c.table_name, c.column_name, d.data_default, c.comments
from all_col_comments c, dba_tab_columns d
where c.TABLE_NAME=d.TABLE_NAME
and c.OWNER=d.OWNER
and c.OWNER='ABC'
order by 1, 2;
Please help...

  • ME2K Report Failed to Include New SRM PO Replicated ECC

We implemented SRM 7.0 in July 2011 and we noticed that when we run ME2K (program RM06K00), the generated report lists ECC POs only. Why are new SRM POs replicated to ECC not included? The report should include all PO types in ECC.
We also noticed that the structure of PS financial reports such as S_ALR_87013558 and CJE0 does not display SRM PO data correctly.
Any ideas how to fix the reports?
    Edited by: Basilio Reyes on Nov 2, 2011 8:03 PM

Check the BBP_DOCUMENT_TAB table... can you find the PO there with the IDoc number? How many entries are there, and what is the IDoc status in WE02 for those?

  • Can I replicate new tables using the ACTIVE STANDBY PAIR replication scheme

    Hi,
    I have created myself a simple setup using an active/standby pair with a single subscriber like so:
    CREATE ACTIVE STANDBY PAIR cie ON "tt-test1", cie ON "tt-test2" RETURN RECEIPT SUBSCRIBER cie on "tt-test3";
I have then added some tables on the master; they did not replicate automatically. I find this:
    Command> repschemes;
    Replication Scheme Active Standby:
    Master Store: CIE on TT-TEST1
    Master Store: CIE on TT-TEST2
    Master Return Service: Return Receipt
    Subscriber Store: CIE on TT-TEST3
    Excluded Tables:
    ROOT.EXTRACTOR_
    ROOT.PROMPT_
    ROOT.PREFERABLE_
    Included Tables:
    List too long (59 items), use verbosity 4 to display
    <snip>
My question is ... how do I include these tables in replication?
Do I need to trash and clone the secondary master store and the subscriber again? Even doing that won't add the tables to the replication scheme, so I don't think that is a solution.
I couldn't find much documentation on the ALTER REPLICATION statement, but from what I could find it requires me to know the 'name' of the replication scheme, and the examples in the documentation didn't work when I used 'Active Standby' as the scheme name in the statement.
Am I missing something here? Is this a limitation of using the ACTIVE STANDBY PAIR replication model?
    Thanks in advance.
    Huw

When you set up and roll out the ACTIVE/STANDBY pair (or indeed legacy replication) it only includes tables that already exist. The normal deployment process is:
    1. Create the first datastore (the one which will initially be the 'active').
    2. Create (and populate) all necessary tables.
    3. Create the active/standby pair replication scheme.
    4. Start the repagent
    5. Make the datastore active by calling ttRepStateSet('ACTIVE')
    6. Use ttRepAdmin -duplicate to create the standby store from the active
    7. Start repagent at standby
    8. Use ttRepAdmin -duplicate to create the subscriber store from the standby
9. Start repagent at subscriber
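For reference, step 5 is a built-in procedure call issued from ttIsql on that store:
Command> call ttRepStateSet('ACTIVE');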
    If you need to add/remove tables later you must do the following:
    At active node:
    1. Create any new tables (and populate them) as needed
    2. Stop repagent
    3. Execute ALTER ACTIVE STANDBY PAIR with INCLUDE and/or EXCLUDE clauses as required
    4. Start repagent
    Then you need to redeploy the other stores:
    At standby:
    5. Stop repagent
    6. Drop datastore (ttDestroy)
    7. Re-create datastore from active using ttRepAdmin -duplicate
    8. Start repagent
    At subscriber:
    9. Stop repagent
    10. Drop datastore (ttDestroy)
    11. Re-create datastore from standby using ttRepAdmin -duplicate
    12. Start repagent
    This is documented in the TimesTen Replication Guide in the section on administering an active/standby pair.
    Chris

  • How can I create a new table in a MySQL database in MVC 5

    I have an MVC 5 app, which uses MySQL hosted in Azure as a data source. The point is that inside the database, I want to create a new table called "request". I have already activated migrations for my database in code. I also have the following
    code in my app.
    Request.cs: (inside Models folder)
public class Request
{
    public int RequestID { get; set; }

    [Required]
    [Display(Name = "Request type")]
    public string RequestType { get; set; }
}
    Test.cshtml:
@model Workfly.Models.Request
@{
    ViewBag.Title = "Test";
}
<h2>@ViewBag.Title.</h2>
<h3>@ViewBag.Message</h3>
@using (Html.BeginForm("SaveAndShare", "Home", FormMethod.Post, new { enctype = "multipart/form-data" }))
{
    @Html.AntiForgeryToken()
    <h4>Create a new request.</h4>
    <hr />
    @Html.ValidationSummary("", new { @class = "text-danger" })
    <div class="form-group">
        @Html.LabelFor(m => m.RequestType, new { @class = "col-md-2 control-label" })
        <div class="col-md-10">
            @Html.TextBoxFor(m => m.RequestType, new { @class = "form-control", @id = "keywords-manual" })
        </div>
    </div>
    <div class="form-group">
        <div class="col-md-offset-2 col-md-10">
            <input type="submit" class="btn btn-default" value="Submit!" />
        </div>
    </div>
}
@section Scripts {
    @Scripts.Render("~/bundles/jqueryval")
}
    HomeController.cs:
[HttpPost]
public ActionResult SaveAndShare(Request request)
{
    if (ModelState.IsValid)
    {
        var req = new Request { RequestType = request.RequestType };
        return RedirectToAction("Share");
    }
    return View(request); // fallback added so the action compiles when validation fails
}
The point is that I want the user to fill in the form inside the Test view and click submit, and when submit is clicked, I want a new entry in the new table to be created. But first, of course, I need to create the table. Should I create it using a SQL query through MySQL Workbench? If yes, then how can I connect the new table with my code? I guess I need some DB context but don't know how to do it. If someone can post a code example, I would be glad.
    UPDATE:
    I created a new class inside the Models folder and named it RequestContext.cs, and its contents can be found below:
public class RequestContext : DbContext
{
    public DbSet<Request> Requests { get; set; }
}
Then I ran the "Add-Migration Request" and "Update-Database" commands, but still nothing. Please also note that I have a MySqlInitializer class, which looks something like this:
public class MySqlInitializer : IDatabaseInitializer<ApplicationDbContext>
{
    public void InitializeDatabase(ApplicationDbContext context)
    {
        if (!context.Database.Exists())
        {
            // if database did not exist before - create it
            context.Database.Create();
        }
        else
        {
            // query to check if MigrationHistory table is present in the database
            var migrationHistoryTableExists = ((IObjectContextAdapter)context).ObjectContext.ExecuteStoreQuery<int>(
                string.Format(
                    "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = '{0}' AND table_name = '__MigrationHistory'",
                    "<schema name - argument truncated in the original post>"));
            // if MigrationHistory table is not there (which is the case first time we run) - create it
            if (migrationHistoryTableExists.FirstOrDefault() == 0)
            {
                context.Database.Delete();
                context.Database.Create();
            }
        }
    }
}

    Hello Toni,
    Thanks for posting here.
Please refer to the links below:
    http://azure.microsoft.com/en-us/documentation/articles/web-sites-dotnet-deploy-aspnet-mvc-app-membership-oauth-sql-database/
    http://social.msdn.microsoft.com/Forums/en-US/3a3584c4-f45f-4b00-b676-8d2e0f476026/tutorial-problem-deploy-a-secure-aspnet-mvc-5-app-with-membership-oauth-and-sql-database-to-a?forum=windowsazurewebsitespreview
    I hope that helps.
    Best Regards,
    Sadiqh Ahmed

  • Error while creating new table

    Hi
First I deleted one custom table. Then I tried to create the same table with different fields. Now I am getting these errors:
1. ZLV_COMP_TABLE: Inconsistency DD <-> DB (check table with analysis tool)
2. A table called ZLV_COMP_TABLE exists in the database
3. No active nametab exists for ZLV_COMP_TABLE
4. Termination due to inconsistencies
5. Table ZLV_COMP_TABLE (Statements could not be generated)
6. Error number in DD_DECIDE (9)
Please help me... how can I create a table with the same name?
    thanks
    Subhankar

    Hello,
Go to SE14 and enter the table name.
Use the Tables radio button.
Click on Edit.
Check the Delete Data radio button.
Click on ACTIVATE AND ADJUST DATABASE.
Now check the table in SE11.
If it still exists you can change the same table, or delete it again and create a new table with the same name.
    This might help your query.
    Anil.

  • Creation of New Table for Delivery Output Type.

    Hi Guys,
I need to replace an existing table by creating a new table in the existing access sequence with the combination "Ship-to Party / Product Hierarchy".
Logistics > Shipping > Basic Shipping Functions > Output Determination > Maintain Condition Tables -> Maintain output condition table for deliveries.
I am choosing a new table with the name 902, but I am not getting the field "PRODH Product Hierarchy" on the right-hand side to choose from.
I checked the field catalog as well. At first the field catalog did not have the field either, so I added it by choosing New Entries.
I guess I am missing some step in between; that's why the new field (PRODH) is not showing on the right-hand side during creation of the condition table.
One more thing: when I look at the field catalog I see a very large number of fields, so why do only a few appear during creation of a new table?
Can you guys correct me and help me find a way?
    Thanks very much indeed.
    Regards,
    Vivek

Hi. If you already have the field in the field catalog, you can add it with New Entries.
Try this: enter t-code SE11, enter KOMB (it will ask for an access key; get it from Basis) and add your field to the field catalog.
KOMB is the field catalog for condition key: output determination.
If you want to add a completely new field to your field catalog, you can try a userexit.
1) ADDING NEW FIELDS IN PRICING
In SD pricing, the fields on the basis of which pricing is done are derived from the FIELD CATALOG, which is the structure KOMG. This structure is used to transfer transaction data to the pricing procedure in SD and is also known as the communication structure. The structure KOMG consists of two tables: KOMK for header-related fields and KOMP for item-related fields.
Fields which are not in either of the two tables KOMK and KOMP cannot be used in pricing. Sometimes a need arises for pricing to be based on some other criterion which is not present as a field in either of the two tables.
This problem can be solved by using USEREXITS, which are provided for pricing in SD.
Pricing takes place both when the sales order (transaction VA01) is created and when invoicing (transaction VF01) is done. Hence SAP provides two userexits for sales order processing:
USEREXIT_PRICING_PREPARE_TKOMP or
USEREXIT_PRICING_PREPARE_TKOMK
Depending upon which table (KOMK or KOMP) the new fields were inserted into, we use one of the above two userexits. These userexits are found in include MV45AFZZ of the standard SAP sales order creation program SAPMV45A.
The userexits which are called when invoicing is done are provided in include RV60AFZZ, which belongs to the standard SAP billing program SAPLV60A. The names of the userexits are the same, i.e.
USEREXIT_PRICING_PREPARE_TKOMP or
USEREXIT_PRICING_PREPARE_TKOMK
These userexits are used for passing the data from the communication structure to the pricing procedure. For this we have to fill the newly created field in the communication structure KOMG; we do this in the above userexits with a MOVE statement, after the data to be passed has been read from the database table with a SELECT statement. The actual structure which is visible in these userexits, and which is to be filled for that particular field, is TKOMP or TKOMK.
Before the coding for these userexits is done, it is necessary to create the new field in either of the two tables KOMK or KOMP. For this purpose, includes are provided in each of them.
To create the field in header data (KOMK) the include provided is KOMKAZ, and to create the field in item data (KOMP) the include provided is KOMPAZ.
One possible example of the need for new fields: freight is to be based upon the transportation zone, for which no field is available in the field catalog. The field can therefore be created in KOMK, and the above userexits can then be used to fill the transportation data into it, as shown in the sketch below.
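To illustrate the freight example, here is a minimal sketch of the sales order userexit in MV45AFZZ (the custom field ZZTRZONE, assumed to have been appended to KOMK via include KOMKAZ, is hypothetical):
FORM userexit_pricing_prepare_tkomk.
* Pass the transportation zone of the customer to pricing via the
* hypothetical custom field ZZTRZONE (added to KOMK through include KOMKAZ).
  SELECT SINGLE lzone
    FROM kna1
    INTO tkomk-zztrzone
    WHERE kunnr = vbak-kunnr.
ENDFORM.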
2) Another method of finding a userexit is to search for the word USEREXIT in the program associated with the transaction for which we want to determine the userexit, using SE38.
3) Another method of finding userexits, in the case of SD/MM applications, is to find the include where the userexits are located; this can be found in the SAP Reference IMG, generally in a subfolder under System Modification.
There are further examples of userexits in SD as well.
    Message was edited by:
            SHESAGIRI GEDILA

  • New table in Report painter

    Dear All,
Can someone guide me as to how to include the new table FAGLFLEXT in Report Painter reports?
    Thanks and Regards,
    Gokul.

    Hi,
    use KE5B to make the FSV nodes available in Report painter reports. This function can be used to create/change sets based on FSV.
    Best regards, Christian

  • How to add a new table or view in the view object

hello, everyone.
I want to add a new table or view to the view object's query statement.
When the table or view is not in the WHERE clause, the query statement works fine.
If it is in the WHERE clause, I get "java.lang.NullPointerException".
Who can help me?
thank you very much

    thank you for your reply
I want to extend the VO
    oracle.apps.pay.selfservice.payslip.US.server.PayPayslipGetPersonDetail
    the original sql is:
    SELECT ppf.person_id,
    FROM per_people_f ppf,
    per_assignments_f paf,
    pay_assignment_actions paa
    where paa.assignment_action_id = :1 AND paf.assignment_id = paa.assignment_id AND SYSDATE BETWEEN paf.effective_start_date AND paf.effective_end_date AND paf.person_id = ppf.person_id
I want to extend it to:
    SELECT ppf.person_id, pay_v.element_name,pay_v.assignment_action_id
    FROM per_people_f ppf,
    per_assignments_f paf,
    pay_assignment_actions paa,
    pay_run_results_v pay_v
where paa.assignment_action_id = :1 AND paf.assignment_id = paa.assignment_id AND SYSDATE BETWEEN paf.effective_start_date AND paf.effective_end_date AND paf.person_id = ppf.person_id
AND pay_v.assignment_action_id = paa.assignment_action_id
If pay_v is in the where clause, I get the exception; otherwise it works fine.

  • DataSource for FAGLFLEXT and BSEG, or New Table in ECC6?

We need to create an extractor to have all the information of FAGLFLEXT, because we need to keep the ledger information and the split of the information. However, we need to add 13 fields contained in BSEG.
Therefore we thought to read the line-items table FAGLFLEXA and then enhance it through the BSEG table.
However, since we are using ECC6 and BI7, the creation of DataSources for FAGLFLEXA through FAGLBW03 is not supported.
Is it an option to incorporate all fields into FAGLFLEXT?
Can we create a new table group based on FAGLFLEXT and then add the coding-block extensions to that table?
How do the new G/L and the new table group work in parallel? What is the procedure to do this?
Documentation says we can create a new table group based on FAGLFLEXT --- it's the "how does it work in conjunction" part... for example, the new G/L handles document splitting and one other thing Georg referenced last night... will the split documents go into our new table group?
BSEG does not have the document-splitting information that we need (its data is incomplete). It is missing profit centers on many items, and it is missing the proper split of transactions.
    Thanks for your comments.

    Here is more information about this post.
    Client situation:  Our client is implementing ECC 6 and is using the "New-GL" features.  Because of business requirements, the coding block has been extended (not insignificantly - 18 extra fields at the moment) to accommodate legal, regulatory and management reporting.  The reporting solution includes standard ECC reporting (e.g. report writer, report painter reports) as well as feeds to BW (BI 7).
The Challenge:  Our understanding is that adding all of the coding-block extensions to the New-GL tables (i.e. FAGLFLEXA and FAGLFLEXT) may lead to performance degradation in the ECC system. However, we still need to accommodate the requirement to report by the additional dimensions that are not currently included in the New-GL, so our challenge has been to find a solution that minimizes performance issues while still allowing us to have all the necessary dimensions with which to do the required reporting.
What we would like to know:  How have you handled this in similar situations?
Have you added to the New-GL tables? How many fields? Performance issues encountered?
Have you created additional table group(s) based on the New-GL and then modified that structure to have the new fields? How does the additional table group work coincident with the New-GL (e.g. does the additional table group receive document-splitting information?)?
Have you created custom extractors for BW? On what basis (we understand that FAGLFLEXA cannot be created as a datasource to feed BW)?

  • New table contains data after Successful activation

Hi All,
One DSO activation failed due to a red request which was present in the target. We repeated the DSO activation step once we had deleted the bad request from the DSO, and it completed successfully.
Generally, as part of ODS activation, the data moves to the ACTIVE table and the CHANGELOG table, and after that the same data is deleted from the activation (NEW) queue (the "U table").
After that DSO activation termination, the U-table data is not getting deleted for that particular table.
As per SAP note 680480, if any termination happens while activating a DSO then there is a chance that only the request is activated and the activation queue is not deleted afterwards; also, from that point in time onwards the new table continues to grow.
The above-mentioned SAP note contains a solution up to version BW 3.5. As my system is BI 7.0, we can't implement the patches mentioned in the note.
Can anyone please tell me in which tables I need to delete the entry for that particular request, apart from RSODSACTREQ?
Regards,
    Sridevi.

    Hi Sridevi,
Actually it's not advisable to delete the entries from the DB tables. But at times we are forced to do that to avoid inconsistencies in the system.
If there are no pending requests in the DSO for activation and you are able to upload further to downstream data targets like cubes, you need not worry about the activation queue.
For the time being, since you have already deleted the bad entries from the RSODSACTREQ and RSREQICODS tables, I feel there will not be any inconsistencies in the system, so do not delete from any more tables.
If any inconsistency is found in the next complete data load, then delete from the other tables.
In order to avoid this in future, set the status to red in the request monitor before deleting from the DSO Manage.
    Regards,
    Suman
