MDG-F data distribution in case of add-on deployment

Hi Experts,
Can anyone describe the procedure for MDG-F data distribution in an add-on deployment scenario?
Is it required to release an edition each time? If so, what is the meaning of the checkbox "Immediately Distribute Change Requests" in MDG 6.1, if you have to replicate the edition manually each time? Is a data replication model also required for data distribution?

Hello Sanjay
MDG-F is a flex model, so you have to configure a replication model in any case.
There are two ways to replicate data from MDG:
1. Automatic replication - Mark "Immediately Distribute Change Requests" while creating the edition. With this checkbox set, the data is replicated automatically and there is no need to release the edition.
2. Manual replication - When you want to replicate the data manually, do not set the automatic distribution option. The data is stored in the edition, and you can release the edition at any time. The only thing you need to make sure of is that there are no open change requests in the edition. Once you release the edition, you can see it in the manual replication option.
Kiran

Similar Messages

  • ALE: Material Master Data Distribution

    Hi
    I am new to ALE material master data distribution, so if the questions are incorrectly worded, please accept my apologies; it is due to lack of knowledge.
    I am doing material master data distribution using the standard basic type MATMAS03. The standard transactions MM01 and MM02 use the SAP tables plus 3 custom tables.
    I have activated change pointers generally using BD61 and activated them for the message type.
    Q: Is the change pointer applicable to the entire material master? I.e., if I change a field in a Z table or in the standard material tables, can an IDoc be created for the entire material record?
    Q: What is the function of BD52? Does this override the option "Activate Change Pointers for Message Type"? Does it mean that if changes are made to the fields listed in BD52, a change pointer is created?
    Q: If I change a Z table only, will a change pointer be created? If not, how could I accomplish this using BD52?
    Q: When do we use the enhancement MGV00001?
    Also, if you could add anything in addition to the above queries, it would be helpful to me and I would appreciate it.
    With regards,
    William

    Thank you Sudhakar.
    Your tips are good, but I was not able to get the problem resolved.
    Let me explain what I have done based on your tips.
    1.  MATMAS uses MATMAS03. The basic type has NOT been modified to include the Z fields yet.
    2.  Using BD52, I have created the following entries:
         MATERIAL    ZEMM_MARA_TECH   IM_ADVISORY_CODE
         MATERIAL    ZEMM_MARA_TECH   KEY
    3.  Modified the material master record for IM_ADVISORY_CODE.
    I did not see a change pointer for this change.
    Q: When I created the above entries using BD52, it asked for the table name. The table name is stored in TCDOB. What transaction is available to maintain the entries in TCDOB? For the time being, I created them using SQL.
    Q: Is there follow-up configuration required to capture the changes?
    I will award the points in a couple of days, even if I do not get the solution, because you have spent your precious time for the community. I will wait a couple of days for other suggestions and then close the problem.

  • Update column data to Upper Case in parent and child table

    Hi,
    I am facing an issue while updating a column value to upper case in a parent table and a child table. How can I do that?
    When updating the parent row:
    ORA-02292: integrity constraint (XXXXXXXXXXXXXX_FK) violated - child record found
    When updating the corresponding child row:
    ORA-02291: integrity constraint (XXXXXXXXXXXXXXXX_FK) violated - parent key not found
    How can I update it in both places?
    Regards,
    AA

    I am facing an issue while updating a column value to upper case in a parent table and child table. How can I do that?
    Why do you need to do that?
    That is just ONE of several questions you should answer before you start modifying your data.
    1. What is your 4-digit Oracle version? (result of SELECT * FROM V$VERSION)
    2. If both values are the same case, what difference does it make what that case is? Then you don't need to alter your original data.
    3. What is the source of the column values you are using now? If you change your data to upper case it will no longer be identical to the source data.
    4. What is your plan for enforcing future values to be stored in UPPER case? Are you going to use a trigger? Have you written and tested such a trigger to see if it will even work the way you expect?
    5. Why aren't you using a surrogate key instead of a 'business' data item? You have just demonstrated one reason why surrogate keys can be useful: their actual value is NOT important.
    You should reexamine your problem and architecture and consider other alternatives.
    One alternative is to add a new 'surrogate key' column to use as the primary key. Just create a new sequence and use a trigger to populate the new column. Your current plan will require a trigger to perform the case conversion anyway, so instead just use the trigger to provide the key value.
    If the change is being done to facilitate searching, you could instead add a VIRTUAL column UPPER_MY_COLUMN and index that. Then you could search on that new virtual column and the data values would still be identical to the original data source.
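    For illustration, here is a minimal JDBC sketch of that virtual-column alternative. The table and column names (my_table, my_column), the connection URL and the credentials are placeholders, and it assumes Oracle 11g or later (where virtual columns were introduced) with the Oracle JDBC driver on the classpath:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class AddVirtualColumn {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details - adjust for your environment.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//localhost:1521/ORCL", "scott", "tiger");
                 Statement stmt = conn.createStatement()) {

                // Add a virtual column that always reflects UPPER(my_column);
                // the stored data itself is left untouched.
                stmt.execute("ALTER TABLE my_table ADD ("
                        + "upper_my_column VARCHAR2(30) "
                        + "GENERATED ALWAYS AS (UPPER(my_column)) VIRTUAL)");

                // Index the virtual column so case-insensitive searches can use it.
                stmt.execute("CREATE INDEX my_table_upper_idx "
                        + "ON my_table (upper_my_column)");
            }
        }
    }

    A query such as SELECT * FROM my_table WHERE upper_my_column = 'ABC' can then use the index, while the original column data stays identical to its source.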

  • Master Data Distribution!

    Hi!
    I want to know the purpose of master data distribution for the following, between the vendor and the customer:
    1. Material master
    2. Vendor master & customer master
    What's the purpose of linking our system with our vendor or customer, etc., with regard to master data?
    Please explain in detail.
    Thanks
    Rahul.

    Hi Rahul,
    We don't do master data distribution with a customer system or vendor system.
    Master data distribution is done between distributed systems of the same organization using ALE configuration. So we don't link to customer or vendor systems for transferring master data, but rather for transferring transactional data like purchase orders or sales orders.
    Master Data Distribution
    Rather than distributing the complete master data information, views of the master data can be distributed (for example, material sales data, material purchasing data). Each view of the master data is stored in a separate message type.
    Users can specify which data elements in a master record are to be distributed.
    Various distribution strategies are supported:
    - Cross-system master data can be maintained centrally and then distributed. The final values are assigned locally.
    - A view of the master data can be maintained locally. In this case there is always one maintenance system for each view. After the master data has been maintained, it is transferred to a central SAP system and distributed from there.
    Types of Distribution
    - Active distribution (PUSH): If the master data is changed (for example, new data, changes or deletions), a master data IDoc is created in the original system and is distributed as specified in the distribution model.
    - Requests (PULL): A request occurs when a client system needs information about master data held in the system. You can select specific information about the master data, for example, the plant data for a material.
    If you want to be notified of all subsequent changes to the master data, this has to be set up "manually" between the two systems. It is not yet possible for this to be done automatically by the distribution mechanism in the original system.
    Transferring the Master Data
    A distinction is made between transferring the entire master data and transferring only changes to the master data.
    If the entire master data is transferred, a master IDoc is created for the object to be distributed in response to a direct request from a system. Only the data that was actually requested is read and then sent. The customer specifies the size of the object to be distributed in a request report.
    If only changes are transferred, the master IDoc is created based on the logged changes.
    Reward points for the useful answers,
    Aleem.

  • [svn] 1543: Bug: BLZ-152-lcds custom Date serialization issue - need to add java.io. Externalizable as the first type tested in AMF writeObject() functions

    Revision: 1543
    Author: [email protected]
    Date: 2008-05-02 15:32:59 -0700 (Fri, 02 May 2008)
    Log Message:
    Bug: BLZ-152-lcds custom Date serialization issue - need to add java.io.Externalizable as the first type tested in AMF writeObject() functions
    QA: Yes - please check that the fix is working with AMF3 and AMFX and you can turn on/off the fix with the config option.
    Doc: No
    Checkintests: Pass
    Details: The problem in this case was that MyDate.as was serialized to MyDate.java on the server but on the way back, MyDate.java was serialized back to Date.as. As the bug suggests, added an Externalizable check in AMF writeObject functions. However, I didn't do this for AMF0Output as AMF0 does not support Externalizable. To be on the safe side, I also added legacy-externalizable option which is false by default but when it's true, it restores the current behavior.
    Ticket Links:
    http://bugs.adobe.com/jira/browse/BLZ-152
    Modified Paths:
    blazeds/branches/3.0.x/modules/core/src/java/flex/messaging/endpoints/AbstractEndpoint.java
    blazeds/branches/3.0.x/modules/core/src/java/flex/messaging/io/SerializationContext.java
    blazeds/branches/3.0.x/modules/core/src/java/flex/messaging/io/amf/Amf3Output.java
    blazeds/branches/3.0.x/modules/core/src/java/flex/messaging/io/amfx/AmfxOutput.java
    blazeds/branches/3.0.x/resources/config/services-config.xml
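    The "Details" note above describes adding an Externalizable check as the first type test in the AMF writeObject() functions. The following is only a simplified, hedged sketch of that pattern; the class and method names are illustrative, not the actual BlazeDS code:

    import java.io.Externalizable;
    import java.io.IOException;
    import java.util.Date;
    import java.util.Map;

    // Illustrative serializer skeleton: Externalizable is tested before more
    // specific types such as Date, so a custom subclass (e.g. MyDate extends
    // Date implements Externalizable) keeps its own serialization instead of
    // falling through to the generic Date branch on the way back to the client.
    public class AmfStyleWriter {

        public void writeObject(Object value) throws IOException {
            if (value == null) {
                writeNull();
            } else if (value instanceof Externalizable) {
                writeExternalizable((Externalizable) value);
            } else if (value instanceof Date) {
                writeDate((Date) value);
            } else if (value instanceof Map) {
                writeMap((Map<?, ?>) value);
            } else {
                writeGenericObject(value);
            }
        }

        private void writeNull() { /* write the AMF null marker */ }
        private void writeExternalizable(Externalizable e) { /* delegate to writeExternal() */ }
        private void writeDate(Date d) { /* write an AMF date */ }
        private void writeMap(Map<?, ?> m) { /* write an AMF map */ }
        private void writeGenericObject(Object o) { /* reflective serialization */ }
    }

    As the log notes, AMF0 does not support Externalizable, so a real implementation would apply this check only in the AMF3/AMFX writers and guard it behind a configuration option such as the legacy-externalizable flag mentioned above.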

  • If I install Firefox 4, will my old Firefox data such as cookies and add-ons get lost?

    If I install Firefox 4, will my old Firefox data, such as cookies and add-ons, get lost?
    I need that data, and I also want to install the new Firefox.

    You should always use beta test versions with a new and separate profile and keep the regular release so you can use that version in case of problems.
    See http://kb.mozillazine.org/Profile_Manager
    Firefox 4.0b1 is the first of the beta test release versions and still has a lot of bugs.
    Such releases are meant for testing only so if bugs are found then they can get reported and fixed before the final 4.0 version gets released.
    If you do not have experience with such test releases then you are better off with the latest Firefox 3.6.6 release.

  • My MTM version is not allowing the Data-Driven Test Case UI to be created?

    In MTM I am able to use the UI Builder to create (R&P) a test case (a standard login case), but when I try to use the same process to create a data-driven test case for login, the screens that enable that feature are not showing up. I am using the Ultimate package, but is there something else I should be doing?

    Hi,
    What do you mean by "the screens to allow that feature are not showing up"?
    In MTM, you can add parameters to a manual test case to run it multiple times with different data.
    More information, please refer to:
    # Add Parameters to a Manual Test Case To Run Multiple Times with Different Data
    https://msdn.microsoft.com/en-us/library/vstudio/dd997832(v=vs.110).aspx
    Regards
    Starain

  • Data distribution in distributed caching scheme

    When using the distributed (partitioned) scheme in Coherence, how does data distribution happen among the nodes in the data grid? Is there an API to control it, or are there configuration settings to control it?

    Hi 832093
    A distributed scheme works by allocating the data to partitions (by default there are 257 of these, but you can configure more for large clusters). The partitions are then allocated as evenly as possible to the nodes of the cluster, so each node owns a number of partitions. Partitions belong to a cache service, so you might have a cache service that is responsible for a number of caches, and a particular node will own the same partitions for all those caches. If you have a backup count > 0, then a backup of each partition is allocated to another node (on another machine if you have more than one). When you put a value into the cache, Coherence will basically perform a hash function on your key, which will allocate the key to a partition and therefore to the node that owns that partition. In effect a distributed cache works like a Java HashMap, which hashes keys and allocates them to buckets.
    You can have some control over which partition a key goes to if you use key association to co-locate entries in the same partition. You would normally do this to put related values into the same location, to make processing them on the server side more efficient in use cases where you might need to alter or query a number of related items. For example, in financial systems you might have a cache for Trades and a cache for TradeValuations in the same cache service. You can then use key association to allocate all the Valuations for a Trade to the same partition as the parent Trade. So if a Trade was mapped to partition 190 in the Trade cache, then all of the Valuations for that Trade would map to partition 190 in the TradeValuations cache and hence be on the same node (in the same JVM process), as sketched below.
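    As an illustration only, here is a minimal sketch of key association, assuming the standard com.tangosol.net.cache.KeyAssociation interface; the Trade/Valuation names simply follow the example above:

    import com.tangosol.net.cache.KeyAssociation;
    import java.io.Serializable;

    // Composite key for entries in the TradeValuations cache. Implementing
    // KeyAssociation tells Coherence to hash on the associated (parent Trade)
    // key, so every valuation lands in the same partition as its Trade.
    public class ValuationKey implements KeyAssociation, Serializable {

        private final String tradeId;
        private final int valuationId;

        public ValuationKey(String tradeId, int valuationId) {
            this.tradeId = tradeId;
            this.valuationId = valuationId;
        }

        public Object getAssociatedKey() {
            // Must be equal to the key used for the parent Trade in the Trade
            // cache, so both map to the same partition.
            return tradeId;
        }

        // equals() and hashCode() are omitted for brevity but are required in
        // practice, since Coherence compares and hashes keys by value.
    }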
    You do not really want to have control over which nodes partitions are allocated to, as this could impair Coherence's ability to evenly distribute partitions and allocate backups properly.
    JK

  • How can I backup data from a case-sensitive volume to a NON-case-sensitive volume?

    The case-sensitive volume in this instance being a desktop-mounted disk image volume.
    A tragi-comedy in too many acts and hours
    Dramatis Personae:
    Macintosh HD: 27" iMac 3.06GHz Intel Core 2 Duo (iMac10,1), 12 GB RAM, 1 TB SATA internal drive
    TB1: 1 TB USB external drive
    TB2: 2 TB USB to Serial-ATA bridge external drive
    Terabyte: a .dmg disk image and resulting desktop volume of the same name (sorry, I don't know the technical term for a .dmg that's been opened, de-compressed and mounted -- evanescently -- on the desktop)
    Drive Genius 3 v3.1 (3100.39.63)/64-bit
    Apple Disk Utility Version 11.5.2 (298.4)
    Sunday morning (05/08/11), disk utility Drive Genius 3's drive monitoring system, Drive Pulse, reported a single bad block on an external USB 2.0 1 TB drive, telling me all data would be lost and my head would explode if I didn't fix this immediately. So I figured I'd offload the roughly 300 GB of data from TB1 to TB2 (which was nearly empty), with the intention of reinitializing TB1 to remap the bad block and then move all its data BACK from TB2. When I opened TB1's window in the Finder and tried to do a straight "Select All" and drag all items from TB1 to TB2, I got this error message:
    "The volume has the wrong case sensitivity for a backup."
    The error message didn't tell me WHICH volume had "the wrong case sensitivity for a backup," and believe me, or believe me not, this was the first time I'd ever heard that there WAS such a thing as "case sensitivity" for a drive. I tried dragging and dropping some individual folders -- some of them quite large, in the 40GB range -- from TB1 to TB2 without any problem whatsoever, but the majority of the items were the usual few-hundred-MB stuff that seems to proliferate on drives like empty Dunkin' Donuts coffee cups on the floor of my car, and I didn't relish the idea of spending an afternoon dragging and dropping dribs and drabs of 300GB worth of stuff from one drive to another.
    Being essentially a simple-minded soul, I had what I thought was the bright idea that I could get around the problem by making a .dmg disk image file of the whole drive, stashing it on TB2, repairing and re-initializing TB1, and then decompressing the disk image I'd made of TB1, and doing the "drag and drop" of all the files in resulting desktop volume to TB1. So I made the .dmg of TB1, called "Terabyte," stashed that .dmg on TB2 (no error messages this time), re-initialized and then rebooted the iMac from my original Snow Leopard 10.6.1 disks and used Disk Utility to erase and initialize TB1 -- making sure that it was NOT initialized as case-sensitive, and installed a minimal system on TB1 from the same boot. Then I updated that 10.6.1 system to 10.6.7 with System Update, and checked to see that Disk Utility reported all THREE drives -- internal, 1TB, and 2TB -- as Mac OS Extended (Journaled), and no "case sensitive" BS. I also used Drive Genius 3's "information" function for more detailed info on all three drives. Except for the usual differing mount points, connection methods, and S.M.A.R.T. status (only the Macintosh HD internal, SATA 1TB drive supports S.M.A.R.T.), everything seemed to be oojah-***-spiff, all three drives showing the same Partition Map Types: GPT (GUID Partition Table.) Smooth sailing from here on out, I thought.
    Bzzzzt! Wrong!
    When I opened the Terabyte .dmg and its desktop volume mounted, I tried the old lazy man's "Select All" and drag all items from the desktop-mounted drive "Terabyte" to TB1, I got the error message:
    "The volume has the wrong case sensitivity for a backup."
    I then spent the next three hours on the phone with AppleCare (kids -- when you buy a Mac ANYTHING, cough up the money for AppleCare. Period.), finally reaching a very pleasant senior tech something-or-other in beautiful, rainy Portland, OR. Together we went through everything I had done, tried a few suggestions she offered, and, at the end of three hours, BOTH of us were stumped. At least I didn't feel quite as abysmally stupid as I did at the beginning of the process, but that was all the joy I had gotten after two solid days of gnawing at this problem -- and I mean SOLID; I'm retired, and spend probably 12 hours a day, EVERY day, at the keyboard, working on various projects.
    The AppleCare senior tech lady and I parted with mutual expressions of esteem, and I sat here, slowly grinding my teeth.
    Then I tried something I don't know why I was so obtuse as to not have thought of before: I opened Apple's Disk Utility and checked the desktop-mounted volume Terabyte (Mount Point: /Volumes/Terabyte), the resulting volume from opening and uncompressing the .dmg "Terabyte".
    Disk Utility reported: "Format : Mac OS Extended (Case-sensitive)." Doh!
    Obviously, TB1, the 1 TB USB external drive I'd actually bought as part of a bundle from MacMall when I bought my 27" iMac, and which I had initialized the first day I had the iMac up and running (late November 2009), had somehow gotten initialized as a case-sensitive drive. How, I don't know, but I suspect the jerk behind the keyboard. Whatever the case, when I created the Terabyte disk image (the drive's original name: when I erased and re-initialized it -- see above -- I renamed it "1TB" for quick identification), the original drive's "case-sensitive" format was encoded too. So when I tried to drag and drop EVERYTHING from the desktop-mounted volume "Terabyte" to the newly initialized and "blessed" (now THERE's a term from the past!) TB1, the system recognized it as an attempt at a total volume backup, and hit me with "The volume [the desktop-mounted volume "Terabyte" -- BB] has the wrong case sensitivity for a backup." And, of course, the reinitialized TB1 was now correctly formatted as NOT "case-sensitive."
    Well, that solved the mystery (BTW, Disk Utility identified the unopened Terabyte.dmg as an "Apple UDIF read-only compressed {zlib}, which is why the .dmg file could be copied to ANY volume, case sensitive or not), but it didn't help me with my problem of having to manually move all that data from the desktop-mounted volume "Terabyte" to TB1. I tried to find a way to correct the problem at the .dmg AND opened-volume-from-.dmg level with every disk utility I had, to no avail.
    Sorry for the long exposition, but others may trip over this "case-sensitive" rock in the road, and I wanted to make the case as clear as possible.
    So my problem remains: other than coal shovel by coal shovel, is there any way to get all the data off this case-sensitive desktop-mounted volume "Terabyte" and onto TB1?
    Not that I know whether it would have made any difference or not, but one of the things that got me into this situation was my inability to get Time Machine properly configured so it wasn't making new backups every (no lie) 15 minutes.
    Philosophical bonus question: what's the need for this "case-sensitive," "NOT case-sensitive" option for disk initialization?
    As always, thanks for any help.
    Bart Brown

    "Am I to understand that you have a case-sensitive volume with data that you want to copy to a case-insensitive volume? And the Finder won't let you do it? If that's what the problem is, the reason should be obvious: on the source volume, you may have two files in the same folder whose names differ only in case. When copying that folder to the target volume, it's not clear what the Finder should do."
    Yes, I understand all that... NOW.
    What I had (have) is a USB external 1TB drive (henceforth known as "Terabyte") that I bought with my 27" iMac. I formatted, and put a minimal (to make it bootable) system on Terabyte the same day back in late November 2009 that I set up my 27" iMac. Somehow -- I don't know how -- Terabyte got initialized as "case-sensitive." I didn't even know at the time that there WAS such a thing as "case-sensitive" or "NOT case-sensitive" format.
    Sunday morning (05/08/11), Drive Pulse, a toolbar-resident utility (part of Drive Genius 3) that monitors internal and external drives for physical problems, volume consistency problems, and volume fragmentation, reported a single bad block on the volume Terabyte, advising me that it would be best if I re-formatted Terabyte ASAP. I thought I could open Terabyte in a Finder window, Select All, and drag everything on the drive to ANOTHER USB external drive of 2 TB capacity (henceforth known as TB2). When I tried to do that, I got an error message:
    "The volume has the wrong case sensitivity for a backup."
    First I'd heard of "case sensitivity" -- I'm not too bright, as you seem to have realized.
    Oddly enough (to me), I could move huge chunks of data, including a folder of 40GB, from Terabyte to TB2 with no problem.
    Then the scenario unfolded per my too-convoluted message: several hours of trying things on my own, including making a .dmg of Terabyte (henceforth to be known as Terabyte.dmg) -- which left me with the exact same problem as described in the previous 4 paragraphs; and my 3 hours on the phone with AppleCare, who at least explained this case-sensitivity business, but, after some shot-in-the-dark brainstorming -- tough to do with only one brain, and THAT on the OTHER end of the line -- the very pleasant AppleCare rep and I ended up equally perplexed and clueless as to how to get around the fact that a .dmg of a case-sensitive volume, while not case-sensitive in its "image" form (Terabyte.dmg), and thus able to be transferred to TB1 or TB2 with no problems whatsoever, when opened -- either by double-clicking or opening in Disk Utility -- produced a desktop-mounted volume (henceforth known as the volume "Terabyte," the original name of the case-sensitive volume from which Terabyte.dmg had been made) that had the same case-sensitivity as the original from which it was made.
    In the meantime, having gotten the data I needed to save off the physical USB "case-sensitive" volume Terabyte in the form of Terabyte.dmg, I erased and re-initialized the physical USB "case-sensitive" volume Terabyte, getting rid of the case sensitivity and renaming it TB1. But it all left me back at square one, EXCEPT I had saved my data from the original "Terabyte" drive and reformatted that drive to a NON-case-sensitive drive now named TB1. The confusion here stems from the fact that the problem case-sensitive drive, from which I made Terabyte.dmg, was originally named "Terabyte". When I re-initialized it as a NON-case-sensitive drive, I renamed it TB1. I'm sorry about the confusing nomenclature, which I've tried to improve upon from my original message -- the usual text-communication problem: the writer knows what he has in mind, but the reader can only go by what's written.
    So, anyway, I still have the same problem: the desktop-mounted volume "Terabyte" still cannot be transferred in one whole chunk to my internal drive, TB1, or TB2, as the Finder interprets it as a volume backup (which it is) and reads the desktop-mounted volume "Terabyte" as case-sensitive, as the original volume -- from which the disk image Terabyte.dmg was made -- had been at the time I made it.
    "As long as that situation doesn't arise, you should be able to make the copy with a tool that's less fastidious than the Finder, such as cp or rsync."
    I'm afraid I have no idea what "cp or rsync" are. I'd be happy to be educated. That's why I came here.
    Bart Brown
    Message was edited by: Bartbrn
    Just trying to unmuddy the water a bit...

  • Error during DG data distribution

    Hi SAP DG Experts,
    I am configuring and testing DG data distribution from central EHS system to multiple ERP systems.
    I am able to send the DG idoc and ERP system is receiving that.
    But the DG data creation is failing, and when I check DGP7 I see the error message "Data records were not saved as no selection date was available".
    Please let me know if you have come across this issue?
    Thanks
    PS

    Dear Pugazendhi,
    To distribute the DG master data successfully via ALE you need to:
    a.) distribute the phrases
    b.) do a proper set-up of the receiver system
    Check e.g.
    Distribution (ALE) of Exceptions to Dangerous Goods Regulations - Dangerous Goods Management (EHS-DGP) - SAP Library
    This topic is asked about very rarely.
    C.B.

  • WBS data distribution for IDOC-PI-JDBC with system status 'CRTD' not working.

    Hello Experts,
    I am busy with a project where PS project creation and updates should be transferred to an external legacy system through an IDOC-PI-JDBC scenario.
    I am using BAPI_PROJECT_MAINTAIN for this process. The scenario works fine for projects with the system status Released (REL).
    If the system status is Created (CRTD), then no IDoc is created, and therefore there is no data distribution of the work breakdown structure (WBS) to the legacy system.
    Can anybody give me a suggestion to make this happen?
    Thanks in advance.
    Regards,
    Antony.

    Hello all,
    I did it with the help of my ABAP developer.
    We implemented a BAdI, and from the BAdI we created the IDoc.
    When the IDoc is there, the legacy system receives the data.
    Regards,
    Antony.

  • Data distribution scheme and database fragmentation

    Hi all,
    I'm working on a (university) scenario involving the fragmentation of a central database. A company has regional offices (England, Wales, Scotland) and each regional office has differing combinations of business areas. They currently have one central database in their head office, and my task is to "design a data distribution scheme". By scheme, does this mean something like horizontal/vertical fragmentation? Also, can somebody point me to an Oracle-specific example of creating a fragmented table? I've tried to search online and have found the "partition by" keyword, but not much else except for database links - but I'm thinking those are more concerned with querying than actually creating the fragments.
    Many thanks for your time

    >
    Partitioning is what the tutor meant by "fragmentation". So if there is a current central database and I have created new databases for each regional office I could run something like the below statement on the regional databases to create a bespoke version of the employee table filtered by data relevant to them? This is all theoretical and we don't have to develop the database, I just want to get the syntax correct - Thanks!
    >
    There you go talking about 'new databases' again. You said your original task was this
    >
    my task is to "design a data distribution scheme".
    >
    Is the task to give the regions access to their own data in the ONE central DB? Or to actually create a new DB for each region that contains ONLY that regions data?
    So are we talking ACCESS to a central DB by region? Or are we talking replication of the entire central DB to multiple regions?
    Your example table is partitioned by region. But if each region has their own DB why would you put data for other regions in it?
    If you are wanting each region to have access to their own data in the central DB then you could partition the central DB tables like your example:
    CREATE TABLE employees (
      id         NUMBER NOT NULL,
      fname      VARCHAR2(30),
      lname      VARCHAR2(30),
      hired      DATE DEFAULT DATE '1970-01-01' NOT NULL,
      separated  DATE DEFAULT DATE '9999-12-31' NOT NULL,
      job_code   NUMBER,
      store_id   NUMBER,
      region_id  NUMBER NOT NULL
    )
    PARTITION BY LIST (region_id) (
      PARTITION wales VALUES (2)
    );
    But if you are creating a regional DB that includes data only for that region, there is no need to partition it.

  • Full Master Data Distribution in ALE - Vendor Master CREMAS (Full Distribution)

    Hi Experts,
    I have to do a full master data distribution in ALE - vendor master CREMAS (full distribution) using BD21.
    I have already read Michal's blog and incorporated the code in CHANGE_POINTERS_READ.
    It is still not working. Should I include the code at the end of the function module? If yes, how do I code at the very end, as I can only include my code at the SAP-suggested ENHANCEMENT and END-ENHANCEMENT points?
    Is there any way I can create an implicit enhancement at the end of the function module?
    I would appreciate any help to make the full distribution work.
    Thanks,
    Mich

    Hi,
    Here is the link:
    /people/michal.krawczyk2/blog/2009/06/04/distribution-of-full-master-data-objects-from-change-pointers
    My requirement is to send the whole vendor master in one IDoc even if, for example, there is only one field change in the address data of the vendor master - i.e. a full master data distribution. But program BD21 creates IDocs containing only the segments where changes were made, such as address.
    Is there any enhancement or other way to force the whole master data (CREMAS) to be distributed?
    I would really appreciate any helpful answer.
    Thanks,
    Mich

  • 'Master data type User table cannot add row'-DTW error

    Hi All,
    I am creating a template for a user-defined master data table from DTW. When I try to import data using that template through DTW, it gives an error like "Master data type User table cannot add row".
    Does anyone have a solution for this?
    Regards,
    Hari

    Hari,
    Please see SAP Note 1234690 on the SAP PartnerEdge Portal. This seems like a similar problem, although the example uses the Business One SDK, which uses the DI API. The DTW also uses the DI API ... so there may be a relation.
    You may want to check the latest patch level for SAP Business One 2007A as the note says it is a known issue.
    Eddy

  • MAXDB : Data Distribution

    Hi,
    What is the counterpart of histograms (in Oracle) in MaxDB? I.e., how can we get the data distribution in MaxDB?

    > I would like to know whether the kernel trace provides information about this optimization and sampling. If not, how can this optimizer trace be accessed or found?
    Yes, activating the kernel trace for the optimizer would create further informational output.
    However, this output is intended for the developers of the optimizer and is difficult to understand.
    It's nothing like the CBO trace in Oracle.
    Really - if you want to understand better what the optimizer "thought" about your data, then use EXPLAIN JOIN and EXPLAIN SEQUENCE. These commands produce output that delivers some useful information.
    Apart from that, it should be clear that the inner workings of the optimizer are not documented and change all the time to improve performance and stability. In 7.6 alone there have been many changes and enhancements.
    As the general design goal of MaxDB is ease of use, it shouldn't be necessary to know exactly what the optimizer does.
    If you find your statement is badly optimized - post it here and we have a look.
    Maybe it's a bug that should be fixed.
    regards,
    Lars
