Data duplication

We are using SRM 5.0 with CCM 2.0 as the catalog tool. We recently published some changes to a supplier catalog item. The items in the master catalog and supplier catalog look fine through Admin access; however, when we check the catalog items as an end user via search, every item for this particular supplier is duplicated, i.e. each item appears twice with the same values.
Can anyone please offer some guidance on this?
Regards
TGB

Dear Poster
Your thread has had no response since its creation over
1 year ago; therefore, I am closing the thread.
Should the issue still be outstanding, please create a new thread in the relevant forum.
Thank you for your compliance in this regard.
Jason Boggans
SAP SRM SDN Moderator

Similar Messages

  • Difference between "Data replication", "Data Cloning", "Data duplication" and "Data Migration"

    Hi Gurus,
    Can anyone tell me the difference between "Data replication", "Data Cloning", "Data duplication" and "Data Migration"? I have searched Google but could not find an answer that clearly explains the difference.
    It would be highly appreciated if you could give me a link for each and point out the distinctions.
    Thanks & Regards
    Nimai Karmakar

    Here is how I see the terms used and understood by most folks
    "Data replication"
    This is keeping the same data in sync in two different databases. Replication is the process of keeping data in sync between two environments/databases and is not limited to Oracle to Oracle; it could be Oracle to MySQL, MySQL to Oracle, Oracle to SQL Server, or SQL Server to Oracle. The purpose of keeping data in sync can vary, but that is the basic description of what it is.
    "Data Cloning"
    Make a copy of the data/database from one database environment to another, for example taking a production database and cloning it to a pre-production database/environment for testing. The cloning term is used more when copying a whole database rather than just the data, so you typically hear the reference "Database Cloning", but data cloning is basically making a copy of the data from one place to another. Sometimes Cloning and Duplicating are used interchangeably, since the basic end result is the same: you are making a copy of the data to someplace else.
    "Data duplication"
    This is typically the term used when duplicating the data from one environment to another for a particular purpose. A lot of folks use "Data/Database Cloning" and "Data Duplication" in the same context, meaning they use them to mean the same thing, but I see a difference: Data Duplication is done when combining two databases into one, where you cannot simply clone the two databases into one, so this term is used more in those circumstances.
    "Data Migration"
    Moving data from one location/database to another, for example moving from MySQL to Oracle: you would do a data migration from MySQL to Oracle, as you are moving the data to Oracle and going to leave it there. You could also be migrating from an Oracle 10g environment to a new Oracle 11g environment. "Migration" is typically used when referring to moving data from one environment to another where the source and target differ in location, database vendor, or database version.
    These are very simple ways of looking at the terms from my experience on how they are used.

  • Inventory Aging Data, Duplication in SAP B1 8.81

    Hello Everyone,
    I have been working on an Inventory Aging report and have run into a problem: the data gets duplicated when I execute the report. I have attached the screenshot below for your reference.
    I have taken the following tables OPDN, PDN1, OITM and OITW for my report, and below is the query.
    SELECT T0.[DocNum], T0.[DocDate], T1.[ItemCode], T1.[Dscription], T1.[WhsCode], T3.[OnHand]
    FROM OPDN T0
    INNER JOIN PDN1 T1 ON T0.DocEntry = T1.DocEntry
    INNER JOIN OITM T2 ON T1.ItemCode = T2.ItemCode
    INNER JOIN OITW T3 ON T2.ItemCode = T3.ItemCode
    WHERE T3.[OnHand] > 0
    Please help me.. Thank You.

    Hi Nagarajan,
    Thank You for your reply..
    But I am still facing the duplication; before, each row was repeated three times, and now it appears twice.
    Attached the new screenshot for your reference..
    Kindly compare the total stock values in the report and in SAP. Appreciate your help.
    Thank you.
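    A hedged note on a likely cause: OITW holds one row per item per warehouse, so joining it on ItemCode alone multiplies each GRPO line by the number of warehouse rows for that item. A sketch of the query with the warehouse matched as well (assuming the intent is to show the on-hand quantity in the receiving warehouse only):
    SELECT T0.[DocNum], T0.[DocDate], T1.[ItemCode], T1.[Dscription], T1.[WhsCode], T3.[OnHand]
    FROM OPDN T0
    INNER JOIN PDN1 T1 ON T0.DocEntry = T1.DocEntry
    INNER JOIN OITM T2 ON T1.ItemCode = T2.ItemCode
    INNER JOIN OITW T3 ON T2.ItemCode = T3.ItemCode
                      AND T3.WhsCode = T1.WhsCode   -- match the warehouse, not just the item
    WHERE T3.[OnHand] > 0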

  • Data duplication issue. Please advise.

    Hi,
    I created a MultiProvider on notification items and causes. I need the cause code from causes and the damage code from items. When I report on this MultiProvider, the data is getting duplicated.
    I tried excluding the '#' values; that works, but it drops some records, such as notifications without a cause code or damage code. I want all the notifications to be displayed in my report, without duplication.
    I tried the InfoSet option, but there is the same issue: not all the notifications in items are present in causes.
    I think the MultiProvider could solve the issue if I kept the cause code or damage code in free characteristics, but the users want both to be displayed in the report.
    I appreciate your response.
    Thanks,
    Naveen

    I would combine this data first in an ODS. That will combine the data for you, and you can report on a single line, provided you define the ODS key properly.
    Brian
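    A hedged sketch of what the ODS approach achieves, expressed in SQL terms (the table and column names here are hypothetical, and it assumes one item row and one cause row per notification): a full outer join keyed on the notification number keeps each notification exactly once, carrying the damage code and cause code side by side, even when one side has no matching record.
    SELECT COALESCE(i.notification, c.notification) AS notification,
           i.damage_code,
           c.cause_code
    FROM   notification_items i
    FULL OUTER JOIN notification_causes c
           ON c.notification = i.notification;
    -- one row per notification; a missing cause or damage code appears as NULL
    -- instead of the notification being dropped or duplicated
    An ODS/DSO keyed on the notification number plays the same role: both sources load into a single record per notification.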

  • Data Duplication, Modeling Issue

    I have a following arrangement:
    DimX M:1 Dim1 1:M FactM1
    DimX M:1 Dim1 1:M FactM2
    What business user wants is
    1. Fact1 and Fact2 to work along with DimX and Dim1
    2. When Fact1 Dim1 and DimX taken together the numbers for Fact1 should not repeat even when DimX repeats.
    To illustrate this...Let us assume the data is as follows...
    fact1 -> mea$ 10
    dim1 -> att1'xyz'
    dimX -> att10 'aa', 'bb'
    dimX -> att11 'xx', 'zz',
    then when the user pulls the columns
    mea$, max(mea$), sum(mea$), dim1 -> att1, dimX -> att10
    10, 10, 20, 'xyz', 'aa'
    10, 10, 20, 'xyz', 'bb'
    Note: Sum 20 is due to dimx-att10 is aggregating for dimX-> att11
    Also when user puts sum on the table, he gets values as
    mea$, max(mea$), sum(mea$), dim1 -> att1, dimX -> att10
    10, 10, 20, 'xyz', 'aa'
    10, 10, 20, 'xyz', 'bb'
    20, 10, 40, , , <--grand total line
    Again this answer is wrong, since:
    1. I have to put an "Aggregation Rule" on mea$ (a non-aggregated column) to get the sum, which sums up blindly.
    2. max(mea$) gives the max of the value, which will be wrong once I get one more value of dim1 -> att1.
    Please suggest how to resolve this issue.

    Gerardnico,
    Thanks for replying...
    To summarize, I see three methods: 1. Bridge Table with Weighting Factor, 2. Bridge Table without Weighting Factor & 3. Boolean/Multiple Column.
    1. I can't include a WF table, as we are building on top of an OLTP system and don't have much flexibility.
    2. This will yield multiple rows, which is what is happening currently, and of course they sum up incorrectly. I advised the business not to sum it this way; but I am curious to see if this can be solved, hence posting it here...
    3. Going back to 1, we don't have the flexibility to create columns :(
    did I miss any other method?
    Please let me know.
    Thanks in advance.
    PS: I already have referred to these posts
    http://gerardnico.com/wiki/dat/obiee/join_in_lts
    http://gerardnico.com/wiki/dat/obiee/obiee_bridge_table
    Thanks for posting them :)
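    A hedged illustration, in SQL terms with hypothetical table and column names, of why the numbers inflate and the usual way around it: joining the fact through dim1 to the repeating dimX rows duplicates the measure row, so any sum taken over the joined result double-counts; aggregating the fact at its own grain first keeps the total correct.
    -- fact1(dim1_key, mea), dim1(dim1_key, att1), dimx(dim1_key, att10) are illustrative tables
    -- naive join: the single mea = 10 row repeats for 'aa' and 'bb', so SUM returns 20
    SELECT SUM(f.mea)
    FROM   fact1 f
    JOIN   dim1  d1 ON d1.dim1_key = f.dim1_key
    JOIN   dimx  dx ON dx.dim1_key = d1.dim1_key;
    -- aggregate the fact at its own grain before joining, and take totals from the inner query only
    SELECT d1.att1, dx.att10, f.total_mea
    FROM  (SELECT dim1_key, SUM(mea) AS total_mea
           FROM fact1
           GROUP BY dim1_key) f
    JOIN   dim1 d1 ON d1.dim1_key = f.dim1_key
    JOIN   dimx dx ON dx.dim1_key = d1.dim1_key;
    In OBIEE terms this roughly corresponds to pinning the measure's aggregation to the fact's own level (a level-based measure) rather than letting the grand total re-sum the duplicated rows; it is a sketch of the principle, not a definitive repository design.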

  • Dynamic Form Data Duplication

    I'm working on my first dynamic form, and I'm having an issue where my dynamic subforms share the same data, so editing one subform updates all other subforms of the same type. The form is a tshirt order sheet, so, for example, there'll be a line for a specific brand of shirt, plus sizes. When another line is added, it'll duplicate all the info from the first line, and any changes made to the second line will be immediately mirrored in the first line. I have the fields bound to an XML schema I wrote, so I'm not sure if that may be causing the problem.

    Hi,
    You have not included the XML schema. However, the issue is that the objects in the repeating row are bound to the schema; if the schema is absent, or if it only contains one node, then all repeated instances of the row will display the same data.
    The question is where the data is coming from: is it from a database, or is the user entering it? If the user is entering it, then I would not bind those fields to a schema. Set the binding to Name/Normal.
    Niall

  • Launch Quicktime Player/ No data duplication

    Hello,
    I'm trying to use the blog template in iWeb to create a video organizing/browsing page. The basic idea is to have a picture from the video with a short descriptive text (easy), but then make it so when you click on the picture the video launches in Quicktime Player.
    All this content will be stored on a local folder using the "publish to file" option.
    Also, I don't want to create multiple copies of the video files-- in other words, I'd like iWeb to point to the files in a common directory w/out making a copy in the site folder.
    Is this possible?
    Thanks,
    Andy

    Try holding the alt key to download them, and then open them in QuickTime Player. If you want them to open automatically thereafter, you can probably use Automator.

  • OC4J: marshalling does not recreate the same data structure onthe client

    Hi guys,
    I am trying to use OC4J as an EJB container and have come across the following problem, which looks like a bug.
    I have a value object method that returns an instance of ArrayList with references to other value objects of the same class. The value objects have references to other value objects. When this structure is marshalled across the network, we expect it to be recreated as is but that does not happen and instead objects get duplicated.
    Suppose we have two value objects: ValueObject1 and ValueObject2. ValueObject1 references ValueObject2 via its private field, and ValueObject2 references ValueObject1. Both value objects are returned by our method in an ArrayList. Here is how it looks (the number after @ represents an address in memory):
    Object[0] = com.cramer.test.SomeVO@1
    Object[0].getValueObject[0] = com.cramer.test.SomeVO@2
    Object[1] = com.cramer.test.SomeVO@2
    Object[1].getValueObject[0] = com.cramer.test.SomeVO@1
    We would expect to see the same (except exact addresses) after marshalling. Here is what we get instead:
    Object[0] = com.cramer.test.SomeVO@1
    Object[0].getValueObject[0] = com.cramer.test.SomeVO@2
    Object[1] = com.cramer.test.SomeVO@3
    Object[1].getValueObject[0] = com.cramer.test.SomeVO@4
    It can be seen that objects get unnecessarily duplicated – the instance of the ValueObject1 referenced by the ValueObject2 is not the same now as the instance that is referenced by the ArrayList instance.
    This not only breaks the referential integrity, structure and consistency of the data, but also dramatically increases the amount of information sent across the network. The problem was discovered when we found that a relatively small but complicated structure that serialises into a 142kb file requires about 20Mb of network communication. All of this extra traffic is duplicated object instances.
    I have created a small test case to demonstrate the problem and let you reproduce it.
    Here is RMITestBean.java:
    package com.cramer.test;
    import javax.ejb.EJBObject;
    import java.util.*;
    public interface RMITestBean extends EJBObject {
        public ArrayList getSomeData(int testSize) throws java.rmi.RemoteException;
        public byte[] getSomeDataInBytes(int testSize) throws java.rmi.RemoteException;
    }
    Here is RMITestBeanBean.java:
    package com.cramer.test;
    import javax.ejb.SessionBean;
    import javax.ejb.SessionContext;
    import java.util.*;
    public class RMITestBeanBean implements SessionBean {
        private SessionContext context;
        SomeVO someVO;

        public void ejbCreate() {
            someVO = new SomeVO(0);
        }
        public void ejbActivate() {}
        public void ejbPassivate() {}
        public void ejbRemove() {}
        public void setSessionContext(SessionContext ctx) {
            this.context = ctx;
        }

        public byte[] getSomeDataInBytes(int testSize) {
            ArrayList someData = getSomeData(testSize);
            try {
                java.io.ByteArrayOutputStream byteOutputStream = new java.io.ByteArrayOutputStream();
                java.io.ObjectOutputStream objectOutputStream = new java.io.ObjectOutputStream(byteOutputStream);
                objectOutputStream.writeObject(someData);
                objectOutputStream.flush();
                System.out.println(" serialised output size: "+byteOutputStream.size());
                byte[] bytes = byteOutputStream.toByteArray();
                objectOutputStream.close();
                byteOutputStream.close();
                return bytes;
            } catch (Exception e) {
                System.out.println("Serialisation failed: "+e.getMessage());
                return null;
            }
        }

        public ArrayList getSomeData(int testSize) {
            // Create array of objects
            ArrayList someData = new ArrayList();
            for (int i=0; i<testSize; i++) {
                someData.add(new SomeVO(i));
            }
            // Interlink all the objects
            for (int i=0; i<someData.size()-1; i++) {
                for (int j=i+1; j<someData.size(); j++) {
                    ((SomeVO)someData.get(i)).addValueObject((SomeVO)someData.get(j));
                    ((SomeVO)someData.get(j)).addValueObject((SomeVO)someData.get(i));
                }
            }
            // print out the data structure
            System.out.println("Data:");
            for (int i = 0; i<someData.size(); i++) {
                SomeVO tmp = (SomeVO)someData.get(i);
                System.out.println("Object["+Integer.toString(i)+"] = "+tmp);
                System.out.println("Object["+Integer.toString(i)+"]'s some number = "+tmp.getSomeNumber());
                for (int j = 0; j<tmp.getValueObjectCount(); j++) {
                    SomeVO tmp2 = tmp.getValueObject(j);
                    System.out.println(" getValueObject["+Integer.toString(j)+"] = "+tmp2);
                    System.out.println(" getValueObject["+Integer.toString(j)+"]'s some number = "+tmp2.getSomeNumber());
                }
            }
            // Check the serialised size of the structure
            try {
                java.io.ByteArrayOutputStream byteOutputStream = new java.io.ByteArrayOutputStream();
                java.io.ObjectOutputStream objectOutputStream = new java.io.ObjectOutputStream(byteOutputStream);
                objectOutputStream.writeObject(someData);
                objectOutputStream.flush();
                System.out.println("Serialised output size: "+byteOutputStream.size());
                objectOutputStream.close();
                byteOutputStream.close();
            } catch (Exception e) {
                System.out.println("Serialisation failed: "+e.getMessage());
            }
            return someData;
        }
    }
    Here is RMITestBeanHome:
    package com.cramer.test;
    import javax.ejb.EJBHome;
    import java.rmi.RemoteException;
    import javax.ejb.CreateException;
    public interface RMITestBeanHome extends EJBHome {
        RMITestBean create() throws RemoteException, CreateException;
    }
    Here is ejb-jar.xml:
    <?xml version = '1.0' encoding = 'windows-1252'?>
    <!DOCTYPE ejb-jar PUBLIC "-//Sun Microsystems, Inc.//DTD Enterprise JavaBeans 2.0//EN" "http://java.sun.com/dtd/ejb-jar_2_0.dtd">
    <ejb-jar>
    <enterprise-beans>
    <session>
    <description>Session Bean ( Stateful )</description>
    <display-name>RMITestBean</display-name>
    <ejb-name>RMITestBean</ejb-name>
    <home>com.cramer.test.RMITestBeanHome</home>
    <remote>com.cramer.test.RMITestBean</remote>
    <ejb-class>com.cramer.test.RMITestBeanBean</ejb-class>
    <session-type>Stateful</session-type>
    <transaction-type>Container</transaction-type>
    </session>
    </enterprise-beans>
    </ejb-jar>
    And finally the application that tests the bean:
    package com.cramer.test;
    import java.util.*;
    import javax.rmi.*;
    import javax.naming.*;
    public class RMITestApplication {
        final static boolean HARDCODE_SERIALISATION = false;
        final static int TEST_SIZE = 2;

        public static void main(String[] args) {
            Hashtable props = new Hashtable();
            props.put(Context.INITIAL_CONTEXT_FACTORY, "com.evermind.server.rmi.RMIInitialContextFactory");
            props.put(Context.PROVIDER_URL, "ormi://lil8m:23792/alexei");
            props.put(Context.SECURITY_PRINCIPAL, "admin");
            props.put(Context.SECURITY_CREDENTIALS, "admin");
            try {
                // Get the JNDI initial context
                InitialContext ctx = new InitialContext(props);
                NamingEnumeration list = ctx.list("comp/env/ejb");
                // Get a reference to the Home Object which we use to create the EJB Object
                Object objJNDI = ctx.lookup("comp/env/ejb/RMITestBean");
                // Now cast it to an RMITestBeanHome object
                RMITestBeanHome testBeanHome = (RMITestBeanHome)PortableRemoteObject.narrow(objJNDI,RMITestBeanHome.class);
                // Create the remote interface
                RMITestBean testBean = testBeanHome.create();
                ArrayList someData = null;
                if (!HARDCODE_SERIALISATION) {
                    // ############################### Alternative 1 ##############################
                    // ## This relies on marshalling serialisation ##
                    someData = testBean.getSomeData(TEST_SIZE);
                    // ############################ End of Alternative 1 ##########################
                } else {
                    // ############################### Alternative 2 ##############################
                    // ## This gets a serialised byte stream and de-serialises it ##
                    byte[] bytes = testBean.getSomeDataInBytes(TEST_SIZE);
                    try {
                        java.io.ByteArrayInputStream byteInputStream = new java.io.ByteArrayInputStream(bytes);
                        java.io.ObjectInputStream objectInputStream = new java.io.ObjectInputStream(byteInputStream);
                        someData = (ArrayList)objectInputStream.readObject();
                        objectInputStream.close();
                        byteInputStream.close();
                    } catch (Exception e) {
                        System.out.println("Serialisation failed: "+e.getMessage());
                    }
                    // ############################ End of Alternative 2 ##########################
                }
                // Print out the data structure
                System.out.println("Data:");
                for (int i = 0; i<someData.size(); i++) {
                    SomeVO tmp = (SomeVO)someData.get(i);
                    System.out.println("Object["+Integer.toString(i)+"] = "+tmp);
                    System.out.println("Object["+Integer.toString(i)+"]'s some number = "+tmp.getSomeNumber());
                    for (int j = 0; j<tmp.getValueObjectCount(); j++) {
                        SomeVO tmp2 = tmp.getValueObject(j);
                        System.out.println(" getValueObject["+Integer.toString(j)+"] = "+tmp2);
                        System.out.println(" getValueObject["+Integer.toString(j)+"]'s some number = "+tmp2.getSomeNumber());
                    }
                }
                // Print out the size of the serialised structure
                try {
                    java.io.ByteArrayOutputStream byteOutputStream = new java.io.ByteArrayOutputStream();
                    java.io.ObjectOutputStream objectOutputStream = new java.io.ObjectOutputStream(byteOutputStream);
                    objectOutputStream.writeObject(someData);
                    objectOutputStream.flush();
                    System.out.println("Serialised output size: "+byteOutputStream.size());
                    objectOutputStream.close();
                    byteOutputStream.close();
                } catch (Exception e) {
                    System.out.println("Serialisation failed: "+e.getMessage());
                }
            } catch(Exception ex){
                ex.printStackTrace(System.out);
            }
        }
    }
    The parameters you might be interested in playing with are HARDCODE_SERIALISATION and TEST_SIZE defined at the beginning of RMITestApplication.java. The HARDCODE_SERIALISATION is a flag that specifies whether Java serialisation should be used to pass the data across or we should rely on OC4J marshalling. TEST_SIZE defines the size of the object graph and the ArrayList structure. The bigger this size is the more dramatic effect you get from data duplication.
    The test case outputs the structure both on the server and on the client and prints out the size of the serialised structure. That gives us sufficient comparison, as both structure and its size should be the same on the client and on the server.
    The test case also demonstrates that the problem is specific to OC4J. The standard Java serialisation does not suffer the same flaw. However using the standard serialisation the way I did in the test case code is generally unacceptable as it breaks the transparency benefit and complicates interfaces.
    To run the test case:
    1) Modify provider URL parameter value on line 15 of the RMITestApplication.java for your environment.
    2) Deploy the bean to the server.
    4) Run RMITestApplication on a client PC.
    5) Compare the outputs on the server and on the client.
    I hope someone can reproduce the problem and give their opinion, and possibly point to the solution if there is one at the moment.
    Cheers,
    Alexei

    Hi,
    Eugene, that is the wrong end-user recovery. Alexey is referring to client desktop end-user recovery, which is entirely different.
    Alexey - As noted in the previous post:
    http://social.technet.microsoft.com/Forums/en-US/bc67c597-4379-4a8d-a5e0-cd4b26c85d91/dpm-2012-still-requires-put-end-users-into-local-admin-groups-for-the-purpose-of-end-user-data?forum=dataprotectionmanager
    Each recovery point has user permissions tied to it, so it is not possible to retroactively give the users permissions. Implement the below, and going forward all users can restore their own files.
    This is a hands off solution to allow all users that use a machine to be able to restore their own files.
     1) Make these two cmd files and save them in c:\temp
     2) Using windows scheduler – schedule addperms.cmd to run daily – any new users that log onto the machine will automatically be able to restore their own files.
    <addperms.cmd>
     Cmd.exe /v /c c:\temp\addreg.cmd
    <addreg.cmd>
     set users=
     echo Windows Registry Editor Version 5.00>c:\temp\perms.reg
     echo [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Agent\ClientProtection]>>c:\temp\perms.reg
     FOR /F "Tokens=*" %%n IN ('dir c:\users\*. /b') do set users=!users!%Userdomain%\\%%n,
     echo "ClientOwners"=^"%users%%Userdomain%\\bogususer^">>c:\temp\perms.reg
     REG IMPORT c:\temp\perms.reg
     Del c:\temp\perms.reg
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. Regards, Mike J. [MSFT]
    This posting is provided "AS IS" with no warranties, and confers no rights.

  • Working with multiple users and computers, but shared data

    Sorry if this is posted in a poor place, I'm not sure where the best place is. This is sort of a general questions.
    For a long time, my wife and I have had either one computer, or two machines where one was definitely just a terminal. We've basically set up all of our data to live on one primary machine, and if we want to view or edit that data we have to use that machine.
    We just got a new MacBook Pro and I would like to be able to use two machines as equals. Sadly, this idea of multiple computers, with two users and some shared data is really giving me difficulty. I was wondering if anyone has any suggestions on how to best manage things like:
    Synchronizing portions of our contact list (We share about 50% of the combined library -- we don't have to share all though).
    How to manage iPhoto so that we can each have access to the photos. As an added difficulty (or maybe this is easier?) my Wife just wants to have access to the pictures for viewing and sharing on Facebook/Picassa/etc. I am the only one who wants to edit, correct and cull our library. That said, I always edit when I first put the data on the machine, and almost never again; so it would be fine to have one (or both accounts) set up as view only for the iPhoto data.
    How to manage iTunes so that we can each have access to the music. As a super awesome bonus, it would be great if we could have three libraries: His, Hers and Shared. Maybe as much as 30% of our music library is similar, the rest just gets in the way.
    What is the best solution people have found for calendars? (I'm thinking two separate calendars, and we each subscribe to each others iCal feed)
    Mail.app and bookmark synching is not really a problem for us.
    A few extra points:
    * One machine is portable, and the other isn't. Ideally, when the laptop is out of the house, both machines should still have reasonable access to the shared data. That is: Just dumping things in the shared folder won't work because when the laptop is out of the house we will be disconnected from the source data.
    * We just got a second iPhone. This means that both of us will be taking photos/video separately and trying to synch back to the master data store.
    * Basically, I'm trying to minimize data duplication as much as possible, and just synchronize the systems to each other.
    Thanks a ton in advance. If anyone has any suggestions at all, I would love to hear them. Including "This is in the wrong forum, go ask here instead..."

    So you have a desktop Mac and a laptop Mac, right? Two user accounts (and a third admin account) on each computer, right?
    I profess that I haven't tried this, but here is how I would approach your problem:
    Sharing Music and Photos between multiple user accounts on the same computer: 
    See if http://forums.macrumors.com/showthread.php?t=194992 and http://forums.macrumors.com/showthread.php?t=510993 provide any useful information to assist you in this endeavor.
    Sharing across multiple computers:
    Turn on file sharing on the desktop Mac (System Preferences > Sharing). Now you can mount the desktop Mac as an external drive on the laptop's Desktop. Copy the music and photo folders across; it will take a while the first time. Then, for future use, get a copy of the donationware CarbonCopyCloner or equivalent. You can use CCC to selectively sync specific folders from one computer to the other. There may be a hassle with digital copyright issues on music and movies, though.
    Calendars:
    As you have suggested yourself, publishing yours and subscribing to hers is probably the best way to do it, on the same computer. Across computers, syncing with CCC or equivalent would probably be the way to go.

  • Error in Process Chain while loading Master data text?

    Hello All
    I have a process chain with 5 InfoPackages (master data attributes & texts), all FULL loads, with processing options set to PSA and InfoObject. One of the InfoPackages failed and its status is red, and when I right-click there is no repeat option. How can I run this process now?
    I have heard that the process can be restarted in the debug screen; can anyone let me know how to do this?
    Many Thanks
    balaji

    Hi dear,
    You don't find the repeat option because this is a full load... I think you can simply restart the entire process chain (or load the failed InfoPackage directly from RSA1), because full loads for master data don't cause any data duplication!
    Hope it helps!
    Bye,
    Roberto

  • STANDARD DATA SOURCE 2LIS_11_VAHDR NOT GETING DATA

    Respected gurus,
    I am a new BI user. I am trying to extract data from 2LIS_11_VAHDR, which is a standard DataSource, through RSA3, but I am getting the message "0 records found" even though there is plenty of data. I tried other standard data sources and get the same "zero records found" message. Please guide me on how to get the data.
    Regards,
    Abhay Mahodaya

    Hi Abhay,
    The data source which you are trying to use comes under logistics. To bring logistics data from R/3 to BW, the setup table needs to be filled. To fill the setup table, follow the procedure below.
    First, delete the setup table to avoid data duplication:
    Go to T.Code: SBIW -> Settings for Application Specific data sources (PI) -> Logistics -> Managing Extract structures -> Initialization -> Delete the contents of the setup table
    Fill the set up table:
    Go to T.Code: SBIW -> Settings for Application Specific data sources (PI) -> Logistics -> Managing Extract structures -> Initialization -> Filling in the setup table -> Application specific set up of statistical data
    Choose the relevant node to perform the set up based on your requirement.
    Remember that while you are filling the setup table, no entries should get posted in the relevant tables. So when you do this in the production system, you may have to request user locking for the relevant T.Codes.
    All the best.
    Regards,
    Sarath.

  • Active data retrieval

    I've created a report using an external data source, and set an active data retrieval interval.
    However, when I update a field using SQL*Plus, the report does not update.
    With an internal data source, it works fine.
    Can you help me with this?

    Theoretically, yes.
    Instead of using an External Data Source as the basis for the report, you could create a data object that is a replica of the external source, and then have changes to the external data source replicated into the data object through an EL plan. I recommend this, since Enterprise Link resembles the ETL portion of a data warehouse, which should be in charge of handling data cleansing and uploading. IMHO, this may result in data duplication, but you're better off than having your client hit that reprompt button every time she wants to view the report.

  • Logial Partition modeling - How to extract data

    Hello Gurus,
    I know this topic has been discussed many times in the forum. Everybody talks of creating multiple cubes with a MultiProvider on top of them.
    My question is: suppose I have 3 Purchasing header InfoCubes, one each for 2009, 2010, and 2011 & future. How do I load data into these three InfoCubes? I cannot do a full load as it is LIS. How is the delta going to work? If I initialize only for 2011 & future, it will bring only POs created in 2011 and later (this is correct). But what about changes to 2010 documents, rare but possible? I am confused about how I can distribute delta records to the respective years' InfoCubes.
    Please advise.
    Thanks in advance

    Hi Seema,
    As discussed, we also have the same logical partitioning done in our project, and in this case we have all reports built on top of a MultiProvider. In our case, however, we have a data flow built from the source system for only one cube; this is the very first basic cube, which existed before the logical partitioning.
    After logical partitioning, all the data was moved from this cube to the previous-year cube, and this cube acts as the current-year cube. Now, coming to the delta part, what we do is: if some delta comes in for previous years, we have delta DTPs created between the source cube and each of the partitioned cubes. We simply execute them to transfer the delta records to the required cube and then do a selective deletion from the base cube to avoid data duplication.
    If possible you can think of the same approach.
    Regards,
    Durgesh.

  • Data comparison and deletion of duplicates

    I have moved all of my data over to the mac from my PC. Everything has been fairly smooth, but I have a question for some of you regarding data duplication which I have not found a work around yet. Currently, I own a mac mini running 10.4.3 and I also picked up a copy of Filemaker Pro 7 along with MS Office 2004. I needed a database program for the Mac to keep my email lists up to date, but I haven't had time to tinker with Filemaker 7. So my problem is as follows:
    There are two .txt files: one with current email addresses and a second with outdated email addresses. I need to compare both files and, when I find a match between a current and a bad email address, delete that address from the current email txt file.
    I am not fluent enough in FileMaker 7 to figure out how to do a comparison between the two tables and then delete the appropriate data when there is a match. I had this set up in MS Access on my PC, but I need to replicate the process on the Mac. What options do I have available, or can someone point me in the right direction to a possible solution? I would assume that Automator could do this, or maybe there is something in Excel I can use as a workaround.
    Thanks again.
    Dankman

    FileMaker Pro has some support options where you might be able to find some more help:
    http://www.filemakerpro.com/support/mailinglists.html
    -Doug
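    A hedged sketch of the comparison logic itself, expressed in SQL terms (the table and column names are hypothetical; in FileMaker the same effect would come from a relationship between the two tables on the email field, and in Access this is the classic delete query with a subquery):
    -- remove every current address that also appears in the outdated list
    DELETE FROM current_emails
    WHERE  email IN (SELECT email FROM outdated_emails);
    -- or, to preview which addresses would remain before deleting anything
    SELECT email
    FROM   current_emails
    WHERE  email NOT IN (SELECT email FROM outdated_emails);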

  • Data flow of logical partitions via MultiProvider

    Hi experts,
    I need to report on an InfoCube which will hold 2 years of data totalling about 120 million records. So I would like to use logical partitioning to distribute the data to other InfoCubes based on a time characteristic, e.g. month. But I am not sure how I can achieve this. What I would like to know is the procedure: e.g. how to set up the DTPs, how to use a process chain to automate the data load, the recommended strategy/approach, etc.
    Your advice is highly appreciated!
    Thanks in advance.
    Regards,
    Meng

    Hi Joon,
    In the case of logical partitioning, the very first important thing is that reporting will always be done on a MultiProvider, so as to keep the reporting unchanged with respect to changes in the underlying InfoProviders.
    Now you can take different approaches. Let us say, in your example, you will have to create two different InfoCubes: one for, say, current-year data (I assume this already exists) and one for previous-year data.
    Create a transformation and DTP between the current cube and the previous-year cube and move all the previous-year data from the current cube to the newly created cube using the delta method, giving a month or year selection in the DTP selections. Once you move the data, validate the data in the target cube and do a selective deletion on the current cube, so as to avoid data duplication. Compression on the current cube is a good idea, as most of the requests will be empty after the selective deletion, only holding request IDs.
    Till now you have separated the current year and previous year data in two separate cubes, create a multiprovider on top of these cubes and you are free to create reports on top of multiprovider.
    Now when you are loading data from source system you have two options,
    1) Keep the current data flow from the source system to the current cube as it is, and load all the data (including history) into the current cube and leave it there; being in the same MultiProvider, your overall reporting will not be impacted. But if you want, you can move this data from the current cube to the previous cube using the delta DTP used earlier and do the selective deletion again. You can even automate this process.
    2) In the second scenario, you can create individual flows to the current and history cubes and do selective loading from the source system. Your data will be loaded directly into both cubes, but the extraction from the source system will happen twice from the same source.
    Regards,
    Durgesh.
