Duplicate records generated for InfoCube

Hi all,
When I load data from the DataSource into the InfoCube, I get duplicate records. The data is loaded into the DataSource from a flat file; the first execution worked, but after I changed the flat file structure the InfoCube no longer shows the modified content and shows duplicates instead. The DataSource's 'preview data' option does show the required data (i.e. the modified flat file). I made all the necessary changes in the DataSource, InfoCube, InfoPackage and DTP, and even deleted the data in the InfoCube, but I still get the duplicates. What is the ideal solution to this problem? One way is to create a new DataSource for the modified flat file, but I don't think that is ideal. What is a possible solution without creating the DataSource again?
Edited by: dharmatejandt on Oct 14, 2010 1:46 PM

Finally I got it. I deleted the request IDs in the InfoPackage (right-click the InfoPackage and choose Manage), then executed the transformation and the DTP, and finally got the required output without duplicates.
Edited by: dharmatejandt on Oct 14, 2010 4:05 PM

Similar Messages

  • How to get rid of duplicate records generated from a hierarchical cube in SQL?

    Hi All,
    Database version: 10gR2.
    I am trying to aggregate data for two hierarchical dimensions, organization and products. I am using one ROLLUP for each dimension (two ROLLUPs in the GROUP BY clause) to aggregate every level of organization and product included in the hierarchy.
    The troubling part is that products that have data in the corresponding fact table are not always located at the lowest level (which is 6) of the product hierarchy.
    e.g.
    product_id                                            level
    0/01/0101/010102/01010201                             5    --> 01010201, at level 5, has data in the fact table
    0/01/0101/010103                                      4    --> 010103, at level 4, has data in the fact table as well
    0/02/0201/020102/02010203/0201020304/020102030405     6    --> at level 6 (the lowest level), has data in the fact table
    We have a flat product hierarchy stored in a table as below:
    prod_id    up_code_1  up_code_2  up_code_3  up_code_4  up_code_5  up_code_6
    01010201   0          01         0101       010102     01010201   NULL
    010103     0          01         0101       010103     NULL       NULL
    Due to the NULLs in the upper-level columns, when I run the query below one duplicate record is generated for 01010201, two duplicate records for 010103, and none for 020102030405. I encounter the same issue with the organizational dimension.
    Currently I am using DISTINCT to get rid of the duplicate records, but it doesn't feel right to do it this way.
    So, I wonder: is there a more formal and standard way to do this?
    select distinct ORG_ID, DAY_ID,  TRADE_TYPE_ID, cust_id, PRODUCT_ID, QUANTITY_UNIT, COST_UNIT, SOURCE_ID,
          CONTRACT_AMOUNT, CONTRACT_COST, SALE_AMOUNT,SALE_COST, ACTUAL_AMOUNT, ACTUAL_COST, TRADE_COUNT
    from (     
    select  coalesce(UP_ORG_ID_6, UP_ORG_ID_5, UP_ORG_ID_4, UP_ORG_ID_3, UP_ORG_ID_2, UP_ORG_ID_1) as ORG_ID,
          a.day_id as day_id,        
          a.TRADE_TYPE_ID as TRADE_TYPE_ID,
          a.CUST_ID,
          coalesce(UP_CODE_6, UP_CODE_5, UP_CODE_4, UP_CODE_3, UP_CODE_2, UP_CODE_1) as product_id,
          QUANTITY_UNIT,
          COST_UNIT,
          A.SOURCE_ID as SOURCE_ID,
          SUM(CONTRACT_AMOUNT) as CONTRACT_AMOUNT,
          SUM(CONTRACT_COST) as CONTRACT_COST,
          SUM(SALE_AMOUNT) as SALE_AMOUNT,
          SUM(SALE_COST) as SALE_COST,
          SUM(ACTUAL_AMOUNT) as ACTUAL_AMOUNT,
          SUM(ACTUAL_COST) as ACTUAL_COST,
          SUM(TRADE_COUNT) as TRADE_COUNT     
    from DM_F_LO_SALE_DAY a, DM_D_ALL_ORG_FLAT B, DM_D_ALL_PROD_FLAT D --, DM_D_LO_CUST E
    where a.ORG_ID=B.ORG_ID
          and a.PRODUCT_ID=D.CODE
    group by rollup(UP_ORG_ID_1, UP_ORG_ID_2, UP_ORG_ID_3, UP_ORG_ID_4, UP_ORG_ID_5, UP_ORG_ID_6),
          a.TRADE_TYPE_ID,
          a.day_id,
          A.CUST_ID,
          rollup(UP_CODE_1, UP_CODE_2, UP_CODE_3, UP_CODE_4, UP_CODE_5, UP_CODE_6),
          a.QUANTITY_UNIT,
          a.COST_UNIT,
          a.SOURCE_ID );
    Note: GROUPING_ID does not seem to help, at least I didn't find it useful in this scenario.
    Any recommendations, links or ideas would be highly appreciated as always.
    Thanks

    Has anyone ever encountered this kind of problem?
    Any thoughts would be appreciated.
    Thanks
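    The duplicate counts in the question (one extra row for 01010201, two for 010103) fall straight out of how COALESCE interacts with ROLLUP: every prefix of a short branch that ends in NULLs coalesces to the same leaf id. The following is a minimal Python sketch of that mechanism, built from the two hierarchy rows in the post; it is an illustration only, not Oracle code.

```python
# Two rows of the flat product hierarchy from the post, padded with None
# where the branch ends above level 6.
rows = [
    ("0", "01", "0101", "010102", "01010201", None),  # leaf at level 5
    ("0", "01", "0101", "010103", None, None),        # leaf at level 4
]

def rollup_keys(levels):
    """For every ROLLUP prefix (longest first), return the group key that
    COALESCE(up_code_6, ..., up_code_1) would produce."""
    keys = []
    for n in range(len(levels), 0, -1):
        prefix = levels[:n]
        # COALESCE: rightmost non-None value of the prefix
        keys.append(next((v for v in reversed(prefix) if v is not None), None))
    return keys

for levels in rows:
    keys = rollup_keys(levels)
    dupes = len(keys) - len(set(keys))
    print(keys[0], "->", keys, f"({dupes} duplicate group key(s))")
```

    For 01010201 the level-6 and level-5 prefixes collide (one duplicate); for 010103 three prefixes collide (two duplicates), matching the counts in the question. A cleaner cure than DISTINCT is usually to filter out rollup rows whose grouped-away columns were already NULL in the underlying data, so each subtotal is emitted once.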

  • Calendar and Address Book error: Duplicate records found for GUID

    Hi all,
    I have a Mountain Lion Server running on a Mac mini, and everything was working well.
    This morning one of my users was unable to connect to his calendar and address book. I found this error in the log files:
    2013-06-30 15:19:50+0200 [-] [caldav-1]  [-] [twistedcaldav.directory.appleopendirectory.OpenDirectoryService#error] Duplicate records found for GUID ****USER_GUID****:
    2013-06-30 15:19:50+0200 [-] [caldav-1]  [-] [twistedcaldav.directory.appleopendirectory.OpenDirectoryService#error] Duplicate: ***USER_Shortname***
    Apparently there is a duplicate match in the database. How can I fix this issue?
    In the Server app this user is only listed once.
    Mail and other services for this user are working correctly.
    Thanks for any advice!

    Hi Samuel,
    You may try:
    select code,count(code)
    from [dbo].[@XTSD_XA_CMD]
    group by code having count(code) > 1
    What is the result?
    Thanks,
    Gordon
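    Gordon's GROUP BY / HAVING pattern generalizes to any SQL backend. Here is a self-contained sketch of the same duplicate-count query against an in-memory SQLite table (table name and data are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cmd (code TEXT)")
conn.executemany("INSERT INTO cmd VALUES (?)",
                 [("A",), ("B",), ("A",), ("C",), ("A",)])

# Same shape as the query above: codes that occur more than once.
dupes = conn.execute(
    "SELECT code, COUNT(code) FROM cmd "
    "GROUP BY code HAVING COUNT(code) > 1"
).fetchall()
print(dupes)  # [('A', 3)]
```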

  • Duplicate records in the InfoCube: how should I fix this?

    Hi All,
    We have different values between R/3 and BW. In a first check I found that the cube contains duplicate records. How should I check for and fix this problem, step by step?
    The InfoCube receives data from 7 ODS objects.
    Let me know if you need further detail about our data model or loads.
    Thanks a lot for your help
    Bilal

    Hello All,
    please, I need further detail so that I don't make critical errors in the cube.
    When I check the data in my InfoCube (right-click ==> view data) with this selection:
    0GL_ACCOUNT = R060501950
    0PSTNG_DATE = from 01.01.2009 to 31.03.2009
    I find duplicate records for all of this info: 0GL_ACCOUNT, 0CO_DOC_NO, 0DOC_DATE, 0PSTNG_DATE, 0COORDER, 0FISCPER... and all the key figures.
    To delete these duplicate records I have to make selections via Manage ==> Contents tab ==> Selective Deletion (in the right corner). At this step, what should I do: start in the background, or select "Selective Deletion"?
    And for this selective deletion, which info do I have to enter in relation to the problem explained above?
    0GL_ACCOUNT = R060501950
    0PSTNG_DATE = from 01.01.2009 to 31.03.2009
    If I enter this info and execute, which records will the system delete? All records matching these selections, or only the duplicate records?
    Thanks a lot for your help
    Bilal

  • Duplicate records generated on the same key in a layout!

    Dear Experts,
    We are facing a serious problem in our forecasting application: it generates duplicate records on the same key, which leads to wrong results and negative numbers.
    Since this does not happen constantly but only at certain times, it is difficult to reproduce the error and find a solution.
    If anyone has come across a similar problem and found a solution, please reply.
    Appreciate any help and suggestions on this...
    Thanks..

    Dear All,
    No, we haven't compressed the cube, and we are using the same hardware and system as before. A lot of forecasting data was entered in UAT and in production, but we never came across this issue before. Now that the problem is in production, it is very serious.
    As suggested by SAP we implemented note 1179076 (Incorrect data in planning), but it is too general, and since the problem is not consistent (it occurs randomly) we are not sure it has been resolved by this note.
    We are on ABAP stack 18 and Java stack 16.
    Thanks a lot Krishna for the note, but since we are not using BIA it will not help.
    Please suggest any further ways to debug / solve this strange problem.
    Regards,
    Jasmi

  • No Inventory record generated for zero stock check

    Hi All,
    The issue is that no inventory record is created for the zero stock check.
    Prerequisites:
    1. I activated the zero stock check for the storage type in the warehouse.
    2. I also activated the PZ inventory method for the zero stock check for this storage type.
    3. The storage type uses the 'P - Storage unit type' putaway strategy.
    Steps:
    1. The first time I moved the last SU out of the bin, creating the TO generated a physical inventory record. In the TO's item - other data we found that the inventory record was created with the "PN" inventory method, and the zero stock check indicator is 1.
    2. I then confirmed the TO with the empty-bin check; the inventory record was cleared when the TO was confirmed.
    3. Then I moved that SU back into the same bin (so the bin holds only one SU).
    4. I tried to create a TO to move the SU out again. The TO was created, and in TO item - other data the zero stock check indicator is 1, but this time no inventory record was created.
    Can anyone tell me why it didn't create the inventory record for the second TO?
    Thanks,

    I tried to replicate the case and got the same result: the inventory document was only created the first time.
    This seems to be standard behaviour: once the inventory has been carried out for a bin, it is not carried out again automatically for the same bin as it was the first time.

  • Locate and remove duplicate records in an InfoCube.

    Hi!!
    We have found that the InfoCube 0PUR_C01 contains duplicate records for April 2008; approximately 1.5 lakh records were extracted into this InfoCube, and similar situations may occur in subsequent months.
    How do I locate these records and remove them from the InfoCube?
    How do I ensure that duplicate records are not extracted into the InfoCube in the future?
    All answers/ links are welcome!!
    Yours Truly
    K Sengupto

    First:
    1. How do I locate duplicate records in an InfoCube, other than downloading all the records into an Excel file and using Excel functionality to locate them?
    This is not really possible, since a duplicate record as such would not exist: records are sent to a cube with + and - signs to summarize the data accordingly, so your search for duplicate data becomes that much more troublesome.
    If you have a DSO to load from, delete the data for that month and reload if possible; this is quicker and cleaner than removing duplicate records.
    If you had
    ABC|100 in your DSO and it got doubled,
    it would be
    ABC|+100
    ABC|+100
    against different requests in the cube, and added to this will be your correct deltas as well.
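    The + / - mechanics above can be pictured with a toy aggregation: a "duplicate" in a cube is really a doubled delta booked under a second request, which is why deleting the offending request (as in the original thread) cleanly reverses it. Request names and values below are invented:

```python
from collections import defaultdict

cube = defaultdict(int)
requests = {
    "REQU_1": [("ABC", +100)],   # the correct load
    "REQU_2": [("ABC", +100)],   # the unwanted second load
}
for records in requests.values():
    for key, amount in records:
        cube[key] += amount
print(dict(cube))                # {'ABC': 200}: looks like a duplicate

# Deleting the bad request reverses exactly its contribution:
for key, amount in requests["REQU_2"]:
    cube[key] -= amount
print(dict(cube))                # {'ABC': 100}
```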

  • How to look at the SID tables generated for an ODS?

    Hi all,
    There is a transaction code, LISTSCHEMA, which lets us have a look at the SID tables generated for InfoCubes.
    Is there a transaction code where we can see the list of SID tables generated for an ODS? If not, let me know the procedure for looking at the tables generated for an ODS.
    Regards,
    hari

    Hi,
    I think you can try it like this:
    Go to transaction code SE11 and type the name in this form:
    e.g. /BIC/AODS_CUS00
    /BIC/: customer-defined namespace (/BI0/ is the SAP-defined one)
    A: active data table of the ODS
    Then search with /BIC/A<ODS name>* (with a star) and you will get the matching tables.
    Thanks, KR

  • J1INQEFILE e-file generation: exported file shows duplicate records.

    Dear Team,
    When I execute J1INQEFILE, I am facing a problem with the e-file generation, i.e. the exported Excel file. When I execute and export the file as Excel to the desktop, I see duplicate records.
    For example, on execution of J1INQEFILE I see 4 records on the SAP screen, whereas the file exported to the desktop shows 2 more identical records, i.e. 6 records. As a result, in the SAP system I see a base amount of 40000, i.e. 10000 for each record, while the Excel sheet shows 60000, i.e. 6 records of 10000 each (because of the 2 duplicated records), and the TDS amount is wrong as well. How are the records getting duplicated? Is there an SAP note to fix this? We are debugging this but have no clue so far...
    Please assist with this issue...
    Thanks in advance!

    Dear Sagar,
    I am an ABAPer, and I came across the same kind of situation for one of our clients: when we executed J1INQEFILE and exported it to an Excel file, we would get duplicate records.
    I debugged the program and checked the point of e-file generation: duplicate records were getting appended to the internal table that is downloaded to Excel. So I pulled the document number into the internal table and used DELETE ADJACENT DUPLICATES comparing all fields, and was thereby able to resolve the issue.
    I hope the same logic helps, or guides you to proceed with the help of an ABAPer.
    <<Text removed>>
    Regards,
    Kalyan
    Edited by: Matt on Sep 8, 2011 9:14 PM

  • Check for duplicate records in a SQL database before doing an INSERT

    Hey guys,
    This is part of a PowerShell app doing a SQL insert, but my question really relates to the SQL side. I need to check the database PRIOR to doing the insert for duplicate records, and if a record already exists it needs to be overwritten. I'm not sure how to accomplish this. My back end is a SQL Server 2000. I'm piping the data into my insert statement from a PowerShell FileSystemWatcher app. In my scenario, if the file dumped into a directory starts with "I" it gets written to a SQL database; otherwise it gets written to an Access table. I know, silly, but that's the environment I'm in. haha.
    Any help is appreciated.
    Thanks in Advance
    Rich T.
    #### DEFINE WATCH FOLDERS AND DEFAULT FILE EXTENSION TO WATCH FOR ####
                $cofa_folder = '\\cpsfs001\Data_pvs\TestCofA'
                $bulk_folder = '\\cpsfs001\PVS\Subsidiary\Nolwood\McWood\POD'
                $filter = '*.tif'
                $cofa = New-Object IO.FileSystemWatcher $cofa_folder, $filter -Property @{ IncludeSubdirectories = $false; EnableRaisingEvents= $true; NotifyFilter = [IO.NotifyFilters]'FileName, LastWrite' }
                $bulk = New-Object IO.FileSystemWatcher $bulk_folder, $filter -Property @{ IncludeSubdirectories = $false; EnableRaisingEvents= $true; NotifyFilter = [IO.NotifyFilters]'FileName, LastWrite' }
    #### CERTIFICATE OF ANALYSIS AND PACKAGE SHIPPER PROCESSING ####
                Register-ObjectEvent $cofa Created -SourceIdentifier COFA/PACKAGE -Action {
           $name = $Event.SourceEventArgs.Name
           $changeType = $Event.SourceEventArgs.ChangeType
           $timeStamp = $Event.TimeGenerated
    #### CERTIFICATE OF ANALYSIS PROCESS BEGINS ####
                $test=$name.StartsWith("I")
         if ($test -eq $true) {
                $pos = $name.IndexOf(".")
           $left=$name.substring(0,$pos)
           $pos = $left.IndexOf("L")
           $tempItem=$left.substring(0,$pos)
           $lot = $left.Substring($pos + 1)
           $item=$tempItem.Substring(1)
                Write-Host "in_item_key $item in_lot_key $lot imgfilename $name in_cofa_crtdt $timestamp"  -fore green
                Out-File -FilePath c:\OutputLogs\CofA.csv -Append -InputObject "in_item_key $item in_lot_key $lot imgfilename $name in_cofa_crtdt $timestamp"
                start-sleep -s 5
                $conn = New-Object System.Data.SqlClient.SqlConnection("Data Source=PVSNTDB33; Initial Catalog=adagecopy_daily; Integrated Security=TRUE")
                $conn.Open()
                $insert_stmt = "INSERT INTO in_cofa_pvs (in_item_key, in_lot_key, imgfileName, in_cofa_crtdt) VALUES ('$item','$lot','$name','$timestamp')"
                $cmd = $conn.CreateCommand()
                $cmd.CommandText = $insert_stmt
                $cmd.ExecuteNonQuery()
                $conn.Close()
                }
    #### PACKAGE SHIPPER PROCESS BEGINS ####
              elseif ($test -eq $false) {
                $pos = $name.IndexOf(".")
                $left = $name.substring(0, $pos)
                $pos = $left.IndexOf("O")
                $tempItem = $left.substring(0, $pos)
                $order = $left.Substring($pos + 1)
                $shipid = $tempItem.Substring(1)
                Write-Host "so_hdr_key $order so_ship_key $shipid imgfilename $name in_cofa_crtdt $timestamp"  -fore green
                Out-File -FilePath c:\OutputLogs\PackageShipper.csv -Append -InputObject "so_hdr_key $order so_ship_key $shipid imgfilename $name in_cofa_crtdt $timestamp"
                }
            }
    Rich Thompson

    Hi
    Since SQL Server 2000 has been out of support, I recommend you to upgrade the SQL Server 2000 to a higher version, such as SQL Server 2005 or SQL Server 2008.
    According to your description, you can try the following methods to check duplicate record in SQL Server.
    1. You can use RAISERROR to check for the duplicate record: if it exists then RAISERROR, otherwise insert accordingly. A code block is given below:
    IF EXISTS (SELECT 1 FROM TableName AS t
               WHERE t.Column1 = @Column1
                 AND t.Column2 = @Column2)
    BEGIN
        RAISERROR('Duplicate records', 18, 1)
    END
    ELSE
    BEGIN
        INSERT INTO TableName (Column1, Column2, Column3)
        SELECT @Column1, @Column2, @Column3
    END
    2. Also, you can create a UNIQUE INDEX or UNIQUE CONSTRAINT on the relevant column(s) of the table; when you try to INSERT a value that conflicts with the index/constraint, an exception is thrown.
    Add the unique index:
    CREATE UNIQUE INDEX Unique_Index_name ON TableName(ColumnName)
    Add the unique constraint:
    ALTER TABLE TableName
    ADD CONSTRAINT Unique_Contraint_Name
    UNIQUE (ColumnName)
    Thanks
    Lydia Zhang
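    Since the original poster wanted existing rows overwritten rather than rejected, the unique-index idea in option 2 can be paired with an "insert or replace" statement. The sketch below uses an in-memory SQLite database with invented column names modelled on the PowerShell snippet; on SQL Server 2000 itself the equivalent would be the IF EXISTS ... UPDATE ... ELSE INSERT pattern from option 1.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE in_cofa (item_key TEXT, lot_key TEXT, imgfilename TEXT)")
# The unique index is what makes the duplicate detectable at all.
conn.execute("CREATE UNIQUE INDEX ux_cofa ON in_cofa (item_key, lot_key)")

def upsert(item, lot, name):
    # OR REPLACE: on a unique-index conflict, the old row is replaced
    # instead of raising an IntegrityError.
    conn.execute("INSERT OR REPLACE INTO in_cofa VALUES (?, ?, ?)",
                 (item, lot, name))

upsert("1001", "L1", "old.tif")
upsert("1001", "L1", "new.tif")   # duplicate key: overwritten, not doubled
print(conn.execute("SELECT * FROM in_cofa").fetchall())
# [('1001', 'L1', 'new.tif')]
```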

  • Ignore duplicate records for master data attributes

    Dear experts,
    How and where can I enable "ignore duplicate records" when running my DTP to load data to master data attributes?

    Hi Raj
    Suppose you are loading master data to an InfoObject, and in the PSA you have more than one record per key.
    Let's assume you are loading some attributes of Document Number (0DOC_NUMBER) and the PSA has multiple records for the same document number. In the PSA this is not a problem, as the key of the PSA table is a technical key, which is different for every record. But it is a problem for the InfoObject attribute table, as more than one record for the primary key (here 0DOC_NUMBER) would violate the primary key constraint in the database.
    This issue can easily be avoided by selecting "Handle Duplicate Records" in the DTP. You will find this option under the "Update" tab of the DTP.
    Regards
    Anindya
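    Functionally, the effect of that DTP option on an attribute load can be pictured as keeping one record per key instead of failing on the repeat. A rough pure-Python sketch (document numbers and attribute values are invented):

```python
# PSA-like input: the technical key is the position, so repeats are allowed.
psa = [
    ("DOC1", "attr_v1"),
    ("DOC2", "attr_b"),
    ("DOC1", "attr_v2"),   # same 0DOC_NUMBER again: would violate the
                           # attribute table's primary key if loaded as-is
]

attributes = {}
for doc_number, attr in psa:   # later records overwrite earlier ones
    attributes[doc_number] = attr

print(attributes)              # one row per key
```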

  • Duplicate records in PO schedule lines for a framework order (short dump)

    Hi all,
    I am creating a framework purchase order with item category B and am assigning an external number range to the PO. The PO is created with reference to an expense PR. I have found that duplicate records appear in the schedule lines for the same item.
    Then, after I save the PO, it short dumps and sends a message to my SAP inbox that there are duplicate records.
    Later I cannot find those POs in the system. Please let me know where I am making a mistake. Why do duplicate records appear in the PO schedule lines?
    Thanks a lot
      pabi

    Hi,
    Please debug the particular program with the help of an ABAPer; that may resolve your issue. Thank you.

  • Duplicate Records in Details for ECC data source. Help.

    Hello. First post on SDN. I have been searching prior posts but have come up empty. I am in the middle of creating a report linking directly into 4 tables in ECC 6.0. I am having trouble either getting the table links set up correctly or filtering out the duplicate record sets that are being reported in the details section of my report. 119 records are displayed, when the parameter values should only yield 7. The details section repeats the 7 records 17 times (there are 17 matching records for the parameter choices in one of the other tables, which I think is the cause).
    I think this is due to the other table links for my parameter values, but I need to keep the links the way they are for other aspects of the report (header information). The tables in question use an Inner Join, Enforced Both, =. I tried the other link options with no luck.
    I am unable to use the "Select Distinct Records" option in the Database menu, since this is not supported when connecting to ECC.
    Any ideas would be greatly appreciated.
    Thanks,
    Barret
    PS. I come from more of a Functional background, so development is sort of new to me. Take it easy on the newbie.

    If you can't establish links that bring back unique data, use a group to display the data:
    Group the report by a field which is the lowest common denominator.
    Move all fields into the group footer and suppress the group header and details.
    You will not be able to use normal summaries, as they would count/sum all the duplicated data; use running totals instead, evaluated on change of the introduced group.
    Ian

  • Duplicate records in database view for ANLA and ANLC tables

    HI all,
    Can anyone please suggest how to remove duplicate records from the ANLA and ANLC tables when creating a database view?
    thanks in advance,
    ben.

    Hi,
    Suppose we have two tables, one with one key field and another with two:
    TAB1 - key field KEY1
    TAB2 - key fields KEY1 & KEY2.
    Now if we create a database view over these two tables, we can do so by joining them on the key field KEY1, and in the View tab we include TAB1-KEY1.
    Now suppose the following entries are in table TAB1: (AAA), (BBB), (CCC),
    and the following entries are in table TAB2: (AAA, 1), (AAA, 2), (BBB, 3), (BBB, 5), (DDD, 3).
    The database view will then show the following entries:
    AAA,
    AAA,
    BBB,
    BBB,
    These entries are duplicated in the output because TAB2 has multiple entries for the same key value of TAB1.
    If we want to remove the multiple entries from the output, we need to include a restriction in the selection conditions, such as TAB2-KEY2 = '1'.
    Regards,
    Pranav.
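    Pranav's TAB1/TAB2 example can be reproduced end to end. The sketch below uses a SQLite view instead of an SAP database view, but the join behaviour is the same: projecting only KEY1 over an inner join repeats it once per matching TAB2 row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab1 (key1 TEXT)")
conn.execute("CREATE TABLE tab2 (key1 TEXT, key2 INTEGER)")
conn.executemany("INSERT INTO tab1 VALUES (?)", [("AAA",), ("BBB",), ("CCC",)])
conn.executemany("INSERT INTO tab2 VALUES (?, ?)",
                 [("AAA", 1), ("AAA", 2), ("BBB", 3), ("BBB", 5), ("DDD", 3)])

# The "database view": only TAB1-KEY1 is projected.
conn.execute("""CREATE VIEW v AS
                SELECT tab1.key1 FROM tab1
                JOIN tab2 ON tab1.key1 = tab2.key1""")
print(conn.execute("SELECT key1 FROM v ORDER BY key1").fetchall())
# AAA and BBB each appear twice, exactly as described; CCC (no match in
# TAB2) and DDD (not in TAB1) do not appear at all.
```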

  • Need a query for duplicate record deletion

    Here is the scenario:
    23130 ----> 'A'
    23130 ----> 'X'
    23130 ----> 'C'
    These are duplicate records. When we remove duplicates, the surviving record must have 'C' if the set contains A, C and X; if it contains only A and X, the surviving record must have 'X'. In other words, the priority goes C --> X --> A. I need a query for this scenario. It would be great if you could reply ASAP.

    Hello
    It's great that you gave examples of your data, but it is quite helpful to supply CREATE TABLE and INSERT statements too, along with a clear example of the expected results. Anyway, I think this does what you are looking for:
    CREATE TABLE dt_dup (id NUMBER, flag VARCHAR2(1));
    INSERT INTO dt_dup VALUES (23130, 'A');
    INSERT INTO dt_dup VALUES (23130, 'X');
    INSERT INTO dt_dup VALUES (23130, 'C');
    INSERT INTO dt_dup VALUES (23131, 'A');
    INSERT INTO dt_dup VALUES (23131, 'X');
    DELETE
    FROM
      dt_dup
    WHERE
      ROWID IN (  SELECT
                    rid
                  FROM
                    (   SELECT
                          ROWID rid,
                          ROW_NUMBER() OVER (PARTITION BY id ORDER BY CASE
                                                                        WHEN flag = 'C' THEN
                                                                          1
                                                                        WHEN flag = 'X' THEN
                                                                          2
                                                                        WHEN flag = 'A' THEN
                                                                          3
                                                                      END
                                            ) rn
                        FROM
                          dt_dup
                    )
                  WHERE
                    rn > 1
               );
    SELECT * FROM dt_dup;
    HTH
    David
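    The same C --> X --> A rule can be checked quickly outside Oracle. This sketch uses SQLite, replacing ROWID + ROW_NUMBER with a portable correlated EXISTS that deletes any row for which a higher-priority row with the same id exists:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dt_dup (id INTEGER, flag TEXT)")
conn.executemany("INSERT INTO dt_dup VALUES (?, ?)",
                 [(23130, "A"), (23130, "X"), (23130, "C"),
                  (23131, "A"), (23131, "X")])

# Lower number = higher priority: C=1, X=2, A=3.
priority = "CASE {0}.flag WHEN 'C' THEN 1 WHEN 'X' THEN 2 ELSE 3 END"
conn.execute(f"""
    DELETE FROM dt_dup
    WHERE EXISTS (SELECT 1 FROM dt_dup AS k
                  WHERE k.id = dt_dup.id
                    AND {priority.format('k')} < {priority.format('dt_dup')})""")

print(sorted(conn.execute("SELECT id, flag FROM dt_dup").fetchall()))
# [(23130, 'C'), (23131, 'X')]
```

    One caveat: exact ties (two identical rows for the same id) survive this version, whereas the ROW_NUMBER approach above breaks ties arbitrarily and keeps only one.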
    Edited by: Bravid on Jun 30, 2011 8:12 AM
