Fact Table allows duplicate records

Does a fact table allow duplicate records?

What do you mean by duplicate records? It could be that what looks like a duplicate to you differs in some other technical key field in the fact table.
At the technical level there should not be duplicate records in a fact table. (In SAP's BW/BI there are two fact tables for each cube, which can itself cause some confusion.)

Similar Messages

  • Join creation between 2 tables while duplicate records exist

    Hi all ,
    I have the following two tables. I need to join them on the SAM_CUST_ID column in ABC (the detail table) and the DE_CUST_ID column (unique key) in XYZ (the master table). But both columns contain duplicate customer values, because we load these tables from another database. Is there any solution for how to join these tables using DE_CUST_ID and SAM_CUST_ID? If I create ENABLE NOVALIDATE constraints on these columns, will they work? Any suggestion would be highly appreciated.
    Regards
    REM TEST XYZ
      CREATE TABLE "TEST"."XYZ"
       (     "DE_BANK_ID" VARCHAR2(255 BYTE),
         "DE_CUST_ID" NUMBER(*,0),
         "XYZ_NO" NUMBER(*,0),
         "DE_HOLD_OUTPUT" VARCHAR2(255 BYTE),
         "DE_SHORT_NAME" VARCHAR2(255 BYTE),
         "DE_NAME1" VARCHAR2(255 BYTE),
         "DE_NAME2" VARCHAR2(255 BYTE),
         "DE_STREET_ADDR" VARCHAR2(255 BYTE),
         "DE_TOWN_COUNTRY" VARCHAR2(255 BYTE),
         "DE_POST_CODE" NUMBER(*,0),
         "DE_COUNTRY" VARCHAR2(255 BYTE),
         "DE_COPIES" NUMBER(*,0),
         "DE_EMAIL" VARCHAR2(255 BYTE),
         "DE_LOCAL_A1" VARCHAR2(255 BYTE),
         "DE_LOCAL_A2" VARCHAR2(255 BYTE),
         "DE_LOCAL_A3" VARCHAR2(255 BYTE),
         "DE_LOCAL_N1" FLOAT(126),
         "DE_LOCAL_N2" FLOAT(126),
         "DE_LOCAL_N3" FLOAT(126),
          CONSTRAINT "XYZ_UNIQUE" UNIQUE ("DE_CUST_ID", "XYZ_NO") ENABLE
    REM TEST XYZ_IDX1
      CREATE UNIQUE INDEX "WH1"."XYZ_IDX1" ON "WH1"."XYZ" ("DE_CUST_ID", "XYZ_NO")
    REM TEST ABC
      CREATE TABLE "TEST"."ABC"
       (     "ABC_BANK_ID" VARCHAR2(255 BYTE),
         "ABC_CUST_ID" NUMBER(22,0),
         "ABC_ABC_ID" VARCHAR2(255 BYTE),
         "ABC_PORT_NAME" VARCHAR2(255 BYTE),
         "ABC_CCY" VARCHAR2(255 BYTE),
         "ABC_INDICATOR" NUMBER(*,0),
         "ABC_MANAGED_ACCOUNT" VARCHAR2(255 BYTE),
         "ABC_CLOSED_DATE" DATE,
         "ABC_INVESTMENT_PGM" VARCHAR2(255 BYTE),
         "ABC_ACC_OFF" NUMBER(*,0),
         "ABC_FREQUENCY" VARCHAR2(255 BYTE),
          CONSTRAINT "ABC_UNIQUE" UNIQUE ("ABC_ABC_ID") ENABLE,
          CONSTRAINT "ABC_DE_ADD_CUSID_FK" FOREIGN KEY ("ABC_CUST_ID")
           REFERENCES "WH1"."DE_ADDR" ("DE_CUST_ID") ENABLE NOVALIDATE
    REM WH1 ABC_UNIQUE
      CREATE UNIQUE INDEX "WH1"."ABC_UNIQUE" ON "WH1"."ABC" ("ABC_ABC_ID")
     

    Would you like to explain with an example? First of all, why not remove the duplicates?
    If you need to keep duplicates, your entire design looks wrong.
    You have to follow normalization rules and always have an LPK (logical primary key) and a PK for transactional tables.
    A simple example: A1 and B2 are duplicated.
    After the data is loaded it should look like this:
    Table 1
    id val
    1 A1
    2 A1
    3 B2
    4 B2
    Table 2
    id val
    1 A1
    2 A1
    3 B2
    4 B2
    (id - unique sequence)
    Of course there are more possibilities, but nobody can tell you how to do this (or what to do) based on what you have shown.
    Good luck
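    A hedged sketch of one common fix: collapse the master table to one row per customer in an inline view, then join on that. Table and column names follow the DDL above; keeping the row with the lowest XYZ_NO is an assumption, so adjust the ORDER BY to whatever marks the "right" row in your data.
    -- Deduplicate XYZ to one row per DE_CUST_ID before joining (sketch).
    -- The "keep lowest XYZ_NO" tie-break rule is an assumption.
    SELECT a.ABC_CUST_ID, a.ABC_PORT_NAME, x.DE_SHORT_NAME, x.DE_EMAIL
    FROM TEST.ABC a
    JOIN (SELECT t.DE_CUST_ID, t.DE_SHORT_NAME, t.DE_EMAIL,
                 ROW_NUMBER() OVER (PARTITION BY t.DE_CUST_ID
                                    ORDER BY t.XYZ_NO) rn
          FROM TEST.XYZ t) x
      ON x.DE_CUST_ID = a.ABC_CUST_ID
     AND x.rn = 1;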

  • Duplicate record with same primary key in Fact table

    Hi all,
    Can a fact table have duplicate records with the same primary key? When I checked a cube, I could see records with the same primary-key combination but different key figure values. My cube has 6 dimensions (including Time, Unit and Data Packet) and 2 key figures, so 6 fields combine to form the composite primary key of the fact table. When I checked the records in SE16 I could see duplicate records with the same primary key. There is no parallel loading happening for the cube.
    BW system version is 3.1
    Data base is : Oracle 10.2
    I am not sure how is this possible.
    Regards,
    PM

    Hi Krish,
    I checked the data packet dimension also. Both records have the same dimension ID (141). Except for the key figure value, there is no other difference in the fact table records. I know this is against the basic DBMS primary-key rule, but I have records like this in the cube.
    Can this situation arise when the same record is in different data packets of the same request?
    Thx,
    PM
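    A quick way to check whether such rows really share the full dimension key is to group the F fact table on all of its dimension-key columns. A minimal sketch; "MYCUBE" and the exact KEY_... column list are placeholders for your cube's generated names (look them up in SE11).
    -- Find dimension-key combinations that occur more than once (sketch).
    SELECT KEY_MYCUBEP, KEY_MYCUBET, KEY_MYCUBEU, KEY_MYCUBE1, KEY_MYCUBE2,
           COUNT(*) AS dup_count
    FROM "/BIC/FMYCUBE"
    GROUP BY KEY_MYCUBEP, KEY_MYCUBET, KEY_MYCUBEU, KEY_MYCUBE1, KEY_MYCUBE2
    HAVING COUNT(*) > 1;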

  • Is it OK to have 42 million records in a single fact table?

    Hello,
    We have three existing fact tables, and we need to add one more fact type. We were wondering whether to create two different fact tables, or to put the values into the existing similar fact table, but the record count goes up to 42 million if we add them. So my question is: keep a single fact table with all the records, or break it into two different ones? Thanks!

    I am not sure what an "outstanding fact" or a "fact type" is. A 42m-row fact table doesn't necessarily indicate you are doing something wrong, although it does sound odd. I would expect most facts to be small, as they should hold aggregated measures to speed up reporting. In some cases you may want to drill down to the detailed transaction level, in which case you may find these large facts. But care should be taken not to let users query this fact without using the transaction ID, which obviously should be indexed and should guarantee that queries will be quick.
    Guessing from your post (as it is not clear or descriptive enough), it would seem that you are adding a new dimension to your fact, and that will cause the fact to increase its row count to 42m. That probably means you are changing the granularity of the fact. That may or may not be correct, depending on your model.
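    If the 42m rows are transaction-level detail, one conventional pattern is to keep the detail fact and add a summary fact at the grain most reports use. A minimal Oracle sketch; SALES_FACT and its columns are hypothetical names, not anything from the post above.
    -- Summary fact as a materialized view over a hypothetical detail fact.
    CREATE MATERIALIZED VIEW SALES_MONTH_MV
    BUILD IMMEDIATE
    REFRESH COMPLETE ON DEMAND
    ENABLE QUERY REWRITE
    AS
    SELECT cust_key, prod_key, TRUNC(sale_date, 'MM') AS sale_month,
           SUM(amount) AS total_amount, COUNT(*) AS txn_count
    FROM sales_fact
    GROUP BY cust_key, prod_key, TRUNC(sale_date, 'MM');
    Reports then read the small summary, and only drill-down queries touch the 42m-row detail.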

  • Record count is double in the Fact tables when compared to Cube data!

    Hello BW Gurus,
    I have two questions:
    1. I have a cube with 3 years of data. Due to some bad data in it, I dropped the cube data completely from the fact and dimension tables and did an init from the ODS. The load failed with "No SID found for value '2000000000000000000000010' of characteristic 0MAT_PLANT". I ran an RSRV test to look for inconsistencies in the SIDs for 0MAT_PLANT, but everything looks fine. With the time needed to finish the load in mind, I removed that record from the PSA and re-pushed the data from the PSA, which was successful; however, I wanted to know if anyone has come across this kind of error.
    2. The load finished successfully in the cube, but the performance of the cube was very bad: my cube request shows only 3.5 million records, but when I checked the fact table it showed double, i.e. 7 million records, because I deleted the failed init request and re-pushed the data from the PSA. Can anyone suggest how to overcome this, i.e. how to bring my fact table back to 3.5 million records?
    please advise me and thanks so much!
    Swathi.

    hi Swathi,
    1. For the missing 0MAT_PLANT SID, besides RSRV you can try RSD1 (InfoObject maintenance); there is a menu entry (about the third from the left), 'Fill SIDs...'.
    For the InfoCube update, please make sure you choose 'Delete data' with 'fact and dimension tables', and check that the fact table then counts 0 records before you update from the ODS with 'Initialize'. Or, if you go via the PSA, delete the data from both the InfoCube and the ODS (from ODS to cube should be sufficient).
    2. For performance, check InfoCube -> Manage -> Performance and make sure index and statistics are green. You can create the indexes there and also refresh the statistics.
    hope this helps.
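    When counting "the fact table", keep in mind the two physical tables per cube mentioned at the top of this page: uncompressed requests sit in the F table and compression moves them to the E table, so count both. A hedged sketch; "MYCUBE" is a placeholder for the cube's technical name.
    -- Count records in both fact tables of one cube (sketch).
    SELECT 'F (uncompressed)' AS fact_table, COUNT(*) AS rec_count FROM "/BIC/FMYCUBE"
    UNION ALL
    SELECT 'E (compressed)', COUNT(*) FROM "/BIC/EMYCUBE";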

  • Counting fact table records within brackets

    Hi,
    I have a star schema setup in OBIEE, consisting of a fact table of transaction records, linked to a customer dimension and a time dimension via PK-FK relations.
    One of the requirements is to count the number of customers based on how many transactions they have done per time period. I imagine solving this by creating logical columns such as "# customers (1-10 transactions)", "# customers (11-20 transactions)", etc. The counting should be based on the current level of drill-down in the time dimension.
    Any advice on how to approach this would be greatly appreciated
    best regards
    M

    Thanks for your reply, it brought me a lot closer to a solution. So far I've been able to create a bracket attribute on the customer dimension (via Answers and then in the Administrator tool, as advised). I've verified this against the database, and it categorizes each customer correctly.
    The remaining challenge is that I want to see the count within each category at an aggregate bracket level rather than per customer. Using one attribute ("customer bracket") which can take one of several values (e.g. "0", "1-10", "11-20", etc.), I'm not sure how to accomplish this.
    As a workaround I tried creating one column in Answers for each bracket, taking the value 1 for records falling within that bin and 0 for all other records. Treating the column as numeric and adding a subtotal (sum), I get the answers I want for each bin. The problem here, however, is that one line (value 0 or 1) is shown for each customer record, which makes the report useless for most practical purposes, as thousands of records are shown.
    Any thoughts on which method I should go ahead with, and how?
    regards
    M
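    At the database level the report M wants is a two-step aggregate: count transactions per customer for the selected period, then count customers per bracket. A hedged SQL sketch with placeholder table and column names; in OBIEE this would be modelled as logical columns, but it shows the aggregation the report needs.
    -- Customers per transaction-count bracket (sketch; names are placeholders).
    SELECT CASE
             WHEN txn_cnt BETWEEN 1 AND 10 THEN '1-10'
             WHEN txn_cnt BETWEEN 11 AND 20 THEN '11-20'
             ELSE '21+'
           END AS bracket,
           COUNT(*) AS num_customers
    FROM (SELECT f.customer_key, COUNT(*) AS txn_cnt
          FROM transaction_fact f
          JOIN time_dim t ON t.time_key = f.time_key
          WHERE t.cal_year = 2005          -- stands in for the current drill level
          GROUP BY f.customer_key)
    GROUP BY CASE
               WHEN txn_cnt BETWEEN 1 AND 10 THEN '1-10'
               WHEN txn_cnt BETWEEN 11 AND 20 THEN '11-20'
               ELSE '21+'
             END;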

  • Key Figure units in Fact Table - Error

    All -
    When I run a report off a cube, some rows display 0 when there are corresponding values in my cube. The report doesn't agree with LISTCUBE. I ran transaction RSRV on my cube and tested "Key figure units in fact tables of InfoCube", and I get an error saying that 1380 units are missing from the fact table.
    Diagnosis
    In the fact table /BIC/FEU_FRCTS, records have been found that contain values other than zero for key figures that have units, but that have no value for the unit of the key figure. Since the value of the unit has to correspond to the value of the key figure, this indicates an error when the data was loaded: the values of the units have not been loaded into BW correctly. Choose Details to display the incorrect records.
    Does anyone know what this error means? How do I solve this problem?
    Thanks,
    Nyrvole

    hi Nyrvole,
    As the message says, you have key figures with a unit whose unit value is not filled. Click 'Details' as suggested to check which key figure(s) are involved, then go to RSD2, enter that key figure and see which unit InfoObject is used; then check in the transfer/update rules how this unit InfoObject is mapped, correct the values and upload again.
    There is a 'repair' option in RSRV, but I think in this case it can't fix the error; just try it.
    hope this helps.
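    To inspect the broken rows directly, the condition RSRV is flagging is "key figure non-zero, unit column empty". A hedged sketch against the fact table named in the error; the key figure and unit column names are placeholders for the ones 'Details' points you to.
    -- Rows with a key figure value but no unit (sketch; columns are placeholders).
    SELECT *
    FROM "/BIC/FEU_FRCTS"
    WHERE "/BIC/MYKEYFIG" <> 0
      AND (UNIT IS NULL OR UNIT = ' ');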

  • How to design a fact table to keep track of active dimensions?

    I would like to design a classic OLAP facts table using a star scheme. The SQL model of the facts table should be independent of any concrete RDBMS technology and portable between different systems.
    The problem is this: users should be able to select subsets of the facts based on conjunctive queries on the dimension values defined for the facts. However, the program that provides the interface for doing this to the user should only present those dimensions where anything is still selectable at all. For example, if a user selected year 2001 and for dimension contract code there is only a single value for all records in the fact table for that year, this dimension should not be shown to the user any more. This should be solved in a generic way. So for n dimensions in total, if the current set of facts is based on constraints from j dimensions, I want to know for which of the remaining n-j dimensions there is still something to select from and only show those.
    The obvious way is to make a count(*) query on the distinct foreign keys of each of the dimensions on the fact table, using the same where clause. That means that one would need (n-j) such queries on the whole facts table and that sounds like an awful waste of resources given that the original query for selecting the facts could have done it internally "on the fly".
    How can this be achieved in the most performant way? Is there a "classical" way of how to approach this problem? Is there tool support for doing this efficiently?
    Any help or pointers to where one could find out more about this would be greatly appreciated - thank you!

    >
    Did you get the counts for each value of each dimension by doing a separate query with the current "WHERE" clause on each dimension?
    >
    My method doesn't apply to your use case. I wrote a Java class to create my own bitmap indexes on CSV files, so each attribute value was a one-million-bit binary RAW.
    I don't know, and don't want to know, what your particular requirements are. But I can show you a basic process that will work for large numbers of rows. Get a simple process working and then explore whether it meets your particular needs. I'm not going to answer questions here about anything but my example code.
    1. Assume a single fact table with one primary key column and multiple single-value attribute columns.
    2. The table is not subject to DML operations AT ALL - truncate and reload if you want to apply changes. That means it is useful for research purposes on archived data.
    3. The purpose of the table is to select the fact table ROWIDs for records of interest. So the only value selected is a result set of ROWIDs that can then be used to get any of the normal fact table data and other linked data as needed.
    Create the table - insert some records, create a bitmap index on each dimension column and collect the statistics
    ALTER TABLE SCOTT.STAR_FACT
    DROP PRIMARY KEY CASCADE;
    DROP TABLE SCOTT.STAR_FACT CASCADE CONSTRAINTS;
    create table star_fact (
        fact_key varchar2(30) DEFAULT 'N/A' not null,
        age      varchar2(30) DEFAULT 'N/A' not null,
        beer    varchar2(30) DEFAULT 'N/A' not null,
        marital_status varchar2(30) DEFAULT 'N/A' not null,
        softdrink varchar2(30) DEFAULT 'N/A' not null,
        state    varchar2(30) DEFAULT 'N/A' not null,
        summer_sport varchar2(30) DEFAULT 'N/A' not null,
        constraint star_fact_pk PRIMARY KEY (fact_key)
    );
    INSERT INTO STAR_FACT (FACT_KEY) SELECT ROWNUM FROM ALL_OBJECTS;
    create bitmap index age_bitmap on star_fact (age);
    create bitmap index beer_bitmap on star_fact (beer);
    create bitmap index marital_status_bitmap on star_fact (marital_status);
    create bitmap index softdrink_bitmap on star_fact (softdrink);
    create bitmap index state_bitmap on star_fact (state);
    create bitmap index summer_sport_bitmap on star_fact (summer_sport);
    exec DBMS_STATS.GATHER_TABLE_STATS('SCOTT', 'STAR_FACT', NULL, CASCADE => TRUE);
    Now if you run the 'complex' query for the example from my first reply you will get:
    SQL> set serveroutput on
    SQL> set autotrace on explain
    SQL> select rowid from star_fact where
      2   (state = 'CA') or (state = 'CO')
      3  and (age = 'young') and (marital_status = 'divorced')
      4  and (((summer_sport = 'baseball') and (softdrink = 'pepsi'))
      5  or ((summer_sport = 'golf') and (beer = 'coors')));
    no rows selected
    Execution Plan
    Plan hash value: 1934160231
    | Id  | Operation                      | Name                  | Rows  | Bytes |
    |   0 | SELECT STATEMENT               |                       |     1 |    30 |
    |   1 |  BITMAP CONVERSION TO ROWIDS   |                       |     1 |    30 |
    |   2 |   BITMAP OR                    |                       |       |       |
    |*  3 |    BITMAP INDEX SINGLE VALUE   | STATE_BITMAP          |       |       |
    |   4 |    BITMAP AND                  |                       |       |       |
    |*  5 |     BITMAP INDEX SINGLE VALUE  | AGE_BITMAP            |       |       |
    |*  6 |     BITMAP INDEX SINGLE VALUE  | MARITAL_STATUS_BITMAP |       |       |
    |*  7 |     BITMAP INDEX SINGLE VALUE  | STATE_BITMAP          |       |       |
    |   8 |     BITMAP OR                  |                       |       |       |
    |   9 |      BITMAP AND                |                       |       |       |
    |* 10 |       BITMAP INDEX SINGLE VALUE| SOFTDRINK_BITMAP      |       |       |
    |* 11 |       BITMAP INDEX SINGLE VALUE| SUMMER_SPORT_BITMAP   |       |       |
    |  12 |      BITMAP AND                |                       |       |       |
    |* 13 |       BITMAP INDEX SINGLE VALUE| BEER_BITMAP           |       |       |
    |* 14 |       BITMAP INDEX SINGLE VALUE| SUMMER_SPORT_BITMAP   |       |       |
    Predicate Information (identified by operation id):
       3 - access("STATE"='CA')
       5 - access("AGE"='young')
       6 - access("MARITAL_STATUS"='divorced')
       7 - access("STATE"='CO')
      10 - access("SOFTDRINK"='pepsi')
      11 - access("SUMMER_SPORT"='baseball')
      13 - access("BEER"='coors')
      14 - access("SUMMER_SPORT"='golf')
    SQL>
    As you can see, Oracle is combining bitmap indexes on columns in a single table to implement the same AND/OR complex conditions I showed earlier. It doesn't need any other table to do this.
    In 11g you can create virtual columns and then index them.
    so if you find that the condition 'young' and 'divorced' is used frequently you could create a VIRTUAL 'young_divorced' column and create an index.
    alter table star_fact add (young_divorced AS (case
       when (age = 'young' and marital_status = 'divorced') then 'TRUE' else 'N/A' end) VIRTUAL);
    create bitmap index young_divorced_ndx on star_fact (young_divorced);
    exec DBMS_STATS.GATHER_TABLE_STATS('SCOTT', 'STAR_FACT', NULL, CASCADE => TRUE);
    Now you can query using the name of the virtual column:
    SQL> select rowid from star_fact where young_divorced = 'TRUE'
      2  and  (state = 'CA') or (state = 'CO')
      3  /
    no rows selected
    Execution Plan
    Plan hash value: 2656088680
    | Id  | Operation                    | Name               | Rows  | Bytes | Cost
    |   0 | SELECT STATEMENT             |                    |     1 |    28 |
    |   1 |  BITMAP CONVERSION TO ROWIDS |                    |       |       |
    |   2 |   BITMAP OR                  |                    |       |       |
    |*  3 |    BITMAP INDEX SINGLE VALUE | STATE_BITMAP       |       |       |
    |   4 |    BITMAP AND                |                    |       |       |
    |*  5 |     BITMAP INDEX SINGLE VALUE| STATE_BITMAP       |       |       |
    |*  6 |     BITMAP INDEX SINGLE VALUE| YOUNG_DIVORCED_NDX |       |       |
    Predicate Information (identified by operation id):
       3 - access("STATE"='CO')
       5 - access("STATE"='CA')
       6 - access("YOUNG_DIVORCED"='TRUE')
    SQL>
    Notice that at line #6 of the plan the new index was used. The VIRTUAL column itself doesn't create data for the fact table; the definition only exists in the data dictionary.
    The YOUNG_DIVORCED_NDX index is real and does consume space. The tradeoff is additional space for the index, but you make the query easier to write because you don't have to recreate the complex condition every time.
    Oracle can work with the complex condition and combine the indexes so this really only helps the query writer. Your UI should be able to hide the query construction from the user so I would avoid the use of VIRTUAL columns and an additional index until you demonstrate you really need it.
    If you provide users with their own RESULT table to store custom query results you could just store the query name and the set of primary keys from the result set. I used ROWIDs in the example but don't use rowid for a real application - use a primary key value that won't change.
    So your UI would let users construct complex dimension queries for 'young_sportsters' and get a result set of primary keys for that. They could save the label 'young_sportsters' and the primary keys in their own work table. Then you can let them run queries that use the primary keys to fetch any other data your active data warehouse contains.
    >
    Did you get the counts for each value of each dimension by doing a separate query with the current "WHERE" clause on each dimension?
    >
    For an Oracle implementation you need to do a count select for each dimension. I haven't tried it, but you might be able to do multiple dimensions in a single query. One query would look like this:
    -- get the dimension counts
    SQL> select beer, count(*) from star_fact group by beer;
    BEER                             COUNT(*)
    N/A                                 56977
    Execution Plan
    Plan hash value: 1692670403
    | Id  | Operation                | Name        | Rows  | Bytes | Cost (%CPU)| Ti
    |   0 | SELECT STATEMENT         |             |     1 |    12 |     3   (0)| 00
    |   1 |  SORT GROUP BY NOSORT    |             |     1 |    12 |     3   (0)| 00
    |   2 |   BITMAP CONVERSION COUNT|             | 56977 |   667K|     3   (0)| 00
    |   3 |    BITMAP INDEX FULL SCAN| BEER_BITMAP |       |       |            |
    SQL>
    Notice that Oracle uses only the index to gather the data.
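    On the "multiple dimensions in a single query" point: GROUPING SETS gives one counting pass with a separate group per dimension, which may avoid the (n-j) separate statements the original poster was worried about. A sketch against the same star_fact table; whether it actually beats separate per-dimension counts is something to verify with the execution plan.
    -- One pass, one group per dimension: each output row has a non-NULL
    -- value in exactly one of the three dimension columns.
    select age, beer, state, count(*) cnt
    from star_fact
    group by grouping sets ((age), (beer), (state));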

  • Duplicate Records in InfoProvider

    Hi,
    I am loading transaction data from flat files into the DataSources.
    Initially I had one request (data from one flat file) loaded into the PSA and InfoCube, with, say, 100 records.
    Later, I loaded another flat file into the PSA with 50 records (without deleting the initial request). Now the PSA holds 150 records.
    But I would like to load only the 50 new records into the InfoCube. When I execute the DTP, it loads all 150 records from the PSA, so the cube ends up with the initial 100 + 150 = 250 records.
    Is there any option by which I can avoid loading the duplicate records into my InfoCube?
    I can see an option in the DTP that says "Get Data by Request". I tried checking it, but no luck.
    How can I solve this issue, and what exactly does the "Get Data by Request" check do?
    Thanks,

    Hi Sesh,
    There is an option in the DTP for loading only the new records. I think you have to select the "Do not allow duplicate records" radio button (I guess)... then try to load the data. I am not sure, but you can look for that option in the DTP...
    Regards,
    Kishore

  • What is the use of 'ignore duplicate records'?

    Hi guru's
    What is the use of 'ignore duplicate records'? Will it not allow duplicate records when you are loading master data?
    Actually, master data will not have duplicate records, so why do we use this option? And without selecting the duplicate-records check box, what will happen?
    Does it apply only to flat files, or to R/3 source systems as well? If it supports flat files, please tell me the procedure for both.
    Thanks
    Reddy

    Hi,
    If you check Ignore Duplicate Records, the system will accept a load that contains duplicate records instead of terminating.
    Actually, master data should not have duplicate records; this option is only useful in certain rare scenarios.
    regards
    SR

  • Help with cleaning up duplicate records

    One of my tables has duplicate records in pairs that need to be cleaned up. I want to clean up the record with the earlier TIMESTAMP.
    ID MODIF_TIME_STAMP
    483070 1/7/2005 11:49
    483070 1/13/2005 17:19
    483071 1/6/2005 11:49
    483071 1/14/2005 17:19
    483072 1/15/2005 11:49
    483072 1/07/2005 17:19
    9000 records
    What is the easiest way to pick only the ID of the earlier timestamp of each pair?
    Thanks in advance.
    Ittichai

    Hello,
    delete from your_tab
    where rowid in
      (select rid from
        (select rowid rid, id, modif_time_stamp,
                row_number() over (partition by id order by modif_time_stamp desc) rn
         from your_tab)
       where rn > 1);
    This query deletes the record with the earlier date for every ID.
    Regards
    Dmytro
    Message was edited by:
    Dmytro Dekhtyaryuk
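    Before deleting, it may be worth previewing the rows that would go, using the same analytic; a small sketch:
    -- Rows the delete above would remove (everything but the latest per id).
    select id, modif_time_stamp
    from (select id, modif_time_stamp,
                 row_number() over (partition by id order by modif_time_stamp desc) rn
          from your_tab)
    where rn > 1
    order by id;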

  • Error RSMPTEXTS~: duplicate record during EHP5 upgrade in phase SHADOW_IMPORT_INC

    Hi experts,
    I found this error during an EHP5 upgrade, in phase SHADOW_IMPORT_INC:
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    SHADOW IMPORT ERRORS and RETURN CODE in SAPK-701DOINSAPBASIS.ERD
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    2EETW000 Table RSMPTEXTS~: Duplicate record during array insert occured.
    2EETW000 Table RSMPTEXTS~: Duplicate record during array insert occured.
    1 ETP111 exit code           : "8"
    Here is also the last part of the log SAPK-701DOINSAPBASIS.ERD:
    4 ETW000 Totally 4 tabentries imported.
    4 ETW000 953984 bytes modified in database.
    4 ETW000  [     dev trc,00000]  Thu Aug 11 16:58:45 2011                                             7954092  8.712985
    4 ETW000  [     dev trc,00000]  Disconnecting from ALL connections:                                       28  8.713013
    4 ETW000  [     dev trc,00000]  Disconnecting from connection 0 ...                                       38  8.713051
    4 ETW000  [     dev trc,00000]  Closing user session (con=0, svc=0000000005C317C8, usr=0000000005C409A0)
    4 ETW000                                                                                8382  8.721433
    4 ETW000  [     dev trc,00000]  Detaching from DB Server (con=0,svchp=0000000005C317C8,srvhp=0000000005C32048)
    4 ETW000                                                                                7275  8.728708
    4 ETW000  [     dev trc,00000]  Now I'm disconnected from ORACLE                                        8648  8.737356
    4 ETW000  [     dev trc,00000]  Disconnected from connection 0                                            84  8.737440
    4 ETW000  [     dev trc,00000]  statistics db_con_commit (com_total=13, com_tx=13)                        18  8.737458
    4 ETW000  [     dev trc,00000]  statistics db_con_rollback (roll_total=0, roll_tx=0)                      14  8.737472
    4 ETW000 Disconnected from database.
    4 ETW000 End of Transport (0008).
    4 ETW000 date&time: 11.08.2011 - 16:58:45
    4 ETW000 1 warning occured.
    4 ETW000 1 error occured.
    1 ETP187 R3TRANS SHADOW IMPORT
    1 ETP110 end date and time   : "20110811165845"
    1 ETP111 exit code           : "8"
    1 ETP199 ######################################
    4 EPU202XEND OF SECTION BEING ANALYZED IN PHASE SHADOW_IMPORT_INC
    I have already tried using the latest version of R3trans.
    Can you help me?
    thanks a lot
    Franci

    Hello Francesca,
    I am facing the same error while performing an EHP5 upgrade. If you know the steps to solve it, please share them.
    Thanks,
    Venkat

  • Allow duplicate key in InfoObject/ compounding length problem

    Hi all.
    I am trying to create an InfoObject with a compound key, consisting of the InfoObject itself plus 4 compound attributes, with a total length of 63 characters (the maximum allowed is 60). As expected, the system doesn't let me activate the InfoObject.
    I am considering moving the compounds into the attributes area, but I don't know whether it is possible to allow duplicate records in an InfoObject.
    or
    Maybe you can help me with some other advice on how to manage this compound-key limitation?
    additional info:
    The InfoObject is a "copy" of a source-system master data table.

    Hi,
    ok, now I see. Well, there is a restriction that the key fields, as in your example, must not be longer than 60 characters in total. So you need to think about using another object as the compound. Isn't that possible? Another option might be to first post the data to an ODS/DSO which has your object as well as the compounding attributes as key fields (as was already recommended), and after that post the value to a new InfoObject which gets its unique value from a number range object that you define specially for this purpose. Add the key fields of that ODS/DSO as navigational attributes to that new object.
    regards
    Siggi

  • Setting change to correct duplicate records

    Hi Experts,
    > I tried loading a flat file to an ODS in the quality system, and I got some duplicate rows which I had to correct manually in the PSA.
    > When I tried loading the same flat file to the same ODS in production, I did not get the 'duplicate records' error message.
    Can you please tell me which setting has to be changed in the quality system to allow duplicate records, or to correct this?
    Help would definitely be rewarded with points.
    Thanks ,
    Santosh ....

    Hi Santhosh,
    Remove check for unique data records.
    <i><b>Unique Data Records</b></i>
    With the Unique Data Records indicator, you determine whether only unique data records are to be updated to the ODS object. This means that you cannot load a data record into the ODS object the key combination for which already exists in the system – otherwise a termination occurs. Only use this setting when you are sure that only unique data records are to be loaded into the ODS object (for example, single documents). A typical application of this is in the loading of mass data. It improves the load performance.
    You can also deselect this indicator again (even if data has already been loaded into the ODS object). This can be necessary if you want to re-post deleted data records using a repair request (see: Tab Page: Updating). In this case, you need to deselect the Unique Data Records indicator before posting the repair request, following which you can then reset the Unique Data Records indicator once more. The regeneration of metadata of the Export DataSource, which takes place when the ODS object is reactivated, has no effect on the existing data mart delta method.
    More info @ <a href="http://help.sap.com/saphelp_nw04/helpdata/en/a6/1205406640c442e10000000a1550b0/content.htm">ODS Object Settings</a>
    Hope it Helps
    Srini

  • Duplicate records in Fact Tables

    Hi,
    We are using BPC 7.0 MS SP7. BPC created duplicate records in the WB and Fac2 tables. We faced a similar issue before, and the solution was to reboot the server and clean up the additional data created. I don't think it is an issue with the script logic files we have. We had the issue across all applications. The data is fine now after the server reboot and running the same logic files. I want to know if anyone has faced this issue and whether there is any solution other than a reboot. I appreciate your help.
    Thanks
    Raj

    Hi Sorin,
    I know this thread is rather old, but I have a problem closely related to it and would appreciate your assistance. I have a client running on 7.0 MS who has been using it for the past 3 years.
    It is a heavily customized system with many batch files running daily to update dimensions, copy data and so on. And yes, we do use custom packages that incorporate stored procedures.
    Recently, with no change in the environment, our FactWB table ballooned out of nowhere. The fact table contains less than 1 GB of data, but FactWB holds 200 GB and has practically paralyzed the system. There is also an equivalent 300 GB increase in the log files.
    We have not been able to find out what caused this, or whether the 200 GB of records in WB are even valid records that were duplicated. Is there a way to troubleshoot this?
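    One way to check whether the WB rows are genuine duplicates is to group the WB fact table on every dimension column and look for repeated keys. A hedged sketch: the tblFactWB<App> naming, the "Finance" application name and the dimension column list are placeholders for your application's actual names.
    -- Repeated dimension-key combinations in the WB fact table (sketch).
    SELECT TIMEID, ENTITY, ACCOUNT, CATEGORY,
           COUNT(*) AS row_cnt, SUM(SIGNEDDATA) AS total_value
    FROM tblFactWBFinance
    GROUP BY TIMEID, ENTITY, ACCOUNT, CATEGORY
    HAVING COUNT(*) > 1;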
