FCC - No key field available

Hi SDN,
  I have a flat file that I need to convert into XML using FCC. The recordset structure is 1 header, multiple line items, followed by 1 trailer record. The problem I am facing is that the header and trailer each carry a field that uniquely identifies them, but the item records have no key field at all.
What are the possible solutions for this scenario?
Your thoughts are highly appreciated.
Example of a sample input file:
*HDR*E43S20070427162139000159CORDEV26      
TESTING     3989971   20070422WW GRAINGER INC                         143372   185057                    +000001234.56USD00120070428STANDARD
TESTING     3989971   20070422WW GRAINGER INC                         143372   185057                    +000000061.73USD00220070428STANDARD
*TRL*E43S200704270000022 
Here *HDR* is the header, *TRL* is the trailer and the rest are item records.
Thank You.
Regards,
Jai Shankar

Hi,
Go through these blogs:
/people/venkat.donela/blog/2005/06/08/how-to-send-a-flat-file-with-various-field-lengths-and-variable-substructures-to-xi-30
/people/michal.krawczyk2/blog/2004/12/15/how-to-send-a-flat-file-with-fixed-lengths-to-xi-30-using-a-central-file-adapter
/people/sap.user72/blog/2005/01/06/how-to-process-csv-data-with-xi-file-adapter
/people/anish.abraham2/blog/2005/06/08/content-conversion-patternrandom-content-in-input-file
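Also, since only your item records lack a key field, one common workaround is to read the whole file generically (one record per line, with the complete line in a single field) and split header, items and trailer in the mapping using the *HDR* and *TRL* markers. A minimal FCC sketch of that approach; the structure name Line and the field name entry are only examples, and the tab separator '0x09' is an assumption that works only if no tab occurs in your data:
Recordset Structure: Line,*
Line.fieldNames: entry
Line.fieldSeparator: '0x09'
Line.endSeparator: 'nl'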
Regards
Hemant
Award points if you find this helpful.

Similar Messages

  • Sender File FCC - Key Field Value not available in file

    Hi All,
    I am new to SAP PI and I am working on a sender FCC. Below is the source file structure. I need to handle multiple substructures: Header,1,Data,*. I think I have to use Key Field Name and Key Field Value, but in the format below there is no indicator to identify which line is the header and which is a data record. Please suggest how to proceed.
    Header Record (Pipe delimited) (Cardinality 1:1)
    Date
    Invoice Number
    Total Amount
    Company Name
    Data Records (Fixed Length) (Cardinality 1:n)
    Date
    Amount
    Country
    Card Number
    etc...
    Sample File Snippet:
    20100430|4123451810|218.50|CC
    20100430    $150.00     INDIA       1234567     
    20100430    $150.00     INDIA       1234567     
    20100430    $150.00     INDIA       1234567     
    20100430    $150.00     INDIA       1234567

    Hi
    Try using the below parameters in FCC
    Header.fieldSeparator    |
    Header.endSeparator    'nl'
    Data.fieldFixedLengths   your field lengths, e.g. 8,12,12,12
    Data.endSeparator         'nl'
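    For completeness, the full parameter set might look like this (assuming the first record can be treated as the header because its cardinality is fixed at 1; the field names and lengths below are only placeholders derived from the sample):
    Recordset Structure: Header,1,Data,*
    Header.fieldNames: Date,InvoiceNumber,TotalAmount,CompanyName
    Header.fieldSeparator: |
    Header.endSeparator: 'nl'
    Data.fieldNames: Date,Amount,Country,CardNumber
    Data.fieldFixedLengths: 8,12,12,12
    Data.endSeparator: 'nl'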
    Regards
    Ramg

  • Key Field Value in FCC

    Hi Experts,
    I have a scenario in PI, where I have 1 Header, n Data Records and 1 Trailer in the source file. This data is coming in CSV format.
    I am using FCC to convert CSV into XML.
    In the FCC, I have used the keyFieldValue parameter. For the Header record the key field value is the constant "H"; for the Trailer record it is the constant "T".
    However, for the Data records the key field value is not constant. The first character of the key field of a Data record is always "D", but the rest of the characters can change.
    Sample File:
    "H","3.04",22/10/2009,16:31:12
    "D2S",21/10/2009,20:00:26,"20044",00666,"S",1
    "D2S",22/10/2009,14:26:20,"20044",00668,"S",1
    "D0S",22/10/2009,08:33:34,"00044",04165,"S",1
    "D0S",22/10/2009,11:59:59,"00044",04166,"S",1
    "T",1393.27,1393.27,8
    Here, the first line is the Header line (key field value "H"), the last line is the Trailer line (key field value "T"), and all lines in between are Data records (key field value starting with "D"). I need to convert this file into XML.
    I have no clue if this can be converted into XML through FCC.
    Any help will be highly appreciated.
    Regards,
    Varun

    >
    Varun Agarwal wrote:
    > I have no clue if this can be converted into XML through FCC.
    Write a simple module. The module will replace the Dxx key values with the constant D (you can use a simple regex function for this).
    After the module, use the MessageTransformBean to do the FCC for you.
    The module might sound complex, but trust me, it is simple logic to implement, and you can then easily do the FCC with the MessageTransformBean.
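    For reference, the core of such a module could look like the sketch below. This is only the payload transformation plus a small test harness, not the full EJB adapter-module boilerplate that PI requires, and it assumes the record type is always the first quoted field of the line:

        import java.util.regex.Pattern;

        public class KeyFieldNormalizer {

            // Matches a quoted record type starting with D at the beginning of a line,
            // e.g. "D2S" or "D0S", so it can be normalized to the constant key "D"
            // that the FCC keyFieldValue parameter expects.
            private static final Pattern DATA_KEY = Pattern.compile("(?m)^\"D[^\"]*\"");

            public static String normalize(String payload) {
                return DATA_KEY.matcher(payload).replaceAll("\"D\"");
            }

            public static void main(String[] args) {
                String sample = "\"H\",\"3.04\",22/10/2009,16:31:12\n"
                        + "\"D2S\",21/10/2009,20:00:26,\"20044\",00666,\"S\",1\n"
                        + "\"T\",1393.27,1393.27,8\n";
                System.out.print(normalize(sample));  // the D2S line now starts with "D"
            }
        }

    After this replacement, the MessageTransformBean can run the content conversion with the constant key field values H, D and T.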

  • Problem in key field name in FCC

    hi,
    I am using FCC on the sender side.
    Source file is
    1;;PY;X101;20060630;06;20060630;GBP;Ref.1;Payroll June 2006; (Header)
    1;1;40;S2225000;;1050;;;;;;;;;;;;;;;X101003;;;;;;;;;;;;;;;
    1;2;240;S2225000;;4563;;;;;;;;;;;;;;;X101004;;;;;;;;;;;;;;;
    1;3;31;3100001;;5013;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
    1;4;31;3100002;;600;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
    2;;PY;X101;20060630;06;20060630;GBP;Ref.2;Payroll June 2006;  (Header)
    2;1;40;S2225000;;530;;;;;;;;;;;;;;;X101003;;;;;;;;;;;;;;;
    2;2;40;S2225000;;2490;;;;;;;;;;;;;;;X101004;;;;;;;;;;;;;;;
    2;3;31;3100002;;3020;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
    The first field in the header is the key field identifying the header, and it increments with every occurrence of the header.
    So for the first occurrence of the header, the first field of the header is 1, the first field of each item is also 1 (same as the header), and the second field of the item is now the key field, incrementing with every occurrence of an item within the header. So there is no constant value for the key field.
    How do I perform FCC for this type of structure?
    Regards,
    Loveena

    Hi,
    You can create a custom adapter module and add it to your sender adapter. It will add a unique identifier to your header and item records, which you can then use as the key field in FCC.
    For more details refer to this forum
    https://forums.sdn.sap.com/click.jspa?searchID=-1&messageID=6143385
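    As an illustration, the tagging logic of such a module could look like the sketch below (again only the core logic, not the adapter-module boilerplate). It assumes, based on your sample, that header lines can be recognized by their empty second field ("1;;PY;...") while item lines always have it filled ("1;1;40;..."):

        public class RecordTagger {

            // Prefixes every line with a constant key: H for header lines, I for items.
            // FCC can then use this new first field as keyFieldName with the constant
            // keyFieldValues H and I.
            public static String tag(String payload) {
                StringBuilder out = new StringBuilder();
                for (String line : payload.split("\r?\n")) {
                    String[] fields = line.split(";", -1);
                    boolean isHeader = fields.length > 1 && fields[1].isEmpty();
                    out.append(isHeader ? "H;" : "I;").append(line).append('\n');
                }
                return out.toString();
            }
        }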
    Thanks
    Amit

  • FCC configuration in Sender File Channel Without any key field name

    Hi Everyone,
    We have the below flat file generated from ECC using a standard tcode.
    The flat file is a fixed-length file, and the first record is a header record followed by line items.
    There is no key field in the file.
    Can we read the file and convert it into XML without any key field?
    I want to understand whether the FCC configuration for the above file can be done without a key field or not.
    Thanks,
    Vertika

    Hello,
    AFAIK, I really doubt you can convert this file into XML using plain FCC alone.
    So technically, there are two options: either read each line one by one (generic FCC) and do the conversion in your mapping, or write a custom module that reads your input file and converts it into XML. A sketch of the first option follows the link below.
    Configuring Generic Sender File CC Adapter
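    If you go for the first option (generic reading as in the blog above, i.e. one record per line with the complete line in a single field), the split into header and items can then be done in the mapping. A rough sketch of that logic, assuming the header is always the first line; the field offsets are placeholders, since the real fixed-length layout depends on the tcode's output:

        import java.util.ArrayList;
        import java.util.List;

        public class FixedLengthSplitter {

            // After generic FCC every line arrives as one string; the first line
            // is the header record, all following lines are items.
            public static List<String[]> parseItems(String[] lines) {
                List<String[]> items = new ArrayList<>();
                for (int i = 1; i < lines.length; i++) {  // index 0 is the header
                    String line = lines[i];
                    items.add(new String[] {
                            line.substring(0, 8).trim(),   // placeholder: first field
                            line.substring(8, 20).trim(),  // placeholder: second field
                            line.substring(20).trim()      // placeholder: the rest
                    });
                }
                return items;
            }
        }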
    Thanks
    Amit Srivastava

  • Key field not available in report

    Hi,
    I have a DSO and in that I have 1 key field and 20 data fields.
    I was able to load the data successfully in the DSO. And I am creating a report based on this DSO.
    But when I create a report, I am unable to view the key field in it.
    I went to the Query Designer and checked; I could not see that particular key field there either, only the data fields.
    Could someone please guide me in this regard???
    Would definitely assign points for the solution...
    Thanks & Regards.

    Hi,
    Please uncheck the 'Attribute Only' checkbox in the General tab of that key field InfoObject.
    If an InfoObject is flagged as 'Attribute Only', it will not be displayed in the Query Designer.
    Don't forget to give me the points.
    thanks,
    CK

  • DSO Key Fields Design

    Hi Experts,
    I have a Sales Schedule DSO and a Delivery Item DSO. I would like to combine these two DSOs into one new DSO in order to get the open order quantity (order quantity - delivery quantity).
    Fields in the Sales Schedule DSO:
    Sales Document ( Key Field )
    Sales Document Item ( Key Field )
    Schedule Line ( Key Field )
    Material Avail Date ( Data Field )
    Order Quantity ( Key Figure )
    Fields in the Delivery Item DSO:
    Delivery Number ( Key Field )
    Delivery Item Number ( Key Field )
    Document number of the reference document ( Data Field ), which refers to the field Sales Document in the 1st DSO
    Item Number of the reference document ( Data Field ), which refers to the Sales Document Item in the 1st DSO
    Delivery Quantity ( Key Figure )
    I would like to create the new DSO based on the common fields Sales Document (DSO1) = Document number of the reference document (DSO2) and Sales Document Item (DSO1) = Item Number of the reference document (DSO2).
    I require fields like Sales Document, Sales Document Item, Material Avail Date, Order Quantity and Delivery Quantity (plus some other, non-mandatory fields).
    Please suggest which key fields and data fields the new DSO should have.
    Your quick help is really appreciated.
    Thanks
    Robert.

    Hi Robert,
    Define the key as a combination of the keys of both DSOs:
    Sales Document ( Key Field )
    Sales Document Item ( Key Field )
    Schedule Line ( Key Field )
    Delivery Number ( Key Field )
    Delivery Item Number ( Key Field )
    Map the sales doc from the delivery to the key as well.
    This will ensure you have a unique key and your data will be aggregated correctly when reporting.
    Cheers,
    Diego

  • Key fields and data fields re-arranged

    Hello SDN,
    Which design contains more records because it shows more granularity?
    1. An ODS with more InfoObjects assigned as keys.
    2. An ODS with fewer InfoObjects assigned as keys.
    This assumes the total number of InfoObjects remains the same, and that the data fields can contain characteristics usable as keys or non-keys, depending on the design.
    My understanding is that with more keys, less detail is available due to aggregation. Could you kindly confirm whether this is correct?
    I am asking because I am trying to re-arrange the keys by turning data fields into keys, but I am not sure how this will turn out / behave.
    Thanks,
    Suzie

    Hi,
    "BUT! I cannot say that it will increase. That means, I think the record number remains the same!"
    I do not think that this is right.
    For example:
    1)
    Key fields:
    Billing Document
    Data fields:
    Sales Office
    Customer
    Net Sales
    In the update rules you will see Overwrite for all data fields. If the sales office changes for a record, the old value is overwritten by the new value. So before and after the change, the number of records is still 1.
    2)
    Key fields:
    Billing Document
    Sales Office
    Data fields:
    Customer
    Net Sales
    In the update rules you again see Overwrite for all data fields (but now not for Sales Office). If the sales office changes for a record, the old value is not overwritten; the change creates a second record in the active table. So before the change there was 1 record, and after the change there are 2.
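    A quick illustration of case 2 with made-up values (Billing Document / Sales Office / Customer / Net Sales):
    Before the change:  4711  A  C1  100
    After the change:   4711  A  C1  100
                        4711  B  C1  100
    The old record survives because its key (4711, A) differs from the key of the changed record (4711, B).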
    With rgds,
    Anil Kumar Sharma .P

  • No query fields available in InfoProvider DSO (0PM_DS02_Q0001)

    Hi,
    I have a problem with a Quality Management (QM) query, Mean Time To Repair (MTTR) and Mean Time Between Repair (MTBR), 0PM_DS02_Q0001. This query is on 0PM_DS02, which loads its data from the 0PM_DS01 data mart. The key figures in the query are:
    MTTR (days),
    MTBR (days) ,
    MTTR (h),
    MTBR (h)  ,
    no.of outages,
    tot. no of notifications,
    actual outage time(h) ,
    effective time between outages(h)
    Except for 'tot. no of notifications', none of these fields is available in the 0PM_DS02 DSO.
    Without these fields no report executes; the report just says 'no applicable data found'.
    Please advise and help resolve the issue.
    thanks,
    sapsdn

    Hi Vikram:
      Some questions for you:
    - Did you apply any SAP Note to try to solve this issue?
    - Is the dataflow that writes data to the 0PM_DS02 DSO in version 3.5, or did you migrate it to 7.0?
    - What are the Key Fields both on 0PM_DS01 and 0PM_DS02 DSOs?
    - How many records does the 0PM_DS01 DSO have?
    - On the Dataflow that goes to 0PM_DS01 is the field ROCANCEL mapped both to the 0STORNO and 0RECORDMODE InfoObjects?
    - What happens when you try to load data into the 0PM_DS02 DSO? Do you get an error message? Does the 0PM_DS02 DSO remain empty, or are some records loaded into it?
    - Did you enhance the DataSource or add InfoObjects to the 0PM_DS01 or 0PM_DS02 DSOs?
    Regards,
    Francisco Milán.
    P.S. Please note that the required logic for the key fields of 0PM_DS02 is available in the start routine.

  • DSO Key Field Design

    Hi Gurus,
    We are using a DSO with Billing Document, Billing Item and Material ID as key fields. But I found only Billing Document and Billing Item as key fields in tables VBRP and VBRK (we are using the billing DataSource); I did not find Material Number as a key field in those two tables. Do we need to keep Material Number as a key field in the DSO? What aspects do I need to consider before taking any decision regarding Material Number as a key field in the DSO? Please can anyone throw some light on this topic.
    Thanks,
    Suryam.

    Hello,
    I am not sure how it would work, but you can try it yourself by changing a material in a billing item line and loading it into your DSO with material as a key. I am still confused, though, why you would need Material Number as a key in your DSO when the lowest level of granularity available in the source itself is the line item. In other words, I can see no positive effect of adding material as a key field. And there is a chance it may have a negative effect (it may not overwrite changes coming from R/3, as the keys differ between the two systems; this needs to be checked).
    Thanks,

  • How to merge key field from external source system with SAP R/3 master

    Hi,
    In our SAP BW 7.0 system, master data for 0GL_ACCOUNT comes from SAP R/3, along with the transactional data records for the standard FI cubes. A further set of transactional data comes from an external source system, a flat file, into a custom DSO (ZDSO_FI), which also has this GL Account field.
    The flat file's GL account field, GL_file, has to be mapped/merged with the standard 0GL_ACCOUNT field so that when the transactional data for the custom DSO ZDSO_FI is loaded (with the transformation mapping GL_file > 0GL_ACCOUNT), the system automatically refers to the existing 0GL_ACCOUNT master data. How can this be done?
    To illustrate the scenario, say I have 5 records in 0GL_ACCOUNT, loaded from SAP R/3 into SAP BW:
    0GL_ACCOUNT   Short Description   Source System
    100           D1                  R/3
    200           D2                  R/3
    300           D3                  R/3
    400           D4                  R/3
    500           D5                  R/3
    Now suppose my flat file has the following sample transactional data to be uploaded into ZDSO_FI:
    GL_file      Key Figure1
    400          789
    200          567
    Then, after uploading this transactional data into ZDSO_FI (with the transformation mapping GL_file > 0GL_ACCOUNT), the 0GL_ACCOUNT data becomes:
    0GL_ACCOUNT   Short Description   Source System
    400
    200
    100           D1                  R/3
    200           D2                  R/3
    300           D3                  R/3
    400           D4                  R/3
    500           D5                  R/3
    Note that the system did not match the incoming GLs from the flat file against the already available master data, although the field is mapped to 0GL_ACCOUNT in the transformation; instead it created 2 new rows for the GL accounts coming from the external system. Because of this I am not able to perform calculations that combine the standard FI cube and ZDSO_FI with GL Account as the key field. I need to synchronise these values on GL Account to proceed with further calculations, and I am badly stuck.
    Could anyone please throw some light on how to achieve this seemingly simple requirement?
    Thanks in advance.
    Nirmit

    Better post this thread in the Enterprise Data Warehousing forum.

  • Data Extraction and ODS/Cube loading: New date key field added

    Good morning.
    Your expert advice is required on the following:
    1. A data extract was done previously from a source with a full upload to the ODS and cube. An event is triggered from the source when data is available, and the process chain first clears all the data in the ODS and cube and then reloads, activates, etc.
    2. In the ODS, the 'forecast period' field has now been moved from the data fields to the key fields, as the user would like to report per period in future. The source will in future only provide the data for a specific period, not all the data as before.
    3. Data must be appended in future.
    4. The current InfoPackage for the ODS is a full upload.
    5. The 'old' data in the ODS and cube must not be deleted, as the source cannot provide it again. Reporting will be per forecast period key in future.
    I am not sure what to do in BW as far as the InfoPackages are concerned, loading the data and updating the cube.
    My questions are:
    Q1) How will I ensure that BW will append the data for each forecast period to the ODS and cube in future? What do I check in the InfoPackages?
    Q2) I have now removed the process chain event that used to delete the data in the ODS and cube before reloading it again. Was that the right thing to do?
    Your assistance will be highly appreciated. Thanks
    Cornelius Faurie

    Hi Cornelius,
    Q1) How will I ensure that BW will append the data for each forecast period to the ODS and cube in future? What do I check in the InfoPackages?
    -->> Try to load the data into the ODS in Overwrite mode with a full update, as before (this adds new records and updates previous records with the latest values). Then push the delta from this ODS to the cube.
    If the existing ODS loads in Addition mode, introduce one more ODS with the same granularity as the source, load it in Overwrite mode (delta if possible, otherwise full), and push only the delta onwards.
    Q2) I have now removed the process chain event that used to delete the data in the ODS and cube before reloading it again. Was that the right thing to do?
    --> Yes, it is correct. Otherwise you would lose the historic data.
    Hope it Helps
    Srini

  • Changing the length of a key field in a table

    Hi,
    I want to increase the length of a field from 2 to 4 in a standard SAP table and deliver the change to customers. The field is a key field of the table and is also used in views and view clusters.
    What are the implications of this change for the customers? The customers will already have data in this field and they must not lose any of it. Will the existing data remain at length 2, or do they have to run some conversion?
    Regards,
    Srini.

    hi,
    The database table can be adjusted to the changed definition in the ABAP Dictionary in three different ways:
    By deleting the database table and creating it again. The table on the database is deleted, the inactive table is activated in the ABAP Dictionary, and the table is created again on the database. Data existing in the table is lost.
    By changing the database catalog (ALTER TABLE). The definition of the table on the database is simply changed. Existing data is retained. However, indexes on the table might have to be built again.
    By converting the table. This is the most time-consuming way to adjust a structure.
    If the table does not contain any data, it is deleted in the database and created again with its new structure. If data exists in the table, there is an attempt to adjust the structure with ALTER TABLE. If the database system used is not able to do so, the structure is adjusted by converting the table.
    The following example shows the steps necessary during a conversion.
    Starting situation: Table TAB was changed in the ABAP Dictionary; the length of field 3 was reduced from 60 to 30 places.
    Active version:   Field 1 NUMC 6,  Field 2 CHAR 8,  Field 3 CHAR 60
    Inactive version: Field 1 NUMC 6,  Field 2 CHAR 8,  Field 3 CHAR 30
    The ABAP Dictionary therefore has an active version (field 3 with a length of 60 places) and an inactive version (field 3 with 30 places) of the table.
    The active version of the table was created in the database, which means that field 3 currently has 60 places in the database. A secondary index with the ID A11, which was also created in the database, is defined for the table in the ABAP Dictionary. The table already contains data.
    Step 1: The table is locked against further structure changes. If the conversion terminates due to an error, the table remains locked. This lock mechanism prevents further structure changes from being made before the conversion has been completed correctly; data could be lost in such a case.
    Step 2: The table in the database is renamed. All the indexes on the table are deleted. The name of the new (temporary) table is formed from the prefix QCM and the table name; the temporary table for table TAB is therefore QCMTAB.
    Step 3: The inactive version of the table is activated in the ABAP Dictionary. The table is created on the database with its new structure and with the primary index. After this step, the structure of the database table is the same as the structure in the ABAP Dictionary, but the database table does not contain any data yet.
    The system also tries to set a database lock for the table being converted. If the lock is set, application programs cannot write to the table during the conversion. The conversion is continued, however, even if the database lock cannot be set. In that case application programs can write to the table, and since not all of the data might have been loaded back into the table yet, the table data might become inconsistent.
    You should therefore always make sure that no applications access the table being converted during the conversion process.
    Step 4: The data is loaded back from the temporary table (QCM table) to the new table (with MOVE-CORRESPONDING). The data then exists in both the database table and the temporary table. When you reduce the size of fields, for example, the extra places are truncated when the data is reloaded.
    Since the data exists in both the original table and the temporary table during the conversion, the storage requirements increase during the process. You should therefore verify that sufficient space is available in the corresponding tablespace before converting large tables.
    There is a database commit after every 16 MB when the data is copied from the QCM table to the original table; a conversion process therefore needs 16 MB of resources in the rollback segment. The existing database lock is released with the commit and then requested again before the next data area is converted.
    When you reduce the size of keys, only one record can be reloaded if there are several records whose keys can no longer be distinguished. It is not possible to say which record this will be. In such a case you should clean up the table data before converting.
    Step 5: The secondary indexes defined in the ABAP Dictionary for the table are created again.
    Step 6: The temporary table (QCM table) is deleted.
    Step 7: The lock set at the beginning of the conversion is deleted.
    If the conversion terminates, the table remains locked and a restart log is written.
    Caution: The data of a table is not consistent during a conversion. Programs therefore should not access the table while it is being converted; otherwise a program could, for example, read incorrect data, since not all the records have been copied back from the temporary table yet. Conversions therefore should not run during production operation! You must at least deactivate all applications that use the tables to be converted.
    You must clean up terminated conversions; programs that access the table might otherwise run incorrectly. In this case you must find out why the conversion terminated (for example, an overflow of the corresponding tablespace), correct the problem, and then continue the terminated conversion.
    Since the data exists in both the original table and the temporary table during the conversion, the storage requirements increase. If the tablespace overflows when the data is reloaded from the temporary table, the conversion terminates. In that case you must extend the tablespace and restart the conversion in the database utility.
    If you shorten the key of a table (for example when you remove or shorten the field length of key fields),
    you cannot distinguish between the new keys of existing records of the table. When you reload the data
    from the temporary table, only one of these records can be loaded back into the table. It is not possible
    to say which record this will be. If you want to copy certain records, you have to clean up the table
    before the conversion.
    During a conversion, the data is copied back to the database table from the temporary table with the
    ABAP statement MOVE-CORRESPONDING. Therefore only those type changes that can be executed
    with MOVE-CORRESPONDING are allowed. All other type changes cause the conversion to be
    terminated when the data is loaded back into the original table. In this case you have to recreate the old
    state prior to conversion. Using database tools, you have to delete the table, rename the QCM table to
    its old name, reconstruct the runtime object (in the database utility), set the table structure in the
    Dictionary back to its old state and then activate the table.
    If a conversion terminates, the lock entry for the table set in the first step is retained. The table can no
    longer be edited with the maintenance tools of the ABAP Dictionary (Transaction SE11).
    A terminated conversion can be analyzed with the database utility (Transaction SE14) and then
    resumed. The database utility provides an analysis tool with which you can find the cause of the error
    and the current state of all the tables involved in the conversion.
    You can usually find the precise reason for termination in the object log. If the object log does not
    provide any information about the cause of the error, you have to analyze the syslog or the short dumps.
    If there is a terminated conversion, two options are displayed as pushbuttons in the database utility:
    After correcting the error, you can resume the conversion where it terminated with the Continue
    adjustment option.
    There is also the Unlock table option, which only deletes the existing lock entry for the table.
    You should never choose Unlock table for a terminated conversion if the data only exists in the temporary table, i.e. if the conversion terminated in step 3 or 4.
    Hope this is helpful. Do reward.

  • Add a new key field in an InfoObject

    Hi all,
    I have to add a new key field to a standard InfoObject (0VENDOR) in order to extract data correctly from a table in R/3 that has both the vendor and the plant as keys.
    Doubts:
    1. the 0VENDOR InfoObject is already used in several InfoCubes
    2. it contains data
    3. we have another object built with reference to 0VENDOR
    Will there be any problem with the existing InfoCubes and queries if I change the structure of this InfoObject?
    Do I have to delete its data first?
    Any problem with the object built with reference to 0VENDOR?
    I have just thought of updating the 0VENDOR object itself (this would avoid lots of substitutions and a lot of work) instead of developing a brand-new object.
    Do you agree with this kind of solution, or would you do something different?
    Thanks
    Elisa

    Hi Elisa,
    "In R/3 the vendor sub-range is blank and we use only one Purch Org, so no matter."
    This is OK but remind that it shall remain like this ALWAYS; because if you design your model with these assumptions and later you start getting sub ranges and/or a second P.O then you can forget your model...
    Replacing 0VENDOR by ZVENDOR completely will depend on your requirement; I don't know which content you are using but you should try to keep 0VENDOR as such because ZVENDOR will be compounded with 0PLANT and that's not the same... (e.g. the key will be displayed like "PLANT/VENDOR" in reports unless you drilldown the PLANT in front of ZVENDOR... your users may perhaps not appreciate...)
    In this matter, how are you going to proceed?
    - fill the vendor ID into ZVENDOR and have 0PLANT as the compounding key? In this case you'll have to add the 0VENDOR attributes to ZVENDOR as well. With this scenario you will end up with redundant data, since the attributes of 0VENDOR will be the same for each 0PLANT/ZVENDOR combination.
    - wiser: have 0VENDOR and 0PLANT both in the compounding key. This way ZVENDOR could have a CHAR1 key that you leave empty (a "dummy" key you wouldn't use in reports).
    The good thing here is that you don't have to replicate your 0VENDOR design in ZVENDOR; the bad thing for you is that you NEED to keep both InfoObjects and thus manage your DIM issue.
    Your approach to fill the cubes is absolutely right! What are the sizes of your cubes?
    You should take this opportunity to check your cubes' data model:
    - Are the dimensions well designed? (Run report SAP_INFOCUBE_DESIGNS to see the cardinality of your DIMs.) I am sure this can be improved.
    - Are the cubes partitioned? Every cube should be partitioned for performance reasons.
    - If the cubes are huge, wouldn't it be good to do a logical partitioning? E.g. have one cube per year and a multicube on top...
    Finally, use dimension number-range buffering when loading data into an empty cube, and other techniques like dropping all indexes, in order to speed up your load process and minimize reporting downtime. If your reports are based on multicubes and your model is not too complex, you could even keep the copy cube instead of moving your data back to the original; you would then just have your multicube transport to make the switch to the new model available for reporting...
    hoping this helps you go down the right path...
    regards,
    Olivier.

  • Key field in the sender file adapter of a file-to-proxy interface

    Hi all
    I have a flat file:
    1820000000|
    0010|XXX
    0020|XXX
    0040|XXX
    0050|XXX
    where 1820000000 is my PO number and the 0010|XXX ... 0050|XXX lines are the items.
    I don't know what key field to use in this case in my sender adapter configuration. Please help me.
    thanking you

    Hi,
    Read the file without using a key field. Just define the FCC parameters for Header and Items, and PI will pick up the file.
    DT_Source
       Record
         Header 0..1
           PO_Number String 0..1
         Items  0..unbounded
           Items String 0..1
    Do the FCC:
    Header.fieldNames PO_Number
    (other configuration for Header)
    Items.fieldNames Items
    (other configuration for Items)
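    To complete the sketch, the remaining parameters could look like this. Note the assumptions: the recordset structure is Header,1,Items,*, the item line is split into its two pipe-separated parts (so the data type would then need two item fields; the names ItemNo and ItemValue are placeholders), and the separators are taken from your sample file:
    Recordset Structure: Header,1,Items,*
    Header.fieldSeparator: |
    Header.endSeparator: 'nl'
    Items.fieldNames: ItemNo,ItemValue
    Items.fieldSeparator: |
    Items.endSeparator: 'nl'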
    Thanks
    Gaurav
