Best practice for handling time dependency in flat file loads

Hi folks,
This is a fairly common situation: handling time dependency for flat file loads. Can anyone share their experience with handling this? One common approach is to maintain the time validity changes within the flat file itself, where they are easily changeable by the user, but that is prone to user input errors. Another would be to handle this via a DSO. Possibly the data could also be entered directly in BI using IP planning layouts. There is an IP planning function that allows loading flat file data, but it only works without the time dependency factor.
It would be great to hear your thoughts, or if anyone can point to a best practice document for such a scenario.
Thanks.
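For what it's worth, one way to reduce the input-error risk of the first approach is to validate the file before it is loaded. Below is a minimal sketch in Java; the layout (KEY,ATTRIBUTE,DATEFROM,DATETO with one header row) and the yyyyMMdd date format are illustrative assumptions, not a fixed convention:

import java.io.BufferedReader;
import java.io.FileReader;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.*;

public class ValidityCheck {
    // Assumed layout: KEY,ATTRIBUTE,DATEFROM,DATETO, one header row,
    // dates as yyyyMMdd (the format BW uses for 0DATEFROM/0DATETO).
    private static final DateTimeFormatter F = DateTimeFormatter.ofPattern("yyyyMMdd");

    public static void main(String[] args) throws Exception {
        Map<String, List<LocalDate[]>> intervals = new HashMap<>();
        try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
            String line = in.readLine(); // skip the header row
            int row = 1;
            while ((line = in.readLine()) != null) {
                row++;
                String[] f = line.split(",");
                if (f.length < 4) {
                    System.out.println("Row " + row + ": too few fields");
                    continue;
                }
                LocalDate from = LocalDate.parse(f[2].trim(), F);
                LocalDate to = LocalDate.parse(f[3].trim(), F);
                if (from.isAfter(to))
                    System.out.println("Row " + row + ": DATEFROM after DATETO");
                intervals.computeIfAbsent(f[0], k -> new ArrayList<>())
                         .add(new LocalDate[]{from, to});
            }
        }
        // Overlapping validity periods for the same key would still load,
        // but give wrong query results, so flag them here.
        for (Map.Entry<String, List<LocalDate[]>> e : intervals.entrySet()) {
            List<LocalDate[]> list = e.getValue();
            list.sort((x, y) -> x[0].compareTo(y[0]));
            for (int i = 1; i < list.size(); i++)
                if (!list.get(i)[0].isAfter(list.get(i - 1)[1]))
                    System.out.println("Key " + e.getKey() + ": overlapping validity periods");
        }
    }
}

A check like this could run as a step before the load, so bad intervals are rejected before they ever reach the DSO.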


Similar Messages

  • What are the settings for datasource and infopackage for flat file loading

Hi,
I'm trying to load data from a flat file to a DSO. Can anyone tell me what the settings for the datasource and infopackage are for flat file loading?
Please let me know.
Regards
Kumar

Loading of transaction data in BI 7.0: a step-by-step guide on how to load data from a flat file into the BI 7 system
    Uploading of Transaction data
Log on to your SAP system.
Transaction code RSA1 takes you to Modeling.
    1. Creation of Info Objects
• In the left panel, select InfoObjects
• Create an info area
• Create info object catalogs (characteristics and key figures) by right-clicking the created info area
• Create new characteristics and key figures under the respective catalogs according to the project requirements
• Create the required info objects and activate them.
    2. Creation of Data Source
• In the left panel, select DataSources
• Create an application component (AC)
• Right-click the AC and create a datasource
• Specify the datasource name, source system, and data type (transaction data)
• In the General tab, give short, medium, and long descriptions.
• In the Extraction tab, specify the file path, header rows to be ignored, data format (CSV) and data separator (,) (see the sample file after these steps)
• In the Proposal tab, load example data and verify it.
• In the Fields tab, you can give the technical names of the info objects in the template; then you do not have to map them during the transformation, as the server will map them automatically. If you do not map them in the Fields tab, you have to map them manually during the transformation in the InfoProviders.
• Activate the datasource and read the preview data under the Preview tab.
• Create an infopackage by right-clicking the datasource, and in the Schedule tab click Start to load data to the PSA. (Make sure the flat file is closed during loading.)
    3. Creation of data targets
• In the left panel, select InfoProvider
• Select the created info area and right-click to create an ODS (DataStore object) or cube
• Specify a name for the ODS or cube and click Create
• From the template window, select the required characteristics and key figures and drag and drop them into the Data Fields and Key Fields
• Click Activate.
• Right-click the ODS or cube and select Create Transformation.
• In the source of the transformation, select the object type (datasource) and specify its name and source system. Note: the source system will be a temporary folder or package into which the data is stored.
• Activate the created transformation
• Create a Data Transfer Process (DTP) by right-clicking the data target
• In the Extraction tab, specify the extraction mode (full)
• In the Update tab, specify the error handling (request green)
• Activate the DTP, and in the Execute tab click the Execute button to load data into the data targets.
    4. Monitor
Right-click the data target, select Manage, and in the Contents tab choose Contents to view the loaded data. There are two tables in an ODS, the new table and the active table; to move data from the new table to the active table, you have to activate it after checking the loaded data. Alternatively, the monitor icon can be used.
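To make the Extraction tab settings above concrete, a matching file could look like this (purely illustrative field names; one header row to ignore, comma as separator):

CUSTOMER,MATERIAL,CALDAY,QUANTITY,AMOUNT
C1000,M2000,20110101,10,250.00
C1000,M2001,20110102,5,125.00
C1001,M2000,20110102,8,200.00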
Loading of master data in BI 7.0:
Log on to your SAP system.
Transaction code RSA1 takes you to Modeling.
    1. Creation of Info Objects
• In the left panel, select InfoObjects
• Create an info area
• Create info object catalogs (characteristics and key figures) by right-clicking the created info area
• Create new characteristics and key figures under the respective catalogs according to the project requirements
• Create the required info objects and activate them.
    2. Creation of Data Source
• In the left panel, select DataSources
• Create an application component (AC)
• Right-click the AC and create a datasource
• Specify the datasource name, source system, and data type (master data attributes, texts, hierarchies)
• In the General tab, give short, medium, and long descriptions.
• In the Extraction tab, specify the file path, header rows to be ignored, data format (CSV) and data separator (,) (a sample file follows these steps)
• In the Proposal tab, load example data and verify it.
• In the Fields tab, you can give the technical names of the info objects in the template; then you do not have to map them during the transformation, as the server will map them automatically. If you do not map them in the Fields tab, you have to map them manually during the transformation in the InfoProviders.
• Activate the datasource and read the preview data under the Preview tab.
• Create an infopackage by right-clicking the datasource, and in the Schedule tab click Start to load data to the PSA. (Make sure the flat file is closed during loading.)
    3. Creation of data targets
• In the left panel, select InfoProvider
• Select the created info area and right-click to select Insert Characteristic as InfoProvider
• Select the required info object (e.g., Employee ID)
• Under that info object, select Attributes
• Right-click the attributes and select Create Transformation.
• In the source of the transformation, select the object type (datasource) and specify its name and source system. Note: the source system will be a temporary folder or package into which the data is stored.
• Activate the created transformation
• Create a Data Transfer Process (DTP) by right-clicking the master data attributes
• In the Extraction tab, specify the extraction mode (full)
• In the Update tab, specify the error handling (request green)
• Activate the DTP, and in the Execute tab click the Execute button to load data into the data targets.
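As with the transaction data, a sample attribute file for the Employee ID example could look like this (illustrative names; the DATEFROM/DATETO columns are only needed if the attributes are time dependent):

EMPLOYEEID,JOBROLE,DATEFROM,DATETO
10001,Analyst,20100101,20101231
10001,Consultant,20110101,99991231
10002,Developer,20100601,99991231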

  • Selection in IP for flat file Load

Hi experts,
I want to know whether I can have selections available for an InfoPackage created for a flat file datasource.
Your early response is highly appreciated.
    Regards
    Patil

    Hi,
Yes, you can. But at the datasource level you have to tick the selection fields, so that at the InfoPackage level those fields will be available for selection.
    Regards
    Sankar

Data loading mechanism for flat file loads for hierarchies

    Hi all,
We have a custom hierarchy which gets its data from a flat file stored on the central server; that file gets its data from MDM through XI. Now, if we delete a few records in MDM, the data picked up in BI will no longer contain the deleted records. Does this mean that the hierarchy deletes the data it already contains and does a full load, or does it mean that every time we load data into BI we delete the records from the tables in BI and reload?
Also, we have some web service text datasources (loaded from XI).
Is the logic for updating the hierarchy records different compared to the existing web service interfaces?
Can anyone please explain the mechanism behind these data loads and differentiate it for the loads mentioned above?

Create the ODS with the correct keys and run full loads from the flat files. You can have a cube pulling data from the ODS:
Load data into the ODS.
Create the cube.
Generate the export datasource (RSA1 > right-click the ODS > Generate Export DataSource).
Replicate the export datasource (RSA1 > Source Systems > DataSource Overview > search for the datasource starting with 8 plus the ODS name).
Press the '+' button and activate the transfer rules and communication structure.
Create the update rules for the cube with the above InfoSource (same as the '8ODSNAME' datasource).
Create an infopackage with initial load (in the Update tab).
Now load data to the cube.
Now load new full loads to the ODS.
Create a new infopackage for delta (in the Update tab).
Run the infopackage. (Any changed or new records will be loaded to the cube.)
Regards,
BWer
Assign points if helpful.

  • How to create source system for flat file loads

How do I create a source system to load a flat file?
    I have a screen that asks for the following:
    Logical system name
    Source system name
    Type and release
    What should I enter for these?
    I am not Basis and Basis was supposed to set this up.

Hi Sam,
Steps to create a flat file source system:
Step 1: Select Source Systems under Modeling in the AWB.
Step 2: On the Source Systems root node, open the context menu and choose Create...
Step 3: Select the required source system icon (in your case, the PC icon).
It then asks for a logical system name and a source system name; you can specify any names you wish, for example:
Logical system name: PC_FF
Source system name: Flat file source system
Press the Continue button.
Check the activation icon (the glowing lamp symbol) to confirm successful creation.

  • Error while creating process chain for flat file loading

    Hi All,
I have created a process chain to load transaction data (full load) from a flat file which is on my computer:
Start > Load > DTP > Delete Index > DTP loading cube > Create Index
But the system throws the error "An upload from the client workstation in the background is not possible."
I don't know why this error occurs.
Can someone help me?
    Regards
    Mamta

    Hi Mamta,
Basically, if you want to load the datasource from a flat file using a process chain, the flat file has to be placed on the application server. We can't load the flat file when it is located on the client's local workstation (the file on your PC).
So you had better remove the infopackage step from the process chain and load the infopackage manually. Once it has completed, you can start the process chain with the following steps:
Start > DTP > Delete Index > DTP loading cube > Create Index
    Hope it is clear & helpful!
    Regards,
    Pavan

Flat file loading: 'Initialization without data transfer' is disabled in BI 7.0

    Hi experts,
When loading through a flat file in BI 7.0, at the InfoPackage level 'Initialization Delta Process with Data Transfer' comes up by default, but 'Initialization Delta Process without Data Transfer', which I want to select, is disabled. (In the creation of the flat file datasource, in the Extraction tab, the delta process is set to FIL1 Delta Data (Delta Images).)
Please provide a solution.
Regards
Subba Reddy

    Hi Shubha,
For flat file loads, please go through the following link:
    http://help.sap.com/saphelp_nw70/helpdata/EN/43/03450525ee517be10000000a1553f6/frameset.htm
    This will help.
    Regards,
    Mahesh

GoldenGate for flat file

    hi,
I have tried GoldenGate for Oracle and non-Oracle databases. Now I am trying it for flat files.
What I have done so far:
1. I have downloaded the Oracle "GoldenGate Application Adapters 11.1.1.0.0 for JMS and Flat File Media Pack".
2. I kept it on the same machine where the database and the GG manager process exist. Port for the GG manager process: 7809; for the flat file adapter: 7816.
3. I am following the GG flat file administrator's guide, page 9 (Configuration).
4. Extract process on the GG manager process:
edit params FFE711
extract ffe711
userid ggs@bidb, password ggs12345
discardfile E:\GoldenGate11gMediaPack\V26071-01\dirrpt\EXTFF.dsc, purge
rmthost 10.180.182.77, mgrport 7816
rmtfile E:\GoldenGate11gMediaPack\V26071-01\dirdat\ffremote, purge, megabytes 5
add extract FFE711, EXTTRAILSOURCE ./dirdat/oo
add rmttrail ./dirdat/pp, extract FFE711, megabytes 20
start extract FFE711
view report ffe711
    Oracle GoldenGate Capture for Oracle
    Version 11.1.1.1 OGGCORE_11.1.1_PLATFORMS_110421.2040
    Windows (optimized), Oracle 11g on Apr 22 2011 03:28:23
    Copyright (C) 1995, 2011, Oracle and/or its affiliates. All rights reserved.
    Starting at 2011-11-07 18:24:19
    Operating System Version:
    Microsoft Windows XP Professional, on x86
    Version 5.1 (Build 2600: Service Pack 2)
    Process id: 4628
    Description:
    ** Running with the following parameters **
    extract ffe711
    userid ggs@bidb, password ********
    discardfile E:\GoldenGate11gMediaPack\V26071-01\dirrpt\EXTFF.dsc, purge
    rmthost 10.180.182.77, mgrport 7816
    rmtfile E:\GoldenGate11gMediaPack\V26071-01\dirdat\ffremote, purge, megabytes 5
    CACHEMGR virtual memory values (may have been adjusted)
    CACHEBUFFERSIZE: 64K
    CACHESIZE: 1G
    CACHEBUFFERSIZE (soft max): 4M
    CACHEPAGEOUTSIZE (normal): 4M
    PROCESS VM AVAIL FROM OS (min): 1.77G
    CACHESIZEMAX (strict force to disk): 1.57G
    Database Version:
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE 11.1.0.7.0 Production
    TNS for 32-bit Windows: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production
    Database Language and Character Set:
NLS_LANG environment variable specified has invalid format, default value will be used.
NLS_LANG environment variable not set, using default value AMERICAN_AMERICA.US7ASCII.
    NLS_LANGUAGE = "AMERICAN"
    NLS_TERRITORY = "AMERICA"
    NLS_CHARACTERSET = "AL32UTF8"
    Warning: your NLS_LANG setting does not match database server language setting.
    Please refer to user manual for more information.
2011-11-07 18:24:25 INFO OGG-01226 Socket buffer size set to 27985 (flush size 27985).
2011-11-07 18:24:25 INFO OGG-01052 No recovery is required for target file E:\GoldenGate11gMediaPack\V26071-01\dirdat\ffremote, at RBA 0 (file not opened).
2011-11-07 18:24:25 INFO OGG-01478 Output file E:\GoldenGate11gMediaPack\V26071-01\dirdat\ffremote is using format RELEASE 10.4/11.1.
    ** Run Time Messages **
5. On the flat file GGSCI prompt:
edit params FFR711
    extract ffr711
    CUSEREXIT E:\GoldenGate11gMediaPack\GGFlatFile\V22262-01\flatfilewriter.dll CUSEREXIT passthru includeupdatebefores, params "E:\GoldenGate11gMediaPack\GGFlatFile\V22262-01\sample-dirprm\ffwriter.properties"
    SOURCEDEFS E:\GoldenGate11gMediaPack\V26071-01\dirdef\vikstkFF.def
    table ggs.vikstk;
add extract ffr711, exttrailsource ./dirdat/pp
start extract ffr711
view report ffr711
    Oracle GoldenGate Capture
    Version 11.1.1.0.0 Build 078
    Windows (optimized), Generic on Jul 28 2010 19:05:07
    Copyright (C) 1995, 2010, Oracle and/or its affiliates. All rights reserved.
    Starting at 2011-11-07 18:21:31
    Operating System Version:
    Microsoft Windows XP Professional, on x86
    Version 5.1 (Build 2600: Service Pack 2)
    Process id: 5008
    Description:
    ** Running with the following parameters **
    extract ffr711
CUSEREXIT E:\GoldenGate11gMediaPack\GGFlatFile\V22262-01\flatfilewriter.dll CUSEREXIT passthru includeupdatebefores, params "E:\GoldenGate11gMediaPack\GGFlatFile\V22262-01\sample-dirprm\ffwriter.properties"
E:\GoldenGate11gMediaPack\GGFlatFile\V22262-01\ggs_Windows_x86_Generic_32bit_v11_1_1_0_0_078\extract.exe running with user exit library E:\GoldenGate11gMediaPack\GGFlatFile\V22262-01\flatfilewriter.dll, compatibility level (2) is current.
    SOURCEDEFS E:\GoldenGate11gMediaPack\V26071-01\dirdef\vikstkFF.def
    table ggs.vikstk;
    CACHEMGR virtual memory values (may have been adjusted)
    CACHEBUFFERSIZE: 64K
    CACHESIZE: 1G
    CACHEBUFFERSIZE (soft max): 4M
    CACHEPAGEOUTSIZE (normal): 4M
    PROCESS VM AVAIL FROM OS (min): 1.87G
    CACHESIZEMAX (strict force to disk): 1.64G
    Started Oracle GoldenGate for Flat File
    Version 11.1.1.0.0
    ** Run Time Messages **
Problems I am facing:
I am not sure where to find the generated flat file.
Even the reports are showing there is no data at the manager process.
I was expecting a replicat instead of an extract for the flat file FFR711.prm.
This is how far I have got; please give me some pointers.
    Thanks,
    Vikas

    Ok, I haven't run your example, but here are some suggestions.
    Vikas Panwar wrote:
    extract ffe711
    userid ggs@bidb, password ggs12345
    discardfile E:\GoldenGate11gMediaPack\V26071-01\dirrpt\EXTFF.dsc, purge
    rmthost 10.180.182.77, mgrport 7816
    rmtfile E:\GoldenGate11gMediaPack\V26071-01\dirdat\ffremote, purge, megabytes 5
    ggsci> add extract FFE711, EXTTRAILSOURCE ./dirdat/oo
    ggsci> add rmttrail ./dirdat/pp, extract FFE711, megabytes 20
    ggsci> start extract  FFE711
    You of course need data captured from somewhere to test with. You could capture changes directly from a database and write those to a trail, and use that as a source for the flat-file writer; or, if you have existing trail data, you can just use that (I often test with old trails, with known data).
In your example, you are using a data pump that is doing nothing more than pumping trails to a remote host. That's fine, if that's what you want to do. (It's actually quite common in real implementations.) But if you want to actually capture changes from the database, then change "add extract ... extTrailSource" to "add extract ... tranlog". I'll assume you want to use the simple data pump to send trail data to the remote host, and that some other database capture process is creating the trail dirdat/oo.
Also... with your pump "FFE711", you can create either a local or remote trail, that's fine. But don't use a rmtfile (or extfile). You should create a trail, either a "rmttrail" or "exttrail". The flat-file adapter will read that (binary) trail and generate text files. Trails automatically roll over; the "extfile/rmtfile" do not (but they do have the same internal GG binary log format). (You can use 'maxfiles' to force them to roll over, but that's beside the point.)
Also,
• don't forget your "table" statements, or else no data will be processed! You can wildcard tables, but not schemata.
• there is no reason that anything would be discarded in a pump.
• although it is a matter of choice, I don't see why people use absolute paths for reports and discard files. Full paths to data and def files make sense if they are on the SAN/NAS, but then I'd use symlinks from dirdat to the storage directory (on Unix/Linux).
• both Windows and Unix can use forward "/" slashes, which makes examples platform-independent (another reason for relative paths).
• your trails really should be much larger than 5 MB for better performance (e.g., 100 MB).
• you should probably use a source-defs file instead of a dblogin for metadata. Trail data is by its very nature historical, and using "userid...password" in the prm file inherently gets metadata from "right now". The file-writer doesn't handle DDL changes automatically.
    So you should have something more like:
    extract ffe711
    sourcedefs dirdef/vikstkFF.def
    rmthost 10.180.182.77, mgrport 7816
    rmttrail dirdat/ff, purge, megabytes 100
    table myschema.*;
    table myschema2.*;
table ggs.*;
For the file-writer pump:
    extract ffr711
    CUSEREXIT flatfilewriter.dll CUSEREXIT passthru includeupdatebefores, params dirprm\ffwriter.properties
    SOURCEDEFS dirdef/vikstkFF.def
    table myschema.*;
    table ggs.*;
    ggsci> add extract ffr711, exttrailsource ./dirdat/pp
    ggsci> start extract ffr711
Again, use relative paths when possible (the flatfilewriter.dll is expected to be found in the GG install directory). Put the ffwriter.properties file into dirprm, just as a best practice. This file, ffwriter.properties, is where you define your output directory and output files. Again, make sure you have a "table" statement in there for each schema in your trails.
"Problem I am facing: I am not sure where to find the generated flat file; even the reports show there is no data at the manager process; I am expecting a replicat instead of an extract at flat file FFR711.prm; I have done this much, what to do, give me some pointers..."
The generated files are defined in the ffwriter.properties file. Search for the "rootdir" property, e.g.,
    goldengate.flatfilewriter.writers=csvwriter
    csvwriter.files.formatstring=output_%d_%010n
    csvwriter.files.data.rootdir=dirout
    csvwriter.files.control.ext=_data.control
    csvwriter.files.control.rootdir=dirout
...
The main problems you have are: (1) use rmttrail, not rmtfile, and (2) don't forget the "table" statement, even in a pump.
Also, the flat-file adapter runs in just an "extract" data pump; no "replicat" is ever used. Replicats are inherently tied to a target database; the file-writer doesn't have any database functionality.
    Hope it helps,
    -m
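As a usage note on the properties above: assuming %d expands to a date stamp and %010n to a zero-padded sequence number (check the adapter's administration guide for the exact tokens), the csvwriter would write rolling data files plus a control file under the rootdir, roughly like:

dirout/output_20111107_0000000001
dirout/output_20111107_0000000002
dirout/output_data.control

The control file is what tells downstream consumers which data files are complete and safe to pick up.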

When do we go for flat file creation

    Hi gurus,
When do we go for flat file creation? Can anyone give me one scenario with an example?
    Thanks in advance

    Hello Srikar,
How are you?
We go for flat files in some scenarios, like:
1. The source system cannot be connected to the BW system (here you could generate a flat file).
2. The customer asks for a different kind of report (in our case we do flat file creation for a complex time management: employee working time is first merged with R/3 data, then we combine both together as one flat file).
3. Sometimes the customer will give us flat file data generated by them.
4. We also take flat file reports from data targets, using InfoSpokes.
More scenarios are there.
    Best Regards....
    Sankar Kumar
    +91 98403 47141

  • Delta Upload for Flat File

    Hello Everyone
I am Srikanth. I would like to know whether we have a facility for delta upload for flat files. If yes, can you please give me the steps?
    thanks in advance
    srikanth

Hi Sabrina, thank you for your help. I did load the data from the cube to the ODS.
Steps:
1. I generated an export datasource on the cube.
2. I found the name of the cube with prefix 8<infocube name> in the InfoSource under the DM application component.
3. There are already a communication structure and transfer rules activated, but when I create update rules for the ODS, I get the message that 0RECORDMODE is missing in the InfoSource.
4. So I went to the InfoSource and added 0RECORDMODE to the communication structure and activated it, but the transfer rules stayed yellow; there was no object assigned to 0RECORDMODE, but I still activated.
5. Again I went to the ODS, created the update rules, and activated (this time I did not get any message about 0RECORDMODE).
6. I created an infopackage and loaded.
a) Now my question is: without a green signal in the transfer rules, how was data populated into the ODS? And in your answer you mentioned creating the communication structure and transfer rules, whereas I did not do anything there.
b) Will I face any problems if I keep loading the data into the ODS from the cube (with the yellow signal)? Is it a correct procedure? Please correct me. Thanks in advance.

  • Omit the Open Hub control file 'S_*' for flat file extracts

    Hi Folks,
a quick question: is it somehow possible to omit the control file generation for flat file extracts?
We have some Unix scripts running that get confused by the S_* files, and we were wondering if we can switch the creation of those files off.
    Thanks and best regards,
    Axel

    Hi,
"However, the S_ (structure) file does not reflect the proper structure. The S_ file lists all fields sorted in alphabetical order based on the field name, instead of listing all fields in the correct order that they appear in the output file."
As far as I know and have checked, this is not the case. The S_ file has the fields in the order of the InfoSpoke object sequence (and, in the Transformation tab, the source structure).
I would suggest you check it again.
    Derya
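If the S_* files cannot be suppressed, a practical workaround is to have the downstream jobs skip them instead. A minimal sketch in Java (the directory argument and the S_ prefix test are the only assumptions):

import java.io.File;

public class PickDataFiles {
    public static void main(String[] args) {
        // List the open hub output directory, skipping the S_* structure files
        // so only the actual data files are handed to the scripts.
        File dir = new File(args.length > 0 ? args[0] : ".");
        File[] files = dir.listFiles((d, name) -> !name.startsWith("S_"));
        if (files != null)
            for (File f : files)
                System.out.println(f.getName());
    }
}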

I don't know why it takes so long to sample a flat file

I don't know why it takes so long to sample a flat file.
    OWB Client 10.1
While importing a fixed-width flat file, the "Flat File Sample Wizard" screen shows the 'number of rows' text box with a default value of 200.
I want to extend this value to 700,000.
But it then takes too long (over 5 hours) to sample the file.
Do you know why this happens, or how I can fix this problem?
    Thanks in Advance.
    Regards,
    JWS.

    Hello,
Actually, the flat file sampling process's goal is to capture the structure of the file. That's why the sample size is initially set to 200 lines.
The question is: why are you trying to sample 700,000 rows? Are you expecting some change in structure beyond this mark?
If so, and you want to capture the fact that your source file is multi-typed, you had better prepare a small file for sampling outside of OWB.
    Sergey

How IE works for flat files

    Hi all:
As we all know, when the IE gets an IDoc's service name from the SLD, it uses it together with the IDoc's message type and IDoc type to do receiver determination. What about flat files? How can we know the service name and interface name if there is only a flat file on FTP? How does the IE work for flat files?
         Couldn't thank you more

    Hi,
For any IDoc scenarios, you would use business systems rather than business services, and these are stored in the SLD, so the IE fetches them from the SLD at runtime.
For file-based scenarios, you can also create a business system of type third party and use that.
Does that answer your question?
    Regards
    Krish

  • Oracle OWB integrator for Flat Files 3.0

    Hi,
I have installed OWB 11g R2 on my machine, but while creating a flat files module for file import it reports that it cannot find "Oracle OWB Integrator for Flat Files 3.0", so I cannot use "import metadata using flat file wizard".
Do I need to install OWB again, or just the Oracle OWB Integrator for Flat Files 3.0? Or how do I integrate it with OWB? Please help if anyone can.
    Thanks..

    Here is the certification matrix
    https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CFEQFjAE&url=http%3A%2F%2Fwww.oracle.com%2Ftechnetwork%2Fmiddleware%2Fdata-integration%2Fodi-11gr1certmatrix-ps6-1928216.xls&ei=mtmUUcX7DoiJrQek04DwAg&usg=AFQjCNGoOUFQHdK7Ti2DLb6vz_3s-miP3A&sig2=q3rf2foe9bl4_WbsLPwWng&bvm=bv.46471029,d.bmk

  • Custom code for Flat file reconciliation on LDAP

    Hello,
I have to write custom code for flat file reconciliation on LDAP, as the GTC connector wasn't working entirely.
Could someone help me out with this? How do I do it?
    Thanks

"Flat file reconciliation on LDAP": what do you mean by a flat file on LDAP?
If you want to create a flat file connector, then search Google for reading a flat file using Java.
Define the RO fields and do the mapping in the process definition. You can use the Xellerate User RO for trusted recon.
Make a map from the CSV fields to the recon fields.
Call the Reconciliation API.
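Putting those steps together: read each CSV line, build a map of recon field names to values, and hand that map to the reconciliation API. A minimal sketch of the CSV-to-map part in Java (the header-row convention is an assumption, and the actual recon event call is left as a comment because the API class and signature depend on the OIM release):

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.HashMap;
import java.util.Map;

public class FlatFileRecon {
    public static void main(String[] args) throws Exception {
        try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
            String[] header = in.readLine().split(","); // recon field names in row 1
            String line;
            while ((line = in.readLine()) != null) {
                String[] values = line.split(",");
                Map<String, String> reconData = new HashMap<>();
                for (int i = 0; i < header.length && i < values.length; i++)
                    reconData.put(header[i].trim(), values[i].trim());
                // Here you would create a recon event for your resource object
                // (e.g. Xellerate User for trusted recon) via the OIM recon API;
                // the exact class and method depend on the OIM version.
                System.out.println("Recon event data: " + reconData);
            }
        }
    }
}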
