Processing records using Flat file ActiveSync resource

Hi Folks,
I was working with the Flat File Active Sync resource adapter. The documentation says the following:
The Flat File Active Sync adapter can track the timestamp of a flat file. In addition, the adapter can archive the last file processed and then compare it to the most recent version. Identity Manager will then act on the accounts that are different in the two files.
If these features are enabled, the first time Identity Manager polls the source flat file, the system copies the file and places it in the same directory. The copied (archived) file is named FFAS_timestamp.FFAS, with the timestamp indicating the last time the original file was changed. The format of the timestamp is determined by the operating system on which the source file resides.
On each subsequent poll, Identity Manager compares the timestamp on the original file with the most recent timestamp. If the new timestamp value is the same as the previous value, then the file has not changed, and no further processing is performed until the next poll. If the timestamp values are different, Identity Manager checks for the presence of the FFAS file. If the file does not exist, Identity Manager processes the updated source file as if it were a new file.
If the timestamps are different and the archived FFAS file exists, Identity Manager compares the source file with the archived file. The comparison will filter any users that have not changed. If a user has changed, then it will be sent through the adapter in the normal manner, and the configured process, correlation and delete rules determine what to do with the user.
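To make sure I am reading this right, here is a rough Python sketch of the logic as I understand it. The file name, variable names and the polling wrapper are mine, not the adapter's actual code; only the FFAS_timestamp.FFAS naming and the idea of a diff key come from the documentation.

import csv
import glob
import os
import shutil

SOURCE = "persons.csv"   # hypothetical source flat file
KEY = "PERSONID"         # the attribute configured as "Unique Key for Diff"
WORKDIR = os.path.dirname(os.path.abspath(SOURCE))

def load(path):
    """Read the flat file into a dict keyed by the diff key."""
    with open(path, newline="") as f:
        return {row[KEY]: row for row in csv.DictReader(f)}

def poll(last_stamp=None):
    """One poll cycle: return (changed rows, new timestamp)."""
    stamp = os.path.getmtime(SOURCE)
    if stamp == last_stamp:
        return [], stamp                     # timestamp unchanged: skip this poll

    archives = glob.glob(os.path.join(WORKDIR, "FFAS_*.FFAS"))
    current = load(SOURCE)
    if not archives:
        changed = list(current.values())     # no archive yet: treat as a new file
    else:
        old = load(archives[0])
        changed = [row for k, row in current.items() if old.get(k) != row]
        os.remove(archives[0])

    # archive the file just processed, named after its timestamp
    shutil.copy(SOURCE, os.path.join(WORKDIR, "FFAS_%s.FFAS" % stamp))
    return changed, stamp

Re-saving the file without editing it changes only the timestamp, so by this logic the diff step should still filter out every unchanged user.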
I have set the following attributes in the synchronization policy:
Track Last Processed Timestamp : true     
Process Differences Only : true
Unique Key for Diff : <some attribute>
I have the following flat file:
PERSONID,accountID,FIRSTNAME,LASTNAME,NATID,BIRTHDATE,EXTERNALID,TYPE,COUNTRY
10004,,test,4,123444,12/01/1804,204,A,USA
10005,,test,5,,12/01/1805,205,M,USA
There are no user accounts in the IDM repository. After this file is processed, my code creates the above accounts in IDM.
I then deleted the two accounts from IDM using the admin interface, and they were not created again. So far so good.
Next I opened the flat file and simply saved it again without changing anything. This time the accounts were created in IDM again.
According to the documentation above, it should not create the accounts, since the file content has not changed and only differences should be processed. So why is it creating the accounts again in IDM?
Your response will be appreciated.

Hi,
I am pretty sure this has been working for me, but with IAPI.Cancel. Please try capitalizing the C.
Regards,
Patrick

Similar Messages

  • Creating BOM using BDC :How to display no of records from flat file under

    Hi,
    How do I display a number of records from a flat file under one alternative BOM vertically?
    When I execute the program, each record replaces the previous one instead of all being displayed.
    Here is my code:
    report ZBOM1
           no standard page heading line-size 255.
    *include bdcrecx1.
    DATA: BEGIN OF bdc OCCURS 0,
           matnr(18),
           werks(4),
           stlan(1),
          END OF BDC.
    DATA: BEGIN OF BDC1 OCCURS 0,
           idnrk(18),
           MENGE(18),
           MEINS(3),
           postp(1),
          END OF bdc1.
    DATA: BEGIN OF BDCDATA OCCURS 0,
             matnr(18),
             werks(4),
             stlan(1),
             idnrk(18),
             MENGE(18),
             MEINS(3),
             postp(1),
             posnr(4),
          END OF BDCDATA.
    data: ibdcdata type  standard table of bdcdata WITH HEADER LINE.
    *start-of-selection.
    CALL FUNCTION 'GUI_UPLOAD'
      EXPORTING
        FILENAME                = 'C:\Documents and Settings\dilipkumar.b\Desktop\soft.txt'
        FILETYPE                = 'ASC'
    *   GUI_UPLOAD splits fields at tab characters when HAS_FIELD_SEPARATOR = 'X';
    *   a comma-separated file must be read as raw lines and SPLIT at ',' manually.
        HAS_FIELD_SEPARATOR     = 'X'
      TABLES
        DATA_TAB                = BDCDATA
      EXCEPTIONS
        FILE_OPEN_ERROR         = 1
        FILE_READ_ERROR         = 2
        NO_BATCH                = 3
        GUI_REFUSE_FILETRANSFER = 4
        INVALID_TYPE            = 5
        NO_AUTHORITY            = 6
        UNKNOWN_ERROR           = 7
        BAD_DATA_FORMAT         = 8
        HEADER_NOT_ALLOWED      = 9
        SEPARATOR_NOT_ALLOWED   = 10
        HEADER_TOO_LONG         = 11
        UNKNOWN_DP_ERROR        = 12
        ACCESS_DENIED           = 13
        DP_OUT_OF_MEMORY        = 14
        DISK_FULL               = 15
        DP_TIMEOUT              = 16
        OTHERS                  = 17.
    IF SY-SUBRC <> 0.
      MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
              WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
    ENDIF.
    *perform open_group.
    loop at bdcdata.
    perform bdc_dynpro      using 'SAPLCSDI' '0100'.
    perform bdc_field       using 'BDC_CURSOR'
                                  'RC29N-STLAN'.
    perform bdc_field       using 'BDC_OKCODE'
                                  '/00'.
    perform bdc_field       using 'RC29N-MATNR'
                                  'SOFTDRINKS'.
    perform bdc_field       using 'RC29N-WERKS'
                                  'WIND'.
    perform bdc_field       using 'RC29N-STLAN'
                                  '1'.
    perform bdc_field       using 'RC29N-DATUV'
                                  '16.09.2008'.
    perform bdc_dynpro      using 'SAPLCSDI' '0110'.
    perform bdc_field       using 'BDC_OKCODE'
                                  '/00'.
    perform bdc_field       using 'RC29K-BMENG'
                                  '1'.
    perform bdc_field       using 'RC29K-STLST'
                                  '1'.
    perform bdc_field       using 'BDC_CURSOR'
                                  'RC29K-EXSTL'.
    perform bdc_dynpro      using 'SAPLCSDI' '0111'.
    perform bdc_field       using 'BDC_CURSOR'
                                  'RC29K-LABOR'.
    perform bdc_field       using 'BDC_OKCODE'
                                  '/00'.
    perform bdc_dynpro      using 'SAPLCSDI' '0140'.
    perform bdc_field       using 'BDC_CURSOR'
                                  'RC29P-POSTP(01)'.
    perform bdc_field       using 'BDC_OKCODE'
                                  '=FCBU'.
    perform bdc_field       using 'RC29P-IDNRK(01)'
                                  BDCDATA-IDNRK.
    perform bdc_field       using 'RC29P-MENGE(01)'
                                  BDCDATA-MENGE.
    perform bdc_field       using 'RC29P-MEINS(01)'
                                  BDCDATA-MEINS.
    perform bdc_field       using 'RC29P-POSTP(01)'
                                  BDCDATA-POSTP.
    perform bdc_dynpro      using 'SAPLCSDI' '0130'.
    perform bdc_field       using 'BDC_OKCODE'
                                  '/00'.
    perform bdc_field       using 'BDC_CURSOR'
                                  'RC29P-POSNR'.
    perform bdc_field       using 'RC29P-POSNR'
                                   BDCDATA-POSNR.            "'0010'.
    perform bdc_field       using 'RC29P-IDNRK'
                                  BDCDATA-IDNRK.             "'15'.
    perform bdc_field       using 'RC29P-MENGE'
                                  BDCDATA-MENGE.             "'1'.
    perform bdc_field       using 'RC29P-MEINS'
                                  BDCDATA-MEINS.             "'ml'.
    perform bdc_dynpro      using 'SAPLCSDI' '0131'.
    perform bdc_field       using 'BDC_OKCODE'
                                  '/00'.
    perform bdc_field       using 'BDC_CURSOR'
                                  'RC29P-POTX1'.
    perform bdc_field       using 'RC29P-SANKA'
                                  'X'.
    *perform bdc_transaction using 'CS01'.
    *perform close_group.
    CALL TRANSACTION 'CS01' USING IBDCDATA MODE 'A' UPDATE 'S'.
    REFRESH IBDCDATA.
    endloop.
    *       Start new screen                                              *
    FORM BDC_DYNPRO USING PROGRAM DYNPRO.
      CLEAR iBDCDATA.
      iBDCDATA-PROGRAM  = PROGRAM.
      iBDCDATA-DYNPRO   = DYNPRO.
      iBDCDATA-DYNBEGIN = 'X'.
      APPEND ibDCDATA .
    ENDFORM.
    *       Insert field                                                  *
    FORM BDC_FIELD USING FNAM FVAL.
    * NODATA is normally defined in include BDCRECX1, which is commented out above
    IF FVAL <> NODATA.
        CLEAR iBDCDATA.
        iBDCDATA-FNAM = FNAM.
        iBDCDATA-FVAL = FVAL.
        APPEND iBDCDATA .
    ENDIF.
    ENDFORM.

    Hi,
    The internal table you pass to CALL TRANSACTION must use the standard BDCDATA structure, with the fields PROGRAM, DYNPRO, DYNBEGIN, FNAM and FVAL.
    Declare your table with reference to that structure and pass it in your CALL TRANSACTION statement.
    Please use a different name for the table you declared as BDCDATA, and for IBDCDATA as well.

  • Object reference not set to an instance of an object error when generating a schema using flat file schema wizard.

    I have a csv file that I need to generate a schema for. I am trying to generate the schema with the Flat File Schema Wizard, but I keep getting an "Object reference not set to an instance of an object." error when I click the Next button after
    specifying the properties of the child elements in the wizard. At the end a schema file is generated, but it contains only an empty root record with no child elements.
    I thought this might be because I didn't have my project checked out from the Visual SourceSafe database first, but I tried again with the project checked out and got the same error.
    I also tried creating a brand new project and generating a schema in it, but got the same error.
    I am not sure what is causing the NullReferenceException to be thrown, and there is nothing in the Windows event log that would tell me more about the problem.
    I am using Visual Studio 2008 for my BizTalk development.
    I would appreciate it if someone has any insight into this issue.

    Hi,
    To test your environment, create a new BizTalk project outside of source control.
    Create a simple csv file on the file system.
    Name,City,State
    Bob,New York,NY
    Use the Flat file schema Wizard to create the flat file schema from your simple csv instance.
    Validate the schema.
    Test the schema using your csv instance.
    This will help you determine whether everything is OK with your environment.
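    As an extra sanity check outside BizTalk, a few lines of Python (purely illustrative; the file name is assumed) will confirm that every row of the instance has the same number of fields as the header:
    import csv

    # Structural check of the sample csv instance (illustrative only).
    with open("sample.csv", newline="") as f:
        rows = list(csv.reader(f))

    header = rows[0]
    for lineno, row in enumerate(rows[1:], start=2):
        if len(row) != len(header):
            print("line %d has %d fields, expected %d" % (lineno, len(row), len(header)))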
    Thanks,
    William

  • How to create the Export Data and Import Data using flat file interface

    Hi,
    Based on the requirement below, please let me know how to export and import data using the flat file interface.
    Please provide the steps involved.
    BW/BI - Recovery Process for SNP data. 
    For each SNP InfoProvider,
    create:
    1) Export Data:
    1.a)  Create an export data source, InfoPackage, comm structure, etc. necessary to create an ASCII fixed length flat file on the XI
    ctnhsappdata\iface\SCPI063\Out folder for each SNP InfoProvider. 
    1.b)  All fields in each InfoProvider should be exported and included in the flat file. 
    1.c)  A process chain should be created for each InfoProvider with a start event. 
    1.d)  If the file exists on the target drive it should be overwritten. 
    1.e)  The exported data file name should include the InfoProvider technical name.
    1.f)  Include APO Planning Version, Date of Planning Run, APO Location, Calendar Year/Month, Material and BW Plant as selection criteria.
    2) Import Data:
    2.a) Create a flat file source system InfoPackage, comm structure, etc. necessary to import ASCII fixed length flat files from the XI
    ctnhsappdata\iface\SCPI063\Out folder for each SNP InfoProvider.
    2.b)  All fields for each InfoProvider should be mapped and imported from the flat file.
    2.c)  A process chain should be created for each InfoProvider with a start event. 
    2.d)  The file should be archived in the
    ctnhsappdata\iface\SCPI063\Archive directory.  Each file name should have the date appended in YYYYMMDD format.  Each file should be deleted from the \Out directory after it is archived. 
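    To make 2.d concrete, the archive step amounts to something like the Python sketch below. The directory names follow the paths above; the UNC prefix is an assumption, so both paths would need to be adjusted to the real share.
    import os
    import shutil
    from datetime import date

    # Sketch of step 2.d -- adjust the two directories to the real share paths.
    OUT_DIR = r"\\ctnhsappdata\iface\SCPI063\Out"
    ARCHIVE_DIR = r"\\ctnhsappdata\iface\SCPI063\Archive"

    stamp = date.today().strftime("%Y%m%d")
    for name in os.listdir(OUT_DIR):
        base, ext = os.path.splitext(name)
        # copy to the archive with the date appended in YYYYMMDD format ...
        shutil.copy(os.path.join(OUT_DIR, name),
                    os.path.join(ARCHIVE_DIR, "%s_%s%s" % (base, stamp, ext)))
        # ... then delete the file from the \Out directory
        os.remove(os.path.join(OUT_DIR, name))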
    Thanks in advance.
    Tyson

    Here's some info on working with plists:
    http://developer.apple.com/documentation/Cocoa/Conceptual/PropertyLists/Introduction/chapter1_section1.html
    They can be edited with any text editor. Xcode provides a graphical editor for them - make sure to use the .plist extension so Xcode will recognize it.

  • How to use 'flat file extraction' option in Infospoke

    Hello Gurus,
    We are trying to extract BW data using an Infospoke to a 3rd-party ETL tool (Datastage). In the current BW 3.5 release the customer is on SP11, and we are not able to extract any data using the 3rd-party system option in the Infospoke. Because we are getting an error ("3rd party system is not set") in the Infospoke, SAP told us we have to upgrade the SP level from 11 to either 16 or 17. The customer we are dealing with is not going to do that until July 2007.
    So we have a second option: write a flat file on the BW application server and FTP it to Datastage. Is this a workable option?
    When using that option in the Infospoke Destination tab, we can see two options. One is to use the File option and write the data to the standard directory DIR_HOME. I tried that, but I can't see the file and the entire data set in that directory. Can anyone explain why, and what procedure to follow?
    The second option is to use a logical file. I have never done that before. Can anyone explain the step-by-step process or provide a detailed document and/or link?
    Since we are at a critical stage of the timeline, I would appreciate a fast response.
    I will make sure to give reward points to each and every helpful answer.
    thanks in advance,
    Maulesh

    Hi shashi,
    First, let me clarify one thing here: I am not saying '3rd party tool' is an option in the Infospoke. I know an Infospoke provides only two options (DB table and file). But when you use the DB table option, you can check '3rd party destination', and that's where you define an RFC destination for the 3rd party tool. When you execute the Infospoke, it starts extracting data and loads it into the defined destination. Since we are on SP11 of BW 3.5, we get an error while executing the Infospoke with the DB table option. The error is '3rd party destination is not set'. When I went to the Marketplace, I found an OSS note saying that we need to upgrade the SP level (up to 17).
       As I mentioned earlier, the client doesn't want to upgrade until next year, and we are at a critical point in the timeline where we have to decide whether to hold this project until the SP upgrade happens or find an alternate approach to extract the data from BW (somehow) and load it into the Teradata database via the Datastage BW Pack (3rd party ETL).
       So we might be able to use the second option in Infospoke destination, which is to use 'Flat file' load.
       Now, when I use the File option with 'App. server' checked, the data loads into the default server and directory on BW, which is DIR_HOME. I can see the extracted files in that directory under transaction AL11, but I can't see any data. Do you think I have to request an increase in the size limit? Once I can see all the data, how can I FTP it from the BW server? Do I have to work with SAP Basis or create a UNIX script?
       When I use a logical file name, I guess I have to follow what you described in your response, correct? Can I use an existing logical path and file name? Do you have a how-to document on creating the YEAR and PER FM? Is it possible for Basis to do this work for us? What happens after we assign a logical path and logical file name in the Destination tab of the Infospoke? What is the use of a logical file versus a file? Overall, I am looking for the process from extracting the data with the Infospoke, using either a file or a logical file, all the way to loading it into the 3rd-party database.
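       Just to check my understanding of the UNIX script option: I assume the transfer itself would boil down to something like the Python sketch below, where the host, login and file path are placeholders, not real values.
    from ftplib import FTP

    # Host, credentials and paths are placeholders, not real values.
    with FTP("datastage-host") as ftp:
        ftp.login("user", "password")
        with open("/usr/sap/BWP/extract/infospoke_output.csv", "rb") as f:
            ftp.storbinary("STOR infospoke_output.csv", f)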
      Looking forward to your response!
    thanks,
    Maulesh

  • How to load hierrarchies using dtp  using flat file  in bw  ineed clear ste

    How do I load hierarchies using a DTP with a flat file in BW? I need clear steps.

    Hi,
    If you want to load InfoObjects in the form of hierarchies, you have to activate the indicator with hierarchies for each of the relevant InfoObjects in the InfoObject maintenance. If necessary, you need to specify whether the whole hierarchy or the hierarchy structure is to be time-dependent, whether the hierarchy can contain intervals, whether additional node attributes are allowed (only when loading using a PSA), and which characteristics are allowed.
    1.Defining the source system from which you want to load data:
    Choose Source System Tree → Root (Source System) → Context menu (Right Mouse Button) → Create.
    For a flat file, choose: File System, Manual Metadata; Data via File Interface.
    2.Defining the InfoSource for which you want to load data:
    Optional: Choose InfoSource Tree → Root (InfoSources) → Context Menu (Right Mouse Button) → Create Application Component.
    Choose InfoSource Tree → Your Application Component → Context Menu (Right Mouse Button) → Create InfoSource → Direct Update
    Choose an InfoObject from the proposal list, and specify a name and a description.
    3.Assigning the source system to the InfoSource
    Choose InfoSource tree → Your Application Component → One of your InfoSources → Context Menu (Right Mouse Button) → Assign Source System. You are taken automatically to the transfer structure maintenance.
    The system automatically generates DataSources for the three different data types to which you can load data.
          Attributes
          Texts
          Hierarchies (if the InfoObject has access to hierarchies)
    The system automatically generates the transfer structure, the transfer rules, and the communication structure (for attributes and texts).
    4.Maintaining the transfer structure / transfer rules
    Choose the DataSource to be able to upload hierarchies.
    Idoc transfer method: The system automatically generates a proposal for the DataSource and the transfer structure. This consists of an entry for the InfoObject, for which hierarchies are loaded. With this transfer method, during loading, the structure is converted to the structure of the PSA, which affects performance.
    PSA transfer method: The transfer methods and the communication structure are also generated here. 
    5.Maintaining the hierarchy:
    Choose Hierarchy Maintenance, and specify a technical name and a description of the hierarchy.
    PSA Transfer Method: You have the option here to set the Remove Leaf Value and Node InfoObjects indicator. As a result, characteristic values are not transferred into the hierarchy fields NODENAME, LEAFFROM and LEAFTO as is normally the case, but in their own transfer structure fields.  This option allows you to load characteristic values having a length greater than 32 characters.
    Characteristic values with a length > 32 can be loaded into the PSA, but they cannot be updated in characteristics that have a length >32.
    The node names for pure text nodes remain restricted to 32 characters in the hierarchy (0HIER_NODE characteristic).
    The system automatically generates a table with the following hierarchy format (for sorted hierarchies without removed leaf values and node InfoObjects):
    Description            Field Name   Length  Type
    Node ID                NODEID       8       NUMC
    InfoObject name        INFOOBJECT   30      CHAR
    Node name              NODENAME     32      CHAR
    Catalog ID             LINK         1       CHAR
    Parent node            PARENTID     8       NUMC
    First subnode          CHILDID      8       NUMC
    Next adjacent node     NEXTID       8       NUMC
    Language key           LANGU        1       CHAR
    Description - short    TXTSH        20      CHAR
    Description - medium   TXTMD        40      CHAR
    Description - long     TXTLG        60      CHAR
    The system transfers the settings for the intervals and for time-dependency from the InfoObject maintenance. Depending on which settings you have defined in the InfoObject maintenance, further table fields can be generated from the system.
    The valid from and valid to field is filled if you select Total Hierarchy Time-dependent in the InfoObject maintenance. The time-dependent indicator is activated if you select the Hierarchy Nodes Time-dependent option in the InfoObject maintenance.
    6.Save your entries.
    Depending on which settings you defined in the InfoObject maintenance, additional fields can be generated from the system. Also note the detailed description for Structure of a Flat Hierarchy File for Loading via an IDoc and for Structure of a Flat Hierarchy File for Loading via a PSA.
    Also find the below blog for your reference...
    /people/prakash.bagali/blog/2006/02/07/hierarchy-upload-from-flat-files
    You can load hierarchy using process chain...
    Find the below step by step procedure to load hierarchy using process chain...
    http://help.sap.com/saphelp_nw70/helpdata/EN/3d/320e3d89195c59e10000000a114084/frameset.htm
    Assign points if this helps you.
    Regards,
    KK.

  • Steps to load the data by using flat file for hierarchies in BI 7.0

    Hi Gurus,
    What are the steps to load data using a flat file for hierarchies in BI 7.0?

    Hi,
    You will get the steps in the following blog by Prakash Bagali:
    Hierarchy Upload from Flat files
    Regards,
    Rathy

  • FM for reading total record in flat file

    Hi,
    Do we have any function module that can tell me the number of records in a flat file?
    I only want the FM name.
    Thanks for your reply, but the file is only on the application server.
    Thanks in advance.

    Hi,
    then you need to call the unix command (if your application server is Unix)
    wc -l filename
    data: unixcom like rlgrap-filename.
    unixcom = 'wc -l filename'.          "replace filename with the full path of your file
    data: begin of tabl occurs 500,
            line(400),
          end of tabl.
    data: lines type i.
    call 'SYSTEM' id 'COMMAND' field unixcom
                  id 'TAB'     field tabl[].
    loop at tabl.
      write: /01 tabl-line.
    endloop.
    Regards
    vijay

  • Loading Hierarchies using flat file

    Hi experts,
    I have requirement to load hierarchies using flat file.
    I have to create a hierarchy on Person Responsible; the subnodes are Sales Office, Project Definition, Project Type and Project Definition Description. I have the file, and I need to load it into BW.
    I don't know much about hierarchy loading. Can you tell me which attributes have to be included and how to prepare the hierarchy load file, which contains HIENM, VERSION, HCLASS, DATEFROM and DATETO?
    I don't know whether these fields are enough to load my hierarchy.
    I have created one hierarchy; when I try to activate it, I get an error like 'no data exists'.
    Can anyone tell me how to prepare file to load into hierarchy?
    Thanks in advance,
    Venky

    Hi Venkatesh,
    There are several formats to load a hierarchy via file. The following two are the commonly used formats.
    1) Default
    Node ID                     NODEID
    InfoObject Name     INFOOBJECT
    Node Name     NODENAME
    Link Name     LINK
    HghrLvlNode     PARENTID
    Language Key     LANGU
    Description - Short     TXTSH
    Description - Medium     TXTMD
    Description - Long     TXTLG
    2) Sorted
    Node ID                     NODEID
    InfoObject Name     INFOOBJECT
    Node Name     NODENAME
    Link Name     LINK
    HghrLvlNode     PARENTID
    First Subnode     CHILDID
    Next Node Along     NEXTID
    Language Key     LANGU
    Description - Short     TXTSH
    Description - Medium     TXTMD
    Description - Long     TXTLG
    If the default format is used, the sequence of node IDs in the flat file determines the structure of the hierarchy, whereas the sorted format defines the structure via the PARENTID, CHILDID and NEXTID relationships.
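    Purely as an illustration (made-up nodes, generated with Python and written as CSV only for readability), a file in the sorted format could look like this:
    import csv

    # Two made-up nodes in the sorted hierarchy format (illustration only).
    fields = ["NODEID", "INFOOBJECT", "NODENAME", "LINK", "PARENTID",
              "CHILDID", "NEXTID", "LANGU", "TXTSH", "TXTMD", "TXTLG"]
    rows = [
        # text node 1 is the root; its first subnode is node 2
        ["1", "0HIER_NODE", "ROOT", "", "0", "2", "0", "EN", "Root", "Root node", "Root node"],
        # node 2 is a leaf posted under node 1, with no further siblings
        ["2", "0CUSTOMER", "C1000", "", "1", "0", "0", "", "", "", ""],
    ]
    with open("hierarchy.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(fields)
        writer.writerows(rows)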
    You make the selection on the DataSource/Tran. Structure tab of the InfoSource page by clicking the "Hier. Structure" button.
    Good luck!
    Bill

  • Error Processing a line in Flat File ActiveSync

    We are using IDM 5.5 running on Websphere 6.0.2.3.
    We are feeding a caret-separated flat file to IDM, which in turn writes to Sun One Directory Server.
    A sample of the caret-separated file format is:
    ssocorrespondlanguage^ssouid^mail^cn^sn^givenname^appaccess^uid^ssostatus
    ^A1C1C423-71F0-138F-CB8C-BCC3BAAF484F^[email protected]^Luigi Marra^Marra^Luigi^portal^[email protected]^A
    ^3DCC1E95-4E7D-113A-BFC0-AAA3BA195331^[email protected]^adhi asokan^asokan^adhi^portal^aadhees^A
    We are facing an issue where IDM is not able to process a line.
    I suppose it is because of the "@" or "." character in the uid field, as highlighted, but I am not sure.
    I am not able to figure out how to avoid this.
    The error we are getting is:
    2005-12-02T15:31:11.152-0500: Error Processing Line: {gessouid=A1C1C423-71F0-138F-CB8C-BCC3BAAF484F, sn=Marra, cn=Luigi Marra, uid=[email protected], gessostatus=A, gegcfappaccess=portal, mail=[email protected], givenname=Luigi, gessocorrespondlanguage=, diffAction=create}
    com.waveset.adapter.iapi.IAPIException: Item Resource:uniteam.it@GECF_B2B_FF(id=null) was not found.
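    To rule out a malformed line, I also compared the field count of every row against the header with a quick script (nothing IDM-specific, just caret splitting in Python; the file name is a placeholder):
    # Check that every caret-separated line has as many fields as the header.
    with open("feed.txt") as f:
        header = f.readline().rstrip("\n").split("^")
        for lineno, line in enumerate(f, start=2):
            fields = line.rstrip("\n").split("^")
            if len(fields) != len(header):
                print("line %d: %d fields, expected %d"
                      % (lineno, len(fields), len(header)))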
    Has anyone seen this issue, or does anyone have any idea about it?
    Thanks.

    Thanks for the reply.
    Sorry, I was out of pocket for a couple of days.
    I just double-checked: I am using the unique employee ID as the "Unique key for diff", and I also have the 'process diff. only' flag set to true.
    In my tests, I modified only the contractor expiry date in the incoming SAP flat file and then tried processing the file. It gave me the above error. Please note that the same employee ID was also present in the FFA* file.
    Interestingly, I tried the same test for another account and it worked for that account. That's what really threw me off.
    Thanks once again.
    - Lalit

  • Duplicate records in flat file extracted using openhub

    Hi folks
    I am extracting data from the cube via open hub into a flat file, and I see duplicate records in the file.
    I am doing a full load to a flat file.
    I cannot have a technical key because I am using a flat file.
    Poonam

    I am using aggregates (in the DTP there is an option to use aggregates) and the aggregates are compressed, yet I am still facing this issue.
    Poonam

  • How to Trigger the process chain using Excel Files?

    Hi All,
    We have 9 Excel files (4 master data attribute and text files and 1 transaction file).
    Every day new records are added to the Excel sheets, and we need to load all these files into the BW server through a process chain automatically. All the Excel files are on the desktop.
    Please suggest how to achieve this.
    Thanks
    Gowtham

    Hi,
    You cannot load flat files from the local workstation into BW automatically using process chains.
    You have to upload them to the application server and point your InfoPackage to the directory path of the flat file on the application server; then process chains can load them.
    How to guide for loading flat file to application server.
    Program to load CSV file from local desktop to Application server(For flat file Data extraction)
    -Sriram

  • PROCESS CHAIN and flat file

    Hi experts,
    I tried to load a flat file to an ODS in a process chain.
    I got this error:
    "An upload from the client workstation in the background is not possible"
    Can anybody tell me how to solve this error? What are the proper settings for loading a flat file in process chains?

    Hi,
    In a process chain running in the background, you cannot load the data using the Client Workstation option. Instead you need to select the Application Server option and save. Then go to the Schedule tab of the InfoPackage scheduler, click the Start button, and go to the monitor screen; you will then see the records being extracted.
    1. Change the option to Application Server.
    2. Save.
    3. Under the Schedule tab in the InfoPackage scheduler, select "Start data load immediately" and click the Start button.
    4. After you see "Data was requested" in the status bar of the InfoPackage scheduler, click the Monitor icon and watch the records being extracted.
    Let me know whether it was successful.

  • Error while creating process chain for flat file loading

    Hi All,
    I have created a process chain to load transaction data (full load) from a flat file which is on my computer.
    start>Load>DTP>DeleteIndex>DTP loading CUBE--> Create Index
    but the system is throwing an error as "An upload from the client workstation in the background is not possible."
    I don't know why this error is occurring.
    Can someone help me?
    Regards
    Mamta

    Hi Mamta,
    Basically, if you want to load the DataSource from a flat file using a process chain, the flat file has to be placed on the application server. We can't load the flat file when it is located on the client's local workstation (the flat file on your PC).
    So it is better to remove the InfoPackage step from the process chain and load the InfoPackage manually. Once it has completed, you can start the process chain with the following steps:
    start>DTP>DeleteIndex>DTP loading CUBE> Create Index
    Hope it is clear & helpful!
    Regards,
    Pavan

  • BI IP - Upload flat file - "requested resource not found" when typing URL

    Hi,
    We tried to implement the "upload flat file" functionality (from Marc Bernard's blog). We followed the different steps and included the bug fixes as well. When we launch the URL in an Internet Explorer session we get the message "requested resource not found".
    Here is the URL used:
    http://sapscmbid.vpkgrp.int:50100/sap/bc/webdynpro/sap/zrsplf_file_upload?sap-language=EN&planning_sequence=FILE_SEQ&show_messages=WEXA
    Do you know what we did wrong? We would like to finish this by the end of this week ...
    kind regards
    D

    Hi,
    When we use the URL http://sapscmbid.vpkgrp.int:50100/ we see the portal screens, so I presume the server and port are correct (we use the same link for the planning modeler and it works fine).
    The SICF service zrsplf_file_upload has been imported and has status active. Should we check other things as well?
    D
