Analyze data from LabWindows 4.0

I have a small hydro power plant which has been controlled by a program written in LabWindows/CVI 4.1 (from 1998-1999). This program has logged a lot of information on production, water levels and so on for 10 years, and I am now interested in analysing the data. Each month a .dat file was produced with the data, which can (only?) be read by the control program to view plots and tables. I am more interested in extracting the data for analysis with Excel, MATLAB or LabVIEW, i.e. getting the data into nice tables in, say, a simple .txt file. Unfortunately the .dat file contains only gibberish which I can't make any sense of. Do any veterans in here have ideas on how to go about acquiring the data in a sensible ASCII format? I'm attaching a file in case that helps. I wouldn't expect anyone to decode it, but maybe someone could point me in the right direction.
Attachments:
1999-01.txt (210 KB)

Hi Super-Chub,
from what you are saying, you don't have the source code for your application, right? Couldn't you track down the original developer and either ask him for a description of the data file format or have him write a small application to translate the files?
From the file alone it is difficult to tell which data are saved in it and how; the .dat extension does not identify a specific file type. From a quick glance at the file you attached, I seem to recognize a 36-byte pattern with 4 embedded spaces that repeats throughout the whole file, but no more than that. Based on your knowledge of the application, you may be able to guess which type of data is stored in each file and possibly how the data are organized.
One could try to decode the file on the assumption that it was produced by the ArrayToFile function, but I can't remember whether CVI 4 already had that function or how its output was organized; moreover, the 4-space field seems to argue against the use of that function. It could also be the dump of an array of structures... Anyway, I am just throwing out some ideas that may help you with this task.
** Edit: I just checked the online help, and the ArrayToFile function has been present in CVI since release 4.0.
Proud to use LW/CVI from 3.1 on.
My contributions to the Developer Zone Community
If I have helped you, why not give me kudos?

Similar Messages

  • How do I report/analyze data from the KM activity reporting service

    I have turned on the KM activity reporting service and I am getting .txt files generated with the information I am looking for, but now I need the best way to collect this data from all the server clusters and aggregate it for easier analysis.  I have reviewed the 'how to analyze portal activity with the activity data collector' white paper.  This leads me to believe that the only way to collect and analyze this data is to write a script that collects and analyzes it.  Is this true, or is there a better way of gathering and reporting this data?  Is this data stored in a table somewhere?

    Hi,
    the portal only offers basic monitoring capabilities, but it can create quite extensive logs, like the KM activity reporting service. What the portal writes to a DB table are the PAR statistics. The much more extensive KM and ADC data is written to files.
    You can use SolMan to analyze the data. SolMan can gather and analyze the data, write it into BI, and give you basic reports.
    In the same way, you can use SolMan to gather the data and create your own BI reports. As it can be really hard to collect X logs from Y server nodes, SolMan offers an easy way to collect the different logs.
    br,
    Tobias

  • What is the main process for transferring data from a legacy system to an SAP system?

    What is the main process for transferring data from a legacy system to an SAP system using BDC, in real time?

    hi,
    BATCH DATA COMMUNICATION
    The main methods are:
    1. SESSION METHOD
    2. CALL TRANSACTION
    3. DIRECT INPUT
    Advantages offered by the BATCH INPUT method:
    1. It can process large data volumes in batch.
    2. It can be planned and submitted in the background.
    3. No manual interaction is required when data is transferred.
    4. Data integrity is maintained, because all data is transferred to the tables through transactions; batch input data is therefore subjected to all the usual checks and validations.
    To implement one of the supported data transfers, you must often write a program that exports the data from your non-SAP system. This program, known as a “data transfer” program, must map the data from the external system into the data structure required by the SAP batch input program.
    The batch input program must build all of the input to execute the SAP transaction.
    Two main steps are required:
    • To build an internal table containing every screen and every field to be filled in during the execution of an SAP transaction.
    • To pass the table to SAP for processing.
    Prerequisite for Data Transfer Program
    Writing a data transfer program involves the following prerequisites:
    Analyzing the data from the local file
    Analyzing the transaction
    Analyzing the transaction involves determining the following:
    • The transaction code, if you do not already know it.
    • Which fields require input, i.e., are mandatory.
    • Which fields can be allowed to default to standard values.
    • The names, types, and lengths of the fields that are used by the transaction.
    • The screen numbers and the name of the module pool program behind the transaction.
    To analyze a transaction:
    • Start the transaction by menu or by entering the transaction code in the command box.
    (You can determine the transaction name by choosing System – Status.)
    • Step through the transaction, entering the data that will be required for processing your batch input data.
    • On each screen, note the program name and screen (dynpro) number.
    (A dynpro, or dynamic program, is a screen together with its flow logic.)
    • Display these by choosing System – Status. The relevant fields are Program (screen) and Screen number. If pop-up windows occur during execution, you can get the program name and screen number by pressing F1 on any field or button on the screen.
    The technical info pop-up shows not only the field information but also the program and screen.
    • For each field, check box, and radio button on each screen, press F1 (help) and then choose Technical Info.
    Note the following information:
    - The field name for batch input, which you’ll find in its own box.
    - The length and data type of the field. You can display this information by double clicking on the Data Element field.
    • Find out the identification code for each function (button or menu) that you must execute to process the batch input data (or to go to a new screen).
    Place the cursor on the button or menu entry while holding down the left mouse button. Then press F1.
    In the pop-up window that follows, choose Technical info and note the code that is shown in the Function field.
    You can also run any function that is assigned to a function key by way of the function key number. To display the list of available function keys, click on the right mouse button. Note the key number that is assigned to the functions you want to run.
    Once you have program name, screen number, field name (screen field name), you can start writing.
    DATA TRANSFER program
    Declaring the internal tables
    First: an internal table with a structure similar to the local file.
    Second: an internal table like BDCDATA.
    The data from the internal table is not transferred directly to the database table; it has to go through a transaction. You need to pass the data to a particular screen and to a particular screen field. Data is passed to the transaction in a particular format, hence the need for a batch input structure.
    The batch input structure stores the data that is to be entered into SAP system and the actions that are necessary to process the data. The batch input structure is used by all of the batch input methods. You can use the same structure for all types of batch input, regardless of whether you are creating a session in the batch input queue or using CALL TRANSACTION.
    This structure is BDCDATA, which can contain the batch input data for only a single run of a transaction. The typical processing loop in a program is as follows:
    • Create a BDCDATA structure
    • Write the structure out to a session or process it with CALL TRANSACTION USING; and then
    • Create a BDCDATA structure for the next transaction that is to be processed.
    Within a BDCDATA structure, the data is organized by the screens of the transaction. Each screen that is processed in the course of a transaction must be identified with a BDCDATA record. This record uses the Program, Dynpro, and Dynbegin fields of the structure.
    The screen identifier record is followed by a separate BDCDATA record for each value to be entered into a field. These records use the FNAM and FVAL fields of the BDCDATA structure. Values to be entered in a field can be any of the following:
    • Data that is entered into screen fields.
    • Function codes that are entered into the command field. Such function codes execute functions in a transaction, such as Save or Enter.
    The BDCDATA structure contains the following fields:
    • PROGRAM: Name of module pool program associated with the screen. Set this field only for the first record for the screen.
    • DYNPRO: Screen Number. Set this field only in the first record for the screen.
    • DYNBEGIN: Indicates the first record for the screen. Set this field to X, only for the first record for the screen. (Reset to ‘ ‘ (blank) for all other records.)
    • FNAM: Field Name. The FNAM field is not case-sensitive.
    • FVAL: Value for the field named in FNAM. The FVAL field is case-sensitive. Values assigned to this field are always padded on the right, if they are less than 132 characters. Values must be in character format.
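    To make the use of these fields concrete, here is a minimal, hypothetical sketch (the module pool name, screen number and field name below are placeholders, not taken from any real transaction) of how one screen and one of its field values would be appended to a BDCDATA-like table:
    DATA: BDCTAB LIKE BDCDATA OCCURS 0 WITH HEADER LINE.
    * First record for the screen: identify module pool and screen, flag the start.
    BDCTAB-PROGRAM  = 'SAPMZDEMO'.       " placeholder module pool name
    BDCTAB-DYNPRO   = '0100'.            " placeholder screen number
    BDCTAB-DYNBEGIN = 'X'.
    APPEND BDCTAB. CLEAR BDCTAB.
    * One record per field value entered on that screen.
    BDCTAB-FNAM = 'ZDEMO-FIELD1'.        " placeholder screen field name
    BDCTAB-FVAL = 'SOME VALUE'.
    APPEND BDCTAB. CLEAR BDCTAB.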
    Transferring data from local file to internal table
    Data is uploaded into an internal table by the UPLOAD or WS_UPLOAD function.
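    For example, a WS_UPLOAD call might look roughly like this (a sketch only; the file path and the structure of ITAB are placeholders):
    DATA: BEGIN OF ITAB OCCURS 0,
            ID(10),
            NAME(25),
            CITY(25),
          END OF ITAB.
    CALL FUNCTION 'WS_UPLOAD'
      EXPORTING
        FILENAME = 'C:\DATA.TXT'         " placeholder path to the local file
        FILETYPE = 'ASC'
      TABLES
        DATA_TAB = ITAB
      EXCEPTIONS
        OTHERS   = 1.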
    Population of BDCDATA
    For each record of this internal table, you need to populate the second internal table, which has the BDCDATA structure.
    All these initial steps are necessary for any type of BDC interface.
    The data transfer program can then use either the SESSION METHOD or CALL TRANSACTION; the initial steps are the same for both methods.
    The first step for both methods is to upload the data to an internal table. From the internal table, the data is transferred to the database tables in one of two ways: the Session method or Call transaction.
    SESSION METHOD
    About Session method
    In this method you transfer data from the internal table to the database tables through sessions.
    An ABAP/4 program reads the external data that is to be entered into the SAP System and stores it in a session. A session stores the actions that are required to enter your data using normal SAP transactions, i.e., data is transferred to the session, which in turn transfers the data to the database tables.
    The session is an intermediate step between the internal table and the database tables. The data is stored in the session together with its actions, i.e., the data for the screen fields, which screen it is passed to, the program name behind the screen, and how the next screen is processed.
    When the program has finished generating the session, you can run the session to execute the SAP transactions in it. You can either explicitly start and monitor a session or have the session run in the background processing system.
    Unless the session is processed, the data is not transferred to the database tables.
    BDC_OPEN_GROUP
    You create the session in your program with the BDC_OPEN_GROUP function.
    Parameters to this function are:
    • User Name: User name
    • Group: Name of the session
    • Lock Date (hold date): The date until which the session is locked against processing.
    • Keep: Pass ‘X’ when you want to retain the session in the queue after it has been processed, or ‘ ‘ to have it deleted after processing.
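    Put together, a call to BDC_OPEN_GROUP might look roughly like this (a sketch only; the session name is a placeholder):
    CALL FUNCTION 'BDC_OPEN_GROUP'
      EXPORTING
        CLIENT = SY-MANDT
        GROUP  = 'ZCUST_UPLOAD'          " placeholder session name
        USER   = SY-UNAME
        KEEP   = 'X'
      EXCEPTIONS
        OTHERS = 1.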
    BDC_INSERT
    This function inserts the batch input data for one transaction into the session.
    Parameters to this function are:
    • Tcode: Transaction Name
    • Dynprotab: BDC Data
    BDC_CLOSE_GROUP
    This function closes the batch input session. It takes no parameters.
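    So, once the session is open, it is typically filled and closed with a sequence like this (a sketch; the transaction code is a placeholder and BDCTAB is a BDCDATA-like table filled beforehand):
    CALL FUNCTION 'BDC_INSERT'
      EXPORTING
        TCODE     = 'ZTRAN'              " placeholder transaction code
      TABLES
        DYNPROTAB = BDCTAB
      EXCEPTIONS
        OTHERS    = 1.
    CALL FUNCTION 'BDC_CLOSE_GROUP'
      EXCEPTIONS
        OTHERS = 1.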
    Some additional information for session processing
    When the session is generated using the KEEP option of BDC_OPEN_GROUP, the system keeps the session in the queue whether or not it has been processed successfully.
    In that case, once the session has been processed you have to delete it manually. If session processing completes successfully and the KEEP option was not set, the session is removed automatically from the session queue; the log for the session is not removed.
    If a batch input session terminates with errors, it appears in the list of incorrect sessions and can be processed again. To correct an incorrect session, you can analyze it: the Analysis function allows you to determine which screen and which value produced the error. If you find small errors in the data, you can correct them interactively; otherwise you need to modify the batch input program that generated the session, or often even the data file.
    CALL TRANSACTION
    About CALL TRANSACTION
    This technique is similar to the SESSION method, but while batch input is a two-step procedure, CALL TRANSACTION performs both steps online, one after the other. In this method, you call a transaction from your program with:
    Call transaction <tcode> using <BDCTAB>
    Mode <A/N/E>
    Update <S/A>
    Messages into <MSGTAB>.
    Parameter 1 is the transaction code.
    Parameter 2 is the name of the BDCTAB table.
    Parameter 3 specifies the mode in which the transaction is executed:
    A is all-screens mode: all the screens of the transaction are displayed.
    N is no-screen mode: no screen is displayed when you execute the transaction.
    E is error-screens mode: only the screens on which an error occurs are displayed.
    Parameter 4 specifies the update type used for the database update:
    S is synchronous update: the program waits until the database update has finished, so sy-subrc reflects the outcome of the update.
    A is asynchronous update: the program continues without waiting for the database update to finish, so a sy-subrc of 0 does not guarantee that all affected tables were actually updated.
    Parameter 5: when you update a database table, the operation is either successful, unsuccessful, or successful with warnings. These messages are stored in the internal table that you specify with the MESSAGES addition. This internal table should be declared like BDCMSGCOLL, a structure available in ABAP/4. It contains the following fields:
    1. Tcode: Transaction code
    2. Dyname: Batch input module pool (program) name
    3. Dynumb: Batch input screen (dynpro) number
    4. Msgtyp: Batch input message type (A/E/W/I/S)
    5. Msgspra: Language key of the message
    6. Msgid: Message id
    7. MsgvN: Message variables (N = 1 - 4)
    For each entry updated in the database tables, a message is available in BDCMSGCOLL. Since BDCMSGCOLL is a structure, you need to declare an internal table based on it that can hold multiple records.
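    As a sketch (the transaction code is a placeholder; BDCTAB is a BDCDATA-like table), the message table can be declared from BDCMSGCOLL and filled via the MESSAGES INTO addition:
    DATA: MSGTAB LIKE BDCMSGCOLL OCCURS 0 WITH HEADER LINE.
    CALL TRANSACTION 'ZTRAN' USING BDCTAB
      MODE   'N'
      UPDATE 'S'
      MESSAGES INTO MSGTAB.
    LOOP AT MSGTAB.
      " Evaluate MSGTAB-MSGTYP and MSGTAB-MSGID here, e.g. to collect error records.
    ENDLOOP.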
    Steps for CALL TRANSACTION method
    1. Internal table for the data (structure similar to your local file)
    2. BDCTAB like BDCDATA
    3. The UPLOAD or WS_UPLOAD function to upload the data from the local file to itab (assuming the file is a local file).
    4. Loop at itab.
    Populate BDCTAB table.
    Call transaction <tcode> using <BDCTAB>
    Mode <A/N/E>
    Update <S/A>.
    Refresh BDCTAB.
    Endloop.
    (To populate BDCTAB, you need to transfer each and every field.)
    The major differences between the Session method and Call transaction are as follows:
    1. Session method: data is not updated in the database tables unless the session is processed. Call transaction: the database tables are updated immediately.
    2. Session method: no sy-subrc is returned. Call transaction: sy-subrc is returned.
    3. Session method: an error log is created for error records. Call transaction: errors need to be handled explicitly.
    4. Session method: the database update is always synchronous. Call transaction: the database update can be synchronous or asynchronous.
    Error Handling in CALL TRANSACTION
    When the Session method updates records in the database tables, error records are stored in the log. With Call transaction there is no such log, and error records are lost unless they are handled. Usually you need to report all the error records, i.e., the records that were not inserted or updated in the database tables. This can be done as follows:
    Steps for the error handling in CALL TRANSACTION
    1. Internal table for the data (structure similar to your local file)
    2. BDCTAB like BDCDATA
    3. Internal table BDCMSG like BDCMSGCOLL
    4. A second internal table similar to the first internal table
    (Third and fourth steps are for error handling)
    5. The UPLOAD or WS_UPLOAD function to upload the data from the local file to itab (assuming the file is a local file).
    6. Loop at itab.
    Populate BDCTAB table.
    Call transaction <tr.code> using <Bdctab>
    Mode <A/N/E>
    Update <S/A>
    Messages <BDCMSG>.
    Perform check.
    Refresh BDCTAB.
    Endloop.
    7. Form check.
    IF sy-subrc <> 0. (CALL TRANSACTION returns a non-zero sy-subrc if the update was not successful.)
    Call function FORMAT_MESSAGE.
    (This function is called to format the message returned by the system so that it can be displayed along with the record.)
    Append itab2.
    Display the record and message.
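    A minimal sketch of such a check routine is shown below (it assumes the CALL TRANSACTION in step 6 used MESSAGES <BDCMSG> and that ITAB2, the fourth table from the steps above, collects the failed records; all names are illustrative):
    FORM CHECK.
      DATA: L_MSG(200).
      IF SY-SUBRC <> 0.
        LOOP AT BDCMSG.
          CALL FUNCTION 'FORMAT_MESSAGE'
            EXPORTING
              ID        = BDCMSG-MSGID
              LANG      = SY-LANGU
              NO        = BDCMSG-MSGNR
              V1        = BDCMSG-MSGV1
              V2        = BDCMSG-MSGV2
              V3        = BDCMSG-MSGV3
              V4        = BDCMSG-MSGV4
            IMPORTING
              MSG       = L_MSG
            EXCEPTIONS
              NOT_FOUND = 1
              OTHERS    = 2.
          WRITE: / ITAB-ID, L_MSG.       " show the failed record together with its message
        ENDLOOP.
        APPEND ITAB TO ITAB2.            " keep the failed record for the error report
      ENDIF.
    ENDFORM.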
    DIRECT INPUT
    About Direct Input
    In contrast to batch input, this technique does not create sessions, but stores the data directly. It does not simulate the online transaction. To enter the data into the corresponding database tables directly, the system calls a number of function modules that execute any necessary checks. In case of errors, the direct input technique provides a restart mechanism. However, to be able to activate the restart mechanism, direct input programs must be executed in the background only. Direct input checks the data thoroughly and then updates the database directly.
    You can start a Direct Input program in two ways:
    Start the program directly
    This is the quickest way to see if the program works with your flat file. This option is possible with all direct input programs. If the program ends abnormally, you will not have any logs telling you what has or has not been posted. To minimize the chance of this happening, always use the check file option for the first run with your flat file. This allows you to detect format errors before transfer.
    Starting the program via the DI administration transaction
    This transaction restarts the processing if the data transfer program aborts. Since direct input documents are posted immediately into the SAP database, the restart option prevents the duplicate document postings that would otherwise occur when the program is simply restarted (i.e., without adjusting your flat file).
    Direct input is usually done for standard data like material master, FI accounting document, SD sales order and Classification for which SAP has provided standard programs.
    The first time you work with the Direct Input administration program, you will need to do some preparation before you can transfer data:
    - Create variant
    - Define job
    - Start job
    - Restart job
    Common batch input errors
    - The batch input BDCDATA structure tries to assign values to fields which do not exist in the current transaction screen.
    - The screens in the BDCDATA structure are not in the right sequence, or an intermediate screen is missing.
    - On exceptional occasions, the logic flow of a batch input session does not exactly match that of manual online processing. Such problems can be discovered by testing the sessions online.
    - The BDCDATA structure contains field values that are longer than the actual field definitions.
    - Authorization problems.
    RECORDING A BATCH INPUT
    A batch input recording allows you to record an R/3 transaction and generate a program that contains all the screen and field information in the required BDCDATA format.
    You can either use SHDB transaction for recording or
    System -> Services -> Batch Input -> Edit
    And from here click recording.
    Enter name for the recording.
    (Dates are optional)
    Click recording.
    Enter transaction code.
    Enter.
    Click Save button.
    You finally come to a screen where you have all the information for each screen, including BDC_OKCODE.
    • Click Get Transaction.
    • Return to BI.
    • Click overview.
    • Position the cursor on the just recorded entry and click generate program.
    • Enter program name.
    • Click Enter.
    The program is generated for the particular transaction.
    BACKGROUND PROCESSING
    Need for Background processing
    When a large volume of data is involved, batch inputs are usually run in the background.
    The R/3 system includes functions that allow users to work non-interactively or offline. The background processing system handles these functions.
    Non-interactively means that instead of executing the ABAP/4 programs and waiting for an answer, users can submit those programs for execution at a more convenient, planned time.
    There are several reasons to submit programs for background execution:
    • The maximum time allowed for online execution is limited (typically 300 seconds). The user gets a TIMEOUT error and an aborted transaction if execution takes longer. To avoid this type of error, you can submit such jobs for background processing.
    • You can continue to use the system while your program is executing.
    This does not mean that interactive or online work is not useful. Both types of processing have their purposes. Online work is the most common: entering business data, displaying information, printing small reports, managing the system and so on. Background jobs are mainly used for the following tasks: processing large amounts of data, executing periodic jobs without human intervention, and running programs at a more convenient, planned time outside normal working hours, i.e., at night or on weekends.
    The transaction for background processing is SM36.
    Or
    Tools -> Administration -> Jobs -> Define jobs
    Or
    System -> Services -> Jobs
    Components of the background jobs
    A job in background processing is a series of steps that can be scheduled; a step is a program to be run in the background.
    • Job name. The name assigned to the job. It identifies the job. You can specify up to 32 characters for the name.
    • Job class. Indicates the type of background processing priority assigned to the job.
    The job class determines the priority of a job. The background system provides three job classes: A, B and C, which correspond to job priorities.
    • Job steps. Parameters to be passed for this screen are as follows:
    Program name.
    Variant if it is report program
    Start criteria for the job: Option available for this are as follows:
    Immediate - allows you to start a job immediately.
    Date/Time - allows you to start a job at a specific date and time.
    After job - you can start a job after a particular job.
    After event - allows you to start a job after a particular event.
    At operation mode - allows you to start a job when the system switches to a particular operation mode.
    Defining Background jobs
    It is a two-step process: first you define the job, then you release it.
    When users define a job and save it, they are actually scheduling the report, i.e., specifying the job components, the steps, and the start time.
    When users schedule program for background processing, they are instructing the system to execute an ABAP/4 report or an external program in the background. Scheduled jobs are not executed until they are released. When jobs are released, they are sent for execution to the background processing system at the specified start time. Both scheduling and releasing of jobs require authorizations.
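    If you prefer to schedule a batch input program from code rather than interactively via SM36, a rough sketch using the standard JOB_OPEN / SUBMIT ... VIA JOB / JOB_CLOSE pattern is shown below (the job name and report name are placeholders; this is only offered as an illustration, not as part of the procedure above):
    DATA: L_JOBNAME  LIKE TBTCJOB-JOBNAME VALUE 'ZBDC_UPLOAD',
          L_JOBCOUNT LIKE TBTCJOB-JOBCOUNT.
    * Open a new background job.
    CALL FUNCTION 'JOB_OPEN'
      EXPORTING
        JOBNAME  = L_JOBNAME
      IMPORTING
        JOBCOUNT = L_JOBCOUNT
      EXCEPTIONS
        OTHERS   = 1.
    * Add the (placeholder) batch input report as a job step.
    SUBMIT ZBDC_UPLOAD_REPORT VIA JOB L_JOBNAME NUMBER L_JOBCOUNT AND RETURN.
    * Close and release the job, starting it immediately.
    CALL FUNCTION 'JOB_CLOSE'
      EXPORTING
        JOBNAME   = L_JOBNAME
        JOBCOUNT  = L_JOBCOUNT
        STRTIMMED = 'X'
      EXCEPTIONS
        OTHERS    = 1.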
    HANDLING OF POP UP SCREEN IN BDC
    Many times a pop-up screen appears in a transaction, and for this screen you do not pass any record, only an indication telling the system to proceed further (for example, a confirmation pop-up).
    To handle such a screen, the system provides a variable called BDC_CURSOR. You pass this variable to BDCDATA and process the screen.
    Such screens appear in many transactions. In this case you are just passing the information that YES, you want to save, i.e., that YES should be chosen. So you transfer this information to BDCDATA using the field name of the YES option, which is usually SPOT_OPTION. Instead of BDC_OKCODE, you pass BDC_CURSOR.
    BDC_CURSOR is also used to place the cursor on a particular field.
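    A small sketch following the description above (the module pool name and screen number of the pop-up are placeholders, and SPOT_OPTION stands for whatever field name the pop-up actually uses):
    * Identify the pop-up screen.
    BDCTAB-PROGRAM  = 'SAPMZPOP'.        " placeholder module pool of the pop-up
    BDCTAB-DYNPRO   = '0200'.            " placeholder screen number of the pop-up
    BDCTAB-DYNBEGIN = 'X'.
    APPEND BDCTAB. CLEAR BDCTAB.
    * Place the cursor on the YES option instead of passing an OK code.
    BDCTAB-FNAM = 'BDC_CURSOR'.
    BDCTAB-FVAL = 'SPOT_OPTION'.
    APPEND BDCTAB. CLEAR BDCTAB.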
    Consider a simple transaction where you enter a customer number on the first screen, and on the next screen the data for that customer is displayed. The fields we are changing here are the name and the city. When you click Save, the changed record is saved.
    The prerequisites for writing this BDC interface, as indicated earlier, are:
    1. To find screen number
    2. To find screen field names, type of the field and length of the field.
    3. To find BDC_OKCODE for each screen
    4. Create flat file.
    Generally, Batch Input is used to transfer large amounts of data. For example, when you are implementing a new SAP project, you will of course need to transfer some data from the legacy system to the SAP system.
    CALL TRANSACTION is used especially for integration between two SAP systems or between different modules. Users sometimes want something like this: they click a button or an item, and SAP inserts or changes data automatically. Here CALL TRANSACTION should be considered.
    To transfer data for multiple transactions, the Batch Input method is usually used.
    check these sites for step by step process:
    For BDC:
    http://myweb.dal.ca/hchinni/sap/bdc_home.htm
    https://www.sdn.sap.com/irj/sdn/wiki?path=/display/home/bdc&
    http://www.sap-img.com/abap/learning-bdc-programming.htm
    http://www.sapdevelopment.co.uk/bdc/bdchome.htm
    http://www.sap-img.com/abap/difference-between-batch-input-and-call-transaction-in-bdc.htm
    http://help.sap.com/saphelp_47x200/helpdata/en/69/c250684ba111d189750000e8322d00/frameset.htm
    http://www.sapbrain.com/TUTORIALS/TECHNICAL/BDC_tutorial.html
    Check these links:
    http://www.sap-img.com/abap/difference-between-batch-input-and-call-transaction-in-bdc.htm
    http://www.sap-img.com/abap/question-about-bdc-program.htm
    http://www.itcserver.com/blog/2006/06/30/batch-input-vs-call-transaction/
    http://www.planetsap.com/bdc_main_page.htm
    Call Transaction or Session method?

  • Uploading data from Legacy system

    Hi
    I am shortly going to work on a data migration project. I have never worked on a data migration project before and want to know the answers to the following questions.
    1] Which technique should I consider first for data upload (both master data and transaction data)?
    2] What steps are involved?
    3] I know we can use LSMW, BAPIs, BDC, Direct Input, IDocs etc., but I want to know which one to choose and when.
    4] What care must be taken in each data migration?
    5] Is there a standard list of programs or a good document on when to choose what?
    ~SR

    Hi,
    Data migration can be done by the following methods:
    LSMW: for master data migration or small data migrations.
    BAPI: it is just like a function module, through which you can create master data as well as transactional data.
    You have to pass some parameters as well as tables if required, and the BAPI will RETURN a message saying that such-and-such material/delivery has been created.
    BDC: in BDC we have 2 methods:
    Call transaction and Session.
    Call transaction: using this you can upload transactional data as well as master data.
    The update can be synchronous or asynchronous.
    Session: using this you can upload transactional data as well as master data.
    The update in a session is synchronous, so it is a more time-consuming process.
    Most of the time we go for Call transaction in real situations.
    Direct input method: it is outdated and no longer used much.
    In all the above cases, the data already exists in a flat file and you migrate the data in one shot, whereas IDoc is used where dynamic data migration is necessary.
    For example, once a material is created in the legacy system, that particular material should be migrated to R/3; this can be handled through IDoc.
    IDoc: master data as well as transactional data can be migrated through IDocs (MATMAS, ORDERS).
    IDocs use ALE technology for the migration.
    Revert back if there are any issues.
    Check these links.
    SAP Data Migration with LSMW
    http://www.sap-img.com/sap-data-migration.htm
    For BDC:
    http://myweb.dal.ca/hchinni/sap/bdc_home.htm
    https://www.sdn.sap.com/irj/sdn/wiki?path=/display/home/bdc&
    http://www.sap-img.com/abap/learning-bdc-programming.htm
    http://www.sapdevelopment.co.uk/bdc/bdchome.htm
    http://www.sap-img.com/abap/difference-between-batch-input-and-call-transaction-in-bdc.htm
    http://help.sap.com/saphelp_47x200/helpdata/en/69/c250684ba111d189750000e8322d00/frameset.htm
    http://www.sapbrain.com/TUTORIALS/TECHNICAL/BDC_tutorial.html
    LSMW
    No ABAP effort is required for the SAP data migration. However, effort is required to map the data into the structure and pre-determined format expected by the pre-written ABAP upload program of the LSMW.
    The Legacy System Migration Workbench (LSMW) is a tool recommended by SAP that you can use to transfer data once only or periodically from legacy systems into an R/3 System.
    More and more medium-sized firms are implementing SAP solutions, and many of them have their legacy data in desktop programs. In this case, the data is exported in a format that can be read by PC spreadsheet systems. As a result, the data transfer is mere child's play: Simply enter the field names in the first line of the table, and the LSM Workbench's import routine automatically generates the input file for your conversion program.
    The LSM Workbench lets you check the data for migration against the current settings of your customizing. The check is performed after the data migration, but before the update in your database.
    So although it was designed for uploading legacy data, it is not restricted to this use.
    We use it for mass changes, i.e. uploading new/replacement data, and it is great, but there are limits to its functionality, depending on the complexity of the transaction you are trying to replicate.
    The SAP transaction code is 'LSMW' for SAP version 4.6x.
    For those with older SAP versions, the data migration programs might not have been pre-loaded.
    You can download the LSMW at no cost from SAPNet under Services, SAP Methodology and Tools, category Tools.
    If you are an existing SAP customer with an OSS ID, you can access the SAP Service Marketplace to download the LSMW for your Basis team to install on your SAP system: http://service.sap.com/lsmw
    The LSM Workbench carries out the following tasks:
    Reads the transfer data from one or more files (for example, spreadsheets, sequential files etc.)
    Converts the data from the source format into the target format
    Note that with background processing, the input file must not be located on the presentation server; access to presentation server files is only possible when you are working online.
    LSMW STEPS
    Data Cleansing
    Data cleansing allows you to compare, include and merge redundant business partner master records (potential duplicates) in data cleansing cases. Following the data cleansing process you can remove data records from the system using archiving.
    Integration
    Before you can carry out data cleansing, you must determine the redundant data in your system and include it in data cleansing cases.
    You have the following options for duplicate recognition and the creation of data cleansing cases:
    The Business Address Services (BAS) provide interfaces for integrating the relevant external software (search engines).
    User-defined programs
    Services of data providers, who check their data for possible duplicates.
    In the options described above, the data cleansing cases are created by the system and put into the data cleansing worklist for further processing.
    In individual cases you can find potential business partner duplicates in the hit list of the business partner search and create a separate data cleansing case.
    After connecting the non-SAP software (search engines), the system starts duplicate recognition (delta scan) with every change to, or new entry of, a business partner, for that specific individual record. At the beginning of the consolidation process you should do a full scan through the complete dataset using the non-SAP software; this registers all potential duplicates in data cleansing cases and makes them available for further processing.
    Within the application basis, the referencing objects taken into account by data cleansing are limited to the sub-objects of the business partner. You have the option of making modification-free enhancements to the referencing objects.
    Prerequisites
    You must have determined the redundant data records in your system.
    To use the interface of the Business Address Services, make the following Customizing settings: SAP Implementation Guide -> Basis -> Basis Services -> Address Management -> Activate Duplicate Check Index Pool
    Make the following system settings in the IMG of the SAP Business Partner: Basic Settings -> Data Cleansing -> Maintain Number Range / Define Priorities / Activate Data Cleansing.
    Activities
    Register data cleansing cases via BAPIs or create them in the dialog for business partner maintenance from the worklist or the business partner search.
    Process the created data cleansing cases in a separate process step afterwards.
    BATCH DATA COMMUNICATION
    About Data Transfer In R/3 System
    When a company decides to implement SAP R/3 to manage business-critical data, it usually does not start from a no-data situation. Normally, an SAP R/3 project comes in to replace or complement existing applications.
    In the process of replacing current applications and transferring application data, two situations might occur:
    • The first is when the application data to be replaced is transferred at once, and only once.
    • The second is when data has to be transferred periodically from external systems to SAP and vice versa.
    • In either case there is a period of time when information has to be transferred from the existing applications to SAP R/3, and often this process is repetitive.
    The SAP system offers the following methods for transferring data into SAP systems from non-SAP or legacy systems; these methods are collectively called “batch input” or “batch data communication”:
    1. SESSION METHOD
    2. CALL TRANSACTION
    3. DIRECT INPUT
    AN EXAMPLE WITH SESSION METHOD
    The following program demonstrates how data is passed from a flat file to an SAP transaction, and on to the database table, using the SESSION method.
    The transaction is TFBA (change customer).
    It is a simple transaction where you enter a customer number on the first screen, and on the next screen the data for that customer is displayed. The fields we are changing here are the name and the city. When you click Save, the changed record is saved.
    Prerequisite to write this BDC interface as indicated earlier is:
    1. To find screen number
    2. To find screen field names, type of the field and length of the field.
    3. To find BDC_OKCODE for each screen
4. Create the flat file.
The flat file can be created on your hard disk as follows:
    1 Vinod Krishna Hyderabad
    2 Kavitha Secunderabad
    3 Kishore Hyderabad
(Where the 1st field is the customer number, the 2nd field is the customer name and the 3rd field is the city.)
To transfer this data to database table SCUSTOM, the following interface can be used.
    REPORT DEMO1.
* Following internal table is to upload the flat file.
DATA: BEGIN OF ITAB OCCURS 0,
ID(10),
NAME(25),
CITY(25),
END OF ITAB.
* Following internal table BDCDATA is to pass data from the internal table to the session.
DATA: BDCTAB LIKE BDCDATA OCCURS 0 WITH HEADER LINE.
* Variables
DATA: DATE1 LIKE SY-DATUM.
DATE1 = SY-DATUM - 1. " This is for the hold date
* To upload the flat file into the internal table.
CALL FUNCTION 'UPLOAD'
EXPORTING
FILENAME = 'C:\FF.TXT'
FILETYPE = 'ASC'
    TABLES
    DATA_TAB = ITAB
    EXCEPTIONS
    CONVERSION_ERROR = 1
    INVALID_TABLE_WIDTH = 2
    INVALID_TYPE = 3
    NO_BATCH = 4
    UNKNOWN_ERROR = 5
    OTHERS = 6.
    If sy-subrc = 0.
* Calling function to create a session
CALL FUNCTION 'BDC_OPEN_GROUP'
EXPORTING
CLIENT = SY-MANDT
GROUP = 'POTHURI'
HOLDDATE = DATE1
KEEP = 'X'
USER = SY-UNAME
    EXCEPTIONS
    CLIENT_INVALID = 1
    DESTINATION_INVALID = 2
    GROUP_INVALID = 3
    GROUP_IS_LOCKED = 4
    HOLDDATE_INVALID = 5
    INTERNAL_ERROR = 6
    QUEUE_ERROR = 7
    RUNNING = 8
    SYSTEM_LOCK_ERROR = 9
    USER_INVALID = 10
    OTHERS = 11.
    If sy-subrc = 0.
    *-- MAIN Logic--
LOOP AT ITAB.
PERFORM GENERATE_DATA. " Populating BDCDATA table
CALL FUNCTION 'BDC_INSERT'
EXPORTING
TCODE = 'TFBA'
    TABLES
    DYNPROTAB = BDCTAB
    EXCEPTIONS
    INTERNAL_ERROR = 1
    NOT_OPEN = 2
    QUEUE_ERROR = 3
    TCODE_INVALID = 4
    PRINTING_INVALID = 5
    POSTING_INVALID = 6
    OTHERS = 7.
REFRESH BDCTAB.
    ENDLOOP.
* Calling function to close the session
CALL FUNCTION 'BDC_CLOSE_GROUP'
    EXCEPTIONS
    NOT_OPEN = 1
    QUEUE_ERROR = 2
    OTHERS = 3.
    Endif.
    Endif.
*& Form GENERATE_DATA
* Create BDC data
FORM GENERATE_DATA.
* Passing information for the 1st screen to BDCDATA
BDCTAB-PROGRAM = 'SAPMTFBA'.
BDCTAB-DYNPRO = '0100'.
BDCTAB-DYNBEGIN = 'X'.
APPEND BDCTAB. CLEAR BDCTAB.
* Passing field information to BDCDATA
BDCTAB-FNAM = 'SCUSTOM-ID'.
BDCTAB-FVAL = ITAB-ID.
APPEND BDCTAB. CLEAR BDCTAB.
* Passing BDC_OKCODE to BDCDATA
BDCTAB-FNAM = 'BDC_OKCODE'.
BDCTAB-FVAL = '/5'.
APPEND BDCTAB. CLEAR BDCTAB.
* Passing screen information for the next screen to BDCDATA
BDCTAB-PROGRAM = 'SAPMTFBA'.
BDCTAB-DYNPRO = '0200'.
BDCTAB-DYNBEGIN = 'X'.
APPEND BDCTAB. CLEAR BDCTAB.
* Passing field information to BDCDATA
BDCTAB-FNAM = 'SCUSTOM-NAME'.
BDCTAB-FVAL = ITAB-NAME.
APPEND BDCTAB. CLEAR BDCTAB.
* Passing field information to BDCDATA
BDCTAB-FNAM = 'SCUSTOM-CITY'.
BDCTAB-FVAL = ITAB-CITY.
APPEND BDCTAB. CLEAR BDCTAB.
* Passing BDC_OKCODE to BDCDATA
BDCTAB-FNAM = 'BDC_OKCODE'.
BDCTAB-FVAL = 'SAVE'.
APPEND BDCTAB. CLEAR BDCTAB.
ENDFORM. "GENERATE_DATA
    AN EXAMPLE WITH CALL TRANSACTION
The same steps are to be repeated for CALL TRANSACTION.
The only difference between the two interfaces is that in the Session method you create a session and store the screen and data information in it; when the session is processed, the data is transferred to the database. With CALL TRANSACTION, the data is transferred directly to the database table.
REPORT DEMO1.
* Follow the above code till the MAIN logic. The subroutine GENERATE_DATA should also be copied.
LOOP AT ITAB.
PERFORM GENERATE_DATA. " Populating BDCDATA table
CALL TRANSACTION 'TFBA' USING BDCTAB MODE 'A' UPDATE 'S'.
REFRESH BDCTAB.
ENDLOOP.
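Since CALL TRANSACTION posts directly to the database, it is usually worth capturing the messages of each call. Here is a minimal sketch assuming the same ITAB and BDCTAB declarations as above; the logging WRITE is only an example of what you might do with the messages:
* Minimal sketch: same loop as above, but collecting the messages of each call
DATA: MSGTAB LIKE BDCMSGCOLL OCCURS 0 WITH HEADER LINE.
LOOP AT ITAB.
  PERFORM GENERATE_DATA. " Populating BDCDATA table
  REFRESH MSGTAB.
  CALL TRANSACTION 'TFBA' USING BDCTAB
                          MODE 'N'           " 'N' = no screen display
                          UPDATE 'S'         " synchronous update
                          MESSAGES INTO MSGTAB.
  IF SY-SUBRC <> 0.
    LOOP AT MSGTAB WHERE MSGTYP = 'E' OR MSGTYP = 'A'.
      WRITE: / ITAB-ID, MSGTAB-MSGID, MSGTAB-MSGNR. " log the failed record
    ENDLOOP.
  ENDIF.
  REFRESH BDCTAB.
ENDLOOP.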
check these links:
    http://www.sap-img.com/abap/learning-bdc-programming.htm
    http://www.sap-img.com/bdc.htm
    www.sappoint.com/abap/bdcconcept.pdf
    www.sap-img.com/abap/learning-bdc-programming.htm
    www.sap-img.com/abap/question-about-bdc-program.htm
    www.sapdevelopment.co.uk/bdc/bdchome.htm
    www.planetsap.com/bdc_main_page.htm
    http://www.sapbrain.com/TUTORIALS/TECHNICAL/BDC_tutorial.html
    http://www.sap-img.com/abap/bdc-example-using-table-control-in-bdc.htm
    http://help.sap.com/saphelp_erp2005/helpdata/en/fa/097119543b11d1898e0000e8322d00/frameset.htm
    http://myweb.dal.ca/hchinni/sap/bdc_home.htm
    IDOC
    What is IDOC ?
    IDoc
    Standard SAP format for electronic data interchange between systems (Intermediate Document). Different message types (such as delivery confirmations or purchase orders) normally represent different specific formats, the IDoc types. However, multiple message types with related content can be assigned to one IDoc type: For example, the IDoc type ORDERS01 transfers the “logical” message types ORDERS (purchase order) and ORDRSP (order confirmation).
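For illustration, this is roughly how an outbound program hands an IDoc to the ALE layer: the control record carries the message type and the IDoc (basic) type mentioned above, and the data records carry the segments. The sketch below uses the function module MASTER_IDOC_DISTRIBUTE; the message type, segment name and receiver partner are placeholders, not values taken from the text:
* Sketch only: message type, IDoc type, segment name and receiver are placeholders.
DATA: LS_CONTROL TYPE EDIDC,
      LT_COMM    TYPE STANDARD TABLE OF EDIDC,
      LT_DATA    TYPE STANDARD TABLE OF EDIDD,
      LS_DATA    TYPE EDIDD.
LS_CONTROL-MESTYP = 'ZORDERS'.      " logical message type (placeholder)
LS_CONTROL-IDOCTP = 'ORDERS01'.     " IDoc basic type
LS_CONTROL-RCVPRT = 'LS'.           " receiver partner type: logical system
LS_CONTROL-RCVPRN = 'RECEIVER01'.   " receiver partner number (placeholder)
LS_DATA-SEGNAM = 'Z1SEGMENT'.       " segment name (placeholder)
LS_DATA-SDATA  = 'segment field values go here'.
APPEND LS_DATA TO LT_DATA.
CALL FUNCTION 'MASTER_IDOC_DISTRIBUTE'
  EXPORTING
    MASTER_IDOC_CONTROL        = LS_CONTROL
  TABLES
    COMMUNICATION_IDOC_CONTROL = LT_COMM
    MASTER_IDOC_DATA           = LT_DATA
  EXCEPTIONS
    ERROR_IN_IDOC_CONTROL          = 1
    ERROR_WRITING_IDOC_STATUS      = 2
    ERROR_IN_IDOC_DATA             = 3
    SENDING_LOGICAL_SYSTEM_UNKNOWN = 4
    OTHERS                         = 5.
IF SY-SUBRC = 0.
  COMMIT WORK. " the ALE layer then dispatches the IDoc(s) listed in LT_COMM
ENDIF.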
Also check these links; they will help you.
    idoc information
    http://help.sap.com/saphelp_nw2004s/helpdata/en/78/21785851ce11d189570000e829fbbd/frameset.htm
    https://www.sdn.sap.com/irj/sdn/wiki?path=/display/xi/sapR3%28Idocs%29ToXI--Steps+Summarized&
    /people/prateek.shah/blog/2005/06/08/introduction-to-idoc-xi-file-scenario-and-complete-walk-through-for-starters
    ALE/ IDOC/ XML
    Troubleshooting of ALE Process - /people/raja.thangamani/blog/2007/07/19/troubleshooting-of-ale-process
    http://www.sapgenie.com/sapgenie/docs/ale_scenario_development_procedure.doc
    http://www.thespot4sap.com/Articles/SAP_XML_Business_Integration.asp
    http://help.sap.com/saphelp_srm30/helpdata/en/72/0fe1385bed2815e10000000a114084/content.htm
IDoc Conversion
    /people/kevin.wilson2/blog/2005/12/07/changing-fields-in-an-idoc-segment
    Please check this online document for ALE and IDoc.
    http://help.sap.com/printdocu/core/Print46c/en/data/pdf/BCMIDALEIO/BCMIDALEIO.pdf
    http://help.sap.com/printdocu/core/Print46c/en/data/pdf/BCMIDALEPRO/BCMIDALEPRO.pdf
    http://help.sap.com/printdocu/core/Print46c/en/data/pdf/CABFAALEQS/CABFAALEQS.pdf
    http://help.sap.com/printdocu/core/Print46c/en/data/pdf/BCSRVEDISC/CAEDISCAP_STC.pdf
    http://help.sap.com/printdocu/core/Print46c/en/data/pdf/BCSRVEDI/CAEDI.pdf
Also check these links for additional information.
    http://help.sap.com/saphelp_erp2004/helpdata/en/dc/6b835943d711d1893e0000e8323c4f/content.htm
    http://www.sapgenie.com/sapgenie/docs/ale_scenario_development_procedure.doc
    http://edocs.bea.com/elink/adapter/r3/userhtm/ale.htm#1008419
    http://www.netweaverguru.com/EDI/HTML/IDocBook.htm
    http://www.sapgenie.com/sapedi/index.htm
    serialization /people/alessandro.guarneri/blog/2006/11/26/content-based-serialization-dynamic-queue-name-in-xi
    /people/prateek.shah/blog/2005/06/08/introduction-to-idoc-xi-file-scenario-and-complete-walk-through-for-starters - IDoc to File
    IDOc testing - /people/suraj.sr/blog/2005/12/29/generate-test-case-for-an-idoc-scenario
    /people/ravikumar.allampallam/blog/2005/06/24/convert-any-flat-file-to-any-idoc-java-mapping - Any flat file to any Idoc
    /people/pooja.pandey/blog/2005/07/27/idocs-multiple-types-collection-in-bpm - Collection of IDoc to Single File
    /people/stefan.grube/blog/2006/09/18/collecting-idocs-without-using-bpm - collecting IDocs without BPM
    /people/prateek.shah/blog/2005/06/08/introduction-to-idoc-xi-file-scenario-and-complete-walk-through-for-starters - IDoc to File
Hope it will be useful to you... please don't forget to reward points!
    Regards
    Vasu

  • Passing a range of dates from Visual Composer 7.0 to Bex Analyzer

    Dear Experts,
             I created a button in my Visual Composer model that has the following settings:
    System action:  Hyperlink
    Apply to:  Self
    Hyperlink address:  "...QUERY=BMMSEG_C01_SAS_Q0001&BI_COMMAND_1-BI_COMMAND_TYPE=SET_SELECTION_STATE&BI_COMMAND_1-TARGET_DATA_PROVIDER_REF_LIST-TARGET_DATA_PROVIDER_REF_1=DP_1&BI_COMMAND_1-CHARACTERISTICS_SELECTIONS-CHARACTERISTIC_SELECTIONS_1-CHARACTERISTIC=MBUDAT&BI_COMMAND_1-CHARACTERISTICS_SELECTIONS-CHARACTERISTIC_SELECTIONS_1-SELECTIONS-SELECTION_1-SELECTION_INPUT_STRING=08/22/2006;08/28/2006"
Basically, what I need is to pass a range of dates from Visual Composer to BEx Analyzer on the characteristic "MBUDAT". However, it only brings back the data for those two specific dates, not the range.
I don't think 08/22/2006;08/28/2006 is the correct syntax to pass a range value. Please advise how I should do it.
    Thank you in advance,
    Kevin

    Hi J GOEL,
Thank you for your quick response. I created a date range variable ZPSTDAT, and then I passed the concatenated value to my BEx query (for this example, let's say the From date is 08/22/2006 and the To date is 08/28/2006).
          Here's what I passed in into the url:
    ...QUERY= BMMSEG_C01_SAS_Q0001
    &#38;BI_COMMAND_1-BI_COMMAND_TYPE=SET_SELECTION_STATE
    &#38;BI_COMMAND_1-TARGET_DATA_PROVIDER_REF_LIST-TARGET_DATA_PROVIDER_REF_1=DP_1
    &#38;BI_COMMAND_1-CHARACTERISTICS_SELECTIONS-CHARACTERISTIC_SELECTIONS_1-CHARACTERISTIC= ZPSTDAT
    &#38;BI_COMMAND_1-CHARACTERISTICS_SELECTIONS-CHARACTERISTIC_SELECTIONS_1-SELECTIONS-SELECTION_1-SELECTION_INPUT_STRING= 08/22/2006:08/28/2006
However, it returns all the records, so it seems like it doesn't even accept the range value that I passed into BEx. Would you mind showing me the URL you have that passes in the date range? I just need to see the date range part, not the entire URL.
    Thanks,
    Kevin

  • APD read data from Web Analyzer or BEx Analyzer?

    Dears,
We are on SAP BI 7.0; recently we upgraded to enhancement package 1. Since then, the "suppress zeros (all values = 0)" setting in the query is not working properly in Web Analyzer, while it works correctly in BEx Analyzer.
We are transferring data from this query to a table in the CRM system using an APD. In the CRM system, we are getting the output as in Web Analyzer (it contains zero-value records).
    My questions are -
    1. Do APDs always read values from Web analyzer or we have some setting somewhere? (can we read from BEx Analyzer also?)
    2. What exactly is the background process sequence which is followed when an APD is executed?
    Your help is really appreciated.
    Thanks in advance.

When we execute an APD, an MDX statement is generated, which is executed and brings the data from the data targets.
BEx query conditions are not applied to the MDX statement, so we need to filter those records in the APD process, using an ABAP routine process or any other filter process.
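For illustration, a minimal sketch of such a filter step written as an ABAP routine in the APD. The generated types (YT_SOURCE_FIELDS/YT_TARGET_FIELDS and their line types) and the key figure field AMOUNT are assumptions; take the actual signature and field names from the routine template that your APD generates:
* Sketch only: types and the key figure field name are assumptions taken
* from the routine template generated by the APD routine node.
FORM COMPUTE_DATA_TRANSFORMATION
     USING    IT_SOURCE  TYPE YT_SOURCE_FIELDS
              IR_CONTEXT TYPE REF TO IF_RSAN_RT_ROUTINE_CONTEXT
     CHANGING ET_TARGET  TYPE YT_TARGET_FIELDS.
  DATA: LS_SOURCE TYPE Y_SOURCE_FIELDS,
        LS_TARGET TYPE Y_TARGET_FIELDS.
  LOOP AT IT_SOURCE INTO LS_SOURCE.
    " drop the zero-value records that the query condition did not suppress
    CHECK LS_SOURCE-AMOUNT <> 0.   " AMOUNT is a placeholder key figure name
    MOVE-CORRESPONDING LS_SOURCE TO LS_TARGET.
    APPEND LS_TARGET TO ET_TARGET.
  ENDLOOP.
ENDFORM.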
    Ravi

  • I'm new to the LabView. How do I pass data from VI configured using Serial (CMTS using CLI commands to set Parameters) to VI configured using GPIB (vector signal analyzer) to measure such as RF frequency or power on the instrument? Thanks

    I'm new to the LabView. How do I pass data from VI configured using Serial (CMTS using CLI commands to set Parameters ) to VI configured using GPIB(vector signal analyzer ) to measure such as RF frequency or power on the instrument?
    I just want to set something on the front panel that will execute the Serial parameters first and then pass these settings to vector signal analyzer
    Thanks
    Phong

    You transfer data with wires.
    Frankly, I'm a little confused by your question. I can't think of any reason why you would want to pass serial parameters (i.e. baud rate, parity) to a GPIB instrument. Please explain with further detail and attach the code.

  • How to create a view consisting of data from tables in2 different databases

    Using Oracle 10.2g
I have 2 databases, Gus and Haggis, on the Comqdhb schema.
glink indicates a database link between Haggis and Gus.
In Gus there are tables student, subject, grade, school containing columns like upn...
    STUDENT
    upn
    academicYear
    SUBJECT
    subject
    GRADE
    examlevel
    grade
    SCHOOL
    sn
    In HAGGIS there are tables student,grade,teacher containing columns upn...desc below.
    STUDENT
    upn
    GRADE
    grade
    upn
    academicyear
    level
    Create view in your HAGGIS database which will join all of the exam grades together. You should have one view which will produce the following relation :
    examGrade(upn, subject, examlevel, sn, grade,academicYear)
    so I need to create a view which gets the data from both the tables in both the databases.
create view examGrade(upn, subject, examlevel, sn, grade, academicYear) as select s.upn
But I don't understand how to select a column from two tables in different databases.
I mean, if I write:
    select upn from comqdhb.student@glink,comqdhb.student;
    select upn from comqdhb.student@glink,comqdhb.student
    ERROR at line 1:
    ORA-00918: column ambiguously defined
    help me out,Thank you.

Thank you for the reply; I will follow the code format.
Create views in your HAGGIS schema database which will join all of the exam grades together. You should have one view which will produce the following relation:
examGrade(upn, subject, examlevel, sn, grade, academicYear)
I understand that there won't be duplication when we use conditions.
If I query
select count(upn)
from   comqdhb.student@glink
I get 9000, but after the union
    create view examGrade(upn, subject, examlevel, sn, grade,academicYear)
    as
    select distinct s.upn as upn
    ,                  g.subject as subject
    ,                  g."LEVEL" as examlevel
    ,                  g.grade as grades
    ,                  '9364097'
    ,                  to_number(g.academicyear) as academicyear
    from             comqdhb.student s
    ,                   comqdhb.grade g
    where           s.upn=g.upn
    union
    select            s.upn
    ,                   sb.subject
    ,                   g.elevel
    ,                   g.grade
    ,                   s.acyr
    ,                   sc.sn
    from              comqdhb.subject@glink sb
    ,                   comqdhb.student@glink s
    ,                    comqdhb.gradevalues@glink g
,                    comqdhb.school@glink sc
    ,                    comqdhb.studentingroup@glink sg
    ,                    comqdhb.teachinggroup@glink tg
    where            sb.sid=tg.sid
    and                tg.gid=sg.gid
    and                sg.upn=s.upn
    and                g."LEVEL"=tg.elevel
    and                s.school=sc.id
and                sc.id=tg.id;
Then count(upn) from examGrade gets stuck; actually, sometimes it does return results, around 932002 rows.
    2:
Another problem I am having, which I am trying to solve; I have written up my ideas below but haven't been getting the expected results. Hope you can help. Thank you.
    Information:
    =======
    All children take exams at the age of 16 called a General Certificate of SecondaryEducation (GCSE).
    They have to study and take exams in Mathematics, English and Science, and can take other subjects such as History, French, Art etc. Most students will study between 5 and 10 different subjects before taking their GCSEs.
    For each exam, a student is awarded a grade from A*, A, B,C,D,E,F,G,U,X An A* grade is the best grade achievable and an X is the worst grade.
    In order to analyze how students have performed, each grade is mapped to a numeric value as follows:
    Grade Numerical score
    A* 8
    A 7
    B 6
    C 5
    D 4
    E 3
    F 2
    G 1
    U 0
    X 0
Now, the reason I need this avgGCSE is that I have to create a view containing the avgGCSE of the students; it is used in the next question, where one condition is that avgGCSE is between 6.5 and 7.
To calculate the avgGCSE, the idea is to take the grades of the students, map the grades to their corresponding scores/values,
add them all up and divide by the total number of grades to get the average.
    desc comqdhb.STUDENT@glink;
    STUDENT
    =======
    UPN
    FNAME
    CNAME
    DOB
    GENDER
    PREVIOUSSCHOOL
    XGCSE
    SCHOOL
    ACYR
    STUDENTINGROUP
    =============
    UPN
    GID
    STARTDATE
    ENDDATE
    GRADE
    GRADEVALUES
    ===========
    GRADE
    LEVEL
    VALUE
My opinion is that xgcse in the STUDENT table refers to the avgGCSE that I want to calculate: when I asked my professor what xgcse is, he said that he forgot to take it out of the table and that it is not necessary for creating avggcse.
    select *
    from comqdhb.student@glink
    where xgcse<6.5; Displaying a result
    returns:
    UPN FAMILYNAME COMMONNAME DATEOFBIR GENDER PREVIOUSSCHOOL XGCSE SCHOOL ACYR
    ===========================================================================
    1011 KIMBERLY ABBOT 07-JUL-79 f none 3.93500948 2 2
    select *
    from comqdhb.student@glink
    where xgcse between 6.5 and 7 and upn = 1386; Displaying a result
    returns:
    UPN FAMILYNAME COMMONNAME DATEOFBIR GENDER PREVIOUSSCHOOL XGCSE SCHOOL ACYR
    ===========================================================================
1386 STEPHANIE AANNESSON 15-JAN-79 f none 6.88873 2 2
So if xgcse is the avgGCSE, then upn 1011 has avggcse < 6.5 and 1386 has avggcse > 6.5.
My idea was a backward strategy: if we find out that, say, upn 1386 has xgcse (avggcse) > 6.5, how do we extract the avggcse for that particular upn? We need to map grades from GRADEVALUES to grade in STUDENTINGROUP, and map upn from STUDENTINGROUP to upn in STUDENT, to output the values for the corresponding grades from GRADEVALUES.
    select grade
    from comqdhb.studentingroup@glink
    where upn = 1011;
    Result:
    GRADE
    =====
    D
    F
    B
    E
    C
    E
    E
    B
8 rows selected.
Mapping each grade to the corresponding value and calculating, we get 32/8 = 4, i.e. total(values of the corresponding grades) / number of grades.
But the xgcse for upn 1011 is 3.935 and I am getting 4! Maybe xgcse isn't the average grade? Is my procedure for calculating avgGCSE correct?
    select grade
    from comqdhb.studentingroup@glink
    where upn = 1386;
    Result:
    GRADE
    ======
    A*
    A*
    A*
    A*
    B
    A*
    A*
    A
    B
    B
    B
11 rows selected.
Mapping each grade to the corresponding value and calculating, we get 79/11 = 7.12, i.e. total(values of the corresponding grades) / number of grades.
But the xgcse for upn 1386 is 6.88... and I am getting 7.12!
But there is another problem:
when I say
    select   g.value,g.grade
    from     comqdhb.gradevalues@glink g
    ,        comqdhb.studentingroup@glink sg
    where    g.grade=sg.grade
    and      sg.upn=1011;
    result:
    ======
    VALUE GRADE
    ===========
      100 B
      100 B
       80 C
       60 D
       40 E
       40 E
       40 E
       20 F
        6 B
        6 B
        5 C
    VALUE GRADE
    =============
        4 D
        3 E
        3 E
        3 E
        2 F
    16 rows selected.
    select   distinct g.value,g.grade
    from     comqdhb.gradevalues@glink g
    ,        comqdhb.studentingroup@glink sg
    where    g.grade=sg.grade
    and      sg.upn=1011;
    result:
    ======
    VALUE GRADE
    ============
         2 F
       100 B
         6 B
         3 E
        60 D
         5 C
         4 D
        80 C
        40 E
        20 F
10 rows selected. I am getting only 8 rows for the query
select grade
from comqdhb.studentingroup@glink
where upn = 1386; but here it becomes 10, and it also displays values such as 100 and so on.
    select distinct *
    from   comqdhb.gradevalues@glink;
    GRADEVALUES
    ===========
    LEVEL      GRADE           VALUE
    ================================
    a          A                 120
    a          B                 100
    a          C                  80
    a          D                  60
    a          E                  40
    a          F                  20
    a          U                   0
    a          X                   0
    g          A                   7
    g          A*                  8
    g          B                   6
    LEVEL      GRADE           VALUE
    ================================
    g          C                   5
    g          D                   4
    g          E                   3
    g          F                   2
    g          G                   1
    g          U                   0
    g          X                   0
18 rows selected. I was hoping that I could map the grades, get the values and calculate avggrade as total(values)/count(values), and that would be it, but here there are values like 100...
    select  sum(g.value)/count(g.grade) as avggrade
    from    comqdhb.gradevalues@glink g
    ,         comqdhb.studentingroup@glink sg
    where  g.grade=sg.grade
    and     sg.upn=1386;
    avggrade
    ========
37.4375. The avggrade can't be this big, and when I map each grade I obtained for 1386 (A to 7, B to 6 and so on) I get avggrade 7.12.
    kindly help.
    Edited by: Trooper on Dec 15, 2008 4:49 AM

  • POWER QUERY Get External Data From File From Folder (Excel 2013)

    Hi,
    Beginner's question :
    What could be the use of the query on a folder : we just get a list of files with their path. What can we do with that?
    Thanks

    Hi,
Do you want to combine data from multiple Excel files in the same folder path into one table? If I understand correctly, we can add a custom column to import the data.
After we get the list of files with their paths, the Query Editor window will activate and show a table containing a record for each file in the chosen directory. These provide our function with the needed FilePath and FileName parameters.
Function sample: File name([Folder path], [Field name])
    For more detailed steps, please see the article:
    http://datapigtechnologies.com/blog/index.php/using-power-query-to-combine-data-from-multiple-excel-files-into-one-table/
    Please Note: Since the web site is not hosted by Microsoft, the link may change without notice. Microsoft does not guarantee the accuracy of this information.
    George Zhao
    TechNet Community Support
It's recommended to download and install the Office Configuration Analyzer Tool (OffCAT), which is developed by Microsoft Support teams. Once the tool is installed, you can run it at any time to scan for hundreds of known issues in Office programs.

  • Help with exporting data from pdf form

    I have about 100 pdf forms that I created in adobe forms central and distributed as a pdf form (rather than on the web). I am trying to export the data into a spreadsheet but when I export it, the fields are all jumbled in the csv file, as in they are not in the same order. I need to export the data all together so I'm going to the forms menu and selecting "manage form data" and then selecting "merge data files into spreadsheet". I tried exporting a single file but that gave me something really weird.
    Please help, I have a deadline next week to analyze this data and can't make sense of it once it is exported to a spreadsheet.

    Would you please share your form with me and send me one of your pdf forms and some of the csv files?
    You can share your form by doing the following:
    1. Click on the “Share” icon on the bottom left corner.
    2. Click on “Add Collaborator” on the popup menu.
    3. Enter [email protected] under “People to share with”.
    4. Set subject to "Export data from pdf form"
    5. Click the “Share” button on the bottom right of the dialog.
    Thanks
    Ken

  • Acquiring sales data from SAP ERP

    (While evaluating Lumira, after watching promoclips, reading papers and working with the free trial some questions are still unanswered. I kindly ask you for help… Supporting a clear structure, I decided to open a separate thread for each question.)
How about connecting Lumira to a SQL-based SAP instance to analyze sales data? Does SAP provide "Business Content" similar to SAP BW, or do I have to join all the affected tables manually?
What would the scenario above look like if I connect Lumira to a HANA-based SAP instance?
    Any help would be appreciated, thank you in advance!

    Hi,
    there are a couple of similar threads already, which might give you suggestions:
    Retrieve Data from ERP Infosets
    SAP Lumira: Getting Familiar Freehand SQL Against SAP ERP - then to iPad
Both of those links discuss using 'freehand SQL' drivers in Lumira to connect to ERP InfoSets (a kind of ECC query).
A similar approach, going via BusinessObjects BI 4, would be to have a Universe on top of the InfoSet, with Lumira connecting to BI4 to consume the InfoSet Universe.
In both cases, the prerequisite is the definition of said InfoSet - the creation of which would be handled by content creators within the ECC environment, not likely the Lumira user.
    Regards,
    H

  • Getting short dumps while reading bulky data from excel sheet

    Hi all,
We have converted our non-Unicode based SAP system (R/3 4.7, Windows 2003, SQL 2000) into a Unicode based system recently. We are facing two peculiar problems for the last few days.
Problem 1 - Whenever we want to upload data from an Excel sheet, the system throws a dump after reading about 400 entries, while last week the same system used to read about 8000 entries in a single go.
Problem 2 - Also, whenever we release a transport request from development containing a lot of changes, the request reaches the quality system but without the changes. But if I try to import the changes into the quality system after generating more than one request containing small changes, they get reflected in QAS immediately. Is there any size constraint in transporting a request from DEV -> QAS -> PRD?
Please suggest some ways.
I am pasting the dump (that I am getting while uploading data from Excel).
    ~~~~~~~~~~~~~~~~~~
Runtime errors
MESSAGE_TYPE_X
Occurred on
22.02.2008 at 13:21:02
    The current application triggered a termination with a short dump.
    What happened?
    The current application program detected a situation which really
    should not occur. Therefore, a termination with a short dump was
    triggered on purpose by the key word MESSAGE (type X).
    What can you do?
    Print out the error message (using the "Print" function)
    and make a note of the actions and input that caused the
    error.
    To resolve the problem, contact your SAP system administrator.
    You can use transaction ST22 (ABAP Dump Analysis) to view and administer
    termination messages, especially those beyond their normal deletion
    date.
    Error analysis
    Short text of error message:
    Control Framework : Error processing control
    Technical information about the message:
    Diagnosis
    An error occurred when the system tried to process the commands
    from the Automation Queue on the presentation server.
    There are several possible reasons for this:
    - The installation of the SAP GUI on the presentation server is
    faulty or obsolete.
    - There is an error in the application program
    - There is an error in the SAPGUI or an integrated control
    Procedure
    1. Make sure that you have imported the appropriate Support
    Package, the current kernel, and GUI patch for the release of your
    System
    2. Check whether the error occurs locally on one or a few PCs, or
    generally on all PCs. Note whether the error only occurs for some
    users, for example because of a specific Customizing setting.
    If it only occurs locally, this suggests an installation problem
    with the PC. Check the installation; if necessary, reinstall the
    software. In the dump, search for the SY-MSGLI field, since it may
    point to the cause of the error.
    3. Activate the Automation Trace (in accordance with SAP Note
    158985).
    4.Start the transaction and continue until the screen immediately
    before the dump.
    5. From the System -> Utilities menu, choose Autom. Queue,
    Synchronous Processing.
    The status bar of the GUI displays the text:
    "Automation synchron flush mode on"
    6. If you now proceed with the application, the short dump will
    display the ABAP call that caused the error; the Automation Trace
    will contain the error on the presentation server.
    7. If necessary, load the short dump and trace files on to
    sapservX, so that SAP can analyze them.
    Message classe...... "CNDP"
    Number.............. 006
    Variable 1.......... " "
    Variable 2.......... " "
    Variable 3.......... " "
    Variable 4.......... " "
    Variable 3.......... " "
    Variable 4.......... " "
    How to correct the error
    Probably the only way to eliminate the error is to correct the program.
    You may able to find an interim solution to the problem
    in the SAP note system. If you have access to the note system yourself,
    use the following search criteria:
    "MESSAGE_TYPE_X" C
    "SAPLOLEA" or "LOLEAU02"
    "AC_SYSTEM_FLUSH"
    If you cannot solve the problem yourself, please send the
    following documents to SAP:
    1. A hard copy print describing the problem.
    To obtain this, select the "Print" function on the current screen.
    2. A suitable hardcopy prinout of the system log.
    To obtain this, call the system log with Transaction SM21
    and select the "Print" function to print out the relevant
    part.
    3. If the programs are your own programs or modified SAP programs,
    supply the source code.
    To do this, you can either use the "PRINT" command in the editor or
    print the programs using the report RSINCL00.
    4. Details regarding the conditions under which the error occurred
    or which actions and input led to the error.
    System environment
    SAP Release.............. "620"
    Application server....... "nhbho930"
    Network address.......... "192.168.0.30"
    Operating system......... "Windows NT"
    Release.................. "5.2"
    Hardware type............ "4x Intel 801586"
    Character length......... 16 Bits
    Pointer length........... 32 Bits
    Work process number...... 0
    Short dump setting....... "full"
    Database server.......... "NHBHO930"
    Database type............ "MSSQL"
    Database name............ "DEV"
    Database owner........... "dev"
    Character set............ "C"
    SAP kernel............... "640"
    Created on............... "Aug 17 2007 00:18:58"
    Created in............... "NT 5.0 2195 Service Pack 4 x86 MS VC++ 13.10"
    Database version......... "SQL_Server_8.00 "
    Patch level.............. "196"
    Patch text............... " "
    Supported environment....
    Database................. "MSSQL 7.00.699 or higher, MSSQL 8.00.194"
    SAP database version..... "640"
    Operating system......... "Windows NT 5.0, Windows NT 5.1, Windows NT 5.2,
    Windows NT 6.0"
    User, transaction...
    Client.............. 300
    User................ "NHBABAP"
    Language key........ "E"
    Transaction......... "SE38 "
    Program............. "SAPLOLEA"
    Screen.............. "SAPMSDYP 0010"
    Screen line......... 0
    Information on where terminated
    The termination occurred in the ABAP program "SAPLOLEA" in
    "AC_SYSTEM_FLUSH".
    The main program was "ZBDC_CONTRACT ".
    The termination occurred in line 29 of the source code of the (Include)
    program "LOLEAU02"
    of the source code of program "LOLEAU02" (when calling the editor 290).
    ~~~~~~~~~~~~~~~~~~~~~
    Thank you all.
    Cheers.
Mitra

For the 50 millionth time: Excel spreadsheets are NOT databases, you should NOT try to use them as such, and you should NEVER access them using JDBC.
You should also under NO condition use the JDBC-ODBC bridge driver (and that was actually the 78 millionth time someone has been told that).

  • How to get data from an existing Excel file using report generation toolkit

I'm trying to use the Excel Get Data VI from the Report Generation Toolkit, but I can't figure out how to turn a browsable path into the right type for the VI. I've tried using the New Report VI, but that doesn't work unless you use a template. It won't open an existing Excel file and make it an open report to pull data from.
Essentially, I have a bunch of Excel files that have data in them and I want to use a VI to analyze the data. So I'm trying to pull in all the data directly from the Excel files so I don't have to reprocess them all into text in order to use the more standard spreadsheet VIs. But to convert the Excel files programmatically in LabVIEW, won't I still need to be able to open the Excel file and get the data?
    Solved!
    Go to Solution.

I see.
Got it,
the reason why I can view it in Excel is because it's a .csv file.
I open Microsoft Excel to open the .csv file.
But you can also open a .csv file with Notepad.
So you cannot open .xls with Read Spreadsheet File.
I learn something every day.
    Best regards,
    Krispiekream

  • Multiple reads of the same data from Multiprovider by Query (Bex)

    Hello, guys!
We're having an issue with the performance of a query built on a MultiProvider. During our investigation, we've found out that within one run of a query, it refers to the InfoProvider several times for the same data (see image attached).
Do you have any ideas what could be the reason for multiple reads of the same data from the MultiProvider?

    Hello Nikita,
    By "copy of a query" i meant something like this as shown below :
    *Kindly click on the screenshot for a better view.
    1) See the highlighted portions below in the screenshot . See Query 2 highlighted and name of the BEx query highlighted.
    2) See the highlighted portions . See Query 3 highlighted and name of the BEx query highlighted.
    As you can see from the above screenshots i have used the same BEx query 2 times by the name of Query 2 & Query 3 . Infact i have not attached the complete screenhsot . In that i have used it 6 times.
    I have to analyze this a bit in detail but what i am guessing is that when this WEBi is called the single BEx is also called multiple times. And hence it hits the Info Provider multiple times resulting in a decreased performance.
    But this does not mean that this is wrong approach. There are various areas where you can improve for example :
    1) Either improve your BEx query if possible or use aggregates or something like that .
    2) Use the  Query stripping setting in WEBi so that unused dimensions and measures are not pulled resulting in an improved performance. It's switched on by default.
    Thanks!!
    Regards,
    Ashutosh Singh

  • How do I export the pdf data from multiple forms to an excel spreadsheet?

I have several forms that one of my employees collects data in. I need the data from those forms to go to a spreadsheet for analysis purposes. I am using Acrobat XI.

    No, you can't change the way this built-in command works. You can either process it later on in Excel, or you would need to use a custom-made script and an Action to collect the data in the format you want in Acrobat and then export it.

Maybe you are looking for