Security considerations regarding the ABAP Workflow implementation

Maybe somebody can comment on this:
I am an SAP auditor. My colleagues and I always struggle with the security risks posed by WF-BATCH and the related RFC destination.
The question I am interested in is: why did SAP go for this construction? Why does Workflow Management require the WF-BATCH user?
I do have an assumption here: the user is required because the Workflow Management functionality requires some authorization objects (e.g. S_USER_GRP/02/*) that you wouldn't like to hand out to the users directly. By calling this part of the functionality via the RFC destination, those authorization objects have to be "handed out" only to WF-BATCH.
Well - a nice assumption, but I have no evidence that this was actually the reason. My knowledge of the Workflow Management functionality is limited, and the example I mentioned above (S_USER_GRP) is most probably wrong, and not complete anyway, as it doesn't state a reason why S_USER_GRP would be necessary.
Nevertheless, it would be important to understand the ins and outs here a bit better, in order to arrive at a better understanding of the risk and to trim WF-BATCH authorizations to the unavoidable minimum.
So if anybody can shed some light on this, please do so ...
BTW: I am also asking because, although I can't believe that S_USER_GRP/02/* is necessary, so far I have always found it in real-life WF-BATCH users.
Thanks and regards,
Ralf Nellessen


Similar Messages

  • Integration of ABAP workflow with portal

    Hello Experts,
    Need urgent help on UWL configuration.
    We have integrated our ABAP workflow with the portal through the UWL.
    We have an Ad Hoc link in the ABAP workflow which takes us to some transaction. For some reason we are not able to view that link when we see this workflow through the portal.
    1. Is it possible to show Ad Hoc links from an ABAP workflow in the portal?
    2. If yes, what is the process for the same? We did not find any relevant documents.
    Any inputs/suggestions are most welcome.
    Ashutosh

    Hi Ashutosh,
    Can you please send me the documents describing how you resolved this issue?
    koti

  • What are the major responsibilities of an ABAPer in an implementation?

    Hi gurus,
                 please tell me an ABAPer's full role in an implementation.
    Thanks in advance...

    Hi,
    this may help you:
    BATCH DATA COMMUNICATION
    About Data Transfer In R/3 System
    When a company decides to implement SAP R/3 to manage business-critical data, it usually does not start from a no-data situation. Normally, an SAP R/3 project comes in to replace or complement existing applications.
    In the process of replacing current applications and transferring application data, two situations might occur:
    • The first is when application data to be replaced is transferred at once, and only once.
    • The second situation is to transfer data periodically from external systems to SAP and vice versa.
    • In either case, there is a period of time when information has to be transferred from the existing application to SAP R/3, and often this process is repetitive.
    The SAP system offers the following methods for transferring data into SAP from non-SAP or legacy systems; the first two are collectively called “batch input” or “batch data communication”:
    1. SESSION METHOD
    2. CALL TRANSACTION
    3. DIRECT INPUT
    Advantages offered by BATCH INPUT method:
    1. Can process large data volumes in batch.
    2. Can be planned and submitted in the background.
    3. No manual interaction is required when data is transferred.
    4. Data integrity is maintained because whatever data is transferred to the table goes through a transaction; hence batch input data is subjected to all the checks and validations.
    To implement one of the supported data transfers, you must often write a program that exports the data from your non-SAP system. This program, known as a “data transfer” program, must map the data from the external system into the data structure required by the SAP batch input program.
    The batch input program must build all of the input to execute the SAP transaction.
    Two main steps are required:
    • To build an internal table containing every screen and every field to be filled in during the execution of an SAP transaction.
    • To pass the table to SAP for processing.
    Prerequisite for Data Transfer Program
    Writing a data transfer program involves the following prerequisites:
    Analyzing the data in the local file
    Analyzing the transaction
    Analyzing the transaction involves determining the following:
    • The transaction code, if you do not already know it.
    • Which fields require input, i.e. are mandatory.
    • Which fields you can allow to default to standard values.
    • The names, types, and lengths of the fields that are used by a transaction.
    • Screen number and Name of module pool program behind a particular transaction.
    To analyze a transaction:
    • Start the transaction by menu or by entering the transaction code in the command box.
    (You can determine the transaction name by choosing System – Status.)
    • Step through the transaction, entering the data that will be required for processing your batch input data.
    • On each screen, note the program name and screen (dynpro) number.
    (“Dynpro” is short for “dynamic program”: a screen together with its flow logic.)
    • Display these by choosing System – Status. The relevant fields are Program (screen) and Screen number. If pop-up windows occur during execution, you can get the program name and screen number by pressing F1 on any field or button on the screen.
    The technical info pop-up shows not only the field information but also the program and screen.
    • For each field, check box, and radio button on each screen, press F1 (help) and then choose Technical Info.
    Note the following information:
    - The field name for batch input, which you'll find in its own box.
    - The length and data type of the field. You can display this information by double-clicking on the Data Element field.
    • Find out the function code for each function (button or menu) that you must execute to process the batch input data (or to go to a new screen).
    Place the cursor on the button or menu entry while holding down the left mouse button. Then press F1.
    In the pop-up window that follows, choose Technical info and note the code that is shown in the Function field.
    You can also run any function that is assigned to a function key by way of the function key number. To display the list of available function keys, click on the right mouse button. Note the key number that is assigned to the functions you want to run.
    Once you have the program name, screen number and field names (screen field names), you can start writing the
    DATA TRANSFER program.
    Declaring internal tables
    First, an internal table with a structure similar to the local file.
    Second, an internal table declared like BDCDATA.
    The data from the internal table is not transferred directly to the database table; it has to go through a transaction. You need to pass the data to a particular screen and to a particular screen field. Data is passed to the transaction in a particular format, hence the need for a batch input structure.
    The batch input structure stores the data that is to be entered into SAP system and the actions that are necessary to process the data. The batch input structure is used by all of the batch input methods. You can use the same structure for all types of batch input, regardless of whether you are creating a session in the batch input queue or using CALL TRANSACTION.
    This structure is BDCDATA, which can contain the batch input data for only a single run of a transaction. The typical processing loop in a program is as follows:
    • Create a BDCDATA structure
    • Write the structure out to a session or process it with CALL TRANSACTION USING; and then
    • Create a BDCDATA structure for the next transaction that is to be processed.
    Within a BDCDATA structure, organize the data of screens in a transaction. Each screen that is processed in the course of a transaction must be identified with a BDCDATA record. This record uses the Program, Dynpro, and Dynbegin fields of the structure.
    The screen identifier record is followed by a separate BDCDATA record for each value to be entered into a field. These records use the FNAM and FVAL fields of the BDCDATA structure. Values to be entered in a field can be any of the following:
    • Data that is entered into screen fields.
    • Function codes that are entered into the command field. Such function codes execute functions in a transaction, such as Save or Enter.
    The BDCDATA structure contains the following fields:
    • PROGRAM: Name of module pool program associated with the screen. Set this field only for the first record for the screen.
    • DYNPRO: Screen Number. Set this field only in the first record for the screen.
    • DYNBEGIN: Indicates the first record for the screen. Set this field to X, only for the first record for the screen. (Reset to ‘ ‘ (blank) for all other records.)
    • FNAM: Field name. The FNAM field is not case-sensitive.
    • FVAL: Value for the field named in FNAM. The FVAL field is case-sensitive. Values assigned to this field are padded on the right if they are shorter than 132 characters. Values must be in character format.
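    To make this layout concrete, here is a minimal sketch of how one screen, one field and one function code are typically registered in BDCDATA (the program, screen, field and OK-code names below are placeholders, not taken from a real transaction):
    DATA: BDCTAB LIKE BDCDATA OCCURS 0 WITH HEADER LINE.
    * First record for the screen: identify the module pool and screen number
    CLEAR BDCTAB.
    BDCTAB-PROGRAM  = 'SAPMZDEMO'.        " placeholder module pool
    BDCTAB-DYNPRO   = '0100'.             " placeholder screen number
    BDCTAB-DYNBEGIN = 'X'.
    APPEND BDCTAB.
    * One record per field value to be entered on that screen
    CLEAR BDCTAB.
    BDCTAB-FNAM = 'ZDEMO-FIELD'.          " placeholder screen field name
    BDCTAB-FVAL = 'some value'.
    APPEND BDCTAB.
    * Function code to be executed on the screen (e.g. Save)
    CLEAR BDCTAB.
    BDCTAB-FNAM = 'BDC_OKCODE'.
    BDCTAB-FVAL = '=SAVE'.                " placeholder OK code
    APPEND BDCTAB.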
    Transferring data from local file to internal table
    Data is uploaded into the internal table by the UPLOAD or WS_UPLOAD function.
    Population of BDCDATA
    For each record of the internal table, you need to populate the second internal table, which is declared like the BDCDATA structure.
    All these five initial steps are necessary for any type of BDC interface.
    The data transfer program can use either the SESSION method or CALL TRANSACTION. The initial steps for both methods are the same.
    The first step for both methods is to upload the data into an internal table. From the internal table, the data is transferred to the database table in one of two ways, i.e. the session method or CALL TRANSACTION.
    SESSION METHOD
    About Session method
    In this method you transfer data from the internal table to the database table through sessions.
    An ABAP/4 program reads the external data that is to be entered into the SAP system and stores the data in a session. A session stores the actions that are required to enter your data using normal SAP transactions, i.e. data is transferred to the session, which in turn transfers the data to the database table.
    The session is an intermediate step between the internal table and the database table. Data is stored in the session along with its actions, i.e. the data for the screen fields, which screen it is passed to, the program name behind it, and how the next screen is processed.
    When the program has finished generating the session, you can run the session to execute the SAP transactions in it. You can either explicitly start and monitor a session or have the session run in the background processing system.
    Unless the session is processed, the data is not transferred to the database table.
    BDC_OPEN_GROUP
    You create the session in your program with the BDC_OPEN_GROUP function.
    The parameters of this function are:
    • User: the user name under which the session is processed
    • Group: the name of the session
    • Hold Date: the date before which the session cannot be processed (it stays locked until then)
    • Keep: pass ‘X’ to retain the session after processing, or ‘ ’ (blank) to delete it after processing
    BDC_INSERT
    This function inserts one transaction, together with its BDC data, into the session.
    Parameters to this function are:
    • Tcode: Transaction Name
    • Dynprotab: BDC Data
    BDC_CLOSE_GROUP
    This function closes the BDC Group. No Parameters.
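    Put together, the session method reduces to the following skeleton (a sketch only, with placeholder names; a complete worked example follows later in this post):
    CALL FUNCTION 'BDC_OPEN_GROUP'
      EXPORTING
        CLIENT = SY-MANDT
        GROUP  = 'MYSESSION'              " placeholder session name
        USER   = SY-UNAME
        KEEP   = 'X'.
    LOOP AT ITAB.
      PERFORM GENERATE_DATA.              " fill BDCTAB for one transaction
      CALL FUNCTION 'BDC_INSERT'
        EXPORTING
          TCODE     = 'XXXX'              " placeholder transaction code
        TABLES
          DYNPROTAB = BDCTAB.
      REFRESH BDCTAB.
    ENDLOOP.
    CALL FUNCTION 'BDC_CLOSE_GROUP'.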
    Some additional information for session processing
    When the session is generated using the KEEP option of BDC_OPEN_GROUP, the system always keeps the session in the queue, whether or not it has been processed successfully; in that case you have to delete it manually. When session processing completes successfully and the KEEP option was not set, the session is removed automatically from the session queue. The log for that session is not removed.
    If a batch input session terminates with errors, it appears in the list of INCORRECT sessions and can be processed again. To correct an incorrect session, you can analyze it: the Analysis function allows you to determine which screen and value produced the error. If you find small errors in the data, you can correct them interactively; otherwise you need to modify the batch input program that generated the session, or often even the data file.
    CALL TRANSACTION
    About CALL TRANSACTION
    This technique is similar to the session method, but while batch input is a two-step procedure, CALL TRANSACTION does both steps online, one after the other. In this method, you call a transaction from your program with:
    CALL TRANSACTION <tcode> USING <bdctab>
      MODE <A/N/E>
      UPDATE <S/A>
      MESSAGES INTO <msgtab>.
    Parameter – 1 is the transaction code.
    Parameter – 2 is the name of the BDCTAB table.
    Parameter – 3: here you specify the mode in which the transaction is executed.
    A is all-screens mode: all the screens of the transaction are displayed.
    N is no-screen mode: no screen is displayed when you execute the transaction.
    E is error-screens mode: only those screens on which an error occurs are displayed.
    Parameter – 4: here you specify the update type with which the database is updated.
    S is for synchronous update: the program waits until the database update has finished, so sy-subrc reflects the result of the update itself, once and for all.
    A is for asynchronous update: the program continues as soon as the dialog part of the transaction is finished, and the updating of the affected tables takes place afterwards. Sy-subrc therefore only tells you whether the transaction could be processed; if the system later fails to update the tables, sy-subrc is still 0.
    Parameter – 5: when you update a database table, the operation is either successful, unsuccessful, or successful with some warning. These messages are stored in an internal table that you specify after the MESSAGES addition. This internal table should be declared like BDCMSGCOLL, a structure available in ABAP/4. It contains the following fields:
    1. Tcode: transaction code
    2. Dyname: module pool name
    3. Dynumb: screen (dynpro) number
    4. Msgtyp: message type (A/E/W/I/S)
    5. Msgspra: language key of the message
    6. Msgid: message id
    7. MsgvN: message variables (N = 1 - 4)
    For each entry that is updated in the database table, a message is available in BDCMSGCOLL. As BDCMSGCOLL is a structure, you need to declare an internal table, which (unlike a structure) can contain multiple records.
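    As an illustration, here is a sketch of collecting and rendering these messages; FORMAT_MESSAGE is the standard function module that turns a BDCMSGCOLL entry into readable text (the transaction code is a placeholder):
    DATA: MESSTAB LIKE BDCMSGCOLL OCCURS 0 WITH HEADER LINE,
          MSGTEXT(200) TYPE C.
    CALL TRANSACTION 'XXXX' USING BDCTAB
      MODE 'N'
      UPDATE 'S'
      MESSAGES INTO MESSTAB.
    LOOP AT MESSTAB.
      CALL FUNCTION 'FORMAT_MESSAGE'
        EXPORTING
          ID   = MESSTAB-MSGID
          LANG = SY-LANGU
          NO   = MESSTAB-MSGNR
          V1   = MESSTAB-MSGV1
          V2   = MESSTAB-MSGV2
          V3   = MESSTAB-MSGV3
          V4   = MESSTAB-MSGV4
        IMPORTING
          MSG  = MSGTEXT
        EXCEPTIONS
          NOT_FOUND = 1.
      WRITE: / MESSTAB-MSGTYP, MSGTEXT.
    ENDLOOP.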
    Steps for CALL TRANSACTION method
    1. Internal table for the data (structure similar to your local file)
    2. BDCTAB like BDCDATA
    3. UPLOAD or WS_UPLOAD function to upload the data from local file to itab. (Considering file is local file)
    4. LOOP AT itab.
         Populate the BDCTAB table.
         CALL TRANSACTION <tcode> USING bdctab
           MODE <A/N/E>
           UPDATE <S/A>.
         REFRESH bdctab.
       ENDLOOP.
    (To populate BDCTAB, you need to transfer each and every field.)
    The major differences between the session method and CALL TRANSACTION are as follows:
    SESSION METHOD                                     CALL TRANSACTION
    1. Data is not updated in the database table       1. Data is updated in the database table
       until the session is processed.                    immediately.
    2. No sy-subrc is returned.                        2. Sy-subrc is returned.
    3. An error log is created for error records.      3. Errors need to be handled explicitly.
    4. Updating of the database table is always        4. Updating of the database table can be
       synchronous.                                       synchronous or asynchronous.
    Error Handling in CALL TRANSACTION
    When the session method updates records in the database table, error records are stored in the log file. With CALL TRANSACTION there is no such log file, and error records are lost unless handled. Usually you need to produce a report of all the error records, i.e. records which were not inserted or updated in the database table. This can be done by the following method:
    Steps for the error handling in CALL TRANSACTION
    1. Internal table for the data (structure similar to your local file)
    2. BDCTAB like BDCDATA
    3. Internal table BDCMSG like BDCMSGCOLL
    4. An internal table similar to the 1st internal table
    (The third and fourth steps are for error handling.)
    5. UPLOAD or WS_UPLOAD function to upload the data from the local file to itab. (Considering file is local file)
    6. LOOP AT itab.
         Populate the BDCTAB table.
         CALL TRANSACTION <tcode> USING bdctab
           MODE <A/N/E>
           UPDATE <S/A>
           MESSAGES INTO bdcmsg.
         PERFORM check.
         REFRESH bdctab.
       ENDLOOP.
    7. FORM check.
         IF sy-subrc <> 0.
           " CALL TRANSACTION returns a non-zero sy-subrc if updating was not successful
           CALL FUNCTION 'FORMAT_MESSAGE' ...
           " called to store the message issued by the system and display it along with the record
           APPEND itab2.
           Display the record and message.
         ENDIF.
       ENDFORM.
    DIRECT INPUT
    About Direct Input
    In contrast to batch input, this technique does not create sessions, but stores the data directly. It does not simulate the online transaction. To enter the data into the corresponding database tables directly, the system calls a number of function modules that execute any necessary checks. In case of errors, the direct input technique provides a restart mechanism. However, to be able to activate the restart mechanism, direct input programs must be executed in the background only. Direct input checks the data thoroughly and then updates the database directly.
    You can start a Direct Input program in two ways;
    Start the program directly
    This is the quickest way to see if the program works with your flat file. This option is possible with all direct input programs. If the program ends abnormally, you will not have any logs telling you what has or has not been posted. To minimize the chance of this happening, always use the check file option for the first run with your flat file. This allows you to detect format errors before transfer.
    Starting the program via the DI administration transaction
    This transaction restarts the processing if the data transfer program aborts. Since DI documents are immediately posted into the SAP database, the restart option prevents the duplicate document posting that would otherwise occur during a plain program restart (i.e. without adjusting your flat file).
    Direct input is usually done for standard data like material master, FI accounting document, SD sales order and Classification for which SAP has provided standard programs.
    The first time you work with the Direct Input administration program, you will need to do some preparation before you can transfer data:
    - Create variant
    - Define job
    - Start job
    - Restart job
    Common batch input errors
    - The batch input BDCDATA structure tries to assign values to fields which do not exist on the current transaction screen.
    - The screens in the BDCDATA structure are not in the right sequence, or an intermediate screen is missing.
    - On exceptional occasions, the logic flow of a batch input session does not exactly match that of manual online processing. Such discrepancies can be discovered by testing the sessions online.
    - The BDCDATA structure contains field values which are longer than the actual field definitions.
    - Authorization problems.
    RECORDING A BATCH INPUT
    A batch input recording allows you to record an R/3 transaction and generate a program that contains all screen and field information in the required BDCDATA format.
    You can either use transaction SHDB for the recording, or
    System → Services → Batch input → Edit
    and from there click Recording.
    Enter a name for the recording.
    (Dates are optional.)
    Click Recording.
    Enter the transaction code.
    Press Enter.
    Click the Save button.
    You finally come to a screen where you have all the information for each screen, including BDC_OKCODE.
    • Click Get Transaction.
    • Return to the batch input overview.
    • Position the cursor on the just-recorded entry and click Generate Program.
    • Enter a program name.
    • Press Enter.
    The program is generated for the particular transaction.
    BACKGROUND PROCESSING
    Need for Background processing
    When a large volume of data is involved, usually all batch input is done in the background.
    The R/3 system includes functions that allow users to work non-interactively or offline. The background processing system handles these functions.
    Non-interactively means that instead of executing the ABAP/4 programs and waiting for an answer, users can submit those programs for execution at a more convenient, planned time.
    There are several reasons to submit programs for background execution.
    • Online execution time is limited (typically to 300 seconds). The user gets a TIMEOUT error and an aborted transaction if execution exceeds this limit. To avoid this type of error, you can submit jobs for background processing.
    • You can use the system while your program is executing.
    This does not mean that interactive or online work is not useful. Both types of processing have their own purposes. Online work is the most common: entering business data, displaying information, printing small reports, managing the system and so on. Background jobs are mainly used for the following tasks: processing large amounts of data, executing periodic jobs without human intervention, and running programs at a more convenient, planned time outside normal working hours, i.e. nights or weekends.
    The transaction for background processing is SM36.
    Or
    Tools → Administration → Jobs → Define jobs
    Or
    System → Services → Jobs
    Components of the background jobs
    A job in background processing is a series of steps that can be scheduled; each step is a program to be run in the background.
    • Job name. The name assigned to the job; it identifies the job. You can specify up to 32 characters for the name.
    • Job class. Indicates the type of background processing priority assigned to the job.
    The job class determines the priority of a job. The background system provides three job classes: A, B and C, which correspond to job priority.
    • Job steps. Parameters to be passed for this screen are as follows:
    Program name.
    Variant if it is report program
    Start criteria for the job: the options available are as follows:
    Immediate - allows you to start a job immediately.
    Date/Time - allows you to start a job at a specific date and time.
    After job - you can start a job after a particular job.
    After event - allows you to start a job after a particular event.
    At operation mode - allows you to start a job when the system switches to a particular operation mode.
    Defining Background jobs
    It is a two-step process: first you define the job, then you release it.
    When users define a job and save it, they are actually scheduling the report, i.e. specifying the job components, the steps, and the start time.
    When users schedule program for background processing, they are instructing the system to execute an ABAP/4 report or an external program in the background. Scheduled jobs are not executed until they are released. When jobs are released, they are sent for execution to the background processing system at the specified start time. Both scheduling and releasing of jobs require authorizations.
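    Programmatically, the same define-and-release cycle can be sketched with the standard job API (the job name and report name below are placeholders):
    DATA: JOBNAME  LIKE TBTCJOB-JOBNAME VALUE 'ZDEMO_JOB',
          JOBCOUNT LIKE TBTCJOB-JOBCOUNT.
    * Define the job
    CALL FUNCTION 'JOB_OPEN'
      EXPORTING
        JOBNAME  = JOBNAME
      IMPORTING
        JOBCOUNT = JOBCOUNT.
    * Add a step: run a report as part of the job (report name is a placeholder)
    SUBMIT ZDEMO_REPORT VIA JOB JOBNAME NUMBER JOBCOUNT AND RETURN.
    * Release the job for immediate execution
    CALL FUNCTION 'JOB_CLOSE'
      EXPORTING
        JOBNAME   = JOBNAME
        JOBCOUNT  = JOBCOUNT
        STRTIMMED = 'X'.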
    HANDLING OF POP UP SCREEN IN BDC
    Many times a pop-up screen appears in a transaction; for this screen you do not pass any record, just an indication telling the system to proceed further (for example, a save-confirmation pop-up).
    To handle such a screen, the system provides a variable called BDC_CURSOR. You pass this variable to BDCDATA and process the screen.
    Usually such screens appear in many transactions. In this case you are just passing the information that Yes should be clicked, i.e. you transfer to BDCDATA the field name of the Yes option, which is usually SPOT_OPTION. Instead of BDC_OKCODE, you pass BDC_CURSOR.
    BDC_CURSOR is also used to place cursor on particular field.
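    A sketch of such a pop-up record, assuming the Yes option's screen field is called SPOT_OPTION as described above (the program and screen number are assumptions; verify the real names with F1 → Technical Info on your pop-up):
    * Identify the pop-up screen
    CLEAR BDCTAB.
    BDCTAB-PROGRAM  = 'SAPLSPO1'.         " assumption: pop-up module pool
    BDCTAB-DYNPRO   = '0100'.             " assumption: pop-up screen number
    BDCTAB-DYNBEGIN = 'X'.
    APPEND BDCTAB.
    * Place the cursor on the Yes option instead of sending an OK code
    CLEAR BDCTAB.
    BDCTAB-FNAM = 'BDC_CURSOR'.
    BDCTAB-FVAL = 'SPOT_OPTION'.          " field name as given in the text above
    APPEND BDCTAB.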
    AN EXAMPLE WITH SESSION METHOD
    The following program demonstrates how data is passed from a flat file to an SAP transaction, and on to the database table, using the SESSION method.
    The transaction is TFBA (to change a customer).
    A simple transaction where you enter a customer number on the first screen, and on the next screen the data for that customer is displayed. The fields we are changing here are name and city. When you click Save, the changed record is saved.
    The prerequisites to write this BDC interface, as indicated earlier, are:
    1. To find screen number
    2. To find screen field names, type of the field and length of the field.
    3. To find BDC_OKCODE for each screen
    4. Create flat file.
    The flat file can be created on your hard disk as follows:
    1 Vinod   Hyderabad
    2 Kavitha Secunderabad
    3 Kishore Hyderabad
    (Where 1st character field is Customer number, 2nd field is Customer name and 3rd field is City.)
    To transfer this data to the database table SCUSTOM, the following interface can be used.
    REPORT DEMO1.
    * Following internal table is to upload the flat file.
    DATA: BEGIN OF ITAB OCCURS 0,
            ID(10),
            NAME(25),
            CITY(25),
          END OF ITAB.
    * Following internal table (like BDCDATA) is to pass data from the internal table to the session.
    DATA: BDCTAB LIKE BDCDATA OCCURS 0 WITH HEADER LINE.
    * Variables
    DATA: DATE1 LIKE SY-DATUM.
    DATE1 = SY-DATUM - 1.                 " This is for Hold Date
    * Upload the flat file into the internal table.
    CALL FUNCTION 'UPLOAD'
      EXPORTING
        FILENAME = 'C:\FF.TXT'
        FILETYPE = 'ASC'
      TABLES
        DATA_TAB = ITAB
      EXCEPTIONS
        CONVERSION_ERROR    = 1
        INVALID_TABLE_WIDTH = 2
        INVALID_TYPE        = 3
        NO_BATCH            = 4
        UNKNOWN_ERROR       = 5
        OTHERS              = 6.
    IF SY-SUBRC = 0.
    * Calling function to create a session
    CALL FUNCTION 'BDC_OPEN_GROUP'
      EXPORTING
        CLIENT   = SY-MANDT
        GROUP    = 'POTHURI'
        HOLDDATE = DATE1
        KEEP     = 'X'
        USER     = SY-UNAME
    EXCEPTIONS
    CLIENT_INVALID = 1
    DESTINATION_INVALID = 2
    GROUP_INVALID = 3
    GROUP_IS_LOCKED = 4
    HOLDDATE_INVALID = 5
    INTERNAL_ERROR = 6
    QUEUE_ERROR = 7
    RUNNING = 8
    SYSTEM_LOCK_ERROR = 9
    USER_INVALID = 10
    OTHERS = 11.
    IF SY-SUBRC = 0.
    *-- MAIN Logic--
    LOOP AT ITAB.
    PERFORM GENERATE_DATA.              " Populating BDCDATA table
    CALL FUNCTION 'BDC_INSERT'
      EXPORTING
        TCODE = 'TFBA'
      TABLES
        DYNPROTAB = BDCTAB
    EXCEPTIONS
    INTERNAL_ERROR = 1
    NOT_OPEN = 2
    QUEUE_ERROR = 3
    TCODE_INVALID = 4
    PRINTING_INVALID = 5
    POSTING_INVALID = 6
    OTHERS = 7.
    REFRESH BDCTAB.
    ENDLOOP.
    * Calling function to close the session
    CALL FUNCTION 'BDC_CLOSE_GROUP'
    EXCEPTIONS
    NOT_OPEN = 1
    QUEUE_ERROR = 2
    OTHERS = 3.
    ENDIF.
    ENDIF.
    *& Form GENERATE_DATA - create BDC data
    FORM GENERATE_DATA.
    * Passing information for the 1st screen to BDCDATA
    BDCTAB-PROGRAM  = 'SAPMTFBA'.
    BDCTAB-DYNPRO   = 100.
    BDCTAB-DYNBEGIN = 'X'.
    APPEND BDCTAB. CLEAR BDCTAB.
    * Passing field information to BDCDATA
    BDCTAB-FNAM = 'SCUSTOM-ID'.
    BDCTAB-FVAL = ITAB-ID.
    APPEND BDCTAB. CLEAR BDCTAB.
    * Passing BDC_OKCODE to BDCDATA
    BDCTAB-FNAM = 'BDC_OKCODE'.
    BDCTAB-FVAL = '/5'.
    APPEND BDCTAB. CLEAR BDCTAB.
    * Passing screen information for the next screen to BDCDATA
    BDCTAB-PROGRAM  = 'SAPMTFBA'.
    BDCTAB-DYNPRO   = 200.
    BDCTAB-DYNBEGIN = 'X'.
    APPEND BDCTAB. CLEAR BDCTAB.
    * Passing field information to BDCDATA
    BDCTAB-FNAM = 'SCUSTOM-NAME'.
    BDCTAB-FVAL = ITAB-NAME.
    APPEND BDCTAB. CLEAR BDCTAB.
    * Passing field information to BDCDATA
    BDCTAB-FNAM = 'SCUSTOM-CITY'.
    BDCTAB-FVAL = ITAB-CITY.
    APPEND BDCTAB. CLEAR BDCTAB.
    * Passing BDC_OKCODE to BDCDATA
    BDCTAB-FNAM = 'BDC_OKCODE'.
    BDCTAB-FVAL = 'SAVE'.
    APPEND BDCTAB. CLEAR BDCTAB.
    ENDFORM. “GENERATE_DATA
    AN EXAMPLE WITH CALL TRANSACTION
    Same steps to be repeated for CALL TRANSACTION
    The only difference between the two types of interface is that in the session method you create a session and store information about the screens and data in it; when the session is processed, the data is transferred to the database. With CALL TRANSACTION, the data is transferred directly to the database table.
    REPORT DEMO1.
    * Follow the above code up to the MAIN logic; the subroutine GENERATE_DATA should also be copied.
    LOOP AT ITAB.
      PERFORM GENERATE_DATA.              " Populating BDCDATA table
      CALL TRANSACTION 'TFBA' USING BDCTAB MODE 'A' UPDATE 'S'.
      REFRESH BDCTAB.
    ENDLOOP.
    with regards,
    vasavi.
    reward if helpful.

  • How to call the abap program in workflow

    Hi Experts,
    I need to call an ABAP program in a workflow.
    Can anyone tell me how to call an ABAP program in a workflow?
    thanks &regards
    ramesh

    Dear Ramesh,
    You can use the REPORT business object,
    method EXECUTE_2.
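    If the standard REPORT object does not fit your needs, a background method in a custom business object can also submit the report itself. A minimal sketch using the BOR macros from include <cntn01> (the method and container element names are hypothetical):
    BEGIN_METHOD ZEXECUTEREPORT CHANGING CONTAINER.
      DATA: REPORTNAME TYPE SY-REPID.
    * Read the report name from the task container (element name is an assumption)
      SWC_GET_ELEMENT CONTAINER 'ReportName' REPORTNAME.
    * Run the report and return to the workflow method
      SUBMIT (REPORTNAME) AND RETURN.
    END_METHOD.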
    Regards,
    Sagar

  • Three part blog about Reducing the Cost to Implement a Security Plan

    Part 3 of a great blog by Kenneth of AlienVault Support, who has "heard it all" about the problems SMBs have in implementing a security plan on small budgets. Kenneth offers lots of practical and helpful advice for IT and security practitioners.
    https://www.alienvault.com/blogs/security-essentials/third-step-in-reducing-the-cost-to-implement-a-...


  • SAP Security handover from the Onshore Implementation team Documents

    Dear All,
    We are an implementation & support team, and we are getting the SAP Security handover from the onshore implementation team; in future we are to continue the implementation.
    Could you please let me know what other documents we require for handling the complete security landscape for our scenario:
    CRM, BI, BS, SOLMAN, EP and PI
    Please suggest any other documents besides those below, or any other specific details with respect to each module:
    • Enterprise-Wide Role Matrix
    • Role Implementation Framework Prototype
    • User Authorization and Strategy Management Procedures
    • User Role and Authorization Concept Technical Design
    • SAP Security Organization Hierarchy Requirements
    • Transaction to Role Mapping
    • Role to Position Mapping
    • Available authorization policy documents
    • Role matrix with segregation of duties
    Many Thanks

    What do you have defined for your support?
    Presumably you have quoted a price per call but what do you cover and how do you calculate the charge to your client?
    Please let me know so that I can undercut your quote.
    Damn - forgot to ask who your client was and the contact name.
    Cheers
    David
    Edited by: David Berry on Feb 11, 2011 12:29 AM
    Edited by: David Berry on Feb 11, 2011 12:30 AM

  • Best Practice for Securing Web Services in the BPEL Workflow

    What is the best practice for securing web services which are part of a larger service (a business process) and are defined through BPEL?
    They are all deployed on the same oracle application server.
    Defining an agent for each?
    A gateway for all?
    The BPEL security extension?
    The top-level service that is defined as the business process is itself secured through OWSM and usernames and passwords, but what is the best practice for establishing security for each low-level service?
    Regards
    Farbod

    It doesn't matter whether the service is invoked as part of your larger process or not: if it performs any business-critical operation, it should be secured.
    The idea of SOA / designing services is to have the services available so that they can be orchestrated as part of any other business process.
    Today you may have secured your parent services, and tomorrow you could come up with a new service which uses one of the existing lower-level services.
    If all the services are on one application server, you can make the configuration/development environment a lot easier by securing them using the Gateway.
    A typical problem with any gateway architecture is that the service is available without any security enforcement when accessed directly.
    You can enforce rules at your network layer to allow access to the application server only from the Gateway.
    When you have the liberty to use OWSM or any other WS-Security product, I would stay away from any extensions. Two things to consider:
    - The next BPEL developer in your project may not be aware of the security extensions.
    - Centralizing security enforcement keeps your development and security operations loosely coupled and addresses scalability.
    Thanks
    Ram

  • Database View is not appearing in the ABAP Coding

    Dear Friends,
    I am very new to ABAP. I worked for the last 8 years in Oracle PL/SQL. Recently, my company implemented SAP; the database is Oracle 10g. I would like to use my Oracle experience in the project, but everywhere I face some problem.
    I have created one database view with native SQL. Here is the code:
    Create OR REPLACE view ZVMVSL_ALLTPORT
    AS
    SELECT T1.MVESSEL_NO, T1.MVESSEL_NAME, T1.SHIPPINGLINE, T2.MVOYAGENUMBER,
    T2.TRANSHIPMENTA AS TPORT, T2.ETDTSHIPMENTA AS ETD_TPORT,
    T3.MVESSEL_SRL_NO,
    T3.PORTOFDISCHARGE POD, T3.ETAPORT
    FROM ZSD_MOTHER_MST T1, ZSD_MOTHER_DTL T2, ZSD_MOTHER_VIA T3
    WHERE T1.MANDT = T2.MANDT
    AND T1.MVESSEL_NO = T2.MVESSEL_NO
    AND T1.MVESSEL_NO = T3.MVESSEL_NO
    AND T2.MVOYAGENUMBER = T3.MVOYAGENUMBER
    AND T2.TRANSHIPMENTA <>' '
    UNION
    SELECT T1.MVESSEL_NO, T1.MVESSEL_NAME, T1.SHIPPINGLINE, T2.MVOYAGENUMBER,
    T2.TRANSHIPMENTB AS TPORT, T2.ETDTSHIPMENTB AS ETD_TPORT,
    T3.MVESSEL_SRL_NO,
    T3.PORTOFDISCHARGE POD, T3.ETAPORT
    FROM ZSD_MOTHER_MST T1, ZSD_MOTHER_DTL T2, ZSD_MOTHER_VIA T3
    WHERE T1.MANDT = T2.MANDT
    AND T1.MVESSEL_NO = T2.MVESSEL_NO
    AND T1.MVESSEL_NO = T3.MVESSEL_NO
    AND T2.MVOYAGENUMBER = T3.MVOYAGENUMBER
    AND T2.TRANSHIPMENTB <>' '
    I cannot use this view from ABAP code; ABAP/4 only recognizes views created in transaction SE11. Now tell me how I can solve this issue, because this query uses a UNION, which is not directly available in SE11.
    Please help me handle this situation, because if I can use this sort of native SQL view, I can solve a lot of things in the development.
    rgds
    Farhad

    >
    Farhad Islam wrote:
    > I cannot use this view from ABAP code; ABAP/4 only recognizes views created in transaction SE11. Now tell me how I can solve this issue, because this query uses a UNION, which is not directly available in SE11.
    >
    >
    > Please help me handle this situation, because if I can use this sort of native SQL view, I can solve a lot of things in the development.
    >
    > rgds
    > Farhad
    You can use real SQL to query database tables not created in SE11 - look at the SAP help on EXEC SQL - but this is really not a good idea. For example, it is database-specific, which removes one of the 'advantages' of SAP, i.e. that it is portable between different database platforms. Using SAP's Open SQL should mean that if your company moves to e.g. SQL Server, no re-coding should be necessary, whereas anything you've done in Oracle SQL will have to be re-coded. There are also other disadvantages of using real SQL, e.g. you don't get any help from the syntax checker, and I think you would bypass the whole SAP security layer - SAP security is defined at application level and not via database grants.
    I would bet a reasonable amount of money that there is nothing you can do in real SQL that you can't replicate in SAP's Open SQL. OK, the Open SQL solution might be a bit more convoluted and possibly less elegant, but you can make it do the same thing. If you want to stick with Oracle, then I guess you should look for a job that uses Oracle rather than SAP.
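    For illustration, here is a sketch of how the UNION above could be replicated in Open SQL: run two SELECTs into the same internal table, the second with APPENDING, then remove duplicates. The field list is abridged to three columns (extend it to match the full view), and note that Open SQL handles the MANDT join automatically:
    * Result structure (abridged)
    DATA: BEGIN OF LT_RESULT OCCURS 0,
            MVESSEL_NO    LIKE ZSD_MOTHER_MST-MVESSEL_NO,
            MVOYAGENUMBER LIKE ZSD_MOTHER_DTL-MVOYAGENUMBER,
            TPORT         LIKE ZSD_MOTHER_DTL-TRANSHIPMENTA,
          END OF LT_RESULT.
    * First branch of the UNION: transhipment port A
    SELECT T1~MVESSEL_NO T2~MVOYAGENUMBER T2~TRANSHIPMENTA AS TPORT
      INTO CORRESPONDING FIELDS OF TABLE LT_RESULT
      FROM ZSD_MOTHER_MST AS T1
      INNER JOIN ZSD_MOTHER_DTL AS T2 ON T1~MVESSEL_NO = T2~MVESSEL_NO
      WHERE T2~TRANSHIPMENTA <> SPACE.
    * Second branch: transhipment port B; APPENDING emulates UNION ALL
    SELECT T1~MVESSEL_NO T2~MVOYAGENUMBER T2~TRANSHIPMENTB AS TPORT
      APPENDING CORRESPONDING FIELDS OF TABLE LT_RESULT
      FROM ZSD_MOTHER_MST AS T1
      INNER JOIN ZSD_MOTHER_DTL AS T2 ON T1~MVESSEL_NO = T2~MVESSEL_NO
      WHERE T2~TRANSHIPMENTB <> SPACE.
    * UNION (as opposed to UNION ALL) also removes duplicates
    SORT LT_RESULT.
    DELETE ADJACENT DUPLICATES FROM LT_RESULT COMPARING ALL FIELDS.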

  • Does CAF have the possibility to implement wait steps?

    I'm wondering if CAF has functionality to allocate a wait time to an action, in the same way that it is possible to put a wait step in ABAP workflow tasks.
    I can see one tab on the action definition in design-time GP that has a fixed date and allows a callable object to be invoked, but the kind of logic I am looking for is to invoke a separate block once a certain time limit has been reached.
    Is this possible to do in CAF?
    - Tony.

    Hi Tony,
    I cannot completely understand your requirement - invoke a block when a certain amount of time has elapsed?
    Well, you cannot branch to a particular block, but to implement a wait feature in a callable object, one idea is to create a background callable object, where you are free to code. You could implement a wait there through Java code, taking the wait duration as an input through the callable object interface. I have not tried it, but it should work.
    -Sijesh

  • Missing parameter in abap proxy implementing class

    We have an XI 3.0 (DR0) and an XI 7.0 (DR8) system. We have exported a
    message interface from system DR0 and imported it into DR8, then we
    generated the ABAP proxy. The problem is the difference between the ABAP
    proxy generated in XI 3.0 and the one generated in XI 7.0. The most
    substantial difference is in the parameters of the implementing class where, as
    you can see from the attached document, the CONTROLLER parameter is missing. Please, we need support for this issue.

    Maybe this field is not included in the version you are using (XI 7.0), but this should not be a show-stopper. You can go to the class editor and check whether the required structures and fields are present with respect to the inbound and outbound interfaces.

  • ABAP & Workflow for CRM

    Hello  Buddies,
        I have seen many threads somewhat close to my question, but could you please point me to some good links for learning ABAP & Workflow for CRM? Also, please advise me: being an ABAP-Workflow consultant, which area of CRM is best suited for me?
    Thanks in advance. Your guidance is appreciated.
    Warm Regards,
    N. Singh

    Hi,
    ABAP and Workflow for CRM are nothing different or specific to CRM: they are the same, and at least all the basic concepts are present. There might be some implementation and conceptual differences, but those are more functionality-driven.
    Start by learning SAP CRM functionality, which will help you understand SAP CRM much better. Then you can get a feel for the CRM features and for where and how they are used to drive the ABAP and Workflow functionality.
    Hope this helps. You can scan the forum for good links and resources.
    Best Regards,
    Samantak

  • BPM or Abap Workflow

    Hi Experts
    I have to implement a Sales Order process for my client.
    Overview of the process which needs to be implemented:
    A company sells xyz products. A customer places an offer for product abc at some price. The request goes to the sales department's manager, who checks the customer's demands and accepts/rejects/edits the request according to rules such as the market price. If he accepts the offer, it becomes a contract, for which a sales order is created and invoicing is done.
    The company can also edit the request, update the price and send it back to the customer. If the customer is happy, it again becomes a contract for which a sales order can be created.
    I need to implement the scenario using webdynpro for abap or java.
    My Question:
    I have implemented such scenarios using the UWL and SAP ABAP workflows. I am new to the CE 7.3 version and the BPM concept.
    Experts, please guide me: which would be the best way to implement such a scenario? Should I stick to the normal ABAP workflow method, or can I do it through BPM? What would be the challenges if I go with the BPM approach?
    Thanks and Regards
    Sonal

    Hi Sonal,
      You didn't specify all the systems involved in this process - i.e. is it one central ERP, or multiple SAP and non-SAP systems? The primary use case for SAP NetWeaver BPM is to orchestrate processes across multiple systems, presenting the user with one central location to conduct their work (UWL) and a consistent user interface (Web Dynpro).
      If your process only uses one central ERP system for all backend tasks, I would (not knowing more details) probably favor a WDA and Business Workflow based solution. Going outside of the ERP system to implement it wouldn't gain you much, if anything, and could make your landscape more complicated and create more work. On the other hand, if multiple backend systems are involved, this would probably be a good candidate for an SAP NetWeaver BPM based process. Rough generalities, for sure, but you asked :-).
    O.

  • Problem in triggering the correct workflow on  AdobeForm submission for PCR

    Hi All,
    I am facing a problem involving the triggering of workflows through Adobe form submission. I have created a scenario for "Employee separation" in the QISRSCENARIO transaction and assigned it to an approval workflow. I have also activated and assigned the BUS7051-CREATED event in the workflow basic data. My workflow triggers perfectly when I submit the Adobe form. Everything is perfect up to here.
    Now I have to create another scenario, "Request for Transfer", for which I had to create a separate workflow. My problem is that since both these workflows are assigned to the same event, whenever I submit the "Employee separation" form, both workflows get triggered. Is there a setting where I can configure the corresponding workflow to be triggered for the respective scenario alone? How do we handle this situation?

    Hi Jocelyn/Raja,
    I am trying to use SWB_COND for differentiating between the different workflows. I have created a virtual attribute W_SCENARIO_KEY for this. I tried populating this scenario key using the following statement:
    CALL FUNCTION 'ISR_SPECIAL_DATA_GET'
      EXPORTING
        NOTIFICATION_NO = OBJECT-KEY-NUMBER
      IMPORTING
        SCENARIO        = W_SCENARIO_KEY.
    SWC_SET_ELEMENT CONTAINER 'W_SCENARIO_KEY' W_SCENARIO_KEY.
    When I try to include W_SCENARIO_KEY in the start condition, the workflow sends an express message and fails to trigger. In ST22 I can see that the exception "INVALID_NOTIF_NUMBER" is raised.
    But if I don't set it as a start condition, all the workflows activated for BUS7051-CREATED are triggered, and in the WF logs I can see the correct value of W_SCENARIO_KEY for the respective notification number.
    I am not sure why this happens only when I set it as a start condition.
    I instead used a select statement as shown below:
    select * from viqmel into table itab_viqmel
      where qmnum = object-key-number.
    loop at itab_viqmel where qmnum = object-key-number.
      w_scenario_key = itab_viqmel-auswirk.
    endloop.
    After inserting this statement it works fine without any issues. Any idea why ISR_SPECIAL_DATA_GET can't be used in the virtual attribute implementation?
    The following is the dump I get if I use ISR_SPECIAL_DATA_GET:
    Information on where terminated
        The termination occurred in the ABAP program "SAPLQISR9" in "ISR_SPECIAL_DATA_GET".
        The main program was "RSWDSTRT".
        The termination occurred in line 39 of the source code of the (include) program "LQISR9U01".
    Source Code Extract
    Line  Source Code
        9 *"  EXCEPTIONS
       10 *"      NO_INTERNAL_SERVICE_REQUEST
       11 *"      INVALID_NOTIF_NUMBER
       12 *"      INT_SERVICE_REQUEST_NOT_FOUND
       13 *"----------------------------------------------------------------
       14
       15 * local data
       16   DATA: lt_dummy TYPE qisrsgeneral_param.
       17
       18   DATA: lr_isr_document TYPE REF TO cl_isr_xml_document.
       19
       20   DATA: ls_notif TYPE qmel.
       21
       22 * MAIN
       23 * try buffer first
       24   CALL FUNCTION 'ISR_SPECIAL_DATA_BUFFER_GET'
       25     IMPORTING
       26       ET_SPECIAL_DATA = special_data
       27       ED_SCENARIO     = scenario
       28     EXCEPTIONS
       29       BUFFER_EMPTY    = 1.
       30
       31   IF sy-subrc EQ 0.
       32     EXIT.
       33   ENDIF.
       34
       35 * check notification number
       36   SELECT SINGLE * FROM qmel INTO ls_notif
       37                             WHERE qmnum = notification_no.
       38   IF sy-subrc NE 0.
    >>>>>     RAISE invalid_notif_number.
       40   ELSEIF ls_notif-auswirk IS INITIAL.
       41     RAISE no_internal_service_request.
       42   ENDIF.
       43
       44 * set scenario
       45   scenario = ls_notif-auswirk.
       46
       47 * read ISR XML document
       48   CALL METHOD cl_isr_xml_document=>read_for_display
       49     EXPORTING  id_notif_no         = notification_no
       50     IMPORTING  er_isr_xml_document = lr_isr_document
       51     EXCEPTIONS bds_error = 1.
       52
       53   IF sy-subrc NE 0.
       54     RAISE int_service_request_not_found.
       55   ENDIF.
       56
       57 * read data from XML document
       58   CALL METHOD lr_isr_document->get_data_from_xml

  • Workflow implementation(help)

    hi all,
    I am new to workflows but have learned a bit about them from help.sap.com, SDN, etc.
    Now we are implementing workflows for our client, and he needs some standard workflows to start with.
    I have performed automatic workflow customizing (SWU3), but now when I am testing standard workflows in the development server I am getting different kinds of errors, for example:
    One of the tasks, say "deletebillingblock", gives the error "The calling of the object method for the work item ended with a return value for which no handling is modeled in the workflow." in the workflow log.
    Another says "The workflow runtime system has called an application method in a tRFC or background context. A message was processed in this application method. This causes the execution of the workitem to be cancelled in this context."
    I don't know why this is happening.
    Is there any more customization needed other than SWU3 (Basis/ABAP work)?
    rgds
    Edited by: SAP SD GUY on Jan 7, 2009 10:08 AM

    Hi,
    "The calling of the object method for the work item ended with a return value for which no handling is modeled in the workflow." in the workflow log
    This kind of error occurs for the following reasons:
    1. There is a problem in the binding of the step in which the error occurs: you have bound an element for which no target is defined. The source is there but no target, so the workflow cannot receive the data you want to send through the binding.
    2. It could also be a binding mismatch: the container elements do not have the same data type.
    "The workflow runtime system has called an application method in a tRFC or background context. A message was processed in this application method. This causes the execution of the workitem to be cancelled in this context."
    This error can be caused by RFC settings that are not properly configured in SWU3. All the ticks should be green. Check this for the server on which you are trying to trigger the WF.
    For more on WF, kindly check these:
    https://www.sdn.sap.com/irj/scn/wiki?path=/display/abap/workflow%252bscenario
    /people/sapna.modi/blog/2007/02/19/workflows-for-dummies--introductionpart-i
    Let me know if you still face any issues.
    Regards,
    Kanika

  • What is BI? How do we implement it, and what is the cost to implement?

    What is BI? How do we implement it, and what is the cost to implement?
    Thanks,
    Sumit.

    Hi Sumit,
    below is a description matching your query.
    Business Intelligence is a process for increasing the competitive advantage of a business by intelligent use of available data in decision making.
    The five key stages of Business Intelligence:
    1.     Data Sourcing
    2.     Data Analysis
    3.     Situation Awareness
    4.     Risk Assessment
    5.     Decision Support
    Data sourcing
    Business Intelligence is about extracting information from multiple sources of data. The data might be: text documents - e.g. memos or reports or email messages; photographs and images; sounds; formatted tables; web pages and URL lists. The key to data sourcing is to obtain the information in electronic form. So typical sources of data might include: scanners; digital cameras; database queries; web searches; computer file access; etcetera.
    Data analysis
    Business Intelligence is about synthesizing useful knowledge from collections of data. It is about estimating current trends, integrating and summarising disparate information, validating models of understanding, and predicting missing information or future trends. This process of data analysis is also called data mining or knowledge discovery. Typical analysis tools might use:-
    • probability theory - e.g. classification, clustering and Bayesian networks;
    • statistical methods - e.g. regression;
    • operations research - e.g. queuing and scheduling;
    • artificial intelligence - e.g. neural networks and fuzzy logic.
    Situation awareness
    Business Intelligence is about filtering out irrelevant information, and setting the remaining information in the context of the business and its environment. The user needs the key items of information relevant to his or her needs, and summaries that are syntheses of all the relevant data (market forces, government policy etc.).  Situation awareness is the grasp of  the context in which to understand and make decisions.  Algorithms for situation assessment provide such syntheses automatically.
    Risk assessment
    Business Intelligence is about discovering what plausible actions might be taken, or decisions made, at different times. It is about helping you weigh up the current and future risk, cost or benefit of taking one action over another, or making one decision versus another. It is about inferring and summarising your best options or choices.
    Decision support
    Business Intelligence is about using information wisely. It aims to warn you of important events, such as takeovers, market changes, and poor staff performance, so that you can take preventative steps. It seeks to help you analyse and make better business decisions, to improve sales, customer satisfaction or staff morale. It presents the information you need, when you need it.
    This section describes how we are using extraction, transformation and loading (ETL) processes and a data warehouse architecture to build our enterprise-wide data warehouse in incremental project steps. Before an enterprise-wide data warehouse could be delivered, an integrated architecture and a companion implementation methodology needed to be adopted. A productive and flexible tool set was also required to support ETL processes and the data warehouse architecture in a production service environment. The resulting data warehouse architecture has the following four principal components:
• Data Sources
• Data Warehouses
• Data Marts
• Publication Services
    ETL processing occurs between data sources and the data warehouse, between the data warehouse and data marts and may also be used within the data warehouse and data marts.
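As a rough sketch of the extract-transform-load pattern just described, the following Python example uses sqlite3 to stand in for both the operational store and the warehouse (all table and column names are hypothetical; this shows the generic pattern, not SAS® ETL Studio itself):

    import sqlite3

    # One in-memory database stands in for both source system and warehouse.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE src_transactions (id INTEGER, amount REAL, dept TEXT)")
    conn.execute("CREATE TABLE dw_transactions (id INTEGER, amount REAL, dept TEXT)")
    conn.executemany("INSERT INTO src_transactions VALUES (?, ?, ?)",
                     [(1, 100.0, " hr "), (2, -5.0, "fin"), (3, 42.0, "fin")])

    def extract(conn):
        # Pull raw rows from the operational store.
        return conn.execute("SELECT id, amount, dept FROM src_transactions").fetchall()

    def transform(rows):
        # Example rules: drop non-positive amounts, normalise department codes.
        return [(i, amt, dept.strip().upper()) for i, amt, dept in rows if amt > 0]

    def load(conn, rows):
        conn.executemany("INSERT INTO dw_transactions VALUES (?, ?, ?)", rows)
        conn.commit()

    load(conn, transform(extract(conn)))
    print(conn.execute("SELECT * FROM dw_transactions").fetchall())
    # -> [(1, 100.0, 'HR'), (3, 42.0, 'FIN')]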
    Data Sources
    The university has a multitude of data sources residing in different Data Base Management System (DBMS) tables and non-DBMS data sets. To ensure that all relevant data source candidates were identified, a physical inventory and logical inventory was conducted. The compilation of these inventories ensures that we have an enterprise-wide view of the university data resource.
    The physical inventory was comprised of a review of DBMS cataloged tables as well as data sets used by business processes. These data sets had been identified through developing the enterprise-wide information needs model.
The logical inventory was constructed from "brain-storming" sessions which focused on common key business terms which must be referenced when articulating the institution's vision and mission (strategic direction, goals, strategies, objectives and activities). Once the primary terms were identified, they were organized into directories such as "Project", "Location", "Academic Entity", "University Person", "Budget Envelope" etc. Relationships were identified by recognizing "natural linkages" within and among directories, and the "drill-downs" and "roll-ups" that were required to support "report by" and "report on" information hierarchies. This exercise allowed the directories to be sub-divided into hierarchies of business terms which were useful for presentation and validation purposes.
We called this important deliverable the "Conceptual Data Model" (CDM) and it was used as the consolidated conceptual (paper) view of all of the University's diverse data sources. The CDM was then subjected to a university-wide consultative process to solicit feedback and communicate to the university community that this model would be adopted by the Business Intelligence (BI) project as a governance model in managing the incremental development of its enterprise-wide data warehousing project.
    Data Warehouse
This component of our data warehouse architecture (DWA) is used to supply quality data to the many different data marts in a flexible, consistent and cohesive manner. It is a "landing zone" for inbound data sources and an organizational and re-structuring area for implementing data, information and statistical modeling. This is where business rules which measure and enforce data quality standards for data collection in the source systems are tested and evaluated against appropriate data quality business rules/standards which are required to perform the data, information and statistical modeling described previously.
Inbound data that does not meet data warehouse data quality business rules is not loaded into the data warehouse (for example, if a hierarchy is incomplete). While it is desirable for rejected and corrected records to occur in the operational system, if this is not possible then start dates for when the data can begin to be collected into the data warehouse may need to be adjusted in order to accommodate necessary source systems data entry "re-work". Existing systems and procedures may need modification in order to permanently accommodate required data warehouse data quality measures. Severe situations may occur in which new data entry collection transactions or entire systems will need to be either built or acquired.
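The "do not load what fails the rules" policy can be pictured as a simple gate in front of the warehouse. Here is an illustrative Python sketch for the incomplete-hierarchy example (the hierarchy levels and field names are invented for the illustration):

    # Hypothetical quality gate: records with an incomplete organizational
    # hierarchy are rejected and sent back to the source system for re-work.
    REQUIRED_LEVELS = ("faculty", "department", "unit")

    def quality_gate(records):
        accepted, rejected = [], []
        for rec in records:
            if all(rec.get(level) for level in REQUIRED_LEVELS):
                accepted.append(rec)
            else:
                rejected.append(rec)
        return accepted, rejected

    accepted, rejected = quality_gate([
        {"faculty": "Science", "department": "Physics", "unit": "Optics"},
        {"faculty": "Science", "department": None, "unit": "Optics"},
    ])
    print(len(accepted), "loaded;", len(rejected), "sent back for re-work")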
    We have found that a powerful and flexible extraction, transformation and loading (ETL) process is to use Structured Query Language (SQL) views on host database management systems (DBMS) in conjunction with a good ETL tool such as SAS® ETL Studio. This tool enables you to perform the following tasks:
• The extraction of data from operational data stores
• The transformation of this data
• The loading of the extracted data into your data warehouse or data mart
When the data source is a "non-DBMS" data set it may be advantageous to pre-convert this into a SAS® data set to standardize data warehouse metadata definitions. Then it may be captured by SAS® ETL Studio and included in the data warehouse along with any DBMS source tables using consistent metadata terms. SAS® data sets, non-SAS® data sets, and any DBMS table will provide the SAS® ETL tool with all of the necessary metadata required to facilitate productive extraction, transformation and loading (ETL) work.
Having the ability to utilize standard structured query language (SQL) views on host DBMS systems and within SAS® is a great advantage for ETL processing. The views can serve as data quality filters without having to write any procedural code. The option exists to "materialize" these views on the host systems or leave them "un-materialized" on the hosts and "materialize" them on the target data structure defined in the SAS® ETL process. These choices may be applied differentially depending upon whether you are working with "current only" or "time series" data. Different deployment configurations may be chosen based upon performance issues or cost considerations. The flexibility of choosing different deployment options based upon these factors is a considerable advantage.
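To illustrate the view-as-quality-filter idea, here is a minimal sketch (Python's sqlite3 stands in for the host DBMS; the table, view and column names are hypothetical):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE grants (id INTEGER, pi TEXT, amount REAL)")
    conn.executemany("INSERT INTO grants VALUES (?, ?, ?)",
                     [(1, "Smith", 5000.0), (2, None, 900.0), (3, "Lee", 0.0)])

    # The view encodes the quality rule declaratively; downstream ETL simply
    # selects from it and needs no procedural filtering code.
    conn.execute("""
        CREATE VIEW clean_grants AS
        SELECT id, pi, amount FROM grants
        WHERE pi IS NOT NULL AND amount > 0
    """)
    print(conn.execute("SELECT * FROM clean_grants").fetchall())
    # -> [(1, 'Smith', 5000.0)]

"Materializing" such a view simply means writing its result rows into a physical table on the host or the target, trading storage for query speed.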
    Data Marts
    This component of the data warehouse architecture may manifest as the following:
• Customer "visible" relational tables
• OLAP cubes
• Pre-determined parameterized and non-parameterized reports
• Ad-hoc reports
• Spreadsheet applications with pre-populated work sheets and pivot tables
• Data visualization graphics
• Dashboard/scorecards for performance indicator applications
Typically a business intelligence (BI) project may be scoped to deliver an agreed-upon set of data marts. Once these have been well specified, the conceptual data model (CDM) is used to determine what parts need to be built or used as a reference to conform the inbound data from any new project. After the detailed data mart specifications (DDMS) have been verified and the conceptual data model (CDM) components determined, a source and target logical data model (LDM) can be designed to integrate the detailed data mart specification (DDMS) and the conceptual data model (CDM). An extraction, transformation and loading (ETL) process can then be set up and scheduled to populate the logical data models (LDM) from the required data sources and assist with any time series and data audit change control requirements.
Over time, as more and more data marts and logical data models (LDMs) are built, the conceptual data model (CDM) becomes more complete. One very important advantage of this implementation methodology is that the order of the data marts and logical data models can be entirely driven by project priority, project budget allocation and time-to-completion constraints/requirements. This data warehouse architecture implementation methodology does not need to dictate project priorities or project scope as long as the conceptual data model (CDM) exercise has been successfully completed before the first project request is initiated.
McMaster's Data Warehouse design
[Figure: McMaster's data warehouse design - DB2 and Oracle operational sources and other data feed a staging area (SAS data sets) via ETL; development, test and production warehouses (no direct user access) feed data marts via further ETL; users access the data marts through BI tools.]
    Publication Services
This is the visible presentation environment that business intelligence (BI) customers will use to interact with the published data mart deliverables. The SAS® Information Delivery Portal will be utilized as a web delivery channel to deliver a "one-stop information shopping" solution. This software solution provides an interface to access enterprise data, applications and information. It is built on top of the SAS Business Intelligence Architecture, provides a single point of entry and provides a Portal API for application development. All of our canned reports generated through SAS® Enterprise Guide, along with a web-based query and reporting tool (SAS® Web Report Studio), will be accessed through this publication channel.
Using the portal's personalization features we have customized it for a McMaster "look and feel". Information is organized using pages and portlets and our stakeholders will have access to public pages along with private portlets based on role authorization rules. Stakeholders will also be able to access SAS® data sets from within Microsoft Word and Microsoft Excel using the SAS® Add-In for Microsoft Office. This tool will enable our stakeholders to execute stored processes (a SAS® program which is hosted on a server) and embed the results in their documents and spreadsheets. Within Excel, the SAS® Add-In can:
• Access and view SAS® data sources
• Access and view any other data source that is available from a SAS® server
• Analyze SAS® or Excel data using analytic tasks
    The SAS® Add-In for Microsoft Office will not be accessed through the SAS® Information Delivery Portal as this is a client component which will be installed on individual personal computers by members of our Client Services group. Future stages of the project will include interactive reports (drill-down through OLAP cubes) as well as balanced scorecards to measure performance indicators (through SAS® Strategic Performance Management software). This, along with event notification messages, will all be delivered through the SAS® Information Delivery Portal.
    Publication is also channeled according to audience with appropriate security and privacy rules.
SECURITY - AUTHENTICATION AND AUTHORIZATION
The business value derived from using the SAS® Value Chain Analytics includes an authoritative and secure environment for data management and reporting. A data warehouse may be categorized as a "collection of integrated databases designed to support managerial decision making and problem solving functions" and "contains both highly detailed and summarized historical data relating to various categories, subjects, or areas". Implementation of the research funding data mart at McMaster has meant that our stakeholders now have electronic access to data which previously was not widely disseminated. Stakeholders are now able to gain timely access to this data in the form that best matches their current information needs. Security requirements are being addressed taking into consideration the following:
• Data identification
• Data classification
• Value of the data
• Identifying any data security vulnerabilities
• Identifying data protection measures and associated costs
• Selection of cost-effective security measures
• Evaluation of effectiveness of security measures
At McMaster access to data involves both authentication and authorization. Authentication may be defined as the process of verifying the identity of a person or process within the guidelines of a specific security policy (who you are). Authorization is the process of determining which permissions the user has for which resources (what you may do). Authentication is also a prerequisite for authorization. At McMaster, business intelligence (BI) services that are not public require a sign-on with a single university-wide login identifier which is currently authenticated using Microsoft Active Directory. After a successful authentication the university login identifier can be used by the SAS® Metadata Server. No passwords are ever stored in SAS®. Future plans at the university call for this authentication to be done using Kerberos.
At McMaster aggregate information will be open to all. Granular security is being implemented as required through a combination of SAS® Information Maps and stored processes. SAS® Information Maps consist of metadata that describe a data warehouse in business terms. Through SAS® Information Map Studio, an application used to create, edit and manage SAS® Information Maps, we will determine what data our stakeholders will be accessing through either SAS® Web Report Studio (ability to create reports) or the SAS® Information Delivery Portal (ability to view only). Previously, access to data residing in DB2 tables was granted by creating views using structured query language (SQL). Information maps are much more powerful as they capture metadata about allowable usage and query generation rules. They also describe what can be done, are database independent, can cross databases and hide the physical structure of the data from the business user. Since query code is generated in the background, the business user does not need to know structured query language (SQL). As well as using Information Maps, we will also be using SAS® stored processes to implement role-based granular security.
    At the university some business intelligence (BI) services are targeted for particular roles such as researchers. The primary investigator role of a research project needs access to current and past research funding data at both the summary and detail levels for their research project. A SAS® stored process (a SAS® program which is hosted on a server) is used to determine the employee number of the login by checking a common university directory and then filtering the research data mart to selectively provide only the data that is relevant for the researcher who has signed onto the decision support portal.
    Other business intelligence (BI) services are targeted for particular roles such as Vice-Presidents, Deans, Chairs, Directors, Managers and their Staff. SAS® stored processes are used as described above with the exception that they filter data on the basis of positions and organizational affiliations. When individuals change jobs or new appointments occur the authorized business intelligence (BI) data will always be correctly presented.
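The row-level filtering these stored processes perform can be pictured with a small Python sketch (the directory, data mart rows and role names are invented for the illustration; a real SAS® stored process would express the same logic in SAS code against the actual university directory):

    # Hypothetical directory lookup plus role-based row filter.
    DIRECTORY = {"jsmith": {"employee_no": 1001, "role": "researcher"}}
    RESEARCH_MART = [
        {"project": "P1", "pi_employee_no": 1001, "funding": 50000},
        {"project": "P2", "pi_employee_no": 2002, "funding": 75000},
    ]

    def visible_rows(login):
        user = DIRECTORY.get(login)
        if user is None:
            return []  # unknown or unauthenticated users see nothing
        if user["role"] == "researcher":
            # A principal investigator sees only projects they lead.
            return [row for row in RESEARCH_MART
                    if row["pi_employee_no"] == user["employee_no"]]
        return list(RESEARCH_MART)  # e.g. senior administrative roles see all

    print(visible_rows("jsmith"))  # -> only project P1

Because the filter keys on directory attributes rather than on the login itself, job changes and new appointments are picked up automatically, as noted above.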
    As the SAS® stored process can be executed from many environments (for example, SAS® Web Report Studio, SAS® Add-In for Microsoft Office, SAS® Enterprise Guide) authorization rules are consistently applied across all environments on a timely basis. There is also potential in the future to automatically customize web portals and event notifications based upon the particular role of the person who has signed onto the SAS® Information Delivery Portal.
    ARCHITECTURE (PRODUCTION ENVIRONMENT)
    We are currently in the planning stages for building a scalable, sustainable infrastructure which will support a scaled deployment of the SAS® Value Chain Analytics. We are considering implementing the following three-tier platform which will allow us to scale horizontally in the future:
Our development environment consists of a server with 2 x Intel Xeon 2.8GHz processors and 2GB of RAM, running Windows 2000 Service Pack 4.
    We are considering the following for the scaled roll-out of our production environment.
    A. Hardware
    1. Server 1 - SAS® Data Server
- 4-way 64-bit 1.5GHz Itanium2 server
- 16 GB RAM
- 2 x 73 GB drives (RAID 1) for the OS
- 1 10/100/1Gb Cu Ethernet card
- 1 Windows 2003 Enterprise Edition for Itanium
2. Mid-Tier (Web) Server
- 2-way 32-bit 3GHz Xeon server
- 4 GB RAM
- 1 10/100/1Gb Cu Ethernet card
- 1 Windows 2003 Enterprise Edition for x86
3. SAN Drive Array (modular and can grow with the warehouse)
- 6 x 72 GB drives (RAID 5), 360 GB total, for SAS® and data
    B. Software
    1. Server 1 - SAS® Data Server
    - SAS® 9.1.3
    - SAS® Metadata Server
    - SAS® WorkSpace Server
    - SAS® Stored Process Server
    - Platform JobScheduler
2. Mid-Tier Server
- SAS® Web Report Studio
- SAS® Information Delivery Portal
- BEA WebLogic for future SAS® SPM Platform
- Xythos Web File System (WFS)
3. Client-Tier Server
    - SAS® Enterprise Guide
    - SAS® Add-In for Microsoft Office
    REPORTING
    We have created a number of parameterized stored processes using SAS® Enterprise Guide, which our stakeholders will access as both static (HTML as well as PDF documents) and interactive reports (drill-down) through SAS® Web Report Studio and the SAS® Add-In for Microsoft Office. All canned reports along with SAS® Web Report Studio will be accessed through the SAS® Information Delivery Portal.
    NEXT STEPS
Next steps of the project include development of a financial data mart along with appropriate data quality standards, monthly frozen snapshots and implementation of university-wide financial reporting standards. This will facilitate electronic access to the integrated financial information necessary for the development and maintenance of an integrated, multi-year financial planning framework. Canned reports will include monthly web-based financial statements with drill-down capability, along with budget templates automatically populated with data values and saved in different workbooks for different subgroups (for example, by Department). The latter will be accomplished using Microsoft Dynamic Data Exchange (DDE).
As well, we will begin the implementation of SAS® Strategic Performance Management Software to support the performance measurement and monitoring initiative that is a fundamental component of McMaster's strategic plan. This tool will assist in critically assessing and identifying meaningful and statistically relevant measures and indicators. This software can perform causal analyses among various measures within and across areas, providing useful information on inter-relationships between factors and measures. As well as demonstrating how decisions in one area affect other areas, these cause-and-effect analyses can reveal both good performance drivers and also possible detractors and enable "evidence-based" decision-making. Finally, the tool provides a balanced scorecard reporting format, designed to identify statistically significant trends and results that can be tailored to the specific goals, objectives and measures of the various operational areas of the University.
    LESSONS LEARNED
    Lessons learned include the importance of taking a consultative approach not only in assessing information needs, but also in building data hierarchies, understanding subject matter, and in prioritizing tasks to best support decision making and inform senior management. We found that a combination of training and mentoring (knowledge transfer) helped us accelerate learning the new tools. It was very important to ensure that time and resources were committed to complete the necessary planning and data quality initiatives prior to initiating the first project. When developing a project plan, it is important to
