Help in BPM: Routing data to multiple receivers based on field content
Hi All,
I have a scenario where my source data comes from a file. Typically this file comes from a DMS application, and based on its contents I need to send the data to two receivers, which might be R/3 and a database or a file. Though BPM may not be necessary for this scenario, I want to make use of BPM.
Q: My file content looks like:
1200,fombell,12,1200.50,ea,tetra
1900,fombell wdc,2,200,ea,magneta
2020,morris,1,12.50,Pc,frp_films
2020,morris,10,1200,Pc,xr_tutos
The field separator is ','.
Primarily, my first field is a plant, and based on the plant I need to send the data to R/3 and to a file or database. If I use a switch in BPM, I am not able to read the entire list in the message; it checks only the last record. How do I go ahead with this? Kindly let me know the steps involved in detail.
Thanks,
Prabhu
Hi Prabhu,
Just a suggestion: it is usually better to avoid BPM to improve performance.
Your requirement can easily be met using the extended receiver determination that we have.
Here, based on a condition evaluated against the message payload, you can assign your receivers.
Below are the steps to do the same.
Under Receiver Determination --> Configured Receivers, specify the two receivers. Next to each there is an option called Condition, where you specify the condition under which the message goes to the first system and the condition under which it goes to the second system.
When you click on the condition, it opens another window (the Condition Editor window). Clicking the Left Operand there opens yet another window (the Expression Editor window); for your need, select the XPath radio button and then click the field whose value you want to check in the structure that is displayed. Clicking the field fills in the XPath expression; then click OK.
That takes you back to the previous window (the Condition Editor window) with the Left Operand holding the XPath expression. All that remains is to fill the Right Operand, which is simply the value you want that field to contain. (For example, the field "A" you selected appears as the XPath expression in your Left Operand, and the value "1" goes in your Right Operand.)
You also have the option to combine conditions with "and" or "or" if required (for example, to check whether both "A" and "B" are satisfied, or whether "A" or "B" is satisfied).
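To see what such an XPath condition effectively evaluates, here is a minimal sketch in Python. The element names (Orders/Row/Plant) and receiver names are assumptions standing in for the actual message type; the real evaluation happens inside XI's receiver determination, not in your own code:

```python
import xml.etree.ElementTree as ET

# Hypothetical payload after file-content conversion; element names
# (Orders/Row/Plant) are assumptions, not the actual XI message type.
PAYLOAD = """
<Orders>
  <Row><Plant>1200</Plant><Material>fombell</Material></Row>
  <Row><Plant>2020</Plant><Material>morris</Material></Row>
</Orders>
"""

def determine_receivers(xml_text):
    """Mimic a receiver-determination condition such as
    /Orders/Row/Plant = '1200'  ->  R/3, anything else -> file/DB."""
    root = ET.fromstring(xml_text)
    receivers = set()
    for row in root.findall("Row"):
        plant = row.findtext("Plant")
        receivers.add("R3_RECEIVER" if plant == "1200" else "FILE_RECEIVER")
    return receivers

print(sorted(determine_receivers(PAYLOAD)))  # ['FILE_RECEIVER', 'R3_RECEIVER']
```

Note that, as I understand it, a receiver-determination condition on a repeating node is satisfied if any occurrence in the payload matches, which is how it sidesteps the "only the last record is checked" behaviour of a switch step.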
I also suggest you go through these blogs:
/people/shabarish.vijayakumar/blog/2005/08/03/xpath-to-show-the-path-multiple-receivers
/people/shabarish.vijayakumar/blog/2006/06/07/customise-your-xpath-expressions-in-receiver-determination
If you have any doubts or need clarification, please do ask and I will tell you how to proceed.
Regards,
Abhy
Similar Messages
-
How can I convert data from multiple tables to another database (MS-SQL)?
Dear all,
How can I convert data from multiple tables to another database such as MS-SQL?
I have a third-party system based on MS-SQL 2000.
Now we want to build an integration between SAP R/3 (Oracle) and SQL Server.
When a user releases a purchase order in R/3, the application we code will transfer the related data to a temp database on the SQL Server.
But I don't know which tool will help me reach this goal: BAPI, LSMW, IDoc...?
Would anybody tell me which way is better and how to do it?
Thanks a lot!
Kevin Wang
Hello Kevin,
Which method to use depends on your detailed requirements. If you use a BAPI, you need to find out which BAPI can provide the data you want. A BAPI is normally used as a function called by an external system, so you would need to develop an external program, for example in VB or Java, to call the BAPI and move the data to SQL Server. LSMW is used when you want to upload data from an external system into SAP, so it does not serve your requirement. An IDoc can be used to export data to an external system; again, like a BAPI, you need to find out which IDoc provides the data you want, but it does not require any programming on the external system. If I were you, based on your requirements, I would write an ABAP program that reads the data you want and downloads it to the NT/SQL server; I think that would be faster and easier. -
Facing a problem during upload of routing data using a CA01 BDC - URGENT
Dear All,
When I try to upload routing data using CA01 in a table-control scenario, my last 2 records do not get uploaded from my test file.
For example, I have 47 records in my test file, and after setting the default-size parameter (to avoid screen-resolution problems) I have 15 table-control line items per page. The page-down logic ('=P+') works fine, but my BDC code below fails to pick up the last 2 records from the test file.
Analysis: When I run my CALL TRANSACTION BDC in foreground, the 1st page-down occurs after the 15th record and the 2nd after the 29th record (the 15th record of page 1 appears at the top of page 2). The 3rd page-down occurs after the 43rd record (the 29th record of page 2 appears at the top of page 3). On the 4th table-control page the 43rd record of the previous page comes on top; the program then takes the 44th and 45th records from the test file and triggers SAVE (=BU). Thus our last 2 records (the 46th and 47th) never reach the routing screen.
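The page-down counts in the analysis above follow directly from the table-control geometry: with 15 visible rows and the last row of each page carried over to the top of the next, page 1 absorbs 15 fresh records but every later page only 14. A small sketch (plain arithmetic, not the actual SAPGUI behaviour) reproduces the observed numbers:

```python
# Table-control paging arithmetic (a sketch, not the actual BDC engine).
# Assumes 15 visible rows (DEFSIZE = 'X') and that each '=P+' carries the
# last visible row over to the top of the next page.

VISIBLE_ROWS = 15
CARRIED_ROWS = 1

def records_absorbed(pages):
    """Distinct records that fit on the first `pages` table-control pages."""
    if pages <= 0:
        return 0
    return VISIBLE_ROWS + (pages - 1) * (VISIBLE_ROWS - CARRIED_ROWS)

# Matches the foreground run: page-downs after records 15, 29 and 43.
print([records_absorbed(p) for p in (1, 2, 3)])  # [15, 29, 43]
```

Because the code below pages down whenever its own counter reaches 16 and then resets it to 1, the counter drifts by one row per page relative to the screen. One plausible explanation for the two missing records is exactly this drift; resetting the counter to 2 after a page-down (row 1 being the carried-over record) is a commonly suggested fix, though I have not verified it against this exact program.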
If anybody has encountered this scenario before, please help me fix the bug here; it is very urgent.
FYI, the other 45 records upload successfully and all the screen field values appear correctly in the routing screen, so there is no issue there.
Thanks & Regards,
Sudipta (Project Lead)
Volvo client location
I am pasting my BDC source code below:
REPORT ZRT1_UPLOAD_CA01_F
NO STANDARD PAGE HEADING
LINE-SIZE 255.
*----------------------------------------------------------------------*
* I N C L U D E S                                                      *
*----------------------------------------------------------------------*
* Include for data declarations
INCLUDE zrout_top.
* Include for forms
INCLUDE zrout_form.
INCLUDE zrout_include_f_ca01.

* AT SELECTION-SCREEN ON VALUE-REQUEST FOR <field>
AT SELECTION-SCREEN ON VALUE-REQUEST FOR P_FILE.
* Attaching F4 help to the filename
  PERFORM F1001_GET_F4.

*----------------------------------------------------------------------*
* S T A R T - O F - S E L E C T I O N                                  *
*----------------------------------------------------------------------*
START-OF-SELECTION.
* Perform to read the input file
  PERFORM f_read_file.
* Perform to fill the BDC data
  PERFORM f_fill_bdctab.

*----------------------------------------------------------------------*
* E N D - O F - S E L E C T I O N                                      *
*----------------------------------------------------------------------*
END-OF-SELECTION.
FREE: i_bdcdata,
i_messtab,
i_record.
*&---------------------------------------------------------------------*
*&  Include           ZROUT_TOP                                        *
*&---------------------------------------------------------------------*
*----------------------------------------------------------------------*
* D A T A B A S E   T A B L E S                                        *
*----------------------------------------------------------------------*
TABLES: t100.                          "Messages

*----------------------------------------------------------------------*
* D A T A   D E C L A R A T I O N S                                    *
*----------------------------------------------------------------------*
***************************** T A B L E   T Y P E S *******************
* For input data
TYPES: BEGIN OF ty_record,
         matnr(18),                    "Material Number
         werks(4),                     "Plant
         verwe(3),                     "Usage
         statu(3),                     "Status
         arbpl(8),                     "Work Center
         steus(4),                     "Control Key
         ltxa1(40),                    "Description of Operation
         bmsch(13),                    "Base Quantity
         meinh(3),                     "Unit of Measure
         vgw01(11),                    "Machine
         vgw02(11),                    "Labour (missing in the post but used below; restored)
         vge01(3),                     "Unit of measure of activity
       END OF ty_record.

*********************** I N T E R N A L   T A B L E S *****************
* Internal table for the input file name
DATA: i_file_tab TYPE STANDARD TABLE OF sdokpath INITIAL SIZE 0.
* Internal table for BDC data
DATA: i_bdcdata TYPE STANDARD TABLE OF bdcdata INITIAL SIZE 0.
* Internal table for BDC messages
DATA: i_messtab TYPE STANDARD TABLE OF bdcmsgcoll INITIAL SIZE 0.
* Internal table for the input file
DATA: i_record TYPE STANDARD TABLE OF ty_record INITIAL SIZE 0.

************************** W O R K   A R E A S ************************
* Work area for the input file name
DATA: wa_file_tab LIKE sdokpath.
* Work area for BDC data
DATA: wa_bdcdata LIKE bdcdata.
* Work area for BDC messages
DATA: wa_messtab LIKE bdcmsgcoll.
* Work area for the input file
DATA: wa_record TYPE ty_record.

*************************** V A R I A B L E S *************************
DATA: v_filename TYPE string,
      v_fnam(40) TYPE c.
DATA: wa_opt TYPE ctu_params.

*************************** C O N S T A N T S *************************
CONSTANTS: c_werks TYPE rc27m-werks VALUE 'tp',
           c_steus TYPE plpod-steus VALUE 'PP01'.

* Selection screen
SELECTION-SCREEN BEGIN OF BLOCK B1 WITH FRAME TITLE TEXT-001.
PARAMETERS:
* Input file name
  P_FILE TYPE rlgrap-filename OBLIGATORY.    " DEFAULT 'C:\'
SELECTION-SCREEN END OF BLOCK B1.
*&---------------------------------------------------------------------*
*&  Include           ZROUT_FORM                                       *
*&---------------------------------------------------------------------*
*&---------------------------------------------------------------------*
*&      Form  f_fill_bdctab
*&---------------------------------------------------------------------*
*       Form to fill the BDC data
*----------------------------------------------------------------------*
FORM f_fill_bdctab.
TABLES mapl. "Assignment of Task Lists to Materials
DATA: l_cnt_item(3) TYPE n VALUE 1. "Line item counter
DATA: first(3) TYPE n VALUE 16. "Line item counter
DATA: next(3) TYPE n . "Line item counter
DATA: lin(3) TYPE n . "Line item counter
DATA: l_v_bmsch(13), "Base qty
l_v_meinh(3), "Unit of Measure
l_v_vgw01(11), "Machine
l_v_vgw02(11), "Labour
l_v_vge01(3). "Unit of measure of activity
DATA l_v_nextline TYPE sy-tabix.
DATA wa_temp TYPE ty_record.
* Initialize counter
l_cnt_item = 1.
SORT i_record BY matnr.
LOOP AT i_record INTO wa_record.
AT NEW matnr.
REFRESH: i_bdcdata,
i_messtab.
SET PARAMETER ID 'PLN' FIELD space.
SET PARAMETER ID 'PAL' FIELD space.
PERFORM f_bdc_dynpro USING 'SAPLCPDI' '1010'.
PERFORM f_bdc_field USING 'BDC_OKCODE'
'/00'.
* Material Number
      PERFORM f_bdc_field USING 'RC27M-MATNR'
                                wa_record-matnr.
* Plant
      PERFORM f_bdc_field USING 'RC27M-WERKS'
                                c_werks.
* Routing group (the value line was lost in the original post; blank is
* assumed here for internal number assignment)
      PERFORM f_bdc_field USING 'RC271-PLNNR'
                                space.
* Check whether a routing already exists for the material
SELECT * FROM mapl
INTO mapl
WHERE matnr EQ wa_record-matnr
AND werks EQ c_werks
AND plnty EQ 'N'.
IF sy-subrc EQ 0.
PERFORM f_bdc_dynpro USING 'SAPLCPDI' '1200'.
PERFORM f_bdc_field USING 'BDC_OKCODE'
'=ANLG '.
ENDIF.
ENDSELECT.
perform f_bdc_dynpro USING 'SAPLCPDA' '1200'.
perform f_bdc_field USING 'BDC_OKCODE'
'=VOUE'.
* Group Counter (the value line was lost in the original post; blank assumed)
      perform f_bdc_field USING 'PLKOD-PLNAL'
                                space.
* Usage
PERFORM f_bdc_field USING 'PLKOD-VERWE'
'1'.
Status
PERFORM f_bdc_field USING 'PLKOD-STATU'
'4'.
ENDAT.
PERFORM f_bdc_dynpro USING 'SAPLCPDI' '1400'.
* Check whether the page is full
    IF l_cnt_item EQ '16'.
* Page down
PERFORM f_bdc_field USING 'BDC_OKCODE'
'=P+'.
l_cnt_item = 1.
ELSE.
PERFORM f_bdc_field USING 'BDC_OKCODE'
'/00'.
ENDIF.
CLEAR v_fnam.
* Populate item-level details
* Work Center
    CONCATENATE 'PLPOD-ARBPL(' l_cnt_item ')' INTO v_fnam.
    PERFORM f_bdc_field USING v_fnam
                              wa_record-arbpl.
* Control Key
    CONCATENATE 'PLPOD-STEUS(' l_cnt_item ')' INTO v_fnam.
    PERFORM f_bdc_field USING v_fnam
                              c_steus.
* Description of Operation
    CONCATENATE 'PLPOD-LTXA1(' l_cnt_item ')' INTO v_fnam.
    PERFORM f_bdc_field USING v_fnam
                              wa_record-ltxa1.
* Base Quantity
    CONCATENATE 'PLPOD-BMSCH(' l_cnt_item ')' INTO v_fnam.
    PERFORM f_bdc_field USING v_fnam
                              wa_record-bmsch.
* Unit of Measure
    CONCATENATE 'PLPOD-MEINH(' l_cnt_item ')' INTO v_fnam.
    PERFORM f_bdc_field USING v_fnam
                              wa_record-meinh.
* Machine
    CONCATENATE 'PLPOD-VGW01(' l_cnt_item ')' INTO v_fnam.
    PERFORM f_bdc_field USING v_fnam
                              wa_record-vgw01.
* Labour
    CONCATENATE 'PLPOD-VGW02(' l_cnt_item ')' INTO v_fnam.
    PERFORM f_bdc_field USING v_fnam
                              wa_record-vgw02.
* Unit of measure of activity
    CONCATENATE 'PLPOD-VGE01(' l_cnt_item ')' INTO v_fnam.
    PERFORM f_bdc_field USING v_fnam
                              wa_record-vge01.
    l_cnt_item = l_cnt_item + 1.
    CLEAR wa_record.
    AT END OF matnr.
      PERFORM f_bdc_field USING 'BDC_OKCODE'
                                '/00'.
      PERFORM f_bdc_field USING 'BDC_OKCODE'
                                '=BU'.
      wa_opt-DISMODE = 'A'.
      wa_opt-DEFSIZE = 'X'.
      wa_opt-UPDMODE = 'S'.
      PERFORM f_bdc_transaction USING 'CA01'.
* Initialize counter
      l_cnt_item = 1.
    ENDAT.
ENDLOOP.
ENDFORM. " f_fill_bdctab
*&---------------------------------------------------------------------*
*&  Include           ZROUT_INCLUDE_F_CA01                             *
*&---------------------------------------------------------------------*
*&---------------------------------------------------------------------*
*&      Form  f_read_file
*&---------------------------------------------------------------------*
*       Form to read the file from the presentation server
*----------------------------------------------------------------------*
FORM f_read_file .
* Get the file name
  DATA l_v_file TYPE string.
  l_v_file = P_FILE.
CALL FUNCTION 'GUI_UPLOAD'
EXPORTING
filename = l_v_file
filetype = 'ASC'
has_field_separator = 'X'
TABLES
data_tab = i_record
EXCEPTIONS
file_open_error = 1
file_read_error = 2
no_batch = 3
gui_refuse_filetransfer = 4
invalid_type = 5
no_authority = 6
unknown_error = 7
bad_data_format = 8
header_not_allowed = 9
separator_not_allowed = 10
header_too_long = 11
unknown_dp_error = 12
access_denied = 13
dp_out_of_memory = 14
disk_full = 15
dp_timeout = 16
OTHERS = 17.
  IF sy-subrc <> 0.
    MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
            WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
  ENDIF.
ENDFORM.                    " f_read_file
*&---------------------------------------------------------------------*
*&      Form  f_bdc_dynpro
*&---------------------------------------------------------------------*
*       Form to populate the BDC table for a new screen
*      -->fp_program  Screen program name
*      -->fp_dynpro   Screen number
*----------------------------------------------------------------------*
* Start new screen
FORM f_bdc_dynpro USING fp_program fp_dynpro.
CLEAR wa_bdcdata.
wa_bdcdata-program = fp_program.
wa_bdcdata-dynpro = fp_dynpro.
wa_bdcdata-dynbegin = 'X'.
APPEND wa_bdcdata TO i_bdcdata.
ENDFORM. "f_bdc_dynpro
*&---------------------------------------------------------------------*
*&      Form  f_bdc_field
*&---------------------------------------------------------------------*
*       Insert field
*----------------------------------------------------------------------*
FORM f_bdc_field USING fp_fnam fp_fval.
IF NOT fp_fval IS INITIAL.
CLEAR wa_bdcdata.
wa_bdcdata-fnam = fp_fnam.
wa_bdcdata-fval = fp_fval.
APPEND wa_bdcdata TO i_bdcdata.
ENDIF.
ENDFORM. "f_bdc_field
*&---------------------------------------------------------------------*
*&      Form  f_bdc_transaction
*&---------------------------------------------------------------------*
*       Call transaction and handle errors
*      -->fp_tcode  Transaction code
*----------------------------------------------------------------------*
FORM f_bdc_transaction USING fp_tcode.
DATA: l_mstring(480),
l_color TYPE i,
l_mode TYPE c.
REFRESH i_messtab.
CALL TRANSACTION fp_tcode USING i_bdcdata
OPTIONS FROM wa_opt
MESSAGES INTO i_messtab.
* Messages during upload
LOOP AT i_messtab INTO wa_messtab.
CASE wa_messtab-msgtyp.
WHEN 'S'.
l_color = 5.
WHEN 'E'.
l_color = 6.
WHEN 'W'.
l_color = 3.
ENDCASE.
FORMAT COLOR = l_color.
SELECT SINGLE * FROM t100 WHERE sprsl = wa_messtab-msgspra
AND arbgb = wa_messtab-msgid
AND msgnr = wa_messtab-msgnr.
IF sy-subrc = 0.
l_mstring = t100-text.
IF l_mstring CS '&1'.
REPLACE '&1' WITH wa_messtab-msgv1 INTO l_mstring.
REPLACE '&2' WITH wa_messtab-msgv2 INTO l_mstring.
REPLACE '&3' WITH wa_messtab-msgv3 INTO l_mstring.
REPLACE '&4' WITH wa_messtab-msgv4 INTO l_mstring.
ELSE.
REPLACE '&' WITH wa_messtab-msgv1 INTO l_mstring.
REPLACE '&' WITH wa_messtab-msgv2 INTO l_mstring.
REPLACE '&' WITH wa_messtab-msgv3 INTO l_mstring.
REPLACE '&' WITH wa_messtab-msgv4 INTO l_mstring.
ENDIF.
CONDENSE l_mstring.
WRITE: / wa_messtab-msgtyp, l_mstring(250).
ELSE.
WRITE: / wa_messtab.
ENDIF.
FORMAT COLOR OFF.
ENDLOOP.
SKIP.
ENDFORM. " f_bdc_transaction
FORM F1001_GET_F4.
CALL FUNCTION 'KD_GET_FILENAME_ON_F4'
EXPORTING
PROGRAM_NAME = SY-REPID
DYNPRO_NUMBER = SY-DYNNR
FIELD_NAME = P_FILE
CHANGING
FILE_NAME = P_FILE
EXCEPTIONS
MASK_TOO_LONG = 1
OTHERS = 2.
  IF SY-SUBRC <> 0.
*   File is not selected
    MESSAGE I000 WITH TEXT-M01.
  ENDIF.
ENDFORM.                    " F1001_GET_F4
Sudipta,
I would request you to post this in the ABAP forum for a quicker response.
I had this problem too, and an ABAP colleague corrected it; it turned out to be a screen-resolution difference between the system where the recording was made and the system doing the upload. Please try using the same system that was used for the recording.
Regards,
Prasobh -
XML mapped to IDoc routed to two different receivers
Hello gurus,
I have an XML document coming into XI which needs to be mapped into an IDOC, then sent to two different systems (not to both of them) based on a value in the XML (the value is actually the SAP partner ID).
Can this be done in BPM? Perhaps using Enhanced Receivers would be a viable solution for this?
Thank you!
Have you looked into XPath?
Ref:
/people/shabarish.vijayakumar/blog/2006/06/07/customise-your-xpath-expressions-in-receiver-determination
/people/shabarish.vijayakumar/blog/2005/08/03/xpath-to-show-the-path-multiple-receivers
Here based on a field in your message you can route the message to a receiver.
Also see /people/venkataramanan.parameswaran/blog/2006/03/17/illustration-of-enhanced-receiver-determination--sp16 for enhanced receiver determination. -
Sending the same data to multiple receivers.
Hi,
In 7.1, I did a simple file-to-file scenario with multiple receivers. I used only one receiver determination (including multiple business components) and one interface determination, without BPM, and it works. I would like to know if there is any other way to send the same data to multiple receivers without BPM, other than the approach I used.
Thanks,
Vishal
Thanks all for your replies. I did multi-mapping as suggested by a couple of you and it works. But since the data has to be sent to 7 or more systems and the number of fields to be mapped is huge, multi-mapping is time-consuming, so I would like to know if there is any other approach.
@Ravi: can you please explain with more details regarding the approach suggested by you?
Can someone explain why and how a mapping executes multiple times when multiple receivers get the same data?
And is there any way to check (for example, in the monitor) whether the mapping executed multiple times?
Thanks in advance,
Vishal -
I am required to create audit reports on BPM workflows.
I am new to this and need some guidance on configuring the BPM Process Data Mart: what are the prerequisites for configuring it, and what are the steps?
Also, I need some inputs on the BAM database. What is the frequency of data upload? Is it a data update or an insert in BAM?
Hi,
You might want to check out the Administration and Configuration Guides on http://download.oracle.com/docs/cd/E13154_01/bpm/docs65/index.html.
I suspect you might find the BAM and Data Mart portions of this documentation a bit terse, so I've added steps below that provide more detail. I wrote this for ALBPM 6.0 but believe it will still work for Oracle BPM 10g. It was created from an earlier ALBPM 5.7 document Support wrote, called "ALBPM 5_7 Configuring and Troubleshooting the BAM and DataMart Updater.pdf".
You can define how often you want the contents in both databases updated (actually inserted) and how long you want to persist the contents of the BAM database during the configuration.
Here's the contents of the document:
1. Introduction
The use of BAM (Business Activity Monitoring) and Data Mart (or Warehouse) information is becoming more and more widespread in today’s BPM project implementations for the obvious benefits they bring to the management and tuning of processes.
BAM basically consists of a collection of measurements of current process load and execution times. This gives us an idea of how the business is doing at this moment (in a pseudo real-time fashion).
Data Mart, on the other hand, is a historical view of process load and execution times, giving us an idea of how the business has developed since the projects were put in place.
In this document we are not going to describe exhaustively all configuration aspects of the BAM and Data Mart Updater, but rather we will quickly move from one configuration step to another paying more attention to subjects that have presented some difficulties in real-life projects.
2. Creating the Service Endpoints
The databases for BAM and for Data Mart first have to be defined in the External Resources section of the BPM Process Administrator.
In this following example the service endpoint ‘BAMJ2EEWL’ is being defined. This definition is going to be used later as BAM storage. At this point nothing is created.
Add an External Resource with the name ‘BAMJ2EEWL’ and, as we use Oracle, select the Oracle driver, then click <Next>:
On the following screen, specify:
· the hostname – here I have used ‘localhost’ as I am just setting this up to work on my laptop
· the port for the Oracle service
· the SID – here I have used Oracle Express so the SID is ‘XE’
· the new user to create / use in Oracle for this database – here I have specified ‘BPMBAM’. This user, and its database, will be created later
· the password for the user
Scroll down to the bottom of the page and click <Save>.
In addition to the standard JDBC connection that is going to be used by the Updater Service, a remote JDBC configuration needs to be added because the Engine runs in a WebLogic J2EE container. This data source is needed to grant the Engine access to the BAM tables through the J2EE connection pool instead of through a dedicated JDBC connection. The following is an example of how to set this up.
Add an External Resource with the name ‘BAMremote’ and select the Oracle driver, then click <Next>
On the following screen, specify:
· the Lookup Name that will be used subsequently in WebLogic - here I have given it the name ‘XAbamDS’
Then click <Save>.
In the next example the definition ‘DWHJ2EEWL’ is created to be used later as Data Mart storage. If you are not going to use a Data Mart storage you can skip this step.
Add an External Resource with the name ‘DWHJ2EEWL’ and select the Oracle driver, then click <Next>:
On the following screen, specify:
· the hostname – here I have used ‘localhost’ as I am just setting this up to work on my laptop
· the port for the Oracle service
· the SID – here I have used Oracle Express so the SID is ‘XE’
· the new user to create / use in Oracle for this database – here I have specified ‘BPMDWH’. This user, and its database, will be created later
· the password for the user
3. Configuring BAM Updater Service
Once the service endpoint has been created the next step is to enable the BAM update, select the service endpoint to be used as BAM storage and configure update frequency and others. Here the “Updater Database Configuration” is the standard JDBC we configured earlier and the “Runtime Database Configuration” is the Remote JDBC as we are using the J2EE Engine.
So, here’s the example of how to set up the BAM Updater service….
Go into ‘Process Monitoring’ and select the ‘BAM’ tab and enter the relevant information (using the names created earlier – use the drop down list to select):
Note that here, to allow me to quickly test BAM reporting, I have set the update frequency to 1 minute. This would not be the production setting.
Once the data is input, click <Save>.
We now have to create the schema and related tables. For this we will open the “Manage Database” page that has appeared at the bottom of the BAM screen (you may have to re-select that Tab) and select to create the database and the data structure. The user required to perform this operation is the DB system administrator:
Text showing the successful creation of the database and data structures should appear.
Once we are done with the schema creation, we can move to the Process Data Mart configuration screen to set up the Common Updater Service parameters. Notice that the service has not been started yet… We will get to that point later.
4. Configuring Process Data Mart Updater Service
In the case that Data Mart information is not going to be used, the “Enable Automatic Update” checkbox must be left off and the “Runtime Database Configuration” empty for this service. Additionally, the rest of this section can be skipped.
In the case it is going to be used, the detail level, snapshot time and the time of update should be configured; in addition to enabling the updater and choosing the storage configuration. An example is shown below:
Still in ‘Process Monitoring’, select the ‘Process Data Mart’ tab and enter the name created earlier (use the drop down list to select).
Also, un-tick the Generate O3 Cubes (see later notes):
Then click <Save>.
Once those properties have been configured the database and the data structure have to be created. This is performed at the “Manage Database” page for which the link has appeared at the bottom of the page (as with BAM). Even when this page is identical to the one shown above (for the BAM configuration) it has been opened from the link in the “Process Data Mart” page and this makes it different.
Text showing the successful creation of the database and data structures should appear.
5. Configuring Common Updater Service Parameters
In the “Process Data Mart” tab of the Process Monitoring section -along with the parameters that are specific to the Data Mart - we will find some parameters that are common to all services. These parameters are:
• Log directory: location of the log file
• Messages logged from Data Store Updater: severity level of the Updater logs
• Language
• Generate Performance Metrics: enables performance metrics generation
• Generate Workload Metrics: enables workload metrics generation
• Generate O3 Cubes: enables O3 Cubes generation
In this document we are not going to describe in detail each parameter. But we will mention a few caveats:
a. the Log directory must be specified in order for the logs to be generated
b. the Messages logged from Data Store Updater, which indicates the level
of the logs, should be DEBUG for troubleshooting and WARNING otherwise
c. Performance and Workload Metrics need to be on for the typical BAM usage and, even when either metric might not be used on the initial project releases, it is recommended to leave them on in case they turn out to be useful in the future
d. the generation of O3 Cubes must be off if this service is not used; otherwise the Data Mart Updater service might not work properly.
The only change required on this screen was to de-select ‘Generate O3 Cubes’, as shown in the last section.
6. Set up the WebLogic configuration
We need to set up the JDBC data source specified above, so go to Services / JDBC / Data Sources.
Click on <Lock and Edit> and then <New> to add a New data source.
Specify:
· the Name – use the name you set up in the Process Administrator
· the JNDI Name – again use the name you set up in the Process Administrator
· the Database Type – Oracle
· use the default Oracle Database Driver
Then click <Next>
On the next screen, click <Next>
On the next screen specify:
· the Database Name – this is the SID – for me that is XE
· the Host Name – as I am running on my laptop, I’ve just specified ‘localhost’
· the Database User Name and Password – this is the BAM database user specified in the Process Administrator
Then click <Next>
On the next screen, you can test the configuration to make sure you have got it right, then click <Next>
On the next screen, select your server as the target server and click <Finish>:
Finally, click <Activate Changes>.
7. The Last Step: Starting Up and Shutting Down the Updater Service
ALBPM distribution is different depending on the Operating System. In the case of the Updater Service:
- For Unix like Operating Systems the service is started or stopped with the albpmwarehouse.sh shell script. The command in this case is going to look like this:
$ALBPM_HOME/bin$ ./albpmwarehouse.sh start
- For Windows Operating Systems the service is installed or uninstalled as a Windows Service with the albpmwarehouse.bat batch file. The command will look like:
%ALBPM_HOME%\bin> albpmwarehouse.bat install
After installing the service, it has to be started or stopped from the Microsoft Management Console. Note also that Windows will automatically start the installed service when the computer starts. In either case the location of the script is ALBPM_HOME/bin, where ALBPM_HOME is the ALBPM installation directory. An example would be:
C:\bea\albpm6.0\j2eewl\bin\albpmwarehouse.bat
8. Finally: Running BAM dashboards to show it is Working
Now we have finally got the BAM service running, we can run dashboards from within Workspace and see the results:
9. General BAM and Data Mart Caveats
a. The basic difference between these two collections of measurements is that BAM keeps track of current process load and execution times, while Data Mart contains a historical view of those same measurements. This is why BAM information is collected frequently (every minute) and cleared out every several hours (or every day), and why Data Mart is updated infrequently (once a day) and grows indefinitely. Moreover, BAM measurements can be thought of as a minute-by-minute sequence of Engine Events snapshots, while Data Mart measurements are a daily sequence of Engine Events snapshots.
b. BAM and Data Mart table schemas are very similar but they are not the same. Thus, it is important not to use a schema created with the Manage Database for BAM as Data Mart storage or vice-versa. If these schemas are exchanged by mistake, the updater service will run anyway but no data will be added to the tables and there will be errors in the log indicating that the schema is incorrect or that some tables could not be found.
c. BAM and Data Mart Information and Services are independent from one another. Any of them can be configured and running without the other one. The information is extracted directly from the Engine Database (PPROCINSTEVENT table is the main source of info) for both of them.
d. So far there has not been a mention of engines, projects or processes in any of the BAM or Data Mart configurations. This is because the metrics of all projects published under the current Process Administrator (or, more precisely, FDI Directory) are going to be collected.
e. It is also important to note that only activities for which events are generated are going to be measured (and therefore, shown in the metrics). The project default is to generate events only for Interactive activities. This can be changed for any particular activity and for the whole process (where the activity setting, when specified, overrides the process setting). Unfortunately, there is no project setting for events generation so far; thus, remember to edit the level of event generation for every new process that is added to the project.
f. BAM and Data Mart metrics are usually enriched with Business Variables. These variables are a special type of External Variables. An External Variable is a process variable with the scope of an Instance and whose value is stored on a separate column in the Engine Instances table. This allows the creation of views and filters based on this variable. A Business Variable, then, shares all the properties of an External Variable plus the fact that its value is collected in all BAM and Data Mart measurements (in some cases the value is shown as it is for a particular instance and in others the value is aggregated).
The caveat here is that there is a maximum of 256 Business Variables per FDI. Therefore, when publishing several projects into a single FDI directory it is recommended to reuse business variables. This is achieved by mapping similar Business Variables of different projects to a single real variable (in the variable mapping performed at publish time).
g. Configuring the Updater Service Log
In section 5. Configuring Common Updater Service Parameters we have seen that there are two common Updater properties related to logging. These properties are “Log directory” and “Messages logged from Data Store Updater”, and they specify the location and level of these two files:
- dwupdater.log: which is the log for the Data Mart updater service
- bam-dwupdater.log: which is the log for the BAM updater service
In addition to these two properties, there is a configuration file called ‘WarehouseService.conf’ that allows us to modify these other properties:
- wrapper.console.loglevel: level for the updater service log
- wrapper.logfile.loglevel: level for the updater service log
- wrapper.java.additional.n: additional argument to the service JVM
- wrapper.logfile.maxsize: maximum size of the updater service log files
- wrapper.logfile.maxfiles: maximum number of updater service log files
- wrapper.logfile: updater service log file name (the default value is dwupdater-service.log)
9.1. Updater Service Log Configuration Caveats
a. The first three parameters listed above have to be modified when increasing the log level to DEBUG (since the default is WARNING). The loglevel parameters have to be set to DEBUG and a java.additional.n (where n is a consecutive integer to the already used ones) has to be set to –ea to enable asserts, since without this option no DEBUG message is going to be generated.
b. Of the other arguments, maxfiles might need to be increased to hold a few more days of data when the log level is set to DEBUG (with the default value up to two days are stored).
c. The updater service has to be stopped, uninstalled, installed and then started for any of these changes to take effect.
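As a concrete illustration of caveat a, here is what the relevant lines of ‘WarehouseService.conf’ might look like when raising the log level to DEBUG. This is a sketch in the standard Java Service Wrapper property format; the index in wrapper.java.additional.n and the maxfiles value are assumptions, so check your own file before editing:

```
wrapper.console.loglevel=DEBUG
wrapper.logfile.loglevel=DEBUG
# 'n' must be the next unused additional index in your file; 4 is only an example
wrapper.java.additional.4=-ea
# optionally keep more log files around while debugging (value is an example)
wrapper.logfile.maxfiles=10
```

Remember that, per caveat c, the updater service must be stopped, uninstalled, installed and started again before these changes take effect.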
Hope this helps,
Dan -
Update Routing data after change in Work Center
Hello All,
I have just modified the Costing Data for 100 Work Centers CR02, setting a new "Activity Type" and "Activity Unit". But the Routing Data CA02 does not get updated correspondingly. I would like to ask is there any standard program which can help me to do the mass processing as to update the data to the corresponding routing?
Thanks.1) You can change as per Mr Anupam sharma Said
or
2) You can also update the data in CA85N:
In the selection criteria, enter the respective work center and plant.
Under the new values, give the same work center, plant, activity value, unit, and activity type.
Execute, select all the work centers, and then click Mass Change.
Edited by: Sundaresan . E. V on Nov 12, 2010 10:25 AM -
An iPhone 5 has a built-in GPS system, OK?
So your iPhone keeps "track" of where you are? [Actually, where the iPhone is!] Now I assume that it stores the route you took to get to where you are in some format? My question is really twofold: (i) Can I somehow recover the route data and transfer it to a PC or laptop to be used in a program like ArcGIS or ArcView? (ii) If it can be done, how do I do it? I ask because I wish to keep track of my wanderings during guinea fowl counting activities. My current system of using a separate GPS and subsequent data transfer is a bit cumbersome. If my iPhone can do it, it sure will make my life quite a bit easier!
Any help appreciated.
Andrew McLaren
The native Maps app does not support this.
-
Please explain the routing data to be collected and uploaded into the SAP system
Hi Balaji
You have to collect the following data for routing:
1. Operations and their sequences
2. Work centers used for the operations
3. Standard values for the respective operations with base (batch) quantities
4. Operation details like splitting, external operations, etc.
You have to identify the respective fields for these.
You can take the help of a technical (ABAP) consultant for this purpose.
Regards
YMREDDY -
Was trying Ovi Maps on the 5800 today in the car; I had full-strength GPS with no network support. I set it to navigate me home but it kept coming up with the message 'No Route Data'. I have the full UK map installed and it has worked fine all the other times I have tried it out. Still new to Ovi, so not too sure what to try.
All latest firmware updates installed & latest maps (not beta)
Any help appreciated.
After my last post on this topic I deleted all maps from my 5800, which has Nokia Maps 3.03, then downloaded all maps again using Nokia Map Loader 3.0.24.
On my way to a new location I was taking the exit off the highway with my position current & good sat lock and asked Maps to drive me to my destination from there, and the response (twice) was "no route found to destination" (not exact text). I was quite disappointed and just used the map to see where I was and drove there myself by looking at street names.
BTW, the destination I had chosen was from the list of locations in Nokia Maps, which means there was no problem with the way I typed in the address. In addition, I was using Maps in offline mode, if that makes a difference.
Lumia 920, Lumia 800
Nokia N8-00 (NAM, Product Code: 059C8T6), Symbian Belle, Type RM-596, 111.030.0609
Nokia 5800 XpressMusic (NAM, Product Code: 0577454) Software v51.2.007, Type RM-428 -
Ensure field sequence is correct for data for multiple source structures
Hi,
I'm using LSMW with IDOC message type 'FIDCC2' Basic type 'FIDCCP02'.
I'm getting an error that packed fields are not permitted.
I'm also getting "Ensure field sequence is correct for data for multiple source structures".
Source Structures
HEADER_STRUCT G/L Account Document Header
LINE_STRUCT G/L Account Document Line
Source Fields
HEADER_STRUCT G/L Account Document Header
BKTXT C(025) Document Header Text
BLART C(002) Document Type
BLDAT DYMD(008) Document Date
BUDAT DYMD(008) Posting Date
KURSF C(009) Exchange rate
WAERS C(005) Currency
WWERT DYMD(008) Translation Date
XBLNR C(016) Reference
LINE_STRUCT G/L Account Document Line
AUFNR C(012) Order
HKONT C(010) G/L Account
KOSTL C(010) Cost Center
MEINS C(003) Base Unit of Measure
MENGE C(013) Quantity
PRCTR C(010) Profit Center
SGTXT C(050) Text
SHKZG C(001) Debit/Credit Ind.
WRBTR AMT3(013) Amount
I have changed the PAC3 fields to character fields of the same length to avoid the error message that no packed fields are allowed.
Structure Relations
E1FIKPF FI Document Header (BKPF) <<<< HEADER_STRUCT G/L Account Document Header
Select Target Structure E1FIKPF .
E1FISEG FI Document Item (BSEG) <<<< LINE_STRUCT G/L Account Document Line
E1FISE2 FI Document Item, Second Part of E1FISEG (BSEG)
E1FINBU FI Subsidiary Ledger (FI-AP-AR) (BSEG)
E1FISEC CPD Customer/Vendor (BSEC)
E1FISET FI Tax Data (BSET)
E1FIXWT Extended Withholding Tax (WITH_ITEM)
Files
Legacy Data On the PC (Frontend)
File to read GL Account info c:\GL_Account.txt
Data for Multiple Source Structures (Sequential Files)
Separator Tabulator
Field Names at Start of File
Field Order Matches Source Structure Definition
With Record End Indicator (Text File)
Code Page ASCII
Legacy Data On the R/3 server (application server)
Imported Data File for Imported Data (Application Server)
Imported Data c:\SYNERGO_CREATE_LCNA_FI_GLDOC_CREATE.lsmw.read
Converted Data File for Converted Data (Application Server)
Converted Data c:\SYNERGO_LCNA_FI_GLDOC_CREATE.lsmw.conv
Wildcard Value Value for Wildcard '*' in File Name
Source Structures and Files
HEADER_STRUCT G/L Account Document Header
File to read GL Account info c:\GL_Account.txt
LINE_STRUCT G/L Account Document Line
File to read GL Account info c:\GL_Account.txt
File content:
Document Header Text Document Type Document Date Posting Date Exchange rate Currency Translation Date Reference
G/L Account document SA 20080401 20080409 1.05 CAD 20080409 Reference
Order G/L Account Cost Center Base Unit of Measure Quantity Profit Center Text Debit/Credit Ind. Amount
44000022 1040 Line item text 1 H 250
60105M01 13431 TO 10 Line item text 2 S 150
800000 60105M01 Line item text 3 S 100
60110P01 6617 H 40 Line item text 4 S 600
44000022 ACIBRAM Line item text 5 H 600
The file structure is as follow
Header titles
Header info
Line titles
Line1 info
Line2 info
Line3 info
Line4 info
Line5 info
Could someone point me in the right direction?
Thank you in advance!
Curtis
Hi,
Thank you so much for your reply.
For example, I have:
VBAK (header structure)
VBAP (item structure)
My file should be like this, I think:
Identification content Fieldnames
H VBELN ERDAT ERNAM
Fieldvalues for header
H 1000 20080703 swapna
Identification content Fieldnames
I VBELP AUART
Fieldvalues for item
I 001 OR
002 OR
Is this format correct?
Let me know whether I am correct or not. -
Need help in formatting the Date - 'Not a valid month' error
Need help in formatting the date: it does not format and gives a 'Not a valid month' error in the scenario below.
select oc.ST_PGM_MGR, r.ag_dnum, get_major_work_type(r.perf_eval_rtng_id) "v_work_code", r.ag_dnum_supp "supp", r.intfinal, to_char(r.formdate,'MM/DD/YYYY') "formdate", to_char(r.servfrom,'MM/DD/YYYY') "srv_from", to_char(r.servto,'MM/DD/YYYY') "srv_to", descript,
--- The line of code below, trying to format it to mm/dd/yyyy, gives the error
add_months(to_char(r.formdate, 'DD-MON-YYYY'), 12) "formdate2"
from table R
Edited by: Lucy Discover on Jul 7, 2011 11:34 AM
Edited by: Lucy Discover on Jul 7, 2011 1:05 PM
Your syntax is wrong - look at the post above where this syntax is given:
to_char(add_months(r.formdate, 12), 'MM/DD/YYYY') "formdate2"
Look at the formula from a logical perspective - "inside out" - to read what is happening:
take formdate and add 12 months:
add_months(r.formdate, 12)
then apply the to_char format mask - basic syntax:
to_char(date, 'MM/DD/YYYY')
Compare to your syntax:
to_char(add_months(r.formdate, 'MM/DD/YYYY'), 12) "formdate2"
You will see your format string inside the call to add_months, and your 12 inside the call to to_char.
Good luck! -
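If you want to sanity-check the corrected nesting on its own, a minimal query against DUAL works (the literal date here is just an example, not from the poster's table):

```sql
-- Add 12 months to a sample date first, then apply the format mask
SELECT TO_CHAR(ADD_MONTHS(DATE '2011-07-07', 12), 'MM/DD/YYYY') AS formdate2
FROM dual;
-- formdate2 = 07/07/2012
```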
I have a tableView displaying a list of contacts from a Cloud Database. After selecting a contact, I push to a programmatically created MKMapView. Then I display the initial region (the view) that includes the users current location (starting point) and their selected destination (end point).
Now I want to display annotations (as described in the Location Awareness Programming Guide) with polylines that represent the turn-by-turn navigation in my own application, and not in the Maps app currently used in iOS 6.
Due to licensing, and because it is becoming deprecated in iOS 6, I do not want to get routing data from the Google Maps API. How do I get routing data from the iOS 6 Maps app (powered by TomTom) so I can display the point-to-point annotations (turn-by-turn navigation) without leaving my own application?
I checked out Stack Overflow and other forums, which basically left me with the impression that this is not possible. I also checked out the TomTom iPhone Mobile SDK User Guide from the TomTom Developer Portal and am still confused. It must be possible to retrieve routes, since the Maps app can display turn-by-turn directions. How can I retrieve turn-by-turn data that I may display as a route within my own application?
Thanks Michael. Apologies for the slow reply, I was away for a bit (holiday blitz at work and visiting-family madness, etc.) but am back now. I set both options you requested to "never" and retried the CMS software with no change.
I do have progress of a sort, though. As a test I took a separate test PC, put a clean install of Win7 on it, and loaded up the CMS software (it worked perfectly). I then took the version of ole32.dll off that machine, put it onto the computer I had built for her (using Linux), and... got a new error code. Darn, I was so sure I had found a clever solution this time, lol.
Anyway, now when the CMS fails it gives me a similar error, but the offending module is "ntdll.dll". Sooo... I tried taking the "working" version of ntdll.dll from the test box and moving it over to her new PC (making sure to back up the existing ones first so I could put them back if needed), and the PC would not boot.
It seems to want the original versions of a few Dynamic Link Libraries, and if I could somehow give it those while not breaking Win7 it should theoretically work, seeing as it no longer errors with ole32.dll.
ntdll.dll, however, seems necessary for Win7 to boot.
So what I am wondering now is:
Is there some way to have both versions of the DLL file in the system32 folder (bypassing the "cannot have two files with the exact same name in the same folder" thing), or to rename the original DLLs and somehow make the CMS look for the renamed versions, so the system has the updated DLLs it needs to boot/run and the CMS has the old ones it wants? Or is there some way to have a self-contained install of the CMS, say on a USB flash drive, and give it its own E:/windows/system32 path to the DLLs it needs?
Willing to try any other options or settings you may have come up with as well.
Thanks again for your reply and my apologies for not answering sooner. -
How to create hyperlinked text in the F1 help of a particular Data Element.
Dear Guru
I have encountered an issue regarding creating hyperlinked text in the F1 help of a particular Data Element.
For example, what I am trying to do is:
If you open a particular data element, say "ATNAM", in SE11, you will find the documentation below available for ATNAM:
DE ATNAM
Text
Characteristic Name
Definition
Name that uniquely identifies a *characteristic*.
The word "characteristic" appears as a blue hyperlink, and if you click it, it links to the following:
Definition: characteristic
Classification (CA-CL)
Property for describing and distinguishing between objects, such as length, color, or weight.
Profitability Analysis (CO-PA)
I am able to make the first part of the documentation using SE61.
But I am not able to make the hyperlinked part of that documentation.
Please show me some way to develop this.
Thanks & regards
Saifur Rahaman
Hi,
You can add the hyperlink in the documentation by following the path below:
Menu bar -> Insert -> Text -> Hypertext.
This will solve the issue.
have a good day
regards
sarves -
Please help: upgrade issue moving data from a char-type structure to a non-char type
Hi Experts,
Please help, it's very urgent.
DATA: workout(5000).
FIELD-SYMBOLS: <fs_workout> TYPE any.
workout = ' u'.
ASSIGN workout TO <fs_workout> CASTING TYPE c.
bapisditm = <fs_workout>.
I am getting a dump after BAPISDITM = <FS_WORKOUT>.
I think I am getting the dump because I am moving a character-type structure to a non-character-type structure. I thought field symbols could get around this, which is why I used one, but it is not working. Please help.
It's very urgent.
*The dump is:*
Short text
Data objects in Unicode programs cannot be converted.
What happened?
Error in the ABAP Application Program
The current ABAP program "ZSDR0009" had to be terminated because it has
come across a statement that unfortunately cannot be executed.
How to correct the error
Use only convertible operands "dst" and "src" for the statement
"MOVE src TO dst"
If the error occures in a non-modified SAP program, you may be able to
find an interim solution in an SAP Note.
If you have access to SAP Notes, carry out a search with the following
keywords:
"UC_OBJECTS_NOT_CONVERTIBLE" " "
"ZSDR0009" or "ZSDR0009_I02"
"USER_COMMAND"
Thanks in advance.
I got the solution in this thread:
Hi all,
data: gv_line(6000) type c.
Bvbapkom = gv_line.
But I am getting an error like: "gv_line and Bvbapkom are not mutually convertable".
Note: Bvbapkom is a structure.
How do I solve this?
Mahesh
KR
Re: gv_line and Bvbapkom are not mutually convertable.
Posted: Nov 30, 2007 8:40 AM in response to: KR
Hi,
I got the solution.
ANSWER:
* Cast both operands to raw bytes so the assignment bypasses the Unicode convertibility check
FIELD-SYMBOLS: <x_bvbapkom> TYPE x,
               <x_gv_line>  TYPE x.
ASSIGN: bvbapkom TO <x_bvbapkom> CASTING,
        gv_line  TO <x_gv_line>  CASTING.
<x_bvbapkom> = <x_gv_line>.