MAXLOCKS alert generation - SAP Live Cache
Hello Gurus,
We have a requirement to set up alerting for the parameter MAXLOCKS on the liveCache server when it reaches a threshold value, but we couldn't find any monitoring set for it in RZ20.
So we have the idea to use COUNT(*) on LOCKSTATISTICS and compare it against a threshold, but we are not sure how to do that through a shell script for MaxDB. Please suggest any other options as well.
Thanks and Regards,
Sri's
Hello Sri's,
1.
Please review the notes:
65946 Lock escalations
1243937 FAQ: MaxDB SQL locks
As you know, the system table LOCKLISTSTATISTICS displays the statistics of the
lock management since the last database start. This system table is insufficient for the analysis of locking conflicts.
2.
Review the sapact script attached to SAP Note 974758 as a hint for your question about collecting the output
of the statements:
SELECT * FROM SYSDBA.LOCKLISTSTATISTICS and
SELECT * FROM SYSDBA.LOCKSTATISTICS
periodically during the job/application run in question.
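As a hedged sketch of the shell-script approach the original poster asked about: the database name (LC1), users, and threshold below are placeholders, and the exact output format of dbmcli's sql_execute should be verified on your MaxDB release before parsing it.

```shell
#!/bin/sh
# Sketch: alert when the row count of SYSDBA.LOCKSTATISTICS exceeds a
# threshold. DB name, user names, and passwords are placeholders.
THRESHOLD=1000

# Compare a count against the threshold; prints ALERT or OK.
check_locks() {
  count=$1; limit=$2
  if [ "$count" -ge "$limit" ]; then
    echo "ALERT: $count locks >= threshold $limit"
  else
    echo "OK: $count locks"
  fi
}

# The live query only runs when RUN_LIVE is set, so the script can be
# tested without a database. dbmcli and sql_execute are standard MaxDB
# tools, but the output parsing below is an assumption.
if [ -n "$RUN_LIVE" ]; then
  COUNT=$(dbmcli -d LC1 -u control,"$DBM_PWD" -uSQL sapr3,"$SQL_PWD" \
    sql_execute "SELECT COUNT(*) FROM SYSDBA.LOCKSTATISTICS" \
    | grep -o '[0-9][0-9]*' | head -1)
  check_locks "$COUNT" "$THRESHOLD"
fi
```

Scheduled via cron, the ALERT output line could then be mailed or fed to your monitoring tool.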
Also use the DB analyzer to find the bottleneck in liveCache.
3.
Create the SAP message and get SAP support.
It would be helpful to know the version of your system, the SAP Basis SP, the liveCache version, and the reason for your project.
Did you get DB Analyzer warnings about lock escalations?
Regards, Natalia Khlopina
Similar Messages
-
Log Queue Overflows in SCM+Live Cache system
Hello Friends,
Log queue overflows are showing a RED alert in the SCM liveCache. Please help me solve this issue.
Regards,
Balaram

User Basis wrote:
Dears,
>
> In my system some problems are occurring with log queues.
> The following message is presented in my system:
> Log queue overflows: 70641, configured log queue pages: 2000 for each of 4 log queues
>
Ok, so what is the problem with that?
You use the liveCache quite heavily (e.g. make lots of commits/changes), and the storage where you put your log volume is not quick enough to save the data.
Either you fix the storage speed or you accept that your users might have to wait a bit longer for their application to save the data.
User Basis wrote:
> I think it is occurring because we are managing the log with "Log automatic overwrite". This configuration is being used because we don't have a lot of disk space to manage it otherwise.
> Al jey.
These two things have no connection.
A log queue overflow simply means that sessions have to wait until the log queue is saved to disk before they can put new entries into the log queue.
The log mode overwrite means: you don't care about data security and the ability to recover the liveCache from a backup.
It's a setting used for throw-away database instances where it doesn't matter to lose data.
It means that the database simply overwrites log entries that have not been backed up when the log area gets filled.
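To see how the log queues are currently sized before deciding on a fix, the configuration can be read with dbmcli. This is a sketch: the database name and user are placeholders, and the parameter names (LOG_IO_QUEUE for the pages per queue, LOG_QUEUE_COUNT for the number of queues) should be verified for your MaxDB/liveCache release.

```shell
#!/bin/sh
# Sketch: read the log queue configuration via dbmcli. DB name and
# credentials are placeholders; parameter names are assumptions to
# verify against your release.

# Extract the value from a "NAME value" line in dbmcli output,
# skipping the leading OK/END status lines.
param_value() {
  awk -v p="$1" '$1 == p { print $2 }'
}

if [ -n "$RUN_LIVE" ]; then
  dbmcli -d LC1 -u control,"$DBM_PWD" param_directget LOG_IO_QUEUE \
    | param_value LOG_IO_QUEUE
  dbmcli -d LC1 -u control,"$DBM_PWD" param_directget LOG_QUEUE_COUNT \
    | param_value LOG_QUEUE_COUNT
fi
```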
regards,
Lars -
CC 5.2 Alert Generation - Not extracting any data from SAP
Hi there,
We are trying to use the alert function for critical transactions in CC 5.2. We have set the relevant parameters in CC. The Alert_Log.txt is created successfully, but it is not extracting any data from SAP. Are there any settings that I am missing, both in R/3 and CC 5.2? I really appreciate your help. Below is the Alert_Log.txt for your review.
Dec 20, 2007 2:27:32 PM com.virsa.cc.xsys.riskanalysis.AnalysisDaemonBgJob scheduleJob
FINEST: Analysis Daemon started background Job ID:41 (Daemon ID D:\usr\sap\CC1\DVEBMGS00\j2ee\cluster\server0\. Thread ID 0)
Dec 20, 2007 2:27:32 PM com.virsa.cc.xsys.riskanalysis.AnalysisDaemonBgJob scheduleJob
INFO: -
Scheduling Job =>41----
Dec 20, 2007 2:27:32 PM com.virsa.cc.xsys.bg.BgJob run
INFO: --- Starting Job ID:41 (GENERATE_ALERT) - AlertGeneration_Testing
Dec 20, 2007 2:27:32 PM com.virsa.cc.xsys.bg.BgJob setStatus
INFO: Job ID: 41 Status: Running
Dec 20, 2007 2:27:32 PM com.virsa.cc.xsys.bg.BgJob updateJobHistory
FINEST: --- @@@@@@@@@@@ Updating the Job History -
1@@Msg is AlertGeneration_Testing started
Dec 20, 2007 2:27:32 PM com.virsa.cc.xsys.bg.dao.BgJobHistoryDAO insert
INFO:
Background Job History: job id=41, status=1, message=AlertGeneration_Testing started
Dec 20, 2007 2:27:32 PM com.virsa.cc.xsys.bg.BgJob alertGen
INFO: @@@ Alert Generation Started @@@
Dec 20, 2007 2:27:32 PM com.virsa.cc.xsys.bg.BgJob alertGen
INFO: @@@ Conflict Risk Input has 1 records @@@
Dec 20, 2007 2:27:32 PM com.virsa.cc.xsys.bg.BgJob alertGen
INFO: @@@ Critical Risk Input has 1 records @@@
Dec 20, 2007 2:27:32 PM com.virsa.cc.xsys.bg.BgJob alertGen
INFO: @@@ Mitigation Monitor Control Input has 1 records @@@
Dec 20, 2007 2:27:32 PM com.virsa.cc.comp.BackendAccessInterface alertGenerate
INFO: @@@@@ Backend Access Interface execution has been started @@@@@
Dec 20, 2007 2:27:32 PM com.virsa.cc.comp.BackendAccessInterface alertGenerate
INFO: @@System=>R3
Dec 20, 2007 2:27:32 PM com.virsa.cc.comp.BackendAccessInterface alertGenerate
INFO:
No of Records Inserted in ALTCDLOG =>0 For System =>R3
Dec 20, 2007 2:27:32 PM com.virsa.cc.comp.BackendAccessInterface append_TcodeLogFile
INFO: *********SOD Tcode Size=>0**************
Dec 20, 2007 2:27:32 PM com.virsa.cc.comp.BackendAccessInterface append_TcodeLogFile
INFO: *********Alert Tcode Log File=>D:\cc_alert_log\cc_alert_log1.txt is created**************
Dec 20, 2007 2:27:32 PM com.virsa.cc.comp.BackendAccessInterface alertGenerate
INFO:
File Output Log File Size ==>0----
For System =>R3
Dec 20, 2007 2:27:32 PM com.virsa.cc.comp.BackendAccessInterface alertGenerate
INFO: -
Conf Last Run Date=>2007-12-12--Conf Last Run Time=>12:45:11--
Dec 20, 2007 2:27:32 PM com.virsa.cc.comp.BackendAccessInterface alertGenerate
INFO: ==$$$===Notif Current Date=>2007-12-20==$$$==Notif Current Time=>14:27:32===$$$===
Dec 20, 2007 2:27:32 PM com.virsa.cc.comp.BackendAccessInterface send_AlertNotification
INFO: ****************** send Notification Alert Type=>1
Dec 20, 2007 2:27:32 PM com.virsa.cc.comp.BackendAccessInterface send_AlertNotification
INFO: ******Alert Notification=>CONFALERTNOTIF==LastRunDate:=>2007-12-20==LastRunTime:=>00:00:00==Curr Date=>2007-12-20==Curr Time=>14:27:32*********
Dec 20, 2007 2:27:32 PM com.virsa.cc.comp.BackendAccessInterface send_AlertNotification
INFO: ****************** send Notification Alert Type=>2
Dec 20, 2007 2:27:32 PM com.virsa.cc.comp.BackendAccessInterface send_AlertNotification
INFO: ******Alert Notification=>CRITALERTNOTIF==LastRunDate:=>2007-12-20==LastRunTime:=>00:00:00==Curr Date=>2007-12-20==Curr Time=>14:27:32*********
Dec 20, 2007 2:27:32 PM com.virsa.cc.comp.BackendAccessInterface send_AlertNotification
INFO: ****************** send Notification Alert Type=>3
Dec 20, 2007 2:27:32 PM com.virsa.cc.comp.BackendAccessInterface send_AlertNotification
INFO: ******Alert Notification=>MITALERTNOTIF==LastRunDate:=>2007-12-20==LastRunTime:=>00:00:00==Curr Date=>2007-12-20==Curr Time=>14:27:32*********
Dec 20, 2007 2:27:32 PM com.virsa.cc.xsys.mgmbground.dao.AlertStats execute
INFO: Start AlertStats.............
Dec 20, 2007 2:27:32 PM com.virsa.cc.xsys.mgmbground.dao.AlertStats execute
INFO: start:Sat Dec 01 14:27:32 CST 2007
Dec 20, 2007 2:27:32 PM com.virsa.cc.xsys.mgmbground.dao.AlertStats execute
INFO: now:Thu Dec 20 14:27:32 CST 2007
Dec 20, 2007 2:27:32 PM com.virsa.cc.xsys.mgmbground.dao.AlertStats execute
INFO: end: Tue Jan 01 14:27:32 CST 2008
Dec 20, 2007 2:27:32 PM com.virsa.cc.xsys.mgmbground.dao.AlertStats execute
INFO: Month 2007/12
Dec 20, 2007 2:27:32 PM com.virsa.cc.xsys.bg.BgJob alertGen
INFO: @@@=== Alert Generation Completed Successfully!===@@@
Dec 20, 2007 2:27:32 PM com.virsa.cc.xsys.bg.BgJob setStatus
INFO: Job ID: 41 Status: Complete
Dec 20, 2007 2:27:32 PM com.virsa.cc.xsys.bg.BgJob updateJobHistory
FINEST: --- @@@@@@@@@@@ Updating the Job History -
0@@Msg is Job Completed successfully
Dec 20, 2007 2:27:32 PM com.virsa.cc.xsys.bg.dao.BgJobHistoryDAO insert
INFO:
Background Job History: job id=41, status=0, message=Job Completed successfully
Dec 20, 2007 2:27:32 PM com.virsa.cc.xsys.riskanalysis.AnalysisDaemonBgJob scheduleJob
INFO: -
Complted Job =>41----

Hi,
We ran into a similar issue where the alert log was not displayed accurately in CC 4.0, and we implemented Note 1044393 (CC5.1 Alerts); it worked for us. As you are using CC 5.2, I would recommend you check the code and then try this note, though I am still skeptical whether it will help you or not.
Thanks,
Kavitha -
What is the significance of Live Cache in demand planning ?
Hi all,
Can anyone explain to me the significance of liveCache in Demand Planning? What issues will turn up for liveCache if it is not properly maintained?
Thanks
Pooja

Hi Pooja,
SAP came up with the liveCache concept for storage and, most importantly, quick and efficient processing of transactional data. It is a layer between the database and the GUI, and even the search methods and storage space have been optimized due to its structure. In DP it is used for storage of time series data, whereas in SNP it can store both time series and order series data.
Regarding your second query, it is recommended to run liveCache consistency checks on a periodic basis to synchronize data between liveCache and the database tables. You can face many issues due to liveCache inconsistency, such as incorrect time series generation, transactional data discrepancies, COM routine errors during background processing, etc.
Let me know if it helps
Regards
Gaurav -
Live Cache error with message number /SAPAPO/OM028
Hi Experts,
When I run the RLCDEL program for deletion of obsolete SNP orders in the background, the system shows a liveCache error alert with message number /SAPAPO/OM028 in the deletion job log. The message details are "invalid order" or an ABAP programming error. liveCache consistency checks are not resolving the issue. The product release is 5.1 with SP08. Please suggest a resolution.
Regards,
Raghav

Hi Raghav,
The SAP Notes 642503 & 973244 will fix your issue, but they are for
versions 4.0 and 5.0 respectively.
You can write to SAP by creating an OSS message to have these
two notes updated for version 5.1.
This will resolve your issue
Please confirm
Regards
R. Senthil Mareeswaran. -
Does CC alert generation take into account 1/ act or 2/ act+perm ?
Dear Forum,
Can somebody shed some light on how alert generation works? Are only the entries in the action tab of a function relevant?
1. Suppose you have alertlog.txt
$ cat Alertlog.log
SYS-001 JDOE SM30 2008-07-28 09:15:07 ENDUSER
SYS-001 JDOE SU01 2008-07-28 09:45:24 ENDUSER
2. critical action rules
a) SM30 S_TABU_DIS 02 FC31 = open and close FI posting period
b) SU01 S_USER_GRP ACTVT 06 = Delete users
---> Will the "Search Critical Action Alerts" functionality report JDOE or not? That is, will SAP GRC take the permissions into account, yes or no? If not, then we have false positives.
Thanks - Sam

Simon,
Thanks for the reply
Can we conclude as follows :
"SAP GRC Risk and Remediation alert monitoring and alert notification do not take any function permission settings into account before, during, or after alert analysis. By this logic, users will be reported as soon as Alertlog.txt line items correspond with items from the action tab of a function, regardless of the fact that those users' user buffers do not contain the necessary permissions specified within that same function."
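This action-tab-only matching can be illustrated with a small sketch over the Alertlog.txt format shown earlier (the field positions are assumptions taken from those sample lines; the real GRC log layout may differ):

```shell
#!/bin/sh
# Sketch: flag alert-log lines whose 3rd field (the transaction code)
# appears in a critical-action list. Permissions are never consulted,
# which is exactly why false positives like JDOE/SM30 can occur.
# Field positions are assumed from the sample log lines above.

match_critical() {
  # $1 = space-separated list of critical tcodes; log lines on stdin
  awk -v list="$1" '
    BEGIN { n = split(list, t, " "); for (i = 1; i <= n; i++) crit[t[i]] = 1 }
    $3 in crit { print $2, $3, $4, $5 }   # user, tcode, date, time
  '
}
```

Feeding the two sample lines through `match_critical "SM30 SU01"` reports JDOE for both, even if his user buffer lacks S_TABU_DIS/FC31 or S_USER_GRP/06.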
I have noticed your email suffix sap.com --> Can I consider your answer as an official answer from SAP to my question ? -
Short dump while executing job SM:EXEC SERVICES for EWA alert generation
Hello All,
I'm getting a short dump error while executing the job SM:EXEC SERVICES for EWA alert generation. I'm using EHP1 @ SP22 of Solution Manager.
Please find the logs
Error in the ABAP Application Program
The current ABAP program "SAPLDSVAS_PROC" had to be terminated because it has
come across a statement that unfortunately cannot be executed.
The following syntax error occurred in program "RDSVASCABAP_TRANS__________073" in include "RDSVASIABAP_TRANS__________073" in line 1782:
"Field "STATEMENT_SUMMARY001019" is unknown. It is neither in one of the specified tables nor defined by a "DATA" statement."
The include has been created and last changed by:
Created by: "SAP "
Last changed by: "SAP "
Error in the ABAP Application Program
The current ABAP program "SAPLDSVAS_PROC" had to be terminated because it has
come across a statement that unfortunately cannot be executed.
If anyone has come across this kind of dump, please reply ASAP; because of it, I cannot generate the EWA report for the R/3 6.04 system.
Thanks,
Anand.

> Anand Tigadikar wrote:
> Thx Paul, but I already checked this note and it's not solving my issue
Dear Anand,
Did you perform a complete service definition replacement for SDCC/SDCCN as described in the note?
These are manual steps you need to perform on the system and are not code corrections.
I would suggest you perform it, if you have not, and check the SDCCN logs to verify the process ran cleanly.
Note: Please make sure SDCC_OSS is functioning and a connection to SAP can be made before
deleting the service definitions.
If you have already replaced the service definitions on the Solution Manager system, then there is no point doing it again. However, checking a note and applying it are not the same thing, so I am uncertain whether you replaced the service definitions or not. The dump you are getting in SM:EXEC SERVICES suggests it's due to a problem with the service definitions. The recommendation is to replace them according to the process described in SAP Note 727998; if you have not done so, please do.
If you have successfully replaced the service definitions, are you still getting the same dump, or has it changed?
Regards,
Paul -
Error during Live Cache Server Installation on SCM 4.1 system
Hi All,
I have an SCM 4.1 ABAP system running on MSSQL 2005 and Windows 2003 Server. I would like to install the liveCache server on the same server. The liveCache client was installed as part of the SCM 4.1 installation.
I have installed the MaxDB software, and now when I try to install the liveCache server instance I get the error below.
I am performing the installation with the user root, and it is an Administrator.
WARNING 2011-12-09 11:01:25
Execution of the command "change 'user' '/install'" finished with return code 1. Output: Install mode does not apply to a Terminal server configured for remote administration.
Installation start: Friday, 09 December 2011, 11:01:23; installation directory: G:\SCM_4.1_Media\Media_Live_Cache\New_Media\51031447_2\CD_SAP_SCM_4.1_liveCache_64bit\SAPINST\NT\AMD64; product to be installed: SAP SCM 4.1> Additional Services> Install a liveCache Server instance
Transaction begin ********************************************************
WARNING 2011-12-09 11:02:33
Error 3 (The system cannot find the path specified.
) in execution of a 'CreateProcess' function, line (265), with parameter (G:\SCM_4.1_Media\Media_Live_Cache\New_Media\51031447_2\CD_SAP_SCM_4.1_liveCache_64bit\NT\AMD64\SDBUPD.EXE -l).
Transaction end **********************************************************
WARNING 2011-12-09 11:02:34
The step Fill_sapdb_db_instance_context with step key LIVECACHESERVER|ind|ind|ind|ind|ind|0|LC_SERVER_INSTALL|ind|ind|ind|ind|ind|0|Fill_sapdb_db_instance_context was executed with status ERROR.
Has anyone seen this error before ? Any pointers would be helpful.
Regards,
Ershad Ahmed.

Subprocess starts at 20111209154957
Execute Command : C:\Program Files\sdb\programs\pgm\dbmcli.exe -n XXXXXXXXX db_enum
Execute Session Command : exit
> Subprocess stops at 20111209154957
OK
> Subprocess starts at 20111209155027
Execute Command : C:\Program Files\sdb\programs\pgm\dbmcli.exe -n XXXXXXXXX db_enum
Execute Session Command : exit
> Subprocess stops at 20111209155027
OK
> Subprocess starts at 20111209155221
Execute Command : C:\Program Files\sdb\programs\pgm\dbmcli.exe -n XXXXXXXXX db_enum
Execute Session Command : exit
> Subprocess stops at 20111209155221
OK
> Subprocess starts at 20111209155323
Execute Command : C:\Program Files\sdb\programs\pgm\dbmcli.exe -n XXXXXXXXX inst_enum
Execute Session Command : exit
> Subprocess stops at 20111209155324
OK
7.5.00.31 f:\sapdb\liv\db
7.6.06.10 f\sapdb\sdb\7606
7.6.06.10 C:\Program Files\sdb\7606
> Subprocess starts at 20111209155324
Execute Command : C:\Program Files\sdb\programs\pgm\dbmcli.exe -n XXXXXXXXX inst_enum
Execute Session Command : exit
> Subprocess stops at 20111209155324
OK
7.5.00.31 f:\sapdb\liv\db
7.6.06.10 f\sapdb\sdb\7606
7.6.06.10 C:\Program Files\sdb\7606
> Subprocess starts at 20111209161349
Execute Command : C:\Program Files\sdb\programs\pgm\dbmcli.exe -n XXXXXXXXX inst_enum
Execute Session Command : exit
> Subprocess stops at 20111209161349
OK
7.5.00.31 f:\sapdb\liv\db
7.6.06.10 f\sapdb\sdb\7606
7.6.06.10 C:\Program Files\sdb\7606
Regards,
Ershad Ahmed. -
Live Cache Failed (DBM error)
Hi,
I am getting the following error while starting the liveCache (LC10):
"Error DBMCLI_COMMAND_EXECUTE_ERROR when starting liveCache LC1 on server system"
Server: system
Users: SAPUSER
Logical Command: DBMRFC
Parameter: exec_lcinit restart
Name and Server : LC1 - system
DBMRFC Function : DBM_EXECUTE
Command : exec_lcinit restart
Error : DBM Error
Return Code : -24964
Error Message : ERR_EXECUTE: error in program execution#
0,sap\lcinit LC1 restart -uDBM , -uDBA , -uSQL ,
liveCache LC1 (restart)
The liveCache state is OFFLINE
DBMServer 7.6.00 Build 029-123-130-265
starting LC1 into ONLINE
ERROR : restart not possible [please check knldiag!!]
ERROR : liveCache LC1 not started (see "d:\sapdb\data\wrk\LC1\lcinit.log")
In Transaction DB59 i tried connection test "Connect. test with "native SQL" ( LCA ) unsuccesful"
How can I restart the liveCache? What could the problem be?
regards
Thennarasu

Hello,
what's wrong with the hint you already got?
> ERROR : restart not possible [please check knldiag!!]
Check the knldiag and then we might be able to do something about this issue.
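A quick way to pull the most recent error lines from the diagnostic files named in the output above; the Unix-style paths are placeholders (the original error shows a Windows path), so adjust them for your installation:

```shell
#!/bin/sh
# Sketch: show recent ERR lines from liveCache diagnostic files.
# Paths are assumptions based on the error messages above; adjust
# them to your system (the rundirectory of instance LC1).

show_errors() {
  # $1 = file to scan; prints the last 20 lines containing ERR
  grep 'ERR' "$1" 2>/dev/null | tail -20
}

if [ -n "$RUN_LIVE" ]; then
  show_errors /sapdb/data/wrk/LC1/knldiag
  show_errors /sapdb/data/wrk/LC1/lcinit.log
fi
```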
regards,
Lars -
Unable to delete Order does not exist in live cache but in table POSMAPN
Hi Experts,
We are facing an issue where a purchase order is not available in liveCache (which means it has no GUID) but exists in the database table POSMAPN. We have tried to delete it using the standard SAP inconsistent-order deletion program and also using the BAPI BAPI_POSRVAPS_DELMULTI, but were not able to delete it.
Can anybody suggest a method by which we can get rid of this order from the system?
Thanks a lot.
Best Regards,
Chandan

Hi Chandan,
Apologies for taking your question from the wrong perspective. If you want to delete the order, you need to re-CIF it from ECC so that it comes and sits in liveCache again. Once done, try using the BAPI.
If you are not successful with the above approach, try running the consistency report /SAPAPO/SDRQCR21 in the APO system so that it first corrects the inconsistency between ECC and APO (liveCache + DB tables), and then use the BAPI to delete the PO.
Not sure if you have tried this way. If this does not solve your purpose, you need to check SAP Notes.
Thanks,
Babu Kilari -
DBM Error return code -11 in LC10 Administration in SCM System Live Cache
Hello,
We have installed SCM 4.1 on Solaris on one box and LC 7.5 on another solaris box.
For a kernel upgrade, we shut down the liveCache using LC10 > Administration on the SCM server. After the kernel patch and other patches for the ABAP stack, we upgraded LC to SP11 Build 35.
Since then, we get the following error in LC10.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Name and Server : LCA - gva1073
DBMRFC Function : DBM_EXECUTE
Command : dbm_version
Error : DBM Error
Return Code : -11
Error Message : tp error: Terminating. [nlsui0.c 1934] pid
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Also, in DB59, when we try to check connection, we get following error -
General Connection Data
Connection Name....: LCA
Database Name......: LCA
Database Server....: gva1073
tp Profiles........: no_longer_used
DBM User...........: CONTROL
Test Scope
1. Execute an external operating system command (DBMCLI)
2. Determine status using TCP/IP connection SAPDB_DBM (DBMRFC
command mode)
3. Determine status using TCP/IP connection SAPDB_DBM_DAEMON (DBMRFC
session mode)
4. Test the SQL connection (Native SQL at CON_NAME)
Application Server: gva1075_SCD_03 (
SunOS )
1. Connect. test with "dbmcli db_state"
Successful
2. Connect. test with command mode "dbmrfc db_state"
Unsuccessful
dbm_system_error
Name and Server : LCA - gva1073
DBMRFC Function : DBM_EXECUTE
Command : db_state
Error : DBM Error
Return Code : -11
Error Message : tp error: Terminating. [nlsui0.c 1934] pid
3. Connect. test with session mode "dbmrfc db_state"
Unsuccessful
dbm_system_error
Name and Server : LCA - gva1073
DBMRFC Function : DBM_CONNECT
Error : DBM Error
Return Code : -11
Error Message : tp error: Terminating. [nlsui0.c 1934] pid
4. Connect. test with "native SQL" ( LCA )
Successful
++++++++++++++++++++++++++++++++++++++++++++++++++++
Can anybody please help?
Thanks and regards,
Vaibhav

Hello Vaibhav,
while using transaction LC10, the error -11 "tp error: Terminating. [nlsui0.c ...]" occurs. The user authorization with tp fails, and the application server cannot connect to the liveCache.
I assume that it's due to a library version mismatch: tp cannot use the liveCache UNICODE libraries.
Please check that the tp call at the command line works properly, and that a dbmcli call in transaction SM49 with the tp options also works properly:
dbmcli -d <LC-SID> -n <LC-servername> -tpp <profile> -tpi <system-SID> -tpc <connection: LCA/LDA> dbm_version
I recommend updating the liveCache client software on the application server.
If you have access to OSS/Service Marketplace, please take a look at SAP Note 649814,
which describes how to update the liveCache client software on the application server.
< Please also review the SAP notes 847736 & 831108 >
Until the liveCache client version is upgraded on the application server, you can use a workaround: switch off the central authorization for the liveCache LCA/LDA connections.
In transaction LC10, choose Integration, deactivate the option Central authorization, then save.
If you are an official SAP customer, I recommend creating a ticket to SAP in the 'BC-DB-LVC' queue.
Thank you and best regards, Natalia Khlopina -
How can I retrieve data from live cache? This is in Demand Planning: SCM APO.
Please suggest ways.
Thanks & Regards,
Savitha

Hi,
some time ago I worked on SAP APO.
To read live cache, you first have to open a SIM session.
You can do this as shown in this function module:
FUNCTION ZS_SIMSESSION_GET.
*"*"Local Interface:
*" IMPORTING
*" REFERENCE(IV_SIMID) TYPE /SAPAPO/VRSIOID
*" EXPORTING
*" REFERENCE(EV_SIMSESSION) TYPE /SAPAPO/OM_SIMSESSION
CONSTANTS:
lc_simsession_new TYPE c LENGTH 1 VALUE 'N'.
DATA:
lt_rc TYPE /sapapo/om_lc_rc_tab,
lv_simsession LIKE ev_simsession.
IF NOT ev_simsession IS INITIAL.
EXIT.
ENDIF.
*--> create Simsession
CALL FUNCTION 'GUID_CREATE'
IMPORTING
ev_guid_22 = lv_simsession.
*--> create transactional simulation
CALL FUNCTION '/SAPAPO/TSIM_SIMULATION_CONTRL'
EXPORTING
iv_simversion = iv_simid
iv_simsession = lv_simsession
iv_simsession_method = lc_simsession_new
iv_perform_commit = space
IMPORTING
et_rc = lt_rc
EXCEPTIONS
lc_connect_failed = 1
lc_com_error = 2
lc_appl_error = 3
multi_tasim_registration = 4.
IF sy-subrc > 0.
CLEAR ev_simsession.
* error can be found in lt_rc
ENDIF.
* return simsession
ev_simsession = lv_simsession.
ENDFUNCTION.
Then you can access the live cache.
In this case we read an order (if I remember correctly, it's a planned order):
DATA:
lv_vrsioid TYPE /sapapo/vrsioid,
lv_simsession TYPE /sapapo/om_simsession.
* Get vrsioid
CALL FUNCTION '/SAPAPO/DM_VRSIOEX_GET_VRSIOID'
EXPORTING
i_vrsioex_fld = '000' "By default
IMPORTING
e_vrsioid_fld = lv_vrsioid
EXCEPTIONS
not_found = 1
OTHERS = 2.
CALL FUNCTION 'ZS_SIMSESSION_GET'
EXPORTING
iv_simid = lv_vrsioid
IMPORTING
ev_simsession = lv_simsession.
CALL FUNCTION '/SAPAPO/RRP_LC_ORDER_GET_DATA'
EXPORTING
iv_order = iv_orderid
iv_simversion = lv_vrsioid
IMPORTING
et_outputs = lt_outputs
et_inputs = lt_inputs.
If you change something in your simsession, you have to merge it back afterwards, so that your changes become effective.
You can do this like that:
* Merge simulation version (to commit order changes)
CALL FUNCTION '/SAPAPO/TSIM_SIMULATION_CONTRL'
EXPORTING
iv_simversion = lv_vrsioid
iv_simsession = lv_simsession
iv_simsession_method = 'M'
EXCEPTIONS
lc_connect_failed = 1
lc_com_error = 2
lc_appl_error = 3
multi_tasim_registration = 4
target_deleted_saveas_failed = 5
OTHERS = 6.
I hope this helps... -
Hi,
I've just completed an SCM 2007 installation along with the liveCache.
However, when I try to log in to the LC using the dbmcli command (users control/superdba and the passwords set during installation), it gives the following error:
-24950,ERR_USRFAIL: User Authorization failed.
Also, I'm not able to use LC10 --> liveCache Monitoring.
Please help, since it's extremely urgent.
Thanks a lot,
Saba.

Hello Saba,
-> For SAP liveCache documentation in English see the SAP note 767598.
-> In general, the error could be due to a wrong password for the DBM user.
For example, I created the database instance NLK with the control user and a
control password. If I try to connect to the database with a wrong password,
I get this error:
dbmcli -d NLK -u control,test
Error! Connection failed to node (local) for database NLK:
-24950,ERR_USRFAIL: User authorization failed
-> You are running an SCM 2007 (SCM 5.1) system, therefore you are an SAP customer.
Please create an SAP ticket for this issue, so that we can log on via OSS and check the status of your system.
Please set the ticket priority to 'high' if it's extremely urgent.
If it's a DEMO or PROD system, you can escalate the message to 'VH' (very high).
Thank you and best regards, Natalia Khlopina -
Some concern about Live cache Homogeneous System copy
Hi All,
I need to do the Homogeneous system copy SCM 5.0 / Live cache 7.6.00 on AIX. Following are the Source and target System SID for Live cache. Moreover I had check the respective SAP note 457425 and 877203 but still having some concern.
Source System
Live cache SID = SCD
Live cache user ID = SAPSCD
liveCache database instance software owner = scdadm
Live cache Data Size = 8 GB ( 4 Data volume / 2 GB each / auto extend off)
Target System
Live cache SID = SCT
Live cache user ID = SAPSCT
liveCache database instance software owner = sctadm
Live cache Data Size = 4 GB ( 2 Data volume / 2 GB each / auto extend off)
I have the following concerns:
1. Since there is a difference in size / number of data volumes between the source and target systems, what action do I need to take on the target system? Do I need to add two more data volumes?
2. After completion of the database restore on the target system, do I need to change the liveCache user ID (i.e. SAPSCD to SAPSCT)?
Or is it fine to just change the logon data in the Integration tab in LC10?
Please let me know some more information on this.
Thanks,
HarshalHello Harshal,
"1. Since there is a difference in size / number of data volumes between the source and target systems, what action do I need to take on the target system? Do I need to add two more data volumes?"
Please pay attention to SAP Note 457425, "Homogeneous liveCache copy using backup/restore".
Please update the thread with additional information:
What is the version of the source liveCache? What is the version of the target liveCache?
How much data do you have in the source liveCache?
The data area of the target liveCache must be large enough to restore the data backup created in the source liveCache.
The size of the data volumes in the source liveCache alone does not tell how much data you actually have.
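If the target data area does turn out to be too small, extra data volumes can be added with dbmcli. This is a sketch: the SID, volume path, and credentials are placeholders, and db_addvolume syntax should be verified for your release. MaxDB sizes data volumes in 8 KB pages, so a 2 GB volume corresponds to 262144 pages:

```shell
#!/bin/sh
# Sketch: add a 2 GB data volume to the target liveCache SCT.
# SID, path, and credentials are placeholders; db_addvolume is a
# standard MaxDB DBM command, but check the syntax on your release.

# MaxDB data volumes are specified in 8 KB pages.
gb_to_pages() {
  echo $(( $1 * 1024 * 1024 / 8 ))
}

if [ -n "$RUN_LIVE" ]; then
  dbmcli -d SCT -u control,"$DBM_PWD" \
    db_addvolume DATA /sapdb/SCT/sapdata3 F "$(gb_to_pages 2)"
fi
```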
"2. After completion of the database restore on the target system, do I need to change the liveCache user ID (i.e. SAPSCD to SAPSCT)?"
Please review the SAP note 877203.
You could rename the user SAPSCD to SAPSCT using the steps in SAP Note 877203,
OR
you could keep the user SAPSCD in the target liveCache as the standard liveCache user. In that case, after you run the homogeneous liveCache copy using the backup/restore procedure, you need to change the LCA/LDA/LEA connections to use the user SAPSCD on the target system and set the user SAPSCD for the user containers (please see SAP Note 616555 for more details) before restarting the liveCache in LC10.
PS: Please pay attention to the value of the liveCache parameter _UNICODE.
What are the values of this parameter in the source and target liveCache?
Thank you and best regards, Natalia Khlopina -
Hi Experts,
I want to generate alerts when a message fails in PI. I am checking SXMB_MONI for message failures.
I have set up the following to generate alerts:
1. Alert category in ALRTCATDEF with fixed recipients
2. Activated the alert rule in RWB (Alert Configuration)
3. SMTP setup is present in SCOT
4. The alertinbox service is activated in SICF
Even after doing this, alerts are not being generated: I cannot see any alerts in the Alert Inbox, and I am not getting alert emails.
Please let me know if I am missing anything or if any additional steps are required for alert generation.
Thanks,
Deepak

Check the following:
1) Check the exchange profile parameters for the 'Central Monitoring Server' in section RuntimeWorkbench:
com.sap.aii.rwb.server.centralmonitoring.r3.ashost
com.sap.aii.rwb.server.centralmonitoring.r3.client
com.sap.aii.rwb.server.centralmonitoring.r3.sysnr
Ensure they are filled with the information of the PI system where the
Central Monitoring Server is.
2) Go to transaction SM59 and check the RFC destination
'CentralMonitoringServer-XIAlerts'.
Under 'Technical Setting' the same host and system number as parameter
com.sap.aii.rwb.server.centralmonitoring.r3.ashost/r3.sysnr in the
exchange profile should be maintained.
Under 'Logon&Security' the same client as from parameter
com.sap.aii.rwb.server.centralmonitoring.r3.client should be
maintained.
If they are not the same/matching, delete the destination. It will be
created automatically with the correct parameters when the next alert
is created.
3) Run these reports to ensure all the necessary ICF services are running:
RSXMB_ACTIVATE_ICF_SERVICES
RSALERT_ACTIVATE_ICF_SERVICES