Regarding WLST Monitoring
I would like to know how WLST monitoring can be of use in our environment.
To begin with, let me explain our setup: we use webMethods as our integration server to carry out transactions with our partners,
and they invoke EJBs on the application server, which is WebLogic, and these calls sometimes fail.
Now, how this tool WLST could be useful in monitoring these calls is something I need help with.
Mr. Satya Ghattu, the author of WLST: if you can reply to this message, that would be a great help.
Hi Hem,
In addition to the answers you have received above, I want to add a few points.
RWB: this is the place where you can monitor the complete message flow, from the time a message is picked up to the time it is posted. In addition, you have Channel Monitoring, Performance Monitoring (where you can check execution times), End-to-End Monitoring, etc. This really helps in debugging problems.
In SXMB_MONI, by contrast, you can only see the messages processed on the Integration Server; messages processed on the Adapter Engine cannot be viewed in SXMB_MONI or SXI_MONITOR.
You can view the same dynpro as in SXMB_MONI within the RWB.
Regards
Gopi
Similar Messages
-
Regarding performance monitoring of RWB
As far as I know, there are two ways to show data through performance monitoring: one is [overview data], the other is [detailed data]. My question is: my [overview data] is displayed successfully, but [detailed data] doesn't display anything. Do some parameters need to be set for [detailed data]?
Any help would be appreciated.
Brand.
Hi Brand,
If detailed data is missing, I think some settings were missed; just go through the links given below and your problem will be solved.
Can you check the following function modules? In SE37, look for FMs "SXIPERF*". You will get:
SXIPERF_RUNTIME XI Performance: Runtime
SXIPERF_RUNTIME_VERI XI Performance: Runtime
SXIPERF_CONFIG XI Performance: Configuration
SXIPERF_EVALUATE XI Performance: Evaluation
SXIPERF_GET_TRANSID XI Performance: Transaction ID
Refer to SAP Note 768456.
Message monitoring errors: SAP Note 928862
http://help.sap.com/saphelp_nw2004s/helpdata/en/06/5d1741b393f26fe10000000a1550b0/content.htm
First, try to perform SLDCHECK. If you face any problem, use this help file to solve the issue:
http://help.sap.com/saphelp_nw04s/helpdata/en/78/20244134a56532e10000000a1550b0/frameset.htm
Regards,
Vinod. -
Queries regarding CCMS monitoring setup
Hi All,
We have configured the CCMS alert monitoring setup for our system and that's running fine.
We have some queries:
Is it possible to get the availability of the physical server (Windows) using CCMSPING?
We have configured it on a high-availability server and registered the SAPCCM4x and CCMSPING agents on both nodes, but we are facing an issue with file system usage monitoring: we are not getting data for the virtual drives. For example, we are getting data for only 3 drives (C, D, S), but no data for the other drives such as G, Z, F.
We have also checked ST06 for the file system; there we get the same data.
Please suggest.
Regards,
Harish
Dear Mansoor,
Since your questions are more related to configuration, I would suggest going through the attached document, especially the topics "Email Infrastructure", "Workflow" (defining routing logic for emails) and "Chat Infrastructure". Here you will find information along with screenshots. -
Regarding Availability Monitoring.
Hi All,
We are using SOLMAN 7 for availability monitoring of our system landscape.
We have installed CCMSPING agents in all the servers and all are running fine.
For our BI system we are not getting the availability data for the different instances. In RZ21 availability monitoring we are able to see only the central instance of our BI system and none of the other application instances.
We checked the CCMSPING agents on all application servers in the BI system; they are up and running, and all RFCs related to CCMSPING in SOLMAN are working fine, but still no data.
We also checked the /etc/hosts file of SOLMAN, and the message server for BI is mentioned there.
How can we include the application instances for BI in SOLMAN? Please help with this.
Regards,
Arun Pathak
Hi,
Check this link for performance monitoring:
http://help.sap.com/saphelp_nw2004s/helpdata/en/9e/6921e784677d4591053564a8b95e7d/content.htm
<i>g) Can a system alarm trigger a page, fax, email, etc.?</i>
>>> Yes, it is available.
http://help.sap.com/saphelp_nw2004s/helpdata/en/4e/20934259a5cb6ae10000000a155106/content.htm
Regards,
moorthy -
Hi All ,
What's the difference between message monitoring using SXMB_MONI and message monitoring in the RWB?
What's the use of transaction code SXI_MONITOR?
Thanks,
hem.
Hi Hem,
In addition to the answers you have received above, I want to add a few points.
RWB: this is the place where you can monitor the complete message flow, from the time a message is picked up to the time it is posted. In addition, you have Channel Monitoring, Performance Monitoring (where you can check execution times), End-to-End Monitoring, etc. This really helps in debugging problems.
In SXMB_MONI, by contrast, you can only see the messages processed on the Integration Server; messages processed on the Adapter Engine cannot be viewed in SXMB_MONI or SXI_MONITOR.
You can view the same dynpro as in SXMB_MONI within the RWB.
Regards
Gopi -
Whenever I try to use the Workload Monitor (ST03N) to get statistics, I am unable to get them. When I schedule the total load on the system as a background job, sometimes I get a dump, and sometimes the background job is scheduled fine but I get no result regarding the total load; I get the error "No data: RFC problem".
What could be the problem? Even though I have scheduled the job SAP_COLLECTOR_FOR_PERFMONITOR (RSCOLL0) periodically, I am unable to get the statistics.
What could be the problem?
Do I have to set any parameter? -
Regarding Distribution Monitor for export/import
Hi,
We are planning to migrate our 1.2 TB database from Oracle 10.2 to MaxDB 7.7, and we are currently testing the migration on a test system. First we tried a simple export/import, i.e. without Distribution Monitor: we were able to export the database in 16 hours, but the import ran for more than 88 hours, so we aborted it. Later we found that we can use Distribution Monitor to distribute the export/import load across multiple systems so that the import completes within a reasonable time. We used 2 application servers for the export/import; the export completed within 14 hours, but again the import ran for more than 80 hours, so we aborted it. We also did table splitting for the big tables, but no luck. 8 parallel processes were running on each server, i.e. on one CI and 2 app servers. We followed the DistributionMonitorUserGuide document from SAP. I observed that on the central system, CPU and memory utilization was above 94%, but on the 2 application servers we added, CPU and memory utilization was very low, i.e. around 10%. Please find the system configuration below:
Central Instance - 8CPU (550Mhz) 32GB RAM
App Server1 - 8CPU (550Mhz) 16GB RAM
App Server2 - 8CPU (550Mhz) 16GB RAM
Also, when I used the top Unix command on the app servers, I could see only one R3load process in the run state, while the other 7 R3load processes were sleeping; on the central instance, all 8 R3load processes were running. I think the fact that not all 8 R3load processes on the app servers were running at a time could be the reason for the very slow import.
Please can someone let me know how to improve the import time? Also, if someone has done a database migration from Oracle to MaxDB, it would be helpful if they could describe how they did it, and whether any specific document is available for migrating from Oracle to MaxDB.
Thanks,
Narendra
> Also, when I used the top Unix command on the app servers, I could see only one R3load process in the run state, while the other 7 R3load processes were sleeping; on the central instance, all 8 R3load processes were running. I think the fact that not all 8 R3load processes on the app servers were running at a time could be the reason for the very slow import.
> Please can someone let me know how to improve the import time?
R3load connects directly to the database and loads the data. The question here is: how is your database configured (in terms of caches and memory)?
> Also, if someone has done a database migration from Oracle to MaxDB, it would be helpful if they could describe how they did it, and whether any specific document is available for migrating from Oracle to MaxDB.
There are no such documents available, since the process of migrating to another database is called a "heterogeneous system copy". This process requires a certified migration consultant to be on-site to do or assist the migration. These consultants are trained specially for certain databases and know tips and tricks for improving the migration time.
See
http://service.sap.com/osdbmigration
--> FAQ
For MaxDB there's a special service available, see
Note 715701 - Migration to SAP DB/MaxDB
Markus -
Hello Gurus,
An IDoc is sent from a third-party tool to SAP. It was working fine, but since they changed the third-party tool, IDocs are no longer being received on the SAP side.
Is there any separate tool setting that should be checked?
How should I proceed in this case? I am checking transaction WE02 for incoming IDocs.
I am not an expert in IDocs. Please help.
I will appreciate your help.
Regards,
Sam
Hi,
1. By changing the third-party tool, the configuration can also have changed.
2. So the IDoc may not be received on the SAP side. If the configuration is done correctly, it will appear.
3. The intermediate layer is the file interface; with this we can configure it.
4. Check WE05 as well.
Or
Please check InfoShuttle tool.
http://www.sapgenie.com/products/infoshuttle.htm
http://www.getgamma.com/products/infoshuttle.cfm
Regards,
Shiva. -
Need Help regarding wlst script
Hi
I am new to WLST scripting. I need to know how to add error handling to the script below, and how to release the lock taken by invoke in case of an error.
Below is the script:
connect(userConfigFile=str(gConfigFile), userKeyFile=str(gKeyFile), url=str(gAdminHost) + ':' + str(gAdminPort));
print 'Connecting to Domain ...'
domainCustom()
cd ('oracle.biee.admin')
print 'Connecting to BIDomain MBean ...'
cd ('oracle.biee.admin:type=BIDomain,group=Service')
objs = jarray.array([], java.lang.Object)
strs = jarray.array([], java.lang.String)
invoke('lock', objs, strs)
cd ('..')
cd ('oracle.biee.admin:type=BIDomain.BIInstance.ServerConfiguration,biInstance=coreapplication,group=Service')
objs=jarray.array([newRPDlocation,newRPDpassword],Object)
strs=jarray.array(['java.lang.String', 'java.lang.String'],String)
invoke('uploadRepository', objs, strs)
cd ('..')
cd ('oracle.biee.admin:type=BIDomain,group=Service')
objs = jarray.array([], java.lang.Object)
strs = jarray.array([], java.lang.String)
invoke('commit', objs, strs)
disconnect()
exit()
Thanks
Samit Baghla
A handy link: http://docs.python.org/2/tutorial/errors.html
An example (taken from Middleware Snippets: Automate WebLogic 12.1.2 Deployment):
def createFile(directory_name, file_name, content):
    dedirectory = java.io.File(directory_name)
    defile = java.io.File(directory_name + '/' + file_name)
    writer = None
    try:
        dedirectory.mkdirs()
        defile.createNewFile()
        writer = java.io.FileWriter(defile)
        writer.write(content)
    finally:
        try:
            print 'WRITING FILE ' + file_name
            if writer != None:
                writer.flush()
                writer.close()
        except java.io.IOException, e:
            e.printStackTrace()
-
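A hedged sketch of how the same try/except structure could be applied to the script in the question: take the lock, attempt the upload, and on any failure invoke the BIDomain MBean's rollback operation (which discards the pending changes and releases the lock) instead of commit. The 'rollback' operation name is an assumption to verify against your BI release; the code below shows only the control flow, in plain Python with stand-in callables where the WLST script would call invoke('lock'/'uploadRepository'/'commit'/'rollback', objs, strs).

```python
def locked_update(lock, do_work, commit, rollback):
    # Acquire the domain lock, run the work, then commit; on any
    # error, roll back (releasing the lock) and re-raise so the
    # script still fails visibly instead of exiting "successfully".
    lock()
    try:
        do_work()
    except Exception:
        rollback()
        raise
    commit()

# Simulate a failing upload to show the rollback path:
calls = []
def failing_upload():
    raise RuntimeError('upload failed')

try:
    locked_update(lambda: calls.append('lock'),
                  failing_upload,
                  lambda: calls.append('commit'),
                  lambda: calls.append('rollback'))
except RuntimeError:
    pass

print(calls)  # → ['lock', 'rollback']
```

In the actual WLST script, this shape means wrapping everything between invoke('lock', ...) and invoke('commit', ...) in try/except, invoking 'rollback' in the except branch, and putting disconnect() in a finally block so the session is always closed.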
Hi Experts,
I need a small help from you guys.
I want to know the difference between the requested start, requested end, latest start and latest end deadlines. If possible, please explain with an example.
Points are guaranteed.
thanks
sankar
Hi Sankar,
Every organization generally has set SLAs for every activity that has to be performed.
In SRM, for every work item that is created, we can say there are 4 SLAs:
<b>Requested Start Date:</b> This is the ideal date by which the approver should have picked up the work item.
<b>Requested End Date:</b> This is the ideal date by which the approver should have actually completed the work item.
<b>Latest Start Date:</b> If the approver doesn't start by the requested start date, then he has to pick up the item by the latest start date. If the approver has not picked up the item by this date, you can configure the system to escalate the issue.
<b>Latest End Date:</b> This is the date beyond which the approver should not keep the work item; again, you can begin escalation procedures. In addition, if the work item itself is no longer needed, the workflow may have to be terminated.
Generally, the latest start date and the latest end date provide the slack or buffer time given to the user responsible for the work item.
These deadlines are executed in the following sequence:
Requested Start
Latest Start
Requested End
Latest End
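To make the ordering concrete, here is a small sketch with hypothetical offsets (0, 4, 24 and 48 hours from work item creation; the real values come from the workflow deadline configuration, not from this example):

```python
from datetime import datetime, timedelta

created = datetime(2024, 1, 8, 9, 0)  # work item created Monday 09:00

# Hypothetical SLA offsets in hours, listed in the firing order above.
offsets = [('Requested Start', 0),
           ('Latest Start', 4),
           ('Requested End', 24),
           ('Latest End', 48)]

deadlines = [(name, created + timedelta(hours=hours)) for name, hours in offsets]
for name, when in deadlines:
    print('%-16s %s' % (name, when.strftime('%a %H:%M')))
# Requested Start  Mon 09:00  -> approver should pick the item up
# Latest Start     Mon 13:00  -> must pick it up, else escalate
# Requested End    Tue 09:00  -> should have completed the item
# Latest End       Wed 09:00  -> must not keep the item beyond this
```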
I hope this answer gives you enough clarity.
If the answer was useful then please award the appropriate points.
Thanks and regards,
Murli Rao -
Hi team,
Is there any way to monitor the number of sessions established to a particular URL or website at the application level? We have no access to the firewall or router. Is there any software or script that helps to gather the required web monitoring data? Please help.
Thanks in advance,
Naveen
Hi Naveen,
I think this question could be better answered by the team behind the Network Management Support Community. Please post your question on their discussion board.
Thank you,
Kelsey -
Regarding Dead line monitoring & Background Job
Hi Dear PMs,
Please let me know about Deadline Monitoring (IP30) for maintenance plans. Why is it required? Is it only for scheduling multiple plans at a time, or is there another reason?
What is the meaning of,
1. Interval for Call Objects
2. Rescheduling included
3. Immediate start for all (in the IP30 screen)
Why do we need to run a background job? Is it only to avoid opening and saving the scheduling daily, or is there another reason?
I request you to explain in detail.
Thanks in Advance...
Amol.
In short, as the name suggests, deadline monitoring is used to monitor deadlines so that any due activity can be completed well in time. Deadline monitoring helps you automatically schedule entries that are due within a given period. With it, you can schedule multiple maintenance plans at the same time from a single transaction in Plant Maintenance. In different modules it can be used for different purposes: in MM/WM it can be used for mass-changing the batch status, and it is also used in workflows. In PM, using IP10 you can only schedule maintenance plans individually, so to reduce time delays and increase efficiency, deadline monitoring (IP30) is used.
Using F1 help on each of these fields should be helpful.
Interval for Call Objects specifies the duration for which you want to monitor due entries. For example, entering 1 month here would show all the maintenance plans due for the complete month.
Immediate start for all: You can use this indicator to specify whether the maintenance plans that match the selection criteria are to be scheduled immediately, or whether a list of the selected maintenance plans is to be displayed first.
Rescheduling included: You can use automatic deadline monitoring to schedule a maintenance plan again or for the first time. If you need to reschedule any maintenance plan after having scheduled it, this indicator must be checked.
The following link might help you understand:
http://help.sap.com/saphelp_erp60_sp/helpdata/EN/3c/abb350413911d1893d0000e8323c4f/frameset.htm
Executing it in the background helps because it does not require you to open the IP30 screen and print the document yourself. The document is printed automatically, without you having to worry about it, within the period you specify in your background job settings; this can be monthly, weekly, daily, yearly, etc.
Edited by: Usman Kahoot on Apr 5, 2010 10:34 AM -
Monitoring material dates functionality in PS
Hi Experts
I wish to use this functionality for the following scenario:
Project materials need a long delivery duration, and delivery monitoring is a critical function, since any delay hampers the entire project.
Especially for major project items, the release of the purchase order is not enough, because subsequently there are several activities to be performed by both parties, any delay in which will impact the original delivery.
So, i wish to have the events as shown below:
Z0001 Receipt of acceptance
Z0002 Receipt of Advance Bank Guarantee doc
Z0003 Release of Advance
Z0004 Receipt of Drawings
Z0005 Approval of Drawings
My query is: what is the significance of the reference dates?
What are baseline, plan and actual dates? How can the dates be linked?
I request experts to guide me in this regard.
warm regards
ram
Siva
The dates monitoring functionality, now called progress tracking, is used for exactly this purpose when a PO is to be monitored.
The external network activity will have the required dates based on scheduling.
The dates within the date monitoring functionality have to lie within these dates, meaning all your events (Z0001 etc.) are completed within the required delivery time of the material. Dates monitoring helps to plan the delivery of the PO.
Read the SAP help for more details. -
Linux logfile monitoring does not work after using "privileged datasource"
Hello!
I have noticed strange behaviour on one of my Linux agents (let's call it server_a) regarding logfile monitoring with "Microsoft.Unix.SCXLog.Datasource" and "Microsoft.Unix.SCXLog.Privileged.Datasource".
I successfully tested monitoring of /var/log/messages on server_a with the "Privileged Datasource". This test was done on server_a, and the MP containing this rule was deleted from the management group before the following tests.
I then wanted to test another logfile (let's call it logfile_a) using the normal datasource "Microsoft.Unix.SCXLog.Datasource" on server_a. So I created the usual logfile rule (rule_a) in XML (which I have done countless times before) for monitoring logfile_a. Logfile_a was created by the "Linux Action Account" user, with read rights for everyone. After importing the management pack with the monitoring for logfile_a, I got the following warning alert in the SCOM console managing server_a:
Error while checking the log file "/home/ActionAccountUser/logfile_a" on host "server_a" as user "<SCXUser><UserId>ActionAccountUser</UserId><Elev></Elev></SCXUser>";
An internal error occurred. (The user ID has been changed to preserve the anonymity of our action account.)
To make sure I did not make any mistakes in the XML, I created a new logfile rule (rule_b) monitoring "logfile_b" on "server_a" using the logfile template under the Authoring tab. logfile_b was also created by the "Linux Action Account" user and had read rights for everyone. Unfortunately, this logfile rule produced the same error:
Error while checking the log file "/home/ActionAccountUser/logfile_b" on host "server_a" as user "<SCXUser><UserId>ActionAccountUser</UserId><Elev></Elev></SCXUser>";
An internal error occurred. (The user ID has been changed to preserve the anonymity of our action account.)
Although both rules (rule_a and rule_b) used "Microsoft.Unix.SCXLog.Datasource", which uses the action account for monitoring logfiles, the above error looks to me as if SCOM wants to use the privileged user, which in this case is not necessary, as the action account can read logfile_a and logfile_b without any problems.
So, after a few unsuccessful tries to get both rules to raise an alert, I tried, as a last resort, to use "Microsoft.Unix.SCXLog.Privileged.Datasource" for rule_a. Then suddenly, after importing the updated management pack, I finally received the alert I had desperately waited for this whole time.
Finally, after a lot of text, here are my questions:
Could it be that the initial test with the privileged log datasource somehow screwed up the agent on server_a so that it could not monitor logfiles with the standard log datasource? Or does anyone have an idea what went wrong here?
As I said, both logfiles could be accessed and changed by the normal action account without any problems, so privileged rights are not needed. I even restarted the SCOM agent in case something hung.
I hope I could make the problem clear to you. If not, don't hesitate to ask any questions.
Thank you and kind regards,
Patrick
Hello!
After all that text, I forgot the most essential information...
We are currently using OpsMgr 2012 SP1 UR4; the monitored server (server_a) has agent version 1.4.1-292 installed.
Thanks for the explanation of how the log provider works. I tried to execute the logfile reader just to see if there were any errors, and everything looks fine to me:
ActionAccount @server_a:/opt/microsoft/scx/bin> ./scxlogfilereader -v
Version: 1.4.1-292 (Labeled_Build - 20130923L)
Here are the latest entry in the scx.log file:
* Microsoft System Center Cross Platform Extensions (SCX)
* Build number: 1.4.1-292 Labeled_Build
* Process id: 23186
* Process started: 2014-03-31T08:29:09,136Z
* Log format: <date> <severity> [<code module>:<process id>:<thread id>] <message>
2014-03-31T08:29:09,138Z Warning [scx.logfilereader.ReadLogFile:23186:140522274359072] scxlogfilereader - Unexpected exception: Could not find persisted data: Failed to access filesystem item /var/opt/microsoft/scx/lib/state/ActionAccount/LogFileProvider__ActionAccount_shome_sActionAccount_slogfilewithoutsudo.txtEDST02
2014-03-31T08:29:09,138Z Warning [scx.core.providers.logfileprovider:5209:140101980321536] LogFileProvider InvokeLogFileReader - Exception: Internal Error: Unexpected return code running '/opt/microsoft/scx/bin/scxlogfilereader -p': 4
2014-03-31T08:29:09,138Z Warning [scx.core.providers.logfileprovider:5209:140101980321536] BaseProvider::InvokeMethod() - Internal Error: Unexpected return code running '/opt/microsoft/scx/bin/scxlogfilereader -p': 4 - [/home/serviceb/ScxCore_URSP1_SUSE_110_x64/source/code/providers/logfile_provider/logfileprovider.cpp:442]
* Microsoft System Center Cross Platform Extensions (SCX)
* Build number: 1.4.1-292 Labeled_Build
* Process id: 23284
* Process started: 2014-03-31T08:30:06,139Z
* Log format: <date> <severity> [<code module>:<process id>:<thread id>] <message>
2014-03-31T08:30:06,140Z Warning [scx.logfilereader.ReadLogFile:23284:140016517941024] scxlogfilereader - Unexpected exception: Could not find persisted data: Failed to access filesystem item /var/opt/microsoft/scx/lib/state/ActionAccount/LogFileProvider__ActionAccount_shome_sActionAccount_stest.txtEDST02
2014-03-31T08:30:06,142Z Warning [scx.core.providers.logfileprovider:5209:140101980321536] LogFileProvider InvokeLogFileReader - Exception: Internal Error: Unexpected return code running '/opt/microsoft/scx/bin/scxlogfilereader -p': 4
2014-03-31T08:30:06,143Z Warning [scx.core.providers.logfileprovider:5209:140101980321536] BaseProvider::InvokeMethod() - Internal Error: Unexpected return code running '/opt/microsoft/scx/bin/scxlogfilereader -p': 4 - [/home/serviceb/ScxCore_URSP1_SUSE_110_x64/source/code/providers/logfile_provider/logfileprovider.cpp:442]
Strangely, I could not access the action account user's directory under /var/opt/microsoft/scx/log as the "ActionAccount" user. Is it OK for the directory to have the following rights: drwx------ 2 1001 users? Instead of "1001" it should say "ActionAccount", right?
This could be a bit far-fetched, but perhaps the logfile provider can't access logfiles as the "ActionAccount" on this server because it needs to write to the scx.log file. But as the "ActionAccount" can't access that file, the logfile provider throws an error. And as the "Privileged Account" the rule works flawlessly, since the logfile provider running in root context can access everything.
I don't know if that makes sense, but right now it sounds logical to me. -
Remote Desktop windows 7 ultimate can't use two monitors
I have two computers in another office that remote into two other computers here at my site. All 4 computers run Windows 7 Ultimate. Until about a week to 10 days ago, the two remote computers could use dual monitors; they can still connect to the computers at my site, but can now only use one monitor. The "use all displays" box is checked in the Remote Desktop client, and I have two SonicWall firewalls with a site-to-site VPN connection between the offices that they use to remote in.
I can't get the dual monitors to work; any help would be appreciated.
Hi,
Here is a good blog about using multiple monitors in Windows 7 Ultimate and Windows Server 2008 R2. Please take a look and check whether there are any restrictive policy settings regarding the number of monitors, such as "Limit maximum number of monitors per session":
Using Multiple Monitors in Remote Desktop Session
Best regards
Michael Shao
TechNet Community Support