Best practice for data persistence for monitoring without BAM

Greetings,
We are modeling a business process in a large organization using BPEL Process Manager. The key point is that business people need to monitor the execution of the business process at several key stages, and they also need to get report information about the process.
To model this in our project, we decided to create a new Oracle database schema that will hold the information about the business process execution (we decided that because, for this initial offering, the customer is not buying BAM). In this context, the BPEL process will send this key information to the repository so business people can view real-time information about the process execution as well as historical information in the form of reports.
The important issue here is whether there is a best practice for sending this information to the database schema. Could it be done just by using database adapters? Or maybe by using sensors that send the data over topic connections?
Any help will be highly appreciated.
Thanks in advance.

Hi, yes, this suggestion is good. First configure the sensors (activity or variable), then configure the sensor action as a JMS topic, which will in turn insert the data into a DB. Alternatively, when you configure the sensor action as a database, the data goes to the Oracle reports schema. Is there any chance of altering that, i.e. changing the configuration files so that the data does not go to that reports schema but to a custom schema created by a user? I don't know if it can be done. My problem is that when I configure the JMS topic for sensor actions, I see blank data coming through; for some reason or another the data is not getting posted. I have used an ESB with a routing service based on the schema I am monitoring. Can anyone help?
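For what it's worth, here is a minimal sketch of the JMS variant described above: a standalone subscriber that receives the sensor payload from the topic and inserts it into a custom schema. All JNDI names (jms/SensorTCF, jms/bpelSensorTopic, jdbc/AuditDS) and the PROCESS_AUDIT table are assumptions for illustration, not Oracle-defined names; in practice you might deploy this as an MDB rather than a standalone client.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import javax.jms.*;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class SensorTopicToDb implements MessageListener {
        private final DataSource ds;

        public SensorTopicToDb(DataSource ds) { this.ds = ds; }

        // Each BPEL sensor action publishes its payload as an XML text message.
        public void onMessage(Message message) {
            try {
                String xml = ((TextMessage) message).getText();
                try (Connection c = ds.getConnection();
                     PreparedStatement ps = c.prepareStatement(
                         "INSERT INTO PROCESS_AUDIT (EVENT_XML, CREATED_AT) VALUES (?, SYSTIMESTAMP)")) {
                    ps.setString(1, xml);
                    ps.executeUpdate();
                }
            } catch (Exception e) {
                throw new RuntimeException("Failed to persist sensor event", e);
            }
        }

        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            TopicConnectionFactory cf = (TopicConnectionFactory) ctx.lookup("jms/SensorTCF");
            DataSource ds = (DataSource) ctx.lookup("jdbc/AuditDS");
            TopicConnection conn = cf.createTopicConnection();
            TopicSession session = conn.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = (Topic) ctx.lookup("jms/bpelSensorTopic");
            session.createSubscriber(topic).setMessageListener(new SensorTopicToDb(ds));
            conn.start();                  // begin message delivery
            Thread.currentThread().join(); // keep the subscriber alive
        }
    }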

Similar Messages

  • Best Practice for monitoring database targets configured for Data Guard

    We are in the process of migrating our DB targets to 12c Cloud Control. 
    In our current 10g environment the Primary Targets are monitored and administered by OEM GC A, and the Standby Targets are monitored by OEM GC B.  Originally, I believe this was because of proximity and network speed, and over time it evolved to a Primary/Standby separation.  One of the greatest challenges in this configuration is keeping OEM jobs in sync on both sides (in case of switchover/failover).
    For our new OEM CC environment we are setting up CC A and CC B. However, I would like to determine if it would be smarter to monitor all DB targets (Primary and Standby) from the same CC console; in other words, monitor and administer DB Primary and Standby from the same OEM CC console. I am trying to determine the best practice. I am not sure whether administering a switchover from Primary to Standby in Cloud Control requires that both targets be monitored in the same environment or not.
    I am interested in feedback.   I am also interested in finding good reference materials (I have been looking at Oracle documentation and other documents online).   Thanks for your input and thoughts.  I am deliberately trying to keep this as concise as possible.

    To be clear about the earlier comment: OMS is a tool, and you are not required to monitor your primary and standby with the same one.
    The reason you would want the same OMS to monitor both the primary and the standby is that the Data Guard administration screen will then show both targets. You will also have the option of doing switchovers and failovers, as well as converting the primary or standby. One of the options is also to move all the jobs scheduled against the primary over to the standby during a switchover or failover.
    There is no document that states you need to have all targets on one OMS, but it is the best method given the reason for having OMS at all: OMS is a tool for keeping all targets in a central repository. If you start having different OMS servers and repositories, you will need to log into each OMS separately to administer its targets.

  • Best Practice for Subcontracting without father

    Dear All,
    Can you throw some light on a business case where we send some components to a subcontractor but get a service in return?
    So when the work is finished, we do a service entry instead of a goods receipt for the parent material.
    What is the best practice to cater to this business scenario?
    Thanks in advance!

    You can try this:
    Use account assignment category K and item category L. Don't enter a material code; just enter the short text as the description of the service to be performed, enter the material group, give the cost center, and enter the price.
    At the item level, define the components.
    Normally, item category L and account assignment category K are not set together, but you can set this combination in OMG0.
    I haven't tried this cycle, but if it works, it may meet your requirements in fewer steps.

  • Best practice for monitoring MXE3500 v3.2.1

    Hi all
    We have recently upgraded to v3.2.1 on our MXE3500, all has gone well but I am looking for the best way to monitor the system health of the device for our support teams.
    The Show and Share and DMM servers come with SNMP monitoring, but I can't see the equivalent for the MXE.
    Due to the locked-down nature of the Windows VM in the 3.2.1 software, I do not want to install our standard OS monitoring software, BMC Patrol, as Cisco have advised not to install any additional software.
    Has anyone got any ideas on this?
    Adam

    Hello,
    Of course, making it too sensitive might cause a failover when just a few packets get lost, but remember the whole purpose of this is to give your network as little downtime as possible.
    If you tune these parameters, failover will simply be triggered on a different time basis.
    The following is taken from a Cisco document (if you tune the SLA process as it states, 3 packets will be sent every 10 seconds, so all 3 need to fail for the SLA to trip). This Cisco configuration example looks good, but some network engineers would rather use a tighter timeline than that.
    sla monitor 123
    type echo protocol ipIcmpEcho 10.0.0.1 interface outside
    num-packets 3
    frequency 10
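    For completeness: the probe above only drives failover once it is scheduled and tracked. A sketch of the usual companion ASA commands follows; the track number, route metric, and gateway address are examples, not part of the quoted document:
    sla monitor schedule 123 life forever start-time now
    track 1 rtr 123 reachability
    route outside 0.0.0.0 0.0.0.0 10.0.0.1 1 track 1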
    Regards,
    Remember to rate all of the helpful posts. (If you need assistance knowing how to rate a post, just let me know.)

  • Best Practice for Monitoring, on VPS 2012 Server Standard. Extended Events or Profiler?

    Hi,
    What tools do you use to determine whether you should tweak SQL Server configuration and optimize code, or simply bump up your virtual resources? Can someone share a bag of Extended Events to monitor at the VPS level?
    I'm a reasonably decent SQL developer but never advanced far with DBA efforts, especially once the mainstream went virtual. It seemed to me that all of SQL Server's flexibility with managing disk and memory went out the window once everything became 'shared', NATed, and Plesked, so I basically dropped out of the conversation and built stuff with SSIS and T-SQL.
    Now I'm charged with assessing a bottleneck on a VPS Windows 2012 Standard running SQL 2012 Express. I've read that running Profiler traces is deprecated, and I've looked a bit at the server's Extended Events in the hosted environment; I have not run anything yet.
    My question: does it make sense to think in terms of 'levels' in deciding what to monitor? I consider SQL Server as one level, then the Windows server, and finally the virtual level. What I'm getting at is: sure, I can monitor SQL Server with a profiling tool, but it won't know SQL is on a VPS. So do I miss something?
    There used to be a day when we had a dedicated physical box for SQL Server; we ran traces using Profiler and got good clues on how to improve performance. In today's VPS world we can use sliders to increase virtual memory and disk space. Which Extended Events at the VPS level would tell me whether SQL Server is struggling with the limited 1 GB of virtual memory? I realize this is not a direct question, but hopefully someone will point this developer in the right direction.
    John

    Hi John,
    From the SQL point of view it doesn't really matter whether the box is physical or virtual. So if you feel there are performance issues with SQL and you are well versed in SQL Profiler troubleshooting, go ahead with that. If SQL performance looks good, then you know to look elsewhere.
    I would first prefer to find out whether it is really a problem with SQL before troubleshooting the SQL side, and I use perfmon counters to do that.
    I would also look at the SQL error log to see if there are any obvious errors.
    I haven't used Extended Events much, so I'll leave those for others to comment on. :)
    Regards, Ashwin Menon - My Blog: http://sqllearnings.com
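    In the same spirit as the perfmon suggestion, here is a minimal sketch of polling SQL Server's own memory DMVs (rather than Extended Events) from Java over JDBC. The connection string is a placeholder, the Microsoft JDBC driver is assumed to be on the classpath, and querying sys.dm_os_sys_memory requires VIEW SERVER STATE permission.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class MemoryPressureProbe {
        public static void main(String[] args) throws Exception {
            // Placeholder connection string; adjust host, credentials, etc.
            String url = "jdbc:sqlserver://vps-host:1433;databaseName=master;"
                       + "user=monitor;password=changeme";
            try (Connection c = DriverManager.getConnection(url);
                 Statement st = c.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT total_physical_memory_kb, available_physical_memory_kb, "
                   + "system_memory_state_desc FROM sys.dm_os_sys_memory")) {
                while (rs.next()) {
                    // system_memory_state_desc reports e.g. 'Available physical memory is low'
                    System.out.printf("total=%d KB available=%d KB state=%s%n",
                        rs.getLong(1), rs.getLong(2), rs.getString(3));
                }
            }
        }
    }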

  • Best Practice for monitoring RAC 11gR2

    Hi,
    I have RAC 11gR2+ASM on two nodes.
    I would like your advice on the most critical things I should monitor regarding the RAC components.
    Thanks

    Hi,
    Here is some basic monitoring:
    1) Check that the interconnect switch (private network) is working properly.
    2) Check which instances are running on which nodes (see the sketch after this reply).
    3) Check whether ASM, the listeners, nodeapps, GSD, VIP, etc. are running.
    4) Check that each instance is carrying its planned load (balanced?).
    5) Check that shared storage access is equal across nodes.
    6) Watch interconnect load and latency.
    7) Watch for high CPU usage: are the Oracle processes getting enough resources?
    Thanks
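    Following up on point 2 of the list above, a minimal sketch of checking instance placement over JDBC; the SCAN host, service name, and credentials are placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class RacInstanceCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder URL: SCAN listener plus the database service name.
            String url = "jdbc:oracle:thin:@//scan-host:1521/RACDB";
            try (Connection c = DriverManager.getConnection(url, "monitor", "changeme");
                 Statement st = c.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT inst_id, instance_name, host_name, status FROM gv$instance")) {
                while (rs.next()) {
                    // One row per running instance across the cluster.
                    System.out.printf("inst %d (%s) on %s: %s%n",
                        rs.getInt("inst_id"), rs.getString("instance_name"),
                        rs.getString("host_name"), rs.getString("status"));
                }
            }
        }
    }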

  • Best Practices for ASA 5500 Device Monitoring

    I have looked high and low and am unable to find anything on this topic. I am hoping that somebody here may be able to share some insight into what are considered the best practices for monitoring ASAs, specifically the 5510 with the Sec+ license.
    My current monitoring application keeps reporting issues with outbound interface buffers being too high, but there are no performance issues, and I believe the thresholds are simply set absurdly low.
    Thank you in advance for any assistance.

    Hi James,
    You probably won't be able to find any all-encompassing documentation for these types of best practices that cover all scenarios. The better method would be to define exactly what items you'd like to monitor and we can provide some guidance on how to best get that working for you.
    -Mike
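    If SNMP polling ends up on your list of items to monitor, here is a minimal Java sketch using the open-source SNMP4J library. The address and community string are placeholders, and the OID shown (cpmCPUTotal5minRev from CISCO-PROCESS-MIB) should be verified against the MIBs your ASA image actually supports.

    import org.snmp4j.CommunityTarget;
    import org.snmp4j.PDU;
    import org.snmp4j.Snmp;
    import org.snmp4j.event.ResponseEvent;
    import org.snmp4j.mp.SnmpConstants;
    import org.snmp4j.smi.GenericAddress;
    import org.snmp4j.smi.OID;
    import org.snmp4j.smi.OctetString;
    import org.snmp4j.smi.VariableBinding;
    import org.snmp4j.transport.DefaultUdpTransportMapping;

    public class AsaCpuPoll {
        public static void main(String[] args) throws Exception {
            Snmp snmp = new Snmp(new DefaultUdpTransportMapping());
            snmp.listen();
            CommunityTarget target = new CommunityTarget();
            target.setAddress(GenericAddress.parse("udp:192.0.2.1/161")); // example ASA address
            target.setCommunity(new OctetString("public"));               // example community
            target.setVersion(SnmpConstants.version2c);
            target.setRetries(2);
            target.setTimeout(1500);
            PDU pdu = new PDU();
            pdu.setType(PDU.GET);
            // cpmCPUTotal5minRev.1 - verify this OID against your device's MIBs.
            pdu.add(new VariableBinding(new OID("1.3.6.1.4.1.9.9.109.1.1.1.1.8.1")));
            ResponseEvent resp = snmp.get(pdu, target);
            if (resp.getResponse() != null) {
                System.out.println("5-min CPU: " + resp.getResponse().get(0).getVariable());
            }
            snmp.close();
        }
    }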

  • Best Practices for Accessing the Configuration data Modelled as XML File in

    Hi,
    I have referred to a couple of blog posts/forum threads on how to model and access configuration data as XML inside OSB.
    One of the easiest ways is described here:
    Re: OSB: What is best practice for reading configuration information
    Another could be:
    Uploading the XML data as an .xq file (creating an .xq file and copy-pasting all the configuration as XML).
    I need expert answers to the following.
    1] I have an .xsd file representing the configuration data. The structure of the XSD is:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyvalue</Config>
    </FrameworkConfig>
    2] As my project moves from one environment to another, the property value changes according to the environment.
    For Dev:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyvalue_Dev</Config>
    </FrameworkConfig>
    For Stage:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyvalue_Stage</Config>
    </FrameworkConfig>
    3] Let's say I create the following folder structure to store the configuration files specific to the dev/stage/prod instances:
    OSB Project Folder
    |
    |---Dev
    |
    |--Dev_Config_file.xml
    |
    |---Stage
    |
    |--Stage_Config_file.xml
    |
    |---Prod
    |
    |-Prod_Config_file.xml
    4] I need a way to load these property files as an XML element/variable inside the OSB message flow. I can't use the XPath function fn:doc("URL") because I don't know the exact path of the XML on the deployed server.
    5] I also need to look up/model a value specifying the current server type (Dev/Stage/Prod) on which the OSB message flow is running, i.e. some construct that acts as a global configuration and is accessible inside the OSB message flow. If I get the value Dev for the global variable, I will load the XML config file under the Dev directory at runtime, containing the key/value pairs for the Dev environment.
    6] The thread Re: OSB: What is best practice for reading configuration information
    suggests designing a web application that serves the XML file over HTTP and reading the contents into a variable (which in turn can be used in the OSB message flow). Can we address this problem without creating an extra project and adding the dependency? I read about the configuration-file approach too, but the sample configuration file doesn't show an entry for an .xml file as a resource.
    Hope I am clear. I really appreciate your comments and suggestions.
    Sushil
    Edited by: Sushil Deshpande on Jan 24, 2011 10:56 AM

    If you can enforce some sort of naming convention for the transport endpoint of this proxy service across the environments, where the environment name is part of the endpoint, you may be able to retrieve it from $inbound in the message pipeline.
    E.g. http://osb_host/service/prod/service1 ==> prod and http://osb_host/service/stage/service1 ==> stage. Then $inbound/ctx:transport/ctx:uri will give you /service/prod/service1 or /service/stage/service1, and by applying appropriate XPath functions you can extract the environment name.
    Check this link for details on $inbound/ctx:transport: http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/userguide/context.html#wp1080822
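    To make that last step concrete, here is a small sketch of the extraction logic only. Inside OSB itself this would be an XPath/XQuery expression over $inbound/ctx:transport/ctx:uri; the /service/<env>/<name> naming convention is the assumption taken from the example above.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class EnvFromUri {
        // Assumes endpoints follow /service/<env>/<serviceName>.
        private static final Pattern ENV = Pattern.compile("^/service/([^/]+)/");

        static String environmentOf(String uri) {
            Matcher m = ENV.matcher(uri);
            return m.find() ? m.group(1) : "unknown";
        }

        public static void main(String[] args) {
            System.out.println(environmentOf("/service/prod/service1"));  // prod
            System.out.println(environmentOf("/service/stage/service1")); // stage
        }
    }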

  • Best Practices for NCS/PI Server and Application Monitoring question

    Hello,
    I am deploying a virtual instance of Cisco Prime Infrastructure 1.2 (1.2.1.012) on an ESX infrastructure, in an enterprise environment. I have questions about the best practices for monitoring this appliance. I am looking to monitor application failures (services down, DB issues) and "hardware" (I understand this is a virtual machine, but statistics on the filesystem and CPU/memory are good to have).
    Firstly, I have enabled the snmp-server via the CLI and set the SNMP trap host destination. I have created a notification receiver for the SNMP traps inside the NCS GUI and enabled the "System" alarm type. This type includes alarms like NCS_DOWN and "PI database is down". I am trying to understand the difference between enabling SNMP-SERVER HOST via the CLI and setting the notification destination in the GUI. Also, how can I generate an NCS_DOWN alarm in my lab? Stopping NCS does not generate any alarms, and I have not been able to find much information on how to generate one as a test.
    Secondly, how and which processes should I be monitoring from the management station? I cannot easily identify the main NCS processes in the output of ps -ef when logged into the shell as root.
    Thanks guys!

    Amihan_Zerrudo wrote:
    1) What is the cost of having the scope in a <jsp:useBean> tag set to 'session'? I am aware that there is a list of scopes like page, application, etc., and that if I use 'session' my variable will live for as long as that session is alive. (Did I get this right?)
    You should look to the functional requirements rather than the costs. If the bean needs to be session scoped (e.g. to maintain the logged-in user), then do so. If it just needs to be request scoped (e.g. single-page form data), then keep it request scoped.
    2) If the JSP page where I use that <useBean> is to be accessed hundreds of times a day, will my server resources cope? Right now I am using the Sun GlassFish server.
    It will certainly eat resources. Just supply enough CPU speed and memory to the server. You cannot expect a webserver running on a Pentium 500 MHz with 256 MB of memory to flawlessly serve 100 simultaneous users in the same second, but you may expect it to serve 100 users per 24 hours.
    3) Can you suggest best practices for memory management given the architecture I described above?
    Just write code so that it doesn't unnecessarily eat memory, and only allocate memory when your application needs to. You should let the hardware depend on the application requirements rather than letting the application depend on the hardware specs.
    4) Also, I have implemented connection pooling in my architecture, but my application is to be used by thousands of clients every day. Can the Sun GlassFish server take care of that, or will I have to purchase a powerful server?
    GlassFish is just application server software; it is not server hardware. Your concerns are rather hardware related.
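    To make the scope distinction concrete, here is a small servlet-level sketch of what <jsp:useBean scope="..."> effectively does under the hood; the class and attribute names are illustrative.

    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class ScopeDemoServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            // Session scope: survives across requests for one user
            // (e.g. the logged-in user), like <jsp:useBean scope="session">.
            if (req.getSession().getAttribute("user") == null) {
                req.getSession().setAttribute("user", "someUser");
            }
            // Request scope: discarded once this request is rendered
            // (e.g. single-page form data), like <jsp:useBean scope="request">.
            req.setAttribute("formData", "visible only during this request");
        }
    }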

  • Basic Strategy / Best Practices for System Monitoring with Solution Manager

    I am very new to SAP and the Basis group at my company. I will be working on a project to identify best practices for system- and service-level monitoring using Solution Manager. I have read a good amount about SAP Solution Manager and the concept of monitoring, but I need to begin mapping out a monitoring strategy.
    We currently use the RZ20 transaction and basic CCMS monitors, watching for update errors, availability, short dumps, etc. What else should be monitored in order to proactively find possible issues? Are there any best practices you have found when implementing monitoring for new solutions added to the SAP landscape? What are the common things we would want to monitor across, say, ERP, CRM, SRM, etc.?
    Thanks in advance for any comments or suggestions!

    Hi Mike,
    Did you try the following link ?
    If not, it may be useful to some extent:
    http://service.sap.com/bestpractices
    ---> Cross-Industry Packages ---> Best Practices for Solution Management
    You have quite a few documents there - those on BPM may also cover Solution Monitoring aspects.
    Best regards,
    Srini
    Edited by: Srinivasan Radhakrishnan on Jul 7, 2008 7:02 PM

  • Best practice for external but secure access to internal data?

    We need external customers/vendors/partners to access some of our company data (view/add/edit). It's not as easy as segmenting those databases/tables/records out from the other existing ones (and putting separate database(s) in the DMZ where our server is). Our current solution is to have a 1433 hole from the web server into our database server. The user credentials are not in any sort of web.config but rather compiled into our DLLs, and that SQL login has read/write access to a very limited number of databases.
    Our security group says this is still not secure, but how else are we to do it? Even with a web service, there still has to be a hole somewhere. Is there any standard best practice for this?
    Thanks.

    Security is mainly about mitigation rather than being 100% secure; "we have unknown unknowns". The component needs to talk to SQL Server. You could continue to use HTTP to talk to SQL Server, perhaps even get SOAP transactions working, but personally I'd have more worries about using such a less-trodden path, since that is exactly where more security problems get discovered. I don't know your specific design issues, so there might be even more ways to mitigate the risk, but in general you're using a DMZ as a decent way to mitigate it. I would recommend asking your security team what they'd deem acceptable.
    http://pauliom.wordpress.com

  • Best practice for upgrading task definition without deleting task instances

    What is the best practice for upgrading a task definition in a production system without deleting or terminating task instances?
    If I try to update a task definition with task instances running, I get the following error:
    Task definition 'My Task - Add User' may not be modified while there are active task instances
    Is there a best practice for handling this? I tried to force an update through the console, but that didn't work. I tried editing the task from the debug page and got the same error.

    1) Rename the original task definition.
    2) Upload the new task definition with the original name.
    3) Later, after all the running tasks have timed out, delete the old definition.
    E.g., if your task definition is "myWorkflow":
    1) Rename "myWorkflow" to "myWorkflow-old-2009-07-28"
    2) Upload the new task definition as "myWorkflow".
    Existing tasks will stay linked to the original (renamed) workflow definition.
    New tasks will use the new definition.
    As the previous poster notes, depending on the changes you are making, letting the old task definitions stay active could have bad side-effects and might be better avoided.

  • Best Practice for Using Static Data in PDPs or Project Plan

    Hi There,
    I want to make custom reports using PDPs & Project Plan data.
    What is the best practice for using "static/random data" (which is not available in the MS Project 2013 columns) in PDPs and MS Project 2013?
    Should I add that data in a custom field (in MS Project 2013) or build PDPs?
    Thanks,
    EPM Consultant
    Noman Sohail

    Hi Dale,
    I have a project-level custom field "Supervisor Name" that is used for Project Information.
    For the purpose of viewing that project-level custom field's data in project views, I created a task-level custom field "SupName" and used the formula:
    [SupName] = [Supervisor Name]
    That shows the supervisor name in Schedule.aspx.
    ============
    Question: I want that project-level custom field "Supervisor Name" in the My Work views (Tasks.aspx). The field is enabled in Tasks.aspx, but the data is not present (the column is blank). How can I get the data into the My Work views?
    Noman Sohail

  • Best Practice for report output of CRM Notes field data

    My company has a requirement to produce a report with variable output, based upon a keyword search of our CRM request Notes data. Example: the business wants a report returning all service requests where the Notes field contains the word "pay", "payee", or "payment". As part of the report output, the business wants to freely select the output fields that accompany the notes data. Can anyone please advise on SAP's best practice for meeting a report requirement such as this? Is a custom ABAP application built? Does the data get moved to BW for reporting (and how are notes handled)? Is the data moved to a separate system?

    Hi David,
    I would leave your query
    "Am I doing something wrong and did I miss something that would prevent this problem?"
    to the experts/gurus out here on this forum.
    From my end, you can follow
    TOP 10 EXCEL TIPS FOR SUCCESS
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/204c3259-edb2-2b10-4a84-a754c9e1aea8
    Please follow the Xcelsius Best Practices at
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac
    In order to reduce the size of xlf and swf files follow
    http://myxcelsius.com/2009/03/18/reduce-the-size-of-your-xlf-and-swf-files/
    Hope this helps to a certain extent.
    Regards
    Nikhil

  • Best practice for data migration install v1.40 - Error 2732 Directory manag

    Hi
    I'm attempting to install SAP Best Practice for Data migration 1.40 on Win Server 2008 R2 (64 bit).
    Prerequisite error
    Installation program stops with missing file error
    The following file was not found
    ... \migration\InstallationWizard\BusinessObjects Data Services\setup.exe
    The file is necessary for successful installation. Please connect to internet or refer to Quick Guide (available on SAP note 1527151) for information regarding the above file.
    Windows installer log displays
    Error 2732 Directory Manager not initialized
    SAP note 1527151 does not exist or is internal.
    Any help appreciated on what the root cause of the error is, as the file does not exist in that folder in the installation zip file.
    The other prerequisite, .NET 3.5.1, is already met.
    The patch has been available since 20.11.2011, so I presume it is a good installation set.
    Thanks,
    Alan

    Hi Alan,
    There are details on the data migration v1.4 installation on the SAP website and marketplace. The link below should guide you to the right place. It has a PowerPoint presentation and other useful links as well.
    http://help.sap.com/saap/sap_bp/DMS_V140/DMS_US/html/index.htm
    Arun
