Event on Servicing task status: best practice

Hello,
I work on the Oracle Servicing module with service requests. Each service request can have several tasks to do, and each task has a status. I want to send an email on certain status transitions (Open to Closed, for example).
What is the best way to do such a thing?
- using a workflow?
- a trigger?
- or some other way?
(working on Oracle Applications 11i, 11.5.9)
Thanks.
Romeo.

How obnoxious! :-) This functionality is already there.
If you check the Notification checkbox in the task type setup, every update should send a notification to the owner of the task.
- jtf_wf_task_events_pvt.publish_update_task raises the event.
- jtf_task_wf_subscribe_pvt.update_task_notif_subs reads the raised event, checks whether the Notification checkbox is enabled and, if so, sends the notification using jtf_wf_task_util.create_notification (which uses another workflow of item type JTFTASK).
Try it out. This code should give you a good idea of how it works.
Thanks
Nagamohan
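At its core, the transition filter Romeo asks about (send a mail only on certain status changes, e.g. Open to Closed) boils down to comparing the old and new status against a whitelist of transitions, wherever that check ends up living (workflow subscription, trigger, or custom code). A minimal sketch of that logic, outside any Oracle API - the status names and the Transition/shouldNotify identifiers are made up for illustration:

```java
import java.util.Set;

public class Main {
    // A status change, e.g. OPEN -> CLOSED. Records give us equals/hashCode for free.
    record Transition(String from, String to) {}

    // Only these transitions should fire a notification (illustrative values).
    static final Set<Transition> NOTIFY_ON = Set.of(
            new Transition("OPEN", "CLOSED"));

    // True when this particular old->new change is one we want to email about.
    static boolean shouldNotify(String oldStatus, String newStatus) {
        return NOTIFY_ON.contains(new Transition(oldStatus, newStatus));
    }

    public static void main(String[] args) {
        System.out.println(shouldNotify("OPEN", "CLOSED"));   // true
        System.out.println(shouldNotify("OPEN", "WORKING"));  // false
    }
}
```

The point of keeping the whitelist as data rather than hard-coded ifs is that adding a new transition (say, Working to Closed) is a one-line change.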

Similar Messages

  • When to use unattend.xml in task sequence - best practice?

    Hi, I've tried researching this but not found an answer to my specific query.
    We have ConfigMgr 2012 R2 with MDT 2013 although I don't think this is an MDT specific question.
    I'm trying to create a Build and Capture task sequence for our Windows Server 2008 R2 and Server 2012 /2012R2 server builds utilising an UNATTEND.XML file to make some customisations that can be deployed for every build afterwards in a Deployment Task Sequence.
    Specifically the addition of some Windows Features like SNMP and its configuration, and the addition of the Telnet Client. There are other bits like language settings and configuration items, but I'm specifically interested in the Features part for my question.
    In CM 2012R2 you now have the option under the "Apply Operating System" to use a captured image or an original installation source. However they work differently if you specify the use of the same unattended answer file.
    The "image" deployment ignores all of the "add features" sections of the XML file, and the "installation source" deployment loses the SNMP configuration options from the XML file. When you then deploy the captured image using the same unattend.xml again, the one from the "installer" now has all the SNMP features required and the one from the "image" is still missing everything.
    So my question is as follows.
    What is best practice for specifying an unattend.xml file in a task sequence? Is it in the build and capture TS or in the deployment TS?
    or
    Do I need multiple XML files, one for build and capture with some bits in and another for deployment with the rest in?
    or
    Should I be doing something else?
    Although this is specifically asking about Server O/S we will be using the same methodology for Windows 7 deployment.

    In this case DISM is only used to add the actual features... for configuration you could use a simple script that runs afterwards. Sample registry file:
    SAMPLE REG FILE - HKLM-SNMP.reg
    Windows Registry Editor Version 5.00
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\SNMP\Parameters]
    "NameResolutionRetries"=dword:00000010
    "EnableAuthenticationTraps"=dword:00000001
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\SNMP\Parameters\PermittedManagers]
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\SNMP\Parameters\RFC1156Agent]
    "sysServices"=dword:0000004f
    "sysLocation"=""
    "sysContact"=""
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\SNMP\Parameters\TrapConfiguration]
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\SNMP\Parameters\TrapConfiguration\public]
    "1"="127.0.0.1"
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\SNMP\Parameters\ValidCommunities]
    "public"=dword:00000004
    Sample batch file:
    SAMPLE SCRIPT FILE - ConfigureSNMPService.bat
    @ECHO OFF
    net stop "SNMP Service"
    regedit /s HKLM-SNMP.reg
    net start "SNMP Service"
    Also some settings for SNMP can be controlled through group policy:
    http://serverfault.com/questions/285762/group-policy-for-multiple-snmp-permitted-managers

  • Authorizations for tasks (R_UC_TASK) / Best Practice SEM-BCS authorization

    Dear Experts,
    I am quite new to authorizations and in particular to SEM-BCS authorization. So I would be happy if you could help me with the following requirement:
    We have to set up an authorization concept for SEM-BCS. Among other things, we want to set up authorizations for consolidation tasks using authorization object R_UC_TASK. With this authorization object, certain tasks can be restricted to certain characteristic values – e.g. to a certain consolidation group or a certain consolidation unit. We have defined one role per set of consolidation tasks. These roles are not yet restricted to any characteristic value. We have, for instance, a role "regional controller" who is allowed to perform certain BCS tasks on a regional level (consolidation unit level). This would mean that we would have to create the role "regional controller" for every consolidation unit – see the example below:
    Role 1: Regional Controller – Cons. Unit 1000
    Role 2: Regional Controller – Cons. Unit 1100
    Role 3: Regional Controller – Cons. Unit 1200
    Role n: Regional Controller – Cons. Unit n
    We have more than 400 consolidation units, so this would require a lot of effort. Is there instead a possibility of creating one role based on authorization object R_UC_TASK which just defines which activities can be performed (without restricting access to a certain consolidation unit), and using a second role which defines the consolidation unit access? – see the example below:
    A
    Role: Regional Controller
    Role: Cons Unit 1000
    B
    Role: Regional Controller
    Role: Cons Unit 1100
    C
    Role: Regional Controller
    Role: Cons Unit 1200
    In this case we would only have to maintain one role "Regional Controller", and we would only have to assign the restriction for the consolidation unit. How could this be realized? Or do you have any other ideas to solve this requirement in a simple way?
    Moreover I would be happy if you could tell me where I could find best practice scenarios for SEM-BCS authorizations.
    Thanks a lot in advance!
    Best regards
    Marco

    Hello Marco,
    You can enter a master role on the Description tab of a role. All fields populated via program PFCG_ORGFIELD_CREATE can be maintained in the role; all other fields will be taken from the master role. So you only need to populate the field for the consolidation unit with the program.
    Good luck
    Harry

  • NSM Event Agent for AD location - Best practice

    Hello,
    We are currently designing our NSM 3.1 for AD implementation and would like some guidance with regard to installing the NSM Event Agent. We have come up with two options:
    The first option is to install the NSM Event Agent on a Domain Controller where new user accounts are provisioned.
    The second option is to install the NSM Event Agent on a server with the other NSM components.
    The argument for option 1 is that NSM will be notified as soon as an account is created.
    The argument for option 2 is that MS best practice is that no other software should be installed on a DC and that the NSM Event Agent will perform a network request to talk to the nearest domain controller to obtain a list of changes since it last connected.
    Is there any preferred option, or does it not matter?
    Regards,
    Jonathan

    Jonathan,
    Unlike eDirectory event monitoring, Active Directory event monitoring is
    accomplished with a polling mechanism. Therefore putting your Event
    Monitor on the domain controller will not significantly increase
    performance. As long as the Event Monitor is in a site with a domain
    controller, it should pick up events as quickly as it can.
    For further reading on AD sites and domain/forest topology we recommend
    reviewing http://technet.microsoft.com/en-us/l.../cc755294.aspx.
    Remember that for AD, NSM requires only one Event Monitor per domain
    (and in fact you'll only be able to authorize one Event Monitor per
    domain through the NSM Admin client.) However, deploying a second Event
    Monitor as a backup may be helpful. When the AD Event Monitor is
    installed and configured for the first time, it first has to build a
    locally-cached replica of the domain it resides in. In a large domain
    this can take a long time, so having a second EM already running, which
    can be authorized immediately if the primary EM goes down, will ensure
    that you catch up with events in AD more quickly.
    -- NFMS Support Team
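    The poll-based model described above can be sketched generically: the monitor keeps a cursor (in AD terms, something like the highest update sequence number it has already seen) and each poll returns only changes newer than that cursor, which is why the Event Monitor can sit anywhere with good connectivity to a domain controller. An illustrative sketch only - Event, ChangePoller and the usn field are assumed names, not NSM code:

```java
import java.util.ArrayList;
import java.util.List;

public class Main {
    // Hypothetical directory change: a sequence number plus a description.
    record Event(long usn, String description) {}

    // Polls a source for events newer than the last-seen cursor,
    // loosely mirroring how an AD poller tracks a USN-style watermark.
    static class ChangePoller {
        private long cursor = 0;

        List<Event> poll(List<Event> source) {
            List<Event> fresh = new ArrayList<>();
            for (Event e : source) {
                if (e.usn() > cursor) {
                    fresh.add(e);
                    cursor = Math.max(cursor, e.usn());
                }
            }
            return fresh;
        }
    }

    public static void main(String[] args) {
        List<Event> directory = new ArrayList<>(List.of(
                new Event(1, "user created"), new Event(2, "group changed")));
        ChangePoller poller = new ChangePoller();
        System.out.println(poller.poll(directory).size()); // 2: both are new
        directory.add(new Event(3, "user deleted"));
        System.out.println(poller.poll(directory).size()); // 1: only the newest event
    }
}
```

Because the cursor survives between polls, a freshly started backup poller has an empty cursor and must first catch up - which matches the advice above about keeping a second Event Monitor warm.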

  • Scheduled Tasks - Administrator Best Practices

    Hi all,
    I've gotten assistance this week with a couple of scripts and with scheduling them as tasks. I actually have well over a dozen running on our Exchange server under a special user with a complex password. This user is not used for logging into any machine, but it is a member of the 'Administrators' group and can be used for tasks requiring elevated privileges.
    What I am interested in learning is what the best practice is for running scheduled tasks. We have several, such as querying AD for members of select OUs or users who meet certain criteria. We also have automated emails regarding certain mailbox metrics,
    etc. You get the idea.
    Despite the complex credentials, this account is still discoverable and could be used in nefarious ways. Is it possible to run tasks on Server 2008 R2 (possibly 2012) without administrator credentials? Are there certain restrictions on the tasks (e.g. is a scheduled reboot allowed for a standard account, but not querying Active Directory)?
    I have also noticed a checkbox labelled 'Run with highest privileges' and do not fully understand what it means.
    When I try to run the task as a regular user (no remote permissions), it says 'Logon failure: the user has not been granted the requested logon type at this computer.'
    In short, can I safely remove our special user account from 'Administrators' and place into regular users without breaking all of our tasks?

    Hi KSI-IT,
    Firstly, based on my research, if you want to run a scheduled task under a user account, that account must have the corresponding permissions; in other words, you should also be able to run the script manually under that account.
    1.  For the error you posted 'Logon failure: the user has not been granted the requested logon type as this computer', please make sure the task account has "logon as a batch job" privilege.
    To add the privilege of the account, please go to
    [Local Security policy\Local Policies\User Rights Assignment]
    -Log on as a batch job.
    Add the domain\username account and any others you may need and retry.
    2.  The setting 'Run with highest privileges' means that the task runs with the highest privileges available to that user. This is different from the context menu's 'Run as administrator': it generates the highest-privilege token for that specific user, but it cannot run as a different user. For a standard user with no elevated permissions, 'Run with highest privileges' does not do anything.
    Reference: What effect does "run with highest privileges" in Task Scheduler have on PowerShell scripts?
    I hope this helps.

  • Best Practice - HCM service

    Hi,
    The latest ESS package was initially uploaded into the portal. This package consists mostly of Web Dynpro iViews.
    Later it was decided to use the Best Practices package, so it was uploaded too. Now all the services provided in the Best Practices package are available in one common folder, except for ESS. The iViews available in this Best Practices package are mostly transaction iViews.
    My query is: why are the HCM (ESS) services not there under the Best Practices folder? Is it due to the already-uploaded ESS package? What should I do to get the HCM services of the Best Practices package?
    It's urgent, please help. All useful answers will be rewarded.

    Thanks Bharathwaj for the reply. Here is the link.
    https://websmp104.sap-ag.de/swdc
    In the website "SAP Software Distribution Center" select the category "Download" -> "Installations & Upgrades" -> "Entry by Application Group" then select "SAP Best Practices" -> "SAP BP PORTALS".
    From there, the EP V2.60 version was downloaded.

  • Upcoming SAP Best Practices Data Migration Training - Chicago

    YOU ARE INVITED TO ATTEND HANDS-ON TRAINING
    SAP America, Downers Grove in Chicago, IL:
    November 3 – 5, 2010
    Installation and Deployment of SAP Best Practices for Data Migration & SAP BusinessObjects Data Services
    Install and learn how to use the latest SAP Best Practices for Data Migration package. This new package combines the familiar IDoc technology together with SAP BusinessObjects (SBOP) Data Services to load your customer's legacy data to SAP ERP and SAP CRM (New!).
    Agenda
    At the end of this unique hands-on session, participants will depart with the SBOP Data Services and SAP Best Practices for Data Migration installed on their own laptops. The three-day training course will cover all aspects of the data migration package including:
    1.     Offering Overview – Introduction to the new SAP Best Practices for Data Migration package and data migration content designed for SAP BAiO / SAP ERP and SAP CRM
    2.     Data Services fundamentals – Architecture, source and target metadata definition. Process of creating batch jobs, validating, tracing, debugging, and data assessment.
    3.     Installation and configuration of the SBOP Data Services – Installation and deployment of the Data Services and content from SAP Best Practices. Configuration of your target SAP environment and deployment of the Migration Services application.
    4.     Customer Master example – Demonstrations and hands-on exercises on migrating an object from a legacy source application through to the target SAP application.
    5.     Overview of Data Quality within the Data Migration process – A demonstration of the Data Quality functionality available to partners using the full Data Services toolset as an extension to the Data Services license.
    Logistics & How to Register
    Nov. 3 – 5: SAP America, Downers Grove, IL
                     Wednesday 10AM – 5PM
                     Thursday 9AM – 5PM
                     Friday 8AM – 3PM
                     Address:
                     SAP America – Buckingham Room
                     3010 Highland Parkway
                     Downers Grove, IL USA 60515
    Partner Requirements:  All participants must bring their own laptop to install SAP Business Objects Data Services on it. Please see attached laptop specifications and ensure your laptop meets these requirements.
    Cost: Partner registration is free of charge
    Who should attend: Partner team members responsible for customer data migration activities, or for delivery of implementation tools for SAP Business All-in-One solutions. Ideal candidates are:
    •         Data migration consultants and IDoc experts involved in data migration and integration projects
    •         Functional experts who perform mapping activities for data migration
    •         ABAP developers who write load programs for data migration
    Trainers
    Oren Shatil – SAP Business All-in-One Development
    Frank Densborn – SAP Business All-in-One Development
    To register please use the hyperlink below.
    http://service.sap.com/~sapidb/011000358700000917382010E

    Hello,
    The link does not work. Is this training still available?
    Regards,
    Romuald

  • Best Practices for Configuration Manager

    What links/documents are available that summarize best practices for Configuration Manager in the following areas?
    Applications and Packages
    Software Updates
    Operating System Deployment
    Hardware/Software Inventory

    Hi,
    I think this may help you
    system center 2012 configuration manager best practices
    SCCM 2012 task-sequence best practices
    SCCM 2012 best practices for deploying application
    Configuration Manager 2012 Implementation and Administration
    Regards, Ibrahim Hamdy

  • Path to best practices.

    Hi experts,
    I am searching for SAP Best Practices for XI... I am unable to trace the path on SDN and service.sap.com.
    Could you let me know the path for it.
    Thanks in advance.
    Kiran.

    Hi Kiran,
    If you are interested in SAP Best Practices please follow the links below for more information:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/8519e590-0201-0010-6280-d0766e58de6a
    •  SAP Best Practices on the SAP Service Marketplace
    •  SAP Best Practices for High Tech in the SAP Help Portal
    http://help.sap.com/bp_hightechv1500/HighTech_DE/index.htm
    also read :
    /people/marian.harris/blog/2005/06/23/need-to-get-a-sap-netweaver-component-implemented-quickly-try-sap-best-practices
    *Pls: Reward points if helpful*
    Regards,
    Jyoti

  • What is best practices

    hi gurus,
    I would like to know what Best Practices are, where we use them, what benefits we get through Best Practices, and which industries they are useful for.
    If anyone can help me with this subject, it will be of great value to me.
    Thanks in advance

    Dear nag
    SAP Best Practices facilitate a speedy and cost-efficient implementation of SAP Software with a minimal need for planning and resources. SAP Best Practices are suited to the enterprise requirements of different industries. They integrate well with varying financial accounting and human resource management systems and can be used by enterprises of any size.
    => SAP Best Practices are a central component of the second phase of ValueSAP (Implementation). The ValueSAP framework guarantees value throughout the entire life cycle of SAP Software.
    => SAP Best Practices are a cornerstone of mySAP.com, since all the key elements of mySAP.com are linked to SAP Best Practices through preconfiguration. Key elements include:
              <> Preconfigured collaborative business scenarios
              <> Preconfigured mySAP.com Workplaces
              <> Preconfigured access to electronic marketplaces
              <> Preconfigured employee self-services
    Features
    SAP Best Practices consist of:
    An industry-specific version of AcceleratedSAP (ASAP) including various tools such as the Implementation Assistant, the Question & Answer database (Q&Adb), detailed documentation of business processes and accelerators:
    The industry-specific version of ASAP provides extensive business knowledge in a clear, structured format, which is geared towards the needs of your enterprise. You can use the reference structures with industry-specific end-to-end business processes, checklists and questions to create a Business Blueprint. This document contains a detailed description of your enterprise requirements and forms the basis for a rapid and efficient implementation of an SAP System.
    Preconfigured systems providing you with the industry-specific and/or country-specific Customizing settings you need to effectively implement business processes relevant to your enterprise
    Key elements include:
    -> Tried and tested configuration settings for the critical processes in your industry including special functions that are standard across all the preconfigured systems
    -> Documentation on configuration, which provides project team members with a comprehensive overview of the system settings
    -> Master data, which you can easily change or extend, for example, organizational structures, job roles, and customer/vendor master records
    -> Test catalogs that can be used to replay test processes in training courses, for example, to help you gain a better understanding of how processes work in the system
    Thanks
    G. Lakshmipathi

  • Exchange Best Practices Analyzer and Event 10009 - DCOM

    We have two Exchange 2010 SP3 RU7 servers on Windows 2008 R2
    In general, they seem to function correctly.
    ExBPA (Best Practices Analyzer) results are fine. Just some entries about drivers being more than two years old (vendor has not supplied newer drivers so we use what we have). Anything else has been verified to be something that can "safely be ignored".
    Test-ServiceHealth, Test-ReplicationHealth and other tests indicate no problems.
    However, when I run the ExBPA, it seems like the server on which I run ExBPA attempts to contact the other using DCOM and this fails.
    Some notes:
    1. Windows Firewall is disabled on both.
    2. Pings in both directions are successful.
    3. DTCPing would not even run so I was not able to test with this.
    4. Connectivity works perfectly otherwise. I can see/manage either server from the other using the EMC or EMS. DAG works fine as far as I can see.
    What's the error message?
    Event 10009, DistributedCOM
    "DCOM was unable to communicate with the computer --- opposite Exchange server of the pair of Exchange servers --- using any of the configured protocols."
    This is in the System Log.
    This happens on both servers and only when I run the ExBPA.
    I understand that ExBPA uses DCOM but cannot see what would be blocking communications.
    I can access the opposite server in MS Management Consoles (MMC).
    Note: the error is NOT in the ExBPA results - but rather in the Event Viewer System Log.
    Yes, it is consistent. Have noticed it for some time now.
    Does anyone have any idea what could be causing this? Since normal Exchange operations are not affected, I'm tempted to ignore it, but I have to do my "due diligence" and inquire. 

    Hi David,
    I recommend you refer the following article to troubleshoot this event:
    How to troubleshoot DCOM 10009 error logged in system event
    Why this happens:
    Generally speaking, the reason DCOM 10009 is logged is that the local RPCSS service can't reach the remote RPCSS service of the remote target server. There are many possibilities which can cause this issue.
    Scenario 1:
     The remote target server happens to be offline for a short time, for example, just for maintenance.
    Scenario 2:
    Both servers are online, but an RPC communication issue exists between them, for example: server name resolution failure, exhaustion of port resources for RPC communication, or firewall configuration.
    Scenario 3:
    Even if the TCP connection to the remote server has no problem, a failure during RPC authentication (for example, an error status code like 0x80070721, which means "A security package specific error occurred") will also cause DCOM 10009 to be logged on the client side.
    Scenario 4:
    The target DCOM/COM+ service failed to be activated due to a permission issue. In this situation, DCOM 10027 will be logged on the server side at the same time.
    Event ID 10009 — COM Remote Service Availability
    Resolve
    Ensure that the remote computer is available
    There is a problem accessing the COM Service on a remote computer. To resolve this problem:
    Ensure that the remote computer is online.
    This problem may be the result of a firewall blocking the connection. For security, COM+ network access is not enabled by default. Check the system to determine whether the firewall is blocking the remote connection.
    Other reasons for the problem might be found in the Extended Remote Procedure Call (RPC) Error information that is available in Event Viewer.
    To perform these procedures, you must have membership in Administrators, or you must have been delegated the appropriate authority.
    Ensure that the remote computer is online
    To verify that the remote computer is online and the computers are communicating over the network:
    Open an elevated Command Prompt window: click Start, point to All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
    At the command prompt, type ping, followed by a space and the remote computer name, and then press ENTER. For example, to check that your server can communicate over the network with a computer named ContosoWS2008, type ping ContosoWS2008, and then press ENTER.
    A successful connection results in a set of replies from the other computer and a set of ping statistics.
    Check the firewall settings and enable the firewall exception rule
    To check the firewall settings and enable the firewall exception rule:
    Click Start, and then click Run. Type wf.msc, and then click OK. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
    In the console tree, click Inbound Rules.
    In the list of firewall exception rules, look for COM+ Network Access (DCOM-In).
    If the firewall exception rule is not enabled, in the details pane click Enable Rule, and then scroll horizontally to confirm that the protocol is TCP and the LocalPort is 135. Close Windows Firewall with Advanced Security.
    Review available Extended RPC Error information for this event in Event Viewer
    To review available Extended RPC Error information for this event in Event Viewer:
    Click Start, and then click Run. Type comexp.msc, and then click OK. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
    Under Console Root, expand Event Viewer (Local).
    In the details pane, look for your event in the Summary of Administrative Events, and then double-click the event to open it.
    The Extended RPC Error information that is available for this event is located on the Details tab. Expand the available items on the Details tab to review all available information.
    For more information about Extended RPC Error information and how to interpret it, see Obtaining Extended RPC Error Information (http://go.microsoft.com/fwlink/?LinkId=105593).
    Best regards,
    Niko Cheng
    TechNet Community Support

  • Best Practice - Bounded Task Flows, Regions and Nested Application Modules

    Using JDev 11.1.1.3. I understand that it's generally considered good practice to have just one root application module servicing model content / services for each page. In our application, we've used a number of bounded task flows and page fragments deployed as af:regions into pages, either as a) views targeted in page-flow navigation, b) tab panel content inside a regular jspx, or c) af:popup / af:dialog content. As it stands, we've not engaged nesting of the application modules for this embedded region content, so these regions are no doubt instantiating new AMs if/when invoked. Should the AMs servicing these embedded regions be deployed nested within the root AMs, and if so, does this change the way that the jsff / fragment content is actually developed (currently as per any other jspx, using the Data Control palette)? Or are the best-practice directives talking about a page as the design-time / declarative composition of content rather than the run-time aggregation of page + fragments, in which case the fact that our embedded fragments are not using nested AMs is unlikely to be a concern?
    Thanks,

    Probably a better question for the ADF EMG: http://groups.google.com/group/adf-methodology?hl=en
    CM.

  • Service Model, Health Model, Best Practice (SML)

    Hello
    I am trying to explain to semi-technical people who do not know SCOM the principle of SCOM monitoring-concept best practice.
    Therefore, what I am looking for is a set of slides/short video/Q&A etc. which explains the reasoning behind taking the time to work out a Service Model and a Health Model at the 'start' of a project (e.g. before installing BusinessAppA), so the application can be properly monitored and alerted on.
    Basically I am trying to get the architects/project managers to think about what I need as a SCOM engineer, so I can discover and monitor the application/system they are proposing to install, rather than picking this up after the event.
    Does anyone know of any good resources to explain these concepts to get the message across.
    Thanks All
    AAnotherUser__

    Hi,
    Please refer to the links below:
    Service Model
    http://technet.microsoft.com/en-us/library/ee957038.aspx
    Health Model Introduction
    http://channel9.msdn.com/Series/System-Center-2012-R2-Operations-Manager-Management-Packs/Mod15
    Health Model
    http://technet.microsoft.com/en-us/library/ff381324.aspx

  • IPS Tech Tips: IPS Best Practices with Cisco Remote Management Services

    Hi Folks -
    Another IPS Tech Tip is coming up, and this time we will be hearing from some past and current Cisco Remote Services members with their best-practice suggestions. As always this is about 30 minutes of content and then Q&A - a low-cost, high-reward event.
    Hope to see you there.
    -Robert
    Cisco invites you to attend a 30-45 minute Web seminar on IPS Best Practices delivered via WebEx. This event requires registration.
    Topic: Cisco IPS Tech Tips - IPS Best Practices with Cisco Remote Management Services
    Host: Robert Albach
    Date and Time:
    Wednesday, October 10, 2012 10:00 am, Central Daylight Time (Chicago,   GMT-05:00)
    To register for the online event
    1. Go to https://cisco.webex.com/ciscosales/onstage/g.php?d=203590900&t=a&EA=ralbach%40cisco.com&ET=28f4bc362d7a05aac60acf105143e2bb&ETR=fdb3148ab8c8762602ea8ded5f2e6300&RT=MiM3&p
    2. Click "Register".
    3. On the registration form, enter your information and then click   "Submit".
    Once the host approves your registration, you will receive a confirmation   email message with instructions on how to join the event.
    For assistance
    http://www.webex.com
    IMPORTANT NOTICE: This WebEx service includes a feature that allows audio and any documents and other materials exchanged or viewed during the session to be recorded. By joining this session, you automatically consent to such recordings. If you do not consent to the recording, discuss your concerns with the meeting host prior to the start of the recording or do not join the session. Please note that any such recordings may be subject to discovery in the event of litigation. If you wish to be excluded from these invitations then please let me know!

    Hi Marvin, thanks for the quick reply.
    It appears that we don't have Anyconnect Essentials.
    Licensed features for this platform:
    Maximum Physical Interfaces       : Unlimited      perpetual
    Maximum VLANs                     : 100            perpetual
    Inside Hosts                      : Unlimited      perpetual
    Failover                          : Active/Active  perpetual
    VPN-DES                           : Enabled        perpetual
    VPN-3DES-AES                      : Enabled        perpetual
    Security Contexts                 : 2              perpetual
    GTP/GPRS                          : Disabled       perpetual
    AnyConnect Premium Peers          : 2              perpetual
    AnyConnect Essentials             : Disabled       perpetual
    Other VPN Peers                   : 250            perpetual
    Total VPN Peers                   : 250            perpetual
    Shared License                    : Disabled       perpetual
    AnyConnect for Mobile             : Disabled       perpetual
    AnyConnect for Cisco VPN Phone    : Disabled       perpetual
    Advanced Endpoint Assessment      : Disabled       perpetual
    UC Phone Proxy Sessions           : 2              perpetual
    Total UC Proxy Sessions           : 2              perpetual
    Botnet Traffic Filter             : Disabled       perpetual
    Intercompany Media Engine         : Disabled       perpetual
    This platform has an ASA 5510 Security Plus license.
    So then what does this mean for us VPN-wise? Is there any way we can set up multiple VPNs with this license?

  • Consuming web services in a jsr 168 portlet best practices.

    I am building portlets (JSR 168 API in WebSphere Portal 6.0, using the Rational web service client). I need some suggestions on caching the web-service data in the portlets. We have a number of portlets (around 4 or 5) on a portal page that all rely on a single Lotus Domino web service (one WSDL).
    Is there a way I can cache the data returned by the web service so that I don't make repeated calls to it on every portlet request? Any best practices/ideas on how I could avoid multiple web service calls would be appreciated.
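    One common approach is to keep a small TTL cache in the PortletSession using APPLICATION_SCOPE, so every portlet in the same web application on the page reads the one cached web-service result instead of calling the service again. Below is a minimal, hedged sketch of such a cache; the class and key names are illustrative, not part of any portal API, and the TTL value is something you would tune to how stale the Domino data may get.

    ```java
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    /**
     * Minimal time-to-live cache sketch. Store one instance in the
     * PortletSession under APPLICATION_SCOPE so all portlets on the
     * page share cached web-service responses.
     */
    public class TtlCache<K, V> {
        private static final class Entry<V> {
            final V value;
            final long expiresAt;
            Entry(V value, long expiresAt) {
                this.value = value;
                this.expiresAt = expiresAt;
            }
        }

        private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();
        private final long ttlMillis;

        public TtlCache(long ttlMillis) {
            this.ttlMillis = ttlMillis;
        }

        public void put(K key, V value) {
            entries.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMillis));
        }

        /** Returns the cached value, or null if absent or expired. */
        public V get(K key) {
            Entry<V> e = entries.get(key);
            if (e == null) {
                return null;
            }
            if (System.currentTimeMillis() > e.expiresAt) {
                entries.remove(key);
                return null;
            }
            return e.value;
        }
    }
    ```

    In each portlet's doView you would first try `cache.get(...)` and only invoke the web service on a miss, then `put` the result; the cache itself can be lazily created and stored via `portletSession.setAttribute("wsCache", cache, PortletSession.APPLICATION_SCOPE)` (JSR 168 defines that scope for sharing within the portlet application).
    
    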

    Interestingly, as it often happens with Oracle portal, this has started working without me doing anything special.
    However, the session events my listener gets notified of are (logically, as this portlet works via WSRP) different from user sessions. The problem I'm trying to solve now is that logging off (in SSO) doesn't lead to those sessions being destroyed. They only get destroyed after timeout specified in my web.xml (<session-config><session-timeout>30</session-timeout></session-config>). On the other hand, when they do expire, the SSO session may still be active, in which case the user gets presented with the infamous "could not get markup" error message. The latter is unacceptable in our case, so we had to set session-timeout to a pretty high value.
    So the question is: how can we track when the user logs off? We have found the portal.wwctx_sso_session$ and portal.WWLOG_ACTIVITY_LOG1$ (and ...2$) tables, but no documentation for them. The real problem with using those tables, though, is that we can't think of any way to match the portlet sessions with the SSO sessions/actions listed in them. (Consider the situation where someone logs in from two PCs.)
    Any ideas?
