VMware Performance Event Triggers use case questions.

1) Can the CPO VMware Adapter Performance Event Triggers and their
SAMPLE SIZE, INTERVAL, and CONDITIONS attributes be configured
for the following use case?
2) Can VMware Performance Event Triggers be CORRELATED, and if so, would correlation be needed to satisfy the use case?
3) Would an additional performance monitoring tool be required to satisfy the following use case?
If a VM CPU or RAM has been at 80% for 2 hours, then trigger a workflow
and
If a VM CPU or RAM has been at 60% for X days, then trigger a workflow

I will first describe how you could instrument a correlation method, but ultimately I'm not sure it is really necessary for the use cases described.
To correlate VMware performance events over a designated timeframe, create a correlation process that tracks the underlying performance event and decides whether or not to trigger the process you want run when the correlated event is detected.
Here's how it works.
First, create a global table with three columns:
1) Virtual Machine path
2) Consecutive Trigger Count
3) Last Trigger time
Create a process that is triggered by a VMware performance event (such as Memory Avg > 80%), where you can set a sample size and interval that make sense given the timeframe of interest (2 hours or X days). For example, a sample size of 10 at an interval of 30 seconds gives a 5-minute time slice, which is a reasonable slice to request from vCenter for a 2-hour timeframe. This means 24 consecutive triggers are required to raise the actual event of interest (2 hours divided by 5 minutes).
That is, the formula is:
Consecutive Trigger Count = Timeframe / (Sample Size * Interval)
For the example above: 2 hours / (10 * 30 seconds) = 7200 s / 300 s = 24.
The correlation process triggered by the raw VMware event does the following:
1) If there is no existing entry in the global table for the VM, add an entry with count = 1 and the current time.
2) If an entry exists, check the current time against the last trigger time:
   a) If it falls in the next interval (for example, the current time is 12:15 and the last trigger time was 12:10; see the note below):
        Increment the counter and set the last trigger time.
        If count = 24, run the process that handles "VM Memory Avg > 80% for at least 2 hours" and delete the entry from the table.
        If count < 24, the process exits, having only incremented the counter and set the last trigger time.
   b) If the current time is greater than the last trigger time plus the time slice (plus a little padding), reset the counter to 1.
Note: When comparing the current time with the last trigger time, pad the window to account for slight processing delays, i.e. treat the trigger as consecutive when current-time < last-time + (time slice * 2). Anything less than 10 minutes in this case counts as a consecutive trigger. Comparing current-time < last-time + time-slice + 1 is probably also safe. A sketch of this logic follows below.
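For illustration only, here is a minimal sketch of that bookkeeping in Java. CPO implements this with a global table and graphical process logic rather than code, so the class and method names below (CorrelationTracker, recordTrigger) are hypothetical; the sketch just shows the counter/timestamp algorithm described above.

import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for the global table: VM path -> (count, last trigger time).
public class CorrelationTracker {

    private static class Entry {
        int count;
        Instant lastTrigger;
        Entry(int count, Instant lastTrigger) { this.count = count; this.lastTrigger = lastTrigger; }
    }

    private final Map<String, Entry> table = new ConcurrentHashMap<String, Entry>();
    private final Duration timeSlice;   // e.g. 5 minutes (sample size 10 * 30-second interval)
    private final int requiredCount;    // e.g. 24 for a 2-hour window

    public CorrelationTracker(Duration timeSlice, int requiredCount) {
        this.timeSlice = timeSlice;
        this.requiredCount = requiredCount;
    }

    // Called each time the raw VMware performance event fires for a VM.
    // Returns true when the correlated event (e.g. ">80% for 2 hours") should fire.
    public boolean recordTrigger(String vmPath, Instant now) {
        Entry e = table.get(vmPath);
        if (e == null) {                 // step 1: first trigger for this VM
            table.put(vmPath, new Entry(1, now));
            return false;
        }
        // step 2a: consecutive if within two time slices (padding for processing delays)
        if (now.isBefore(e.lastTrigger.plus(timeSlice.multipliedBy(2)))) {
            e.count++;
            e.lastTrigger = now;
            if (e.count >= requiredCount) {   // threshold reached: fire and reset
                table.remove(vmPath);
                return true;
            }
        } else {                              // step 2b: gap detected, start over
            e.count = 1;
            e.lastTrigger = now;
        }
        return false;
    }
}

For the 2-hour use case you would construct it as new CorrelationTracker(Duration.ofMinutes(5), 24) and run the triggered workflow whenever recordTrigger returns true.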
Now, having gone through all that, you may not need the correlation process after all. Simply make your sample size large enough to cover your ultimate timeframe, and you can trigger your event handler directly without the need to correlate. So for the 2-hour window, just use a sample size of 240 (240 * 30 seconds = 2 hours). This may or may not work depending on how performance metrics have been configured on the server (sample sizes, intervals, and how much history is saved). You can only set your own sample size and interval to multiples of those configured values, so be careful and refer to the VMware documentation when relying on such metrics.
You may find that the correlation method I first described is more reliable, especially for longer timeframes such as X days (where you need to sample hourly rather than every 5 minutes).
In any case, I think you can do what is asked without external monitoring, but it will require some experimentation and a deeper knowledge of how performance metric sampling works for ESX.

Similar Messages

  • A Use Case question.

    Hello all
    I am currently developing an event management assignment for my degree.
    A subset of the problem statement:
    Attendee is a person, who would register with the application
    in order to view the posted events.
    For this part of the problem statement I have identified a use case
    "register attendee".
    I will type the use case here. Since I am a novice, I would really like
    design gurus to shed some light on the use case and suggest changes.
    I will really appreciate it.
    Use case : register attendee
    Pre-condition: Attendee decides to register.
    Main Success Scenario:
    Actor: 1. Attendee requests to register.
    System: 2. Provides the registration form.
    Actor: 3. Attendee fills in the form and submits it.
    System: 4. Checks for valid authentication (e.g. a unique username).
    System: 5. Validates the information (e.g. whether compulsory fields are present).
    System: 6. Notifies the attendee through mail.
    System: 7. Displays that registration was successful.
    Alternative Flow :
    4 a) Invalid authentication : Displays appropriate message.
    5 a) Invalid information : Displays appropriate message.
    6 a) Mail notification failure : Logs the failure message and email.
    Please guide me friends. I am a bit confused.
    Thanks in advance.
    Jenny               

    Use cases are pretty good for both of these. You know
    your system needs to allow an ATTENDEE to register
    with the SYSTEM using a REGISTRATION FORM. These sound
    like things that you either need to model (what
    attributes do the form and attendee have?) or need
    to build in the "system".
    The steps in the use case tell you how the system must
    behave. It needs to "getRegistrationForm" and give it
    to the attendee. The Attendee needs to
    "completeRegistrationForm" and
    "submitRegistrationForm". So, with the authentication
    and validation rounding out the process, you have a
    good idea of what kind of methods you need as a base
    in order to deal with the real-world concepts you have
    identified.

    I am through with what behaviour my system will need to provide in order to realize the use case. But I will have to assign the behaviour appropriately to classes of objects. I would like to move to the design model once I have a good understanding of the domain model. This is where the problem starts: should I identify AuthenticationVerifier and InformationVerifier as different concepts, and later give AuthenticationVerifier the responsibility of verifying unique authentication and InformationVerifier the responsibility of verifying the information received with the form? Or should I take only one concept, say Administrator, that performs both of the stated behaviours? I understand that it is better to overspecify concepts than to underspecify, but at some point I will have to make that decision.
    If it were a library system, it would be easy to identify certain preliminary concepts, say book, patron, etc., as they occupy physical space. But here nothing is clear. There isn't any entity in my use case that occupies physical space that I can easily identify as a real-world concept.
    My question is: how, in this case, does one define the notion of a concept when there is nothing in the main success scenario that occupies space? My use case only specifies what will be done with the attendee information, not who will do it. So, as a use case developer, am I responsible for mapping these processes to physical entities that I have derived, e.g. AuthenticationVerifier?
    ps. You will need to add the Attendee as an Actor. She
    is the one that is initiating the activity.

    Does it mean that because Attendee is an actor, he/she cannot be a concept?

  • Photostream, iCloud Photo Sharing, iPhoto, my phone, and use cases all around

    I have a few pretty basic use case questions about Photostream.
    I take a lot of pictures on my iPhone. They magically appear in iPhoto on my Mac. I'm told that by having the import feature in iPhoto turned on, those photos will stay in iPhoto on my iMac forever - or at least beyond the 30 days/1,000 most recent photos limits.
    So that leaves me to ask: what am I supposed to do with the photos on my phone? Just leave them there to eat up storage? Or delete them at some point? Delete all the photos on my phone after I import those that Photostream hadn't already captured? Is there any benefit to importing a duplicate photo if I have Photostream importing turned on? And are they truly, really duplicate photos? Are the iPhone photos in my Photostream duplicated on my phone until they fall out of Photostream? Are iPhone photos duplicated if they're in a shared stream?
    I just don't know what I should do with the photos on my iPhone once they're on my iMac, and I'm not truly confident that they're on my iMac for good or that they're truly the same file as the original.

    On the Mac/iPhoto, move photos in the Photo Stream group into some other album.
    foatbttpo1567, in iPhoto on a Mac you need to download the photos to an event - not an album. An album would only reference them in the photo stream, not store the photos in the iPhoto library. Turning on "Automatic Import", as Old Toad suggested, will do that and create monthly events.
    I'm told that by having the import feature in iPhoto turned on, those photos will stay in iPhoto on my iMac forever - or at least after I pass the 30 days/1,000 most recent photos limits.
    The 30 days/1000 photos limit applies to the temporary storage in iCloud - the time you have to grab them and to import them. Once they are in an event, you have them safe.
    So that leaves me to ask: what am I supposed to do with the photos on my phone? Just leave them there to eat up storage? Or delete them at some point? Delete all the photos on my phone after I import those that Photostream hadn't already captured?
    Photo Stream is a handy feature for transmitting photos, but don't rely on it for permanent storage. If you ever have to reset your iPhone, reinstall your Mac, or reset the photo stream, your photos from the stream may be gone. Always keep a copy of your photos, either in the Camera Roll on your phone or in iPhoto events on your Mac, and make sure that these copies are regularly backed up.
    As for the "truly duplicates" - Photo stream will send optimized versions to the devices, but to a mac the full original version. You may want to read the FAQ:  iCloud: Photo Stream FAQ

  • Can I use case statements in triggers?

    I created this trigger and it works, BUT I don't like those parentheses at the beginning. I would like
    to replace those parentheses with CASE statements. So that is my question: can you use CASE statements in triggers, and how would you translate the following into a CASE statement?
    FOR EACH ROW
    WHEN ( (new.sgbstdn_levl_code = 'UG')
    and
    ( (NEW. SGBSTDN_STST_CODE NOT IN ('GR','SA','AS','IS') )
    OR
    ( (NEW. SGBSTDN_STST_CODE = 'IS' ) AND
    (NEW. SGBSTDN_STYP_CODE IN ('N' , 'T' )) AND
    (OLD. SGBSTDN_STST_CODE = 'AS' ) ) ) )
    ==================================================================================================
    CREATE OR REPLACE TRIGGER CC_STUD_WITHDRAWAL
    AFTER UPDATE OR INSERT ON SATURN . SGBSTDN
    FOR EACH ROW
    WHEN ( (new.sgbstdn_levl_code = 'UG')
    and
    ( (NEW. SGBSTDN_STST_CODE NOT IN ('GR','SA','AS','IS') )
    OR
    ( (NEW. SGBSTDN_STST_CODE = 'IS' ) AND
    (NEW. SGBSTDN_STYP_CODE IN ('N' , 'T' )) AND
    (OLD. SGBSTDN_STST_CODE = 'AS' ) ) ) )
    DECLARE
    v_params gokparm.t_parameterlist;
    event_code gtveqnm.gtveqnm_code%TYPE;
    firstname spriden.spriden_first_name%TYPE;
    lastname spriden.spriden_last_name%TYPE;
    middlename spriden.spriden_mi%TYPE;
    id spriden.spriden_id%TYPE;
    CURSOR get_stud_name IS
    SELECT
    spriden_id ,
    spriden_last_name ,
    spriden_first_name ,
    spriden_mi
    FROM
    saturn.spriden
    WHERE spriden_pidm = :NEW.SGBSTDN_PIDM
    AND spriden_change_ind IS NULL;
    BEGIN
    IF goksyst . f_isSystemLinkEnabled ( 'WORKFLOW' ) THEN
    event_code := SUBSTR ( gokevnt.F_CheckEvent ( 'WORKFLOW' ,'CC_STUDENT_WITHDRAW' ),1,20);
    OPEN get_stud_name ;
    FETCH get_stud_name INTO id , lastname , firstname , middlename ;
    CLOSE get_stud_name ;
    ----pass parameters to the event
    v_params ( 1 ).param_value := 'CC_STUDENT_WITHDRAW' ;
    v_params ( 2 ).param_value := '' ;
    v_params ( 3 ).param_value := 'Student Withdrawal:' || lastname || ',' || firstname || ' ' ||
    middlename ;
    v_params ( 4 ).param_value := :NEW.sgbstdn_pidm ;
    v_params ( 5 ).param_value := id ;
    v_params ( 6 ).param_value := lastname ;
    v_params ( 7 ).param_value := firstname ;
    v_params ( 8 ).param_value := middlename ;
    v_params ( 9 ).param_value := :NEW.sgbstdn_term_code_eff ;
    v_params ( 10 ).param_value := :NEW.SGBSTDN_STST_CODE ;
    v_params ( 11 ).param_value := :NEW.SGBSTDN_STYP_CODE ;
    gokparm.Send_Param_List ( event_code , v_Params );
    END IF;
    END;
    /

    You could delete a fair number of extraneous parentheses.
    CREATE OR REPLACE TRIGGER cc_stud_withdrawal
      AFTER UPDATE OR INSERT
      ON saturn.sgbstdn
      FOR EACH ROW
      WHEN     NEW.sgbstdn_levl_code = 'UG'
           AND (   NEW.sgbstdn_stst_code NOT IN ('GR', 'SA', 'AS', 'IS')
                OR (    NEW.sgbstdn_stst_code = 'IS'
                    AND NEW.sgbstdn_styp_code IN ('N', 'T')
                    AND OLD.sgbstdn_stst_code = 'AS'))

  • Help needed in SQL performance - Using CASE in SQL statement versus 2 query

    Hi,
    I have a requirement to find count from a bunch of tables.
    The SQL I have gives the count of all members.
    I have created 2 queries to find count of active and inactive members.
    The key difference is only the active dates.
    Each query takes 20 seconds to execute.
    I modified the SQL to use CASE statement in the SELECT.
    So after the data is fetched, the CASE statements evaluate the active date and give two counts (active and inactive).
    Is it advisable to use this approach? Will CASE improve SQL performance? I have to justify this.
    Please let me know your thoughts.
    Thanks,
    J

    Hi,
    If it can be done in a single SQL statement, do it in a single SQL statement.
    You said:
    > Will CASE improve SQL performance?
    There can be cases proving the performance either better or worse. In your case, you should test it and tell us how it performs.
    Regards,
    Bhushan
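    For illustration, a minimal sketch of the single-pass approach in Java/JDBC. The table and column names (members, active_date) and the connection details are hypothetical, since the original post does not show them; the point is that one scan with conditional aggregation (SUM over CASE) replaces the two separate 20-second queries.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class ActiveInactiveCounts {
        public static void main(String[] args) throws SQLException {
            // One pass over the table; each row falls into exactly one bucket.
            String sql =
                "SELECT SUM(CASE WHEN active_date >= SYSDATE THEN 1 ELSE 0 END) AS active_cnt, "
              + "       SUM(CASE WHEN active_date <  SYSDATE THEN 1 ELSE 0 END) AS inactive_cnt "
              + "FROM members";   // hypothetical table/column names
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@host:1521:sid", "user", "pwd");   // hypothetical connection
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                if (rs.next()) {
                    System.out.println("active=" + rs.getLong("active_cnt")
                                     + " inactive=" + rs.getLong("inactive_cnt"));
                }
            }
        }
    }

    Whether this beats the two separate queries depends on the execution plans: if both original queries full-scan the same tables, the combined query halves the scans; if each used a selective index, measure both.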

  • I HAVE A QUESTION ABOUT USING CASE STRUCTURE

    Can I compute a percentage using case structure?
    In the following query below, I have been able to use CASE STRUCTURE to add to a counter when days were between 0 and 30, or when days were between 31 and 60 or when days were between 61 and 90 or when days were between 91 and 9999. I have also been able to get a GRAND TOTAL of all days between 0 and 9999. This is done in the LAST ITEM of the SELECT STATEMENT. The FIRST ITEM of the SELECT STATEMENT counts rows of records being processed. I want to take the LAST ITEM of the SELECT STATEMENT and MULTIPLY it by 100 and then DIVIDE it by the FIRST ITEM of the SELECT STATEMENT to get a percentage. I know that you can do this with numeric fields in a file but is there a way to do this in CASE STRUCTURE with calculated totals?
    SELECT
    count(distinct v.rowid) v_cnt,
    SUM(CASE WHEN(V.LCL_ER_RECV_DT - ADD_MONTHS(V.IND_ER_PER_END_DT,3)) BETWEEN 0 AND 30 THEN 1
    ELSE 0
    END) one_mo,
    SUM(CASE WHEN(V.LCL_ER_RECV_DT - ADD_MONTHS(V.IND_ER_PER_END_DT,3)) BETWEEN 31 AND 60 THEN 1
    ELSE 0
    END) two_mo,
    SUM(CASE WHEN(V.LCL_ER_RECV_DT - ADD_MONTHS(V.IND_ER_PER_END_DT,3)) BETWEEN 61 AND 90 THEN 1
    ELSE 0
    END) three_mo,
    SUM(CASE WHEN(V.LCL_ER_RECV_DT - ADD_MONTHS(V.IND_ER_PER_END_DT,3)) BETWEEN 91 AND 9999 THEN 1
    ELSE 0
    END) three_pl_mo,
    SUM(CASE WHEN(V.LCL_ER_RECV_DT - ADD_MONTHS(V.IND_ER_PER_END_DT,3)) BETWEEN 0 AND 9999 THEN 1
    ELSE 0
    END) TOT
    FROM NCOER V, NCOER_IN_ERROR NIE, NCOER_ERROR NE, ALL_CMD_VIEW ACV
    WHERE V.MIL_CMD_ASGN_CD IN ('FC')
    and v.lcl_er_form_cd = '4'
    and acv.cmd_cd = v.mil_cmd_asgn_cd
    and nvl(acv.lcl_code_stat,'N') = 'Y'
    and NVL(v.lcl_omit_from_stats_ind,'N') <> 'Y'
    AND V.PSC_CD IN ('FS10')
    AND (V.LCL_ER_RECV_DT >= '01-MAR-2007' AND V.LCL_ER_RECV_DT <= '31-MAR-2007')
    AND V.IND_SSN = NIE.IND_SSN(+)
    AND V.IND_ER_PER_END_DT = NIE.IND_ER_PER_END_DT(+)
    AND V.LCL_ER_RECV_DT = NE.LCL_ER_RECV_DT(+)
    AND V.IND_SSN = NE.IND_SSN(+)
    AND V.IND_ER_PER_END_DT = NE.IND_ER_PER_END_DT(+)
    AND V.LCL_ER_RECV_DT = NE.LCL_ER_RECV_DT(+)

    Solution for you:
    SELECT
    COUNT(V_CNT) PROCESS_RECORD,
    SUM(CASE WHEN(PROCESS_MONTHS) BETWEEN 0 AND 30 THEN 1
    ELSE 0
    END) ONE_MO,
    SUM(CASE WHEN(PROCESS_MONTHS) BETWEEN 31 AND 60 THEN 1
    ELSE 0
    END) TWO_MO,
    SUM(CASE WHEN(PROCESS_MONTHS) BETWEEN 61 AND 90 THEN 1
    ELSE 0
    END) THREE_MO,
    SUM(CASE WHEN(PROCESS_MONTHS) BETWEEN 91 AND 9999 THEN 1
    ELSE 0
    END) THREE_PL_MO,
    SUM(CASE WHEN(PROCESS_MONTHS) BETWEEN 0 AND 9999 THEN 1
    ELSE 0
    END) TOT,
    (SUM(CASE WHEN(PROCESS_MONTHS) BETWEEN 0 AND 9999 THEN 1
    ELSE 0
    END) *100/COUNT(V_CNT)) TOT_PER
    FROM
    (SELECT     V.ROWID V_CNT, V.LCL_ER_RECV_DT - ADD_MONTHS(V.IND_ER_PER_END_DT,3) PROCESS_MONTHS
    FROM     NCOER V, NCOER_IN_ERROR NIE, NCOER_ERROR NE, ALL_CMD_VIEW ACV
    WHERE     V.MIL_CMD_ASGN_CD IN ('FC')
    AND     V.LCL_ER_FORM_CD = '4'
    AND     ACV.CMD_CD = V.MIL_CMD_ASGN_CD
    AND     NVL(ACV.LCL_CODE_STAT,'N') = 'Y'
    AND     NVL(V.LCL_OMIT_FROM_STATS_IND,'N') <> 'Y'
    AND     V.PSC_CD IN ('FS10')
    AND     (V.LCL_ER_RECV_DT >= '01-MAR-2007' AND V.LCL_ER_RECV_DT <= '31-MAR-2007')
    AND     V.IND_SSN = NIE.IND_SSN(+)
    AND     V.IND_ER_PER_END_DT = NIE.IND_ER_PER_END_DT(+)
    AND     V.LCL_ER_RECV_DT = NE.LCL_ER_RECV_DT(+)
    AND     V.IND_SSN = NE.IND_SSN(+)
    AND     V.IND_ER_PER_END_DT = NE.IND_ER_PER_END_DT(+)
    AND     V.LCL_ER_RECV_DT = NE.LCL_ER_RECV_DT(+) )
    Regards,
    Rajs
    www.oraclebrains.com

  • How to disable the "turn page" event triggered by the scroll/swipe function?

    The problem is as follows.
    The default behaviour of Acrobat Reader (both standalone and the browser plug-in) is to allow scrolling/swiping with the mouse wheel/trackpad. This is useful when the PDF's page length is greater than the screen's own length, because you can read the PDF without distracting your attention from the text to the scrollbar button. However, the same scroll/swipe function turns into a usability problem when the PDF is embedded in an HTML page and the PDF's page length is smaller than the browser's length. In this case, the scroll/swipe turns the page, distracting your attention from the text to the unintended behaviour of the browser. What happens is that you are so used to scrolling/swiping that you do it unintentionally in the PDF's caption area; you really did not want to turn pages in the PDF. Furthermore, if the PDF takes up the whole HTML page, being a website, the scroll/swipe function flips the website pages in ways that neither the reader nor the writer ever intended.
    Hence the question: how to disable, in this case, the "turn page" event triggered by the scroll/swipe function? A JavaScript should do, but the SDK documents have not helped so far...

    ... or release a patch for the API,
    ... or suggest an alternative route to achieve the intended result.

  • EVO:RAIL - Use Cases?

    I think as soon as the world becomes aware of EVO:RAIL, folks will start asking what the use cases are. It's clear that there are a couple of immediate use cases, but I'm betting that customers will bring to the table applications that perhaps we hadn't considered. The other night I was at our Tech Summit event (it's a private event for VMware SEs/TAMs to attend prior to VMworld to get them up to speed on what the announcements are all about), and I was approached by a number of SEs/TAMs who are interested in running EVO:RAIL in their basements and garages as the foundation of their homelabs. Even with the onset of Cloud - and we have an internal cloud that SEs/TAMs can use to demo products to customers - just like the VMware Community, they still have a passion for building out their own demos, just how they like them, that they can tailor-make to handle customer questions that come up. I'm the same: despite being at VMware for 2+ years, I'm still running my own gear at home. It just worked out more cost-effective for my needs than keeping my 42U rack at a colo.
    I doubt that the "WAF" would stretch to having a 4U box with 4 nodes inside it in the spare bedroom. So I'm already looking at running EVO:RAIL in a nested configuration on my home lab gear for testing, videos and demos to customers. That work has already been done for the VMworld HoL by William Lam (the undisputed king of nested ESX; well, perhaps Simon Gallagher comes close!), who's done some excellent work on this already. So hopefully it won't take much work to get hold of the bits from the HoL and pump the latest build of EVO:RAIL into it...
    Anyway, I digress: use cases. It's clear that ROBO, SMB and VDI are the top use cases - situations where an appliance-like model, with (almost!) zero configuration, is needed. I think we should perhaps be a bit careful with the SMB use case. EVO:RAIL is 4 nodes in a 4U configuration, with each node presenting 192GB of RAM. One thing I've always been concerned with is how variable "SMB" is as a term. Over in the US, an SMB would be classed as a company with <1000 employees. Over here in Europe, that would be regarded as still quite large for an SMB. The same goes for ROBO. So I guess what I'm saying is that size is relative, and we have to be careful when we use terms like "small".
    Folks who know me will know that I'm a big fan of VMware Site Recovery Manager and vSphere Replication - having written books about the 1.0, 4.0 and 5.0 versions. I think there's a use case where an EVO:RAIL is put into some colocation facility and used as a DR target. The important thing to remember about EVO:RAIL is that, as it's based on vSphere 5.5 U2, other products in the vCloud Suite family will work with it - like SRM, View, vCloud Director or vCAC... So it's not some special "modded" version of the platform. Perhaps a better model for DR and EVO:RAIL is to run VR locally on it and use vCloud Air as the target for recovery...? I guess we have to ask what's going to be more popular/easier: two sites with an EVO:RAIL at either end, or EVO:RAIL enabled for DR to the cloud. Just to be clear, that isn't baked into EVO:RAIL today; I'm just blue-sky thinking about future possibilities.
    So what do folks think here? Can you dream up any other scenarios where an EVO:RAIL would be good fit...?

    You make some valid points there, but there are a couple of things that are worth clarifying...
    Firstly, EVO:RAIL is pre-installed at the factory. So really the only "installation" is racking it up and connecting it to the TOR. In fact, I think an EVO:RAIL is easier to set up than a VSA is, and it gives more usable disk space and a lower cost per gigabyte.
    IF the customer has given the OEM the IPs, IP ranges, hostnames, and passwords, then it literally is a 15-minute setup after clicking "Build Appliance". On the other hand, if the customer wants to customize the EVO:RAIL prior to the build - such as changing the VLAN tags (that they supplied) - that would add a couple of minutes...
    vDS is NOT a prerequisite. EVO:RAIL uses standard switches, although an EVO:RAIL is licensed for Enterprise Plus. There's no requirement to configure a Distributed Switch.
    The real "ask" here is that the customer needs a 10Gb switch - I think it's fair to say that isn't common in an SMB/ROBO environment. I suspect many of the OEMs will be asking the customer to validate their switch configuration. Multicast for IPv4/IPv6 is used by the "Zero Network Configuration" on the management/VSAN networks. As the name suggests, no configuration is required on the EVO:RAIL; it's used as part of the discovery process for finding the nodes and finding new appliances. On most switches this is a tick-box option for the VLANs that require it...
    One of my plans is to order an EVO:RAIL from one of our suppliers just like a customer would, and document/record that process, so I can get a feel for what a real-world customer experience would be. In fact I might see if my wife, Carmel, can set it up... :-)

  • Which component to choose in my use case (BPEL / OSB / Mediator)

    All,
    I have gone through various blogs and documentation explaining the reasons for choosing a specific component. But it is always a close call when it comes to making such an important decision, as
    the real-world use cases we generally deal with fall on the border, making it difficult to decide.
    Use case:
    A legacy system has to communicate with a third-party system for sending some job details. For this it uses a service intermediary.
    This service intermediary has to
    a) Receive the message from the legacy system. (Preferably as an EDN event, as it's easy for the legacy system to throw an event.)
    b) Very light orchestration ( in the future)
    c) Route it to the Mobile enablement application/ system.
    Generic:
    d) Need to provide a fault management / handling.
    e) Authentication / Authorization.
    Having these requirements, I thought OSB would be the right component to use, as we are focused on routing in a decoupled way with light orchestration and business agility. However, can OSB:
    a. Subscribe to EDN events? If yes, how? Can it use the Oracle Apps adapter to get the events from R12, like BPEL?
    b. Use the same fault management framework written for SOA Suite (policies and bindings)?
    On the other hand, I am having thoughts on why not use a BPEL process itself. We can turn auditing off and have no dehydration points in the BPEL process, thus making it stateless (just like OSB?), if that is the major difference we are looking at. Service virtualization (dynamically changing the endpoint) can also be achieved in BPEL.
    And why not Mediator? I know everywhere people talk about using Mediator for intra-composite communication, but at the same time they suggest using it when writing to a file/adapter or calling an external service exposed as a SOAP WSDL too. Now, for our use case, a Mediator can listen to events from the legacy system and route them to the target mobile enablement service.
    (Note: BPEL and Mediator can use fault management, EDNs, and the Oracle Apps adapter, and can also be made stateless by turning auditing off. So if you are still suggesting OSB, please back it up with a strong reason rather than just theoretically saying that it is the standard to use for routing, statelessness, etc.)
    Kindly help !
    Regards,
    Sridhar.

    Realized that OSB can:
    a. Read an AQ using the AQ adapter, and thus can subscribe to the events raised using AQ in Oracle E-Business Suite applications.
    b. OSB has its own way of handling faults (it does not use the SCA fault handling framework).
    The important question I want to focus on here is:
    In the above use case, when everything can be achieved using any of OSB, BPEL or Mediator (i.e., in terms of ease of development, performance and management), how do you choose a specific component?
    Service virtualization, which OSB boasts, can be done by BPEL.
    BPEL can also be made lightweight by turning dehydration off (auditing off).
    Result caching can be achieved in BPEL by using some custom Coherence API (which is a one-time effort and simple to use).
    Message throttling can be done using a queue in between.
    I really need a very practical reason to prefer OSB and not BPEL or Mediator in my case. Help appreciated.

  • Transactions documentation and a difficult(?) use-case...

    I would like detailed information about how TransactionMap works with different isolation levels, i.e. how changes performed both by the application holding the map and in the distributed cache are propagated between them, etc.
    More detailed information about the TransactionMap.Validator would also be very appreciated.
    We also have one specific "use case" I would like advice about - it goes like this:
    We use one type of main object that has a very tight coupling to a varying number (0 to a few hundred in the extreme case) of small detail objects. All the detail objects are always required as soon as the main object is used. A given detail object is never referenced from more than one main object. We have (for performance reasons) decided to treat the detail objects as "part of" the main object. The main objects are stored in the cache.
    Users can make changes to the main objects themselves or to their detail objects. A user should be able to make many changes to many main objects (and their detail objects) and "commit" them all at once by pressing a button.
    Now to the problem:
    We would like to allow users to make "non-conflicting" changes to a main object's detail objects - i.e. if two users have changed different detail objects, we want to merge the changes instead of refusing the modification at commit. To be able to do this, we intend to keep version numbers not only on the main object but also on the detail objects.
    We would like to use "transactions" to handle the requirement that all of a user's changes be "committed" at once and either all be introduced or none at all (in the event of hardware failure during an update, for instance!), but the default behavior of Transaction, as I understand it (I have so far just read about it, not played around with it much!), is to compare the "whole object" for equality in the prepare (and commit?) steps. We also need exact information about WHAT object(s) have been concurrently modified in case a commit can't be performed, allowing the user to "refresh" only the relevant detail object and retry committing his changes.
    How would we be able to implement our use case in a good and reasonably efficient way given Coherence's features? Would it, for instance, be possible (with reasonable effort) to create our own transaction validation that could merge "non-conflicting" changes to the same object, and if so, how should we go about it?
    Best Regards
    Magnus

    Hi Magnus,
    Our entry processor functionality is your best solution, but unfortunately is not fully supported within a transactional context.
    I would suggest using a combination of explicit locking (as opposed to implicit transactions) and our entry processor functionality (new in 3.1).
    Using explicit locking, you can enforce atomic access to cache entries. Using the entry processor you can perform partial updates locally on the server (allowing you to send only changes).
    So the sequence would be:
    * lock all "main objects"
    * if necessary, validate the main objects (see below)
    * use entry processors to perform "delta updates" against those main objects
    * unlock the main objects
    The locking is only required for atomicity (ensuring that updates don't overlap), and does require that all modifiers follow the same locking pattern. You may either design your objects so that you know the delta updates will complete successfully, or you'll need to verify the updates will succeed prior to actually executing the updates.
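    For illustration, a minimal sketch of that sequence using the Coherence 3.x API. The MainObject and DetailUpdateProcessor classes here are hypothetical stand-ins for your own domain object and delta-update logic.

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.InvocableMap;
    import com.tangosol.util.processor.AbstractProcessor;

    // Hypothetical main object holding its detail objects.
    class MainObject implements java.io.Serializable {
        private final java.util.Map details = new java.util.HashMap();
        void setDetail(Object detailId, Object value) { details.put(detailId, value); }
    }

    // Hypothetical entry processor: applies a change to one detail object
    // entirely on the storage node, so only the delta crosses the wire.
    class DetailUpdateProcessor extends AbstractProcessor implements java.io.Serializable {
        private final Object detailId;
        private final Object newValue;   // must itself be serializable

        DetailUpdateProcessor(Object detailId, Object newValue) {
            this.detailId = detailId;
            this.newValue = newValue;
        }

        public Object process(InvocableMap.Entry entry) {
            MainObject main = (MainObject) entry.getValue();
            main.setDetail(detailId, newValue);
            entry.setValue(main);        // write the updated main object back
            return null;
        }
    }

    public class CommitExample {
        public static void commit(Object key, Object detailId, Object newValue) {
            NamedCache cache = CacheFactory.getCache("main-objects");
            cache.lock(key, -1);         // block until the lock is acquired
            try {
                // optional: validate the current main object here before updating
                cache.invoke(key, new DetailUpdateProcessor(detailId, newValue));
            } finally {
                cache.unlock(key);
            }
        }
    }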
    Jon Purdy
    Tangosol, Inc.

  • What are the performance tradeoffs when using more than one EntityStore?

    Hi,
    I have different accounts, where each account has the same set of entity classes. I don't know yet whether to have one EntityStore for each account, or one EntityStore for all accounts together and do a lot of 'joining' when retrieving entities.
    This is a very similar use case to the one in the FAQ http://www.oracle.com/technology/products/berkeley-db/faq/je_faq.html#37 . The only difference is that I want to use DTP.
    It would be great if you could answer me the following questions:
    1.) Do basically the same answers as in the FAQ apply to EntityStores?
    2.) How expensive is an EntityStore (memory, speed - creating and releasing)?
    3.) What would you recommend? I have to do a lot of secondary-index lookups which are restricted to one account. There are about 10 Entity classes and about 1000 accounts.
    Thanks,
    Christian

    Hello,
    > 1.) Do basically the same answers as in the FAQ apply to EntityStores?
    Yes, they apply.
    > 2.) How expensive is an EntityStore (memory, speed - creating and releasing)?
    For each store, there is an underlying Database per entity class and per secondary key. And for each store there is a catalog database, which is also kept in memory.
    > 3.) What would you recommend? I have to do a lot of secondary-index lookups which are restricted to one account. There are about 10 Entity classes and about 1000 accounts.
    I recommend using a single store for best performance and resource utilization.
    Mark
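    For illustration, a minimal sketch of the single-store layout Mark recommends, using the BDB JE Direct Persistence Layer. The Order entity, its fields, and the environment path are hypothetical; the key point is the per-account secondary key and subIndex() for lookups restricted to one account.

    import java.io.File;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import com.sleepycat.persist.EntityCursor;
    import com.sleepycat.persist.EntityIndex;
    import com.sleepycat.persist.EntityStore;
    import com.sleepycat.persist.PrimaryIndex;
    import com.sleepycat.persist.SecondaryIndex;
    import com.sleepycat.persist.StoreConfig;
    import com.sleepycat.persist.model.Entity;
    import com.sleepycat.persist.model.PrimaryKey;
    import com.sleepycat.persist.model.Relationship;
    import com.sleepycat.persist.model.SecondaryKey;

    // Hypothetical entity: one of the ~10 entity classes, with a secondary
    // key on the account so lookups can be restricted to a single account.
    @Entity
    class Order {
        @PrimaryKey
        long id;

        @SecondaryKey(relate = Relationship.MANY_TO_ONE)
        String accountId;

        String data;
    }

    public class SingleStoreExample {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setTransactional(true);   // for DTP you would additionally configure XA
            Environment env = new Environment(new File("/tmp/je-env"), envConfig);

            StoreConfig storeConfig = new StoreConfig();
            storeConfig.setAllowCreate(true);
            storeConfig.setTransactional(true);
            // One store shared by all ~1000 accounts, per the recommendation above.
            EntityStore store = new EntityStore(env, "allAccounts", storeConfig);

            PrimaryIndex<Long, Order> byId =
                store.getPrimaryIndex(Long.class, Order.class);
            SecondaryIndex<String, Long, Order> byAccount =
                store.getSecondaryIndex(byId, String.class, "accountId");

            // Restrict a secondary-index scan to a single account via subIndex().
            EntityIndex<Long, Order> oneAccount = byAccount.subIndex("acct-42");
            EntityCursor<Order> cursor = oneAccount.entities();
            try {
                for (Order o : cursor) {
                    System.out.println(o.id + " " + o.data);
                }
            } finally {
                cursor.close();
            }
            store.close();
            env.close();
        }
    }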

  • How to realize the other three events triggered by a sequence of events occurs?

    How to realize the other three events triggered by a sequence of events that occurs between Labview, and the time interval between three events for the 50ms?
    1110340053

    Are you another student who feels the need to post their "student ID" number as part of the message? There is really no need to do that; it is meaningless to us.
    You should ask your instructor to answer your questions rather than allowing them to turn the whole class of students loose on the forums, usually all asking the same identical question.
    At least in your case, the question is different from most we've seen.  Unfortunately, you haven't asked a clear enough question for us to understand what you are talking about.
    What "events" are you talking about?  Post a VI that demonstrates the code that you have written so far.

  • Performance Impact When Using SNC Communication

    Hello,
    Does anybody know if and how much performance impact there is if we use SNC for communication between the SAP Server and SAPGUI?
    I think there are two areas that may be impacted; Network and server CPU.
    For network load, I did find a part in "Front-End Network Requirements for SAP Business Solutions" document saying "overhead of roughly 350 bytes per user interaction step" but it does not specify the type of encryption.  I wonder if there is any other info on this?
    For CPU impact, how much overhead should I consider for sapgui access?
    I see no field for this in the quicksizer and I can't seem to find any white papers on this subject.
    Thank you in advance.

    >
    Peter Adams wrote:
    > Ken,
    >
    > if you plan to use SAPcryptlib for SNC between SAP servers, then you should use a SAPcryptolib-compatible solution for the SNC communication between SAPGUI and SAP server, and there is only one vendor who can provide this. Let me know, if you need help finding it. My contact information is in my SDN business card.
    Just so Kan is clear - it is not legal to use the SAP cryptolib provided by SAP for SNC between SAP GUI and SAP servers, so if X.509 is the desired mechanism, you need to purchase additional software from the company Peter works for to provide SAP GUI SNC-based SSO. I think instead, Kan might be using the free SAP-supplied SNC Kerberos library, which is why I asked him to confirm this in my last post. I doubt he is interested in buying any third-party software.
    > As to the performance discussion: first of all, yes, there will be a small performance impact if SNC is used (no matter which type or implementation), but from our experience with many actual SNC implementations, I can state that this is practically not relevant. It is not noticeable by users. There were never any performance discussions with customers. See also SAP Note 1043694.
    I agree with this - the performance impact is not noticed by users, but the system managers who look after the servers where SAP is installed, and the team responsible for the network need to be aware of any differences (if any) when SNC is turned on and when SNC is turned off. I think this is why Kan is asking these questions, not because he is concerned about users noticing any difference when they logon to SAP.
    > Just a first quick comment on certain statements above: Tim's arguments for proving his overall statement are not conclusive from my perspective. Nor do I think his overall statement itself is correct.
    The facts I mentioned are well-known facts, e.g. symmetric crypto is far better from a performance point of view than asymmetric. I know the examples I showed, which I found in a quick Google search, were not conclusive, but they were offered as initial examples, not necessarily the best ones. This is why I specifically mentioned that if you search Google yourself you will see many more references where comparisons are made between Kerberos (symmetric) and PKI (asymmetric).
    > First of all, he only selects one aspect of performance - CPU impact of encryption algorithms.
    No, I didn't. Some of the examples I referred to also discuss other differences. I also mentioned other differences, such as memory and the protection level used when configuring SNC.
    > But for a true comparison, you'd have to look at all relevant aspects (latency, network overhead, ...).
    Yes, I agree. No doubts here.
    >Network performance overhead is usually worse with Kerberos than with PKI.
    This is not true. When SAP is using SNC, the GSS-API standard is used, so the only network communication involves SAP software sending a standard GSS token from the workstation to the SAP server, and this GSS token is often about the same size regardless of which mechanism is used. So any network performance differences are not related to the mechanism but rather to the complexity of the cryptography used on each end (mostly on the server side).
    >Second, you need to look at the specific usage scenario. For example, the first report referenced by Tim is an analysis of different Token Profile mechanisms for WS-Security, for one specific implementation. This does not allow one to draw any conclusions for the SNC use case in general, and certainly not for a specific implementation. It does not take the overhead of encrypting the message content into account. Third, Tim associates PKI exclusively with asymmetric encryption. Yes, it is well known that asymmetric algorithms are slower than symmetric ones, but it is also well known that the encryption of the message content (by far the majority of the data) happens with symmetric encryption algorithms in the PKI scenario. With PKI-based SNC, you can even select a symmetric algorithm and use a more performant one than the ones that Kerberos prescribes.
    Kerberos works with many different symmetric algorithms as well, so mentioning that the algorithm is selectable is not relevant to any comparison.
    > To summarize, I will try and collect facts that will support the opposite point of view. From our practical experience, the performance overhead is not relevant, and criteria like consistency with SAPcryptolib, strength of security, ease of administration, choice of authentication and encryption mechanism, etc. are much more important.
    >
    > Peter

  • Use case for financial data

    Hi All,
    I have a question about a potential use case for Oracle Spatial. The data structures are as follows:
    Clients
    Account (has a balance, which can be zero or above zero)
    Client to account relationship
    E.g.
    Client C1 is a borrower to Account A1 (balance = 0)
    Client C1 is a co borrower to Account A2 (balance > 0)
    Client C2 is a co borrower to Account A1 (balance > 0)
    Client C3 is a co borrower to Account A3 (balance > 0)
    Currently, the database is modeled as a set of three tables, e.g.
    Client
    ID
    DATA
    Account
    ID
    DATA
    BALANCE
    CLIENT_TO_ACCOUNT
    CLIENT_ID
    RELATIONSHIP (E.g borrower)
    ACCOUNT_ID
    Business limitations:
    We are not interested in independent graphs for which all accounts have balance = 0 (let's call these inactive graphs); however, we might occasionally need to query them
    Users are interested in vertices/edges with accounts which have balance = 0 but are linked (up to level N) to an active account, for analysis purposes
    There is no well-defined root (e.g. there can be 2 or more clients which are co-borrowers on the same account)
    99% of queries will be against active graphs
    Graphs are mutable, e.g. new relationships (edges) may be created/deleted during the day
    Users are potentially interested in free navigation of a whole independent graph, starting from the root.
    The root is determined by a certain business rule
    Need to process active graphs daily in bulk
    Problems which I am trying to solve:
    Limit the amount of data which may need to be processed - based on analysis of the current system, we only need 5% of the data plus some delta for 99% of the processing
    Make sure performance does not degrade over time as we accumulate more historical (processed) data - we cannot delete accounts with balance = 0, as new relationships may arrive involving new accounts with balance > 0
    Current solution that I am thinking of:
    Artificially partition the data universe into active and inactive graphs. All indexes would be local to the two partitions.
    E.g.
    GROUP
    GROUP_ID PK
    ACTIVE_FLAG (partition key)
    CLIENT
    GROUP_ID (PARTITION BY FK TO GROUP)
    ACCOUNT
    GROUP_ID (PARTITION BY FK TO GROUP)
    CLIENT_TO_ACCOUNT
    GROUP_ID (PARTITION BY FK TO GROUP)
    The issues I am seeing right now:
    1. Graphs (groups) may potentially be unlimited in size, so I will need to artificially limit the size using some dividing algorithm - leading to
    2. Graphs(groups) may need to be joined or divided
    3. Graphs(groups) will have to be activated/deactivated - e.g. moved to different partitions.
    4. Data loading, activation/deactivation algorithms are not simple
    So I am thinking about Oracle Spatial (Network) to model this problem.
    Questions:
    1) Can I model this problem using Oracle Spatial?
    2) Will I gain any performance improvement?
    3) Is there any explanation or white paper on how to do this for this particular type of problem?
    4) Will the solution based on Oracle Spatial solve the problems outlined above?
    5) Will my solution (without using Oracle Spatial) work at all? Or are there some fundamental issues?
    Thank you!

    Either add a LOV to the JobID attribute definition in the VO (if the JobID will be editable) or simply add the job description to the select statement (join to the job table) as a reference attribute

  • Oracle: slow performance with SELECT using ojdbc14 and connection pooling

    Hello,
    I've been working hard these last days to solve a performance problem with our customer, who uses an Oracle 10g database. For testing I used our Oracle 9.2.0.1.0 database, which shows the same symptoms. Nothing I tried solved it: the performance when using Oracle is much slower than with the other databases. I cannot trust this result, so I need some advice. What am I missing to improve the performance on the Java side?
    The web application runs fast on MySQL 4.x and SQLServer 2000, but on the above-mentioned Oracle it was always 4 times slower. The web application uses a lot of simple SELECT statements without complicated joins and so on (because it should run on many different databases). After some days of creating tests within this web application, I was not able to find any entry point for a change. All the database servers I'm using have only the default configuration after a common installation.
    To reduce the complexity I wrote a simple Java application with connection pooling, using only the latest libraries from apache-commons (dbcp, pool) and the latest ojdbc14 for Oracle 9.2.
    First the results, then the code: MySQL needed less than 1000 milliseconds, SQLServer around 1000 milliseconds, and Oracle over 2000 milliseconds. I stopped pooling and the results for Oracle were even worse: over 18000 milliseconds (MySQL: 2500, SQLServer: 4100).
    I changed the classes for Oracle and used the class oracle.jdbc.pool.OracleConnectionCacheImpl from the ojdbc14-library. No difference (around 100 milliseconds more or less).
    The only SELECT statement works on this table, which has one index on HICTGID and contains 259 rows:
    CREATE TABLE HIERARCHYCATEGORY (
      HICTGID                 NUMBER (19)   NOT NULL,
      HICTGLEVEL              NUMBER (10)   NOT NULL,
      HICTGEXTID              NUMBER (19)   NOT NULL,
      HICTGEXTPARENTID        NUMBER (19)   NOT NULL,
      HICTGNAME               VARCHAR2(255) NOT NULL
    );
    The application simply loops through this table using
    SELECT Hictgid, Hictgname FROM HIERARCHYCATEGORY WHERE HICTGID = ?
    but I always open a connection before this query and close it afterwards, so I use the pooling as much as possible. That's all the SQL I'm using.
        protected static DataSource setupDataSource(String sDriver, String sUrl, String sUser, String sPwd) throws SQLException {
            BasicDataSource ds = new BasicDataSource();
            ds.setDriverClassName(sDriver);
            ds.setUsername(sUser);
            ds.setPassword(sPwd);
            ds.setUrl(sUrl);
            // The maximum number of active connections:
            ds.setMaxActive(3);
            // The maximum number of active connections that can remain idle in the pool,
            // without extra ones being released, or zero for no limit:
            ds.setMaxIdle(3);
            // The maximum number of milliseconds that the pool will wait (when there are no available connections)
            // for a connection to be returned before throwing an exception, or -1 to wait indefinitely:
            ds.setMaxWait(3000);    
            return ds;
    }
    I can switch by using external properties between three databases (Oracle, MySQL and SQLServer), and if I want I can switch pooling off. All the actions I'm interested in are logged by Log4J.
    public static Connection getConnection() throws SQLException {
        Connection result = null;
        String sJdbcDriver = m_oJbProp.getString("jdbcDriver");
        String sJdbcUrl = m_oJbProp.getString("databaseConnection");
        String sJdbcUser = m_oJbProp.getString("dbUsername");
        String sJdbcPwd = m_oJbProp.getString("dbPassword");
        try {
            if (m_oJbProp.getString("useConnectionPooling").equals("true")) {
                if (log.isDebugEnabled()) {
                    log.debug("ConnectionPooling true");
                }
                if (null == m_ds) {
                    m_ds = setupDataSource(sJdbcDriver, sJdbcUrl, sJdbcUser, sJdbcPwd);
                    if (log.isDebugEnabled()) {
                        log.debug("DataSource created");
                    }
                }
                result = m_ds.getConnection();
            } else {
                // No connection pooling:
                if (log.isDebugEnabled()) {
                    log.debug("ConnectionPooling false");
                }
                try {
                    Class.forName(sJdbcDriver);
                    result = DriverManager.getConnection(sJdbcUrl, sJdbcUser, sJdbcPwd);
                } catch (ClassNotFoundException cnf) {
                    log.error("Exception: Class Not Found. ", cnf);
                    System.exit(0);
                }
            }
        } (.. ErrorHandling ...)
        return result;
    }
    Here is the code fragment which is doing the work:
    StringBuffer sb = new StringBuffer();
    while (lNextBottom <= lNextCeiling) {
        con = getConnection();
        innerSelStmt = con.prepareStatement("SELECT Hictgid, Hictgname FROM HIERARCHYCATEGORY WHERE HICTGID = ?");
        innerSelStmt.setLong(1, lNextBottom);
        rsInner = innerSelStmt.executeQuery();
        if ((rsInner != null) && (rsInner.next())) {
            sb.append(rsInner.getLong(1) + ", " + rsInner.getString(2) + "\r");
            if (log.isDebugEnabled()) {
                log.debug("Inner Statement: " + rsInner.getLong(1) + "\r");
            }
        }
        rsInner.close();
        con.close();
        lNextBottom++;
    }
    if (log.isInfoEnabled()) {
        log.info("\rResult values: Hictgid, Hictgname \r");
        log.info(sb.toString());
    }
    and the main method:
    public static void main(String[] args) {
        try {
            long lStartTime = System.currentTimeMillis();
            JdbcBasic oJb = new JdbcBasic();
            boolean bSuccess = false;
            bSuccess = oJb.getHierarchycategories();
            if (log.isInfoEnabled()) {
                log.info("Running time: " + (System.currentTimeMillis() - lStartTime));
            }
            if (null != m_ds) {
                printDataSourceStats(m_ds);
                shutdownDataSource(m_ds);
                if (log.isInfoEnabled()) {
                    log.info("Datasource closed.");
                }
            }
        } catch (SQLException sqe) {
            log.error("SQLException within main-method", sqe);
        }
    }
    My database values are
    databaseConnection=jdbc:oracle:thin:@SERVERDB:1521:ora
    jdbcDriver=oracle.jdbc.driver.OracleDriver
    databaseConnection=jdbc:jtds:sqlserver://SERVERDB:1433/testdb
    jdbcDriver=net.sourceforge.jtds.jdbc.Driver
    databaseConnection=jdbc:mysql://localhost/testdb
    jdbcDriver=com.mysql.jdbc.Driver
    dbUsername=testusr
    dbPassword=testpwd
    Thanks for your reading and maybe for your help.

    A few comments.
    There is of course another difference between your test cases than just the database: there is also the driver. And I suspect that, at least in the case of the jTDS driver, it is helping you along where you are doing something silly, and the Oracle driver is not.
    Before I explain the next part, I would say the speed differences between MS-SQL and MySQL look about right; I think you are aiming here for MS-SQL-level performance, not MySQL. (For a bunch of reasons MySQL is inherently faster, but there are MANY drawbacks as well, which have been well discussed in previous threads.)
    Here is where I believe your problem lies:
    while (lNextBottom <= lNextCeiling) {
        con = getConnection();
        innerSelStmt = con.prepareStatement("SELECT Hictgid, Hictgname FROM HIERARCHYCATEGORY WHERE HICTGID = ?");
        innerSelStmt.setLong(1, lNextBottom);
        rsInner = innerSelStmt.executeQuery();
        if ((rsInner != null) && (rsInner.next())) {
            sb.append(rsInner.getLong(1) + ", " + rsInner.getString(2) + "\r");
            if (log.isDebugEnabled()) {
                log.debug("Inner Statement: " + rsInner.getLong(1) + "\r");
            }
        }
        rsInner.close();
        con.close();
        lNextBottom++;
    }
    There are at least four things wrong with the above.
    1) Why are you preparing the statement INSIDE the loop? Let us for a moment say that the loop will spin 100 times. That means you are preparing the same statement 100 times. This is bad. It is also very relevant because, for example, the jTDS driver caches the prepared statements you make, so while you try to prepare the statement 100 times it only actually does it once... but with Oracle I don't know for sure what it is doing; if it is preparing on each pass, then that part is going to take 100 times longer than it should.
    2) You are opening and closing the connection on each pass through the loop... also a terrible idea. You need to fix this first so that you can repeatedly use the same prepared statement.
    3) Why are you looping in the first place? More on this later.
    4) Where do you close the PreparedStatement? It doesn't look like you do.
    Okay, so for starters your loop should look a lot more like this:
    con = getConnection();
    innerSelStmt = con.prepareStatement("SELECT Hictgid, Hictgname FROM HIERARCHYCATEGORY WHERE HICTGID = ?");
    while (lNextBottom <= lNextCeiling) {
        innerSelStmt.setLong(1, lNextBottom);
        rsInner = innerSelStmt.executeQuery();
        if ((rsInner != null) && (rsInner.next())) {
            sb.append(rsInner.getLong(1) + ", " + rsInner.getString(2) + "\r");
        }
        rsInner.close();
        lNextBottom++;
    }
    innerSelStmt.close();
    con.close();
    I think the code above (and you can put your debug stuff back if you want), which uses ONE connection and ONE prepared statement, will improve your performance dramatically.
    The other question, though, that I would ask is why in the hell you are doing 100 or whatever number of queries anyway. This can all be done in ONE query, which again will improve performance.
    Your query and such should look like this, I think:
    String sql = "SELECT Hictgid, Hictgname FROM HIERARCHYCATEGORY WHERE HICTGID >= ? AND HICTGID <= ?";
    PreparedStatement ps = conn.prepareStatement(sql);
    ps.setLong(1, lNextBottom);
    ps.setLong(2, lNextCeiling);
    ResultSet rs = ps.executeQuery();
    while (rs.next()) {
        // your appending-to-string-buffer code goes here
    }
    rs.close();
    ps.close();
    and I can't understand why you're not doing that in the first place.
