OIM and High Availability

Hi All,
A question about OIM, high availability, and licensing:
If I license OIM and the OIM connectors for a production server and I want to have a second machine that serves as a standby machine, do I have to license OIM and the connectors for the standby machine as well?
Thanks.

A scheduled task runs on a specific server, so running in a cluster won't really give you redundancy for scheduled tasks.
If you have a very big scheduled task you have a few options:
1. Introduce milestones
Change the code so that the process does the work in batches and records a milestone after each batch. If something happens you can return to the last milestone instead of having to start over from scratch (a rough sketch follows after this list).
2. Multiple communicating threads
Implement batching and also run multiple threads that coordinate the effort and make sure there are always instances running on all nodes. If not, spawn more instances until you have threads on all nodes. If a node dies, its work is journaled back and divided between the other threads. This would work but would probably get very complex.
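To make the milestone idea concrete, here is a rough sketch of a batched task that checkpoints its progress. This is purely illustrative: the class, method and checkpoint names are made up, and a real OIM scheduled task would plug this into the scheduler API and persist the checkpoint in a table or task parameter.

```java
// Illustrative sketch only: a long-running job that works in batches and records a
// milestone after each batch, so a restart resumes from the last checkpoint instead
// of starting over. All names here are hypothetical, not OIM API classes.
import java.util.ArrayList;
import java.util.List;

public class BatchedReconTask {

    private static final int BATCH_SIZE = 500;

    public void execute() throws Exception {
        long lastProcessedId = loadMilestone();               // resume from the last checkpoint
        List<Long> batch;
        while (!(batch = fetchNextBatch(lastProcessedId, BATCH_SIZE)).isEmpty()) {
            for (Long id : batch) {
                processRecord(id);                            // the actual per-record work
                lastProcessedId = id;
            }
            saveMilestone(lastProcessedId);                   // persist progress after every batch
        }
    }

    // The stubs below stand in for whatever persistence you choose
    // (a database table, a task parameter, etc.).
    private long loadMilestone() { return 0L; }
    private void saveMilestone(long id) { }
    private List<Long> fetchNextBatch(long afterId, int size) { return new ArrayList<>(); }
    private void processRecord(long id) { }
}
```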
Oracle recently ran training courses on OIM HA. I attended the course but I can't really distribute the material, so please contact your Oracle account manager and ask them about the training.
Hope this helps
/M

Similar Messages

  • For a true load balancing and high-availability OHS, OPMN, and mod_oc4j

    I have read this link about enabling clustering on the OC4J 9.0.4 standalone app server:
    http://www.oracle.com/technology/docs/tech/java/oc4j/htdocs/getstart.htm#1015479
    To test the clustering, start up the load balancer by executing "java -jar loadbalancer.jar".
    C:\OC4J_EXTENDED\j2ee\home>java -jar loadbalancer.jar
    In a future release of Oracle Application Server, loadbalancer.jar will be
    desupported. Because of this, we strongly suggest that you discontinue your use
    of loadbalancer.jar in this release. Under high loads, loadbalancer.jar may not
    function properly. For a true load balancing and high-availability solution,
    please move to use OHS, OPMN, and mod_OC4J. For more information, please see
    http://otn.oracle.com/products/ias/ohs/content.html
    Balancer initialized...
    What load balancer should I use for web clustering?
    <frontend host="balancer-host" port="balancer-port" />
    balancer-host=localhost
    balancer-port=80
    For all nodes I specified the same host and port in http-web-site.xml. Is that correct?
    I completed all the steps and ran http://localhost:6666/session/SessionServlet, hitting it 3 times.
    Then, in a different browser, I opened http://localhost:7777/session/SessionServlet;
    instead of continuing at 4, the counter started from 1 again (a sketch of such a counter servlet appears at the end of this post).

    Can I use this loadbalancer.jar or not?
    How do I set up mod_oc4j with a standalone app server?
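    For reference, the counter being exercised by that test behaves roughly like the sketch below. This is a hedged approximation, not the actual SessionServlet shipped with the OC4J demo; the class name and attribute key are assumptions. Because the count lives in the HTTP session, hitting the second node should continue at 4 only if session state is actually replicated across the cluster.

    ```java
    // Rough sketch of a session-counter servlet; if session replication works, the count
    // continues when the request lands on another node, otherwise it restarts at 1.
    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    public class SessionCounterServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            HttpSession session = req.getSession(true);          // create or join the session
            Integer count = (Integer) session.getAttribute("count");
            count = (count == null) ? 1 : count + 1;
            session.setAttribute("count", count);                 // this attribute must be replicated
            resp.getWriter().println("Session count: " + count);
        }
    }
    ```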

  • Load balancing and High Availability topology

    Our Forms 6i client-server application currently runs on a Citrix farm of 20 Windows 2000 boxes (IBM blade servers with 2 CPUs and 2 GB of memory).
    The application supports 2000 users.
    We are moving to AS 10g R2 and Forms 10g, and the goal is to use the same hardware, 20 Windows boxes (or fewer), for intranet web deployment.
    What will be our best choices for application Load balancing and High Availability?
    Hardware load balancer, Web Cache, mod-oc4j? Combinations?
    Any suggestions, best practices, your experience?

    Gerd, I understand that you are running 10g web forms through the browser, but using Citrix for deployment. This means that in addition to the Application Server and Forms runtime sessions, there will be a separate browser session opened for each user. What is the advantage of this configuration?
    Michael, we are aware that Citrix is not supported by Oracle as a deployment platform. That only means that prior to contacting Oracle Support we have to reproduce the problem in a standard environment. It has never been a problem to reproduce an issue :) We have been using Citrix as a deployment platform for Forms 6i client/server for 4 years, but now we are forced to upgrade to 10g.
    We are familiar with various Load balancing options available. The question is which option is the most "workable" in our case.

  • Tuxedo and High Availability

    Can you provide some information on how Tuxedo can be configured in a high availability environment? Specifically, we are running Tux 7.1 on AIX 4.3 with HACMP/ES. I am planning on running with a 'cascading N+1' configuration and have concerns over the ability of the standby node to take over a failed node successfully, due to configuration dependencies on the machine name. Is there a white paper detailing the use of Tuxedo in a high availability environment?

    Found the answers and thought I would share them.
    1. Can load balancing be achieved in an MP setup, or is this a high availability configuration?
    Both - MP supports load balancing and high availability.
    2. In an MP setup, can a workstation client continue to work even after the master node is migrated? If so, can we have both (or all) nodes and their WSLs listed in WSNADDR for this to happen?
    Correct.

  • OIM 11g High Availability Deployment

    Hi Experts,
    I'm deploying OIM 11g in a high availability configuration, following the Oracle docs: http://download.oracle.com/docs/cd/E14571_01/core.1111/e10106/imha.htm#CDEFECJF. I have successfully installed and configured OIM & SOA in a WebLogic domain on 'OIMHOST1'. To propagate the configuration from 'OIMHOST1' to 'OIMHOST2' I packed (using pack.sh) the domain on 'OIMHOST1' and unpacked (using unpack.sh) it on 'OIMHOST2', then updated the Node Manager by executing setNMProps.sh, and finally started the Node Manager. In order to test that everything is fine, and following the documentation, I'm trying to perform the following steps, but I'm not succeeding.
    I MUST SAY THAT I'M RUNNING ON A SINGLE STANDARD EDITION DB INSTANCE AND NOT RAC AS MENTIONED IN THE ORACLE DOCS. PLEASE CLARIFY WHETHER RAC IS REQUIRED; FOR NOW I'M IN A DEVELOPMENT ENVIRONMENT, SO I THINK RAC IS NOT REQUIRED.
    8.9.3.8.3 Start the WLS_SOA2 and WLS_OIM2 Managed Servers on OIMHOST2
    Follow these steps to start the WLS_SOA2 and WLS_OIM2 managed servers on OIMHOST2:
    Stop the WebLogic Administration Server on OIMHOST2. Use the WebLogic Administration Console to stop the Administration Server.
    Start the WebLogic Administration Server on OIMHOST2 using the startWebLogic.sh script under the $DOMAIN_HOME/bin directory. For example:
    /u01/app/oracle/admin/OIM/bin/startWebLogic.sh > /tmp/admin.out 2>&1 &
    Validate that the WebLogic Administration Server started up successfully by bringing up the WebLogic Administration Console.
    Here it is not possible to start the AdminServer on OIMHOST2. First of all, it looks like the boot.properties file under WLS_OIM_DOMAIN_HOME/servers/AdminServer/security is not valid: the first time I tried to execute the startWebLogic.sh script it asked for a username/password. I updated boot.properties (vi boot.properties) and manually set the username and password in clear text; this time the startWebLogic.sh script got past this stage, but then fails with:
    <Error> <util.install.help.BuildMasterHelpSet> <BEA-000000> <IOException ioe java.io.IOException: No such file or directory>
    <Error> <oracle.adf.share.config.ADFMDSConfig> <BEA-000000> <MDSConfigurationException encountered in parseADFConfigurationMDS-01330: unable to load MDS configuration document
    MDS-01329: unable to load element "persistence-config"
    MDS-01370: MetadataStore configuration for metadata-store-usage "writeable" is invalid.
    MDS-00503: The metadata path "/u01/app/oracle/product/Middleware/user_projects/domains/IDMDomain/sysman/mds" does not contain any valid directories.
    I have verified that this "mds" directory does not exist on OIMHOST2, as reported by the IOException, but it does exist on OIMHOST1. From here it is not possible for me to follow Oracle's documentation. I tested this by starting the AdminServer on OIMHOST1 and starting the WLS_SOA2 and WLS_OIM2 managed servers from the OIMHOST1 AdminServer console. I have tested 2 ways:
    1. With all managed servers on OIMHOST1 shut down: in this case, the managed servers on OIMHOST2 work as expected.
    2. With all managed servers on OIMHOST1 RUNNING: in this case I first started the SOA2 managed server, and after that I started the OIM2 managed server; when it finishes the boot process, the following message appears in the server's output:
    <Warning> <org.quartz.impl.jdbcjobstore.JobStoreCMT> <BEA-000000> <This scheduler instance (servername.domainname1304128390936) is still active but was recovered by another instance in the cluster. This may cause inconsistent behavior.>
    Start the WLS_SOA2 managed server using the WebLogic Administration Console.
    Start the WLS_OIM2 managed server using the WebLogic Administration Console. The WLS_OIM2 managed server must be started after the WLS_SOA2 managed server is started.
    8.9.3.9 Validate the Oracle Identity Manager Instance on OIMHOST2
    Validate the Oracle Identity Manager Server instance on OIMHOST2 by bringing up the Oracle Identity Manager Console using a web browser.
    The URL for the Oracle Identity Manager Console is:
    http://oimvhn2.mycompany.com:14000/oim
    Log in using the xelsysadm password.
    Your help is highly appreciated.
    Regards
    Juan

    Hi Vaasu,
    I have succeeded in deploying OIM in HA; right now my customer and I are working on the installation of the web tier. Now I have a better understanding of HA concepts and the way WebLogic works - really nice, but a little tricky.
    All the magic about HA is properly configuring the network interfaces on each Linux box (in our case). First of all you need to create 2 new floating IPs on each Linux box (google "how to create a virtual IP in Linux" if you don't know how): clone and modify your 'eth0' network script to create the virtual IPs.
    Follow the procedure in the HA guide: http://download.oracle.com/docs/cd/E14571_01/core.1111/e10106/imha.htm#CDEFECJF
    create DB schemas with RCU
    install weblogic
    install SOA
    patch SOA
    install IAM
    ---if you are working on a virtual machine, it is a good idea to take a snapshot here---
    Create and configure the WebLogic domain (pay special attention when configuring the cluster); see step 13 of 8.9.3.2 Creating and Configuring the WebLogic Domain for OIM and SOA on OIMHOST1. Here you need to configure:
    For the oim_server1 entry, change the entry to the following values:
    Name: WLS_OIM1
    Listen Address: the IP that is configured on eth0:1 of Linux box1
    Listen Port: 14000
    For the soa_server1 entry, change the entry to the following values:
    Name: WLS_SOA1
    Listen Address: the IP configured on eth0:2 of Linux box1
    Listen Port: 8001
    For the second OIM Server, click Add and supply the following information:
    Name: WLS_OIM2
    Listen Address: the IP configured on eth0:1 of Linux box2
    Listen Port: 14000
    For the second SOA Server, click Add and supply the following information:
    Name: WLS_SOA2
    Listen Address: the IP configured on eth0:2 of Linux box2
    Listen Port: 8001
    Click Next.
    In Step 16, ensure you are using the UNIX tab to configure the machines; also ensure that for machine1 you use the IP configured on the eth0 interface of Linux box1, and the same for machine2.
    Please confirm you have performed 8.9.3.3.2 Update Node Manager on OIMHOST1.
    If everything is OK you should be able to start the AdminServer as described in the guide.
    Configure OIM: 8.9.3.4.2 Running the Oracle Identity Management Configuration Wizard. In my case I don't need LDAPSync, so I skipped that section. If you configure OIM properly, you must then perform 8.9.3.5 Post-Configuration Steps for the Managed Servers.
    Restart the AdminServer, then from the WebLogic console start OIM and SOA. If Node Manager is properly configured, SOA and OIM should run properly. Update the deployment mode and Coherence settings as described in the guide and verify that OIM runs correctly on Linux box1.
    Propagate OIM from Linux box1 to Linux box2 as described in the guide, using pack and unpack (you MUST use the same filesystem directory structure on both Linux boxes).
    Update and start the Node Manager as described in the guide.
    VERY IMPORTANT OBSERVATION
    The guide says:
    8.9.3.8.3 Start the WLS_SOA2 and WLS_OIM2 Managed Servers on OIMHOST2
    Follow these steps to start the WLS_SOA2 and WLS_OIM2 managed servers on OIMHOST2:
    Stop the WebLogic Administration Server on OIMHOST2. Use the WebLogic Administration Console to stop the Administration Server.
    JUAN'S OBSERVATION:
    IT IS NOT POSSIBLE TO START OR STOP THE ADMINSERVER ON HOST2, SINCE THE ADMIN SERVER WAS CONFIGURED TO LISTEN ON THE IP ADDRESS OF THE eth0 INTERFACE ON HOST1, SO IT IS NOT POSSIBLE TO RUN IT ON HOST2. I THINK AN ADDITIONAL PROCEDURE SHOULD BE FOLLOWED TO CONFIGURE THE ADMINSERVER FOR HA IN AN ACTIVE-PASSIVE MODE.
    Start the WebLogic Administration Server on OIMHOST2 using the startWebLogic.sh script under the $DOMAIN_HOME/bin directory. For example:
    /u01/app/oracle/admin/OIM/bin/startWebLogic.sh > /tmp/admin.out 2>&1 & -----NOT APPLICABLE
    Validate that the WebLogic Administration Server started up successfully by bringing up the WebLogic Administration Console. -----NOT APPLICABLE
    Start the WLS_SOA2 managed server using the WebLogic Administration Console. ----START SOA2 FROM THE CONSOLE RUNNING ON HOST1, IT DOESN'T MATTER
    Start the WLS_OIM2 managed server using the WebLogic Administration Console. The WLS_OIM2 managed server must be started after the WLS_SOA2 managed server is started. ------ START OIM2 FROM THE CONSOLE RUNNING ON HOST1
    HERE YOU SHOULD BE ABLE TO LOG IN TO THE OIM2 SERVER AS DESCRIBED IN THE GUIDE; YOU DON'T NEED TO EXECUTE THE config.sh SCRIPT, THIS SHOULD WORK AS DESCRIBED.
    Server migration should work in a straightforward way if you have configured the floating IPs as described. I have not configured the persistence yet, since my customer is not yet able to set up shared storage.
    I hope this helps; feel free to comment or complement.
    By the way, do you know how to set up a valid SSL certificate on a Windows 2003 server? I need it to test an Exchange 2007 instance I'm trying to integrate.
    Regards
    Juan

  • Exchange 2010 to Exchange 2013 Migration and Architect a resilient and high availability exchange setup

    Hi,
    I currently have a single Exchange 2010 server that has all the roles, supporting about 500 users. I plan to upgrade to 2013 and move to a four-server HA Exchange setup (a CAS array with 2 CAS servers and one DAG with 2 mailbox servers). My goal is to plan out the transition in steps with no downtime. Email is most critical for my company.
    Exchange 2010 is running SP3 on Windows Server 2010, with a separate server as archive. In the new setup, rather than having a separate server for archiving, I am just going to put that on a separate partition.
    Here is what I have planned so far.
    1. Build out four servers: 2 CAS and 2 mailbox servers. The mailbox servers have 4 partitions each: one for the OS, a second for the DB, a third for logs and a fourth for archives.
    2. Prepare AD for Exchange 2013.
    3. Install the Exchange roles: CAS on two servers and mailbox on 2 servers. Add a DAG. Someone suggested that I use an odd number of members, so 3 or 5. Is that a requirement?
    4. I am using a third-party load balancer for the CAS array instead of NLB, so I will be setting that up.
    5. Do the post-install steps to ready up the new CAS. While doing this, can I use the same parameters as assigned on Exchange 2010, e.g. can I reuse the webmail URL for Outlook Anywhere, OAB, etc.?
    6. Once this is done, I plan to move a few mailboxes as a test to the new mailbox servers / DAG.
    7. Test Outlook setups on the new servers, plus inbound and outbound email tests.
    Once this is done, I can migrate over and point all my MX records to the new servers.
    Please let me know your thoughts and what I am missing. I would like to solidify a flowchart of all the steps I need to do before I start the migration.
    thank you for your help in advance

    Hi,
    Okay, you can use 4 virtual servers, but there is no need to deploy dedicated server roles (CAS + MBX). It is better to deploy multi-role Exchange servers, also virtual! You could install 2 multi-role servers and, if the company grows, install another multi-role server, and so on. It's much simpler, better, and less expensive.
    The CAS array is only an Active Directory object, nothing more. The load balancer controls on which CAS a user's session will terminate. You can read more at
    http://blogs.technet.com/b/exchange/archive/2014/03/05/load-balancing-in-exchange-2013.aspx - also, no session affinity is required.
    First, build the complete Exchange 2013 architecture. High availability for your data is provided by a DAG, and for your CAS you use a load balancer.
    On Channel 9 there is a lot of material from MEC:
    http://channel9.msdn.com/search?term=exchange+2013
    Migration:
    http://geekswithblogs.net/marcde/archive/2013/08/02/migrating-from-microsoft-exchange-2010-to-exchange-2013.aspx
    Additional information:
    http://exchangeserverpro.com/upgrading-to-exchange-server-2013/
    Hope this helps :-)

  • Windows Event Collector - Built-in options for load balancing and high availability ?

    Hello,
    I have a working collector. config is source initiated, and pushed by GPO.
    I would like to deploy a second collector for high availability and load balancing. What are the available options? I have not found any guidance in TechNet articles.
    As a low-cost option, is it fine to simply start using DNS round robin with a common alias for both servers, pushed as the collector name through GPO?
    In my GPO policy, if I individually declare both servers, events are forwarded twice, once to each server. That does indeed cover high availability, but it is not really optimized.
    Thanks for your help.

    Hi,
    >>As a low cost option, is it fine to simply start using DNS round-robin with a common alias for both servers pushed as a collector name through GPO ?
    Based on the description, we can utilize DNS round robin to distribute workloads and increase fault tolerance. By default, DNS uses round robin to rotate the order of RR data returned in query answers where multiple RRs of the same type exist for a queried DNS domain name. This feature provides a simple method for load balancing client use of web servers and other frequently queried multihomed computers. Also by default, DNS will perform round-robin rotation for all RR types (a quick client-side check is sketched below).
    Regarding DNS round robin, the following article can be referred to for more information.
    Configuring round robin
    http://technet.microsoft.com/en-us/library/cc787484(v=ws.10).aspx
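    If you want to see what the round-robin alias actually hands back to clients, a quick check along these lines can help. This is a sketch only; the alias name is a placeholder, and note that the JVM caches DNS answers, so you may need to lower networkaddress.cache.ttl or run it several times to observe the rotation.

    ```java
    // Resolve the collector alias a few times and print the A records in the order returned;
    // with DNS round robin enabled the order should rotate between lookups.
    import java.net.InetAddress;

    public class RoundRobinCheck {
        public static void main(String[] args) throws Exception {
            String alias = "collectors.contoso.com";   // hypothetical common alias for both collectors
            for (int i = 1; i <= 3; i++) {
                StringBuilder line = new StringBuilder("Lookup " + i + ": ");
                for (InetAddress addr : InetAddress.getAllByName(alias)) {
                    line.append(addr.getHostAddress()).append(' ');
                }
                System.out.println(line);
            }
        }
    }
    ```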
    Best regards,
    Frank Shen

  • WAE Inline Adapter and High Availability

    What is the recommendation to achieve high availability (and load sharing?) using two WAEs in a DC, with inline adapters being used for interception rather than WCCPv2? Any ideas greatly appreciated.

    Ken,
    Generally I would recommend interception with either WCCP or an L4-7 load balancer like ACE/ACE Appliance/CSM. Both of these are considered best practice for DC deployments, as they can both handle high amounts of traffic, handle WAEs going offline, and load balance many WAEs in a cluster. WCCP might be slightly more flexible in where and how you can deploy it.
    Neither of the following interception techniques is recommended for data centers, but I list them just as FYI:
    - Inline cards in appliances can be configured in serial to give you high availability, but you don't get load balancing. The second appliance only starts optimizing traffic once the first WAE starts passing traffic through on overload. Inline interception is usually used for remote offices, though it has been deployed in DCs.
    - PBR can be configured for high availability with next-hop features, however there is no load balancing available. The first WAE has to fail completely before a second WAE can come into play. Overload traffic is not optimized at all.
    Hope that helps,
    Dan

  • InterConnect and High Availability

    We use Oracle InterConnect (9.0.4) to integrate the E-Business Suite with our legacy applications. We are currently planning the production deployment of these integrations. The documentation on how to use RAC (Real Application Clusters) is rather limited. Does anyone have experience with InterConnect in conjunction with RAC?
    We need high availability, but we don't want to run multiple instances of an adapter simultaneously (to ensure serialisation of the processing of the messages coming from the legacy applications). How are persistence, caching, etc. handled in such a setup?
    thank you,
    Claude Boussemaere

    Claude,
    If it is just HA that you want in IC (which means running the adapters against a RAC database for the hub and the spoke connections), all you need to do is configure the adapter.ini and hub.ini parameters with the connection information of the alternate nodes.
    In terms of persisting runtime data in the local file system, you can turn the pipelining option off.
    adapter.ini parameters:
    //Set this to false if you want to turn off the pipelining from the bridge towards the hub.
    agent_pipeline_to_hub=true
    //Set this to false if you want to turn off the pipelining from the hub towards the bridge.
    agent_pipeline_from_hub=true
    The adapters will then not persist the messages in local file persistence. Any message at any given time will either be sitting in the hub DB or in the EBS instance.
    The adapters cache only the metadata locally, hence you won't have to worry about the metadata cache.
    If the hub/spoke DB fails, the adapter's JDBC connections will fail over to the alternate instance(s) in a round-robin fashion (see the sketch below).
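    As a client-side illustration of that failover behaviour (a sketch only, not the hub.ini/adapter.ini syntax itself; the host names, service name and credentials are placeholders), a thin-driver JDBC URL that lists the alternate nodes looks like this:

    ```java
    // Connect using a descriptor that lists both hub nodes; if the first ADDRESS is down,
    // the driver tries the next one, which is the failover behaviour described above.
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class HubFailoverDemo {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:oracle:thin:@(DESCRIPTION="
                       + "(ADDRESS_LIST=(LOAD_BALANCE=off)(FAILOVER=on)"
                       + "(ADDRESS=(PROTOCOL=tcp)(HOST=hubnode1)(PORT=1521))"
                       + "(ADDRESS=(PROTOCOL=tcp)(HOST=hubnode2)(PORT=1521)))"
                       + "(CONNECT_DATA=(SERVICE_NAME=ichub)))";
            try (Connection conn = DriverManager.getConnection(url, "ichub", "password")) {
                System.out.println("Connected via: " + conn.getMetaData().getURL());
            }
        }
    }
    ```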
    HTH,
    Maneesh

  • Grid Control 12c and high availability databases

    Hello,
    In the context of high availability databases, my configuration is as follows:
    2 AIX servers on 2 sites with shared storage -> site1 and site2.
    During normal operations the databases are on site1. In case of disaster, the databases can fail over to site2.
    On my Grid Control, 2 agents are installed: one for the AIX server on site1 and one for the AIX server on site2.
    The databases have been discovered on AIX:site1.
    The listener uses a virtual address and can also fail over.
    What are the possibilities on the Grid Control side?
    Do I also have to discover the databases on site2?
    Currently, the listener is seen as down on the site that does not have the databases.
    Is there a way to configure only the listener virtual address in Grid Control?
    Regards

    Hi,
    Basically you have to relocate the targets to the new agent (the one installed on site2). I haven't done that, but I expect it to work fairly easily and smoothly. Refer to the following section of the Cloud Control 12c manual:
    Configuring Targets for Failover in Active/Passive Environments
    Regards,
    Sve

  • OIM 11g high availability - is LDAP required for Weblogic credential store

    Hi all,
    Trying to understand whether we need an LDAP in an HA architecture with [OIM/SOA] - [OIM/SOA/Admin]?
    The HA guide: http://docs.oracle.com/cd/E14571_01/core.1111/e10106/imha.htm#CDEFECJF
    It does not mention this requirement; in fact it specifically says you need an LDAP "only for LDAPSync-enabled Oracle Identity Manager installations and for Oracle Identity Manager installations that integrate with Oracle Access Manager."
    However, I have seen mention of issues with viewing tasks in SOA from OIM:
    How To : ORABPEL-30504: After Oim 11g Installation, Approval Tasks Cannot Be Read Through OIM Console
    Stating that when using OIM, SOA, and an isolated Admin Server, you need to switch to a proper LDAP as the credential and policy store:
    http://docs.oracle.com/cd/E17904_01/core.1111/e12036/net.htm#CIHIDJCC
    "2.4 LDAP as Credential and Policy Store
    With Oracle Fusion Middleware, you can use different types of credential and policy stores in a WebLogic domain. Domains can use stores based on XML files or on different types of LDAP providers. When a domain uses an LDAP store, all policy and credential data is kept and maintained in a centralized store. However, when using XML policy stores, the changes made on managed servers are not propagated to the Administration Server unless they use the same domain home.
    An Oracle Fusion Middleware SOA Suite Enterprise Deployment Topology uses different domain homes for the Administration Server and the managed server as described in the Section 2.3, "Shared Storage and Recommended Directory Structure." Derived from this, and for integrity and consistency purposes, Oracle requires the use of an LDAP as policy and credential store in context of Oracle Fusion Middleware SOA Suite Enterprise Deployment Topology. To configure the Oracle Fusion Middleware SOA Suite Enterprise Deployment Topology with an LDAP as Credential and Policy store, follow the steps in Section 11.1, "Credential and Policy Store Configuration."
    So which is it? Does anyone know?
    Thanks,
    Wayne.

    Note that you can use the internal LDAP that comes with WebLogic for your users and groups if you want. When you have multiple domains, you have a problem with this set-up, as the internal LDAP is coupled to a specific domain. This means that users you created in one domain are not visible in the other. When using a separate LDAP that contains the users, you can configure in each domain an authenticator that points to that LDAP. In this way you can share the users across multiple domains.
    When you are planning to use one domain you can stick with the internal LDAP if you want.
    An example set-up (that uses Access Manager, not Identity Manager) can be found here: http://middlewaremagic.com/weblogic/?p=7819,
    which might help you in how to proceed.

  • OSMF and High Availability Streams

    Currently we are working on high availability stream failover for our FMS systems.
    We are working with two simultaneous streams from different servers, and using our FMS system to switch over between the two; however, I noticed a behavior in OSMF that is causing us an issue.
    According to the NetStreamTimeTrait class, when the player receives a NetStream.Play.UnpublishNotify it fires signalComplete(), which in turn switches the player to stopped.
    Unfortunately, the Unpublish notification comes straight from the "root" ingest point of the stream, and it appears that it cannot be intercepted before it gets to the client player. Our FMS/Ops team doesn't want to set up a dummy stream to each player, as that would claim more resources/overhead were the number of streams/clients to get large. I've tested this switchover with a standard OVP player, and the issue does not occur there (the stream continues playing after the stream has switched over), and I believe it's because it doesn't do any actions associated with the Unpublish event.
    Currently we are looking at extending our current plug-in to subclass/rewrite the NetStreamTimeTrait class (and all classes that touch it), but that's not my first choice.
    Does anyone know another option?
    Thanks in advance.
    -Will

    One other possibility would be to try to suppress the Unpublish.Notify event.  You could do this by accessing the NetStream (I'm fairly sure how to do so is described in a dozen other threads), adding an event listener with a high priority, checking to see if the received event is an Unpublish.Notify event, and if so calling event.stopImmediatePropagation() so that the NetStreamTimeTrait never receives it.
    The proxy approach that Wei mentions would work as follows:  You would create a ProxyElement subclass that proxies the VideoElement.  The ProxyElement subclass would gain access to the NetStream, and override the TimeTrait with a custom TimeTrait.  For most operations, it would simply pass the values from the proxied TimeTrait up (i.e. you would add listeners to the proxied TimeTrait, and then update the overriding TimeTrait accordingly).  However, if it receives a "complete" event, and that event is in response to an Unpublish.Notify (which you would determine by listening for the event on the NetStream), then it would not propagate that same event up to the overriding TimeTrait.  Hopefully that makes sense -- if not, I can provide more specifics.

  • High load and high availability

    Hi, my problem is simple, I guess... I have ONE database with only one table, and I'm using sqlldr to insert data into it. The problem is that there is a lot of data to be inserted, and constantly. I'm trying to figure out what would be the best practice to improve the availability of my database. If I drop the indexes, sqlldr will insert the data quickly, but I really think that's not a good thing to do, because it would "kill" the SELECTs against my DB from the client side.
    Can anyone give me a hand or some guidance here?
    Thanks in advance, and sorry for my bad English!
    SD

    There are many possibilities.
    Some depend on knowing the database version and the operating system.
    Some depend on your skill
    Some depend on the available hardware and the ability and desire to throw hardware at the problem
    Some depend on the supporting statistics, both for the environment and the runtime
    Some depend on whether you are serious enough to actually learn the architecture of the tool to use it wisely
    My guidance would be to purchase Tom Kyte's book from Apress.com and use the utilities he includes in the book to understand where the problems are and what can be done about them.

  • Search index - and high availability

    Running Search on WFE(A) and APP(B) server - with this topology.
    The company relies heavily on the search index, so a reset of the index isn't always an option, and the rebuild takes a lot of time, so we want to avoid an index reset.
    Is it possible to build a server X that holds a copy of the index for servers A+B, and then, when a reset is necessary on servers A+B, tell SP2013 that it now has to utilize the index that "sits" on server X while the rebuild is done on servers A+B?
    Can I build a separate index for a specific content source/site collection, so that the content source e.g. "http://contoso.com" is in search indexY and "http://contoso.com/sites/" is in indexZ, and that way only reset indexZ and not touch indexY?
    Does a second "Search Service Application" contribute to or solve any of this?

    Hi,
    1. Yes, you can add a new index partition consisting of server A and server X (replica), and add another index partition consisting of server B and server Y (replica). For more information, you can refer to this article:
    http://technet.microsoft.com/en-us/library/jj862355(v=office.15).aspx#Search_Index_Part
    2. No, you cannot. But you can reset the index at the site level and list level.
    https://support.office.com/en-US/article/Manually-request-crawling-and-reindexing-of-a-site-9afa977d-39de-4321-b4ca-8c7c7e6d264e
    3. No, you do not need to create a second "Search Service Application".
    Thanks,
    Eric
    Eric Tao
    TechNet Community Support

  • RAC and High Availability

    Reading Matt Manfredonia's Q&A about load balancing, I still wonder how the "outer world" sees the RAC.
    What happens if one of two nodes vanishes from the network? What if this node was the dispatcher?
    The user/application has to switch over somehow, doesn't it?
    Klaus

    For more information:
    Real Application Clusters Administration
    Real Application Clusters Concepts
    Real Application Clusters Deployment and Performance
    Real Application Clusters Documentation Online Roadmap
    Real Application Clusters Guard I - Concepts and Administration
    Real Application Clusters Guard I Configuration Guide Release 2 (9.2.0.1.0) for UNIX Systems: AIX-Based Systems, Compaq Tru64 UNIX, HP 9000 Series HP-UX, and Sun Solaris
    Real Application Clusters Setup and Configuration
    http://otn.oracle.com/pls/db92/db92.docindex?remark=homepage
    Joel Pérez
