Clustering/Failover strategy

Hi,
          I would like comments on the following design strategy as to a) whether it
          is feasible and b) is there a better way of doing it:
          We have a main server that will host two instances of WebLogic. Another
          server is intended as a disaster recovery machine and will host a further
          two instances of WebLogic.
          All four instances of WebLogic will be clustered together. However, on the
          main server, a database will also be running and WebLogic will be pulling
          over megabytes of data every minute from the database. Because of this I do
          not want the two instances of WebLogic running on the disaster recovery
          machine to service client requests, as they would be transferring the data
          over the network. I want these two instances purely to be for disaster
          recovery.
          To achieve this I was thinking of using a weighting of 100 on both instances
          of WebLogic on the main server, while the disaster recovery instances would
          have a weighting of 0. The result would be that both instances of
          WebLogic on the main server would receive all client requests, and if they
          went down the two disaster recovery instances would kick in and client
          connections to WebLogic would be preserved. However, during normal
          processing, the disaster recovery instances would not receive any client
          requests.
          Is this a sensible approach or too over-engineered? Comments appreciated.
          Thanks,
          Myles Jeffery
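For what it's worth, the weighting approach described above maps onto WebLogic's weight-based load algorithm. A hedged sketch of the configuration, using the 5.x/6.x-era property names as I recall them (verify the exact property names, and whether a weight of 0 behaves as a pure standby rather than a never-selected member, against your WebLogic release's clustering documentation; server names are illustrative):

```properties
# cluster-wide properties: select the weight-based load algorithm
weblogic.cluster.defaultLoadAlgorithm=weight-based

# per-server properties file on each main-server instance:
weblogic.system.weight=100

# per-server properties file on each disaster-recovery instance:
weblogic.system.weight=0
```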
          

I agree; given the limitation of 2 physical boxes, I would rather put a db server on one and WL app
          server(s) on another. This seems like a much more robust architecture.
          Gene Chuang
          Join Kiko.com!
          "Sriram Narayan" <[email protected]> wrote in message news:[email protected]...
          >
          > hi
          >
          > IMHO, it's quite a dubious strategy to have your database hosted on the same box as your appserver. I
          > don't think that the concept of entity beans came about with such a configuration in mind.
          >
          > do correct me if i have got my basics wrong.
          >
          > thanks
          > sriram
          > "Myles Jeffery" <[email protected]> wrote:
          > >[snip]
          >
          

Similar Messages

  • Best failover strategy for 2 node EBS 11i?

    Environment: 11.5.10.2
    Number of Node : 2
    DB& Concurrent Manager on 1 Node
    APPS on another node
    Concurrent Manager and APPS are using SHARED APPL_TOP.
    When the DB node or the APPS node fails, what is the best failover method that can be adopted for E-Business Suite?

    John,
    With the current setup you have, high availability cannot be achieved for the following reasons:
    If Node 1 fails --> Database is down and you cannot run it on Node 2
    If Node 2 fails --> You will need to run all application services on Node 1 (as a single node installation), and this is not feasible as lots of changes need to be done in the application context file
    To achieve high availability you need to have RAC and/or Data Guard implemented. The following notes provide details about RAC and Data Guard implementation with Oracle E-Business Suite:
    Note: 341437.1 - Business Continuity for Oracle Applications Release 11i Using Oracle Real Application Clusters and Physical Standby Database
    https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=341437.1
    Note: 403347.1 - MAA Roadmap for the E-Business Suite
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=403347.1
    If you cannot afford to implement RAC and/or Data Guard, you may go with an OS cluster (contact your vendor for details), but again this will not protect you if both servers and/or the shared storage go down.

  • CAPS clustering/failover

    Hi all,
    I've seen several posts where people indicated that they are using a clustered and/or failover environment.
    This is something that we want to implement. Could you tell me what products or OS that you are using?
    We currently are using SuSE Linux ES sp3 but are moving to Red Hat 5.
    Regards,
    John

    Hi John,
    We are now using an active/active configuration, with basic SW components on HP-UX. All our components are designed to support an active/active configuration.
    We plan to use either Java MQ with HA features in 5.1 or 5.2, or JMS Grid. With this we want to overcome the data freeze if a host crashes completely.
    Regards, Chris

  • Clustering/Failover/ORB

    Hello,
    I am trying to make a C++ client connect to WLS6b2 on Win2000.
    Although there is a lack of documentation, I hope I will be able
    to make the example in rmi_iiop work.
    Besides that, I was wondering, and please tell me if I am wrong
    or not, whether with a C++ client using IIOP we no longer have
    load balancing and failover if we have a cluster of WLS instances.
    Let me explain: the example shows a C++ client using the IOR
    of the WLS instance, which means I can only connect to this one.
    Do I no longer have the smart stub approach in the C++ client case?
    Thank you.
    Thierry

    Hi,
    My comments below.
    "Eduardo Ceballos" <[email protected]> wrote in message
    news:[email protected]...
    Hey!
    Thierry Janaudy wrote:
    Hi Eduardo,
    1. I successfully ran the examples in rmi_iiop yesterday.
    Cool. Thank you. (C++, Inprise VisiBroker for C++, WLS6b2)
    Good. I expect more examples to follow after the GA date. Calling a CORBA
    server from within WLS, and an SSL example, to name two.
    That is really nice. It does save a lot of time having these examples.
    2. About the clustering stuff, on the client side, the ORB as
    you said does not manage anything.
    What if the IOR had the DNS name of the cluster instead of the IP address
    of a single server? The DNS name resolution will round-robin to another
    cluster member every time DNS resolves the name, no?
    Okay, it works.
    So your idea about a
    special API giving back a set of IORs would be nice, but
    you may have to provide more information. What happens
    if I create a SFSB on a "Master server", WLS replicates
    the content on a "Slave server". If my initial request to
    the master fails, from your API, I should be able to know
    the IOR of the Slave (which becomes the master), and therefore
    be able to re-initiate a connection, and get the remote
    reference to the "same" (conceptually) SFSB.
    But... hmmm, how do I do that?
    I think if you get the next IOR and narrow it to the SFSB interface you
    should be ok.
    But this IOR must represent the slave server where the information is
    replicated.
    Here is an example:
    Let's have a cluster or 3 WLS instances, W1, W2, W3
    My C++ client gets back an IOR for W1 and create a SFSB, which is replicated
    on W2.
    From your API, I know that W1 is the master and W2 is the slave.
    W1 fails, I get a reference to W2 (Master now), and W3 is the slave. I am
    able to
    narrow and get the right SFSB.
    But now, if I have an IOR with a DNS name, and if I am using DNS routing,
    when W1 fails, I may go to W3, and then I have lost my SFSB.
    So using DNS/IOR we have the load balance but not the failover.
    The IOR API gives us a way to write smart CORBA stub, but we have
    to write the code for failover and load balancing.
    Must I store the EJB Handle (no, because it is tied to one app server)?
    Handle would be alright, too, if it holds the DNS name of the cluster.
    Okay.
    Same problem with R/W entity beans (it does not matter for
    SLSB and R/O EB).
    With entity beans, you need at-most-once activation, which you will get in
    the next release.
    Which release are you talking about? WLS 6 FINAL?
    This API has to be defined in IDL. Or a Remote
    Interface from which we can generate IDL anyway...
    Yup, either way, you need an API for the C++ client to get a snapshot of
    the list of replicants.
    Here are some thoughts, I have to read the spec for the EJB
    Handle.
    3. I have found the newsgroup for RMI/IIOP. I have another
    question. I would like to do a callback from an SFSB to a C++
    CORBA client. Therefore I need and want to store this
    reference as a private field of my SFSB... is there any doc
    for this particular point?
    You mean you want the SFSB to call on a CORBA server object that is hosted
    in the C++ ORB, yes? That works, but there's no example yet. What you have
    to do is define an RMI interface and RMI server class, generate the WLS RMI
    stubs (using weblogic.rmic), and use COS Naming to bind the C++ server
    object into the JNDI tree, where the WLS object can look it up. Of course,
    you don't need to bind servers into the JNDI tree to get this to work; I'm
    just suggesting that you try to get this working before you attempt to
    integrate the whole thread into EJB, etc.
    Yes. I would like a C++ CORBA client to be able to create an SFSB, give it
    a reference (in the create method) to a C++ callback object, and store this
    reference as a private field of the SFSB. Therefore my SFSB would be able
    to make calls to this C++ callback object.
    Best regards,
    Thierry
    http://www.mycgiserver.com/~janaudy/
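The DNS-name-in-the-IOR idea discussed in this thread can be sketched in plain Java (hypothetical class; this only illustrates the mechanism: resolving all addresses behind one name and rotating through them gives load balancing, but, as the thread concludes, it cannot route a failed call back to the specific replica holding an SFSB's state):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

/** Hedged sketch: round-robin over every address behind one DNS name. */
public class DnsRoundRobin {
    private final InetAddress[] members;
    private int next = 0;

    public DnsRoundRobin(String clusterName) throws UnknownHostException {
        // One name, potentially many A records - one per cluster member.
        this.members = InetAddress.getAllByName(clusterName);
    }

    /** Pick the next member address, wrapping around the list. */
    public synchronized InetAddress pick() {
        InetAddress chosen = members[next % members.length];
        next++;
        return chosen;
    }
}
```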

  • Oracle 9i Real Application Clusters Failover

    I am running CFMX 6.1 with the 3.5 JDBC drivers and
    connecting to a 2 node Oracle 9i Real Application Cluster. The
    problem I am having is when one of the 2 nodes becomes unavailable;
    CFMX does not seem to be failing over seamlessly to the second
    node. It's almost as if the database connections need to first reach
    the timeout limit set up in the datasource setting in CF Admin
    before they will start to fail over. I’m not 100% sure that
    they are always even failing over after that has expired. I usually
    end up having to restart the CFMX service to renew the DB connections.
    This is a pest when doing DB maintenance since it causes errors on
    our site.
    I do have Maintain Connections checked, with a 5 minute
    timeout. I have my datasources setup as "Other" and am using the
    following connection string:
    jdbc:macromedia:oracle://Node1:1521;SERVICENAME=heartdrp;AlternateServers=(Node2:1521);LoadBalancing=true
    Has anyone had similar problems with
    CFMX not failing over DB connections on an Oracle RAC system? Ideally,
    the connections would immediately go to the other node in the RAC
    if one node went down, but CF seems to keep trying to connect to
    the original node, thus not allowing for a true "failover" setup.
    I can provide more info if needed......THANKS!!
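For reference, a variant of that connection string with the DataDirect retry options added (option names are from DataDirect's Connect for JDBC documentation; whether the 3.5 driver build shipped with CFMX 6.1 honors them should be verified against its own docs):

```
jdbc:macromedia:oracle://Node1:1521;SERVICENAME=heartdrp;AlternateServers=(Node2:1521);LoadBalancing=true;ConnectionRetryCount=2;ConnectionRetryDelay=3
```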

    This weekend I will begin that odyssey...
    I will give some feedback by Monday!
    FS

  • DHCP Failover strategy

    Hi. 
    I would like to configure DHCP server failover on Windows 7.
    On one side I have a Windows 2008 R2 server which currently leases addresses. I recently had a crash of this server.
    In order to ensure availability of DHCP at the next crash, I wish to install a freeware DHCP server and configure a PowerShell script to start DHCP on failover of the first one.
    Another option is to split the pool of leases. Is it possible to do this with Windows 7 software?
    Thanks a lot for your help.

    Hi,
    The Windows DHCP server on the 2008 R2 platform supports the following DHCP high-availability methods:
    DHCP in a Windows failover cluster
    Split-scope DHCP
    The related KB:
    Understand and Deploy DHCP Failover
    http://technet.microsoft.com/en-us/library/dn338978.aspx
    Hope this helps.

  • Clustering failover issue

    Hi All,
    During a scheduled reboot, when the SQL resources of the cluster are moved from the primary node to the secondary, it takes a long time for the databases to come online on the secondary node. But this is not the same when the resources are moved from the secondary node back to the primary.
    Please let me know if any settings need to be changed in Maximum failures in the specified period and
    Period (hours)?
    Thanks

    The Perform Volume Maintenance Tasks permission for instant file initialization only works for data files that need to be created. In both FCI and AG, the data files don't need to be created, because in an FCI the database files are on shared storage, and in an AG the data files have already been created prior to synchronizing the replicas.
    I would check the cluster error log for details about what is happening when you fail over between cluster nodes. Keep in mind that beyond the crash recovery process (for FCI) and the undo process (for AG) that happen when you fail over, there is the checking of AD and DNS objects associated with the cluster resource group. It could be that, in a multi-subnet cluster, the AD and DNS objects have not been replicated over and have to be created for the first time upon failover to the second node; since they are then already created, failing back to the first node doesn't take much time. Again, your best bet is the cluster error log, which will tell you more details.
    Edwin Sarmiento SQL Server MVP | Microsoft Certified Master
    Blog |
    Twitter | LinkedIn
    SQL Server High Availability and Disaster Recovery Deep Dive Course

  • EJB3 Clustering Failover

    Assume I have an EJB that uses another EJB. I inject it with EJB3 annotations. The calling EJB starts executing. While it is running, the server on which the other EJB is running fails. The calling EJB then tries to invoke the EJB whose server has failed. What happens?
    Does the Oracle Cluster transparently detect the failure, re-inject a valid EJB instance, and reinvoke the new instance? That would be best case.
    Does the Oracle AOS simply throw the standard RMI exception, which we can presumably trap. If we trap it, detect that it is an exception caused by a failed cluster element (rather than say a business logic error or a standard runtime error like deadlock, etc) can we take the usual action, i.e. get another EJB instance from a valid cluster server and re-invoke it? If so, is there any problem setting the EJB3 instance variable that was originally set by the EJB3 dependency injection framework?
    Thanks in advance for an insight.

    Hi,
    Have you found out what caused you to get the BasicRemoteRef as a reference?
    I'm having the same problem: I have two references to different EJBs; one is resolved as ClusterableRemoteRef, which never fails, and the one resolved as BasicRemoteRef is the one that fails when, for example, I invoke both services while my WebLogic 11 is shutting down.
    Regards,
    Juan
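The trap-and-reinvoke option raised in the question can be sketched at the language level (hypothetical helper, not Oracle's actual behavior; member calls are modelled as Callables, whereas real code would obtain a fresh EJB reference via JNDI and catch RemoteException, or the vendor's failure subtype, rather than Exception):

```java
import java.util.List;
import java.util.concurrent.Callable;

/** Hedged sketch of client-side failover: try cluster members in order
 *  until one call succeeds. */
public class ClusterRetry {
    public static <T> T invokeWithFailover(List<Callable<T>> members) throws Exception {
        if (members.isEmpty()) {
            throw new IllegalArgumentException("no cluster members");
        }
        Exception last = null;
        for (Callable<T> member : members) {
            try {
                return member.call();   // e.g. ejbRef.businessMethod(...)
            } catch (Exception e) {
                last = e;               // member presumed down; try the next one
            }
        }
        throw last;                     // every member failed
    }
}
```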

  • Clustering read-only bean-managed entity ejbs

              I'm designing a data caching approach that relies on using read-only entity ejbs with bean-managed persistence. My design is based on the fact that WebLogic blocks on entity bean access by concurrent users for a given bean instance (unique primary key). I would like to keep only one entity bean instance active (timeoutSetting=0) for each primary key, for all users to share. That way I only have to hit the database one time to initially populate data in the entity bean. I'm worried about this approach in a WebLogic clustered environment. From reading notes in this newsgroup and other doc, it appears that WebLogic might not use one instance of the entity bean (based upon unique primary key) in a clustered environment. Is that true (that being multiple users could get their own instance of the entity bean with the same primary key)?
              Thanks,
              Bryan
              

    Typically, the read-write EJBs are on each WL server instance, so there is
              no remote invocation -- it is all done by reference.
              Cameron Purdy
              Tangosol, Inc.
              http://www.tangosol.com
              +1.617.623.5782
              WebLogic Consulting Available
              "Bryan Dixon" <[email protected]> wrote in message
              news:[email protected]...
              >
              > I guess I'm confused about read-write (not read-only) entity beans being
              pinned or not. This is from WebLogic 5.1 EJB doc:
              > "read-write entity EJBs do not use a clustered EJBObject stub; a client's
              method calls to a particular EJB always go to a single WebLogic Server
              instance. If the server that a client is using fails, the client must
              re-find the entity EJB using the cluster-aware home stub."
              >
              > Doesn't that mean the entity bean instance for a primary key is pinned to
              a single WebLogic Server instance? Maybe I'm just misunderstanding
              terminology about what a "particular EJB" is - I was thinking it is an
              entity bean instance for a unique primary key.
              >
              > Thanks,
              > Bryan
              >
              >
              > "Cameron Purdy" <[email protected]> wrote:
              > >>I was thinking that read-write entity beans were pinned.
              > >
              > >Not unless you pin them. Basically, that means that the JAR/XML that
              > >contains/specifies the EJB only is on one server.
              > >
              > >> So if two separate weblogic instances in a cluster did a find on an
              entity
              > >ejb with the same primary key and then performed some business method on
              > >that entity ejb, would there really be two separate bean instances in
              each
              > >weblogic instance for the same primary key?
              > >
              > >If it is not pinned, yes.
              > >
              > >--
              > >Cameron Purdy
              > >Tangosol, Inc.
              > >http://www.tangosol.com
              > >+1.617.623.5782
              > >WebLogic Consulting Available
              > >
              > >
              > >"Bryan Dixon" <[email protected]> wrote in message
              > >news:[email protected]...
              > >>
              > >> Toa, thanks again.
              > >>
              > >> There are a couple of things I'm not clear about though. One is that I
              > >want one instance of an entity bean per primary key, not a singleton of
              the
              > >entity bean itself. How would JNDI in a clustered environment help me
              > >there?
              > >>
              > >> The other question I have is about having multiple instances of an
              entity
              > >bean for the same primary key in the cluster. I was thinking that
              > >read-write entity beans were pinned. So if two separate weblogic
              instances
              > >in a cluster did a find on an entity ejb with the same primary key and
              then
              > >performed some business method on that entity ejb, would there really be
              two
              > >separate bean instances in each weblogic instance for the same primary
              key?
              > >>
              > >> Thanks again,
              > >> Bryan
              > >>
              > >> "Tao Zhang" <[email protected]> wrote:
              > >> >
              > >> >"Bryan Dixon" <[email protected]> wrote:
              > >> >>
              > >> >>The reason I was wanting one instance per primary key is that I want
              to
              > >use this entity bean to cache some data from database tables. This data
              > >doesn't change frequently, so we were wanting to get it from this entity
              > >bean's memory instead of constantly hitting the database. This data is
              > >global to all users, so we don't want to store it in stateful session
              beans.
              > >> >>
              > >> >>After reading more about the read-only cache-strategy it doesn't
              appear
              > >that any sycnhronization will occur if the entity bean for a given
              primary
              > >key is updated (state data is updated) in one weblogic instance, that
              change
              > >will not get synced up with other weblogic instances for that same
              primary
              > >key. Is that correct?
              > >> >>
              > >> >It's correct. If you do want to have exact one copy in the cluster.
              You
              > >can read Using JNDI in cluster environment.
              > >> >
              > >> >>If I deploy this entity bean with read-write cache-strategy the
              WebLogic
              > >doc reads as if I will get one instance per primary key that is pinned to
              > >one WebLogic instance and I won't get any fail-over or load-balancing on
              the
              > >ejbObject (the entity bean instance). Did I read this correctly? If
              that
              > >is the case, what is the advantage of setting up read-write entity beans
              to
              > >be clusterable - just the Home objects? I definitely could be
              > >misunderstanding something in the doc since I'm very new to clustering.
              > >> "Tao
              > >> >Zhang"
              > >> ><[email protected]> wrote:
              > >> >>>
              > >> >
              > >> >It's not only one instance in the cluster. Probably many instances.
              > >> >
              > >> >If you use 2 tier clustering, failover will not happen because of
              > >co-location. But if you use 3 tier cluster, you can write the special
              code
              > >in the client side to do failover and load-balance.
              > >> >
              > >> >In a 2 tier cluster, actually the ejb load balancing and failover is
              > >almost useless.
              > >> >
              > >> >But in 3 tier, you can use it.
              > >> >
              > >> >Hope this help.
              > >> >
              > >> >
              > >> >
              > >> >>>"Bryan Dixon" <[email protected]> wrote:
              > >> >>>>
              > >> >>>>Thanks Tao.
              > >> >>>>
              > >> >>>>A couple more questions...
              > >> >>>>I was planning deploying this entity bean with the read-only
              > >cache-strategy which means our transaction attribute would be
              > >TXN_NOT_SUPPORTED. Also, our db isolation is TRANSACTION_READ_COMMITTED.
              > >> >>>>
              > >> >>>>Based upon how I was planning on deploying this entity bean, would
              > >WebLogic create an instance of the bean for each primary key in each
              > >cluster? I'm just trying to figure out how many duplicate bean instances
              > >for the primary key I could have across all clusters. I was really just
              > >wanting one instance that is shared among all clients and was hoping that
              > >the clustering would provide me with fail-over if that one cluster went
              > >down.
              > >> >>>>
              > >> >>>If you use 3 tier cluster structure, it's impossible to know how
              many
              > >instances of ejb with the same primary key. Probably one instance for
              each
              > >wls instance.
              > >> >>>In wls5.1, it's impossible to host only one read-only entity bean instance in the 3-tier cluster, because read-only entity beans are clusterable in both home and remote interfaces.
              > >> >>>
              > >> >>>>Regarding making the bean a pinned service, which it sounds like I
              > >might have to do to get the results I want, how do I do that? Is that a
              > >deployment descriptor setting? Also, if I make it a pinned service, do I
              > >get any fail-over suport by clustering the bean?
              > >> >>>>
              > >> >>>For the pinned service, you can just deployed on one or several
              server
              > >instances. The per-server properties file is a good place to put the
              > >weblogic.ejb.deploy property. If only one pinned service in the cluster,
              you
              > >can't get fail over. If the that server instance fails, the home stub
              will
              > >be removed from the jndi in other server instances.
              > >> >>>
              > >> >>>Why do you must need only one instance in the cluster? Do you want
              > >exact-only-copy? You can read Using JNDI doc about its in cluster
              > >environment.
              > >> >>>
              > >> >>>
              > >> >>>
              > >> >>>
              > >> >>>>Thanks again,
              > >> >>>>Bryan
              > >> >>>>
              > >> >>>>
              > >> >>>>
              > >> >>>>"Tao Zhang" <[email protected]> wrote:
              > >> >>>>>
              > >> >>>>>
              > >> >>>>>Bryan Dixon <[email protected]> wrote in message
              > >> >>>>>news:[email protected]...
              > >> >>>>>>
              > >> >>>>>> I'm designing a data caching approach that relies on using
              > >read-only
              > >> >>>>>entity ejbs with bean-managed persistence. My design is based on
              the
              > >fact
              > >> >>>>>that WebLogic blocks on entity bean access by concurrent users for
              a
              > >given
              > >> >>>>>bean instance (unique primary key). I would like to keep only one
              > >entity
              > >> >>>>>bean instance active(timeoutSetting=0) for eacy primary key for
              all
              > >users to
              > >> >>>>>share. That way I only have to hit the database one time to
              > >initially
              > >> >>>>>populate data in the entity bean. I'm worried about this approach
              in
              > >a
              > >> >>>>>WebLogic clustered environment. From reading notes in this
              newsgroup
              > >and
              > >> >>>>>other doc, it appears that WebLogic might not use one instance of
              the
              > >entity
              > >> >>>>>bean (based upon unique primary key) in a clustered environment.
              Is
              > >that
              > >> >>>>>true (that being multiple users could get their own instance of
              the
              > >entity
              > >> >>>>>bean with the same primary key)?
              > >> >>>>>>
              > >> >>>>>
              > >> >>>>>
              > >> >>>>>It's true. In a cluster environment, each wls instance can have
              their
              > >ejb
              > >> >>>>>instance. The block of concurrent access to the ejb data is up to
              > >your
              > >> >>>>>transaction attribute and isolation level and your database.
              > >> >>>>>
              > >> >>>>>If you only want to keep one instance active, you can make the
              > >read-only
              > >> >>>>>entity bean a pinned service, to be deployed in one instance. But
              the
              > >> >>>>>network overhead is worse.
              > >> >>>>>
              > >> >>>>>
              > >> >>>>>> Thanks,
              > >> >>>>>> Bryan
              > >> >>>>>
              > >> >>>>>
              > >> >>>>
              > >> >>>
              > >> >>
              > >> >
              > >>
              > >
              > >
              >
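The "one bean instance per primary key per server" behavior this thread converges on can be illustrated with a plain per-server cache sketch (hypothetical names; an analogy for the discussion, not WebLogic's actual bean cache: in a cluster, each server holds its own copy of this map, so the same primary key can be cached once per WLS instance with no cross-server synchronization):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

/** Hedged sketch: a per-server, per-primary-key read-only cache. */
public class PerKeyCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> loader;                     // stands in for the DB read
    public final AtomicInteger dbHits = new AtomicInteger(); // counts loader calls

    public PerKeyCache(Function<K, V> loader) {
        this.loader = loader;
    }

    /** The loader runs at most once per key on this server;
     *  subsequent gets are served from memory. */
    public V get(K primaryKey) {
        return cache.computeIfAbsent(primaryKey, k -> {
            dbHits.incrementAndGet();
            return loader.apply(k);
        });
    }
}
```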
              

  • Re: Failover for SO's with context

    Right, delivery of events is not guaranteed by Forte, even though
    it is reasonable to rely on it in the case of two Forte servers on a LAN.
    I would not go towards a solution for securing events delivery by
    an acknowledgement mechanism (ack event or shared object notifier),
    because of increased complexity and performance overhead.
    On the other hand, a second simple security level can be provided by
    enabling your mirror/backup SO to be refreshed at will, by letting it get a
    snapshot of the current transient data to be mirrored, so you can:
    - Start your partitions in any order (the mirror partition will first take a
    snapshot of the transient data, then will register for mirror events)
    - Start and stop the mirror partition at will, without disrupting the
    application
    Then, if you do not trust event delivery, you can reinitialize your mirror
    periodically (say every 12 hours) to minimize the risk of losing transient
    data events.
    Again, this solution is suited to low volumes of transient data.
    I guess what Chad means by journaling is writing to a log file any event
    (in a large sense) happening on the data, from its initial value. Then, if
    you need to restore state, you re-play the events from the initial value.
    This is a common solution in the banking area, where you need to back up
    values but also events on the values. I do not know how this can be applied
    to a generic mechanism with Forte, but it may be a good way to explore,
    although probably more complex to implement with Forte than the
    Backup SO/Events pattern.
    Hope this helps,
    Vincent Figari
    On Fri, 13 Feb 1998 10:39:03 -0600 Chad Stansbury
    <[email protected]> writes:
    Actually, since events (let alone distributed events) are not
    'guaranteed delivery' in Forte, I would hesitate to use events
    as a mechanism for mirroring your data - unless, of course, you
    really don't require an industrial-strength failover strategy.
    This would also apply to asynchronous messaging, unless you
    are careful to register for exception events (which again, aren't
    guaranteed delivery) and have a mechanism to handle said
    asynchronous exception events. I also know that Forte will retry
    certain tasks when the service object it is sent to fails completely
    (like a NIL object exception), but I don't know enough
    about the internal workings of Forte to know under which conditions
    this will occur.
    I think that the most common method for a truly industrial-strength,
    guaranteed-delivery mechanism is journaling... which I know very
    little about, but it is something that you should be able to look up
    and study if that's what you require.
    Again, if you don't care about the (admittedly small) chance
    of an asynchronous call failing, then the suggestions that
    Vincent has already made are good ones.
    From: [email protected]
    To: [email protected]
    Cc: [email protected]
    Sent: 2/13/98 9:13:17 AM
    Subject: Re: Failover for SO's with context
    Steven,
    The pattern choice between external resource vs. SO depends on the type
    of transient data you want to back up. The external resource is probably
    better suited to high volumes of data. We have implemented the 'Backup SO'
    pattern because our transient data volumes are rather low (which I guess
    must be the most common case for global, transient data).
    Whichever choice you make:
    - Be sure to enforce encapsulation for updating the transient data, in
    order to guarantee that any modification to your transient data is
    duplicated on the backup SO or the external resource
    - About performance, the CPU cost is fairly low for your 'regular'
    application if you take care to:
    * use asynchronous tasks to update the external resource
    or
    * use events to notify the backup SO
    Now it is true that you will have a network overhead when using events,
    as your backup SO should be isolated in a remote partition on a remote
    server. That is one good argument for selecting the Backup SO pattern
    for low volumes of transient data.
    If you choose the 'Backup SO' pattern, you will also have to be careful
    not to send any distributed references to your Backup SO, but only clones.
    Anyway, the backup SO pattern works fairly well for low volumes of data,
    but requires lots of testing and a good understanding of events and
    communication across partitions.
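    The "asynchronous tasks to update the external resource" option above can be sketched like this (a hypothetical Python model; Forte's asynchronous tasks and real external resources work differently): the regular application enqueues the update and returns immediately, while a background worker applies it to the backup store, so the primary code path does not pay the write cost synchronously.

```python
import queue
import threading

class ExternalResourceBackup:
    """Apply updates to a (simulated) external resource from a background
    worker, keeping the CPU cost low for the regular application."""
    def __init__(self):
        self.store = {}                  # stand-in for the external resource
        self._q = queue.Queue()
        worker = threading.Thread(target=self._run, daemon=True)
        worker.start()

    def _run(self):
        while True:
            key, value = self._q.get()
            self.store[key] = value      # the (simulated) external write
            self._q.task_done()

    def update_async(self, key, value):
        self._q.put((key, value))        # returns immediately

    def flush(self):
        self._q.join()                   # wait until all queued writes land
```

    As the thread warns, a queued write can still be lost if the process dies before the worker drains the queue; that is exactly the small asynchronous-failure window Chad describes above.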
    Hope this helps,
    Vincent Figari
    On Fri, 13 Feb 1998 09:24:57 +0100 Steven Arijs <[email protected]>
    writes:
    We're going to implement a failover scenario for our application.
    Unfortunately, we also have to replicate the state of our failed
    service
    objects.
    I've browsed the Forte site and found a TechNote concerning this
    (TechNote 11074).
    In this TechNote they talk about a service object that is responsible
    for updating all backup service objects when needed.
    It seems to me that when I implement it that way, I will be creating a
    lot of overhead, i.e. I will be doing a lot of stuff several times.
    What will be the effect on my performance?
    The way with the least performance loss would be to use an external
    resource that is updated. But what if this external resource also fails?
    Has anyone already implemented a failover scenario for
    service objects with state?
    Any help would be appreciated.
    Steven Arijs
    ([email protected])
    You don't need to buy Internet access to use free Internet e-mail.
    Get completely free e-mail from Juno at http://www.juno.com
    Or call Juno at (800) 654-JUNO [654-5866]


  • Can SQL server 2014 instance be introduced into SQL server 2012 clustering

    Hello, everyone,
    We have SQL Server clustering set up in place (it is actually Windows clustering), and I am wondering if it is possible to introduce a SQL Server 2014 instance
    into this cluster. Please share your thoughts and experiences.
    Thanks

    If you are talking about SQL clustering (failover clustered instances), yes, you can do that.
    You have a Windows cluster with node A, node B, etc., and you can install two different failover clustered SQL instances - SQL 2012 and SQL 2014.
    Hope it helps!

  • How to replicate the EJB state between a clusters

    We developed a shopping cart EJB and deployed it to an Oracle AS cluster, and we want the EJB to replicate its state so that cluster failover is transparent to the client. However, we have tried many times without success.
    Does anyone know how to do this?

    You can check the chapter - "EJB Clustering" http://ftp.unex.es/oradoc/form_y_report_10g/web.904/b10324/toc.htm
    from Oracle Application Server Containers for J2EE Enterprise JavaBeans Developer's Guide

  • Perm Siebel Opportunity South Florida

    Hey Everyone,
    I have a perm Siebel Architect opportunity down here in South Florida. The job description is below. Let me know what everyone thinks.
    The Lead-to-Order Technical Architect works closely with the Infrastructure Technical Lead and the Program Team to support the design of solutions that support business process requests. The Architect will also support performance management for hardware and software, supporting the Siebel application. This role will create, adjust, and maintain best practices for High Availability, Failover, and Highly Reliable server architecture. The Architect will implement Siebel industry best practices and standards to implement complete business solutions. Will also lead all software upgrades, hardware upgrades, patches, and environment changes to support the needs of the business.
    Duties and Responsibilities:
    1. Architect and manage all the Siebel environments from a software and server administration perspective
    2. Architectural design, development, and review of all major and minor Siebel releases
    3. Architectural design for all the interfaces flowing in and out of Siebel environments
    4. Application performance management across hardware and software, and implementation of best practices for the same
    5. Application design for High Availability, Failover strategy, and reliable server architecture. Implement industry best practices and standards to implement business solutions in Siebel systems
    6. Lead any and all software upgrades, hardware upgrades, Siebel software patch implementations, environment clustering solutions, etc. Provide design for critical day-to-day fixes required in the production system. Deployment of all the release flow and any deployment in production
    Education:
    Bachelor's Degree
    Experience:
    Siebel 7.8xxx
    Oracle 9g
    Mercury Tools
    IBM AIX
    Skills:
    Siebel 7.8xxx Server Administration
    SQL
    Unix Scripting
    Microsoft Tools
    Communication Skills
    Feel free to call or email me.
    Manny Guerrero
    Technical Recruiter
    Consultis
    [email protected]
    561-922-5701
    Edited by: user10479626 on Oct 27, 2008 11:27 AM

    "If you possess Strong knowledge with Hyperion Essbase; Hyperion Planning; and Hyperion Reports you are one phone call away to get hired."Wow.

  • Please answer these questions.....Urgent

    Q You are using Data Guard to ensure high availability. The directory structures on the primary and the standby hosts are different.
    Referring to the scenario above, what initialization parameter do you set up during configuration of the standby database?
    db_convert_dir_name
    db_convert_file_name
    db_dir_name_convert
    db_directory_convert
    db_file_name_convert
    Oracle 9i Administration, Question 1 of 12
    Q What facility does Oracle provide to detect chained and migrated rows after the proper tables have been created?
    The RDBMS cannot detect this. It must use regular export and import with compress=y to remove chained and migrated rows as part of the regular database.
    The UTLCHAIN utility
    The DBMS_REPAIR package
    The ANALYZE command with the LIST CHAINED ROWS option
    The DBMS_MIG_CHAIN built-in package
    Q While doing an export, the following is encountered:
    ORA-1628 ... max # extents ... reached for rollback segment ..
    Referring to the scenario above, what do you do differently so that the export is resumed even after getting the space allocation error?
    Use the RESUMABLE=Y option for the export.
    Run the export with the AUTO_ROLLBACK_EXTEND=Y option.
    Increase the rollback segment extents before running the export.
    Use THE RESUME=Y option for the export.
    Monitor the rollback segment usage while the export is running and increase it if it appears to be running out of space.
    Q The DBCA (Database Configuration Assistant) prompts the installer to enter the password for which default users?
    SYS and SYSTEM
    OSDBA and INTERNAL
    SYSOPER and INTERNAL
    SYS and INTERNAL
    SYSTEM and SYSDBA
    Q You are designing the physical database for an application that stores dates and times. This will be accessed by users from all over the world in different time zones. Each user needs to see the time in his or her time zone.
    Referring to the scenario above, what Oracle data type do you use to facilitate this requirement?
    DATE
    TIMESTAMP WITH TIME ZONE
    TIMESTAMP
    DATETIME
    TIMESTAMP WITH LOCAL TIME ZONE
    Q Which one of the following conditions prevents you from redefining a table online?
    The table has a composite primary key.
    The table is partitioned by range.
    The table's organization is index-organized.
    The table has materialized views defined on it.
    The table contains columns of data type LOB.
    Q An Oracle database administrator is upgrading from Oracle 8.1.7 to Oracle 9i.
    Referring to the scenario above, which one of the following scripts does the Oracle database administrator run after verifying all steps in the upgrade checklist?
    u8.1.7.sql
    u81700.sql
    u0900020.sql
    u0801070.sql
    u0817000.sql
    Q What command do you use to drop a temporary tablespace and the associated OS files?
    ALTER DATABASE TEMPFILE '/data/oracle/temp01.dbf' DROP;
    ALTER DATABASE DATAFILE '/data/oracle/temp01.dbf' DROP;
    ALTER DATABASE TEMPFILE '/data/oracle/temp01.dbf' DROP INCLUDING DATAFILES;
    ALTER DATABASE DATAFILE '/data/oracle/temp01.dbf' DROP CASCADE;
    ALTER DATABASE DATAFILE '/data/oracle/temp01.dbf' DROP INCLUDING CONTENTS
    Q You wish to use a graphical interface to manage database locks and to identify blocking locks.
    Referring to the scenario above, what DBA product does Oracle offer that provides this functionality?
    Oracle Expert, a tool in the Oracle Enterprise Manager product
    Lock Manager, a tool in the base Oracle Enterprise Manager (OEM) product, as well as the console
    Lock Manager, a tool in Oracle Enterprise Manager's Tuning Pack
    The console of Oracle Enterprise Manager
    Viewing the Lock Manager charts of the Oracle Performance Manager, a tool in the Diagnostics Pack add on
    Q CREATE DATABASE abc
    MAXLOGFILES 5
    MAXLOGMEMBERS 5
    MAXDATAFILES 20
    MAXLOGHISTORY 100
    Referring to the code segment above, how do you change the MAX parameters shown?
    They can be changed using an ALTER SYSTEM command, but the database must be in the NOMOUNT state.
    The MAX parameters cannot be changed without exporting the entire database, re-creating it, and importing.
    They can be changed using an ALTER SYSTEM command while the database is open.
    They can be changed in the init.ora file, but the database must be restarted for the values to take effect.
    They cannot be changed unless you re-create your control file
    Q You need to change the archivelog mode of an Oracle database.
    Referring to the scenario above, what steps do you take before actually changing the archivelog mode?
    Execute the archive log list command
    Start up the instance and mount the database but do not open it.
    Start up the instance and mount and open the database in restricted mode.
    Kill all user sessions to ensure that there is no database activity that might trigger redolog activity.
    Take all tablespaces offline
    Q You are experiencing performance problems due to network traffic. One way to tune this is by setting the SDU size.
    Referring to the scenario above, why do you change the SDU size?
    A high-speed network is available where the data transmission effect is negligible.
    The application can be tuned to account for the delays.
    The requests to the database return small amounts of data as in an OLTP system.
    The data coming back from the server are fragmented into several packets.
    A large number of users are logged on concurrently to the system.
    Q When interpreting statistics from the v$sysstat, what factor do you need to keep in mind that can skew your statistics?
    Choice 1 The statistics are static and must be updated by running the analyze command to include the most recent activity.
    Choice 2 The statistics are only valid as a point in time snapshot of activity.
    Choice 3 The statistics gathered by v$sysstat include database startup activities and database activity that initially populates the database buffer cache and shared pool.
    Choice 4 The statistics do not include administrative users.
    Choice 5 The statistics gathered are based on individual sessions, so you must interpret them based on the activity and application in which the user was involved at the time you pull the statistics.
    Q You want to shut down the database, but you do not want client connections to lose any non-committed work. You also do not want to wait for every open session to disconnect.
    Referring to the scenario above, what method do you use to shut down the database?
    Choice 1 Shutdown abort
    Choice 2 Shutdown immediate
    Choice 3 Shutdown transactional
    Choice 4 Shutdown restricted sessions
    Choice 5 Shutdown normal
    Q What step or steps do you take to enable Automatic Undo Management (AUM)?
    Choice 1 Create the UNDO tablespace, then ALTER SYSTEM SET AUTO_UNDO.
    Choice 2 Use ALTER SYSTEM SET AUTO_UNDO; parameter.
    Choice 3 Add UNDO_MANAGEMENT=AUTO parameter to init.ora, stop/start the database.
    Choice 4 Add UNDO_AUTO to parameter to init.ora, stop/start the database, and create the UNDO tablespace.
    Choice 5 Add UNDO_MANAGEMENT=AUTO parameter to init.ora, create the UNDO tablespace, stop/start the database
    Q What Oracle 9i feature allows the database administrator to create tablespaces, datafiles, and log groups WITHOUT specifying physical filenames?
    Choice 1 Dynamic SGA
    Choice 2 Advanced Replication
    Choice 3 Data Guard
    Choice 4 Oracle Managed Files
    Choice 5 External Tables
    Q What package is used to specify audit requirements for a given table?
    Choice 1 DBMS_TRACE
    Choice 2 DBMS_FGA
    Choice 3 DBMS_AUDIT
    Choice 4 DBMS_POLICY
    Choice 5 DBMS_OBJECT_AUDIT
    Q What facility does Oracle provide to detect chained and migrated rows after the proper tables have been created?
    Choice 1 The ANALYZE command with the LIST CHAINED ROWS option
    Choice 2 The RDBMS cannot detect this. It must use regular export and import with compress=y to remove chained and migrated rows as part of the regular database.
    Choice 3 The DBMS_MIG_CHAIN built-in package
    Choice 4 The DBMS_REPAIR package
    Choice 5 The UTLCHAIN utility
    Q What are the three functions of an undo segment?
    Choice 1 Rolling back archived redo logs, database recovery, recording user trace information
    Choice 2 The rollback segment has only one purpose, and that is to roll back transactions that are aborted.
    Choice 3 Rolling back uncommitted transactions, maintaining read consistency, logging processed SQL statements
    Choice 4 Rolling back transactions, maintaining read consistency, database recovery
    Choice 5 Rolling back transactions, recording Data Manipulation Language (DML) statements processed against the database, recording Data Definition Language (DDL) statements processed against the database
    Q Which one of the following describes locally managed tablespaces?
    Choice 1 Tablespaces within a Recovery Manager (RMAN) repository
    Choice 2 Tablespaces that are located on the primary server in a distributed database
    Choice 3 Tablespaces that use bitmaps within their datafiles, rather than data dictionaries, to manage their extents
    Choice 4 Tablespaces that are managed via object tables stored in the system tablespace
    Choice 5 External tablespaces that are managed locally within an administrative repository serving an Oracle distributed database or Oracle Parallel Server
    Q The schema in a database you are administering has a very complex and non-user friendly table and column naming system. You need a simplified schema interface to query and on which to report.
    Which one of the following mechanisms do you use to meet the requirement stated in the above scenario?
    Choice 1 Synonym
    Choice 2 Stored procedure
    Choice 3 Labels
    Choice 4 Trigger
    Choice 5 View
    Q You need to change the archivelog mode of an Oracle database.
    Referring to the scenario above, what steps do you take before actually changing the archivelog mode?
    Choice 1 Start up the instance and mount the database but do not open it.
    Choice 2 Execute the archive log list command
    Choice 3 Kill all user sessions to ensure that there is no database activity that might trigger redolog activity.
    Choice 4 Take all tablespaces offline.
    Choice 5 Start up the instance and mount and open the database in restricted mode.
    Q The Oracle Internet Directory debug log needs to be changed to show the following events information.
    Given the Debug Event Types and their numeric values:
    Starting and stopping of different threads. Process related. - 4
    Detail level. Shows the spawned commands and the command-line arguments passed - 32
    Operations being performed by configuration reader thread. Configuration refresh events. - 64
    Actual configuration reading operations - 128
    Operations being performed by scheduler thread in response to configuration refresh events, and so on - 256
    What statement turns debug on for all of the above event types?
    Choice 1 oidctl server=odisrv debug=4 debug=32 debug=64 debug=128 debug=256 start
    Choice 2 oidctl server=odisrv debug="4,32,64,128,256" start
    Choice 3 oidctl server=odisrv flags="debug=4 debug=32 debug=64 debug=128 debug=256" start
    Choice 4 oidctl server=odisrv flags="debug=484" start
    Choice 5 oidctl server=odisrv flags="debug=4,32,64,128,256" start
    Q Which Data Guard mode has the lowest performance impact on the primary database?
    Choice 1 Instant protection mode
    Choice 2 Guaranteed protection mode
    Choice 3 Rapid protection mode
    Choice 4 Logfile protection mode
    Choice 5 Delayed protection mode
    Q In a DSS environment, the SALES data is kept for a rolling window of the past two years.
    Referring to the scenario above, what type of partitioning do you use for this data?
    Choice 1 Hash Partitioning
    Choice 2 Range Partitioning
    Choice 3 Equipartitioning
    Choice 4 List Partitioning
    Choice 5 Composite Partitioning
    Q What are the three main areas of the SGA?
    Choice 1 Log buffer, shared pool, database writer
    Choice 2 Database buffer cache, shared pool, log buffer
    Choice 3 Shared pool, SQL area, redo log buffer
    Choice 4 Log writer, archive log, database buffer
    Choice 5 Database buffer cache, log writer, shared pool
    Q When performing full table scans, what happens to the blocks that are read into buffers?
    Choice 1 They are put on the MRU end of the buffer list by default.
    Choice 2 They are put on the MRU end of the buffer list if the NOCACHE clause was used while altering or creating the table.
    Choice 3 They are read into the first free entry in the buffer list.
    Choice 4 They are put on the LRU end of the buffer list if the CACHE clause was used while altering or creating the table.
    Choice 5 They are put on the LRU end of the buffer list by default
    Q Standard security policy is to force users to change their passwords the first time they log in to the Oracle database.
    Referring to the scenario above, how do you enforce this policy?
    Choice 1 Use the FORCE PASSWORD EXPIRE clause when the users are first created in the database.
    Choice 2 Ask the users to follow the standards and trust them to do so.
    Choice 3 Periodically compare the users' passwords with their initial password and generate a report of the users violating the standard.
    Choice 4 Use the PASSWORD EXPIRE clause when the users are first created in the database.
    Choice 5 Check the users' passwords after they first log in to see if they have changed it. If not, remind them to do so.
    Q What object privilege is necessary for a foreign key constraint to be created and enforced on the referenced table?
    Choice 1 References
    Choice 2 Alter
    Choice 3 Update
    Choice 4 Resource
    Choice 5 Select
    Q What command do you use to drop a temporary tablespace and the associated OS files?
    Choice 1 ALTER DATABASE DATAFILE '/data/oracle/temp01.dbf' DROP INCLUDING CONTENTS
    Choice 2 ALTER DATABASE TEMPFILE '/data/oracle/temp01.dbf' DROP INCLUDING DATAFILES;
    Choice 3 ALTER DATABASE TEMPFILE '/data/oracle/temp01.dbf' DROP;
    Choice 4 ALTER DATABASE DATAFILE '/data/oracle/temp01.dbf' DROP;
    Choice 5 ALTER DATABASE DATAFILE '/data/oracle/temp01.dbf' DROP CASCADE;
    Q You need to implement a failover strategy using TAF. You do not have enough resources to ensure that your backup Oracle instance will be up and running in parallel with the primary.
    Referring to the scenario above, what failover mode do you use?
    Choice 1 FAILOVER_MODE=manual
    Choice 2 FAILOVER_MODE=none
    Choice 3 FAILOVER_MODE=auto
    Choice 4 FAILOVER_MODE=basic
    Choice 5 FAILOVER_MODE=preconnect
    Q An Oracle database used for an OLTP application is encountering the "snapshot too old" error.
    Referring to the scenario above, which database object or objects do you query in order to set the OPTIMAL parameter for the rollback segments?
    Choice 1 V$ROLLNAME and V$ROLLSTAT
    Choice 2 V$ROLLNAME
    Choice 3 V$ROLLSTAT
    Choice 4 DBA_ROLL and DBA_ROLLSTAT
    Choice 5 DBA_ROLLBACK_SEG
    Q What are five background processes that must always be running in a functioning Oracle Instance?
    Choice 1 SMON (system monitor), PMON (process monitor), RECO (recoverer process), ARCH (archive process), CKPT (checkpoint process)
    Choice 2 DBW0 (database writer), SMON (system monitor), PMON (process monitor), LGWR (log writer), CKPT (checkpoint process)
    Choice 3 DBW0 (database writer), SMON (system monitor), PMON (process monitor), D000 (Dispatcher process), CKPT (checkpoint process)
    Choice 4 DBW0 (database writer), CKPT (checkpoint process), RECO (recoverer process), LGWR (log writer), ARCH (archive process)
    Choice 5 DBW0 (database writer), LGWR (log writer), ARCH (archive process), CKPT (checkpoint process), RECO (recoverer process)
    Q You have two large tables with thousands of rows. To select rows from table_1 that are not referenced by an indexed common column (e.g. col_1) in table_2, you issue the following statement:
    select * from table_1
    where col_1 NOT in (select col_1 from table_2);
    This statement is taking a very long time to return its result set.
    Referring to the scenario above, which equivalent statement returns much faster?
    Choice 1
    select * from table_1
    where not exists (select * from table_2)
    Choice 2
    select * from table_2
    where col_1 not in (select col_1 from table_1)
    Choice 3
    select * from table_1
    where col_1 in (select col_1 from table_2 where col_1 = table_1.col_1)
    Choice 4
    select * from table_1
    where not exists (select 'x' from table_2 where col_1 = table_1.col_1)
    Choice 5
    select table_1.* from table_1, table_2
    where table_1.col_1 = table_2.col_1 (+)
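    For intuition about why the two query shapes differ, here is a Python model of the anti-join (an illustrative sketch, not Oracle's executor): the uncorrelated NOT IN form re-scans the whole subquery result for every outer row, while the correlated NOT EXISTS form lets the optimizer probe the index on col_1 per row, modeled here as a one-time set build with O(1) probes. Both forms return the same rows on this data.

```python
# Tiny stand-ins for the two tables in the question.
table_1 = [{"col_1": 1}, {"col_1": 2}, {"col_1": 3}]
table_2 = [{"col_1": 2}]

# Naive NOT IN: the subquery result list is scanned for every outer row.
not_in = [r for r in table_1
          if r["col_1"] not in [s["col_1"] for s in table_2]]

# NOT EXISTS with an index-like probe: build the lookup structure once,
# then test each outer row in O(1) - the shape that
# "not exists (select 'x' from table_2 where col_1 = table_1.col_1)"
# allows the optimizer to use.
probe = {s["col_1"] for s in table_2}
not_exists = [r for r in table_1 if r["col_1"] not in probe]

assert not_in == not_exists == [{"col_1": 1}, {"col_1": 3}]
```

    One caveat the sketch does not model: in SQL, NOT IN returns no rows at all if the subquery produces a NULL, which is another reason the NOT EXISTS form is usually preferred.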
    Q Performance is poor during peak transaction periods on a database you administer. You would like to view some statistics on areas such as LGWR (log writer) waits.
    Referring to the scenario above, what performance view do you query to access these statistics?
    Choice 1 DBA_CATALOG
    Choice 2 V$SESS_IO
    Choice 3 V$SYSSTAT
    Choice 4 V$PQ_SYSSTAT
    Choice 5 V$SQLAREA
    You need to assess the performance of your shared pool at instance startup, but you cannot restart the database.
    Referring to the scenario above, how do you empty your SGA?
    Choice 1
    Execute $ORACLE_HOME/bin/db_shpool_flush
    Choice 2
    ALTER SYSTEM FLUSH SHARED_POOL
    Choice 3
    ALTER SYSTEM CLEAR SHARED POOL
    Choice 4
    DELETE FROM SYS.V$SQLAREA
    Choice 5
    DELETE FROM SYS.V$SQLTEXT
    You are reading the explain plan of a problem query and notice that full table scans are used with a HASH join.
    Referring to the scenario above, in what instance is a HASH join beneficial?
    Choice 1
    When joining two small tables--neither having any primary keys or unique indexes
    Choice 2
    When no indexes are present
    Choice 3
    When using the parallel query option
    Choice 4
    When joining two tables where one table may be significantly larger than the other
    Choice 5
    Only when using the rule-based optimizer
    An Oracle database administrator is upgrading from Oracle 8.1.7 to Oracle 9i.
    Referring to the scenario above, which one of the following scripts does the Oracle database administrator run after verifying all steps in the upgrade checklist?
    Choice 1
    u0817000.sql
    Choice 2
    u0900020.sql
    Choice 3
    u8.1.7.sql
    Choice 4
    u81700.sql
    Choice 5
    u0801070.sql
    You have a large On-Line Transaction Processing (OLTP) database running in archive log mode with two redo log groups that have two members each.
    Referring to the above scenario, to avoid stalling during peak activity periods, which one of the following actions do you take?
    Choice 1
    Add a third member to each of the groups.
    Choice 2
    Increase your LOG_CHECKPOINT_INTERVAL setting.
    Choice 3
    Turn off archive logging.
    Choice 4
    Add a third redo log group.
    Choice 5
    Turn off redo log multiplexing
    What object does a database administrator create to store precompiled summary data?
    Choice 1
    Replicated Table
    Choice 2
    Archive Log
    Choice 3
    Temporary Tablespace
    Choice 4
    Cached Table
    Choice 5
    Materialized View
    Which one of the following statements do you execute in order to find the current default temporary tablespace?
    Choice 1
    SELECT property_name, property_value FROM v$database_properties
    Choice 2
    show parameter curr_default_temp_tablespace
    Choice 3
    SELECT property_name, property_value FROM all_database_properties
    Choice 4
    SELECT property_name, property_value FROM database_properties
    Choice 5
    SELECT property_name, property_value FROM dba_database_properties
    In which one of the following situations do you use a bitmap index?
    Choice 1
    With column values that are guaranteed to be unique
    Choice 2
    With column values having a high cardinality
    Choice 3
    With column values having a consistently uniform distribution
    Choice 4
    With column values having a low cardinality
    Choice 5
    With column values having a non-uniform distribution
    A table has more than two million rows and, if exported, will exceed 4 GB in size with data, indexes, and constraints. The UNIX system you are using has a 2 GB limit on file sizes. This table needs to be backed up using Oracle EXPORT.
    There are two ways this table can be exported and split into multiple files. One way is to use the UNIX pipe, split, and compress commands in conjunction with the Oracle EXPORT utility to generate multiple equally-sized files.
    Referring to the scenario above, what is the other way that you can export and split into multiple files?
    Choice 1
    Export the data into one file and the index into another file.
    Choice 2
    Use a WHERE clause with the export to limit the number of rows returned.
    Choice 3
    Vertically partition the table into sizes of less than 2 GB and then export each partition as a separate file.
    Choice 4
    Specify the multiple files in the FILE parameter and specify the FILESIZE in the EXPORT parameter file.
    Choice 5
    Horizontally partition the table into sizes of less than 2 GB and then export each partition as a separate file.
    Which one of the following statements describes the PASSWORD_GRACE_TIME profile setting?
    Choice 1
    It specifies the grace period, in days, for changing the password once expired.
    Choice 2
    It specifies the grace period, in days, for changing the password from the time it is initially set and the time the account is made active.
    Choice 3
    It specifies the grace period, in minutes, for changing the password once expired.
    Choice 4
    It specifies the grace period, in days, for changing the password after the first successful login after the password has expired.
    Choice 5
    It specifies the grace period, in hours, for changing the password once expired.
    In OEM, what color and icon are associated with a warning?
    Choice 1
    Yellow hexagon
    Choice 2
    Yellow flag
    Choice 3
    Red flag
    Choice 4
    Gray flag
    Choice 5
    Red hexagon
    What parameter in the SQLNET.ORA file specifies the order of the naming methods to be used?
    Choice 1
    NAMES.SEARCH_ORDER
    Choice 2
    NAMES.DOMAIN_HINTS
    Choice 3
    NAMES.DIRECTORY_PATH
    Choice 4
    NAMES.DOMAINS
    Choice 5
    NAMES.DIRECTORY
    An Oracle 9i database instance has automatic undo management enabled. This allows you to use the Flashback Query feature of Oracle 9i.
    Referring to the scenario above, what UNDO parameter needs to be set so that this feature allows consistent queries of data up to 90 days old?
    Choice 1
    UNDO_TABLESPACE
    Choice 2
    UNDO_TIMELIMIT
    Choice 3
    UNDO_MANAGEMENT
    Choice 4
    UNDO_FLASHBACKTO
    Choice 5
    UNDO_RETENTION
    DB_BLOCK_SIZE=8192
    DB_CACHE_SIZE=128M
    DB_2K_CACHE_SIZE=64M
    DB_4K_CACHE_SIZE=32M
    DB_8K_CACHE_SIZE=16M
    DB_16K_CACHE_SIZE=8M
    Referring to the initialization parameter settings above, what is the size of the cache of standard block size buffers?
    Choice 1
    8 M
    Choice 2
    16 M
    Choice 3
    32 M
    Choice 4
    64 M
    Choice 5
    128 M
    DB_CREATE_FILE_DEST='/u01/oradata/app01'
    DB_CREATE_ONLINE_LOG_DEST_1='/u02/oradata/app01'
    Referring to the sample code above, which one of the following statements is NOT correct?
    Choice 1
    Data files created with no location specified are created in the DB_CREATE_FILE_DEST directory.
    Choice 2
    Control files created with no location specified are created in the DB_CREATE_ONLINE_LOG_DEST_1 directory.
    Choice 3
    Redolog files created with no location specified are created in the DB_CREATE_ONLINE_LOG_DEST_1 directory.
    Choice 4
    Control files created with no location specified are created in the DB_CREATE_FILE_DEST directory.
    Choice 5
    Temp files created with no location specified are created in the DB_CREATE_FILE_DEST directory.
    LogMiner GUI is a part of which one of the following?
    Choice 1
    Oracle Enterprise Manager
    Choice 2
    Oracle LogMiner Plug-In
    Choice 3
    Oracle Diagnostics Pack
    Choice 4
    Oracle Performance Tuning Pack
    Choice 5
    Oracle LogMiner StandAlone GUI
    The schema in a database you are administering has a very complex and non-user friendly table and column naming system. You need a simplified schema interface to query and on which to report.
    Which one of the following mechanisms do you use to meet the requirement stated in the above scenario?
    Choice 1
    View
    Choice 2
    Trigger
    Choice 3
    Stored procedure
    Choice 4
    Synonym
    Choice 5
    Labels
    alter index gl.GL_JE_LINES_N1 rebuild
    You determine that an index has too many extents and want to rebuild it to avoid fragmentation performance degradation.
    When you issue the statement above, where is the rebuilt index stored?
    Choice 1
    In the default tablespace for the login name you are using
    Choice 2
    You cannot rebuild an index. You must drop the existing index and re-create it using the create index statement.
    Choice 3
    In the system tablespace
    Choice 4
    In the same tablespace as it is currently stored
    Choice 5
    In the index tablespace respective to the data table on which the index is built
    Which one of the following describes locally managed tablespaces?
    Choice 1
    Tablespaces within a Recovery Manager (RMAN) repository
    Choice 2
    External tablespaces that are managed locally within an administrative repository serving an Oracle distributed database or Oracle Parallel Server
    Choice 3
    Tablespaces that are located on the primary server in a distributed database
    Choice 4
    Tablespaces that use bitmaps within their datafiles, rather than data dictionaries, to manage their extents
    Choice 5
    Tablespaces that are managed via object tables stored in the system tablespace
    Which method of database backup supports true incremental backups?
    Choice 1
    Export
    Choice 2
    Operating System backups
    Choice 3
    Oracle Enterprise Backup Utility
    Choice 4
    Incremental backups are not supported. You must use full or cumulative backups.
    Choice 5
    Recovery Manager
    You are using Data Guard to ensure high availability. The directory structures on the primary and the standby hosts are different.
    Referring to the scenario above, what initialization parameter do you set up during configuration of the standby database?
    Choice 1
    db_dir_name_convert
    Choice 2
    db_convert_dir_name
    Choice 3
    db_convert_file_name
    Choice 4
    db_directory_convert
    Choice 5
    db_file_name_convert
    Tablespace APP_INDX is put in online backup mode when redo log 744 is current. When APP_INDX is taken out of online backup mode, redo log 757 is current.
    Referring to the scenario above, if the backup is restored, what are the start and end redo logs used, in order, to perform a successful point-in-time recovery of APP_INDX?
    Choice 1
    Start Redo Log 744, End Redo Log 757
    Choice 2
    Start Redo Log 743, End Redo Log 756
    Choice 3
    Start Redo Log 745, End Redo Log 756
    Choice 4
    Start Redo Log 744, End Redo Log 756
    Choice 5
    Start Redo Log 743, End Redo Log 757
    You want to make new data entered or changed in a table adhere to a given integrity constraint, but data exist in the table that violates the constraint.
    Referring to the scenario above, what do you do?
    Choice 1
    Use an enabled novalidate constraint.
    Choice 2
    Use an enabled validate constraint.
    Choice 3
    Use a deferred constraint.
    Choice 4
    Use a disabled constraint.
    Choice 5
    You cannot enforce this type of constraint
    In Oracle 9i, the connect internal command has been discontinued.
    Referring to the text above, how do you achieve a privileged connection in Oracle 9i?
    Choice 1
    CONNECT <username> AS SYSOPER where username has DBA privileges.
    Choice 2
    CONNECT <username> as SYSDBA.
    Choice 3
    Connect using Enterprise Manager.
    Choice 4
    CONNECT sys.
    Choice 5
    Use CONNECT <username> as normal but include the user in the external password file.
    How many partitions can a table have?
    Choice 1
    64
    Choice 2
    255
    Choice 3
    1,024
    Choice 4
    65,535
    Choice 5
    Unlimited
    In Cache Fusion, when does a request by one process for a resource owned by another process fail?
    Choice 1
    When a null mode resource request is made for a resource already owned in exclusive mode by another process
    Choice 2
    When a shared mode resource request is made for a resource already owned in shared mode by another process
    Choice 3
    When a shared mode resource request is made for a resource already owned in null mode by another process
    Choice 4
    When an exclusive mode resource request is made for a resource already owned in null mode by another process
    Choice 5
    When an exclusive mode resource request is made for a resource already owned in shared mode by another process
    The Oracle Internet Directory debug log needs to be changed to show the following event information.
    Given the Debug Event Types and their numeric values:
    Starting and stopping of different threads. Process related. - 4
    Detail level. Shows the spawned commands and the command-line arguments passed - 32
    Operations being performed by configuration reader thread. Configuration refresh events. - 64
    Actual configuration reading operations - 128
    Operations being performed by scheduler thread in response to configuration refresh events, and so on - 256
    What statement turns debug on for all of the above event types?
    Choice 1
    oidctl server=odisrv flags="debug=4 debug=32 debug=64 debug=128 debug=256" start
    Choice 2
    oidctl server=odisrv debug="4,32,64,128,256" start
    Choice 3
    oidctl server=odisrv flags="debug=4,32,64,128,256" start
    Choice 4
    oidctl server=odisrv flags="debug=484" start
    Choice 5
    oidctl server=odisrv debug=4 debug=32 debug=64 debug=128 debug=256 start
    A new OFA-compliant database is being installed using the Oracle installer. The mount point being used is /u02.
    Referring to the scenario above, what is the default value for ORACLE_BASE?
    Choice 1
    /usr/app/oracle
    Choice 2
    /u02/oracle
    Choice 3
    /u02/app/oracle
    Choice 4
    /u01/app/oracle
    Choice 5
    /u02/oracle_base
    You need to start the Connection Manager Gateway and the Connections Admin processes.
    Referring to the scenario above, what command do you execute?
    Choice 1
    CMCTL START CM
    Choice 2
    CMCTL START CMADMIN
    Choice 3
    CMCTL START CMAN
    Choice 4
    CMCTL START CMGW
    Choice 5
    CMCTL START CMGW CMADM
    When performing full table scans, what happens to the blocks that are read into buffers?
    Choice 1
    They are read into the first free entry in the buffer list.
    Choice 2
    They are put on the MRU end of the buffer list if the NOCACHE clause was used while altering or creating the table.
    Choice 3
    They are put on the LRU end of the buffer list if the CACHE clause was used while altering or creating the table.
    Choice 4
    They are put on the LRU end of the buffer list by default.
    Choice 5
    They are put on the MRU end of the buffer list by default.
    You wish to take advantage of the Oracle datatypes, but you need to convert your existing LONG or LONG RAW columns to Character Large Object (CLOB) and Binary Large Object (BLOB) datatypes.
    Referring to the scenario above, what is the quickest method to use to perform this conversion?
    Choice 1
    Use the to_lob function when selecting data from the existing table into a new table.
    Choice 2
    Use the ALTER TABLE statement and MODIFY the column to the new LOB datatype.
    Choice 3
    You must export the existing data to external files and then re-import them as BFILE external LOBS.
    Choice 4
    Create a new table with the same columns but with the LONG or LONG RAW column changed to a CLOB or BLOB type. The next step is to INSERT INTO newtable select * from oldtable.
    Choice 5
    LONG and LONG RAW datatypes are not compatible with LOBS and cannot be converted within the Oracle database.
    You need to redefine the JOURNAL table in the stress test environment. You want to check first to see if it is possible to redefine this table online.
    Referring to the scenario above, what statement do you execute that checks whether or not the JOURNAL table can be redefined online if you are connected as the table owner?
    Choice 1
    Execute DBMS_REDEFINITION.CHECK_TABLE_REDEF(USER,'JOURNAL');
    Choice 2
    Execute DBMS_REDEFINITION.VERIFY_REDEF_TABLE(USER,'JOURNAL');
    Choice 3
    Execute DBMS_REDEFINITION.CAN_REDEF_TABLE(USER,'JOURNAL');
    Choice 4
    Execute DBMS_REDEFINITION.START_REDEF_TABLE(USER,'JOURNAL');
    Choice 5
    Execute DBMS_REDEFINITION.SYNC_INTERIM_TABLE(USER,'JOURNAL');
    An Oracle 9i database instance has automatic undo management enabled. This allows you to use the Flashback Query feature of Oracle 9i.
    Referring to the scenario above, what UNDO parameter needs to be set so that this feature allows consistent queries of data up to 90 days old?
    Choice 1
    UNDO_TIMELIMIT
    Choice 2
    UNDO_MANAGEMENT
    Choice 3
    UNDO_RETENTION
    Choice 4
    UNDO_TABLESPACE
    Choice 5
    UNDO_FLASHBACKTO
    Which one of the following procedures is used for the extraction of the LogMiner dictionary?
    Choice 1
    DBMS_LOGMNR_D.EXTRACT
    Choice 2
    DBMS_LOGMNR.BUILD
    Choice 3
    DBMS_LOGMINER_D.BUILD
    Choice 4
    DBMS_LOGMNR_D.BUILD_DICT
    Choice 5
    DBMS_LOGMNR_D.BUILD
    set pause on;
    column sql_text format a35;
    select sid, osuser, username, sql_text
    from v$session a, v$sqlarea b
    where a.sql_address=b.address
    and a.sql_hash_value=b.hash_value;
    Why is the SQL*Plus sample code segment above used?
    Choice 1
    To view full text search queries by issuing user
    Choice 2
    To list all operating system users connected to the database
    Choice 3
    To view SQL statements issued by connected users
    Choice 4
    To detect deadlocks
    Choice 5
    To view paused database sessions
    When dealing with very large tables in which the size greatly exceeds the size of the System Global Area (SGA) data block buffer cache, which one of the following operations must be avoided?
    Choice 1
    Group operations
    Choice 2
    Aggregates
    Choice 3
    Index range scans
    Choice 4
    Multi-table joins
    Choice 5
    Full table scans
    You are reading the explain plan of a problem query and notice that full table scans are used with a HASH join.
    Referring to the scenario above, in what instance is a HASH join beneficial?
    Choice 1
    Only when using the rule-based optimizer
    Choice 2
    When joining two small tables--neither having any primary keys or unique indexes
    Choice 3
    When no indexes are present
    Choice 4
    When joining two tables where one table may be significantly larger than the other
    Choice 5
    When using the parallel query option
    Performance is poor during peak transaction periods on a database you administer. You would like to view some statistics on areas such as LGWR (log writer) waits.
    Referring to the scenario above, what performance view do you query to access these statistics?
    Choice 1
    V$SQLAREA
    Choice 2
    V$SYSSTAT
    Choice 3
    V$SESS_IO
    Choice 4
    V$PQ_SYSSTAT
    Choice 5
    DBA_CATALOG
    What security feature allows the database administrator to monitor successful and unsuccessful attempts to access data?
    Choice 1
    Autotrace
    Choice 2
    Fine-Grained Auditing
    Choice 3
    Password auditing
    Choice 4
    sql_trace
    Choice 5
    tkprof
    You need to configure a default domain that is automatically appended to any unqualified net service name.
    What Oracle-provided network configuration tool do you use to accomplish the above task?
    Choice 1
    Oracle Names Control Utility
    Choice 2
    Configuration File Utility
    Choice 3
    Oracle Network Configuration Assistant
    Choice 4
    Listener Control Utility
    Choice 5
    Oracle Net Manager
    You are experiencing performance problems due to network traffic. One way to tune this is by setting the SDU size.
    Referring to the scenario above, why do you change the SDU size?
    Choice 1
    The requests to the database return small amounts of data as in an OLTP system.
    Choice 2
    The application can be tuned to account for the delays.
    Choice 3
    The data coming back from the server are fragmented into several packets.
    Choice 4
    A large number of users are logged on concurrently to the system.
    Choice 5
    A high-speed network is available where the data transmission effect is negligible.
    You have partitioned the table ORDER on the ORDERID column using range partitioning. You want to create a locally partitioned index on this table. You also want this index to be unique.
    Referring to the scenario above, what is required for the creation of this unique locally partitioned index?
    Choice 1
    A unique partitioned index on a table cannot be local.
    Choice 2
    There can be only one unique locally partitioned index on the table.
    Choice 3
    The index has to be equipartitioned.
    Choice 4
    The table's primary key columns should be included in the index key.
    Choice 5
    The ORDERID column has to be part of the index's key.
    You have a large On-Line Transaction Processing (OLTP) database running in archive log mode with two redo log groups that have two members each.
    Referring to the above scenario, to avoid stalling during peak activity periods, which one of the following actions do you take?
    Choice 1
    Turn off redo log multiplexing.
    Choice 2
    Increase your LOG_CHECKPOINT_INTERVAL setting.
    Choice 3
    Add a third member to each of the groups.
    Choice 4
    Add a third redo log group.
    Choice 5
    Turn off archive logging.
    When transporting a tablespace, the tablespace needs to be self-contained.
    Referring to the scenario above, in which one of the following is the tablespace self-contained?
    Choice 1 A referential integrity constraint points to a table across a set boundary.
    Choice 2 A partitioned table is partially contained in the tablespace.
    Choice 3 An index inside the tablespace is for a table outside of the tablespace.
    Choice 4 A corresponding index for a table is outside of the tablespace.
    Choice 5 A table inside the tablespace contains a LOB column that points to LOBs outside the tablespace.
    You have experienced a database failure requiring a full database restore. Downtime is extremely costly, as is any form of data loss. You run the database in archive log mode and have a full database backup from three days ago. You have a database export from last night. You are not running Oracle Parallel Server (OPS).
    Referring to the above scenario, how do you minimize downtime and data loss?
    Choice 1 Import the data from the export using direct-path loading.
    Choice 2 Create a standby database and activate it.
    Choice 3 Perform a restore of necessary files and use parallel recovery operations to speed the application of redo entries.
    Choice 4 Conduct a full database restore and bring the database back online immediately. Apply redo logs during a future maintenance window.
    Choice 5 Perform a restore and issue a recover database command
    You have two large tables with thousands of rows. To select rows from the table_1, which are not referenced by an indexed common column (e.g. col_1) in table_2, you issue the following statement:
    select * from table_1
    where col_1 NOT in (select col_1 from table_2);
    This statement is taking a very long time to return its result set.
    Referring to the scenario above, which equivalent statement returns much faster?
    Choice 1 select * from table_1
    where col_1 in (select col_1 from table_2 where col_1 = table_1.col_1)
    Choice 2 select * from table_2
    where col_1 not in (select col_1 from table_1)
    Choice 3 select * from table_1
    where not exists (select 'x' from table_2 where col_1 = table_1.col_1)
    Choice 4 select table_1.* from table_1, table_2
    where table_1.col_1 = table_2.col_1 (+)
    Choice 5 select * from table_1
    where not exists (select * from table_2)
    Which one of the following initialization parameters is obsolete in Oracle 9i?
    Choice 1 LOG_ARCHIVE_DEST
    Choice 2 GC_FILES_TO_LOCKS
    Choice 3 FAST_START_MTTR_TARGET
    Choice 4 DB_BLOCK_BUFFERS
    Choice 5 DB_BLOCK_LRU_LATCHES
    You find that one of your tablespaces is running out of disk space.
    Referring to the scenario above, which one of the following is NOT a valid option to increase the space available to the tablespace?
    Choice 1 Move some segments to other tablespaces.
    Choice 2 Resize an existing datafile in the tablespace.
    Choice 3 Add another datafile to the tablespace.
    Choice 4 Increase the MAX_EXTENTS for the tablespace.
    Choice 5 Turn AUTOEXTEND on for one or more datafiles in the tablespace.
    What tools or utilities do you use to transfer the data dictionary's structural information of transportable tablespaces?
    Choice 1 DBMS_TTS
    Choice 2 SQL*Loader
    Choice 3 Operating System copy commands
    Choice 4 DBMS_STATS
    Choice 5 EXP and IMP
    Which one of the following, if backed up, is potentially problematic to a complete recovery?
    Choice 1
    Control file
    Choice 2
    System Tablespace
    Choice 3
    Data tablespaces
    Choice 4
    Online Redo logs
    Choice 5
    All archived redologs after the last backup
    Your database warehouse performs frequent full table scans. Your DB_BLOCK_SIZE is 16,384.
    Referring to the scenario above, what parameter do you use to reduce disk I/O?
    Choice 1 LOG_CHECKPOINT_TIMEOUT
    Choice 2 DBWR_IO_SLAVES
    Choice 3 DB_FILE_MULTIBLOCK_READ_COUNT
    Choice 4 DB_WRITER_PROCESSES
    Choice 5 DB_BLOCK_BUFFERS
    Which one of the following describes the "Reset database to incarnation" command used by Recovery Manager?
    Choice 1 It performs a resynchronization of online redo logs to a given archive log system change number (SCN).
    Choice 2 It performs point-in-time recovery when using Recovery Manager.
    Choice 3 It restores the database to the initial state in which it was found when first backing it up via Recovery Manager.
    Choice 4 It restores the database to a save point as defined by the version control number or incarnation number of the database.
    Choice 5 It is used to undo the effect of a resetlogs operation by restoring backups of a prior incarnation of the database.
    You are using the CREATE TABLE statement to populate the data dictionary with metadata to allow access to external data, where /data is a UNIX writable directory and filename.dbf is an arbitrary name.
    Referring to the scenario above, which clause must you add to your CREATE TABLE statement?
    Choice 1
    organization external
    Choice 2 external file /data/filename.dbf
    Choice 3 ON /data/filename.dbf
    Choice 4 organization file
    Choice 5 file /data/filename.dbf
    Your business user has expressed a need to be able to revert back to data that are at most eight hours old. You decide to use Oracle 9i's FlashBack feature for this purpose.
    Referring to the scenario above, what is the value of UNDO_RETENTION that supports this requirement?
    Choice 1 480
    Choice 2 8192
    Choice 3 28800
    Choice 4 43200
    Choice 5 28800000
    Materialized Views constitute which data warehousing feature offered by Oracle?
    Choice 1 FlashBack Query
    Choice 2 Summary Management
    Choice 3 Dimension tables
    Choice 4 ETL Enhancements
    Choice 5 Updateable Multi-table Views
    DB_BLOCK_SIZE=8192
    DB_CACHE_SIZE=128M
    DB_2K_CACHE_SIZE=64M
    DB_4K_CACHE_SIZE=32M
    DB_8K_CACHE_SIZE=16M
    DB_16K_CACHE_SIZE=8M
    Referring to the initialization parameter settings above, what is the size of the cache of standard block size buffers?
    Choice 1 8 M
    Choice 2 16 M
    Choice 3 32 M
    Choice 4 64 M
    Choice 5 128 M
    You need to send listener log information to the Oracle Support Services. The listener name is LSNRORA1.
    Referring to the scenario above, which one of the following statements do you use in the listener.ora file to generate this log information?
    Choice 1 TRACE_LEVEL_LSNRORA1=debug
    Choice 2 TRACE_LEVEL_LSNRORA1=admin
    Choice 3 TRACE_LEVEL_LSNRORA1=5
    Choice 4 TRACE_LEVEL_LSNRORA1=support
    Choice 5 TRACE_LEVEL_LSNRORA1=on
    Which one of the following statements causes you to choose the NOARCHIVELOG mode for an Oracle database?
    Choice 1
    The database does not need to be available at all times.
    Choice 2
    The database is used for a DSS application, and updates are applied to it once in 48 hours.
    Choice 3
    The database needs to be available at all times.
    Choice 4
    It is unacceptable to lose any data if a disk failure damages some of the files that constitute the database.
    Choice 5
    There will be times when you will need to recover to a point-in-time that is not current.
    You are experiencing performance problems due to network traffic. One way to tune this is by setting the SDU size.
    Referring to the scenario above, why do you change the SDU size?
    Choice 1 A large number of users are logged on concurrently to the system.
    Choice 2 A high-speed network is available where the data transmission effect is negligible.
    Choice 3 The data coming back from the server are fragmented into several packets.
    Choice 4 The application can be tuned to account for the delays.
    Choice 5 The requests to the database return small amounts of data as in an OLTP system.

    Post a few if you need answers to a few.
    Anyway, here's my best shot:
    Q. The directory structures on the primary and standby hosts are different.
    A. Use db_file_name_convert. (Why? Read about it.)
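    For illustration, the standby side of that setup might carry entries like these in its parameter file (the directory names here are made up):

    ```sql
    -- Hypothetical standby init.ora entries: each primary path prefix
    -- is rewritten to the standby host's directory layout.
    DB_FILE_NAME_CONVERT='/u01/oradata/prod/','/u01/oradata/stby/'
    LOG_FILE_NAME_CONVERT='/u01/oradata/prod/','/u01/oradata/stby/'
    ```

    Both parameters take primary/standby path pairs and are only needed when the directory structures differ between the two hosts.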
    Q What facility does Oracle provide to detect chained and migrated rows after the proper tables have been created?
    A. The ANALYZE command with the LIST CHAINED ROWS option
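    As a quick sketch (the CHAINED_ROWS table comes from running the utlchain.sql script first; the schema and table names here are invented):

    ```sql
    -- Record chained/migrated rows for a table, then inspect the results.
    ANALYZE TABLE app.orders LIST CHAINED ROWS INTO chained_rows;

    SELECT table_name, head_rowid
    FROM   chained_rows
    WHERE  table_name = 'ORDERS';
    ```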
    Q While doing an export, the following is encountered:
    A. My best guess: use the RESUMABLE=Y option for the export.
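    A related sketch: inside the database, resumable space allocation can also be enabled per session; the timeout and name below are arbitrary:

    ```sql
    -- Statements in this session suspend instead of failing on space errors;
    -- suspended operations show up in DBA_RESUMABLE until space is added.
    ALTER SESSION ENABLE RESUMABLE TIMEOUT 7200 NAME 'big export';

    SELECT name, status, error_msg FROM dba_resumable;
    ```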
    Q. The DBCA (Database Configuration Assistant) prompts the installer to enter the password for which default users?
    A. SYS and SYSTEM
    Q You are designing the physical database for an application that stores dates and times. This will be accessed by users from all over the world in different time zones. Each user needs to see the time in his or her time zone.
    A. TIMESTAMP WITH LOCAL TIME ZONE
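    A small sketch of that datatype in use (table, column, and time zone are invented):

    ```sql
    -- Values are normalized to the database time zone on insert and
    -- rendered in each querying session's own time zone on select.
    CREATE TABLE event_log (
      event_id NUMBER,
      occurred TIMESTAMP WITH LOCAL TIME ZONE
    );

    ALTER SESSION SET TIME_ZONE = 'America/New_York';
    SELECT occurred FROM event_log;  -- shown in New York time for this session
    ```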
    Q What command do you use to drop a temporary tablespace and the associated OS files?
    A. ALTER DATABASE TEMPFILE '/data/oracle/temp01.dbf' DROP INCLUDING DATAFILES;
    Q You wish to use a graphical interface to manage database locks and to identify blocking locks.
    A. Lock Manager, a tool in the base Oracle Enterprise Manager (OEM) product, as well as the console
    Q CREATE DATABASE abc
    A. They cannot be changed unless you re-create your control file
    Q You need to change the archivelog mode of an Oracle database.
    A. Execute the archive log list command
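    ARCHIVE LOG LIST only reports the current mode; actually switching modes needs the database mounted but not open, roughly:

    ```sql
    -- Check the current mode first (SQL*Plus command):
    ARCHIVE LOG LIST

    -- Then switch modes from a clean mount:
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    ALTER DATABASE ARCHIVELOG;
    ALTER DATABASE OPEN;
    ```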
    Q When interpreting statistics from the v$sysstat, what factor do you need to keep in mind that can skew your statistics?
    A.
    Choice 3 The statistics gathered by v$sysstat include database startup activities and database activity that initially populates the database buffer cache and shared pool.
    Q You want to shut down the database, but you do not want client connections to lose any non-committed work. You also do not want to wait for every open session to disconnect.
    Choice 3 Shutdown transactional
    Q What step or steps do you take to enable Automatic Undo Management (AUM)?
    A.Choice 5 Add UNDO_MANAGEMENT=AUTO parameter to init.ora, create the UNDO tablespace, stop/start the database
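    Sketched out (the tablespace name, datafile path, and size are placeholders):

    ```sql
    -- init.ora:
    --   UNDO_MANAGEMENT=AUTO
    --   UNDO_TABLESPACE=undotbs1
    CREATE UNDO TABLESPACE undotbs1
      DATAFILE '/u02/oradata/app01/undotbs01.dbf' SIZE 500M;
    -- Restart the instance so UNDO_MANAGEMENT=AUTO takes effect.
    ```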
    Q What Oracle 9i feature allows the database administrator to create tablespaces, datafiles, and log groups WITHOUT specifying physical filenames?
    A. Choice 4 Oracle Managed Files
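    A minimal Oracle Managed Files sketch (the destination paths and tablespace name are assumptions):

    ```sql
    -- init.ora:
    --   DB_CREATE_FILE_DEST         = '/data/oracle/oradata'
    --   DB_CREATE_ONLINE_LOG_DEST_1 = '/data/oracle/oradata'

    -- With those parameters set, no filenames are needed; Oracle
    -- generates and manages the physical files itself:
    CREATE TABLESPACE app_data;    -- datafile created automatically
    ALTER DATABASE ADD LOGFILE;    -- log members created automatically
    ```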

  • Questions in Lync 2013 HADR

    Hi Team,
    One of the customer raised the query:
    In our scenario, we want Active/Active High availability between different geolocations with RPO=0 and RTO near zero (seconds).
    Questions:
    1. Isn’t this possible with pool pairing and SQL Server AlwaysOn availability groups using synchronous commit?
    2. What is the bandwidth needed between both sites?
    3. Do you think to achieve Active/Active high availability (RPO=0, RTO=+/-0) for Lync between 2 datacenters we should go with the following scenario:
    --> Storage: virtualization (stretched LUNs)
    --> Compute: Hyper-V clustering (failover cluster)
    --> DNS: global datacenter server load balancer
    4. What is the RTO and RPO in your proposed solution?
    Please advise. Many Thanks.

    1) No.  Pool pairing doesn't automatically fail over, so it does not meet the requirements.  Also, HA within a pool isn't supported across geographic locations, so I don't believe this requirement can be met within the supported model. 
    It's possible if you have a solid enough pipe between the locations with very low latency that you could go unsupported with the old Metropolitan Site Resiliency model:
    https://technet.microsoft.com/en-us/library/gg670905(v=ocs.14).aspx but not supported in 2013.
    2) This can't be answered easily; it depends on what they're doing and using: how many users, how much archived data... SQL mirroring will consume quite a bit of bandwidth, as will the shared presence data on the Front Ends.  Will they use video between sites?  
    Too many questions to give any kind of reliable answer.
    3) If RTO/RPO is this critical, then I'm assuming it's voice.  If it's not, then a short outage should be more tolerable.  If it is voice, do not leave the supported model... just don't.  You don't want to be in that
    situation when systems are down and it's your phone.  No live migrations, just what's supported via TechNet and virtualization whitepapers.
    4) My proposed solution would be HA pools in both datacenters, built big enough it's unlikely to go down.   If the site does go down, pool failover can happen in a reasonable amount of time, perhaps 15 minutes if you're well prepared,
    but phones could potentially stay online during this time. 
    -Anthony
    Please remember, if you see a post that helped you please click "Vote As Helpful" and if it answered your question please click "Mark As Answer".
    SWC Unified Communications
    This forum post is based upon my personal experience and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

Maybe you are looking for

  • HP LaserJet Pro 200 color MFP M276nw accidently plugged into 220V

    I bought an HP LaserJet Pro 200 color MFP M276nw printer from US (110V). The printer was shipped to India and was accidently plugged into 220V. This did damage the printer and now the printer would not start even on 110V. my questions are: 1. is the

  • Purchase order for part C which is made up of parts A and B

    I'm trying to create a purchase order for part C which is made up of parts A and B: The customer orders part C (Key/ignition combination) through a sales order, I must create a purchase order for part C which will consist of parts A (Key) and B (Igni

  • 4 gb or 8 gb for vista x64

    Should I stay with 4 gb or shall I go for 8 gb in a vista x64 OS

  • Using Layout Or Pixel placements ?

    Hi there, Iam designing a presentation layer for a standalone application. I use a tool which gives a very rich drap and drop utility for designing a presentation layer. According to my requirement, my application will have a standard fixed resolutio

  • REPEATABLE CRASH BUG DWCS3  Mac

    Working with DW CS3 templates and FIreworks CS3, using the edit button on the properties panel, editing a graphic and clicking DONE on the fireworks popup image, reinserting to the templated page causes repeated and repeateable crashes after about 5