EM 12c and Solaris Zone Monitoring

Hi all,
I am using Oracle EM 12c to monitor an Oracle database server that runs in a Solaris zone, which is capped to use only 4 cores.
BUT the Host management home page still shows a "Total Cores" value of 8 (I am using a Sun T4-1).
Is there some configuration required to make it reflect the capped CPU count?
Also, is there a way to isolate the CPU, memory, filesystem and network utilization so they reflect only the local zone?
I suspect what I see now represents the resource utilization of the physical machine.
Thank you.
-Joel
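
It can help to first confirm, from the global zone, how the 4-core cap was applied and what the zone itself can see, since that is what any agent running inside the zone has to work with. A rough check, assuming Solaris 10/11 and an illustrative zone name of dbzone:

# zonecfg -z dbzone info dedicated-cpu      # cap via a dedicated processor set
# zonecfg -z dbzone info capped-cpu         # cap via the zone.cpu-cap control
# prctl -n zone.cpu-cap -i zone dbzone      # current cap value, if any
# zlogin dbzone psrinfo -pv                 # CPUs/cores visible inside the zone

Broadly, with capped-cpu the zone still sees every physical CPU (only its usage is throttled), which would be consistent with the host page reporting all 8 cores of the T4-1; with dedicated-cpu the zone sees only its own processor set.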

Loc,
I've opened quite a few SRs, some handled well, some not. My biggest issue perhaps is that SRs are needed at all. We are a small shop that relied on OEM 10.2 for a long time, skipped over 11G and went right to 12R2. It was a culture shock, and I'm not sure if skipping the version added to our woes.
There are 3 DBAs here, with Oracle experience between 13 and 18 years, we are not newbies. I took the em12c class from Oracle.
But still, acceptance of its use has not been high, mostly because it's no longer the quick, targeted place we 'go to'. It's a huge product now, with a great deal built in. We are finding we need a mind-set adjustment to use it, and the length of that transition depends on how much time we can spend on it. Our intent, though, was to make things easier, not harder.
Case in point is our implementation of Data Guard. This project was new to us, and I made the EM12c upgrade a predecessor project because I needed its help. I ultimately abandoned that approach: we hired a consultant, we learned the manual steps, and that's what we use today. Everything is homegrown.
I feel as though the purpose of EM has moved away from being a DBA monitoring tool. Remember OEM 10.2 and its initial summary page that showed the overall health of everything in one shot? I can't do that in EM12c, or at least haven't found how.
Here are some SRs:
SR 3-6392247151
SR 3-7159111371
SR 3-6645421051
* SR 3-6608820051
SR 3-6667182071
* The starred SRs cover the missing "invalid objects" check, something we previously relied on.
Back to the original question of this post - I understand now that if we use EM to switch over our databases, the role reversal will be reflected. We would have loved to rely on EM for everything. Now that we must do everything manually, I then have to manually drop the monitored target and re-add it. Again, something that should make our jobs easier is now harder.
Thanks for listening. Am I really the only DBA that says EM12C is harder to use for basic functionality?
Sherrie

Similar Messages

  • Resource Management and Solaris Zones Developer Guide

    Solaris Information Products ("Pubs") is creating a
    developer guide for resource management and Solaris Zones.
    The department is seeking input on content from application
    developers and ISVs.
    We plan to discuss the different categories of applications
    that can take advantage of Solaris resource management
    features, and provide implementation examples that discuss
    the particular RM features that can be used.
    Although running in a zone poses no differences to most
    applications, we will describe any possible limitations and
    offer appropriate workarounds. We will also provide
    information needed by the ISV, such as determining
    the appropriate system calls to use in a non-global zone.
    We plan to use case studies to document the zones material.
    We would like to know the sorts of topics that you would
    like to see covered. We want to be sure that we address
    your specific development concerns with regard to these
    features.
    Thank you for your comments and suggestions.

    Hi there, I'm using Solaris resource management on a
    server with more than two thousand accounts.
    I created profiles for users: default, staff, root and services.

    Seeing the contents of your /etc/project file could be helpful.

    But while using rctladm to enable syslogging, I set
    global flags of "deny" and "no-local-action" on almost
    everything.

    The flags on the right-hand side of the rctladm(1M) output are read-only:
    they tell you the characteristics of the resource control in question (what
    operations the system will allow the resource control to take).

    Now, many applications don't work because they are
    denied enough process.max-stack-size and
    process.max-file-descriptor to work.
    Applications such as prstat.

    If prstat(1) is failing due to the process.max-file-descriptor control value, that's
    probably a bug. prstat(1) is more likely bumping into the limit to assess how many file
    descriptors are available, and then carrying on--you're just seeing a log message because
    prstat(1) tested the file descriptor limit and you've enabled syslog for that control. Please
    post the prstat(1) output, and we'll figure out if something's breaking.

    I don't find a way to disable the global flags.

    You can't. I would disable the syslog action on the process.max-stack-size control first;
    there is an outstanding bug on this control, in that it will report a false triggering event--
    no actual effect to the process. (If you send me some mail, I will add you as a call record
    on the bug.)

    Can anyone tell me:
    how to disable global flags?
    how to disable and enable Solaris resource management
    altogether?

    You could raise all of the control values, but the resource control facility (like the resource
    limit facility it superseded) is always active. Let's figure out whether you're hitting the bug I
    mentioned, and then figure out how to proceed.
    - Stephen
    Stephen Hahn, PhD, Solaris Kernel Development, Sun Microsystems
    [email protected]
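
    If the control values really are too low for real workloads, the fix is usually to raise
    them per project rather than to try to switch the facility off; a rough sketch (project
    name and values are illustrative):

    # show the current value for the running shell
    prctl -n process.max-file-descriptor $$
    # persistently raise the privileged value for a project
    projmod -s -K "process.max-file-descriptor=(privileged,8192,deny)" user.oracle
    # list which controls currently have a syslog action set
    rctladm | grep syslog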

  • SAP Java and Solaris Zones SolMan 4.0

    May require Solaris Zones experience to continue.
    I have three SAP database instances/central instances running in three sparse Solaris zones with no problem.
    I have created a new sparse zone for a new SAP installation (Solution Manager 4.0) and started the installation. SAP requires a 1.4.2 SDK even though Java 1.5 comes with Solaris 10. The 1.4.2 SDK is in /usr/j2se. The installation in the sparse zone errors out because it can't get "write" rights to /usr/j2se/jre/lib/security/local_policy.jar as it is trying to install some security encryption JCE component.
    I have thought about creating a /usr/j2se_zonename file system, copying the contents of /usr/j2se into it, and then mounting /usr/j2se_zonename in the zone as a lofs under the name /usr/j2se. However, when I copy /usr/j2se I get recursion errors.
    Any thoughts about how to add a writable /usr/j2se to the sparse zone with the least amount of effort? Otherwise, plan B would be to create a "large" zone with a writable /usr directory.
    Received a great answer that, while it may not be architecturally "pure", may get the job done.
    You might just download the relevant JDK tarball and unpack that
    somewhere in your zone (anywhere you like), and point SAP at it...
    http://java.sun.com/j2se/1.4.2/download.html
    Get the one called "self extracting file"-- you can unpack that anywhere
    you want.
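
    For the record, the lofs idea from the original post would look roughly like this from
    the global zone (zone name and paths are illustrative, and whether the mount is allowed
    on top of the inherited read-only /usr is worth testing first). Copying with cpio in
    pass mode avoids following the symlinks under /usr/j2se, which is one common cause of
    the recursion errors mentioned above:

    # cd /usr && find ./j2se | cpio -pdum /export/zones/solman-j2se
    # zonecfg -z solman
    zonecfg:solman> add fs
    zonecfg:solman:fs> set dir=/usr/j2se
    zonecfg:solman:fs> set special=/export/zones/solman-j2se/j2se
    zonecfg:solman:fs> set type=lofs
    zonecfg:solman:fs> end
    zonecfg:solman> commit
    zonecfg:solman> exit

    followed by a zone reboot so the mount shows up.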

    Hi Russ,
    no, you only have to generate two RFCs to your R/3 and assign them in SMSY for system monitoring.
    Then you need a Solution, assign your R/3 to the Solution, and set up system monitoring.
    Regards,
    uDo

  • OSB and Solaris Zones

    Hi,
    Does anybody have any experience of running OSB inside a Solaris zone?
    I'm experimenting with this at the moment and would like to share the OSB installation with the global zone, but keep the /usr/etc and /usr/tmp directories where the host-specific stuff is stored private.
    Thanks.

    I am following the Oracle® Secure Backup Installation and Configuration Guide,
    Release 10.3:
    - Viewing SCSI Bus Name-Instance Parameter Values in Solaris:
    # cd /usr/local/oracle/backup/install
    # installdriver
    bash: installdriver: command not found
    # ./installdriver
    case: Too many arguments
    How can I run the installdriver script to get the SCSI information?
    Best Regards
    Ch

  • Firmware update and Solar Power monitor

    We have a solar power monitor that feeds information to the electricity company via our hub. It is connected directly to the router. Recently the connection has dropped. The cable is not loose. The electricity company suggested I contact BT to find out whether a firmware update had been put through on my line, as that may have closed a port.
    I've spoken to three BT people on the phone, and none of them knew what I was talking about. Can anyone here assist?
    Thanks

    If you had to manually forward any ports, then you would have to set them up again. This guide may help.
    Port forwarding problems
    If you have a home hub 3 version A, then port forwarding can be very difficult to get working.
    There are some useful help pages here, for BT Broadband customers only, on my personal website.
    BT Broadband customers - help with broadband, WiFi, networking, e-mail and phones.

  • Solaris zone monitoring zones independently

    Are there any utilities to monitor each zone on its own? I'm looking to monitor space in each zone for reporting.

    I know I can create a simple script to do it, but I was looking for something more robust...
    Thanks
    -C
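
    For reference, a minimal version of the "simple script" mentioned above, run as root
    from the global zone (it reports the root filesystem of each running zone; adjust the
    df arguments for whatever filesystems matter to you):

    #!/bin/ksh
    for z in $(zoneadm list | grep -v '^global$'); do
        echo "=== $z ==="
        zlogin "$z" df -k /
    done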

  • Req: Checkpointing Disks for Solaris Zones

    Is there a way using Solaris Zones to checkpoint a disk? I would like to be able to carry out a series of experiments on a test system and start from an identical configuration each time. VMWare calls this capability "Disk checkpointing," and Solaris Zones appears to somewhat resemble VMWare.
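
    One way to get exactly this "identical starting point" behaviour, assuming the test
    zone's storage lives on its own ZFS dataset (the dataset and zone names below are
    illustrative), is a snapshot/rollback cycle rather than anything zone-specific:

    # zfs snapshot rpool/zones/testzone@baseline
    # (run the experiment)
    # zoneadm -z testzone halt
    # zfs rollback rpool/zones/testzone@baseline
    # zoneadm -z testzone boot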

    Hi,
    since the SCOM agent is not 'cluster aware', you probably have no other choice than restarting the agent and waiting for the next discovery cycle of the Solaris logical disks. You can speed this up by manually overriding the discovery interval for that specific
    server. After that, the disk is no longer monitored.
    If you have the time and knowledge, you could set up custom monitoring for the clustered filesystems and disable the standard monitoring for those filesystems. But I think that's quite time-consuming. We have the same issues here with our Solaris and Linux clustered/shared
    filesystems, but we still live with the workaround of restarting the agent.
    Regards

  • Oracle 9i Database and Solaris 10 Zones

    Can an existing Oracle 9i database be moved into a new zone? The database resides on its own filesystem. The server is running Solaris 10, and the zones are not set up yet, but Oracle is installed and the 2 databases are up and running.
    Basically there are 2 existing Oracle 9i databases, and I want to set up 2 zones (none other than the default global zone exists right now) and have each database in a zone.
    Thanks in advance.

    You need to do the following -
    Configure loopback mount points from the global zone into the local zone through zonecfg (one for the Oracle binaries, another for the Oracle data). I am assuming that you want to share the same Oracle binary location between all the zones. The Oracle database mounts must be separate, and make sure that you put each one in the respective zone's config only.
    Create an oracle user with the dba group in both zones. It's best if the user IDs and group IDs match across all the zones and the global zone.
    Stop both the database instances in the global zone.
    zlogin to a zone, su as oracle and startup the instances.
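    A rough zonecfg sketch of those mounts for one of the zones (zone and path names are
    illustrative; the first fs entry is the shared binaries, the second is that
    database's own datafiles):

    # zonecfg -z dbzone1
    zonecfg:dbzone1> add fs
    zonecfg:dbzone1:fs> set dir=/u01/app/oracle
    zonecfg:dbzone1:fs> set special=/u01/app/oracle
    zonecfg:dbzone1:fs> set type=lofs
    zonecfg:dbzone1:fs> end
    zonecfg:dbzone1> add fs
    zonecfg:dbzone1:fs> set dir=/u02/oradata/db1
    zonecfg:dbzone1:fs> set special=/u02/oradata/db1
    zonecfg:dbzone1:fs> set type=lofs
    zonecfg:dbzone1:fs> end
    zonecfg:dbzone1> commit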
    Hope that works!

  • Solaris 8, 9, and 10 zones in logical domains

    We are planning to migrate our current environment to a T5-2.
    The current application environment runs Solaris 8, 9 and 10 on V8*-series servers and an M5000.
    Our plan is to install multiple logical domains on the T5-2 with Solaris 10 or 11 and migrate the currently running Solaris 8, 9 and 10 servers as zones. The plan is to create flash archives with flar and restore them on the target T5-2.
    Will there be any issues with these OS versions on migration?
    Please suggest.

    Lars,
    Use ldmp2v to convert the existing physical servers to VMs, and
    convert the Solaris 8 and 9 physical servers to containers in a Solaris 10 VM?
    Is that what you are saying?
    The major constraint I have is that the applications running on the current physical servers have no vendor support; the vendor doesn't exist any more. The applications are locked to run only on the same OS version. What I am worried about is whether running ldmp2v could prevent the applications from coming up in the new virtualized environment. Keeping that in mind, my thinking is:
    1. Install and configure the CDOM (control domain)
    2. Configure and install LDOMs with the Solaris 10 OS
    3. Run flar on the existing Solaris 10 physical server
    4. Transfer the flar to a Solaris 10 LDOM and configure it as a zone
    5. For Solaris 8 and 9, create LDOMs with the Solaris 10 OS
    6. Install the additional patches and packages needed to support Solaris 8 and 9 zones
    7. Create flar images on the existing Solaris 8 and 9 physical servers
    8. Transfer the images to the newly created LDOMs and configure the zones
    9. The current servers are sun4u, which has to be converted to sun4v
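
    A hedged sketch of steps 7 and 8 for one Solaris 8 box (archive name, NFS path and
    zone name are illustrative, and the solaris8 brand packages must already be installed
    in the Solaris 10 guest domain):

    # on the existing Solaris 8 server
    flarcreate -n s8app01 -S /net/depot/images/s8app01.flar

    # in the Solaris 10 guest domain
    zonecfg -z s8app01 'create -t SUNWsolaris8; set zonepath=/zones/s8app01; commit'
    zoneadm -z s8app01 install -p -a /net/depot/images/s8app01.flar
    zoneadm -z s8app01 boot

    The Solaris 9 and Solaris 10 archives follow the same pattern with their respective
    brands and templates.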

  • Clearcase and Solaris 10/zones

    Hi All
    Has anyone installed Rational ClearCase on a node with zones? If so:
    1. If each zone requires ClearCase access, should ClearCase be installed per zone?
    2. Or can it be installed once in the global zone, with the zones getting access to it?
    Your help will greatly enlighten and help me.
    Regards

    The patch for ClearCase that adds Solaris 10 support specifically states that it does not support zones (yet). Maybe in ClearCase 7?

  • Missing data in server monitor under linux and solaris

    Some metrics are not displayed in our environments. Specifically, under the Statistics tab, Request Statistics, Active ColdFusion Threads, we always have a zero line. Also, under Memory Usage, "CF threads by memory usage" is always empty. I have all three buttons at the top checked, so they are monitoring. Is there something else I'm doing wrong?
    Environment 1 : dell2850->centos5->vmware->centos5->32bitJDK5->tomcat6->coldfusion8
    Environment 2 : sun5120->solaris10->64bitJDK5->tomcat6->coldfusion8
    I'm specifically wanting thread info to check if I should increase the defaults in CFIDE configuration.  Most everything on the server is being delivered faster now that we are using a 64bit JVM and have moved to solaris in production (from windows).  But there are some sections of our cfm logic that are taking much longer now (2000% longer)
    Thanks
    Ahnjoan

    Hi all,
    can anyone provide some info on why Java threads are
    recorded in the process list (ps -ef) when you run
    on a Linux box, but not when you run on
    Solaris? Which thread support is more
    performant/stable: the one on Linux or the one on Solaris?
    Thanks
    Francesco

    Linux treats kernel threads as lightweight processes and displays them as if they were actual processes - they of course are not, so the results of 'ps' can be misleading. Solaris fully differentiates between its three concepts of threads, lightweight processes and processes, and 'ps' only shows actual processes.
    Both implementations in Linux and Solaris perform well.
    By the way, Solaris 8 has an optional, slightly different thread model than earlier versions of Solaris (in fact it is more like NT's) and that can be more efficient for JVM's or other multithreaded systems running on SMP systems. It can also be worse - your mileage may vary.
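
    If you want the Solaris side to show something closer to what Linux's ps gives you,
    the per-LWP views are the place to look (a sketch; both are standard Solaris tools):

    ps -eLf        # one line per LWP instead of per process
    prstat -L      # per-LWP CPU usage, useful for spotting busy ColdFusion threads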

  • LDOMs, Solaris zones and Live Migration

    Hi all,
    If you are planning to use Solaris zones inside an LDOM and use an external zpool as the Solaris zone disk, wouldn't this break one of the requirements for being able to do a Live Migration? If so, do you have any ideas on how to use Solaris zones inside an LDOM and at the same time be able to do a Live Migration, or is it impossible? I know this may sound like a bad idea, but I would very much like to know if it is doable.

    Thanks,
    By external pool I mean the way you are probably doing it: separate LUNs, coming from two separate IO/service domains, mirrored in a zpool for the zones. So even if this zpool exists inside the LDOM as zone storage, this will not prevent LM? That's good news. The requirement "no zpool if Live Migration" must then only apply to the LDOM's own storage and not to storage attached to the running LDOM. I am also worried about a possible performance penalty from introducing an extra layer of virtualisation. Have you done any tests regarding this?
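
    One low-risk way to check this on your own setup is the dry-run option of the
    migration command (domain and target names are illustrative):

    # ldm migrate-domain -n ldg1 target-host

    The -n flag only verifies whether the migration would be allowed and reports any
    blocking configuration; it does not actually move the domain.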

  • Solaris Zones and NFS mounts

    Hi all,
    Got a customer who wants to separate his web environments on the same node. The releases of Apache, Java and PHP are different, so it kind of makes sense. It seems a perfect opportunity to implement zoning, and quite straightforward to set up (I'm sure I'll find out it's not). The only concern I have is that all zones will need access to a single NFS mount from a NAS storage array that we have. Is this going to be a problem to configure, and how would I get them to mount automatically on boot?
    Cheers

    Not necessarily. You can create (from the global zone) a /zone/zonename/etc/dfs/dfstab (NOT /zone/zonename/root/etc/dfs/dfstab - notice you don't use the root dir), then do a shareall from the global zone and the zone will start serving. Check your multi-level ports and make sure they are correct. You will run into some problems if you are running Trusted Extensions or if the NFS share is ZFS, but they can be overcome rather easily.
    EDIT: I believe you have to be running TX for this to work. I'll double check.
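
    On the original "mount automatically on boot" question, the usual approach (separate
    from the sharing discussion above) is a plain NFS entry in each zone's own /etc/vfstab,
    with the mount point created inside the zone first; server name and paths here are
    illustrative:

    nas01:/export/webdata  -  /webdata  nfs  -  yes  rw,bg,soft

    Whether soft or hard mounts are appropriate depends on how the web tier should behave
    if the NAS becomes unreachable.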

  • Opscenter 12c creating new zones on x86 server failing

    Asking for some help!!!
    We are in the process of implementing Oracle Cloud Control 12c and of course Opscenter is a piece of it. When using Opscenter to install new zones on Sparc platform all works as expected. When using Opscenter to install new zones on a x86 server the process fails with not much information to go on. Without being too verbose with error messages we found the following in the /var/log/zones/zoneadm.323213129xxxx.ddasdhasdj.install file
    Running auto-install: '/usr/bin/auto-install -z ddasdhasdj -Z dsaas7990000000000000dasdasda7777/rpool/ -m /tmp/manifest.xml.dsada -c /var/mnt/virtlibs/312321312890/zone-config-profile.xml
    Error: auto-install failed.
    Exiting with exit code 255.
    ===== Completed: /usr/lib/brand/solaris/pkgcreatezone (this command is also followed by many of the same command-line qualifiers as above).
    I will say this: the "auto-install failed" message is actually coming from line 242 of the pkgcreatezone script found in /usr/lib/brand/solaris.
    Any experience or knowledge in this area is greatly appreciated.
    Thanks,
    Leigh

    Thanks for the reply.
    As it happens, yesterday one of our guys was working on the machine and rebooted it, removing the /system/volatile/install_log, so I can't say at this point. I did take a look at the log earlier but can't remember what I saw. I'll have him do another install attempt and let you know what we see.
    Leigh

  • ORACLE SERVER AND UNIX TP MONITOR-1

     Product: ORACLE SERVER
     Date written: 2002-05-17
    ====================================================================
    Subject: Oracle Server and UNIX Transaction Processing Monitors - 1
    =====================================================================
    PURPOSE
    This file contains commonly asked questions about Oracle Server and UNIX
    Transaction Processing Monitors (TPMs). The topics covered in this article are
         o What is a Transaction Processing Monitor (TPM)?
         o What is the X/Open Distributed Transaction Processing Model?
          o How does the Oracle Server work with TPMs?
         o How should I position TPMs with my customer?
         o What Oracle products must a customer purchase?
         o Where can my customer purchase a TPM?
         o Availability and packaging
    Explanation & Example
    What is a Transaction Processing Monitor?
    =========================================
    Under UNIX, a Transaction Processing Monitor (TPM) is a tool that coordinates
    the flow of transaction requests between front-end client processes that issue
    requests and back-end servers that process them. A TPM is used as
    the "glue" to coordinate transactions that require the services of several
    different types of back-end processes, such as application servers and
    resource managers, possibly distributed over a network.
    In a typical TPM environment, front-end client processes perform screen
    handling and ask for services from back-end server processes via calls to the
    TPM. The TPM then routes the requests to the appropriate back-end server
    process or server processes, wherever they are located on the network. Through
    configuration information, the TPM knows what services are available and where
    they are located. Generally, the back-end server processes are specialized so
    that each one handles one type of requested service. The TPM provides
    location transparency as well and can send messages through the network
    utilizing lower-level transport services such as TCP/IP or OSF DCE.
    The back-end servers process the requests as necessary and
    return the results back to the TP monitor. The TP monitor then routes
    these results back to the original front-end client process.
    A TPM is instrumental in the implementation of truly distributed processing.
    Front-end clients and back-end processes have no knowledge of each
    other. They operate as separate entities, and it is this concept that provides
    flexibility in application development. Front-end and back-end processes are
    developed in the UNIX client-server style, with each side optimized for its
    particular task. Server functionality can be deployed in stages, which makes
    it easy to add functionality as needed later in the product cycle. It also
    makes it easy to distribute both the front-end and back-end processes
    throughout the network on the most appropriate hardware for the job. In
    addition, multiple back-end server processes of the same type might be
    activated to handle increasing numbers of users.
    What is the X/Open Distributed Transaction Processing Model?
    ============================================================
    The X/Open Transaction Processing working group has been working
    for several years to establish a standard architecture to implement
    distributed transaction processing on open systems. In late 1991,
    X/Open published the initial Distributed Transaction Processing (DTP)
    model specification and defined the first of several interfaces that
    exist between the components of the model. Subsequently, other publications
    and a revised model specification have been published.
    An important function of the TPM in the X/Open DTP model is the
    synchronization of any commits and rollbacks that are required to complete
    a distributed transaction request. The Transaction Manager (TM) portion
    of the TPM is the entity responsible for ordering when distributed commits
    and rollbacks will take place. Thus, if a distributed application program
    is written to take advantage of the TM portion of the TPM, then it,
    and not the DBMS, becomes responsible for enabling the two-phase commit
    process. Article 2 has more detail on this model.
    How does the Oracle Server work with TPMs?
    ==========================================
    When a TPM is used without invoking an X/Open TM component to manage the
    transactions, Oracle Server needs no special functionality. The transaction
    will be managed by Oracle itself. However, when the TPM X/Open TM component
    is used to manage the transaction, the Oracle Server, that is the Oracle DBMS,
    acts as a Resource Manager--a type of back-end process. In the case of
    TPM-managed transactions, the TM needs a way to tell the RMs about the stages
    of the transaction. This is done by a standard, X/Open defined interface
     called XA. Article 2 of this document gives more information about both
    the X/Open model and Oracle7's use of XA.
    Because the XA interface provides a standard interface between the TM and the
    resource manager, it follows that the TM can communicate with any XA-compliant
    resource manager (e.g., RDBMS), and, conversely, that a resource manager can
    communicate with any XA-compliant TM. Thus, the Oracle Server, beginning with
    Oracle7, works with any XA-compliant TM.
    How should I position TPMs with my customer?
    ============================================
    There's been a great deal of confusion about the need for TPM technology. Some
    software suppliers, most notably IBM, will assert that a TPM like CICS is a
    necessary requirement for high volume OLTP. Other vendors will assert that
    there is seldom a need for such technology. And yet others promote TPMs as
    providers of higher transaction throughput.
    From Oracle's standpoint, customers might choose TPM technology under any of
    the following conditions:
    1. For heterogeneous database access, especially for 2PC capability
         This means that a TPM can be used to coordinate 2PC between Oracle
         DBMS and any other XA-compliant database, such as Informix. This
         does NOT provide SQL heterogeneity - SQL calls to Oracle DBMS may be
         different than SQL calls to Informix. The TPM handles the routing,
         communication, and two-phase commit portion of the transaction, but
         does not translate one type of SQL call into another.
    2. For transaction monitoring and workload control
         The leading TPMs supply tools to actively manage the flow of
         transactions between clients and servers and to load balance the work
         load across all available processors on a network, not just on a
         single multi-processor system. Some TPMs also have the ability to
         dynamically bring up additional back-end services during peak work
         hours.
    3. For more flexible application development and installation
         One of the key features of the DTP model is application modularity.
         Modularity, that is, the decomposition of a large program into small,
         easily defined, coded and maintained "mini-programs" makes it easy to
         add new functionality as needed. Modularity also makes it much easier
         to distribute the front-end and back-end processes and the resource
         managers across hardware throughout a network.
     4. For isolating the client from details of the data model
          By using the service oriented programming model, the client program
         is unaware of the data model. The service can be recoded to use a
         different one with no change to the client. To get this advantage,
         the application developer must explicitly code the server and client
         to fit the service model.
    5. For connection of thousands of users
         TP Monitors, because of their three-tier architecture, can be used
         to connect users to an intermediate machine or machines, removing
         the overhead of handling terminal connections from the machine
         actually running the database. See Article 4 for more information.
    There are also several cases where TPM technology is not the right answer.
    These include:
    1. If the customer is simply looking for a performance improvement
         The customer may have heard a theory that "higher performance
         is possible for large scale applications only if they use a
         TP monitor". First, no performance gain can be achieved for
         existing applications; in fact, they won't even run under a TP
         Monitor without recoding. Second, performance improvements have
         only been documented for large numbers of users, and "large"
         means many hundreds or thousands. Without a TP Monitor,
         Oracle Server can handle several hundred users with its normal
         two-task architecture and several times that using the Multi
         Threaded Server. For more on performance, see Article 4.
     2. If the customer has made a large investment in his existing Oracle applications
         TP monitor applications must be designed from the ground up to take
         advantage of TP monitor technology. Current Oracle customers will find
         it difficult to "retrofit" a TP monitor to their existing applications.
         The Multi Threaded Server, on the other hand, allows the use of
         existing Oracle applications without change.
    3. If the customer is committed to the Oracle tool set
         Currently, none of Oracle's front-end tools (Oracle Forms, etc.) is
         designed to work with TP monitors. It is possible to invoke a
         TP Monitor by using user exits. However, the fact that the TP
         Monitor model hides the data model from the client means that only
         the screen display parts of Forms can be used, not the automatic
         mapping from screen blocks to tables.
    4. If the customer does not have a staff of experienced software engineers
          This is still very young technology for UNIX. There is not a lot of
          knowledge in the industry on how to build TP monitor applications or
          what techniques are most useful and which are not. Furthermore,
         integrating products from different vendors, even with the support
         of standard interfaces, is more complex than deploying an integrated
         all-Oracle solution. Because TP monitor technology is fairly
         complex, we recommend that you let the TP monitor supplier promote
         the virtues of their technology and differentiate themselves from
         their competitors.
    What Oracle products must a customer purchase?
    ==============================================
    If your customer is only interested in building Oracle-managed TP Monitor
    transactions, the only Oracle products required are the Oracle Server
    and the appropriate Oracle precompiler for whatever language the
    application is being written in--most likely C or Cobol. If TPM-managed
    transactions are required, the Oracle7 Server with the distributed option
    is also required. SQL*Net is optional because the TPM takes care of the
    network services. Article 2 describes when you would choose to have the TP
    Monitor manage the transactions.
    Where can my customer purchase a TPM?
    =====================================
    There are many vendors offering the UNIX TPM products. (Oracle does not
    relicense TPMs.) Information on the most well known products is provided
    below:
     The following support XA:

     Product: "TUXEDO System/T"                         FCS: 1986
     Vendor:  UNIX System Laboratories, 190 River Road, Summit, NJ 07901
     Ports:   UNIX SVR4 & SVR3: Amdahl, AT&T, Bull, Compaq, Dell, Fujitsu, ICL,
              Motorola, Olivetti, Pyramid, Sequent, Sun, Toshiba, Unisys, NCR, Stratus
              Other: IBM AIX, HP/UX, DEC Ultrix

     Product: "TOP END"                                 FCS: 1992
     Vendor:  NCR Corporation, 1334 S. Patterson Blvd., Dayton, OH 45479
     Ports:   UNIX SVR4: NCR

     Product: "ENCINA"                                  FCS: 1992
     Vendor:  Transarc Corporation, 707 Grant Street, Pittsburgh, PA 15219
     Ports:   IBM AIX, HP, Sun (SunOS and Solaris)
              Other: OS/2, DOS, HP-UX, STRATUS (depends on DCE)

     Product: "CICS/6000"                               FCS: 1993
     Vendor:  IBM Corporation
     Ports:   AIX: IBM (depends on DCE)

     Product: "CICS 9000"                               FCS: 1994
     Vendor:  HP
     Ports:   HP-UX

     The following do not currently support XA:

     Product: "VIS/TP"                                  FCS: unknown
     Vendor:  VISystems, Inc., 11910 Greenville Avenue, Dallas, TX 75243
     Ports:   unknown

     Product: "UniKix"                                  FCS: 1990
     Vendor:  UniKix
     Ports:   UNIX: ARIX, AT&T, NCR, Pyramid, Sequent, Sun, Unisys

     Product: "MicroFocus Transaction System"           FCS: 1993
     Vendor:  Micro Focus, 26 West Street, Newbury RG13 1JT, UK
     Ports:   SCO Unix, AIX
    There are also several third parties who are reselling the products listed
    above.
    In addition, Groupe Bull, Digital, Siemens-Nixdorf, and several other hardware
    vendors are planning to redesign their proprietary TPMs to be XA-compliant and
    suitable for use on UNIX systems.
    Availability and Packaging
    ==========================
    On what platforms is the XA Library available?
    Oracle provides the XA interface with Oracle7 Server on all platforms that
    support an XA-compliant TPM. Support for XA is included as part of the
    Oracle7 Server distributed option and has no extra charge in and of itself.
    Which version of XA does Oracle Server support?
    Oracle7 Server supports the Common Application Environment (CAE) version of
    XA, based on the specification published by X/Open in late 1991. It will
    require that the TM also be at that level. This means Tuxedo /T version 4.2,
    for example.
    Oracle Server supports all required XA functions. There are some optional
    features Oracle Server does not support, such as asynchronous operation.
    None of those options affect application programming.
    Page (2/4)
    This file contains commonly asked questions about Oracle Server and UNIX
    Transaction Processing Monitors (TPMs). The topics covered in this article are
         o Oracle Server Working with UNIX TPMs
         o TPM Application Architecture
    The questions answered in part 2 provide additional detail to the information
    provided in part 1.
    Oracle Server Working with UNIX TP Monitors
    ===========================================
    Do I need XA to use Oracle Server with TPMs? If I don't use it, what are
    the consequences?
    There are a number of real applications running today with Oracle Server and
    TPMs but not using XA. To use a TPM with Oracle without using XA, the user
    would write an "application server" program which could handle one or more
    "services". For example, a server program might handle a service called
    "debit_credit". The key requirement is that the entire transaction,
    including the "commit work", must be executed within a single service. This
    is the restriction which XA will remove, as we'll see later. Each
    server process can serially handle requests on behalf of different clients.
    Because a server process can handle many client processes, this can
    reduce the total number of active processes on the server system,
    thereby reducing resource requirements and possibly increasing overall
    throughput.
    When Oracle is used with a TPM in this mode, we call it an Oracle-managed
    transaction since the transaction commit or rollback is done with a SQL
    statement.
    What is XA? How does XA help Oracle7 work with UNIX TPMs?
    XA is an industry standard interface between a Transaction Manager and a
    Resource Manager. A Resource Manager (RM) is an agent which
    controls a shared, recoverable resource; such a resource can be
    returned to a consistent state after a failure. For example, Oracle7 Server
    is an RM and uses its redo log and undo segments to be able to do this.
    A Transaction Manager (TM) manages a transaction including the
    commitment protocol and, when necessary, the recovery after a failure.
    Normally, Oracle Server acts as its own TM and manages its own commitment
    and recovery. However, using a standards-based TM allows Oracle7 to
    cooperate with other heterogeneous RMs in a single transaction.
    The commonly used TPMs include a TM component for this purpose. In order to
    use the TM capability of the TPM rather than Oracle7's own transaction
    management, the application uses a transaction demarcation API (called TX)
    provided by the TPM rather than the SQL transaction control statements (e.g.
    "commit work"). For each TX call, the TM then instructs all RMs, by the
    appropriate XA commands, to follow the two-phase commit protocol. We
    call this a TPM-managed transaction.
    The following picture shows these interfaces within a monolithic application
    program model. This is the model most commonly described in the
    DTP literature. We'll see later what the picture looks like when we add
    Oracle7 and when we switch to a modularized client-server application
    program model.
              +---------------------------------------------+
              |          Application Program (AP)           |
              +---------------------------------------------+
                    |                               |
          Resource Manager API                   TX API
             (e.g. SQL)                             |
                    |                               |
                    v                               v
          +----------------------+        +----------------------+
          |      Resource        |   XA   |     Transaction      |
          |    Managers (RMs)    |<------>|     Manager (TM)     |
          +----------------------+        +----------------------+
                                 Interface
    The XA interface is an interface between two system components, not
    an application program interface; the application program does
    not write XA calls nor need to know the details of this interface.
    The TM cannot do transaction coordination without the assistance of
    the RM; the XA interface is used to get that assistance.
    How does the DTP Model support client-server?
    The above picture was actually simplified to make it easier to explain
    the role of XA. In a true distributed transaction architecture, there
    are multiple applications, each with an Application Program, a Resource
    Manager, and a Transaction Manager. The applications communicate by
    using a Communication Resource Manager. The CRM is generally provided
    as a component of the TPM. It includes the transaction information when
    it sends messages between applications, so that both applications can
     act on behalf of the same transaction. The following picture
    illustrates this:
                          Client Application
     +---------------------------------------------------------+
     |                           AP                            |
     +---------------------------------------------------------+
         | SQL (RM API)            | TX API           | CRM API
         v                         v                  v
     +----------+   XA   +--------------------+ XA+ +---------+
     |   RMs    |<------>|         TM         |<--->|   CRM   |
     +----------+        +--------------------+     +---------+
                                                         ^
                                                         | transactional
                                                         | messages
                                                         v
     +----------+   XA   +--------------------+ XA+ +---------+
     |   RMs    |<------>|         TM         |<--->|   CRM   |
     +----------+        +--------------------+     +---------+
         ^ SQL (RM API)            ^ TX API           ^ CRM API
         |                         |                  |
     +---------------------------------------------------------+
     |                           AP                            |
     +---------------------------------------------------------+
                          Server Application
    Most TP Monitor products include both a TM and a CRM, and also provide
    additional functions such as task scheduling and workload monitoring.
    What is XA+? What does Oracle need to do to comply with it?
    XA+ is an interface that lets the X/Open model actually be distributed
    because it allows a communication resource manager to tell a TM on the
    server that a message from a client just came in for a particular
    transaction. Oracle is not currently planning to provide an X/Open
    communication resource manager, so we don't have any plans right now
    to do XA+. Version 2 of the DTP model paper from X/Open describes it.
    The status of the current XA+ specification is "snapshot".
    When would I choose an Oracle-managed transaction vs a TPM-managed
    transaction?
    Oracle Server is very efficient at managing its own transactions. If
    the TPM manages the transaction, in general some additional overhead
    will be incurred.
    The two main reasons a customer might prefer to use a TPM-managed
    transaction are as follows:
    (1) He may need to update RMs from different vendors. Experience so far
    has been that the most common case is wanting to update both Oracle and
    a TP Monitor managed resource such as a transactional queuing service
    in the same transaction (see Article 3).
    (2) He may want to use the model of having several different services in
    a transaction, even to the same database. For example, the
    "debit_credit" service could be split into a "debit" service and a
    "credit" service. This is a very attractive model, but this type of
    modularity does exact a performance penalty (see Article 4).
    Can I get a version of XA to run on Oracle Server version 6?
    No, the XA functionality uses two underlying mechanisms in the Oracle
    Server which are not available in version 6: two-phase commit and
     session switching. The upi calls for these functions do not exist
    in version 6.
    When would I use XA vs Oracle7 to coordinate all-Oracle distributed
    transactions?
    Generally speaking, Oracle Server should be used to coordinate all-Oracle
    distributed transactions. The main reason for using XA to coordinate
    transactions would be that you want to use the TP Monitor service-oriented
    architecture. That is, you would like to construct an application built of
    services and service requests in order to benefit from the modularity and
    workload control such an environment provides.
    TP Monitor Application Architecture
    ===================================
    What might a TP Monitor application look like?
     Most TPM applications will consist of two or more programs, where
    there are front-end client programs which request services and back-end
    server programs which provide services. In this case, the TPM supplies an
    additional capability which is transactional communication. The client
    describes the boundaries of the transaction, through the use of the TX API,
    and the TPM relays that transaction information to each requested service.
    The overall application structure generally looks like the following in the
    client-server model. The "TP Monitor Services" box is not necessarily a
    process. It could be one or more processes, or just libraries coordinating
    through shared memory. Each client process and server process could be on
    a different machine. Normally, the application server processes would be
    connected to their Oracle Server processes using the IPC driver; the TPM
    would be used to deliver messages between application client processes on
    one machine and application server processes on another. However, the
    application server processes could also be connected with the standard
    Oracle SQL*Net to shadow processes on different machines. This might be
    useful if one of the databases was on a machine which did not support TPMs.
     +-------------+     +-------------+     +-------------+
     | Application |     | Application |     | Application |
     |  Client 1   |     |  Client 2   |     |  Client 3   |
     +-------------+     +-------------+     +-------------+
             \  TPM API        | TPM API        /  TPM API
              \                |               /
     +----------------------------------------------------+
     |                TP Monitor Services                  |
     |            +-----------------------+                |
     |            |  Transaction Manager  |                |
     +------------+-----------------------+----------------+
        | TPM API      | XA            XA |      TPM API |
        v              | interface        | interface    v
     +--------------+  |                  |  +--------------+
     | Application  |  |                  |  | Application  |
     |  Server 1    |--+                  +--|  Server 2    |
     |  (Pro*C)     |                        |  (Pro*C)     |
     +--------------+                        +--------------+
             | SQL                                  | SQL
             v                                      v
     +--------------+                        +--------------+
     |   Oracle7    |                        |   Oracle7    |    (the Oracle7
     |   Server     |                        |   Server     |     server processes
     |   Process    |                        |   Process    |     and the SGA form
     +--------------+                        +--------------+     the Resource
             |                                      |             Manager)
     +----------------------------------------------------+
     |                        SGA                         |
     +----------------------------------------------------+
    Application client programs might be written in C and be linked with
    TPM libraries. Alternatively, they could use a screen painter product.
    Application server programs would be written in Pro*C or Pro*COBOL and
    be linked with TPM libraries, the normal Oracle7 user-side libraries
    and libxa.a. The Oracle7 Server process is the regular Oracle7 executable.
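     As a concrete (and hedged) illustration of that linking step, Tuxedo provides
     wrapper commands for it; a hypothetical pair of build lines, with file and
     service names invented for the example, might look like:

     buildserver -o debitsrv -f debitsrv.c -r Oracle_XA -s DEBIT -s CREDIT
     buildclient -o atmclient -f atmclient.c

     where -r names the resource manager entry registered with Tuxedo (the Oracle
     XA switch) and -s lists the services the server advertises. Other TPMs have
     their own equivalents.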
    More complicated application architectures can also be constructed. Most of
    the TPMs allow a server to become a client of another service, so you can
    involve additional servers.
    Could I use Oracle7's Multi Threaded Server as the SQL*Net connection in the
    previous picture?
     Yes, but that will not be needed in many cases. For example, both
     application server processes in the previous picture could talk to a
     single Oracle7 Server process through the Multi Threaded Server.
     However, since the TPM architecture typically reduces
    the number of server processes, the reduction in processes using Multi
    Threaded Server may be less significant than in an architecture without
    TPMs. If the application will use database links, however, then MTS will
    be required.
    How do I write an Oracle TP Monitor application?
    The actual API used to talk to the TPM varies between vendors, so you need
    to get the documentation from the vendor. However, all have a way to
    indicate where a transaction begins and ends and a way to send a request
    and receive a response from a client to a server. Some use an RPC model,
    some use a pseudo-RPC model, and some use a send/receive model. The TX API
    described earlier is a subset of the TPM API as defined by each of
    the TPM providers.
     The client program and server program are written against the TPM's own
     request/response calls (such as Tuxedo's "tpacall").
     Reference Document
     ---------------------

     Hello,
     the role is the same on all platforms: the Reports Server takes requests for running reports and spawns an engine that executes the request. In addition to that, the server also provides scheduling services and security features for the Reports environment.
     Regards,
     the Oracle Reports team
