Sun Cluster question

Hello everyone,
I've inherited an Oracle Solaris system running Sybase ASE databases. The system consists of two nodes in a Sun Cluster. Each node hosts two Sybase database instances; one node is active and the other is standing by. The scenario at hand is that when any of the databases on the active node fails, for whatever reason, the whole system gets shifted to the second node to keep the environment going. That works fine.
My intended scenario:
Each node holds 2 database instances, and both nodes ARE working at the same time, so that each one serves one instance of the database. In the event of a failure on one node, the other one should assume the role of BOTH database instances until the first one gets fixed.
The question is: is that possible? And if it is, does that require breaking the whole cluster and rebuilding it, or can this be done online without bringing down the system?
Thanks a lot in advance
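For reference, the usual Sun Cluster pattern for this kind of active/active layout is two failover resource groups with reversed node preference, each holding its own storage and its own Sybase resource. A rough sketch with made-up node, group and mount point names (SC 3.2 command syntax; whether it applies here depends on the storage layout, as the reply below explains):
        # clrt register SUNW.HAStoragePlus
        # clrg create -n nodeA,nodeB sybase1-rg
        # clrg create -n nodeB,nodeA sybase2-rg
        # clrs create -g sybase1-rg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/syb1 syb1-hasp-rs
        # clrs create -g sybase2-rg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/syb2 syb2-hasp-rs
Each group then runs on its preferred node and fails over to the other, and the Sybase resource for each instance is added to its own group.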

What you propose will not work either. For example, there is no logic implemented to fence the underlying zpool from one node to the other in such a configuration.
Also, the current SUNW.HAStoragePlus(5) man page documents:
        Note - SUNW.HAStoragePlus does not support file systems
               created on ZFS volumes.
               You cannot use SUNW.HAStoragePlus to manage a ZFS
               storage pool that contains a file system for which
               the ZFS mountpoint property is set to legacy or
               none. [...]
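So before going further it is worth checking how the pool's file systems are actually mounted, e.g. (pool name is just an example):
        # zfs get -r mountpoint mypool
Anything reported as legacy or none there cannot be put under SUNW.HAStoragePlus control, according to the excerpt above.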
Greets
Thorsten

Similar Messages

  • Apply one non-kernel Solaris 10 patch on a Sun Cluster ***Beginner Question***

    Dear Sir/Madam,
    Our two Solaris 10 servers are running Sun Cluster 3.3. One server, "cluster-1", has one online running zone, "classical". The other server,
    "cluster-2", has two online running zones, namely "romantic" and "modern". We are trying to install a regular non-kernel patch, #145200-03, on cluster-1 LIVE; it has no prerequisites and does not require a reboot afterwards. Our goal is to install this patch in the global zone and the
    three local zones, i.e., classical, romantic and modern, on both cluster servers, cluster-1 and cluster-2.
    Unfortunately, when we began patching on cluster-1, it patched the running zone "classical" but we got the following errors, which prevented it from continuing with the zones "romantic" and "modern" that are running on cluster-2. And when we try to patch cluster-2, we get a similar patching error about failing to boot the non-global zone "classical", which is on cluster-1.
    Any idea how I could resolve this? Do we have to shut down the cluster in order to apply this patch? I would prefer to apply this
    patch with the Sun Cluster running. If not, what's the preferred way to apply a simple non-reboot patch to all the zones on both nodes in the Sun Cluster?
    I'd like to hear from folks who have experience dealing with patching in Sun Cluster.
    Thanks, Mr. Channey
    P.S. Below is output from the patch #145200-03 run, plus zoneadm and clrg
    output on cluster-1
    root@cluster-1# patchadd 145200-03
    Validating patches...
    Loading patches installed on the system...
    Done!
    Loading patches requested to install.
    Done!
    Checking patches that you specified for installation.
    Done!
    Approved patches will be installed in this order:
    145200-03
    Preparing checklist for non-global zone check...
    Checking non-global zones...
    Failed to boot non-global zone romantic
    exiting
    root@cluster-1# zoneadm list -iv
    ID  NAME       STATUS     PATH             BRAND    IP
     0  global     running    /                native   shared
    15  classical  running    /zone-classical  native   shared
     -  romantic   installed  /zone-romantic   native   shared
     -  modern     installed  /zone-modern     native   shared
    root@cluster-1# clrg status
    === Cluster Resource Groups ===
    Group Name   Node Name   Suspended   Status
    ----------   ---------   ---------   ------
    classical    cluster-1   No          Online
                 cluster-2   No          Offline
    romantic     cluster-1   No          Offline
                 cluster-2   No          Online
    modern       cluster-1   No          Offline
                 cluster-2   No          Online

    Hi Hartmut,
    I kind of got the idea. Just want to make sure. The zones 'romantic' and 'modern' show "installed" as the current status at cluster-1. These 2 zones are in fact running and online at cluster-2. So I will issue your commands below at cluster-2 to detach these zones to "configured" status:
    cluster-2 # zoneadm -z romantic detach
    cluster-2 # zoneadm -z modern detach
    Afterwards, I apply the Solaris patch at cluster-2. Then, I go to cluster-1 and apply the same Solaris patch. Once I am done patching both cluster-1 and cluster-2, I will
    go back to cluster-2 and run the following commands to force these zones back to "installed" status:
    cluster-2 # zoneadm -z romantic attach -f
    cluster-2 # zoneadm -z modern attach -f
    CORRECT ?? Please let me know if I am wrong or if there's any step missing. Thanks much, Humphrey
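    For reference, the detach / patch / re-attach cycle on a given node looks like this (a sketch; run it on whichever node the zones are not currently running, and check zoneadm(1M) on your update level for the exact attach options):
        node# zoneadm -z romantic detach
        node# zoneadm -z modern detach
        node# patchadd 145200-03
        node# zoneadm -z romantic attach -f
        node# zoneadm -z modern attach -f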

  • Upgrading Solaris OS (9 to 10) in a Sun Cluster 3.1 environment

    Hi all,
    I have to upgrade Solaris 9 to 10 in a Sun Cluster 3.1 environment.
    Sun Cluster 3.1
    data service - NetBackup 5.1
    Questions:
    1. What is the best way to upgrade Solaris 9 to 10, and what problems can come up while upgrading the OS?
    2. Is Sun Trunking supported in Sun Cluster 3.1?
    Regards
    Ramana

    Hi Ramana,
    We used Live Upgrade to go from Solaris 9 to 10, and it is the best method for minimizing downtime and risk, but you have to follow the proper procedure, as it is not the same as for a standalone Solaris system. Live Upgrade with Sun Cluster is different: you have to take the global devices and Veritas Volume Manager into consideration when creating the new boot environment.
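    As a very rough illustration of the non-cluster part of that procedure (device names and the install image path are placeholders; the cluster-specific handling of the /global/.devices file systems is described in the Sun Cluster upgrade guide and must be added on top of this):
        # lucreate -n s10be -m /:/dev/dsk/c1t1d0s0:ufs
        # luupgrade -u -n s10be -s /net/installserver/export/solaris10
        # luactivate s10be
        # init 6
    Note that after luactivate you must use init 6 (not reboot) to boot the new environment.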
    Thanks/Regards
    Sadiq

  • Recommendations for Multipathing software in Sun Cluster 3.2 + Solaris 10

    Hi all, I'm in the process of building a 2-node cluster with the following specs:
    2 x X4600
    Solaris 10 x86
    Sun Cluster 3.2
    Shared storage provided by an EMC CX380 SAN
    My question is this: what multipathing software should I use? The built-in Solaris 10 multipathing software or EMC's PowerPath?
    Thanks in advance,
    Stewart

    Hi,
    according to http://www.sun.com/software/cluster/osp/emc_clarion_interop.xml you can use both.
    So in the end it all boils down to:
    - cost: Solaris multipathing is free, as it is bundled
    - support: Sun can offer better support for the Sun software
    You can try to browse this forum to see what others have experienced with PowerPath. From a pure "use as much integrated software as possible" standpoint, I would go with the Solaris drivers.
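    If you do go with the bundled multipathing (MPxIO), enabling it is essentially a one-liner (sketch; it prompts for a reboot and rewrites the device names in /etc/vfstab):
        # stmsboot -e
        # stmsboot -L     (after the reboot, shows the old-to-new device name mapping)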
    Hartmut

  • DS6 in a zone on a Sun Cluster

    I have a Sun Cluster that I am trying to configure, and I don't know if I am trying to do something wrong, so I thought I would ask.
    I am using Sun Cluster 3.2 on a pair of Sun T2000s with a Fiber Channel disk array attached to both nodes. I have configured the disk array to have two file systems, one for each server. I have configured two resource groups in the global zone and set up an HAStoragePlus resource for each file system. I am successfully able to fail the file systems over between the two nodes. On each of the file systems I have installed a zone. The zone is managed with the resource type provided by the SUNWsczone package to start and stop the zone. That resource is in the same resource group as the HAStoragePlus resource.
    At this point I have created a resource group for the zone to manage the directory server. After creating the resource group I am trying to create a resource for the directory server HA service. When I use the clresource command it complains that the resource group does not contain a logical hostname. Using the services provided by the SUNWsczone package I created a logical hostname that is assigned to the zone in question. Is there a way to install the Directory Server HA resource into the resource group for the zone?
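    For what it's worth, adding a logical hostname resource to a resource group is normally just (placeholder names; whether the DS 6 agent then works inside a zone is a separate question, see the reply below):
        # clreslogicalhostname create -g ds-rg -h ds-lh ds-lh-rs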

    Philippe,
    DS 6 Sun Cluster Agent was not tested with SC 3.2 in Zones.
    Zone support came with SC 3.2, and DS 6 Cluster Agent was built with SC 3.1, tested with SC 3.1 and 3.2 in the Global zone.
    Regards,
    Ludovic.

  • Sun Cluster probe value

    Hi,
    I have a little question about probe values when creating a probe script.
    Exit code 100 (automatic failover) means that the probe failed and the resource should be restarted within Retry_count attempts during the Retry_interval.
    Exit code 0 means that everything is OK.
    What about the other values (1, 2, ... 99)? Are there other values?
    Thanks.

    Pat,
    For GDS there is also exit 201, which will perform an immediate failover.
    Your "exit 100 means immediate failover" is not completely true. An exit 100 from the probe informs GDS that the application has failed and requires immediate attention. That attention is determined by other resource properties, i.e. Retry_count and Retry_interval. So, assuming Retry_count=2, GDS will attempt a resource restart and only consider a failover to another node once Retry_count is exceeded within Retry_interval.
    The SUNW.gds man page provides further information, i.e.
    The exit status of the probe command is used to determine the
    severity of the failure of the application. This exit status,
    called probe status, is an integer between 0 (for success) and
    100 (for complete failure). The probe status can also be 201,
    which causes the application to fail over unless Failover_enabled
    is set to False.
    One point to also consider is that Sun Cluster sums the failure history, where 100 indicates a complete failure. This implies that your probe could exit 50, and if the next time it runs it also exits 50, you'll have a failure history sum of 100, which triggers the same reaction as a complete failure, e.g.
    25 + 25 + 25 + 25 = 100 would trigger a complete failure
    50 + 50 = 100 would trigger a complete failure
    Please note that if you use exit values such as 25 or 50, the failure history is summed within the moving Retry_interval window. So if Retry_interval is set to 300, you have a 5 minute moving window in which the exit values must sum to 100 in order to get GDS to react as for a complete failure. This implies that if your probe exits 50 and then 301 seconds later exits 50 again, GDS won't react, because the exits do not sum to 100 within Retry_interval.
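    To make that concrete, a GDS probe skeleton could look like this (a sketch only; "mydaemon" is a placeholder for whatever check makes sense for your application):
        #!/bin/ksh
        # GDS probe sketch: 0 = healthy, partial values (e.g. 50) accumulate
        # in the failure history, 100 = complete failure, 201 = immediate
        # failover (unless Failover_enabled is set to false).
        if /usr/bin/pgrep -x mydaemon >/dev/null 2>&1 ; then
            exit 0
        fi
        exit 100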
    Hope this makes sense.
    Regards
    Neil

  • Sun Cluster + metaset shared disks

    Guys, I am looking for some instructions that most Sun administrators would probably already know.
    I am trying to create some cluster resource groups and resources etc., but before that I am creating the file systems that are going to be used by the two nodes in the Sun Cluster 3.2. We use SVM.
    I have some drives that I plan to use for this specific cluster resource group that is yet to be created.
    I know I have to create a metaset, since that's how the other resource groups in my environment are already set up, so I will go with the same concept.
    # metaset -s TESTNAME
    Set name = TESTNAME, Set number = 5

    Host                Owner
      server1
      server2

    Mediator Host(s)    Aliases
      server1
      server2
    # metaset -s TESTNAME -a /dev/did/dsk/d15
    metaset: server1: TESTNAME: drive d15 is not common with host server2
    # scdidadm -L | grep d6
    6 server1:/dev/rdsk/c10t6005076307FFC4520000000000004133d0 /dev/did/rdsk/d6
    6 server2:/dev/rdsk/c10t6005076307FFC4520000000000004133d0 /dev/did/rdsk/d6
    # scdidadm -L | grep d15
    15 server1:/dev/rdsk/c10t6005076307FFC4520000000000004121d0 /dev/did/rdsk/d15
    Do you see what I am trying to say? If I want to add d6 to the metaset it will go through fine, but not d15, since it shows up against only one node, as you can see from the scdidadm output above.
    Please let me know how I can share the drive d15 with the other node too, the same as d6. Thanks much for your help.
    -Param
    Edited by: paramkrish on Feb 18, 2010 11:01 PM

    Hi, thanks for your reply. You got me wrong: I am not asking you to be liable for the changes you recommend, since I know that's not reasonable when asking for help. I am aware this is not a support site but a forum to exchange information that people are already aware of.
    We have a support contract, but that is only for the Sun hardware, and those support folks are only somewhat OK when it comes to Solaris and the cluster setup, not really experts. I will certainly seek their help when needed; that's my last option. Since I thought the problem I am seeing is possibly something trivial, I quickly posted a question in this forum.
    We do have a test environment, but it has only one node with zone clusters rather than two nodes. Hence I don't see this problem in the test environment, and "cldev populate" would be of no use to me there either, I think, since we don't have two nodes.
    I will check the logs as you suggested and will get back if I find something. If you have any other thoughts feel free to let me know (don't worry about the risks, I know I can take care of those).
    -Param
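    For the record, once the LUN behind d15 is actually presented to server2, the usual sequence to get a DID path for it on that node is roughly (a sketch; scgdevs is the SC 3.1 equivalent of cldevice populate):
        server2 # devfsadm
        server2 # cldevice populate
        server2 # scdidadm -L | grep d15
        server1 # metaset -s TESTNAME -a /dev/did/dsk/d15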

  • Sun Cluster 3.2 without shared storage (Sun StorageTek Availability Suite)

    Hi all.
    I have a two node Sun Cluster.
    I have configured and installed AVS on these nodes (AVS remote mirror replication).
    AVS is working fine, but I don't understand how to integrate it into the cluster.
    What I did:
    Created remote mirror with AVS.
    v210-node1# sndradm -P
    /dev/rdsk/c1t1d0s1      ->      v210-node0:/dev/rdsk/c1t1d0s1
    autosync: on, max q writes: 4096, max q fbas: 16384, async threads: 2, mode: sync, group: AVS_TEST_GRP, state: replicating
    v210-node1# 
    v210-node0# sndradm -P
    /dev/rdsk/c1t1d0s1      <-      v210-node1:/dev/rdsk/c1t1d0s1
    autosync: on, max q writes: 4096, max q fbas: 16384, async threads: 2, mode: sync, group: AVS_TEST_GRP, state: replicating
    v210-node0#
    Created resource group in Sun Cluster:
    v210-node0# clrg status avs_test_rg
    === Cluster Resource Groups ===
    Group Name       Node Name       Suspended      Status
    avs_test_rg      v210-node0      No             Offline
                     v210-node1      No             Online
    v210-node0#
    Created SUNW.HAStoragePlus resource with AVS device:
    v210-node0# cat /etc/vfstab  | grep avs
    /dev/global/dsk/d11s1 /dev/global/rdsk/d11s1 /zones/avs_test ufs 2 no logging
    v210-node0#
    v210-node0# clrs show avs_test_hastorageplus_rs
    === Resources ===
    Resource:                                       avs_test_hastorageplus_rs
      Type:                                            SUNW.HAStoragePlus:6
      Type_version:                                    6
      Group:                                           avs_test_rg
      R_description:
      Resource_project_name:                           default
      Enabled{v210-node0}:                             True
      Enabled{v210-node1}:                             True
      Monitored{v210-node0}:                           True
      Monitored{v210-node1}:                           True
    v210-node0#
    By default everything works fine.
    But if I need to switch the RG to the second node, I have a problem.
    v210-node0# clrs status avs_test_hastorageplus_rs
    === Cluster Resources ===
    Resource Name               Node Name    State     Status Message
    avs_test_hastorageplus_rs   v210-node0   Offline   Offline
                                v210-node1   Online    Online
    v210-node0# 
    v210-node0# clrg switch -n v210-node0 avs_test_rg
    clrg:  (C748634) Resource group avs_test_rg failed to start on chosen node and might fail over to other node(s)
    v210-node0#
    If I change the state to logging, everything works:
    v210-node0# sndradm -C local -l
    Put Remote Mirror into logging mode? (Y/N) [N]: Y
    v210-node0# clrg switch -n v210-node0 avs_test_rg
    v210-node0# clrs status avs_test_hastorageplus_rs
    === Cluster Resources ===
    Resource Name               Node Name    State     Status Message
    avs_test_hastorageplus_rs   v210-node0   Online    Online
                                v210-node1   Offline   Offline
    v210-node0#
    How can I do this without creating an SC agent for it?
    Anatoly S. Zimin

    Normally you use AVS to replicate data from one Solaris Cluster to another. Can you just clarify whether you are replicating to another cluster or trying to do it between a single cluster's nodes? If it is the latter, then this is not something that Sun officially supports (IIRC); rather, it is something that has been developed in the open source community. As such it will not be documented in the main Sun SC documentation set. Furthermore, support and/or questions for it should be directed to the author of the module.
    Regards,
    Tim
    ---

  • Sun Cluster 3.1 Failover Resource without Logical Hostname

    Maybe it sounds strange, but I need to create a failover service without any network resource in use (or at least with a dependency on a logical hostname created in a different resource group).
    Does anybody know how to do that?

    Well, you don't really NEED a LogicalHostname in an RG. So I guess I am not understanding the question.
    Is there an application agent which demands to have a network resource in the RG? Sometimes the VALIDATE method of such agents refuses to work if there is no network resource in the RG.
    If so, tell us a bit more about the application. Is this GDS based and generated by the Sun Cluster Agent Builder? The Agent Builder has an option of "non Network Aware"; if you select that while building your app, it ought to work without a network resource in the RG (see the sketch below).
    But maybe I should back up and ask the more basic question of exactly what is REQUIRING you to create a LogicalHostname?
    HTH,
    -ashu
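    In case it helps, a hand-built, non-network-aware GDS resource would look roughly like this (paths and names are placeholders; see the SUNW.gds(5) man page for the exact extension properties on your release):
        # clresourcetype register SUNW.gds
        # clrg create myapp-rg
        # clrs create -g myapp-rg -t SUNW.gds \
              -p Start_command="/opt/myapp/bin/start" \
              -p Stop_command="/opt/myapp/bin/stop" \
              -p Probe_command="/opt/myapp/bin/probe" \
              -p Network_aware=false myapp-rs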

  • Node shutdown in Sun Cluster

    I have a two node cluster configured for high availability.
    My resource group is online on node1, so the resources (the logical hostname resource and my application resource) are online on node1.
    When node1 is shut down, the resource group fails over to node2 and comes online. But when node1 is brought back, the logical hostname is plumbed up on node1 as well, so both nodes have the logical hostname plumbed up (from ifconfig -a output), which is causing the problem.
    My question is: does Sun Cluster check the status of the resources in the resource group on the node where my resource group is offline? If it does, what additional configuration is required?

    This is a pretty old post and you probably have the answer by now (or have abandoned all hope), but it seems to me that what you want is to reset the resource/resource group dependencies for node1.
    If node1 is coming online under the logical hostname without all the resources coming up, you just don't have the resource dependencies set up. You can do this in the SunPlex Manager GUI pretty easily. That should make it so the node doesn't get added to the logical hostname resource group until X dependencies are met (what X stands for is entirely up to you; I didn't see listed which resource you want to come up first).
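    On the command line the same idea is just a resource property; for example, to stop the logical hostname resource from coming online until the application resource is up (placeholder resource names, clresource syntax from SC 3.2):
        # clresource set -p Resource_dependencies=myapp-rs mylh-rs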

  • TimesTen database in Sun Cluster environment

    Hi,
    Currently we have our application together with the TimesTen database installed at the customer on two different nodes (running on Sun Solaris 10). The second node acts as a backup to provide failover functionality, although right now only manual failover is supported.
    We are now looking into a hot-standby / high availability solution using Sun Cluster software. As understood from the documentation, applications can be 'plugged-in' to the Sun Cluster using Agents to monitor the application. Sun Cluster Agents should be already available for certain applications such as:
    # MySQL
    # Oracle 9i, 10g (HA and RAC)
    # Oracle 9iAS Application Server
    # PostgreSQL
    (See http://www.sun.com/software/solaris/cluster/faq.jsp#q_19)
    Our question is whether Sun Cluster Agents are already (freely) available for TimesTen. If so, where can we find them? If not, should we write a specific Agent for TimesTen ourselves, or handle database problems from the application?
    Does someone have any experience using TimesTen in a Sun Cluster environment?
    Thanks in advance!

    Yes, we use 2-way replication, but we don't use cache connect. The replication is created like this on both servers:
    create replication MYDB.REPSCHEME
      element SERVER01_DS datastore
        master MYDB on "SERVER01_REP"
        transmit nondurable
        subscriber MYDB on "SERVER02_REP"
      element SERVER02_DS datastore
        master MYDB on "SERVER02_REP"
        transmit nondurable
        subscriber MYDB on "SERVER01_REP"
      store MYDB on "SERVER01_REP"
        port 16004
        failthreshold 500
      store MYDB on "SERVER02_REP"
        port 16004
        failthreshold 500
    The application runs on SERVER01 and is standby on SERVER02. If an invalid state is detected in the application, the application on SERVER01 is stopped and the application on SERVER02 is started.
    In addition to this, we want to fail over if the database on the SERVER01 is in invalid state. What should we have monitored by the Clustering Agent to detect an invalid state in TT?
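    One approach is to have the cluster probe check the things that matter for this setup: the TimesTen daemon, the replication agent and a trivial statement against the data store. A rough sketch (the ttStatus grep is illustrative only; its output format varies by TimesTen release):
        #!/bin/ksh
        # Hypothetical probe: healthy only if the TimesTen daemon answers
        # and a replication agent is reported as running.
        OUT=$(ttStatus 2>&1) || exit 100
        echo "$OUT" | grep -i "replication" | grep -qi "running" || exit 100
        exit 0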

  • SUNWjass on Sun Cluster

    Hi,
    I would like to do hardening on the Sun Cluster nodes using SUNWjass.
    Can anybody tell me which profiles I need to apply? When I apply the Cluster Security Hardening driver profile, the cluster interconnect stops functioning until I disable IP Filter.
    I am also seeking suggestions on the filter entries in the /etc/ipf/ipf.conf file.
    Thanks and Regards
    Ushas Symon
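    For the interconnect problem specifically, the usual workaround is to let everything through on the private interconnect interfaces; a sketch for /etc/ipf/ipf.conf (the physical interface names are placeholders for your actual interconnect NICs):
        pass in  quick on e1000g1 all
        pass out quick on e1000g1 all
        pass in  quick on e1000g2 all
        pass out quick on e1000g2 all
        pass in  quick on clprivnet0 all
        pass out quick on clprivnet0 all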

    Hi Tim,
    I would like to get clarification on the same question. There are many profiles which can be applied as part of hardening (e.g. cluster config, cluster security, server config, server security). For a Sun Cluster in a failover configuration, do I need to install both server security and cluster security, or only one of them?
    I am afraid it might make some changes and, if something goes wrong, I would have to back out the JASS profile.
    Just for clarification.
    Thanks and Regards
    Ushas

  • Sun Cluster Core Conflict - on SUN Java install

    Hi
    We had a prototype cluster that we were playing with over two nodes.
    We decided to uninstall the cluster by putting each node into single user mode and running scinstall -r.
    Afterwards we found that the Java Availability Suite was a little messed up, maybe because the kernel/registry had not been updated: it thought the cluster and agent software was uninstalled and would not let us re-install. All the executables in /etc/cluster/bin had been removed from the nodes.
    So, on both nodes we ran the uninstall program from /var/sadm/prod/... and then selected the cluster and agents to uninstall.
    On the first node, this completely removed the Sun Cluster components and then allowed us to re-install the cluster software successfully.
    On the second node, for some reason, it has left behind the component "Sun Cluster Core", and will not allow us to remove it with the uninstaller.
    When we try to re-install we get the following:
    "Conflict - incomplete version of Sun Cluster Core has been detected"
    It then points us to the Sun Cluster upgrade guide on sun.com.
    My question is: how do we 'clean up' this node and remove the Sun Cluster Core component so we can re-install the Sun Cluster software from scratch?
    I don't quite understand how this has been left behind....
    thanks in advance
    S1black.

    You can use prodreg directly to clean up when your de-install has gone bad.
    Use:
    # prodreg browse
    to list the products. You may need to recurse down into the individual items. Then use:
    # prodreg unregister ...
    to unregister and pkgrm to remove the packages manually.
    That has worked for me in the past. Not sure if it is the 'official' way though!
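    For example, to cross-check which Sun Cluster packages are still on the node before pkgrm'ing anything by hand (a sketch; the exact package names depend on the release, SUNWscr is just an example):
        # pkginfo | grep -i cluster
        # pkginfo -l SUNWscr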
    Regards,
    Tim
    ---

  • Veritas required for Oracle RAC on Sun Cluster v3?

    Hi,
    We are planning a 2 node Oracle 9i RAC cluster on Sun Cluster 3.
    Can you please explain these 2 questions?
    1)
    If we have a hardware disk array RAID controller with LUNs etc., then why do we need to have Veritas Volume Manager (VxVM) if all the LUNs are configured at a hardware level?
    2)
    Do we need to have VxFS? All our Oracle database files will be on raw partitions.
    Thanks,
    Steve

    > We are planning a 2 node Oracle 9i RAC cluster on Sun Cluster 3.
    Good. This is a popular configuration.
    > Can you please explain these 2 questions?
    > 1) If we have a hardware disk array RAID controller with LUNs etc, then why do we need to have Veritas Volume Manager (VxVM) if all the LUNs are configured at a hardware level?
    VxVM is not required to run RAC. VxVM has an option (separately licensable) which is specifically designed for OPS/RAC. But if you have a highly reliable, multi-pathed, hardware RAID platform, you are not required to have VxVM.
    > 2) Do we need to have VxFS? All our Oracle database files will be on raw partitions.
    No.
    IMHO, simplify is a good philosophy. Adding more software
    and layers into a highly available design will tend to reduce
    the availability. So, if you are going for maximum availability,
    you will want to avoid over-complicating the design. KISS.
    In the case of RAC, or Oracle in general, many people do use
    raw and Oracle has the ability to manage data in raw devices
    pretty well. Oracle 10g further improves along these lines.
    A tenet in the design of highly available systems is to keep
    the data management as close to the application as possible.
    Oracle, and especially 10g, are following this tenet. The only
    danger here is that they could try to get too clever, and end up
    following policies which are suboptimal as the underlying
    technologies change. But even in this case, the policy is
    coming from the application rather than the supporting platform.
    -- richard

  • What are typical failover times for application X on Sun Cluster

    Our company does not yet have any hands-on experience with clustering anything on Solaris, although we do with Veritas and Microsoft. My experience with MS is that it is as close to seamless (instantaneous) as possible. The Veritas clustering takes a little bit longer to activate the standbys. A new application we are bringing in house soon runs on Sun Cluster (it is some BEA Tuxedo/WebLogic/Oracle monster). They claim the time it takes to flip from the active node to the standby node is ~30 minutes. This seems a bit insane to us since they are calling this "HA". Is this type of failover time typical in Sun land? Thanks for any numbers or references.

    This is a hard question to answer because it depends on the cluster agent/application.
    On one hand you may have a simple Sun Cluster application that fails over in seconds because it has to do a limited amount of work (umount here, mount there, plumb network interface, etc) to actually failover.
    On the other hand these operations may, depending on the application, take longer than another application due to the very nature of that application.
    An Apache web server failover may take 10-15 seconds but an Oracle failover may take longer. There are many variables that control what happens from the time that a node failure is detected to the time that an application appears on another cluster node.
    If the failover time is 30 minutes I would ask your vendor why that is, exactly.
    Not in a confrontational way, but as an "I don't get how this is high availability" question, since the assumption is that up to 30 minutes could elapse from the time your application goes down to it coming back on another node.
    A better solution might be a different application vendor (I know, I know) or a scalable application that can run on more than one cluster node at a time.
    The logic with the scalable approach is that if a failover takes 30 minutes or so to complete it (failover) becomes an expensive operation so I would rather that my application can use multiple nodes at once rather than eat a 30 minute failover if one node dies in a two node cluster:
    serverA > 30 minute failover > serverB
    seems to be less desirable than
    serverA, serverB, serverC, etc concurrently providing access to the application so that failover only happens when we get down to a handful of nodes
    Either one is probably more desirable than having an application outage(?)
