Primary queue

I am using MQ Platform Edition. When I create more than one thread waiting for messages (with no messages in the queue), an exception is thrown. In the broker window I see "Unable to attach to queue queue:single:jason_QDest: a primary queue is already active". What is this problem? Is it caused by the delivery policy 'single'?

Yes. By default a queue has the delivery policy 'single', which means only one consumer is allowed on the queue at a time.
You can change this behavior through the broker configuration properties; please see the MQ Administration Guide.
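
For illustration, here is a minimal sketch of the situation that triggers this error, assuming a JNDI-bound connection factory and a queue named "jason_QDest" (both JNDI names below are placeholders, not taken from the original post). With the default 'single' delivery policy the broker allows one active ("primary") consumer on the queue, so it is the second attach that gets rejected:

import javax.jms.*;
import javax.naming.InitialContext;

public class SingleDeliveryPolicyDemo {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        QueueConnectionFactory qcf =
                (QueueConnectionFactory) ctx.lookup("QueueConnectionFactory"); // placeholder JNDI name
        Queue queue = (Queue) ctx.lookup("jason_QDest");                       // placeholder JNDI name

        QueueConnection connection = qcf.createQueueConnection();
        connection.start();

        QueueSession session1 = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        QueueReceiver first = session1.createReceiver(queue); // first consumer attaches and becomes "primary"

        QueueSession session2 = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        try {
            // With the default 'single' delivery policy the broker already has a
            // primary consumer, so this second attach is expected to be rejected.
            QueueReceiver second = session2.createReceiver(queue);
            second.close();
        } catch (JMSException expected) {
            System.err.println("Second receiver rejected: " + expected.getMessage());
        }

        first.close();
        connection.close();
    }
}

Whether you raise the consumer limit on the broker side (per the Administration Guide) or keep a single consumer and hand work off to your own threads is a design choice; the sketch above only shows why the exception appears.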

Similar Messages

  • JMS/Queue cluster question

              Hi
              I have some very basic cluster questions on JMS Queues. Let's say I have 3 WLS
              servers in a cluster.
              Q1> I create the queue in only WLS #1 - then all the other WLS servers (#2 and #3)
              should have a stub in their JNDI tree for the Queue which points to the Queue in
              #1 - right? Basically what I am trying to achieve is to have the queue in one server
              and have all the other servers hold a pointer to it - I believe this is possible in a
              WLS cluster - right?
              Q2> Is there any way a client of the queue running on a WLS can tell whether the
              Queue handle it is using is local (i.e. in the same server) or remote? Is the API
              createQueue(./queuename) going to help here?
              Q3> Is there any way to create a Queue dynamically - I guess JMX is the answer, right?
              But I will take this question a bit further - let's say the answer to Q1 is yes. In
              this case, if server #1 crashes, then #2 and #3 have no Queues. So if they try to
              create a replica of the Queue (as on server #1) - pointing to the same filestore -
              can they do it? I want only one of them to succeed in creating the Queue, and the
              Queue should also have all the data of the #1 Queue (a 1-to-1 replica).
              All I want is the concept of a primary and secondary queue in a cluster: go on using
              the primary queue, but if it fails use the secondary queue - something like the
              HttpSession replication concept in clusters. My cluster's purpose is more failover
              than load balancing.
              TIA
              Anamitra
              

              Anamitra wrote:
              > Hi Tom
              > 7.0 is definitely an option for me. So let's take the scenario of a JMS cluster
              > and 7.0.
              >
              > I do not understand what you mean by an HA framework?
              An HA framework is a third-party product that can be used to automatically restart a failed server
              (perhaps on a new machine), and that will guarantee that the same server isn't started in two
              different places (that would be bad). There are a few of these HA products; "Veritas" is one of
              them. Note that if you are using JMS file stores or transactions, both of which depend on the disk,
              you must make sure that the files are available on the new machine. One approach to this is to use
              what is known as a "dual-ported" disk.
              > If I am using a cluster of 3 WLS
              > 7.0 servers - as you have said, I can create a distributed Queue with a forward delay attribute
              > set to 0 if I have the consumer only in one server, say server #1.
              > But still, if server #1 goes down, you say that the Queues in server #2 and server
              > #3 will not have access to the messages which were stuck in the server #1 Queue when
              > it went down - right?
              Right, but is there a point in forwarding the messages to your consumer's destination if your
              application is down?
              If your application can tolerate it, you may wish to consider allowing multiple instances of it (one
              per physical destination). That way if something goes down, only those messages are out of business
              until the application comes back up...
              >
              >
              > Why can't the other servers see them - they all point to the same store, right?
              > thanks
              > Anamitra
              >
              Again, multiple JMS servers cannot share a store. Nor can multiple stores share a file. That will
              cause corruption. Multiple stores CAN share a database, but can't use the same tables in the
              database.
              Tom
              >
              > Tom Barnes <[email protected]> wrote:
              > >
              > >
              > >Anamitra wrote:
              > >
              > >> Hi
              > >> I have some very basic cluster questions on JMS Queues. Let's say I have 3 WLS
              > >> servers in a cluster. Q1> I create the queue in only WLS #1 - then all the other
              > >> WLS servers (#2 and #3) should have a stub in their JNDI tree for the Queue which
              > >> points to the Queue in #1 - right?
              > >
              > >It's not a stub. But essentially right.
              > >
              > >> Basically what I am trying to achieve is to have the queue in one server
              > >> and all the other servers have a pointer to it - I believe this is possible
              > >> in a WLS cluster - right?
              > >
              > >Certainly.
              > >
              > >>
              > >> Q2> Is there any way a client of the queue running on a WLS can tell whether
              > >> the Queue handle it is using is local (i.e. in the same server) or remote. Is
              > >> the API createQueue(./queuename) going to help here?
              > >
              > >That would do it. This returns the queue on the CF side of the established
              > >Connection.
              > >
              > >>
              > >> Q3> Is there any way to create a Queue dynamically - I guess JMX is the
              > >> answer - right?
              > >> But I will take this question a bit further - let's say the answer to Q1 is yes.
              > >> In this case, if server #1 crashes - then #2 and #3 have no Queues. So if they
              > >> try to create a replica of the Queue (as on server #1) - pointing to the same
              > >> filestore - can they do it?
              > >> - I want only one of them to succeed in creating the Queue, and the Queue
              > >> should also have all the data of the #1 Queue (a 1-to-1 replica).
              > >
              > >No. Not possible. Corruption city.
              > >Only one server may safely access a store at a time.
              > >If you have an HA framework that can ensure this atomicity, fine, or if you are
              > >willing to ensure this manually, then fine.
              > >
              > >>
              > >>
              > >> All I want is the concept of a primary and secondary queue in a cluster.
              > >> Go on using the primary queue - but if it fails, use the secondary queue. Kind
              > >> of like the HttpSession replication concept in clusters. My cluster purpose is
              > >> more for failover rather than load balancing.
              > >
              > >If you use 7.0 you could use a distributed destination, with a high weight
              > >on the destination you want used most. Optionally, 7.0 will automatically
              > >forward messages from distributed destination members that have no consumers
              > >to those that do.
              > >
              > >In 6.1 you can emulate a distributed destination this way (from an upcoming
              > >white-paper):
              > >
              > >Approximating Distributed Queues in 6.1
              > >
              > >If you wish to distribute the destination across several servers in a cluster,
              > >use the distributed destination features built into WL 7.0. If 7.0 is not an
              > >option, you can still approximate a simple distributed destination when running
              > >JMS servers in a "single-tier" configuration. Single-tier indicates that there
              > >is a local JMS server on each server that a connection factory is targeted at.
              > >Here is a typical scenario, where producers randomly pick which server and
              > >consequently which part of the distributed destination to produce to, while
              > >consumers in the form of MDBs are pinned to a particular destination and are
              > >replicated homogeneously to all destinations:
              > >
              > >· Create JMS servers on multiple servers in the cluster. The servers will
              > >collectively host the distributed queue "A". Remember, the JMS servers (and WL
              > >servers) must be named differently.
              > >
              > >· Configure a queue on each JMS server. These become the physical destinations
              > >that collectively become the distributed destination. Each destination should
              > >have the same name "A".
              > >
              > >· Configure each queue to have the same JNDI name "JNDI_A", and also take care
              > >to set the destination's "JNDINameReplicated" parameter to false. The
              > >"JNDINameReplicated" parameter is available in 7.0, 6.1SP3 or later, or 6.1SP2
              > >with patch CR061106.
              > >
              > >· Create a connection factory, and target it at all servers that have a JMS
              > >server with "A".
              > >
              > >· Target the same MDB pool at each server that has a JMS server with destination
              > >"A", and configure its destination to be "JNDI_A". Do not specify a connection
              > >factory URL when configuring the MDB, as it can use the server's default JNDI
              > >context that already contains the destination.
              > >
              > >· Producers look up the connection factory, create a connection, then a session
              > >as usual. Then producers look up the destination by calling
              > >javax.jms.QueueSession.createQueue(String). The parameter to createQueue
              > >requires a special syntax: "./<queue name>", so "./A" works in this example.
              > >This will return a physical destination of the distributed destination that is
              > >local to the producer's connection. This syntax is available on 7.0, 6.1SP3 or
              > >later, and 6.1SP2 with patch CR072612.
              > >
              > >This design pattern allows for high availability: if one server goes down, the
              > >distributed destination is still available and only the messages on that one
              > >server become unavailable. It also allows for high scalability, as speedup is
              > >directly proportional to the number of servers on which the distributed
              > >destination is deployed.
              > >
              > >
              > >
              > >>
              > >> TIA
              > >> Anamitra
              > >
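
              For reference, here is a minimal producer sketch of the createQueue("./<queue name>") technique described in the excerpt above. The queue name "A" follows the excerpt; the connection factory JNDI name "MyCF" is an assumption used only for illustration:

              import javax.jms.*;
              import javax.naming.InitialContext;

              public class LocalMemberProducer {
                  public static void main(String[] args) throws Exception {
                      InitialContext ctx = new InitialContext();
                      QueueConnectionFactory cf = (QueueConnectionFactory) ctx.lookup("MyCF"); // assumed JNDI name
                      QueueConnection con = cf.createQueueConnection();
                      QueueSession session = con.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);

                      // "./A" resolves to the member of the distributed queue that is local to this
                      // connection (7.0, 6.1SP3 or later, or 6.1SP2 with patch CR072612).
                      Queue localMember = session.createQueue("./A");

                      QueueSender sender = session.createSender(localMember);
                      sender.send(session.createTextMessage("hello"));

                      sender.close();
                      session.close();
                      con.close();
                  }
              }

              The MDB consumers in this pattern need no such lookup, since each MDB pool is deployed against its local physical destination via "JNDI_A".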
              

  • PAMS__RESRCFAIL on pams_cancel_timer only with message on the queue

    I'm using Oracle MessageQ on RedHat 6.2 as we are migrating from OpenVMS to Linux. It seemed to work pretty well until now; we have discovered the following:
    When our program calls pams_cancel_timer we get a return status of PAMS__RESRCFAIL, but only if there is at least one message on the primary queue of the process. When the queue is empty, the call returns success.
    In a different thread I read that this has something to do with memory allocation going wrong, but can somebody tell me which parameter I should increase to avoid the occurrence of this problem?

    An update for this problem:
    We executed exactly the same program with exactly the same DMQ group configuration on Ubuntu with the same MessageQ version. There we do not get the PAMS__RESRCFAIL return status from pams_cancel_timer. We draw the conclusion that something in RedHat must be bothering us. Can somebody tell us what that might be?
    We tried two flavours of RHEL, 5.5 i386 and 6.3 x86_64, and both have the same problem.

  • Memory Leak, Receiver Got Null Message & Consumer limit exceeded on destination

    When running a program that adds an Object message to a JMS queue and then receives it, I get the following:
    1) intermittent NULL messages received.
    2) jms.JMSException: [C4073]: Consumer limit exceeded on destination interactionQueueDest, even though only one receiver can be receiving via the supplied program.
    3) After many messages (1000s) are added to the queue, the Message Queue broker gets an Out Of Memory exception. It should swap to disk!!
    STEPS TO FOLLOW TO REPRODUCE THE PROBLEM :
    RUN this program via a JSP call in the application server.
    JSP
    <%@ page language="java" import="jms.*"%>
    <html>
    <head>
    <title>Leak Memory</title>
    </head>
    <body>
    <hr/>
    <h1>Leak Memory</h1>
    <%
       LeakMemory leakMemory = new LeakMemory();
       leakMemory.runTest(10000, 1000);
       // NOTE: will still break, but more slowly, with leakMemory.runTest(10000, 100);
    %>
    </body>
    </html>
    The JMS resources
    jms/queueConnectionFactory
    jms/interactionQueue
    must be created first.
    Class:
    package jms;

    import javax.naming.*;
    import javax.jms.*;

    public class LeakMemory implements Runnable {
      private QueueConnectionFactory queueConnectionFactory = null;
      private Queue interactionQueue = null;
      private boolean receiverRun = true;
      private QueueConnection queueConnection;
      private int totalMessageInQueue = 0;

      public LeakMemory() {
        init();
      }

      /**
       * Initialize the queue objects from JNDI.
       */
      public void init() {
        try {
          InitialContext context = new InitialContext();
          this.queueConnectionFactory = (QueueConnectionFactory) context.lookup("jms/queueConnectionFactory");
          this.interactionQueue = (Queue) context.lookup("jms/interactionQueue");
        } catch (NamingException ex) {
          printerError(ex);
        }
      }

      public void runTest(int messageCount, int messageSize) {
        this.receiverRun = true;
        Thread receiverThread = new Thread(this);
        receiverThread.start();
        for (int i = 0; i < messageCount; i++) {
          StringBuffer messageToSend = new StringBuffer();
          for (int ii = 0; ii < messageSize; ii++) {
            messageToSend.append("0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789\n");
          }
          QueueSession queueInteractionSession = null;
          QueueSender interactionQueueSender = null;
          try {
            // Get a queue connection
            QueueConnection queueConnectionAdder = this.getQueueConnection();
            queueInteractionSession = queueConnectionAdder.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            interactionQueueSender = queueInteractionSession.createSender(this.interactionQueue);
            ObjectMessage objectMessage = queueInteractionSession.createObjectMessage(messageToSend);
            objectMessage.setStringProperty("PROPERTY", "" + System.currentTimeMillis());
            // Send object
            interactionQueueSender.send(objectMessage, DeliveryMode.PERSISTENT, 5, 0);
            totalMessageInQueue++;
            // Close resources
            interactionQueueSender.close();
            queueInteractionSession.close();
          } catch (JMSException ex) {
            printerError(ex);
          }
        }
      }

      /**
       * Receiver loop: polls the queue until receiverRun is set to false.
       */
      public void run() {
        while (this.receiverRun) {
          try {
            QueueSession interactionQueueSession = this.getQueueConnection().createQueueSession(false, Session.CLIENT_ACKNOWLEDGE);
            QueueReceiver queueReceiver = interactionQueueSession.createReceiver(this.interactionQueue);
            ObjectMessage message = (ObjectMessage) queueReceiver.receive(100);
            if (message != null) {
              StringBuffer messageReceived = (StringBuffer) message.getObject();
              // Simulate doing something with the message
              synchronized (this) {
                try {
                  Thread.sleep(400);
                } catch (InterruptedException ex1) {
                  // Can safely be ignored
                }
              }
              message.acknowledge();
              totalMessageInQueue--;
            } else {
              printerError(new Exception("Receiver Got Null Message"));
            }
            queueReceiver.close();
            interactionQueueSession.close();
          } catch (JMSException ex) {
            printerError(ex);
          }
        }
      }

      /**
       * Gets the shared queue connection, creating and starting it on first use.
       * @return QueueConnection the queueConnection
       */
      public synchronized QueueConnection getQueueConnection() {
        if (this.queueConnection == null) {
          try {
            this.queueConnection = this.queueConnectionFactory.createQueueConnection();
            this.queueConnection.start();
          } catch (JMSException ex) {
            printerError(ex);
          }
        }
        return this.queueConnection;
      }

      private void printerError(Exception ex) {
        System.err.print("ERROR Exception totalMessageInQueue = " + this.totalMessageInQueue + "\n");
        ex.printStackTrace();
      }
    }

    Is there something wrong with the way I'm working with JMS, or is it just this unreliable in Sun App Server 7 Update 3 on Windows?

    1) intermittent NULL messages received.
    Thanks, that explains the behavior. It was weird getting null messages when I knew there were messages in the queue.
    2) jms.JMSException: [C4073]: Consumer limit exceeded on destination interactionQueueDest, even though only one receiver can be receiving via the supplied program.
    No other instances, only this program. Try it yourself!! It works every time on Sun Application Server 7 update 2 & 3.
    Here's the broker dump at that error point:
    [14/Apr/2004:12:51:47 BST] [B1065]: Accepting: [email protected]:3211->admin:3205. Count=1
    [14/Apr/2004:12:51:47 BST] [B1066]:   Closing: [email protected]:3211->admin:3205. Count=0
    [14/Apr/2004:12:52:20 BST] [B1065]: Accepting: [email protected]:3231->jms:3204. Count=1
    [14/Apr/2004:12:53:31 BST] WARNING [B2009]: Creation of consumer from connection [email protected]:3231 on destination interactionQueueDest failed:
    B4006: com.sun.messaging.jmq.jmsserver.util.BrokerException: [B4006]: Unable to attach to queue queue:single:interactionQueueDest: a primary queue is already active
    3) After many messages (1000s) are added to the queue, the Message Queue broker gets an Out Of Memory exception. It should swap to disk!!
    The broker runs out of memory. Version in use:
    Sun ONE Message Queue                   Copyright 2002
    Version:  3.0.1 SP2 (Build 4-a)              Sun Microsystems, Inc.
    Compile:  Fri 07/11/2003         All Rights Reserved
    Out of memory snippet:
    [14/Apr/2004:13:08:28 BST] [B1089]: In low memory condition, Broker is attempting to free up resources
    [14/Apr/2004:13:08:28 BST] [B1088]: Entering Memory State [B0022]: YELLOW from previous state [B0021]: GREEN  - current memory is 118657K, 60% of total memory
    [14/Apr/2004:13:08:38 BST] WARNING [B2075]: Broker ran out of memory before the passed in VM maximum (-Xmx) 201326592 b, lowering max to currently allocated memory (200431976 b) and trying to recover
    [14/Apr/2004:13:08:38 BST] [B1089]: In low memory condition, Broker is attempting to free up resources
    [14/Apr/2004:13:08:38 BST] [B1088]: Entering Memory State [B0024]: RED from previous state [B0022]: YELLOW - current memory is 128796K, 99% of total memory
    [14/Apr/2004:13:08:38 BST] ERROR [B3008]: Message 2073-192.168.0.50(80:d:b6:c4:d6:73)-3319-1081944517772 exists in the store already
    [14/Apr/2004:13:08:38 BST] WARNING [B2011]: Storing of JMS message from IMQConn[AUTHENTICATED,[email protected]:3319,jms:3282] failed:
    com.sun.messaging.jmq.jmsserver.util.BrokerException: Message 2073-192.168.0.50(80:d:b6:c4:d6:73)-3319-1081944517772 exists in the store already
    [14/Apr/2004:13:08:38 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:38 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:39 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:39 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:39 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:39 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:40 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:40 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:40 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:40 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:41 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:42 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:42 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:42 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:42 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:43 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:43 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:43 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:43 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:44 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:44 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:44 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:45 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:45 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:46 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:46 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:47 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:47 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:47 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:47 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:48 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:49 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:49 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:49 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:49 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:50 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:50 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:50 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:50 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:51 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:51 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:51 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:51 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:52 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:52 BST] WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
    [14/Apr/2004:13:08:53 BST] ERROR [B3107]: Attempt to free memory failed, taking more drastic measures : java.lang.OutOfMemoryError
    [14/Apr/2004:13:08:53 BST] ERROR unable to deal w/ error: 1
    [14/Apr/2004:13:08:53 BST] ERROR TRYING TO CLOSE [14/Apr/2004:13:08:53 BST] ERROR DONE CLOSING
    [14/Apr/2004:13:08:53 BST] [B1066]:   Closing: [email protected]:3319->jms:3282. Count=0
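
    As an editorial aside (not part of the original program): given the broker's 'single' delivery policy on interactionQueueDest, a receiver that attaches one consumer once and reuses it avoids the repeated create/close of sessions and receivers on every poll. A minimal sketch, reusing the JNDI names from the program above:

    import javax.jms.*;
    import javax.naming.InitialContext;

    public class SingleReceiverLoop {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            QueueConnectionFactory cf = (QueueConnectionFactory) ctx.lookup("jms/queueConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/interactionQueue");

            QueueConnection connection = cf.createQueueConnection();
            connection.start();
            QueueSession session = connection.createQueueSession(false, Session.CLIENT_ACKNOWLEDGE);
            QueueReceiver receiver = session.createReceiver(queue); // attach exactly one consumer

            while (true) {
                // A null return here simply means nothing arrived within the timeout.
                ObjectMessage message = (ObjectMessage) receiver.receive(1000);
                if (message != null) {
                    message.acknowledge();
                }
            }
        }
    }

    Whether this also avoids the broker's memory growth is a separate question; if persistent messages pile up faster than they are consumed, the broker can still run out of memory, as the log above shows.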

  • Best Practice: Setting up Agents for cross-training

    The post that sparked this topic:
    http://forum.cisco.com/eforum/servlet/NetProf?page=netprof&forum=Unified%20Communications%20and%20Video&topic=Contact%20Center&topicID=.ee6fe12&fromOutline=&CommCmd=MB%3Fcmd%3Ddisplay_location%26location%3D.2cc2d609
    My contribution to this topic:
    The Scenario:
    Agent2 is a primary resource for Q2, which takes a lot of calls. At any given time there are always at least 5 calls in queue. Agent1 is a primary resource for Q1, which takes fewer calls than Q2, and rarely has calls waiting in queue. Agent1 is special, because he/she is cross-trained in Q2 and helps out when needed. Agent1 should never take a call for Q2 if a call for Q1 is waiting; regardless of how long the caller in Q2 has been waiting.
    The Problem:
    CSQs select their resources independently of what is going on in other CSQs. They only look at their own available resource pool. If a resource is available, that resource becomes the selected resource to handle the current contact; regardless of that resource's other CSQ associations.
    Agent1 runs the risk of helping Q2 callers who have been waiting longer than Q1 callers, even though he/she should be primarily helping Q1 callers.
    The Setup:
    Agents
    Agent1 (Skills: Q1 [8]; Q2 [4])
    Agent2 (Skills: Q2 [8])
    Skills
    Q1
    Q2
    CSQs
    Q1_t1 (Most Skilled; Skill Q1 - 6 and above)
    Q1_t2 (Most Skilled; Skill Q1 - 1 and above)
    Q2_t1 (Most Skilled; Skill Q2 - 6 and above)
    Q2_t2 (Most Skilled; Skill Q2 - 1 and above)
    The Solution:
    You create a tiered structure out of your CSQs.
    Instead of having 10 levels of skill to choose from, you have 5. You can think of this like a 5 star rating for your agents.
    We take advantage of the fact that scripts are interruptible, and at any time during a queue loop an agent becomes available, they will be placed into reserved state immediately.
    We also take advantage of the fact that, if a resource is Ready in a second tier queue, then we know that there are no callers waiting in their primary queue. Otherwise, the resource would be reserved, talking, or not ready.
    In your Q2 script, select from Q2_t1 first.
    If queued and if Get Reporting Statistics shows > 0 resources Ready in Q2_t2, then select from Q2_t2. Dequeue if queued or if a Connect step failure occurs.
    This creates a situation where Agent1, who is skilled in both CSQs, empties his/her primary queue (Q1_t1) before ever taking a call from his/her secondary queue (Q2_t2). If no calls are waiting in Q1, then he/she is still eligible to help out Q2.
    Possible Problems:
    1. There would be a change in the way you look at reporting.
    2. There are now two CSQs, because you cannot change the skill criteria in a script.
    3. In a rare instance the secondary script could get the report stats, see 1 resource ready, and right as it executes the select resource step, the primary script executes its own select resource step. Agent1 is now talking to a secondary contact, and his/her primary contact has to wait.
    The likelihood of this happening increases as the number of callers waiting in Q2 increases.
    Conclusion:
    What are some of your thoughts on this topic?
    How have you solved cross-training previously?
    What would you add, subtract, or modify from my proposed solution?

    Hi Anthony,
    I just found your post about cross-training and I can only say it is great!
    Actually it is really close to the behaviour I have to implement for a customer:
    - A 2-level helpdesk: level 1 takes all the calls, level 2 takes the calls that level 1 could not solve,
    - Agents of level 2 can help those of level 1 if they are available (or if the number of calls in queue is too high; that point needs to be decided),
    - Level 1 is a team of Agents,
    - Level 2 is divided into 2 Agent teams, each one dedicated to a specific kind of incident.
    What I planned is the following (I reused your naming and presentation to explain it):
    Agents
    For level 1: Agent1 to Agent20 (Skills: S1 [8])
    For level 2 team 1: Agent21 to Agent30 (Skills: S1 [4]; S2 [8])
    For level 2 team 2: Agent31 to Agent40 (Skills: S1 [4]; S3 [8])
    Skills
    S1
    S2
    S3
    CSQs
    Q1_t1 (Most Skilled; Skill S1 - 6 and above)
    Q1_t2 (Most Skilled; Skill S1 - 1 and above)
    Q2 (Most Skilled; Skill S2 - 6 and above)
    Q3 (Most Skilled; Skill S3 - 6 and above)
    In the first script
    Select resources from Q1_t1 first.
    If queued and if Get Reporting Statistics shows > 0 resources Ready in Q1_t2, then select from Q1_t2. Dequeue if queued or if a Connect step failure occurs.
    When Agent1 to Agent20 answer a call and cannot solve the issue, they transfer the call to the script of Q2 or Q3, depending on the kind of issue.
    In the second script
    There is a single script for queues Q2 and Q3: it is executed differently using a "name of queue" parameter.
    Select resources from Q2/Q3.
    Do you think it would be the best way to answer the need?
    Also, I have understood that the Dequeue step is used for statistics (it removes a call from the statistics of a queue): is that correct, or is there another use here?
    Many thanks for your answer!
    Julien

  • JMQ Resource in conflict on multiple KJS's via IAS 6.5

    Hello,
    As part of running our app under IAS 6.5 Solaris, we connect to a queue using JMQ. However, since we run 2 KJS's on our machine, we're actually connecting twice - once for each KJS. When the 2nd KJS comes up, we get the following exception:
    javax.jms.JMSException: [C4055]: Resource in conflict
    at com.sun.messaging.jmq.jmsclient.ProtocolHandler.addInterest(ProtocolHandler.java:1305)
    at com.sun.messaging.jmq.jmsclient.WriteChannel.addInterest(WriteChannel.java:37)
    at com.sun.messaging.jmq.jmsclient.ConnectionImpl.addInterest(ConnectionImpl.java:572)
    at com.sun.messaging.jmq.jmsclient.Consumer.registerInterest(Consumer.java:90)
    at com.sun.messaging.jmq.jmsclient.MessageConsumerImpl.addInterest(MessageConsumerImpl.java:126)
    at com.sun.messaging.jmq.jmsclient.MessageConsumerImpl.init(MessageConsumerImpl.java:121)
    at com.sun.messaging.jmq.jmsclient.QueueReceiverImpl.<init>(QueueReceiverImpl.java:40)
    at com.sun.messaging.jmq.jmsclient.QueueSessionImpl.createReceiver(QueueSessionImpl.java:82)
    The JMQ server that is running gives the following message simultaneously:
    [25/Apr/2002:17:14:03 EDT] WARNING [B2009]: Creation of consumer lstCreateLoanApp to destination 1 failed:
    B4006: com.sun.messaging.jmq.jmsserver.util.BrokerException: [B4006]: Unable to attach to queue queue:lstCreateLoanApp: a primary queue is already active
    In researching the problem, we found that you must manually set QueueConnection.setClientID() to a unique number. We therefore did the following:
                   String newClientId = String.valueOf(Math.random());
                   queueConnection.setClientID(newClientId);
                   cat.debug("Set QueueConnection clientID to [" + newClientId + "]");
    That is successfully resetting the client ID and prints out stuff that looks like this:
                   Set QueueConnection clientID to [0.8012784079113983]
    However, even after resetting the client ID to a random number on each QueueConnection, we still get the Resource in conflict problem.
    Is our thought that the client ID needs to be reset to a unique number for each QueueConnection valid? If so, does the code above do the job correctly? If not, how do we get around this?
    Thanks in advance,
    Mike

    I don't know if you guys are still wrestling with this, but I just wanted to let you know that I have the same configuration - iPlanet App Server 6.5 with 2 KJS's integrated with JMQ 2.0 beta - and I don't have this problem at all. From the information given, I can't begin to guess what the differences are.
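
    One further note, offered as a general JMS point rather than a confirmed fix for this case: the JMS specification requires that setClientID be called immediately after the connection is created, before the connection is started or any session is created, otherwise the call may throw IllegalStateException. The client ID mainly matters for durable topic subscriptions, so whether it influences JMQ's queue consumer limit at all is broker-specific. A minimal sketch of the required ordering (the connection factory JNDI name is a placeholder):

    import javax.jms.*;
    import javax.naming.InitialContext;

    public class ClientIdOrdering {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            QueueConnectionFactory cf = (QueueConnectionFactory) ctx.lookup("QueueConnectionFactory"); // placeholder

            QueueConnection queueConnection = cf.createQueueConnection();
            // Set the client ID before any other action on the connection
            // (starting it, creating sessions, creating consumers, ...).
            queueConnection.setClientID("kjs-" + System.currentTimeMillis());

            queueConnection.start();
            QueueSession session = queueConnection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            // ... create receivers as usual ...
            session.close();
            queueConnection.close();
        }
    }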

  • MessageQ keep-alive. Info about client disconnect

    Hi,
    I'm using BEA MessageQ to receive messages from clients, but I need to know when a client is disconnected. Is there any keep-alive mechanism that tells me if a client has disconnected? Or maybe there is a message I can use to check whether a client is connected?

    Hi,
    I asked one of the developers and got this response:
    Actually, OMQ has no predefined keep-alive mechanism. OMQ always accepts requests from clients passively; it cannot check client connectivity automatically.
    OMQ can, however, provide information about the status of the queue that a client attaches to. If you can accept the hypothesis that a client is connected whenever its primary queue is attached, then one possible solution is to create an application that registers with the local Avail Server process to receive availability (attached/detached) messages for the target queue. If the queue is available/attached, the corresponding client is connected; if the queue is unavailable/detached, the corresponding client is disconnected.
    Regards,
    Todd Little
    Oracle Tuxedo Chief  Architect

  • Cannot connect to Integration server

    Hi,
    I am trying to connect IBM MQ Series and ESB. As mentioned in the reference document, I have updated everything:
    A). Add the com.ibm.mq.jar to the MQSeries Adapter Classpath
    The steps in this section should be performed only once, before using the MQSeries adapter. Perform the following steps to add the com.ibm.mq.jar to the classpath for the MQSeries adapter:
    1. Copy the com.ibm.mq.jar file to the any folder.
    2. Create a new shared library in the server.xml file by specifying the path of the
    com.ibm.mq.jar as shown in the following example:
    <shared-library name="oracle.mqseries" version="10.1.3">
    <code-source path="C:\ORAHOME\bpel/lib/com.ibm.mq.jar"/>
    </shared-library>
    3. Modify the oc4j-ra.xml file to include the new shared library as shown in the
    following example.
    <imported-shared-libraries>
    <import-shared-library name="oracle.bpel.common"/>
    <import-shared-library name="oracle.xml"/>
    <import-shared-library name="oracle.mqseries"/>
    </imported-shared-libraries>
    4. Restart the server.
    B). Modify the oc4j-ra.xml File
    Specify the value of the following parameters in the oc4j-ra.xml file:
    ■ hostName: The name of the computer on which the IBM Websphere MQ server is
    running.
    ■ portNumber: The port number for connecting to the IBM Websphere MQ server.
    ■ queueManagerName: The name of the primary queue manager.
    ■ channelName: The name of channel.
    ■ userID: The user ID if connecting to IBM Websphere MQ server. running on a
    remote location.
    ■ password: The password corresponding to the user ID.
    ■ clientEncoding: This parameter is required if you are encoding the message
    header.
    ■ hostOSType: This parameter should be specified if the Websphere server is
    running on a zSeries/Operating System (z/OS). The value should be zos.
    I have a question here. The first step says "Copy the com.ibm.mq.jar file to the any folder". What does this actually mean? Can I copy this file to any location in the system? If so, what is the path I need to mention in the second step, <code-source path="C:\ORAHOME\bpel/lib/com.ibm.mq.jar"/>?
    And since I am trying to connect ESB and IBM MQ, why do I need to give the path as
    "C:\ORAHOME\bpel/lib/com.ibm.mq.jar"?
    Anyway, I did all that is mentioned above, but I couldn't start the server, and I am not getting any error.
    Can anyone please help me on this.
    Thanks
    Ramana.

    Yes -- I use 7777 as the default port and it is up and running. I also tried the HTTP2 port but got the same result.
    Oracle HTTP Server : HTTP_Server
    HTTPS1 443
    HTTP2 7200
    HTTP1 7777
    The application connection passed testing -- I use an oc4j connection for that. Following are my configurations:
    App Server Connection
    type:Stand OC4J 10g 10.1.3
    host:localhost
    RMI Port:12401
    Test: OK
    Integration Server Connection
    port: 7777
    protocol: HTTP
    I can open BPEL manager at http://localhost:7777/BPELConsole
    Anything else I need to configure? really lost...

  • SCCM 2012 - Incoming Message queue status showing 104074 and Site link failed from CAS to Primary

    Hi,
    dbo.configmgrDRSQueue is automatically stopping, and when we try to enable it, the queue only decreases slowly, by 4-5 at a time. We would like to clear the queue in one shot to re-initiate the sync between the CAS and the affected primary site server, so please help with this.

    Hi,
    Have you tried to use Replication Link Analyzer to  repair replication issues? Please check the article below.
    Replication Link Analyzer in Configuration Manager 2012
    https://gallery.technet.microsoft.com/Replication-Link-Analyzer-cdbefc49
    Best Regards,
    Joyce
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]

  • Unable to delete messages from queue.

    I have a journaling rule set up to send mail to an archive appliance. I have two messages in my queue that are From Address: <> to my recipient (my archiver's email address). The journal rule works fine since it is working for all my users' mail, but I have two messages stuck in my queue complaining that they could not be categorized, and an event 9213 for MSExchangeTransport every 30 minutes.
    "A non-expirable message with the internal message ID 7996 could not be categorized. This message may be a journal report or other system message. The message will remain in the queue until administrative action is taken to resolve the error. Other messages may also have encountered this error. To further diagnose the error, use the Queue Viewer or the Mail Flow Troubleshooter."
    I have tried via the GUI and Shell to remove these two messages. The GUI gives me an error:
    Microsoft Exchange Error
    Action 'Remove (with NDR)' could not be performed on object 'RE: Test to determine if email can be delivered & Received'.
    RE: Test to determine if email can be delivered & Received
    Failed
    Error:
    The requested operation can't be performed for the object with identity MyMailServerName\Submission\7999.
    OK
    The Shell will actually not report back an error but I still have those messages in the queue. The command to see info about the message was:
    Get-Message -IncludeRecipientInfo | Where { $_.Recipients -Like "*[email protected]*" } | Format-List
    Then I tried deleting the message using the "InternetMessageID" info I gathered from this.
    Remove-Message -Filter {InternetMessageID -eq "[email protected]"} -WithNDR $false
    This prompts me for a Yes, No, etc. response (I thought I had it this time) and gave me no error, but the message still resides in the queue.
    I am at a loss. Any help?

    Correct, two messages are stuck in the queue and cannot be removed.
    The environment is a Windows 2003 R2 w/ SP2 64-bit server running Exchange 2007 SP1 with all roles; AD/DNS is also Windows 2003 R2 w/ SP2 64-bit.
    This just seemed to start happening a few days ago and just with these two particular messages. I have an archive appliance from ArcMail and a journal rule set up in Exchange to journal all mail to the ArcMail appliance. This has been working and continues to work except for these two messages.
    [PS] C:\>Get-Message -IncludeRecipientInfo | Where { $_.Recipients -Like "*[email protected]*" } | Format-List
    Identity          : CO-MAIL1\Submission\7996
    Subject           : RE: Test to determine if email can be delivered & Received
    InternetMessageId : <[email protected]>
    FromAddress       : <>
    Status            : Retry
    Size              : 29638B
    MessageSourceName : Journaling
    SourceIP          : 255.255.255.255
    SCL               : 0
    DateReceived      : 9/18/2008 1:39:34 PM
    ExpirationTime    :
    LastError         : Categorization failed. The message will be deferred and ret
                        ried because it was marked for retry if rejected.
    RetryCount        : 0
    Queue             : CO-MAIL1\Submission
    Recipients        : {[email protected]}
    IsValid           : True
    ObjectState       : Unchanged
    Identity          : CO-MAIL1\Submission\7999
    Subject           : RE: Test to determine if email can be delivered & Received
    InternetMessageId : <[email protected]>
    FromAddress       : <>
    Status            : Retry
    Size              : 29623B
    MessageSourceName : Journaling
    SourceIP          : 255.255.255.255
    SCL               : 0
    DateReceived      : 9/18/2008 1:39:34 PM
    ExpirationTime    :
    LastError         : Categorization failed. The message will be deferred and ret
                        ried because it was marked for retry if rejected.
    RetryCount        : 0
    Queue             : CO-MAIL1\Submission
    Recipients        : {[email protected]}
    IsValid           : True
    ObjectState       : Unchanged
    Domain Controller Diagnosis
    Performing initial setup:
       Done gathering initial info.
    Doing initial required tests
       Testing server: co-site\CO-DNS1
          Starting test: Connectivity
             ......................... CO-DNS1 passed test Connectivity
    Doing primary tests
       Testing server: co-site\CO-DNS1
          Starting test: Replications
             ......................... CO-DNS1 passed test Replications
          Starting test: NCSecDesc
             ......................... CO-DNS1 passed test NCSecDesc
          Starting test: NetLogons
             ......................... CO-DNS1 passed test NetLogons
          Starting test: Advertising
             ......................... CO-DNS1 passed test Advertising
          Starting test: KnowsOfRoleHolders
             ......................... CO-DNS1 passed test KnowsOfRoleHolders
          Starting test: RidManager
             ......................... CO-DNS1 passed test RidManager
          Starting test: MachineAccount
             ......................... CO-DNS1 passed test MachineAccount
          Starting test: Services
             ......................... CO-DNS1 passed test Services
          Starting test: ObjectsReplicated
             ......................... CO-DNS1 passed test ObjectsReplicated
          Starting test: frssysvol
             ......................... CO-DNS1 passed test frssysvol
          Starting test: frsevent
             ......................... CO-DNS1 passed test frsevent
          Starting test: kccevent
             ......................... CO-DNS1 passed test kccevent
          Starting test: systemlog
             ......................... CO-DNS1 passed test systemlog
          Starting test: VerifyReferences
             ......................... CO-DNS1 passed test VerifyReferences
       Running partition tests on : ForestDnsZones
          Starting test: CrossRefValidation
             ......................... ForestDnsZones passed test CrossRefValidation
          Starting test: CheckSDRefDom
             ......................... ForestDnsZones passed test CheckSDRefDom
       Running partition tests on : DomainDnsZones
          Starting test: CrossRefValidation
             ......................... DomainDnsZones passed test CrossRefValidation
          Starting test: CheckSDRefDom
             ......................... DomainDnsZones passed test CheckSDRefDom
       Running partition tests on : Schema
          Starting test: CrossRefValidation
             ......................... Schema passed test CrossRefValidation
          Starting test: CheckSDRefDom
             ......................... Schema passed test CheckSDRefDom
       Running partition tests on : Configuration
          Starting test: CrossRefValidation
             ......................... Configuration passed test CrossRefValidation
          Starting test: CheckSDRefDom
             ......................... Configuration passed test CheckSDRefDom
       Running partition tests on : lcsd
          Starting test: CrossRefValidation
             ......................... lcsd passed test CrossRefValidation
          Starting test: CheckSDRefDom
             ......................... lcsd passed test CheckSDRefDom
       Running enterprise tests on : lcsd.local
          Starting test: Intersite
             ......................... lcsd.local passed test Intersite
          Starting test: FsmoCheck
             ......................... lcsd.local passed test FsmoCheck
    [PS] C:\>Remove-Message -filter {FromAddress -eq "<>"} -withNDR $false -debug -verbose
    VERBOSE: Remove-Message : Beginning processing.
    Confirm
    Are you sure you want to perform this action?
    Removing the messages that match filter "FromAddress -eq "<>"".
    Yes  Yes to All  No  [L] No to All  Suspend  [?] Help
    (default is "Y"):Y
    VERBOSE: Remove-Message : Ending processing.
    [PS] C:\>Remove-Message -Filter {InternetMessageID -eq "[email protected]"} -WithNDR $false -debug -verbose
    VERBOSE: Remove-Message : Beginning processing.
    Confirm
    Are you sure you want to perform this action?
    Removing the messages that match filter "InternetMessageID -eq
    "[email protected]"".
    Yes  Yes to All  No  [L] No to All  Suspend  [?] Help
    (default is "Y"):Y
    VERBOSE: Remove-Message : Ending processing.
    [PS] C:\>Remove-Message -filter {FromAddress -eq ""} -withNDR $false -debug -ver
    bose
    VERBOSE: Remove-Message : Beginning processing.
    Confirm
    Are you sure you want to perform this action?
    Removing the messages that match filter "FromAddress -eq """.
    Yes  Yes to All  No  [L] No to All  Suspend  [?] Help
    (default is "Y"):Y
    VERBOSE: Remove-Message : Ending processing.
    [PS] C:\>
    Tried several filter options and all had the same result: no events in the event viewer, and the stuck messages were not removed.

  • ORA-16191: Primary log shipping client not logged on standby.

    Hi,
    Please help me with the following scenario. I have two nodes, ASM1 & ASM2, with RHEL4 U5 as the OS. On node ASM1 there is a database ORCL using ASM diskgroups DATA & RECOVER, and the archive location is '+RECOVER/orcl/'. On node ASM2, I have to configure the STDBYORCL (standby) database using ASM. I have taken a copy of database ORCL via RMAN, as per the maximum availability architecture.
    Then I ftp'd everything to ASM2 and put it on the filesystem /u01/oradata. I made all the necessary changes in the primary and standby database pfiles and then performed the duplicate database for standby using RMAN in order to put the db files in the desired diskgroups. I have mounted the standby database, but unfortunately the log transport service is not working and archives are not getting shipped to the standby host.
    Here are all configuration details.
    Primary database ORCL pfile:
    [oracle@asm dbs]$ more initorcl.ora
    stdbyorcl.__db_cache_size=251658240
    orcl.__db_cache_size=226492416
    stdbyorcl.__java_pool_size=4194304
    orcl.__java_pool_size=4194304
    stdbyorcl.__large_pool_size=4194304
    orcl.__large_pool_size=4194304
    stdbyorcl.__shared_pool_size=100663296
    orcl.__shared_pool_size=125829120
    stdbyorcl.__streams_pool_size=0
    orcl.__streams_pool_size=0
    *.audit_file_dest='/opt/oracle/admin/orcl/adump'
    *.background_dump_dest='/opt/oracle/admin/orcl/bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='+DATA/orcl/controlfile/current.270.665007729','+RECOVER/orcl/controlfile/current.262.665007731'
    *.core_dump_dest='/opt/oracle/admin/orcl/cdump'
    *.db_block_size=8192
    *.db_create_file_dest='+DATA'
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='orcl'
    *.db_recovery_file_dest='+RECOVER'
    *.db_recovery_file_dest_size=3163553792
    *.db_unique_name=orcl
    *.fal_client=orcl
    *.fal_server=stdbyorcl
    *.instance_name='orcl'
    *.job_queue_processes=10
    *.log_archive_config='dg_config=(orcl,stdbyorcl)'
    *.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
    *.log_archive_dest_2='SERVICE=stdbyorcl'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_format='%t_%s_%r.dbf'
    *.open_cursors=300
    *.pga_aggregate_target=121634816
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=364904448
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS'
    *.user_dump_dest='/opt/oracle/admin/orcl/udump'
    Standby database STDBYORCL pfile:
    [oracle@asm2 dbs]$ more initstdbyorcl.ora
    stdbyorcl.__db_cache_size=251658240
    stdbyorcl.__java_pool_size=4194304
    stdbyorcl.__large_pool_size=4194304
    stdbyorcl.__shared_pool_size=100663296
    stdbyorcl.__streams_pool_size=0
    *.audit_file_dest='/opt/oracle/admin/stdbyorcl/adump'
    *.background_dump_dest='/opt/oracle/admin/stdbyorcl/bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='u01/oradata/stdbyorcl_control01.ctl'#Restore Controlfile
    *.core_dump_dest='/opt/oracle/admin/stdbyorcl/cdump'
    *.db_block_size=8192
    *.db_create_file_dest='/u01/oradata'
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='orcl'
    *.db_recovery_file_dest='+RECOVER'
    *.db_recovery_file_dest_size=3163553792
    *.db_unique_name=stdbyorcl
    *.fal_client=stdbyorcl
    *.fal_server=orcl
    *.instance_name='stdbyorcl'
    *.job_queue_processes=10
    *.log_archive_config='dg_config=(orcl,stdbyorcl)'
    *.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
    *.log_archive_dest_2='SERVICE=orcl'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_format='%t_%s_%r.dbf'
    *.log_archive_start=TRUE
    *.open_cursors=300
    *.pga_aggregate_target=121634816
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=364904448
    *.standby_archive_dest='LOCATION=USE_DB_RECOVERY_FILE_DEST'
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS'
    *.user_dump_dest='/opt/oracle/admin/stdbyorcl/udump'
    db_file_name_convert=('+DATA/ORCL/DATAFILE','/u01/oradata','+RECOVER/ORCL/DATAFILE','/u01/oradata')
    log_file_name_convert=('+DATA/ORCL/ONLINELOG','/u01/oradata','+RECOVER/ORCL/ONLINELOG','/u01/oradata')
    I have configured the TNS service on both hosts and it is working fine.
    ASM1
    =====
    [oracle@asm dbs]$ tnsping stdbyorcl
    TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 19-SEP-2008 18:49:00
    Copyright (c) 1997, 2005, Oracle. All rights reserved.
    Used parameter files:
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.20.20)(PORT = 1521))) (CONNECT_DATA = (SID = stdbyorcl) (SERVER = DEDICATED)))
    OK (30 msec)
    ASM2
    =====
    [oracle@asm2 archive]$ tnsping orcl
    TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 19-SEP-2008 18:48:39
    Copyright (c) 1997, 2005, Oracle. All rights reserved.
    Used parameter files:
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.20.10)(PORT = 1521))) (CONNECT_DATA = (SID = orcl) (SERVER = DEDICATED)))
    OK (30 msec)
    Please guide me on where I am going wrong. Thanking you in anticipation.
    Regards,
    Ravish Garg

    The following are the errors I am receiving, as per the alert logs.
    ORCL alert log:
    Thu Sep 25 17:49:14 2008
    ARCH: Possible network disconnect with primary database
    Thu Sep 25 17:49:14 2008
    Error 1031 received logging on to the standby
    Thu Sep 25 17:49:14 2008
    Errors in file /opt/oracle/admin/orcl/bdump/orcl_arc1_4825.trc:
    ORA-01031: insufficient privileges
    FAL[server, ARC1]: Error 1031 creating remote archivelog file 'STDBYORCL'
    FAL[server, ARC1]: FAL archive failed, see trace file.
    Thu Sep 25 17:49:14 2008
    Errors in file /opt/oracle/admin/orcl/bdump/orcl_arc1_4825.trc:
    ORA-16055: FAL request rejected
    ARCH: FAL archive failed. Archiver continuing
    Thu Sep 25 17:49:14 2008
    ORACLE Instance orcl - Archival Error. Archiver continuing.
    Thu Sep 25 17:49:44 2008
    FAL[server]: Fail to queue the whole FAL gap
    GAP - thread 1 sequence 40-40
    DBID 1192788465 branch 665007733
    Thu Sep 25 17:49:46 2008
    Thread 1 advanced to log sequence 48
    Current log# 2 seq# 48 mem# 0: +DATA/orcl/onlinelog/group_2.272.665007735
    Current log# 2 seq# 48 mem# 1: +RECOVER/orcl/onlinelog/group_2.264.665007737
    Thu Sep 25 17:55:43 2008
    Shutting down archive processes
    Thu Sep 25 17:55:48 2008
    ARCH shutting down
    ARC2: Archival stopped
    STDBYORCL alert log:
    ==============
    Thu Sep 25 17:49:27 2008
    Errors in file /opt/oracle/admin/stdbyorcl/bdump/stdbyorcl_arc0_4813.trc:
    ORA-01017: invalid username/password; logon denied
    Thu Sep 25 17:49:27 2008
    Error 1017 received logging on to the standby
    Check that the primary and standby are using a password file
    and remote_login_passwordfile is set to SHARED or EXCLUSIVE,
    and that the SYS password is same in the password files.
    returning error ORA-16191
    It may be necessary to define the DB_ALLOWED_LOGON_VERSION
    initialization parameter to the value "10". Check the
    manual for information on this initialization parameter.
    Thu Sep 25 17:49:27 2008
    Errors in file /opt/oracle/admin/stdbyorcl/bdump/stdbyorcl_arc0_4813.trc:
    ORA-16191: Primary log shipping client not logged on standby
    PING[ARC0]: Heartbeat failed to connect to standby 'orcl'. Error is 16191.
    Thu Sep 25 17:51:38 2008
    FAL[client]: Failed to request gap sequence
    GAP - thread 1 sequence 40-40
    DBID 1192788465 branch 665007733
    FAL[client]: All defined FAL servers have been attempted.
    Check that the CONTROL_FILE_RECORD_KEEP_TIME initialization
    parameter is defined to a value that is sufficiently large
    enough to maintain adequate log switch information to resolve
    archivelog gaps.
    Thu Sep 25 17:55:16 2008
    Errors in file /opt/oracle/admin/stdbyorcl/bdump/stdbyorcl_arc0_4813.trc:
    ORA-01017: invalid username/password; logon denied
    Thu Sep 25 17:55:16 2008
    Error 1017 received logging on to the standby
    Check that the primary and standby are using a password file
    and remote_login_passwordfile is set to SHARED or EXCLUSIVE,
    and that the SYS password is same in the password files.
    returning error ORA-16191
    It may be necessary to define the DB_ALLOWED_LOGON_VERSION
    initialization parameter to the value "10". Check the
    manual for information on this initialization parameter.
    Thu Sep 25 17:55:16 2008
    Errors in file /opt/oracle/admin/stdbyorcl/bdump/stdbyorcl_arc0_4813.trc:
    ORA-16191: Primary log shipping client not logged on standby
    PING[ARC0]: Heartbeat failed to connect to standby 'orcl'. Error is 16191.
    Please suggest where I am going wrong.
    Regards,
    Ravish Garg
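
    The ORA-01017 / ORA-16191 pair in the alert logs points at a SYS password-file mismatch between the two hosts rather than at the Data Guard parameters themselves, which matches the advice printed in the standby trace. A minimal diagnostic sketch, assuming SQL*Plus as SYSDBA on each host; the orapwd file name below is illustrative and not taken from the post:
    -- Run on BOTH hosts; the SYS password must be identical on each side.
    SHOW PARAMETER remote_login_passwordfile;    -- expected: EXCLUSIVE (or SHARED)
    SELECT username, sysdba FROM v$pwfile_users; -- SYS should be listed with SYSDBA = TRUE
    -- If the password files differ, recreate the standby one at the OS prompt
    -- (orapwd is an OS utility, not SQL; adjust the file name to your environment):
    --   orapwd file=$ORACLE_HOME/dbs/orapwstdbyorcl password=<same SYS password as primary> entries=5
    -- After re-enabling log_archive_dest_2 on the primary, confirm the destination is clean:
    SELECT dest_id, status, error FROM v$archive_dest WHERE dest_id = 2;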

  • Render Queue Question for After Effects CS5.5

    What are the best render settings for my laptop, an AMD-300 APU with Radeon HD graphics at 1.30 GHz and 2.00 GB RAM (1.60 GB usable)? It's also 64-bit.
    I am rendering some special effects with CC Mr. Mercury and CC Vector Blur. I tried using just the standard settings and it goes fine until it gets to the CC Mr. Mercury and CC Vector Blur layers, then it slows way down and takes forever to finish.
    In the Render Queue, what settings should I use to make it render faster and still make the effects look good?
    Thanks, Adrian

    FYI, I have a zillion-year-old MacBook (the plastic kind, remember those?) that meets the minimum system requirements of an Intel processor and 2 GB RAM. It's old but runs CS6 just fine as long as I stick to the basics and keep the preview resolution down to about 1/4. I would not try to render production work on that system because it's not up to the task and I'd never expect it to be a production machine. I do use it often when I don't want to drag an expensive machine around with me on my travels.
    If you want a production machine you have to pay for it. I have a fully decked out new MBPro R that I use as my primary AE design machine, but I still go to a desktop system for large projects that have to be produced on a deadline. If I were a hobbyist I'd live with what I could afford. Because I am a professional and I charge for my services, I adjust my rates to pay for the gear that I need to do the job. It's simple economics. Most film makers I know have no idea how to run a business. Most are starving most of the time. Only a few understand that doing what we talk about on this forum is either an expensive hobby or a business. If it's a business you have to learn how to run a business before you learn how to make a movie. If it's a hobby then you have to have the means to support it. It's no different from skiing, biking, or building model airplanes. If you can't afford new gear you have to make do with what you've got.

  • ORA-22913 while creating a QUEUE TABLE of a "Typed type"

    Hi guys:
    I'm trying to recreate [an AskTom post|http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:8760267539329], but with a single difference: my Oracle type contains a field that is another Oracle type, and when I try to create a QUEUE_TABLE I get ORA-22913.
    Here are my steps:
    create or replace TYPE PODTL_TYPE AS OBJECT (
    item varchar2(25),
    ref_item varchar2(25),
    physical_location_type varchar2(1),
    physical_location number(10),
    physical_qty_ordered number(12,4),
    unit_cost number(20,4),
    origin_country_id varchar2(3),
    supp_pack_size number(12,4),
    earliest_ship_date date,
    latest_ship_date date,
    pickup_loc varchar2(250),
    pickup_no varchar2(25),
    packing_method varchar2(6),
    round_lvl varchar2(6),
    door_ind varchar2(1),
    priority_level number(1),
    new_item varchar2(1),
    quarantine varchar2(1),
    rcvd_unit_qty number(12,4),
    tsf_po_link_id number(10),
    cost_source varchar2(4),
    est_in_stock_date date
    );
    create or replace TYPE PODtl_coll as table of PODTL_TYPE;
    create or replace TYPE PODesc AS OBJECT (
    doc_type varchar2(1),
    order_no varchar2(10),
    order_type varchar2(9),
    order_type_desc varchar2(250),
    dept number(4),
    dept_name varchar2(120),
    buyer number(4),
    buyer_name varchar2(120),
    supplier varchar2(10),
    promotion number(10),
    prom_desc varchar2(160),
    qc_ind varchar2(1),
    not_before_date date,
    not_after_date date,
    otb_eow_date date,
    earliest_ship_date date,
    latest_ship_date date,
    close_date date,
    terms varchar2(15),
    terms_code varchar2(50),
    freight_terms varchar2(30),
    cust_order varchar2(1),
    payment_method varchar2(6),
    payment_method_desc varchar2(40),
    backhaul_type varchar2(6),
    backhaul_type_desc varchar2(40),
    backhaul_allowance number(20,4),
    ship_method varchar2(6),
    ship_method_desc varchar2(40),
    purchase_type varchar2(6),
    purchase_type_desc varchar2(40),
    status varchar2(1),
    ship_pay_method varchar2(2),
    ship_pay_method_desc varchar2(40),
    fob_trans_res varchar2(2),
    fob_trans_res_code_desc varchar2(40),
    fob_trans_res_desc varchar2(250),
    fob_title_pass varchar2(2),
    fob_title_pass_code_desc varchar2(40),
    fob_title_pass_desc varchar2(250),
    vendor_order_no varchar2(15),
    exchange_rate number(20,10),
    factory varchar2(10),
    factory_desc varchar2(240),
    agent varchar2(10),
    agent_desc varchar2(240),
    discharge_port varchar2(5),
    discharge_port_desc varchar2(150),
    lading_port varchar2(5),
    lading_port_desc varchar2(150),
    bill_to_id varchar2(5),
    freight_contract_no varchar2(10),
    po_type varchar2(4),
    po_type_desc varchar2(120),
    pre_mark_ind varchar2(1),
    currency_code varchar2(3),
    contract_no number(6),
    pickup_loc varchar2(250),
    pickup_no varchar2(25),
    pickup_date date,
    app_datetime date,
    comment_desc varchar2(2000),
    PODtl PODtl_coll
    );
    These are my three Oracle types. When I try to create the queue table:
    BEGIN
    DBMS_AQADM.CREATE_QUEUE_TABLE(
    Queue_table => 'PODESC_QUEUE_TABLE',
    Queue_payload_type => 'PODesc',
    Multiple_consumers => TRUE);
    END;
    /
    I got the following error:
    22913. 00000 - "must specify table name for nested table column or attribute"
    *Cause:    The storage clause is not specified for a nested table column
    or attribute.
    *Action:   Specify the nested table storage clause for the nested table
    column or attribute.
    How can I solve this?

    Here is the syntax used by Oracle in one of their internal tables.
    orabase> select dbms_metadata.get_ddl('TABLE', 'ORDERS_QUEUETABLE', 'IX') from dual;
    DBMS_METADATA.GET_DDL('TABLE','ORDERS_QUEUETABLE','IX')
      CREATE TABLE "IX"."ORDERS_QUEUETABLE"
       (    "Q_NAME" VARCHAR2(30),
            "MSGID" RAW(16),
            "CORRID" VARCHAR2(128),
            "PRIORITY" NUMBER,
            "STATE" NUMBER,
            "DELAY" TIMESTAMP (6),
            "EXPIRATION" NUMBER,
            "TIME_MANAGER_INFO" TIMESTAMP (6),
            "LOCAL_ORDER_NO" NUMBER,
            "CHAIN_NO" NUMBER,
            "CSCN" NUMBER,
            "DSCN" NUMBER,
            "ENQ_TIME" TIMESTAMP (6),
            "ENQ_UID" VARCHAR2(30),
            "ENQ_TID" VARCHAR2(30),
            "DEQ_TIME" TIMESTAMP (6),
            "DEQ_UID" VARCHAR2(30),
            "DEQ_TID" VARCHAR2(30),
            "RETRY_COUNT" NUMBER,
            "EXCEPTION_QSCHEMA" VARCHAR2(30),
            "EXCEPTION_QUEUE" VARCHAR2(30),
            "STEP_NO" NUMBER,
            "RECIPIENT_KEY" NUMBER,
            "DEQUEUE_MSGID" RAW(16),
            "SENDER_NAME" VARCHAR2(30),
            "SENDER_ADDRESS" VARCHAR2(1024),
            "SENDER_PROTOCOL" NUMBER,
            "USER_DATA" "IX"."ORDER_EVENT_TYP" ,   <---------------------- seems analogous to what you are trying to do
            "USER_PROP" "SYS"."ANYDATA" ,
             PRIMARY KEY ("MSGID")
      USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS NOLOGGING
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "EXAMPLE"  ENABLE
       ) USAGE QUEUE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS NOLOGGING
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "EXAMPLE"
    OPAQUE TYPE "USER_PROP" STORE AS BASICFILE LOB (
      ENABLE STORAGE IN ROW CHUNK 8192 RETENTION
      CACHE
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT))
    Spend some time looking in directories under $ORACLE_HOME and you may well find the DDL that built it.
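
    In the DDL above the payload lands in the USER_DATA column, and ORA-22913 is raised because PODesc contains the nested table attribute PODtl but no storage table is named for it. One way around this, sketched here and not verified against your schema, is to pass a storage_clause to DBMS_AQADM.CREATE_QUEUE_TABLE that names a store table for user_data.PODtl (the store-table name podesc_qt_podtl_nt is arbitrary):
    BEGIN
      DBMS_AQADM.CREATE_QUEUE_TABLE(
        queue_table        => 'PODESC_QUEUE_TABLE',
        queue_payload_type => 'PODesc',
        multiple_consumers => TRUE,
        -- name a storage table for the nested table attribute inside the payload
        storage_clause     => 'NESTED TABLE user_data.PODtl STORE AS podesc_qt_podtl_nt');
    END;
    /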

  • OIM 9.1 GTC app db and "number" primary key

    Hi
    I am having a bit of an issue with the 9.1 GTC application table db adapter.
    The adapter works great as long as the primary key in the DB table is a VARCHAR2, but the provisioning fails completely when the primary key is a NUMBER. I have been playing around a bit trying to resolve the issue but nothing seems to be working.
    The implementation of the writing to the table is totally encapsulated in the generated code, and I can't find any XMLs lying around that I could change to get things working.
    Has anyone got any suggestions?
    Best regards
    /M
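
    One workaround that is sometimes used when a connector insists on string keys, sketched here as plain SQL and not verified against the OIM 9.1 GTC, is to point the adapter at a view that exposes the NUMBER key as VARCHAR2 and converts it back in an INSTEAD OF trigger. The table and column names below are hypothetical:
    -- Hypothetical target table app_users(user_id NUMBER, user_login, first_name, last_name).
    -- The view presents the key as a string so the connector sees a VARCHAR2 primary key.
    CREATE OR REPLACE VIEW app_users_v AS
      SELECT TO_CHAR(user_id) AS user_id, user_login, first_name, last_name
      FROM   app_users;

    -- Inserts through the view are converted back to NUMBER before hitting the real table.
    CREATE OR REPLACE TRIGGER app_users_v_ioi
      INSTEAD OF INSERT ON app_users_v
      FOR EACH ROW
    BEGIN
      INSERT INTO app_users (user_id, user_login, first_name, last_name)
      VALUES (TO_NUMBER(:NEW.user_id), :NEW.user_login, :NEW.first_name, :NEW.last_name);
    END;
    /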

    Hi Kevin,
    Thank you for your answer. The group has permission to view this request. For example: even if I include all the permissions that the group SPOC has on GIAAnalyst, it still does not work. In this environment we are not using the administrative queue.
    Thiago Leoncio

  • SAP Netweaver ABAP Trial - Running but Dialog Queue standstill

    Dear all,
    Since this morning I have been having trouble with the SAP NetWeaver ABAP Trial version, which has run for over 6 months without any issues. The issues appeared suddenly and I didn't change anything in the configuration or network. I have been going through the forum for a couple of hours now and have checked the following. I have read that the most common cause is that the log is full. Anyhow, I have checked the log files.
    1.) Log files: I can't see anything in the log files saying that something is wrong. Please find enclosed the log files DEV_DISP, DEV_W0 and DEV_MS:
    DEV_DISP:
    trc file: "dev_disp", trc level: 1, release: "720"
    sysno      00
    sid        NSP
    systemid   562 (PC with Windows NT)
    relno      7200
    patchlevel 0
    patchno    201
    intno      20020600
    make       multithreaded, Unicode, 64 bit, optimized
    profile    \\mwsrvsap\sapmnt\NSP\SYS\profile\NSP_DVEBMGS00_mwsrvsap
    pid        3112
    Sun Sep 07 12:04:00 2014
    kernel runs with dp version 133000(ext=118000) (@(#) DPLIB-INT-VERSION-133000-UC)
    length of sys_adm_ext is 588 bytes
    *** SWITCH TRC-HIDE on ***
    ***LOG Q00=> DpSapEnvInit, DPStart (00 3112) [dpxxdisp.c   1315]
      shared lib "dw_xml.dll" version 201 successfully loaded
      shared lib "dw_xtc.dll" version 201 successfully loaded
      shared lib "dw_stl.dll" version 201 successfully loaded
      shared lib "dw_gui.dll" version 201 successfully loaded
      shared lib "dw_mdm.dll" version 201 successfully loaded
      shared lib "dw_rndrt.dll" version 201 successfully loaded
      shared lib "dw_abp.dll" version 201 successfully loaded
      shared lib "dw_sym.dll" version 201 successfully loaded
      shared lib "dw_aci.dll" version 201 successfully loaded
    rdisp/softcancel_sequence :  -> 0,5,-1
    use internal message server connection to port 3900
    rdisp/dynamic_wp_check : 1
    rdisp/calculateLoadAverage : 1
    Sun Sep 07 12:04:04 2014
    *** WARNING => DpNetCheck: NiAddrToHost(1.0.0.0) took 4 seconds
    ***LOG GZZ=> 1 possible network problems detected - check tracefile and adjust the DNS settings [dpxxtool2.c  6423]
    MtxInit: 30000 0 0
    DpSysAdmExtInit: ABAP is active
    DpSysAdmExtInit: VMC (JAVA VM in WP) is not active
    DpIPCInit2: write dp-profile-values into sys_adm_ext
    DpIPCInit2: start server >mwsrvsap_NSP_00                         <
    DpShMCreate: sizeof(wp_adm) 31696 (2264)
    DpShMCreate: sizeof(tm_adm) 5517056 (27448)
    DpShMCreate: sizeof(wp_ca_adm) 64000 (64)
    DpShMCreate: sizeof(appc_ca_adm) 64000 (64)
    DpCommTableSize: max/headSize/ftSize/tableSize=500/16/584064/584080
    DpShMCreate: sizeof(comm_adm) 584080 (1144)
    DpSlockTableSize: max/headSize/ftSize/fiSize/tableSize=0/0/0/0/0
    DpShMCreate: sizeof(slock_adm) 0 (296)
    DpFileTableSize: max/headSize/ftSize/tableSize=0/0/0/0
    DpShMCreate: sizeof(file_adm) 0 (80)
    DpShMCreate: sizeof(vmc_adm) 0 (2152)
    DpShMCreate: sizeof(wall_adm) (41664/42896/64/192)
    DpShMCreate: sizeof(gw_adm) 48
    DpShMCreate: sizeof(j2ee_adm) 3952
    DpShMCreate: SHM_DP_ADM_KEY (addr: 00000000079D0050, size: 6363600)
    DpShMCreate: allocated sys_adm at 00000000079D0060
    DpShMCreate: allocated wp_adm_list at 00000000079D3070
    DpShMCreate: allocated wp_adm at 00000000079D3260
    DpShMCreate: allocated tm_adm_list at 00000000079DAE40
    DpShMCreate: allocated tm_adm at 00000000079DAE90
    DpShMCreate: allocated wp_ca_adm at 0000000007F1DDA0
    DpShMCreate: allocated appc_ca_adm at 0000000007F2D7B0
    DpShMCreate: allocated comm_adm at 0000000007F3D1C0
    DpShMCreate: system runs without slock table
    DpShMCreate: system runs without file table
    DpShMCreate: allocated vmc_adm_list at 0000000007FCBB60
    DpShMCreate: system runs without vmc_adm
    DpShMCreate: allocated gw_adm at 0000000007FCBC10
    DpShMCreate: allocated j2ee_adm at 0000000007FCBC50
    DpShMCreate: allocated ca_info at 0000000007FCCBD0
    DpShMCreate: allocated wall_adm at 0000000007FCCC60
    Sun Sep 07 12:04:05 2014
    DpCommAttachTable: attached comm table (header=0000000007F3D1C0/ft=0000000007F3D1D0)
    DpSysAdmIntInit: initialize sys_adm
    rdisp/test_roll : roll strategy is DP_NORMAL_ROLL
    dia token check not active (6 token)
    MBUF state OFF
    DpCommInitTable: init table for 500 entries
    DpRqQInit: keep protect_queue / slots_per_queue 0 / 2001 in sys_adm
    rdisp/queue_size_check_value :  -> on,50,30,40,500,50,500,80
    EmInit: MmSetImplementation( 2 ).
    MM global diagnostic options set: 0
    <ES> client 0 initializing ....
    <ES> EsILock: use spinlock for locking
    <ES> InitFreeList
    <ES> block size is 4096 kByte.
    <ES> Info: em/initial_size_MB( 8195MB) not multiple of em/blocksize_KB( 4096KB)
    <ES> Info: em/initial_size_MB rounded up to 8196MB
    Using implementation view
    <EsNT> Using memory model view.
    <EsNT> Memory Reset disabled as NT default
    <ES> 2048 blocks reserved for free list.
    ES initialized.
    mm.dump: set maximum dump mem to 96 MB
    DpVmcSetActive: set vmc state DP_VMC_NOT_ACTIVE
    MPI: dynamic quotas disabled.
    MPI init: pipes=4000 buffers=1279 reserved=383 quota=10%
    rdisp/http_min_wait_dia_wp : 1 -> 1
    ***LOG CPS=> DpLoopInit, ICU ( 4.0.1 4.0.1 5.1) [dpxxdisp.c   1701]
    ***LOG Q0K=> DpMsAttach, mscon ( mwsrvsap) [dpxxdisp.c   12467]
    MBUF state LOADING
    DpStartStopMsg: send start message (myname is >mwsrvsap_NSP_00                         <)
    DpStartStopMsg: start msg sent
    CCMS uses Shared Memory Key 73 for monitoring.
    CCMS: Initalized shared memory of size 60000000 for monitoring segment.
    CCMS: Checking Downtime Configuration of Monitoring Segment.
    CCMS: AlMsUpload called by wp 1024.
    Sun Sep 07 12:04:06 2014
    CCMS: AlMsUpload successful for C:\usr\sap\NSP\DVEBMGS00\log\ALMTTREE.DAT (657 MTEs).
    CCMS: start to initalize 3.X shared alert area (first segment).
    DpMBufHwIdSet: set Hardware-ID
    ***LOG Q1C=> DpMBufHwIdSet [dpxxmbuf.c   1296]
    MBUF state ACTIVE
    DpWpBlksLow: max wp blocks in queue is 800 (80 %)
    MBUF component UP
    DpMsgProcess: 1 server in MBUF
    DpAppcBlksLow: max appc blocks in queue is 500 (50 %)
    DEV_W0:
    trc file: "dev_w0", trc level: 1, release: "720"
    *  ACTIVE TRACE LEVEL           1
    *  ACTIVE TRACE COMPONENTS      all, MJ
    M sysno      00
    M sid        NSP
    M systemid   562 (PC with Windows NT)
    M relno      7200
    M patchlevel 0
    M patchno    201
    M intno      20020600
    M make       multithreaded, Unicode, 64 bit, optimized
    M profile    \\mwsrvsap\sapmnt\NSP\SYS\profile\NSP_DVEBMGS00_mwsrvsap
    M pid        3208
    M
    M  kernel runs with dp version 133000(ext=118000) (@(#) DPLIB-INT-VERSION-133000-UC)
    M  length of sys_adm_ext is 588 bytes
    M  ***LOG Q0Q=> tskh_init, WPStart (Workp. 0 3208) [dpxxdisp.c   1377]
    I  MtxInit: 30000 0 0
    M  DpSysAdmExtCreate: ABAP is active
    M  DpSysAdmExtCreate: VMC (JAVA VM in WP) is not active
    M  DpIPCInit2: read dp-profile-values from sys_adm_ext
    M  DpShMCreate: sizeof(wp_adm) 31696 (2264)
    M  DpShMCreate: sizeof(tm_adm) 5517056 (27448)
    M  DpShMCreate: sizeof(wp_ca_adm) 64000 (64)
    M  DpShMCreate: sizeof(appc_ca_adm) 64000 (64)
    M  DpCommTableSize: max/headSize/ftSize/tableSize=500/16/584064/584080
    M  DpShMCreate: sizeof(comm_adm) 584080 (1144)
    M  DpSlockTableSize: max/headSize/ftSize/fiSize/tableSize=0/0/0/0/0
    M  DpShMCreate: sizeof(slock_adm) 0 (296)
    M  DpFileTableSize: max/headSize/ftSize/tableSize=0/0/0/0
    M  DpShMCreate: sizeof(file_adm) 0 (80)
    M  DpShMCreate: sizeof(vmc_adm) 0 (2152)
    M  DpShMCreate: sizeof(wall_adm) (41664/42896/64/192)
    M  DpShMCreate: sizeof(gw_adm) 48
    M  DpShMCreate: sizeof(j2ee_adm) 3952
    M  DpShMCreate: SHM_DP_ADM_KEY (addr: 00000000100C0050, size: 6363600)
    M  DpShMCreate: allocated sys_adm at 00000000100C0060
    M  DpShMCreate: allocated wp_adm_list at 00000000100C3070
    M  DpShMCreate: allocated wp_adm at 00000000100C3260
    M  DpShMCreate: allocated tm_adm_list at 00000000100CAE40
    M  DpShMCreate: allocated tm_adm at 00000000100CAE90
    M  DpShMCreate: allocated wp_ca_adm at 000000001060DDA0
    M  DpShMCreate: allocated appc_ca_adm at 000000001061D7B0
    M  DpShMCreate: allocated comm_adm at 000000001062D1C0
    M  DpShMCreate: system runs without slock table
    M  DpShMCreate: system runs without file table
    M  DpShMCreate: allocated vmc_adm_list at 00000000106BBB60
    M  DpShMCreate: system runs without vmc_adm
    M  DpShMCreate: allocated gw_adm at 00000000106BBC10
    M  DpShMCreate: allocated j2ee_adm at 00000000106BBC50
    M  DpShMCreate: allocated ca_info at 00000000106BCBD0
    M  DpShMCreate: allocated wall_adm at 00000000106BCC60
    M  DpCommAttachTable: attached comm table (header=000000001062D1C0/ft=000000001062D1D0)
    M  DpRqQInit: use protect_queue / slots_per_queue 0 / 2001 from sys_adm
    M
    M Sun Sep 07 12:04:06 2014
    M  rdisp/queue_size_check_value :  -> on,50,30,40,500,50,500,80
    X  EmInit: MmSetImplementation( 2 ).
    X  MM global diagnostic options set: 0
    X  <ES> client 0 initializing ....
    X  <ES> EsILock: use spinlock for locking
    X  Using implementation view
    X  <EsNT> Using memory model view.
    M  <EsNT> Memory Reset disabled as NT default
    X  ES initialized.
    X  mm.dump: set maximum dump mem to 96 MB
    M  DpVmcSetActive: set vmc state DP_VMC_NOT_ACTIVE
    M  ThStart: taskhandler started
    M  ThInit: initializing DIA work process W0
    M
    M Sun Sep 07 12:04:08 2014
    M  ThInit: running on host mwsrvsap
    M
    M Sun Sep 07 12:04:10 2014
    M  calling db_connect ...
    B  Loading DB library 'C:\usr\sap\NSP\DVEBMGS00\exe\dbsdbslib.dll' ...
    B  Library 'C:\usr\sap\NSP\DVEBMGS00\exe\dbsdbslib.dll' loaded
    B  Version of 'C:\usr\sap\NSP\DVEBMGS00\exe\dbsdbslib.dll' is "720.00", patchlevel (0.201)
    C
    C  DBSDBSLIB : version 720.00, patch 0.201 (Make PL 0.201)
    C  MAXDB shared library (dbsdbslib) patchlevels (last 10)
    C    (0.201) Take care of warnings during database connect (note 1600066)
    C    (0.117) Define a primary key on the temp tables for R3szchk (note 1606260)
    C    (0.114) Support of MaxDB 7.8 and 7.9 (note 1653058)
    C    (0.103) Close all lob locators at end of the transaction (note 1626591)
    C    (0.101) Fix for unknown table __TABLE_SIZES_ (R3szchk) (note 1619504)
    C    (0.098) Use filesystem counter for R3szchk (note 1606260)
    C    (0.092) Secondary connection to HANA (note 1481256)
    C    (0.089) UPDSTAT with SAPSYSTEMNAME longer as 3 characters (note 1584921)
    C    (0.081) No UPSERT on WBCROSSGT (note 1521468)
    C    (0.080) New feature batch streaming (note 1340617)
    C
    C
    C  Loading SQLDBC client runtime ...
    C  SQLDBC Module  : C:\sapdb\clients\NSP\pgm\libSQLDBC77.dll
    C  SQLDBC SDK     : SQLDBC.H  7.9.7    BUILD 010-123-243-190
    C  SQLDBC Runtime : libSQLDBC 7.9.7    BUILD 010-123-243-190
    C  SQLDBC client runtime is MaxDB 7.9.7.010 CL 243190
    C  SQLDBC supports new DECIMAL interface : 1
    C  SQLDBC supports VARIABLE INPUT data   : 1
    C  SQLDBC supports VARIABLE OUTPUT data  : 1
    C  SQLDBC supports Multiple Streams      : 1
    C  SQLDBC supports LOB LOCATOR KEEPALIVE : 1
    C  SQLDBC supports LOB LOCATOR COPY      : 1
    C  SQLDBC supports BULK SELECT with LOBS : 1
    C  SQLDBC supports BATCH STREAM          : 1
    C  INFO : SQLOPT= -I 0 -t 0 -S SAPR3
    C  Try to connect (DEFAULT) on connection 0 ...
    C  Attach to SAP DB : Kernel    7.9.07   Build 010-123-243-190
    C  Database release is SAP DB 7.9.07.010
    C  INFO : Database 'NSP' instance is running on 'mwsrvsap'
    C  DB supports UPSERT SQL syntax : 1
    C  DB supports new EXPAND syntax : 1
    C  DB supports LOB locators      : 1
    C  DB uses MVCC support          : 0
    C  DB max. input host variables  : 2000
    C  DB max. statement length      : 65535
    C  UPSERT is disabled for : WBCROSSGT
    C  INFO : SAP DB Packet_Size = 131072
    C  INFO : SAP DB Min_Reply_Size = 4096
    C  INFO : SAP DB Comm_Size = 126976
    C  INFO : DBSL buffer size = 126976
    C  INFO : SAP DB MaxLocks = 300000
    C  INFO : Connect to DB as 'SAPNSP'
    C  Command info enabled
    C  Now I'm connected to MaxDB
    C  00: mwsrvsap-NSP, since=20140907120410, ABAP= <unknown> (0)
    B  Connection 0 opened (DBSL handle 0)
    C  INFO : SAP RELEASE (DB) = 731
    M  ThInit: db_connect o.k.
    M
    M Sun Sep 07 12:04:11 2014
    M  ICT: exclude compression: *.zip,*.rar,*.arj,*.z,*.gz,*.tar,*.lzh,*.cab,*.hqx,*.ace,*.jar,*.ear,*.war,*.css,*.pdf,*.gzip,*.uue,*.bz2,*.iso,*.sda,*.sar,*.gif,*.png,*.swc,*.swf
    DEV_MS:
    trc file: "dev_ms", trc level: 1, release: "720"
    [Thr 3092] Sun Sep 07 12:03:56 2014
    [Thr 3092] ms/http_max_clients = 500 -> 500
    [Thr 3092] MsSSetTrcLog: trc logging active, max size = 52428800 bytes
    systemid   562 (PC with Windows NT)
    relno      7200
    patchlevel 0
    patchno    101
    intno      20020600
    make       multithreaded, Unicode, 64 bit, optimized
    pid        3088
    [Thr 3092] ***LOG Q01=> MsSInit, MSStart (Msg Server 1 3088) [msxxserv.c   2278]
    [Thr 3092] load acl file = C:\usr\sap\NSP\SYS\global\ms_acl_info.DAT
    [Thr 3092] MsGetOwnIpAddr: my host addresses are :
    [Thr 3092]   1 : [10.0.0.225] mwsrvsap.local (HOSTNAME)
    [Thr 3092]   2 : [127.0.0.1] mwsrvsap (LOCALHOST)
    [Thr 3092]   3 : [10.10.0.10] mwsrvsap (NILIST)
    [Thr 3092] MsHttpInit: full qualified hostname = mwsrvsap
    [Thr 3092] HTTP logging is switch off
    [Thr 3092] set HTTP state to LISTEN
    [Thr 3092] *** HTTP port 8100 state LISTEN ***
    [Thr 3092] *** I listen to internal port 3900 (3900) ***
    [Thr 3092] *** HTTP port 8100 state LISTEN ***
    [Thr 3092] CUSTOMER KEY: >V1901974459<
    [Thr 3092] build version=720.2011.10.26
    2.) So I installed the SAP MaxDB Database Manager in order to check the logs and do the archiving. But here seems to be the issue: I cannot see the NSP database instance (see attached screenshot). How can this be? The log files say that the DB connection is OK.
    3.) What is really strange to me is the syslog in the management console --> see log3.png. Why does it say the work process is in reconnect status?
    Does anybody have an idea how to solve this issue? Help is really appreciated and points will be rewarded.
    Best regards
    Maik

    Thank you first of all for your answer.
    I tried to do so, but while I can add the DB instance in the MaxDB Database Manager, I am not able to connect to it (the service is running).
