Using metrics with a cluster of brokers.

Hello,
Here is my situation:
I'm using the metric topic "mq.metrics.destination_list" to receive the mapped message of metric data. I'm doing this to get the queue depth, the number of consumers, and the number of messages acknowledged. My JMS provider is Sun JMS. I have a cluster of two brokers on Windows servers, and I set the broker address list accordingly, as shown below:
connectFactory = new com.sun.messaging.ConnectionFactory();
connectFactory.setProperty("imqAddressList", addressList());
The problem is that the metric data only contains data for the first broker in my list. Since the brokers are clustered, I would expect to get the combined data for all brokers in the cluster. Is this possible, and if so, what am I missing?
PS: I know I could poll each broker and combine the data manually, but I'd prefer not to.
Thanks

I'm afraid that is the case at the moment: metrics are local to each broker.
Tom
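
Since the metric topic is broker-local, the workaround is indeed to subscribe to each broker in turn and merge the results. Below is a minimal sketch of that approach; the broker addresses (mq://host1:7676, mq://host2:7676) and the 60-second receive timeout are hypothetical, and the metric publishing interval is a broker-side configuration property:

import java.util.Enumeration;
import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MapMessage;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;

public class PerBrokerMetrics {

    // Subscribe to the metric topic on one specific broker and print one report.
    static void collect(String brokerAddress) throws JMSException {
        com.sun.messaging.ConnectionFactory cf = new com.sun.messaging.ConnectionFactory();
        // Pin the factory to a single broker instead of the whole cluster list.
        cf.setProperty("imqAddressList", brokerAddress);
        Connection conn = cf.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("mq.metrics.destination_list");
            MessageConsumer consumer = session.createConsumer(topic);
            conn.start();
            // Block until this broker publishes its next metric report.
            MapMessage m = (MapMessage) consumer.receive(60000);
            if (m != null) {
                for (Enumeration e = m.getMapNames(); e.hasMoreElements();) {
                    String name = (String) e.nextElement();
                    System.out.println(brokerAddress + " : " + name + " = " + m.getObject(name));
                }
            }
        } finally {
            conn.close();
        }
    }

    public static void main(String[] args) throws JMSException {
        // Combine cluster-wide data by polling every broker in turn.
        String[] brokers = { "mq://host1:7676", "mq://host2:7676" };
        for (String addr : brokers) {
            collect(addr);
        }
    }
}

Because imqAddressList is pinned to a single broker per connection, each MapMessage describes only that broker; the loop in main() does the manual combining the PS mentions.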

Similar Messages

  • FedEx can only be used with Dreamweaver using metric, not pounds (lbs)

    FedEx can only be used with Dreamweaver using metric, not pounds (lbs), evidently?
    This is as per their tier 2 tech support.
    They say I should just convert to metric, no joke!
    Any ideas would be greatly appreciated.
    Also a warning to anyone thinking of using Business Catalyst for their site, sorry, their business.
    Their note to me:
    Your request (# 82816) has been solved. To reopen this request, reply to this email or go to the Help & Support page.
    Silviu Ghimposanu (Adobe Business Catalyst Support)
    Apr 14 15:23
    Thanks for contacting us Michael
    Our engineering team has detected some problems when doing weight conversions (from the quantities entered in the interface to what's used when making the API calls to FedEx).
    We have added this to our internal bug tracker to be fixed in one of the future releases.
    I realise that this might not be what you wanted to hear, but at this moment we don't have a workaround in order to send the dimensions in pounds and not in KG.
    Sorry for this inconvenience!
    Kind regards,

    If you're in the US it will be in pounds. If your site is set to any other
    country in the rest of the world, it will be metric. So BC, or as you call it
    Dreamweaver, will look at the site country, which you set when creating the
    site.

  • "ORA-1715 : UNIQUE may not be used with a cluster index" but why?

    "ORA-1715 : UNIQUE may not be used with a cluster index" - but why, and what does "may" mean here? Any comments are welcome. Thank you and best regards.
    show rel
    release 1002000300

    CREATE CLUSTER sc_srvr_id (
      srvr_id NUMBER(10)) SIZE 1024;

    SELECT cluster_name, tablespace_name, hashkeys,
           degree, single_table FROM user_clusters;

    CREATE UNIQUE INDEX idx_sc_srvr_id ON CLUSTER sc_srvr_id;
    ERROR at line 1:
    ORA-01715: UNIQUE may not be used with a cluster index

    CREATE INDEX idx_sc_srvr_id ON CLUSTER sc_srvr_id;

    SELECT index_name, index_type, tablespace_name
    FROM user_indexes WHERE index_name LIKE '%SRVR%';

    CREATE TABLE cservers (
      srvr_id    NUMBER(10),
      network_id NUMBER(10),
      status     VARCHAR2(1),
      latitude   FLOAT(20),
      longitude  FLOAT(20),
      netaddress VARCHAR2(15))
    CLUSTER sc_srvr_id (srvr_id);

    ALTER TABLE cservers ADD CONSTRAINT pk_srvr_id PRIMARY KEY (srvr_id);

    SELECT index_name, index_type, tablespace_name
    FROM user_indexes WHERE index_name LIKE '%SRVR%';

    INDEX_NAME        INDEX_TYPE    TABLESPACE_NAME
    IDX_SC_SRVR_ID    CLUSTER       USERS
    PK_SRVR_ID        NORMAL        USERS

    Do we really need another pkey index here?

    "May" has different meanings, one of which is:
    (used to express opportunity or permission)
    Metalink note 19067.1 says:
    This is not permitted.
    ... which agrees with that meaning.
    Besides that, it does not make any sense to me to create a unique index on a cluster. You can have primary keys in the tables you include in the cluster, depending on your business requirement. But why would you need a unique index on the cluster itself?

  • Using equalizer with reports server cluster

    Hello World!
    Just wondering if anybody is using an equalizer with a Reports Server cluster; if yes, what type?
    What is the cluster architecture?
    Such as machines A, B, C...: machine A has infra and mid tier, machine B has only infra, machine C has only mid...
    The problem we are having is that when the second member of the cluster is down, the Reports Server component complains with some internal error...
    TIA

    3. Connection factories target: admin server; JMS server target: admin server
    > -> Potential drawback - The MDB has to maintain a queue connection with a server
    > that is not part of the cluster. (Again, please correct me if I'm wrong.) I'm
    > not sure if this introduces extra time taken for the MDB to receive its messages
    > and for it to send the processed messages to the queue.
    The admin server is not supposed to participate in the cluster. I wouldn't
    deploy anything on the admin server. My personal preference would be to target
    the connection factories to the cluster and use distributed destinations.
    Regards...
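
    To illustrate the suggestion: with the connection factory and a distributed destination both targeted to the cluster, client code only does plain JNDI lookups against the cluster address. A minimal sketch, where the JNDI names and the no-arg InitialContext setup are hypothetical:

    import javax.jms.Queue;
    import javax.jms.QueueConnectionFactory;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class ClusterJmsLookup {
        public static void main(String[] args) throws Exception {
            // Assumes jndi.properties points the provider URL at the cluster address,
            // so the lookup is served by whichever cluster member answers.
            Context ctx = new InitialContext();
            QueueConnectionFactory cf =
                (QueueConnectionFactory) ctx.lookup("jms/ClusterConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/MyDistributedQueue");
            System.out.println("Looked up " + cf + " and " + queue);
        }
    }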

  • Cannot use comparisons with column references in pool and cluster tables

    Hi Experts,
    I am facing an issue with a SELECT statement written on a 4.6 system, where the table was transparent. When porting it to ECC 6.0, it gives the following error:
    "You cannot use comparisons with column references in pool and cluster tables: 'A~MATNR'."
    The select statement is as below:
    SELECT a~kschl a~lifnr a~matnr a~ekorg a~werks a~esokz
           a~knumh a~datbi a~datab
           b~mtart b~matkl
           b~yybcezndr " Commented as not required (IDE)
           c~werks c~mmsta c~herkl
      INTO CORRESPONDING FIELDS OF TABLE gt_a017
      FROM a017 AS a
      INNER JOIN mara AS b
        ON b~matnr = a~matnr
      INNER JOIN marc AS c
        ON c~matnr = b~matnr
       AND c~werks = a~werks
      INNER JOIN lfa1 AS d
        ON d~lifnr = a~lifnr
      WHERE a~kappl = 'M'      AND
            a~lifnr IN s_lifnr AND
            a~matnr IN s_matnr AND
            a~ekorg IN s_ekorg AND
            a~kschl = v_kschl  AND
            a~kschl = gv_kschl.
    Kindly help me out with this, as A017 is a pooled table in ECC 6.0. Thanks in advance!
    Thanks and Best Regards,
    Sahil

    Hi Sahil,
    Refer to the code below:
    SELECT KSCHL LIFNR MATNR EKORG WERKS ESOKZ
           KNUMH DATBI DATAB
      FROM A017
      INTO TABLE IT_A017 " internal table
      WHERE KAPPL = 'M' AND
            LIFNR IN S_LIFNR AND
            MATNR IN S_MATNR AND
            EKORG IN S_EKORG AND
    *       KSCHL = V_KSCHL AND
            KSCHL = GV_KSCHL.

    IF IT_A017[] IS NOT INITIAL.
      SELECT MATNR MTART MATKL
        FROM MARA
        INTO TABLE IT_MARA " internal table
        FOR ALL ENTRIES IN IT_A017
        WHERE MATNR = IT_A017-MATNR.

      SELECT MATNR WERKS MMSTA HERKL
        FROM MARC
        INTO TABLE IT_MARC " internal table
        FOR ALL ENTRIES IN IT_A017
        WHERE MATNR = IT_A017-MATNR AND
              WERKS = IT_A017-WERKS.

      SELECT LIFNR
        FROM LFA1
        INTO TABLE IT_LFA1 " internal table
        FOR ALL ENTRIES IN IT_A017
        WHERE LIFNR = IT_A017-LIFNR.
    ENDIF.
    After this, use READ statements to fill the data into the final internal table.
    Please search SCN for more information about how to use FOR ALL ENTRIES.
    Hope it solves your problem.
    Thanks & Regards
    ilesh 24x7
    ilesh Nandaniya

  • Error occurred while finding users using API with custom field

    Hi All,
    I am getting the following error while searching for users via the API with a custom attribute. Has anybody faced this problem before?
    Hashtable<Object,Object> env = new Hashtable<Object,Object>();
    env.put("java.naming.factory.initial", "weblogic.jndi.WLInitialContextFactory");
    env.put(OIMClient.JAVA_NAMING_PROVIDER_URL, "t3://localhost:14000");
    System.setProperty("java.security.auth.login.config","C:\\Oracle\\Middleware\\Oracle_IDM1\\designconsole\\config\\authwl.conf");
    System.setProperty("OIM.AppServerType", "wls");
    System.setProperty("APPSERVER_TYPE", "wls");
    tcUtilityFactory ioUtilityFactory = new tcUtilityFactory(env, "xelsysadm", "Weblogic123$");
    OIMClient client = new OIMClient(env);
    client.login("xelsysadm", "Weblogic123$".toCharArray());
    SimpleDateFormat formatter = new SimpleDateFormat("yyyy-MM-dd");
    Date date = new Date(); // note: 'date' was not declared in the original snippet
    tcUserOperationsIntf moUserUtility = (tcUserOperationsIntf) ioUtilityFactory.getUtility("Thor.API.Operations.tcUserOperationsIntf");
    Hashtable mhSearchCriteria = new Hashtable();
    mhSearchCriteria.put("USR_UDF_ACTUALSTARTDATE", formatter.format(date));
    tcResultSet moResultSet = moUserUtility.findAllUsers(mhSearchCriteria);
    printTcResultSet(moResultSet,"abcd");
    log4j:WARN No appenders could be found for logger (org.springframework.jndi.JndiTemplate).
    log4j:WARN Please initialize the log4j system properly.
    Exception in thread "main" Thor.API.Exceptions.tcAPIException: Error occurred while finding users.
    at weblogic.rjvm.ResponseImpl.unmarshalReturn(ResponseImpl.java:237)
    at weblogic.rmi.cluster.ClusterableRemoteRef.invoke(ClusterableRemoteRef.java:348)
    at weblogic.rmi.cluster.ClusterableRemoteRef.invoke(ClusterableRemoteRef.java:259)
    at Thor.API.Operations.tcUserOperationsIntf_e9jcxp_tcUserOperationsIntfRemoteImpl_1036_WLStub.findAllUsersx(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at weblogic.ejb.container.internal.RemoteBusinessIntfProxy.invoke(RemoteBusinessIntfProxy.java:85)
    at com.sun.proxy.$Proxy2.findAllUsersx(Unknown Source)
    at Thor.API.Operations.tcUserOperationsIntfDelegate.findAllUsers(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at Thor.API.Base.SecurityInvocationHandler$1.run(SecurityInvocationHandler.java:68)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120)
    at weblogic.security.Security.runAs(Security.java:41)
    at Thor.API.Security.LoginHandler.weblogicLoginSession.runAs(weblogicLoginSession.java:52)
    at Thor.API.Base.SecurityInvocationHandler.invoke(SecurityInvocationHandler.java:79)
    at com.sun.proxy.$Proxy3.findAllUsers(Unknown Source)
    at oim.standalone.code.OIMAPIConnection.usersearch(OIMAPIConnection.java:209)
    at oim.standalone.code.OIMAPIConnection.main(OIMAPIConnection.java:342)
    Caused by: Thor.API.Exceptions.tcAPIException: Error occurred while finding users.
    at com.thortech.xl.ejb.beansimpl.tcUserOperationsBean.findAllUsers(tcUserOperationsBean.java:4604)
    at Thor.API.Operations.tcUserOperationsIntfEJB.findAllUsersx(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor1614.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.bea.core.repackaged.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:310)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
    at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:131)
    at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:119)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at com.oracle.pitchfork.spi.MethodInvocationVisitorImpl.visit(MethodInvocationVisitorImpl.java:34)
    at weblogic.ejb.container.injection.EnvironmentInterceptorCallbackImpl.callback(EnvironmentInterceptorCallbackImpl.java:54)
    at com.oracle.pitchfork.spi.EnvironmentInterceptor.invoke(EnvironmentInterceptor.java:42)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at com.bea.core.repackaged.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:89)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:131)
    at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:119)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at com.bea.core.repackaged.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
    at com.sun.proxy.$Proxy347.findAllUsersx(Unknown Source)
    at Thor.API.Operations.tcUserOperationsIntf_e9jcxp_tcUserOperationsIntfRemoteImpl.__WL_invoke(Unknown Source)
    at weblogic.ejb.container.internal.SessionRemoteMethodInvoker.invoke(SessionRemoteMethodInvoker.java:40)
    at Thor.API.Operations.tcUserOperationsIntf_e9jcxp_tcUserOperationsIntfRemoteImpl.findAllUsersx(Unknown Source)
    at Thor.API.Operations.tcUserOperationsIntf_e9jcxp_tcUserOperationsIntfRemoteImpl_WLSkel.invoke(Unknown Source)
    at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:667)
    at weblogic.rmi.cluster.ClusterableServerRef.invoke(ClusterableServerRef.java:230)
    at weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:522)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:146)
    at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:518)
    at weblogic.rmi.internal.wls.WLSExecuteRequest.run(WLSExecuteRequest.java:118)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:256)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:221)
    Thank you

    Hi J,
    Thanks for the reply. But the code works fine for OOTB attributes, and with the 11g API I am getting a permission exception:
    Exception in thread "main" oracle.iam.platform.authz.exception.AccessDeniedException: You do not have permission to search the following user attributes: USR_UDF_ACTUALSTARTDATE.
    at oracle.iam.identity.usermgmt.impl.UserManagerImpl.search(UserManagerImpl.java:1465)
    at sun.reflect.GeneratedMethodAccessor1034.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
    at oracle.iam.platform.utils.DMSMethodInterceptor.invoke(DMSMethodInterceptor.java:25)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
    at com.sun.proxy.$Proxy366.search(Unknown Source)
    at oracle.iam.identity.usermgmt.api.UserManagerEJB.searchx(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor1449.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.bea.core.repackaged.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:310)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
    at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:131)
    at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:119)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at com.oracle.pitchfork.spi.MethodInvocationVisitorImpl.visit(MethodInvocationVisitorImpl.java:34)
    at weblogic.ejb.container.injection.EnvironmentInterceptorCallbackImpl.callback(EnvironmentInterceptorCallbackImpl.java:54)
    at com.oracle.pitchfork.spi.EnvironmentInterceptor.invoke(EnvironmentInterceptor.java:42)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at com.bea.core.repackaged.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:89)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:131)
    at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:119)
    at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at com.bea.core.repackaged.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
    at com.sun.proxy.$Proxy365.searchx(Unknown Source)
    at oracle.iam.identity.usermgmt.api.UserManager_nimav7_UserManagerRemoteImpl.__WL_invoke(Unknown Source)
    at weblogic.ejb.container.internal.SessionRemoteMethodInvoker.invoke(SessionRemoteMethodInvoker.java:40)
    at oracle.iam.identity.usermgmt.api.UserManager_nimav7_UserManagerRemoteImpl.searchx(Unknown Source)
    at oracle.iam.identity.usermgmt.api.UserManager_nimav7_UserManagerRemoteImpl_WLSkel.invoke(Unknown Source)
    at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:667)
    at weblogic.rmi.cluster.ClusterableServerRef.invoke(ClusterableServerRef.java:230)
    at weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:522)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:146)
    at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:518)
    at weblogic.rmi.internal.wls.WLSExecuteRequest.run(WLSExecuteRequest.java:118)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:256)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:221)
    Caused by: oracle.iam.identity.exception.SearchAttributeAccessDeniedException: You do not have permission to search the following user attributes: USR_UDF_ACTUALSTARTDATE.
    at oracle.iam.identity.usermgmt.impl.UserManagerImpl.search(UserManagerImpl.java:1462)
    ... 44 more
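
    For what it's worth, here is a hedged sketch of the 11g-style search that raises this exception. The attribute specifics are assumptions: UserManager.search() expects the attribute name defined in the user metadata (not the USR_UDF_ database column), "ActualStartDate" is a placeholder for that name, and the logged-in user must be granted search permission on the attribute in the relevant authorization policy before the AccessDeniedException goes away:

    import java.util.HashSet;
    import java.util.Hashtable;
    import java.util.List;

    import oracle.iam.identity.usermgmt.api.UserManager;
    import oracle.iam.identity.usermgmt.vo.User;
    import oracle.iam.platform.OIMClient;
    import oracle.iam.platform.entitymgr.vo.SearchCriteria;

    public class UdfSearchSketch {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put("java.naming.factory.initial", "weblogic.jndi.WLInitialContextFactory");
            env.put(OIMClient.JAVA_NAMING_PROVIDER_URL, "t3://localhost:14000");
            System.setProperty("java.security.auth.login.config",
                    "C:\\Oracle\\Middleware\\Oracle_IDM1\\designconsole\\config\\authwl.conf");

            OIMClient client = new OIMClient(env);
            client.login("xelsysadm", "Weblogic123$".toCharArray());

            UserManager userManager = client.getService(UserManager.class);
            // "ActualStartDate" is a placeholder: use the UDF's metadata name here.
            SearchCriteria criteria =
                    new SearchCriteria("ActualStartDate", "2014-01-01", SearchCriteria.Operator.EQUAL);
            HashSet<String> returnAttrs = new HashSet<String>();
            returnAttrs.add("User Login");
            List<User> users = userManager.search(criteria, returnAttrs, null);
            for (User u : users) {
                System.out.println(u.getLogin());
            }
        }
    }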

  • The hostname test01 is not authorized to be used in this zone cluster

    Hi,
    I have problems registering a LogicalHostname with a zone cluster.
    Here are my steps:
    - create the ZoneCluster
    # clzc configure test01
    clzc:test01> info
    zonename: test01
    zonepath: /export/zones/test01
    autoboot: true
    brand: cluster
    bootargs:
    pool: test
    limitpriv:
    scheduling-class:
    ip-type: shared
    enable_priv_net: true
    sysid:
        name_service not specified
        nfs4_domain: dynamic
        security_policy: NONE
        system_locale: en_US.UTF-8
        terminal: vt100
        timezone: Europe/Berlin
    node:
        physical-host: farm01a
        hostname: test01a
        net:
            address: 172.19.115.232
            physical: e1000g0
    node:
        physical-host: farm01b
        hostname: test01b
        net:
            address: 172.19.115.233
            physical: e1000g0
    - create a RG
    # clrg create -Z test01 test01-rg
    - create Logicalhostname (with error)
    # clrslh create -g test01-rg -Z test01 -h test01 test01-ip
    clrslh: farm01b:test01 - The hostname test01 is not authorized to be used in this zone cluster test01.
    clrslh: farm01b:test01 - Resource contains invalid hostnames.
    clrslh: (C189917) VALIDATE on resource test01-ip, resource group test01-rg, exited with non-zero exit status.
    clrslh: (C720144) Validation of resource test01-ip in resource group test01-rg on node test01b failed.
    clrslh: (C891200) Failed to create resource "test01:test01-ip".
    Here are the /etc/hosts entries from farm01a and farm01b:
    172.19.115.119 farm01a # Cluster Node
    172.19.115.120 farm01b loghost
    172.19.115.232 test01a
    172.19.115.233 test01b
    172.19.115.252 test01
    Hope somebody could help.
    regards,
    Sascha
    Edited by: sbrech on 13.05.2009 11:44

    When I scanned my last example of a zone cluster, I spotted that I had added my logical host to the zone cluster's configuration:
    create -b
    set zonepath=/zones/cluster
    set brand=cluster
    set autoboot=true
    set enable_priv_net=true
    set ip-type=shared
    add inherit-pkg-dir
    set dir=/lib
    end
    add inherit-pkg-dir
    set dir=/platform
    end
    add inherit-pkg-dir
    set dir=/sbin
    end
    add inherit-pkg-dir
    set dir=/usr
    end
    add net
    set address=applh
    set physical=auto
    end
    add dataset
    set name=applpool
    end
    add node
    set physical-host=deulwork80
    set hostname=deulclu
    add net
    set address=172.16.30.81
    set physical=e1000g0
    end
    end
    add sysid
    set root_password=nMKsicI310jEM
    set name_service=""
    set nfs4_domain=dynamic
    set security_policy=NONE
    set system_locale=C
    set terminal=vt100
    set timezone=Europe/Berlin
    end
    I am referring to:
    add net
    set address=applh
    set physical=auto
    end
    So as far as I can see, this is missing from your configuration. Sorry for leading you the wrong way.
    Detlef

  • Unable to plot 1-D array with a cluster of 2 elements on an XY graph

    I'm unable to plot a 1-D array of clusters of 2 elements on an XY graph. The data appears at the input to the graph, but nothing is plotted. I'm trying to plot a line connecting each point generated.
    Attachments:
    TEST11.vi ‏13 KB

    Chuck,
    0. Do not post VIs with infinite loops! Change the True constant to a control. One of the regular participants on these forums has commented that using the Abort button to stop a VI is like using a tree to stop a car: it works, but may have undesired consequences!
    1. Use a shift register on the For loop also.
    2. Inserting into an empty array leaves you with an empty array. Initialize the array outside the loop to a size greater than or equal to the maximum size you will be using, then use Replace Array Subset inside the loop.
    3. Just make an array of the cluster of points. No need for the extra cluster.
    Lynn
    Attachments:
    TEST11.2.vi ‏11 KB

  • Recovery scenario - Voting disk does not match with the cluster guid

    Hi all,
    Suppose you cannot start your guest VMs because of a corrupted system.img root image, and assume each contains 5 physical disks (all created by the RAC template), hence ASM on them.
    What is the simplest recovery scenario for the guest VMs (RAC)?
    Can the following be a feasible scenario for recovering availability? (Assume both RAC system images are corrupted and we prefer backup/restore rather than a system-level recovery.)
    1. Create 2 RAC instances using the same networking and hostname details as the ones that are corrupted. - Use 5 different new disks.
    2. Shut down the newly created instances and drop their disks using VM Manager.
    3. Add the old disks (whose system images failed to be recovered but whose ASM disks are still usable) to the newly created instances using VM Manager.
    4. Open the newly created instances.
    Can we expect ASM and CRS to initialize and open without a problem?
    When I try this scenario I get the following error from cssd/crsd:
    - Cluster guid 9112ddc0824fefd5ff2b7f9f7be8f048 found in voting disk does not match with the cluster guid a3eec66a2854ff0bffe784260856f92a obtained from the GPnP profile.
    - Found 0 configured voting files but 1 voting files are required, terminating to ensure data integrity.
    What would be the simplest way to recover a virtual machine that has healthy ASM disks but a corrupted system image?
    Thank you

    Hi,
    you hit a similar problem when trying to clone databases with 11.2.
    The problem is that a cluster is uniquely identified, and this information is held in the OCR and the voting disks. So exactly these two are not to be cloned.
    To achieve what you want, simply set up your system so that you have a separate diskgroup for OCR and voting disks (and the ASM spfile), which is not restored in this kind of scenario.
    Only the database files in ASM will then be exchanged later; then what you want can be achieved.
    However, I am not sure that the RAC templates have the option to install OCR and voting disks into a separate diskgroup.
    Regards
    Sebastian

  • JNDI Lookup for multiple server instances with multiple cluster nodes

    Hi Experts,
    I need help with retrieving log files for multiple server instances with multiple cluster nodes. The system is NetWeaver 7.01.
    There are 3 server instances, each with 3 cluster nodes.
    There are EJB session beans deployed on them to retrieve the log information for each server node.
    In the session bean there is a method:
    public List getServers() {
      List servers = new ArrayList();
      ClassLoader saveLoader = Thread.currentThread().getContextClassLoader();
      try {
        Properties prop = new Properties();
        prop.setProperty(Context.INITIAL_CONTEXT_FACTORY, "com.sap.engine.services.jndi.InitialContextFactoryImpl");
        prop.put(Context.SECURITY_AUTHENTICATION, "none");
        Thread.currentThread().setContextClassLoader(
            com.sap.engine.services.adminadapter.interfaces.RemoteAdminInterface.class.getClassLoader());
        InitialContext mInitialContext = new InitialContext(prop);
        RemoteAdminInterface rai = (RemoteAdminInterface) mInitialContext.lookup("adminadapter");
        ClusterAdministrator cadm = rai.getClusterAdministrator();
        ConvenienceEngineAdministrator cea = rai.getConvenienceEngineAdministrator();
        int[] nodeId = cea.getClusterNodeIds();
        String dispatcherIP = null;
        String p4Port = null;
        // Collect the host and P4 port of every dispatcher node in the cluster.
        for (int i = 0; i < nodeId.length; i++) {
          if (cea.getClusterNodeType(nodeId[i]) != 1)
            continue;
          Properties dispatcherProp = cadm.getNodeInfo(nodeId[i]);
          dispatcherIP = dispatcherProp.getProperty("Host", "localhost");
          p4Port = cea.getServiceProperty(nodeId[i], "p4", "port");
          String[] loc = new String[3];
          loc[0] = dispatcherIP;
          loc[1] = p4Port;
          loc[2] = null;
          servers.add(loc);
        }
        mInitialContext.close();
      } catch (NamingException e) {
      } catch (RemoteException e) {
      } finally {
        Thread.currentThread().setContextClassLoader(saveLoader);
      }
      return servers;
    }
    and the retrieved server information is used here in another class:
    public void run() {
      ReadLogsSession readLogsSession;
      int total = servers.size();
      for (Iterator iter = servers.iterator(); iter.hasNext();) {
        if (!keepAlive) {
          break; // stop when the worker has been asked to shut down
        }
        try {
          Thread.sleep(500);
        } catch (InterruptedException e) {
          status = status + e.getMessage();
          System.err.println("LogReader Thread Exception" + e.toString());
          e.printStackTrace();
        }
        String[] serverLocs = (String[]) iter.next();
        searchFilter.setDetails("[" + serverLocs[1] + "]");
        Properties prop = new Properties();
        prop.put(Context.INITIAL_CONTEXT_FACTORY, "com.sap.engine.services.jndi.InitialContextFactoryImpl");
        prop.put(Context.PROVIDER_URL, serverLocs[0] + ":" + serverLocs[1]);
        System.err.println("LogReader run [" + serverLocs[0] + ":" + serverLocs[1] + "]");
        status = " Reading :[" + serverLocs[0] + ":" + serverLocs[1] + "] servers :[" + currentIndex + "/" + total + " ] ";
        prop.put("force_remote", "true");
        prop.put(Context.SECURITY_AUTHENTICATION, "none");
        try {
          Context ctx = new InitialContext(prop);
          Object ob = ctx.lookup("com.xom.sia.ReadLogsSession");
          ReadLogsSessionHome readLogsSessionHome = (ReadLogsSessionHome) PortableRemoteObject.narrow(ob, ReadLogsSessionHome.class);
          status = status + "Found ReadLogsSessionHome [" + readLogsSessionHome + "]";
          readLogsSession = readLogsSessionHome.create();
          if (readLogsSession != null) {
            status = status + " Created [" + readLogsSession + "]";
            List l = readLogsSession.getAuditLogs(searchFilter);
            serverLocs[2] = String.valueOf(l.size());
            status = status + serverLocs[2];
            allRecords.addAll(l);
          } else {
            status = status + " unable to create readLogsSession ";
          }
          ctx.close();
        } catch (NamingException e) {
          status = status + e.getMessage();
          System.err.println(e.getMessage());
          e.printStackTrace();
        } catch (CreateException e) {
          status = status + e.getMessage();
          System.err.println(e.getMessage());
          e.printStackTrace();
        } catch (IOException e) {
          status = status + e.getMessage();
          System.err.println(e.getMessage());
          e.printStackTrace();
        } catch (Exception e) {
          status = status + e.getMessage();
          System.err.println(e.getMessage());
          e.printStackTrace();
        }
        currentIndex++;
      }
      jobComplete = true;
    }
    The application works for multiple server instances with a single cluster node each, but not in a multi-node clustered environment.
    Does anybody know what should be changed to handle more cluster nodes?
    Thanks,
    Gergely

    Thanks for the response.
    I was afraid that it would be something like that, although I was hoping for
    something closer to the application pools we use with IIS to isolate sites
    and limit the impact one badly behaving one can have on another.
    mmr

    "Ian Skinner" <[email protected]> wrote in message
    news:fe5u5v$pue$[email protected]..
    > Run CF with one instance. Look at your processes and see how much memory
    > the "JRun" process is using, then multiply this by the number of other CF
    > instances.
    >
    > You are most likely going to end up implementing a "handful" of
    > instances versus "dozens" of instances on all but the beefiest of servers.
    >
    > This can be affected by how much memory each instance uses. An
    > application that puts major amounts of data into persistent scopes such as
    > application and|or session will have a larger footprint than a leaner
    > application that does not put much data into memory and|or leave it there
    > for a very long time.
    >
    > I know the first time we made use of CF in its multi-home flavor, we went
    > a bit overboard and created way too many. After nearly bringing a
    > moderate server to its knees, we consolidated until we had three or four
    > or so IIRC. A couple dedicated to each of our largest and most
    > critical applications and a couple of general instances that ran many
    > smaller applications each.

  • OSB 10gR3 MQ transport - How to loadbalance with MQ cluster?

    Hi,
    I'm using the MQ transport to put and get messages from OSB 10.3.1. I would like to know how a Proxy/Business Service can be configured to make use of an MQ cluster (or an MQ queue manager cluster, to be specific).
    Thanks in advance
    Vikas

    Interesting question...
    Here http://docs.oracle.com/cd/E21764_01/doc.1111/e15867/interop.htm#OSBAG1403 they say:
    "IBM WebSphere MQ 5.3, 6.0, 7.0 server support with 6.x client libraries"
    I vaguely remember having used the 6.0 jars to connect to MQ 7, but I believe that even using the 7.0 jars would work... don't quote me on this...

  • Why inner join or outer join is not used for pool or cluster tables?

    Hi SAP ABAP experts,
    With due regards.
    Could you explain why an inner join or outer join is not useful for pool or cluster tables?
    People advise not to use joins on pool and cluster tables. What harm will take place if we do?
    Best regards to all: Rajneesh

    Both Pooled and Cluster Tables are stored as tables within the database. Only the structures of the two table types which represent a single logical view of the data are defined within the ABAP/4 Data Dictionary. The data is actually stored in bulk storage in a different structure. These tables are commonly loaded into memory (i.e., 'buffered') due to the fact they are typically used for storing internal control information and other types of data with little or no external (business) relevance.
    Pooled and cluster tables are usually used only by SAP and not by customers, probably because of the proprietary format of these tables within the database and because of technical restrictions placed upon their use within ABAP/4 programs. On a pooled or cluster table:
    - Secondary indexes cannot be created.
    - You cannot use the ABAP/4 constructs SELECT DISTINCT or GROUP BY.
    - You cannot use native SQL.
    - You cannot specify field names after the ORDER BY clause; ORDER BY PRIMARY KEY is the only permitted variation.
    I hope it helps.
    Best Regards,
    Vibha
    Please mark all the helpful answers

  • Getting "invalid type: 169" errors when using POF with Push Replication

    I'm trying to get Push Replication - latest version - running on Coherence 3.6.1. I can get it working fine if I don't use POF with my objects, but when trying to use POF format for my objects I get this:
    2011-02-11 13:06:00.993/2.297 Oracle Coherence GE 3.6.1.1 <D5> (thread=Invocation:Management, member=1): Service Management joined the cluster with senior service member 1
    2011-02-11 13:06:01.149/2.453 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded POF configuration from "file:/C:/wsgpc/GlobalPositionsCache/resource/coherence/pof-config.xml"
    2011-02-11 13:06:01.149/2.453 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded included POF configuration from "jar:file:/C:/coherence3.6/coherence/lib/coherence.jar!/coherence-pof-config.xml"
    2011-02-11 13:06:01.149/2.453 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded included POF configuration from "jar:file:/C:/coherence3.6-pushreplication/coherence-3.6-common-1.7.3.20019.jar!/coherence-common-pof-config.xml"
    2011-02-11 13:06:01.165/2.469 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded included POF configuration from "jar:file:/C:/coherence3.6-pushreplication/coherence-3.6-messagingpattern-2.7.4.21016.jar!/coherence-messagingpattern-pof-config.xml"
    2011-02-11 13:06:01.165/2.469 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded included POF configuration from "jar:file:/C:/coherence3.6-pushreplication/coherence-3.6-pushreplicationpattern-3.0.3.20019.jar!/coherence-pushreplicationpattern-pof-config.xml"
    2011-02-11 13:06:01.243/2.547 Oracle Coherence GE 3.6.1.1 <D5> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Service DistributedCacheForSequenceGenerators joined the cluster with senior service member 1
    2011-02-11 13:06:01.258/2.562 Oracle Coherence GE 3.6.1.1 <D5> (thread=DistributedCache:DistributedCacheForLiveObjects, member=1): Service DistributedCacheForLiveObjects joined the cluster with senior service member 1
    2011-02-11 13:06:01.274/2.578 Oracle Coherence GE 3.6.1.1 <D5> (thread=DistributedCache:DistributedCacheForSubscriptions, member=1): Service DistributedCacheForSubscriptions joined the cluster with senior service member 1
    2011-02-11 13:06:01.290/2.594 Oracle Coherence GE 3.6.1.1 <D5> (thread=DistributedCache:DistributedCacheForMessages, member=1): Service DistributedCacheForMessages joined the cluster with senior service member 1
    2011-02-11 13:06:01.305/2.609 Oracle Coherence GE 3.6.1.1 <D5> (thread=DistributedCache:DistributedCacheForDestinations, member=1): Service DistributedCacheForDestinations joined the cluster with senior service member 1
    2011-02-11 13:06:01.305/2.609 Oracle Coherence GE 3.6.1.1 <D5> (thread=DistributedCache:DistributedCacheWithPublishingCacheStore, member=1): Service DistributedCacheWithPublishingCacheStore joined the cluster with senior service member 1
    2011-02-11 13:06:01.321/2.625 Oracle Coherence GE 3.6.1.1 <D5> (thread=DistributedCache, member=1): Service DistributedCache joined the cluster with senior service member 1
    2011-02-11 13:06:01.461/2.765 Oracle Coherence GE 3.6.1.1 <Info> (thread=Proxy:ExtendTcpProxyService:TcpAcceptor, member=1): TcpAcceptor now listening for connections on 166.15.224.91:20002
    2011-02-11 13:06:01.461/2.765 Oracle Coherence GE 3.6.1.1 <D5> (thread=Proxy:ExtendTcpProxyService:TcpAcceptor, member=1): Started: TcpAcceptor{Name=Proxy:ExtendTcpProxyService:TcpAcceptor, State=(SERVICE_STARTED), ThreadCount=0, Codec=Codec(Format=POF), Serializer=com.tangosol.io.DefaultSerializer, PingInterval=0, PingTimeout=0, RequestTimeout=0, SocketProvider=SystemSocketProvider, LocalAddress=[/166.15.224.91:20002], SocketOptions{LingerTimeout=0, KeepAliveEnabled=true, TcpDelayEnabled=false}, ListenBacklog=0, BufferPoolIn=BufferPool(BufferSize=2KB, BufferType=DIRECT, Capacity=Unlimited), BufferPoolOut=BufferPool(BufferSize=2KB, BufferType=DIRECT, Capacity=Unlimited)}
    2011-02-11 13:06:01.461/2.765 Oracle Coherence GE 3.6.1.1 <D5> (thread=Proxy:ExtendTcpProxyService, member=1): Service ExtendTcpProxyService joined the cluster with senior service member 1
    2011-02-11 13:06:01.461/2.765 Oracle Coherence GE 3.6.1.1 <Info> (thread=main, member=1):
    Services
    ClusterService{Name=Cluster, State=(SERVICE_STARTED, STATE_JOINED), Id=0, Version=3.6, OldestMemberId=1}
    InvocationService{Name=Management, State=(SERVICE_STARTED), Id=1, Version=3.1, OldestMemberId=1}
    PartitionedCache{Name=DistributedCacheForSequenceGenerators, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
    PartitionedCache{Name=DistributedCacheForLiveObjects, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
    PartitionedCache{Name=DistributedCacheForSubscriptions, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
    PartitionedCache{Name=DistributedCacheForMessages, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
    PartitionedCache{Name=DistributedCacheForDestinations, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
    PartitionedCache{Name=DistributedCacheWithPublishingCacheStore, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
    PartitionedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
    ProxyService{Name=ExtendTcpProxyService, State=(SERVICE_STARTED), Id=9, Version=3.2, OldestMemberId=1}
    Started DefaultCacheServer...
    2011-02-11 13:08:27.894/149.198 Oracle Coherence GE 3.6.1.1 <Error> (thread=Proxy:ExtendTcpProxyService:TcpAcceptor, member=1): Failed to publish EntryOperation{siteName=csfb.cs-group.com, clusterName=SPTestCluster, cacheName=source-cache, operation=Insert, publishableEntry=PublishableEntry{key=Binary(length=32, value=0x15A90F00004E07424F4F4B303038014E08494E535430393834024E0345535040), value=Binary(length=147, value=0x1281A30115AA0F0000A90F00004E07424F4F4B303038014E08494E535430393834024E03455350400248ADEEF99607060348858197BF22060448B4D8E9BE02060548A0D2CDC70E060648B0E9A2C4030607488DBCD6E50D060848B18FC1882006094E03303038402B155B014E0524737263244E1F637366622E63732D67726F75702E636F6D2D535054657374436C7573746572), originalValue=Binary(length=0, value=0x)}} to Cache passive-cache because of
    (Wrapped) java.io.StreamCorruptedException: invalid type: 169 Class:com.oracle.coherence.patterns.pushreplication.publishers.cache.AbstractCachePublisher
    2011-02-11 13:08:27.894/149.198 Oracle Coherence GE 3.6.1.1 <D5> (thread=Proxy:ExtendTcpProxyService:TcpAcceptor, member=1): An exception occurred while processing a InvocationRequest for Service=Proxy:ExtendTcpProxyService:TcpAcceptor: (Wrapped: Failed to publish a batch with the publisher [Active Publisher] on cache [source-cache]) java.lang.IllegalStateException: Attempted to publish to cache passive-cache
         at com.tangosol.util.Base.ensureRuntimeException(Base.java:293)
         at com.oracle.coherence.patterns.pushreplication.publishers.RemoteClusterPublisher$RemotePublishingAgent.run(RemoteClusterPublisher.java:348)
         at com.tangosol.coherence.component.net.extend.proxy.serviceProxy.InvocationServiceProxy.query(InvocationServiceProxy.CDB:6)
         at com.tangosol.coherence.component.net.extend.messageFactory.InvocationServiceFactory$InvocationRequest.onRun(InvocationServiceFactory.CDB:12)
         at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
         at com.tangosol.coherence.component.net.extend.proxy.serviceProxy.InvocationServiceProxy.onMessage(InvocationServiceProxy.CDB:9)
         at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:39)
         at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer.onNotify(Peer.CDB:103)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:662)
    Caused by: java.lang.IllegalStateException: Attempted to publish to cache passive-cache
         at com.oracle.coherence.patterns.pushreplication.publishers.cache.AbstractCachePublisher.publishBatch(AbstractCachePublisher.java:163)
         at com.oracle.coherence.patterns.pushreplication.publishers.RemoteClusterPublisher$RemotePublishingAgent.run(RemoteClusterPublisher.java:343)
         ... 9 more
    Caused by: (Wrapped) java.io.StreamCorruptedException: invalid type: 169
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:265)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService$ConverterKeyToBinary.convert(PartitionedService.CDB:16)
         at com.tangosol.util.ConverterCollections$ConverterInvocableMap.invoke(ConverterCollections.java:2156)
         at com.tangosol.util.ConverterCollections$ConverterNamedCache.invoke(ConverterCollections.java:2622)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ViewMap.invoke(PartitionedCache.CDB:11)
         at com.tangosol.coherence.component.util.SafeNamedCache.invoke(SafeNamedCache.CDB:1)
         at com.oracle.coherence.patterns.pushreplication.publishers.cache.AbstractCachePublisher.publishBatch(AbstractCachePublisher.java:142)
         ... 10 more
    Caused by: java.io.StreamCorruptedException: invalid type: 169
         at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2265)
         at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2253)
         at com.tangosol.io.DefaultSerializer.deserialize(DefaultSerializer.java:74)
         at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2703)
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:261)
         ... 16 more
    2011-02-11 13:08:37.925/159.229 Oracle Coherence GE 3.6.1.1 <Error> (thread=Proxy:ExtendTcpProxyService:TcpAcceptor, member=1): Failed to publish EntryOperation{siteName=csfb.cs-group.com, clusterName=SPTestCluster, cacheName=source-cache, operation=Insert, publishableEntry=PublishableEntry{key=Binary(length=32, value=0x15A90F00004E07424F4F4B303038014E08494E535430393834024E0345535040), value=Binary(length=147, value=0x1281A30115AA0F0000A90F00004E07424F4F4B303038014E08494E535430393834024E03455350400248ADEEF99607060348858197BF22060448B4D8E9BE02060548A0D2CDC70E060648B0E9A2C4030607488DBCD6E50D060848B18FC1882006094E03303038402B155B014E0524737263244E1F637366622E63732D67726F75702E636F6D2D535054657374436C7573746572), originalValue=Binary(length=0, value=0x)}} to Cache passive-cache because of
    (Wrapped) java.io.StreamCorruptedException: invalid type: 169 Class:com.oracle.coherence.patterns.pushreplication.publishers.cache.AbstractCachePublisher
    2011-02-11 13:08:37.925/159.229 Oracle Coherence GE 3.6.1.1 <D5> (thread=Proxy:ExtendTcpProxyService:TcpAcceptor, member=1): An exception occurred while processing a InvocationRequest for Service=Proxy:ExtendTcpProxyService:TcpAcceptor: (Wrapped: Failed to publish a batch with the publisher [Active Publisher] on cache [source-cache]) java.lang.IllegalStateException: Attempted to publish to cache passive-cache
         at com.tangosol.util.Base.ensureRuntimeException(Base.java:293)
         at com.oracle.coherence.patterns.pushreplication.publishers.RemoteClusterPublisher$RemotePublishingAgent.run(RemoteClusterPublisher.java:348)
         at com.tangosol.coherence.component.net.extend.proxy.serviceProxy.InvocationServiceProxy.query(InvocationServiceProxy.CDB:6)
         at com.tangosol.coherence.component.net.extend.messageFactory.InvocationServiceFactory$InvocationRequest.onRun(InvocationServiceFactory.CDB:12)
         at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
         at com.tangosol.coherence.component.net.extend.proxy.serviceProxy.InvocationServiceProxy.onMessage(InvocationServiceProxy.CDB:9)
         at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:39)
         at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer.onNotify(Peer.CDB:103)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:662)
    Caused by: java.lang.IllegalStateException: Attempted to publish to cache passive-cache
         at com.oracle.coherence.patterns.pushreplication.publishers.cache.AbstractCachePublisher.publishBatch(AbstractCachePublisher.java:163)
         at com.oracle.coherence.patterns.pushreplication.publishers.RemoteClusterPublisher$RemotePublishingAgent.run(RemoteClusterPublisher.java:343)
         ... 9 more
    Caused by: (Wrapped) java.io.StreamCorruptedException: invalid type: 169
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:265)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService$ConverterKeyToBinary.convert(PartitionedService.CDB:16)
         at com.tangosol.util.ConverterCollections$ConverterInvocableMap.invoke(ConverterCollections.java:2156)
         at com.tangosol.util.ConverterCollections$ConverterNamedCache.invoke(ConverterCollections.java:2622)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ViewMap.invoke(PartitionedCache.CDB:11)
         at com.tangosol.coherence.component.util.SafeNamedCache.invoke(SafeNamedCache.CDB:1)
         at com.oracle.coherence.patterns.pushreplication.publishers.cache.AbstractCachePublisher.publishBatch(AbstractCachePublisher.java:142)
         ... 10 more
    Caused by: java.io.StreamCorruptedException: invalid type: 169
         at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2265)
         at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2253)
         at com.tangosol.io.DefaultSerializer.deserialize(DefaultSerializer.java:74)
         at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2703)
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:261)
         ... 16 more
    2011-02-11 13:08:47.940/169.244 Oracle Coherence GE 3.6.1.1 <Error> (thread=Proxy:ExtendTcpProxyService:TcpAcceptor, member=1): Failed to publish EntryOperation{siteName=csfb.cs-group.com, clusterName=SPTestCluster, cacheName=source-cache, operation=Insert, publishableEntry=PublishableEntry{key=Binary(length=32, value=0x15A90F00004E07424F4F4B303038014E08494E535430393834024E0345535040), value=Binary(length=147, value=0x1281A30115AA0F0000A90F00004E07424F4F4B303038014E08494E535430393834024E03455350400248ADEEF99607060348858197BF22060448B4D8E9BE02060548A0D2CDC70E060648B0E9A2C4030607488DBCD6E50D060848B18FC1882006094E03303038402B155B014E0524737263244E1F637366622E63732D67726F75702E636F6D2D535054657374436C7573746572), originalValue=Binary(length=0, value=0x)}} to Cache passive-cache because of
    (Wrapped) java.io.StreamCorruptedException: invalid type: 169 Class:com.oracle.coherence.patterns.pushreplication.publishers.cache.AbstractCachePublisher
    2011-02-11 13:08:47.940/169.244 Oracle Coherence GE 3.6.1.1 <D5> (thread=Proxy:ExtendTcpProxyService:TcpAcceptor, member=1): An exception occurred while processing a InvocationRequest for Service=Proxy:ExtendTcpProxyService:TcpAcceptor: (Wrapped: Failed to publish a batch with the publisher [Active Publisher] on cache [source-cache]) java.lang.IllegalStateException: Attempted to publish to cache passive-cache
         at com.tangosol.util.Base.ensureRuntimeException(Base.java:293)
         at com.oracle.coherence.patterns.pushreplication.publishers.RemoteClusterPublisher$RemotePublishingAgent.run(RemoteClusterPublisher.java:348)
         at com.tangosol.coherence.component.net.extend.proxy.serviceProxy.InvocationServiceProxy.query(InvocationServiceProxy.CDB:6)
         at com.tangosol.coherence.component.net.extend.messageFactory.InvocationServiceFactory$InvocationRequest.onRun(InvocationServiceFactory.CDB:12)
         at com.tangosol.coherence.component.net.extend.message.Request.run(Request.CDB:4)
         at com.tangosol.coherence.component.net.extend.proxy.serviceProxy.InvocationServiceProxy.onMessage(InvocationServiceProxy.CDB:9)
         at com.tangosol.coherence.component.net.extend.Channel.execute(Channel.CDB:39)
         at com.tangosol.coherence.component.net.extend.Channel.receive(Channel.CDB:26)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Peer.onNotify(Peer.CDB:103)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:662)
    Caused by: java.lang.IllegalStateException: Attempted to publish to cache passive-cache
         at com.oracle.coherence.patterns.pushreplication.publishers.cache.AbstractCachePublisher.publishBatch(AbstractCachePublisher.java:163)
         at com.oracle.coherence.patterns.pushreplication.publishers.RemoteClusterPublisher$RemotePublishingAgent.run(RemoteClusterPublisher.java:343)
         ... 9 more
    Caused by: (Wrapped) java.io.StreamCorruptedException: invalid type: 169
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:265)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService$ConverterKeyToBinary.convert(PartitionedService.CDB:16)
         at com.tangosol.util.ConverterCollections$ConverterInvocableMap.invoke(ConverterCollections.java:2156)
         at com.tangosol.util.ConverterCollections$ConverterNamedCache.invoke(ConverterCollections.java:2622)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ViewMap.invoke(PartitionedCache.CDB:11)
         at com.tangosol.coherence.component.util.SafeNamedCache.invoke(SafeNamedCache.CDB:1)
         at com.oracle.coherence.patterns.pushreplication.publishers.cache.AbstractCachePublisher.publishBatch(AbstractCachePublisher.java:142)
         ... 10 more
    Caused by: java.io.StreamCorruptedException: invalid type: 169
         at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2265)
         at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2253)
         at com.tangosol.io.DefaultSerializer.deserialize(DefaultSerializer.java:74)
         at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2703)
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:261)
         ... 16 more
    It seems to be loading my POF configuration file - which also includes the standard Coherence ones as well as those required for PR - just fine, as you can see at the top of the trace.
    Any ideas why POF format for my objects is giving this error? (NB: I've tested the POF stuff outside of PR and it all works fine.)
    EDIT: I've tried switching the publisher to the "file" publisher in PR, and that works fine. I see my POF-format cached data extracted and published to the directory I specify. So the "publish" part works when I use a file publisher.
    Cheers,
    Steve

    Hi Neville,
    I don't pass any POF config parameters on the command line. My POF file is called "pof-config.xml", so it seems to be picked up by default. The trace I showed in my post shows the file being picked up.
    My POF config file content is as follows:
    <pof-config>
        <user-type-list>
            <!-- Standard Coherence POF types -->
            <include>coherence-pof-config.xml</include>
            <!-- Coherence Push Replication required POF types -->
            <include>coherence-common-pof-config.xml</include>
            <include>coherence-messagingpattern-pof-config.xml</include>
            <include>coherence-pushreplicationpattern-pof-config.xml</include>
            <!-- User POF types (must be above 1000) -->
            <user-type>
                <type-id>1001</type-id>
                <class-name>com.csg.gpc.domain.model.position.trading.TradingPositionKey</class-name>
                <serializer>
                    <class-name>com.csg.gpc.coherence.pof.position.trading.TradingPositionKeySerializer</class-name>
                </serializer>
            </user-type>
            <user-type>
                <type-id>1002</type-id>
                <class-name>com.csg.gpc.domain.model.position.trading.TradingPosition</class-name>
                <serializer>
                    <class-name>com.csg.gpc.coherence.pof.position.trading.TradingPositionSerializer</class-name>
                </serializer>
            </user-type>
            <user-type>
                <type-id>1003</type-id>
                <class-name>com.csg.gpc.domain.model.position.simple.SimplePosition</class-name>
                <serializer>
                    <class-name>com.csg.gpc.coherence.pof.position.simple.SimplePositionSerializer</class-name>
                </serializer>
            </user-type>
            <user-type>
                <type-id>1004</type-id>
                <class-name>com.csg.gpc.coherence.processor.TradingPositionUpdateProcessor</class-name>
            </user-type>
        </user-type-list>
    </pof-config>
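
    For reference, each <serializer> class named in this config implements Coherence's com.tangosol.io.pof.PofSerializer interface. A minimal sketch for a hypothetical key class with a single String field (the real TradingPositionKey fields aren't shown in this thread):

    import java.io.IOException;

    import com.tangosol.io.pof.PofReader;
    import com.tangosol.io.pof.PofSerializer;
    import com.tangosol.io.pof.PofWriter;

    // Hypothetical key class; the real com.csg.gpc...TradingPositionKey will differ.
    class TradingPositionKey {
        private final String bookId;
        TradingPositionKey(String bookId) { this.bookId = bookId; }
        String getBookId() { return bookId; }
    }

    public class TradingPositionKeySerializer implements PofSerializer {
        public void serialize(PofWriter out, Object o) throws IOException {
            TradingPositionKey key = (TradingPositionKey) o;
            out.writeString(0, key.getBookId()); // POF property index 0
            out.writeRemainder(null);            // terminates the user type in the stream
        }

        public Object deserialize(PofReader in) throws IOException {
            String bookId = in.readString(0);
            in.readRemainder();
            return new TradingPositionKey(bookId);
        }
    }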
    EDIT: I'm running both clusters here from within Eclipse. Here are the POF bits from the startup of the receiving cluster:
    2011-02-11 15:05:22.607/2.328 Oracle Coherence GE 3.6.1.1 <D5> (thread=Invocation:Management, member=1): Service Management joined the cluster with senior service member 1
    2011-02-11 15:05:22.779/2.500 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded POF configuration from "file:/C:/wsgpc/GlobalPositionsCache/resource/coherence/pof-config.xml"
    2011-02-11 15:05:22.779/2.500 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded included POF configuration from "jar:file:/C:/coherence3.6/coherence/lib/coherence.jar!/coherence-pof-config.xml"
    2011-02-11 15:05:22.779/2.500 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded included POF configuration from "jar:file:/C:/coherence3.6-pushreplication/coherence-3.6-common-1.7.3.20019.jar!/coherence-common-pof-config.xml"
    2011-02-11 15:05:22.779/2.500 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded included POF configuration from "jar:file:/C:/coherence3.6-pushreplication/coherence-3.6-messagingpattern-2.7.4.21016.jar!/coherence-messagingpattern-pof-config.xml"
    2011-02-11 15:05:22.779/2.500 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded included POF configuration from "jar:file:/C:/coherence3.6-pushreplication/coherence-3.6-pushreplicationpattern-3.0.3.20019.jar!/coherence-pushreplicationpattern-pof-config.xml"
    And here are the startup POF bits from the sending cluster:
    2011-02-11 15:07:09.744/2.343 Oracle Coherence GE 3.6.1.1 <D5> (thread=Invocation:Management, member=1): Service Management joined the cluster with senior service member 1
    2011-02-11 15:07:09.916/2.515 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded POF configuration from "file:/C:/wsgpc/GlobalPositionsCache/resource/coherence/pof-config.xml"
    2011-02-11 15:07:09.916/2.515 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded included POF configuration from "jar:file:/C:/coherence3.6/coherence/lib/coherence.jar!/coherence-pof-config.xml"
    2011-02-11 15:07:09.916/2.515 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded included POF configuration from "jar:file:/C:/coherence3.6-pushreplication/coherence-3.6-common-1.7.3.20019.jar!/coherence-common-pof-config.xml"
    2011-02-11 15:07:09.916/2.515 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded included POF configuration from "jar:file:/C:/coherence3.6-pushreplication/coherence-3.6-messagingpattern-2.7.4.21016.jar!/coherence-messagingpattern-pof-config.xml"
    2011-02-11 15:07:09.916/2.515 Oracle Coherence GE 3.6.1.1 <Info> (thread=DistributedCache:DistributedCacheForSequenceGenerators, member=1): Loaded included POF configuration from "jar:file:/C:/coherence3.6-pushreplication/coherence-3.6-pushreplicationpattern-3.0.3.20019.jar!/coherence-pushreplicationpattern-pof-config.xml"
    They both seem to be reading my pof-config.xml file.
    I have the following in my sending cluster cache config:
    <sync:provider pof-enabled="true">
        <sync:coherence-provider />
    </sync:provider>
    And this in the receiving cache config:
    <introduce:config
    file="coherence-pushreplicationpattern-pof-cache-config.xml" />
    Cheers,
    Steve
    Edited by: stevephe on 11-Feb-2011 07:05

  • DLL Call works fine with 1 cluster input but bombs with 2

    I have some problems getting my DLL to work using 2 cluster inputs.
    I created a simple DLL (MS C++ Express 2010) with 1 cluster and 1 struct input, and everything works.
    If I add a second cluster/struct, LabVIEW bombs and disappears, even if it's the same structure definition and initialization inputs.
    I'm making sure I have the struct boundaries set to 1 byte.
    Any ideas?

    Not using #pragma pack, just setting the option for structure boundary properties in VC.
    It's been on maximum debugging, but it's still crashing.
    The attached VI is nothing fancy, just trying to get 2 inputs to work.
    Hopefully it's something simple I'm not doing correctly. Thanks.
    Attachments:
    dll_definition_new.cpp ‏1 KB
    Nozzle_IO_30b.vi ‏8 KB

  • Error communicating with the Cluster Manage Web Service

    Hi,
    I connected an Oracle database to Endeca Studio, but while uploading the Departments dataset I am getting the error
    "Error communicating with the Cluster Manage Web Service". Can anyone explain what this error is about?
    Thanks in advance.

    I actually ran into this myself. As Pat notes, it's hard to say without seeing log files, but I think that perhaps the default domain profile that the Endeca Server uses for domains created via provisioning services has not been created. First, see what domain profiles exist: navigate to wherever endeca-cmd lives (e.g., user_projects/domains/endeca_server_domain/EndecaServer/bin), and use the list-dd-profiles command:
    -bash-3.2$ ./endeca-cmd.sh list-dd-profiles
    The profiles that exist will be returned in your terminal window:
    prov_dd_profile
    default
    If you only see 'default', then you will need to create 'prov_dd_profile'
    ./endeca-cmd.sh put-dd-profile prov_dd_profile
    Then try uploading your file again in Studio.
    Cheers,
    Andrew
