Query regarding Cluster nodes in CC

Hi Experts,
We have a query regarding the cluster nodes shown in communication channel (CC) monitoring.
Can two nodes of the same channel poll at the same time?
Kindly suggest what should be done to make a specific cluster node of a CC poll at a particular time.
Thanks
Suganya.

Hi,
There is an answered thread on this:
Processing in Multiple Cluster Nodes
Regards,
Manjusha

Similar Messages

  • Query regarding the Node manager configuration(WLS and OAM Managed server)

    1) In nodemanager.properties I have added ListenAddress: myMachineName and ListenPort: 5556.
    My setup: one physical Linux machine (myMachineName) runs the WLS admin server, the managed server (OAM 11g) and the node manager. No clustered environment.
    2) nodemanager.log shows the following exception when I start oam_server1 using EM (Enterprise Manager 11g):
    Mar 23 2012 1:39:55 AM> <SEVERE> <Fatal error in node manager server>
    java.net.BindException: Address already in use
    at java.net.PlainSocketImpl.socketBind(Native Method)
    at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:336)
    at java.net.ServerSocket.bind(ServerSocket.java:336)
    at javax.net.ssl.impl.SSLServerSocketImpl.bind(Unknown Source)
    at java.net.ServerSocket.<init>(ServerSocket.java:202)
    at javax.net.ssl.SSLServerSocket.<init>(SSLServerSocket.java:125)
    at javax.net.ssl.impl.SSLServerSocketImpl.<init>(Unknown Source)
    at javax.net.ssl.impl.SSLServerSocketFactoryImpl.createServerSocket(Unknown Source)
    The default address on which the node manager listens for requests is localhost:5556. I have changed it to point to my machine. Should the port be the WLS admin server port or the managed server port?
    3) I have started the NodeManager using the startNodeManager.sh script.
    4) The admin server port is 7001 and the oam managed server port is 14100.
    Any input on what might be wrong in the setup would be helpful. Thanks!

    By using netstat -anp | grep 5556 you can check which process on your machine is using port 5556.
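    If you want to confirm the conflict programmatically, a minimal Java sketch (hypothetical, not part of Node Manager) that tries to bind the same port will raise the same java.net.BindException whenever something is already listening on it:

    import java.net.BindException;
    import java.net.ServerSocket;

    // Minimal sketch: try to bind the Node Manager listen port to see whether it is free.
    public class PortCheck {
        public static void main(String[] args) throws Exception {
            int port = 5556; // ListenPort from nodemanager.properties
            try (ServerSocket socket = new ServerSocket(port)) {
                System.out.println("Port " + port + " is free");
            } catch (BindException e) {
                System.out.println("Port " + port + " is already in use: " + e.getMessage());
            }
        }
    }

    Note that the Node Manager listen port is its own port; it does not have to match the admin server (7001) or the managed server (14100) port. If another process (often a Node Manager instance that is already running) holds 5556, stop it or choose a different ListenPort.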

  • Query regarding Tree node and its Content

    Hi all,
    I have a doubt regarding tree nodes. I am creating a dynamic tree, so when I add a tree node I add a text box in the facet of that node, like:
    TextField txtNodeName = new TextField();
    treeNode.getFacets().put("content", txtNodeName);
    Is there any way to get the value in the text field? I can set the value but am not able to get the entered value. Is there any solution for this?
    Kindly help
    Thanks in advance
    Sree

    try {
        // The facets map holds UIComponents, so cast back to TextField
        TextField comp = (TextField) treeNode.getFacets().get("content");
        Object value = comp.getText();
        String text = (value != null) ? value.toString() : null;
    } catch (Exception e) {
        // facet not found or not a TextField
    }
    Hope this is what you were looking for.
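    Putting the question and the reply together, a minimal sketch (assuming the Sun Java Studio Creator / Woodstock TextField and TreeNode components; the import package, class and id names below are assumptions, not from the original posts):

    import com.sun.rave.web.ui.component.TextField;  // component package assumed from Creator 2
    import com.sun.rave.web.ui.component.TreeNode;

    public class TreeNodeTextHelper {

        // When building the dynamic tree, put a TextField into the node's "content" facet.
        public static void addNameField(TreeNode treeNode) {
            TextField txtNodeName = new TextField();
            txtNodeName.setId("txtNodeName"); // hypothetical id
            treeNode.getFacets().put("content", txtNodeName);
        }

        // Later (for example in an action handler, after the form has been submitted),
        // read the entered value back from the same facet.
        public static String readNameField(TreeNode treeNode) {
            TextField field = (TextField) treeNode.getFacets().get("content");
            Object value = (field != null) ? field.getText() : null;
            return (value != null) ? value.toString() : null;
        }
    }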

  • Regarding Pulling out network cable from cluster node

    I have two cluster nodes installed with my application.
    I pulled the network cable out of the primary node where my application is running, so the primary is not reachable from a remote box (cannot ping the primary).
    I found the following error message:
    SUNW,hme0 : No response from Ethernet network : Link down -- cable problem?
    The device group and resource group stay online on the primary, and Sun Cluster does not fail over to the secondary node. Does Sun Cluster support this scenario?
    Or do I need any additional configuration? Can I get clarification on this?

    Hi Sudheer,
    if you have two interfaces in your IPMP group, I am missing the test address.
    http://docs.sun.com/app/docs/doc/819-3000/emybr?l=en&q=ipmp&a=view
    states a hostname.hme0 as:
    192.168.85.19 netmask + broadcast + group testgroup1 up \
         addif 192.168.85.21 deprecated -failover netmask + broadcast + up
    and for hostname.hme1
    192.168.85.20 netmask + broadcast + group testgroup1 up \
         addif 192.168.85.22 deprecated -failover netmask + broadcast + up
    You can safely replace the addresses with names if they are in /etc/hosts.
    In this case the -failover flag on the physical address in your example is wrong.
    If you only have one adapter, one line in /etc/hostname.hme0 like the one you stated in your example is correct.
    This is from one of my clusters:
    deulwork20 group sc_ipmp0 -failover
    sc_ipmp0 is the IPMP group Sun Cluster creates for you if you do not specify anything else, so for a single adapter one line like "hadev1 group sc_ipmp0 -failover" is correct.
    Detlef

  • Query Sun Cluster 3.2 With SNMP?

    Hello,
    Is there a way to glean cluster information via SNMP without the use of Sun Management Center (SMC)? My understanding is that it can be configured to send traps to an SNMP monitoring host, but can the cluster nodes be queried in any way using something such as snmpget?
    Thank you, and any and all help is appreciated.
    Regards,
    Peter

    Please check this blog:
    http://blogs.oracle.com/SC/entry/sun_cluster_3_2_snmp

  • File Being processed in two cluster nodes

    Hi ,
    We have two cluster nodes, and when my adapter picks up the file, the file gets processed on both cluster nodes.
    I believe the file should get processed on either one of the cluster nodes, but not on both.
    Has anyone faced this kind of situation in any of your projects where you have multiple cluster nodes?
    Thanks,
    Chandra.

    Hi Chandra,
    Did you get a chance to see this post? It may help:
    Processing in Multiple Cluster Nodes
    Regards,
    Sandeep

  • Processing in  Multiple Cluster Nodes

    Hi All,
    In our PI system we have 2 Java nodes due to a requirement. When the communication channel runs and we check the message log, one cluster node shows a successful message while the other cluster node shows an error message that says "File not found".
    The file processing completes successfully on one cluster node, but I want to know whether there is any way to suppress the processing of the same file by the same channel on the other node.
    Is there some setting in administration or the Integration Builder where we can get this done?
    Thanks,
    Rashmi.

    Hello!
    As per SAP Note 801926, please set the clusterSyncMode parameter on the Advanced tab of the communication channel to the value LOCK.
    Also check entries 4 and 48 of FAQ Note 821267:
    4. FTP Sender File Processing in Cluster Environment
    48. File System (NFS) File Sender Processing in Cluster Environment
    Best regards,
    Lucas
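    For intuition only (this is not the adapter implementation described in the notes): when several cluster nodes poll the same directory without a shared lock, each node can list and pick up the same file, which is exactly the duplicate processing seen here. The LOCK behaviour amounts to letting exactly one node claim a file before processing it. A hypothetical sketch of such a claim step using an atomic rename:

    import java.io.File;

    public class PollerNodeSketch {

        // Only one node's rename succeeds (renameTo is atomic on most local file
        // systems); the other node gets false and skips the file.
        static boolean tryClaim(File file) {
            File claimed = new File(file.getParent(), file.getName() + ".processing");
            return file.renameTo(claimed);
        }

        public static void main(String[] args) {
            File incoming = new File("/interface/in/order_0001.xml"); // hypothetical path
            if (tryClaim(incoming)) {
                System.out.println("This node processes the file");
            } else {
                System.out.println("Another node already claimed it; skipping");
            }
        }
    }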

  • Query regarding the data type for fetcing records from multiple ODS tables

    hey guys,
    I have a query regarding the data type for fetching records from multiple ODS tables.
    If I have 2 tables with the same column name, then in the data type, under the parent row node, I can't add 2 nodes with the same name.
    Can anyone help with some suggestions?

    Hi Mudit,
    One option would be to go as mentioned by Padamja, prefixing the table name to the column name; another would be to use the AS keyword in your SQL statement.
    AS is used to rename a column when data is being selected from your DB.
    So the query Select ename AS empname FROM emptable will return the data with the column name empname.
    Regards,
    Bhavesh
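    As a sketch of the aliasing idea (the table and column names below are hypothetical, not your actual ODS tables): selecting the identically named column from two tables under two different aliases gives the result set, and hence the data type nodes built from it, unique names.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class AliasExample {
        public static void main(String[] args) throws Exception {
            // Hypothetical JDBC URL and credentials.
            try (Connection con = DriverManager.getConnection("jdbc:yourdb://host/db", "user", "pwd");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                         "SELECT a.ename AS emp_name_a, b.ename AS emp_name_b "
                       + "FROM emptable_a a JOIN emptable_b b ON a.empno = b.empno")) {
                while (rs.next()) {
                    System.out.println(rs.getString("emp_name_a") + " / " + rs.getString("emp_name_b"));
                }
            }
        }
    }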

  • Cluster node (1 out of 6) is in error for a file channel - SAP XI

    Hi,
    In a sender file channel for one of our interfaces, one Java node out of the 6 configured ones has failed. The error message says "Login Incorrect" while the other cluster nodes are polling properly. I have tried updating the password in the channel's configuration in the Integration Directory and activating the channel, but this doesn't help. Please advise!
    Thanks in advance!
    Regards,
    Kumaran

    Hello Kumaran,
    The status of the file adapter is not reflected properly, but this should not have an impact. No message may have arrived at that node yet; once it receives a message for processing, the status will change.
    Anil

  • Normal query on cluster/pooled tables...

    Hi experts,
    Can I write a normal SQL query on cluster or pooled tables?
    Will it create any sort of problem?
    Regards,
    Viswanathan .S

    Hi Viswanathan,
    It is not recommended to access them directly because of their size; think of tables like BSEG and CDPOS.
    A report may run smoothly on the development system but will slow down as the data grows on the production server.
    There are alternative tables available as substitutes. Which table(s) are you trying to access, anyway?
    Regards,
    Naveen

  • Hyper-V Failover Cluster Node Corruption

    Dear All,
    Some of my nodes are showing abnormal behavior: they are restarting every now and then. I have updated the cluster nodes, but all the updates were OS specific; there was nothing specific with respect to a hardware update.
    I have analyzed the crash dumps and found that the following is causing the crashes:
    PAGE_FAULT_IN_NONPAGED_AREA
    Does anyone have any idea about this?
    Thanks in advance.

    Hi,
    What is the OS of the cluster nodes?
    Did you try removing the protection client for troubleshooting?
    If it is a 2008 R2 cluster, please refer to this thread:
    http://social.technet.microsoft.com/Forums/en-US/32ab6a85-6002-4c3c-97ea-27cb1091e9b3/windows-cluster-server-is-getting-restarted?forum=winservergen
    Hope it helps.
    Best Regards,
    Elton Ji

  • JMS Channel Cluster Nodes-INACTIVE

    Hello All,
    We have a sender JMS channel which is in green state, but its cluster nodes (10 of them) are in WAITING state with "Channel_Inactive", even though the nodes themselves are in GREEN state.
    I have checked the cache in the Integration Directory, where I can see RED entries, and I have tried 'Repeat Cache Instance', but in vain. Is it a fair idea to run the function module LCR_CLEAR_CACHE? Does it have any impact?
    Because of this, one message is lying in the MS system (JMS stream) in an uncommitted state.
    This is all happening in a PROD system!
    Please find the screenshots attached.
    Regards
    KarthiSP

    Hi,
    Check the central Adapter Engine cache status in cache monitoring in the RWB, whether it is green or red. If it is red, check with the Basis team.
    Thanks,
    Naveen

  • How to use SVM metadevices with cluster - sync metadb between cluster nodes

    Hi guys,
    I feel like I've searched the whole internet regarding this matter but found nothing, so hopefully someone here can help me.
    Situation:
    I have a running server with Sol 10 U2. SAN storage is attached to the server, but without any virtualization in the SAN network.
    The virtualization is done by Solaris Volume Manager.
    The customer has decided to extend the environment with a second server to build a cluster. According to our standards we have to use Symantec Veritas Cluster, but I think that for my question it doesn't matter which cluster software is used.
    The SVM configuration is nothing special. The internal disks are configured with mirroring, the SAN LUNs are partitioned via format, and each slice is a metadevice.
    d100 p 4.0GB d6
    d6 m 44GB d20 d21
    d20 s 44GB c1t0d0s6
    d21 s 44GB c1t1d0s6
    d4 m 4.0GB d16 d17
    d16 s 4.0GB c1t0d0s4
    d17 s 4.0GB c1t1d0s4
    d3 m 4.0GB d14 d15
    d14 s 4.0GB c1t0d0s3
    d15 s 4.0GB c1t1d0s3
    d2 m 32GB d12 d13
    d12 s 32GB c1t0d0s1
    d13 s 32GB c1t1d0s1
    d1 m 12GB d10 d11
    d10 s 12GB c1t0d0s0
    d11 s 12GB c1t1d0s0
    d5 m 6.0GB d18 d19
    d18 s 6.0GB c1t0d0s5
    d19 s 6.0GB c1t1d0s5
    d1034 s 21GB /dev/dsk/c4t600508B4001064300001C00004930000d0s5
    d1033 s 6.0GB /dev/dsk/c4t600508B4001064300001C00004930000d0s4
    d1032 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004930000d0s3
    d1031 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004930000d0s1
    d1030 s 5.0GB /dev/dsk/c4t600508B4001064300001C00004930000d0s0
    d1024 s 31GB /dev/dsk/c4t600508B4001064300001C00004870000d0s5
    d1023 s 512MB /dev/dsk/c4t600508B4001064300001C00004870000d0s4
    d1022 s 2.0GB /dev/dsk/c4t600508B4001064300001C00004870000d0s3
    d1021 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004870000d0s1
    d1020 s 5.0GB /dev/dsk/c4t600508B4001064300001C00004870000d0s0
    d1014 s 8.0GB /dev/dsk/c4t600508B4001064300001C00004750000d0s5
    d1013 s 1.7GB /dev/dsk/c4t600508B4001064300001C00004750000d0s4
    d1012 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004750000d0s3
    d1011 s 256MB /dev/dsk/c4t600508B4001064300001C00004750000d0s1
    d1010 s 4.0GB /dev/dsk/c4t600508B4001064300001C00004750000d0s0
    d1004 s 46GB /dev/dsk/c4t600508B4001064300001C00004690000d0s5
    d1003 s 6.0GB /dev/dsk/c4t600508B4001064300001C00004690000d0s4
    d1002 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004690000d0s3
    d1001 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004690000d0s1
    d1000 s 5.0GB /dev/dsk/c4t600508B4001064300001C00004690000d0s0
    The problem is the following:
    The SVM configuration on the second server (cluster node 2) must be the same for the devices d1000-d1034.
    Generally speaking, the metadb needs to be in sync.
    - How can I manage this?
    - Do I have to use disk sets?
    - Will a copy of md.cf/md.tab and an initialization with metainit do it?
    It would be great to have several options for how one can manage this.
    Thanks and regards,
    Markus

    Dear Tim,
    Thank you for your answer.
    I can confirm that Veritas Cluster doesn't support SVM by default. Of course they want to sell their own volume manager ;o).
    But that wouldn't be the big problem. With SVM I expect the same behaviour as with VxVM if I do or have to use disk sets, and for that I can write a custom agent.
    My problem is not the cluster implementation. It is more a fundamental problem of syncing the SVM configuration for a set of metadevices between two hosts. I'm far from implementing the devices into the cluster config as long as I don't know how to let both nodes know about the devices.
    Currently only the host that initialized the volumes knows about them. The second node doesn't know anything about the devices d1000-d1034.
    What I need to know at this stage is:
    - How can I "register" the already initialized metadevices d1000-d1034 on the second cluster node?
    - Do I have to use disk sets?
    - Can I just copy and paste the appropriate lines of md.cf/md.tab?
    - Generally speaking: how can one configure SVM so that different hosts see the same metadevices?
    Hope that someone can help me!
    Thanks,
    Markus

  • Query regarding Treenode expansion

    hi all,
    I have a query regarding the expansion of a tree node. When I click on a tree node it does not expand; only when I click on the expand image does it expand. I tried the tree's 'Expand on select' property, but it is not working. Can anyone help me, please?
    Thanks in advance
    Sree

    Hi,
    Have you tried the tutorials and tech articles in this area?
    See:
    http://developers.sun.com/prodtech/javatools/jscreator/reference/techart/2/tree_component.html
    http://developers.sun.com/prodtech/javatools/jscreator/learning/tutorials/2/sitemaptree.html
    These will help you learn how to use tree nodes.
    Thanks
    K

  • Configuring WAR-Scoped Cluster Nodes Error

    I did a test following the guide: http://download.oracle.com/docs/cd/E15357_01/coh.360/e15829/cweb_wls.htm#BABCDJGC
    2.2.5.3 Configuring WAR-Scoped Cluster Nodes
    I deployed coherence.jar and active-cache.jar, and the following error occurs in the WebLogic console:
    "Issues were encountered while parsing this deployment to determine module type. Assuming this is a library deployment."
    I can still deploy them successfully and ignore the error.
    But when I then deploy the test WAR application, the deployment fails with the following errors:
    Caused By: weblogic.management.DeploymentException: Error: Unresolved Webapp Library references for "ServletContext@1391083548[app:CoherenceWeb module:CoherenceWeb.war path:/CoherenceWeb spec-version:2.5]", defined in weblogic.xml [Extension-Name: coherence, exact-match: false], [Extension-Name: active-cache, exact-match: false]
    My question is: how can I fix this?

    Unfortunately, a WAR cannot reference a jar deployed as a shared library in WebLogic. This is a limitation that will hopefully be fixed in future versions of WebLogic.
    For now you will have to either package your application as an EAR, or bundle coherence.jar inside your WAR under WEB-INF/lib/.
    Hope that helps.
    Regards,
    Torkel
