Getting Cluster Node through API Call

Dear All,
I am facing a problem in NetWeaver 7.0. I have to find out, through an API, which cluster node the dispatcher is forwarding requests to. Is there any way of knowing this through API calls?
We are using Java as the programming language.
Thanks in advance.

Ok, here is the update:
We have looked into this issue and we can reproduce it.
I will log a bug on my side but there is a workaround.
Please ask him to use: POST https://api.share.acrobat.com/webservices/api/v1/dc/[nodeid/]?method=move&newname=foo.txt
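Absent a documented NetWeaver 7.0 API for this, one low-tech way to see which cluster node served a request is to log the identity of the serving JVM's host from inside the application. A minimal sketch (the class and method names are illustrative; only standard JDK APIs are used):

import java.net.InetAddress;
import java.net.UnknownHostException;

public class NodeIdentifier {
    // Call this from a servlet or session bean and write the result to the
    // log or the HTTP response; repeated requests then show which node the
    // dispatcher forwarded each request to.
    public static String describeCurrentNode() {
        try {
            return InetAddress.getLocalHost().getHostName();
        } catch (UnknownHostException e) {
            return "unknown host";
        }
    }
}

This only distinguishes server nodes that run on different hosts; if several nodes share a host, something process-specific (for example, a system property set per server process) would need to be logged as well.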

Similar Messages

  • Add approval task through API call

    Hello, I am attempting to solve the following problem.
    I have a UDF defined on the Resource Objects form (OBJ table); this field contains a comma-delimited list of OIM groups of size n (based on the resource object).
    I would like to create an approval task for each group in this list. In addition, I would like the name of each task to show up as the group name, so that when a user logs into the UI and looks at the approval details, they see the approval task as the group name.
    I have been able to add a task using the tcProvisioningOperationsIntf.addProcessTaskInstance API; however, this API does not allow me to modify 1) the group to assign the request to and 2) the name of the task.
    Thanks

    Hey Kevin, thanks for responding.
    This query will allow me to get the process task key, so it can be added to the approval task via tcProvisioningOperationsIntf.addProcessTaskInstance. However, the issue is that no task currently exists, so before I can add an instance of the task I have to actually create a new task, and I was unsure how to accomplish this through API calls.
    The goal here is to allow a list of groups to be configurable at the resource level without having to modify the approval process.
    Thanks
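    For reference, a rough sketch of the loop discussed above, using the 9.x API named in the question (the taskIntf handle, the keys, and the UDF value are illustrative assumptions; as noted, addProcessTaskInstance only inserts an instance of a task that already exists in the process definition and does not let you set the task's name or assignee per group):

    // Sketch only: assumes taskIntf is an initialized tcProvisioningOperationsIntf
    // handle, approvalTaskKey is the key of a task already defined in the approval
    // process, and procInstKey is the process instance key.
    String groupUdf = "GroupA,GroupB,GroupC"; // value read from the OBJ form UDF
    String[] groups = groupUdf.split(",");
    long procInstKey = 12345L;     // illustrative
    long approvalTaskKey = 67890L; // illustrative
    for (int i = 0; i < groups.length; i++) {
        // One task instance per group in the list; renaming each instance to
        // the group name is exactly the part the API does not support.
        taskIntf.addProcessTaskInstance(approvalTaskKey, procInstKey);
    }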

  • Disabling user through API call (process task), followed by an Enable User...

    Hi,
    I am running OIM 9.1 BP11. I implemented a process task to disable the user based on a URS form field change.
    I can confirm from the log file and the resource that the Disable User (Xellerate User) happened, but the user got enabled back right away. The log file showed that a scheduled task named "Enable User After Start Date" ran and enabled the user, so I disabled that scheduled task.
    Then I repeated the test. I observed the same behavior of the user being disabled and enabled again, but this time OIM called an adapter. This is what I observed in the log file:
    20988 INFO,20 Oct 2010 12:21:56,519,[XELLERATE.DATABASE],DB read: select evt.evt_key, evt.evt_name, evt.evt_package, mil.mil_name from mil mil, evt evt where evt.evt_key = mil.evt_key and mil.mil_key=10
    20989 DEBUG,20 Oct 2010 12:21:56,519,[XELLERATE.DATABASE],select evt.evt_key, evt.evt_name, evt.evt_package, mil.mil_name from mil mil, evt evt where evt.evt_key = mil.evt_key and mil.mil_key=10
    20990 INFO,20 Oct 2010 12:21:56,519,[XELLERATE.PERFORMANCE],Query: DB: 0, LOAD: 0, TOTAL: 0
    20991 DEBUG,20 Oct 2010 12:21:56,519,[XELLERATE.SERVER],Class/Method: tcBusinessObj/getSqlOperationFromMembers entered.
    20992 DEBUG,20 Oct 2010 12:21:56,519,[XELLERATE.SERVER],Class/Method: tcBusinessObj/getSqlOperationFromMembers left.
    20993 DEBUG,20 Oct 2010 12:21:56,519,[XELLERATE.ADAPTERS],Class/Method: tcADPClassLoader/getClassLoader entered.
    20994 DEBUG,20 Oct 2010 12:21:56,519,[XELLERATE.ADAPTERS],Class/Method: tcADPClassLoader/getClassLoader left.
    20995 DEBUG,20 Oct 2010 12:21:56,520,[XELLERATE.ADAPTERS],Class/Method: tcADPClassLoader/findClass entered.
    20996 INFO,20 Oct 2010 12:21:56,530,[XELLERATE.ADAPTERS],Adapter: Enabling the User was initiated for the task: Enable User.
    20997 INFO,20 Oct 2010 12:21:56,531,[XELLERATE.JAVACLIENT],System Event Handler : Enabling the User
    I did exactly the same disable-user process at another client and it worked fine. I don't understand what causes OIM to call this system event handler to re-enable the user.
    Please help.
    Thanks
    Khanh

    Do you have any Entity Adapter, Event Handler, or Trigger which enables the user under some condition?
    Check your environment. If you have one, please remove it and try again.
    Does this user have any provisioned resource? If yes, try some other user which doesn't have a resource provisioned.
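    If it helps to see which handler fired, the EVT/MIL join that OIM itself runs (visible in the log above) can be executed directly against the OIM database; mil_key is taken from the log line, and evt_package then shows which event handler class (here "Enabling the User") is attached to the task:

    select evt.evt_key, evt.evt_name, evt.evt_package, mil.mil_name
    from mil mil, evt evt
    where evt.evt_key = mil.evt_key
      and mil.mil_key = 10;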

  • How to get the highlighted text range through Framemaker's API calls?

    Hi all,
    I'm new to the FrameMaker API (7.x) and am developing a plugin for FrameMaker 7.x. I want to get the highlighted objects from the active MIF document. I tried this code:
    F_TextRangeT tr;
    /* Get the current text selection. */
    tr = F_ApiGetTextRange(FV_SessionId, docId, FP_TextSelection);
    Here is how I tested it:
    1. Some text in the MIF doc was highlighted.
    2. I clicked the menu item "my_plugin"; a FrameMaker dialog popped up, and some test configuration was sent through the dialog.
    3. The test configuration was read by my program correctly.
    However, the debugger showed that tr was null! When I changed the first step of the test to:
    1. Placed the mouse cursor somewhere in the MIF doc.
    the plugin worked, after a fashion: the line where the mouse cursor was placed got selected, although I had not highlighted anything.
    My question is: how do I get a highlighted range through the FrameMaker API? I've been flipping through the FDK 7.0 Programmer's Reference but haven't found a solution yet. Any hint will be highly appreciated.
    Best Regards,
    Ellen N. Zhao

    Here is some information from the FDK Programmer's Guide, page 115:
    IMPORTANT: A valid text range can span multiple paragraphs, subcolumns, or text frames.
    It can't span multiple flows, footnotes, table cells, or text lines.
    It is possible for a document to have no text selection or insertion point at all. This can occur in the following circumstances:
    ● One or more graphic objects in the document are selected
    ● One or more entire table cells in the document are selected
    ● There is no selection of any type in the document
    So, I did select one or more entire table cells in the document in my first test; it's logical that the result was not what I expected.
    But selecting one or more entire table cells is a crucial requirement for my plugin. Is there any way to get things straight?
    Simply put, I want to enable users to select one or more table cells in the document, and I want to get the object handles of the first cell and the last cell through FrameMaker API calls. How?
    Many thanks in advance!
    Best Regards,
    Ellen N. Zhao
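    One possible approach, sketched from the FDK's table-selection properties (FP_SelectedTbl on the document, and FP_TopRowSelection, FP_BottomRowSelection, FP_LeftColNum, FP_RightColNum on the table; treat the exact property names as assumptions to verify against the FDK 7.0 reference):

    F_ObjHandleT tblId, rowId, firstCellId, lastCellId;
    IntT leftCol, rightCol, i;

    /* Table containing the current selection, if any. */
    tblId = F_ApiGetId(FV_SessionId, docId, FP_SelectedTbl);
    if (tblId) {
        leftCol  = F_ApiGetInt(docId, tblId, FP_LeftColNum);
        rightCol = F_ApiGetInt(docId, tblId, FP_RightColNum);
        /* Walk the top selected row to the left-most selected column. */
        rowId = F_ApiGetId(docId, tblId, FP_TopRowSelection);
        firstCellId = F_ApiGetId(docId, rowId, FP_FirstCellInRow);
        for (i = 0; i < leftCol; i++)
            firstCellId = F_ApiGetId(docId, firstCellId, FP_NextCellInRow);
        /* Walk the bottom selected row to the right-most selected column. */
        rowId = F_ApiGetId(docId, tblId, FP_BottomRowSelection);
        lastCellId = F_ApiGetId(docId, rowId, FP_FirstCellInRow);
        for (i = 0; i < rightCol; i++)
            lastCellId = F_ApiGetId(docId, lastCellId, FP_NextCellInRow);
    }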

  • Error while calling activityPrepare: needs to be invoked through an API call

    I am trying to call activityPrepare using PAPI WS from soapUI. I get the following error:
    An error occurred while processing task '0' for instance '/MyProcess#Default-1.0/4/0' in activity '/MyProcess#Default-1.0/Interactive[AgeFilter]'. The task is external and it needs to be invoked through an API call and not through the Classic WorkSpace.
    I have no problem when I invoke activityExecute(), but when I try activityPrepare(), whether from OSB, ADF, or soapUI, I get this error.
    I don't get what that means. Any idea why this error occurs?
    Thanks
    Jag

    The problem was that I did not make the activity 'External'. This can be done by:
    Right-click the activity -> Main Task -> select External Activity.
    Then you have to select two methods: one for the prepare activity, the other for the commit activity.
    For the prepare activity, select the method you normally select for the execute activity.
    For the commit activity, create a new method and just write the following line in it:
    action = action.OK;
    Problem solved.
    Jag

  • XI AF API call failed. Module exception: Java Mapping

    Hi Experts,
    I have faced a typical error in one of my interfaces. The scenario is SOAP to SOAP, and we perform a SOAP lookup before sending the data to the target web service. We are using one Java mapping, 'FileIns_lookupLoad.java --- com/fi/', and performing the SOAP lookup from within the code. In the lookup communication channel we use 'Axis' as the message protocol and pass a couple of values through module key parameters. We have the same ESR and ID objects in Development, Quality, and Production (as per version ID and history).
    Now the problem is that the lookup fails only in Development; Quality and Production work fine. I replaced the Quality CC URL with the Development WSDL URL, and that worked fine, which means the Development web service is good. We are getting the below error in Dev:
    <SAP:Stack>StreamTransformationException triggered by application mapping program com/fi/FileIns_lookupLoad; Look Up Failed</SAP:Stack>
    Trace:
    <Trace level="1"
    type="T"> Some Thing Wrong in LookUpError when calling an adapter by using the communication channel CC_IN_SOAP_GEInsuranceLoadLookup1 (Party: , Service: BusService_GE, Object ID: 4214805c52893ef9b0b3f0ef0902fe9e) XI AF API call failed. Module exception: 'while trying to invoke the method org.apache.axis.types.URI.toString() of an object returned from com.sap.xi.XI.Message._30.QualifiedName.getNamespace()'. Cause Exception: 'while trying to invoke the method org.apache.axis.types.URI.toString() of an object returned from com.sap.xi.XI.Message._30.QualifiedName.getNamespace()'. </Trace>
      <Trace level="1" type="T">*** END APPLICATION TRACE ***</Trace>
      <Trace level="1" type="T">Java mapping com/fi/FileIns_lookupLoad has thrown a StreamTransformationException. Thrown: com.sap.aii.mapping.api.StreamTransformationException: Look Up Failed at com.fi.FileIns_lookupLoad.execute(FileIns_lookupLoad.java:282) at com.fi.FileIns_lookupLoad.transform(FileIns_lookupLoad.java:74) at com.sap.aii.ib.server.mapping.execution.JavaMapping.executeStep(JavaMapping.java:92) at com.sap.aii.ib.server.mapping.execution.Mapping.execute(Mapping.java:60) at com.sap.aii.ib.server.mapping.execution.SequenceMapping.executeStep.................................................
    ................................................................................<Trace level="1" type="T">Application mapping program com/fi/FileIns_lookupLoad throws a stream transformation exception: Look Up Failed Thrown: com.sap.aii.ib.core.mapping.execution.ApplicationException: Application mapping program com/fi/FileIns_lookupLoad throws a stream transformation exception: Look Up Failed at com.sap.aii.ib.server.mapping.execution.JavaMapping.executeStep(JavaMapping.java:95) at com.sap.aii.ib.server.mapping.execution.Mapping.execute(Mapping.java:60) at com.sap.aii.ib.server.mapping.execution.SequenceMapping.executeStep(SequenceMapping.java:40) at com.sap.aii.ib.server.mapping.execution.Mapping.execute
    I have already checked with Basis, and according to them the JDK and Java versions are identical in Dev, QAS, and Prod. It seems something is wrong with the 'XI AF API' which we are calling from the channel. If we skip the lookup entirely and send the data directly, it works fine. The target CC also uses Axis.
    Sequence in CC: afreq -> xireq -> wssec2 -> xires -> afres
    For any more information please let me know.
    Thanks,
    Nabendu.

    Hi Anupam,
    The Java mapping code is the same in Dev, QAS, and Prod, and the versions of the JAR are the same.
    Please find the code below.
    package com.fi;
    import java.util.HashMap;
    import com.sap.aii.mapping.api.AbstractTrace;
    import com.sap.aii.mapping.api.AbstractTransformation;
    import com.sap.aii.mapping.api.StreamTransformation;
    import com.sap.aii.mapping.api.StreamTransformationConstants;
    import com.sap.aii.mapping.api.MappingTrace;
    import com.sap.aii.mapping.api.StreamTransformationException;
    import com.sap.aii.mapping.api.DynamicConfiguration;
    import com.sap.aii.mapping.api.DynamicConfigurationKey;
    import com.sap.aii.mapping.api.TransformationInput;
    import com.sap.aii.mapping.api.TransformationOutput;
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.util.*;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.parsers.ParserConfigurationException;
    import javax.xml.transform.Result;
    import javax.xml.transform.Source;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerConfigurationException;
    import javax.xml.transform.TransformerException;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.OutputKeys;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.*;
    import org.w3c.dom.views.AbstractView;
    import org.xml.sax.SAXException;
    import java.io.ByteArrayInputStream;
    import com.sap.aii.mapping.lookup.*;
    /*import com.sap.aii.utilxi.hmis.server.HmisEnvironment.Accessor;
    //import com.sap.aii.utilxi.lock.api.LockServiceException;
    //import com.pmintl.pppimes.RFCLookup.javamapping.*;
    //import java.io.FileInputStream;
    //import java.io.FileOutputStream;*/
    import java.text.DateFormat;
    import java.text.SimpleDateFormat;
    //import java.text.ParseException;
    import java.util.Date;
    import java.io.*;
    public class FileIns_lookupLoad extends AbstractTransformation {
        private Map param = null;
        private AbstractTrace trace = getTrace();
        String senderService = null;
        String inParamChannel = null;
        String inParamBusService = null;

        // 3. Each Java mapping using the 7.1 API must implement the method
        // transform(TransformationInput in, TransformationOutput out)
        // as opposed to the execute method of earlier versions.
        public void transform(TransformationInput arg0, TransformationOutput arg1) throws StreamTransformationException {
            // 4. An info message is added to the trace. An instance of the trace object
            // is obtained by calling the getTrace method of class AbstractTransformation.
            inParamChannel = arg0.getInputParameters().getString("COMM_CHANNEL");
            inParamBusService = arg0.getInputParameters().getString("BUS_SERVICE");
            getTrace().addInfo("Input Parameter: " + inParamChannel);
            getTrace().addInfo("Input Parameter: " + inParamBusService);
            // 5. The input payload is obtained via arg0.getInputPayload().getInputStream().
            this.execute(arg0.getInputPayload().getInputStream(),
                    arg1.getOutputPayload().getOutputStream());
        }

        public void execute(InputStream in, OutputStream out)
                throws StreamTransformationException {
            try {
                // Get the trace.
                trace = getTrace();
                getTrace().addInfo("Java Mapping Started");
                DocumentBuilderFactory ifactory = DocumentBuilderFactory.newInstance();
                DocumentBuilder ibuilder = ifactory.newDocumentBuilder();
                Document IDoc = ibuilder.parse(in);
                Document TDoc = ibuilder.newDocument();
                String mrnVal = "";
                trace.addInfo("Preparing Target Doc");
                Element message = TDoc.createElementNS("urn:Medtronic.com:CATS_Patient_Transactions", "n1:Mt_Insurance");
                Node tRoot = TDoc.appendChild(message);
                Element sRoot = (Element) IDoc.getFirstChild();
                NodeList nl = sRoot.getChildNodes();
                // Copy the Mrn element to the target and remember its value for the lookup request.
                NodeList lmrn = sRoot.getElementsByTagName("Mrn");
                if (lmrn != null) {
                    Node mrnNode = TDoc.importNode(lmrn.item(0), true);
                    Element e1 = (Element) lmrn.item(0);
                    mrnVal = e1.getTextContent();
                    tRoot.appendChild(mrnNode);
                }
                // Rebuild each Fsc element: FscNumber and Priority become attributes;
                // every other child with content becomes an FSCField under a Fields node.
                NodeList lfsc = sRoot.getElementsByTagName("Fsc");
                if (lfsc != null) {
                    for (int i = 0; i < lfsc.getLength(); i++) {
                        Element n1 = TDoc.createElement(lfsc.item(i).getNodeName());
                        tRoot.appendChild(n1);
                        Node fieldch = TDoc.createElement("Fields");
                        n1.appendChild(fieldch);
                        NodeList chl = lfsc.item(i).getChildNodes();
                        trace.addInfo("len11" + chl.getLength());
                        for (int j = 0; j < chl.getLength(); j++) {
                            if (!chl.item(j).getNodeName().equals("FscNumber") && !chl.item(j).getNodeName().equals("Priority")) {
                                if (chl.item(j).hasChildNodes()) {
                                    trace.addInfo(chl.item(j).getNodeName());
                                    Element fscfield = TDoc.createElement("FSCField");
                                    fieldch.appendChild(fscfield);
                                    NodeList FUQl = chl.item(j).getChildNodes();
                                    for (int p = 0; p < FUQl.getLength(); p++) {
                                        if (FUQl.item(p).getNodeName().equals("FUQNumber")) {
                                            fscfield.setAttribute("FUQNumber", FUQl.item(p).getTextContent());
                                        }
                                        if (FUQl.item(p).getNodeName().equals("Value")) {
                                            Node tVal = TDoc.createElement("Value");
                                            tVal.setTextContent(FUQl.item(p).getTextContent());
                                            fscfield.appendChild(tVal);
                                            fscfield.setAttribute("Field", chl.item(j).getNodeName());
                                        }
                                    }
                                }
                            } else {
                                if (chl.item(j).getNodeName().equals("FscNumber")) {
                                    n1.setAttribute("Number", chl.item(j).getTextContent());
                                }
                                if (chl.item(j).getNodeName().equals("Priority")) {
                                    n1.setAttribute("Priority", chl.item(j).getTextContent());
                                }
                            }
                        }
                    }
                }
                ///// Start of lookup code /////
                // Prepare the lookup request payload.
                String lookUpRequest = "<LoadPatientFullInsuranceDetail><Mrn>" + mrnVal + "</Mrn></LoadPatientFullInsuranceDetail>";
                trace.addInfo("Request");
                trace.addInfo(lookUpRequest);
                SystemAccessor acc = null;
                Channel channel = null;
                Payload lookupResult = null;
                try {
                    channel = LookupService.getChannel(inParamBusService, inParamChannel);
                    trace.addInfo("Got the channel");
                    acc = LookupService.getSystemAccessor(channel);
                    trace.addInfo("Got The Channel and Accessor");
                    InputStream ist = new ByteArrayInputStream(lookUpRequest.getBytes());
                    XmlPayload payload = LookupService.getXmlPayload(ist);
                    if (acc != null) {
                        trace.addInfo("Executing The Webservice");
                        lookupResult = acc.call(payload);
                        trace.addInfo("End of Executing The Webservice");
                        trace.addInfo(lookupResult.toString());
                    }
                    if (lookupResult != null) { // was: lookUpRequest != null, which is always true
                        trace.addInfo("Response is not null");
                        InputStream rist = lookupResult.getContent();
                        Document lresDoc = ibuilder.parse(rist);
                        Node n1 = lresDoc.getFirstChild();
                        trace.addInfo(n1.getNodeName());
                        if (n1.getNodeName().endsWith("LoadPatientFullInsuranceDetailResponse")) {
                            Node ch1 = n1.getFirstChild();
                            trace.addInfo(ch1.getNodeName());
                            NodeList lookupnl = ch1.getChildNodes();
                            // Copy the response children into the target document.
                            for (int i = 0; i < lookupnl.getLength(); i++) {
                                Node n2 = TDoc.importNode(lookupnl.item(i), true);
                                trace.addInfo("Adding Child Nodes");
                                trace.addInfo(lookupnl.item(i).getNodeName());
                                tRoot.appendChild(n2);
                            }
                        } else {
                            throw new StreamTransformationException("Error in Look Up" + n1.getTextContent());
                        }
                    } else {
                        trace.addWarning("Response is null");
                        throw new LookupException();
                    }
                } catch (LookupException le) {
                    trace.addWarning("Some Thing Wrong in LookUp" + le.getMessage());
                    throw new StreamTransformationException("Look Up Failed");
                }
                Transformer transformer = TransformerFactory.newInstance().newTransformer();
                DOMSource source = new DOMSource(TDoc);
                Result result = new StreamResult(out);
                transformer.transform(source, result);
            } catch (StreamTransformationException ste) {
                throw new StreamTransformationException(ste.getMessage());
            } catch (Exception e) {
                trace.addInfo(e.getMessage());
            }
        }
    }

  • WDRuntimeException: Failed to create J2EE cluster node in SLD

    Hello,
    I am getting the below error, but to my knowledge I have everything set up properly. Let me briefly outline the logistics (I am running everything LOCALLY; I will move to remote later):
    WAS 6.4 SP12
    Set up JCo, and it tests fine
    Set up Visual Administrator / SLD Data Supplier / HTTP and CIM configured, and they seem to test fine
    Created SLD, and it tests OK
    Created Technical Landscape
    I have noticed that in SP12, in the SLD config, I actually have a NEW category called "System Landscape" above my "Technical Landscape" link. I have not seen this option in previous versions SP9 or SP11. Is it mandatory to configure this?
    Also, I created a model for Adaptive RFC and found the function I needed successfully.
    Anyway, here is the error when trying to deploy:
    com.sap.tc.webdynpro.services.exceptions.WDRuntimeException: Error while obtaining JCO connection.
         at com.sap.tc.webdynpro.services.datatypes.core.DataTypeBroker$1.fillSldConnection(DataTypeBroker.java:90)
    Caused by: com.sap.tc.webdynpro.services.sal.sl.api.WDSystemLandscapeException: Error while obtaining JCO connection.
    Caused by: com.sap.tc.webdynpro.services.exceptions.WDRuntimeException: Failed to create J2EE cluster node in SLD for 'J2E.SystemHome.bc347792': com.sap.lcr.api.cimclient.LcrException: CIM_ERR_NOT_FOUND: No such instance: SAP_J2EEEngineCluster.CreationClassName="SAP_J2EEEngineCluster",Name="J2E.SystemHome.bc347792"
    Any help will be appreciated!

    I figured it out, for those that may have a similar problem.
    Although I had created and tested my JCos properly and they were working fine, somehow, and I still don't know why, they went RED in the JCo Maintenance screen.
    I had to create them again, and everything works fine now.

  • Cluster Node paused

    Hi there
    My Setup:
    2 Cluster Nodes (HP DL380 G7 & HP DL380 Gen8)
    HP P2000 G3 FC MSA (MPIO)
    The Gen8 cluster node pauses after a few minutes, but stays online if the G7 is paused (no drain). My troubleshooting has led me to believe that there is a problem with the Cluster Shared Volume:
    00001508.000010b4::2015/02/19-14:51:14.189 INFO  [RES] Network Name: Agent: Sending request Netname/RecheckConfig to NN:cf2dec1d-ee88-4fb6-a86d-0c2d1aa888b4:Netbios
    00000d1c.0000299c::2015/02/19-14:51:14.615 INFO  [API] s_ApiGetQuorumResource final status 0.
    00000d1c.0000299c::2015/02/19-14:51:14.616 INFO  [RCM [RES] Virtual Machine VirtualMachine1 embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
    00001508.000010b4::2015/02/19-14:51:15.010 INFO  [RES] Network Name <Cluster Name>: Getting Read only private properties
    00000d1c.00002294::2015/02/19-14:51:15.096 INFO  [API] s_ApiGetQuorumResource final status 0.
    00000d1c.00002294::2015/02/19-14:51:15.121 INFO  [API] s_ApiGetQuorumResource final status 0.
    000014a8.000024f4::2015/02/19-14:51:15.269 INFO  [RES] Physical Disk <Quorum>: VolumeIsNtfs: Volume \\?\GLOBALROOT\Device\Harddisk1\ClusterPartition2\ has FS type NTFS
    00000d1c.00002294::2015/02/19-14:51:15.343 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQ's DLL is not present on this node.  Attempting to find a good node...
    00000d1c.00002294::2015/02/19-14:51:15.352 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQTriggers's DLL is not present on this node.  Attempting to find a good node...
    000014a8.000024f4::2015/02/19-14:51:15.386 INFO  [RES] Physical Disk: HardDiskpQueryDiskFromStm: ClusterStmFindDisk returned device='\\?\mpio#disk&ven_hp&prod_p2000_g3_fc&rev_t250#1&7f6ac24&0&36304346463030314145374646423434393243353331303030#{53f56307-b6bf-11d0-94f2-00a0c91efb8b}'
    000014a8.000024f4::2015/02/19-14:51:15.386 ERR   [RES] Physical Disk: HardDiskpGetDiskInfo: GetVolumeInformation failed for \\?\GLOBALROOT\Device\Harddisk3\ClusterPartition2\, status 3
    000014a8.000024f4::2015/02/19-14:51:15.386 ERR   [RES] Physical Disk: HardDiskpGetDiskInfo: failed to get partition size for \\?\GLOBALROOT\Device\Harddisk3\ClusterPartition2\, status 3
    00000d1c.00001420::2015/02/19-14:51:15.847 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQ's DLL is not present on this node.  Attempting to find a good node...
    00000d1c.00001420::2015/02/19-14:51:15.855 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQTriggers's DLL is not present on this node.  Attempting to find a good node...
    000014a8.000024f4::2015/02/19-14:51:15.887 INFO  [RES] Physical Disk: HardDiskpQueryDiskFromStm: ClusterStmFindDisk returned device='\\?\mpio#disk&ven_hp&prod_p2000_g3_fc&rev_t250#1&7f6ac24&0&36304346463030314145374646423434393243353331303030#{53f56307-b6bf-11d0-94f2-00a0c91efb8b}'
    000014a8.000024f4::2015/02/19-14:51:15.888 ERR   [RES] Physical Disk: HardDiskpGetDiskInfo: GetVolumeInformation failed for \\?\GLOBALROOT\Device\Harddisk3\ClusterPartition2\, status 3
    000014a8.000024f4::2015/02/19-14:51:15.888 ERR   [RES] Physical Disk: HardDiskpGetDiskInfo: failed to get partition size for \\?\GLOBALROOT\Device\Harddisk3\ClusterPartition2\, status 3
    00000d1c.00001420::2015/02/19-14:51:15.928 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQ's DLL is not present on this node.  Attempting to find a good node...
    00000d1c.00001420::2015/02/19-14:51:15.939 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQTriggers's DLL is not present on this node.  Attempting to find a good node...
    000014a8.000024f4::2015/02/19-14:51:15.968 INFO  [RES] Physical Disk: HardDiskpQueryDiskFromStm: ClusterStmFindDisk returned device='\\?\mpio#disk&ven_hp&prod_p2000_g3_fc&rev_t250#1&7f6ac24&0&36304346463030314145374646423434393243353331303030#{53f56307-b6bf-11d0-94f2-00a0c91efb8b}'
    000014a8.000024f4::2015/02/19-14:51:15.969 ERR   [RES] Physical Disk: HardDiskpGetDiskInfo: GetVolumeInformation failed for \\?\GLOBALROOT\Device\Harddisk3\ClusterPartition2\, status 3
    000014a8.000024f4::2015/02/19-14:51:15.969 ERR   [RES] Physical Disk: HardDiskpGetDiskInfo: failed to get partition size for \\?\GLOBALROOT\Device\Harddisk3\ClusterPartition2\, status 3
    00000d1c.00001420::2015/02/19-14:51:16.005 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQ's DLL is not present on this node.  Attempting to find a good node...
    00000d1c.00001420::2015/02/19-14:51:16.015 WARN  [RCM] ResourceTypeChaseTheOwnerLoop::DoCall: ResType MSMQTriggers's DLL is not present on this node.  Attempting to find a good node...
    000014a8.000024f4::2015/02/19-14:51:16.059 INFO  [RES] Physical Disk: HardDiskpQueryDiskFromStm: ClusterStmFindDisk returned device='\\?\mpio#disk&ven_hp&prod_p2000_g3_fc&rev_t250#1&7f6ac24&0&36304346463030314145374646423434393243353331303030#{53f56307-b6bf-11d0-94f2-00a0c91efb8b}'
    000014a8.000024f4::2015/02/19-14:51:16.059 ERR   [RES] Physical Disk: HardDiskpGetDiskInfo: GetVolumeInformation failed for \\?\GLOBALROOT\Device\Harddisk3\ClusterPartition2\, status 3
    000014a8.000024f4::2015/02/19-14:51:16.059 ERR   [RES] Physical Disk: HardDiskpGetDiskInfo: failed to get partition size for \\?\GLOBALROOT\Device\Harddisk3\ClusterPartition2\, status 3
    00000d1c.00002568::2015/02/19-14:51:17.110 INFO  [GEM] Node 1: Deleting [2:395 , 2:396] (both included) as it has been ack'd by every node
    00000d1c.0000299c::2015/02/19-14:51:17.444 INFO  [RCM [RES] Virtual Machine VirtualMachine2 embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
    00000d1c.0000299c::2015/02/19-14:51:18.103 INFO  [RCM] rcm::DrainMgr::PauseNodeNoDrain: [DrainMgr] PauseNodeNoDrain
    00000d1c.0000299c::2015/02/19-14:51:18.103 INFO  [GUM] Node 1: Processing RequestLock 1:164
    00000d1c.00002568::2015/02/19-14:51:18.104 INFO  [GUM] Node 1: Processing GrantLock to 1 (sent by 2 gumid: 1470)
    00000d1c.0000299c::2015/02/19-14:51:18.104 INFO  [GUM] Node 1: executing request locally, gumId:1471, my action: /nsm/stateChange, # of updates: 1
    00000d1c.00001420::2015/02/19-14:51:18.104 INFO  [DM] Starting replica transaction, paxos: 99:99:50133, smartPtr: HDL( c9b16cf1e0 ), internalPtr: HDL( c9b21
    This issue has been bugging me for some time now. The cluster is fully functional and works great until the node gets paused again. I've read somewhere that the MSMQ errors can be ignored, but I can't find anything about the HardDiskpGetDiskInfo: GetVolumeInformation failed messages. There are no errors in the SAN or the server event logs, and drivers and firmware are up to date. Any help would be greatly appreciated.
    Best regards

    Thank you for your replies.
    First, some information I left out of my original post: we're using Windows Server 2012 R2 Datacenter and are currently only hosting virtual machines on the cluster.
    I did some testing over the weekend, including a firmware update on the SAN and a cluster validation run.
    The problem doesn't seem to be related to backup. We use Microsoft DPM to make a full express backup once every day, while the GetVolumeInformation failed error gets logged periodically, every half hour.
    Excerpts from the validation report:
    Validate Disk Failover
    Description: Validate that a disk can fail over successfully with data intact.
    Start: 21.02.2015 18:02:17.
    Node Node2 holds the SCSI PR on Test Disk 3 and brought the disk online, but failed in its attempt to write file data to partition table entry 1. The disk structure is corrupted and unreadable.
    Stop: 21.02.2015 18:02:37.
    Node Node1 holds the SCSI PR on Test Disk 3 and brought the disk online, but failed in its attempt to write file data to partition table entry 1. The disk structure is corrupted and unreadable.
    Validate File System
    Description: Validate that the file system on disks in shared storage is supported by failover clusters and Cluster Shared Volumes (CSVs). Failover cluster physical disk resources support NTFS, ReFS, FAT32, FAT, and RAW. Only volumes formatted as NTFS or ReFS are accessible in disks added as CSVs.
    The test was canceled.
    Validate Simultaneous Failover
    Description: Validate that disks can fail over simultaneously with data intact.
    The test was canceled.
    Validate Storage Spaces Persistent Reservation
    Description: Validate that storage supports the SCSI-3 Persistent Reservation commands needed by Storage Spaces to support clustering.
    Start: 21.02.2015 18:01:00.
    Verifying there are no Persistent Reservations, or Registration keys, on Test Disk 3 from node Node1. Issuing Persistent Reservation REGISTER AND IGNORE EXISTING KEY using RESERVATION KEY 0x0 SERVICE ACTION RESERVATION KEY 0x30000000a for Test Disk 3 from node Node1.
    Issuing Persistent Reservation RESERVE on Test Disk 3 from node Node1 using key 0x30000000a.
    Issuing Persistent Reservation REGISTER AND IGNORE EXISTING KEY using RESERVATION KEY 0x0 SERVICE ACTION RESERVATION KEY 0x3000100aa for Test Disk 3 from node Node2.
    Issuing Persistent Reservation REGISTER using RESERVATION KEY 0x30000000a SERVICE ACTION RESERVATION KEY 0x30000000b for Test Disk 3 from node Node1 to change the registered key while holding the reservation for the disk.
    Verifying there are no Persistent Reservations, or Registration keys, on Test Disk 2 from node Node1.
    Issuing Persistent Reservation REGISTER AND IGNORE EXISTING KEY using RESERVATION KEY 0x0 SERVICE ACTION RESERVATION KEY 0x20000000a for Test Disk 2 from node Node1.
    Issuing Persistent Reservation RESERVE on Test Disk 2 from node Node1 using key 0x20000000a.
    Issuing Persistent Reservation REGISTER AND IGNORE EXISTING KEY using RESERVATION KEY 0x0 SERVICE ACTION RESERVATION KEY 0x2000100aa for Test Disk 2 from node Node2.
    Issuing Persistent Reservation REGISTER using RESERVATION KEY 0x20000000a SERVICE ACTION RESERVATION KEY 0x20000000b for Test Disk 2 from node Node1 to change the registered key while holding the reservation for the disk.
    Verifying there are no Persistent Reservations, or Registration keys, on Test Disk 0 from node Node1.
    Issuing Persistent Reservation REGISTER AND IGNORE EXISTING KEY using RESERVATION KEY 0x0 SERVICE ACTION RESERVATION KEY 0xa for Test Disk 0 from node Node1.
    Issuing Persistent Reservation RESERVE on Test Disk 0 from node Node1 using key 0xa.
    Issuing Persistent Reservation REGISTER AND IGNORE EXISTING KEY using RESERVATION KEY 0x0 SERVICE ACTION RESERVATION KEY 0x100aa for Test Disk 0 from node Node2.
    Issuing Persistent Reservation REGISTER using RESERVATION KEY 0xa SERVICE ACTION RESERVATION KEY 0xb for Test Disk 0 from node Node1 to change the registered key while holding the reservation for the disk.
    Verifying there are no Persistent Reservations, or Registration keys, on Test Disk 1 from node Node1.
    Issuing Persistent Reservation REGISTER AND IGNORE EXISTING KEY using RESERVATION KEY 0x0 SERVICE ACTION RESERVATION KEY 0x10000000a for Test Disk 1 from node Node1.
    Issuing Persistent Reservation RESERVE on Test Disk 1 from node Node1 using key 0x10000000a.
    Issuing Persistent Reservation REGISTER AND IGNORE EXISTING KEY using RESERVATION KEY 0x0 SERVICE ACTION RESERVATION KEY 0x1000100aa for Test Disk 1 from node Node2.
    Issuing Persistent Reservation REGISTER using RESERVATION KEY 0x10000000a SERVICE ACTION RESERVATION KEY 0x10000000b for Test Disk 1 from node Node1 to change the registered key while holding the reservation for the disk.
    Failure. Persistent Reservation not present on Test Disk 3 from node Node1 after successful call to update reservation holder's registration key 0x30000000b.
    Failure. Persistent Reservation not present on Test Disk 1 from node Node1 after successful call to update reservation holder's registration key 0x10000000b.
    Failure. Persistent Reservation not present on Test Disk 0 from node Node1 after successful call to update reservation holder's registration key 0xb.
    Failure. Persistent Reservation not present on Test Disk 2 from node Node1 after successful call to update reservation holder's registration key 0x20000000b.
    Test Disk 0 does not support SCSI-3 Persistent Reservations commands needed by clustered storage pools that use the Storage Spaces subsystem. Some storage devices require specific firmware versions or settings to function properly with failover clusters. Contact your storage administrator or storage vendor for help with configuring the storage to function properly with failover clusters that use Storage Spaces.
    Test Disk 1 does not support SCSI-3 Persistent Reservations commands needed by clustered storage pools that use the Storage Spaces subsystem. Some storage devices require specific firmware versions or settings to function properly with failover clusters. Contact your storage administrator or storage vendor for help with configuring the storage to function properly with failover clusters that use Storage Spaces.
    Test Disk 2 does not support SCSI-3 Persistent Reservations commands needed by clustered storage pools that use the Storage Spaces subsystem. Some storage devices require specific firmware versions or settings to function properly with failover clusters. Contact your storage administrator or storage vendor for help with configuring the storage to function properly with failover clusters that use Storage Spaces.
    Test Disk 3 does not support SCSI-3 Persistent Reservations commands needed by clustered storage pools that use the Storage Spaces subsystem. Some storage devices require specific firmware versions or settings to function properly with failover clusters. Contact your storage administrator or storage vendor for help with configuring the storage to function properly with failover clusters that use Storage Spaces.
    Stop: 21.02.2015 18:01:02
    Thank you for your help.
    David

  • SCVMM losing connection to cluster nodes

    Hey guys 'n girls, I hope this is the right forum for this question. I already opened a ticket at MS support as well, because it's indirectly impacting our production environment, but even after a week there's been no contact. Losing faith in MS support there.
    The problem we're having is that a host enters the 'needs attention' state in SCVMM, with WinRM error 0x80338126. I guess it has something to do with the network or with Kerberos, and I've found some info on it, but I still haven't been able to solve it. Do you guys have any ideas?
    Problem summary:
    We are seeing an issue on our new Hyper-V platform. The platform should have been in production last week, but this issue is delaying our project, as we can't seem to get it stable.
    The problem we are experiencing is that SCVMM loses the connection to some of the Hyper-V nodes; not one specific node. Last week it happened to two nodes, and today it happened to another node. I see issues with WinRM, and I suspect something to do with Kerberos. See the bottom of this post for background details and software versions.
    The host gets the status 'needs attention', and if you look at the status of the machine, WinRM gives an error. The error is:
    Error (2916)
    VMM is unable to complete the request. The connection to the agent cc1-hyp-10.domaincloud1.local was lost.
    WinRM: URL: [http://cc1-hyp-10.domaincloud1.local:5985], Verb: [ENUMERATE], Resource: [http://schemas.microsoft.com/wbem/wsman/1/wmi/root/cimv2/Win32_Service], Filter: [select * from Win32_Service where Name="WinRM"]
    Unknown error (0x80338126)
    Recommended Action
    Ensure that the Windows Remote Management (WinRM) service and the VMM agent are installed and running and that a firewall is not blocking HTTP/HTTPS traffic. Ensure that the VMM server is able to communicate with cc1-hyp-10.domaincloud1.local over WinRM by successfully running the following command:
     winrm id -r:cc1-hyp-10.domaincloud1.local
    This problem can also be caused by a Windows Management Instrumentation (WMI) service crash. If the server is running Windows Server 2008 R2, ensure that KB 982293 (http://support.microsoft.com/kb/982293) is installed on it.
    If the error persists, restart cc1-hyp-10.domaincloud1.local and then try the operation again. Refer to http://support.microsoft.com/kb/2742275 for more details.
    Doing a simple test from the VMM server to the problematic cluster node shows this error:
    PS C:\> hostname
    CC1-VMM-01
    PS C:\> winrm id -r:cc1-hyp-10.domaincloud1.local
    WSManFault
        Message = WinRM cannot complete the operation. Verify that the specified computer name is valid, that the computer is accessible over the network, and that a firewall exception for the WinRM service is enabled and allows access from this
    computer. By default, the WinRM firewall exception for public profiles limits access to remote computers within the same local subnet.
    Error number:  -2144108250 0x80338126
    WinRM cannot complete the operation. Verify that the specified computer name is valid, that the computer is accessible over the network, and that a firewall exception for the WinRM service is enabled and allows access from this computer. By default, the WinRM
    firewall exception for public profiles limits access to remote computers within the same local subnet.
    I CAN connect from other hosts to this problematic cluster node:
    PS C:\> hostname
    CC1-HYP-16
    PS C:\> winrm id -r:cc1-hyp-10.domaincloud1.local
    IdentifyResponse
        ProtocolVersion =
    http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd
        ProductVendor = Microsoft Corporation
        ProductVersion = OS: 6.3.9600 SP: 0.0 Stack: 3.0
        SecurityProfiles
            SecurityProfileName =
    http://schemas.dmtf.org/wbem/wsman/1/wsman/secprofile/http/spnego-kerberos
    And I can connect from the vmm server to all other cluster nodes:
    PS C:\> hostname
    CC1-VMM-01
    PS C:\> winrm id -r:cc1-hyp-11.domaincloud1.local
    IdentifyResponse
        ProtocolVersion =
    http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd
        ProductVendor = Microsoft Corporation
        ProductVersion = OS: 6.3.9600 SP: 0.0 Stack: 3.0
        SecurityProfiles
            SecurityProfileName =
    http://schemas.dmtf.org/wbem/wsman/1/wsman/secprofile/http/spnego-kerberos
    So at this point only the test from the cc1-vmm-01 to cc1-hyp-10 seems to be problematic.
    I followed the steps on the page https://support.microsoft.com/kb/2742275 (which is referred to above). I tried the VMMCA, but I can't really get it working the way I want, and it seems to give outdated recommendations.
    I tried checking for duplicate SPNs by running setspn -x on the affected machines. No results (although I do not understand what an SPN is or how it works). I rebuilt the performance counters.
    I tried setting 'sc config winrm type= own' as described in [http://blinditandnetworkadmin.blogspot.nl/2012/08/kb-how-to-troubleshoot-needs-attention.html].
    If I reboot the cc1-hyp-10 machine, it will start working perfectly again. However, then I can't troubleshoot the issue, and it will happen again.
    I want this problem solved, so VMM never loses the connection to the hypervisors it's managing again!
    Background information:
    We've set up a platform with Hyper-V to run a VM workload. The platform consists of the following hardware:
    2 Dell R620s with 32GB of RAM, running Hyper-V to virtualize the cloud management layer (DCs, VMM, SQL). These machines are called cc1-hyp-01 and cc1-hyp-02. They run the management VMs like cc1-dc-01/02, cc1-sql-01, cc1-vmm-01, etc.; the names are self-explanatory. The VMM machine is NOT clustered.
    8 Dell M620 blades with 320GB of RAM, running Hyper-V to virtualize the customer workload. The machines are called cc1-hyp-10 through cc1-hyp-17. They are in a cluster.
    2 EqualLogic units form a SAN (premium storage), and we have a Dell R515 running an iSCSI target (budget storage).
    We have Dell Force10 switches and Cisco C3750X switches to connect everything together (mostly 10Gb links).
    All hosts run Windows Server 2012 R2 Datacenter edition. The VMM server runs System Center Virtual Machine Manager 2012 R2.
    All the latest Windows updates are installed on every host. There are no firewalls between any of the hosts (VMM and hypervisors) at this level; Windows firewalls are all disabled. No antivirus software is installed, and no Symantec software is installed.
    The only non-standard software installed is Dell Host Integration Tools 4.7.1, Dell OpenManage Server Administrator, and some small stuff like 7-Zip, BgInfo, net-snap, etc.
    The SCVMM service is running under the domain account DOMAINCLOUD1\scvmm, which is in the local administrators group of each cluster node.
    On top of this cloud layer we're running the tenant layer, with a lot of VMs for a specific customer (although they are all off now).

    I think I found the culprit. After an hour of analyzing Wireshark dumps, I found the VMM had jumbo frames enabled on the management interface to the hosts (and the underlying infrastructure does not). Now my winrm commands have started working again.
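    For anyone hitting the same thing: a quick way to check for an MTU mismatch like this (assuming jumbo frames of roughly 9000 bytes are expected end to end on the management network) is to ping from the VMM server to a host with a large payload and the don't-fragment flag set:
    PS C:\> ping cc1-hyp-10.domaincloud1.local -f -l 8000
    If a device in the path only passes standard 1500-byte frames, the ping fails with "Packet needs to be fragmented but DF set", which points at the same jumbo-frame misconfiguration.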

  • My app seems to consume an enormous amount of API calls every day? Please help!

    I am tired, sad, and frustrated here, and feel like a total moron. I am very sorry if my post is too long and not very well put together, but I'm begging anyone with some experience here to please give me some advice.
    I am currently testing an app I have created to let people add events, search for events, comment on events, and get notified about changes in events they have signed up for. I have used Azure Mobile Services for this, and it all seems to have worked well. This is mostly a learning app for me, since I am very much a noob.
    Now we are nearing launch of the app, and I have started to look at the numbers in Azure. I am on the free version right now, where 16k API calls a day are free (I will move to a priced tier on launch), but the numbers look completely off the hook here!
    Every day, 2-3 devices seem to run up 600-700 API calls. I do run the testing app hard, of course, but I guess some users might do that too.
    So: 4% of the daily free calls are consumed by 3 devices. That means 33 users could fill the quota, and the first paid tier only gives room for 300 users.
    Am I reading something wrong here? Is my app then not viable? I think it, even with my noobness, has the potential to get a few thousand users. Does that mean I'm going to have to go to the top tier? Because I cannot afford that.
    Over to the stack part of the question:
    I log the user in, I create the tables, and then when they hit search I do something like this:
    eventenItemList = await eventenTable.Take (200).Where (item => item.Dateandtimeend >= DateAndTimeIn).
    Where (item => item.Dateandtime <= DateAndTimeInEnd).
    Where (item => item.Fylke == fylke).
    Where (item => item.Pris <= MaksPris).ToListAsync ();
    I would expect that to be a single API call, but it seems to run up tens of calls, just there? How is that possible?
    When the user is not in the app, I run a background service on Android that goes through a local DB of events the user has created and then checks against the remote DB for any changes to them. I do that like this:
    var table = db.Table<MyEvents> ();
    foreach (var e in table) {
        eventenItemList = await eventenTable.Where (item => item.Id == e.EventId).ToListAsync ();
        if (eventenItemList.Count == 0) {
        } else {
            // I here notify the user that something new is up - and what it is. Time changed, comments or whatever.
        }
    }
    I'm guessing this is stupid of me, since it probably makes one API call for each iteration of the loop? But in the numbers it just seems to do 2 calls, like I expect it to.
    I am horribly lost here, people. I made around a hundred random clicks around the app this evening and racked up over 1500 API calls. I have been cold-sweating since then. Any advice or info about how this API call system works would be very much appreciated.

    Replying here in case someone else stumbles upon this post. This question was handled on SO: http://stackoverflow.com/questions/28685710/app-consumes-an-extreme-amount-of-api-calls-in-azure

  • Error while creating contact through API in Install Base

    Hello
    I am trying to create contacts when creating an install base through the API.
    I tried the below code as per Metalink note #215456.1, and it gives the error shown below. I checked the setup: I have 'Ship To' in the Instance Party Account Relationship setup in the application, I have party id 1232890 in the hz_parties table with party type 'Person', and I passed contact_ip_id as the instance_party_id from the CSI_I_PARTIES table for the instance to be updated.
    Also, can anybody help me with how to purge the error messages before calling the API? If I have 2 records and both records error, my second record's error gets concatenated with my first error, and the message count also increases (see the error message below: the message count comes out as 2 even though there is only one error).
    SET SERVEROUTPUT ON SIZE 1000000
    DECLARE
    p_instance_rec CSI_DATASTRUCTURES_PUB.INSTANCE_REC;
    p_ext_attrib_values_tbl
    CSI_DATASTRUCTURES_PUB.EXTEND_ATTRIB_VALUES_TBL;
    p_party_tbl CSI_DATASTRUCTURES_PUB.PARTY_TBL;
    p_account_tbl CSI_DATASTRUCTURES_PUB.PARTY_ACCOUNT_TBL;
    p_pricing_attrib_tbl CSI_DATASTRUCTURES_PUB.PRICING_ATTRIBS_TBL;
    p_org_assignments_tbl CSI_DATASTRUCTURES_PUB.ORGANIZATION_UNITS_TBL;
    p_asset_assignment_tbl CSI_DATASTRUCTURES_PUB.INSTANCE_ASSET_TBL;
    p_txn_rec CSI_DATASTRUCTURES_PUB.TRANSACTION_REC;
    x_instance_id_lst CSI_DATASTRUCTURES_PUB.ID_TBL;
    x_return_status VARCHAR2(2000);
    x_msg_count NUMBER;
    x_msg_data VARCHAR2(2000);
    x_msg_index_out NUMBER;
    t_output VARCHAR2(2000);
    t_msg_dummy NUMBER;
    BEGIN
    p_party_tbl(1).instance_party_id := null;
    p_party_tbl(1).instance_id := 1216497;
    p_party_tbl(1).party_source_table := 'HZ_PARTIES';
    p_party_tbl(1).party_id := 1232890;
    p_party_tbl(1).relationship_type_code := 'Ship To';
    p_party_tbl(1).contact_flag := 'Y';
    p_party_tbl(1).contact_ip_id := 1699185;
    x_msg_count := 0;
    p_party_tbl(1).OBJECT_VERSION_NUMBER := 1;
    -- Clear the message stack first so errors from a previous call do not
    -- accumulate (this addresses the message-concatenation question above)
    fnd_msg_pub.initialize;
    -- Now call the stored program
    csi_item_instance_pub.update_item_instance(
    1.0,
    'F',
    'F',
    1,
    p_instance_rec,
    p_ext_attrib_values_tbl,
    p_party_tbl,
    p_account_tbl,
    p_pricing_attrib_tbl,
    p_org_assignments_tbl,
    p_asset_assignment_tbl,
    p_txn_rec,
    x_instance_id_lst,
    x_return_status,
    x_msg_count,
    x_msg_data);
    -- Output the results
    if x_msg_count > 0
    then
    for j in 1 .. x_msg_count loop
    fnd_msg_pub.get
    ( j
    , FND_API.G_FALSE
    , x_msg_data
    , t_msg_dummy
    );
    t_output := 'Msg'
    || To_Char(j)
    || ': '
    || x_msg_data;
    dbms_output.put_line
    ( SubStr( t_output, 1, 255 ) );
    end loop;
    end if;
    dbms_output.put_line('x_return_status = '||x_return_status);
    dbms_output.put_line('x_msg_count = '||TO_CHAR(x_msg_count));
    dbms_output.put_line('x_msg_data = '||x_msg_data);
    -- COMMIT;
    END;
    ERROR
    SQL> @p
    Msg1: The Party Relationship Type (Ship To) entered is either invalid or it does
    not exist in the Installed Base Lookups
    Msg2: The Party Relationship Type (Ship To) entered is either invalid or it does
    not exist in the Installed Base Lookups
    x_return_status = E
    x_msg_count = 2
    x_msg_data = The Party Relationship Type (Ship To) entered is either invalid or
    it does not exist in the Installed Base Lookups
    PL/SQL procedure successfully completed.

    Hi,
    We are on 11.5.10.2, and I already checked the notes you sent before; the setups are fine, as the relationship type 'Ship To' has 'contacts' enabled in the setup.
    I am also seeing a different issue: once I update an existing item instance to the status 'Return for Credit' through the API, the system no longer allows me to update the extended attributes manually through the front-end application, and I see a note at the end of the screen: 'Note: This item instance cannot be updated.' This only happens when I update the item instance status to 'Returned for Credit', not when I create new item instances with status 'Created'. Is it intended functionality to restrict updates on extended attributes if I change the status of an item instance to 'Return for Credit'?
    Thanks

  • Difference in Creation of BRF+ artifact through API / Workbench

    Hello,
    I am new to BRF+ development and need your help. When creating BRFplus artifacts in NW 7.02, what are the differences between:
    1) creating the Application / Function / Data Object / Ruleset etc. using the BRF+ workbench and using ABAP code for calling / passing parameters, and
    2) creating all of these through the APIs?
    Can anyone guide me, please?
    Thanks in advance.
    PS

    Hi Carsten,
    Thanks for your response.
    Just to clarify some more: is there any later issue or limitation in extending or changing the functionality of rulesets if we create them with the workbench instead of the APIs?
    For example, if on the production server we need to change or replace some context parameters, add one more row to a decision table, or make any other change.
    Thanks & Regards,
    PS

  • API call that will list the meetings of employees that are assigned to a manager

    Hi,
    What is the API call that will list the meetings of the employees who are assigned to a manager? The manager will be logged in, and when I use the API call:
    http://[serverid]/api/xml?&action=report-my-meetings&principal-id=[##]
    I always get the same result, which is a list of the meetings for the manager. If I put in the id of an employee, I still get the manager's meetings.
    I have looked through all of the API documentation and cannot find which call I could make to get the results that I want. The documentation specifically says that report-my-meetings provides information about meetings that the logged-in user is scheduled to attend. So how do I get the information for the manager's employees?
    What is needed is the name, duration, and start time of all meetings that each employee assigned to the logged-in manager is enrolled in.
    Thanks for any help with this!
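    For reference, here is a minimal Java sketch of the call as described above; the host name and session id are placeholders, and the BREEZESESSION cookie must come from a prior login call. As the documentation states, report-my-meetings is always scoped to the session owner, which is why passing principal-id does not change the results.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class MyMeetingsReport {
        public static void main(String[] args) throws Exception {
            // Host and session id are placeholders; log in first to obtain
            // a valid BREEZESESSION cookie.
            URL url = new URL("https://connect.example.com/api/xml?action=report-my-meetings");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Cookie", "BREEZESESSION=your-session-id");
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                // The returned XML lists meetings for the logged-in user only,
                // regardless of any principal-id parameter.
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }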

    I posted this in the wrong forum and have moved it to the XML forum.

  • Help with a Blind Configuration of a G5 Cluster node

    So I bought two G5 cluster nodes to dedicate some audiovisual processes to them. My only other Mac is a Core 2 Duo MacBook Pro.
    Using Pacifist, I was able to do a clean install of Mac OS X onto the internal drive by putting it into an external enclosure.
    Now here is my problem: the cluster nodes have no video card.
    I plan on using them through the OS X Screen Sharing function once they are connected to the network, but I don't know how to do the initial configuration of Mac OS X on them, since I cannot boot a system that uses the Apple Partition Map on my MacBook Pro, and the cluster nodes will not boot from the GUID partition scheme.
    Can anyone please help me?
    Thanks,
    Chuck

    Assuming you're running Mac OS X Server on the cluster nodes, just boot each server normally; it will run a special first-time-boot process that sets up a network listener.
    You can then install the Server Admin tools on your MacBook Pro and run Server Assistant. Server Assistant will look out over the network, find the new servers, and give you the opportunity to configure them remotely (assign account data, IP address, etc.).
    (Note that you can also do this as part of the initial install process: boot the server from the Install DVD and run the entire OS installation and configuration remotely via Server Assistant.)
    Note: if you're not running Mac OS X Server on the cluster nodes, the above doesn't apply.

  • Error: Halting this cluster node due to unrecoverable service failure

    Our cluster has experienced some sort of fault whose origin appears to have been nearly a month ago, yet whose symptoms have only just manifested today.
    The node in question is a standalone instance running a DistributedCache service with local storage. It output the following to stdout on Jan-22:
    Coherence <Error>: Halting this cluster node due to unrecoverable service failure
    It finally failed today with OutOfMemoryError: Java heap space.
    We're running coherence-3.5.2.jar.
    Q1: It looks like this node failed on Jan-22 yet we did not notice. What is the best way to monitor node health?
    Q2: What might the root cause be for such a fault?
    I found the following in the logs:
    2011-01-22 01:18:58,296 Coherence Logger@9216774 3.5.2/463 ERROR 2011-01-22 01:18:58.296/9910749.462 Oracle Coherence EE 3.5.2/463 <Error> (thread=Cluster, member=33): Attempting recovery (due to soft timeout) of Guard{Daemon=DistributedCache}
    2011-01-22 01:19:04,772 Coherence Logger@9216774 3.5.2/463 ERROR 2011-01-22 01:19:04.772/9910755.938 Oracle Coherence EE 3.5.2/463 <Error> (thread=Cluster, member=33): Terminating guarded execution (due to hard timeout) of Guard{Daemon=DistributedCache}
    2011-01-22 01:19:05,785 Coherence Logger@9216774 3.5.2/463 ERROR 2011-01-22 01:19:05.785/9910756.951 Oracle Coherence EE 3.5.2/463 <Error> (thread=Termination Thread, member=33): Full Thread Dump
    Thread[Reference Handler,10,system]
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:485)
    java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)
    Thread[DistributedCache,5,Cluster]
    java.nio.Bits.copyToByteArray(Native Method)
    java.nio.DirectByteBuffer.get(DirectByteBuffer.java:224)
    com.tangosol.io.nio.ByteBufferInputStream.read(ByteBufferInputStream.java:123)
    java.io.DataInputStream.readFully(DataInputStream.java:178)
    java.io.DataInputStream.readFully(DataInputStream.java:152)
    com.tangosol.util.Binary.readExternal(Binary.java:1066)
    com.tangosol.util.Binary.<init>(Binary.java:183)
    com.tangosol.io.nio.BinaryMap$Block.readValue(BinaryMap.java:4304)
    com.tangosol.io.nio.BinaryMap$Block.getValue(BinaryMap.java:4130)
    com.tangosol.io.nio.BinaryMap.get(BinaryMap.java:377)
    com.tangosol.io.nio.BinaryMapStore.load(BinaryMapStore.java:64)
    com.tangosol.net.cache.SerializationPagedCache$WrapperBinaryStore.load(SerializationPagedCache.java:1547)
    com.tangosol.net.cache.SerializationPagedCache$PagedBinaryStore.load(SerializationPagedCache.java:1097)
    com.tangosol.net.cache.SerializationMap.get(SerializationMap.java:121)
    com.tangosol.net.cache.SerializationPagedCache.get(SerializationPagedCache.java:247)
    com.tangosol.net.cache.AbstractSerializationCache$1.getOldValue(AbstractSerializationCache.java:315)
    com.tangosol.net.cache.OverflowMap$Status.registerBackEvent(OverflowMap.java:4210)
    com.tangosol.net.cache.OverflowMap.onBackEvent(OverflowMap.java:2316)
    com.tangosol.net.cache.OverflowMap$BackMapListener.onMapEvent(OverflowMap.java:4544)
    com.tangosol.util.MultiplexingMapListener.entryDeleted(MultiplexingMapListener.java:49)
    com.tangosol.util.MapEvent.dispatch(MapEvent.java:214)
    com.tangosol.util.MapEvent.dispatch(MapEvent.java:166)
    com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
    com.tangosol.net.cache.AbstractSerializationCache.dispatchEvent(AbstractSerializationCache.java:338)
    com.tangosol.net.cache.AbstractSerializationCache.dispatchPendingEvent(AbstractSerializationCache.java:321)
    com.tangosol.net.cache.AbstractSerializationCache.removeBlind(AbstractSerializationCache.java:155)
    com.tangosol.net.cache.SerializationPagedCache.removeBlind(SerializationPagedCache.java:348)
    com.tangosol.util.AbstractKeyBasedMap$KeySet.remove(AbstractKeyBasedMap.java:556)
    com.tangosol.net.cache.OverflowMap.removeInternal(OverflowMap.java:1299)
    com.tangosol.net.cache.OverflowMap.remove(OverflowMap.java:380)
    com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.clear(DistributedCache.CDB:24)
    com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onClearRequest(DistributedCache.CDB:32)
    com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ClearRequest.run(DistributedCache.CDB:1)
    com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheRequest.onReceived(DistributedCacheRequest.CDB:12)
    com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
    com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:136)
    com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    java.lang.Thread.run(Thread.java:619)
    Thread[Finalizer,8,system]
    java.lang.Object.wait(Native Method)
    java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
    java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
    java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159)
    Thread[PacketReceiver,7,Cluster]
    java.lang.Object.wait(Native Method)
    com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
    com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketReceiver.onWait(PacketReceiver.CDB:2)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
    java.lang.Thread.run(Thread.java:619)
    Thread[RMI TCP Accept-0,5,system]
    java.net.PlainSocketImpl.socketAccept(Native Method)
    java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)
    java.net.ServerSocket.implAccept(ServerSocket.java:453)
    java.net.ServerSocket.accept(ServerSocket.java:421)
    sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:369)
    sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:341)
    java.lang.Thread.run(Thread.java:619)
    Thread[PacketSpeaker,8,Cluster]
    java.lang.Object.wait(Native Method)
    com.tangosol.coherence.component.util.queue.ConcurrentQueue.waitForEntry(ConcurrentQueue.CDB:16)
    com.tangosol.coherence.component.util.queue.ConcurrentQueue.remove(ConcurrentQueue.CDB:7)
    com.tangosol.coherence.component.util.Queue.remove(Queue.CDB:1)
    com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketSpeaker.onNotify(PacketSpeaker.CDB:62)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    java.lang.Thread.run(Thread.java:619)
    Thread[Logger@9216774 3.5.2/463,3,main]
    java.lang.Object.wait(Native Method)
    com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
    java.lang.Thread.run(Thread.java:619)
    Thread[PacketListener1,8,Cluster]
    java.net.PlainDatagramSocketImpl.receive0(Native Method)
    java.net.PlainDatagramSocketImpl.receive(PlainDatagramSocketImpl.java:136)
    java.net.DatagramSocket.receive(DatagramSocket.java:712)
    com.tangosol.coherence.component.net.socket.UdpSocket.receive(UdpSocket.CDB:20)
    com.tangosol.coherence.component.net.UdpPacket.receive(UdpPacket.CDB:4)
    com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketListener.onNotify(PacketListener.CDB:19)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    java.lang.Thread.run(Thread.java:619)
    Thread[main,5,main]
    java.lang.Object.wait(Native Method)
    com.tangosol.net.DefaultCacheServer.main(DefaultCacheServer.java:79)
    com.networkfleet.cacheserver.Launcher.main(Launcher.java:122)
    Thread[Signal Dispatcher,9,system]
    Thread[RMI TCP Accept-41006,5,system]
    java.net.PlainSocketImpl.socketAccept(Native Method)
    java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)
    java.net.ServerSocket.implAccept(ServerSocket.java:453)
    java.net.ServerSocket.accept(ServerSocket.java:421)
    sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:369)
    sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:341)
    java.lang.Thread.run(Thread.java:619)
    ThreadCluster
    java.lang.Object.wait(Native Method)
    com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
    com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onWait(Grid.CDB:9)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
    java.lang.Thread.run(Thread.java:619)
    Thread[TcpRingListener,6,Cluster]
    java.net.PlainSocketImpl.socketAccept(Native Method)
    java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)
    java.net.ServerSocket.implAccept(ServerSocket.java:453)
    java.net.ServerSocket.accept(ServerSocket.java:421)
    com.tangosol.coherence.component.net.socket.TcpSocketAccepter.accept(TcpSocketAccepter.CDB:18)
    com.tangosol.coherence.component.util.daemon.TcpRingListener.acceptConnection(TcpRingListener.CDB:10)
    com.tangosol.coherence.component.util.daemon.TcpRingListener.onNotify(TcpRingListener.CDB:9)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    java.lang.Thread.run(Thread.java:619)
    Thread[PacketPublisher,6,Cluster]
    java.lang.Object.wait(Native Method)
    com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
    com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketPublisher.onWait(PacketPublisher.CDB:2)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
    java.lang.Thread.run(Thread.java:619)
    Thread[RMI TCP Accept-0,5,system]
    java.net.PlainSocketImpl.socketAccept(Native Method)
    java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)
    java.net.ServerSocket.implAccept(ServerSocket.java:453)
    java.net.ServerSocket.accept(ServerSocket.java:421)
    sun.management.jmxremote.LocalRMIServerSocketFactory$1.accept(LocalRMIServerSocketFactory.java:34)
    sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:369)
    sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:341)
    java.lang.Thread.run(Thread.java:619)
    Thread[PacketListenerN,8,Cluster]
    java.net.PlainDatagramSocketImpl.receive0(Native Method)
    java.net.PlainDatagramSocketImpl.receive(PlainDatagramSocketImpl.java:136)
    java.net.DatagramSocket.receive(DatagramSocket.java:712)
    com.tangosol.coherence.component.net.socket.UdpSocket.receive(UdpSocket.CDB:20)
    com.tangosol.coherence.component.net.UdpPacket.receive(UdpPacket.CDB:4)
    com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketListener.onNotify(PacketListener.CDB:19)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    java.lang.Thread.run(Thread.java:619)
    Thread[Invocation:Management,5,Cluster]
    java.lang.Object.wait(Native Method)
    com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
    com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onWait(Grid.CDB:9)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
    java.lang.Thread.run(Thread.java:619)
    Thread[DistributedCache:PofDistributedCache,5,Cluster]
    java.lang.Object.wait(Native Method)
    com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
    com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onWait(Grid.CDB:9)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
    java.lang.Thread.run(Thread.java:619)
    Thread[Invocation:Management:EventDispatcher,5,Cluster]
    java.lang.Object.wait(Native Method)
    com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
    com.tangosol.coherence.component.util.daemon.queueProcessor.Service$EventDispatcher.onWait(Service.CDB:7)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
    java.lang.Thread.run(Thread.java:619)
    Thread[Termination Thread,5,Cluster]
    java.lang.Thread.dumpThreads(Native Method)
    java.lang.Thread.getAllStackTraces(Thread.java:1487)
    sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    java.lang.reflect.Method.invoke(Method.java:597)
    com.tangosol.net.GuardSupport.logStackTraces(GuardSupport.java:791)
    com.tangosol.coherence.component.net.Cluster.onServiceFailed(Cluster.CDB:5)
    com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid$Guard.terminate(Grid.CDB:17)
    com.tangosol.net.GuardSupport$2.run(GuardSupport.java:652)
    java.lang.Thread.run(Thread.java:619)
    2011-01-22 01:19:06,738 Coherence Logger@9216774 3.5.2/463 INFO 2011-01-22 01:19:06.738/9910757.904 Oracle Coherence EE 3.5.2/463 <Info> (thread=main, member=33): Restarting Service: DistributedCache
    2011-01-22 01:19:06,738 Coherence Logger@9216774 3.5.2/463 ERROR 2011-01-22 01:19:06.738/9910757.904 Oracle Coherence EE 3.5.2/463 <Error> (thread=main, member=33): Failed to restart services: java.lang.IllegalStateException: Failed to unregister: DistributedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=16, BackupPartitions=16}

    Hi
    It seems the problem in this case is the call to clear(), which tries to load every entry stored in the overflow scheme in order to emit potential cache events to listeners. This probably requires much more memory than there is Java heap available, hence the OOM.
    Our recommendation in this case is to call destroy() instead, since this bypasses the event firing.
    /Charlie
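    For what it's worth, a minimal sketch of that workaround (the cache name here is hypothetical):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class DropCache {
        public static void main(String[] args) {
            // Hypothetical cache name; use the name of the overflow-backed cache.
            NamedCache cache = CacheFactory.getCache("example-cache");
            // clear() pages every overflow entry back into the heap so that
            // entryDeleted events can be fired to listeners, which is what
            // exhausted the heap above. destroy() releases the cache's
            // storage cluster-wide without firing per-entry events.
            cache.destroy();
            CacheFactory.shutdown();
        }
    }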

Maybe you are looking for

  • Why is there a discrepancy in time between iCal on the computer and iCal on the iPhone?

    If I enter an event into iCal from the iPhone, it will be displayed three hours earlier when I access iCal from the computer.  If I enter the event from the computer, it displays three hours later in iCal on the iPhone.  The time zone is set correctl

  • LAN Network Card Ethernet Driver

    Computer Crashed, Vista.  Loaded Windows 7, Realtek PCIe FE GBE Family Controller Series Driver will not load on Windows 7. Is there a driver for this Network Card for Windows 7 ?

  • Cursor can't find newly commited Master row

    Hi all, This is my current headache. In my form I successfully COMMIT a row in a master table. Later in the same form, same session, the user wants to add a row to a detail table. But before the user can do that, I first check to see if the master ex

  • Triggers not Fire in Logical Standby

    Hi, I'still have this question even after reading some post in this forum concerning creating Triggers in a Logical Standby Let's see this Case. MY Primary Table A Trigger A1 Fire into > Table A1 MY Logical - Just Created as New Table A Trigger A1 Fi

  • How use Import Utility for Migration to 10g

    Hi everybody, I want to upgrade Oracle 8 - 9i Databases to 10gR2 using Export/Import Method. Do I use FULL Import or TOUSER Import Method? 1) The problem of a FULL import is the next: A FULL IMPORT imports objects from SYSTEM, OUTLN, DBSNMP (and this