Problems with BPEL PM in HA environment

Hi guys, I need your feedback on a case: an HA SOA Suite (BPEL PM) installation for a production environment.
Any recommendations to apply before opening a TAR would be appreciated.
Randomly, a selectionFailure exception occurs while reading the request payload during BPEL instance execution when both nodes of the BPEL PM cluster are up and running.
PLATFORM:
Linux x86
Enterprise Linux Enterprise Linux Server 5.2
PRODUCT:
Oracle SOA Suite 10gR3, 10.1.3 release, patched to 10.1.3.4.0
DATABASE VERSION:
10gR2 10.2.0.4 (patched), using a RAC database as the dehydration store.
Preparation for production.
INSTANCE NAME(S) AND TYPE OF SYSTEM(S):
BPEL PM cluster Active-Active on J2EE and Web Server cluster OC4J_SOA (OAS)
Using the same "staticports.ini" values during the installation process.
Using the same IP & port for multicast in OAS & BPEL PM (jgroups-protocol.xml and opmn.xml - "*225.4.5.6:5000").
Planning for a hardware load balancer as the front end for the BPEL PM cluster nodes.
For now, for test purposes, collaxa-config.xml has the following values, since no load balancer is in place yet:
soapCallbackUrl & soapServerUrl set to the corresponding Web Server nodes:
Node1: soapCallbackUrl & soapServerUrl -> http://tvsoaapp1.com.mx:7777
Node2: soapCallbackUrl & soapServerUrl -> http://tvsoaapp2.com.mx:7777
DETAILED PROBLEM STATEMENT:
When two BPEL PM instances are up and running in the active-active cluster topology:
Initiating a test instance through BPEL Console (10.1.3.4.0) by filling in the parameters via the HTML representation (HTML Form) works fine every time, as does any process that does not use an XPath expression to read the request variable.
ISSUE 1:
Initiating a test instance through BPEL Console (10.1.3.4.0) by pasting the XML representation (XML Source) of the input message into the text area fails about 50% of the time, on node 1 or node 2, with the following exception message for any XPath instruction that reads the input request variable.
com.oracle.bpel.client.BPELFault: faultName: {{http://schemas.xmlsoap.org/ws/2003/03/business-process/}selectionFailure}
messageType: {}
parts: {{summary=<summary>empty variable/expression result.
xpath variable/expression expression "/client:wsGLBCatMultiplesProcessRequest/client:catalogo" is empty at line 79, when attempting reading/copying it.
Please make sure the variable/expression result "/client:wsGLBCatMultiplesProcessRequest/client:catalogo" is not empty.
Possible reasons behind this problems are: some xml elements/attributes are optional or the xml data is invalid according to XML Schema.
To verify whether XML data received by a process is valid, user can turn on validateXML switch at the domain administration page.
</summary>
at com.collaxa.cube.ejb.impl.DeliveryBean.request(DeliveryBean.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
WORKAROUNDS USED:
Everything works fine when:
. Only one BPEL PM instance (node) is up and the other is down - node 1 or node 2 alone works fine.
. The BPEL process does not read the payload (no XPath expression reads the request variable).
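For what it's worth, the selectionFailure semantics can be reproduced outside the engine: an XPath expression over an element the payload does not contain simply selects an empty node-set, which is exactly what the fault text complains about. A small stdlib Java sketch (the payload and element names below are made up, not the real wsGLBCatMultiplesProcessRequest schema):

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class EmptySelectionDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical request payload; note there is no <catalogo> child element.
        String payload = "<request><otherField>x</otherField></request>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(payload.getBytes("UTF-8")));
        NodeList hits = (NodeList) XPathFactory.newInstance().newXPath()
                .evaluate("/request/catalogo", doc, XPathConstants.NODESET);
        // An empty node-set is what makes a BPEL assign/copy raise
        // bpws:selectionFailure when the <from> expression selects nothing.
        System.out.println("matches=" + hits.getLength()); // prints matches=0
    }
}
```

If the same payload succeeds via the HTML form but fails when pasted as XML Source, it is worth diffing the two messages the console actually sends - a missing or wrong namespace prefix on the pasted XML produces exactly this kind of empty selection.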
ISSUE 2: Separately, on a cluster that is up and running today, when testing any BPEL service through its endpoint URL in the web browser, I see the form still showing fields that belong to the previously tested endpoint, so parameters from different BPEL services get mixed. The execution then fails because the XPath tries to read a payload that does not exist in the current BPEL process.
Could this be a bug in the BPEL Console UI or OHS, or is there some configuration value I need to set? Please help.

Hi,
Can you please call this BPEL process from another BPEL process, just to be sure that this is not a BPEL Console UI bug?
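Besides calling it from another BPEL process, the console UI can be taken out of the picture by POSTing the XML payload straight to the process endpoint with a plain HTTP client. A sketch (the namespace, operation, and endpoint URL below are placeholders, not taken from the actual process WSDL):

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class DirectInvokeSketch {
    static String buildEnvelope(String payload) {
        return "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
             + "<soap:Body>" + payload + "</soap:Body></soap:Envelope>";
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical input message for the process under test.
        String payload = "<ns1:wsRequest xmlns:ns1=\"http://example.com/ns\">"
                       + "<ns1:catalogo>data</ns1:catalogo></ns1:wsRequest>";
        String envelope = buildEnvelope(payload);
        // Sanity-check that the envelope is well-formed XML before sending it.
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder()
                .parse(new ByteArrayInputStream(envelope.getBytes("UTF-8")));
        System.out.println(doc.getDocumentElement().getLocalName()); // prints Envelope
        // To actually send it, substitute the real endpoint from the WSDL
        // (e.g. http://tvsoaapp1.com.mx:7777/orabpel/default/<process>):
        // HttpURLConnection c = (HttpURLConnection)
        //         new URL("http://host:7777/orabpel/default/MyProcess").openConnection();
        // c.setRequestMethod("POST");
        // c.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        // c.setRequestProperty("SOAPAction", "process");
        // c.setDoOutput(true);
        // c.getOutputStream().write(envelope.getBytes("UTF-8"));
        // System.out.println("HTTP " + c.getResponseCode());
    }
}
```

If a hand-built request with the exact same payload succeeds on both nodes, the mixed-parameter behavior is confined to the console UI.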

Similar Messages

  • Problem with instant message in clustered environment

    Hello, I have a problem with the Instant Message service.
    We need to use it in our production environment (a clustered environment with a central instance, two dialog instances, and two web dispatchers). Some months ago we tested it in the development environment (not clustered, just a single system) and it worked fine.
    So I did the same configuration on the production environment, but it did not work.
    However, if I access my portal (production env.) through the central instance (avoiding the dispatchers), the Instant Message service works.
    I think it may be a web dispatcher configuration problem; in its logs I found the message:
    "<i>[Thr 3700] *** ERROR => htmlEncode: called with empty string [icpif.cpp 847]</i>"
    I've repeated the same configuration done in the dev. env. (no dispatcher) on the prod. env. (with 2 dispatchers). Is it possible that I'm missing some configuration?
    Could someone help me?
    Best regards

    Hi Alessandro,
    unfortunately I got the same problem but I haven't found the solution yet.
    Hoping someone will help us.
    Regards  Nicola

  • Problem with creating Kestore  in DOS Environment

    Hi
    I am new to Java security and am trying to generate keys using the keytool command, but I'm having a problem.
    I am executing the following keytool command from the DOS prompt:
    c:\>keytool -genkey -keyalg "RSA" -sigalg "SHA1withRSA" -keystore myKeystore -storepass abcdef -alias xyz -keypass wxyzabc
    When I execute the above command it displays the keytool help. I suspect the problem is with the "-keystore myKeystore" parameter; if I remove that parameter the command executes.
    1) If keys can be created without the -keystore parameter, where should I look for the generated keys? Please help me out.
    2) How can I create a certification request using the keys generated above?
    With regards,
    jl.

    c:\>keytool -genkey -keyalg "RSA" -sigalg "SHA1withRSA" -keystore myKeystore -storepass abcdef -alias xyz -keypass wxyzabc
    On Win2K under 1.4.2, that exact line works just fine for keytool - it goes right to asking me for the identifying information. What does your environment look like?
    2) how can i create Certification request by using above generated keys.
    c:\>keytool -certreq -keystore myKeystore -storepass abcdef -alias xyz -keypass wxyzabc -file mycertreq.csr
    Grant
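On the original question of where the keys end up: with a relative -keystore name, keytool writes the file into the current working directory; with the option omitted, -genkey defaults to a file called .keystore in your home directory. The same empty store can be created programmatically with the stdlib, which makes the location easy to verify (file name and password below are placeholders):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.security.KeyStore;

public class CreateKeystore {
    public static void main(String[] args) throws Exception {
        char[] pass = "abcdef".toCharArray();      // placeholder password
        KeyStore ks = KeyStore.getInstance("JKS"); // keytool's classic default type
        ks.load(null, pass);                       // initialize an empty store
        File f = new File("myKeystore");
        try (FileOutputStream out = new FileOutputStream(f)) {
            ks.store(out, pass);
        }
        // The file lands in the process's working directory - the same place
        // "keytool -keystore myKeystore" with a relative path would put it.
        System.out.println(f.getAbsolutePath() + " exists=" + f.exists());
    }
}
```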

  • Problems with BPEL API calling a BPEL workflow

    Hello,
    I have copied my text from the Application Server forum to this forum because it is a better place for my problem.
    I have a problem with the BPEL API. I try to invoke a BPEL workflow with the BPEL API. It works up to the point where I call the BPEL workflow with the command "deliveryService.post(processName, action, nm);".
    When this command is processed I get the following error:
    java.lang.NoClassDefFoundError: javax/ejb/EJBException
    It looks like I am missing a jar file, but in my development tool and at compilation I get no errors or warnings - only when the command above is called.
    The initialization of the deliveryService works fine.
    The only point where I can imagine an error is the definition of the JNDI settings for the remote connection to BPEL. There I have the following entries:
    jndiProviderUrl = "http:ormi://server:port:instance/orabpel";
    jndiFactory = "oracle.j2ee.rmi.RMIInitialContextFactory";
    jndiUsername = "xxx";
    jndiPassword = "yyy";
    I get the following error message:
    java.lang.NoClassDefFoundError: javax/ejb/EJBException
    at com.oracle.bpel.client.util.ExceptionUtils.handleServerException(ExceptionUtils.java:76)
    at com.oracle.bpel.client.delivery.DeliveryService.getDeliveryBean(DeliveryService.java:254)
    at com.oracle.bpel.client.delivery.DeliveryService.post(DeliveryService.java:174)
    at com.oracle.bpel.client.delivery.DeliveryService.post(DeliveryService.java:149)
    Can someone please tell me what I am doing wrong?
    BS
    PS: I used the following tutorial to create my code for calling BPEL:
    http://www.oracle.com/technology/products/ias/bpel/pdf/orabpel-Tutorial7-InvokingBPELProcesses.pdf

    I got a few steps further, but now I don't have any clue. From this message I think my provider URL is wrong, but I tried both of the following and always got the same error message:
    jndiProviderUrl = "opnm:ormi://amy:6003/orabpel"
    jndiProviderUrl = "opnm:ormi://amy:6003:oc4j_soa/orabpel"
    java.lang.Exception: Creation of the "ejb/collaxa/system/DeliveryBean" bean was unsuccessful. The following exception was reported: "javax.naming.NamingException: Invalid provider URL
         at com.evermind.server.rmi.RMILocation.createRMILocation(RMILocation.java:80)
         at com.evermind.server.rmi.RMILocation.createRMILocation(RMILocation.java:57)
         at com.evermind.server.rmi.RMIClient.getLocations(RMIClient.java:661)
         at com.evermind.server.rmi.RMIClient.getDomain(RMIClient.java:640)
         at com.evermind.server.rmi.RMIClient.getContext(RMIClient.java:534)
         at com.evermind.server.rmi.RMIInitialContext.get(RMIInitialContext.java:44)
         at oracle.j2ee.rmi.RMIInitialContextFactory.getInitialContext(RMIInitialContextFactory.java:45)
         at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:667)
         at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:288)
         at javax.naming.InitialContext.init(InitialContext.java:223)
         at javax.naming.InitialContext.<init>(InitialContext.java:197)
         at com.oracle.bpel.client.util.BeanRegistry.lookupDeliveryBean(BeanRegistry.java:277)
         at com.oracle.bpel.client.delivery.DeliveryService.getDeliveryBean(DeliveryService.java:250)
         at com.oracle.bpel.client.delivery.DeliveryService.post(DeliveryService.java:174)
         at com.oracle.bpel.client.delivery.DeliveryService.post(DeliveryService.java:149)
    Can someone please help me?
    What is wrong with my provider URL?
    BS
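Two things stand out in the traces above. The original NoClassDefFoundError means the client classpath is missing a jar that contains javax.ejb (in 10.1.3 the OC4J client jars, e.g. oc4jclient.jar, carry it), and the later "Invalid provider URL" is consistent with the scheme being spelled opnm instead of opmn. A sketch of the standard JNDI environment setup as it would be handed to the BPEL Locator/DeliveryService (host, port, and credentials are placeholders):

```java
import java.util.Hashtable;
import javax.naming.Context;

public class BpelJndiEnv {
    public static Hashtable<String, String> buildEnv() {
        Hashtable<String, String> env = new Hashtable<>();
        // Note the scheme: opmn (Oracle Process Manager and Notification), not opnm.
        env.put(Context.PROVIDER_URL, "opmn:ormi://amy:6003:oc4j_soa/orabpel");
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "oracle.j2ee.rmi.RMIInitialContextFactory");
        env.put(Context.SECURITY_PRINCIPAL, "oc4jadmin");   // placeholder user
        env.put(Context.SECURITY_CREDENTIALS, "welcome1");  // placeholder password
        return env;
    }

    public static void main(String[] args) {
        // The actual lookup still requires the OC4J client jars on the classpath;
        // this only shows the shape of the environment.
        buildEnv().forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```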

  • Problem with BPEL Console

    Hi all,
    We are facing a peculiar type of problem on BPEL Console.
    Scenario: I have created an empty BPEL process. The process gets initiated by a CSV file present in a specific folder (with the help of the file adapter), and subsequently the values from that CSV file are used as input parameters for calling a procedure (with the help of the DB adapter). The success or failure of the BPEL process is reported to a particular set of people through email. Everything is working perfectly fine, including the process instance and the email part.
    Problem: We are not able to see any instance created on the BPEL Console if we pass more than 2000-3000 records in a CSV file. But the process is running, because the procedure is doing its work and we are getting the success or failure email too.
    Point to note: for fewer than 2000 records it works perfectly fine.
    Any help in this case will be helpful for our team.
    Regards,
    Abhi...

    This sounds like a bug in the administrative console. It would be interesting to see what happens if you query the database directly:
    There is a whitepaper that discusses that:
    http://www.oracle.com/technology/pub/articles/bpel_cookbook/blanvalet.html
    The relevant code for you is:
    WhereCondition where = WhereConditionHelper.whereInstancesStale();
    IInstanceHandle[] instances = getLocator(domainId, domainPassword).listInstances(where);
    The BPEL Process Manager API documentation can be found in [bpelhome]\integration\orabpel\docs\apidocs\index.html
    If this returns all instances, even when there are more than 2000, you know you have hit a bug in the management console.
    Hope this helps,
    Lonneke

  • Major Problem with importing application from QA environment to PROD

    We did our first ever full move of our application from our QA server to our PROD server tonight. First, I exported the current PROD application version as a backup. Then I did an export of the application on QA and imported it into PROD using the same application ID as it had previously (which is the same as it was in QA). When the installation portion of the import was running, I received an error: ORA-20001: GET_BLOCK Error. ORA-20001: Execution of the statement was unsuccessful. ORA-00001: unique constraint (APEX_030200.WWV_FLOW_WORKSHEET_COND_PK) violated
    begin wwv_flow_api.create_worksheet_condition( p_id => 2700222847807840+wwv_flow_api.g_id_offset, p_flow_id => wwv_flow.g_flow_id, p_page_id => 42, p_worksheet_id => 7285923079312021+wwv_flow_api.g_id_offset, p_report_id => 2694211409728899+wwv_flow_api.g_id_offset, p_condition_type ...
    We have been unable to get the app installed using the same application ID, so we installed it using a new application ID. However, we have now lost all of the users' saved interactive reports.
    So, first, we need to know how to get the users' saved interactive reports into the new application ID. Second, we need to know what the proper procedure should have been for exporting our QA application and importing it into PROD without losing the saved interactive reports. I hope someone can help us out very quickly - the natives will be very restless tomorrow morning when they find out that they don't have their saved reports.
    Thanks in advance for any help you can provide!
    Dale

    Thanks for your reply Scott.
    The problem is that the saved reports are not in the application we are exporting; they are in the one we are trying to update. I'll try to provide a more detailed description of what we are doing.
    We have two separate server environments: one for our production Oracle databases and our production APEX workspaces and applications, the other for our development and QA Oracle databases and APEX workspaces and applications. Both environments run the same version of Oracle. On the production environment, we have a workspace called "payorprofile" and, within that workspace, an application with application ID 126. That is where our users have been happily creating their saved interactive reports for the past 4 months. On the development environment, we have a workspace called "payorprofile" and, within that workspace, an application with application ID 126.
    Now we have a new version of application 126 on the development environment that we need to promote to production. This newly QA'd version does NOT have the users' saved reports, but it has all of the new and changed pages, LOVs, lists, breadcrumbs, etc. We needed to merge the new pages, etc. from dev with the data and users' saved reports on prod. What we did was export the dev version without saved reports and then import it into the prod system using the same application ID. It asked if we wanted to overlay the current application 126 with the new one and we said yes. During the install step we got the error noted in this post.
    The only thing we knew to do was to import the dev application as a new application ID (we chose 326). The application works fine, but, of course, we don't have the saved reports. Now we need to get the saved reports into the new application 326 - and I think with the help of some articles and posts on the web we can do that. However, we need to know what to do differently the next time we are ready to promote a version of the application to production.
    Thanks,
    Dale

  • Problem with BPEL Designer Plugin in Eclipse

    I am not able to open the BPEL designer in Eclipse.
    I am using Version: 3.0.1 with the Build id: 200409161125
    When I select New Project, I cannot see the "Oracle BPEL Project" option in the New Project wizard.
    Is there any way to integrate the plug-in into Eclipse? Please help.
    /Kiran.

    Hi Kiran,
    Eclipse needs to detect new plugins. Usually you only need to launch Eclipse with the 'clean' optional parameter (run "eclipse.exe -clean").
    If the setup of the BPEL Designer plug-in has completed properly, you should see the 'Oracle BPEL Project' item under the 'New Project' wizard menu.
    Hope this helps.
    Edmond Cissé.

  • Problems with BPEL Designer ERROR build.xml:28: ORABPEL-00000

    I have the following problem - could someone help me, please?
    I'm using the Oracle BPEL Designer plugin bpelz_0.9.10_win32.exe, installed manually on Eclipse 3.2 SDK.
    Buildfile: C:\Programme\eclipse\workspace\SyncHelloWorld\build.xml
    main:
    [bpelc] bpelc> validating "C:\Programme\eclipse\workspace\SyncHelloWorld\SyncHelloWorld.bpel" ...
    BUILD FAILED
    C:\Programme\eclipse\workspace\SyncHelloWorld\build.xml:28: ORABPEL-00000
    Exception not handled by the Collaxa Cube system.
    An unhandled exception has been thrown in the Collaxa Cube system. The exception reported is: "java.lang.NoClassDefFoundError: sun/tools/javac/Main
         at com.collaxa.cube.util.JavaHelper.javac(JavaHelper.java:234)
         at com.collaxa.cube.util.JavaHelper.javac(JavaHelper.java:204)
         at com.collaxa.cube.lang.compiler.CubeProcessor.compileGeneratedClasses(CubeProcessor.java:1023)
         at com.collaxa.cube.lang.compiler.CubeProcessor.transformClientSide(CubeProcessor.java:579)
         at com.collaxa.cube.lang.compiler.CubeProcessor.transformClientSide(CubeProcessor.java:467)
         at com.collaxa.cube.lang.compiler.CubeParserHelper.compileClientSide(CubeParserHelper.java:68)
         at com.collaxa.cube.ant.taskdefs.Bpelc.execute(Bpelc.java:543)
         at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:275)
         at org.apache.tools.ant.Task.perform(Task.java:364)
         at org.apache.tools.ant.Target.execute(Target.java:341)
         at org.apache.tools.ant.Target.performTasks(Target.java:369)
         at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1216)
         at org.apache.tools.ant.Project.executeTarget(Project.java:1185)
         at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:40)
         at org.eclipse.ant.internal.core.ant.EclipseDefaultExecutor.executeTargets(EclipseDefaultExecutor.java:32)
         at org.apache.tools.ant.Project.executeTargets(Project.java:1068)
         at org.eclipse.ant.internal.core.ant.InternalAntRunner.run(InternalAntRunner.java:706)
         at org.eclipse.ant.internal.core.ant.InternalAntRunner.run(InternalAntRunner.java:457)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
         at java.lang.reflect.Method.invoke(Unknown Source)
         at org.eclipse.ant.core.AntRunner.run(AntRunner.java:356)
         at org.eclipse.ant.internal.ui.launchConfigurations.AntLaunchDelegate$1.run(AntLaunchDelegate.java:230)
         at java.lang.Thread.run(Unknown Source)
    Exception: java.lang.NoClassDefFoundError: sun/tools/javac/Main
    Handled As: com.collaxa.cube.lang.compiler.CubeCException

    Hi
    Where does your JAVA_HOME variable point to - a JRE or a JDK? This looks like Eclipse can't find javac: sun.tools.javac.Main lives in the JDK's tools.jar, which a plain JRE does not ship.
    ^_^
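A quick way to confirm which Java runtime Eclipse/Ant actually resolves, and whether it is a full JDK, is to print the runtime properties; a stdlib sketch (the tools.jar check only applies to the old pre-Java-9 JDK layout this thread is about):

```java
import java.io.File;

public class WhichJava {
    public static void main(String[] args) {
        String home = System.getProperty("java.home");
        System.out.println("java.home    = " + home);
        System.out.println("java.version = " + System.getProperty("java.version"));
        // On the old JDK layout, java.home points at the jre/ subdirectory,
        // so tools.jar (which holds sun.tools.javac.Main) lives one level up.
        File toolsJar = new File(new File(home).getParentFile(), "lib/tools.jar");
        System.out.println("tools.jar present: " + toolsJar.exists());
    }
}
```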

  • DAQmx non-cumulative buffered edge counting (like the PMT TTL problem) with PCI6221 in C program environment

    I have a PCI-6221. I am programming in C/C++ and DAQmx. My application is much like the previous thread counting the number of asynchronous TTL pulses from a PMT in some user specified time interval. The time interval is typically around 1 msec and the signal rates are around 1-10 kHz. So I expect to count 0 to 10 pulses per timebin. In traditional DAQ this was called Non-cumulative buffered edge counting.
    To get a clock at about 1kHz, I first create an "analogue in" task that samples at 1kHz. The only thing I use this task for is to route its sample clock to the gate of the counter. This defines my time interval as 1 msec.
    I then create the counter task. I am using count edges. So I think counter0 gate = PFI9 and source = PFI8.
    DAQmxCreateCICountEdgesChan(*taskHandle,chan,"",edge,initialCount,countDirection);
    DAQmxCfgSampClkTiming(*taskHandle,clockSource,rate,DAQmx_Val_Rising,DAQmx_Val_ContSamps,samplesPerChan);
    where clockSource="/Dev1/ai/SampleClock"
    I apply my signal to PFI8.
    After starting the task, I read an array of values with
    DAQmxReadCounterU32(taskHandle,samplesToRead,10.0,data,samplesToRead,read,NULL);
    I expect the array data to have values ranging from 0-10 for PMT input with an average rate of about 10 kHz. The trouble is that the array data does not seem to make sense. I get constantly increasing values. For very low input frequency, it just increments by one per array member.
    More troublesome, when I hold PFI8 at ground, so that I expect zero pulses to be counted at the source for each edge at the gate that occur 1 per msec, I get a timeout error. I really must be able to count zero events per bin.
    Am I doing something obviously wrong?

    Hello,
    The behavior of non-cumulative event counting is equivalent to period measurement. Additionally, the timeouts when no edges occur on your event counting terminal are the result of duplicate count prevention. In DAQ 7.4, the default duplicate count prevention behavior should take care of this for you; if you cannot upgrade to DAQ 7.4, I'd do a search of the discussion forums for "duplicate count prevention". There are a number of previous posts about this attribute, including this one.
    If you aren't using the second counter on your device, you should be able to use the Period Measurement 2 Counter High Frequency measurement method to get your desired values. The "Measurement Time" attribute in this case would be the "sample clock" (1 kHz in your example). Then just read the data raw to get counts rather than scaled values.
    If you'd like to do it using the AI timing engine, configure a 1 counter period measurement task, use "clockSource" as your input terminal, and use your PMT TTL pulses as the "Counter Timebase Source". Then read the data raw, since the scaling will not be appropriate for your needs.
    I hope this helps!
    gus....

  • Problem with Concurrent managers in RAC environment

    We have 12.1.2 EBS with a 2-node RAC.
    The issue is that the concurrent managers need to be restarted if one of the database nodes goes down.
    EBS framework pages and forms all keep working even if one of the database nodes is down, but the CM needs a restart.
    Can someone let me know what the problem could be?

    Please see the following docs.
    Parallel Concurrent Processing (PCP) Running Request Behavior when Standard Manager Failed Over [ID 1476803.1]
    Concurrent Manager Functionality Not Working And PCP Failover Takes Long Inspite of Enabling DCD With Database Server [ID 438921.1]
    EBS - Technology Area - Webcast Recording 'E-Business Suite - RAC & Parallel Concurrent Processing (PCP)' [video] [ID 1359612.1]
    Thanks,
    Hussein

  • Two-fact assertion problem with BPEL and Rule Engine

    I have imported two different XSDs to build XML facts A and B. I have defined two rulesets, RuleSetA and RuleSetB, such that RuleSetA acts only on A and RuleSetB acts only on B.
    When I call RuleSetA from a BPEL decision service, it detects both facts A and B as related to RuleSetA and expects me to use both of them for assertion and watch!
    When this condition occurs, BPEL cannot create the EAR file for that decision service!
    I set the visible property of fact B to false in Rule Author, and then BPEL detects only fact A in the decision service creation wizard. I deployed the BPEL process, but when I run it the decision service raises this error: "undefined mypackage.B at line and column ..."
    It means that the rule engine expects me to assert fact B too, but it is irrelevant.
    I have checked the decision service deployment descriptor files, such as decisionservice.desc, and I cannot find any name or setting for fact B in the assertion list settings.
    Please help me understand what the problem is.

    Hi,
    I'm not sure i understand what you're trying to achieve, but let me explain some of the rule engine and decision service concepts.
    The Oracle Rule Engine has the concept of a rule repository (that is what decision service considers a rule engine connection). A rule repository is a container of rule dictionaries (in the decision service world we use the term catalog for this).
    It is the rule dictionary you're selecting when creating a decision service partnerlink.
    So now a rule dictionary comprises:
    - A (common) data model.
    The data model comprises the variables, fact types, and functions.
    - One or more rulesets.
    All of the rulesets share the same data model of the rule dictionary; there is no fact type model for a specific ruleset.
    Now, when you create a decision service partnerlink, we let you choose the ruleset
    that is being executed as part of the decision service. Then we query the rule dictionary
    data model for all the fact types that can potentially be used for executing the ruleset
    and assume that you have some knowledge about which fact types to assert and which
    one to query (watch).
    The thing is that the decision service partnerlink wizard doesn't know that your ruleset B uses fact type B and your ruleset A uses fact type A, since both fact types A and B are part of the data model of the rule dictionary. So, when you choose to execute ruleset B, it is your responsibility to select the appropriate fact types for assertion and query/watch (in this case you would choose fact type B for assertion, and possibly the same or some other fact type for watch).
    Best Regards,
    Ralf

  • Problem With UNIX command in ODI Environment.

    A procedure used in our package moves a file from one folder to another on a UNIX server.
    The procedure uses an OS command - os.system - to execute the UNIX command.
    The UNIX command is a grep command, which seems to work fine when executed alone. But when this procedure is executed, it fails.
    The following code raises an error and the procedure fails:
    if os.system(cmd) <> 0:
        raise "command fails", "see agent output for details"
    Whenever we try to reverse, we get a "file not found" error.
    We tried to execute the Jython code by placing the files in local directories. The same error appears whether the file is local or in a remote directory.
    This is the error we get:
    org.apache.bsf.BSFException: exception from Jython:
    Traceback (innermost last):
    File "<string>", line 20, in ?
    HeaderCmd failed:: See agent output for details
         at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:146)
         at com.sunopsis.dwg.codeinterpretor.k.a(k.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execSrcScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlS.treatTaskTrt(SnpSessTaskSqlS.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.i(e.java)
         at com.sunopsis.dwg.cmd.g.y(g.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Unknown Source)

    Hi,
    "The unix command is a grep command which seems to work fine when executed alone" -
    I guess you executed this from your UNIX server, so it works fine there. And when you try to execute it from outside (using ODI), it fails, right?
    Can you please check the permission settings for that script file (.sh)?
    Please try giving full permissions (specifically, execute permission) for all users connecting from other machines.
    This can be one of the reasons for the failure.
    Please let me know if this does not work.
    Regards,
    Rathish

  • Problem with uploading a file in Clustered Environment

    Hi,
    I have a problem with uploading a file in a clustered environment. I have an iview component which facilitates an upload action of an xml config file. The problem is that the upload of the modified XML file is reflected only in the central instance of the cluster and not in the dialog instances. The dialog instances hold the old config file.
    Is there any solution to upload the file to all the nodes in the cluster.
    Thanks
    Kiran

    Hi,
    This is a known problem with clustered environments. Remember that your portal component runs on just one dialog instance, and it doesn't automatically have access to the other nodes. However, there are some ways to get around this:
    1. Use KM to store the files. KM is a common repository for all application servers, so you needn't worry about synchronization.
    2. Use an external batch-oriented product (SureSync/robocopy) to sync the folders on the different DIs. You basically keep your existing portal component, but a batch job makes sure the upload folder is identical on all DIs (there is a slight delay, depending on how often you run the batch job).
    3. Store the files on a shared disk directly from the portal component.
    Cheers
    Dagfinn

  • Problem with App-V 5 in Xenapp 6.5 Feature Pack 2 /PVS Server 7.1 Environment

    Hi, I hope someone can help me. We are pretty desperate right now.
    At a customer of mine, we are using the App-V 5 SP2 RDS client together with a XenApp 6.5 Feature Pack 2 / PVS Server 7.1 environment.
    The golden image for PVS gets built with SCCM 2007. We "install" the App-V packages locally with PowerShell via an install wrapper.
    The wrapper uses, for example, the following command to "install" the package during the task sequence:
    Add-AppVClientPackage -Path "C:\_SMSTaskSequence\Packages\I010012B\x86-GE\001\Biella-Dominalprint(V)-2.0.appv" -DynamicDeploymentConfiguration "C:\_SMSTaskSequence\Packages\I010012B\x86-GE\001\Biella-Dominalprint(V)-2.0_DeploymentConfig.xml"
    | Publish-AppvClientPackage -Global | Mount-AppvClientPackage -Verbose >C:\Windows\Logs\Install\Biella-Dominalprint(V)-2.0_001.LOG
    The package was sequenced with the SP1 sequencer.
    The package contains a SystemX86 folder:
    C:\ProgramData\App-V\5D8F5879-923B-4401-BE13-F6E120F6569C\83FF2A00-5D2A-4A3B-82FD-3238D786BEE3\Root\VFS\SystemX86
    In this folder are a few files; one of them is tdbg5.ocx.
    The package works without a problem after the installation of the server has finished.
    But when the Citrix department does their magic with PVS and provisions the VHD to another server, the package no longer works. When we start the app, we get an error that tdbg5.ocx is missing.
    But when I use ACDC 2.0 to start the app, or modify the shortcut with the command /appvve:5D8F5879-923B-4401-BE13-F6E120F6569C_83FF2A00-5D2A-4A3B-82FD-3238D786BEE3, it works. It also works again when I remove the package from the provisioned server with PowerShell and reinstall it.
    We have this problem with two other apps, and both of them have files in the Root\VFS\SystemX86 folder.
    We also have a virtualized Java package. We have a shortcut like this:
    C:\ProgramData\App-V\1A971C0F-1C0F-430F-A2E2-2DF8487E6FF0\80A50994-7B30-4FC3-9DA4-1CE8B979B337\Root\VFS\ProgramFilesX86\Java\jre1.6.0.38\bin\javaws.exe
    http://whatever.ch/zpv/webstart/zpv-prod.jnlp
    If I start that link on the provisioned server, Java launches but it doesn't download the app to the cache. Here too, it works perfectly after the installation with SCCM, and it also works when I remove the package on the provisioned server and "install" it again with PowerShell. But here the /appvve workaround doesn't work.
    I know that we should ask this in the Citrix forums - the Citrix department of the customer is opening a case with Citrix and one with MS Support. But we are in a big hurry because the customer needs to start testing the apps for the pilot test very soon. So I hoped that someone in the community could help me out with this.
    Thank you for your answers
    Pascal

    Another alternative, if you don't want to use the Shared Content Store, is to point your PackageInstallationRoot at a non-provisioned (non-PVS) disk. In our environment we had exactly the same issue. Each of our Citrix servers has
    a non-provisioned D drive that we use for logs. Once I pointed the PackageInstallationRoot to this location, everything started working. The key here is to have a script run at server startup that deletes the PackageInstallationRoot,
    since the provisioned non-persistent disk reboots to a fresh image each time. This way you won't end up with stale content as applications are updated and/or removed.
    There are other things that also have to happen for this to work in these types of environments (related to roaming profile exclusions) that I can provide if you need them.
    Let me know if you want/need more details.
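    Redirecting the installation root and the startup cleanup described above might look something like this. A sketch only: the D:\AppVData path is an assumption for this environment, and the cleanup part would be saved as a script that runs at server startup (e.g. via a startup scheduled task or GPO).

    ```powershell
    # One-time: point the App-V client at the non-provisioned D drive
    # (D:\AppVData is an assumed path; requires the App-V 5.x client)
    Set-AppvClientConfiguration -PackageInstallationRoot "D:\AppVData"

    # startup-cleanup.ps1: clear the installation root on each boot so the
    # freshly provisioned image never serves stale package content
    $root = "D:\AppVData"
    if (Test-Path $root) {
        Remove-Item -Path "$root\*" -Recurse -Force -ErrorAction SilentlyContinue
    }
    ```

    After the cleanup runs, the App-V client simply re-streams or re-mounts packages into the empty root on first use, so nothing outdated survives an application update or removal.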

  • 10.6.8 Causes New Problem with Mixed Graphics Hardware Environment - Suggestions?

    Hi. After installing the latest Mac OS update (10.6.8), I can no longer boot my 5,1 Mac Pro into its Mac OS partition. An undefined display (and graphics card) now seems to be used as the default, rather than the supported display/hardware, so I cannot interact with the user interface; I simply get an inoperable blue screen on the correct display.
    I have two graphics cards in my Mac Pro: a supported GT120 (solely for booting/using Mac OS) and an unsupported HD6970 in slot 1 (used extensively for Windows 7 3D gaming). Until this latest OS release, I could successfully boot and use my computer in both the Mac OS and Windows environments. I had encountered other problems because of this mixed graphics environment, such as an inability to put the system to sleep without the Mac Pro PCI fan unnecessarily ramping up to full speed after waking, but it would always work correctly when first booted. (I have contacted Apple Support about this sleep issue, but they do not support my configuration.)
    Does any boot command exist to force the use of the supported graphics card? Or do kext files exist that would let Mac OS at least recognize (not use) the 6970 card? With the 6970 removed, I have no problems booting Mac OS, but I do NOT want to remove the card every time I want to switch to Mac OS. Does anyone have any suggestions?

    Thanks for the suggestions.
    First, I cannot swap graphics cards between PCI slots, because the 6970 only operates correctly in slot 1 (the high-speed graphics slot). I may have to revert to the prior OS release, but what about next month, with Lion?
    I am familiar with Netkas.org (I will probably try it or Groths.org next, thanks). One of my previous graphics cards was a flashed 4970. I do not have access to the prerelease Lion kext files that he suggests using. Full flash support does not yet appear to exist for the 6970, much less the HD 6990 or GTX 590 that it appears I may need next.
    Ideally, the Mac Pro firmware should ignore any attempted bus communication with an unrecognized PCI card. Currently, it appears that the unsupported 6970 can cause boot or run-time problems (I also get occasional freezes during Mac OS hardware interaction, such as writing a file to disk). Note, however, that I have never had any problems arising from my unsupported Windows 5.1 PCI sound card.
