Distributed processing using a PC

Does anyone know of a way I could use my PC as a processing node for powering plug-ins on my PowerBook with Logic? I am looking for a method which does not require MIDI syncing or having two soundcards.
Any ideas welcome, thanks!

I found a little app which does almost the same thing. It's called Wormhole, and it works very well. There is considerable latency in my setup, but it's great to be able to use PC plug-ins in a Logic session.
Would it be preferable to use Ethernet or FireWire to get maximum network bandwidth?

Similar Messages

  • Having trouble setting up Distributed Processing / Qmaster

    Hey Guys,
    I was able to (at one point) use Compressor 3 via Final Cut Studio with Qadministrator to allow distributed processing via either managed clusters or a QuickCluster - both worked on both Macs (a 2007 2.6GHz Core 2 Duo iMac and a 2009 3.06GHz Core 2 Duo iMac).
    I just upgraded to Final Cut Pro X and Compressor 4. As far as I can tell, the Qmaster settings now reside in the Compressor application itself, so I tried setting it up as best I could using both the QuickCluster and managed cluster options (very similar to the older Qmaster), but no dice. I can see my controller's cluster from the secondary iMac, but it always displays submissions as "Not Available" and it does not help with processing. I've tried everything I can think of - I used FCS Remover on the older version of Final Cut, I looked around via Terminal for any residual files & settings left over from before the FCPX install, and I've followed as many instructions as I could find (including Apple's official documentation on setting up a cluster in Compressor 4) - but NOTHING seems to work. I'm at a loss!!
    Unfortunately, any documentation or references to issues with Qmaster / distributed processing are related to older versions of Compressor and whatnot.
    Can anyone help or offer any suggestions? I have no idea how to get this working, and I'm having trouble finding anything useful online in my research.
    Perhaps someone is familiar with this and can help me set it up correctly? I'm very new to Final Cut in general, so I apologize in advance if I'm a bit slow, but I'll try to keep up!
    Thanks,

    In spite of all Apple's hype, I'm not sure distributed processing is actually working.
    First I ran into the problem with permissions on the /Users/Shared/Library/Application Support folder.  There's some info about that in this discussion.  You'll need to fix it on each computer you're trying to use as a node.
      https://discussions.apple.com/thread/3139466?start=0&tstart=0
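    The permissions repair that thread describes generally amounts to restoring write access to that shared folder. A hedged sketch only - inspect before changing anything, since the correct ownership and mode on your system may differ:

    ```shell
    # Inspect the current ownership and permissions first:
    ls -ld "/Users/Shared/Library/Application Support"

    # One possible repair, assuming the folder simply lost its group/other
    # write access (compare against a machine where clustering still works):
    sudo chmod -R u+rwX,go+rwX "/Users/Shared/Library/Application Support"
    ```

    Run it on each node, then restart the Qmaster services.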
    Then I finally found some decent documentation on Compressor 4 here
      http://help.apple.com/compressor/mac/4.0/en/compressor/usermanual/#chapter=29%26section=1
    However, no matter what I tried, I could not get the compression to spread across more than one computer.  I tried managed clusters, quick clusters, and even "this computer plus".  I was testing on a Mac Pro, a Mac mini, and a MacBook Air.  I could disable the Mac Pro and the processing would move to the mini; disable the mini and it would move to the MacBook Air.  No matter what I do, though, it won't run on multiple machines.
    I'm also having trouble doing any kind of compositing in FCPX and getting it to compress properly.  I see this error
    7/20/11 11:07:42.438 PM ProMSRendererTool: [23:07:42.438] <<<< VTVideoDecoderSelection >>>> VTSelectAndCreateVideoDecoderInstanceInternal: no video decoder found for 'png '
    and then I end up with a hung job in the Share Monitor that if I try to cancel just sits there forever saying "canceling".
    I'm seeing a bunch of Adobe AIR Encrypted Local Storage errors in the log too.  Don't know what that has to do with FCPX, but I'll have a look and try to figure it out.

  • What's the best network for distributed processing?

    Hey guys,
    I have a network of three computers... I have distributed processing all set up in Compressor, and it runs really fast over the Ethernet connection I have set up.
    What's the absolute best way to set up this network?
    The MacBooks both have a FireWire port, but that's not being used to transfer the data in this case... it's all Ethernet.
    The Mac Pro has two FireWire ports... leaving one potentially open for a network.
    Do you think it would be beneficial to set up a FireWire network for these three computers, or is there any reason why a simple Ethernet network is preferable?
    Thanks in advance for all of your help...
    Oh!  One more question!
    If I was going to set up a firewire network, how would I do it?
    ---Trav

    Hi MrWizard... do you have Compressor.app V4.1 (Dec 2013) installed, or one of the older versions? According to your signature you're on the older OS X 10.7.5.
    The idea you post is sound, HOWEVER... it's somewhat fickle and troublesome. I'm assuming the old V3.4 Compressor/Qmaster or later...
    Hardware:
    You can CERTAINLY use the FireWire ports as network NICs, using IP over FireWire. It certainly works between TWO Macs, as I've used it years back.
    Assuming you have a FireWire hub, use the single FW hub in the Mac Pro, or scout around for an FW800 switch (if they exist!), or just use FW800-to-400 cables and a cheap legacy FW400 hub; I saw one once in Taipei!
    Simply configure the FireWire INTERFACES on each machine in System Preferences / Network (+ add interface, etc.), then apply.
    Either:
    hard-code (manual) an IP address (1.1.1.x) for each, or
    install and deploy OS X SERVER on your Mac Pro (IP 1.1.1.1) and set up a simple DNS and domain "mrwizards.cool_network" for that FW subnet (1.1.1.x), then a DHCP server for that subnet so that when the two MacBook Pros connect they get an IP address such as 1.1.1.2-253, and assign machine names. Set the DNS search on each machine to add 1.1.1.1 (your Mac Pro) to resolve machine names. More work, however easier to manage.
    Test your local DNS between the machines, or TRACEROUTE / PING the machines from each other and verify that the IP path over the FW NICs works OK.
    The more reliably you can verify the IP traffic on your 1.1.1.x subnet, the better.
    Once you configure these as a separate NETWORK, you will still need some way to provide the FireWire connectivity between all the machines... and this is only the start of your headaches. I'm assuming you are on the older Compressor/Qmaster V3.5 with the legacy FCS 3 suite; true/false?
    The configuration takes patience, trial and error, and a clear understanding of what you are trying to achieve and whether it's really worth the effort.
    TRANSCODING over a nodal cluster, AKA DISTRIBUTED SEGMENTED MULTIPASS TRANSCODING!
    On paper you'd imagine the throughput would be worth it. Sadly the converse is true! It often takes LONGER to use a bunch of nodes than to submit on a SINGLE machine using a CLUSTER on that one machine:
    the elapsed time for the submission relies on the processing speed of the slowest machine and its network connection, plus
    the final assembly of the QuickTime pieces for the object.
    Hmmm... often quicker just to use a single CLUSTER on the Mac Pro!
    You MAY want to consider this option instead, assuming the hardware you have mentioned: what about Gigabit ETHERNET all around, instead of FireWire NICs?
    Steps:
    Set each host to use the WiFi-only network for your usual internal activities, Bonjour (.local), etc.; assume low IP traffic. All works OK.
    OPTION: on the Mac Pro, dedicate EN0 (the first Ethernet NIC) to the WiFi router, and turn off WiFi on the Mac Pro.
    Utilise the built-in Ethernet (EN0) on the MacBooks (older ones? They will be very slow, BTW).
    Connect the Mac Pro's second Ethernet port and the MacBooks' Ethernet ports to a cheap 8-port Ethernet hub/switch (HK$150 / €15; they are all switches these days). This gives you an isolated subnet; no router needed. Plug in the hub/switch and power on!
    Look at step 3 above. QUICK START: just hard-code IP addresses as follows in each machine's System Preferences / Network pane:
    Mac Pro: IP 1.1.1.1
    MacBook (Pro) #1: IP address 1.1.1.2
    MacBook (Pro) #2: IP address 1.1.1.3
    Test the 1.1.1.1-1.1.1.3 network (ping or traceroute).
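    The hard-coded quick-start IPs can also be set from Terminal. A sketch, assuming the service names reported by `networksetup -listallnetworkservices` on each machine (yours may be "Ethernet 2", "Ethernet", etc.):

    ```shell
    # On the Mac Pro (second Ethernet port, on the isolated subnet):
    #   sudo networksetup -setmanual "Ethernet 2" 1.1.1.1 255.255.255.0
    # On MacBook (Pro) #1:
    #   sudo networksetup -setmanual "Ethernet" 1.1.1.2 255.255.255.0
    # On MacBook (Pro) #2:
    #   sudo networksetup -setmanual "Ethernet" 1.1.1.3 255.255.255.0

    # Then, from the Mac Pro, verify the isolated subnet is reachable:
    ping -c 3 1.1.1.2
    ping -c 3 1.1.1.3
    ```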
    For V3.5 Compressor and Qmaster with ye olde FCS, dig into the QMASTER PREFS as follows:
    Mac Pro: set services and ALSO MAKE CONTROLLER. Set NETWORK INTERFACE to the second Ethernet interface (not "all"). Set instances to more than one, but NOT the max! Tick MANAGED SERVICES.
    MacBook (Pro)s: set services only. Set NETWORK INTERFACE to the Ethernet interface (not "all"). Set instances to more than one, but NOT the max! Tick MANAGED SERVICES.
    Launch APPLE QADMINISTRATOR.app and make a managed cluster - easy. (DON'T waste your time with QuickCluster! Just don't.)
    IMPORTANT: make sure the SOURCE and TARGETS of ALL the objects that the transcodes will use are AVAILABLE (mounted, with access) on ALL the machines that you want to participate.
    Set up the batch, make sure SEGMENTED is ticked in the inspector, and submit the job.
    See how you go.
    However, if you have Compressor V4.1 (Dec 2013), then just set your network up and it should just work without all the voodoo in the previous steps.
    Post your results for others to see.
    Warwick
    Hong Kong

  • Compressor won't do distributed processing with FCPX.

    Running 4.1.3 with FCPX, and Send to Compressor has all of the distributed processing groups greyed out in the selection drop-down. I can only run Compressor on This Computer.
    If I export the file from FCPX and then drop that into Compressor from the finder it works fine. Is there a reason it is not working when directly connected?
    I have deleted and reinstalled  both apps but get the same results.

    The thought is that the Send to Compressor menu item should have an option to use Compressor with distributed processing. Why can't it just export to a temp location for me and then drop that into Compressor, instead of requiring me to export it, open the Finder, and drop it in myself? There could be a second menu item, "Send to Compressor (Distributed Encoding)", with a defined temp location set up in Preferences.
    Not a huge deal to click a few extra things, but would be nice to have things connected a little better.
    Side:
    I got a 4K camera and was about to get a 5K iMac when I realized by chance that the new 5K model does not support TDM (Target Display Mode), so I cannot cycle through my screenless Macs. Big bummer.

  • Distributed Processing Error...

    I am trying to run distributed processing through compressor and I keep running into the same error each time. I get "Quicktime Error: -120" and it gives me a different HOST each time. I did a little troubleshooting to narrow down where the problem is and it allows me to use distributed processing on the appropriate cluster with everything EXCEPT H.264.
    I can do MPEG-2, AIFF, Dolby Digital, etc. very quickly and easily with no problems at all. As soon as I try to do a compression to H.264, it gives me the Quicktime Error: -120 message and says it fails.
    Does anyone know what I can do to fix this problem?
    I have Mac OS X (10.4.9), Final Cut Studio 2, with Compressor 3 and QMaster 3. Thank you for your help.

    I am getting the same error, but I am trying to encode on my own machine, not another machine.
    I am encoding with the following settings...
    Name: 16.9 for Web
    Description: Web Compression
    File Extension: mov
    Estimated file size: 439.45 MB/hour of source
    Audio Encoder
    AAC, Stereo (L R), 48.000 kHz
    Video Encoder
    Format: QT
    Width: 640
    Height: 360
    Pixel aspect ratio: Square
    Crop: None
    Padding: None
    Frame rate: (100% of source)
    Frame Controls: Automatically selected: Off
    Codec Type: H.264
    Multi-pass: On, frame reorder: On
    Pixel depth: 24
    Spatial quality: 75
    Min. Spatial quality: 25
    Key frame interval: 24
    Temporal quality: 50
    Min. temporal quality: 25
    Average data rate: 1.024 (Mbps)
    Fast Start: on
    Compressed header
    requires QuickTime 3 Minimum
    Here is a copy of the log that was created.
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <services>
    <service displayName="POWERMACG5" hostName="tim-saffords-computer.local" type="servicecontroller:com.apple.stomp.transcoder" address="tcp://127.0.0.1:49164">
    <logs tms="208896850.332" tmt="08/15/2007 11:54:10.332" pnm="compressord">
    <mrk tms="209929152.114" tmt="08/27/2007 10:39:12.114" pid="296" kind="begin" what="service-request" req-id="FB2E1D96-82D0-44AD-9E6B-B2C9D96CBFCA:1" msg="Preflighting."></mrk>
    <mrk tms="209929152.159" tmt="08/27/2007 10:39:12.159" pid="296" kind="begin" what="CServiceControllerServer::mountClusterStorage"></mrk>
    <log tms="209929152.159" tmt="08/27/2007 10:39:12.159" pid="296" msg="Cluster storage URL = null"/>
    <log tms="209929152.159" tmt="08/27/2007 10:39:12.159" pid="296" msg="Not subscribing, null cluster storage."/>
    <mrk tms="209929152.159" tmt="08/27/2007 10:39:12.159" pid="296" kind="end" what="CServiceControllerServer::mountClusterStorage"></mrk>
    <mrk tms="209929152.159" tmt="08/27/2007 10:39:12.159" pid="296" kind="begin" what="CServiceControllerServer::checkRequiredFiles"></mrk>
    <log tms="209929152.178" tmt="08/27/2007 10:39:12.178" pid="296" msg="Source file /private/var/tmp/folders.501/TemporaryItems/F2131D24-40B1-461B-BD70-F6.fcp is directly accessible."/>
    <log tms="209929152.179" tmt="08/27/2007 10:39:12.179" pid="296" msg="Source file can be opened."/>
    <log tms="209929152.179" tmt="08/27/2007 10:39:12.179" pid="296" msg="Source file can be read."/>
    <mrk tms="209929152.179" tmt="08/27/2007 10:39:12.179" pid="296" kind="end" what="CServiceControllerServer::checkRequiredFiles"></mrk>
    <log tms="209929152.334" tmt="08/27/2007 10:39:12.334" pid="296" msg="Enabling post processing due to streaming options: 2"/>
    <mrk tms="209929152.350" tmt="08/27/2007 10:39:12.350" pid="296" kind="end" what="service-request" req-id="FB2E1D96-82D0-44AD-9E6B-B2C9D96CBFCA:1" msg="Preflighting service request end."></mrk>
    <mrk tms="209929152.458" tmt="08/27/2007 10:39:12.458" pid="296" kind="begin" what="service-request" req-id="FB2E1D96-82D0-44AD-9E6B-B2C9D96CBFCA:3" msg="Preprocessing."></mrk>
    <mrk tms="209929152.471" tmt="08/27/2007 10:39:12.471" pid="296" kind="begin" what="CServiceControllerServer::mountClusterStorage"></mrk>
    <log tms="209929152.472" tmt="08/27/2007 10:39:12.472" pid="296" msg="Cluster storage URL = null"/>
    <log tms="209929152.480" tmt="08/27/2007 10:39:12.480" pid="296" msg="Not subscribing, null cluster storage."/>
    <mrk tms="209929152.480" tmt="08/27/2007 10:39:12.480" pid="296" kind="end" what="CServiceControllerServer::mountClusterStorage"></mrk>
    <mrk tms="209929152.480" tmt="08/27/2007 10:39:12.480" pid="296" kind="begin" what="CServiceControllerServer::checkRequiredFiles"></mrk>
    <log tms="209929152.480" tmt="08/27/2007 10:39:12.480" pid="296" msg="Source file /private/var/tmp/folders.501/TemporaryItems/F2131D24-40B1-461B-BD70-F6.fcp is directly accessible."/>
    <log tms="209929152.481" tmt="08/27/2007 10:39:12.481" pid="296" msg="Source file can be opened."/>
    <log tms="209929152.494" tmt="08/27/2007 10:39:12.494" pid="296" msg="Source file can be read."/>
    <mrk tms="209929152.495" tmt="08/27/2007 10:39:12.495" pid="296" kind="end" what="CServiceControllerServer::checkRequiredFiles"></mrk>
    <log tms="209929152.507" tmt="08/27/2007 10:39:12.507" pid="296" msg="preProcess for job target: file://localhost/Users/tsafford/Desktop/LIfetimeCut2-NoLimitLifetime-16.9%20for %20Web.mov"/>
    <log tms="209929152.533" tmt="08/27/2007 10:39:12.533" pid="296" msg="Enabling post processing due to streaming options: 2"/>
    <log tms="209929152.543" tmt="08/27/2007 10:39:12.543" pid="296" msg="done preProcess for job target: file://localhost/Users/tsafford/Desktop/LIfetimeCut2-NoLimitLifetime-16.9%20for %20Web.mov"/>
    <mrk tms="209929152.544" tmt="08/27/2007 10:39:12.544" pid="296" kind="end" what="service-request" req-id="FB2E1D96-82D0-44AD-9E6B-B2C9D96CBFCA:3" msg="Preprocessing service request end."></mrk>
    <mrk tms="209929152.632" tmt="08/27/2007 10:39:12.632" pid="296" kind="begin" what="service-request" req-id="BF170344-CFF9-4F61-A97A-502B6D3039FF:1" msg="Processing."></mrk>
    <mrk tms="209929152.633" tmt="08/27/2007 10:39:12.633" pid="296" kind="begin" what="CServiceControllerServer::mountClusterStorage"></mrk>
    <log tms="209929152.633" tmt="08/27/2007 10:39:12.633" pid="296" msg="Cluster storage URL = null"/>
    <log tms="209929152.633" tmt="08/27/2007 10:39:12.633" pid="296" msg="Not subscribing, null cluster storage."/>
    <mrk tms="209929152.633" tmt="08/27/2007 10:39:12.633" pid="296" kind="end" what="CServiceControllerServer::mountClusterStorage"></mrk>
    <mrk tms="209929152.634" tmt="08/27/2007 10:39:12.634" pid="296" kind="begin" what="CServiceControllerServer::checkRequiredFiles"></mrk>
    <log tms="209929152.634" tmt="08/27/2007 10:39:12.634" pid="296" msg="Source file /private/var/tmp/folders.501/TemporaryItems/F2131D24-40B1-461B-BD70-F6.fcp is directly accessible."/>
    <log tms="209929152.634" tmt="08/27/2007 10:39:12.634" pid="296" msg="Source file can be opened."/>
    <log tms="209929152.634" tmt="08/27/2007 10:39:12.634" pid="296" msg="Source file can be read."/>
    <mrk tms="209929152.634" tmt="08/27/2007 10:39:12.634" pid="296" kind="end" what="CServiceControllerServer::checkRequiredFiles"></mrk>
    <log tms="209929152.648" tmt="08/27/2007 10:39:12.648" pid="296" msg="starting _processRequest for job target: file://localhost/Users/tsafford/Desktop/LIfetimeCut2-NoLimitLifetime-16.9%20for %20Web.mov-1"/>
    <log tms="209929152.648" tmt="08/27/2007 10:39:12.648" pid="296" msg="Writing transcode segment: file://localhost/Users/tsafford/Desktop/LIfetimeCut2-NoLimitLifetime-16.9%20for %20Web.mov-1"/>
    <log tms="209929152.769" tmt="08/27/2007 10:39:12.769" pid="296" msg="QuickTiime Transcode, rendering in YUV 8 bit 444"/>
    <log tms="209929153.993" tmt="08/27/2007 10:39:13.993" pid="296" msg="Time for QuickTime transcode: 0.524838 seconds. status = -120"/>
    <log tms="209929154.300" tmt="08/27/2007 10:39:14.300" pid="296" msg="Done _processRequest for job target: file://localhost/Users/tsafford/Desktop/LIfetimeCut2-NoLimitLifetime-16.9%20for %20Web.mov-1"/>
    <mrk tms="209929154.366" tmt="08/27/2007 10:39:14.366" pid="296" kind="end" what="service-request" req-id="BF170344-CFF9-4F61-A97A-502B6D3039FF:1" msg="Processing service request error: QuickTime Error: -120"></mrk>
    </logs>
    </service>
    </services>

  • Distributed Processing

    I have a design question. I am looking to do some distributed processing on my data set (which is basically an object with six integer fields and a string). However, the processing will involve more than one object at a time (e.g. each request needs to work with all objects with attribute = value), which is potentially a large number. I think that rules out entry processors...unless I can run a filter within an entry processor. There is however also the scalability issue in that I will eventually have about 50 million objects minimum.
    As an invocable, I think I'll run into re-entrancy issues but assuming I can run queries on the backing map, I believe there is still the issue that there is no guarantee that all data will be processed and fail-overs etc. And from the documentation, I get the impression that invocables are meant more for management related tasks than raw data processing. Is there a way to guarantee fail-overs and complete processing? Should I really be doing heavy-duty querying on the backing map?
    I am also unable to partition some of the data off to another cache service as although an object A will have many related objects B, an object B can also have many related A (n:n relationship). Objects are related if they have the same value x for any attribute y.
    Is there any other option for processing this data? Result set sizes can run into millions.
    Please feel free to ask me more details. I have only attempted to give an overview of my problem here.
    Example
    Cache contains 10 million objects of type A, which define many-to-many relationship between a and b
    Cache contains 10 million objects of type A, which define many-to-many relationship between b and c
    Cache contains 10 million objects of type A, which define many-to-many relationship between c and d
    Same type objects are used since nature of relationship is the same in all cases.
    Challenge is to find the set <a,d>, where the <a,b>, <b,c> and <c,d> relationships hold true. Sets a, b, c and d are not necessarily distinct, in that values in d could also be in a, for example.
    Another example is to find set <d, b> where d values in set (x, y, z).
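    To make the challenge concrete, here is a tiny plain-Java sketch of the composition I'm after, with in-memory maps standing in for the caches (doing this at 10-million-object scale inside the grid is of course the hard part):

    ```java
    import java.util.*;

    public class RelationJoin {
        // Compose two many-to-many relations: result(a, c) holds iff
        // some b exists with r1(a, b) and r2(b, c).
        static Map<String, Set<String>> compose(Map<String, Set<String>> r1,
                                                Map<String, Set<String>> r2) {
            Map<String, Set<String>> out = new HashMap<>();
            for (Map.Entry<String, Set<String>> e : r1.entrySet()) {
                for (String mid : e.getValue()) {
                    for (String c : r2.getOrDefault(mid, Set.of())) {
                        out.computeIfAbsent(e.getKey(), k -> new TreeSet<>()).add(c);
                    }
                }
            }
            return out;
        }

        public static void main(String[] args) {
            // Toy relations <a,b>, <b,c>, <c,d>:
            Map<String, Set<String>> ab = Map.of("a1", Set.of("b1", "b2"));
            Map<String, Set<String>> bc = Map.of("b2", Set.of("c1"));
            Map<String, Set<String>> cd = Map.of("c1", Set.of("d1", "d2"));
            // The <a,d> pairs reachable via <a,b>, <b,c>, <c,d>:
            System.out.println(compose(compose(ab, bc), cd)); // {a1=[d1, d2]}
        }
    }
    ```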
    Thanks in advance for your advice.

    Hi,
    user11218537 wrote:
    Thanks Robert. Very helpful comments.
    The way I have been doing intersections is by using the InFilter, with the appropriate extractors.
    For example, running with the same example you have put down:
    1. Retrieve a.x1...a.x6 from a. Let's call this S1.
    2. Retrieve the union of all b.x(i)-s for b-s where b.x1...b.x6 intersected with S1 is not empty. Let's call this union S2.
    3. Retrieve the union of all c.x(i)-s for c-s where c.x1...c.x6 intersected with S2 is not empty. Let's call this union S3.
    4. Retrieve all d.id-s where d.x1...d.x6 intersected with S3 is not empty.
    The thing is that in the above case, S1 runs into millions, as do S2, S3, S4. I have indexes defined on all six fields so retrieving S1 is not the issue, say with extractor getProperty1()
    In this case, I may have misunderstood what you had to do, but in my example S1 would be the up-to-six different integer values. You could possibly provide a bit more detailed information on what you need to do.
    But say S1 is 1 million strong. Defining then an InFilter from these values to retrieve matching S2 causes problems with delays and outofmemory exceptions. I am currently doing something like InFilter(S1, getProperty3()) for example, saying S2 contains objects where property 3 has any of the values in S1.
    Do you mean I should be doing the intersection manually, rather than cascading down the set of values via the InFilter? To resolve the outofmemory issues (sometimes with heap space, sometimes with "PacketReceiver"), I chop down the sets into 1000-10,000 strong InFilter and run the multiple smaller instances. Takes a long time though. Basically, what is the correct way to get 1 million objects with matching values from say a 50 million strong set? I'm guessing InFilter is inherently parallel in its implementation.
    My point is that you don't get them on the client side. You just work with the property values in a parallel aggregator to join up the values to S2.
    I'll look at the ParallelAwareAggregator to see if I could do the same in parallel. So instead of doing millions, each node is doing 10-100,000 from the set S1, and a proportional number from S2, S3 and S4. I'm guessing the logic will be the same, where InFilters are used to do the intersection.
    And you don't need to return the full set of matching cache entries to the client side at all, only the next partial result (which you still need to union up on the client side in aggregateResults).
    I will look at whether reverse maps are possible, but if I used a reverse map, would that be faster since I do have already have relevant indexes defined. I.e. is cache.get(keys) faster than cache.keySet(new Filter(keys, getProperty1()), if getProperty1() is indexed?
    Thanks.
    The reverse cache, among others:
    - gives you a single node to go for the set of cache keys having that particular x(i) value for the entire cache not only for that particular node, so it makes it more scalable than having to go for it to all nodes... also, since it is materialized in a cache, it possibly has a lower memory footprint as well (single serialized set, instead of one set per node and objects for each backing map key in the reverse index value)
    - gives you some control on the amount of data you retrieve from the cache at the same time, if you bump into OutOfMemory problems (you don't need to come up with the full matching set of keys in one go).
    You can also access the reverse cache from an aggregator to cut down on the latency for multiple reverse cache lookups if the reverse cache values likely have a large common part...
    Best regards,
    Robert
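    A plain-Java sketch of the batch-splitting workaround discussed above (chopping a huge value set into smaller InFilter batches); the Coherence call itself is left as a comment, since the exact extractor and cache names are specific to the poster's setup:

    ```java
    import java.util.*;

    public class BatchedLookup {
        // Split a large set of filter values into batches of at most batchSize,
        // so each InFilter stays small enough to avoid OutOfMemory problems.
        static <T> List<Set<T>> partition(Set<T> values, int batchSize) {
            List<Set<T>> batches = new ArrayList<>();
            Set<T> current = new LinkedHashSet<>();
            for (T v : values) {
                current.add(v);
                if (current.size() == batchSize) {
                    batches.add(current);
                    current = new LinkedHashSet<>();
                }
            }
            if (!current.isEmpty()) batches.add(current);
            return batches;
        }

        public static void main(String[] args) {
            Set<Integer> s1 = new LinkedHashSet<>();
            for (int i = 0; i < 10; i++) s1.add(i);
            List<Set<Integer>> batches = partition(s1, 4);
            System.out.println(batches.size()); // 3 batches: 4 + 4 + 2 values
            // For each batch you would then run something like:
            //   cache.keySet(new InFilter(new ReflectionExtractor("getProperty3"), batch));
            // and union the partial key sets on the client (or in an aggregator).
        }
    }
    ```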

  • RequestTimeoutException error while invoking a BPEL process using RMI

    Hi,
    I am getting a RequestTimeoutException while invoking a BPEL process using this code:
    Locator locator = LocatorFactory.createLocator(jndiProps);
    String compositeDN = "default/"+processName+"!1.0";
    Composite composite = locator.lookupComposite(compositeDN);
    String serviceName = "client";
    Service deliveryService = composite.getService(serviceName);
    NormalizedMessage nm = new NormalizedMessageImpl();
    nm.getPayload().put("payload", requestXml);
    NormalizedMessage res = deliveryService.request("process", nm);
    responseMap = res.getPayload();
    The error stack trace is
    weblogic.rmi.extensions.RequestTimeoutException: RJVM response from 'weblogic.rjvm.RJVMImpl@604f2d14 - id: '-361032376059206
    2776S:10.67.232.164:[8001,-1,-1,-1,-1,-1,-1]:emaar_domain:soa_server1' connect time: 'Mon Jan 18 11:34:41 GST 2010'' for 'executeServiceMethod
    (Loracle.soa.management.CompositeDN;Ljava.lang.String;Ljava.lang.String;[Ljava.lang.Object;) 'timed out after: 60000ms.
    oracle.fabric.common.FabricInvocationException: weblogic.rmi.extensions.RequestTimeoutException: RJVM response from 'weblogic.rjvm.RJVMImpl@60
    4f2d14 - id: '-3610323760592062776S:10.67.232.164:[8001,-1,-1,-1,-1,-1,-1]:emaar_domain:soa_server1' connect time: 'Mon Jan 18 11:34:41 GST 20
    10'' for 'executeServiceMethod(Loracle.soa.management.CompositeDN;Ljava.lang.String;Ljava.lang.String;[Ljava.lang.Object;) 'timed out after: 6
    0000ms.
            at oracle.soa.management.internal.facade.ServiceImpl.request(ServiceImpl.java:135)
            at com.gss.common.bo.BpelUtil.invokeBPELProcess(BpelUtil.java:81)
    To add to it, the BPEL process itself is executing successfully; it's the RMI call that is timing out.
    Can I know how to increase the related timeout value?

    Have got the same problem. The scenario at my end is a little different though.
    I am trying to invoke a BPEL process from an ESB Service.
    I am trying to look into it.
    However, I would be grateful if someone could give some insight into this, since many are running into this issue without being able to fix it.
    Ashish.
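    One knob worth checking: the 60000 ms in the stack trace matches WebLogic's default RMI request timeout, which can reportedly be raised via the JNDI environment passed to LocatorFactory.createLocator. A sketch; the property key "weblogic.jndi.requestTimeout" is WebLogic's WLContext.REQUEST_TIMEOUT constant, so verify it against your WebLogic version (and note that for long-running BPEL processes an asynchronous invocation may be the better fix):

    ```java
    import java.util.Hashtable;

    public class LocatorProps {
        // Build the JNDI environment for LocatorFactory.createLocator(...),
        // raising the RMI request timeout from the 60000 ms default.
        static Hashtable<String, String> jndiProps(String host, int port) {
            Hashtable<String, String> p = new Hashtable<>();
            p.put("java.naming.factory.initial", "weblogic.jndi.WLInitialContextFactory");
            p.put("java.naming.provider.url", "t3://" + host + ":" + port);
            // WLContext.REQUEST_TIMEOUT, in milliseconds (here: 5 minutes):
            p.put("weblogic.jndi.requestTimeout", "300000");
            return p;
        }

        public static void main(String[] args) {
            // Host and port taken from the stack trace above:
            System.out.println(jndiProps("10.67.232.164", 8001).get("weblogic.jndi.requestTimeout"));
        }
    }
    ```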

  • Investment mgt:Error while distributing budgets using IM52 transaction code

    I am getting an error message "Availability control can not be activated for hierarchial projects" when I distribute budgets using the IM52 transaction code in Investment Management.
    Can you please tell me why and how to solve it?
    Edited by: aravind  reddy on Aug 19, 2008 4:34 PM


  • Error While Deploying the BPEL Process using obant script

    Hi All,
    I am getting the following error while deploying the BPEL process using the obant script. We are using BPEL version 10.1.2.0.2. Any information in this regard will be really helpful.
    Buildfile: build.xml
    main:
    [bpelc] file:/home5102/dibyap/saravana/Test/CreditRatingService.wsdl
    [bpelc] validating "/home5102/dibyap/saravana/Test/CreditRatingService.bpel" ...
    BUILD FAILED
    /home5102/dibyap/saravana/Test/build.xml:15: ORABPEL-01002
    Domain directory not found.
    The process cannot be deployed to domain "default" because the domain directory "/opt02/app/ESIT/oracle/esit10gR2iAS/BPEL10gR2/iAS/integration/orabpel/domains/default/deploy" cannot be found or cannot be written to.
    Please check your -deploy option value; "default" must refer to a domain that has been installed locally on your machine.
    Total time: 23 seconds
    dibyap@ios5102_ESIBT:/home5102/dibyap/saravana/Test>
    Thanks,
    Saravana

    In 10.1.2.0.2 you need to create your own build.xml.
    I have found an example that may be of some help. It calls a property file.
    Cheers,
    James
    <?xml version="1.0" ?>
    <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Run cxant on this file to build, package and deploy the
    ASB_EFT BPEL process
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
    <project name="ASB_EFT" default="main" basedir=".">
    <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Name of the domain the generated BPEL suitcase will be deployed to
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
    <property name="deploy" value="default" />
    <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    What version number should be used to tag the generated BPEL archive?
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
    <property name="rev" value="1.0" />
    <!-- BPEL Best Practices Properties -->
    <!-- Defaults Properties for TARGET environments
    # CHANGE THIS FILE TO REFLECT THE TARGET ENVIRONMENT
    # either dev, test, or prod.properties
    -->
    <property file="ebusd.properties"/>
    <property name="env" value="${env.name}"/>
    <property name="current.project.name" value="${project.name}"/>
    <property name="target.project.name" value="${project.name}_${env}"/>
    <property name="deployment.profile" value ="${env}.properties"/>
    <property name="source.development.directory" location="${basedir}"/>
    <property name="target.env.directory" location="${basedir}/deploy/${project.name}_${env}"/>
    <property file="${deployment.profile}"/>
    <property name="build.fileencoding" value="UTF-8"/>
    <!-- Prints Environment
    -->
    <target name="print.env" description="Display environment settings">
    <echo message="Base Directory: ${basedir}"/>
    <echo message="Deployment Profile: ${deployment.profile}"/>
    <echo message="target.env.directory: ${target.env.directory}"/>
    <echo message="Deploy to Domain: ${deployToDomain}"/>
    <echo/>
    <echo message="os.name: ${os.name}"/>
    <echo message="os.version: ${os.version}"/>
    <echo message="os.arch: ${os.arch}"/>
    <echo/>
    <echo message="java.home: ${java.home}"/>
    <echo message="java.vm.name: ${java.vm.name}"/>
    <echo message="java.vm.vendor: ${java.vm.vendor}"/>
    <echo message="java.vm.version: ${java.vm.version}"/>
    <echo message="java.class.path: ${java.class.path}"/>
    <echo/>
    <echo message="env: ${env}"/>
    <echo message="current.project.name: ${current.project.name}"/>
    <echo message="target.project.name: ${target.project.name}"/>
    <echo message="server.name: ${server.name}"/>
    </target>
    <!--
    Copies the current directory structure along with
    all the file into the target.env.directory and
    change the name of the project
    -->
    <target name="create.environment">
    <copy todir="${target.env.directory}">
    <fileset dir="${basedir}"/>
    <filterset begintoken="@" endtoken="@">
    <filtersfile file="${deployment.profile}"/>
    </filterset>
    </copy>
    <move file="${target.env.directory}/${current.project.name}.jpr" tofile="${target.env.directory}/${target.project.name}.jpr"/>
    </target>
    <target name="main">
    <!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    the bpelc task compiles and packages BPEL processes into versioned BPEL
    archives (bpel_...jar). See the "Programming BPEL" guide for more
    information on the options of this task.
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
    <bpelc input="${basedir}/bpel.xml" rev="${rev}" deploy="${deploy}" />
    </target>
    </project>
    Here is a property file:
    project.name=ASB_EFT
    env.name=ebusd
    deployToDomain=default
    server.name=[server]
    server.port=7788
    ebusd\:7788=http://[server]:7788/
    IntegrationMailAccount=OracleBPELTest
    IntegrationMailAddress=[email]
    IntegrationMailPassword=[password]
    archivedir=[directory]
    inbounddir=/[directory]
    errordir=[directory]
    outbounddir=[directory]
    bpelpw=bpel
    dbhost1=[dbserver]
    dbhost2=[dbserver]
    dbport=1523
    dbservice=bpel
    dbconnstr=jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=YES)(FAILOVER=YES)(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=[server])(PORT=1523))(ADDRESS=(PROTOCOL=tcp)(HOST=[server])(PORT=1523)))(CONNECT_DATA=(SERVICE_NAME=ebusd)))

  • Invoking deployed bpel process using WLST

    Hi All,
    I am new to WLST, so please bear with me if I am asking the obvious.
    I read through the forums and have successfully deployed a BPEL process using WLST, as given below:
    ant -f ant-sca-deploy.xml -DserverURL=http://localhost:8001 -DsarLocation=C:\oracle\Middleware\PS3\Oracle_SOA1\bin\sca_esd9_jca_bpel1.1_ccgd_trn_ob_rev1.jar -Doverwrite=true -Duser=weblogic -Dpassword=welcome1 -DforceDefault=true
    And I have also tried undeploying, and it works fine.
    But the problem is, I want to invoke the deployed BPEL process, either by accessing the WSDL URL (http://localhost:8001/soa-infra/services/default/esd9_jca_bpel1.1_ccgd_trn_ob/bpelprocess1_client_ep?WSDL) or by any other means, using WLST. I have to send an input to this deployed process.
    I have googled it and have found ways to invoke a web service using Java or VBS. But is it possible to do it through WLST?
    Thanks in advance.
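    WLST has no built-in SOAP client, so one common workaround is to post a SOAP envelope to the endpoint directly from a small script outside WLST. Below is a minimal sketch in Python; the operation name, namespace, and `input` element are assumptions based on a typical default BPEL client WSDL, so check them against your actual WSDL before use.

```python
import urllib.request


def build_envelope(payload: str) -> str:
    """Build a minimal SOAP 1.1 envelope for the (assumed) process input element."""
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        '<soap:Body>'
        '<ns1:process xmlns:ns1="http://xmlns.oracle.com/bpelprocess1">'  # namespace is a guess
        f'<ns1:input>{payload}</ns1:input>'
        '</ns1:process>'
        '</soap:Body>'
        '</soap:Envelope>'
    )


def invoke(endpoint: str, payload: str) -> bytes:
    """POST the envelope to the BPEL endpoint; returns the raw SOAP response."""
    req = urllib.request.Request(
        endpoint,
        data=build_envelope(payload).encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8", "SOAPAction": "process"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()


# invoke("http://localhost:8001/soa-infra/services/default/"
#        "esd9_jca_bpel1.1_ccgd_trn_ob/bpelprocess1_client_ep", "hello")
```

    If it must run inside WLST itself, the same HTTP call can be made from WLST's Jython by calling java.net.HttpURLConnection, since WLST scripts can invoke arbitrary Java classes.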

    Sancho,
    Thanks for the prompt reply. We are trying to automate the following process:
    1. The user selects multiple documents from a folder in a library that contains all released documents.
    2. He then locks the documents.
    3. He selects the locked documents and copies them to a folder in his personal library.
    We are trying to lock and copy the documents as a one-step process, because it is difficult for the users to select all of the documents locked earlier and then copy them to the personal library.
    We are trying to take the folder name as a user parameter when the files are locked, so that the process can lock the files, create the folder, and copy the files into that folder.
    Thanks again for your time and help.
    Hetal

  • Error invoking bpel process using axis client

    When I try to invoke a BPEL process using an Axis client, I get the following error:
    AxisFault
    faultCode: {http://schemas.xmlsoap.org/soap/envelope/}Server.userException
    faultSubcode:
    faultString: org.xml.sax.SAXException: Bad envelope tag: html
    faultActor:
    faultNode:
    faultDetail:
         {http://xml.apache.org/axis/}stackTrace: org.xml.sax.SAXException: Bad envelope tag: html
         at org.apache.axis.message.EnvelopeBuilder.startElement(EnvelopeBuilder.java:109)
         at org.apache.axis.encoding.DeserializationContextImpl.startElement(DeserializationContextImpl.java:976)
         at org.apache.xerces.parsers.AbstractSAXParser.startElement(Unknown Source)
         at org.apache.xerces.impl.XMLNSDocumentScannerImpl.scanStartElement(Unknown Source)
         at org.apache.xerces.impl.XMLNSDocumentScannerImpl$NSContentDispatcher.scanRootElementHook(Unknown Source)
         at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(Unknown Source)
         at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
         at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
         at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
         at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
         at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
         at javax.xml.parsers.SAXParser.parse(SAXParser.java:345)
         at org.apache.axis.encoding.DeserializationContextImpl.parse(DeserializationContextImpl.java:242)
         at org.apache.axis.SOAPPart.getAsSOAPEnvelope(SOAPPart.java:538)
         at org.apache.axis.Message.getSOAPEnvelope(Message.java:376)
         at org.apache.axis.client.Call.invokeEngine(Call.java:2583)
         at org.apache.axis.client.Call.invoke(Call.java:2553)
         at org.apache.axis.client.Call.invoke(Call.java:1753)
         at com.oracle.sample.ws.ArrayClient.main(ArrayClient.java:44)
    org.xml.sax.SAXException: Bad envelope tag: html
         at org.apache.axis.AxisFault.makeFault(AxisFault.java:129)
         at org.apache.axis.SOAPPart.getAsSOAPEnvelope(SOAPPart.java:543)
         at org.apache.axis.Message.getSOAPEnvelope(Message.java:376)
         at org.apache.axis.client.Call.invokeEngine(Call.java:2583)
         at org.apache.axis.client.Call.invoke(Call.java:2553)
         at org.apache.axis.client.Call.invoke(Call.java:1753)
         at com.oracle.sample.ws.ArrayClient.main(ArrayClient.java:44)
    Caused by: org.xml.sax.SAXException: Bad envelope tag: html
         at org.apache.axis.message.EnvelopeBuilder.startElement(EnvelopeBuilder.java:109)
         at org.apache.axis.encoding.DeserializationContextImpl.startElement(DeserializationContextImpl.java:976)
         at org.apache.xerces.parsers.AbstractSAXParser.startElement(Unknown Source)
         at org.apache.xerces.impl.XMLNSDocumentScannerImpl.scanStartElement(Unknown Source)
         at org.apache.xerces.impl.XMLNSDocumentScannerImpl$NSContentDispatcher.scanRootElementHook(Unknown Source)
         at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(Unknown Source)
         at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
         at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
         at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
         at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
         at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
         at javax.xml.parsers.SAXParser.parse(SAXParser.java:345)
         at org.apache.axis.encoding.DeserializationContextImpl.parse(DeserializationContextImpl.java:242)
         at org.apache.axis.SOAPPart.getAsSOAPEnvelope(SOAPPart.java:538)
         ... 5 more
    My client code is the following:
    Service service = new Service();
    Call call = (Call) service.createCall();
    call.setTargetEndpointAddress(new java.net.URL("http://localhost:9700/orabpel/default/Array"));
    SOAPEnvelope env = new SOAPEnvelope();
    SOAPBody body = env.getBody(); // the request element is attached to the envelope's body
    Name bodyName = env.createName("ArrayRequest", "tns", "http://localhost/");
    SOAPBodyElement request = body.addBodyElement(bodyName);
    Name childName = env.createName("input","tns","http://localhost/");
    SOAPElement input = request.addChildElement(childName);
    input.addTextNode("ORCL");
    call.invoke(env);
    MessageContext mc = call.getMessageContext();
    System.out.println("\n============= Response ==============");
    XMLUtils.PrettyElementToStream(mc.getResponseMessage().getSOAPEnvelope().getAsDOM(), System.out);
    I am getting the same error with a client generated by wsdl2java.
    Regards

    Hi -
    A few things that you may want to try to troubleshoot this issue:
    1) Run our sample of calling a BPEL process from Axis, located in:
    C:\orabpel\samples\interop\axis\AXISCallingSyncBPEL
    2) Run your client through a TCP tunnel to see the specific SOAP request message that is being sent to the BPEL process and the SOAP response that is being generated. This should help you determine which side of the communication is causing the problem, as well as to rule out proxy server or other issues that are very common problems for this situation.
    Dave
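    For readers without a tunnel tool at hand, the TCP tunnel Dave suggests can be improvised in a few lines. This is a rough sketch, not Oracle tooling: it listens on a local port, relays bytes to the real server, and prints both directions, so the actual SOAP request and whatever the server returns (including a stray HTML error page, which would explain "Bad envelope tag: html") become visible. The port numbers are placeholders.

```python
import socket
import threading


def relay(src: socket.socket, dst: socket.socket, tag: str) -> bytes:
    """Copy bytes from src to dst until EOF, echoing the traffic for inspection."""
    seen = b""
    while True:
        chunk = src.recv(4096)
        if not chunk:
            break
        print(f"[{tag}] {chunk!r}")  # wire-level request/response bytes
        dst.sendall(chunk)
        seen += chunk
    return seen


def tunnel(listen_port: int, target_host: str, target_port: int) -> None:
    """Accept one client connection and splice it to the target, logging both directions."""
    with socket.socket() as srv:
        srv.bind(("127.0.0.1", listen_port))
        srv.listen(1)
        client, _ = srv.accept()
        upstream = socket.create_connection((target_host, target_port))
        threading.Thread(target=relay, args=(client, upstream, "->"), daemon=True).start()
        relay(upstream, client, "<-")


# Point the Axis client at http://localhost:9701/orabpel/default/Array and run:
# tunnel(9701, "localhost", 9700)
```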

  • How to Upload a File in Bpel Process using JSP

    I am trying to upload a file in a BPEL process, using a JSP as the front end.
    I created the JSP page and I am able to pass values from the JSP to the BPEL process.
    In the BPEL process, I don't know how to pass or assign the specified file name to the file adapter for reading the files.
    Please help me...
    Saravanan

    You don't assign the URL of the file to it.
    To get the data from the file into the BPEL process, you could either use the url parameter together with the ora:readFile function, or let your web application upload the file to some location on the server and point the file adapter at that location with polling enabled, so that it starts your BPEL process.

  • How to kill Forms Runaway Process using 95% CPU and running for 2 hours.

    We had a situation at an E-Business Suite customer (using Oracle VM Server) where some form processes were not being cleared automatically by the form timeout settings.
    Also, when a user exited the form session from the front end, the Linux form process (PID) and DB session did not exit properly, so they hung.
    They were spiking CPU and memory usage, causing E-Business Suite to perform slowly and ultimately causing the VM host to reboot the production VM guest (running on Linux).
    We could see the form processes (PIDs) using almost 100% CPU with the "top" command and running for a long time.
    We also verified that those form sessions did not exist in the application itself,
    i.e. using Grid Control -> OAM -> Site Map -> Monitoring (tab) -> "Form Sessions".
    This meant we could safely kill those form processes from Linux with the "kill -9 <PID>" command.
    But that required continuous monitoring and manual DBA intervention, as this is a 24x7 customer.
    So, I wrote a shell script that does the following:
    •     A cron job runs every half an hour, 7 days a week, and calls this shell script.
    •     The shell script tries to find the top two f60webmx processes (form sessions) using over 95% CPU, sampled twice at a 2-minute interval.
    •     If no process is found, or CPU% is less than 95%, it exits and does nothing.
    •     If a top process is found, it searches for its DB session using an apps login (with a hidden apps password file - /home/applmgr/.pwd).
    a.     If a DB session is NOT found (which means the form process is hung), it kills the process from Unix and emails the results to <[email protected]>.
    b.     If a DB session is found, it waits for 2 hours so that the form process times out automatically via the form session timeout setting.
    It also emails the SQL to check the DB session for that form process.
    c.     If a DB session is found and it does not time out after 2 hours,
    it kills the process from Unix (which in turn kills the DB session). The output is emailed.
    These are the files required:
    1. The cron job that calls the shell script looks like this:
    # Kill form runaway process, using over 95% cpu having no DB session or DB session for > 2hrs
    00,30 * * * * /home/applmgr/forms_runaway.sh 2>&1
    2. The SQL that this script calls is /home/applmgr/frm_runaway.sql and looks like this:
    set head off
    set verify off
    set feedback off
    set pagesize 0
    define form_client_PID = &1
    select count(*) from v$session s , v$process p, FND_FORM_SESSIONS_V f where S.AUDSID=f.audsid and p.addr=s.paddr and s.process='&form_client_PID';
    3. The actual shell script is /home/applmgr/forms_runaway.sh and looks like this:
    #!/bin/bash
    # Author : Amandeep Singh
    # Description : Kills runaway form processes using more than 95% cpu
    # and form sessions with no DB session, or a DB session older than 2 hrs
    # Dated : 11-April-2012
    . /home/applmgr/.bash_profile
    PWD=`cat ~/.pwd`
    export PWD
    echo "`date`">/tmp/runaway_forms.log
    echo "----------------------------------">>/tmp/runaway_forms.log
    VAR1=`top -b -u applmgr -n 1|grep f60webmx|grep -v sh|grep -v awk|grep -v top|sort -nrk9|head -2|sed 's/^[ \t]*//;s/[ \t]*$//'| awk '{ if ($9 > 95 && $12 == "f60webmx") print $1 " "$9 " "$11 " "$12; }'`
    PID1=`echo $VAR1|awk '{print $1}'`
    CPU1=`echo $VAR1|awk '{print $2}'`
    TIME1=`echo $VAR1|awk '{print $3}'`
    PROG1=`echo $VAR1|awk '{print $4}'`
    PID_1=`echo $VAR1|awk '{print $5}'`
    CPU_1=`echo $VAR1|awk '{print $6}'`
    TIME_1=`echo $VAR1|awk '{print $7}'`
    PROG_1=`echo $VAR1|awk '{print $8}'`
    echo "PID1="$PID1", CPU%="$CPU1", Running Time="$TIME1", Program="$PROG1>>/tmp/runaway_forms.log
    echo "PID_1="$PID_1", CPU%="$CPU_1", Running Time="$TIME_1", Program="$PROG_1>>/tmp/runaway_forms.log
    echo " ">>/tmp/runaway_forms.log
    sleep 120
    echo "`date`">>/tmp/runaway_forms.log
    echo "----------------------------------">>/tmp/runaway_forms.log
    VAR2=`top -b -u applmgr -n 1|grep f60webmx|grep -v sh|grep -v awk|grep -v top|sort -nrk9|head -2|sed 's/^[ \t]*//;s/[ \t]*$//'| awk '{ if ($9 > 95 && $12 == "f60webmx") print $1 " "$9 " "$11 " "$12; }'`
    PID2=`echo $VAR2|awk '{print $1}'`
    CPU2=`echo $VAR2|awk '{print $2}'`
    TIME2=`echo $VAR2|awk '{print $3}'`
    PROG2=`echo $VAR2|awk '{print $4}'`
    PID_2=`echo $VAR2|awk '{print $5}'`
    CPU_2=`echo $VAR2|awk '{print $6}'`
    TIME_2=`echo $VAR2|awk '{print $7}'`
    PROG_2=`echo $VAR2|awk '{print $8}'`
    HRS=`echo $TIME1|cut -d: -f1`
    exprHRS=`expr "$HRS"`
    echo "PID2="$PID2", CPU%="$CPU2", Running Time="$TIME2", Program="$PROG2>>/tmp/runaway_forms.log
    echo "PID_2="$PID_2", CPU%="$CPU_2", Running Time="$TIME_2", Program="$PROG_2>>/tmp/runaway_forms.log
    echo " ">>/tmp/runaway_forms.log
    # If PID1 or PID2 is NULL
    if [ -z "${PID1}" ] || [ -z "${PID2}" ]
    then
    echo "no top processes found. Either PID is NULL OR CPU% is less than 95%. Exiting...">>/tmp/runaway_forms.log
    elif
    # If PID1 is equal to PID2 or PID1=PID_2 or PID_1=PID2 or PID_1=PID_2
    [ ${PID1} -eq ${PID2} ] || [ ${PID1} -eq ${PID_2} ] || [ ${PID_1} -eq ${PID2} ] || [ ${PID_1} -eq ${PID_2} ];
    then
    DB_SESSION=`$ORACLE_HOME/bin/sqlplus -S apps/$PWD @/home/applmgr/frm_runaway.sql $PID1 << EOF
    EOF`
    echo " ">>/tmp/runaway_forms.log
    echo "DB_SESSION ="$DB_SESSION >>/tmp/runaway_forms.log
    # if no DB session found for PID
    if [ $DB_SESSION -eq 0 ]; then
    echo " ">>/tmp/runaway_forms.log
    echo "Killed Following Runaway Forms Process:">>/tmp/runaway_forms.log
    echo "-------------------------------------------------------------------">>/tmp/runaway_forms.log
    echo "PID="$PID1", CPU%="$CPU1", Running Time="$TIME1", Program="$PROG1>>/tmp/runaway_forms.log
    kill -9 $PID1
    #Email the output
    mailx -s "Killed: `hostname -a` Runaway Form Processes" [email protected] </tmp/runaway_forms.log
    cat /tmp/runaway_forms.log
    else
    # If DB session exists for PID
    if [ ${exprHRS} -gt 120 ]; then
    echo $DB_SESSION "of Database sessions exist for this forms process-PID="$PID1". But its running for more than 2 hours. ">>/tmp/runaway_forms.log
    echo "Process running time is "$exprHRS" minutes.">>/tmp/runaway_forms.log
    echo "Killed Following Runaway Forms Process:">>/tmp/runaway_forms.log
    echo "-------------------------------------------------------------------">>/tmp/runaway_forms.log
    echo "PID="$PID1", CPU%="$CPU1", Running Time="$TIME1", Program="$PROG1>>/tmp/runaway_forms.log
    kill -9 $PID1
    #Email the output
    mailx -s "`hostname -a`: Runaway Form Processes" [email protected] </tmp/runaway_forms.log
    cat /tmp/runaway_forms.log
    else
    echo "Process running time is "$exprHRS" minutes.">>/tmp/runaway_forms.log
    echo $DB_SESSION "of Database sessions exist for PID="$PID1" and is less than 2 hours old. Not killing...">>/tmp/runaway_forms.log
    echo "For more details on this PID, run following SQL query;">>/tmp/runaway_forms.log
    echo "-----------------------------------------------------------------------">>/tmp/runaway_forms.log
    echo "set pages 9999 lines 150">>/tmp/runaway_forms.log
    echo "select f.user_form_name, f.user_name, p.spid DB_OS_ID , s.process client_os_id, s.audsid, f.PROCESS_SPID Forms_SPID,">>/tmp/runaway_forms.log
    echo "to_char(s.logon_time,'DD-Mon-YY hh:mi:ss'), s.seconds_in_wait">>/tmp/runaway_forms.log
    echo "from v\$session s , v\$process p, FND_FORM_SESSIONS_V f">>/tmp/runaway_forms.log
    echo "where S.AUDSID=f.audsid and p.addr=s.paddr and s.process='"$PID1"' order by p.spid;">>/tmp/runaway_forms.log
    mailx -s "`hostname -a`: Runaway Form Processes" [email protected] </tmp/runaway_forms.log
    cat /tmp/runaway_forms.log
    fi
    fi
    else
    #if PID1 and PID2 are not equal or CPU% is less than 95%.
    echo "No unique CPU hogging form processes found. Exiting...">>/tmp/runaway_forms.log
    cat /tmp/runaway_forms.log
    fi
    If you have the same problem with other Unix and DB processes, the script can easily be modified and reused.
    But use it only after thorough testing (e.g. by first commenting out the <kill -9 $PID1> lines).
    Good luck.
    Edited by: R12_AppsDBA on 19/04/2012 13:10

    Thanks for sharing the script!
    Hussein

  • Dynamic configuration in integration process using abap mapping

    Hi everybody,
    i have the following scenario:
    file adapter -> integration process -> file adapter
    The integration process uses an ABAP mapping and sets the filename in dynamic configuration as follows:
    *-- Set Parameter
        clear ls_dyn_record.
        ls_dyn_record-name      = gc_dyn_config_name.
        ls_dyn_record-namespace = gc_dyn_config_ns.
        ls_dyn_record-value     = <new_file_name>.
    *-- Write configuration
        ir_dyn_config->add_record( ls_dyn_record ).
    But the new filename is not reflected in the receiver file adapter. In the integration monitor (SXMB_MONI) I still see the old filename.
    What's wrong?
    Elko

    The ABAP mapping is more complex; setting the filename in the Dynamic Configuration is just one step of the mapping.
    If I check the workflow protocol of the integration process, I find the following in the trace of the ABAP mapping:
    The filename has been set to 3233340.SWNF00HW.P10I. The suffix P10I was added in the ABAP mapping.
    But when I check the subsequent message in SXMB_MONI, the added suffix is missing from the filename!
    Elko

  • Communication between thread in the same process using file interface.

    Hi,
    I am developing a driver and I need to communicate between two threads.
    > Can anyone guide me on implementing communication between two threads in the same process using a file interface? The first thread will be the driver and the second will be the application. I need to send IOCTL-like commands using the file interface, i.e. WriteFile() and ReadFile(),
    from the host process to the driver through the file interface (which runs in the driver context). The host process should not be blocked for the duration of the driver processing the command.
    > The file interface will run in the driver context and will be responsible for receiving commands from the application and passing them to the driver.
    What complexity does this introduce?
    > Can anyone also give me a link/reference with more information on this topic?
    > How would I replace IOCTL commands, for instance a baud-rate change command, with a file interface, for example using an IRP?

    Here is the detailed query:
    The Hardware Abstraction Layer will interact with the driver (currently each runs in a completely different process). There is an IOCTL for commands and a file interface for read and write.
    My requirement is:
    Both should run in the same process, so the HAL will run as one thread and the driver as another thread in the same process. I don't want the HAL to wait for completion of a request, and I also don't want the driver to be blocked.
    We are planning to use a file interface for communication between the Hardware Abstraction Layer and the driver:
    the HAL will send a command or read/write operation to the file interface, and the driver will get the command or read/write request from the file interface.
    There is flexibility to change both the Hardware Abstraction Layer and the driver.
    Is it possible to use IOCTL between two threads in the same process? If not, what other options do we have?
    Can we use a file interface to send commands (like IOCTL) between two threads?
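    For the in-process case described above, the usual pattern is a command queue between the two threads rather than IOCTL: the HAL thread enqueues a command and continues, and the driver thread dequeues and processes it, so neither side blocks the other. A language-neutral sketch of that pattern in Python follows; the command name and reply channel are made up for illustration, not part of any driver API.

```python
import queue
import threading

# shared command channel: (command name, argument, reply queue)
commands: "queue.Queue[tuple[str, object, queue.Queue]]" = queue.Queue()


def driver_thread() -> None:
    """Driver loop: dequeue commands, process them, post replies."""
    while True:
        name, arg, reply = commands.get()
        if name == "shutdown":
            break
        if name == "set_baud_rate":      # stand-in for an IOCTL-like command
            reply.put(("ok", arg))
        else:
            reply.put(("error", f"unknown command {name}"))


def hal_send(name: str, arg: object) -> queue.Queue:
    """HAL side: enqueue a command and return immediately with a reply channel."""
    reply: queue.Queue = queue.Queue()
    commands.put((name, arg, reply))
    return reply  # the caller collects the reply later, without blocking here


t = threading.Thread(target=driver_thread, daemon=True)
t.start()
pending = hal_send("set_baud_rate", 115200)  # HAL is not blocked while the driver works
status, value = pending.get()                # collect the reply when convenient
commands.put(("shutdown", None, queue.Queue()))
t.join()
```

    In a real Windows driver the queueing would be done with pended IRPs (e.g. a cancel-safe IRP queue) rather than a user-mode queue, but the non-blocking request/reply shape is the same.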
