Polling Intervals

Does anybody know how to set the polling interval in the adapters (DB, File, FTP, etc.) to less than 1 second, ideally down to 0.001 seconds (1 millisecond)?
Please treat this as urgent.

Stephen, thanks for your reply.
I have an integration prototype for which I have to show the throughput (100,000 messages of 262 bytes each) of the hub (BPEL PM) at different polling intervals such as 0.1, 0.01, and 0.001 seconds.
The customer wants to shut down (crash) the hub and also the network in different scenarios to test messaging reliability aspects such as message persistence, guaranteed delivery, and once-and-only-once delivery.
The JMS implementation is over OJMS, with JMS adapters as the senders and the DB and FTP adapters as the receivers.
Regards,
Amit
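
Most of these adapters take the polling interval in whole seconds, so values below 1 s may not be accepted at all. For the throughput measurement itself, a small external harness can drive a check at millisecond granularity instead. Below is a minimal Java sketch, assuming a hypothetical pollOnce() placeholder rather than any BPEL PM or adapter API; effective rates at 1 ms will still be bounded by the latency of whatever pollOnce() actually does (for example a database round trip).

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    public class PollHarness {
        private static final AtomicLong polls = new AtomicLong();

        // Placeholder for the real check (e.g. counting rows in a staging table).
        private static void pollOnce() {
            polls.incrementAndGet();
        }

        public static void main(String[] args) throws InterruptedException {
            long intervalMillis = 1;           // 0.001 s; also try 10 and 100
            ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
            timer.scheduleAtFixedRate(PollHarness::pollOnce, 0, intervalMillis, TimeUnit.MILLISECONDS);
            Thread.sleep(10_000);              // run for 10 seconds, then report
            timer.shutdown();
            System.out.println("polls executed: " + polls.get());
        }
    }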

Similar Messages

  • How to change the default workspace polling behaviour

    Here are the instructions on how to change the polling interval in workspace:
    The workspace source project is usually under (C:\Adobe\Adobe LiveCycle ES2\LiveCycle_ES_SDK\misc\Process_Management\Workspace)
    The file that needs to be modified is SessionManager.as and is under (C:\Adobe\Adobe LiveCycle ES2\LiveCycle_ES_SDK\misc\Process_Management\Workspace\ws\Workspace\foundation\src\lc\foundation)
    Set the value in a javascript file and then read it from the Workspace app. You can see an example of this in the same createChannelSet() method with respect to the "enableSmallMessages" variable. To do this:
    1- Add the pollingInterval function shown in step 5 to the workspace-config.js file right after the enableSmallMessages var definition (the js file is in the SDK under C:\Adobe\Adobe LiveCycle ES2\LiveCycle_ES_SDK\misc\Process_Management\Workspace\ws\Workspace\ui\html-template\js); note that my example sets it to 10 seconds.
    2- Then, in the createChannelSet() method, define a new var with the same name, of type Number, and default it to 3000 (to be consistent with the current behaviour):
                   var pollingInterval:Number = 3000;
    3- Then, in the existing block where ExternalInterface.available is true, add a line to read the new value from the js file like this:
                  pollingInterval = ExternalInterface.call("pollingInterval");
    4- Set pollingInterval on the channel with:
                 channel.pollingInterval = pollingInterval;
    5- Add this function to the workspace-config.js file:
          // pollingInterval is the channelset polling interval in milliseconds
          function pollingInterval() {
                return 10000;
          }
    Compile this code and rebuild the workspace war, making sure the war contents include the updated js file too. The polling interval will then be read from the js file, making it much easier to change.
    Here is the doc on how to customize workspace: http://help.adobe.com/en_US/livecycle/9.0/customizeworkspaceui.pdf

    Changing the Workspace Polling intervals just got a whole lot easier.  Go to http://blogs.adobe.com/ADEP/2011/08/workspace-polling-in-adep.html for more info.

  • Advice for polling XML

    Hi,
    I have an app that allows users to edit XML data. Ultimately this app should be able to accommodate multiple simultaneous users.
    Can anyone suggest a method I should employ to keep the XML data in my HTTPServices synced and updated in the UI in real time? I have considered a timed polling script on the Flex side, but changes could potentially conflict between the polling intervals.
    My app uses PHP on the server and I am using JavaScript callbacks for other portions of the app.
    Thank you!

    Looks like Flex Data Services is the way to go. For anyone with the same need:
    "Data push
    Flex Data Services offers data-push capability, enabling data
    to automatically be pushed to the client application without
    polling. This highly scalable capability can push data to thousands
    of concurrent users, providing up-to-the-second views of critical
    data, such as stock trader applications, live resource monitoring,
    shop floor automation, and more."
    http://www.adobe.com/products/flex/dataservices/

  • Oracle database Adapter polling in OSB Cluster

    Hi,
    I have OSB 11g with two nodes. I have a database adapter that polls a table every 30 seconds and performs a logical delete on the rows read. I came across an interesting situation: when one node is up and PollingInterval=30, it works fine and polling happens every 30 seconds, but when both nodes are up, the PollingInterval property doesn't work as expected.
    The first node polls after 24 seconds and then the second node polls again after 6 seconds (node1 + node2 = 30 seconds in total).
    After I restart the servers, the first node polls at 18 seconds and the second node at 12 seconds (node1 + node2 = 30 seconds), so the polling intervals change.
    How does it determine that the first node should poll at 24 or 18 seconds and the second node at 6 or 12 seconds?
    Is there any property or configuration required so that each node polls the table every 15 seconds (node1 (15 sec) + node2 (15 sec) = 30 sec)?
    Please help me; it is very urgent.
    Thanks
    Rajesh.

    Hi,
    If anyone has the same issue, please let me know.
    Also, if you know the solution, please post it here; it would be very helpful for me.
    Thanks,
    Rajesh.
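
    For reference, one common workaround is to let every node poll on its own fixed schedule and rely on row locks that the other node skips, combined with the logical delete. The JDBC sketch below is only an illustration of that pattern: the my_events table, its status column, and the connection handling are hypothetical, and the DB adapter's distributed polling option is, as far as I know, built on the same idea.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        // Hypothetical sketch: each node runs this every 30 seconds on its own clock.
        public class SkipLockedPoller {
            public static void pollOnce(String url, String user, String pass) throws Exception {
                try (Connection con = DriverManager.getConnection(url, user, pass)) {
                    con.setAutoCommit(false);
                    // Lock unprocessed rows; rows already locked by the other node are skipped.
                    String select = "SELECT id, payload FROM my_events " +
                                    "WHERE status = 'NEW' FOR UPDATE SKIP LOCKED";
                    try (PreparedStatement ps = con.prepareStatement(select);
                         ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            long id = rs.getLong("id");
                            // ... hand the payload to the integration layer here ...
                            try (PreparedStatement upd = con.prepareStatement(
                                    "UPDATE my_events SET status = 'DONE' WHERE id = ?")) {
                                upd.setLong(1, id);
                                upd.executeUpdate();   // the "logical delete"
                            }
                        }
                    }
                    con.commit();   // releases the locks after the logical delete
                }
            }
        }

    With a pattern like this each node can keep its own 30-second (or 15-second) schedule without double-processing rows. The 24/6 and 18/12 splits you describe look like both nodes sharing one 30-second schedule with different start offsets rather than polling independently.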

  • [solved] Ntpd: polling interval stuck at 64 s

    I've switched from one-shot ntp updates at boot to ntpd yesterday. Everything works fine, but I've noticed the update interval ("poll" in the ntpq output) seems to be stuck at 64 s, no matter how long ntpd has been running:
    # ntpq -pn
         remote           refid            st t  when poll reach   delay   offset  jitter
    ==============================================================================
    -78.46.223.89     129.69.1.153      2 u    46   64   377  57.993   -3.914   3.215
    +178.63.144.202   192.53.103.108    2 u    48   64   377  58.355   -4.997   2.041
    +188.40.51.2      192.53.103.108    2 u    49   64   377  58.183   -7.654   6.429
    *195.34.187.132   131.188.3.221     2 u    54   64   377  51.717   -5.382   1.119
    The ntp.conf man page says the defaults are 64 s (min) and 1024 s (max), so why isn't "poll" increasing? I've even set a higher minpoll and maxpoll in the config file (these values are log2 of the interval in seconds, so minpoll 8 = 256 s and maxpoll 12 = 4096 s), see below, but the value is still stuck at 64 s for some reason.
    Running 64 s update intervals for extended periods is putting undue strain on the time servers, so I'd really like ntpd to use a higher value.
    # cat /etc/ntp.conf
    # With the default settings below, ntpd will only synchronize your clock.
    # For details, see:
    # - the ntp.conf man page
    # - http://support.ntp.org/bin/view/Support/GettingStarted
    # - https://wiki.archlinux.org/index.php/Network_Time_Protocol_daemon
    # Associate to public NTP pool servers; see http://www.pool.ntp.org/
    server 0.de.pool.ntp.org
    server 1.de.pool.ntp.org
    server 2.de.pool.ntp.org
    server 3.de.pool.ntp.org
    minpoll 8
    maxpoll 12
    # Only allow read-only access from localhost
    restrict default noquery nopeer
    restrict 127.0.0.1
    restrict ::1
    # Location of drift and log files
    driftfile /var/lib/ntp/ntp.drift
    logfile /var/log/ntp.log
    # NOTE: If you run dhcpcd and have lines like 'restrict' and 'fudge' appearing
    # here, be sure to add '-Y -N' to the dhcpcd_ethX variables in /etc/conf.d/net
    P.S. I'm on a pure systemd setup, in case that's important...
    P.P.S. I've noticed right now "poll" is at 256 s, i.e. 2**8. So at least the new minpoll value seems to work now.
    ============================
    Update, 2012-09-23:
    Now the polling interval has finally hit 1024 s as it should! So my interpretation is that ntpd simply needs to run a few days before it actually uses the longer polling intervals.
    Last edited by Morn (2012-09-23 16:55:10)

  • Help with Delta Values needed

    Below is an example of the output from a “show access-list” command on the Cisco PIX/ASA.
    NDC-FW-01# show access-list
    access-list allow-in line 1 extended permit tcp any host <IP_1> eq www (hitcnt=186) 0x67305930
    access-list allow-in line 2 extended permit tcp any host <IP_1> eq https (hitcnt=0) 0x4612a177
    access-list allow-in line 11 extended permit tcp any host <IP_2> eq www (hitcnt=480) 0xce0a6156
    access-list allow-in line 12 extended permit tcp any host <IP_2> eq https (hitcnt=64) 0xf530e0aa
    access-list allow-in line 20 extended permit tcp any host <IP_3> eq www (hitcnt=7671) 0xea971ac0
    access-list allow-in line 21 extended permit tcp any host <IP_3> eq https (hitcnt=41920) 0x8d30dc38
    access-list allow-in line 22 extended permit tcp any host <IP_4> eq https (hitcnt=34) 0xbf7c0975
    What I want to be able to do is monitor the delta value of the hit count between polling intervals. Ideally I want to do this for only some access-lists, and for only some of the access-list entries within those access-lists.
    Is this something I can do directly, or do I need a third-party piece of software? If so, can anyone suggest which software to use?
    Thanks very much
    DavidT

    That's nothing the ASA can do natively. But if you have a Linux box, it shouldn't be too hard to script with a few lines of AWK (see the sketch after this reply):  http://www.gnu.org/software/gawk/manual/gawk.html
    BTW: You should move this thread to "Firewalling", as it has nothing to do with IPS ...
    Don't stop after you've improved your network! Improve the world by lending money to the working poor:
    http://www.kiva.org/invitedby/karsteni
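
    For what it's worth, here is the same idea sketched in Java rather than AWK; nothing in it is Cisco-specific, it just diffs two saved captures of "show access-list". The file names and the "allow-in" filter are assumptions to adapt.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.HashMap;
        import java.util.Map;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class HitcntDelta {
            // Captures the ACE text (key) and its hit counter.
            private static final Pattern LINE =
                    Pattern.compile("^(access-list .+?) \\(hitcnt=(\\d+)\\)");

            private static Map<String, Long> parse(String file) throws IOException {
                Map<String, Long> counts = new HashMap<>();
                for (String line : Files.readAllLines(Paths.get(file))) {
                    Matcher m = LINE.matcher(line.trim());
                    if (m.find()) {
                        counts.put(m.group(1), Long.parseLong(m.group(2)));
                    }
                }
                return counts;
            }

            public static void main(String[] args) throws IOException {
                // Two captures of "show access-list", one per polling interval.
                Map<String, Long> previous = parse("acl_previous.txt");
                Map<String, Long> current  = parse("acl_current.txt");
                for (Map.Entry<String, Long> e : current.entrySet()) {
                    long before = previous.getOrDefault(e.getKey(), 0L);
                    long delta = e.getValue() - before;
                    if (delta != 0 && e.getKey().contains("allow-in")) {   // filter: only this ACL
                        System.out.println(delta + "\t" + e.getKey());
                    }
                }
            }
        }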

  • JDBC Adapter: J2EE server crashes while sending large messages

    We want to use the following scenario to transfer data from a MS SQL Server to SAP BW via XI:
    JDBC Sender Adapter – XI – SAP ABAP Proxy.
    All works fine with a small amount of data. But if the select statement delivers too many record sets and the size of the transformed XML payload is greater than 50 MB, the J2EE server crashes and a complete restart is necessary. It seems to be a memory problem.
    Here are the entries from our log files:
    dev_server0
    [Thr 6151] Mon Jul 24 12:46:57 2006
    [Thr 6151] JLaunchIExitJava: exit hook is called (rc=666)
    [Thr 6151] **********************************************************************
    ERROR => The Java VM terminated with a non-zero exit code.
    Please see SAP Note 940893 , section 'J2EE Engine exit codes'
    for additional information and trouble shooting.
    [Thr 6151] SigISetIgnoreAction : SIG_IGN for signal 17
    [Thr 6151] JLaunchCloseProgram: good bye (exitcode=666)
    std_server0.out
    FATAL: Caught OutOfMemoryError! Node will exit with exit code 666java.lang.OutOfMemoryError
    Is this a general problem of XI or specific to our configuration? Is it possible to transfer such large messages via XI? If not, is there a workaround for such scenarios?
    (Memory heap size of the J2EE server is 1024 MB.)

    > Hi Gil,
    >
    > I had nearly the same problem a few times in practice,
    > and the mapping was the reason. Just change your
    > interface determination temporarily, delete the mapping,
    > and test again to find out if the mapping is the
    > reason.
    >
    > Regards,
    > Udo
    I have changed my interface determination so that no message mapping is used. The J2EE server still crashes.
    > Hi Gil,
    > This does sound like a memory problem, especially
    > when it comes to a 50 MB message with minimum XI
    > system requirements...
    > To be sure, you can check component monitoring in
    > the RWB for the JDBC adapters, look
    > for your adapter,
    > and look at the status of the adapter and the trace
    > there...
    Hi Nimrod
    In case of such an error I have no entries in the channel monitor, so I can't see anything there. I also have no entries in the message monitor of the RWB in this case, so I don't get any information from the standard XI tools.
    > My recommendation to you is to set the poll interval
    > to a shorter period; this way you'll make sure you get
    > fewer records... I hope you have remembered to add a
    > status/flag column on the table to be set after
    > selection, so no duplicate records are picked up on
    > the subsequent polls.
    >
    The problem is that the source of my data is not a simple SQL statement but a stored procedure, so I don't know exactly how many records will be delivered. An update command is not possible.
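
    If the stored procedure can be changed at all, one possible direction is to give it a watermark and a row limit so that each poll returns a bounded slice instead of the full delta. The call below is purely hypothetical (the procedure name GET_DELTA, its parameters, and the record_key column are invented) and only illustrates the paging idea:

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.ResultSet;

        // Hypothetical paging call: GET_DELTA(lastKey, maxRows) returns at most maxRows
        // records with keys greater than lastKey, so one poll can never build a 50 MB payload.
        public class PagedProcedurePoll {
            public static long fetchPage(Connection con, long lastKey, int maxRows) throws Exception {
                try (CallableStatement cs = con.prepareCall("{call GET_DELTA(?, ?)}")) {
                    cs.setLong(1, lastKey);
                    cs.setInt(2, maxRows);
                    try (ResultSet rs = cs.executeQuery()) {
                        while (rs.next()) {
                            lastKey = Math.max(lastKey, rs.getLong("record_key"));
                            // ... map the row into the outbound message here ...
                        }
                    }
                }
                return lastKey;   // persist this watermark for the next poll
            }
        }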

  • File to IDOC, with External Definition to hold multiple IDocs

    Hello Experts,
    I am configuring a sample file-to-IDoc scenario. Here are the steps I am following; can you please point out what I am missing?
    I have a flat file with 10 records to be transferred to my ECC system through the IDoc adapter to create 10 different IDocs.
    1. I am picking up my flat file using a 3rd-party Business System with File Content Conversion.
    2. I uploaded the CREMAS IDoc, exported it as an XSD file, changed the occurrence parameter to unbounded, and uploaded it back as an external definition into the IR.
    3. I defined an SAP Web AS ABAP type Business System with an LS (representing my ECC system) and configured it with an IDoc receiver channel.
    4. In the IDoc receiver channel, I specified the RFC destination pointing to my ECC system, plus the port, release, etc.
    5. On the ECC side, I configured a partner profile with the above-mentioned LS (used in step 3), added an inbound IDoc parameter CREMAS with process code CRE1, and selected the option Trigger Immediately.
    Testing:
    1. When I check channel monitoring in the RWB, my file on the source side is getting picked up successfully at the specified polling intervals.
    2. I don't see any messages in SXMB_MONI. I verified the SXMB_ADM configuration and it looks fine to me.
    3. I used transaction WE02 on the ECC side; nothing is there.
    4. I used transaction SM58 on the ECC side; nothing is there.
    I have no idea what is happening after the file is successfully picked up on the source side.
    Is there a way to trace where exactly the error is, or what is wrong with my scenario?
    Just an observation: we don't need to specify any LS on the sender Business System (3rd party in my case) which picks up my file on the source side. Rather, I am specifying the LS on the target Business System (type Web AS ABAP in my case) to send my IDoc to the ECC system. I am using the same LS to define the partner profile on the ECC system.
    Edited by: Chris Rock on Oct 21, 2008 5:03 PM

    Aamir,
    I tried a full cache refresh and also cleared the SLD cache for both the IR and ID. No luck.
    I think something is wrong in the connection parameters, but if I at least saw some error in MONI or the RWB it would help; in my situation, nothing is showing up there.
    Connections:
    I specified the port and RFC destination in the IDoc receiver adapter along with the Web AS release version. These were created in XI using SM59 and IDX1, and both passed the connection test.
    On the ECC side, I created a partner profile with inbound IDoc CREMAS and process code CRE1. This partner profile uses the LS that was used in the receiver Business System specified on the XI side.
    I also loaded the IDoc metadata into XI with transaction IDX2.
    Please help me, anybody. I will sincerely award the points.

  • Sender Mail Adapter problem

    Hi,
    I am having problems getting my sender mail adapter running.
    I configured it with the following parameters:
    Transport protocol: POP3
    Message protocol: XIALL
    Adapter Engine: Integration Server
    URL: pop://server
    Username + password
    Poll interval: 1 min.
    The adapter is active and if I look into the adapter monitoring everything seems to be fine (green light).
    But the adapter doesn't poll the messages from the mail account and I cannot see any activity in the message monitoring.
    If I try the same configuration with Thunderbird, everything works fine. Also, my receiving mail adapter works without any problems for the same account.
    Do you have any idea what the problem could be or how I could find out what my mail adapter is doing?
    Thanks,
    Andreas

    Hi all, hi Andreas,
    have you solved your problem? If so, could you please describe how you did it?
    I have the same situation and I have no idea how to manage it.
    I checked all the parameters and compared them with the documentation. Everything seems to be OK.
    I don't see any messages in the message monitoring, but in the channel monitoring I found the following:
    exception caught during processing mail message; java.net.ConnectException: Connection refused: connect
    I tried the same configuration with an email client and it worked. I also tried changing POP to IMAP with the URL imap://server/folder. That didn't work either.
    Could you please help me?
    Thank you in advance,
    Anna
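
    Anna, "Connection refused" usually means the XI host cannot reach the mail server's POP3 port at all (firewall, wrong port, or an SSL-only server). A quick way to check from the XI server itself is a small standalone JavaMail test like the sketch below; the host name, user, and password are placeholders:

        import java.util.Properties;
        import javax.mail.Folder;
        import javax.mail.Session;
        import javax.mail.Store;

        public class Pop3Check {
            public static void main(String[] args) throws Exception {
                Properties props = new Properties();
                props.put("mail.pop3.host", "mailserver.example.com");   // placeholder host
                props.put("mail.pop3.port", "110");
                Session session = Session.getInstance(props);
                Store store = session.getStore("pop3");
                // Fails with a ConnectException if the port is unreachable from this host.
                store.connect("mailserver.example.com", "username", "password");
                Folder inbox = store.getFolder("INBOX");
                inbox.open(Folder.READ_ONLY);
                System.out.println("Messages waiting: " + inbox.getMessageCount());
                inbox.close(false);
                store.close();
            }
        }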

  • Append messages without the header tag <?xml version="1.0" encoding="UTF-8"?>

    Hi all,
    I am doing a file-to-file scenario. When I use APPEND in the file adapter, it also adds <?xml version="1.0" encoding="UTF-8" ?>.
    I need to send a file every 10 minutes, consolidate all the files, and send them at the end of the day.
    <?xml version="1.0" encoding="UTF-8" ?>
    <ID>31154</ID>
    The next time, when I send the file with a different <ID>31155</ID>,
    it should append while ignoring <?xml version="1.0" encoding="UTF-8" ?>.
    The consolidated file must look like this:
    <?xml version="1.0" encoding="UTF-8" ?>
    <ID>31154</ID>
    <ID>31155</ID>
    Thanks ,
    Srinivas

    Hey,
    As pointed out by everyone else, there is no straightforward way to do this. One thing you can do is create two separate scenarios.
    In the first scenario, use content conversion on the receiver side and keep appending the text every 10 minutes (I guess this is your polling interval). Since you are using FCC you won't get <?xml version="1.0" encoding="UTF-8" ?>; you will get a flat file on the receiver side.
    After 10 minutes you can have another scenario which picks up this flat file, this time using FCC on the sender side so that it converts the flat file back to XML. This way you will get <?xml version="1.0" encoding="UTF-8" ?> only once.
    Hope this solves your problem.
    Just make sure that you specify correct polling intervals for both scenarios.
    Thanks,
    Ahmad
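
    Outside of XI, the underlying idea is simply to drop the XML declaration from every fragment after the first before appending. A small standalone Java illustration (the file names are hypothetical; this is not adapter code):

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.nio.file.StandardOpenOption;

        public class AppendWithoutDeclaration {
            public static void append(Path fragment, Path consolidated) throws IOException {
                String xml = new String(Files.readAllBytes(fragment), "UTF-8");
                boolean firstWrite = !Files.exists(consolidated);
                if (!firstWrite) {
                    // Strip the <?xml ... ?> declaration from every fragment after the first.
                    xml = xml.replaceFirst("^\\s*<\\?xml[^>]*\\?>\\s*", "");
                }
                Files.write(consolidated, xml.getBytes("UTF-8"),
                        StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            }

            public static void main(String[] args) throws IOException {
                append(Paths.get("fragment_31154.xml"), Paths.get("consolidated.xml"));
                append(Paths.get("fragment_31155.xml"), Paths.get("consolidated.xml"));
            }
        }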

  • File Adapter: FTP issue

    Hello,
    There is a problem with the file adapter of XI 3.0 SP14:
    it is set to get a file from a server X using the FTP protocol.
    INT_001_V01_COM_CTDCLNT600_FILE_SENDER
    Sender Adapter v2300 for Party '', Service 'CTDCLNT600':
    Configured at 2006-01-16 15:13:52 EST
    History:
    - 2006-01-16 15:23:54 EST: Polling interval started. Length: 60.000 s
    - 2006-01-16 15:23:54 EST: Processing finished successfully
    - 2006-01-16 15:23:52 EST: Processing started
    - 2006-01-16 15:22:54 EST: Polling interval started. Length: 60.000 s
    - 2006-01-16 15:22:54 EST: Processing finished successfully
    It seems everything is working fine, but in the SXMB_MONI transaction no message appears.
    I have accessed the FTP server manually and downloaded the file, so a connectivity issue is ruled out.
    Any suggestions?
    Jesus Barba Lobaton

    Hello all,
    The current configuration of the Sender file adapter is:
    Transfer protocol: FTP
    Message protocol: File Content Conversion
    Adapter Engine: Integration Server
    Source Directory: /out
    File: D_P.DAT
    In FTP: the file D_P.DAT is placed under /out directory
    Server: Server Internal IP
    Port: 21
    Connection security: None
    User: userX
    Pass: FTPuserX
    Quality of Service: Exactly once
    Poll Intervals: 60
    Processing mode: Delete
    File Type: Binary
    How can I see a log of the processing, as the SXMB_MONI transaction does not show anything?
    The most frustrating thing of all is that no error message appears. I have checked the XI, APPS, and default trace logs with the log viewer, but there is nothing.
    I have also checked this thread, but none of the solutions provided solved the issue:
    Pbm in File adapter..Post Sp14
    OS: Windows 2003
    Any idea?
    Jesus
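
    Since the channel reports success but nothing reaches SXMB_MONI, it may also be worth confirming exactly what an FTP client sees in /out from the XI host itself (file name case, permissions, whether the file is still there after the Delete processing mode ran). A throwaway diagnostic sketch with Apache Commons Net, using the placeholder credentials from the configuration above:

        import org.apache.commons.net.ftp.FTPClient;
        import org.apache.commons.net.ftp.FTPFile;

        public class FtpDirectoryCheck {
            public static void main(String[] args) throws Exception {
                FTPClient ftp = new FTPClient();
                ftp.connect("server-internal-ip", 21);   // placeholder host
                ftp.login("userX", "FTPuserX");
                ftp.enterLocalPassiveMode();
                // List what the adapter would see in the source directory.
                for (FTPFile f : ftp.listFiles("/out")) {
                    System.out.println(f.getName() + "  " + f.getSize() + " bytes");
                }
                ftp.logout();
                ftp.disconnect();
            }
        }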

  • Sender JDBC adapter: duplicate data sent

    Hi experts,
    The scenario is that XI is polling data from the source system using a JDBC sender adapter, but duplicate data is being sent to the target system. I have checked the channel configuration and it is as follows:
    Polling Interval (secs) = 3600
    and scheduling for the channel in the RWB is set as: Daily at ......AM for 2 hours.
    As per my understanding, the adapter polls the data for one hour and then starts polling again for another hour, as set in the scheduling.
    If I set the availability planning to Daily at ......AM for 1 hour, the problem might be resolved.
    Please advise.
    thanks,
    Regards,
    Mohan

    This could be resolved if you can add a field to the DB table, like 'ReadFlag', and have your SQL statement check this field (see the sketch after the blog link below).
    You can ask someone (who can actually make changes in the database) to add the field to the table.
    Please refer to this blog for more background:
    /people/yining.mao/blog/2006/09/13/tips-and-tutorial-for-sender-jdbc-adapter
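
    The suggestion above amounts to a select plus an update on the same flag in one unit of work, which is essentially what the sender channel's query and update SQL statements do together. A plain-JDBC illustration of why the flag stops the next poll from re-sending the same rows (table and column names are hypothetical):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class ReadFlagPattern {
            public static void pollOnce(String url, String user, String pass) throws Exception {
                try (Connection con = DriverManager.getConnection(url, user, pass)) {
                    con.setAutoCommit(false);
                    try (Statement st = con.createStatement()) {
                        // Only rows not yet sent are selected ...
                        try (ResultSet rs = st.executeQuery(
                                "SELECT id, payload FROM source_table WHERE read_flag = 'N'")) {
                            while (rs.next()) {
                                // ... build and send the message for this row here ...
                            }
                        }
                        // ... and the same rows are flagged in the same transaction, so the
                        // next poll skips them (a real setup would key the update on the
                        // selected ids to avoid flagging rows inserted in between).
                        st.executeUpdate("UPDATE source_table SET read_flag = 'Y' WHERE read_flag = 'N'");
                    }
                    con.commit();
                }
            }
        }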

  • How to remove XML messages from Integration Engine

    Hi,
    I created a file-XI-file scenario, activated it, and checked messages using transaction SXMB_MONI. Error messages come in every minute, as I set the polling interval to 60 seconds. Now I have deleted the related objects from the IR and ID, but XML messages keep coming in... can anyone please help me stop this scenario?

    Dear Jai,
    I had checked everything before this and have checked it again now; I created that scenario with my own user ID and password. I checked the sender and receiver interfaces in SXMB_MONI and they are the same ones I named. We are all surprised that this continues even after deleting everything in the IR and ID. I don't know exactly why the messages keep coming, but my guess is that after activating the scenario it executes from the XI runtime cache and has no dependency on the IR and ID objects. So I completely refreshed the XI runtime cache using transaction SXI_CACHE, but it's still not working that way either.
    Message was edited by:
            ashwin dhakne

  • JDBC Sender Adapter Threads and DB connections

    Hello,
    I have got a few questions regarding the behaviour of the JDBC sender adapter.
    Suppose I have configured a short polling interval of 10 seconds. On the first poll a message is processed. How does the JDBC sender adapter behave when there is a second poll after 10 seconds and data is available? Does it wait until the first poll has been processed completely, or does it start a second thread processing a second message in parallel? If the latter is true, would a second database connection be retrieved, or would both threads share one database connection? What would happen if a large number of unprocessed threads were waiting?
    Kind regards,
    Heiko

    Hi Heiko,
    Whether or not the first thread is finished, a new thread is created automatically after the polling interval. If the previous thread is accessing data, the database will lock it for that thread, so the other thread has to wait until the lock is released. You can see the waiting thread in the SAP Web GUI; there is a green indicator in the status column corresponding to the service.
    So a new service is created after the polling interval irrespective of the previous service's result, and it waits in the queue until its resource is obtained.
    Configuring a sender JDBC adapter to poll a database is not advisable, especially with such a short polling interval; this will increase the queue length and obviously cause performance issues.
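
    To illustrate the distinction Heiko is asking about in generic Java (this is not the adapter's internal code): with fixed-rate scheduling, runs that overrun the interval pile up and fire back to back, while with fixed-delay scheduling the next run is only timed from the end of the previous one. Whether the JDBC sender adapter serializes its polls or starts them in parallel is something the SAP documentation or a trace would have to confirm.

        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class PollSchedulingDemo {
            private static void slowPoll() {
                System.out.println("poll started  " + System.currentTimeMillis());
                try {
                    Thread.sleep(25_000);   // pretend the poll takes 25 s, longer than the 10 s interval
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                System.out.println("poll finished " + System.currentTimeMillis());
            }

            public static void main(String[] args) {
                ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
                // Fixed rate: missed ticks pile up and fire back-to-back once slowPoll() returns,
                // but they never run concurrently on this single-threaded executor.
                timer.scheduleAtFixedRate(PollSchedulingDemo::slowPoll, 0, 10, TimeUnit.SECONDS);
                // Fixed delay (swap in to compare): the next poll starts 10 s AFTER the previous one ends.
                // timer.scheduleWithFixedDelay(PollSchedulingDemo::slowPoll, 0, 10, TimeUnit.SECONDS);
            }
        }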

  • Transaction boundaries for an Integration Knowledge Module

    Hi All,
    We are currently using the SQL Incremental Update IKM to push changes from our source to target database. Our target database is denormalized to extract out data that can change over time into tables that are indexed on load timestamps that come from CDC (so that we can do time based queries on these tables to find changes between polling intervals). We want the ability to run multiple IKMs in one transaction for a parent-child type relationship, so that if we have an error in processing the child, then the parent is rolled back, leaving the target database in a consistent state.
    We have tried assigning the one transaction to all of the steps in the IKM as well as the LKM and CKM, so that the load of the data into the ODI work tables is in the same transaction as the commit to the intermediate store. If we tell ODI not to commit until the end of the integration module and there is an error inserting rows, then rather than rolling back, the errors are placed into the E$<table_name> table in the target database and the IKM finishes as normal.
    Is it possible to roll back this transaction so that errors aren't placed into these E$ tables? We would rather have human intervention fix the error in the target database and rerun the scenario than fix the error in the target database and then copy values out of these E$ tables, especially since the E$ tables are emptied on the next run. Any help would be greatly appreciated.
    Regards,
    Aaron.

    If you turn FLOW_CONTROL off, the data won't be moved into the E$ tables; ODI will simply try to do the set-wise updates and inserts, and if that fails, the task fails, causing your rollback.
