Best practices for handling large messages in JCAPS 5.1.3?

Hi all,
We have run into problems while processing large messages in JCAPS 5.1.3. Or rather, they are not that large really: only 10-20 MB.
Our setup looks like this:
We retrieve flat file messages from an FTP server. They are put onto a JMS queue and are then converted to and from different XML formats in several steps, using a couple of JCDs with JMS queues between them.
It seems that we can handle one message at a time, but as soon as we get two of these messages simultaneously, the logicalhost freezes and crashes in one of the conversion steps without any error message reported in the logicalhost log. We can't relate the crashes to a specific JCD, and memory consumption increases A LOT for the logicalhost process while handling the messages. After a restart of the server, the messages that are in the queues are usually converted OK. Sometimes, however, we have seen that some messages seem to disappear. Scary stuff!
I have heard of two possible solutions for handling large messages in JCAPS so far: splitting them into smaller chunks, or streaming them. Neither of these is an option in our setup.
We have tried adjusting the JVM memory settings without any improvement, and we have discussed the issue with Sun's support, but they have not been able to help us yet.
My questions:
* Any ideas how to handle large messages most efficiently?
* Any ideas why the crashes occur without any error messages in the logs?
* Any ideas why messages sometimes disappear?
* Any other suggestions?
Thanks
/Alex

* Any ideas how to handle large messages most efficiently?
Strictly speaking, if you want to send the entire file content in a JMS message, then I don't have an answer to this question.
Generally we use the following process: after reading the file from the FTP location, we archive it in a local directory and send a JMS message to the queue containing only the file name and file location. In most places we never send the file content in the JMS message.
* Any ideas why the crashes occur without error messages in the logs or nothing?
Whenever the JMS IQ Manager's memory usage grows too high, logicalhosts stop processing. I would not say they are down; they stop processing, or processing may take a very long time.
* Any ideas why messages sometimes disappear?
Unless persistent delivery is enabled, I believe there is a high chance of losing a message when the logicalhost goes down. This is not always the case, but we have faced a similar issue when the IQ Manager was flooded with a lot of messages.
* Any other suggestions?
If the file is large, it is better to stream it from the FTP location to a local directory and send only the file location in the JMS message.
Hope this helps.
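The file-reference approach described above (sometimes called the claim-check pattern) can be sketched in plain Java. The archive directory and payload format here are assumptions for illustration; in JCAPS the payload would be sent to the queue from a JCD, and downstream JCDs would open and stream the archived file themselves:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class ClaimCheck {

    // Archive the file retrieved from FTP into a local directory and build a
    // small JMS payload that references it, so the 10-20 MB content itself
    // never travels through the JMS IQ Manager.
    public static String archiveAndBuildPayload(Path retrievedFile, Path archiveDir)
            throws IOException {
        Files.createDirectories(archiveDir);
        Path archived = archiveDir.resolve(retrievedFile.getFileName());
        // Streams the bytes to disk; the whole file is never held in memory here.
        Files.copy(retrievedFile, archived, StandardCopyOption.REPLACE_EXISTING);
        return "fileName=" + archived.getFileName()
                + ";fileLocation=" + archived.toAbsolutePath();
    }
}
```

Each conversion step then opens the file by location, streams it through the transformation, and passes the tiny reference message on to the next queue.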

Similar Messages

  • Best Practice for Handling Timeout

    Hi all,
    What is the best practice for handling timeout in a web container?
    By the way, we have been using weblogic 10.3
    thanks

    Are you asking about this specifically for web services, or just the JEE container in general for a web application?
    If a web application, Frank Nimphius recently blogged a JEE filter example that you could use too:
    http://thepeninsulasedge.com/frank_nimphius/2007/08/22/adf-faces-detecting-and-handling-user-session-expiry/
    Ignore the tailor-fitted ADF options; this should work for any app.
    CM.

  • Best practices for handling elements and symbols (including preloading)

    I am trying to learn Edge Animate and I have not seen enough animations to know how this is typically handled and I searched the forum and have not found an answer either.
    If you have many different elements and symbols for a project, what is the best practice for having them appear, disappear, etc. on the timeline? I ask this question not only from a performance based perspective, but also keeping in mind the idea of preloading. This is a 2 part question:
    Part 1: Using elements and symbols later in the timeline:
    Since artwork is always imported directly to the stage in an "always on" status, should we place a visibility OFF on every item until we need it?
    or should they be opacity 0 until I need them?
    or should they be set to visibility hidden until I need them?
    Which of these is the best option if you don't want the element / symbol visible until later in the timeline? Does it matter?
    Part 2: Impact on page loading
    Does the above question have any impact upon page loading speed
    or is this something handled in preloading?
    or do you need to make a special preloader?
    Thanks for the help.

    Hi, escargo-
    Good questions!
    Part 1: Using elements and symbols later in the timeline:
    I would recommend that you set your visibility to "off" instead of simply changing the opacity.  The reason I suggest this is that when your visibility is set to off, your object's hit points also disappear.  If you have any type of interactivity, having the object still visible but with 0 opacity will interfere with anything you have underneath it in the display order.
    Part 2: Impact on page loading:
    No, none of this has any impact on page load.  As you already noticed, all of the assets of your project will load before it displays.  If you want only part of your composition to load, you may want to do what we call a multi-composition project.  There's a sample of that in the Edge Animate API in the Advanced section, and plenty of posts in the forums (and one in the team's blog) explaining how to do that.
    http://www.adobe.com/devnet-docs/edgeanimate/api/current/index.html
    https://blogs.adobe.com/edge/
    Hope that helps!
    -Elaine

  • Best practice for handling errors in EOIO processing on AEX?

    Hi,
    I'm looking for resources and information describing the best practice for handling processing errors of asynchronous integrations with Exactly-Once-In-Order QoS on an AEX (7.31) Java-only installation. Most information I've found so far describes the monitoring and restart jobs on AS ABAP.
    Situation to solve:
    Multiple different SOAP messages are integrated using one queue with an RFC receiver. On error, the message status is set to Holding and all following messages are Waiting. Currently I need to manually watch over the processing, delete the message with status Holding, and restart the waiting ones.
    It seems I can set up component-based message alerting to trigger an email or some other alert. I still need to decide how to handle the error and resolve it (i.e. delete the erroneous message, correct the data at the sender, and trigger another update). I also still need to manually find the oldest entry with status Waiting and restart it. I've found a restart job under Background Jobs in Configuration and Monitoring Home, but it can only be scheduled at intervals of one hour or more.
    Is there something better?
    Thank you.
    Best regards,
    Nikolaus

    Hi Nikolaus -
    AFAIK, for EOIO you have to cancel the failed message and then process the next message in the sequence manually.
    The restart job only works on messages which are in error state, not in Holding state, so you have to push the message manually. There is no other alternative.
    But it should not be that difficult to identify the messages in a sequence.
    How to deal with stuck EOIO messages in the XI ... | SCN
    Though it is for an older version, it should be the same; you should be able to select additional columns such as sequence ID from the settings.

  • Best practices for a large application

    I have an existing application that is written in character-based Oracle Developer and has many forms and reports. A few years ago I converted an inquiry portion of the system to standard ASP; this portion has about 25 pages. Initially I want to rewrite the inquiry portion in Flex (partly to teach myself Flex), but eventually (soon) I will need to convert the remainder of the application, so I want a flexible and robust framework from the beginning.
    So far, for fun, I wrote a simple query and I like what I have done, but I realized that trying to write the entire application in a single script with hundreds of states would be impossible. The application is a fairly traditional app with a login script, a horizontal menu bar, and panels that allow data entry, queries, and reports. In Oracle and ASP each "panel" is a separate program and is loaded as needed. I see advantages in this approach and would like to continue it (creating as much reusable stuff as possible).
    So where can I find documentation and/or examples on the best practice for laying out a framework for what will eventually be a sizeable app?
    BTW, what about the ever-present problem of writing reports?
    Paul

    As a matter of fact, modules are exactly what you're looking for! :)
    Flex's doc team has a good intro document here:
    http://blogs.adobe.com/flexdoc/2007/01/modules_documentation_update.html
    And this gentleman has a quick example in his blog:
    http://blog.flexexamples.com/2007/08/06/building-a-simple-flex-module/
    The app I'm working on now is primarily based on modules, and they're a good way to keep things organized and keep loading times low. Documentation on them has been so-so, but it's getting better. There are little undocumented gotchas that you'll undoubtedly run into as you develop, but they've been covered here and in the flexcoders Yahoo group time and again, so you'll readily be able to find help.
    When creating a module, you can either place it in the project that it will eventually be used with, or create a stand-alone project just for that module (the preferred and more organized method). Creating a stand-alone module project is easy: just change the "mx:Application" tag in your main mxml file to a "mx:Module" tag instead.
    If you're using Cairngorm as your app's framework, you'll have to tinker with things a bit to get it all working smoothly, but it's not that hard, and I believe the doc team is working on a definitive method for using modules in Cairngorm-based apps.
    Hope this helps!

  • Best Practices for Handling queries for searching XML content

    Experts: we have a requirement to get a count over 4M rows where a specific XML tag's value begins with a given prefix. I have a text index created, but the query is extremely slow when I use the CONTAINS operator:
    select count(1) from employee
    where
    contains ( doc, 'scott% INPATH ( /root/element1/element2/element3/element4/element5)') > 0
    What is Oracle's best-practice recommendation for querying and indexing such searches?
    Thanks

    Can you provide a test case that shows the structure of the data and how you've generated the index? Otherwise, the generic advice is going to be "use prefix indexing".
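    The generic "use prefix indexing" advice can be sketched as Oracle Text DDL. The preference and index names below are made up, and the min/max prefix lengths are assumptions to tune against your data; the PATH section group is kept because the query above relies on INPATH:

    ```sql
    -- Create a wordlist preference that stores token prefixes, so 'scott%'
    -- resolves from the prefix index instead of expanding a huge wildcard.
    BEGIN
      ctx_ddl.create_preference('emp_wordlist', 'BASIC_WORDLIST');
      ctx_ddl.set_attribute('emp_wordlist', 'PREFIX_INDEX', 'TRUE');
      ctx_ddl.set_attribute('emp_wordlist', 'PREFIX_MIN_LENGTH', '3');
      ctx_ddl.set_attribute('emp_wordlist', 'PREFIX_MAX_LENGTH', '8');
    END;
    /

    -- Rebuild the index with the wordlist; the PATH section group is what
    -- makes the INPATH operator in the query usable.
    CREATE INDEX emp_doc_idx ON employee (doc)
      INDEXTYPE IS ctxsys.context
      PARAMETERS ('wordlist emp_wordlist section group ctxsys.path_section_group');
    ```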

  • Best practice for handling data for a large number of indicators

    I'm looking for suggestions or recommendations for how to best handle a UI with a "large" number of indicators. By large I mean enough to make the block diagram quite large and ugly after the data processing for each indicator is added. The data must be "unpacked" and then decoded, e.g., booleans, offset-binary bit fields, etc. The indicators are updated once per second. I am leaning towards a method that worked well for me previously: binding network-shared variables to each indicator, then using several sub-VIs to process each particular piece of data and write to the appropriate variables.
    I was curious what others have done in similar circumstances.
    Bill
    “A child of five could understand this. Send someone to fetch a child of five.”
    ― Groucho Marx

    I can certainly feel your pain.
    Note that's really what is going on in that png; you can see the Action Engine responsible for updating the display at the far right.
    In my own defence: the FP concept was presented to the client's customer before they had a person familiar with LabVIEW identified, so I worked it this way through no choice of my own. I knew it would get ugly before I walked in the door and chose to meet the challenge head-on anyway. Defer Panel Updates was my very good friend. The sensors these objects represent were constrained to pass info via a single ZigBee network, so I had the benefit of fairly low data rates, but even changing the view (yes, there is a display mode that swaps what information is displayed for each sensor) was fast enough that the user still got a responsive GUI.
    (The GUI did scale poorly, though! That is a lot of wires! I was grateful to Jack for the idea to make align and distribute work on wires.)
    Jeff

  • Best practice for handling original files once movie is complete?

    So I'm taking movies from my Canon S5IS (and other cameras in the past) and making projects in iMovie, sharing in the Media Browser, importing into iDVD, and burning to DVD.
    I can't help but wonder if I might need the original footage one day. Do most people keep their original files for future media (replacement for DVD) which I realize would require recreation of the movies that were created in 2008 with iMovie (with title screens, transitions, etc.)? Or do most people delete the originals with the feeling that DVD will be a suitable way to watch home movies for the foreseeable future?
    I just can't figure out what to do. I don't want to burn dozens of DVDs of raw footage, only to have to keep up with them in a safe deposit box and deal with the anxiety of having to recreate movies one day (which is daunting enough now... unbelievably daunting to think about the exponential growth as time progresses).
    Hope this makes sense. Reading that DVD movies are not suitable for editing due to the codec has made me realize I need to think this through before destroying all these originals as I finish with them.
    Thanks in advance!
    -John

    If any of your cams are miniDV, then you simply need to keep the original tapes and tape is still the safest long term archiving solution, when stored properly.
    Other cams that use flash memory, hard drives, even DVD cams, do not offer the security that tape does. If you are wanting to save those types of files, the best option would be to store them on one or two external hard drives, bearing in mind those drives could fail anytime. Back up to your back up in that case.
    Another nice thing about miniDV cams is that you can export your finished movie back to a tape also, using iMovie HD6, and have safe copies of original and finished material.

  • Best Practice for very large itunes and photo library..using Os X Server

    Ok setup....
    One iMac, one new MacBook Pro, one MacBook, all on Leopard. Wired and wireless, all AirPort Extremes and Expresses.
    I have purchased a Mac mini plus a FireWire 800 2TB RAID drive.
    I have a 190GB ever-increasing music library (I rip one-to-one, no compression) and a 300GB photo library.
    So, the question: will it be easier to set up OS X Server on the mini and access my iTunes library via that?
    Is it easy to do so?
    I only rip via the iMac, so the library is connected to that and shared to the laptops. How does one go about making the iMac automatically connect to the music if I transfer all the music to the server?
    The photo bit can wait depending on the answer to the music..
    many thanks
    Adrian

    I have a much larger iTunes collection (500GB, ~300k songs, a lot more photos, and several terabytes of movies). I share them out via a Linux server. We use Apple TV for music/video, and the bottleneck appears to be the Mac running iTunes in the middle. I have all of the laptops (MacBook Pros) set up with their own "instance" of iTunes that just references the files on the server. You can enable sharing in iTunes itself, but with a library this size, performance on things like loading cover art and browsing the library is not great. Note also that I haven't tried 8.x, so there may be some performance enhancements that have improved things.
    There is a lag of a second or so when accessing music/video on the server. I suspect this is due to the speed of the Mac accessing the network shares, but it's not bad, and you never notice it once the music or video starts. Some of this on the video front may be down to the codec settings I used to encode the video.
    I suspect that as long as you are doing just music, this isn't going to be an issue for you with a mini. I also suspect that you don't need OS X Server at all: you can just do a file share in OS X, give each machine a local iTunes instance pointing back at the files on the server, and have a good setup.

  • Best practice for creating large drop down menus?

    I'm attempting to transition from Photoshop to Fireworks CS4 for design and prototyping of websites.
    Right now I'm working on re-creating a nav bar design in FW. The design calls for large drop-down menus with lots of non-standard content, similar to nymag.com.
    I've got each button set up as a two-state button; however, when the large drop-down spans across to the right, the other buttons next to it appear on top of the first flyout. If I move that layer on top of the others, all the other buttons begin to act strangely. Any ideas?

    I have to agree with Pixlor and here's why:
    http://www.losingfight.com/blog/2006/08/11/the-sordid-tale-of-mm_menufw_menujs/
    and another:
    http://apptools.com/rants/menus.php
    Don't waste your time on them, you'll only end up pulling your hair out  :-)
    Nadia
    Adobe® Community Expert : Dreamweaver

  • Best Practices for loading large sets of Data

    Just a general question regarding an initial load with a large set of data.
    Does it make sense to use a materialized view to help with load times for an initial load, or do I simply let the query run for as long as it takes?
    Just looking for advice on the common approach here.
    Thanks!

    Hi GK,
    What I have normally seen is:
    1) Data is extracted from the APO planning area to an APO cube (for backup purposes), weekly or monthly, depending on how much data change you expect and how critical it is for the business. Backups are mostly monthly for DP.
    2) Data is extracted from the APO planning area directly to a DSO in the staging layer in BW, and then to BW cubes for reporting. For DP this is monthly, for SNP daily.
    You can also use option 1 that you mentioned below. In this case, the APO cube is the backup cube, while the BW cube is the one you use for reporting, and the BW cube gets its data from the APO cube.
    The benefit in this case is that we have to extract data from the planning area only once, so the planning area is available to jobs/users for more time. However, backup and reporting extraction get mixed in this case, so issues in the flow could impact both. We have used this scenario recently and have yet to see the full impact.
    Thanks - Pawan

  • Best practice for handling local external assets in Air?

    When setting up a project (as3 mobile not flex framework ideally), where and how might one place their runtime-loaded application assets?
    Especially, does anyone have example code for referencing local files such that it works across android, iOS and the local debugger/local playback?
    Thanks,
    Scott

    Just have a folder to collect your assets and reference them with a relative path. Because you're going to attach the files and folders while packaging, your app can refer to them by that same relative path.

  • Best practice for handling custom apps tracks with regards to EP upgrade?

    Hi,
    We are currently in the process of upgrading from EP 6 to EP 7.0, and in that context we need to "move" our tracks containing development of custom J2EE components.
    What we've done so far is :
    1.Create a new version 7.00 of each software component we have developed with correct EP 7 dependencies
    2. Create a new version 7.00 of our product in the SLD: Bouvet_EP
    3. Attached the new versions of the SCs to the new product version
    4. Create a new track with the SC of version 7.00 along with relevant dependencies
    My question now is: how do we get the EP 6 component source code into the new track, so that we can change the dependencies of the DCs and build them again for EP 7.0?
    Should we somehow export the code from the old track, check it in, and transport it? (And how do we then export the code from the track?)
    Regards
    Dagfinn

    Hi Dagfinn,
    This is a really interesting thread. I have not encountered this scenario until now, so I can only guess.
    1. Copy the latest SCA files generated for all the SCs in your track from one of the subdirectories of JTrans, and place those SCA files in the inbox directory of the target CMS. Check if these SCAs are available in the Check-In tab. I think this will not work, because the SC version you have defined in the SLD for WAS 7.0 is different from the one in the SLD for WAS 6.40.
    2. A second, cruder method: create SCs in the source SLD similar to the ones created in the target SLD. Create a track for these SCs in the source system, then create a track connection between the newly created track and the existing tracks. Forward all the sources to the target track, then assemble the SC, copy the SCA file, and repeat the process above.
    I don't know; possibly this may work. Notes 877029 & 790922 also give some hints on migration of the JDI server.
    Please do keep this thread updated with your progress.
    Regards
    Sidharth

  • Using Liquid, what is the best practice for handling pagination when you have more than 500 items?

    Right now I can only get the first 500 items of my webapp, and don't know how to show the rest of the items.
    IN MY PAGE:
    {module_webapps id="16734" filter="all" template="/Layouts/WebApps/Applications/dashboard-list-a.tpl" render="collection"}
    IN MY TEMPLATE LAYOUT:
    {% for item in items %}
    <tr>
    <td class="name"><a href="{{item.url}}">{{item.name}}</a></td>
    <td class="status">Application {{item['application status']}}</td>
    </tr>
    {%endfor%}

    <p><a href="{{webApp.pagination.previousPageUrl}}">Previous Page</a></p>
    <p>Current Page: {{webApp.pagination.currentPage}}</p>
    <p><a href="{{webApp.pagination.nextPageUrl}}">Next Page</a></p>

  • Best Practice for disparately sized data

    2 questions in about 20 minutes!
    We have a cache, which holds approx 80K objects, which expired after 24 hours. It's a rolling population, so the number of objects is fairly static. We're over a 64 node cluster, high units set, giving ample space. But.....the data has a wide size range, from a few bytes, to 30Mb, and everywhere in between. This causes some very hot nodes.
    Is there a best practice for handling a wide range of object size in a single cache, or can we do anything on input to spread the load more evenly?
    Or does none of this make any sense at all?
    Cheers
    A

    Hi A,
    It depends... if there is a relationship between keys and sizes, e.g. if this or that part of the key means that the size of the value will be big, then you can implement a key partitioning strategy, possibly together with key association on the key, in a way that evenly spreads the large entries across the partitions (and have enough partitions).
    Unfortunately, you would likely not get a totally even distribution across nodes, because you have a fairly small number of entries compared to the square of the number of nodes (btw, which version of Coherence are you using?)...
    Best regards,
    Robert
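    A plain-Java sketch of that idea follows. Note this is not Coherence's actual API (which would involve a custom `KeyPartitioningStrategy` and `KeyAssociation` from `com.tangosol.*`); the `big:` key prefix, the `/` group separator, and the partition count are all invented for illustration of how large entries could be scattered while small ones stay co-located:

    ```java
    public class SizeAwarePartitioning {
        static final int PARTITIONS = 257;

        // Keys whose prefix marks a known-large value are spread with a hash
        // of the full key, so each large entry lands in its own partition
        // slot. Small entries keep their association: they co-locate by the
        // group component before the '/' in the key.
        public static int partitionFor(String key) {
            if (key.startsWith("big:")) {
                // Scatter large entries across partitions individually.
                return Math.floorMod(key.hashCode() * 0x9E3779B9, PARTITIONS);
            }
            int slash = key.indexOf('/');
            String assoc = slash > 0 ? key.substring(0, slash) : key;
            return Math.floorMod(assoc.hashCode(), PARTITIONS);
        }
    }
    ```

    The trade-off is deliberate: related small entries keep the locality benefits of key association, while the 30 MB outliers no longer pile up on whichever node owns their group's partition.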
