Large OPMN header vs. Small ORMI header.

We are developing an application using OC4J. We used to use just a lightweight OC4J container with ORMI. The problem then was that each time the server was restarted, we had to adjust the port number our client used. In addition, creating new OC4J containers was a bit of a strain. Lately we have switched to a complete Oracle application server running multiple OC4J instances. This is much better, as we can now just use the OC4J instance name in our URL and not worry about ports. However, the OPMN protocol has a very large overhead: sending a ping to our appserver using ORMI resulted in a 4 KB message, whereas the same ping using OPMN caused a 182 KB message! This is a problem for us, as some of our clients have very low bandwidth.
I won’t bother to post the OPMN message as it is over 5500 lines, but it starts like this:
MESSAGE
<!DOCTYPE pdml>
<pdml version='9.0.4' name='opmn' host='myServer' id='8803' timestamp='1179144209761'>
<statistics>
Now, for every line in the message, a long series of XML strings is parsed. For instance, the same XML is repeated for every OC4J instance, even though our URL already defines which instance to use. CPU time, uptime, ports, hosts, protocols, states, heap, memory, PIDs, processes, message values, connect types, and so on are all wrapped in this XML.
</statistics>
</pdml>
MESSAGE AGAIN
There is so much information wrapped in this message that it clogs the line, and that is a major problem for us.
Is there a way to turn off all this overhead for the OPMN protocol? Can we easily use ORMI with a fixed port on an OC4J instance on an application server?

Geir,
What specific administration, stability, logging, or performance requirements make it necessary for you to use OPMN?
If you conclude it is not necessary, you can still use the standalone OC4J container. As you may have already learned, it is through the rmi.xml file that you can set the port numbers for both the ORMI and ORMIS services. Or, as a blog suggests, you can override those ports (for versions above 10.1.3) by passing them as arguments on the command line.
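As a rough sketch of the rmi.xml approach: the element and attribute names below are from memory of standalone OC4J 10.1.3 (where 23791 is the usual default ORMI port), so verify them against the rmi.xml shipped with your own install before relying on them.

```xml
<!-- j2ee/home/config/rmi.xml (standalone OC4J) -- a sketch, not verbatim -->
<rmi-server port="23791" host="[ALL]">
    <!-- Pinning the port here means clients never have to change
         their ormi:// URLs after a container restart -->
    <log>
        <file path="../log/rmi.log" />
    </log>
</rmi-server>
```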
Regarding your notes on the OPMN message sizes: the overhead is probably due to the administrative information the protocol has to carry, since OPMN is an administration protocol, not a transport one.
And to answer your last question: yes, you can. Oracle presets a range of ports that are managed by the OPMN process; nevertheless, you can reserve specific port numbers from that range (by editing opmn.xml) and assign them to your OC4J instances as your needs dictate.
Rick B.
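A sketch of the opmn.xml edit Rick describes: collapsing a port range to a single value effectively fixes that port for the instance. The element names below follow the OracleAS 10g opmn.xml layout as I remember it; treat them as an assumption and check your own file.

```xml
<!-- opmn.xml excerpt (sketch): inside the <process-type> for one OC4J instance -->
<process-type id="home" module-id="OC4J" status="enabled">
  <!-- A range of one value pins the port instead of letting OPMN pick -->
  <port id="default-web-site" range="12501-12501" protocol="ajp"/>
  <port id="rmi" range="12401-12401"/>
  <port id="jms" range="12601-12601"/>
</process-type>
```

Clients can then target the ORMI port (12401 in this hypothetical) directly, as with a standalone container.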

Similar Messages

  • How do I divide a large catalogue into two smaller one on the same computer?

    How can I divide a large catalogue into two smaller ones on the same computer?  Can I just create a new catalogue and move files and folders from the old one to the new one?  I am using PSE 12 in Windows 7.

    A quick update....
    I copied the folder in ~Library/Mail/V2/Mailboxes that contains all of my local mailboxes over to the same location in the new account. When I go into mail, the entire file structure is there, however it does not let me view (or search) any of the messages. The messages can be accessed through the Finder, though.
    I tried to "Rebuild" the local mailboxes, but it didn't seem to do anything. Any advice would be appreciated.
    JEG

  • Pros and cons between the large log buffer and small log buffer?

What are the pros and cons of a large log buffer versus a small log buffer?
Many people suggest that a small log buffer (1-3 MB) is better because we can avoid wait events for users. But I think the bigger one also has an advantage: it reduces redo log file I/O.
What is the optimal size of the log buffer? Should I consider OLTP vs. DSS as well?

    Hi,
    It's interesting to note that some very large shops find that a > 10m log buffer provides better throughput. Also, check-out this new world-record benchmark, with a 60m log_buffer. The TPC notes that they chose it based on the cpu_count:
log_buffer = 67108864   # 1048576 x cpu_count
http://www.dba-oracle.com/t_tpc_ibm_oracle_benchmark_terabyte.htm

  • Few large nodes or many small nodes

    Hi guys,
    In general, what option is better to implement a RAC system; few large nodes or many small nodes?
    Say we have a system with 4 nodes of 4 CPU and a system with 8 nodes of 2 CPU. Will there be a performance difference?
    I understand there won't be a clear cut answer for this. But I'd like to learn from your experiences.
    Regards,

    Hi,
The worst case in terms of block transfer is 3-way: no matter whether you have 100 nodes, a single block will be accessed in at most 3 hops. But there are other factors to consider.
For example, if you're using FC for SAN connectivity, I'd assume connecting 4 servers could cost more than connecting 2 servers.
On the load side: say your load is 80 (whatever units) and equally distributed among 4 servers; each server carries 20 units. If one goes down, or is shut down for a rolling patch, its load is distributed among the other 3, so each will carry 20 + 20/3 ≈ 26.7 units. Imagine the same scenario with only two servers: each carries 40, and if one goes down the remaining server has to carry the entire load. So you have to do some capacity planning in terms of CPU to decide whether 4 nodes or 2 nodes is better.
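The failover arithmetic above can be sketched as a few lines of Java (the load figures are the hypothetical units from the reply, not measurements):

```java
// FailoverLoad.java -- per-node load before and after nodes fail,
// illustrating the capacity-planning arithmetic in the reply above.
public class FailoverLoad {
    /** Load carried by each surviving node after `failed` nodes drop out. */
    static double perNodeLoad(double totalLoad, int nodes, int failed) {
        int surviving = nodes - failed;
        if (surviving <= 0) throw new IllegalArgumentException("no survivors");
        return totalLoad / surviving;
    }

    public static void main(String[] args) {
        double total = 80.0;  // arbitrary load units, as in the reply
        System.out.printf("4 nodes, all up: %.1f per node%n", perNodeLoad(total, 4, 0));
        System.out.printf("4 nodes, 1 down: %.1f per node%n", perNodeLoad(total, 4, 1));
        System.out.printf("2 nodes, all up: %.1f per node%n", perNodeLoad(total, 2, 0));
        System.out.printf("2 nodes, 1 down: %.1f per node%n", perNodeLoad(total, 2, 1));
    }
}
```

The point of the exercise: the more nodes you have, the smaller the load spike each survivor absorbs when one node leaves the cluster.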

  • How do I Make a large song into 2 smaller songs?

    How do I Make a large song into 2 smaller songs?

    the following user tip might be of some help with that:
    b noir: Chopping a track into smaller pieces using iTunes

  • 1 large lun or many smaller luns

    Hi,
    I'm running Oracle 10g/11g. I'm NOT using ASM (that isn't an option right now). My storage is IBM DS3500 with IBM SVC in front of it.
    My question is, is it better to have 1 large lun or many smaller luns for the database (assuming its the same number of spindles in both cases)?
    Are there any limitations with queue depth..etc. I need to worry about with the 1 large lun?
    Any info would be greatly appreciated.
    Thanks!

    Hi,
You opened this thread on the ASM forum, yet you are not using ASM, which makes it difficult to answer your questions.
    Well...
First you need to consult the manuals/whitepapers/technotes of the filesystem you will use, to check the recommendations for databases on that filesystem.
E.g., using JFS2 on AIX you can enable CIO.
    Another point:
Creating large LUNs can be useful or not; it all depends on the characteristics of your environment.
E.g., I believe it is not good to place 2 databases with different access/throughput characteristics on the same filesystem. One database can cause performance issues for the other if they share the same LUN.
I particularly dislike large LUNs for an environment that will store several databases. I usually use large LUNs for large databases, and even then without sharing the area with other databases.
    My thoughts {message:id=9676881} although it is valid for ASM.
    I recommend you read it:
    http://docs.oracle.com/cd/E11882_01/server.112/e16638/iodesign.htm#PFGRF015
    Regards,
    Levi Pereira

  • PLL Library Size - Few Larger Libraries Or More Smaller Libraries

    Hi
I am interested in people's thoughts on whether it is better to have fewer larger libraries or more smaller libraries. If you do break into smaller PLL libraries, should the grouping be such that some forms will not use modules in a library, i.e., do not have to attach it to all forms? For common modules that all forms require access to, do you gain anything by having many little libraries rather than one larger library?
Is it correct that PLL libraries are loaded into memory at run time, when the form is loaded and the library is attached?
What are the issues to consider here?
    Thanks

    Hi Till,
    My honest opinion...do not merge the libraries. Switch between them using iPhoto Library manager and leave it at that.
    A 22 gig library is way too big to run efficiently.
    Lori

  • Query performance - A single large document VS multiple small documents

    Hi all,
    What are the performance trade offs when using a single large document VS multiple small documents ?
    I want to store xml snippets with similar structure in a container. Is there any benefit while querying, if I use a single large document to store all these snippets against adding each snippet as a different document. Would it degrade the performance when adding an xml snippet each time by modifying an existing document?
    How could we decide whether to use a single large document VS multiple small documents?
    Thanks,
    Anoop

    Hello Anoop,
    In case you wanted to get a comparison between the storage types for containers, wholedoc and node, let us know.
What are the performance trade offs when using a single large document VS multiple small documents?
It depends on what is more important to you: performance when creating the container and inserting the document(s), or performance when retrieving data.
    For querying the best option is to go with smaller documents, as node indexes would help in improving query performance.
    For inserting initial data, you can construct your large document composed of smaller xml snippets and insert the document as a whole.
If you further want to modify this document, changing its structure implies performance penalties; in that case it is better to store the xml snippets as separate documents.
    Overall, I see no point in using a large document that will hold all of your xml snippets, so I strongly recommend going with multiple smaller documents.
    Regards,
    Andrei Costache
    Oracle Support Services

  • Splitting a large video file into smaller segments/clips using FCE

Is there a way to split a large FCE / iMovie captured event into smaller segments/clips?
I am attempting to convert my home videos (analog 8mm) into iMovie clips.
I used an ADS Pyro that converts my composite signal (RCA: yellow + red/white) to a digital signal via FireWire, using iMovie capture. However, this creates a single 2-hour file that takes approx. 26 GB because it is now DV information.
I would like to (a) break this into smaller segments/clips, then (b) change the date on these.

    afernandes wrote:
    Thanks Michel,
    I will try this out.
    Do you know if this will create a new file(s) ?
    http://discussions.apple.com/thread.jspa?threadID=2460925&tstart=0
What I want to do is break up my 2-hour video into smaller chunks, then burn the good chunks as raw footage (AVI/MOV) onto backup data DVDs, then export all the chunks into compressed files (MPEG-4?) and save these on another data DVD.
Avoid compressing. Save as a QuickTime movie.
    Michel Boissonneault

  • Compressing a large file into a small file.

So I have a pretty large file that I am trying to make very small with good quality. The file before exporting is about 1 GB; I need to make it 100 MB. Right now I've tried compressing it with the H.264 compression type, and I am having to go as low as 300 kbit/s. I use AAC 48 for the audio. It is just way too pixelated to submit something like this. But I guess I could make the actual video a smaller size, something like 720x480, and just letterbox it to keep it widescreen? Any hints on a good way to make this 21-minute video around 100 MB?

There are three ways to decrease the file size of a video.
1. Reduce the image size. For example, changing a 720x480 DV image to 320x240 decreases the size by a factor of about 4.5.
2. Reduce the frame rate. For example, changing from 30 fps to 15 fps decreases the size by a factor of 2.
3. Increase the compression / change the codec. This is the black-magic part of online material. Only you can decide what's good enough.
    x
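Before picking a resolution or codec, it helps to work out what average bitrate even fits the target size. A minimal sketch of that arithmetic for the poster's numbers (21 minutes into ~100 MB; the 128 kbit/s audio budget is an assumed example, not from the thread):

```java
// BitrateBudget.java -- average bitrate that fits a target file size.
public class BitrateBudget {
    /** Average total bitrate (kbit/s) that fits targetMB in durationSeconds. */
    static double kbpsFor(double targetMB, double durationSeconds) {
        double bits = targetMB * 1024 * 1024 * 8;   // MB -> bits
        return bits / durationSeconds / 1000.0;     // bits/s -> kbit/s
    }

    public static void main(String[] args) {
        double total = kbpsFor(100, 21 * 60);  // roughly 666 kbit/s total
        double audio = 128;                    // hypothetical audio budget
        System.out.printf("total %.0f kbit/s%n", total);
        System.out.printf("video %.0f kbit/s%n", total - audio);
    }
}
```

With only ~500-550 kbit/s left for video, the reply's advice holds: at 720x480 that budget is very tight, so shrinking the frame or frame rate first is usually the better trade.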

  • One large message or multiple small ones?

    Hi:
    I'm working in an application using weblogic 8.1.
We receive in a JMS queue a large message containing thousands of transactions to be processed.
Which would be more efficient: processing all the transactions in the message with one MDB, or splitting the large message into smaller ones and processing them "in parallel" with multiple MDBs, somehow joining all the results to return as the overall result (hiding the split/join from the client)?
Keeping all the execution in the same transaction would be desirable, but we could create multiple independent transactions to process each small message; the result, however, must be consistent as to which transactions were processed successfully and which weren't.
If splitting is the option: are there any tips on doing this?
    thanks in advance.
    Guillermo.

    Hi Guillermo,
    I would recommend reading the book
    Enterprise Integration Patterns by Gregor Hohpe
    http://www.eaipatterns.com/
This book describes the split and join process very well.
If the messaging overhead is small compared to the total job time, it would be a good idea to split
(if the tasks also run well in parallel; get out of the DB quickly if you run against one DB).
    Are you doing transactions against a DB?
    Are you using several computers to distribute the work or a multiprocessor computer?
    You can also test performance and try to split to 1, 5 or 10 transactions/msg.
When you split, keep track of the total number of transactions per job (= N);
when the number of successful transactions plus the number of failed transactions equals N, you are finished.
    You could send the result per msg to a success or failure channel.
    Regards,
    Magnus Strand
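The counting scheme Magnus describes can be sketched as a small tracker class. This is a minimal illustration with no real JMS wiring (in practice the counters would live in a database or a stateful aggregator, not in memory); all names here are hypothetical:

```java
import java.util.concurrent.atomic.AtomicInteger;

// SplitJoinTracker.java -- sketch of the aggregator bookkeeping:
// a job of N split messages is finished when successes + failures == N.
public class SplitJoinTracker {
    private final int total;  // N: number of split messages in the job
    private final AtomicInteger success = new AtomicInteger();
    private final AtomicInteger failure = new AtomicInteger();

    public SplitJoinTracker(int total) { this.total = total; }

    public void recordSuccess() { success.incrementAndGet(); }
    public void recordFailure() { failure.incrementAndGet(); }

    /** True once every split message has reported a result. */
    public boolean finished() {
        return success.get() + failure.get() == total;
    }

    /** True only when the job is finished with no failed transactions. */
    public boolean allSucceeded() {
        return finished() && failure.get() == 0;
    }

    public static void main(String[] args) {
        SplitJoinTracker job = new SplitJoinTracker(3);
        job.recordSuccess();
        job.recordFailure();
        System.out.println(job.finished());      // not yet: 2 of 3 reported
        job.recordSuccess();
        System.out.println(job.finished());
        System.out.println(job.allSucceeded());  // one transaction failed
    }
}
```

Results per message can then be routed to a success or failure channel, as the reply suggests, with `finished()` deciding when to report the consolidated outcome to the client.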

  • Compressing large file into several small files

What can I use to compress a 5 GB file into several smaller files that can easily be rejoined at a later date?
    thanks

    Hi, Simon.
    Actually, what it sounds like you want to do is take a large file and break it up into several compressed files that can later be rejoined.
    Two ideas for you:
1. Put a copy of the file in a folder of its own, then create a disk image of that folder. You can then create a segmented disk image using the segment verb of the hdiutil command in Terminal. Disk Utility provides a graphical user interface (GUI) to some of the functions in hdiutil, but unfortunately not the segment verb, so you have to use hdiutil in Terminal to segment a disk image.
2. If you have StuffIt Deluxe, you can create a segmented archive. This takes one large StuffIt archive and breaks it into smaller segments of a size you define.
2.1. You first make a StuffIt archive of the large file, then use StuffIt's Segment function to break this into segments.
2.2. Copying all the segments back to your hard drive and unstuffing the first segment (which is readily identifiable) will unpack all the segments and recreate the original, large file.
I'm not sure if StuffIt Standard Edition supports creating segmented archives, but I know StuffIt Deluxe does, as I have that product.
    Good luck!
    Dr. Smoke
    Author: Troubleshooting Mac® OS X

  • Large servlet or several small ones??

I am building a servlet for a web application, and it is becoming quite large. I am beginning to wonder about the resources this will use on the server.
Does anyone have any views on whether it would be better to split the servlet into several smaller ones or keep one large do-it-all servlet?
    cheers
    chris

I read these questions and answers, and I'm sure small servlets are better programming practice, but my question is: what is faster?
I mean, one big servlet needs time to load and initialize, but only the first time; many small servlets, or a framework, need time to instantiate objects on every call.
Am I wrong?

  • Dual monitors: window size too large when opens in smaller monitor.

    I use Windows 7 with two monitors. When I open Firefox directly from my desktop, it opens on the larger monitor, and the window size is just fine. But if I click a link from an email, Firefox opens in the second, smaller monitor, and the window is too large for that screen.
    Please help me figure out how to either (a) make Firefox open with a smaller window; or (b) make Firefox open on the primary, larger monitor.

Oh, forgot to say: Mavericks 10.9.5, running on the Mac Pro.

  • Larger image changed, but smaller thumbnails still show old image

    Hi,
    I want to change the image for my podcast. I added the new image to my server and added an image tag to my rss file pointing to it. Now in the iTunes store, the larger thumbnail shows the correct image, but the smaller thumbnails in the store show the old one. Do I just need to wait longer, or is there something else I have to do?
    Feed = http://feeds.feedburner.com/walkingdeadcast
    Thanks!
    Jason

    For some reason the thumbnails in the Search results take longer to update than the image on the Store page. They're both drawn from the same tag in your feed, so there's nothing you can do except wait: it will probably sort itself out in a day or two.
    Edit: actually I just searched on your podcast title and the search results showed the same image as the main Store page.
    Another edit: I assume you are talking about the Search results, not what you see when you subscribe? Those images are embedded in the media files, not referenced in the feed. I just downloaded one of the episodes on subscription and I see it has a different image. In order to change this you will have to embed new images in every episode and re-upload them - see here.
