Best practices to scale OHS (WebTier)

Hello guys.
I am looking for some best practices for configuring a WebTier/OHS installation on a big box (Oracle SPARC hardware).
Which is the better approach: 1) use just one OHS instance and tune its MPM Worker settings (MaxClients, ThreadsPerChild, etc.), or 2) run multiple OHS instances (components, in WebTier terms) on the same box, each listening on its own IP/port?
The WebTier will be used as a frontend (reverse proxy and load balancer) for a WebLogic cluster (backend).
Regards.
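
For reference, a minimal sketch of the MPM Worker directives under discussion, as they might appear in httpd.conf; the values below are illustrative starting points only, not tuned recommendations (note that MaxClients must not exceed ServerLimit x ThreadsPerChild):

<IfModule mpm_worker_module>
    StartServers          4
    ServerLimit          16
    ThreadsPerChild      25
    MaxClients          400
    MinSpareThreads      75
    MaxSpareThreads     250
    MaxRequestsPerChild   0
</IfModule>

Either way, these limits apply per OHS instance, so running multiple instances multiplies the available capacity at the cost of more processes to manage.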


Similar Messages

  • What is the best way to scale Red footage in FCX?

    Hi everyone,
    I'm currently editing a feature film in Final Cut Pro X (10.0.9).  I've edited on FCX in the past, but this is my first feature project with it.  The feature was shot on two cameras.  Camera A was a Red Epic (5K).  Camera B was a Red Scarlet (4K).  An assistant editor was in charge of getting the project set up.  I'm the co-director, so I didn't come on board the edit until a few weeks after he had everything imported, in sync, and properly structured in keyword folders per scene number.  The guy did a great job and organized the **** out of everything.  He then cut a few assemblies for us.
    Now I'm working on it on my own and it's been going great, but I have two questions which have dogged me:
    1. There's been a handful of shots I've wanted to enlarge / scale / zoom-in on.  A good example is a medium shot, which I prefer to now be a CU of the actor's face.  The way I've been doing this up until now, is by selecting the clip in the timeline, going to the Video inspector window in the top right, and changing the Scale percentage.  I usually find though that, if I go beyond 150%, I lose a lot of quality.  Now, at first I thought this was just because I was using Proxy Media (only for Playback) in my edit.  So I did a test and opened the same clip in a separate project reel.  I then scaled the clip again to 175% but de-selected 'Use Proxy media for Playback.'  So in essence, I did an 'online' to see the raw media in my timeline.  And after doing this, the scaled clip's quality definitely improved.  But not enough to give me the confidence to do this often throughout the film. 
    So my question is, is there a better / smarter way to do this?  I've been reading PDFs lately about the best practices for a Red workflow with FCX.  And I see that they recommend making tweaks to files using the 'RED Raw Settings Control.'  I confess, I've never dabbled with this.  To me, any sort of RED tweaking will happen during the mastering phase, not during the basic edit.  But would this option be better for scaling clips?  Or is scaling clips something I should hold off on for now and do in the more final stages of mastering?
    2. This second question sort of ties into my basic Project Property settings.  So first, a recap of the workflow we used to get the project into FCX:
    - Imported all RED Raw footage from both cameras.
    - Synced up audio and multi-cam clips.
    - Playback settings are 'Use proxy media.'  (No Optimized media was created, and as far as I know, Optimized / Proxy media wasn't created during Import, nor were Import folders / keyword collections created at that point.)
    Now, when I look at my Project Properties, it says my 'Video Properties' are in 5K.  And my Render Format is Apple ProRes 422 (Proxy).  So the way I read that, any straight up Export I do of the default video file is going to be 5K.  That includes any sort of Apple ProRes export.
    So is this correct?  Even though I'm viewing my reel as a Proxy for Playback, is it okay that my Project Properties video size is still 5K?  Or should that have been changed to 1080p before I did any editing on this project?  Is the 5K project setting slowing me down?  Or adding more time to my Vimeo Shares and straight Exports?
    I know I'm asking a lot of questions here, so perhaps it will be easier if someone can explain what their workflow is with 5K footage in 5K.  We're doing a small screening of the rough cut tonight for friends and I wanted to export an Apple ProRes 422, but it's like 300GB big!  So it seems the only option I have is to use the Apple Devices preset and export an H.264. 
    Thank you in advance to anyone that can provide me some advice. 

    What Luis said: set the project to 4k or smaller .... whatever is good enough for the output you require. Editing in 5k already requires you to upscale the 4k footage.
    The RED Raw settings have nothing to do with the scaling. They are meant to dial in white balance, color, saturation and other parameters.
    Dennis Widmyer wrote:
    So is this correct?  Even though I'm viewing my reel as a Proxy for Playback, is it okay that my Project Properties video size is still 5K?  Or should that have been changed to 1080p before I did any editing on this project?  Is the 5K project setting slowing me down?
    Whether the 5K project is slowing you down depends on the specs of your hardware. Do you intend the output to be 1080? Then your scaling problems are over.

  • Best practices for setting up projects

    We recently adopted Captivate for our WBT modules.
    As a former Flash and Director user, I can say it’s
    fast and does some great things. Doesn’t play so nice with
    others on different occasions, but I’m learning. This forum
    has been a great source for search and read on specific topics.
    I’m trying to understand best practices for using this
    product. We’ve had some problems with file size and
    incorporating audio and video into our projects. Fortunately, the
    forum has helped a lot with that. What I haven’t found a lot
    of information on is good or better ways to set up individual
    files, use multiple files and publish projects. We’ve decided
    to go the route of putting standalones on our Intranet. My gut says
    yuck, but for our situation I have yet to find a better way.
    My question for discussion, then, is: what are some best
    practices for setting up individual files, using multiple files and
    publishing projects? Any references or input on this would be
    appreciated.

    Hi,
    Here are some of my suggestions:
    1) Set up a style guide for all your standard slides. Eg.
    Title slide, Index slide, chapter slide, end slide, screen capture,
    non-screen capture, quizzes etc. This makes life a lot easier.
    2) Create your own buttons and captions. The standard ones
    are pretty ordinary, and it's hard to get a slick looking style
    happening with the standard captions. They are pretty easy to
    create (search for add print button to learn how to create
    buttons). There should be instructions on how to customise captions
    somewhere on this forum. Customising means that you can also use
    words, symbols, colours unique to your organisation.
    3) Google elearning providers. Most use captivate and will
    allow you to open samples or temporarily view selected modules.
    This will give you great insight on what not to do and some good
    ideas on what works well.
    4) Timings: Using the above research, I got others to
    complete the sample modules to get a feel for timings. The results
    were clear: 10 mins good, 15 mins okay, 20 mins kind of okay, 30
    mins bad, bad, bad. It's truly better to have a learner complete
    2-3 short modules in 30 mins than one big monster. The other
    benefit is that shorter files equal smaller size.
    5) Narration: It's best to narrate each slide individually
    (particularly for screen capture slides). You are more likely to
    get it right on the first take, it's easier to edit and you don't
    have to re-record the whole thing if you need to update it in
    future. To get a slicker effect, use at least two voices: one male,
    one female and use slightly different accents.
    6) Screen capture slides: If you are recording filling out
    long window-based database pages where the compulsory fields are
    marked (eg. with a red asterisk) - you don't need to show how to
    fill out every field. It's much easier for the learner (and you) to
    show how to fill out the first few fields, then fade the screen
    capture out, fade the end of the form in with the instructions on
    what to do next. This will reduce your file size. In one of my
    forms, this meant the removal of about 18 slides!
    7) Auto captions: they are verbose (eg. 'Click on Print
    Button' instead of 'Click Print'; 'Select the Print Preview item'
    instead of 'Select Print Preview'). You have to edit them.
    8) PC training syntax: Buttons and hyperlinks should normally
    be 'click'; selections from drop down boxes or file lists are
    normally 'select': Captivate sometimes mixes them up. Instructions
    should always be written in the correct order: eg. Good: Click
    'File', Select 'Print Preview'; Bad: Select 'Print Preview' from
    the 'File Menu'. Button names, hyperlinks, selections are normally
    written in bold.
    9) Instruction syntax: should always be written in an active
    voice: eg. 'Click Options to open the printer menu' instead of
    'When the Options button is clicked on, the printer menu will open'
    10) Break all modules into chapters. Frame each chapter with
    a chapter slide. It's also a good idea to show the Index page
    before each chapter slide with a progress indicator (I use an
    animated arrow to flash next to the name of the next chapter). I
    use a start button rather than a 'next' button for the start of each
    chapter. You should always have a module overview with the purpose
    of the course and a summary slide which states what was covered and
    that they have completed the module.
    11) Put a transparent click button somewhere on each slide.
    Set the properties of the click box to take the learner back to the
    start of the current chapter by pressing F2. This allows them to
    jump back to the start of their chapter at any time. You can also
    do a similar thing on the index pages which jumps them to another
    chapter.
    12) Recording video capture: best to do it at normal speed
    and be conscious of where your mouse is. Minimise your clicks. Most
    people (until they start working with captivate) are sloppy with
    their mouse and you end up with lots of unnecessary slides that
    you have to delete out. The speed will default to how you recorded
    it and this will reduce the amount of time you spend on changing
    timings.
    13) Captions: My rule of thumb is a minimum of 4 seconds - and
    longer depending on the number of words. Eg. Click 'Print Preview'
    is 4 seconds, a paragraph is longer. If you're creating knowledge
    based modules, make the timing long (eg. 2-3 minutes) and put in a
    next button so that the learner can click when they are ready.
    Also, narration means the slides will normally be slightly longer.
    14) Be creative: Captivate is desk-bound. There are some
    learners that just don't respond no matter how interactive
    Captivate can be. Incorporate non-captivate and desk free
    activities. Eg. As part of our OHS module, there is an activity
    where the learner has to print off the floor plan, and then wander
    around the floor marking on the map key items such as: fire exits;
    first aid kit, broom and mop cupboard, stationery cupboard, etc.
    Good luck!

  • Best Practice to generate UUIDs in a Cluster-Server Environment

    Hi all,
    I just need some input on best practices for generating UUIDs in the typical internet world, where there are multiple servers/JVMs involved for load balancing, traffic distribution, etc. I know Java ships with a very efficient UUID generator API.
    But still that doesn't solve the issue in multiple server environment.
    For discussion's sake, let's assume I need it to be truly unique across the setup rather than merely near-unique.
    How do you guys approach it?
    Thanks you all in advance.

    codeNombre wrote:
    Thanks jverd. So adding to the theory of "distinguishing all possible servers", in addition to a UUID on each server, would be the way to go.
    jverd wrote:
    If you're unreasonably paranoid, sure.
    codeNombre wrote:
    I think it's a common problem, and there is a big number of folks who might still be bugged about the "relative uniqueness" of UUIDs in the long run.
    jverd wrote:
    People who don't understand probability and scale, sure.
    codeNombre wrote:
    Again, coming back to my original problem in an "internet world": shouldn't a requirement like unique IDs between different servers be dealt with by generating the UUIDs at a layer before entering the multi-server setup? Where would that be? I don't have the answer.
    jverd wrote:
    Again, that is the POINT of the UUID class -- so that you can generate as many IDs as you want and still be confident that nobody anywhere in the world has ever generated any of those same IDs. However, if your requirements say UUID is not good enough, then you need to define what is, and that means having a lot of foresight as to how this system will evolve and how long it will live, AND having total control over some aspect of your servers, AND having a process that is so good that it's LESS LIKELY for a human to screw up and re-use a "unique" server ID than the probabilities I presented in my previous post.
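
    For what it's worth, a minimal Java sketch of the two options discussed above; the "server-42" identifier is purely illustrative, not something from the original thread:

    import java.util.UUID;

    public class IdDemo {
        public static void main(String[] args) {
            // Version 4 (random) UUID: collision probability is negligible
            // even when generated independently on many servers/JVMs.
            UUID random = UUID.randomUUID();

            // If policy insists on distinguishing servers anyway, a
            // name-based (version 3) UUID can fold a per-server ID in.
            UUID nameBased = UUID.nameUUIDFromBytes(
                    ("server-42:" + random).getBytes());

            System.out.println(random);
            System.out.println(nameBased);
        }
    }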

  • Upscale / Upsize / Resize - best practice in Lightroom

    Hi, I'm using LR 2 and CS4.
    Before I had Lightroom I would open a file in Bridge and in ACR I would choose the biggest size that it would interpolate to before doing an image re-size in CS2 using Bicubic interpolation to the size that I wanted.
    Today I've gone to do an image size increase but since I did the last one I have purchased OnOne Perfect Resize 7.0.
    As I have been doing re-sizing before I got the Perfect Resize I didn't think about it too much.
    Whilst the re-size ran it struck me that I may not be doing this the best way.
    Follow this logic if you will.
    Before:
    ACR > select biggest size > image re-size bicubic interpolation.
    Then with LR2
    Ctrl+E to open in PS (not using ACR to make it the biggest it can be) > image re-size bicubic interpolation.
    Now with LR2 and OnOne Perfect Resize
    Ctrl+E to open in PS > Perfect Resize.
    I feel like I might be "missing" the step of using the RAW engine to make the file as big as possible before I use OnOne.
    When I Ctrl+E I get the native image size (for the 5D MkII that is 4368x2912 px, or 14.56x9.707 inches).
    I am making a canvas 24x20"
    If instead I open from LR as a Smart Object in PS and then double-click the smart-object icon, I can click the link at the bottom and choose a size of 6144 by 4096, but when I go back to the main document it is the same size... Maybe if I saved that, then opened the saved TIFF and ran OnOne, I would end up with a "better" resized resulting document.
    I hope that makes sense!?!?!?!
    Anyway, I was wondering, with the combo of software I am using, what "best practice" for large-scale re-sizing is. I remember that stepwise re-sizing fell out of favour a while ago, but I'm wondering what is now considered the best way to do it if you have access to the software that was derived from Genuine Fractals.

    I am indeed. LR3 is a nice to have. What I use does the job I need but I can see the benefits of LR3 - just no cash for it right now.

  • Font sizing best practice in 2014

    Decided to obtain a better working knowledge of font sizing best practice, read until my eyes bled, and find I continue to have questions. So much of what is posted is old (older than a few years) and change happens fast.
    To the point; Since use of em units (or percentages) is often preferred, I now see that rems (root em) is another option, which leads me to ask:
    Should I consider using “rem” sizing? Or do the zooming capabilities of modern browsers mean I do not have to obsess over how precisely I guesstimate how viewers actually see my written content, since they have better control at their end? Is em compounding of sizes something that must be considered?
    Thanks-

    Use a mixture of what is best suited.
    For document level I use pixels as in
    /* Document level adjustments */
    html {
      font-size: 13px;
    }
    @media (min-width: 760px) {
      html { font-size: 15px; }
    }
    @media (min-width: 900px) {
      html { font-size: 17px; }
    }
    Then for the modules I use root-level ems, as in
    /* Modules will scale with document */
    header {
      font-size: 1.5rem;
    }
    footer {
      font-size: 0.75rem;
    }
    aside {
      font-size: 0.85rem;
    }
    Then the sizes that will scale with the modules
    /* Type will scale with modules */
    h1 {
      font-size: 3em;
    }
    h2 {
      font-size: 2.5em;
    }
    h3 {
      font-size: 2em;
    }
    Using this method I keep each scenario under complete control.
    A List Apart has a nice article on the subject.

  • Best practice for lazy-loading collection once but making sure it's there?

    I'm confused about the best practice for handling the 'setup' of a form, where I need a remote call to take place just once for the form, but I also need to make use of this collection for a combobox that will change when different rows in the datagrid are clicked. Easier if I just explain...
    You click on a row in a datagrid to edit an object (for this example let's say it's an "Employee")
    The form you go to needs to have a collection of "Department" objects loaded by a remote call. This collection of departments only should happen once, since it's not common for them to change. The collection of departments is used to populate a form combobox.
    You need to figure out which department of the comboBox is the selectedIndex by iterating over the departments and finding the one that matches the employee.department.id
    Individually, I know how I can do each of the above, but due to the asynch nature of Flex, I'm having trouble setting up things. Here are some issues...
    My initial thought was to just put the loading of the departments in an init() method on the employeeForm, which would run on the form's creationComplete() event. Then, when the event handler for clicking on a row fires on the grid component page, I call a setup() method on my employeeForm which figures out which selectedIndex to set on the combobox by looking at the departments.
    The problem is the resultHandler for the departments load might not have returned yet (so the departments might not be there when setup() is called), yet I can't put my business logic for determining the correct combobox selection in the departmentResultHandler, since that would mean I'd always have to fire the call to the remote server object every time, which I don't want.
    I have to be missing a simple best practice? Suggestions welcome.

    Hi there rickcr
    This is pretty rough and you'll need to do some tidying up but have a look below.
    <?xml version="1.0"?>
    <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute">
        <mx:Script>
            <![CDATA[
                import mx.controls.Alert;
                import mx.collections.ArrayCollection;
                private var comboData:ArrayCollection;
                private function setUp():void {
                    if (comboData) {
                        Alert.show('Data Is Present');
                        populateForm();
                    } else {
                        Alert.show('Data Not');
                        getData();
                    }
                }
                private function getData():void {
                    comboData = new ArrayCollection();
                    // On the result of this call, run setUp() again
                }
                private function populateForm():void {
                    // populate your form
                }
            ]]>
        </mx:Script>
        <mx:TabNavigator left="50" right="638" top="50" bottom="413" minWidth="500" minHeight="500">
            <mx:Canvas label="Tab 1" width="100%" height="100%">
            </mx:Canvas>
            <mx:Canvas label="Tab 2" width="100%" height="100%" show="setUp()">
            </mx:Canvas>
        </mx:TabNavigator>
    </mx:Application>
    I think this example is kind of showing what you want.  When you first click tab 2 there is no data.  When you click tab 2 again there is. The data for your combo is going to be stored in comboData.  When the component first gets created, comboData is not instantiated, just declared.  This allows you to say
    if (comboData)
    This means that if the variable has your data in it, you can populate the form.  At first it doesn't, so on the else condition you can call your data, and then on the result of your data coming back you can say
    comboData = new ArrayCollection(), put the data in it and call the setUp procedure again.  This time comboData is populated and exists, so it will run the populateForm method and you can decide which selected item to set.
    If this is on a bigger scale you'll want to look into creating a proper manager class to handle this, but this demo simply shows you can test to see if the data is there.
    Hope it helps and gives you some ideas.
    Andrew

  • BPC 7M SP6 - best practice for multi server setup

    Experts,
    We are considering purchasing new hardware for our BPC 7M implementation. My question is what is the recommended or best practice setup for SQL and Analysis Services? Should they be on the same server or each on a dedicated server?
    The hardware we're looking at would have 4 dual core processors and 32 GB RAM in a x64 base. Would this adequately support both services?
    Our primary application cube is just under 2GB and appset database is about 12 GB. We have over 1400 users and a concurrency count of 250 users. We'll have 5 app/web servers to handle this concurrency.
    Please let me know if I am missing information to be able to answer this question.
    Thank you,
    Hitesh

    I don't think there's really a preference on that point. As long as it's 64-bit, the servers scale well (CPU, RAM), so SQL and SSAS can be on the same server. But it is also important to look beyond CPU and RAM and make sure there are no other bottlenecks, such as storage (best practice is to split the database files across several disks, and of course to have the logs on disks that are used only for the logs). Also, the memory allocation in SQL and OLAP should be adjusted so that each has enough memory at all times.
    Another point to consider is high availability. Clustering is quite common on that tier. And you could consider having the active node for SQL on one server and the active node for OLAP (SSAS) on the other server. It costs more in SQL licensing but you get to fully utilize both servers, at the cost of degraded performance in the event of a failover.
    Bruno
    Edited by: Bruno Ranchy on Jul 3, 2010 9:13 AM

  • Best Practice for Updating children UIComponents in a Container?

    What is the best practice for updating children UIComponents in response to a Container being changed?  For instance, when a Canvas is resized, I would like to update all the children UIComponents' heights and widths so the content scales properly.
    Right now I am trying to loop over the children calling invalidateProperties(), invalidateSize(), and invalidateDisplayList() on each.  I know some of the Containers such as VBox and HBox have layout managers; is there a way to leverage something like that?
    Thanks.

    You would only do that if it makes your job easier. Generally speaking, it would not.
    When trying to sync sound and animation, I think most authors find it easiest to use graphic symbols, because you can see their animation when scrubbing the main timeline. With movieclips you only see their animation when testing.
    However, if you're going to use ActionScript to control some of your symbols, those symbols should be movieclips.

  • What is the best practice for full browser video to achieve the highest quality?

    I'd like to get your thoughts on the best way to deliver full-browser (scale to the size of the browser window) video. I'm skilled in the creation of the content but learning to make the most out of Flash CS5 and would love to hear what you would suggest.
    Most of the tutorials I can find on full browser/scalable video are for earlier versions of Flash; what is the best practice today? Best resolution/format for the video?
    If there is an Adobe guide to this I'm happy to eat humble pie if someone can redirect me to it; I'm using CS5 Production Premium.
    I like the full-screen video effect they have on the "Sounds of Pertussis" website; this is exactly what I'm trying to create, but I'm not sure what the best way to approach it is. Any hints/tips you can offer would be great!
    Thanks in advance!

    Use the little squares over your video to mask the quality. Sounds of Pertussis is not full-screen video, but rather full-stage, which is easier to work with since all the controls and other assets stay on screen. You set up your HTML file to allow full screen, then bring in your video (NetStream or FLVPlayback component) and scale it to the full size of your stage (since in this case it's basically the background). I made a quickie demo here. (The video is from a cheapo SD consumer camera, so pretty poor quality to start.)
    In AS3 it would look something like this:
    import flash.display.StageAlign;
    import flash.display.StageDisplayState;
    import flash.display.StageScaleMode;
    import flash.events.Event;
    import flash.events.MouseEvent;
    import flash.media.Video;
    import flash.net.NetConnection;
    import flash.net.NetStream;
    stage.align = StageAlign.TOP_LEFT;
    stage.scaleMode = StageScaleMode.NO_SCALE;
    // determine current stage size
    var sw:int = int(stage.stageWidth);
    var sh:int = int(stage.stageHeight);
    // load video
    var nc:NetConnection = new NetConnection();
    nc.connect(null);
    var ns:NetStream = new NetStream(nc);
    var vid:Video = new Video(656, 480); // size of video
    this.addChildAt(vid, 0);
    vid.attachNetStream(ns);
    //path to your video_file
    ns.play("content/GS.f4v"); 
    var netClient:Object = new Object();
    ns.client = netClient;
    // add listener for resizing of the stage so we can scale our assets
    stage.addEventListener(Event.RESIZE, resizeHandler);
    stage.dispatchEvent(new Event(Event.RESIZE));
    function resizeHandler(e:Event = null):void
    {
        // determine current stage size
        var sw:int = stage.stageWidth;
        var sh:int = stage.stageHeight;
        // scale video size depending on stage size
        vid.width = sw;
        vid.height = sh;
        // don't scale video smaller than a certain size
        if (vid.height < 480)
            vid.height = 480;
        if (vid.width < 656)
            vid.width = 656;
        // take the larger scale property (x or y) and match the other to it so the size stays proportional
        (vid.scaleX > vid.scaleY) ? vid.scaleY = vid.scaleX : vid.scaleX = vid.scaleY;
    }
    // add event listener for full screen button
    fullScreenStage_mc.buttonMode = true;
    fullScreenStage_mc.mouseChildren = false;
    fullScreenStage_mc.addEventListener(MouseEvent.CLICK, goFullStage, false, 0, true);
    function goFullStage(event:MouseEvent):void
    {
        //vid.fullScreenTakeOver = false; // keeps the FLVPlayback component from going full screen if you use it instead
        if (stage.displayState == StageDisplayState.NORMAL)
            stage.displayState = StageDisplayState.FULL_SCREEN;
        else
            stage.displayState = StageDisplayState.NORMAL;
    }

  • Best Practice on querying Data from Database

    Hello, and I was wondering what the preferred best practice is for querying data from a SQL database inside a JSP page. Is it using the JSTL library, or another method? Thanks

    It depends on the size of the application really.
    The "correct and preferred" approach in a large MVC app would be to have a seperate class that does all the database access, retrieving the data into java objects.
    Check out the DAO pattern: http://java.sun.com/blueprints/corej2eepatterns/Patterns/DataAccessObject.html
    You then "save" the data into request/session attributes, and forward to a jsp page to render the result.
    Most approaches recommend a separation between JSP (the view) and SQL code.
    The JSTL sql tags are provided more for "quick and dirty" code applicable in small applications, or for fast prototyping. That approach is not really robust for large scale applications.
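    A minimal sketch of that DAO approach; all class, table, and column names here are illustrative, not from the original post:

    import java.sql.*;
    import java.util.*;

    public class EmployeeDao {
        private final String url, user, password;

        public EmployeeDao(String url, String user, String password) {
            this.url = url;
            this.user = user;
            this.password = password;
        }

        // Retrieves rows into plain Java objects (here, maps) so the JSP
        // never touches JDBC or SQL directly.
        public List<Map<String, Object>> findAllEmployees() throws SQLException {
            List<Map<String, Object>> rows = new ArrayList<>();
            try (Connection c = DriverManager.getConnection(url, user, password);
                 PreparedStatement ps = c.prepareStatement("SELECT id, name FROM employee");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    Map<String, Object> row = new HashMap<>();
                    row.put("id", rs.getLong("id"));
                    row.put("name", rs.getString("name"));
                    rows.add(row);
                }
            }
            return rows;
        }
    }

    A servlet would then call findAllEmployees(), store the result in a request attribute, and forward to the JSP, which only renders it (e.g. with a JSTL forEach).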
    Cheers,
    evnafets

  • Best Practice on Updating From a DB

    Hi Everyone,
    What are some best practices surrounding getting data from an Oracle database into the cache layer when a data change event (insert, update, delete) happens? I've searched far and wide, and the best answer I can find is to use Extractor/Replicator -> JMS -> Subscriber -> cache.
    Thank you for your help.

    You're right, DCN is an interesting idea, but it's again a case where the technology works on simple Hello World things but fails to deliver in the real world.
    To me, DCN looks like an unfinished Oracle project: a lot of marketing stuff but poor features. It's good mostly for student work or test labs, not for real-world complexity.
    Two reasons:
    1. DCN has severe limitations on the complexity of joins and queries if you plan to use the query change notification feature.
    2. It puts too much pressure on the database by creating tons of events when you don't need or expect them, because it's too generic.
    Instead of DCN, create ordinary Oracle AQ queues, using a tiny SQL object type event as a payload, then create triggers and/or PL/SQL stored procedures which fill the event with all the primary keys you need and the unique ID of the object you need to extract.
    Triggers will filter out unnecessary updates, sending events only when you wish.
    If the conditions are too complex for triggers, you may create and place events either by a call from the event-source app itself or on a scheduled basis; it's entirely up to you. Also, the technique of creating object views plus an INSTEAD OF trigger on the object view works pretty well.
    And finally, implement a listener on the Coherence side which reads the event, makes the necessary extracts and assembles a Java object ready to be placed into the cache, based on the event ID and the set of primary keys in the event. After the Java object is assembled, you can place it into the cache.
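    A rough sketch of such a listener, assuming the AQ queue is exposed through standard JMS (connection and queue setup omitted) and that the cache name, message property, and extract step are all illustrative:

    import javax.jms.*;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class CacheLoaderListener implements MessageListener {
        // Illustrative cache name.
        private final NamedCache cache = CacheFactory.getCache("orders");

        public void onMessage(Message msg) {
            try {
                // The event payload carries the unique object ID (and, in a
                // real system, the set of primary keys to extract by).
                String id = msg.getStringProperty("objectId");
                Object value = extractAndAssemble(id);
                cache.put(id, value);
            } catch (JMSException e) {
                // In the fail-safe workmanager setup described below, the
                // failure would be recorded and the work re-queued.
                e.printStackTrace();
            }
        }

        private Object extractAndAssemble(String id) {
            // Placeholder for the plain-JDBC extract and object assembly
            // described in the post.
            return id;
        }
    }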
    Don't use Hibernate, TopLink or any other relational-to-object frameworks, they're too slow and add excessive and unnecessary overhead to the process, use standard Oracle database features, they're much faster and transaction-safe. Usage of these frameworks within 10g or 11g database is obsolete and caused mainly by lack of knowledge among Java developers about database features on this regard.
    In order to make the whole system fail-safe and scalable, you have to implement the listener in a fail-safe fashion, in the form of a workmanager plus slave processes spawned on the other nodes. The workmanager has to be auto fail-safe and auto scalable, so that if the node holding the workmanager instance fails due to a cache cluster member departure, a reset, or something else, another workmanager is automatically spawned on the first available node.
    Also, the workmanager should spread and synchronize the work among the slave listener processes based on the current cache cluster members, automatically re-balancing and recovering work in case of a cache member join/departure.
    Out-of-the-box Coherence has an implementation of a workmanager, but it's not fail-safe and does not provide the automatic scale-up/recovery features described above, so you have to implement your own.
    All the features I've described are implemented and happily used in complex OLTP + workflow system backed up by big Oracle RAC cluster with huge workload, processing millions transactions per day.

  • Oracle BPM Best Practices

    Hi all,
    Anybody has any information on the Oracle BPM Best Practices?
    Any guide?

    All,
    I was trying to find a developers guide for using Oracle BPM Suite (11g). I found the one at the following link; however, it looks like a pretty detailed one...
    http://download.oracle.com/docs/cd/B31017_01/integrate.1013/b28981/toc.htm
    Can someone help me find any other flavors of the developers guide? I am looking for the following...
    1. Methods of work - Best Practices for design and development of BPM process models.
    2. Naming Conventions for Process Modeling - Best Practices
    3. Coding standards for Process Modeling (J Developer)
    4. Guide with FAQ's for connecting / Publishing Process Models to the MDS Database.
    5. Deployment Standards - best practices....
    6. Infrastructure - Recommendations for Scale out deployment in Linux v/s Windows OS.
    Regards,
    Dinesh Reddy

  • IP over Infiniband network configuration best practices

    Hi EEC Team,
    A question I've been asked a few times: do we have any best practices or ideas on how best to implement the IPoIB network?
    Should it be Class B or C?
    Also, what are your thoughts in regards to the netmask? If we use /24 it doesn't give us the ability to visually separate two different racks (ie Exalogic / Exadata), whereas with netmask /23 we can do something like:
    Exalogic : 192.168.10.0
    Exadata : 192.168.11.0
    While still being on the same subnet.
    Your thoughts?
    Gavin

    I think it depends on a couple of factors, such as the following:
    a) How many racks will be connected together on the same IPoIB fabric
    b) What rack configuration do you have today, and do you foresee any expansion in the future - it is possible that you will move from a purely physical environment to a virtual environment, and you should consider the number of virtual hosts and their IP requirements when choosing a subnet mask.
    Class C (/24) with 256 IP values is a good start. However, you may want to choose a mask length of 23 or even 22 (512 or 1024 addresses; a /23 such as 192.168.10.0/23 spans 192.168.10.0 through 192.168.11.255, which is what allows the rack split above) to ensure that you have enough IPs for running the required number of WLS, OHS, and Coherence Server instances on two or more compute nodes assigned to a department for running its application.
    In general, when setting a net mask, it is always important that you consider such growth projections and possibilities.
    By the way, in my view, Exalogic and Exadata need not be in the same IP subnet, especially if you want to separate application traffic from database traffic. Of course, they can be separated by VLANs too.
    Hope this helps.
    Thanks
    Guru

  • Best Practice for stopping unsolicited e-mails that are not detected as SPAM or Marketing?

    I have roughly 40,000 mailboxes behind some IronPort appliances.   My question for all of you veteran SMTP admins is how do you handle situations where you have an individual sending multiple e-mails to one of your mailboxes?  It's not really a big enough issue to set up a content filter for the sending e-mail address, and we don't have Blocklists rolled out to everyone yet, so that's not a good fit, although I really like the IronPort quarantine with its SafeList and Blocklist features.
    What I'm looking for is do you even take the time to try to work with the sender and get them to stop sending the e-mails?
    Do you bounce the e-mails back to the sender?
    Setup a client side (Outlook) rule to just automatically delete the e-mails?
    Just looking for some "best practice" or good advice on how to handle these minor issues.  The major ones are easy but it's these little ones that turn into administrative issues.
    Thanks all, look forward to your input...
    Jason Meyer

    If the sender is mailbombing you (sending a large number of mails just to flood your mailbox) then that's clear network abuse; you ignore him and complain to his system administrator or to his upstream provider. As that's likely to take some time if it works at all, you also want a block in place as soon as you've recorded enough evidence to document the abuse.
    There used to be a vulnerability in MS Small Business Server whereby some chump would send out a mail to over 500(?) recipients including at least two SBS boxes with the bug. Each box would then send a further copy of the mail, thereby creating a loop. Swift coding was necessary to protect one's own recipients from the deluge. (And that's a sales argument in favour of appliances versus outsourcing to the Cloud, by the way.) Strictly speaking, the abuse was the fault of the SBS systems administrators rather than the original chump, so careful header parsing can sometimes be necessary.
    A more likely scenario is that you have someone who just keeps on sending the odd spam, week after week. Let's take the worst case; that it's addressed just to you, the sender's domain is reasonably fragrant (or at least impractical to block) and there's no headers or body phrases that you can add to a filter to create a general solution. You have to block the specific sender.
    At the moment I'm doing this with a simple dictionary-driven rule. If I get a complaint from any of my mailbox owners and am satisfied that nothing else will work, then I simply add the sender's address to the dictionary. The rule is already in place and only requires a dictionary update. One hallmark of this type of case is that I have no qualms about simply dropping the mail, rather than sending some sort of NDR.
    Sender-blacklist: if (mail-from-dictionary-match("blocked-senders", 1)) { drop(); }
    Now at the scale you are discussing, this solution may not work. Is there time to properly examine each case, or will your colleagues simply start slamming addresses into such a dictionary? How quickly will the list grow to the point where it starts to consume an unreasonable slice of your CPU time? Indeed, I'm not sure how far such a solution will scale even if processing capacity is not an issue.
    I need a review process to remove addresses from my dictionary, but then the same principle applies to any filter that uses specific static data. I try to keep records of every case where I include such data, and if I cannot find justification for a specific listing then I remove it at once.
