Best Practices - Update Explorer Properties of BW Objects

What are some best practices for using the process chain type "Update Explorer Properties of BW Objects"?
We have the option of updating Conversion Indexes, Hierarchy Indexes, Authorization Indexes, and RKF/CKF Indexes.
When should we run each update process?
Here are some options we're considering:
Conversion Indexes - Run this within InfoCube load process chains for Explorer objects that contain currency conversions.
Hierarchy Indexes - When would this need to be run? Does this need to be run for PartProviders and/or Snapshots? Do ACRs handle this update? Should this be run within InfoCube load chains, or after ACRs?
Authorization Indexes - We plan to run this a couple of times a day for all Explorer objects.
RKF/CKF Indexes - Does this need to run after InfoCube loads? With PartProvider and/or Snapshot indexes? After transports have completed?
Thanks,
Cote

Does anyone use Explorer and this process type productively in their process chains?

Similar Messages

  • Best Practice Question - Business Logic in Value Objects?

    Just wondering what people's thoughts are on best practices for setting properties of Value Objects.
    For instance, I have several getter/setters in one of my Value Objects with logic in the setter that uses the value to set values of other properties.
    For example, I have a Value Object that has the following properties:
    category (of type Category, which is another Value Object with properties "name" and "id")
    categoryId (of type int)
    categoryUpdated (of type Boolean)
    I have a collection of Category objects in the Model. When I set the categoryId of this class, I set "categoryUpdated" to true and dispatch a CairngormEvent that finds the "category" with the specified "categoryId" and sets the "category" property to that item.
    So what is the best practice? To simply make the "categoryId" a public variable, and create a new Event/Command to perform all this logic? Or is it ok to do it all in the Value Object setter?
    Thanks.

    Hi Eric,
    I can't speak for best practices, but the only logic I've ever added to a VO has been getters: an example was a set of getters on an airline flight VO to get overall flight departure/arrival times/cities from an array of flight segments in a property of the VO.
    I feel uneasy (in the nicest possible way) about your VO for two reasons: firstly, it has a strong dependency on bits of the Cairngorm framework to look up the category (and VOs normally don't need to depend on anything); secondly, the intent of Commands in Cairngorm is more to represent user gestures than to wire up VO properties (I often see people shoehorning stuff into Commands that might be better off as plain old business utility classes). I would rather see an UpdateCategoryCommand (that's what the user is trying to do?) that updates categoryId and hits a delegate to populate the category property.
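    For illustration, that separation might look roughly like this (sketched in Java rather than ActionScript; all names here are hypothetical):
    // A dumb value object: plain state, no framework dependencies
    public class RequestVO {
        public int categoryId;
        public Category category;
    }

    public class Category {
        public final int id;
        public final String name;
        public Category(int id, String name) { this.id = id; this.name = name; }
    }

    // The user gesture lives in a command; the category lookup happens here, not in a VO setter
    public class UpdateCategoryCommand {
        private final java.util.Map<Integer, Category> categoriesById; // stand-in for the model's collection

        public UpdateCategoryCommand(java.util.Map<Integer, Category> categoriesById) {
            this.categoriesById = categoriesById;
        }

        public void execute(RequestVO vo, int newCategoryId) {
            vo.categoryId = newCategoryId;
            vo.category = categoriesById.get(newCategoryId);
        }
    }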
    That said you might have a very good reason for doing this. Could you tell us where the code is that's setting categoryId, and if anything is using categoryUpdated?
    Cheers,
    Robin

  • Best practice using regular properties

    What is considered best practice when it comes to using properties? For example, hostname and port number when connecting to an external resource.
    Should property files be used? Is this considered a bad practice? Should deployment descriptors be used - if so, how does one update these properties when they change?
    Are there any utility classes that make it easy to access these kinds of properties?
    ---- Trond

    Depends on what properties. Many properties like hostname etc. can be retrieved using different API calls - such as the request object or other portal-specific objects.
    Properties that your applications need, and that might change, can be stored in a properties file. I use a singleton to retrieve them, and have a reload method on the singleton that I can call if I need to reload the properties once the server has started.
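    A minimal sketch of that singleton approach (the file name and keys are hypothetical):
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Properties;

    public final class AppConfig {
        private static final AppConfig INSTANCE = new AppConfig();
        private Properties props;

        private AppConfig() {
            reload();
        }

        public static AppConfig getInstance() {
            return INSTANCE;
        }

        // Re-read the file without restarting the server
        public synchronized void reload() {
            Properties fresh = new Properties();
            try (InputStream in = AppConfig.class.getResourceAsStream("/config.properties")) {
                if (in != null) {
                    fresh.load(in);
                }
            } catch (IOException e) {
                throw new IllegalStateException("Could not load config.properties", e);
            }
            props = fresh;
        }

        public synchronized String get(String key, String defaultValue) {
            return props.getProperty(key, defaultValue);
        }
    }
    Callers would then use something like AppConfig.getInstance().get("hostname", "localhost"), and an admin hook could call reload() when the file changes.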
    Kunal

  • Best practice - updating figure numbers in a file, possibly to sub-sub-chapters

    Hi,
    I'm a newbie trying to unlearn my InDesign mindset to work in FrameMaker. What is best practice for producing figure numbers to accompany diagrams throughout a document? A quick CTRL+F in the FrameMaker 12 Help book doesn't seem to point me in a particular direction. Do diagrams need to be inserted into a table, where there is a cell for the image and a cell for the figure details in another? I've read that I should use a letter and colon in the tag to keep it separate from other things that update, e.g. F: (then figure number descriptor). Is there anything else to be aware of, such as when resetting counts for chapters etc.?
    Some details:
    FrameMaker 12.
    There are currently 116 chapters (aviation subjects) to make.
    Each of these chapters will be its own book in pdf form, some of these chapters run to over 1000 pages.
    Figure number ideally takes the form: "Figure (a number from one of the 1-116 chapters used) - figure number", e.g. "Figure 34 - 6" would be the 6th image in the book 'chapter 34'.
    The figure number has to cross reference to explaining text, possibly a few pages away.
    These figures are required to update as content is added or removed.
    The (aviation) chapter is an individual book.
    H1 is the equivalent of the sub-chapter.
    H2 is the equivalent of the sub-sub-chapter.
    H3 is used in the body copy styling, but is not a required detail of the figure number.
    I'm thinking of making sub-chapters in to individual files. These will be more manageable on their own. They will then be combined in the correct order to form the book for one of these (1 of 116) subject chapters.
    Am I on the right track?
    Many thanks.
    Gary

    Hi,
    Many thanks for the link you provided. I have implemented your recommendation into my file. I have also read somewhere about sizing anchored frames to an imported graphic using 'esc' + 'm' + 'p'.
    What confuses me, coming from InDesign is being able to import these graphics at the size they were made ( WxH in mm at 300ppi) and keeping them anchored to a point in the text flow.
    I currently have 1 and 2 column master pages built. When I bring in a graphic my process is:
    insert a single cell table on the next space after the current text > drop the title below the cell > give the title a 'figure' format. When I import a graphic, it tries to fit it in the current 2 column layout with only part of it showing in a box which is half the width of a single column!
    A current example: page 1 (2 column page) the text flows for 1.5 columns. At the end of the text I inserted a single cell table, then imported an image into the cell.
    Page 2 (2 column page) has the last line of page 1's text in the top left column.
    Page 3 (2 column page) has the last 3 words of page 1 in its top left column. The right column has the table in it with part of the image showing. The image has also been distorted, like it's trying to fit. These columns are 14 cm wide; the cell is 2 cm wide at this point. I have tried to give cells for images 'wider' attributes using the object style designer, but with no luck.
    Ideally I'm trying to make 2 versions: 1) an anchored frame that fits in a 1 column width on a 2 column width page; 2) an anchored frame that fits the full width of my landscape pages (minus some border dimension), with this full width frame created on a new following page. I'd like to be able to drop in images to suit these different frames with as much automation as possible.
    I notice many tutorials tell you how to do a given area of the program, but I haven't been able to find one that discusses workflow order. Do you import all text first, then add empty graphic boxes and/or tables throughout, and then import images? I'm importing text from Word, but the images are separate, having been vectored or cleaned up in Photoshop - they won't be imported from the same Word file.
    many thanks

  • OBIEE Best Practice Data Model/Repository Design for Objectives/Targets

    Hello World!
    We are faced with a design question that has become somewhat difficult and we need some help. We want to be able to compare side-by-side actual measures with their corresponding objectives/targets. Sounds simple. But, our objectives are static (not able to be aggregated) with multi-dimensionality and multi-levels. We need some best practice tips on how to design our data model and repository properly so that we can see the objective/target for a measure regardless of the dimensions that are used in the criteria and regardless of the level.
    Here are some more details:
    Example of existing objective table:
    Dimension1   Dimension2   Dimension3   Obj1   Obj2   Quarter
    NULL         NULL         NULL         .99    1.8    1Q13
    DIM1VAL1     NULL         NULL         .99    2.4    1Q13
    DIM1VAL1     DIM2VAL1     NULL         .98    2.41   1Q13
    DIM1VAL1     DIM2VAL1     DIM3VAL1     .97    2.3    1Q13
    DIM1VAL1     NULL         DIM3VAL1     .96    1.9    1Q13
    NULL         DIM2VAL1     NULL         .97    2.2    1Q13
    NULL         DIM2VAL1     DIM3VAL1     .95    2.0    1Q13
    NULL         NULL         DIM3VAL1     .94    3.1    1Q13
    - Right now we have quarterly objectives set using 3 different dimensions. So, if an author were to add one or more (or zero) dimensions to their criteria for a given measure they could get back a different objective. They could add Dimension1 and get 99%. They could add Dimension1 and Dimension2 and get 98%. They could add all three dimensions and get 97%. They could add zero dimensions (highest grain) and get 99%. Using our existing structure if we were to add a new dimension to the mix the possible combinations would grow dramatically. (Not flexible)
    - We would like our final solution to be flexible enough so that we could view objectives with altogether different dimensions and possibly get different objectives.
    - We currently have 3 fact tables with 3+ conformed dimension tables and a few unique dimension tables.
    Could anyone share a similar situation where you have implemented a data model structure with the proper repository joins to handle showing side-by-side objectives/targets where the objectives were static and could be displayed at differing levels with flexible dimensions as described?
    Any help would be greatly appreciated.

    Hi, yes, this suggestion is nice. First configure the sensors (activity or variable), then configure the sensor action as a JMS Topic, which will in turn insert the data into a DB. Alternatively, when you configure the sensor action as a DB, the data goes to the Oracle Reports schema. Is there any chance of altering the DB - I mean, any chance, by changing config files, that the data doesn't go to that Reports schema and instead goes to a custom schema created by a user? I don't know if it can be done. My problem is that when I configure the JMS Topic for sensor actions, I see blank data coming in; for some reason the data is not getting posted. I have used an ESB and a routing service based on the schema which I am monitoring. Can anyone help?

  • Best practice updating plots ?

    Folks -
    I'm looking for best practice advice, or better yet a pointer to the FAQ. What's the one true LabVIEW way to keep a stacked plot of a waveform chart updated? I've got a main loop consisting of a flat sequence, the first two frames of which may be updating either of two 1-D arrays. There is a time axis common to both. I need both plotted soon (1-2 sec) after the update happens. Right now, the three arrays are just shared variables, written in the subVIs, while the plot is outside the flat sequence, inside the until-stop loop. I put the three together into a waveform, but I'm not at all sure this is good practice. Advice?
    thanks
    Alex
    Attachments:
    OUTLINE-PPS-V2.vi 74 KB

    Thanks 10^6. I am confused, but have a hardware blockage to events (6133), and can't find coherent guidance from NI on the one true path to LabVIEW goodness, only asking stupid questions.
    Whatever you are doing to the chart data does NOT create multiple traces; you create a single waveform, writing the y data twice, with the lower set simply overwriting the data wired higher up.
    Ahh, thanks. I tried to find documents on how to build a waveform of multiple plots; the NI examples I've found don't have time axes. I can't find one summary document about plots, so I have to try things until they work. XY charts did, but could not reliably update. Trying to ...
    Never hide an event structure inside a long sequence. If you were to press the "commit" button during a time when the code is elsewhere, you might lock up the front panel forever.
    I was afraid of that; I intended to disable the commit button except when needed (second frame).
    Among my problems are: in a 2 minute period, I wait for a switch signal external to me. That must start a sequence of waiting for another switch to close and checking a file frequently for updates. If the operator likes that new file, then commit (copy to another SV array). The second signal is my cue to set up and arm a few USB and PCI digitizers. Then about 30 sec of things happen in a sequence. If I could get events out of the 6133, a state machine would be possible, but NI says no, you've gotta poll.
    What is the point of all these network shared variables?
    Bind variables from subVIs to indicators.
    What else needs to access those? Any other remote code?
    A few are actual network SVs from elsewhere, or local copies I make and then need on indicators.
    In any case, I recommend you rearchitect this entire thing as a plain state machine. One outer loop. One case structure, with each frame becoming a state of the single case structure. Now you only need one instance of each variable.
    Yeah, that was the plan until I found out that the 6133 doesn't support events. Need to try harder to re-arrange.
    thanks again.

  • Best Practice for caching global list of objects

    Here's my situation, (I'm guessing this is mostly a question about cache synchronization):
    I have a database with several tables that contain between 10-50 rows of information. The values in these tables CAN be added/edited/deleted, but this happens VERY RARELY. I have to retrieve a list of these objects VERY FREQUENTLY (sometimes all, sometimes with a simple filter) throughout the application.
    What I would like to do is to load these up at startup time and then only query the cache from then on out, managing the cache manually when necessary.
    My questions are:
    What's the best way to guarantee that I can load a list of objects into the cache and always have them there?
    In the above scenario, would I only need to synchronize the cache on add and delete? Would edits be handled automatically?
    Is it better to ditch this approach and to just cache them myself (this doesn't sound great for deploying in a cluster)?
    Ideas?

    The cache synch feature as it exists today is kind of an "all or nothing" thing. You either synch everything in your app, or nothing in your app. There isn't really any mechanism within TopLink cache synch you can exploit for more app specific cache synch.
    Keeping in mind that I haven't spent much time looking at your app and use cases, I still think that the helper class is the way to go, because it sounds like your need for refreshing is rather infrequent and very specific. I would just make use of JMS and have your app send updates.
    I.e., in some node in the cluster:
    Vector changed = new Vector();
    UnitOfWork uow = session.acquireUnitOfWork();
    MyObject mo = (MyObject) uow.registerObject(someObject);
    // user updates mo in a GUI
    changed.addElement(mo);
    uow.commit();
    MoHelper.broadcastChange(changed);
    Then in MoHelper:
    public void broadcastChange(Vector changed) {
        Hashtable classnameAndIds = new Hashtable();
        for (Enumeration e = changed.elements(); e.hasMoreElements();) {
            MyObject i = (MyObject) e.nextElement();
            String classname = i.getClass().getName();
            Vector ids = (Vector) classnameAndIds.get(classname);
            if (ids == null) {
                ids = new Vector();
                classnameAndIds.put(classname, ids);
            }
            ids.add(i.getId());
        }
        // pseudocode: in real JMS you would wrap the Hashtable in an ObjectMessage and publish it
        jmsTopic.send(classnameAndIds);
    }
    Then in each node in the cluster you have a listener to the topic/queue:
    public void processJMSMessage(Hashtable classnameAndIds) throws ClassNotFoundException {
        for (Enumeration e = classnameAndIds.keys(); e.hasMoreElements();) {
            String classname = (String) e.nextElement();
            Vector idsVector = (Vector) classnameAndIds.get(classname);
            Class c = Class.forName(classname);
            // re-read the changed objects, refreshing any stale copies in the identity map
            ReadAllQuery raq = new ReadAllQuery(c);
            raq.refreshIdentityMapResult();
            ExpressionBuilder b = new ExpressionBuilder();
            Expression exp = b.get("id").in(idsVector);
            raq.setSelectionCriteria(exp);
            session.executeQuery(raq);
        }
    }
    - Don

  • Best Practice: Update OWB 10.2.0.1 to 10.2.0.3 ?

    Hey OWB-Guys,
    I've searched the forum and metalink, but I could not find what I'm looking for. I want to update my OWB from version 10.2.0.1 to 10.2.0.3 (I'm working on WinXP Pro Sp2, so I can't use 10.2.0.4, right?).
    Our system architecture will change soon, so I have to start the Control Center Service locally on my client computer in the future. I think there will be no OWB installed on DB server, if I understood our DBA correctly.
    What will I have to do to update OWB? Install the patchset from Metalink on my client computer? What about the repository owner and repository user in the database?
    Do you have a document / link with a guideline through the things to do?
    Thanks in advance and have a nice weekend!
    Steffen

    Hey, has nobody upgraded from 10.2.0.1 to 10.2.0.3??? Please help me with some general hints or links describing this issue...
    Thank you!
    Steffen

  • Best Practices For Portal Content Objects Transport System

    Hi All,
    I am going to write some documentation on the transport system for Portal content objects in Best Practices.
    Please help me out and send me some documents related to SAP Best Practices for transport of Portal Content Objects.
    Thanks,
    Iqbal Ahmad

    Hi Iqbal,
    Hope you are doing good
    Well, have a look at these links.
    http://help.sap.com/saphelp_nw04/helpdata/en/91/4931eca9ef05449bfe272289d20b37/frameset.htm
    This document gives a detailed description.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/f570c7ee-0901-0010-269b-f743aefad0db
    Hope this helps.
    Cheers,
    Sandeep Tudumu

  • Best Practice - Securing Schema from User Access

    Scenario:
    User A requires access to schema called BLAH.
    User A is a developer that built an application using this schema in a separate development environment, but he has the same privileges mirrored to production (same roles etc. - required for operation of the application built).
    This means that the user has roles that grant Select, Update, etc. rights on the schema/tables in order to use (and maintain) the applications.
    How can we restrict access to the BLAH schema in PRODUCTION, enforcing it to only be accessible via middle tier / application (proxy authentication?)?
    We've looked at using proxy authentication; however, it's not possible to grant roles and rights to the proxy account and NOT have them granted to the user (so they can dive straight in using development tooling and hit prod etc.).
    We've tried granting it on a session basis using proxy authentication (i.e. User A connects via proxy, and we ENABLE a disabled role on the user based on this connection); however, it causes performance issues.
    Are we tackling this the wrong way? What's the best practice for securing Oracle schemas (and objects in general) for user access where the users actually get an Oracle user account (or even use SSO) for day to day business as usual?
    To me this feels like a common scenario, especially where SSO comes into play ...
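    For reference, here is a minimal JDBC sketch of the proxy mechanism being discussed; the account names and connect string are hypothetical:
    // One-time DBA setup (SQL*Plus): ALTER USER user_a GRANT CONNECT THROUGH app_proxy;
    import java.util.Properties;
    import oracle.jdbc.OracleConnection;
    import oracle.jdbc.pool.OracleDataSource;

    public class ProxyAuthSketch {
        public static void main(String[] args) throws Exception {
            OracleDataSource ds = new OracleDataSource();
            ds.setURL("jdbc:oracle:thin:@//dbhost:1521/PROD"); // hypothetical connect string
            ds.setUser("app_proxy");    // middle-tier account with no direct object grants
            ds.setPassword("secret");
            OracleConnection conn = (OracleConnection) ds.getConnection();

            // Switch the session to the end user without needing their password
            Properties props = new Properties();
            props.setProperty(OracleConnection.PROXY_USER_NAME, "user_a");
            conn.openProxySession(OracleConnection.PROXYTYPE_USER_NAME, props);

            // ... work here runs as USER_A; roles can be enabled only for proxy sessions ...
            conn.close(); // ends the proxy session and the underlying connection
        }
    }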

    What about situations where we have legacy Oracle Forms stuff? In these cases the user must be granted Select etc. rights to particular objects, as these can't connect via a middle tier.
    The problem we have is that our existing middle tier implementation is built expecting the user credentials to be passed to it during initial authentication, and does not use a proxy or super-user style account. We have, historically, been 100% reliant on Oracle rights and controls to validate and restrict access to our underlying data. From what you are saying, we should start to look at using proxy or super-user access and move this control process further up - i.e. into code or packages? If so, does this mean that there is no specific way to restrict schema access to given proxy accounts and then allow normal user accounts to connect through these to get access (kind of a delegated access scenario), without using disabled roles?

  • OIM best practice and E-Business

    I have the business requirement to provision different types of users in EBS. There are different applications developed within EBS for which the user provisioning flow may vary slightly.
    What is the best practice with regard to creating resource objects and forms? Should I create a separate RO and set of forms for each set of users?

    EBS and SAP implementations with complex and varying approval workflows are clearly among the most challenging applications of OIM. There are a number of design patterns, but without a lot of detail about your specific implementation it is very hard to say which pattern is the most appropriate.
    (Feel free to contact me on [email protected] if you want to discuss this in more detail but don't want to put all the detail in a public forum.)
    Best regards
    /M

  • Integrated Planning Best Practices

    Folks -
    Does anyone have, or know if there is, an SAP Best Practices white paper addressing modelling, planning objects, and other design considerations, etc., for Integrated Planning?
    I am not looking to install SAP Best Practices building blocks etc.
    All I am looking for is a white paper.
    Will reward of course.

    Hi Abhi,
    BI Integrated Planning is still too new. However, you can take the lessons learned and best practices for BW (data modeling) and for BPS (planning models), which also apply to BI-IP in a very similar way.
    Regards,
    Marc
    SAP NetWeaver RIG

  • Best Practices for File Organization/Project Explorer

    So we are finally getting SCC at my organization to manage our LabVIEW development, and that is good! 
    Now, we are starting in on discussions about how we should organize our files on disk and how we should use the Project Explorer. When I started here about 3 years ago, I wasn't very familiar with the project explorer, so I read the article at http://zone.ni.com/devzone/cda/tut/p/id/7197. Two of the main things I took away from that article are:
    1. Organize Files in a logical manner on disk. Whatever that is, it is not a flat file structure.
    2. The top level VI should be separate from other source code. Preferably, it should reside in the application folder.
    Push Back Against These Recommendations
    Before I was hired, most, if not all, LabVIEW development was done utilizing a flat file structure, and the top level VI lived with the source code. Since we didn't have proper SCC, each individual organized files as he saw fit. So I started using the Project Explorer (not even its use is totally accepted right now) and I began to follow recommendations 1 and 2 above. I didn't always follow #1 very strictly, but I have been working towards it, and I have always followed #2 religiously.
    Since we are starting these discussions on how we should organize files on disk I'm starting to get some push back to following these two recommendations.
    The arguments I get in favor of using a flat file structure are that you always know where every file is, including the top-level VI. It is also argued that it is a lot of effort to organize and search for VIs when they all reside in different folders. I think the fear is that by getting "clever" and organizing our files in such a manner we'll make things complicated and somehow shoot ourselves in the foot.
    The argument I get against separating the top level VI from the rest of the source code is that it:
    (a) It won't be clear where it is (like it is buried within hundreds of VIs). However, it is argued, you can just put a "!" in front of the file name and then it is always at the top of the flat file structure.
    (b) An extension of (a) is that things look or seem messy when VIs (including the top level VI) don't live in a sub-folder and are just hanging out with the Project Explorer file.
    (c) I think there may be some fear of breaking the VI by moving it and altering its dependencies.
    Convincing Others its Good to Follow These Recommendations
    So, if I want to follow NI's recommendations, I need to come up with reasons we should follow them. Also, I should state that I care about following these recommendations because it's what NI recommends. They've been around the block a few times, and I'm sure there are good reasons why these are best practices. However, I don't think I've given a very compelling case for why these recommendations should be followed.
    So I'll tell you all what I think good reasons are for these recommendations and perhaps I can get some feedback or additional support? If I'm crazy for wanting to follow these recommendations maybe someone can point out why I'm crazy. 
    (a) Arguments for Following Both
    I. I passed the CLAD a couple of weeks ago, and I have started studying for the CLD. Part of the CLD is following both of these recommendations (see page 6 of http://ftp.ni.com/evaluation/certification/cld/cld_exam_prep_guide_english.pdf). While this isn't a reason in and of itself, it suggests that if it is important when being certified, it is important in practice!
    II. If we hire new developers that are familiar with LabVIEW, they will most likely be familiar with these recommendations, especially if they are certified. That will lead to increased productivity out of the door because they won't have to learn our special way of doing things.
    (b) Arguments for Organized File Structure
    I. Unused VIs are easier to identify and remove. Right now we never remove VIs because we don't know if they are used or not. This leads to a lot of VI bloat.
    II. In a flat file structure, it is hard to know what a specific VI's function is by looking at its name.
    (c) Arguments for Separating Top Level VI from Source Code
    I. Placing the top level VI in the application folder is intuitive. As long as the top level VI is the only VI in the application folder, there is no mistaking it is the top level VI, especially once you open it. This makes it easy for new developers to find the top level VI. I'd argue it isn't very intuitive for new developers to know that a VI in the source code folder that is prefaced with a "!" is the top level VI.
    Summary
    So that is what I think so far. Is there anything else I am missing to support following those two recommendations or am I just being inflexible?
    Thanks!

    zenthoef,
    As a CLA, I have struggled with file structure over the years.  Here are my recommendations:
    1.  Put the top level VI and the project in the top-level folder.  This makes it very clear where to begin.
    2.  Put the remaining user interface VIs in a separate folder.  Again, it makes it very clear what the functionality of these VIs are.
    3.  If you are using objects, put each object in a separate folder.  Place the family of objects in one folder, with each object in a subfolder.
    4.  Keep the remaining VIs in a single folder.  This can contain a small number of subfolders if your project is large, but too many folders make it hard to figure out where your VIs are.  For example, you might have a DAQ subfolder, an Analysis subfolder, and a Report subfolder.  But if you had a Test1 folder and a Test2 folder, and you had a VI that was used by both tests, where would it go?  Keep it simple.
    5.  You mentioned that it is hard to figure out what a VI does by its name.  That implies that 1) you need better names, and 2) your VIs are too complicated.  A VI should do a single function which can be adequately described by its name.  That VI might be something like Analyze Data.vi, which would contain a bunch more subVIs (like Get 1st Harmonics.vi), but each VI would perform a single function.  You wouldn't save the data to a report in Analyze Data.vi, for example.  (A hypothetical layout following these suggestions is sketched below.)
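    For illustration, a layout following suggestions 1-4 (all names made up):
    MyProject\
        MyProject.lvproj
        Main.vi                  <- top level VI next to the project (1)
        UI\                      <- remaining user interface VIs (2)
            Settings Dialog.vi
            Results Display.vi
        Classes\                 <- family of objects, one subfolder per object (3)
            Motor\
            Pump\
        Support\                 <- everything else, with few subfolders (4)
            DAQ\
            Analysis\
            Report\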
    The most compelling reason for following these suggestions is that it is easier to figure out what the code is doing after you haven't looked at it for a while.  Once you have an application that is working and bug free, you shouldn't have to touch the code until you want to add features.  If that is even 6 months later, you will probably have forgotten how the code works.  As a consultant, I have had to update other people's code, and just figuring how where to start can be a challenge.
    Tom Brass
    Certified LabVIEW Architect
    Saint Bernard Engineering, Inc.
    www.saintbernardengineering.com

  • Best practice for linking fields from multiple entity objects

    I am currently transitioning from PHP to ADF. I'm looking for the best practice for linking data from multiple entity objects.
    Example:
    EO 'REQUESTS' has fields: req_id, name, dt, his_stat_id, her_stat_id
    EO 'STATUSES' has fields: stat_id, short_txt_descr
    'REQUESTS' is linked to EO 'STATUSES' on: STATUSES.stat_id = REQUESTS.his_status_id
    'REQUESTS' is also linked to EO 'STATUSES' on: STATUSES.stat_id = REQUESTS.her_status_id
    REQUESTS.his_status_id is independent of REQUESTS.her_status_id
    When I create a VO for REQUESTS, I want to display: REQUESTS.name, REQUESTS.dt, STATUSES.short_txt_descr (for his_stat_id), STATUS.short_txt_descr (for her_stat_id)
    What is the best practice for accomplishing this? It appears I could do it a few different ways:
    1. Create the REQUESTS VO with a LOV for his_stat_id and her_stat_id
    2. Create the REQUESTS VO with the join to STATUSES performed within the query for the VO. This would require joining on the STATUSES EO twice (his_stat_id, her_stat_id)
    3. I just started reading about View Links - would that somehow do what I'm looking for?
    I also need to be able to update his_status_id and her_status_id by selecting a STATUSES.short_txt_descr from a dropdown.
    Any suggestions on how to approach such a stupidly simple task?
    Using jDeveloper 11.1.2.2.0 if that makes a difference in the solution.
    Thanks ahead of time,
    CJ

    CJ,
    I vote for solution 1, as it matches your use case exactly. As you said, you want to update his_status_id and her_status_id by selecting a STATUSES.short_txt_descr from a dropdown. This is exactly the LOV solution.
    View Links are used for master-detail navigation (which you don't do here), and joining the data makes it difficult to update (and you would still need an LOV for the dropdown box).
    Timo

  • Best Practice on Updating From a DB

    Hi Everyone,
    What are some best practices surrounding getting data from an Oracle database into the cache layer when a data change event (insert, update, delete) happens? I've searched far and wide, and the best answer I can find is to use Extractor/Replicator -> JMS -> Subscriber -> cache.
    Thank you for your help.

    You're right, DCN is an interesting idea, but it's again the case where the technology works on simple Hello World things but fails to deliver in the real world.
    To me DCN looks like an unfinished Oracle project: a lot of marketing, but poor features. It's good mostly for student work or test labs, not for real-world complexity.
    Two reasons:
    1. DCN has severe limitations on the complexity of joins and queries if you plan to use the query change notification feature.
    2. It puts too much pressure on the database by creating tons of events when you don't need and don't expect them, because it's too generic.
    Instead of DCN, create ordinary Oracle AQ queues, using a tiny SQL object type event as a payload, then create triggers and/or PL/SQL stored procedures which fill the event with all the primary keys you need and the unique ID of the object you need to extract.
    Triggers will filter out unnecessary updates, sending events only when you wish.
    If conditions are too complex for triggers, you may create and place events either by a call from the event source app itself or on a scheduled basis; it's entirely up to you. Also, the technique of creating object views and using an INSTEAD OF trigger on the object view works pretty well.
    And finally, implement a listener on the Coherence side which reads the event and, based on the event ID and the event's set of primary keys, makes the necessary extracts and assembles a Java object ready to be placed into the cache. Once the Java object is assembled, you can place it into the cache.
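    A minimal sketch of such a listener, assuming the AQ queue is exposed through Oracle's JMS interface (oracle.jms.AQjmsFactory); the queue name, credentials, and extract helper are hypothetical, and a MapMessage stands in for the SQL object type payload described above:
    import java.util.Properties;
    import javax.jms.*;
    import oracle.jms.AQjmsFactory;
    import oracle.jms.AQjmsSession;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class CacheFeedListener implements MessageListener {
        private final NamedCache cache = CacheFactory.getCache("objects"); // hypothetical cache name

        public void onMessage(Message msg) {
            try {
                MapMessage event = (MapMessage) msg;        // payload: object id + primary keys
                String id = event.getString("id");
                Object assembled = loadAndAssemble(event);  // hypothetical plain-JDBC extract by primary keys
                cache.put(id, assembled);                   // place the ready Java object into Coherence
            } catch (JMSException e) {
                throw new RuntimeException(e);
            }
        }

        private Object loadAndAssemble(MapMessage event) throws JMSException {
            return null; // stub: read the rows by primary key via JDBC and build the object
        }

        public static void main(String[] args) throws Exception {
            QueueConnectionFactory qcf = AQjmsFactory.getQueueConnectionFactory(
                "jdbc:oracle:thin:@//dbhost:1521/PROD", new Properties()); // hypothetical connect string
            QueueConnection qc = qcf.createQueueConnection("cache_user", "secret");
            QueueSession qs = qc.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = ((AQjmsSession) qs).getQueue("CACHE_USER", "OBJECT_EVENTS"); // AQ owner + queue name
            qs.createReceiver(queue).setMessageListener(new CacheFeedListener());
            qc.start(); // begin delivering events
        }
    }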
    Don't use Hibernate, TopLink or any other relational-to-object frameworks, they're too slow and add excessive and unnecessary overhead to the process, use standard Oracle database features, they're much faster and transaction-safe. Usage of these frameworks within 10g or 11g database is obsolete and caused mainly by lack of knowledge among Java developers about database features on this regard.
    In order to make the whole system fail-safe and scalable, you have to implement the listener in a fail-safe fashion, in the form of a work manager plus slave processes spawned on the other nodes. The work manager has to be auto fail-safe and auto scalable, so that if the node holding the work manager instance fails due to cache cluster member departure, a reset, or something else, another work manager is automatically spawned on the first available node.
    Also, the work manager should spread and synchronize the work among the slave listener processes based on the current cache cluster members, automatically re-balancing and recovering work in case of cache member join/departure.
    Out of the box, Coherence has an implementation of a work manager, but it's not fail-safe and does not provide the automatic scale-up/recovery features described above, so you have to implement your own.
    All the features I've described are implemented and happily used in a complex OLTP + workflow system backed by a big Oracle RAC cluster with a huge workload, processing millions of transactions per day.
