Comparator Best Practice

I am looking for some suggestions on how to sort a Collection of Objects in multiple ways.
I have the following Class:
public class User {
    private String firstName;
    private Date dob;
    private String lastName;
    // getters such as getFirstName(), getLastName(), getDOB() assumed below
}
And I want to sort it in different ways, by firstName or by dob etc.
My idea after doing some reading is to provide a class that holds the different sort types
import java.util.*;

public class UserSortTypes {

    static final Comparator<User> FIRST_NAME_ORDER = new Comparator<User>() {
        public int compare(User u1, User u2) {
            return u1.getFirstName().compareTo(u2.getFirstName());
        }
    };

    static final Comparator<User> DOB_ORDER = new Comparator<User>() {
        public int compare(User u1, User u2) {
            return u1.getDOB().compareTo(u2.getDOB());
        }
    };
}
and then use that class when sorting etc.
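For example, the comparators could then be used like this (a minimal sketch, assuming User exposes the getters used above):
    List<User> users = new ArrayList<User>();
    // ... populate users ...
    Collections.sort(users, UserSortTypes.FIRST_NAME_ORDER);  // sort by first name
    Collections.sort(users, UserSortTypes.DOB_ORDER);         // sort by date of birth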
What is the best practice for sorting a collection in multiple ways?
Thank you,
Al

I'd do something like this, which is the same way you did it,...
import java.util.Comparator;

public class UserComparator implements Comparator<User> {

     private SortField field;
     private SortOrder sortOrder = SortOrder.ASCENDING;

     @Override
     public int compare(User o1, User o2) {
          int retVal = 0;
          // null-check...
          switch (field) {
          case FIRSTNAME:
               retVal = o1.getFirstName().compareTo(o2.getFirstName());
               break;
          case LASTNAME:
               retVal = o1.getLastName().compareTo(o2.getLastName());
               break;
          }
          if (sortOrder == SortOrder.DESCENDING) {
               retVal *= -1;
          }
          return retVal;
     }

     public enum SortField {
          FIRSTNAME, LASTNAME;
     }

     public enum SortOrder {
          ASCENDING, DESCENDING;
     }
}
Best practice? I dunno,...
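A usage sketch for the above (hypothetical: it assumes setters for field and sortOrder, which the snippet doesn't show):
    UserComparator comparator = new UserComparator();
    comparator.setField(UserComparator.SortField.LASTNAME);        // assumed setter
    comparator.setSortOrder(UserComparator.SortOrder.DESCENDING);  // assumed setter
    Collections.sort(users, comparator);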
For our presentation logic, we carry it to excess... we use one Comparator for sorting any List displayed to the browser, using it for all types.
... this is Java 1.4 code, hence the raw (non-generic) Comparator ...
import java.util.Comparator;
import java.util.Date;

public class BeanComparator implements Comparator {

     private String property = null;
     private boolean ascending = true;

     public int compare(Object o1, Object o2) {
          int result = 0;
          Object value1 = this.getProperty(o1, property);
          Object value2 = this.getProperty(o2, property);
          // null checks,...
          if (value1 instanceof String) {
               String s1 = (String) value1;
               String s2 = (String) value2;
               result = s1.compareToIgnoreCase(s2);
          } else if (value1 instanceof Date) {
               Date d1 = (Date) value1;
               Date d2 = (Date) value2;
               result = d1.compareTo(d2);
          } // and other types,..
          if (!ascending) {
               result = result * -1;
          }
          return result;
     }

     private Object getProperty(Object obj, String property) {
          // get property using reflection,...
          return null; // body elided in the original post
     }
}
regards
slowfly
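For reference, a minimal reflection-based sketch of what the elided getProperty helper might look like (hypothetical; the original post does not show its actual implementation):
    private Object getProperty(Object obj, String property) {
        try {
            // e.g. property "firstName" -> getter "getFirstName()"
            String getter = "get" + property.substring(0, 1).toUpperCase() + property.substring(1);
            return obj.getClass().getMethod(getter, new Class[0]).invoke(obj, new Object[0]);
        } catch (Exception e) {
            return null; // or log/rethrow as appropriate
        }
    }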

Similar Messages

  • Best practices of having a different external/internal domain

We are in the midst of migrating from a joint Windows/Mac server environment to a completely Apple one. Previously, DNS was hosted on the Windows machine using the companyname.local internal domain. When we set up the Apple server, our Apple contact created a new internal domain called companyname.ltd. (Supposedly there was some conflict in having a 10.5 server be part of a .local domain; either way, it was not a problem.) Companyname.net is our website.
    The goal now is to have the Leopard server run everything - DNS, Kerio mailserver, website, the works. In setting up the DNS on the Mac server this go around, we were advised to just use companyname.net as the internal domain name instead of .ltd or .local or something like that. I happen to like having a separate local domain just for clarity's sake - users know if they are internal/external, but supposedly the Kerio setup would respond much better to just the one companyname.net.
    So after all that - what's the best practice of what I should do? Is it ok to have companyname.net be the local domain, even when companyname.net is also the address to our external website? Or should the local domain be something different from that public URL? Or does it really not matter one way or the other? I've been running companyname.net as the local domain for a week or so now with pretty much no issues, I'd just hate to hit a point where something breaks long term because of an initial setup mixup.
    Thanks in advance for any advice you all can offer!

    Part of this is personal preference, but there are some technical elements to it, too.
    You may find that your decision is swayed by the number of mobile users in your network. If your internal machines are all stationary then it doesn't matter if they're configured for companyname.local (or any other internal-only domain), but if you're a mobile user (e.g. on a laptop that you take to/from work/home/clients/starbucks, etc.) then you'll find it a huge PITA to have to reconfigure things like your mail client to get mail from mail.companyname.local when you're in the office but mail.companyname.net when you're outside.
    For this reason we opted to use the same domain name internally as well as externally. Everyone can set their mail client (and other apps) to use one hostname and DNS controls where they go - e.g. if they're in the office or on VPN, the office DNS server hands out the internal address of the mail server, but if they're remote they get the public address.
    For the most part, users don't know the difference - most of them wouldn't know how to tell anyway - and using one domain name puts the onus on the network administrator to make sure it's correct which IMHO certainly raises the chance of it working correctly when compared to hoping/expecting/praying that all company employees understand your network and know which server name to use when.
    Now one of the downsides of this is that you need to maintain two copies of your companyname.net domain zone data - one for the internal view and one for external (but that's not much more effort than maintaining companyname.net and companyname.local) and make sure you edit the right one.
    It also means you cannot use Apple's Server Admin to manage your DNS on a single machine - Server Admin only understands one view (either internal or external, but not both at the same time). If you have two DNS servers (one for public use and one for internal-only use) then that's not so much of an issue.
    Of course, you can always drive DNS manually by editing the zone files directly.

  • Error while Connecting report Best Practices v1.31 with SAP

    Hello experts,
    I'm facing an issue while trying to connect some of my reports from Best Practices for BI with SAP.
It only happens with InfoSets; the other reports, which use SAP tables, go through without a problem.
The most interesting part is that I already have one of the reports connected to SAP InfoSets.
I have already verified the document with the steps for creating the additional database that comes with the BP pack. They seem OK.
    Here goes what Crystal Reports throws to me after changing the data source to SAP:
    For report "GL Statement" one of the Financial Analysis one which uses InfoSet: /KYK/IS_FIGL_I3:
    - Failed to retrieve data from the database; - click ok then...
- Database connector error: no variant was indicated for the exercise (something like this, after translating) - click OK, then
    - Database connector error: RFC_INVALID_HANDLE
    For report "Cost Analysis: Planned vs. Actual Order Costs" one of the Financial Analysis one which uses InfoSet: ZBPBI131_INFO_ODVR and ZBPBI131_INFO_COAS; and also the Query CO_OM_OP_20_Q1:
    - Failed to retrieve data from the database; - click ok then...
    - Database connector error: check class for selections raised errors - click ok then
    - Database connector error: RFC_INVALID_HANDLE
    Obs.: Those "Z" infosets are already created in SAP environment.
    The one that works fine is one of the Purchasing Analysis reports:
    - Purchasing Group Analysis -> InfoSet: /KYK/IS_MCE1
I'm kind of lost on how to solve this, because I'm not sure whether the issue is in the SAP JCo or in some parameter that was set up wrongly in SAP, and I have already checked possible solutions for both.
    Thanks in advance,
    Carlos Henrique Matos da Silva - SAP BusinessObjects BI - Brazil.

I re-checked step 3.2.3 - Uploading Crystal User Roles (transaction PFCG) - of the manual, where it talks about the CRYSTAL_ENTITLEMENT and CRYSTAL_DESIGNER roles. On the Authorizations tab I noticed the status said the profile hadn't been generated and showed a yellow sign, so I generated it as the manual says.
Both statuses are now saying "Authorization profile is generated" and the sign is now green on the tab.
I had another issue on the User tab (it was yellow, like the Authorizations one before generating); all I needed to do to turn it green was run a user comparison (User Comparison button).
After all that, I tried once more to refresh the Crystal report and I still have the error messages being thrown.
There's one more issue on one of the tabs of the PFCG transaction: the Menu tab shows a red sign, but there's nothing about it in the manual. I just have a folder called "Role menu" without anything in it.
Could that be the reason why I'm facing errors when connecting the reports to SAP InfoSets? (Remember, one of my reports that is connected to an InfoSet works fine.)
    Thanks in advance,
    Carlos Henrique Matos da Silva - SAP BusinessObjects BI - Brazil.

  • Best practice for dealing with Recordsets, JDBC and JSP?

    I've spent the last three years developing web apps using JSP, Struts and Kodo JDO for persistence. All of the content for the apps was created as Java objects using model classes and saved to an Oracle db. Thus, data retrieved from the db was as instances of the model classes and then put into Struts form beans, etc.
I changed jobs last month and am now having to use Servlets with JDBC to retrieve records from db tables and return them in Recordsets. Oh, and I can't use Struts in my JSPs either. I'm beginning to think that I had it easy at my previous job, but maybe that's just because I was used to it.
    So here are my problems/questions:
    I have two tables with a one to many relationship that I need to retrieve data from, show in a jsp and be able to update eventually.
    So here's what I am doing:
    a) In a servlet, I use a SQL statement to join the tables and retrieve the results into a Recordset.
b) I created a class with a bunch of String attributes to copy the Recordset data into, one bean instance per Recordset row, and then close the Recordset.
    c) I then add the beans to an ArrayList and save the ArrayList into the session.
    d) Then, in the JSP, I retrieve the ArrayList from the session and iterate over each bean instance, printing the data out to the jsp. There are some logic statements to determine when not to print redundant data caused by the one to many join.
    e) I have not written the code to update the data yet but was planning on having separate jsps for updating the (one) table and the (many) table.
    Would most of you do something similar? Would you use one SQL statement to retrieve all of the data for display and use logic to avoid printing the redundant part of the data? Or would you have used separate SQL queries, one for each table? Would you have saved the results into something other than an instance of a bean class that represents one record in the RecordSet? Would you have had a bean class with attributes other than Strings - like had a collection attribute to hold the results from the "many" table? The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before.
    Your help/opinion will be greatly appreciated!

> Would you use one SQL statement to retrieve all of the data for display
Yes.
> and use logic to avoid printing the redundant part of the data
No.
    I believe in minimising the number of queries. If it is a simple one-many join on a db table, then one query is better than one + n queries.
However, I prefer to store the objects in a bean class with attributes other than strings - i.e. one object, with a collection attribute to hold the related "many" records.
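For illustration, a minimal sketch of that shape (hypothetical names: an Order with its OrderLine children from the one-to-many join):
    import java.util.ArrayList;
    import java.util.List;
    // Parent bean for the "one" table, holding the related "many" rows.
    public class Order {
        private String orderId;
        private final List<OrderLine> lines = new ArrayList<OrderLine>();
        public String getOrderId() { return orderId; }
        public void setOrderId(String orderId) { this.orderId = orderId; }
        public List<OrderLine> getLines() { return lines; }
    }
    class OrderLine { // typically its own file; getters/setters omitted
        private String itemId;
        private int quantity;
    }
While iterating the joined result set, you create a new Order only when the key from the "one" table changes and add an OrderLine for every row; the JSP can then loop over the orders and their nested lines without any redundancy logic.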
    Does the fact you are not using Struts mean that you have to use scriptlet code? (shudder)
    Or are you using JSTL, or other custom tags?
    How about tools like Ant? Junit testing?
> The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before.
Anything different takes adjusting to. Sounds like you know what you're doing for the most part. I agree, in terms of best practices what you have described so far sounds like a step backwards from what you were previously doing.
However, I wouldn't go complaining about it too loudly, too quickly. If you're new on the block, there's nothing like making a pain of yourself and complaining about how backwards their existing work is to put your new workmates' backs up.
    Look on it as a challenge. Maybe discuss it quietly with a team leader, to see if they understand how much easier/better/less error prone such approaches can be?
    Struts, cumbersome as it can be, definitely has the advantage of pushing you to follow good MVC practice.
    Good luck,
    evnafets

  • Best practice in datawarehousing

    This is the scenario for the client I am working with.
They are pulling data into the OBIEE reports directly from an Essbase cube. The ETL tool used is SSIS and the scheduler is Control-M. At times the client has faced the issue of the server going down, yet they are still not applying best practices; the best practice would be to maintain a proper data warehouse.
The users being generated come from LDAP, which they maintain in Microsoft AD as well as LDAP.
Could anybody shed some light on this situation and on how my client should do data warehousing better, to improve profits or do better business compared to the current setup?
    Thanks,
    Amol

    any reply please ????
    Thanks,
    Amol

  • SAP PI conceptual best practice for synchronous scenarios

    Hi,
Apologies for the length of this post, but I'm sure this is an area most of you have thought about in your journey with SAP PI.
We have recently upgraded our SAP PI system from 7.0 to 7.1 and I'd like to document best practice guidelines for our internal development team to follow.
    I'd be grateful for any feedback related to my thoughts below which may help to consolidate my knowledge to date.
Prior to the upgrade we had implemented a number of synchronous and asynchronous scenarios using SAP PI as the hub at runtime, using the Integration Directory configuration.
No interfaces to date are exposed directly from our backend systems using transaction SOAMANAGER.
Our asynchronous scenarios operate through the SAP PI hub at runtime, which builds in resilience and harnesses the benefits of the queue-based approach.
My queries relate to the implementation of synchronous scenarios where there is no mapping or routing requirement. Perhaps it's best that I outline my experience/thoughts on the 3 options and summarise my queries/concerns that people may be able to advise upon afterwards.
1) Use SAP PI Integration Directory. I appreciate going through SAP PI at runtime is not necessary and adds latency to the process, but the monitoring capability in transaction SXMB_MONI provides full access for audit purposes, and we have implemented alerting running hourly so all process errors are raised and we handle them accordingly. In our SAP PI Production system we have a full record of sync messages recorded, while these don't show in the backend system as we don't have propagation turned on. When we first looked at this, the reduction in speed seemed to be outweighed by the quality of the monitoring/alerting, given none of the processes are particularly intensive and don't require instant responses. We have some inbound interfaces called by two sender systems, so we have the overhead of maintaining the Integration Repository/Directory design/configuration twice for these systems, but the nice thing is SXMB_MONI shows which system sent the message. Extra work, but seemingly for improved visibility of the process. I'm not suggesting this is the correct long-term approach, but it states where we are currently.
2) Use the Advanced Adapter Engine. I've heard mixed reviews about this functionality; there are obvious improvements in speed by avoiding the ABAP stack on the SAP PI server at runtime, but some people have complained about the lack of SXMB_MONI support. I don't know if this is still the case as we're at SAP PI 7.1 EHP1, but I plan to test and evaluate once Basis have set up the pre-requisite RFC etc.
3) Use the backend system's SOAP runtime and SOAMANAGER. Using this option I can still model inbound interfaces in SAP PI but expose these using transaction SOAMANAGER in the backend ABAP system. [I would have tested out the direct P2P connection option, but our backend systems are still at Netweaver 7.0 and this option is not supported until 7.1, so that's out for now.] The clear benefit of exposing the service directly from the backend system is obviously performance, which in some of our planned processes would be desirable. My understanding is that the logging/tracing options in SOAMANAGER have to be switched on while you investigate, so there is no automatic recording of interface detail for retrospective review.
Queries:
I have the feeling that there is no clear-cut answer to which of the options you select from above, but the decision should be based upon the requirements.
I'm curious to understand SAP's intention with these options:
- For synchronous scenarios, is it assumed that the client should always handle errors, therefore the lack of monitoring should be less of a concern and option 3 desirable when no mapping/routing is required?
- Not only does option 3 offer the best performance, but the generated WSDL is ready once built for any further system to implement, thereby offering the maximum benefit of SOA; therefore should we always use option 3 whenever possible?
- Is it intended that the AAE runtime should be used when available, but only for asynchronous scenarios or those requiring SAP PI functionality like mapping/routing, otherwise customers should use option 3? I accept there are some areas of functionality not yet supported with the AAE, so that would be another factor.
Thanks for any advice, it is much appreciated.
Alan
    Edited by: Alan Cecchini on Aug 19, 2010 11:48 AM
    Edited by: Alan Cecchini on Aug 19, 2010 11:50 AM
    Edited by: Alan Cecchini on Aug 20, 2010 12:11 PM

    Hi Aaron,
I was hoping for a better, more concrete answer to my questions.
    I've had discussion with a number of experienced SAP developers and read many articles.
    There is no definitive paper that sets out the best approach here but I have gleaned the following key points:
    - Make interfaces asynchronous whenever possible to reduce system dependencies and improve the user experience (e.g. by eliminating wait times when they are not essential, such as by sending them an email with confirmation details rather than waiting for the server to respond)
- It is the responsibility of the client to handle errors in synchronous scenarios, hence the monitoring lost with point-to-point services, compared to the detailed information in transaction SXMB_MONI for PI services, is not such a big issue. You can always turn on monitoring in SOAMANAGER to trace errors if need be.
- Choice of integration technique varies considerably by release level (for PI and Netweaver), so system landscape will be a significant factor. For example, we have some systems on Netweaver 7.0 and others on 7.1. As you need 7.1 for direct connection PI services, we'd rather wait until all systems are at the higher level than have mixed usage in our landscape - it is already complex enough.
- We've not tried the AAE option in a Production scenario yet, but this is only really important for high volume interfaces, something that is not a concern at the moment. Obviously cumulative performance may be an issue in time, so we plan to start looking at AAE soon.
    Hope these comments may be useful.
    Alan

  • Best Practice Analyzer database mismatch error

    Hi all,
    I am getting the following critical error when I run the BPA on a couple of our BizTalk servers and wondered if anyone had seen the same?
    "The version of BizTalk Server does not Match the Version of BizTalk Management Database Schemas"
I am using v1.2 of BPA against a BizTalk 2010 install.
    This has only surfaced since we upgraded from BizTalk 2009 R2 BUT not on all of our environments.
    It does not seem to be causing any runtime issues however as all applications seem to be running fine!!
Looking at the BizTalkDBVersion tables in SQL, everything looks the same on servers which present this error and those that do not, i.e. there is an entry for version 3.9.469.0 ... which matches the BizTalk Server version reported in the registry
    at "\HKLM\Software\Microsoft\BizTalk Server\3.0\Product Version\"
    The only thing I can see is that as this was an upgrade there is also an entry in the
BizTalkDBVersion tables for the 2009R2 version (3.8.368.0), so maybe the BPA is selecting this value and comparing it against the registry version?
    However, this doesn't explain why I see this issue on 2 upgraded servers but not the 3rd? 
    Any ideas?
    Regards,
    Dave

    Hi Dave,
There is no such version as BizTalk 2009 R2. v3.8.368.0 refers to BizTalk 2009 (not R2).
    The above error occurred because BizTalk Server Best Practices Analyzer has detected that the version of BizTalk Server does not match the version of the BizTalk Database Schemas. This can happen if the BizTalk database was deleted and then restored
    with an incorrect database.
    Check the version of SQL Server upgraded against the version of BizTalk server.
    Reference BPA Help file:
    If this answers your question please mark it accordingly. If this post is helpful, please vote as helpful by clicking the upward arrow mark next to my reply.

  • Best practice for application module  for scalability

If I compare an application module with a Forms 6i runtime session (correct me if I'm wrong):
In Forms 6i, we create a single form for purchase entry, in which we select tables like po, po_item, item_master, customer_master.
We also have a single form for sales entry, in which we select tables like sales, sales_item, item_master, customer_master, po, po_item.
So my question is: in JDeveloper, we are planning to make a separate JSP page and application module for purchase entry,
and a separate JSP page and application module for sales.
Is that OK,
or
what is the best practice in this scenario (or for scalability)?
If I make one single application module for the whole application (say 300 entities/tables), will the performance of my app server degrade?

    You might want to read the chapter about AM Granularity in the ADF Developer Guide:
    http://download.oracle.com/docs/html/B25947_01/bcservices009.htm#sm0229

  • Auto-Anchor Controller's Best Practice

    Hi All,
I got confused with this setup. I have 2 WLCs: one is the internal controller, and the other is configured as the anchor controller (different subnet, DMZ zone) for guest traffic. Where do I configure DHCP assignment for these users? Should the production controller be involved in the DHCP process, or should I point to the anchor to take care of everything? Which is recommended?
Also, is there any best practice doc available for this?
    Please help...
    thanks in advance.

    Prasan,
Just keep in mind that there are best practices that are published and best practices that you learn from experience. Being a consultant, I get to implement wireless in various networks, and everyone's network is quite different. Also, code versions can change a best practice because of bug issues, or because of how a standard might have changed and how that standard was implemented in code. The biggest best practice secret is really working with various client devices (scanners, laptops, smartphones, etc.) and seeing how those change because of newer models and firmware updates. It's amazing to see how some devices will require a few checkboxes in the WLAN to be disabled compared to others. Even with anchoring for guests and using a custom WebAuth, you have to make sure the splash page works with various types of browsers.
    What I can say is to always try the defaults if possible when you have issues and then enable things one by one.
    Sent from Cisco Technical Support iPhone App

  • Oracle Statistics - Best Practice?

    We run stats with brconnect weekly:
    brconnect -u / -c -f stats -t all
    I'm trying to understand how some of our stats are old or stale.  Where's my gap?  We are running Oracle 11g and have Table Monitoring set on every table.  My user_tab_modifications is tracking changes in just over 3,000 tables.  I believe that when those entries surpass 50% changed, then they will be flagged for the above brconnect to update their stats.  Correct?
    Plus, we have our DBSTATC entries.  A lot of those entries were last analyzed some 10 years ago.  Does the above brconnect consider DBSTATC at all?  Or do we need to regularly run the following, as well?
    brconnect -u / -c -f stats -t dbstatc_tab
    I've got tables that are flagged as stale, so something doesn't seem to be quite right in our best practice.
    SQL> select count(*) from dba_tab_statistics
      2  where owner = 'SAPR3' and stale_stats = 'YES';
      COUNT(*)
          1681
    I realize that stats last analyzed some ten years ago does not necessarily mean they are no longer good but I am curious if the weekly stats collection we are doing is sufficient.  Any best practices for me to consider?  Is there some kind of onetime scan I should do to check the health of all stats?

    Hi Richard,
    > We are running Oracle 11g and have Table Monitoring set on every table.
The table monitoring attribute is not necessary anymore, or better said, it is deprecated, because these metrics are controlled by STATISTICS_LEVEL nowadays. The table monitoring attribute is only relevant for Oracle versions lower than 10g.
    > I believe that when those entries surpass 50% changed, then they will be flagged for the above brconnect to update their stats.  Correct?
    Correct, if BR*Tools parameter stats_change_threshold is set to its default. Brconnect reads the modifications (number of inserts, deletes and updates) from DBA_TAB_MODIFICATIONS and compares the sum of these changes to the total number of rows. It gathers statistics, if the amount of changes is larger than stats_change_threshold.
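For example, with the default threshold of 50%: a table with 1,000,000 rows and 600,000 tracked inserts/updates/deletes (60% changed) would get new statistics on the next brconnect stats run, while the same table with only 400,000 tracked changes (40%) would be skipped.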
    > Does the above brconnect consider DBSTATC at all?
    Yes, it does.
    > I've got tables that are flagged as stale, so something doesn't seem to be quite right in our best practice.
    The column STALE_STATS in view DBA_TAB_STATISTICS is calculated differently. This flag is used by the Oracle standard DBMS_STATS implementation which is not considered by SAP - for more details check the Oracle documentation "13.3.1.5 Determining Stale Statistics".
    The GATHER_DATABASE_STATS or GATHER_SCHEMA_STATS procedures gather new statistics for tables with stale statistics when the OPTIONS parameter is set to GATHER STALE or GATHER AUTO. If a monitored table has been modified more than 10%, then these statistics are considered stale and gathered again.
STALE_PERCENT - Determines the percentage of rows in a table that have to change before the statistics on that table are deemed stale and should be regathered. The valid domain for stale_percent is non-negative numbers. The default value is 10%. Note that if you set stale_percent to zero, the AUTO STATS gathering job will gather statistics for this table every time a row in the table is modified.
    SAP has its own automatism (like described with brconnect and stats_change_threshold) to identify stale statistics and how to collect statistics (percentage, histograms, etc.) and does not use / rely on the corresponding Oracle default mechanism.
    > Any best practices for me to consider?  Is there some kind of onetime scan I should do to check the health of all stats?
    No performance issue? No additional and unnecessary load on the system (e.g. dynamic sampling)? No brconnect runtime issue? Then you don't need to think about the brconnect implementation or special settings. Sometimes you need to tweak it (e.g. histograms, sample sizes, etc.), but then you have some specific issue that needs to be solved.
    Regards
    Stefan

  • OBIEE Best Practice Data Model/Repository Design for Objectives/Targets

    Hello World!
    We are faced with a design question that has become somewhat difficult and we need some help. We want to be able to compare side-by-side actual measures with their corresponding objectives/targets. Sounds simple. But, our objectives are static (not able to be aggregated) with multi-dimensionality and multi-levels. We need some best practice tips on how to design our data model and repository properly so that we can see the objective/target for a measure regardless of the dimensions that are used in the criteria and regardless of the level.
    Here is some more details:
    Example of existing objective table.
Dimension1 | Dimension2 | Dimension3 | Obj1 | Obj2 | Quarter
NULL       | NULL       | NULL       | .99  | 1.8  | 1Q13
DIM1VAL1   | NULL       | NULL       | .99  | 2.4  | 1Q13
DIM1VAL1   | DIM2VAL1   | NULL       | .98  | 2.41 | 1Q13
DIM1VAL1   | DIM2VAL1   | DIM3VAL1   | .97  | 2.3  | 1Q13
DIM1VAL1   | NULL       | DIM3VAL1   | .96  | 1.9  | 1Q13
NULL       | DIM2VAL1   | NULL       | .97  | 2.2  | 1Q13
NULL       | DIM2VAL1   | DIM3VAL1   | .95  | 2.0  | 1Q13
NULL       | NULL       | DIM3VAL1   | .94  | 3.1  | 1Q13
    - Right now we have quarterly objectives set using 3 different dimensions. So, if an author were to add one or more (or zero) dimensions to their criteria for a given measure they could get back a different objective. They could add Dimension1 and get 99%. They could add Dimension1 and Dimension2 and get 98%. They could add all three dimensions and get 97%. They could add zero dimensions (highest grain) and get 99%. Using our existing structure if we were to add a new dimension to the mix the possible combinations would grow dramatically. (Not flexible)
    - We would like our final solution to be flexible enough so that we could view objectives with altogether different dimensions and possibly get different objectives.
    - We currently have 3 fact tables with 3+ conformed dimension tables and a few unique dimension tables.
    Could anyone share a similar situation where you have implemented a data model structure with the proper repository joins to handle showing side-by-side objectives/targets where the objectives were static and could be displayed at differing levels with flexible dimensions as described?
    Any help would be greatly appreciated.

Hi, yes this suggestion is nice. First configure the sensors (activity or variable), then configure the sensor action as a JMS Topic, which will in turn insert the data into a DB. Or, when you configure the sensor action as a DB, the data goes to the Oracle Reports schema. If there is any chance of altering the DB - I mean, any chance, by changing config files, that the data doesn't go to that Reports schema and instead goes to a custom schema created by any user - I don't know if it can be done. My problem is that when I'm configuring the JMS Topic for sensor actions, I see blank data coming through; for some reason or other the data is not getting posted. I have used an ESB with a routing service based on the schema which I am monitoring. Can anyone help?

  • Best practice for replicate the IDM (OIM 11.1.1.5.0) environment

    Dear,
    I need to replicate the IDM production to build the IDM test environment. I have the OIM 11.1.1.5.0 in production with lots of custom codes.
    I have two approaches:
    approach1:
Manually deploy the code through JDeveloper and export and import all the artifacts. The issue is that this will require a lot of time and resolving dependencies.
approach2:
Export the OIM, MDS and SOA schemas and import them into the new IDM Test DB environment.
Could you please suggest what the best practice is, and share any pointers you have to achieve the same.
    Appreciate your help.
    Thanks,

    Follow your build document for the same steps you used to build production.
    You should know where all your code is. You can use the deployment manager to export your configurations. Export customized files from MDS. Just follow the process again, and you will have a clean instance not containing production data.
It only takes a lot of time if your client is lacking documentation or if you're not familiar with all the parts of the environment. What's 2-3 hours compared to all the issues you will run into if you copy databases or import/export schemas?
    -Kevin

  • Best practice for the test environment  &  DBA plan Activities    Documents

Dears,
In our company, we did hardware sizing.
We have three environments (Test/Development, Training, Production).
But the test environment has fewer servers than the production environment.
My question is:
How do we follow best practice for the test environment?
(Are there any recommendations from Oracle related to this, any PDF files that would help me ............ )
Also, can I have a detailed document regarding the DBA plan activities?
I appreciate your help and advice
    Thanks
    Edited by: user4520487 on Mar 3, 2009 11:08 PM


  • Best practice for adding and removing eventListeners?

    Hi,
    What is the best practice in regards to CPU usage and performance for dealing with eventListeners when adding and removing movieclips to the stage?
    1. Add the eventListeners when the mc is instantiated and leave them be until exiting the app
    or
    2. Add and remove the eventListeners as the mc is added or removed from the stage (via an addedToStage and removedFromStage listener method)
    I would appreciate any thoughts you could share with me. Thanks!
    JP

    Thanks neh and Applauz78.
As I understand it, the main concern with removing listeners is to conserve memory. However, I've tested memory use and this is not really an issue for my app, so I'm more concerned about whether there will be any effect on CPU (app response) if I'm constantly adding and removing a list of listeners every time a user loads an mc to the stage, as compared to just leaving them active and "ready to go" when needed.
    Is there any way to measure CPU use for an AIR app on iOS?
    (It may help to know my app is small - I'm talking well under 100 active listeners total for all movieclips combined.)

  • Best Practice for disparately sized data

    2 questions in about 20 minutes!
    We have a cache, which holds approx 80K objects, which expired after 24 hours. It's a rolling population, so the number of objects is fairly static. We're over a 64 node cluster, high units set, giving ample space. But.....the data has a wide size range, from a few bytes, to 30Mb, and everywhere in between. This causes some very hot nodes.
    Is there a best practice for handling a wide range of object size in a single cache, or can we do anything on input to spread the load more evenly?
    Or does none of this make any sense at all?
    Cheers
    A

    Angel 1058 wrote:
    2 questions in about 20 minutes!
    We have a cache, which holds approx 80K objects, which expired after 24 hours. It's a rolling population, so the number of objects is fairly static. We're over a 64 node cluster, high units set, giving ample space. But.....the data has a wide size range, from a few bytes, to 30Mb, and everywhere in between. This causes some very hot nodes.
    Is there a best practice for handling a wide range of object size in a single cache, or can we do anything on input to spread the load more evenly?
    Or does none of this make any sense at all?
    Cheers
A
Hi A,
    It depends... if there is a relationship between keys and sizes, e.g. if this or that part of the key means that the size of the value will be big, then you can implement a key partitioning strategy possibly together with keyassociation on the key in a way that it will evenly spread the large entries across the partitions (and have enough partitions).
Unfortunately, you would likely not get a totally even distribution across nodes because of having a fairly small number of entries compared to the square of the number of nodes (btw, which version of Coherence are you using?)...
    Best regards,
    Robert
