Coarse-Grained Object Design for Performance

Hi,
I have several 1:1 mappings between one coarse-grained object and several fine-grained objects. In the DB, each fine-grained object is stored in a separate table. In my application, the relationships between the coarse-grained object and the fine-grained objects will never be modified. In this scenario, which solution gives better performance?
Maintaining the 1:1 mappings between the coarse-grained object and the fine-grained objects, OR adding all the attributes from the fine-grained objects into the coarse-grained object.
Thanks
-Mani

Mani,
The answer depends upon how your application uses the data.
If these fine-grained, read-only objects are shared between your coarse-grained objects, then it may be better to keep them as separate classes with 1:1 relationships. If these read-only classes are also of a fixed and reasonable quantity (reference data), you could pre-fetch them all into a full cache (identity map). Then, as each coarse-grained object is read, the 1:1 relationships can be resolved in memory without additional queries.
If you combine all these objects into one coarse-grained object, then you will pay the cost of a more complex multi-table join on each read. This may be more efficient if the fine-grained objects are never or rarely shared between coarse-grained objects. To be sure, check with your DBA to see which layout they can measure as more efficient for the given schema.
If these are shared and/or reference data, then you may also want to consider marking them as read-only in the project. That way they will not be considered for change tracking in the UnitOfWork, which should provide an additional performance boost.
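For example, in EclipseLink (the successor to TopLink) the read-only and full-cache settings can be declared with annotations; a minimal sketch, assuming EclipseLink's annotation API and a made-up reference-data class:

    // Hypothetical fine-grained reference-data class (a sketch, not the poster's model).
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import org.eclipse.persistence.annotations.Cache;
    import org.eclipse.persistence.annotations.CacheType;
    import org.eclipse.persistence.annotations.ReadOnly;

    @Entity
    @ReadOnly                     // excluded from UnitOfWork change tracking
    @Cache(type = CacheType.FULL) // full identity map: 1:1 targets resolve in memory
    public class Currency {
        @Id private long id;
        private String isoCode;
    }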
Doug

Similar Messages

  • What is a good design for remote Views?

    Hi All,
    I am thinking about how to design my process, with performance in mind, for retrieving dynamic values of table/view data.
    The requirement is like this:
    1. We have 50 databases, each residing on its own server (50 servers).
    2. Each database has a table Patch_Level (apps_name, patch_level), which contains just one row, reflecting the latest patch level applied for the apps on that database. Note that we are constantly applying service packs for these apps.
    3. On our central monitoring server (db), I create 50 database links, one for each of the 50 databases.
    4. I created 50 views over these links to centralize the 50 Patch_Level tables: patch_level_view1, 2, 3 ... patch_level_view50.
    5. I then create a central view as a union of the 50 individual views. (Actually, I am just planning to do the above.)
    My question is: is this a good design for performance? Can you share a better approach?
    Is there a limitation on a "union" of 50 views?
    Thanks a lot,

    "Is there a limitation of joining 'union' of 50 views?" What can happen is that if the connection to one of these servers is interrupted, the big "union" view will not work. As an earlier poster said, an MV (materialized view) with, let's say, an hourly refresh helps with this situation, because you still have the data gathered last time (and most likely it is still valid).
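    A sketch of that approach, assuming a link and table named as in the post (names are placeholders):

        -- One MV per remote database, refreshed hourly:
        create materialized view patch_level_mv1
          refresh complete start with sysdate next sysdate + 1/24
          as select apps_name, patch_level from patch_level@dblink1;

        -- A central view over the local MVs instead of over the links:
        create or replace view patch_level_all as
          select 'DB01' db_name, apps_name, patch_level from patch_level_mv1
          union all
          select 'DB02', apps_name, patch_level from patch_level_mv2;
          -- ... and so on for the remaining databases

    This way a down link only makes one MV stale instead of breaking the whole union view.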

  • OBIEE Best Practice Data Model/Repository Design for Objectives/Targets

    Hello World!
    We are faced with a design question that has become somewhat difficult, and we need some help. We want to be able to compare actual measures side by side with their corresponding objectives/targets. Sounds simple. But our objectives are static (they cannot be aggregated), multi-dimensional, and multi-level. We need some best-practice tips on how to design our data model and repository properly, so that we can see the objective/target for a measure regardless of the dimensions used in the criteria and regardless of the level.
    Here are some more details:
    Example of the existing objective table:
    Dimension1  Dimension2  Dimension3  Obj1  Obj2  Quarter
    NULL        NULL        NULL        .99   1.8   1Q13
    DIM1VAL1    NULL        NULL        .99   2.4   1Q13
    DIM1VAL1    DIM2VAL1    NULL        .98   2.41  1Q13
    DIM1VAL1    DIM2VAL1    DIM3VAL1    .97   2.3   1Q13
    DIM1VAL1    NULL        DIM3VAL1    .96   1.9   1Q13
    NULL        DIM2VAL1    NULL        .97   2.2   1Q13
    NULL        DIM2VAL1    DIM3VAL1    .95   2.0   1Q13
    NULL        NULL        DIM3VAL1    .94   3.1   1Q13
    - Right now we have quarterly objectives set using 3 different dimensions. So, if an author were to add one or more (or zero) dimensions to their criteria for a given measure, they could get back a different objective. They could add Dimension1 and get 99%. They could add Dimension1 and Dimension2 and get 98%. They could add all three dimensions and get 97%. They could add zero dimensions (highest grain) and get 99%. With our existing structure, if we were to add a new dimension to the mix, the possible combinations would grow dramatically. (Not flexible.)
    - We would like our final solution to be flexible enough that we could view objectives along altogether different dimensions and possibly get different objectives.
    - We currently have 3 fact tables with 3+ conformed dimension tables and a few unique dimension tables.
    Could anyone share a similar situation where you implemented a data model and the proper repository joins to show side-by-side objectives/targets, where the objectives were static and could be displayed at differing levels with flexible dimensions, as described?
    Any help would be greatly appreciated.

    Hi. Yes, this suggestion is nice. First configure the sensors (activity or variable), then configure the sensor action as a JMS topic, which will in turn insert the data into a DB. Or, when you configure the sensor action as a DB, the data goes to the Oracle Reports schema. Is there any chance of altering the DB, I mean, any chance, by changing config files, that the data does not go to that Reports schema and instead goes to a custom schema created by a user? I don't know if it can be done. My problem is that when I configure the JMS topic for sensor actions, I see blank data coming; for some reason or other the data is not getting posted. I have used an ESB with a routing service based on the schema I am monitoring. Can anyone help?

  • Using Designer to perform reverse engineering for Adabas entities

    Hi Experts,
    A customer will migrate from Adabas to Oracle DB. Is it possible to use Designer to perform reverse engineering on Adabas entities?
    Thanks for your help in advance.
    Queenie

    If there is an ODBC driver for Adabas, it MAY be possible, though I have never had an Adabas database to try it on. I know that Adabas isn't natively a relational database using SQL, which is what Designer's Design Capture utility expects, so it will work through ODBC or not at all.

  • Any suggestion on tools for performance testing of applications designed for BlackBerry phones (CPU usage, memory, response time)?

    Hi Team,
    Any suggestion on tools for performance testing of applications designed for BlackBerry phones, something that can capture details on CPU usage, memory, and response time? Any help on the same is much appreciated.
    Thank You,
    Best Regards,
    neeraj


  • Design table for performance (order - ordervisibility)

    I have tables like order, orderitem, etc. and requirements for order visibility:
    - One order can be seen by many users
    - One user can see many orders
    create table order (orderid number, ...);
    create table ordervisibility (orderid number, username varchar2(20));
    The order table is estimated at >= 500 million rows. The application is a data warehouse with reporting.
    The relationship order - user will be skewed: some users will be allowed to see many orders while some will only see a small percentage.
    How should we design the system for performance?
    We are thinking about creating the ordervisibility table as an index-organized table. Or are there better approaches, like partitioning? Or is the approach with the order and ordervisibility tables already suboptimal?

    <<<What is your 4-digit Oracle version (result of SELECT * FROM V$VERSION)?
    The system does not exist yet. The version will be derived from the requirements, or will most likely be the latest version (e.g. 11.2.0.3 EE with the partitioning option).
    <<<Are you wanting to prevent users from seeing other orders? You said one user can see many orders, but are they NOT supposed to see other orders?
    Yes, you are right. We want to prevent users from seeing orders that do not belong to them.
    <<<What determines which orders a user is allowed to see? A state code, a region code?
    The user in the OLTP system decides who can see an order (the user creating the order and, possibly, other users he selects).
    <<<Does this database, data and system already exist, or is this a new design? Are there any other security mechanisms already in place? Is any of your data currently encrypted?
    The system does not exist yet. No other security mechanisms are planned (e.g. encryption).
    VPD is being considered to implement the visibility (or, alternatively, views).
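    A sketch of the index-organized approach, using the table names from the post:

        -- Keyed on (username, orderid) so "which orders can this user see"
        -- is a single index range scan:
        create table ordervisibility (
          username varchar2(20),
          orderid  number,
          constraint ordervisibility_pk primary key (username, orderid)
        ) organization index;

    A VPD policy function on the order table, registered via dbms_rls.add_policy, would then return a predicate such as:
    orderid in (select orderid from ordervisibility where username = sys_context('userenv','session_user'))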

  • Performance and configuration of Adobe LiveCycle Designer for SAP

    Hi there,
    I have a question about LiveCycle Designer for SAP. If the SAP system passes a very large amount of data to LiveCycle Designer (e.g. data for over 1,000 pages, about 40,000 lines), then the following exception occurs: com.adobe.ProcessingException: com.adobe.ads.exception.TransientException: A problem was encountered with the results: render Result array is null.; [Error Log file "2014.11.14.145246SAFPFILE1.pdf" written to F:\usr\sap\R44\SYS\global\Adobe Document
    For smaller amounts of data from SAP, it works without problems.
    Does anyone know the relevant settings in SAP or in Adobe LiveCycle Designer?
    Is there such a thing as a limit on the number of rows? Or is there a limit on the size of the data transferred from SAP?
    What would be a possible solution?

    Did you raise this with SAP via an OSS message?
    Chintan

  • Need to update Object permissions for a form in the form designer

    Hi Experts,
    I created a new form and attached it to a process definition.
    Later I created a new version and changed some properties of the attributes.
    Somehow I missed the object permissions for the system administrator: not even a single permission is checked out of "Allow Insert", "Allow Update" and "Allow Delete". I tried to assign the permissions by creating a new version; when I try to save, it says "YOU DO NOT HAVE ENOUGH PERMISSIONS TO ADD THIS OBJECT". I also tried to assign the permission for the form to the SYSTEM ADMINISTRATORS group from the web console, but there I got the same error. Can anyone please help me change the object permissions if you have come across this problem?

    If I import the older version of the form, the permissions come back.
    The second, alternative solution is:
    Export the form, and in the XML edit the following values.
    Just change "0" to "1" for insert, update and delete:
    <DataObjPermission>
    <SEL_DELETE_ALLOW>1</SEL_DELETE_ALLOW>
    <SEL_UPDATE_ALLOW>1</SEL_UPDATE_ALLOW>
    <SEL_UPDATE>1289570464000</SEL_UPDATE>
    <SEL_INSERT_ALLOW>1</SEL_INSERT_ALLOW>
    <UGP_KEY UserGroup = "SYSTEM ADMINISTRATORS"/>
    </DataObjPermission>

  • Error ACLContainer: 65315 does NOT EXIST in  the Object Cache for parentID:

    Expert,
    I did the following steps to upgrade the OWB repository from 10g to 11g
    • Created a dummy workspace in the 11.2 repository
    • Created users in the destination environment
    • Run Repository Assistant against the 11.1 source database
    • Then selected *“Export entire repository to a file”* and selected the MDL(10g) to import
    After 99% of completing I have got the below error
    Error ACLContainer: 65315 does NOT EXIST in  the Object Cache for parentID: 65314
    Please let me know the solution.
    Thanks,
    Balaa...

    Hi, I had the same error and worked on it for almost a week with no success, but there are a few workarounds; try them, they might work for you.
    Step 1> Run a health check on the OWB 10.2 environment (make sure you clone your OWB 10.2 environment first and then do the health check; these checks are tricky).
    Refer to:
    Note 559542.1 Health Check of the Oracle Warehouse Builder 10.2 Metadata Repository
    This will give you info about your missing ACL.
    Step 2> Download these two scripts: fixLostACLContainer.sql, fixAllACLContainers.sql.
    Please refer to:
    Note 559542.1 Health Check of the Oracle Warehouse Builder 10.2 Metadata Repository
    OWB 10.2 - Internal ERROR: Can not find the ACL container for object (Doc ID 460411.1)
    Note 754763.1 Repository Cleanup Script for OWB 10.2 and OWB 11.1
    Note 460411.1 Opening Map Returns Cannot find ACL Containter for Object
    Note 1165068.1 Internal Error: Can Not Find The ACL Containter For for object:CMPPhysical Object
    This might resolve the ACL issue, but it did not work for me.
    If none of these work, then:
    Perform an export from the Design Center of OWB 10.2 and import through the Design Center of OWB 11.2. (ONLY OPTION)
    It worked for me.
    Varun

  • Steps for performing Flat file to XML

    Hey,
    does anyone have steps for performing flat file (.csv) to XML conversion? How is the mapping in the design performed?
    kalyan.

    Kalyan,
    Consider my example: I have an input file in CSV structure and want to convert it into an .xml file, that's it.
    Input file
    J,24
    P,22
    I want the output file like
    <Emp_Details>
      <F1>J</F1>
      <F2>24</F2>
    </Emp_Details>
    <Emp_Details>
      <F1>P</F1>
      <F2>22</F2>
    </Emp_Details>
    I don't know whether the above matches your requirement exactly, but here is one option.
    Step 1: Create scenario & business service
    http://www.flickr.com/photo_zoom.gne?id=699386732&size=o
    Step 2: Create the sender & receiver communication channels (see the content-conversion sketch after these steps)
    http://www.flickr.com/photo_zoom.gne?id=699386698&size=o
    http://www.flickr.com/photo_zoom.gne?id=699386664&size=o
    Step 3: Create all the objects in ID and cross-verify below whether you've created everything.
    http://www.flickr.com/photo_zoom.gne?id=699386690&size=o
    Step 4: Activate and run the interface.
    Your results :
    http://www.flickr.com/photo_zoom.gne?id=699386686&size=o
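    For reference, the sender channel's File Content Conversion for the two-field structure above might be configured roughly like this (a sketch from memory: the recordset name follows my example, and the keys are the adapter's standard FCC parameters):

        Recordset Structure:        Emp_Details,*
        Emp_Details.fieldNames      F1,F2
        Emp_Details.fieldSeparator  ,
        Emp_Details.endSeparator    'nl'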
    Hope it helps!!!
    Best regards,
    raj.

  • Datafiles in swapping mode - for performance

    Hi there,
    One of the senior DBAs told me that it is better to keep the datafiles in "swapping" mode, which means:
    Suppose we need to create 4 tablespaces, 2 for data and 2 for indexes, and we have two drives, E and F. In this case, he said, performance will be increased if we lay them out as follows:
    E drive
    Datafile_Tablespace_A (datafile TS no. 1)
    Index_Tablespace_D (index TS for datafile no.2)
    F drive
    Index_Tablespace_B (index TS for datafile no.1)
    Datafile_Tablespace_C (datafile TS no. 2)
    According to him, Oracle works better in swapping mode. Is that true? I was under the impression that index and data tablespaces should be built on separate drives.
    Even though my question is general, for reference: the OS we are using is Windows 2003 Server, the partition is RAID-5, and the Oracle version is 10.2.0.1.
    If anybody can clarify, I would be obliged.
    Thanks

    I'm going to default to one of Billy's responses:
    {message:id=4060608}
    >
    Irrelevant, as that does not change any of the storage fundamentals in Oracle. The database does not know or care what you use as a storage system. Why should it? It is the job of the kernel and the disk/file-system drivers to deal with the actual storage hardware. From a database perspective, it wants the ability to read() and write(); in other words, to use the standard I/O interface provided by the kernel.
    I/O performance must not be a factor. If it is, then your storage layer is incorrectly designed and implemented. Striping (RAID 0), for example, must be dealt with at the storage layer and not at the application layer. Tablespaces and datafiles in Oracle make extremely poor tools for implementing striping of any sort. It does not make sense to attempt I/O balancing such as striping at the tablespace and datafile level in Oracle.
    So why, then, use separate tablespaces? You may need different tablespaces to implement different block sizes for performance, but this is an exception to the rule. And you do not address actual storage performance here, but rather how Oracle should manage the smallest unit of data in the tablespace.
    So besides this exception, what other reasons? It could be that you want to physically separate one logical database (Oracle schema) from another. It could be that you want to implement transportable tablespaces.
    All these requirements are quite explicit in that more than one tablespace is needed. If there is no such requirement, why then consider using multiple tablespaces? It only increases the complexity of space management.
    Consider using different tablespaces for indexes and table data. In a year's time, you may find that the index tablespace has been oversized and the data tablespace undersized. You now have too much space on the one hand, too little on the other, and no easy way to "move" the freespace to where it is needed.
    It is far easier to deal with a single tablespace, as it allows far more flexibility in how you use it for data and index objects, than attempting some kind of split.
    So I will look for a sound and unambiguous technical requirement that very clearly says "multiple tablespaces needed". If not, I will not beat myself over the head trying to find reasons for implementing multiple tablespaces.
    >
    There are also many other threads on this forum about separating data and indexes; try searching for them.

  • Tablespace design for joined tables

    Hello everyone,
    Right now I'm designing the database structure for an application requiring high performance on the database side.
    While thinking about the tablespace structure I came to the following question:
    When I am sure that I will have a join on two (or more) tables very often, is it better to put them in one tablespace, or is it better to put them into different tablespaces?
    Thanks for your help!
    Daniel

    The tablespace that any particular object is in is irrelevant from a performance standpoint (assuming the tablespace parameters are identical, of course; an object in a dictionary-managed tablespace may well have different performance characteristics than an object in a locally managed tablespace using ASSM). Now, if different tablespaces have data files on different physical devices, you may see some performance gains from putting different objects into different tablespaces, simply because this would tend to distribute I/O over more physical devices. Of course, you could accomplish the same thing by creating data files on different devices in the same tablespace. And with newer disk subsystems, this sort of manual load balancing tends to be rather pointless.
    If you have locally attached storage and you want to distribute I/O across devices by putting objects in different tablespaces rather than assigning multiple data files to the same tablespace, it generally isn't going to matter much which objects are in which tablespace, so long as the I/O is relatively evenly distributed (and assuming there aren't any I/O patterns that need to be accounted for; e.g. if object A is used heavily during the day, object B is used heavily at night, and those are the only two objects in the database, putting them in different tablespaces is pointless because you'd never be using the extra spindles simultaneously). You'd have just as much luck putting objects that start with the letters A-L in one tablespace and M-Z in another as putting tables and indexes in different tablespaces, or putting related objects in the same tablespace, or putting related objects in different tablespaces. So long as I/O gets distributed, it doesn't matter which objects are where.
    Personally, I'd never have separate tablespaces for performance. Much easier to have a single tablespace with multiple data files on different devices. Multiple tablespaces should exist for managability reasons, recoverability reasons, and because different objects require different tablespace settings.
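    For example, a single tablespace spread over several devices (a sketch; paths and sizes are made up):

        create tablespace app_data
          datafile '/u01/oradata/db/app_data01.dbf' size 10g,
                   '/u02/oradata/db/app_data02.dbf' size 10g;

        -- Add spindles later without moving any objects:
        alter tablespace app_data
          add datafile '/u03/oradata/db/app_data03.dbf' size 10g;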
    Justin

  • What do you think of this Design for Multiple Threads

    Hi Java Experts,
    I'm curious to know what you think about this design for a multithreaded app I'm working on.
    I have a controller thread that (a) creates threads and (b) keeps a reference to the threads it creates (by sticking them in a hashtable along with the generated id for each thread).
    When a created thread completes its job, it decrements the thread counter and removes its reference from the hashtable before it finishes its run method.
    Now here's the interesting part: in my controller thread I'm creating new threads on the fly with this infinite loop:
     public void run() {
          while (true) {
               try {
                    Thread.sleep(500);
               } catch (InterruptedException e) {
                    // ignore; just poll again
               }
               if (threadCounter < maxThreadCount && moreJobsToDo.size() > 0) {
                    createDomainThread();
               }
          } // end while
     }
    What do you think about this pattern? My goal is to maintain 200+ created threads running at all times with it.

    Jeff Kesselman's book says this about threads:
    "5.2.3 Threads
    The impact that threads have on RAM footprint isn't a problem for most programs, but running threads do need space to store their stack state, and the system-specific data structures do consume memory.
    Because runtime implementations vary widely in how threads are handled, you might encounter situations where the impact threads have on footprint is significant. For example, some ports of the JRE create a heavyweight OS process for each running thread. In an application that uses many threads, this means that thread costs, rather than class or object costs, can become the dominant factor in the program's memory consumption.
    You shouldn't avoid using threads; they're necessary in many cases, and generally don't have a large impact on footprint. However, you should be aware that the impact can be very different across runtimes. This is one of the reasons it's a good idea to measure performance characteristics under your program's different target environments."
    Question #1: How does garbage collection go about clearing the stack state and the system-specific data structures that consume memory?
    Question #2: In my situation, each worker thread's activity is not as brief as a simple server request. Each thread does a lot of work and makes a lot of network connections: on average I would say 100 HTTP requests, each of which can easily be blocked or delayed significantly. Therefore each worker thread may run for between 2 and 10 minutes.
    Do you think thread pooling is useful in this situation?
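    (For concreteness, by pooling I mean something along these lines; a sketch using java.util.concurrent, with made-up names:)

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class Controller {
            // 200 long-lived workers; jobs queue up when all are busy.
            private final ExecutorService pool = Executors.newFixedThreadPool(200);

            public void submit(Runnable job) {
                pool.execute(job); // runs when one of the 200 workers frees up
            }

            public void shutdown() {
                pool.shutdown();   // stop accepting work, let queued jobs finish
            }
        }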
    stev

  • What does "coarse-grained" and "fine-grained" mean?

    I've seen these terms used in almost every pattern article, but I've never seen them defined. Can someone provide definitions for coarse- and fine-grained? Also, what are the pros and cons of each?

    It's an analogy to sand or other particles. You can have sand that is very fine (small grains) or sand that is very coarse (large grains).
    A fine-grained approach is one that uses lots of little objects, while a coarse-grained approach uses a smaller number of larger objects. The reasoning for using a coarse-grained approach is that it limits the 'chattiness' of the application: you don't have lots of little messages going back and forth, each one with its own overhead.
    In practice, I've found the coarse-grained approach to be not that great. Chattiness is not that big a deal anymore, and big objects take longer to load. Your application basically does squat while it waits for the big ball of wax, and then it has to disassemble it. If you use lots of small objects, the app can start working on them as soon as it starts receiving them.
    This isn't to say the coarse-grained approach doesn't have its place; it's just not for every application.
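    A tiny sketch of the trade-off (hypothetical remote interfaces, not from any particular library):

        // Fine-grained: three round trips, each with its own overhead ("chatty").
        interface FineGrainedAccount {
            String getOwner();
            String getCurrency();
            long   getBalance();
        }

        // Coarse-grained: one round trip returning a bigger snapshot object.
        interface CoarseGrainedAccount {
            AccountSnapshot getSnapshot();
        }

        class AccountSnapshot implements java.io.Serializable {
            String owner;
            String currency;
            long   balance;
        }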

  • Coarse grained and fine grained

    I want to know the difference between coarse-grained and fine-grained entity beans. Please explain with some example.

    I think the names are self-explanatory. However, since you do not seem to be very clear on them, I will try to explain. Suppose you have a Person entity bean. There are essentially two ways to model the bean.
    Essentially you need to identify the sub-parts of this Person bean. It would have a first name, last name, address, phone number, SSN and so on. If you were to model this as a fine-grained bean, you would have all of these as parameters and would pass the required ones to the corresponding create method. However, the address portion can comprise a state, city, street, house number and PIN. If you were to make the address a separate entity that presents a local interface and uses a value object for creation and return, that design would make the Person a coarse-grained bean. A rough sketch of the two shapes follows below. Hope it makes some sense; if you require any other clarification I would oblige.
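    (A sketch of the two shapes just described; the names are made up:)

        // Fine-grained: every attribute sits directly on the Person bean
        // and is passed as a parameter to its create method.
        class PersonFine {
            String firstName, lastName, ssn, phone;
            String state, city, street, houseNumber, pin; // address flattened in
        }

        // Coarse-grained: the address is carved out behind a local interface
        // and exchanged as a value object.
        class AddressVO implements java.io.Serializable {
            String state, city, street, houseNumber, pin;
        }

        class PersonCoarse {
            String firstName, lastName, ssn, phone;
            AddressVO address; // created/returned via the Address entity's local interface
        }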
    cheers
    sicilian
