My design in practice

Hello everyone,
There is an interesting scenario I have to face:
I have almost 40 tables, created for physical reasons; here I have simplified them to 3 tables, A, B and C. Each table has some independent columns, but others depend on columns in other tables, or on a result computed from several columns in other tables. For example, A.column1 is independent, but A.column2 is the value of B.column10, and B.column10 is the sum of C.column25 and C.column26. A user may want to view any one of the tables at any time, and I must make sure the values shown are up to date and accurate.
This scenario means I must put the business logic (for example, calculating the sum of C.column25 and C.column26, or querying entity bean C) either into the entity bean or into its session facade (or business delegate). I have worked out several solutions, but I don't feel confident about them. They are:
1st:
Put logic into the entity bean to query other entity beans and then calculate the values of the columns in the current entity. The logic must, on every request to view entity A, calculate through the A-B-C chain all the way down.
I don't think this is good, because each entity gets entangled in other entities' access.
2nd:
Make the entities dumb data holders and add a session bean for each entity that contains its business logic. Session bean A will access session bean B to get the values in B. The business logic design is the same as above.
How about this one?
3rd:
Make the entities dumb and use the CMR (container-managed relationship) feature of EJB 2.0. There is only one session bean, containing all of the business logic. This session bean is responsible for all queries and calculations on all entities (imagine: in my project there are 40 entities to be handled by this one session bean, so performance shouldn't be good). The advantage of this solution is centralized business logic, and the container can help complete some queries using the relationship feature.
Which one is better? Or is there a better one I haven't thought of? Any comments and suggestions are very welcome!
Thanks!

Overall, storing computed fields is frequently not a good idea, and not only from a storage point of view: failure to update them correctly may result in situations like C.c25 = 10 and C.c26 = 12 but B.c10 = 43.
So, first, drop A.c2 and B.c10 from your schema.
On the EJB 2.0 front, go for local interfaces and employ CMR to associate the entity beans. Compute the fields on the fly (in the EB) only when necessary.
Do the same for the database with views. This will have some impact on the database server, though. Better: try to determine the requirements of the users who will want to see the data, and supply reports for the common requirements to ease the load somewhat. Views may not be necessary for the remaining users, as users who can query a database directly can probably do joins as well...
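To make the on-the-fly computation concrete, here is a minimal plain-Java sketch (class and column names are invented for illustration; in a real EJB 2.0 entity bean the reference to C would come from a CMR accessor rather than a constructor):

```java
// Sketch only: B.column10 is never stored; it is derived from C on access,
// so it can never drift out of sync the way a stored copy can.
class EntityC {
    final int column25;
    final int column26;
    EntityC(int c25, int c26) { column25 = c25; column26 = c26; }
}

class EntityB {
    private final EntityC relatedC; // in EJB 2.0, this would be a CMR accessor
    EntityB(EntityC c) { relatedC = c; }

    // Computed on the fly, only when a caller actually asks for it.
    int getColumn10() { return relatedC.column25 + relatedC.column26; }
}
```

Whether the computation lives in the EB accessor or in a database view, the point is the same: there is a single source of truth for the value.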

Similar Messages

  • Multiple Transit to ISP design guide

    Hi,
I will be implementing transit to multiple international ISPs soon and am now looking for design best practices for BGP routing control.
    Hope someone have the url or good documentation on this matter.
    Thx in advance.

    Hi,
    Please find attached some presentations I have about the topic. Clearly not enough but maybe a good starting point.
    Drop me a mail, I've got some more docs I can share with you.
    Cheers,
    Mihai

  • Ways to implement factory pattern

    Hi, I have the following two implementations of a factory.
    Method 1:

        interface Product {
            void method();
        }

        interface Factory {
            Product createProduct();
        }

        class ConcreteProduct1 implements Product {
            public void method() {
                System.out.println("Concrete Product 1");
            }
        }

        class ConcreteFactory1 implements Factory {
            public Product createProduct() {
                return new ConcreteProduct1();
            }
        }

    (similarly for ConcreteFactory2 and 3). In the main class we create:

        Factory f = new ConcreteFactory1();
        Product p = f.createProduct();

    Method 2:
    Product creation is as above, and the factory becomes:

        public class Factory {
            public Product createProduct(int type) {
                if (type == 1)
                    return new ConcreteProduct1();
                else if (type == 2)
                    return new ConcreteProduct2();
                return null;
            }
        }

    The main class becomes:

        Factory f = new Factory();
        Product p = f.createProduct(1);

    Which implementation is better, and why?
    Thanks.

    You should absolutely use method 1, because it is more object oriented and you don't have to keep track of a type key (if you DO need one, it should be implemented as an enum and not as an, ughh, int!). Also, you can extend the first solution with more products and concrete factories without access to the source, as you just need to implement the interfaces; you don't need to edit a factory class that uses type keys to decide which product type to instantiate.
    Well, this is at least my opinion, built on what I have learnt so far from reading the two good books Effective Java Programming Language Guide by Joshua Bloch and Refactoring by Martin Fowler. If you want to learn more about good use of Java and good object-oriented design in practice, these are the books you should read.
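If a type key really is required, the enum-based variant hinted at above could look something like this (a sketch, not from the original post; the names are illustrative):

```java
// Products as in method 1, with a label so behavior is observable.
interface Product {
    String label();
}

class ConcreteProduct1 implements Product {
    public String label() { return "Concrete Product 1"; }
}

class ConcreteProduct2 implements Product {
    public String label() { return "Concrete Product 2"; }
}

// The enum replaces the raw int key: callers cannot pass an invalid key,
// and each constant carries its own creation logic instead of an if/else chain.
enum ProductType {
    ONE { public Product create() { return new ConcreteProduct1(); } },
    TWO { public Product create() { return new ConcreteProduct2(); } };

    public abstract Product create();
}
```

Usage is then `Product p = ProductType.ONE.create();` - adding a product means adding an enum constant, not editing a conditional in the factory.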

  • Best practise BW Query design for Crystal Reports integration

    Hi all,
    I am looking for a guide on best practices when designing a BW Query to be used as data foundation for a Crystal Report.
    The scenario is that I am responsible for developing the Crystal Reports part, but not the BW Query part. Therefore I would like to provide a list of best practices to the person who is responsible for the query, to make sure that the integration will work as well as possible. The setup is of course using the BO Integration Kit for SAP.
    An example is how to use authorization variables in the query to provide data security. This is just one example; there are probably a number of other things to be aware of. A document containing suggestions for best practices is what I am looking for, or, if such a document does not exist, input on what should be on such a list.
    Thank you in advance.
    Regards,
    Rasmus

    Hi Rasmus,
    in regards to best practices for Crystal Reports, you can leverage all the knowledge you have on query design today. If you are not the person designing the query, I think it is important to make sure the people designing the queries understand how Crystal Reports leverages the elements of the BI query.
    /people/ingo.hilgefort/blog/2008/02/19/businessobjects-and-sap-part-2
    You should try to put as much of the logic as possible into the BI query.
    And you can also build common BI queries - there is no need to build a BI query for each report.
    ingo

  • Best practise - Domain model design

    Hello forum,
    we're writing an application divided into three sub projects where one of the sub projects will be realized using J2EE and the other two sub projects are stand alone fat client applications realized using Swing. So that's the background...
    And now the questions:
    After doing some research on J2EE best practice topics I found the Transfer Object pattern (http://java.sun.com/blueprints/corej2eepatterns/Patterns/TransferObject.html), which we certainly want to apply to the J2EE sub project and also to one of the standalone client applications. To avoid code duplication I like the "Entity Inherits Transfer Object Strategy" approach outlined in the document referenced above. But why does the entity bean inherit from the transfer object class and not vice versa? In my opinion the transfer object adds additional functionality (the coarse-grained getData() method) to the class, and isn't it a design goal in OO languages that the class that extends a base class has more functionality than the base class?
    For the standalone application we want to use a similar approach, and the first idea is to design the entities and let the TO classes extend these entities.
    If I get it right, the basic idea behind all of these design schemes is the Proxy pattern, but when I design it the previously mentioned way (Entity <-- EntityTO) I will have a very mighty proxy, able to execute all operations the base class can execute.
    Any tips and comments welcome!
    Thanks in advance!
    Henning

    Hello Kaj,
    at first - thanks for your fast response and sorry for coming back to this topic so late.
    After reading a bit more on patterns in general: what about avoiding inheritance completely and using the proxy pattern instead (as explained e.g. here: http://www.javaworld.com/javaworld/jw-02-2002/jw-0222-designpatterns.html) - moving the design to a "has a" relationship rather than an "is a" relationship?
    In the previous post you said that the client shouldn't be aware that there are entity beans, and that therefore the mentioned implementation was chosen. But if I implement it vice versa (Entity is the base class, TO extends Entity) and do not expose any of the methods of the entity bean, I would achieve the same effect, or not? Clients are only able to work with the TOs.
    I have some headaches implementing it in Sun's recommended way because of the serialization support necessary within the TOs. Implemented Sun's way, the entity bean would also have serialization support, which isn't necessary because the entities are persisted using Hibernate.
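For what it's worth, here is a minimal sketch of the "has a" alternative described above (all names are invented for illustration): only the TO implements Serializable; the entity stays free of it because Hibernate handles its persistence.

```java
import java.io.Serializable;

// The entity is persisted by Hibernate and is never sent over the wire,
// so it deliberately does NOT implement Serializable.
class CustomerEntity {
    private final String name;
    CustomerEntity(String name) { this.name = name; }
    String getName() { return name; }
}

// The transfer object copies the state it needs instead of inheriting it,
// so clients never see the entity's interface at all.
class CustomerTO implements Serializable {
    private final String name;
    CustomerTO(CustomerEntity entity) { this.name = entity.getName(); }
    public String getName() { return name; }
}
```

Clients receive only CustomerTO instances, which achieves the "client unaware of entity beans" goal without any inheritance between the two classes.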
    Thanks in advance
    Henning

  • VoWireless design and best practices

    We have 34 APs (1130, 1240), two 4402 WLCs and 60 7921 phones on firmware 1.3.3.
    1. Is it better to use WLC 5.2.193.0 or to upgrade to 6.0.196.0?
    2. Is it possible to use WPA2 + AES + CCKM?
    3. Which security setting is best for the 7921 and fast roaming?

    1. Is it better to use WLC 5.2.193.0 or to upgrade to 6.0.196.0?
    I never recommend using 5.x. It's full of bugs.
    2. Is it possible to use WPA2 + AES + CCKM?
    It is. Upgrade the firmware of your 7921/7925 to 1.3.4.
    3. Which security setting is best for the 7921 and fast roaming?
    Follow the deployment guide.
    Cisco Unified IP Phone 7921 Implementation for Voice over WLAN
    http://www.cisco.com/en/US/customer/docs/solutions/Enterprise/Mobility/vowlan/41dg/vowlan_ch10.html
    Don't forget to rate our posts.  Thanks.

  • How to design inside OWB

    My question is about design inside OWB. We are implementing OWB for the first time, along with Cognos. We have around 20 tables (the data source) which are used in Cognos for reporting. These 20 target tables are built in OWB from sources on the mainframe.
    We bring in all required tables from the mainframe, do some transformations, and load into these target tables.
    Now my question is how these tables should be placed in OWB:
    should we put all tables in the same module under the project,
    or should each have a different module?
    What is the general practice? Do we have only one location, or different locations for each set of files?
    We have a separate server where we created the mappings in OWB. How do we take all these mappings to the production server? Do we have to export them? How do we change the locations?
    Is there any documentation for best practices? Any help will be greatly appreciated.
    thanks

    Hi,
    Tables
    The general practice would be to have one module per design area. For example, in general DWH terminology, one might end up with 1 module for Staging and 1 for Data Marts. As you are talking about 20 tables, I think you can go with one module for all the tables, rather than creating 1 for each table.
    Files
    If all of your files are located under the same directory structure (either in Unix or Windows) you need only one location. If the files are in different directories then you should have multiple locations. The usual practice is to break up the files according to business area. That way you will not end up with too many locations, and you can also segregate the files.
    Dev to Prod
    For deploying OWB mappings to Test or Prod, you need to create an MDL file using the OWB Export Utility and then import it into the relevant area. The locations need to be re-registered using the new server credentials. The way we have achieved this is to create collections for non-mapping objects (i.e. source modules / file definitions / locations), export the collections, import them, and re-register the locations. For the mappings we created an MDL file, exported it and imported it into the relevant areas.
    HTH
    Mahesh

  • Hi i am trying to make a cut path from a jpeg i designed in photoshop so that i can cut a sticker on my plotter of the jpeg  can you help

    hi i am trying to make a cut path from a jpeg i designed in photoshop so that i can cut a sticker on my plotter of the jpeg  can you help

    Peter,
    The cleanest way is to lock your image, then draw the cut shape with the Pen Tool, set Fill to None and a suitable Stroke Weight. For rounded paths, you ClickDrag Handles out from carefully chosen Anchor Points all the way round, ending where you started. You may adjust the position of the current Anchor Point while you draw by pressing the Spacebar, which lets you freeze the Handles and move the Anchor Point about; you can Ctrl/Cmd+Z to go back, and you may start over. To practise with the Pen Tool, you may try some (outlined) letters from a font with serifs, or any relevant curved path that resembles the desired cut path for the image.
    Or, if you have a distinct colouring where you want the cut path, you may Object>Image Trace the image with the right settings (including creating (stroked) paths), then expand and delete everything but the cut path.

  • Data Modeling Best Practices

    Hi Friends !
    When designing a system, what are the best practices for data modeling? Please share a few tips. Thanks.
    With Regards
    Rekha

    Hi,
    The link below can be useful:
    BI Data Modeling and Frontend Design
    Also, you can get the best practice (config) guides from service.sap.com.
    Best practices:
    http://help.sap.com/bestpractices
    http://help.sap.com/bp_bblibrary/600/html/
    Regards,
    Satya

  • MPLS Design Best Practices for SP

    When deploying a new MPLS backbone for a Service Provider, what will be consider the best practices in general? For example what about the following list and any other items:
    - Define the Internet as a VRF?
    - Use private ASNs?
    - Define a VRF per special service?
    - Use at least two route reflectors?
    - Use OSPF as IGP?
    - Limit the CE-PE routing support to OSPF and BGP?
    What will be the best approach for management of the devices? A management VRF or the nodes to natively be on a management network?
    What to consider when designing from scratch?

    William, some recommended practices, although you can point out your specific constraints in adopting any.
    - Define the Internet as a VRF?
    (Yes, logical separation is the way to go.)
    - Use private ASNs?
    (No, use a public AS; you may have to peer outside your AS in a VRF with other ASes.)
    - Define a VRF per special service?
    (This is perfect - logical separation.)
    - Use at least two route reflectors?
    (Right, at least 2; above that, it depends on the size of your network.)
    - Use OSPF as IGP?
    (I don't see any problems with OSPF scaling for big networks.)
    - Limit the CE-PE routing support to OSPF and BGP?
    (This aspect shouldn't have much impact really; you can very well support all the protocols, as it's more about serving your customers than dictating conditions.)
    Yes, have a separate VRF for device management (and also give some thought to a management subnet that would be unique across your network).
    You should generally start with an overview topology and an introduction of the objectives, then go ahead with the suggested physical topology.
    Then move on to the logical services, beginning with the core IGP, then core BGP, and then all the add-on protocols: multicast, MPLS TE etc. Then you can cover specialized services and their logic and description at the end.
    Pretty much, just think of building out right from scratch: start at the physical layer, then move to Layer 2, then Layer 3 and Layer 4.
    So basically your doc should be indexed following the sequence of the OSI layers; this gives the doc a good flow. What remains is the description of the logic used in each service or deployment method - that is where your skill comes in.
    HTH - Cheers,
    Swaroop
    HTH-Cheers,
    Swaroop

  • PLANNING ON TOTALS - AUTOMATIC DISAGGREGATION IN QUERY DESIGNER

    I've seen that in the BEx Analyzer, on the planning tab of key figures, there is a functionality called PLANNING ON TOTALS with several possible disaggregation techniques.
    Next to it I have a warning message saying "NOT SUPPORTED BY SERVER".
    If the system were somehow able to distribute values following some basic distribution rules, it could save lots of time on planning function and level customization.
    Does anyone know what is required to activate this functionality on the BW server?
    Regards,
    Alberto
    Points awarded to any useful answer

    Dear IP consultants,
    because this looks like a bigger problem in IP (nothing is documented in OSS or the help), I would like to summarise. Maybe someone can help:
    I have read Mary's answer after we have discussed this here and in other forums.
    Mary's answer was:
    "Totals level planning or top down planning is scheduled for release in a couple of weeks. ...I would guess using attributes for top down planning would not be available yet but maybe Gregor or Marc can shed some light on that..........It is SPS12 and SP13"
    Today morning I received the message from another consultant, he wrote:
    "Attached is the screen shot of query designer..... look on the right side under planning tab I have the planning on totals option enabled. I have hierarchy display on, on one of the characteristic."
    So it works in his system, he uses SAP_BW 700 0013 SAPKW70013 SAP NetWeaver BI 7.0 (we have SAP_BW 700 0014 SAPKW70014, which should be better than in his system)
    There was another reply above"...We have SPS12, but in Query designer, feature "planning on hierarchy nodes" not enabled. I see message "Not Supported by Server". Can you explain this problem? Where i can find any documentation about new features of BI-IP functionality like "disaggregation" and "planning on hierarchy nodes"?
    So the official explanation is: this functionality is available. But practice and the mails above show that it works only in special cases. What kind of special customizing has to be done to activate it? Does anyone have any idea what the reason could be?
    regards
    Eckhard Lewin

  • Database development practice advice

    Hi friends,
    I am a fresher in the development field, coming straight from college, certified in Oracle SQL so far and currently working on the PL/SQL certification. I want to practice quality, industry-grade database development and gain the experience that is required. Please help me with advice on what I should use as sample cases to work on, or any other advice you may have. I have Oracle 11g installed on my machine.
    What is the way forward?
    Regards
    Tdewa

    I think that besides the options mentioned by Srini and Andy, reading the book below at this stage would be very helpful, because it is very tough to unlearn wrong habits once you are in the field.
    http://www.amazon.com/Effective-Oracle-Design-Osborne-ORACLE/dp/0072230657/ref=sr_1_1?ie=UTF8&qid=1328500593&sr=8-1
    HTH
    Aman....

  • Best Practices Document for Creating/Designing Queries/Reports

    Hello all,
    I hope I am not posting my very first question in the wrong forum. I am looking for any documents related to best practices for designing queries/reports using the BEx Query Designer (or any other tool, for that matter). I did some searching, but could not find anything related to what I am looking for.
    Thanks in advance!
    Amit

    Hi Amit:
       The documentation below might help.
    Regards,
    Francisco Milán.
    Chapter 6 of "Performance Tuning for SAP BW"
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/10fb4502-111c-2a10-3080-df0e41d44bb3?QuickLink=index&overridelayout=true&15882789067966
    "30 technical tips and tricks to speed query, report, and dashboard performance" by Dr. Bjarne Berg
    http://www.google.ca/url?sa=t&rct=j&q=30%20technical%20tips%20and%20tricks%20to%20speed%20query%2C%20report%2C%20and%20dashboard%20performance&source=web&cd=2&ved=0CFQQFjAB&url=http%3A%2F%2Fcsc-studentweb.lr.edu%2Fswp%2FBerg%2Farticles%2FNW2010%2FBerg_NW2010_30_tips_for_SAP_BI_performance_v4.pptx&ei=oVIYULT3HIOJ6AHUuICADw&usg=AFQjCNGZnqACv5u3ai81xvKH1Kq-qckavg
    "Improving Query Performance by Effective and Efficient Maintenance of Aggregates" paper by Naween Kumar Yadav.
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/906de68a-0a28-2c10-398b-ef0e11ef2022?QuickLink=index&overridelayout=true&43426414363758
    "Summary of BI/BW 7.0 performance improvements" blog by Jens Gleichmann.
    http://scn.sap.com/people/jens.gleichmann/blog/2010/10/12/summary-of-bibw-70-performance-improvements
    "BEx Front end Performance"
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/40955430-dc1c-2a10-349a-fa181307d7df?QuickLink=index&overridelayout=true&15882789070879
    "Performance Optimization of Long Running Queries Using OLAP Cache" paper by Raghavendra Padmanaban T.S. and Ananda Theerthan J.
    https://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/f048c590-31a4-2c10-8599-bd01fabb93d4&overridelayout=true
    "BW Performance Tuning" paper by Dr. Thomas Becker and Alexander Peter
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/701382b6-d41c-2a10-5f82-e780e546d3b6?QuickLink=index&overridelayout=true&15878494103977
    "Performance Tuning for SAP Business Information Warehouse" paper by Alexander Peter and Uwe Heinz
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c?QuickLink=index&overridelayout=true&5003637415559
    "How to… Performance Tuning with the OLAP Cache"
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/9f4a452b-0301-0010-8ca6-ef25a095834a?QuickLink=index&overridelayout=true&5003637412341
    Queries section of "Performance Tuning Massive SAP BW Systems - Tips & Tricks" paper by Jay Narayanan.
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94?QuickLink=index&overridelayout=true&5003637414215

  • Best practices for installing BPEL

    We are planning to start using BPEL, but we are having problems finding any documents on best practices for installing the product.
    1. Should we install on Linux or Windows?
    2. In production we will need fault tolerance; can BPEL do this well?
    3. Does the BPEL monitoring work well?
    We want to install this right the first time rather than just install from the CDs and then try to fix the design issues later; we have been burnt by doing it that way before.
    Regards
    Sean Bell

    Hi,
    1) Install it on the platform you know best, because
    2) you can only get fault tolerance or high availability on a platform you know how to administer.
    for HA look at
    http://www.oracle.com/technology/products/ias/bpel/pdf/bpel-admin-webinar.pdf
    3) BAM is the word you are searching for:
    http://www.oracle.com/technology/products/integration/bam/index.html

  • Best Practices on Smart Scans

    For Exadata X2-2, is there a best practices document on enabling smart scans for all the application code?

    We cover more in our book, but here are the key points:
    1) Smart scans require a full segment scan to happen (full table scan, fast full index scan or fast full bitmap index scan).
    2) Additionally, smart scans require a direct path read to happen (reads directly to the PGA, bypassing the buffer cache) - this is automatically done for all parallel scans (unless parallel_degree_policy has been changed to AUTO). For serial sessions, the decision to do a serial direct path read depends on the segment size, the _small_table_threshold parameter value (which is derived from the buffer cache size) and how many blocks of the segment are already cached. If you want to force the use of serial direct path reads for your serial sessions, you can set _serial_direct_read = always.
    3) Thanks to the above requirements, smart scans are not used for index range scans, index unique scans and any single row/single block lookups. So if migrating an old DW/reporting application to Exadata, then you probably want to get rid of all the old hints and hacks in there, as you don't care about indexes for DW/reporting that much anymore (in some cases not at all). Note that OLTP databases still absolutely require indexes as usual - smart scans are for large bulk processing ops (reporting, analytics etc, not OLTP style single/a few row lookups).
    Ideal execution plan for taking advantage of smart scans for reporting would be:
    1) accessing only required partitions thanks to partition pruning (partitioning key column choices must come from how the application code will query the data)
    2) full scan the partitions (which allows smart scans to kick in)
    2.1) no index range scans (single block reads!) and ...
    3) joins all the data with hash joins, propagating results up the plan tree to next hash join etc
    3.1) This allows bloom filter predicate pushdown to cell to pre-filter rows fetched from probe row-source in hash join.
    So, simple stuff really - and many of your every-day-optimizer problems just disappear when there's no trouble deciding whether to do a full scan vs a nested loop with some index. Of course this was a broad generalization, your mileage may vary.
    Even though DWs and reporting apps benefit greatly from smart scans and some well-partitioned databases don't need any indexes at all for reporting workloads, the design advice does not change for OLTP at all. It's just RAC with faster single block reads thanks to flash cache. All your OLTP workloads, ERP databases etc still need all their indexes as before Exadata (with the exception of any special indexes which were created for speeding up only some reports, which can take better advantage of smart scans now).
    Note that there are many DW databases out there which are not used just only for brute force reporting and analytics, but also for frequent single row lookups (golden trade warehouses being one example or other reference data). So these would likely still need the indexes to support fast single (a few) row lookups. So it all comes from the nature of your workload, how many rows you're fetching and how frequently you'll be doing it.
    And note that the smart scans only make data access faster, not sorts, joins, PL/SQL functions coded into select column list or where clause or application loops doing single-row processing ... These still work like usual (with exception to the bloom filter pushdown optimizations for hash-join) ... Of course when moving to Exadata from your old E25k you'll see speedup as the Xeons with their large caches are just fast :-)
    Tanel Poder
    Blog - http://blog.tanelpoder.com
    Book - http://apress.com/book/view/9781430233923
