Design Advice

Hi There,
I have a GL table with Total_AMT, Ledger, DeptID, Acct, and Date columns. We do NOT have separate Actual and Budget columns in the fact table. Total_AMT holds both budget and actual numbers, which are identified by the Ledger column; it has two values, Actual and Budget.
How do I design my measures or Scenario dimension to get Actual and Budget?
Thanks

The Ledger column can become the whole Scenario dimension, or just a couple of entries in it. For example, you might also want to add an Actual-Budget Variance or Variance Percent member there, or other scenarios. Either way, you would use the Ledger value to define which scenario the data goes into.
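A minimal sketch of that idea in SQL, assuming the fact table is called GL and the Ledger column holds exactly the two strings 'Actual' and 'Budget' (DimScenario and FactGL are hypothetical names invented for the example; adjust everything to the real schema):

-- Tiny Scenario dimension seeded from the two Ledger values
CREATE TABLE DimScenario (ScenarioKey int PRIMARY KEY, ScenarioName varchar(20));
INSERT INTO DimScenario (ScenarioKey, ScenarioName) VALUES (1, 'Actual'), (2, 'Budget');
GO
-- Fact view keyed on the scenario; the cube's single measure stays Total_AMT
CREATE VIEW FactGL AS
SELECT s.ScenarioKey, g.DeptID, g.Acct, g.[Date], g.Total_AMT
FROM GL AS g
JOIN DimScenario AS s ON s.ScenarioName = g.Ledger;
GO

The variance and variance-percent members would then be calculated in the cube (Actual minus Budget, and so on) rather than stored in the fact table.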

Similar Messages

  • Time-series / temporal database - design advice for DWH/OLAP???

    I am faced with the task of designing a DWH as effectively as possible for time-series data analysis. Are there any special design guidelines or best practices for this, or can the ordinary DWH/OLAP design concepts be used? I ask because I have seen the term 'time series database' in the academic literature (but without further references), and I have also heard the term 'temporal database' (which, as far as I know, is not just a matter of logging data changes).
    So it would be very nice if someone could give me some hints about this type of design problem.

    Hi Frank,
    Thanks for that - after 8 years of working with Oracle Forms and afterwards the same again with ADF, I still sometimes find it hard when using ADF to understand the best approach to a particular problem - there are so many different ways of doing things / where to put the code / how to call it, etc.! Things seemed so much simpler back in the Forms days!
    Chandra - thanks for the information, but this doesn't suit my requirements. I originally went down that path thinking/expecting it to be the holy grail, but ran into all sorts of problems, because it means the dates are always converted into the user's timezone regardless of whether they are creating the transaction or viewing an earlier one. I need the correct "date" to be stored in the database when a user creates/updates a record (for example in California), and this needs to be preserved for other users in different timezones. For example, when a management user in London views that record, the date has to remain the date that the user entered, and not what the date was in London at the time (e.g. the user entered 14th Feb (23:00) - when the London user views it, it must still say 14th Feb even though it was the 15th in London at the time). Global settings like you are using in the adf-config file made this difficult. This is why I went back to stripping all timezone settings out of the ADF application and relied on database session timezones instead - and when displaying a default date to the user, I use the timestamp from the database to ensure the user's "date" is displayed.
    Cheers,
    Brent

  • Design advice for custom painting

    Hi,
    Can someone give me some high-level design advice on designing a JPanel subclass for custom painting? My panel class is becoming very complex, with lots of drawing and scaling methods, so I'm wondering if I could abstract away some of these graphical elements by creating new classes to make the design more object-oriented. However, I'm finding that there are also disadvantages in representing some of my graphic components as classes. Specifically,
    1. It will lead to a much higher level of class coupling. My panel will depend on all these new classes to work correctly. In fact the situation is even worse because my panel is an inner class and, to do some of the scaling, needs to use methods from an object stored in the parent class. I would therefore have to also pass this object reference as an argument to many of these new classes.
    2. It will lead to a lot of awkward passing of data between classes. For example, I need to use g2.drawImage(img, x, y, w, h, this), so I will have to pass not only the graphics context but also the panel reference itself.
    Is it common for panel subclasses that do custom painting to be complex?
    thanks,
    Eric

    I wrote the map view for a commercial GIS system. Drawing and scaling on a JPanel is challenging, but it need not be complex.
    1. To eliminate class coupling, you need to create a couple of interfaces: Renderable (what you want drawn) and Renderer (the thing doing the low-level drawing). Renderer will have before and after setup and reset methods (to do things like scaling and rotation), and methods that the renderables can use to draw graphics. The Renderable interface can be as simple as a single method: draw(Renderer).
    Every type of graphic that you draw on the screen would be a different class that implements Renderable, and which knows how to draw itself using whatever lower-level drawing commands you put in the Renderer. If you construct each Renderable in terms of java.awt.Shape, then Renderable.draw() could call a method Renderer.draw(java.awt.Shape, java.awt.Color).
    2. The Panel becomes fairly simple. It has a Renderer and a collection of Renderable objects. Its paint() method calls the Renderer setup method, calls Renderable.draw(Renderer) on each object, and calls the Renderer reset method. Each Renderable in turn calls Renderer.draw(java.awt.Shape, java.awt.Color) one or more times.
    Renderer should get a Graphics2D from the Panel when the setup method is called. That's when the Renderer does all of the scaling, positioning, and rotation on the Graphics2D. The Renderable implementations shouldn't even need to know about it.
    I don't think custom painting code is necessarily complex, merely challenging to write. If you're only drawing a few lines and circles, you probably don't have to be too concerned about design elegance and code maintainability. The map view I designed for our GIS system, on the other hand, has to handle all kinds of map geometry, icons, text, and aerial photos.

  • Business logic in EO, VO - design advice required

    Hi all,
    I'm looking for some design advice on the best way to approach this issue - or, in fact, advice that I'm making an issue out of something that doesn't need to be one!
    Let's say I have a single entity object called UsersEO. I have two view objects called NewUserVO and ExistingUserVO. I use the NewUserVO just for inserting new records and the ExistingUserVO for modifying existing records. When users are editing existing users, I need to provide a checkbox indicating that, when the record is saved, the user's password should be re-generated and emailed out to them.
    I want to include the check of the "GeneratePassword" attribute while processing the rest of my user business logic, in the doDML method of UsersEOImpl. From here I will call a database procedure that handles changing the password and emailing the user.
    Initially I created a transient attribute "GeneratePassword" in ExistingUserVO, but then accessing it from within UsersEOImpl.doDML would require getting the AM from the transaction and getting the current row of the ExistingUserVO. I believe it is bad practice to access a VO from within an EO?
    Is my only option here to create the transient attribute "GeneratePassword" on the UsersEO and then include it in the ExistingUserVO but not the NewUserVO? That way I can easily access the "GeneratePassword" attribute from within doDML without having to access the VO.
    If I have 10 different VOs that require different attributes, it just seems strange to add 10 attributes to the underlying EO and then only include the relevant entity attribute in each VO - or is this exactly what I should be doing?
    On a slightly different note, but a similar theme - if I had a "VOTransaction" attribute in my EO and included it in each VO created from the EO, how would I set different values in this attribute for each VO so that within the EOImpl I would know which VO was triggering the entity validation?
    Many thanks for your help !
    Cheers,
    Brent

    > If I have 10 different VOs that require different attributes, it just seems strange to add 10 attributes to the underlying EO and then only include the relevant entity attribute in each VO - or is this exactly what I should be doing?
    How about creating a BaseVO over the EO (including transient attributes for the password change and for a voName/Type detail) that provides a default value for the voName/Type attribute, and then having all the other VOs extend the BaseVO and override that attribute's setter to specify their own name/type?

  • Cocoa app design advice

    hello,
    As I'm studying Cocoa I've created a test app, and I'm looking for some advice/direction when it comes to design. This is what I've done so far:
    1. Placed a custom view on my main window, linked to my custom NSView subclass "dropview".
    2. Placed a table view on my main window, linked to an "appcontroller" object.
    With the custom view I can accept drags, and when I do, I use an NSTask to search for files and place these files in an array. Then I want to display these files in my table view. Here are my questions: since the array variable is defined in my NSView subclass, what is the right way to send that information to my appcontroller object? So far I have declared this variable above the @interface of dropview and used #import "dropview.h" in my appcontroller.m file. But when the app launches, it tries to populate the table view from the start, before my array variable is defined. Once the array variable is defined, how do I tell the table view to update? I have tried to tell my table view to reloadData from my dropview class, but the table view is linked to my appcontroller class. I hope this makes sense, and any advice on how to communicate between the 2 classes is greatly appreciated!
    thank you,
    rick

    You should create one servlet, which accepts the params of any form and passes them to the backend.
    In summary:
    The one servlet is a 'controller' servlet (the 'C' in MVC design).
    It takes in all URL requests for all JSP pages (never allow a user to directly call up a JSP page), whether from a submit from a form tag on any JSP page or from a click on a hyperlink. It verifies the person is logged in (authenticated) and has permission to view the JSP page they want to view (authorization). If they are not authenticated or authorized, it dispatches to an error page. If they are, it determines what JSP page the request is coming from and what it wants (for example: the update button was clicked). It then instantiates the business logic and sends the JSP page's information to it. The business logic performs the work. The data coming back from the business logic is put in request scope by the servlet (not by the business logic), and the servlet dispatches to the appropriate JSP page (which will get the data out of request scope to be displayed).

  • File to file design advice

    Hi experts,
    I have a file-to-file scenario. The third party wants the IDoc data as an IDoc file, so we have created a file port and are posting the IDoc to that file port, which creates a file on the application server. Here is my doubt when it comes to the PI design.
    Requirement: the file name on the ECC app server will be name1_<idoc no>; at the third party it should be name2_<idoc no>. No data transformation is needed.
    Design 1: create a Java mapping and do the dynamic configuration for the file name, using dummy message types.
    Design 2: there is an adapter module provided by SAP which converts an IDoc text file to XML and vice versa. Use that module to get the XML, do a one-to-one mapping, and change the file name via dynamic configuration in a UDF. In the message mapping we can import the IDoc structure.
    Please advise which one is more effective in all aspects, like performance, cost, etc.
    Thanks in advance,
    --Naresh

    Hi Ravi,
    refer to the comments titled SERVICE PACK in the blog below:
    /people/william.li/blog/2009/04/01/how-to-use-user-module-for-conversion-of-idoc-messages-between-flat-and-xml-formats
    The link I provided just explains how to convert an IDoc into either .txt or XML and how PI can handle them by using the standard module provided by SAP.
    Anyway, Ravi, I think I am diverting from the actual thread question.
    Also, thanks for the PI 7.3 link :)
    Thanks

  • Arch workflow design advice for a designer?

    Sorry for the ambiguous title, I couldn't figure out what to call this post.
    I'm new to Arch, though not Linux, and I must say, this is an amazing distro (I'm on the 64bit version). Dead simple, super fast, and nearly as flexible as a Gentoo system (that can use binaries!). Pacman is rockin'.
    I'm a designer by trade: Web, video, and image. And I STILL boot into Windows for important tasks like Flash work, video work, and ftp work. I would obviously like to reduce that dependency, though there is little hope in the video department, right now.
    But for web, I see no reason I couldn't do it all in linux. But I'm not sure how to go about it. Here is the workflow I need, and I was wondering if you could advise how I might set up such a system (I have just a base system with Gnome installed right now):
    * WYSIWYG HTML and CSS editing (NVU/Kompose is fine for HTML, but NOT for CSS) for the design phase
    * A way to output image slices with html (does GIMP do this?)
    * Accurate web fonts
    * Reliable ftp, preferably one with drag n' drop functionality (I use filezilla on Windows, but I think the linux version lacks the drag n' drop)
    It's not a real complicated workflow, I just need to save time wherever possible because I need to work very fast. In windows, it's like having a ball and chain strapped to your leg, but it does work. With linux, I will very much appreciate access to terminal and file management advantages.
    I'm not stuck on Gnome, I just like the simplicity. I'm mainly interested in speed and efficiency (NOTE efficiency... I like time savers and fluxbox always seems to add clicks to my tasks). Let me know what you think! I may be able to move my flash work over with a little help from VirtualBox too, but I think I'm stuck when it comes to video . Thanks for any advice you might have!

    No offense, but using a WYSIWYG editor to design web pages doesn't sound very professional imo. They just don't offer the control that one would want over the code. I have tried a few (Frontpage, Dreamweaver, NVU, Bluefish, ...) and they all suck. They just don't do what you want them to. You drag something or add some formatting and it just messes up the code. It's better to just use a text editor and view the results in a browser. Maybe that's slow or inefficient for you, but I find that's the best way to do it.
    As for image slicing, I find that annoying as well. In Photoshop I never really liked the way it worked. I sliced a few images and then trashed most of the others. I tend to go for simple designs and focus on making them mostly CSS, so when I slice images it's usually a 1px wide/high gradient which gets repeated. I don't need image slicing for that. As for graphics-intensive sites... well... really, you should reconsider that. People still have slow connections, and having a lot of graphics is just bad, even if your client wants it. You might as well go with Flash and waste some more bandwidth.
    If you really want to do it though, I think Inkscape is quite a nice tool. I do all my designing in it, and though I don't use slicing, you can do it quite easily (though it's a bit hackish) by adding a layer and creating transparent rectangles around the stuff you want, then just select the rectangle and export it. I'm not sure if there's a more automatic way - there are plenty of tutorials.
    > The MS-fonts should be fine, I just want to know that I am looking at an accurate representation of what my Windows customers will see.
    Fonts won't help you much there. You know most people use IE, so you need to view the website in IE regardless, and that means you need Windows (I think wine uses some weird IE version which uses gecko). Maybe there's some good Linux alternative for viewing stuff in IE, but I just view it on Windows. Also the font shouldn't change the general layout of the site... I don't see how that would be a problem unless it's some weird font that not everyone has, in which case you'd use @font-face anyway...

  • Design advice for vertical list calculations

    I'm extending a product management life-cycle SharePoint 365 site,
    with purchase orders, warehouse stock, production targets (date based) and sold dates,
    so that in our production environment we can see how much is stored, how much can be sold, what we need to buy in, etc.
    The thing I'm a bit troubled about is that SharePoint lists are not Excel, but this has to be done with SharePoint lists.
    They prefer not to have edits directly in the ASPX code, but editing workflows in SharePoint Designer is OK.
    In Excel one could easily add a cell formula that takes the value in the cell to its left and adds it to its own value one row earlier (like B2 containing =A2+B1), and then copy that formula down the whole B column.
    The nice thing with Excel is that when you change some value in column A, like A2 = 10 and later A5 = 10, then B7 would become 20.
    Later changing a value like A3 = 4 would recalculate quickly and re-total the B column.
    In SharePoint lists, calculated fields work only horizontally, so to do vertical calculations one needs a workflow, doing a lookup based upon a (calculated previous) ID field, e.g. ID - 1, or stepping through all IDs up to the current one. What bothers me a bit is that
    my list will grow large at some point, so stepping through all IDs to sum them up to the current item seems 'slow' to me; on the other hand, if I only check the previous item, the whole column (B) wouldn't be recalculated if someone changed an older entry.
    Extremely simplified, I have a single list with the columns below (where 'stored' acts as my B column).
    bought | stored | sold
    0 | 5 | 0
    2 | 5 | 0  (raw products need to be manufactured before stored so they're added 1 by 1 later).
    0 | 6 | 0
    0 | 7 | 0
    0 | 4 | 3 (but when sold we can subtract directly from storage)
    Of course I also need some horizontal calculations, because I need to track as well whether enough has been bought for production. But I wonder what would be wise to do: base things on the current ID and ID minus 1, walk through all items with a workflow (recalculate the
    whole list), or, like with changes, recalculate from the currently changed item to the end (not sure how to detect the end yet, but something like that).
    I just wonder what would be wise here, and the best direction for this.
    The table I showed is extremely simplified; in fact some other tables and workflows will also be feeding the data.
    It's just that the whole thing makes me worry a bit and wonder what would be best; maybe I'm overlooking something and there are other ways to do vertical calculations over lists.

    After lots of thinking, and seeing how slowly Office 365 SharePoint reacted to my list workflows,
    I've decided to use a "site variables list", in which I store variables as rows and their values in a column.
    I refer to them by ID (or one could use another indexed unique value).
    It's maybe not an exact calculation of the whole thing (built around several lists), but everything is a lot faster than stepping through each item in a huge list. It also allows a bit easier tweaking of these "vertical" calculations.
    If for some reason those calculations need adjusting (because management definitions change), I have easy access to those variables to adjust them.
    On a side note, when using those variables it turned out to work a bit better to create local variables in the workflow, do the calculation there, and then put the result in the table where you want those numbers to appear, as compared to referring directly to the
    total. It takes just 5 seconds or so to update. With this method the size of the lists now has almost no impact on the speed of the workflow.

  • Fact table design advice

    Good Morning All,
    I'm working on developing a cube that measures budget and actual cost for a customer I'm working with. We have several dimensions that come into play:
    Organization - this dimension defines the various internal departments at the customer location where each department sets a budget or actual cost for each month.
    DateTimePeriod - this dimension defines the transaction date when the budget or actual cost was recorded. This dimension contains year, quarter, month and day columns.
    Expense Item - this dimension defines a specific expense item that a budget and actual cost are assigned to, such as rent, utilities, software licences, etc.
    Cost Type - this dimension defines if the cost within the fact table is a budget or actual cost.
    Within my fact table I store the primary key values for each of the dimension tables listed above. Included in this table is a cost column that represents the budget or actual cost. The problem that I'm having is that the budget cost and actual cost are separate
    records; for example, I have one record that has the budget cost and then another record that has the actual cost.
    My feeling is that the budget and actual cost should be stored on the same record instead of on separate records. I would also note that we're using PerformancePoint to surface the cube data to the client, and both the budget and actual cost need to drill down to the
    month level only for phase 1. I have a feeling that the customer will want to measure down to the day level in the future.
    So my question is: what is the better design?
    Keeping the actual and budget costs in the fact table on separate rows, using the Cost Type dimension to identify whether the cost is a budget cost or an actual cost, or...
    Keeping the actual and budget costs in the fact table on the same row and removing the need for a Cost Type dimension.
    Please help...

    Why? Wouldn't it be easier if I made a database change and added the budget and actual cost on the same row? What would be the advantage or disadvantage of your approach?
    In my experience, there can be more than one version of a budget value. Initially we start with the budget for an account, and then have the actual for the same account for the same period. If we are 100% sure that we will only ever get these two versions (Budget and
    Actual), then up to a certain point a two-column implementation is fine. But what if the budget is revised - how do you hold it, by adding another column? What if you need to maintain a forecast value for the same account and period - create another column for that? Considering
    all the accounting and budgeting scenarios, I still suggest having multiple rows for all these versions (or scenarios, or cost types). Again, refer to AdventureWorksDW to see this implementation.
    Since you are going to build a cube from this, you can easily view accounts recorded this way (which is how business users mostly view them when analysing accounts). That is an advantage too.
    In terms of disadvantages, I see only one, which is storage cost.
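    To illustrate the row-per-cost-type design (a sketch only; every table and column name below is hypothetical, loosely based on the dimensions described above): budget and actual can still be presented side by side at query time, and a later 'Revised Budget' or 'Forecast' version is just another row in the Cost Type dimension rather than a schema change.
    SELECT  d.CalendarYear, d.MonthNumber, o.DepartmentName, e.ExpenseItemName,
            SUM(CASE WHEN ct.CostTypeName = 'Budget' THEN f.Cost ELSE 0 END) AS BudgetCost,
            SUM(CASE WHEN ct.CostTypeName = 'Actual' THEN f.Cost ELSE 0 END) AS ActualCost
    FROM FactCost AS f
    JOIN DimCostType       AS ct ON ct.CostTypeKey    = f.CostTypeKey
    JOIN DimDateTimePeriod AS d  ON d.DateKey         = f.DateKey
    JOIN DimOrganization   AS o  ON o.OrganizationKey = f.OrganizationKey
    JOIN DimExpenseItem    AS e  ON e.ExpenseItemKey  = f.ExpenseItemKey
    GROUP BY d.CalendarYear, d.MonthNumber, o.DepartmentName, e.ExpenseItemName;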
    Dinesh Priyankara
    http://dinesql.blogspot.com/

  • R/3 to AS/400 Interface Design Advice

    Hi,
    I need some design advice on a synchronous interface involving R/3 and an AS/400 system. The requirement is as follows:
    The shipping company uses AS/400 systems. Goods to be shipped are scanned at their end using handheld scanners. Once a box is scanned for shipment, a message from the scanner containing the order info and line item details needs to be sent to an R/3 table for update, and then an acknowledgement needs to be sent back to the AS/400 system. The frequency of these messages could be around 20 per minute. I am not very aware of how things work on the AS/400 side, but I know that they currently use an ODBC link.
    Could I use XI as middleware for this kind of interface? If yes, how would I connect to the AS/400 system (which is inside our landscape), and would XI handle and process 20 messages (around 50 KB each) per minute? Would this work as if it were a direct connection to R/3?
    I was thinking of using RFCs, but wanted to know if there are better options. Any suggestions on this would be greatly appreciated! Kindly let me know if you need any more info. Thanks a ton!

    At a high level, it looks like we have to go for a JDBC-XI-Proxy scenario for your interface.
    The AS/400 database system sends the data to the XI system, which sends the data to the SAP system via an inbound proxy (synchronous).
    Here are some links for JDBC :
    AS400 acess using JDBC adapter
    Re: JDCB Connection from XI to AS400
    JDBC drivers for DB2 on AS400 V5R3
    Regards,
    Ravu

  • Data / Hosting Center design advice…

    I need advice on how to build a data/hosting center infrastructure (best practices).
    I need to deliver customer access on Ethernet, where customers can get access at variable access rates (CAR ingress/egress). Some customers are connected with no redundancy, and others need redundancy (HSRP) between two routers.
    Should I build on a number of 7507 routers with GEIP+ interfaces to connect to the backbone routers, and a number of PA-2FE-TX interfaces to provide customer access, with each customer getting his own FE interface that has been CAR'ed down to the access rate the customer needs? The big issue here is that some customers don't need more than 4-10 Mbit (full duplex), and using a 100 Mbit interface for a customer that only needs under 10 Mbit is overkill. Any ideas on how to solve this issue? Is the solution to connect one 100 Mbit port to a switch, run dot1q trunk interfaces out on the switch, and then connect each customer to a switch port?
    Or is the best solution to set up a number of 7606 routers?
    I also need to deliver the L2 infrastructure, so I need to build an L2 infrastructure that can support the customers' equipment (firewalls/servers), segmented into VLANs. Here I need to secure my infrastructure so that a customer error in the L2 network doesn't influence other customers. Would that be setting up a number of 6500 switches connected to a number of smaller switches, using MST so each customer has its own MST instance?
    Thanks in advance
    /Peter

    Hello,
    1. The PIX is the precursor to the ASA, so at this point the ASA is probably a better choice since it'll be around longer, plus I'm sure they have beefed up the base hardware compared to the PIX.
    2. Your external router depends on how much traffic you're going to be dropping into your hosting site. A 7200 series router is fairly beefy and should be able to handle what you need.
    3. One of the nice things about the 6500 is that you can put in an FWSM and segment all your different hosting servers to provide more granular network control.
    I don't have any case studies, but I will look around and post them if I find some.
    Patrick

  • Achieving Parallelism and avoiding Concurrency - Need Design Advice

    Requirement:
    Extract Data from the database using SSIS.
    Design Details:
    1. There are two tables:
    ReportExtract and ReportExtractQueue. ReportExtract is a reference table and contains all the details related to the extracts.
    ReportExtractQueue is a queue table designed for the SSIS package to process the extracts by priority.
    2. The SSIS package, at any point in time, picks up only one ReportExtractID to process from ReportExtractQueue, where the ProcessingStatus = 'Q', in ascending order of Priority. This is achieved
    by calling a stored procedure with the logic below.
    DECLARE @ReportExtractID int;
    SELECT @ReportExtractID = (SELECT TOP 1 ReportExtractID
                               FROM ReportExtractQueue
                               WHERE ProcessingStatus = 'Q'   -- InQueue
                               ORDER BY Priority);
    IF @ReportExtractID IS NOT NULL
    BEGIN
        UPDATE ReportExtractQueue
        SET ProcessingStatus = 'P'                            -- InProgress
        WHERE ReportExtractID = @ReportExtractID;
        SELECT @ReportExtractID AS ReportExtractID;
    END;
    3. Once the package picks up the ReportExtractID from the queue, the ProcessingStatus is changed to 'P' (InProgress) and the extract is processed. After the processing is complete, the ProcessingStatus is
    updated again to either 'S' (Success) or 'F' (Failed).
    4. Scheduling for Parallel Execution: 
    Four copies of the same SSIS Package are copied to four different folders.
    Four Jobs are created, each referring to a copy of the SSIS Package.
    The Jobs are scheduled to run every 15 minutes to poll the Queue table through SSIS package.
    The Start Time for all the Jobs is same.
    Problem: since all the jobs start at the same time and run every 15 minutes, how do I ensure each job picks up a unique ReportExtractID from the queue table? So far, from what I have tested in the test
    environments, I have not faced this issue, but I want to avoid concurrency issues on the queue table.
    If I use the ROWLOCK and READPAST hints on the queue table, will that resolve the concurrency issues (if any)?
    If yes, should I use these hints even while updating the processing status? Should I follow any other design approach?
    Please advise
    -- Praveen

    The jobs will clash, and the design is not scalable. SSIS is not the best solution here in general.
    You would be better off having the stored procedure pick records up and then run the processing package - many times, even in parallel - given that this package
    can tolerate multiple copies running. SQL Agent runs them in async mode; the package in an Agent job can be started with sp_start_job.
    Arthur My Blog
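    For completeness, if the polling-on-a-queue-table design is kept, a common way to stop two jobs from claiming the same row is to make the pick-and-mark step one atomic statement instead of a SELECT followed by an UPDATE. A sketch against the table and columns named in the question (READPAST skips rows another session has already locked, UPDLOCK holds the chosen row until it is marked):
    DECLARE @Claimed TABLE (ReportExtractID int);
    -- Pick and mark one queued row in a single atomic UPDATE
    WITH NextItem AS (
        SELECT TOP (1) ReportExtractID, ProcessingStatus
        FROM ReportExtractQueue WITH (ROWLOCK, UPDLOCK, READPAST)
        WHERE ProcessingStatus = 'Q'          -- InQueue
        ORDER BY Priority
    )
    UPDATE NextItem
    SET ProcessingStatus = 'P'                -- InProgress
    OUTPUT inserted.ReportExtractID INTO @Claimed;
    SELECT ReportExtractID FROM @Claimed;     -- an empty result set means there was nothing to pick up
    Whether this is preferable to moving the whole loop into a stored procedure, as suggested above, depends on how much of the processing has to stay inside SSIS.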

  • Looking for design advice and inspiration

    Hi, all. I'm designing a Dreamweaver template for our IT department's web apps. Do you have some inspirations on design/layout? Our apps need to take advantage of the full browser screen (lots of data to display in grids) and need to be standards compliant (ADA, XHTML, etc.). In other words, lots of screen real estate but still look slick :)
    Any recommendations will be much appreciated. Thanks.

    There are great inspirations here:
    http://cssliquid.com/category/gallery/
    Cheers
    Pablo
    An Eye of Menorca
    www.dellimages.com
    "curious_Lee" <[email protected]> wrote in
    message
    news:evlkud$c4p$[email protected]..
    > Hi, all. I'm designing a Dreamweaver template for our IT
    department's web
    > apps.
    > Do you have some inspirations on design/layout? Our apps
    need to take
    > advantage
    > of the full browser screen (lots of data to display in
    grids) and need to
    > be
    > standards compliant (ADA, XHTML, etc.). In other words,
    lots of screen
    > real
    > estate but still look slick :)
    >
    > Any recommendations will be much appreciated. Thanks.
    >

  • Java EE design advice for a re-designed DB app

    I'm currently tasked with rewriting a legacy DB app in Java. The original was written in Delphi. It worked great for a number of years, but the powers that be have recently decided to redesign and rewrite it in Java. Basically I just have the same set of business requirements as the original did.
    Overall, the app is a desktop GUI application that helps track the contents of a natural history museum collection. The collection contains a bunch of specimens (dead animals) collected all over the globe at various times over the last 200 years. Multiple users (1-10 users) will have to have access to the data at the same time. I also have to provide a nice Swing GUI for it.
    Here's my question: Is this the type of app that lends itself to a Java EE design? I'm imagining using a Java EE app server that connects to the DB. The app server would provide DB access, producing entity beans, as well as managing a number of session beans (EJBs) that implement the business logic (security, user management/session management). I would also have a Swing GUI that would connect to the beans remotely. This sounds like it would help me keep a good separation between the UI layer (Swing), the business logic (EJBs), and the data layer (entity beans accessed using the Java Persistence API). Does this sound reasonable? I'm a veteran Swing developer, but not a seasoned Java EE developer/designer.
    Also, if I use this architecture, I can imagine one issue that I might run into (I'm sure there are many others). I can imagine that I would want to retrieve the entity beans (let's say mypackage.MyPersonBean) through some call to an EJB, and then use the bean in some rendered Swing component. What happens when the Swing component needs to access the results of MyPersonBean.getAddresses() if the addresses are lazily loaded?
    As you can probably tell, I really have more than one design question here. Help/comments about any of this is greatly appreciated.

    > I was thinking the same thing, but don't have a successful experience to validate my gut feelings. Here's my only suggestion (which dubwai could hopefully confirm or correct): write your entity classes/data model classes with no knowledge of lazy-loading etc. Then subclass them, overriding just the getChildren() type of methods, and build the lazy-loading knowledge into the subclass.
    More or less, yes. Don't over-think it, though. If you define your basic data 'types' as interfaces, you don't need to get into complex type hierarchies or multiple versions of the types unless that becomes necessary, and if it does, the changes should not affect the presentation layer.
    Since you are on board with this, and I think you are completely following, there is a technique for the lazy loading that you can use here.
    In the case where it's a one-to-one relationship, you can do the lazy loading by creating a simple wrapper class for the child object. This class will have a reference to either null or a filled-in object. This is a little more OO because the object is taking care of itself. Whether this abstraction is useful to you, you will have to decide.
    In the case of a one-to-many relationship, you can create a custom Collection (List or Set) that manages the stub loading. If you make a generic abstract version and subclass it for the different child types, you might be able to reuse a lot of the data retrieval code. You can do the same thing with the wrapper too.
    I will caution you to try to keep it as simple as you can without painting yourself into a corner. Only do things that you are going to use now, and write things so they can be expanded upon later. Reducing coupling is a core technique for that.
    > When the GUI asks for an object in the getData() call, hand them a subclass object, but don't let them know it. In other words, have the method "public DataClass getData()" return a SubDataClass object. The caller will only know that they received a DataClass object, but lazy-loading awareness will be built into it. This way, the lazy-loading stuff is completely transparent to the caller but you still have simple data classes that can be used outside of a lazy-loading context.
    Yes, this is the idea, but don't write the other versions until you need them.
    > It's also possible to use this method if you need to add transparent lazy-loading to classes that you aren't the author of. (Only classes that have been tagged 'final' or have 'final' public methods would be beyond this method's reach.)
    Yes, you can use the wrapper approach above, but if the author of that class made a lot of unnecessary assumptions you might have trouble.
    > This approach allows for some enhancements, too. You can create a thread that retrieves the children of Foo (e.g. bars) incrementally after the Foo is returned to the caller. Often you can load the bars in the time it takes the user to click around to the point where they are needed, or at least be partly done. This will make the app seem very fast to the user because they get the Foo very quickly (because you didn't load the children) and then the bars really quickly (because you loaded them during user 'think-time').
    > I love this idea. I'm hoping to code this into my GUI app soon.
    I would advise that you get the main lazy-loading working without this (but keep it in mind when writing the code) and do it once you are sure you will finish on time.

  • Streams with DataGuard design advice

    I have two 10gR2 RAC installs with DataGuard physical copy mode. We will call the main system A and the standby system B. I have a third 10gR2 RAC install with two-way Streams Replication to system A. We will call this RAC system C.
    When I have a failure scenario with system A, planned or unplanned, I need system C's Streams replication to start replicating with system B. When system A is available again, I need system C to start replicating with system A again.
    I am sure this is possible, and I am not the only one who wants to do something like this, but how? What are the pitfalls?
    Any advice on personal experience with this would be greatly appreciated!

    Nice concept, and I can only applaud its ambition.
    "I am sure this is possible, and I am not the only one who wants to do something like this."
    I would like to share your confidence, but I am afraid there are so many pitfalls that success will depend on how much pain you and your hierarchy can cope with.
    Some thoughts:
    Unless your Data Guard is synchronous, at the very moment A fails there will be missing transactions in C which may already have been applied in B, as Streams is quite fast. This alone tells us that a forced switch cannot be guaranteed consistent: you will have errors, and some nasty ones, such as sequence numbers consumed on A (or B) just before the crash, already replicated to B (or A) but never shipped to C. Upon waking, C will re-emit values already known on B (duplicate key on an index?).
    I hope you don't sell airplane tickets, for in such a case you could sell some seats twice.
    Does C have to appear as another A, or is it allowed to have a different DB_NAME? (How will you set up C in B? Is C another A which takes over A's name, or is C a distinct source?) If C must have the same DB_NAME, the global name must be the same. Your TNS configuration will have to cope with two identical TNS entries in your network referring to two different hosts and databases. Possible with cascaded lines at (ADDRESSES= ..., but to be tested.
    If C is another A, then C must have the same DB_NAME, as LCRs carry their origin DB name inside them.
    If C has a distinct name from A, it must have its own apply process - not a problem, it will be idle while A is alive - but also a capture process that captures nothing while A is alive, for it is the capture site that is supposed to send the ticks that advance the counters on B. Since C will be down in normal times, you will have to emulate this feature by periodically resetting the first_scn manually for this standby capture (you can jump archives provided you jump to another archive with a built-in data dictionary), or accept creating a capture on B only when C wakes up. The best would be to consider C as a copy of B (multi-master DML+DDL?) and re-instantiate the tables without transferring any data, setting the apply and capture on C and B to whatever SCN is found to be the maximum on both sites when C wakes up.
    All this is possible, with lots of fun and hard work.
    As for the return of the Jedi - I mean A, after it has recovered from its crash - you did not tell us whether the re-sync C-->A is hot or cold. Cold is trivial, but if it is hot, it can be done by configuring a downstream capture from C to A with the initial SCN set to around the crash and reloading all the archives produced on C. But then C and A must have a different DB_NAME, or maybe setting a different TAG will be enough (to be tested). There will also be the critical switch of the multi-master replication from C<-->B to A<-->B. That alone is a masterpiece of table re-synchronization.
    In any case, I wish you a happy and great project, and I am eager to hear about it.
    Lastly, there was a PDF describing how to deal with a Data Guard switch on a database using Streams, but it is of little help, for it assumes that the switch is gentle: no SCN missing. I did not find it, but maybe somebody can point to a link.
    Regards,
    Bernard Polarski
