DW best practices when needing to update data

Hi all,
I have a few general questions about data warehousing...
We often need to update/process the data after we have imported it into the DW (with ETL tools). Since data in a DW is not supposed to be updated, I wonder what the correct way to do this is.
The scenario: data comes from a few systems; we import it and transform it. Then, when we want to run reports etc., we sometimes need to apply changes to the data, for example adding a column with some computed result. Is the correct way to do that to add tables in the DW? Other systems use separate tables (inside or outside the DW) to transform the data... when is it worth creating separate tables outside the DW, and when should we create/add columns in the DW? What is allowed/best practice in a DW?
Thanks,
A.

It is a view. That is what the error message is saying.
Why not deal with facts instead of speculating? Look at the Oracle Data Dictionary and see what the object DPIT.DEDUCTIONS is.
Use TOAD. Use SQL*Plus and select on ALL_OBJECTS. Use OEM. Etc.

Similar Messages

  • Best practice how to retrieve & update data w/o any jsf-lifecycle-overhead

    I have a request scoped jsf managed bean called "ManagedBean". This bean has a method annotated with "@PostConstruct" that retrieves data from a database. The data is shown in a jsp "showAndEditData.jsp" in <h:inputText /> components - so the data is editable.
    The workflow is as follows:
    First, when navigating to "showAndEditData.jsp", the ManagedBean is created, the "@PostConstruct"-method is invoked, and the data retrieved from the database is shown to the user.
    Second, the user changes the data.
    Third, the user presses the submit button, the ManagedBean is created again, the "@PostConstruct"-method is invoked again, and the data is retrieved from the database again. Then the data is overridden by the changes the user made and passed to the business-tier (where it will be saved to the database).
    Every step that I marked with "*again*" is completely unnecessary and a huge overhead.
    Is there a way to prevent these unnecessary steps?
    Or, asking in other words: is there a best practice for retrieving and updating data efficiently and without any overhead using JSF?
    I do not want to use session scoped managed beans, because this would be a huge overhead as well.

    The first "again" is neccessary, because after successfull validation, you need new object in request to store the submitted value.
    I agree to the second and third, really unneccessary and does not make sense.
    Additionally I think it�s bad practice putting data in session beansTotal agree, its a disadvantage of JSF that we often must use session.
    Think there is also an bigger problem with this.
    Dont know how your apps are working, my apps start an new database transaction per commit on every new request.
    So in this case, if you do an second query on postback, which uses an different database transaction, it could get different data as for the inital request.
    But user did his changes <b>accordingly</b> to values of the first snapshot during the inital request.
    If these values would be queried again on postback, and they have been changed meanwhile, it becomes inconsistent, because values of snapshot two, do not fit to user input.
    In my opionion zebhed has posted an major mistake in JSF.
    Dont now, where to store the data, perhaps page scope could solve this.
    Not very knowledge of that section, but still ask myself, if this data perhaps could be stored in the components and on an postback the data are rendered from components + submittedvalues instead of model.
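    To make the overhead concrete, here is a minimal sketch of the request-scoped bean being discussed, assuming JSF 2 managed-bean annotations; class, property, and method names are illustrative, not taken from the original post. Because the bean is request scoped, the @PostConstruct method runs on the initial GET and again on the postback, which is exactly the duplicated database call described above.

    import java.util.ArrayList;
    import java.util.List;
    import javax.annotation.PostConstruct;
    import javax.faces.bean.ManagedBean;
    import javax.faces.bean.RequestScoped;

    @ManagedBean
    @RequestScoped
    public class ShowAndEditBean {

        private List<String> rows = new ArrayList<String>();

        // Runs every time the bean is instantiated, i.e. on the initial GET
        // and again on the postback -- the duplicated query under discussion.
        @PostConstruct
        public void init() {
            rows = loadFromDatabase();
        }

        public List<String> getRows() { return rows; }
        public void setRows(List<String> rows) { this.rows = rows; }

        // Action bound to the submit button; by the time it runs, JSF has
        // already pushed the submitted values into 'rows'.
        public String save() {
            persist(rows);
            return "showAndEditData";
        }

        private List<String> loadFromDatabase() { return new ArrayList<String>(); } // stand-in for the real query
        private void persist(List<String> data) { /* hand off to the business tier */ }
    }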

  • Best Practices to separate voice and Data vlans

    Hello All .
    I am coming to the community to get some advices on a specific subject .
    One of my customers is currently using VLAN access-lists to isolate its data VLAN traffic from its voice VLAN traffic.
    As most of us know, VLAN ACLs are very difficult to deploy and manage at an access-port level that is highly mobile. Because of these management issues they have been looking at firewalls as a replacement solution, but apparently the price of that solution was far too high.
    Can someone guide me towards security best practices when it comes to data and voice vlan traffic isolation please ?
    thanks
    Regards
    T.

    thomas.fayet wrote: Hi again Collin, may I ask what type of FW / switches / IOS version you are using for this topology? Also, is the media traffic going through your FW if one voice VLAN wants to talk to another voice VLAN? rgds
    Access Switches: 3560
    Distro: 4500 or 6500
    FW: ASA5510 or Juniper SSG 140 (phasing out the Junipers)
    It depends. In the drawing above, no voice traffic would leave the voice enclave until it talks to a remote site. If we add other sites to the drawing, at a minimum call-sig would traverse the firewall and depending on the location of the callers, all voice traffic may cross the firewall. All of that depends on how you have your call managers/vm/voice gateways designed and where the callers are.

  • Best practice on extending the SIEBEL data model

    Can anyone point me to a reference document or provide from their experience a simple best practice on extending the SIEBEL data model for business unique data? Basically I am looking for some simple rules - based on either use case characteristics (need to sort and filter by, need to update frequently, ...) or data characteristics (transient, changes frequently, ...) to tell me if I should extend the tables, leverage the 'x' tables, or do something else.
    Preferably they would be prescriptive and tell me the limits of the different options from a use perspective.
    Thanks

    Accepting the given that Siebel's vanilla data model will always work best, here are some things to keep in mind if you need to add something to meet a process that the business is unwilling to adapt:
    1) Avoid re-using existing business component fields and table columns that you don't need for their original purpose. This is a dangerous practice that is likely to haunt you at upgrade time, or (worse yet) might be linked to some mysterious out-of-the-box automation that you don't know about because it is hidden in class-specific user properties.
    2) Be aware that X tables add a join to your queries, so if you are mapping one business component field to ATTRIB_01 and adding it to your list applets, you are potentially putting an unnecessary load on your database. X tables are best used for fields that are going to be displayed in only one or two places, so the join would not normally be included in your queries.
    3) Always use a prefix (usually X_ ) to denote extension columns when you do create them.
    4) Don't forget to map EIM extensions to the extension columns you create. You do not want to have to go through a schema change and release cycle just because the business wants you to import some data to your extension column.
    5) Consider whether you need a conversion to populate the new column in existing database records, especially if you are configuring a default value in your extension column.
    6) During upgrades, take the time to re-evaluate your need for the extension column, taking into account the inevitable enhancements to the vanilla data model. For example, you may find, as we did, that the new version of the S_ADDR_ORG table had an ADDR_LINE_3 column, and our X_ADDR_ADDR3 column was no longer necessary. (Of course, re-configuring all your business components to use the new vanilla column can also be quite an ordeal.)
    Good luck!
    Jim

  • Best practice when using Tangosol with an app server

    Hi,
    I'm wondering what is the best practice when using Tangosol with an app server (Websphere 6.1 in this case). I've been able to set it up using the resource adapter, tried using distributed transactions and it appears to work as expected - I've also been able to see cache data from another app server instance.
    However, it appears that cache data vanishes after a while. I've not yet been able to put my finger on when, but garbage collection is a possibility I've come to suspect.
    Data in the cache survives the removal of the EJB, but somewhere later down the line it appear to vanish. I'm not aware of any expiry settings for the cache that would explain this (to the best of my understanding the default is "no expiry"), so GC came to mind. Would this be the explanation?
    If that would be the explanation, what would be a better way to keep the cache from being subject to GC - to have a "startup class" in the app server that holds on to the cache object, or would there be other ways? Currently the EJB calls getCacheAdapter, so I guess Bad Things may happen when the EJB is removed...
    Best regards,
    /Per

    Hi Gene,
    I found the configuration file embedded in coherence.jar. Am I supposed to replace it and re-package coherence.jar?
    If I put it elsewhere (in the "classpath") - is there a way I can be sure that it has been found by Coherence (like a message in the standard output stream)? My experience with Websphere is that "classpath" is a rather ...vague concept, we use the J2CA adapter which most probably has a different class loader than the EAR that contains the EJB, and I would rather avoid to do a lot of trial/error corrections to a file just to find that it's not actually been used.
    Anyway, at this stage my tests are still focused on distributed transactions/2PC/commit/rollback/recovery, and we're nowhere near 10,000 objects. As a matter of fact, we haven't had more than 1024 objects in these app servers. In the typical scenario where I've seen objects "fade away", there has been only one or two objects in the test data. And they both disappear...
    Still confused,
    /Per
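    For what it's worth, here is a minimal sketch of the "startup class holds on to the cache" idea mentioned above, assuming the standard Coherence CacheFactory/NamedCache API; the cache name is made up, and whether garbage collection is really the cause of the disappearing entries is still an open question in this thread.

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class CacheHolder {

        // Keep one long-lived, strongly referenced handle to the cache so the
        // client-side cache object cannot be collected between EJB invocations.
        private static final NamedCache CACHE = CacheFactory.getCache("test-cache");

        public static void put(Object key, Object value) {
            CACHE.put(key, value);
        }

        public static Object get(Object key) {
            return CACHE.get(key);
        }
    }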

  • Best practice for Plan and actual data

    Hello, what is the best practice for plan and actual data? Should they both be in the same application or in different ones?
    Thanks.

    Hi Zack,
    It will be easier for you to maintain the data in a single application. Every application must have the Category dimension, so you can use this dimension to maintain the actual and plan data separately.
    Hope this helps.

  • Best practices when carrying forward audit adjustments

    Dear experts,
    I would like to know if someone can share his best practices when performing carry forward for audit adjustments.
    We are actually doing legal consolidation for one customer and we are facing one issue.
    The accounting team needs to pass audit adjustments around April-May for last year.
    So from January to April / May, the opening balance must be based on December closing of prior year.
    Then from May / June to December, the opening balance must be based on Audit closing of prior year.
    We originally planned to create two members for the December period, XXXX.DEC and XXXX.AUD.
    Once the accountants would know their audit closing balance, they would have to input it on the XXXX.AUD period and a business rule could compute the difference between the closing of AUD and DEC periods and store the result on an opening flow.
    The opening flow hierarchy would be as follows:
    F_OPETOT (Opening balance Total)
        F_OPE (Opening balance from December)
        F_OPEAUD (Opening balance from the difference between closing balance of Audit and December periods)
    Now assume that we are in October and, for whatever reason, an accountant runs a carry forward for February: he is going to impact the opening balance, because at this time (October) we already have the audit adjustments.
    How to avoid such a thing? What are the best practices in this case?
    I guess it is something that you may have encountered if you have done a consolidation project.
    Any help will be greatly appreciated.
    Thanks
    Antoine Epinette
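    For illustration only, here is a tiny sketch of the opening-flow rule described above; the closing() stub and its period arguments are placeholders, and only the F_OPE / F_OPEAUD / F_OPETOT arithmetic comes from the post.

    public class CarryForwardSketch {

        // Stub standing in for reading the closing balance of a given period
        // from the consolidation application.
        static double closing(String period) {
            return 0.0;
        }

        public static void main(String[] args) {
            double closingDec = closing("XXXX.DEC");   // December closing of prior year
            double closingAud = closing("XXXX.AUD");   // audited closing of prior year

            double fOpe    = closingDec;               // F_OPE: opening carried from December
            double fOpeAud = closingAud - closingDec;  // F_OPEAUD: audit adjustment delta
            double fOpeTot = fOpe + fOpeAud;           // F_OPETOT: equals the audited closing

            System.out.printf("F_OPE=%.2f F_OPEAUD=%.2f F_OPETOT=%.2f%n", fOpe, fOpeAud, fOpeTot);
        }
    }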

    Cookman and I have been arguing about this since the paleozoic era. Here's my logic for capturing everything.
    Less wear and tear on the tape and the deck.
    You've got everything on the system. Can't tell you how many times a client has said "I know that there was a better take." The only way to disabuse them of this notion is to look at every take. If it's not on the system, you've got to spend more time finding the tape, and adding "wear and tear on the tape and the deck." And then there's the moment where you need to replace the audio for one word from another take. You can quickly check all the other takes (particularly if you've done a thorough job logging the material - see below).
    Once it's on the system, you still need to log and learn the material. You can scan through the material much faster once it's captured. Jumping around the material is much easier.
    There's no question that logging the material before you capture makes you learn the material in a more thorough way, but with enough self-discipline, you can learn the material just as thoroughly once it's been captured.

  • Best practice when modifying SAP Standard Development Component

    Hello Experts,
    What is the best practice when modifying an SAP standard Development Component (Java Web Dynpro)? I'm looking for the best method to modify an SAP standard DC so that my changes will be kept (or need little maintenance) after a new service package (or EHP) is applied.
    Thanks,
    Kevin

    Hi,
      'How to use Business Packages in Enterprise Portal 6.0' is available at this link.
    http://help.sap.com/bp_epv260/EP_EN/documentation/How-to_Guides/misc/Using_Business_Packages.pdf
    Check out for the best practices.
    Regards,
    Harini S

  • Best practices when making service requests

    Best practices when making service requests
    We've been working on moving our old services, which were built with a different service request tool, into RequestCenter, and we were wondering if anyone had any thoughts about standards or best practices for the new forms that they would be willing to share. For example, one such standard might be that the customer/initiator information will always be displayed at the top of the request.
    Are there any other standardizations you could share that help lend consistency and provide improved readability for request forms?  Maybe someone has a design framework guide they would be willing to share?
    Thanks!
    Tim

    Thanks for the comments and the book suggestion.
    We've been placing the customer information at the top because we wanted the customer to review the information before submitting the form. Our LDAP data is somewhat spotty and we want to make sure we have the right information when the form is submitted, but I can see the advantages of placing it at the bottom as well. I'll have to think that over some more.
    Does anyone find that certain fields work better than others? For example, we've not had much

  • Best Practices for converting SAP HR data (4.7 to ECC)

    Hello Experts ...
    We are going from 4.6 to ECC... no upgrade, it will be a new implementation.
    I am looking for best practices to convert SAP HR data from one SAP instance (4.6) to another (ECC).
    I am not sure if direct input, LSMW, or any other method/tool is the best way.
    I will really appreciate it and award points if I can get good advice or documentation.
    Let me know if my question is not clear enough.
    Thanks,

    Hi
    You can check SAP Marketplace and follow the download link there. You will require an SAP Marketplace login to download the Best Practices. Fortunately, Best Practices for ECC 5.00 and 6.00 are available there, but they are country-specific versions. I know HCM for the US is available there.
    Reward points, if helpful.
    Regards
    Waz

  • Books / links on best practices when writing on-line Help

    Hi everyone
    Not sure where to place this topic...
    I have not posted in here for ages...
    I am a RoboHelp user and I am looking for one or several
    books about best practices when writing on-line help. For example,
    what are the "rules" or "do's" and "don'ts" for CSS, topic linking,
    number of clicks, links within a topic, index building, etc.
    Just wondering if some people on this forum know about some
    good books where all of the rules or do's would be compiled?
    Thanks in advance for any input.
    Regards

    Keep It Simple, Stupid!
    That is, just because there are neat things like drop-down text, marquees, and such, doesn't mean you should use them.
    Stick to the basic HTML fonts and colors (use the w3schools web site for all things HTML and CSS).
    Instead of styles, create your lists by selecting Normal paragraphs and formatting with the Bullet and Number toolbar buttons.
    Keep your tables as simple as possible (try not to nest them and have all sorts of row and column spans, and try to avoid lists and figures, if you can). Also, break up very long tables into functional groupings with introductory headings.
    Use Peter Grainge's web site and Rick Stone's web site for all the best workarounds and diagnostics.
    Good luck,
    Leon

  • A must read best practices when starting out in Designer

    Hi,
    Here is a link to a blog by Vishal Gupta on best practices when developing XFA Forms.
    http://www.adobe.com/devnet/livecycle/articles/best-practices-xfa-forms.html
    Please go read it now; it is excellent :-)
    Niall

    I followed below two links. I think it should be the same even though the links are 2008 R2 migration steps.
    http://kpytko.pl/active-directory-domain-services/adding-first-windows-server-2008-r2-domain-controller-within-windows-2003-network/
    http://blog.zwiegnet.com/windows-server/migrate-server-2003-to-2008r2-active-directory-and-fsmo-roles/
    Hope this helps!

  • Need advise for best practice when using Toplink with external transaction

    Hello;
    Our project is trying to switch from TopLink-controlled transactions to external transactions so we can perform database operations and JMS operations within a single transaction.
    Some of our team tried out the TopLink support for external transactions and came up with the following initial recommendations.
    Since we are not familiar with using external transactions, I would like members of this forum, and the experts, to help comment on whether these recommendations are indeed valid and in line with best practice. And for folks who have done this in their projects, what did you do?
    Any help will be most appreciated.
    Data Access Objects must be enhanced to support reading from a TOPLink unit of work when using an external transaction controller. Developers must consider what impact a global transaction will have on the methods in their data access objects (DAOs).
    The following findSomeObject method is representative of a “finder” in the current implementation of our DAOs. It is not especially designed to execute in the context of a global transaction, nor read from a unit of work.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        ClientSession clientSession = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            obj = (SomeObject) clientSession.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            clientSession.release();
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    However, after making the following changes (in blue) the findSomeObject method will now read from a unit of work while executing in the context of a global transaction.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        Session session = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            if (TransactionController.getInstance().useExternalTransactionControl()) {
                session = session.getActiveUnitOfWork();
                readObjectQuery.conformResultsInUnitOfWork();
            }
            obj = (SomeObject) session.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            if (TransactionController.getInstance().notUseExternalTransactionControl()) {
                session.release();
            }
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    When getting the TOPLink client session and reading from the unit of work in the context of a global transaction, new objects need to be cached.
    public UnitOfWork getUnitOfWork(ILoginUser aUser) throws DataAccessException {
        ClientSession clientSession = getClientSession(aUser);
        UnitOfWork uow = null;
        if (TransactionController.getInstance().useExternalTransactionControl()) {
            uow = clientSession.getActiveUnitOfWork();
            uow.setShouldNewObjectsBeCached(true);
        } else {
            uow = clientSession.acquireUnitOfWork();
        }
        return uow;
    }

    As it generally is with this sort of question there is no exact answer.
    The only required update when working with an external transaction is that getActiveUnitOfWork() is called instead of acquireUnitOfWork(); other than that, the semantics of the calls, and when you use a UnitOfWork, still depend on the requirements of your application. For instance, I noticed that originally the findSomeObject method did not perform a transactional read (no UnitOfWork). Have the requirements for this method changed? If they have not, then there is still no need to perform a transactional read, and the method would not need to change.
    As for the requirement that new objects be cached: this is only required if you are not conforming the transactional queries, and it adds a slight performance boost for find-by-primary-key queries. In order to use this, however, objects must be assigned primary keys by the application before they are registered in the UnitOfWork.
    --Gordon

  • Need best practice when accessing an ucm content after being transferred.

    Hi All,
    I have a business requirement where I need to auto-transfer the content to another UCM when this content expires in the source UCM.
    This content needs to be deleted after it spends a certain duration in the target UCM.
    Can anybody advise me the best practice to do this in the Oracle UCM?
    I have set up an expiration date and am trying to auto-replicate the content to the target UCM once the content reaches the expiration date.
    I am not aware of the best practice for accessing the content once it is in the target UCM.
    Any help in this case would be greatly appreciated.
    Regards,
    Ashwin

    SR,
    Unfortunately, temp tables are the way to go. In APEX we call them collections (not the same as PL/SQL collections) and there's an API for working with them. In other words, the majority of the leg work has already been done for you. You don't have to create the tables or worry about tying data to different sessions. Start your learning here:
    http://download.oracle.com/docs/cd/E14373_01/appdev.32/e11838/advnc.htm#BABFFJJJ
    Regards,
    Dan
    http://danielmcghan.us
    http://sourceforge.net/projects/tapigen
    http://sourceforge.net/projects/plrecur
    You can reward this reply by marking it as either Helpful or Correct ;-)

  • Best practice when doing large cascading updates

    Hello all
    I am looking for some help with tackling a fairly large cascading update.
    I have an object tree that needs to be merged using JPA and Toplink.
    Each update consists of 5-10000 objects with a decent depth as well.
    Can anyone give me some pointers/hints towards a best practice for doing this? Looping through each object with JPA's merge takes minutes to complete, so I would rather not do that.
    I have never actually used TopLink's own API before, so I am especially interested in whether TopLink has an effective way of handling this, preferably with a link to some related reading material.
    Note that I have posted a somewhat duplicate question (noting this for good forum practice) at
    http://stackoverflow.com/questions/14235577/how-to-execute-a-cascading-jpa-toplink-batch-update
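    A minimal sketch of one common workaround, shown here with plain JPA rather than the TopLink-specific API: merge in batches and periodically flush and clear the persistence context so it does not grow unbounded. The batch size and the transaction handling are assumptions for illustration, not something taken from this thread.

    import java.util.List;
    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;

    public class BatchMerger {

        private static final int BATCH_SIZE = 500; // tuning value, adjust to taste

        // Merge a large list of detached objects, flushing and clearing the
        // persistence context every BATCH_SIZE entities to keep its size and
        // the dirty-checking cost bounded.
        public <T> void mergeAll(EntityManagerFactory emf, List<T> objects) {
            EntityManager em = emf.createEntityManager();
            try {
                em.getTransaction().begin();
                int count = 0;
                for (T entity : objects) {
                    em.merge(entity);
                    if (++count % BATCH_SIZE == 0) {
                        em.flush();
                        em.clear();
                    }
                }
                em.getTransaction().commit();
            } finally {
                if (em.getTransaction().isActive()) {
                    em.getTransaction().rollback();
                }
                em.close();
            }
        }
    }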

    Not certain what you think you can't do. Take a long clip and open it in the Viewer. Set In and Out points. Drop that into the Timeline. Now you can move along in the Viewer clip and set new Ins and Outs and drop those into the Timeline. Clips in the Timeline are created from the Ins and Outs you set in the Viewer.
    Is that what you want to do? If it is, I don't see how making copies of the clip would work for you.
    Later, if you want to match up a clip in the Timeline to that master clip, just use Match Clip (find) in the Timeline to find where it correlates to your main clip.
    You can have FCE automatically create subclips at camera cut points by using DV Start/Stop Detect, if that is what you're looking for.
