Database design question about historical data in a group of tables

Hi Folks,
I have a group of tables with relationships among them. In order to keep the change history, we cannot update the data; instead, we add new rows to the table(s) and mark the older rows with a non-current status. All of these tables have timestamp columns.
For example, if table Parent changes, we add a new record Parent(new) and keep the older record(s). But if the Child table has not changed, how do we link the parent and child table(s)?
One solution is to use a unique sequential number that identifies a snapshot of the whole group of tables; the FK then carries this sequential number to keep all tables in sync from point in time t1 to t2, and so on.
But the problem is that if only one table changes, we have to insert new records into ALL tables with the new sequential number to indicate a new snapshot of the whole group, which obviously creates a lot of redundancy when the change occurs in only one place.
However, if we only add new records to the changed table, say Parent, how do we distinguish the current record in Parent and its child tables so that they reflect a consistent snapshot of all tables? The records Parent(t2) and Parent(t1) both associate with Child(t1), because at time t2 only the Parent table changed and the Child table did not.
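For illustration, here is a minimal point-in-time ("as of" time t) query sketch for the second option, assuming each table carries its own version timestamp column; all table and column names are made up, not our real schema:

    -- Reconstruct a consistent snapshot as of bind time :t without inserting
    -- rows into unchanged tables: per key, take the latest version at or before :t.
    SELECT p.parent_id, p.eff_ts AS parent_version,
           c.child_id,  c.eff_ts AS child_version
    FROM   parent p
    JOIN   child  c ON c.parent_id = p.parent_id
    WHERE  p.eff_ts = (SELECT MAX(p2.eff_ts) FROM parent p2
                       WHERE  p2.parent_id = p.parent_id AND p2.eff_ts <= :t)
    AND    c.eff_ts = (SELECT MAX(c2.eff_ts) FROM child c2
                       WHERE  c2.child_id = c.child_id AND c2.eff_ts <= :t);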
Your opinions are appreciated!
Thanks a lot.

There are books on the subject of dealing with time series data. You may need to read one or two, as this is a very complex topic. Not all applications of this kind are complex, though; it is difficult to tell based on the limited information that fits in a post.
What has to be reflected on the parent when a child row is changed? Do both the old child row and the new child row belong to the same parent?
What activity at the parent level would affect the child rows? That is, is there any activity on the parent that requires new child rows to be populated?
One way of tying the child rows to a specific set of parents would be to carry the parent key and timestamp to the child as a begin_parent_timestamp, and then also potentially have an end timestamp if a change to a parent ends the relationship. If changes to the parent never end the relationship, then the end timestamp would not be necessary. In this case, if you want to join a parent to only the most recent version of a child, you can perform a select parent_key, child_key, max(begin_parent_timestamp) from child group by parent_key, child_key. One child row would match several parents.
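A minimal sketch of that suggestion, with hypothetical PARENT/CHILD tables in which the child carries the parent's key and timestamp (all names are illustrative, not taken from the poster's schema):

    -- Child rows record the parent version under which the relationship began.
    CREATE TABLE parent (
      parent_key NUMBER    NOT NULL,
      parent_ts  TIMESTAMP NOT NULL,            -- version timestamp of this parent row
      CONSTRAINT parent_pk PRIMARY KEY (parent_key, parent_ts)
    );

    CREATE TABLE child (
      child_key              NUMBER    NOT NULL,
      parent_key             NUMBER    NOT NULL,
      begin_parent_timestamp TIMESTAMP NOT NULL, -- parent version that began the relationship
      end_parent_timestamp   TIMESTAMP,          -- NULL if no parent change has ended it
      CONSTRAINT child_pk PRIMARY KEY (child_key, begin_parent_timestamp)
    );

    -- The group-by described above: the most recent parent version per child.
    SELECT parent_key, child_key,
           MAX(begin_parent_timestamp) AS latest_parent_ts
    FROM   child
    GROUP  BY parent_key, child_key;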
Without more specifics it is hard to make suggestions that might prove usable but your table relationships might be too complex to deal with in this kind of forum.
There is a newsgroup on database theory that may be a good place to seek ideas on this type of problem.
HTH -- Mark D Powell --

Similar Messages

  • Question about tranferring data from iPhone 3gs to iPhone 4

    I just had a couple of quick questions about transferring data from my old phone to my new iPhone 4. The reason I am wondering is that I am worried about whether I will encounter any problems when doing so.
    First off, I have already sold my phone today; I reset all data and settings on the phone and gave it to my buddy, so it's gone. I did a full sync and backup yesterday, so all the necessary files should be on my computer (Windows 7). Now I'm basically wondering if I will run into any problems if I restore my iPhone 4 from a backup. My 3GS was running 3.1.2 on AT&T. I know IDEALLY I would have updated it to iOS 4 before backing it up and used the newest version of iTunes, but I did not. Does anyone think this will be a problem for me?
    Now with that out of the way, my biggest fear is losing my old data (text messages and notes mainly, because I am a pack rat for those types of things), so I'd like to be SURE that none of my old backups will be deleted in any scenario. The reason I don't just restore it right now is that I want my new phone to be as clutter-free as possible. I am going to put on here only the apps that I used often and would basically like to transfer over the BARE minimum: texts, notes, and highly used apps. So I guess my main question is: can you transfer over only certain things like texts and notes after setting up the phone as a new phone? And if I were to set up the phone as a new phone, what would happen to my old backups? Would I be able to selectively restore?
    I'm afraid that it might not be possible to transfer only certain things, even though it should be. I should be able to select a text messages folder, put it on my new phone, and be done with it. But anyway, I don't want to rant. Can anyone explain to me how this all will work?
    ULTIMATE GOAL: Transfer only texts, notes, certain apps (and their data) and NOTHING ELSE.
    MOST IMPORTANT THING: Not losing texts and notes. I can deal with putting all the old **** on my new phone and cluttering/slowing it down if I NEED to.
    Thank you in advance, sorry for the long post.

    If the most important thing for you is keeping old text messages, notes, and voicemail, then you'll need to sync the phone from your existing backup. I know of no other way to access those items.
    Once you have synced to the new phone, check that you have those items that were important. Then you can reconnect your phone to iTunes, and change the sync settings to remove the apps or other items you no longer want to keep on the phone.
    iPhone backups are stored by iTunes; you can see them by opening your iTunes preferences, clicking on "Devices" and then looking in the window. You can delete old backups from here. I don't know how you can open/read the backups though.
    I don't expect you'd have any problems syncing from your old phone's backup, but it's definitely an either/or situation. Since you got rid of the old phone already, it's too late to email yourself your notes, or copy the text messages. Your previous backup is your only solution.

  • Question about displaying data

    Based on the search criteria I need to show some info with a check box on each item, but it might return up to fifty items (between 1 and 50 items). Should I create the checkbox/output text dynamically, or how do I handle this scenario? In the JSPX, how do I loop over and display them? Thank you.

    hi user13094256
    See my reply in your other forum thread with the same question
    at Question about displaying data
    regards
    Jan Vervecken

  • Design question about when to use inner classes for models

    This is a general design question about when to use inner classes or separate classes when dealing with table models and such. Typically I'd want to have everything related to a table within one class, but looking at some tutorials that teach how to add a button to a table, I'm finding that you have to implement quite a sophisticated table model which, if nothing else, is somewhat unwieldy to put in an inner class.
    The tutorial I'm following in particular is this one:
    http://www.devx.com/getHelpOn/10MinuteSolution/20425
    I was just wondering if somebody can give me their personal opinion as to when they would place that abstracttablemodel into a separate class and when they would just have it as an inner class. I guess re-usability is one consideration, but just wanted to get some good design suggestions.

    It's funny that you mention that, because I was comparing how the example I linked to above creates a usable button in the table with how you implemented it in another thread where you used a ButtonColumn object. I was trying to compare both implementations but, being a newbie at this, they seemed entirely different from each other. The way I understand the example above, it creates a TableRenderer which should be able to render any component object, then sets the default renderer to the default and JButton.class' renderer to that custom renderer. I don't totally understand your design in the thread
    http://forum.java.sun.com/thread.jspa?forumID=57&threadID=680674
    quite yet, but it's implemented in quite a different way. Like I was saying, the button class that you created seems to create an object whose function I don't quite see. It looks more like a method, but I'm still trying to see how you did it, since it obviously worked.
    Man, adding a button to a table is much more difficult than I imagined.
    Message was edited by:
    deadseasquirrels

  • Questions about Mapping GL Accounts to Group Accounts

    Hi,
    I have some questions about mapping gl accounts to group accounts while configuring OBIEE APPS 7.9.6.3 with EBS R12 as a source:
    FIRST QUESTION.-
    For file file_group_acct_codes_ora.csv, I have the following accounts from my customer:
    101101 - Caja Administrativa
    101102 - Fondo Revolvente
    101103 - Caja de Cambios
    101104 - Efectivo en cajero
    This group of accounts is named CASH. Now my customer says that this group begins at 101101 and ends at 101199, but at the moment only these 4 accounts exist in GL; the rest of the accounts, I mean 101105-101199, are not used right now, they are going to be used in the future.
    So, my question is, in file_group_acct_codes_ora.csv how do I need to enter this group:
    In this way:
    CHART OF ACCOUNTS ID,FROM ACCT,TO ACCT,GROUP_ACCT_NUM
    50308,101101,101104,CASH
    Or in this way:
    CHART OF ACCOUNTS ID,FROM ACCT,TO ACCT,GROUP_ACCT_NUM
    50308,101101,101199,CASH
    I mean, is there any problem if I use the second way, or is it necessary to do it the first way, and why?
    SECOND QUESTION.-
    For file file_group_acct_names.csv, when I update with a new group of accounts, is there any rule or size boundary for GROUP_ACCOUNT_NAME?
    THIRD QUESTION.-
    For file_group_acct_names.csv, what is the value in column LANGUAGE? I mean, is it the EBS language, the DB language, or the server language?
    I hope that someone can help me, because I need to clarify this before doing the first full load, so that the load does not end in error because of this.
    Regards,
    Arnulfo

    I'll take some broad swipes at this and let the smarter people come fill in the details.
    We have a true 1:1 setup in our office and have moved to PHDs as a means of protecting against downtime. The thinking is that we will have a spare machine lying around with our base installation ready to go. If a user's machine fails we'll replace it with the spare machine, let it sync the user directory from the server, and we're back in business. It's no substitute for a real backup system, but it potentially avoids having to run a restore from your backups. It also reduces network traffic compared to plain networked homes, and still lets your users work if the server goes down, but provides the benefits of centralized management. John DeTroye wrote a nice article about this.
    If you've already got data on your "client" Mac you will need to move it onto the server. PHDs will download data from the server to the client on the first sync, but will not upload a complete home directory from the client to an empty directory on the server. You'll find some posts in this forum discussing how people have gone about migrating data prior to that first sync.
    WGM allows you to establish exclusions for stuff you don't want to sync.
    One thing to watch out for in the scenario you describe is the so-called "rabbit effect." Assume Bob uses Mac1 as his primary machine. If one day he logs into Mac2 his home directory will be downloaded to Mac2. Once he returns to Mac1 he'll still be cluttering up Mac2 with his data. If he logs into Mac3 the next day and Tom and Sue are also periodically logging into different machines, you can see how you'll end up with a mess pretty quickly.
    Hope this helps.

  • Question about inputting data to a pivot table

    Hi,
    I have 2 questions about using the ADF pivot table component (I would like it for data input).
    1. Is it possible to paste into multiple cells (i.e. using cut in Excel for example, and then pasting the cells into the pivot table) ?
    2. What is the recommended approach for the case where there are no data values (no rows in the database table)? For example, if I have regions, products, measures and time periods, and I want to be able to select some regions and some products to enter new sales figures - and then create the rows in the database with a save button - is there a recommended way to do this?
    (Jdeveloper version: 11.1.2.0.0)

    Anybody able to help with this?

  • Design question about SNASw, DLSW and VDLC

    Hello,
    I have a question about Ethernet redundancy in an APPN environment.
    Let's have an example with 3 routers running SNASw that are on the SAME LAN (no vlans) as the Mainframe's OSA (one OSA only). APPN is configured on the Mainframe.
    Using DLSw+, all downstream PUs are connected to the 3 routers. Can I define the SAME MAC address on the VDLC interface of each router, and have this MAC address be the destination MAC of the downstream PUs?

    Hi,
    yes, head-end routers are the ones in front of the OSA/mainframe.
    If you replace a token ring with Ethernet in the data center/head end, then the SNASw/DLSw solution is almost perfect for you. If you use HPR/IP to connect upstream to the host, you are all set.
    In that case you don't advertise any MAC addresses on the local Ethernet between the SNASw/DLSw routers and the OSA, since it is HPR/IP. Basically IP routing only.
    From the clients' perspective, they don't really know that there is a change, since you replicate the old token ring MAC address as the VDLC MAC address on the SNASw port and the end systems still connect to it like they did before.
    With respect to DLSw Ethernet redundancy, we have to be a bit careful not to mix the scenarios.
    DLSw Ethernet redundancy is designed for the branch, not the data center.
    If you use DLSw Ethernet redundancy with Ethernet switches, and in almost all cases today Ethernet means Ethernet switches, you configure a MAC address mapping between artificial local MAC addresses and the real remote MAC address of the host.
    On each router you configure a unique local MAC address. Then you point half of your end systems' DMACs to the local MAC address configured on router1 and the other half to the local MAC address configured on router2. That way you achieve load balancing.
    The two routers exchange their mappings, and in case router1 loses the connection to router2, router1 will activate the mapping it learned from router2 as well and then additionally take over those circuits.
    If you decide to configure on router1 a local MAC address equal to the remote MAC address, because you have a large number of clients and cannot simply change the DMACs on all of them, then you need to configure a "dummy" mapping on router2, and router1 will get all the circuits in this example. router2 would be purely for redundancy in case router1 goes down.
    If you think about this, it is clear why DLSw Ethernet redundancy is designed for the branch. In the branch we map local to remote MAC addresses, and the remote MAC addresses are the hosts'. Typically there are only a limited number of host MAC addresses to map.
    If you turn this around and put DLSw Ethernet redundancy on the host end, then you have to map all clients. If you have only one or two clients this is certainly doable, but if you have a large number of clients it is simply not manageable.
    thanks...
    Matthias

  • Sql developer: question about exporting data

    Hi,
    we have recently started working with SQL Developer. I've got a question about how we can export query results to txt/csv files for use in other applications.
    First a problem: if we start a query that looks like this:
    select * from
    (
    select * from A where start_date = &date
    ) a,
    (
    select * from B where start_date = &date
    ) b
    where a.name = b.name
    Sql-developer asks twice to input a value for the variable 'date', although it's the same variable and it's supposed to have the same value.
    We solve this by making a script:
    first we define the variable, then we put the query.
    When we start the script, the query runs ok and sql developer asks to input the value for the variable once.
    But now the result of the query is shown in the script output. The script output seems to be limited in number of lines and difficult to export.
    So my question is: what's the best way to export query results to txt/csv files, avoiding the problem mentioned above?
    I hope there is a solution where we can use a single query or script.
    Thanks in advance!

    Using bind variables like ":date" should solve the problem of being asked twice for the same thing.
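    For example, a sketch of the query above rewritten with a bind variable (:run_date is just an illustrative name):

        -- SQL Developer prompts once per distinct bind name, so the date is asked for only once.
        SELECT *
        FROM   (SELECT * FROM A WHERE start_date = :run_date) a,
               (SELECT * FROM B WHERE start_date = :run_date) b
        WHERE  a.name = b.name;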
    Executing the query normally (F9), gives you the export options you require through the context menu inside the Results grid.
    Regards,
    K.

  • A simple question about DAQ data sampling

    Hi all:
    Now I have a very basic question about DAQ sampling. I am using an NI PCI-6040E DAQ card, an SCXI-1001 chassis, an SCXI-1102, an SCXI-1300 terminal block, and an SCXI-1160 relay module.
    I am not sure whether it is possible to measure a voltage (1 volt) across a resistor. Right now I am connecting Ch0+ to resistor+ (24 volts) and Ch0- to resistor- (23 volts), so the voltage between resistor+ and resistor- should be 1 volt. Actually, it was working at first, but then I wanted to test 9 channels based on this connection, and Measurement & Automation Explorer can't read data from the DAQ card. (When I test it with a multimeter, I can get the voltage on Ch0+ and Ch0- of the SCXI-1300 terminal block.)
    That's strange; does anybody know what's wrong?
    Thanks a lot

    Hi hanwei,
    According to the specifications of the PCI 6040E (page 3), the input signal and common mode voltage should never exceed 11V from ground.  I believe this is the reason you are able to measure the potential of a battery but not the 24V signal (even though the differential value is only 1V). 
    Best Regards
    Hani R.
    Applications Engineer
    National Instruments

  • Design question about instant download a patch

    Hi All,
    Here is a design question for you:
    Background:
    The application we built is upgraded from time to time and we send it to our users.
    Our users use it on a network, so there is only one file to upgrade (and it is done by the sys admin).
    We send them a 'patch'; actually, it's a new version of the application. They place it in the relevant folder and continue to work with the new version.
    Problems:
    1. We send them the patch via email - sometimes it takes a while until they read their email, and until then they are using an 'old version of the system'.
    2. Some of them are not computer savvy (when it's not the sys admin) and we need to guide them as to where exactly to place the file (there are 3 files).
    Our (conceptual, never yet built) solution:
    Build a program, just like Norton Antivirus (or others), that prompts the user from the task bar (next to the clock) that a new version is available and, by clicking once, automatically downloads the file and stores it in the correct folder.
    Question:
    1. Did anyone try anything like this before (or something like it) who can tell me about it?
    2. Do you think this kind of system would work for us?
    3. Does anyone have a better solution?
    Thanks
    Peter

    Thanks, I posted it there.
    No, he meant that Webstart is the mechanism you should use. It supports net-based distribution and automatic, centralized updates of apps. Exactly what you want.

  • Design question about instant download from

    Hi All,
    Here is a design question for you:
    Background:
    The application we built is upgraded from time to time and we send it to our users.
    Our users use it on a network, so there is only one file to upgrade (and it is done by the sys admin).
    We send them a 'patch'; actually, it's a new version of the application. They place it in the relevant folder and continue to work with the new version.
    Problems:
    1. We send them the patch via email - sometimes it takes a while until they read their email, and until then they are using an 'old version of the system'.
    2. Some of them are not computer savvy (when it's not the sys admin) and we need to guide them as to where exactly to place the file (there are 3 files).
    Our (conceptual, never yet built) solution:
    Build a program, just like Norton Antivirus (or others), that prompts the user from the task bar (next to the clock) that a new version is available and, by clicking once, automatically downloads the file and stores it in the correct folder.
    Question:
    1. Did anyone try anything like this before (or something like it) who can tell me about it?
    2. Do you think this kind of system would work for us?
    3. Does anyone have a better solution?
    Thanks
    Peter

    Java WebStart is the deployment technique suited for your purposes.
    The deployment will be done via a webserver running a jnlp servlet (provided).
    The applications may either run offline, or check online for automatic updates.
    You have several configuration options.
    ArgoUML and jEdit are two open source apps delivered this way.
    Your app will need to be adapted though.
    And your customers need a web browser with an installed JVM.

  • Question about the data in 2LIS_04_P_COMP extractor (0pp_c05 infocube)

    Hi gurus.
    My organization needs to compare the materials that go into the production process with the materials contained in the products that come out of the production process.
    In R/3 there is a calculation that distributes the material loss in the process among the finished products, so the lost material goes back into the full picture.
    This calculation is called DUV (Distribution of Usage Variances).
    I want to know whether the data extracted to BW through 2LIS_04_P_COMP reflects this calculation or not.
    An answer about which table this extractor takes its data from would help as well.
    I cannot find any documentation about it.
    Maybe someone can help me.
    Thanks in advance
    Hagai Jacoby – BW advisor

    Dear Adir,
    This datasource gets data from AFKO and AFPO tables in R/3
    Hope it helps.
    Thanks,
    Krish

  • Sliding window for historical data purge in multiple related tables

    All,
    It is a well-known question how to efficiently BACKUP and PURGE historical data based on a sliding window.
    I have a group of tables; they all have to be backed up and purged based on a sliding time window. These tables have FKs relating them to each other, and these FKs are not necessarily the timestamp column. I am considering partitioning all these tables on the timestamp column, so I can export the out-of-date partitions and then drop them. The price I pay with this design is that the timestamp column is duplicated many times among the parent table, child tables, and grandchild tables even though the value is the same, because I have to partition on this column in all tables.
    It's very much like the statspack tables: one stats$snapshot and many child tables to store the actual statistic data. I am just wondering how statspack.purge does this, since using a DELETE statement is very inefficient and time consuming. In the statspack tables, snap_time is stored only in the stats$snapshot table, not everywhere in its child tables, and they are not partitioned. I guess the procedure uses DELETE statements.
    Any thoughts on other good design options? Or how would you optimize the backup and purge of the statspack tables' historical data? Thanks!
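    For what it's worth, a minimal sketch of the partition-based idea (Oracle range partitioning; the table name, partition names, and date boundaries are illustrative):

        -- Partition the historical table on its timestamp column so the sliding window
        -- can be maintained with DROP PARTITION instead of slow DELETEs.
        CREATE TABLE history_fact (
          snap_ts   DATE          NOT NULL,
          parent_id NUMBER        NOT NULL,
          payload   VARCHAR2(100)
        )
        PARTITION BY RANGE (snap_ts) (
          PARTITION p_2007_q1 VALUES LESS THAN (TO_DATE('2007-04-01', 'YYYY-MM-DD')),
          PARTITION p_2007_q2 VALUES LESS THAN (TO_DATE('2007-07-01', 'YYYY-MM-DD')),
          PARTITION p_2007_q3 VALUES LESS THAN (TO_DATE('2007-10-01', 'YYYY-MM-DD'))
        );

        -- Export/back up the out-of-window partition first (e.g. with Data Pump), then drop it;
        -- this is fast compared with a bulk DELETE of the same rows.
        ALTER TABLE history_fact DROP PARTITION p_2007_q1;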

    hey oracle gurus, any thoughts?

  • Question about the MAKZN field in the RBKP table

    Hello all.
    I have a question about the MAKZN field. Does anyone know what field in MIRO is assigned to this field? We have an issue where a line item amount was not selected, the invoice was out of balance, but the agent chose "accept and post" and the invoice posted. I am interested in knowing where the amount is keyed in, because when I go to the RBKP table I see an amount entered in MAKZN (manually accepted net difference amount).

    Hi,
    it seems as if the value was calculated internally:
    program SAPLMR1M
    dynpro 6000
    PAI module fcode_6000
    Include LMR1MI3W
    *-------- buchen (post) -----------------------------------------------*
        WHEN fcobu OR fcomanak.
    *--- identical code in PAI Module FCODE_6250 --------------------------*
          PERFORM ota_check USING vf_kred-xcpdk rbkpv-xcpdd
                            CHANGING rc.
          IF rc NE 0.
            CLEAR ok-code.
            EXIT.
          ENDIF.
          IF ok-code = fcomanak.
            PERFORM diff_akzeptieren.
            ok-code = fcobu.
          ENDIF.
    where:
    fcomanak          LIKE ok-code VALUE 'MANAK', " Manuell akzeptiert (manually accepted)
    *&      Form  DIFF_AKZEPTIEREN
    *       Differenz manuell akzeptieren (accept the difference manually)
      FORM diff_akzeptieren.
    *       Manuell akzeptierter Betrag (manually accepted amount)
        rbkpv-makzn  = rbkpv-makzn + rbkpv-diffn.
        rbkpv-makzmw = rbkpv-makzmw + rbkpv-diffmw.
    *       Differenzbeträge (difference amounts)
        CLEAR: rbkpv-diffn, rbkpv-diffmw.
      ENDFORM.                             " DIFF_AKZEPTIEREN
    Maybe it's happening when manually releasing the invoice in MRBR?
    Best regards.

  • A question about BPEL data formats

    Hi everyone,
    I am new to BPEL and have a question regarding the format of data "inside" BPEL variables. Perhaps someone can clear this up?
    When defining a variable in BPEL, I can either give it an XML Schema simple or complex type - or a "WSDL message type". I am confused with regard to the last one. A WSDL message has parts, and the parts have types themselves. These types, however, can be from an arbitrary type language. XML Schema is one possibility, but there might be others. I have two questions with regard to this:
    1) If I use some proprietary schema instead of XML Schema, can I handle data in this format in the BPEL process? I.e., can I assign "literal" data to the parts in my own format, and (if it is XML-based) manipulate it with XPath?
    2) WSDL allows the definition of styles and encodings in the concrete part. How does the conversion happen between what's on the wire (in the SOAP body) and inside the BPEL variables? I.e. when using RPC/Encoded, does the Oracle BPEL engine automatically convert everything into plain-XML-Schema-validated-XML and back or does RPC/Encoded XML show up inside the variables? What if the BPEL engine doesn't understand the concrete format?
    Thanks a lot!
    Reto

    Fabio,
    I am not sure I understood you clearly. But ODI uses the underlying technology of the target database to perform the CDC. There are several KMs for each technology that let you achieve the same.
    e.g. for Oracle, a trigger-based CDC is available, which creates triggers on the underlying tables in the database to capture the changes.
    Also for Oracle, a LogMiner-based CDC is available, which reads the Oracle logs to capture the changes made to the tables.
