Data Aging design
Hi
This may be one for the gurus, or there could be a standard way to do it.
I'm looking for some help in designing a database. The problem I have is related to data aging on Oracle instances. I can't find any help on it.
The problem I have is that I've got 3 categories of data: new, warm, and old. Essentially, new is data up to 1 hour old, warm is data kept for 24 hours, and old is data kept for 6-8 weeks.
So the data needs to age. I assume the mechanics of it involve partitioning, using a rolling or sliding window mechanism to move the partitions, but I don't know the details.
Any help is appreciated.
Thanks for your help.
Bob
scope_creep wrote:
The problem I have is that I've got 3 categories of data, which is new/warm/old. Essentially, new is data which is 1 hour old. Warm is data which is kept for 24 hours, old is kept for 6-8 weeks.
So the data needs to age. I think obviously the mechanics of it involves partitioning, and using a rolling or sliding window mechanism to move the partitions, but don't know.
Partitioning, without a doubt.
Any other method will require you to
a) read the old data from the current table
b) write the old data to the old table
c) delete the old data from the current table
d) commit
This is horribly expensive, as you are simply moving data from one part of the database to another. Lots and lots of I/O - and I/O is the most expensive operation you can perform on a database.
Using partitioning:
a) create a staging table (this can be done once up front or dynamically at the time for multi-threading/thread safety)
b) issue an alter table exchange partition (current table) and place the partition with the old data into the staging table
c) issue an alter table exchange partition (old table) and place the staging table contents into the old partition table
d) optionally drop and purge staging table if dynamically created
How much I/O? Negligible, as this only entails a data dictionary update. The actual row contents are not moved; the partition simply changes ownership (sub-second, usually). Which means that performance is consistent and not subject to the size of the data to "move".
I would also include indexes with the partition exchange, and skip validation, as no new data is placed into the partition - the existing data has already been validated for that specific partition.
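A minimal sketch of steps a) to d) as Oracle DDL. The table, partition, and staging-table names are made up for illustration, and the staging table would need matching local indexes for the INCLUDING INDEXES clause to apply:

```sql
-- a) staging table, structurally identical to the partitioned table
--    (Oracle 12.2+ offers CREATE TABLE ... FOR EXCHANGE WITH TABLE;
--    on older releases an empty CTAS copy works)
CREATE TABLE stage_data AS SELECT * FROM current_data WHERE 1 = 0;

-- b) swap the aged partition out of the current table into staging
ALTER TABLE current_data
  EXCHANGE PARTITION p_old
  WITH TABLE stage_data
  INCLUDING INDEXES WITHOUT VALIDATION;

-- c) swap the staging table's rows into the matching (empty)
--    partition of the history table
ALTER TABLE old_data
  EXCHANGE PARTITION p_old
  WITH TABLE stage_data
  INCLUDING INDEXES WITHOUT VALIDATION;

-- d) drop the staging table if it was created dynamically
DROP TABLE stage_data PURGE;
```

Each EXCHANGE PARTITION is a data dictionary operation only - no row data is rewritten, which is why it completes in sub-second time regardless of partition size.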
Partitioning is an extra licensing option for Enterprise Edition. IMO, it does not make sense to use EE and not have partitioning. It is one of the highest-impact features there is - it reduces I/O, significantly increases performance, and makes data management exponentially easier.
Similar Messages
-
Performance impacts of attributes versus entities in data model design
I'm trying to understand the performance implications of two possible data model designs.
Here's my entity structure:
global > the person > the account > the option
Typically at runtime I instantiate one person, one account, and five options.
There are various amounts determined by the person's age that need to be assigned to the correct option.
Here are my two designs:
Design one
attributes on the person entity:
the person's age
the person's option 1 amount
the person's option 2 amount
the person's option 3 amount
the person's option 4 amount
the person's option 5 amount
attributes on the option entity:
the option's amount
supporting rule table:
the option's amount =
the person's option 1 amount if the option is number 1
the person's option 2 amount if the option is number 2
the person's option 3 amount if the option is number 3
the person's option 4 amount if the option is number 4
the person's option 5 amount if the option is number 5
Design two
attributes on the person entity:
the person's age
attributes on the option entity:
the option's amount
the option's option 1 amount
the option's option 2 amount
the option's option 3 amount
the option's option 4 amount
the option's option 5 amount
supporting rule table:
the option's amount =
the option's option 1 amount if the option is number 1
the option's option 2 amount if the option is number 2
the option's option 3 amount if the option is number 3
the option's option 4 amount if the option is number 4
the option's option 5 amount if the option is number 5
Given the two designs, I can see what looks like an advantage for Design one in that at runtime you have fewer attributes (6 on the one pension member + 1 on each of 5 options = 11) than in Design two (1 on the one pension member + 6 on each of 5 options = 31), but I'm not sure. An advantage for Design two might be that the algorithm has to do less traversing of the entity structure: the supporting rule table finds everything for the option's amount on the option.
Either way there is a rule table to determine the amounts:
Design one
the person's option 1 amount =
2 if the person's age = 10
5 if the person's age = 11
7 if the person's age = 12, etc.
Design two
the option's option 1 amount =
2 if the person's age = 10
5 if the person's age = 11
7 if the person's age = 12, etc.
Here it looks like the rulebase would have to do more traversing of the entity structure for Design two.
Which design is going to have better performance with a large number of rules, or would it make a difference at all?
Hi!
In our experience you only need to think about things like this if you were dealing with 100s or 1000s of instances (typically via ODS). As you have a very low number, the differences will be negligible, and you should (usually) go with the solution which is the most similar to the source material or the business user's understanding. I also assume this is an OWD project? Which can be even better, since the inferencing is done incrementally when new data is added to the rulebase, rather than in one "big bang" like ODS.
It looks like design 1 is the simplest to understand and explain. I'm just wondering why you need the option entity at all, since it seems like a to-one relationship? So the person can only have one option 1 amount, one option 2 amount etc, and there are only ever going to be (up to) 5 options... is that assumption correct? If so, you could just keep these as attributes on the person level without the need for instances. If there are other requirements for an option instance then of course, use them, but given the information here, the option entity doesn't seem to be needed. That would be the fastest of all :-)
Either way, as the number of instances is so low, you should have nothing to worry about in terms of performance.
Hope this helps! Write back if you have any more info / questions.
Cheers,
Ben -
Hello -
New user to BO Data Services Designer. Company is using Data Services Version 12.2.
I have many tables that all need the same transformation: converting varchar fields to upper(varchar) fields.
Example:
I have a table called Items. It has 40 columns, 25 of which are of type varchar.
What I have been doing is ....
I make the Items table the source table, then create a Query transform that is attached to a new target table called ITEMS_Production.
I can manually drag and drop columns from the source table to the query and then in the mapping area I can manually type in upper(table_name.column_name) or select the upper function and then select the table.column name from the function drop down list.
Obviously, I want to do this quicker as I have to do this for lots and lots of tables.
How can I set up Data Services so that I can apply an upper transform quickly and easily, or automate this process?
I also know Python-Java so am happy to script something if need be.
the logic would be something like -
if there is a column with type varchar, add the upper() transformation to that column; go to the next column.
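That logic can be sketched in Python. To be clear, this is a hypothetical helper, not a Data Services API call: it just generates the mapping expressions you would otherwise type into each Query transform, given column metadata pulled from the repository or the database catalog (the table and column names below are made up):

```python
def build_mappings(table_name, columns):
    """Return a dict of {column_name: mapping_expression}.

    columns: list of (column_name, data_type) tuples, as you might
    read them from the database catalog or the DS repository.
    varchar columns are wrapped in upper(); everything else is a
    straight pass-through mapping.
    """
    mappings = {}
    for name, data_type in columns:
        if data_type.lower().startswith("varchar"):
            mappings[name] = f"upper({table_name}.{name})"
        else:
            mappings[name] = f"{table_name}.{name}"
    return mappings


# Example: a cut-down Items table with a mix of types
items_columns = [
    ("ITEM_ID", "int"),
    ("ITEM_NAME", "varchar(50)"),
    ("ITEM_DESC", "varchar(255)"),
]
for col, expr in build_mappings("Items", items_columns).items():
    print(f"{col} = {expr}")
```

The generated expressions can then be pasted (or scripted) into the Query transform's mapping area, so only the varchar columns get the upper() wrapper.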
Excited to be using this new tool-
Thanks in advance.
Ray
Use the DS Workbench.
-
Some queries on Data Guard design
Hi, I have some basic questions around Data Guard design
Q1. Looking at the Oracle instructions for creating a logical standby, it firstly advocates creating a physical standby and then converting it to a logical standby. However I thought a logical standby could have a completely different physical structure from the primary. How can this be the case if a logical standby first starts its life as a physical standby ( where structure needs to be identical ) ?
Q2. Is it normal practice to backup your standby database as well – if so why ?
Q3. Can RMAN backup a Standby Database whilst it is in the mounted state ( rather than open ) ?
Q4. What's the point of Cascaded Redo Apply rather than simply getting the Primary to ship to each Standby ?
I guess you could be trying to reduce node to node latency if some of the standby were quite distant from the primary
Q5. Is it possible to convert a Logical Standby back to a Physical one ?
Q6. What's the maximum number of standbys you can have? Oracle suggests 30, but I thought I read somewhere, in relation to 11gR2, that this limit has been increased.
thanks,
Jim
Hi,
Q5. Is it possible to convert a Logical Standby back to a Physical one? --- Yes, it is possible.
1) check switchover status
-- Primary
select switchover_status from v$database;
SWITCHOVER_STATUS
SESSIONS ACTIVE
2) switch operation
-- Primary - start database switchover
alter database commit to switchover to physical standby with session shutdown;
-- see alertlog.
Switchover: Complete - Database shutdown required
Completed: alter database commit to switchover to physical standby with session shutdown
-- database role and switchover_status
select NAME,OPEN_MODE,DATABASE_ROLE,SWITCHOVER_STATUS,PROTECTION_MODE,PROTECTION_LEVEL from v$database;
NAME OPEN_MODE DATABASE_ROLE SWITCHOVER_STATUS PROTECTION_MODE PROTECTION_LEVEL
TESTCRD READ WRITE PHYSICAL STANDBY RECOVERY NEEDED MAXIMUM AVAILABILITY UNPROTECTED
-- Primary
shutdown abort
-- standby - switch to primary
alter database commit to switchover to primary with session shutdown;
-- alert log
Completed: alter database commit to switchover to primary with session shutdown
alter database open
3) Old Primary open
sqlplus / as sysdba
startup nomount
alter database mount standby database
-- database role and switchover_status
select NAME,OPEN_MODE,DATABASE_ROLE,SWITCHOVER_STATUS,PROTECTION_MODE,PROTECTION_LEVEL from v$database;
NAME OPEN_MODE DATABASE_ROLE SWITCHOVER_STATUS PROTECTION_MODE PROTECTION_LEVEL
TESTCRD READ WRITE PHYSICAL STANDBY RECOVERY NEEDED MAXIMUM AVAILABILITY UNPROTECTED
recover managed standby database using current logfile disconnect from session;
4) Old standby
SQL> select NAME,OPEN_MODE,DATABASE_ROLE,SWITCHOVER_STATUS,PROTECTION_MODE,PROTECTION_LEVEL from v$database;
NAME OPEN_MODE DATABASE_ROLE SWITCHOVER_STATUS PROTECTION_MODE PROTECTION_LEVEL
AZKKDB READ WRITE PRIMARY TO STANDBY MAXIMUM PERFORMANCE MAXIMUM PERFORMANCE
Q2. Is it normal practice to backup your standby database as well – if so why?
A physical standby is a block-for-block copy of the primary, so you only have to back up one of the two (you can decide to back up only the standby, if you want to remove load from the primary). But if you are confident in the backup of the primary, there is no need to back up the standby, since you can recreate the standby from the primary's backup.
Please see link : http://blog.dbvisit.com/rman-backups-on-your-standby-database-yes-it-is-easy/
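On Q3: yes, RMAN can back up a physical standby while it is merely mounted, since the datafiles are consistent with the controlfile. A minimal sketch (the connect string is made up; a recovery catalog is recommended so backups taken on the standby are usable for the primary):

```sql
$ rman target sys@standby_db

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
```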
Thank you -
How to save data model design in pdf or any format..?
How to save the data model design in PDF or any format?
I've created a design but am not able to save it as an image or in PDF format.
File -> Print Diagram -> To PDF File
-
My latest MVC 5 project, built with VS 2013, compiles and works fine on my PC but cannot be published to the server using the command-line aspnet_compiler. I keep getting: web.config(132): error ASPCONFIG: Could not load type 'System.Data.Entity.Design.AspNet.EntityDesignerBuildProvider'
In my web.config file, I have the following as suggested by many for solution:
<add assembly="System.Data.Entity, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
<add assembly="System.Data.Entity.Design, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
However, the error persists. I have no clue how to fix this. Hope someone can help.
Hello FredVanc,
As you mention, it works in your development environment, so I wonder whether the assembly is actually present on the server machine. Here is some information I found which might be helpful:
https://support.microsoft.com/en-gb/kb/958975?wa=wsignin1.0
https://social.msdn.microsoft.com/Forums/en-US/1295cfa8-751b-4e9b-a7a7-14e1ad1393b6/compiling-error?forum=adodotnetentityframework
By the way, since this issue is related to an ASP.NET project, I suggest that you post it to the ASP.NET forum, where web experts will help you:
http://forums.asp.net/
Regards.
-
Archiving financial document using Data aging
Hi,
reading documentation about Data Archiving, ILM and SAP HANA, I came across archiving of financial accounting documents in SMART FI 1.0.
It is stated, that the archiving with the archiving object FI_DOCUMENT is replaced by Data Aging for Financial Documents.
There are also Data Aging Objects for IDOCs and Application logs, but still the corresponding archiving objects are described.
Can somebody explain to me, how data aging for FI accounting documents without archiving object FI_DOCUMNT fits to the idea of ILM?
Kind regards
Hajo
Hello Gunasekar,
please check with transaction SU53 or set up an authorization trace; I assume this is a missing-authorization issue on read access of archived documents.
If you have an older SAP Version please also check the settings in FB00 regarding Read Access Archive or Database.
If you are on, I guess, release 4.7 or higher, please also select the "Data Source" button in the header of your transaction and check whether you can see the archive file and select this file.
Regards,
Sebastian -
<<Do not ask for or offer points>>
Hi
Right now we need to implement HR related data - the main concentration is PA, OM
Please suggest the most suitable data flow design.
Note: none of my DataSources support delta.
How can we handle these loads on the SAP BI side - I mean to say, how do we handle these loads using DTPs?
Please suggest a good data flow.
Regards
Madan Mohan
Edited by: Matt on Feb 19, 2010 7:03 AM
Hi Madan,
You can find the data flow in the Metadata Repository in RSA1. Go to the Metadata Repository -> select your DataSource -> network display.
DTP: You can extract both full/delta using the DTP.
Please refer the below links also.
Personnel Administration:
[http://help.sap.com/saphelp_nw04/helpdata/EN/7d/eab4391de05604e10000000a114084/frameset.htm]
Organizational Management:
[http://help.sap.com/saphelp_nw04/helpdata/EN/2d/7739a2a6d5cc4c8d63a514599dc30f/frameset.htm]
Regards
Prasad -
Data Services Designer Issue with Job Server
I am on Business Objects Edge Pro using the Data Services Designer. When I try to execute a job I get "The job server you have selected is not working." In addition, when I try to make any personal change to the environment I get a BODI-1260016 error. Finally, Help > About shows both the Job Engine and Job Server as not responding.
I have opened up my CMC and it shows all servers enabled and functioning. I do not know where to go from here.
Thanks
Cassidy
Voila! I know I am a bit late to the conversation, but here was my solution. I was running Designer on my local machine. We also have Designer on the server. So I decided to remote to the server and compare settings. When the server desktop came up, good old Windows was there, announcing that BODS had changed since the last time it was run, and would I like to allow it to run (Y/N/Cancel)? Thanks, Windows. I said Yes, then went back to my local workstation, tried again to run a job, and no problem.
This has happened with other software services (scripted ftp for example) that run on servers. Seems it can happen after a Microsoft Tuesday, or after software is upgraded. Always remember to log on to the server and clear any security dialogs that are preventing your service from running. Not really a BO solution, but it was my solution. YMMV. -
Data Services Designer - Error when pulling large source tables
Hi all,
I have been trying to load data from an SAP table - BSIS - into an MS SQL Server database using BO Data Services Designer XI 3.2. It is a simple data flow with one source table (BSIS).
When we execute the job, it says what is mentioned below:
*"Process to execute Dataflow is started"*
*"Cache statistics determined that DataFlow uses <0> caches with a total use of <0> bytes. This is less than the virtual memory <1609564160> bytes available for caches. Statistics is switching the cache type to IN MEMORY."*
*"Dataflow using IN MEMORY cache."*
It stays there for a while and says "Dataflow terminated due to error"
In the error window, it says DF received a bad system message.
Does not specify the error... It asks to contact the customer support with error logs, ATL files and DDL scripts.
Can anyone help me out???
Thank you and regards,
Suneer.
Hi,
please do not post the short dump in this forum.
I believe the system will read from table dt_iobj_dest.
The problem is that 0FISCPER, 0FISCYEAR and 0FISCVARNT are not registered.
You can register the infoobject 0FISCYEAR and 0FISCVARNT
retroactively:
1) se24, CL_UG_BW_FIELD_MAPPING
2) F8 -> test environment
3) GET_INSTANCE, do not use IT_FIELD_RESTRICT
4) IF_UG_BW_MAPPING_SERVICES
5) REGISTER_INFO_OBJECT
6) specify I_RFCDEST
I_INFOOBJECT
attention: I_RFCDEST is case-sensitive
kind reg.
Michael
Hi guys,
Can any one help me with the data flow design in ODI.
Thanks,
Praveen
Absolutely, but we'll need some details :)
What technologies are your sources/targets? Are you intending on performing any flow control checks, lookups, joins or filters?
Edited by: _Phil on Oct 11, 2012 12:16 PM -
I am looking for a good article about data center design. I need some information about requirements for topics such as temperature control, power, security, ventilation, etc.
The Practice of System and Network Administration is a great resource for the "other" things that make a data center successful.
http://www.amazon.com/gp/product/0321492668?ie=UTF8&tag=wwwcolinmcnam-20&linkCode=as2&camp=1789&creative=9325&creativeASIN=0321492668
Another great resource that you should reference is your local power company. PG&E for example will consult with you, and sometimes pay for upgrades to lower your power consumption.
Here is an article that talks about that in general -
http://www.colinmcnamara.com/2008/02/22/moving-towards-a-green-data-center-truth-behind-the-hype
If this is helpful please rate it
--Colin -
Hi there,
Can anyone enlighten me - what options exist to create data templates other than hand crafting the XML? Is there a BI Publisher data template designer available? We are using BI Publisher as part of E-Business Suite (11.5.10.2)
Also, does any utility exist which will take a SQL statement and create an valid data template based on it?
Thanks,
- Matt
[email protected] wrote:
> If I correctly understand what you're trying to do then Adobe XPAAJ should be adequate:
>
>
>
> Below is a link to a "video" demonstration of building an application that uses XPAAJ:
>
>
>
> (I don't think your question about having to write an XDP file is crucial to what you're trying to do, but you don't need to write an XDP file -- it is a "save as" option within Designer. You can save a Designer file as either PDF or XDP. Or both, one after the other, if you need both.)
Careful with the recommendations for XPAAJ. There are specific restrictions around whether you can
use XPAAJ or not. Specifically, you have to own a copy of one of the server-based LiveCycle
products. This does not include LiveCycle Designer, which is a client product.
Maybe someone from Adobe can clarify.
Justin Klei
Cardinal Solutions Group
www.cardinalsolutions.com -
When to use the Export To Data Modeler design option
Hi,
Is there an advantage or difference between sending someone a model created using the Export > "To Data Modeler Design" tool as opposed to just sending them a copy of the folder tree for the same model? If yes, could you please explain what the differences are?
Thank you,
Beatriz.
Hi Beatriz,
you can export just one relational model or objects belonging to one or several subviews.
Philip -
DI Integrator XI and Data Service Designer
Hi all
I am playing with DI XI (11.7) now, but very soon we will have Data Services Designer (the new version of DI). I'm just wondering what the difference is between DI Integrator 11.7 and Data Services Designer. The interface and object types are very different and a big surprise for me. Or I might ask: what's new in Data Services Designer?
Thanks for your help.
You will be very familiar with Data Services Designer. You will find more transforms - the data quality transforms - and a few menu items, flags, etc. But really, Data Services is just a name change. For DQ customers, things changed a lot.