Table Backup and Best Practice

Hi Guys,
We have two tables, a master and a child, with millions of records in them. These tables get populated from flat files that we receive from external systems. The major operations on these tables are insert and delete (update). The way we do an update is: if a record from the flat file already exists in the table, we delete the matching record from the master and child tables and re-insert the data from the flat file.
The business has decided to delete (archive) old and inactive data from these two tables. This process runs every year. Before starting the delete process, I want to take a backup of these tables by creating similar tables in the database.
What would be the best approach to back up the tables, given that we run the archive process every year? 'Archive' here means physically deleting records from the tables.
Any help is greatly appreciated.

922855 wrote:
Hi Guys,
We have two tables, a master and a child, with millions of records in them. These tables get populated from flat files that we receive from external systems. The major operations on these tables are insert and delete (update). The way we do an update is: if a record from the flat file already exists in the table, we delete the matching record from the master and child tables and re-insert the data from the flat file.
The business has decided to delete (archive) old and inactive data from these two tables. This process runs every year. Before starting the delete process, I want to take a backup of these tables by creating similar tables in the database.
What would be the best approach to back up the tables, given that we run the archive process every year? 'Archive' here means physically deleting records from the tables.
Any help is greatly appreciated.

expdp
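
That is, take a Data Pump export of the two tables before the yearly delete runs; an in-database copy via CREATE TABLE ... AS SELECT also works, though a dump file is easier to keep on cheap storage from one run to the next. A minimal sketch, with hypothetical table, directory-object, and schema names:

    -- one-off in-database copies taken just before the purge (hypothetical names)
    CREATE TABLE master_bkp AS SELECT * FROM master;
    CREATE TABLE child_bkp  AS SELECT * FROM child;

Or, from the OS prompt, a Data Pump export of both tables (the QUERY parameter can restrict it to just the rows you are about to delete):

    expdp app_owner/password DIRECTORY=dpump_dir DUMPFILE=master_child_bkp.dmp LOGFILE=master_child_bkp.log TABLES=master,child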

Similar Messages

  • Can anyone recommend tips and best practices for FrameMaker-to-RoboHelp migration ?

    Hi. I'm planning a migration from FM (unstructured) to RH. I'd appreciate any tips and best practices for the migration process. (Note that at the moment I plan to import the FM documents into, not link to them from, RH.)
    For example, my current FM files are not optimally "chunked", so autoconverting FM file sections (based on, say, the Header 1 paragraph style) won't always result in an optimal topic set. I'm thinking of going through the FM docs and inserting dummy paragraphs with a tag something like "topic_break", placed in more appropriate locations than the existing headers. Then, during import to RH, I'd use the topic_break paragraph to demarcate the topics. Is this a good technique? Beyond paragraph-based import delineation, do you know of any guidelines for redrafting FM chapter file content into RH topics?
    Also, are there any considerations/gotchas in the areas of text review workflow, multiple authoring, etc. after the migration? (I've not managed an ongoing RH doc project before, so any advice would be greatly appreciated.)
    Thanks in advance!
    -Kurt
    BTW, the main reason for the migration: info is presently scattered across various (and way too many) PDF files. There's no global index. I'd like to make a RoboHelp HTML interface (probably a WebHelp layout) so it can be a one-stop documentation shop for users.

    Jeff
    FM may produce better output for your requirements, but for many, what RH produces works just fine. My recent finding about Word converting images to JPG before import will mean a better experience for many.
    Once RH is set up, and it's not difficult, for many its printed documents will do the job. I would say try it and then judge.
    See www.grainge.org for RoboHelp and Authoring tips
    @petergrainge

  • FWSM interface monitoring and best practices documentation.

    Hello everyone
    I have a couple of questions regarding VLAN interface monitoring and best practices, specifically for this service module.
    I couldn't find a suggestion or guideline on how to define a VLAN interface on a management station. The FWSM's total throughput is 5.5 Gbps, and the interfaces are mapped to VLANs carried on trunks over 10 Gb EtherChannels. Is there a common practice, or past experience, for setting physical parameters on logical interfaces? The "show interface" command reports the BW as unknown.
    Additionally, do any of you have a document addressing best practices for the FWSM? I have this for other platforms, and general recommendations based on newer ASA versions, but nothing related to the FWSM.
    Thanks a lot!
    Regards
    Guido

    Hi,
    If you are looking for another command to check the throughput through the module:
    show firewall module <number> traffic
    Also, since this platform is End of Life, you might have to check some older Cisco documentation for the best practices.
    http://www.cisco.com/c/en/us/products/collateral/switches/catalyst-6500-series-switches/prod_white_paper0900aecd805457cc.html
    https://supportforums.cisco.com/discussion/11540181/ask-expertconfiguring-troubleshooting-best-practices-asa-fwsm-failover
    Thanks and Regards,
    Vibhor Amrodia

  • EP Naming Conventions and Best Practices

    Hi all
    Please provide me with EP Naming Conventions and Best Practices documents.
    Thanks
    Vijay

    Hi Daya,
    For SAP Best Practices for Portal, read through these documents:
    [SAP Best Practices for Portal - doc 1 |http://help.sap.com/bp_epv170/EP_US/HTML/Portals_intro.htm]
    [SAP Best Practices for EP |http://www.sap.com/services/pdf/BWP_SAP_Best_Practices_for_Enterprise_Portals.pdf]
    And for Naming Conventions in EP, please go through these two links:-
    [Naming Conventions in EP|naming standards;
    [EP Naming Conventions |https://websmp210.sap-ag.de/~sapidb/011000358700005875762004E]
    Hope this helps,
    Regards,
    Shailesh
    Edited by: Shailesh Kumar Nagar on May 30, 2008 4:09 PM

  • EP Naming Conventions and Best Practices documents

    Hi all
    Please provide me with EP Naming Conventions and Best Practices documents.
    Thanks
    Vijay

    Hi,
    Check this:
    Best Practices in EP
    http://help.sap.com/saphelp_nw04/helpdata/en/43/6d9b6eaccc7101e10000000a1553f7/frameset.htm
    Regards,
    Praveen Gudapati

  • Arranging fields in a table-like form: best-practice solution wanted

    Hello Experts,
    I'm wondering whether there is a 'best practice' for arranging fields in a table-like form.
    I know about cross-tabs, but that's not what we need. Most of the requirements I have come across are simply that certain fields should be put in a certain order in a table-like layout.
    We have tried to do this using the drawing functions (e.g. putting a box around the fields and applying certain border styles), but it often happens that the lines overlap or there are gaps between them, so you have to do a lot of manual adjustment of the 'table'.
    Since this is a requirement I've come across in many reports, I can't believe this is supposed to be the best solution.
    I don't understand why there isn't a table-like element in Crystal Reports for this, e.g. placing a table with x rows and y columns in the header or group header section and then just putting the fields in it.
    Many thanks in advance for your help !

    Hi Frank,
    You can use the built-in templates available in the Template Expert.
    Click the Report menu -> Template Expert.
    Select the desired template (the Table Grid template would suit best here) and click OK.
    There is no facility for inserting a table directly, as you said. You will have to do it manually using lines and boxes.
    Hope this is helpful.
    Regards

  • Backup validation best practice: 11gR2 on Windows

    Hi all
    I am just reading through some guides on checking for various types of corruption in my database. It seems that having DB_BLOCK_CHECKSUM set to TYPICAL takes care of much of the physical corruption and will alert you if any has occurred. Furthermore, RMAN by default does its own physical block checking. Logical corruption, on the other hand, does not seem to be checked automatically unless CHECK LOGICAL is added to the RMAN command. There are also various VALIDATE commands that could be run on various objects.
    My question is really: what is best practice for checking for block corruption? Do people even bother checking this regularly and just allow Oracle to manage itself, or is it best practice to include the CHECK LOGICAL clause in RMAN (even though it's not added by default when configuring backup jobs through OEM), or do people schedule jobs and output reports from a VALIDATE command on a regular basis?
    Many thanks

    Using the CHECK LOGICAL clause is considered best practice, at least by Oracle Support, according to
    NOTE:388422.1  Top 10 Backup and Recovery best practices
    (referenced in http://blogs.oracle.com/db/entry/master_note_for_oracle_recovery_manager_rman_doc_id_11164841).
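    A minimal sketch of what that looks like at the RMAN prompt (nothing site-specific assumed; adapt to your own backup strategy):
    BACKUP CHECK LOGICAL DATABASE PLUS ARCHIVELOG;
    BACKUP VALIDATE CHECK LOGICAL DATABASE ARCHIVELOG ALL;
    The first takes a real backup while verifying both physical and logical block integrity; the second only reads and checks the blocks without producing backup pieces.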

  • EFashion sample Universes and best practices?

    Hi experts,
    Do you all think that the eFashion sample Universe was developed based on the best practices of Universe design? Below is one of my questions/problems:
    The universe is designed to hide technical details and answer all valid business questions (queries/reports); for nonsensical questions it shows 'incompatible' etc. In the eFashion sample, I tried to compose a query to answer "for a period of time, e.g. from 2008.5 to 2008.9, for each week and each product (article), its MSRP (sales price), sold price, margin, quantity sold and promotion flag". I grabbed the SKU number from Product, week from Time period, Unit Price MSRP from Product, Sold at (unit price) from Product, promotion from Promotions, and Margin and Quantity sold from Measures into the Query Panel. It gives me an 'incompatible' error message when I try to run it.
    I think the whole sample (from the database data model to the universe schema structure/joins) is flawed. In the Product_promotion_facts table, it seems that if a promotion lasts for more than one week, the week id is the starting week and the duration indicates how long it lasts. With this design, answering "what promotions ran in which weeks" is not easy, because you need to join Product_promotion_facts to the Time dimension using "time.weekid between p_prom.weekid and p_prom.weekid+duration" (assuming weekid is sequential) instead of the simple "time.weekid=p_prom.weekid". The weekid joins between Shop_facts, Product_promotion_facts and Calendar_year_lookup are very confusing, because one is about "the week the sale happened" and the other "the week the promotion started"; no tool can be smart enough to resolve this ambiguity automatically. Then there is the shortcut join between Shop_facts and Product_promotion_facts: it is based on the article id alone. Obviously the two have to be joined on both article and time (using between/and, not the simple weekid=weekid in this design), otherwise the join doesn't make sense (a sale of one article on one day would join to all the promotions for that article across all time).
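    Concretely (with the table and column names only approximated from the eFashion schema), the difference between the join the universe uses and the join the data model actually needs looks like this:
    -- equality join as in the universe: only matches the week a promotion started
    SELECT s.week_id, s.article_id, s.quantity_sold
    FROM   shop_facts s
    JOIN   product_promotion_facts p
           ON  p.article_id = s.article_id
           AND p.week_id    = s.week_id;
    -- range join the design implies: matches every week the promotion is running
    SELECT s.week_id, s.article_id, s.quantity_sold
    FROM   shop_facts s
    JOIN   product_promotion_facts p
           ON  p.article_id = s.article_id
           AND s.week_id BETWEEN p.week_id AND p.week_id + p.duration;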
    What do you think?
    thanks.
    Edward

    You seem to have the idea that finding out whether a project uses "best practices" is the same as finding out whether a car is blue. Or perhaps you think there is a standards board somewhere which reviews projects for the use of "best practices".
    Well, it isn't like that. The most cynical viewpoint is that "best practices" is simply an advertising slogan used by IT consultants to make them appear competent to their prospective clients. But basically it's a value judgement. For example using Hibernate may be a good thing to do in many projects, but there are projects where it would not be a good thing to do. So you can't just say that using Hibernate is a "best practice".
    However it's always a good idea to keep your source code in a repository (CVS, Subversion, git, etc.) so I think most people would call that a "best practice". And you could talk about software development techniques, but "best practice" for a team of three is very different from "best practice" for a team of 250.
    So you aren't going to get a one-paragraph description of what features you should stick in your project to qualify as "best practices". And you aren't going to get a checklist off the web whereby you can rate yourself for "best practices" either. Or if you do, you'll find that the "best practice" involves buying something from the people who provided the checklist.

  • Oracle EPM 11.1.2.3 Hardware Requirement and best practice

    Hello,
    Could anyone help me find the minimum hardware requirements for Oracle EPM 11.1.2.3 on Windows 2008 R2 Server? Also, what is the best practice for getting optimum performance after the default configuration, i.e. which entries need to be modified based on the hardware resources (CPU and RAM) and the number of users accessing the Hyperion reports/files?
    Thanks,
    Yash

    Why would you want to know the minimum requirements? Surely it would be best to have optimal server specs. The nearest you are going to get is contained in the standard deployment guide - About Standard Deployment
    Having said that, it is not possible to provide specs based on nothing; you would really need to undertake a technical design review/workshop, as there are many topics to cover before coming up with server sizing.
    Cheers
    John

  • Static NAT refresh and best practice with inside and DMZ

    I've been out of the firewall game for a while and have now been re-tasked with some configuration, both updating ASAs to 8.4 and making some new services available. So I've dug into refreshing my knowledge of NAT operation and have a question based on best practice; I would like a sanity check.
    This is a very basic, I apologize in advance. I just need the cobwebs dusted off.
    The scenario is this: If I have an SQL server on an inside network that a DMZ host needs access to, is it best to present the inside (SQL server in this example) IP via static to the DMZ or the DMZ (SQL client in this example) with static to the inside?
    I think it's best to present the higher-security resource into the lower-security network. For example, when a service from the DMZ is made available to the outside/public, the real IP from the higher-security interface is mapped to the lower.
    So I would think the same applies to inside/DMZ, making 'static (inside,dmz)' the 'proper' method pre-8.3, and this for 8.3 and up:
    object network insideSQLIP
    host xx.xx.xx.xx
    nat (inside,dmz) static yy.yy.yy.yy
    Am I on the right track?

    Hello Rgnelson,
    It is not related to the security level of the zone; instead, it is about what the behavior should be. What I mean is, for
    nat (inside,dmz) static yy.yy.yy.yy
    - Any traffic hitting the translated address yy.yy.yy.yy on the DMZ zone will be redirected to the host xx.xx.xx.xx on the inside interface.
    - Traffic initiated from the real host xx.xx.xx.xx will be translated to yy.yy.yy.yy if the host accesses any resources on the DMZ interface.
    If you reverse it to (dmz,inside), the behavior is reversed as well; so if you need to translate an address from the DMZ interface going to the inside interface, you should use (dmz,inside).
    For your case I would do what is common: since the server is in the inside zone, you should configure
    object network insideSQLIP
    host xx.xx.xx.xx
    nat (inside,dmz) static yy.yy.yy.yy
    At this time, users from the DMZ zone will be able to access the server using the yy.yy.yy.yy IP Address.
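    If you want to confirm the translation is being applied, a quick test from the firewall CLI could be (hypothetical DMZ client address, and port 1433 assuming Microsoft SQL Server; standard packet-tracer syntax):
    packet-tracer input dmz tcp 172.16.1.10 12345 yy.yy.yy.yy 1433
    The UN-NAT phase in the output should show the destination being rewritten to the real inside address xx.xx.xx.xx.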
    HTH
    AMatahen

  • Not a question, but a suggestion on updating software and best practice (Adobe we need to create stickies for the forums)

    Lots of you are hitting a brick wall when updating, and the end result is a non-recoverable project. In a production environment with projects due, it's best never to update while in the middle of a project. Wait until you have a day or two of downtime, then test.
    For best practice, get into the habit of saving off your projects to a new name by incremental versions.  i.e. "project_name_v001", v002, etc.
    Before you close a project, save it, then save it again to a new version. This way you'll always have two copies and will not lose the entire project. Most projects crash upon opening (at least in my experience).
    At the end of the day, copy off your current project to an external drive.  I have a 1TB USB3 drive for this purpose, but you can just as easily save off just the PPro, AE and PS files to a stick.  If the video corrupts, you can always re-ingest.
    Which leads us to the next tip: never clear off your cards or wipe the tapes until the project is archived. It's always cheaper to buy more memory than to recoup lost hours of work, and your sanity.
    I've been doing this for over a decade and the number of projects I've lost?  Zero.  Have I crashed?  Oh, yeah.  But I just open the previous version, save a new one and resume the edit.

    Ctrl + B to show the Top Menu
    View > Show Sidebar
    View > Show Status Bar
    Deactivate Search Entire Library to speed things up.
    This should make managing your iPhone the same as it was before.

  • Large heap sizes, GC tuning and best practices

    Hello,
    I’ve read in the best practices document that the recommended heap size (without JVM GC tuning) is 512M. It also indicates that GC tuning, object number/size, and hardware configuration play a significant role in determining what the optimal heap size is. My particular Coherence implementation contains a static data set that is fairly large in size (150-300k per entry). Our hardware platform contains 16G physical RAM available and we want to dedicate at least 1G to the system and 512M for a proxy instance (localstorage=false) which our TCP*Extend clients will use to connect to the cache. This leaves us 14.5G available for our cache instances.
    We're trying to determine the proper balance of heap size vs. number of cache instances and have ended up with the following configuration: 7 cache instances per node, each running with a 2G heap and a high-units value of 1.5G. Our testing has shown that using the Concurrent Mark Sweep GC algorithm produces no substantial GC pauses, and we have also done testing with a heap fragmentation inducer (http://www.azulsystems.com/e2e/docs/Fragger.java), which likewise shows no significant pauses.
    The reason we opted for a larger heap was to cut down on the cluster communication and context-switching overhead, as well as the administration challenges that 28 separate JVM processes would create. Although our testing has shown successful results, my concern is that we're straying from the best-practices recommendations, and I'm wondering what others' thoughts are about the configuration outlined above.
    Thanks,
    - Allen Bettilyon

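    For reference, a storage-node launch line consistent with the configuration described above might look roughly like this (a sketch only: Coherence 3.x-style system properties are assumed, and the 1.5G high-units limit lives in the cache configuration XML rather than on the command line):
    java -server -Xms2g -Xmx2g -XX:+UseConcMarkSweepGC -Dtangosol.coherence.cacheconfig=my-cache-config.xml com.tangosol.net.DefaultCacheServer
    The 512M proxy instance would use the same launch line with -Xmx512m and -Dtangosol.coherence.distributed.localstorage=false so that it never holds cache data itself.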

  • Installation and best practices

    I saw this link being discussed in a thread about "Live Type," but I think it needs a thread of its own, so I'm going to begin it here.
    http://support.apple.com/kb/HT4722?viewlocale=en_US
    I have Motion 4 (and everything else with FCS 2, of course), and just purchased Motion 5 via the App Store. (I'm sure I'll be buying FCP X also at some point, but decided to hold off for now.)
    When I was reading the "Live Type" thread there was some discussion about Motion 5 overwriting Motion 4 projects or something like that, so I started freaking out. I've opened both 5 and 4, but am closing them until I understand what's going on.
    Since I purchased Motion 5 from the App Store, I'm just under the assumption that my Mac took care of everything correctly. I see that Motion 4 resides in the FCS folder and Motion 5 is a stand-alone in the Applications folder.
    So I guess my questions are these ...
    1) What's so important about having FCS 2009 on a separate drive? I have a couple of other internal drives with more than enough free space, so that isn't an issue for me. I just wonder why this is a "best practice." The two programs CAN share the same drive ...the link says so.
    2) I suppose that I'll let 4 and 5 reside side by side for now. How do I make sure Motion 5 won't screw up my Motion 4 projects? (My hunch is that you can open an M4 project in M5 and do a "save as" ...this will create an M5 version and leave the M4 alone. Am I correct about that?) Maybe the answer to this is related to my first question.
    3) I want to make sure I'm not missing something by the words "startup disk." Although I have 3 drives in my MacPro, only one is a "startup disk" ...the other two are for storage. If I move everything from FCS to a different internal drive, does it make any difference that the destination drive is NOT a startup disk?
    **I'm gonna separate this part out a bit because it may or may not be related to the previous quesitons.**
    I noticed that Motion 5 came with very little content and only a few templates, but I read in another thread that additional content can be downloaded free when I do an update. I also read in that thread that this free content is pretty much the same as the content that I have with Motion 4.
    1) If I download this additional content (which is basically the same as what's in Motion 4), will I just have a duplicate of all that material?
    2) Could this be part of the reason that Apple recommends that Motion 5 be on a separate drive ...so that the content and templates don't get mixed up?
    --Just a couple months ago, I finally got around to cleaning out all the FCS content, throwing away duplicates and organizing things properly. If I've got to go through this process again, I want to do it correctly the first time.

    When you install Motion 5 or FCP X, all your Final Cut Studio apps are moved into a folder called Final Cut Studio.  This is because you can't have two apps with the same name in the same folder.  I'm running them both on the same drive, no problems.
    Motion 5 does not automatically overwrite any Motion project files, that is hogwash.  When you open a v.4 file in 5, it will ask if you want to convert the original to 5, or open a copy called Untitled and make it a v.5 project.  Very simple.  If you're super paranoid, duplicate the original Motion project file and open the copy in v.5 to be extra safe.  Remember, once a project file is version 5, it can't be opened in previous versions.
    You can't launch both at the same time, duh.
    The System Drive, or OS drive, is just that: the drive your operating system is installed on.  All applications should be on that drive, and NOT be moved to other drives.  Especially pro apps like these.  Move them to a non-OS drive, and you'll regret it.  Trust me.
    Yes, run Software Update (Apple Menu) and you'll get additional content for Motion 5 that v.4 doesn't have.  It won't be any problem with space on your drive; that stuff takes up very little space.
    Apple recommends two different OS drives, or partitions, only to avoid an overwhelming flood of people screaming "What happened to my Final Cut Studio legacy apps?" and other such problems.  Hey, they're put into a new folder, that's all, breathe...
    If you're having excessive problems, you may not have hardware up to speed.  CPU speed is needed, at least 8GB RAM (if not 12 or 16 for serious work), and your graphics card really needs to be up to speed.  iMacs and MacBook Pros barely meet the requirements, but will work well.  Mac Pros can get much more powerful graphics cards.  Airs and Minis should be avoided like the plague.
    After checking hardware, be sure to run Disk Utility to "repair" all drives.  Then, get the free app "Preference Manager" by Digital Rebellion (dot com) to safely trash your app's preference files, which resets it, and can fix a lot of current bugs.

  • Technical documentation for ADF projects - how to and best practices

    Hi,
    I have a question about how to create technical documentation for an ADF project, especially ADF BC and ADF Faces. Is there any tool or JDeveloper plugin for that purpose? What information should the documentation for such a project contain, and are there any principles? Does anybody have any experience? Is there something like documentation best practices for ADF projects, e.g. how to create documentation for business components?
    Kuba

    I'm not sure there are "best practices", but some of the things that can help people understand are:
    An ADF BC diagram - this will describe all your ADF BC objects - just drag your components into a new empty diagram
    A JSF page flow - to show how pages are called.
    Java class diagram - for Java code that is not in the above two
    One more thing to consider - since ADF BC, JSF page flow and JSPX pages are all XML based - you could write XSL files that will actually transform them into any type of documentation you want.

  • BASELINE PACKAGE - V1-V2.603 and Best Practices for Pharmaceuticals

    Hi All,
    > Recently we upgraded to EHP4 stack 4.
    > I am trying to install BASELINE PACKAGE V1-V2.603; I chose it because it is localized for India.
    > When I try to install BP-ERP05 603V7 and BP-INSTASS 600V7, it asks for BBPCRM 600, BBPCRM 700 and SAP_APPL 600, SAP_APPL 603.
    > Why does the installer ask for lower versions? Of course, I have read that a version higher than the one mentioned in the "Quick Guide to Installing the SAP Best Practices Baseline Package (IN)" will not work.
    > But do we have any Baseline Packages specific to EHP4?
    > If not, could anyone tell me where and how to download and install BBPCRM as an add-on?
    > I only need that software component because I strongly feel that all the interdependencies are linked to this one.
    Any help and suggestions are welcome.
    Regards,
    Antony Chaitanya.

    Hi Sunny,
    Thanks very much for your response.
    The major problem I am having is that, as best I can guess, I somehow did not include the BBPCRM software component (or the related CRM software components) at the time of the upgrade.
    So the latest add-ons BP-CRM60 600V3 and BP-CRM70 700V1, and likewise the add-ons BP-ERP05 600VD/603V7 together with BP-INSTASS 600V7, cannot be installed because the prerequisites are not met. Only BP-INSTASS 600V1 got installed.
    Can anyone tell me how and what to do now?
    For example, how to get the CRM-related software components installed, or whether there is a workaround to get the Baseline Package (IN) activated without CRM.
    Regards,
    Antony Chaitanya.
