Best way of handling large amounts of data movement

Hi,
I'd like to know the best way to handle data in the following scenario:
1. We have to create Medical and Rx claims tables for 36 months of data, about 150 million records each (months 1, 2, 3, 4, ... 34, 35, 36).
2. We then have to add the DELTA of month two to the 36-month baseline, so the current size becomes 37 months even though the application requirement is ONLY 36 months.
3. Similarly, in the 3rd month we will have 38 months, and in the 4th month, 39 months.
4. At the end of the 4th month, how can I delete the first three months of data from the claim tables without affecting performance, given that this is a 24x7 online system?
5. Is there a way to create partitions of 3 months each that can be deleted (e.g. drop partition number 1)? If this is possible, what kind of maintenance activity needs to be done after dropping a partition?
6. Is there a better way of handling the above scenario? What other options do I have?
7. My goal is to eliminate the oldest months' data from the system, as the requirement is ONLY 36 months of data.
Thanks in advance for your suggestions.
sekhar

Hi,
You should use table partitioning to keep your data in monthly partitions. Search on table partitioning for detailed examples.
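For example (a rough sketch only, assuming an Oracle range-partitioned design; the table, column, and partition names here are invented), each month can live in its own partition, so purging the oldest month is a dictionary operation (a partition drop) rather than a 150-million-row DELETE, and the 24x7 application is not blocked. A JDBC version of the monthly purge step might look like this:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Hypothetical monthly purge job for a claims table assumed to be created with:
    //   PARTITION BY RANGE (claim_month)
    //   (PARTITION p_2004_01 VALUES LESS THAN (DATE '2004-02-01'), ...)
    public class PartitionPurge {
        public static void main(String[] args) throws Exception {
            // Connection details are placeholders.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/claims", "app", "secret");
                 Statement st = con.createStatement()) {
                // Dropping a partition discards the oldest month as a metadata
                // operation; UPDATE INDEXES keeps any global indexes usable, so
                // no separate index rebuild is needed afterwards.
                st.execute("ALTER TABLE med_claims DROP PARTITION p_2004_01 UPDATE INDEXES");
            }
        }
    }

With local indexes the drop needs no index maintenance at all; refreshing optimizer statistics on the table is the usual remaining housekeeping after a drop.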
Regards

Similar Messages

  • Best way to pass large amounts of data to subroutines?

    I'm writing a program with a large amount of data, around 900 variables.  What is the best way for me to pass parts of this data to different subroutines?  I have a main loop on a PXI RT Controller that is controlling hydraulic cylinders and the loop needs to be 1ms or better.  How on earth should I pass 900 variables through a loop and keep it at 1ms?  One large cluster??  Several smaller clusters??  Local Variables?  Global Variables??  Help me please!!!

    My suggestion, similar to Altenbach's and Matt's above, is to use a Functional Global Variable (FGV) and use a 1D array of 900 values to store the data in the FGV. You can retrieve individual data items from the FGV by passing in the index of the desired variable, and the FGV returns the value from the array. Instead of passing in an index, you could also use a TypeDef'd Enum with all of your variables as elements of the Enum, which will allow you to place the Enum constant on the diagram and make selecting variables, as well as reading the diagram, simpler.
    My group is developing a LabVIEW component/example code with this functionality that we plan to publish on DevZone in a month or two.
    The attached RTF file shows the core piece of this implementation. This VI is of course non-reentrant. The Init case could be changed to allocate the internal 1D array inside this VI rather than passing it in from another VI.
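    Since LabVIEW diagrams can't be shown inline here, a rough Java analog of the same Current Value Table idea may help (the variable names are invented for illustration): a single synchronized owner of the value array, addressed through an enum instead of raw indices.

        // Java analog of the FGV / Current Value Table pattern: one synchronized
        // owner of the data, addressed by an enum rather than raw array indices.
        public final class CurrentValueTable {
            public enum Var { CYL1_PRESSURE, CYL2_PRESSURE, LOOP_PERIOD_MS }

            private static final double[] values = new double[Var.values().length];

            public static synchronized void write(Var v, double value) {
                values[v.ordinal()] = value;
            }

            public static synchronized double read(Var v) {
                return values[v.ordinal()];   // e.g. read(Var.CYL1_PRESSURE)
            }
        }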
    Christian Loew, CLA
    Principal Systems Engineer, National Instruments
    Attachments:
    CVT_Double_MemBlock.rtf (309 KB)

  • Best way to handle large amount of text

    hello everyone
    My project involves handling a large amount of text (from conferences and reports).
    Most of them are in MS Word. I can turn them into RTF format.
    I don't want to use scrolling. I prefer turning pages (next, previous, last, contents), which means I need to break them into chunks.
    Currently the process is awkward and slow.
    I know there would be lots of people working on similar projects.
    Could anyone tell me an easy way to handle the text: bring it into the cast and break it up?
    Any ideas would be appreciated.
    thanks
    ahmed

    Hacking up a document with Lingo will probably lose the RTF formatting information.
    Here's a bit of code to find the physical position of a given line of on-screen text (counting returns is not accurate with word-wrapped lines).
    This strategy uses charPosToLoc to get the actual position for the text member's current width and font size:

        maxHeight = 780 -- arbitrary display height limit
        T = member("sourceText").text
        repeat with i = 1 to T.line.count
          endChar = T.line[1..i].char.count
          lineEndLocV = charPosToLoc(member "sourceText", endChar).locV
          if lineEndLocV > maxHeight then -- found the "1 too many" line
            -- extract the identified lines as one page
            singlePage = T.line[1..i - 1]
            -- put the remaining text back into the source text member,
            -- then perhaps repeat the parse with the remaining part
            member("sourceText").text = T.line[i..99999]
            exit repeat
          end if
        end repeat

    If you want, there are roundabout ways to display PDF in Director. There might be some batch PDF production tools that can create your pages in a nicely scalable PDF format.
    I think FlashPaper documents can be adapted to Director.

  • Best way to store large amounts of data

    Greetings!
    I have some code that will parse through XML data one character at a time, determine if it's an opening or closing tag, what the tag name is, and what the value between the tags is. All of the results are saved in a 2D string array. Each parent result can have a variable number of child results associated with it and it is possible to have over 2,000 parent results.
    Currently, I initialize a new string array at the beginning of the method that I will use to store the values:
        String[][] initialXMLValues = new String[2000][45]
    I have no idea how many results will actually be returned when the method is initially called, so I don't know what to do besides make initialXMLValues around the maximum size I expect to have.
    As I parse through the XML, I look for a predefined tag that signifies the start of a result. Each tag/value that follows is stored in a single element of an ArrayList in the form "tagname,value". When I reach the closing parent tag, I convert the ArrayList to a String[], store the size of the array if it is bigger than the previous array (to track the maximum size of the child results), store it in initialXMLValues[i], then increment i.
    When I'm all done parsing, I create a new array, String[][] XMLValues = new String[i][j], where i is equal to the total number of parent results (from the last paragraph) and j is equal to the maximum number of child results. I then use a nested for loop to copy all of the values from initialXMLValues into XMLValues. The whole point of this is to minimize the overall size of the returned string array and the number of null-valued fields.
    I know this is terribly inefficient, but I don't know a better way to do it. The problem is having to size the array before I really know how many results I'm going to end up storing. Is there a better way to do this?

    So I'm starting to rewrite my code. I was shocked at how easy it was to implement the SAX parser, and it works great. Now I'm on to doing away with my nasty string arrays.
    Of course, the basic layout of the XML is like this:
    <result1>
    <element1>value</element1>
    <element2>value</element2>
    </result1>
    <result2>
    I thought about storing each element/value in a HashMap for each result. This works great for a single result. But what if I have 1000 results? Do I store 1000 HashMaps in an ArrayList (if that's even possible)? Is there a way to define an array of HashMaps?
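    For what it's worth, an ArrayList of HashMaps is perfectly possible, and a List<Map<String, String>> avoids sizing anything up front. A minimal SAX sketch along those lines (treating any tag that starts with "result" as a record boundary is an assumption based on the layout above):

        import java.io.StringReader;
        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;
        import javax.xml.parsers.SAXParserFactory;
        import org.xml.sax.Attributes;
        import org.xml.sax.InputSource;
        import org.xml.sax.helpers.DefaultHandler;

        public class ResultCollector extends DefaultHandler {
            private final List<Map<String, String>> results = new ArrayList<>();
            private Map<String, String> current;            // map for the result being parsed
            private final StringBuilder text = new StringBuilder();

            @Override
            public void startElement(String uri, String local, String qName, Attributes atts) {
                if (qName.startsWith("result")) {           // assumed boundary tag
                    current = new HashMap<>();
                }
                text.setLength(0);                          // reset character buffer
            }

            @Override
            public void characters(char[] ch, int start, int length) {
                text.append(ch, start, length);
            }

            @Override
            public void endElement(String uri, String local, String qName) {
                if (qName.startsWith("result")) {
                    results.add(current);                   // result finished: keep its map
                    current = null;
                } else if (current != null) {
                    current.put(qName, text.toString().trim());  // child element -> entry
                }
            }

            public static void main(String[] args) throws Exception {
                String xml = "<root><result1><element1>a</element1><element2>b</element2></result1>"
                           + "<result2><element1>c</element1></result2></root>";
                ResultCollector handler = new ResultCollector();
                SAXParserFactory.newInstance().newSAXParser()
                        .parse(new InputSource(new StringReader(xml)), handler);
                System.out.println(handler.results);        // two maps, one per result
            }
        }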

  • What is the best way to migrate large amounts of data from a g3 to an intel mac?

    I want to help my mom transfer her photos and other info from an older blueberry G3 iMac to a new Intel one. There appears to be no migration provision on the older Mac. Also, the FireWire connections are different. Somebody must have done this before.

    Hello
    The cable above can be used to enable Target Disk Mode for data transfer.
    See http://support.apple.com/kb/ht1661 for more info.
    To enable Target Disk Mode: just after the startup sound, hold down the "T" key on the keyboard until you see the FireWire symbol on the screen (it looks like a screen saver), then plug the FireWire cable in between the two Macs.
    HTH
    Pierre

  • Is the only way to import a large amount of data and database objects into a primary database to shut down the standby, turn off archive log mode, do the import, then rebuild the standby?

    I have a primary database into which I need to import a large amount of data and database objects. 1.) Do I shut down the standby? 2.) Turn off archive log mode? 3.) Perform the import? 4.) Rebuild the standby? Or is there a better way or best practice?

    Instead of rebuilding the (whole) standby, you can take an incremental (FROM SCN) backup from the primary and restore it on the standby. That way, for example:
    a. If only two out of 12 tablespaces are affected by the import, the incremental backup would effectively contain only the blocks changed in those two tablespaces (plus some changes in SYSTEM and UNDO), provided that there are no other changes in the other ten tablespaces.
    b. If the size of the import is only 15% of the database, the incremental backup to restore to the standby is small.
    Hemant K Chitale

  • What is the best way to extract large volume of data from a BW InfoCube?

    Hello experts,
    Wondering if someone can suggest the best method available in SAP BI 7.0 to extract a large amount of data (approx 70 million records) from an InfoCube. I've tried OpenHub and APD, but they are not working; I always need to separate the extracts into small datasets. Any advice is greatly appreciated.
    Thanks,
    David

    Hi David,
    We had the same issue, though that was loading from an ODS to a cube with over 50 million records. I think there is no such option as parallel loading using DTPs. As suggested earlier in the forum, the best option is to split the extraction by calendar year or fiscal year.
    But remember, even with the above criteria, some calendar years might still have a lot of data, and even that becomes a problem.
    What I can suggest is that, apart from just the calendar/fiscal year, you also include some other selection criteria, like company code or sales org.
    Yes, you will end up loading more requests, but the data loads will go smoothly with smaller volumes.
    Regards
    BN

  • Best way to handle large files in FCE HD and iDVD.

    Hi everyone,
    I have just finished working on a holiday movie that my octogenarian parents took. They presented me with about 100 minutes of raw footage that I have managed to edit down to 64 minutes. They have viewed the final version, which I recorded back to tape for them. They now want to know if I can put it onto a DVD for them as well. The problem is that the FCE HD file is 13 GB.
    So here is my question.
    What is the best way to handle this problem?
    I have spoken to a friend of mine who is a professional editor. She said to reduce the movie duration to about 15 minutes because it's probably too long and boring (rather hurtful, really). Anyway, that is out of the question as far as my oldies are concerned.
    I have seen info on Toast 8 that mentions a "Fit to DVD" process that purports to "squash" 9 GB of movie onto a 4.7 GB disc. I can't find out whether it will also put 13 GB onto a dual-layer 8.5 GB disc.
    Do I have to split the movie into two parts and make two dual-layer DVDs? If so, I have to ask: how come "Titanic", at 3hrs+, fits on one disc??
    Have I asked too many questions?

    Take a deep breath. Relax. All is fine.
    iDVD does not look at the size of your video file; it looks at the length. iDVD can accommodate up to 2 hours of movie.
    iDVD gives you different options depending on the length of your movie. Although I won't agree with your friend about reducing the length of your movie to 15 minutes, if you could trim out a few minutes to get it under an hour, that setting in iDVD (Best Performance, though the new version may have renamed it) gives you the best quality. Still, any iDVD setting will give you good quality, even at 64 minutes.
    In FCE, export as QuickTime Movie, NOT any flavour of QuickTime Conversion. Select chapter markers if you have them. If everything is on one system, uncheck the Make Movie Self-Contained button. Drop the QT file into iDVD.

  • Best way to store big amount of data

    Hi, I need to store a big amount of data; written to a txt file its size is almost 12 MB, although it depends on the computer it runs on, because what I want to store is all the shared files on a computer.
    Which is the best way to store it? String array? Text file? List? I don't need the data after the app closes.
    Thanks

    Well, then which is the best solution, LinkedList or Tree? I only need to store the full path. What I didn't say, my fault, is that I need to search for a file name once I have stored them...
    For searching, a LinkedList will be very slow if it's very large. I think the same is true of javax.swing.tree.DefaultTreeModel, which is the JDK's only tree implementation. I don't know what Jakarta Commons Collections has; it's possible they have a tree that offers fast searching. If you want to stick to the standard Java libraries, you'll want a Set for fast searching. TreeSet keeps the entries in sorted order. If you also need to display them as a tree, you can keep them in both a Set and a tree. If you don't have enough memory to do that, then displaying the whole tree isn't going to be useful to the user anyway, so rethink your goal.
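    A small sketch of the TreeSet approach (the class and the prefix trick here are just illustrations): contains() gives O(log n) lookup by full path, and a subSet range query lists everything under a directory prefix.

        import java.util.NavigableSet;
        import java.util.TreeSet;

        // Sketch: keep full paths in a TreeSet for sorted order and fast lookup.
        public class SharedFileIndex {
            private final NavigableSet<String> paths = new TreeSet<>();

            public void add(String fullPath)     { paths.add(fullPath); }
            public boolean contains(String path) { return paths.contains(path); } // O(log n)

            // Every stored path under a directory prefix, e.g. "C:/shared/".
            public NavigableSet<String> underPrefix(String prefix) {
                // Strings starting with prefix sort between prefix and prefix + '\uffff'.
                return paths.subSet(prefix, true, prefix + '\uffff', false);
            }
        }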

  • Best way to handle large number video files for a project..

    Hey, I was looking to get some insight from the community here. Basically, there is a project being worked on that requires a large amount of footage to be sifted through, of which only a small percentage will be used. These are mostly HD files, and while most of the footage has been watched in QuickTime with notes taken, my question is this.
    What is the best way to take only small portions of each file without having to load everything into Final Cut and without any loss of quality? Should I just trim and rename in QuickTime, or is there an easier way?
    The reason this needs to be done this way is that the smaller segments will each be sent to other editors, and rather than sending huge files we want to split the footage into smaller amounts for each editor to use.
    Thank you so much for any input regarding this, I look forward to what you have to say

    Open the clip in the Viewer. Mark In and Out points on the section you want. Make it a subclip (Cmd-U). Drag the subclip into the bin for the editor who needs it. Repeat.
    If you batch export from a clip, there is a selection to choose whether to export the whole clip, or a check box to export only the marked In/Out.
    This does not sound like a good project on which to begin learning FCP.

  • Best way to store small amount of data to file?

    If I need to store, edit, and retrieve small amounts of data for a desktop app (let's say a small address book with name, address, phone, etc), what are my choices with JavaFX? As far as I can tell, there is no built-in database to handle this...
    What about the same need for a web-based app?
    thanks

    You can use a DataSource with the NetBeans Composer and manipulate items with the Query Language.
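    As a plainer alternative (a different technique than the Composer DataSource, and only a sketch with an invented file name): for data this small, a flat java.util.Properties file read and written from plain Java code is often enough.

        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.util.Properties;

        // Minimal sketch: persist a tiny address book as a .properties file.
        public class AddressBookStore {
            public static void main(String[] args) throws IOException {
                Properties book = new Properties();
                book.setProperty("alice.phone", "555-0100");
                book.setProperty("alice.address", "1 Main St");

                try (FileOutputStream out = new FileOutputStream("book.properties")) {
                    book.store(out, "address book");   // write to disk
                }

                Properties loaded = new Properties();
                try (FileInputStream in = new FileInputStream("book.properties")) {
                    loaded.load(in);                   // read it back
                }
                System.out.println(loaded.getProperty("alice.phone"));  // 555-0100
            }
        }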

  • Best strategy to retrieve large amount of data with EJB ?

    Hi all,
    I have an EJB 3 bean which is fronted by a Web Service. The EJB retrieves a large number of objects (TaskList) and returns them to the client:
    @Stateless
    public List<TaskList> findByRole(String role) {
        List<TaskList> tasklist = em.createNamedQuery("findByRole")
                                    .setParameter("Role", role)
                                    .getResultList();
        return tasklist;
    }
    The problem is that the query is rather slow, but especially that returning lots of objects in the SOAP envelopes takes a lot of time.
    So I'd like to retrieve only a "slot" of the data. Is there any EJB3/Hibernate feature which could help?
    By myself, the only solution that I found is to turn the SLSB into an SFSB, add a parameter to the method, and return only a slot, not the whole TaskList:
    @Stateful
    List<TaskList> tasklist;
    public List<TaskList> findByRole(String role, int slot) {
        tasklist = em.createNamedQuery("findByRole")
                     .setParameter("Role", role)
                     .getResultList();
        // Take a piece of the TaskList
        return tasklist;
    }
    What do you think? Any idea on how to improve this?
    Thanks
    Frank
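    One way to return just a slot while keeping the bean stateless is JPA pagination with setFirstResult/setMaxResults. A sketch reusing the names from the post (the page size is an arbitrary choice):

        import java.util.List;
        import javax.ejb.Stateless;
        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;

        @Stateless
        public class TaskListBean {
            @PersistenceContext
            private EntityManager em;

            private static final int PAGE_SIZE = 50;   // assumed slot size

            @SuppressWarnings("unchecked")
            public List<TaskList> findByRole(String role, int slot) {
                return em.createNamedQuery("findByRole")
                         .setParameter("Role", role)
                         .setFirstResult(slot * PAGE_SIZE)  // skip the earlier slots
                         .setMaxResults(PAGE_SIZE)          // fetch one slot only
                         .getResultList();
            }
        }

    With setMaxResults the provider adds the row limit to the generated SQL, so only one slot's rows are fetched from the database and marshalled into the SOAP response.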


  • Best way to handle large list of results in recordsets?

    Hello all.
    I'm using Dreamweaver CS3, MySQL and ASP/VBScript.
    My database of users behind my website is now approaching 25,000.
    I often have to "move" items in the database from one user record to another.
    Up until now, I've done this simply by way of a drop-down menu/list that is populated with the user ID# and name of each and every user in the database. This previously allowed me to simply select the ID of the customer I wanted to "move" the record to.
    The problem is that the system is now trying to load a list of almost 25,000 user IDs each time I view the relevant site page, which is taking so long to load that it's uncomfortable.
    I've seen other sites that allow you to start typing something in to a text box and it starts filtering the results that match as you type, showing a list below.
    I assume (but am happy to be advised otherwise) that this is likely to be my best way forward, but I haven't the first clue how to do it.
    Can anyone advise?
    Regards
    David.

    You're looking for a 'type ahead' control. Try searching the web, although you may have trouble finding example code for classic ASP; I did find some ASP.NET solutions out there.

  • Best way to handle Log Shipping during physical movement of SQL Server

    Hi All
    We are moving a SQL Server physically from one rack to another in the data centre: just power off, move the server and network link, power up, and bring it back to the same state as before the power-off.
    One of the SQL Server 2005 instances has log shipping active. What is the best way to maintain log shipping during this physical movement? I do not want to remove log shipping and re-configure it from scratch.
    I need help with a clean and safe method to carry out this activity.
    Thanks in Advance

    Thanks for the reply...
    No, I am not asking about RACK migration, just the SQL Server. Here are the steps I am planning to take:
    1. Stop the application(s) that connect to the databases on this server.
    Correct.
    2. Note the account under which SQL Server is running, for the purpose of checking that account's permissions on the folder(s) used for log shipping.
    OK.
    3. Stop the jobs, using a script to disable and enable them.
    Here I have a doubt about disabling and enabling the Log Shipping jobs:
    You can use a job (T-SQL), but why not use the GUI? Log into SQL Server, expand SQL Server Agent, right-click on the job and disable it.
    a. I should stop the LS jobs manually as you recommended, but not while they are running, am I right?
    Yes, see the Job Activity Monitor on both the primary and secondary servers. It will show the status "executing" for running jobs; look out for the LS jobs.
    b. Shall I disable the jobs first on the secondary or the primary server?
    First disable on the primary and then on the secondary.
    I faced an issue where shutting down the SQL agent on the secondary caused the secondary database to go into suspect mode. So make sure no job is running while you shut down the agent. If the restore log job is running, let it complete and then disable the job.
    c. On which server should I enable the LS jobs first?
    Primary.
    4. Stop the SQL Server services.
    Run sp_who2 or select * from sys.sysprocesses to see any active transactions, then proceed accordingly.
    AND take the same steps in reverse order when powering on after the physical migration of the box.
    Pls advise.
    Hope this helps.

  • What is the most efficient way of passing large amounts of data through several subVIs?

    I am acquiring data at a rate of once every 30 ms. This data is sorted into clusters, with relevant information being grouped together. These clusters are then added to a queue. I have a cluster of queue references to keep track of all the queues. I pass this cluster around to the various subVIs where I dequeue the data. Is this the most efficient way of moving the data around? I could also use "Obtain Queue" and the queue name to create the reference whenever I need it.
    Or would it be more efficient to create one large cluster which I pass around? Then I can use unbundle by index to pick off the values I need. This large cluster could have all the values individually, or it could be composed of the previously mentioned clusters (i.e. a large cluster of clusters).

    It sounds pretty good the way you have it. In general, you want to sort these into groups that make sense to you. Then if there is a performance problem, you can arrange them so that it is a bit better for the computer, but let's face it, our performance counts too. Anyway, this generally means a smallish number of groups with a reasonable number of references or objects in them. If you need to group them into one to pass somewhere, bundle the clusters together and unbundle them on the other side to minimize the connectors needed. Since the references are four bytes, you don't need to worry about the performance of moving these around anyway.
    Greg McKaskle
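    For comparison, a rough Java analog of that "cluster of queue references" arrangement (queue names and sizes invented): the holder object is what gets passed to each worker, and copying it copies only references, never the queued data.

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        // One small holder groups the queues, like the LabVIEW cluster of
        // queue references; pass the holder, not the individual queues.
        public class AcquisitionQueues {
            public final BlockingQueue<double[]> pressure = new ArrayBlockingQueue<>(1024);
            public final BlockingQueue<double[]> position = new ArrayBlockingQueue<>(1024);

            public static void main(String[] args) throws InterruptedException {
                AcquisitionQueues q = new AcquisitionQueues();  // the "cluster"
                q.pressure.put(new double[] {1.0, 2.0});        // producer side
                double[] sample = q.pressure.take();            // consumer "subVI"
                System.out.println(sample.length);              // 2
            }
        }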
