Converting large amounts of points - 76 million lat/lons to spatial objects...

Hello, I need help.
Platform: Oracle 11g 64-bit on Windows Server 2008 Enterprise 64-bit, with 64 GB of RAM and 2 CPUs totalling 24 cores.
Does anyone know of a fast way to convert a large number of points to spatial objects? I need to convert 76 million lat/lon pairs to ESRI st_geometry or Oracle sdo_geometry.
Currently, I have set up code using pipelined parallel functions and multiple jobs that run concurrently. It still takes over 2.5 hours to process all of the points.
Any pointers would be GREATLY appreciated!
Thanks
John

Hi,
Where is the lat/lon data at the moment? In an external text file, or in an existing database table as number attributes?
If they're in an external text file, then I'd probably use an external table to load them in as quickly as possible.
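For the external table route, a hypothetical definition might look like this (the directory path, file name and column layout are assumptions about your data, so adjust to suit):
create or replace directory data_dir as 'C:\data';
create table points_ext (
  id  number,
  lat number,
  lon number
)
organization external (
  type oracle_loader
  default directory data_dir
  access parameters (
    records delimited by newline
    fields terminated by ','
    missing field values are null
  )
  location ('points.csv')
)
reject limit unlimited;
You could then select straight from points_ext in a CTAS like the ones shown below.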
If they're in an existing database table, then you can just update the sdo_geometry column using:
update <table>
set <geometry column> = sdo_geometry(2001, <your srid>,
    sdo_point_type(<lon column>, <lat column>, null), null, null)
where <lon column> is not null
and <lat column> is not null;
That should run very quickly for you. If you want to avoid the overhead of creating redo, you could use "create table ... as select ...". This example of creating 1,000,000 points runs in 9 seconds for me:
create table sample_points (geometry) nologging as
  (select sdo_geometry(2001, null,
  sdo_point_type(
  trunc(100000 * dbms_random.value()),
  trunc(100000 * dbms_random.value()),
  null), null, null)
  from dual connect by level <= 1000000);
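Applied to your 76 million rows, a sketch of the same idea with parallelism enabled might look like the following (the source table, column names and the SRID 4326 are assumptions, so treat it as a starting point rather than tested code):
alter session enable parallel ddl;
alter session enable parallel query;
create table points_geom parallel 16 nologging as
  (select t.id,
          sdo_geometry(2001, 4326, -- 4326 = WGS84 lat/lon, an assumed SRID
          sdo_point_type(t.lon, t.lat, null),
          null, null) geometry
  from points_raw t -- hypothetical source table
  where t.lon is not null
  and t.lat is not null);
With 24 cores available, a parallel nologging CTAS like this should comfortably outrun a pipelined pl/sql approach.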
> I have set up code using pipelined parallel functions and multiple jobs that run concurrently
You shouldn't need to use PL/SQL for this task. If you find you do, then provide some sample code and we'll take a look.
Regards,
John O'Toole

Similar Messages

  • Is there a way to play/convert large FLAC libraries on an iPod Classic, without having to pre-convert each of the folders?

    Hello,
    I'm looking to get an iPod Classic due to the huge capacity it has. The majority of my library is in FLAC and made up of full albums, some with cue files as well. My library is well over 2TB (on an external hard drive) and mainly FLAC, with occasional MP3s and WAVs. I'm looking to transfer a bunch of albums in 16/44.1 FLAC onto the iPod Classic. I looked into Rockbox, as I did with the Sansa Fuze, but as of right now the iPod Classic isn't listed. I really wish there were a native way of playing FLAC on the iPod Classic.
    So the next best thing I could hope for is to convert FLAC to another format (ALAC or WAV... maybe MP3 for more space, but I really want to stick with lossless). Is there a way to queue a bunch of albums (folders) at the same time and have them converted into another format suitable for playback on the iPod Classic? Regarding this conversion process, there are two things I am looking to achieve.
    1) The conversion process should be on the fly. I really don't want to convert all the FLAC to another iPod format and then keep those converted files on my hard drive until they are transferred to the iPod. I would much prefer something that could take my FLAC folders from my external hard drive, then process, convert, and transfer them to the iPod directly, without having to keep anything on my MacBook. This is so I don't run out of space; it's also much easier that way. I don't care much about the time this process takes, as long as I don't have to constantly do things manually. I would like to move maybe 40-50 GB at a time, until I max out the 160 GB.
    2) Proper cataloguing of the converted files - I don't want the files losing all their tagging information and coming up randomly, disorganized, on the iPod. I understand ALAC is better at this, as it can retain the tags while WAV can't, but I would still like some input, as I have never tried ALAC before.
    I looked at MediaMonkey, although I'm not too sure about it. Also, I have a MacBook Pro Retina, so a native app would be best.

    You cannot put music onto your iPod without using iTunes; that's what iTunes is for.
    It's also not a good idea to wipe the music from your computer and have it only on your iPod. We see countless posts here from people who have done just that - and then lost everything when the iPod needed a Restore. Even if you never need to Restore your iPod, what happens when you eventually replace it? You'll be back here asking how to get the music from one iPod to another. That's not easy to do - we see countless posts about that too!
    A much better idea is to buy an external drive (a proper external drive, not simply an iPod) and put your large amount of music onto that drive. Then point your iTunes Library to that drive. However, you need to remember two things:
    You still need a backup of that Library.
    Using an external drive as your iTunes Library means that the drive must be connected and ready to read before starting your iTunes programme. If it isn't, then iTunes will look on the C: drive - and you will find no music in your Library. (Once again, lots of posts about that as well!)

  • Query about clustering unrelated large amounts of data together vs. keeping it separate.

    I would like to ask the talented enthusiasts who frequent the developer network to tell me if I have understood how LabVIEW deals with clusters. A generic description of a situation involving clusters, and what I believe LabVIEW does, is given below. An example of this type of situation, generating the Fibonacci sequence, is attached to illustrate what I am saying.
    A description of the general situation:
    A cluster containing several different variables (mostly unrelated) has one or two of these variables unbundled for immediate use and then the modified values bundled back into the cluster for later use.
    What I think LabVIEW does:
    As the original cluster goes into the unbundle (to get the original variable values) and the bundle (to update the stored variable values), a duplicate of the entire cluster is made before picking out the individual values chosen to be unbundled. This means that if the cluster also contains a large amount of unrelated data, then processor time is wasted duplicating this data.
    If, on the other hand, this large amount of data is kept separate, then this would not happen and no processor time would be wasted.
    In the attached file the good method does have the array (large amount of unrelated data) within the cluster and does not use the array in more than one place, so it is not duplicated. If tunnels were used instead, I believe at least one duplicate is made.
    Am I correct in thinking that this is the behaviour LabVIEW uses with clusters? (I expected LabVIEW to duplicate only the variable values chosen in the unbundle code object. As this choice is fixed at compile time, it would seem to me that the compiler should be able to recognise that the other cluster variables are never used.)
    Is there a way of keeping the efficiency of using many separate variables (potentially ~50) whilst keeping the ease of use of a single cluster variable over separate variables?
    The attachment:
    A vi that generates the Fibonacci sequence (the I32 used wraps at around the 44th value, so values from that point on are wrong) is attached. The calculation is iterative, using a for loop. Two variables are needed to perform the iteration, and these are stored in a cluster (and passed from iteration to iteration within the cluster). To provide the large amount of unrelated data, a large array of reasonably sized strings is included.
    The bad way is to have the array stored within the cluster (causing massive overhead). The good way is to have the array separate from the other pieces of data, even if it passes through the for loop (no massive overhead).
    Try replacing the array shift registers with tunnels in the good case and see if you can repeat my observation that using tunnels causes overhead in comparison to shift registers whenever there is no other reason to duplicate the array.
    I am running LabVIEW 7 on Windows 2000 with sufficient memory that the page file is not used in this example.
    Thank you all very much for your time and for sharing your LabVIEW experience,
    Richard Dwan
    Attachments:
    Fibonacci_test.vi ‏71 KB

    > That is an interesting observation you have made and seems to me to be
    > quite inexplicable. The trick is interesting but not practical for me
    > to use in developing a large piece of software. Thanks for your input
    > - I think I'll be contacting technical support for an explanation
    > along with some other anomalies involving large arrays that I have
    > spotted.
    >
    The deal here is that the bundle and unbundle nodes must be very careful when they are swapping elements around. This used to make copies in the normal cases, but that has been improved. The reason the sequence affects it is that it changes the order of the element movement, so that the algorithm succeeds in avoiding a copy.
    Another, more obvious way is to use a regular bundle and unbundle, not the named variety. These tend to have an easier time in the algorithm also.
    Technically, I'd report the diagram to tech support to see if the named bundle/unbundle case can be handled as well. In the meantime, you can leave the data unbundled, as in the faster version.
    Greg McKaskle

  • Store and sort large amounts of Strings

    Hi, everyone.
    I would be very grateful if someone could help me with the two questions I have.
    I am working on a program which has to turn a billion integers into words, sort them, and find a certain character.
    My first question is: how can I do my conversion other than in a loop? (The loop takes about an hour.) The algorithm works well for smaller iteration counts.
    And where can I store such a large amount of Strings, so that I can then sort them and work with them?
    Thank you in advance.

    > I will try to explain it better.
    > I have written a function which converts integers into words. I call it in a for loop for all the integers from 1 to 999999999. It works well for smaller integers, but takes too long to complete all the cycles for 999999999.
    This seems nonsensical. But yes, it will take a long time.
    Do you in fact mean that you are converting integers INTO words? Maybe?
    > I try to store each converted integer in an array of Strings, but obviously it is a lot of Strings.
    Yes, it would seem that is in fact what you mean...
    > I would like to find out what kind of array I can use to store all those Strings. I have to sort them, then concatenate all of them and get a certain character.
    You can store them in a file instead. Why do you want to sort AND concatenate them? This also makes no sense and will be difficult.
    > Is it better? Thank you.
    Not really.
    At any rate, in order to solve this problem I think you are going to
    a) have to live with buffering to disk
    b) use multiple threads
    c) get a more realistic requirement than concatenating 900 million Strings together, because that won't be happening

  • Firefox is using large amounts of CPU time and disk access, and I need to know how to shut down most of this so I can actually use the browser.

    Firefox is a very busy piece of software. It uses large amounts of CPU time and disk access, and it puts my usage at low priority, so I have to wait some time before I can use my pointer or keyboard. I don't know what it uses all that CPU and disk access time for, but it's of no use to me. It often takes off with massive use of resources when I'm not doing anything, and I may not have use of my pointer for several minutes. How can I shut down most of this so I can use the browser to get my work done? I just want to use the web-site-access part of the software and drop all the extra. I don't want Firefox to be able to recover after a crash; I just want to browse with a minimum of interference from Firefox. I would think that this is the most commonly asked question.

    Firefox consumes a lot of CPU resources
    * https://support.mozilla.com/en-US/kb/Firefox%20consumes%20a%20lot%20of%20CPU%20resources
    High memory usage
    * https://support.mozilla.com/en-US/kb/High%20memory%20usage
    Check these and tell us if it's working.

  • Creation of data packages due to large amount of datasets leads to problems

    Hi Experts,
    We have build our own generic extractor.
    When data packages are created (due to the large amount of datasets), different problems occur.
    For example:
    Datasets are now doubled and appear twice: once in package one and a second time in package two. Since those datasets are not identical, information is lost while uploading them to an ODS or Cube.
    What can I do? SAP will not help, due to it being a generic DataSource.
    Any suggestion?
    BR,
    Thorsten

    Hi All,
    Thanks a million for your help.
    My conclusions from your answers are the following:
    a) Since the ODS is Standard, no datasets are deleted within the transformation; they are aggregated.
    b) Uploading a huge amount of datasets is possible in two ways:
       b1) with selection criteria in the InfoPackage and several uploads
       b2) without selection criteria in the InfoPackage, and therefore an automatic split of the datasets into data packages
    c) both ways should have the same result within the ODS
    OK, thanks for that.
    So far I have only checked the data within the PSA. In the PSA, the number of datasets is not equal for variants b1 and b2.
    I guess this is normal technical behaviour of BI.
    I am fine as long as the results in the ODS are the same for b1 and b2.
    Have a nice day.
    BR,
    Thorsten

  • Open Large amount of data

    Hi
    I have a file on the application server in .dat format. It contains a large amount of data - maybe 2 million records or more. I need to open the file to check the record count. Is there any software, or any option, for opening the file? I have tried opening it with Notepad, Excel... it gives an error.
    please let me know
    Thanks

    Hi,
    Try this..
    Go to AL11..
    Go to the file's directory. In the file list there is a field called length, which is the total length of the file in characters.
    If you know the length of a single line, divide the length of the file by the length of a single line - I believe you will get the number of records.
    For example, a 260,000,000-character file made up of fixed 130-character lines holds 2,000,000 records.
    Thanks,
    Naren

  • Why, each time I try to save my day's work (the version WITH mark-ups and the version without), does it say "you have placed a large amount of text on the clipboard, do you want to access this"? What is the clipboard, and how can I just save my work in its proper file?

    I don't get this clipboard thing! I have created two files, one for the MS with mark-ups and one for the unmarred version, each living in its own spot in my "house." It seems to be merging the two versions, and I have to go in and re-paste; then it always says "you have placed a large amount of text on the clipboard, do you want to be able to access this?" and it prompts me to say no, but I am afraid I'll lose my day's work, so I just tap cancel. I highlight the day's revision and select "copy" to paste into the unmarked doc before I save the marked-up (working) doc. It is when I try to close that one that the clipboard issue pops up. What can you tell me about saving a doc in two places, and how do I NOT get it on the clipboard? Thanks! I am not super computer savvy, so layperson's language is much appreciated!

    Are you using Microsoft Word? Microsoft thinks its users are idiots. They put up a lot of pointless messages that annoy and worry users. I have seen this message from Microsoft Word. It's annoying.
    As BDaqua points out...
    When you copy information via Edit > Copy (command + C) or Edit > Cut (command + X), you place the information on the clipboard. When you paste information, via Edit > Paste (command + V), you copy information from the clipboard to your data file.
    If you Edit > Cut (command + X), do not paste the information, and then quit Word, you could be losing information. Microsoft is very worried about this. When you quit Word, it checks whether there is information on the clipboard and, if so, puts out this message.
    You should be saving your work more than once a day. I'd save every 5 minutes. Command + S does a save.
    Robert

  • Advice needed on how to keep large amounts of data

    Hi guys,
    I'm not sure what the best way is to make large amounts of data available to my Android app on the local device.
    For example, records of food ingredients, in the hundreds?
    I have read, and successfully created .db's using, this tutorial:
    http://help.adobe.com/en_US/AIR/1.5/devappsflex/WS5b3ccc516d4fbf351e63e3d118666ade46-7d49.html
    However, to populate the database I use Flash? So this kind of defeats the purpose of it. There's no point in me shifting a massive array of data from Flash to a SQL database when I could access the data directly from the AS3 array.
    So maybe I could create the .db with an external program? But then how would I include that .db in the APK file and deploy it to users' Android devices?
    Or maybe I create an AS3 class with an XML object in it and use that as a means of data storage?
    Any advice would be appreciated

    You can use any means you like to populate your SQLite database, including using external programs, (temporarily) embedding a text file with SQL statements, executing some SQL from AS3 code, etc.
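    For example, a seed script for the ingredients case might be as small as this (the table name and columns are hypothetical):
    CREATE TABLE ingredients (
      id   INTEGER PRIMARY KEY, -- SQLite rowid alias
      name TEXT NOT NULL,
      kcal REAL                 -- e.g. calories per 100 g
    );
    INSERT INTO ingredients (name, kcal) VALUES ('flour', 364);
    INSERT INTO ingredients (name, kcal) VALUES ('sugar', 387);
    Run that once with any SQLite tool (or from AS3) to produce the .db that you then bundle.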
    Once you have populated your db, deploy it with your project:
    http://chrisgriffith.wordpress.com/2011/01/11/understanding-bundled-sqlite-databases-in-air-for-mobile/
    Cheers, - Jon -

  • Is there a way to put a large amount of music on your iPod without having to keep all the files in iTunes on your computer as well? I want to put my entire music collection (including CDs) on my iPod, but don't want to take up the space on my computer.

    Is there a way to put a large amount of music on the iPod without having to keep all the files in iTunes as well? I want to use my iPod as an external drive and put all of my music on it without taking up the space on my computer. I also don't want to lose all my files every time I plug the iPod into my computer. Is this possible? Is there a way to avoid using iTunes and only use the iPod as an external drive?

    You cannot put music onto your iPod without using iTunes; that's what iTunes is for.
    It's also not a good idea to wipe the music from your computer and have it only on your iPod. We see countless posts here from people who have done just that - and then lost everything when the iPod needed a Restore. Even if you never need to Restore your iPod, what happens when you eventually replace it? You'll be back here asking how to get the music from one iPod to another. That's not easy to do - we see countless posts about that too!
    A much better idea is to buy an external drive (a proper external drive, not simply an iPod) and put your large amount of music onto that drive. Then point your iTunes Library to that drive. However, you need to remember two things:
    You still need a backup of that Library.
    Using an external drive as your iTunes Library means that the drive must be connected and ready to read before starting your iTunes programme. If it isn't, then iTunes will look on the C: drive - and you will find no music in your Library. (Once again, lots of posts about that as well!)

  • With journaling, I have found that my computer is saving a large amount of data, logs of all the changes I make to files; how can I clean up these logs?

    With journaling, I have found that my computer is saving a large amount of data: logs of all the changes I make to files. How can I clean up these logs?
    For example, in Notes I have written three notes; however, if I click on 'All On My Mac' in the sidebar, I see about 10 different versions of each note I make - it saves a version every time I add or delete a sentence.
    I also noticed that when I write an email, Mail saves about 10 or more draft versions before the final is sent.
    I understand that all this journaling provides a level of security and prevents data loss, but I was wondering: is there a function to clean up journal logs once in a while?
    Thanks
    Roz

  • My iMac running 10.7.5 crashes when I copy and paste large amounts of information, like a picture.

    My iMac running 10.7.5 crashes when I store a large amount of data, like copying and pasting a picture. It has also started being painfully slow to open the first page in Safari. Here is some information a program gathered on my computer. Thanks!
    Problem description:
    iMac, 10.7.5. Crashes when copying & pasting large amounts, and slow to open the first web page; then back to normal speed.
    EtreCheck version: 2.1.8 (121)
    Report generated April 13, 2015 3:05:27 PM EDT
    Download EtreCheck from http://etresoft.com/etrecheck
    Click the [Click for support] links for help with non-Apple products.
    Click the [Click for details] links for more information about that line.
    Hardware Information: ℹ️
        iMac (27-inch, Late 2009) (Technical Specifications)
        iMac - model: iMac10,1
        1 3.33 GHz Intel Core 2 Duo CPU: 2-core
        8 GB RAM
            BANK 0/DIMM0
                2 GB DDR3 1067 MHz ok
            BANK 1/DIMM0
                2 GB DDR3 1067 MHz ok
            BANK 0/DIMM1
                2 GB DDR3 1067 MHz ok
            BANK 1/DIMM1
                2 GB DDR3 1067 MHz ok
        Bluetooth: Old - Handoff/Airdrop2 not supported
        Wireless: Unknown
    Video Information: ℹ️
        ATI Radeon HD 4670 - VRAM: 256 MB
            iMac 2560 x 1440
    System Software: ℹ️
        Mac OS X 10.7.5 (11G63) - Time since boot: 14 days 7:47:18
    Disk Information: ℹ️
        ST31000528AS disk0 : (1 TB)
            disk0s1 (disk0s1) <not mounted> : 210 MB
            Macintosh HD (disk0s2) / : 999.35 GB (777.11 GB free)
            Recovery HD (disk0s3) <not mounted>  [Recovery]: 650 MB
        OPTIARC DVD RW AD-5680H 
    USB Information: ℹ️
        Sunplus Innovation Technology. USB to Serial-ATA bridge 1 TB
            disk2s1 (disk2s1) <not mounted> : 210 MB
            Time Machine Backups (disk2s2) /Volumes/Time Machine Backups : 999.86 GB (143.50 GB free)
        Apple Inc. Built-in iSight
        Apple Internal Memory Card Reader
        Apple Computer, Inc. IR Receiver
        HP Deskjet 9800
        Apple Inc. BRCM2046 Hub
            Apple Inc. Bluetooth USB Host Controller
    Kernel Extensions: ℹ️
            /Library/Extensions
        [loaded]    org.virtualbox.kext.VBoxDrv (4.2.18) [Click for support]
        [loaded]    org.virtualbox.kext.VBoxNetAdp (4.2.18) [Click for support]
        [loaded]    org.virtualbox.kext.VBoxNetFlt (4.2.18) [Click for support]
        [loaded]    org.virtualbox.kext.VBoxUSB (4.2.18) [Click for support]
            /Library/Parallels/Parallels Service.app
        [loaded]    com.parallels.kext.prl_hid_hook (7.0 15107.796624) [Click for support]
        [loaded]    com.parallels.kext.prl_hypervisor (7.0 15107.796624) [Click for support]
        [loaded]    com.parallels.kext.prl_netbridge (7.0 15107.796624) [Click for support]
        [loaded]    com.parallels.kext.prl_vnic (7.0 15107.796624) [Click for support]
            /System/Library/Extensions
        [loaded]    com.parallels.kext.prl_usb_connect (7.0 15107.796624) [Click for support]
    Startup Items: ℹ️
        HP IO: Path: /Library/StartupItems/HP IO
        ParallelsTransporter: Path: /Library/StartupItems/ParallelsTransporter
        VirtualBox: Path: /Library/StartupItems/VirtualBox
        Startup items are obsolete in OS X Yosemite
    Launch Agents: ℹ️
        [running]    com.hp.devicemonitor.plist [Click for support]
        [loaded]    com.hp.help.tocgenerator.plist [Click for support]
        [loaded]    com.parallels.desktop.launch.plist [Click for support]
        [loaded]    com.parallels.DesktopControlAgent.plist [Click for support]
        [running]    com.parallels.vm.prl_pcproxy.plist [Click for support]
    Launch Daemons: ℹ️
        [loaded]    com.adobe.fpsaud.plist [Click for support]
        [loaded]    com.hikvision.iVMS-4200.plist [Click for support]
        [loaded]    com.microsoft.office.licensing.helper.plist [Click for support]
        [running]    com.parallels.desktop.launchdaemon.plist [Click for support]
    User Launch Agents: ℹ️
        [loaded]    com.adobe.ARM.[...].plist [Click for support]
        [running]    com.akamai.single-user-client.plist [Click for support]
        [loaded]    com.google.keystone.agent.plist [Click for support]
        [not loaded]    org.virtualbox.vboxwebsrv.plist [Click for support]
    User Login Items: ℹ️
        iTunesHelper    UNKNOWN  (missing value)
        GrowlHelperApp    Application  (/Users/[redacted]/Library/PreferencePanes/Growl.prefPane/Contents/Resources/GrowlHelperApp.app)
        Microsoft AU Daemon    Application  (/Applications/Microsoft AutoUpdate.app/Contents/MacOS/Microsoft AU Daemon.app)
        Air Mouse Server    UNKNOWN  (missing value)
        CrossOver CD Helper    UNKNOWN  (missing value)
        Pages    UNKNOWN  (missing value)
        Dropbox    Application  (/Applications/Dropbox.app)
        Wondershare Helper Compact    Application  (/Users/[redacted]/Library/Application Support/Helper/Wondershare Helper Compact.app)
        AdobeResourceSynchronizer    Application Hidden (/Applications/Adobe Reader.app/Contents/Support/AdobeResourceSynchronizer.app)
        HP Scheduler    Application  (/Library/Application Support/Hewlett-Packard/Software Update/HP Scheduler.app)
    Internet Plug-ins: ℹ️
        JavaAppletPlugin: Version: 14.9.0 - SDK 10.7 Check version
        FlashPlayer-10.6: Version: 16.0.0.305 - SDK 10.6 [Click for support]
        QuickTime Plugin: Version: 7.7.1
        AdobePDFViewerNPAPI: Version: 11.0.10 - SDK 10.6 [Click for support]
        Flash Player: Version: 16.0.0.305 - SDK 10.6 Outdated! Update
        AdobePDFViewer: Version: 11.0.10 - SDK 10.6 [Click for support]
        SharePointBrowserPlugin: Version: 14.4.8 - SDK 10.6 [Click for support]
        Google Earth Web Plug-in: Version: 6.0 [Click for support]
        Silverlight: Version: 4.0.60531.0 [Click for support]
        iPhotoPhotocast: Version: 7.0
    Safari Extensions: ℹ️
        1-ClickWeather
        Reload Button
    3rd Party Preference Panes: ℹ️
        Akamai NetSession Preferences  [Click for support]
        Flash Player  [Click for support]
        Growl  [Click for support]
        MacFUSE  [Click for support]
    Time Machine: ℹ️
        Time Machine not configured!
    Top Processes by CPU: ℹ️
             2%    WindowServer
             1%    prl_disp_service
             0%    fontd
             0%    AdobeReader
             0%    ODSAgent
    Top Processes by Memory: ℹ️
        120 MB    mds
        112 MB    AdobeReader
        94 MB    WindowServer
        94 MB    Finder
        69 MB    loginwindow
    Virtual Memory Information: ℹ️
        5.44 GB    Free RAM
        1.78 GB    Active RAM
        464 MB    Inactive RAM
        901 MB    Wired RAM
        1.64 GB    Page-ins
        0 B    Page-outs

    I believe that insufficient RAM may be the source of some of your problems. If you have somewhere between 4 and 8 GB of RAM, you will experience smoother computing. 3 GB doesn't seem right, so you might want to learn more by going to this site:
    http://www.crucial.com/store/drammemory.aspx
    I don't know what's happening with your optical drive, but it seems you use your drive quite a bit. In that case, look into a lens cleaner for your machine. It's inexpensive and works quite well.
    I hope you'll post here with your results!

  • What is the best practice for deleting large amounts of records?

    hi,
    I need your suggestions on the best practice for regularly deleting large amounts of records from SQL Azure.
    Scenario:
    I have a SQL Azure database (P1) into which I insert data every day. To prevent the database size growing too fast, I need a way to remove, every day, all the records which are older than 3 days.
    For an on-premise SQL Server I could use a SQL Server Agent job but, since SQL Azure does not support SQL Agent jobs yet, I have to use a web job scheduled to run every day to delete all the old records.
    To prevent table locking when deleting too large an amount of records, in my automation (web job) code I limit the number of deleted records to 5000 per call, with a batch delete count of 1000, each time I call the record-deleting stored procedure:
    1. Get total amount of old records (older then 3 days)
    2. Get the total iterations: iteration = (total count/5000)
    3. Call the SP in a loop:
    for (int i = 0; i < iterations; i++)
        Exec PurgeRecords @BatchCount = 1000, @MaxCount = 5000
    And the stored procedure is something like this:
     CREATE PROCEDURE PurgeRecords @BatchCount INT, @MaxCount INT
     AS
     BEGIN
      DECLARE @table TABLE ([RecordId] INT PRIMARY KEY);
      -- Collect the next @MaxCount records older than 3 days
      INSERT INTO @table
      SELECT TOP (@MaxCount) [RecordId] FROM [MyTable]
      WHERE [CreateTime] < DATEADD(DAY, -3, GETDATE());
      -- Delete them @BatchCount rows at a time
      DECLARE @RowsDeleted INT = 1;
      WHILE (@RowsDeleted > 0)
      BEGIN
       WAITFOR DELAY '00:00:01';
       DELETE TOP (@BatchCount) FROM [MyTable]
       WHERE [RecordId] IN (SELECT [RecordId] FROM @table);
       SET @RowsDeleted = @@ROWCOUNT;
      END
     END
    It basically works, but the performance is not good. For example, it took around 11 hours to delete around 1.7 million records - really too long.
    Following is the web job log for deleting around 1.7 million records:
    Following is the web job log for deleting around 1.7 million records:
    [01/12/2015 16:06:19 > 2f578e: INFO] Start getting the total counts which is older than 3 days
    [01/12/2015 16:06:25 > 2f578e: INFO] End getting the total counts to be deleted, total count: 1721586
    [01/12/2015 16:06:25 > 2f578e: INFO] Max delete count per iteration: 5000, Batch delete count: 1000, Total iterations: 345
    [01/12/2015 16:06:25 > 2f578e: INFO] Start deleting in iteration 1
    [01/12/2015 16:09:50 > 2f578e: INFO] Successfully finished deleting in iteration 1. Elapsed time: 00:03:25.2410404
    [01/12/2015 16:09:50 > 2f578e: INFO] Start deleting in iteration 2
    [01/12/2015 16:13:07 > 2f578e: INFO] Successfully finished deleting in iteration 2. Elapsed time: 00:03:16.5033831
    [01/12/2015 16:13:07 > 2f578e: INFO] Start deleting in iteration 3
    [01/12/2015 16:16:41 > 2f578e: INFO] Successfully finished deleting in iteration 3. Elapsed time: 00:03:33.6439434
    Per the log, SQL Azure takes more than 3 minutes to delete 5000 records in each iteration, and the total time is around 11 hours.
    Any suggestions to improve the delete performance?

    This is one approach:
    Assume:
    1. There is an index on 'CreateTime'.
    2. The peak-time insert rate is N times the average. E.g., suppose the average per hour is 10,000 and peak time is 5 times more; that gives 50,000. This doesn't have to be precise.
    3. The desirable maximum number of records to be deleted per batch is 5,000; this doesn't have to be exact.
    Steps:
    1. Find the count of records more than 3 days old (TotalN), say 1,000,000.
    2. Dividing TotalN (1,000,000) by 5,000 gives the number of delete batches (200) if the inserts are perfectly even. Since they are not, and peak inserts can be 5 times the average, set the number of delete batches to 200 * 5 = 1,000.
    3. Dividing 3 days (4,320 minutes) by 1,000 gives 4.32 minutes.
    4. Create a delete statement, and a loop that deletes records with create time < today - 3 days - (4,320 - 4.32 * I) minutes, where I is the iteration number from 1 to 1,000.
    In this way, the number of records deleted in each batch is not even and not known in advance, but it should mostly stay within 5,000; and even though you run a lot more batches, each batch will be very fast.
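    A minimal T-SQL sketch of that loop (reusing the hypothetical [MyTable]/[CreateTime] names from above, and assuming the backlog reaches back no more than 3 days beyond the 3-day cutoff):
    DECLARE @i INT = 1;
    WHILE (@i <= 1000)
    BEGIN
     -- Iteration I deletes everything older than (6 days ago + 4.32 * I minutes);
     -- at I = 1000 the threshold reaches the 3-day cutoff exactly.
     DELETE FROM [MyTable]
     WHERE [CreateTime] < DATEADD(MINUTE, CAST(4.32 * @i AS INT) - 8640, GETDATE());
     SET @i = @i + 1;
    END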
    Frank

  • Looking for ideas for transferring large amounts of data between systems

    Hello,
    I am looking for ideas, based on best practices, for transferring large amounts of data in and out of a NetWeaver-based application.
    We have a new system we are developing in NetWeaver that will utilize both the Java and ABAP stacks, and it will require integration with other SAP and 3rd-party systems. It is a standalone product that doesn't share any form of data store with other systems.
    We need to be able to support tens of millions of records of tabular data coming in and out of our system.
    Since we need to integrate with so many different systems, we are planning to use RFC as our primary interface in and out of the system. As it turns out, RFC is not good at dealing with such a large amount of data being pushed through a single call.
    We have considered a number of possible ideas, but we are not very happy with any of them. I would like to see what the community has done in the past to solve problems like this, as well as how SAP currently solves this problem in other applications like XI, BI, ERP, etc.

    Primoz wrote: Do you use KDE (Dolphin) 4.6 RC or 4.5?
    Also, I've noticed that if I move/copy things with Dolphin they're substantially slower than if I use cp/mv. But cp/mv works fine for me...
    Also, run Dolphin from a terminal to try and see what the problem is.
    Hope that helps, at least a bit.
    Could you explain why Dolphin should be slower? I'm not attacking you, I'm just asking.
    Because I thought that Dolphin is just a "little" wrapper around the cp/mv/cd/ls applications/commands.

  • How do I pause an iCloud restore for app with large amounts of data?

    I am using an iPhone app which holds 10 GB of data (media files).
    Unfortunately, although all the data was backed up, my iPhone 4 was faulty and needed to be replaced with a new handset. On restore, the 10 GB of data takes a very long time to come down over Wi-Fi. If the restore is interrupted (I had reached the halfway point during the night) because I go to work or take the dog for a walk, I end up, of course, on 3G for a short period of time.
    The next time I am in a Wi-Fi zone, the app starts restoring again right from the beginning.
    How does anyone restore an app with large amounts of data, or pause a restore?

    You can use classifications, but there is no auto feature to archive like that on web apps.
    In terms of the blog, as I have said to everyone that has posted about blog preview images:
    http://www.prettypollution.com.au/business-catalyst-blog
    Just one example of an image at the start of the blog post rendering out; not hard at all.
