Boost import performance?

Hi,
I'm importing into a 9.2.0 database and the import takes 12 hours. Granted, it's a large dump file, but I'm wondering if there is anything I can do to boost the performance of the database during an import.
Thanks,
steve.

Difficult to tell, but there are some general tips:
- Set a large BUFFER size.
- Set COMMIT=Y.
- Import without indexes (INDEXES=N) and create the indexes and enable the FK constraints afterwards; you need to have a script ready for that, of course. A sketch of such an invocation follows below.
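
For illustration, a hypothetical imp command line along those lines; the connect string, file names, and buffer size are invented placeholders, not values from this thread:

    # Hypothetical 9i imp run: big buffer, commit per buffer, no indexes.
    imp system/manager@orcl file=big_dump.dmp full=y log=import.log \
        buffer=10485760 commit=y indexes=n

Afterwards you would run your prepared CREATE INDEX / ALTER TABLE ... ENABLE CONSTRAINT script.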

Similar Messages

  • Boost the Performance of the database

    I am getting user calls saying that the performance of the database is poor. I want to increase the performance of the database; can anyone list the checks and changes I have to make to improve it?
    I am using the topas command to find the top-consuming processes in Unix; apart from this, which areas should I look at to boost performance?
    Please help me in this regard.
    Vel

    There is no one area you can pinpoint and say "this needs tuning"; performance tuning needs to be addressed on all fronts, but make one change at a time and see if it gives the desired improvement. The areas you have to look at are:
    1. Table design.
    2. SQL tuning and proper use of indexes.
    3. Sizing of tables and indexes.
    4. Setting proper SGA parameters; if the machine has memory to spare, make optimal use of it by allocating it to Oracle.
    5. Use of procedures and functions.
    You may or may not get a call from the user, but if you feel that something could be improved by tuning, I guess it's the job of the DBA to do that and squeeze every bit of performance out of the hardware and the software.
    Check out the Oracle performance tuning docs for more detailed information.
    Mukundan.

  • TIPS(49) : IMPORT PERFORMANCE TIPS

    Product: ORACLE SERVER
    Date written: 2003-06-10
    TIPS(49) : IMPORT PERFORMANCE TIPS
    ==================================
    PURPOSE
    [Import performance]
    To cut down the long time an Oracle import takes, try applying the following.
    Explanation
    1) System-level changes
    - If you are re-creating the database, increase DB_BLOCK_SIZE. With a larger block size, fewer I/O cycles occur. If the change is permanent, weigh all of its effects against the previous setting.
    - Create one large rollback segment and take all other rollback segments offline.
    - Size that rollback segment at about 50% of the largest table to be imported. An import is essentially "insert into table_name values (,,,,,)", in which case only rowids go into the rollback segment, so a rollback segment with two equally sized extents is sufficient (see the sketch after this note).
    - Keep the database in NOARCHIVELOG mode until the import finishes. This removes the overhead of creating archive logs.
    - As with the rollback segment, create large redo log files: the larger they are, the fewer log switches occur, which is good for the import. Take small redo logs offline. The alert.log message 'Thread 1 cannot allocate new log, sequence 17, Checkpoint not complete' indicates that larger or more redo log files are needed.
    - If possible, place the tables, rollback segments, and redo log files on different disks. This reduces I/O contention.
    2) Init.ora parameter changes
    - Set LOG_CHECKPOINT_INTERVAL larger than the size of the redo log files. This number is in OS blocks, which are 512 bytes on Unix. Increasing it reduces log switch time.
    - Increase SORT_AREA_SIZE: even if the indexes have not been created yet, unique and primary keys still exist. How much to increase it depends on what else runs on the machine and how much free memory is available, but 5-10 times the usual value is typical. If the machine starts swapping or paging, the value is too high.
    3) Import option changes
    - Use the COMMIT=N option. With it, data is committed after all of each object's (table's) data has been inserted, rather than after each buffer insert. This option cannot be used if the rollback segments are small.
    - Use a large BUFFER size. This, too, depends on other system activity and the size of the database. A larger value reduces the number of times the export file is accessed.
    - Use the INDEXES=N option during import. When you later create the indexes, SORT_AREA_SIZE needs to be even larger.
    Reference Documents
    none
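
    To make the rollback-segment advice concrete, a minimal SQL sketch; the segment name, tablespace, and extent sizes are invented for illustration:

        -- Hypothetical: one big rollback segment with two equal extents,
        -- sized at roughly half the largest table; all others offline.
        CREATE ROLLBACK SEGMENT rbs_import
          TABLESPACE rbs_big
          STORAGE (INITIAL 500M NEXT 500M MINEXTENTS 2);
        ALTER ROLLBACK SEGMENT rbs_import ONLINE;
        ALTER ROLLBACK SEGMENT rbs01 OFFLINE;  -- repeat for each regular segment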

  • Bulk import performance

    Hi Folks,
    We're getting ready to migrate our first teams into our new Vibe environment but I'm wondering if we can do anything to improve the performance of bulk imported documents (drag and drop into the webclient).
    Our environment: Vibe 3.3 on SLES11 SP2 (3 virtual servers)
    Server 1 (Tomcat):
    2x CPU
    8GB RAM (Java cache 6GB)
    Server 2 (Lucene):
    2x CPU
    8GB RAM (Java cache 6GB)
    Server 3 (MySQL):
    2x CPU
    4GB RAM
    When I do a bulk import, the files import and everything is fine, but I'm concerned about the time it takes. During my test imports the only process with any significant utilization is Java on the Tomcat server, and it bounces between 30-90% on a single CPU. If needed, I can easily crank these VMs up to 8 CPUs and 64GB RAM. That said, if I can't tax the current servers, there's no need to increase resources.
    Does anyone know a way to improve the import performance? I want to be able to peg all my CPUs at 100% if need be.

    Hi John,
    Many thanks for your reply. Import and stats gathering are done through an application. I have only the following information about what is supposed to happen when we run the statistics gathering process:
    "Optimization - Uses sub-commands for database optimization (DBMS_STATS*). These are important for the Oracle optimizer to generate optimal execution plans.
    Optimize Schema - Runs the DBMS_STATS.GATHER_SCHEMA_STATS(USER,cascade=>true) command. This procedure gathers (not estimates) statistics for all objects in a schema and on the indexes.
    Optimize the schema after you import an industry model from a dump file, and run the Optimize command whenever you have extensive changes to the data, such as imports or major updates.
    Optimize Feature Classes - Runs the DBMS_STATS.GATHER_TABLE_STATS([USER], [Table], cascade=>true) command for the feature class tables. This procedure gathers table, column, and index statistics."
    The application we use allows gathering statistics in two different places. I now realise that we have only used one of the two so far, and if my understanding of the documentation is right, the one we have used does not gather all statistics. With your explanation the observed behaviour makes sense. Next time I will gather statistics using the second function to see whether it gathers all statistics at once.
    Many thanks again, Rob
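
    For reference, the two quoted calls look roughly like this in SQL*Plus; the feature-class table name is a made-up placeholder:

        -- "Optimize Schema": statistics for every object in the current
        -- schema, including indexes (cascade=>true).
        BEGIN
          DBMS_STATS.GATHER_SCHEMA_STATS(ownname => USER, cascade => TRUE);
        END;
        /
        -- "Optimize Feature Classes": statistics for one feature-class
        -- table; 'PARCELS' is a hypothetical table name.
        BEGIN
          DBMS_STATS.GATHER_TABLE_STATS(ownname => USER,
                                        tabname => 'PARCELS',
                                        cascade => TRUE);
        END;
        /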

  • Import Performance Analysis report

    Hi,
    I ran the import analysis report in FDM and it tells me the time taken by each step of the import process. My question is: why does the "Memo Reassign" step take any time at all? I am not doing anything with memos. Is there a way to avoid the time it takes to reassign memos?
    Thanks, AJ

    Import Performance Analysis
    File Archive Time: 1 second
    Memo Reassign Time: 53 seconds
    Total Import Process: 254 seconds
    Total Map Process: 65 seconds
    Almost a fifth of the total time is taken by the Memo Reassign step for nothing. Is there a way to bring it down to 0 seconds?

  • Help to boost the performance of my proxy server

    Out of personal interest, I am developing a proxy server in Java for enterprises.
    I've designed it so that the user's request is passed to the server through the proxy software, and the response reaches the user's browser through the proxy server:
    User - > Proxy software - > Server
    Server -> Proxy software -> User
    I've implemented the software in Java and it is working fine with HTTP and HTTPS requests. The problem that scares me is that I create a thread to serve each user request. So if 10,000 users access the proxy server concurrently, I fear it will bloat and consume all the resources of the machine where the proxy software is installed, because I'm using threads to serve the requests and responses.
    Is there an alternative solution for this in Java?
    Somebody urged me to use Java NIO, and I'm confused. I need a solution that keeps my proxy server clear of performance problems. I want it to be the first proxy server written entirely in Java with performance good enough even for large organisations (like the Sun Java Web Proxy Server, which is written in C).
    How can I boost the performance? Users should have no sense of reaching the remote server through a proxy; it should feel like accessing the web server directly, with no performance lag, as fast as C. I need to do this in Java. Please help.

    I think having a thread per request is fine.
    Maybe I got it wrong, but I thought the point of using NIO with sockets was to get rid of the one-thread-per-request combo?
    Correct. A server which has one thread per client doesn't scale well.
    Kaj
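
    To illustrate the approach being recommended, a minimal Java NIO sketch in which one selector thread multiplexes many connections instead of one thread per client; the port and buffer size are arbitrary, and a real proxy would forward data rather than echo it:

        import java.io.IOException;
        import java.net.InetSocketAddress;
        import java.nio.ByteBuffer;
        import java.nio.channels.*;
        import java.util.Iterator;

        public class NioSketch {
            public static void main(String[] args) throws IOException {
                Selector selector = Selector.open();
                ServerSocketChannel server = ServerSocketChannel.open();
                server.bind(new InetSocketAddress(8080));
                server.configureBlocking(false);
                server.register(selector, SelectionKey.OP_ACCEPT);
                ByteBuffer buf = ByteBuffer.allocate(4096);
                while (true) {
                    selector.select(); // blocks until some channel is ready
                    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                    while (it.hasNext()) {
                        SelectionKey key = it.next();
                        it.remove();
                        if (key.isAcceptable()) {
                            // New client: register it for non-blocking reads.
                            SocketChannel client = server.accept();
                            client.configureBlocking(false);
                            client.register(selector, SelectionKey.OP_READ);
                        } else if (key.isReadable()) {
                            SocketChannel client = (SocketChannel) key.channel();
                            buf.clear();
                            if (client.read(buf) < 0) { client.close(); continue; }
                            buf.flip();
                            client.write(buf); // echo back; a proxy would relay
                        }
                    }
                }
            }
        }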

  • Import: performance question

    Hi, what is the difference between these statements in terms of application performance?
    import java.io.*;
    and
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    Which one is faster for execution?

    Neither. Search the forums or web for the countless answers to the same question.
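
    For what it's worth, an import is resolved entirely at compile time, so both forms produce identical bytecode; a tiny sketch (class names are invented):

        import java.io.FileInputStream;

        // Compiles to the same bytecode as WithoutImport below:
        // the import is only a compile-time name shorthand.
        class WithImport {
            Object open(String p) throws java.io.IOException {
                return new FileInputStream(p);
            }
        }

        class WithoutImport {
            Object open(String p) throws java.io.IOException {
                return new java.io.FileInputStream(p);
            }
        }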

  • ABAP Memory (EXPORT/IMPORT) - Performance Issue

    Performance wise, is it a good idea to use ABAP memory to export and import data between programs?
    Thanks in advance,
    JT

    IMHO, if you EXPORT/IMPORT a couple of variables, you're not going to get any performance issues... so go ahead :-)
    Greetings,
    Blag.

  • Trying boost the performance of RMAN on AIX5L

    Hi all,
    I need to find the best "number" for my test environment:
    I'm looking for opinions based on knowledge of who has experience with RMAN.
    I'm trying to find the best numbers for channels, MAXOPENFILES, and additional parameters.
    I'm open to recommendations for changing operating system or database parameters.
    Environment Info:
    DB:  10.2.0.5
    ASM: 10.2.0.5
    OS:  AIX 5L
    CPUs: 16 
    DISKGROUP: 1 (with 16 ASMDISK)
    Adapter: 4 HBA (4Gbit)
    I want to find out the maximum rate that can be achieved, so for this test I will use only "BACKUP VALIDATE". Once I have the best numbers I'll go to TSM; the backup will be done via LAN-Free.
    Below the results of the tests using BACKUP VALIDATE:
    =====================================================
    Test 1
    Database Parameter
      backup_tape_io_slaves=TRUE
      dbwr_io_slaves=0
      db_writer_processes=6
      tape_asynch_io= TRUE
      disk_asynch_io= TRUE
    RMAN Parameter
      4 Channel
      4 MAXOPENFILES for each Channel
    TIME: 01:08:13
    SIZE IN: 892.11GB
    RATE: 223MB/s
    =====================================================
    =====================================================
    Test 2
    Database Parameter
       backup_tape_io_slaves=TRUE
       dbwr_io_slaves=0
       db_writer_processes=6
       tape_asynch_io= TRUE
       disk_asynch_io= TRUE
    RMAN Parameter
       4 Channel
       3 MAXOPENFILES for each Channel
    TIME: 00:57:42
    SIZE IN: 892.11GB
    RATE: 264MB/s
    =====================================================
    =====================================================
    Test 3 - here are my best numbers
    Database Parameter
      backup_tape_io_slaves=TRUE
      dbwr_io_slaves=0
      db_writer_processes=6
      tape_asynch_io= TRUE
      disk_asynch_io= TRUE
    RMAN Parameter
      4 Channel
      2 MAXOPENFILES for each Channel
    TIME: 00:53:01
    SIZE IN: 892.11GB
    RATE: 287MB/s
    =====================================================
    I ran another test with 2 channels and 3 MAXOPENFILES; performance was like Test 3 - about 54 minutes.
    I could not understand why capping at 2 MAXOPENFILES with 4 channels (= 8 open files) performs better than increasing the parallelism of the datafile reads.
    In my Test 1 I get 784 GB/hour.
    In my Test 3 I get 1009 GB/hour.
    It's a big difference, because this test database (~900GB) is small compared with production.
    What would explain 8 open files (MAXOPENFILES) being the optimal number?
    Thanks,
    Levi Pereira
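
    For readers following along, Test 3 corresponds to an RMAN run block roughly like the following; the channel names are arbitrary and DEVICE TYPE DISK is an assumption for the BACKUP VALIDATE stage of the test:

        # Hypothetical run block matching Test 3: 4 channels, MAXOPENFILES 2 each.
        run {
          allocate channel c1 device type disk maxopenfiles 2;
          allocate channel c2 device type disk maxopenfiles 2;
          allocate channel c3 device type disk maxopenfiles 2;
          allocate channel c4 device type disk maxopenfiles 2;
          backup validate database;
        }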

    Hi Dude,
    Thanks for the input.
    Dude wrote:
    I think the difference could be memory- and process-related, combined with hardware limitations. RMAN reads the datafiles from disk into memory. Doesn't parallelism introduce additional channels, each channel writing to a separate backup set? From what I understand, too many I/O channels can reduce overall performance and cause additional overhead, in particular if the media or disk is the bottleneck. On the other hand, if the CPU were the bottleneck, more CPUs or processes could boost performance. When processes run on different CPUs, caching is less effective. The same applies to your HBA adapter.
    I monitored the whole backup process and saw that the issue was always associated with I/O performance; RMAN was not waiting on the CPU and had no waits on the disks either. I knew I could increase my throughput but did not know how (i.e. there were resources left over, but I did not know how to exploit them).
    >
    As a rule, the number of channels used in carrying out an individual RMAN command should match the number of physical devices accessed in carrying out that command. Striped disk configurations involve hardware multiplexing, so the level of RMAN multiplexing does not need to be as high, and a smaller MAXOPENFILES setting can result in faster performance.
    I can use up to 4 drives (4 channels), LTO-5, each with an acceptable rate of 120MB/s, so I could go up to 480MB/s; I know it's not exact, and there are several additional factors.
    I wish I could be certain of the highest rate I can get using four channels.
    The math remains unknown (x = ?): with 16 ASM disks, 2 HBAs, and 256 datafiles, is 4 channels + 2 MAXOPENFILES + 12 DB_WRITER_PROCESSES the right combination?
    If I knew the math for finding the ideal number of channels and MAXOPENFILES, that would be my great discovery.
    I guess you already checked the Tuning Backup and Recovery chapter of the Database Backup and Recovery Advanced User's Guide, which also shows that you can use the V$BACKUP_SYNC_IO and V$BACKUP_ASYNC_IO views to determine the source of backup or restore bottlenecks and to see detailed progress of backup jobs. http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmtunin.htm#i1006195
    Here are more notes that I used:
    RMAN Performance Tuning Using Buffer Memory Parameters [ID 1072545.1]
    RMAN Performance Tuning Diagnostics [ID 311068.1]
    Advise On How To Improve Rman Performance [ID 579158.1]
    Using V$BACKUP_ASYNC_IO / V$BACKUP_SYNC_IO to Monitor RMAN Performance [ID 237083.1]
    http://levipereira.wordpress.com/2010/11/20/tuning-oracle-rman-jobs/
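
    As a hedged example of what those notes describe, a query of this shape can show where asynchronous backup I/O stalls; check column availability against your release:

        -- Components with many LONG_WAITS relative to IO_COUNT are the
        -- likely bottleneck of the backup job.
        SELECT filename, type, status, io_count, long_waits,
               effective_bytes_per_second
        FROM   v$backup_async_io
        ORDER  BY long_waits DESC;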
    I tried one more change:
    On large workloads, the database writer may become a bottleneck. If it does, then increase the value of DB_WRITER_PROCESSES. As a general rule, do not increase the number of database writer processes above one for each pair of CPUs in the system or partition.
    http://download.oracle.com/docs/cd/B28359_01/server.111/b32009/appa_aix.htm#BEHGGBAJ
    I changed DB_WRITER_PROCESSES from 6 to 12.
    My last RATE: 305MB/s.
    Cheers,
    Levi Pereira

  • Important performance metrics from Tivoli, MOM, etc. into Oracle database

    Hi,
    I'm brand new with all of this stuff. I'm trying to grab performance data from a bunch of performance tools (MOM, Tivoli, Load Runner, custom client tools, etc.) and import it into one big file in an Oracle database (the performance data would have to map). Then, ideally, I'd like to be able to link that data up to Excel somehow and make graphs from it. Is this feasible? Where can I go for information to help me out with this?
    Thanks!

    Everything you've written is wrong. Well, not just wrong but horribly wrong.
    Here are my thoughts in no particular order.
    1. Since you are brand new, you should push yourself back from the keyboard and educate yourself before you proceed. Is there anyone in your organization you can ask for help?
    2. Your entire concept of "import it into one big file" is not just a complete non sequitur, it is essentially impossible. A relational database such as Oracle stores information in logical segments called tables. You need to design a table with columns that correspond to your data and comply with normalization rules (see the sketch after this post).
    http://www.psoug.org/reference/normalization.html
    3. If you are under any governmental restrictions with respect to auditing and compliance, you've got no business ever taking data out of a database and putting it into a toy like Excel. Get a report writer such as one of the tools Oracle sells, or Crystal Reports, or Cognos, etc.
    Where can you go for information? I'd suggest you start with the Oracle DBA who is managing the database. Buy him or her lunch and ask for help. Then go to your manager and ask for training.
    My apologies if this sounds harsh, but sitting here reading this was roughly equivalent to watching two cars at 100 km/h heading toward each other on a one-lane road.
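
    As a sketch of the table-design point in item 2, one plausible shape for a metrics table; the table and column names are invented for illustration:

        -- One row per metric sample, whichever tool produced it.
        CREATE TABLE perf_metric (
          sample_time  TIMESTAMP     NOT NULL,
          source_tool  VARCHAR2(30)  NOT NULL,  -- e.g. 'MOM', 'Tivoli'
          host_name    VARCHAR2(64)  NOT NULL,
          metric_name  VARCHAR2(64)  NOT NULL,  -- e.g. 'cpu_pct'
          metric_value NUMBER        NOT NULL
        );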

  • Does anyone know how to boost mac performance?

    Hey,
    After installing OS X Yosemite on my Mac, I began to notice some decrease in performance. Do you guys have any tips or recommendations on how I could fix this issue, please?
    Thanks!

    Please describe the problem in as much relevant detail as possible. The better your description, the better the chance of a solution.
    For example, if the computer is slow, which specific actions are slow? Is it slow all the time, or only sometimes? Did you change anything else just before it started? Have you seen any alerts or error messages? Have you done anything to try to fix it? Most importantly, do you have a current backup of all data? If the answer to the last question is "no," back up now. Ask if you need guidance. Do nothing else until you have a backup.

  • Import performance with mac

    I have bought a Mac Pro within the last week and so at present I am not fully familiar with all its features.
    I would appreciate any help with the following, when using lightroom v1.1.
    When I import images from disk, the first 20 or so images import very quickly (in less than 5 seconds); the import then virtually stops for 5 minutes or more and proceeds at a snail's pace. If, however, I click on one of the imported images, it seems to wake the Mac up and the remaining images (200+) come in at amazing speed.
    The above happens if I click on an image at any stage of the import process, including as soon as the process slows down.
    My computer is a Mac Pro with 3.0GHz processors and 4GB of RAM.
    Thank you
    Michael Windle

    Michael
    Perhaps not too much help, but as a point of reference I have the following:
    20 inch iMac
    2Ghz Intel Core Duo
    1.5GB memory
    I am running Lightroom v1.1
    Have to say that with the above configuration I am experiencing no import or performance issues
    Roger

  • Slow album art import performance on new iTunes library

    Hi-
    I am experiencing very slow performance when importing album artwork on a recently migrated iTunes library. First some background:
    I am running iTunes 7.5.0.20 on WinXP although the exact problem I am experiencing was happening on 7.4 prior to me upgrading.
    I have just added a new internal HD to my PC for additional storage. I currently have two iTunes libraries on my machine: my original library on the old HD and a new library on the new HD. For the time being, I have an exact copy of all my music files on both drives, which the old and new iTunes libraries reference respectively. I can toggle between the two libraries without a problem by holding down the Shift key when starting iTunes. I am planning to migrate to the new library/drive (and delete all music off the old drive) as soon as I feel confident that everything is working correctly on the new drive.
    As a side note, I like to manage my own file naming/structure, so I do not have the "Keep iTunes Music Folder organized" option checked and I have not consolidated my entire music library under the iTunes folder.
    I am experiencing 2 oddities:
    1. Whenever I manually import/paste album artwork for an album on the new drive, I get 2 dialog/status boxes for each track. One that says, "Writing ID tags..." and the next says "<Track name>". It takes a good 90 seconds to add album art for 10-12 songs.
    On the old drive, if I import the exact same album art for the other copy of the same album, I only get the "<Track name>" dialog box and it completes 4-5 times faster than on the new drive.
    2. Every time I shut down iTunes on the new hard drive/library, I get a "Saving iTunes Library..." dialog, regardless of whether I modified any tags, listened to music, or did anything at all.
    On the old drive, I only rarely get this dialog, and only after I make obvious changes to the library.
    I am really only concerned about issue #1, although my hunch is that these two issues are related. I have attempted to troubleshoot this a ton of different ways to no avail. Any help would be much appreciated.
    thanks!

    Just RIGHT click the song, and near the bottom of the drop-down menu, hit "get album artwork".

  • How to boost Macbook performance

    Hello,
    Many thanks in advance for your help. I have a white Macbook purchased in October 2008. The spec is:
    2.1GHz Intel Core 2 Duo
    2 GB 667 MHz DDR2 SDRAM
    120 Gig HD
    It's been great so far, but later in the year I'm changing job and they use (horrors!) Wintel. I'd like to keep my Mac to use for the majority of the time, and then switch over to Windows when necessary but I'm worried about the potential slowdown. I'll be using Adobe Premiere Pro on Windows for occasional use, as well as other CS applications, but I'd like to switch back to the Mac whenever possible.
    Might it be worth changing the hard drive and boosting the RAM to 4 gig? Is there anything else I should consider to squeeze every inch of juice out of my trusty white Macbook that I can't afford to update for another year or so?
    Thanks again,
    Jonathan.

    The RAM and hard drive upgrades are excellent ideas... with a 120GB drive that I would expect to be at least half full by now, a 320GB drive would help... and doubling the RAM will make a difference too...
    When I first bought my MacBook it came with 2GB of RAM and a 160GB hard drive... I then bought a new Hitachi 200GB 7200 rpm drive and a 4GB RAM kit... waited about a week (to see how the MacBook performed stock)... then installed the upgrades... well worth the investment...

  • DS6 export/import performance

    Hi,
    is it just me, or is LDIF export quite slow in DS6? For example:
    725,000 Entries, DS6:
    offline dsadm export: 44 minutes
    online dsconf export: 44 minutes
    online dsconf export -Q: 53 minutes
    offline db2ldif -r: 47 minutes
    As opposed to that:
    725,000 Entries, DS5.2P4 (same machine, same cache sizes):
    online db2ldif: 13 minutes
    offline db2ldif -r: 10 minutes
    Also import seems to be a bit slower for DS6.
    I really liked the fast LDIF export, hope this gets fixed (if it's not a problem with my configuration).
    Cheers,
    Holger

    I checked iostat; during the DS6 export the disk was roughly as busy as for DS5.2P4, although the DS6 export was much slower. So maybe the disk is the limiting factor. The question is why the disk load is almost the same although DS6 throughput is around 4x lower.
    Sorry, that information was wrong; I checked again. Disk usage was much higher with DS6. Busy percentages during export were roughly:
    DS5.2P4: 40-50%
    DS6: 80-90%
    Then I noticed a configuration difference: the db cache files location was not /tmp for DS6. After I changed that, DS6 performance was much better:
    DS6: 60-70% disk busy
    15 minutes export time
    Still not as fast as DS5.2P4, but acceptable for me.
