TIPS(49) : IMPORT PERFORMANCE TIPS

Product: ORACLE SERVER
Date Written: 2003-06-10
TIPS(49) : IMPORT PERFORMANCE TIPS
==================================
PURPOSE
[Import Performance]
To reduce the long time it takes to run an Oracle import, try applying
the following methods.
Explanation
1) System-level changes
- If you are re-creating the database, increase DB_BLOCK_SIZE.
With a larger block size, fewer I/O cycles are needed.
Since this change is permanent, weigh all of its effects against the
previous value before making it.
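A minimal init.ora sketch, assuming the database is being re-created
(the value shown is illustrative only):

    # init.ora -- DB_BLOCK_SIZE must be set before CREATE DATABASE;
    # it cannot be changed afterwards
    DB_BLOCK_SIZE = 8192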
- Create one large rollback segment and take all other rollback
segments offline.
- Size that rollback segment at about 50% of the largest table to be
imported. Import basically executes INSERT INTO table_name VALUES
(...), and in that case only rowids go into the rollback segment, so a
rollback segment with two extents of equal size is enough; see the
sketch below.
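A hedged sketch of these two steps, assuming hypothetical segment names
(rbs_big, rbs01, rbs02) and illustrative sizes:

    -- create one large rollback segment with two equally sized extents
    CREATE ROLLBACK SEGMENT rbs_big
      TABLESPACE rbs
      STORAGE (INITIAL 500M NEXT 500M MINEXTENTS 2);
    ALTER ROLLBACK SEGMENT rbs_big ONLINE;
    -- take the remaining rollback segments offline during the import
    ALTER ROLLBACK SEGMENT rbs01 OFFLINE;
    ALTER ROLLBACK SEGMENT rbs02 OFFLINE;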
- Keep the database in NOARCHIVELOG mode until the import finishes.
This removes the overhead of generating archive logs.
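A minimal sketch of the switch, run as SYSDBA (switch back to
ARCHIVELOG mode the same way once the import is done):

    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    ALTER DATABASE NOARCHIVELOG;
    ALTER DATABASE OPEN;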
- As with the rollback segment, create large redo log files.
The larger they are, the fewer log switches occur, which helps during
import. Take small redo logs offline. The message 'Thread 1 cannot
allocate new log, sequence 17, Checkpoint not complete' in alert.log
indicates that larger or more redo log files are needed.
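A hedged sketch of adding a large redo log group and dropping a small
one (group numbers, path, and size are illustrative):

    ALTER DATABASE ADD LOGFILE GROUP 4 ('/u03/oradata/redo04.log') SIZE 500M;
    -- drop a small group once it is inactive (check its status in V$LOG)
    ALTER DATABASE DROP LOGFILE GROUP 1;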
- If possible, place the tables, rollback segments, and redo log files
on different disks. This reduces I/O contention.
2) Init.ora parameter changes
- Set LOG_CHECKPOINT_INTERVAL larger than the size of the redo log
file. This number is in OS blocks, which are 512 bytes on Unix.
Increasing it reduces log switch time.
- Increase SORT_AREA_SIZE.
Even if no indexes have been created yet, unique and primary key
constraints still require sorting. How much to increase it depends on
what else runs on the same machine and on how much free memory is
available, but 5-10 times the usual value is typical; make sure the
larger value does not cause the machine to start swapping or paging.
See the init.ora sketch below.
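A hedged init.ora sketch of these two parameters (values are
illustrative; tune them to your redo log size and available memory):

    # init.ora -- import-time settings (example values only)
    LOG_CHECKPOINT_INTERVAL = 1048576   # in OS blocks; larger than the redo log
    SORT_AREA_SIZE = 10485760           # bytes; 5-10x the usual value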
3) Import option changes
- Use the COMMIT=N option.
With this option, the commit happens not after each buffer of data is
inserted but only after all of an object's (table's) data has been
inserted. If the rollback segment is small, this option cannot be used.
- Use a large BUFFER size.
This too depends on other system activity and on the size of the
database. A larger value reduces the number of times the export file
is accessed.
- Use the INDEXES=N option during import.
If indexes are created during the import, SORT_AREA_SIZE must be even
larger. A combined command-line sketch follows.
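A hedged sketch of an imp command line combining these options
(username, file names, and buffer size are placeholders):

    imp system/manager FILE=full_db.dmp FULL=Y \
        COMMIT=N BUFFER=10485760 INDEXES=N LOG=imp_full.log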
Reference Documents
none


Similar Messages

  • Bulk import performance

    Hi Folks,
    We're getting ready to migrate our first teams into our new Vibe environment but I'm wondering if we can do anything to improve the performance of bulk imported documents (drag and drop into the webclient).
    Our environment: Vibe 3.3 on SLES11 SP2 (3 virtual servers)
    Server 1 (Tomcat):
    2x CPU
    8GB RAM (Java cache 6GB)
    Server 2 (Lucene):
    2x CPU
    8GB RAM (Java cache 6GB)
    Server 3 (MySQL):
    2x CPU
    4GB RAM
    When I do a bulk import, files import and everything is fine, but I'm concerned about the time it takes. During my test imports the only process that has any significant utilization is Java on the Tomcat server, but it bounces between 30-90% on a single CPU. If needed I can easily crank these VMs up to 8 CPUs and 64GB RAM. That said, if I can't tax the current servers, there's no need to increase resources.
    Does anyone know a way to improve the import performance? I want to be able to peg all my CPUs at 100% if need be.

    Hi John,
    many thanks for your reply. Import and statistics gathering are done through an application. I have only the following information about what is supposed to happen when we run the statistics gathering process:
    "Optimization - Uses sub-commands for database optimization (DBMS_STATS*). These are important for the Oracle optimizer to generate optimal execution plans.
    Optimize Schema - Runs the DBMS_STATS.GATHER_SCHEMA_STATS(USER, cascade=>true) command. This procedure gathers (not estimates) statistics for all objects in a schema and on the indexes.
    Optimize the schema after you import an industry model from a dump file, and run the Optimize command whenever you have extensive changes to the data, such as imports or major updates.
    Optimize Feature Classes - Runs the DBMS_STATS.GATHER_TABLE_STATS([USER], [Table], cascade=>true) command for the feature class tables. This procedure gathers table, column, and index statistics."
    The application we use allows gathering statistics in two different places. I now realise that we have only used one of the two so far, and if my understanding of the documentation is right, the one we have used does not gather all statistics. With your explanation the observed behaviour makes sense. Next time I will gather statistics using the second functionality to see if it gathers all statistics at once.
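    For reference, the two calls quoted above can also be run directly from SQL*Plus; the table name here is a placeholder:

    EXEC DBMS_STATS.GATHER_SCHEMA_STATS(USER, cascade => TRUE);
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'MY_FEATURE_TABLE', cascade => TRUE);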
    Many thanks again, Rob

  • Boost import performance?

    Hi,
    I'm doing some database imports to 9.2.0 database and it takes 12 hours for the import. OK, it's a large dump file but I am wondering if there is anything I can do to boost the performance of the database during an import.
    Thanks,
    steve.

    Difficult to tell, but there are some general tips:
    - Set a large buffer size.
    - Set commit=Y.
    - Import without indexes (indexes=N) and create the indexes and enable the FK constraints afterwards. You need to have a script ready for that of course.
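    A hedged sketch of that approach with the imp utility (user and file names are placeholders): generate the index DDL first, then import the data without indexes and run the script afterwards:

    # step 1: write the CREATE INDEX statements to a script (no rows are imported)
    imp system/manager FILE=big.dmp FULL=Y INDEXFILE=create_indexes.sql
    # step 2: import the data with a large buffer and no index maintenance
    imp system/manager FILE=big.dmp FULL=Y COMMIT=Y BUFFER=10485760 INDEXES=N
    # step 3: recreate the indexes (and re-enable the FK constraints)
    sqlplus system/manager @create_indexes.sql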

  • Import Performance Analysis report

    Hi,
    I ran the import analysis report in FDM and it tells me the time taken by each step of the import process. My question is why the "Memo Reassign Time" step takes so long. I am not doing anything with memos. Is there a way to avoid the time it takes to reassign memos?
    Thanks, AJ

    Import Performance Analysis
    File Archive Time        1 second
    Memo Reassign Time      53 seconds
    Total Import Process   254 seconds
    Total Map Process       65 seconds
    Almost 1/5th of the total time is taken by the Memo Reassign step for nothing. Is there a way to bring it to 0 seconds?

  • Import: performance question

    Hi, what is the difference between these statements in terms of application performance?
    import java.io.*;
    and
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    Which one is faster for execution?

    Neither. Search the forums or web for the countless answers to the same question.

  • ABAP Memory (EXPORT/IMPORT) - Performance Issue

    Performance wise, is it a good idea to use ABAP memory to export and import data between programs?
    Thanks in advance,
    JT

    IMHO, if you EXPORT/IMPORT a couple of variables, you're not going to get any performance issues... so go ahead -:)
    Greetings,
    Blag.

  • Important performance metrics from Tivoli, MOM, etc. into Oracle database

    Hi,
    I'm brand new with all of this stuff. I'm trying to grab performance data from a bunch of performance tools (MOM, Tivoli, Load Runner, custom client tools, etc.) and import it into one big file in an Oracle database (the performance data would have to map). Then, ideally, I'd like to be able to link that data up to Excel somehow and make graphs from it. Is this feasible? Where can I go for information to help me out with this?
    Thanks!

    Everything you've written is wrong. Well not just wrong but horribly wrong.
    Here are my thoughts in no particular order.
    1. Since you are brand new you should push yourself back from the keyboard and educate yourself before you proceed. Is there anyone in your organization you can ask for help?
    2. Your entire concept around "import it into one big file" is not just a complete non sequitur, it is essentially impossible. A relational database such as Oracle stores information in logical segments called tables. You need to design a table with columns that correspond with your data and comply with normalization rules.
    http://www.psoug.org/reference/normalization.html
    3. Unless you are free of any governmental restrictions with respect to auditing and compliance, you've got no business ever taking data out of a database and putting it into a toy like Excel. Get a report writer such as one of the tools Oracle sells, or Crystal Reports, Cognos, etc.
    Where can you go for information? I'd suggest you start with the Oracle DBA who is managing the database. Buy him or her lunch and ask for help. Then go to your manager and ask for training.
    My apology if this sounds harsh but sitting here reading this was roughly equivalent to watching two cars at 100kph heading toward each other on a one-lane road.

  • Import performance with mac

    I have bought a Mac Pro within the last week and so at present I am not fully familiar with all its features.
    I would appreciate any help with the following, when using lightroom v1.1.
    When I import images from disk the first 20 or so images are imported very quickly (less than 5 seconds); the import then virtually stops for 5 minutes plus and proceeds at a snail's pace. If however I click on one of the imported images it seems to wake the Mac up and the remaining images (200+) are downloaded at an amazing speed.
    The above happens if I click on an image at any stage of the download process including as soon as the process slows down.
    My computer is a Mac Pro, 3.0GHz processors with 4GB of RAM.
    Thank you
    Michael Windle

    Michael
    Perhaps not too much help, but as a point of reference I have the following:
    20 inch iMac
    2GHz Intel Core Duo
    1.5GB memory
    I am running Lightroom v1.1
    Have to say that with the above configuration I am experiencing no import or performance issues
    Roger

  • Slow album art import performance on new iTunes library

    Hi-
    I am experiencing very slow performance when importing album artwork on a recently migrated iTunes library. First some background:
    I am running iTunes 7.5.0.20 on WinXP although the exact problem I am experiencing was happening on 7.4 prior to me upgrading.
    I have just added a new internal HD to my PC for additional storage. I currently have 2 iTunes libraries on my machine: my original library on the old HD and a new library on the new HD. For the time being, I have an exact copy of all my music files on both my old and new HD, which the old and new iTunes libraries reference respectively. I can toggle between the two libraries no problem by using the hold-down-shift-key method when starting iTunes. I am planning on migrating to the new library/drive (and deleting all music off the old drive) as soon as I feel confident that everything is working correctly on the new drive.
    As a side note, I like to manage my own file naming/structure so I do not have the "Keep iTunes Music Folder organized" option checked, and I have not consolidated my entire music library under the iTunes folder.
    I am experiencing 2 oddities:
    1. Whenever I manually import/paste album artwork for an album on the new drive, I get 2 dialog/status boxes for each track. One that says, "Writing ID tags..." and the next says "<Track name>". It takes a good 90 seconds to add album art for 10-12 songs.
    On the old drive, if I import the exact same album art for the other copy of the same album, I only get the "<Track name>" dialog box and it completes 4-5 times faster than on the new drive.
    2. Every time I shut down iTunes on the new hard drive/library I get a "Saving iTunes Library..." dialog regardless of whether I modify any tags, listen to music, or don't do anything at all.
    On the old drive, I only rarely get this dialog, and only after I make obvious changes to the library.
    I am really only concerned about issue #1, although my hunch is that these two issues are related. I have attempted to troubleshoot this a ton of different ways to no avail. Any help would be much appreciated.
    thanks!

    Just RIGHT click the song, and near the bottom of the drop-down menu, hit "get album artwork".

  • DS6 export/import performance

    Hi,
    is it just me, or is LDIF export quite slow in DS6? For example:
    725,000 Entries, DS6:
    offline dsadm export: 44 minutes
    online dsconf export: 44 minutes
    online dsconf export -Q: 53 minutes
    offline db2ldif -r: 47 minutes
    As opposed to that:
    725,000 Entries, DS5.2P4 (same machine, same cache sizes):
    online db2ldif: 13 minutes
    offline db2ldif -r: 10 minutes
    Also import seems to be a bit slower for DS6.
    I really liked the fast LDIF export, hope this gets fixed (if it's not a problem with my configuration).
    Cheers,
    Holger

    I checked iostat; for the DS6 export the disk is roughly as busy as for DS5.2P4, although the export for DS6 is much slower. So maybe the disk is the limiting factor. The question is why the disk load is almost the same, although DS6 throughput is around 4x less? Sorry, this information was wrong, I checked again: disk usage was much higher with DS6. Busy percentages during export were roughly:
    DS5.2P4: 40-50%
    DS6: 80-90%
    Then I realized some configuration difference, db cache files location was not in /tmp for DS6. After I changed that, performance for DS6 was much better:
    DS6: 60-70% disk busy
    15 minutes export time
    Still not as fast as DS5.2P4, but acceptable for me.

  • Important Performance Query

    Hi,
    I have just started using Toplink ORM 10.1.3.3.
    When I log at the finest level, I see some entries which could be big performance issues when I put the app in production. Has someone else noticed the same thing, or am I doing something terribly wrong?
    [TopLink Fine]: 2008.08.24 08:35:47.231 -- SELECT SEQ.NEXTVAL FROM DUAL
    [TopLink Finest]: 2008.08.24 08:35:49.394 -- sequencing preallocation for SEQ: objects: 50, first: 4,602, last: 4,651
    [TopLink Fine]: 2008.08.24 08:35:49.805 -- SELECT SYSTIMESTAMP FROM DUAL
    [TopLink Finest]: 2008.08.24 08:35:52.018 -- Assign return row DatabaseRecord(MY_TABLE.MODIFIED_DATE => 2008-08-25 06:05:50.404)
    The above logs correspond to a single record insert in my application. One is for the sequence and another for the timestamp for optimistic locking. A simple operation like this is taking 2 seconds (just on the server!), and this will be involved in every query (at least the timestamp, because of optimistic locking). So I am a little worried about the performance.
    Has anyone faced similar issue ?
    Regards,
    Ani

    Thanks for the reply.
    As you rightly mentioned, the time taken for pre-allocation is not that big a factor for me, as it is amortized across the batch size.
    I need to have a timestamp field in my tables for auditing purposes, and I am trying to reuse it for optimistic locking.
    In the non-ORM world, the system time is typically obtained using DB functions. But I guess the ORM tool has to make a call to get the current timestamp from the DB and set it in the object before it can persist the object.
    i.e., I was expecting that to insert an entity the following SQL would be formed: INSERT INTO mytable(col1, col2) VALUES (123, sysdate). But instead of using sysdate or something similar, the timestamp is taken from the DB first, set into the object, and then persisted.
    The reason for this behavior could be that the timestamp should be set in the object copy without having to perform a read after save.
    I have not tried to run any performance test on my usage. As mentioned earlier, we have just started development and I was trying to explore the optimal way to use toplink right from the beginning.
    Regards,
    Ani

  • Import performance and archive logs

    Well, we are working on Oracle 10g R2 on Solaris.
    During import (impdp) it's generating a huge volume of archive logs.
    Our database size is in terabytes.
    How can we stop the archive log generation during import, or at least minimize it?

    Hello,
    If you can restart your database then you may set your database in NOARCHIVELOG mode.
    Then, after the import is finished, you'll have to set back your database in ARCHIVELOG mode (you'll need to restart again the database).
    Afterwards, you'll have to Backup your database.
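    For reference, a minimal SQL*Plus sketch of that procedure (run as SYSDBA; exact steps may vary with your environment):

    -- before the import: switch to NOARCHIVELOG
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    ALTER DATABASE NOARCHIVELOG;
    ALTER DATABASE OPEN;
    -- run impdp, then switch back and take a fresh backup
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    ALTER DATABASE ARCHIVELOG;
    ALTER DATABASE OPEN;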
    Otherwise, without changing the archive mode of the database, you can back up and compress your archived logs.
    For instance, with RMAN:
    connect target /
    backup
      as compressed backupset
      device type disk
      tag 'BKP_ARCHIVE'
      archivelog all not backed up
      delete all input;
    exit;
    That way you'll save space on disk.
    Hope this helps.
    Best regards,
    Jean-Valentin

  • Payables Import Performance after Match Option changed to PO

    We changed the match option from 'R' to 'P' and since then payables import has been running longer and taking nearly 100% of the CPU while all other pending processes are erroring out with Signal 11.
    Has anybody here got this kind of issue with Payables Invoice Import?
    Looking for suggestions
    Thank a lot.

    Not seen this kind of issue. Maybe you can try taking a SQL trace for both cases and locate what the problem might be; a hedged sketch follows.
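    A hedged sketch of a session-level SQL trace, for a statement you can reproduce in your own session (the trace file identifier is arbitrary; tkprof formats the resulting trace file):

    ALTER SESSION SET tracefile_identifier = 'pay_import';
    ALTER SESSION SET sql_trace = TRUE;
    -- reproduce the slow statement(s) here, then:
    ALTER SESSION SET sql_trace = FALSE;
    -- format the trace file from user_dump_dest, e.g.:
    -- tkprof ora_1234_pay_import.trc pay_import.txt sort=exeela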

  • Extremely Important Performance Metrics OF XI

    Hi all,
    Please give me some documents on performance metrics regarding file transfer from SAP to XI, message payload of XI, and database integration. The scenario is a transfer of 100MB of data. Please send me all the documents related to it which give a detailed explanation of the metrics; I will appreciate and reward accordingly.
    Performance statistics are the prime objective of this post, where the data size exceeds 100MB.
    Please do reply asap.
    With regards,
    subrato kundu

    Hi,
    I don't think there are any documents available on this.
    But the performance of the File Adapter depends on many parameters:
    1) Memory - sizing of the hardware
    2) The logic used to process the file (e.g. writing a script which will split the file, etc.)
    3) If there are multiple files at a time with a huge volume of data, you can go with the EOIO method
    But we have processed files of up to 10MB without any problems. It took around a couple of seconds.
    Some links-
    Performance of File Adapter
    Hope this helps~
    Regards,
    Moorthy

  • Bad performance with Payables Open Interface Import

    Hello all,
    We have a performance problem with the standard program Payables Open Interface Import, as this process is taking a long time to run.
    Could you recommend a patch or a solution for this issue?
    Our application is 11.5.10.2 and the database is 10g.
    We have now created an SR, but to date we have not been sent the first plan of action.
    Best regards

    There are many possible reasons for this performance problem - search in MOS for "APXIIMPT performance" or "payables open interface import performance".
    Your best bet is to work this through an SR.
    Have you traced the concurrent program to determine where the bottleneck is? Are statistics current?
    HTH
    Srini
