RECORDLENGTH & BUFFER parameters in IMP

In Oracle's IMPORT (imp) utility we can specify two parameters:
1. RECORDLENGTH
2. BUFFER
1. What is the maximum size we can specify for these parameters to speed up the IMPORT process? Also, what do those values depend on?
2. How much time will it save if we specify these parameters? For example, if an import takes 10 hours, and we set these two parameters to high values, how long will it take?
3. Is there any other parameter that will speed up the import process?
Thanks,

First, you have to understand what those parameters mean:
The BUFFER parameter applies ONLY to conventional path Export. It has no effect on a direct path Export. BUFFER specifies the size (in bytes) of the buffer used to fetch rows, so it determines the maximum number of rows in the array fetched by Export. For direct path Export, use the RECORDLENGTH parameter instead to specify the size of the buffer that Export uses for writing to the export file.
The RECORDLENGTH parameter specifies the length (in bytes) of the file record. You can use this parameter to set the size of the Export I/O buffer (the highest value is 64 KB). Changing RECORDLENGTH affects only the amount of data that accumulates before being written to disk; it does not affect the operating system file block size. If you do not set this parameter, it defaults to your platform-dependent value of BUFSIZ (1024 bytes in most cases).
Then read Metalink note 155477.1,
or this one:
http://www.dba-oracle.com/oracle_tips_load_speed.htm
Then test in your test environment to find the best parameters for your particular system.
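As a rough illustration (the connect string, file name, and sizes below are invented for the example, and running exp requires an Oracle client, so the commands are only echoed here), a conventional path export is tuned with BUFFER while a direct path export is tuned with RECORDLENGTH:

```shell
# Hypothetical sizes; tune them in a test environment as suggested above.
BUFFER=1048576          # 1 MB row-fetch buffer, used by conventional path only
RECORDLENGTH=65535      # I/O buffer for direct path, capped at 64 KB

# Conventional path: BUFFER controls the row-array fetch size
echo "exp scott/tiger file=full.dmp buffer=$BUFFER"

# Direct path: RECORDLENGTH controls the write buffer; BUFFER is ignored
echo "exp scott/tiger file=full.dmp direct=y recordlength=$RECORDLENGTH"
```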

Similar Messages

  • BUFFER parameter in export

    We know that for the BUFFER parameter in export:
    buffer_size = rows_in_array * maximum_row_size
    But how/where can I find rows_in_array and maximum_row_size?
    Please help me.

    To get the row size:
    rem Table Sizes section
    column bytes heading 'Size (MB)' format 999,999,999,999
    column tablespace_name heading 'Tablespace Name' format a16
    column segment_name heading 'Table Name' format a35
    column owner heading 'Owner' format a10
    prompt ########################################################################
    prompt
    prompt Table Sizes Report
    prompt
    col tablespace_name format a20;
    col segment_name format a30;
    col owner format a8;
    set linesize 132;
    set pagesize 60;
    break on owner;
    compute sum of bytes on owner;
    select owner, tablespace_name, segment_name, sum(bytes/1024/1024) bytes
    from sys.dba_extents
    where owner = 'BAAN' and segment_type = 'TABLE'
    group by tablespace_name, owner, segment_name
    order by owner, tablespace_name, segment_name;
    http://www.baanboard.com/baanboard/showthread.php?t=13985

  • Change buffer parameter like rsdb/ntab/irdbsize

    Hi guruji,
    I want to change the following buffer parameters in SAP. I tried this from RZ10, but some parameters I can't find for editing.
    The parameters I want to change are:
    abap/buffersize, rsdb/ntab/irdbsize, etc.
    Nainesh Suthar

    Hi Suthar,
    The parameters mentioned below may not be defined in your profiles; in that case they take the default values, which can be checked using RZ11.
    To edit these parameters, go to RZ10, select the profile, and in extended maintenance choose "Create parameter" and specify the parameter with the required values.
    Regards
    Dona

  • Buffer parameter is better than direct=y

    When I tried to export a history table that has 40 million rows & weighs 11GB, direct=y parameter is taking double the time compared to buffer=16000000 parameter.
    exp uname/pwd@testdb file=/oradata/uat2.dmp buffer=16000000 statistics=none tables=(ALL_TXNS_MMDD)
    buffer=16000000      *22 Minutes*
    direct=y      *48 Minutes*
    exporting onto local HDD
    direct=y      *3 minutes*
    buffer=16MB     *18 minutes*
    Version- Oracle 9.2.0.4+
    I always thought direct=y gives better speed, and in all my export scripts I have used only direct=y.
    Does anyone know why buffer is giving better speed?
    Edited by: Sajiv on Jun 11, 2009 1:49 PM

    Sajiv wrote:
    When I tried to export a history table that has 40 million rows & weighs 11GB, direct=y parameter is taking double the time compared to buffer=16000000 parameter.
    exp uname/pwd@testdb file=/oradata/uat2.dmp buffer=16000000 statistics=none tables=(ALL_TXNS_MMDD)
    buffer=16000000      *22 Minutes*
    direct=y      *48 Minutes*
    exporting onto local HDD
    direct=y      *3 minutes*
    buffer=16MB     *18 minutes*
    In your export command the @testdb suggests that you are exporting across a network. Does this mean that when you say "exporting onto a local HDD" you run the export command on the same machine as the database ?
    If so then the difference in time may be related to the way in which SQL*Net can perform data compression for network traffic - and perhaps this can't be done in the same way for direct exports, leaving you with more round-trips when the traffic is across tcp/ip but fewer when using a local connection.
    If you want to find out more about what's going on, you could simply query v$session_event for the session every few seconds to see where the time is going.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "For every expert there is an equal and opposite expert."
    Arthur C. Clarke

  • Buffer Parameter Increase???

    On one of our servers we are facing a high number of swaps in the following buffers:
    Program buffer                   (PXA)
    CUA buffer                       (CUA)
    Screen buffer                    (PRES)
    Export/Import buffer             (EIBUF)
    To overcome this issue we have to increase the values of the corresponding parameters:
    Program buffer                   (PXA)
    abap/buffersize
    CUA buffer                       (CUA)
    rsdb/cua/buffersize
    Screen buffer                    (PRES)
    zcsa/presentation_buffer_area
    Export/Import buffer             (EIBUF)
    rsdb/obj/buffersize
    My question is: how do I analyze by how much I should increase the value of each parameter?
    Useful answers will be rewarded!

    Hello,
    In ST02, double-click on the buffer that shows many swaps, for example "Screen".
    The following tab shows you:
    Size              Allocated        KB
                      Available        KB
                      Used             KB
                      Free             KB
    Directory entries Available
                      Used
                      Free
    This buffer is set up by two parameters (you can display them with the 'Current parameters' button):
    zcsa/presentation_buffer_area
    sap/bufdir_entries
    One parameter is an amount of memory (in bytes); the other represents a maximum number of entries.
    To know whether you need more memory in the buffer or a higher maximum number of entries, read the tab on the previous screen.
    If "Free" under "Size" is close to 0, you need to increase zcsa/presentation_buffer_area.
    If "Free" under "Directory entries" is close to 0, you need to increase sap/bufdir_entries.
    There is no absolute rule that tells you in one go by how much to increase each parameter for each buffer. It is more a question of feel, and you often need to run many tests (change a parameter, stopsap/startsap, change it again, etc.).
    For example:
    zcsa/presentation_buffer_area = 10000000
    sap/bufdir_entries = 10000
    If "Free" under "Size" is at 0 and "Free" under "Directory entries" is at 9000, this means the buffer could hold 9 times more directory entries, but there is no memory left to hold them.
    So, on this system, 1000 buffer entries correspond to 10 MB of memory.
    So if "Objects swapped" is at 2000, just set zcsa/presentation_buffer_area = 10000000 + 2*10000000.
    You don't have to modify sap/bufdir_entries because it can already handle 9 times more entries.
    If you want to hold in memory as many entries as the maximum number of directory entries, you could set zcsa/presentation_buffer_area = 10000000 + 9*10000000.
    Do the same for every buffer with many swaps.
    Hope this helps.
    Regards,
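    The sizing arithmetic in the reply above can be sketched in shell, using the illustrative figures from this thread (10,000,000 bytes, 10,000 directory entries of which 9,000 are free, and 2,000 swapped objects) rather than real measurements:

    ```shell
    # Worked example from the reply above; the figures are illustrative only.
    SIZE_BYTES=10000000              # zcsa/presentation_buffer_area
    DIR_ENTRIES=10000                # sap/bufdir_entries
    FREE_ENTRIES=9000                # free directory entries seen in ST02
    SWAPPED=2000                     # "Objects swapped" counter

    USED_ENTRIES=$((DIR_ENTRIES - FREE_ENTRIES))          # 1000 entries in use
    BYTES_PER_ENTRY=$((SIZE_BYTES / USED_ENTRIES))        # ~10000 bytes per entry
    NEW_SIZE=$((SIZE_BYTES + SWAPPED * BYTES_PER_ENTRY))  # room for the swapped objects

    echo "new zcsa/presentation_buffer_area = $NEW_SIZE"
    ```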

  • IMP-00020: long column too large for column buffer size (22)

    Hi friends,
    I have exported (through Conventional path) a complete schema from Oracle 7 (Sco unix patform).
    Then transferred the export file to a laptop(window platform) from unix server.
    And tried to import this file into Oracle10.2. on windows XP.
    (Database Configuration of Oracle 10g is
    User tablespace 2 GB
    Temp tablespace 30 Mb
    The rollback segment of 15 mb each
    undo tablespace of 200 MB
    SGA 160MB
    PAGA 16MB)
    All the tables imported success fully except 3 tables which are having AROUND 1 million rows each.
    The error message comes during import are as following for these 3 tables
    imp-00020 long column too large for column buffer size (22)
    imp-00020 long column too large for column buffer size(7)
    The main point here is in all the 3 tables there is no long column/timestamp column (only varchar/number columns are there).
    For solving the problem I tried following options
    1.Incresed the buffer size upto 20480000/30720000.
    2.Commit=Y Indexes=N (in this case does not import complete tables).
    3.first export table structures only and then Data.
    4.Created table manually and tried to import the tables.
    but all efforts got failed.
    still getting the same errors.
    Can some one help me on this issue ?
    I will be grateful to all of you.
    Regards,
    Harvinder Singh
    [email protected]
    Edited by: user462250 on Oct 14, 2009 1:57 AM

    Thanks, but this note is for older releases, 7.3 to 8.0...
    In my case both the export and the import were done on an 11.2 database.
    I didn't use Data Pump because we use the same processes for different releases of Oracle, some of which do not support Data Pump. By the way, shouldn't exp/imp work anyway?

  • IMP-00020: long column too large for column buffer size (22)

    IMP error: long column too large for column buffer size
    IMP-00020: long column too large for column buffer size <22>
    imp hr/hr file=/home/oracle/hr.dmp fromuser=hr touser=hr buffer=10000 (I also tried 100000000)
    and still the same error. Please, can anybody help me with details?

    Providing more information/background is probably the wise thing to do:
    versions (database, exp, imp), commands and parameters used (copy & paste), relevant parts of the logs (copy & paste), a describe of the table, etc.
    Some background, like what the purpose is, whether this worked before, what has changed, etc.
    Also you might check the suggested action for the error code per documentation:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14219/impus.htm#sthref10620
    Edited by: orafad on Dec 5, 2010 7:16 PM

  • Export buffer maximum size

    Hi,
    For the parameter buffer used in export what is the maximum size we can give as input.

    BUFFER
    Default: operating system-dependent. See your Oracle operating system-specific documentation to determine the default value for this parameter.
    Specifies the size, in bytes, of the buffer used to fetch rows. As a result, this parameter determines the maximum number of rows in an array fetched by Export. Use the following formula to calculate the buffer size:
    buffer_size = rows_in_array * maximum_row_size
    If you specify zero, the Export utility fetches only one row at a time.
    Tables with columns of type LOBs, LONG, BFILE, REF, ROWID, LOGICAL ROWID, or DATE are fetched one row at a time.
    Note:
    The BUFFER parameter applies only to conventional path Export. It has no effect on a direct path Export. For direct path Exports, use the RECORDLENGTH parameter to specify the size of the buffer that Export uses for writing to the export file.
    Example: Calculating Buffer Size
    This section shows an example of how to calculate buffer size.
    The following table is created:
    CREATE TABLE sample (name varchar(30), weight number);
    The maximum size of the name column is 30, plus 2 bytes for the indicator. The maximum size of the weight column is 22 (the size of the internal representation for Oracle numbers), plus 2 bytes for the indicator.
    Therefore, the maximum row size is 56 (30+2+22+2).
    To perform array operations for 100 rows, a buffer size of 5600 should be specified.
    Ref. Oracle® Database Utilities
    10g Release 2 (10.2)
    Part Number B14215-01
    Ch. 19 Original Export and Import
    ~ Madrid
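    The worked example above can be checked with a little shell arithmetic (the column sizes and row count come straight from the quoted documentation):

    ```shell
    # buffer_size = rows_in_array * maximum_row_size, per the quoted docs
    NAME_MAX=30        # varchar(30) column
    NUMBER_MAX=22      # internal size of an Oracle NUMBER
    INDICATOR=2        # 2 indicator bytes per column

    MAX_ROW_SIZE=$((NAME_MAX + INDICATOR + NUMBER_MAX + INDICATOR))  # 56 bytes
    ROWS_IN_ARRAY=100
    BUFFER_SIZE=$((ROWS_IN_ARRAY * MAX_ROW_SIZE))

    echo "buffer=$BUFFER_SIZE"
    ```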

  • How can we provide F4 help for a parameter field?

    Hi All,
    How can I provide F4 help for a parameter field?
    Regards,
    Amar

    Hi,
    There are different ways of applying a search help; you can use any one of these:
    1) CALL FUNCTION 'F4IF_INT_TABLE_VALUE_REQUEST'
    or
    2)
    You can follow these simple steps for a search help:
    Go to SE11 ==> enter a name after ticking the "Search help" radio button ==> Create ==>
    then tick "Elementary search help" and press Enter ==> then enter a description and the table name in "Selection method" ==> then enter the field on which you want the search help
    in "Search help parameter" ==> tick IMP and EXP ==> write 1 in LPOS and SPOS ==> save and activate ==> double-click on the table name ==> select that field and press the *Search help* tab above ==> then copy.
    3)
    Methods of applying a search help:
    i. Sometimes we use MATCHCODE, applied directly to a parameter or select-option; these are obsolete now.
    ii. Sometimes we CALL FUNCTION 'F4IF_INT_TABLE_VALUE_REQUEST', when we want to offer the contents of an internal table as the search help.
    iii. Sometimes we declare tablename-fieldname in the selection; if that field in that table has a search help or check table, F4 will be available directly.
    Also:
    Fixed values of a domain can also work as a search help.
    I hope this helps you a lot.
    Thanks and regards,
    Rahul Sharma

  • How to exclude a schema in imp & imp error

    Hi folks,
    Is there a way to exclude a schema when using full=y in imp (import)?
    The reason I prefer this is that I'm getting the error below while importing the dump file:
    "ALTER SESSION SET CURRENT_SCHEMA= "PERFSTAT""
    IMP-00003: ORACLE error 1435 encountered
    ORA-01435: user does not exist
    IMP-00000: Import terminated unsuccessfully
    Since I don't have (and don't want) PERFSTAT in my destination database, I want to exclude the PERFSTAT schema during import.
    Also, I prefer not to use FROMUSER/TOUSER, since some public synonyms/objects would be missed.
    Regards
    KSG
    Edited by: KSG on Aug 30, 2011 7:13 PM

    OK. You can perform a schema-level import with imp by using the FROMUSER parameter.
    User (Owner) mode allows you to import all objects that belong to you (such as tables, grants, indexes, and procedures). A privileged user importing in user mode can import all objects in the schemas of a specified set of users. Use the FROMUSER parameter to specify this mode.
    Source: http://download.oracle.com/docs/cd/B10501_01/server.920/a96652/ch02.htm#1005111
    But to your actual question: excluding one or more schemas from a full=y export dump is not possible with imp, because imp has no EXCLUDE parameter; you can, however, use FROMUSER to import only selected schemas from the dump.
    Regards
    Girish Sharma

  • IMP-00017,IMP-00003,ORA-959

    Hello,
    After a successful export of a schema, I got a warning while importing the schema I had just exported.
    I checked the log and saw the following errors:
    IMP-00017: following statement failed with ORACLE error 959:
    "CREATE TABLE "M4DMS_ORA_DOC_CO1" ("ID_DOC" NUMBER(8, 0), "ID_DOC_VERSION" N"
    "UMBER(2, 0), "DOC_CONTENT" BLOB, "ISO_LANGUAGE" VARCHAR2(3), "DOC_FORMAT_TY"
    "PE" VARCHAR2(10), "ID_CHAR_SET" VARCHAR2(20))  PCTFREE 1 PCTUSED 40 INITRAN"
    "S 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1 BUFFER"
    "_POOL DEFAULT) TABLESPACE "M4PRO" LOGGING NOCOMPRESS LOB ("DOC_CONTENT") ST"
    "ORE AS  (TABLESPACE "M4PRO" ENABLE STORAGE IN ROW CHUNK 8192 PCTVERSION 10 "
    "NOCACHE LOGGING  STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1 BUFFER"
    "_POOL DEFAULT))"
    IMP-00003: ORACLE error 959 encountered
    ORA-00959: tablespace 'M4PRO' does not exist
    To make sure the 'guilty' tablespace does indeed exist, I ran the following as SYSDBA:
    SQL> conn /@m4client as sysdba
    Connected.
    SQL> select tablespace_name from dba_tablespaces where tablespace_name like 'M%';
    and the result is as follows:
    TABLESPACE_NAME
    META4
    M4PRO
    SQL>
    This means that the tablespace M4PRO does exist.
    Why do I get that error, and how can I fix it?
    I am using Oracle 10gR2 on both machines (the one I exported from and the one I am trying to import into).
    Thanks in advance.

    Here:
    SQL> select username, default_tablespace from dba_users where username like 'M4%';
    USERNAME   DEFAULT_TABLESPACE
    M4PROD     M4PROD
    Hmm... I can see for myself that the default tablespace of user 'M4PROD' is not 'M4PRO' but *'M4PROD'*. Is that why I am having this problem? If yes, does it mean I should create a tablespace M4PRO in the database where the import is to be done, alter the user M4PROD, and set his default tablespace to M4PRO? If not, how should I proceed?
    Thanks in advance.

  • From dng_image to pixel buffer?

    Various things in the DNG SDK return a dng_image. The stage 1, stage 2, and stage 3 images are RGB images. And if you use dng_render, you also get a dng_image.
    I'd like to get a buffer of pixels (with for example R, G, B values). However, I haven't found any way in the API to get such a buffer.
    For example, dng_image has member functions to get its width and height, but nothing to get the pixel buffer or even the value of a single pixel.
    And I experimented a bit with the "Get" method of dng_image, but if I don't put anything in fArea of the buffer parameter, I don't get anything interesting, while if I define a rectangle there, the whole program corrupts and crashes shortly after the call to dng_image.Get.
    So, basically, my question is: how can I get the pixel buffer from a dng_image?
    This is for displaying it on a computer monitor BTW.
    Unfortunately the dng_validate sample doesn't help me out, because they don't get any pixel values there, they just call an API method to convert the dng_image to a TIFF file, which doesn't help me at all because I need RGB pixels on a screen, not a TIFF file on a disk.
    Thanks.

    I found the answer in the meantime; here it is: it's done with a dng_const_tile_buffer, where you pass the dng_image as a parameter to its constructor, and then you can get the pixels with dng_const_tile_buffer::ConstPixel.

  • Exp buffer

    Hi,
    Version: 10.2.0.1
    What is BUFFER in export?
    Will it increase the performance of exp?
    I read the link below:
    http://docs.oracle.com/cd/B19306_01/server.102/b14215/exp_imp.htm
    We have around 600 tables in the database; how do we calculate a value for this parameter?
    Thanks,

    >
    We have 600 tables in a database. I want a full export; how do we calculate an appropriate value?
    will it increase performance of exp?
    >
    Why are you ignoring the advice in the doc?
    >
    In general, Oracle recommends that you use the new Data Pump Export and Import utilities because they support all Oracle Database 10g features. Original Export and Import do not support all Oracle Database 10g features.
    >
    You should read the doc and follow the advice in it. That includes the advice to use expdp instead of export.
    The BUFFER parameter applies only to conventional path Export.
    Normally you would use the default buffer size; that is, do not specify BUFFER on the command line. The buffer size examples shown in the doc are specific to a table since they depend on the maximum_row_size and the number of rows that you want to fetch.

  • Schema imp successful without warning

    I want to refresh one schema from production to stage. I took a schema export, successful without warnings:
    exp system/**** file=schema.dmp log=schema.log owner=ARIF statistics=none
    I want to do a successful imp on the stage DB:
    imp system/test file=p00ibmtest.dmp show=y ignore=y fromuser=ARIF touser=ARIF CONSTRAINTS=n
    I am getting the error messages below. Can someone tell me the exact parameters for a successful imp without warnings?
    IMP-00019: row rejected due to ORACLE error 1
    IMP-00003: ORACLE error 1 encountered
    ORA-00001: unique constraint (HR.COUNTRY_C_ID_PK) violated
    Column 1 BE
    Column 2 Belgium
    Column 3 1

    Hi,
    > Can someone tell me the exact parameters for a successful imp without warnings?
    This is usually because there are already rows in the target table.
    You can load it with indexes=n, but you will still have dups in your table.
    Hope this helps. . . .
    Don Burleson
    Oracle Press author

  • Data exported with exp on 10g → errors when imported with imp into 11g

    This is a question about migrating data from 10g to 11g.
    ■ exp command run on 10g:
    exp user/password file=f:\backup_db\当日.dmp owner=user log=f:\backup\当日.log buffer=2000000
    ■ imp command run on 11g:
    imp user/password file=e:\DB\Copy.dmp buffer=2000000 log=e:\db\imp.log fromuser=user
    When I imp the 10g (10.2, 32-bit) export file above into 11g (11.2.0.1.0 - 64bit Production), errors such as ORA-01659, ORA-20000, and ORA-01658 occur.
    However, after I changed the CREATE USER command, those imp errors went away, but a new problem appeared: some of the stored procedures are not imported.
    What could be the cause?
    Is it simply not possible to "import data exported with exp on 10g into 11g with imp"?
    (As far as I can tell from the manual, it should not be a problem.)

    This is not an answer, but are the versions of the exp command and the imp command the same?
    A while back I exported data with the Oracle 8.1.7 exp command and tried to load it with the Oracle 8.1.6 imp command, and the import failed with an error.
    It turned out the dmp file header differed slightly between the two versions, which is what caused the problem, so if you are in a similar situation, I would try matching the versions.
