Importing tables generating too many archive logs

I am using 9i/10g.
Please suggest how to import big tables without generating archive logs.
Can something like direct=y be set?
Thanks in advance,
Aj

Pre-create the objects before the import and alter them to NOLOGGING; that way the amount of redo generated in archive log mode will be lower. (Note that DIRECT=Y is an export-only parameter; imp has no direct-path option.)
Use a large BUFFER and COMMIT=N, which will speed up the import process itself.
If possible, switch the database to NOARCHIVELOG mode and proceed with the import.
In the same way, create the indexes with NOLOGGING and PARALLEL to speed up that step.
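A concrete sketch of the pre-create approach (the table name and columns are illustrative, not from the thread):

CREATE TABLE big_table (
  id      NUMBER,
  payload VARCHAR2(100)
) NOLOGGING;

-- then import into the pre-created table; IGNORE=Y skips the "table already exists"
-- error, and a large BUFFER with COMMIT=N keeps each table to a single transaction:
-- imp scott/tiger FILE=exp.dmp TABLES=big_table IGNORE=Y BUFFER=10485760 COMMIT=N

Bear in mind that NOLOGGING only reduces redo for direct-path operations; conventional-path imp inserts still generate redo, so of the suggestions above it is NOARCHIVELOG mode that really stops archive generation.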

Similar Messages

  • One process generating too many archive logs

    Hi all,
    DB: Oracle 9.2
    OS: AIX
    A friend asked me how to find out which process is generating too many archive logs.
    Below are some of the parameters:
    log_checkpoint_interval integer 30000
    log_checkpoint_timeout integer 1800
    log_checkpoints_to_alert boolean FALSE
    log_buffer  1048576
    log_archive_max_processes  2
    fast_start_mttr_target 0
    Please suggest.

    hi,
    You have killed the session, but are the insert operations actually unwanted? If you don't want any inserts from that user, grant privileges accordingly, because the import may be needed by some application. Otherwise, add or enlarge the redo log files.
    To find which process is responsible, check the holding session in DBA_WAITERS and see whether that session is running any INSERT or UPDATE statement (capture the query). You can also check the archiver processes from the OS with ps -ef | grep arch; a more direct query is sketched below.
    Instead of killing the session, you can make some other change, or ask the user to run the import at a time of lower database load.
    regards,
    Deepak
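    For the "find which process" part, a sketch against the standard 9i dynamic views; the 'redo size' statistic is cumulative per session since logon:

    SELECT s.sid, s.serial#, s.username, s.program, st.value AS redo_bytes
    FROM   v$session s, v$sesstat st, v$statname sn
    WHERE  st.sid = s.sid
    AND    st.statistic# = sn.statistic#
    AND    sn.name = 'redo size'
    ORDER  BY st.value DESC;

    The top rows point at the sessions, and via v$process the OS processes, responsible for most of the redo.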

  • Importing is taking too much time (2 DAYS)

    Dear All,
    I'm importing the support packages below together in one queue on SAP Solution Manager 4.0.
    SAPKB70015             Basis Support Package 15 for 7.00
    SAPKA70015             ABA Support Package 15 for 7.00
    SAPKITL426             ST 400: Patch 0016, CRT for SAPKB70015
    SAPKIBIIP6             BI_CONT 703: patch 0006
    SAPKIBIIP7             BI_CONT 703: patch 0007
    SAPKIBIIP8             BI_CONT 703: patch 0008
    SAPK-40010INCPRXRPM    CPRXRPM 400: patch 0010
    SAPK-40011INCPRXRPM    CPRXRPM 400: patch 0011
    SAPK-40012INCPRXRPM    CPRXRPM 400: patch 0012
    SAPKIPYJ7E             PI_BASIS 2005_1_700: patch 0014
    SAPKW70016             BW Support Package 16 for 7.00
    The import is taking too much time (2 days) in the main import phase. Looking at SLOG I see many rows like "I am waiting 1 sec" and "6 sec". I also checked transaction STMS: all support packages imported except one, SAPKW70016.
    Please advise.
    Best Regards,
    HE

    Hello Mohan,
    The DBTABLOG table does get large; the best option is to switch off logging. If that's not possible, increase the frequency of your delete job. One more alternative to explore is the archiving object BC_DBLOGS: you could archive old records (in accordance with your customer's data retention policies) to reduce the size of the table.
    Also have a look at the following notes, which advise on how to improve the performance of your delete job:
    Note 531923 - Audit Trail: Indexes on table DBTABLOG
    Note 579980 - Table logs: Performance during access to DBTABLOG
    Regards,
    Siddhesh
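    To gauge whether the delete or archiving job is keeping up, the table's current size can be checked from the database side (a sketch for an Oracle-based system; run as a DBA user):

    SELECT owner, segment_name, ROUND(bytes/1024/1024) AS size_mb, extents
    FROM   dba_segments
    WHERE  segment_name = 'DBTABLOG';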

  • REDO scanning goes in infinite loop and Too much ARCHIVE LOG

    After we restart the database, the capture process goes into an infinite loop scanning the redo, and too many archive logs are generated.
    No idea what's going on; otherwise the basic Streams functionality works fine.

    What's your DB version?

  • Finding which table generated how much redo

    I'm wondering whether any view (other than LogMiner) can show how much redo a particular table generated during a given period.
    We have a particular application and want to know which tables generate most of the redo.

    from asktom
    see
    http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:477221446020
    and
    http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:366018048216
    Nologging only affects very specific operations. For example, from the
    ALTER INDEX syntax in the SQL Reference:
    LOGGING|NOLOGGING
    LOGGING|NOLOGGING specifies that subsequent Direct Loader (SQL*Loader) and
    direct-load INSERT operations against a nonpartitioned index, a range or hash
    index partition, or all partitions or subpartitions of a composite-partitioned
    index will be logged (LOGGING) or not logged (NOLOGGING) in the redo log file.
    In NOLOGGING mode, data is modified with minimal logging (to mark new extents
    invalid and to record dictionary changes). When applied during media recovery,
    the extent invalidation records mark a range of blocks as logically corrupt,
    because the redo data is not logged. Therefore, if you cannot afford to lose
    this index, you must take a backup after the operation in NOLOGGING mode.
    If the database is run in ARCHIVELOG mode, media recovery from a backup taken
    before an operation in LOGGING mode will re-create the index. However, media
    recovery from a backup taken before an operation in NOLOGGING mode will not
    re-create the index.
    An index segment can have logging attributes different from those of the base
    table and different from those of other index segments for the same base table.
    That also explains why the truncate above generated redo -- The statement
    "minimal logging (to mark new extents invalid and to record dictionary
    changes)." explains where that redo comes from. The blocks that were truncated
    were not logged HOWEVER the changes to the data dictionary itself were.
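    Short of LogMiner, 9i Release 2 and later offer a rough per-segment answer: the 'db block changes' statistic in v$segment_statistics is a proxy for redo volume (block changes, not bytes of redo):

    SELECT owner, object_name, object_type, value AS block_changes
    FROM   v$segment_statistics
    WHERE  statistic_name = 'db block changes'
    ORDER  BY value DESC;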

  • Create procedure is generating too many archive logs

    Hi
    The following procedure was run on one of our databases, and it hung because too many archive logs were being generated.
    What would be the answer? The DB must remain in ARCHIVELOG mode.
    I understand the NOLOGGING concept, but as far as I know it applies to creating tables, views, indexes and tablespaces; this script creates a procedure.
    CREATE OR REPLACE PROCEDURE APPS.Dfc_Payroll_Dw_Prc(Errbuf OUT VARCHAR2, Retcode OUT NUMBER
    ,P_GRE NUMBER
    ,P_SDATE VARCHAR2
    ,P_EDATE VARCHAR2
    ,P_ssn VARCHAR2
    ) IS
    CURSOR MainCsr IS
    SELECT DISTINCT
    PPF.NATIONAL_IDENTIFIER SSN
    ,ppf.full_name FULL_NAME
    ,ppa.effective_date Pay_date
    ,ppa.DATE_EARNED period_end
    ,pet.ELEMENT_NAME
    ,SUM(TO_NUMBER(prv.result_value)) VALOR
    ,PET.ELEMENT_INFORMATION_CATEGORY
    ,PET.CLASSIFICATION_ID
    ,PET.ELEMENT_INFORMATION1
    ,pet.ELEMENT_TYPE_ID
    ,paa.tax_unit_id
    ,PAf.ASSIGNMENT_ID ASSG_ID
    ,paf.ORGANIZATION_ID
    FROM
    pay_element_classifications pec
    , pay_element_types_f pet
    , pay_input_values_f piv
    , pay_run_result_values prv
    , pay_run_results prr
    , pay_assignment_actions paa
    , pay_payroll_actions ppa
    , APPS.pay_all_payrolls_f pap
    ,Per_Assignments_f paf
    ,per_people_f ppf
    WHERE
    ppa.effective_date BETWEEN TO_DATE(p_sdate) AND TO_DATE(p_edate)
    AND ppa.payroll_id = pap.payroll_id
    AND paa.tax_unit_id = NVL(p_GRE, paa.tax_unit_id)
    AND ppa.payroll_action_id = paa.payroll_action_id
    AND paa.action_status = 'C'
    AND ppa.action_type IN ('Q', 'R', 'V', 'B', 'I')
    AND ppa.action_status = 'C'
    --AND PEC.CLASSIFICATION_NAME IN ('Earnings','Alien/Expat Earnings','Supplemental Earnings','Imputed Earnings','Non-payroll Payments')
    AND paa.assignment_action_id = prr.assignment_action_id
    AND prr.run_result_id = prv.run_result_id
    AND prv.input_value_id = piv.input_value_id
    AND piv.name = 'Pay Value'
    AND piv.element_type_id = pet.element_type_id
    AND pet.element_type_id = prr.element_type_id
    AND pet.classification_id = pec.classification_id
    AND pec.non_payments_flag = 'N'
    AND prv.result_value <> '0'
    --AND( PET.ELEMENT_INFORMATION_CATEGORY LIKE '%EARNINGS'
    -- OR PET.element_type_id IN (1425, 1428, 1438, 1441, 1444, 1443) )
    AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN PET.EFFECTIVE_START_DATE AND PET.EFFECTIVE_END_DATE
    AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN PIV.EFFECTIVE_START_DATE AND PIV.EFFECTIVE_END_DATE --dcc
    AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN Pap.EFFECTIVE_START_DATE AND Pap.EFFECTIVE_END_DATE --dcc
    AND paf.ASSIGNMENT_ID = paa.ASSIGNMENT_ID
    AND ppf.NATIONAL_IDENTIFIER = NVL(p_ssn, ppf.NATIONAL_IDENTIFIER)
    ------------------------------------------------------------------TO get emp.
    AND ppf.person_id = paf.person_id
    AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN ppf.EFFECTIVE_START_DATE AND ppf.EFFECTIVE_END_DATE
    ------------------------------------------------------------------TO get emp. ASSIGNMENT
    --AND paf.assignment_status_type_id NOT IN (7,3)
    AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN paf.effective_start_date AND paf.effective_end_date
    GROUP BY PPF.NATIONAL_IDENTIFIER
    ,ppf.full_name
    ,ppa.effective_date
    ,ppa.DATE_EARNED
    ,pet.ELEMENT_NAME
    ,PET.ELEMENT_INFORMATION_CATEGORY
    ,PET.CLASSIFICATION_ID
    ,PET.ELEMENT_INFORMATION1
    ,pet.ELEMENT_TYPE_ID
    ,paa.tax_unit_id
    ,PAF.ASSIGNMENT_ID
    ,paf.ORGANIZATION_ID
    BEGIN
    DELETE cust.DFC_PAYROLL_DW
    WHERE PAY_DATE BETWEEN TO_DATE(p_sdate) AND TO_DATE(p_edate)
    AND tax_unit_id = NVL(p_GRE, tax_unit_id)
    AND ssn = NVL(p_ssn, ssn);
    COMMIT;
    FOR V_REC IN MainCsr LOOP
    INSERT INTO cust.DFC_PAYROLL_DW(SSN, FULL_NAME, PAY_DATE, PERIOD_END, ELEMENT_NAME, ELEMENT_INFORMATION_CATEGORY, CLASSIFICATION_ID, ELEMENT_INFORMATION1, VALOR, TAX_UNIT_ID, ASSG_ID,ELEMENT_TYPE_ID,ORGANIZATION_ID)
    VALUES(V_REC.SSN,V_REC.FULL_NAME,v_rec.PAY_DATE,V_REC.PERIOD_END,V_REC.ELEMENT_NAME,V_REC.ELEMENT_INFORMATION_CATEGORY, V_REC.CLASSIFICATION_ID, V_REC.ELEMENT_INFORMATION1, V_REC.VALOR,V_REC.TAX_UNIT_ID,V_REC.ASSG_ID, v_rec.ELEMENT_TYPE_ID, v_rec.ORGANIZATION_ID);
    COMMIT;
    END LOOP;
    END ;
    So, how can I help our developer so that she can run it again without generating a ton of logs?
    Thanks
    Oracle 9.2.0.5
    AIX 5.2

    The amount of redo generated is a direct function of how much data is changing. If you insert 'x' rows, you are going to generate 'y' MB of redo. If your procedure is destined to insert 1000 rows, then it is destined to create a certain amount of redo. Period.
    I would question the performance of the procedure shown: a cursor loop with a commit after every row is a drag on performance, but that doesn't change the fact that 'x' inserts will always generate 'y' redo.
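    A sketch of the set-based rewrite implied above. The commented placeholder stands for the full MainCsr query from the procedure; the APPEND hint gives a direct-path insert, which generates minimal redo only if the target table is set to NOLOGGING (and the table must be backed up afterwards):

    ALTER TABLE cust.DFC_PAYROLL_DW NOLOGGING;

    INSERT /*+ APPEND */ INTO cust.DFC_PAYROLL_DW
      (SSN, FULL_NAME, PAY_DATE, PERIOD_END, ELEMENT_NAME, ELEMENT_INFORMATION_CATEGORY,
       CLASSIFICATION_ID, ELEMENT_INFORMATION1, VALOR, TAX_UNIT_ID, ASSG_ID,
       ELEMENT_TYPE_ID, ORGANIZATION_ID)
    SELECT ssn, full_name, pay_date, period_end, element_name, element_information_category,
           classification_id, element_information1, valor, tax_unit_id, assg_id,
           element_type_id, organization_id
    FROM   ( /* the MainCsr SELECT from Dfc_Payroll_Dw_Prc goes here, unchanged */ );
    COMMIT;

    One statement, one commit: far less overhead than a commit per row, though as the reply says it cannot change the total amount of data (and hence redo, unless direct path plus NOLOGGING applies).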

  • ACCTIT table Taking too much time

    Hi,
    In SE16 on table ACCTIT, I entered the G/L account number and executed; in production it is taking too much time to show the result.
    Thank you

    Hi,
    Here iam sending details of technical settings.
    Name                 ACCTIT                          Transparent Table
    Short text            Compressed Data from FI/CO Document
    Last Change        SAP              10.02.2005
    Status                 Active           Saved
    Data class         APPL1   Transaction data, transparent tables
    Size category      4       Data records expected: 24,000 to 89,000
    Thank you

  • Oracle table taking too much space

    Hi
    I am using Oracle 9i
    I created one table using a script. The table is blank, but it takes around 1 GB of space.
    When I create another table from that table, as
    create table t_name as select * from original_table;
    the space taken by the new table is very low (about 0.6 MB). Only the index is not created.
    Why is my table taking so much space?
    How do I resolve it?
    What parameter should I check?
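    An empty segment that large usually means the creation script carried an oversized INITIAL/MINEXTENTS storage clause; the CTAS copy used the tablespace defaults instead, hence 0.6 MB. A sketch to confirm and fix (table name as in the post):

    SELECT segment_name, ROUND(bytes/1024/1024) AS size_mb, extents, initial_extent
    FROM   user_segments
    WHERE  segment_name = 'ORIGINAL_TABLE';

    -- rebuild the empty segment with a modest first extent
    ALTER TABLE original_table MOVE STORAGE (INITIAL 64K);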

    Hi Pavan,
    I am trying to take a backup of an Oracle DB using an RMAN script with OSB (Oracle Secure Backup), and I am facing the issue given below. I have created a storage selector and my devices are configured in OSB.
    My RMAN script is:
    RMAN> run {
    2> allocate channel oem_sbt_backup type 'sbt_tape' format '%U';
    3> backup as BACKUPSET current controlfile tag '11202008104814';
    4> restore controlfile validate from tag '11202008104814';
    5> release channel oem_sbt_backup;
    6> }
    The error message is:
    allocated channel: oem_sbt_backup
    channel oem_sbt_backup: sid=143 devtype=SBT_TAPE
    channel oem_sbt_backup: Oracle Secure Backup
    Starting backup at 20-NOV-08
    channel oem_sbt_backup: starting full datafile backupset
    channel oem_sbt_backup: specifying datafile(s) in backupset
    including current control file in backupset
    channel oem_sbt_backup: starting piece 1 at 20-NOV-08
    released channel: oem_sbt_backup
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03009: failure of backup command on oem_sbt_backup channel at 11/20/2008 22:50:05
    ORA-19506: failed to create sequential file, name="07k075kr_1_1", parms=""
    ORA-27028: skgfqcre: sbtbackup returned error
    ORA-19511: Error received from media manager layer, error text:
    sbt__rpc_cat_query: Query for piece 07k075kr_1_1 failed.
    (Oracle Secure Backup error: 'no preauth config found for OS user (OB tools) oracle').
    Need your help.
    Thanks in advance

  • Truncate table taking too much time

    hi guys,
    thanks in advance.
    Oracle version: 9.2.0.5.0
    OS version: SunOS Ganesha1 5.9 Generic_122300-05 sun4u sparc SUNW,Sun-Fire-V890
    Application: PeopleSoft, version 8.4
    Everything had been running fine until last week. Whenever a process such as billing or d_dairy is executed, it selects some temporary tables and starts truncating them; the truncates take 5 to 8 minutes even when the table has 0 rows.
    If more than one user executes a process (even different processes), they end up blocked on each other.
    regs,
    deep..

  • How can I stop printing something I do not want to print? What I wanted to print is not important and uses too much paper.

    Somewhere on the computer there should be a "stop printing document" option.

    Once you send something to your printer from any application, you would have to control the printing from the printer driver and control panel. I suggest you learn how that works. It is not a Thunderbird function.

  • Import taking too much time

    Hi all
    I'm quite new to database administration. My problem is that I'm importing a dump file, but one of the tables is taking too much time to import.
    Description:
    1. The export was taken from the source database, which is Oracle 8i with character set WE8ISO8859P1.
    2. I am importing into 10g with character set UTF8; the national character set is the same.
    3. The dump file is about 1.5 GB.
    4. I got errors like "value too large for column", so in the UTF8 target DB I converted all columns from VARCHAR2 BYTE to CHAR semantics.
    5. During the import some tables import very fast, but one particular table gets very slow.
    Please help me. Thanks in advance.

    Hello,
    For point 4, this is typically due to the character set conversion: you export data in WE8ISO8859P1 and import into UTF8. In WE8ISO8859P1, characters are encoded in 1 byte, so 1 CHAR = 1 BYTE. In UTF8 (Unicode), characters are encoded in up to 4 bytes, so 1 CHAR > 1 BYTE.
    For this reason you have to increase the length of your CHAR and VARCHAR2 columns, or add the CHAR option (the default is BYTE) in the column datatype definition of the tables, for instance VARCHAR2(100 CHAR). The NLS_LENGTH_SEMANTICS parameter may also be used, but it is not handled very well by export/import.
    So, I suggest this:
    1. Set NLS_LENGTH_SEMANTICS=CHAR on your target database and restart the database.
    2. Create all your tables (empty) from a script on the target database, without the indexes and constraints.
    3. Import the data into the tables.
    4. Import the indexes and constraints.
    You'll find more information in the following MOS note:
    Examples and limits of BYTE and CHAR semantics usage (NLS_LENGTH_SEMANTICS) [ID 144808.1]
    For point 5, it may be due to the conversion problem you are experiencing; it may also be due to some special datatype like LONG.
    Also, a question: why did you choose UTF8 for your target database and not AL32UTF8? AL32UTF8 is recommended for Unicode use.
    Hope this helps.
    Best regards,
    Jean-Valentin
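    A sketch of the two length semantics side by side (illustrative names):

    CREATE TABLE t_byte (name VARCHAR2(100));        -- BYTE semantics (default): can overflow in UTF8
    CREATE TABLE t_char (name VARCHAR2(100 CHAR));   -- CHAR semantics: 100 characters, whatever their byte length

    -- or make CHAR the default before pre-creating the tables (step 1 above), then restart:
    ALTER SYSTEM SET NLS_LENGTH_SEMANTICS = CHAR SCOPE = SPFILE;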

  • Client import taking too much time

    hi all,
    I am importing a client. It shows the copy complete, 19,803 of 19,803 tables, but for the last four hours its status has been "Processing".
    scc3
    Target Client           650
    Copy Type               Client Import Post-Proc
    Profile                 SAP_CUST
    Status                  Processing...
    User                    SAP*
    Start on                24.05.2009 / 15:08:03
    Last Entry on           24.05.2009 / 15:36:25
    Current Action:         Post Processing
    -  Last Exit Program    RGBCFL01
    Transport Requests
    - Client-Specific       PRDKT00004
    - Texts                 PRDKX00004
    Statistics for this Run
    - No. of Tables             19803 of     19803
    - Deleted Lines                 7
    - Copied Lines                  0
    sm50
    1 DIA 542           Running Yes             SAPLTHFB 650 SAP*     
    7 BGD 4172   Running Yes 11479  RGTBGD23 650 SAP* Sequential Read     D010INC
    sm66
    Server  No. Type PID Status  Reason Sem Start Error CPU Time   User Report   Action          Table
    prdsap_PRD_00  7  BTC 4172 Running   Yes    11711 SAP* RGTBGD23 Sequential Read D010INC
    Please guide me: why is it taking so much time when it has already finished most of the work?
    best regard
    Khan

    The import is in post-processing. It digs through all the documents and adapts them to the new client. Most of the tables in the application area have a MANDT (= client) field which needs to be changed; depending on the size of the client this can take a huge amount of time.
    You can try to improve the speed by updating the table statistics for table D010INC, as sketched below.
    Markus
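    Assuming an Oracle-based system, the statistics update might look like this; the ABAP schema name SAPR3 is an assumption (newer installations use SAP<SID>):

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SAPR3', tabname => 'D010INC', cascade => TRUE);
    END;
    /

    -- the SAP-standard route is BR*Tools from the OS, along the lines of:
    -- brconnect -u / -c -f stats -t D010INC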

  • R3load taking too much time when table REPOSRC is loaded

    Hello,
    I am installing SAP ECC 6.0 SR2 on Sun Solaris 10 with DB2 V9.1. 17 of the 19 jobs in the ABAP Import phase have completed, but the SAPSSEXC package is taking too much time: it has been running for around 10 hours with no error reported. So I cancelled the SAP installation and started the step manually with this OS command:
    /sapmnt/<SID>/exe/R3load -dbcodepage 4102 -i /<instdir>/SAPSSEXC.cmd -l /<instdir>/SAPSSEXC.log -stop_on_error -merge_bck
    It has also been running for around 9 hours. I don't know why this is happening or when it will complete.
    Can you tell me what to check to make this job faster, or how to resolve this issue?
    I have checked SAP Notes 454368 and 455195.
    If I change any DB2 parameter, I have to restart the DB2 database. What should I do now?
    Please help me ASAP.
    Thanks
    Gautam Poddar

    Hello,
    running the R3load import step manually, you might try adding the option
    -loadprocedure fast LOAD
    Pay attention to write LOAD in capital letters!
    This invokes the LOAD API whenever possible and should save some time on "normal" tables without LOB columns.
    For your table REPOSRC, which has a BLOB column, the LOAD API will not be used, so I am sorry this will not help in your particular case.
    (Thanks to Frank-Martin Haas for the hint.)
    Kind regards
    Waldemar Gaida
    Edited by: Waldemar Gaida on Jan 10, 2008 8:26 AM
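    Putting the original command and the suggested option together, the manual invocation would look like this (paths exactly as in the post above):

    /sapmnt/<SID>/exe/R3load -dbcodepage 4102 -i /<instdir>/SAPSSEXC.cmd -l /<instdir>/SAPSSEXC.log -stop_on_error -merge_bck -loadprocedure fast LOAD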

  • Archive Delete job taking too much time - STXH Sequential Read

    Hello,
    We have been running archive sessions in our production system over the last couple of months, using SARA and selecting the appropriate variants for the WRITE, DELETE and STORAGE options.
    Currently we use the archiving object FI_DOCUMNT. The write job finishes in its normal time (5 hrs, based on the selection criteria), and over the last 3 months the delete job has always completed within 1 to 2 hrs after it.
    But in the last few days the delete job has been taking far longer to complete (around 8 to 10 hrs). Monitoring the system, I found that a sequential read on table STXH is taking a very long time, and this seems to be the cause.
    Could you please suggest a solution so that the job runs as fast as before.
    Thanks for your time
    Shyl

    Hi Juan,
    After the statistics run the performance is quite good; the job now finishes as expected.
    Thanks, problem solved.
    Shyl
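    For the record, the "statistics run" is the same remedy as in the client-import thread above, applied to STXH (Oracle-based system and SAPR3 schema assumed):

    EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SAPR3', tabname => 'STXH', cascade => TRUE);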

  • Import taking too much time on Oracle 7

    Hi Guys
    I am trying to import a table of around 9,600,000 rows on Oracle 7, and it is taking too much time. Any suggestions on how I can speed up the process?
    Thanks in advance
    Khurana

    Ok.
    Note that it is "_disable_logging", not "disable_logging", but I don't have an Oracle 7 database at hand to confirm that it works.
    It's been a long time since I used Oracle 7. Any reason why you have not upgraded? Import should be much faster with 11g.
    For further tuning you would need to look at OS and DB performance to find the bottlenecks, e.g. run bstat/estat.
    Other things to look at are disk performance, increasing db_block_buffers, increasing the number of database writers, etc.
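    A sketch of the init.ora knobs mentioned above. The values are illustrative only and the parameter names are from the Oracle 7 era; _disable_logging is undocumented and dangerous (an instance crash while it is set leaves the database unrecoverable), so it is only for throwaway loads bracketed by full backups:

    db_block_buffers = 20000   # larger buffer cache for the import
    db_writers = 4             # more database writer processes (Oracle 7 parameter name)
    # _disable_logging = true  # undocumented; see the warning above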

Maybe you are looking for

  • Safari kept on crashing in Mac OS X 10.9.1

    Guys, this is a very strange case. Safari keeps crashing in 10.9.1; it crashes every time. Please help me! Thank you in advance. I am attaching the bug report. Process:                           Safari  [12171]   Path:

  • Installing Flex 4 SDK on Eclipse IDE

    Hi, I want to use Eclipse IDE with Flex 4 SDK but am unable to do so. I have looked at various blog posts & threads dealing with the same; most of them are obsolete (in the sense they were posted way back in 2005/07 with very old Eclipse releases) or I am not g

  • Refund for an iPod

    I have a 30 GB iPod video that doesn't work, and I was tired of that iPod, so yesterday I went to buy an iPod classic; now I have 2 iPods. I was wondering, can I give my iPod to Apple and get a refund or something like that??

  • CS6 error after installation

    Getting the following error when launching applications. I have uninstalled and reinstalled, both logged on to the network and off the network. No joy.

  • Listening to radio on new 3G iPhone

    Hi I purchased the original iPhone in April and was disappointed that it was not possible to listen to FM radio. I found that I could stream some stations such as Capital FM on musicradio.com over the Edge network, however, it would cut out immediate