Direct load with Force Logging enabled

Hi All,
I have a Data Guard environment with one physical standby database. I have to load a big flat file with close to 50 million records using sqlldr. Can I use direct=true in sqlldr with force logging enabled on the primary, or is there any other way to load this file?
FYI - Oracle 11gR2
Thanks

Thanks for that information.
On its own, direct=true would mean what you think: the load is done with minimal redo, so the loaded data would not be replicated. However, it makes sense, and your test proves, that force logging overrides this.
Based on that, you might as well go with the conventional path.
Does this help?
It makes sense that Data Guard (or rather, force logging) would override this; otherwise it would not be doing its job.
From the Oracle doc A96524-01 (Database Concepts), Chapter 19, Direct-Path INSERT:
If the database or tablespace is in FORCE LOGGING mode, then direct path INSERT always logs, regardless of the logging or nologging setting.
I know this is an older doc, but it's unlikely this has changed.
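
As a quick sketch of what that looks like in practice (the credentials, control file and data file names below are made up, not from the original post), you can confirm force logging on the primary and then run the direct path load; redo is still generated and shipped to the standby:
-- check whether the database is already in force logging mode
SELECT force_logging FROM v$database;

-- enable it if it is not
ALTER DATABASE FORCE LOGGING;

-- then run SQL*Loader from the shell with direct path, for example:
--   sqlldr userid=scott/tiger control=load_big_file.ctl data=big_file.dat direct=true
-- because the database is in FORCE LOGGING mode, the direct path load still
-- generates full redo, so the rows are applied on the physical standby as usual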

Similar Messages

  • Direct Path Loads Vs FORCED LOGGING

    Hello everyone,
    I just have the following doubt.
    Suppose we have
    one Oracle primary database (running in FORCE LOGGING mode) and
    one Oracle standby database (Oracle Data Guard).
    My doubt is: if we use a SQL*Loader direct path load to load data into a table in the primary database, will it generate redo and ship it to the standby database?

    Thanks Anil,
    but I have a doubt again.
    How does a direct path load generate redo if it is working on just data blocks,
    i.e. it gets the blocks, adjusts the high water mark and puts in the records?
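
    As a side note, one way to observe this redo for yourself (table names here are made up; run the test in its own session so v$mystat reflects only your work):
    -- note the session's redo statistic before the load
    SELECT sn.name, ms.value
    FROM   v$mystat ms JOIN v$statname sn ON sn.statistic# = ms.statistic#
    WHERE  sn.name = 'redo size';

    -- direct path insert into a hypothetical target table
    INSERT /*+ APPEND */ INTO target_tab SELECT * FROM source_tab;
    COMMIT;

    -- re-run the first query: with FORCE LOGGING on, the delta is roughly the
    -- size of the loaded data; without it (and with a NOLOGGING table) it is tiny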

  • EclipseLink Direct Map with Joining mapping issue

    Hi EclipseLink Team,
    We encountered an issue when mapping an attribute as a Direct Map with Join Fetch enabled in EclipseLink Workbench 1.1.1 (Build v20090430-r4097).
    Basically, we have the following data model:
    SESSION
    SESS_NO (PK),
    PARAM_SESS
    SESS_NO (FK) (PK),
    PARAM_NAME (PK),
    PARAM_VALUE
    Then we have Session.java persistent entity associated with SESSION table. This class contains Map attribute sessionParameters which we map as Direct Map with PARAM_SESS.PARAM_NAME as key and PARAM_SESS.PARAM_VALUE as value (referenced by PARAM_SESS.SESS_NO = SESSION.SESS_NO).
    The described mapping works fine without Joining enabled (both for Lazy and Eager Loading).
    But we have cases where we want to get the parameters for a number of sessions, and with Joining disabled a large number of SELECT queries are performed (for both Lazy and Eager Loading), which is bad for performance.
    So we have chosen the Eager Loading and set the Join Fetch option to Outer. Then we have got the following:
    I see in the log the only SQL query performed: SELECT DISTINCT t0.SESS_NO, …, t1.PARAM_VALUE, t1.PARAM_NAME FROM {oj SESSION t0 LEFT OUTER JOIN PARAM_SESS t1 ON (t1.SESS_NO = t0.SESS_NO)}. This is pretty good for us. This query returns exactly what we expect when executing on the database. But the Map attribute for every session is populated incorrectly: Maps are empty (not corresponding to available relational data).
    Could you please let us know if this is a bug, or kind of known issue or we made something wrong? Some hints and proposals would be very helpful and appreciated.
    I should mention that for now we want to map all of this for read only purpose.
    Thanks.
    Best Regards,
    Alexey Elisawetski

    James,
    I've tried both 1.2 release and 2.0 (v20091121-r5847) but received the same result - empty Map.
    Moreover, for both versions the following line was absent from the deployed XML file:
    <direct-key-field table="PARAM_SESS" name="PARAM_NAME" xsi:type="column"/>
    Therefore, on application initialization I have got an exception: org.eclipse.persistence.exceptions.DescriptorException with message This descriptor contains a mapping with a DirectMapMapping and no key field set.
    So I was forced to add the line manually.
    This seems buggy to me...
    Regards,
    Alexey

  • Oracle 11G Direct SQL Load with Data Guard

    Does SQL Loader in direct mode always bypass the writing of redo logs ?
    If the database has force logging on, will SQL Loader in direct mode bypass the writing of redo logs ?
    Is there a way to run SQL Loader in direct mode that will create redo logs that can be applied by Data Guard to the backup database ?

    846797 wrote:
    Does SQL Loader in direct mode always bypass the writing of redo logs ?
    If the database has force logging on, will SQL Loader in direct mode bypass the writing of redo logs ?
    Is there a way to run SQL Loader in direct mode that will create redo logs that can be applied by Data Guard to the backup database ?
    In the case of a Data Guard setup, redo logs will always be generated.

  • A process by the name of avgcmgr is loading the CPUs by up to 100 percent. At least 5 of them have appeared on the activity monitor. I've removed them with forced quit but they return! How do I permanently get rid of them? CPUA temp is now 194F.

    A process by the name of avgcmgr is loading the CPUs by up to 100 percent. At least five of them have appeared in Activity Monitor. I've removed them with Force Quit but they return! How do I permanently get rid of them? Three of these processes have now driven the CPU A temp to 194°F.
    Ray

    Hi Ray-
    I'm having the exact same problem and have searched the web for hours looking for a solution (multiple spawned avgcmgr processes that consume cpu).
    Did you find any solution?
    Thanks so much!
    Steve

  • When I publish projects with AICC reporting enabled the SWF "breaks" and does not load.

    When I publish projects with AICC reporting enabled the SWF "breaks" and does not load. If I change the publish method to SCORM the SWF works, and if I turn off reporting the SWF also works.

    OK.  That makes sense now.  Captivate 6 changed over to Rustici's LMS drivers.  Cp7 must have made even further changes that mean you must have an LMS present if you turn on reporting.
    I would suggest you just turn off reporting while you are testing other aspects of the course functionality during development, and only turn on reporting when you are ready to upload to your LMS.

  • HT1695 I am trying to connect to my uni wifi, and when I click on the wifi it directs me to a login page. However, the two boxes to enter the username and password do not appear. Can anyone help me with this?

    I am trying to connect to my uni wifi, and when I click on the wifi it directs me to a login page. However, the two boxes to enter the username and password do not appear. Can anyone help me with this?

    You didn't say what you have already tried, but if you haven't already done so, power-cycle your router (unplug it for 15 seconds then plug it back in), then on your phone go to Settings>General>Reset and tap Reset Network Settings, then try joining your wifi again.

  • Direct load vs nologging

    Hi all,
    I have an INSERT ... SELECT statement and I am using the APPEND hint to force a direct load.
    Is it the same as the statement without the hint on a NOLOGGING table?
    Thanks in advance.
    Jaggy

    If you use INSERT with the APPEND hint, the amount of redo generated will be minimal. Of course, I am not counting the redo generated for indexes: any index on the table affected by the load will produce redo, which takes time. If you want to speed up the process, make the indexes on the affected table unusable and rebuild them after the load is done.
    If you use INSERT without the hint on a NOLOGGING table, it is totally different, because that is not a direct load method: redo will be generated for the table and all indexes, and this option will be much slower.
    If you have NOLOGGING enabled, redo is avoided only if you use a direct path load (the APPEND hint or the import utility).
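
    A minimal sketch of that index approach (table and index names are made up; note that unusable unique indexes cannot simply be skipped, so this applies to non-unique indexes):
    ALTER INDEX big_tab_ix UNUSABLE;
    ALTER SESSION SET skip_unusable_indexes = TRUE;

    INSERT /*+ APPEND */ INTO big_tab SELECT * FROM staging_tab;
    COMMIT;

    -- rebuild after the load; NOLOGGING here means take a backup afterwards
    ALTER INDEX big_tab_ix REBUILD NOLOGGING;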

  • Direct Load Insert problem

    Hello,
    we want to make bulk inserts from C++ faster by avoiding redo generation. I plan to use a direct load insert. The problem is that a direct load only works with INSERT /*+ APPEND */ INTO ... SELECT ..., not with the VALUES clause. To work around this, I want to bulk insert the rows into a temporary table and then, at the end of the transaction, issue an INSERT /*+ APPEND */ INTO target_tab SELECT * FROM temp_tab. After truncating temp_tab, I commit the transaction.
    I know redo will still be generated for the undo, but inserts generate only very little undo. By the way, can I avoid generating undo for temp tables?
    What do you think about this? Any opinion? Will it really be faster? Do you have any other idea?
    Thank you,
    Balazs

    There is no hint called "NOLOGGING", so that part of /*+ APPEND NOLOGGING */ will be ignored in your case. You may get better performance if you do it this way:
    1) Create temp_tab with the required columns matching the target table, using the CTAS method:
    CREATE TABLE temp_tab
    TABLESPACE <TS_name>
    STORAGE (..................)
    PCTFREE 0
    NOLOGGING
    AS
    SELECT /*+ PARALLEL(a,4) */ col1, col2, col3, .....
    FROM source_table a;
    Comments: the above operation generates no redo and no undo. I used PCTFREE 0 to pack as many rows as possible into each block, which helps when querying temp_tab in the next step.
    2) Load the data into the target table using a parallel direct load instead of a serial direct load ( /*+ APPEND */ ):
    SQL> ALTER SESSION ENABLE PARALLEL DML;
    SQL> INSERT /*+ PARALLEL(a,4) */ INTO target_table a
         SELECT /*+ PARALLEL(b,4) */ * FROM temp_tab b;
    SQL> COMMIT;
    To give you some benchmarks: a 14 GB table was loaded with the above method within 9 minutes on a busy 12-CPU Unix box.
    Note: check the other effects of parallel direct load in the 8i Concepts guide.
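
    On the original question about avoiding undo for temp tables: a global temporary table does not log its data in the redo, and the undo generated for it is comparatively small (from 12c onward, temp_undo_enabled can move even that undo into the temporary tablespace). A sketch with made-up column names:
    CREATE GLOBAL TEMPORARY TABLE temp_tab (col1 NUMBER, col2 VARCHAR2(30), col3 DATE)
    ON COMMIT PRESERVE ROWS;

    -- the bulk insert from the C++ client goes into temp_tab, then:
    INSERT /*+ APPEND */ INTO target_tab SELECT * FROM temp_tab;
    COMMIT;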

  • My Mac froze in an application so I shut it down by powering off with the button; now when I try to turn it on I get a grey screen with the Apple logo and the spinning timer, but it doesn't get past this. Please help!

    I shut down my Mac by holding in the power button after my iMac froze, and now when I try to turn it back on all I get is the grey screen with the Apple logo and the spinning timer, and it doesn't get any further. I have tried the diagnostic test but nothing was found.

    Take each of these steps that you haven't already tried. Stop when the problem is resolved.
    To restart an unresponsive computer, press and hold the power button for a few seconds until the power shuts off, then release, wait a few more seconds, and press it again briefly.
    Step 1
    The first step in dealing with a startup failure is to secure the data. If you want to preserve the contents of the startup drive, and you don't already have at least one current backup, you must try to back up now, before you do anything else. It may or may not be possible. If you don't care about the data that has changed since the last backup, you can skip this step.
    There are several ways to back up a Mac that is unable to start. You need an external hard drive to hold the backup data.
    a. Start up from the Recovery partition, or from a local Time Machine backup volume (option key at startup.) When the OS X Utilities screen appears, launch Disk Utility and follow the instructions in this support article, under “Instructions for backing up to an external hard disk via Disk Utility.” The article refers to starting up from a DVD, but the procedure in Recovery mode is the same. You don't need a DVD if you're running OS X 10.7 or later.
    b. If Step 1a fails because of disk errors, and no other Mac is available, then you may be able to salvage some of your files by copying them in the Finder. If you already have an external drive with OS X installed, start up from it. Otherwise, if you have Internet access, follow the instructions on this page to prepare the external drive and install OS X on it. You'll use the Recovery installer, rather than downloading it from the App Store.
    c. If you have access to a working Mac, and both it and the non-working Mac have FireWire or Thunderbolt ports, start the non-working Mac in target disk mode. Use the working Mac to copy the data to another drive. This technique won't work with USB, Ethernet, Wi-Fi, or Bluetooth.
    d. If the internal drive of the non-working Mac is user-replaceable, remove it and mount it in an external enclosure or drive dock. Use another Mac to copy the data.
    Step 2
    If the startup process stops at a blank gray screen with no Apple logo or spinning "daisy wheel," then the startup volume may be full. If you had previously seen warnings of low disk space, this is almost certainly the case. You might be able to start up in safe mode even though you can't start up normally. Otherwise, start up from an external drive, or else use the technique in Step 1b, 1c, or 1d to mount the internal drive and delete some files. According to Apple documentation, you need at least 9 GB of available space on the startup volume (as shown in the Finder Info window) for normal operation.
    Step 3
    Sometimes a startup failure can be resolved by resetting the NVRAM.
    Step 4
    If a desktop Mac hangs at a plain gray screen with a movable cursor, the keyboard may not be recognized. Press and hold the button on the side of an Apple wireless keyboard to make it discoverable. If need be, replace or recharge the batteries. If you're using a USB keyboard connected to a hub, connect it to a built-in port.
    Step 5
    If there's a built-in optical drive, a disc may be stuck in it. Follow these instructions to eject it.
    Step 6
    Press and hold the power button until the power shuts off. Disconnect all wired peripherals except those needed to start up, and remove all aftermarket expansion cards. Use a different keyboard and/or mouse, if those devices are wired. If you can start up now, one of the devices you disconnected, or a combination of them, is causing the problem. Finding out which one is a process of elimination.
    Step 7
    If you've started from an external storage device, make sure that the internal startup volume is selected in the Startup Disk pane of System Preferences.
    Start up in safe mode. Note: If FileVault is enabled in OS X 10.9 or earlier, or if a firmware password is set, or if the startup volume is a software RAID, you can’t do this. Post for further instructions.
    Safe mode is much slower to start and run than normal, and some things won’t work at all, including wireless networking on certain Macs.
    The login screen appears even if you usually log in automatically. You must know the login password in order to log in. If you’ve forgotten the password, you will need to reset it before you begin.
    When you start up in safe mode, it's normal to see a dark gray progress bar on a light gray background. If the progress bar gets stuck for more than a few minutes, or if the system shuts down automatically while the progress bar is displayed, the startup volume is corrupt and the drive is probably malfunctioning. In that case, go to Step 11. If you ever have another problem with the drive, replace it immediately.
    If you can start and log in in safe mode, empty the Trash, and then open the Finder Info window on the startup volume ("Macintosh HD," unless you gave it a different name.) Check that you have at least 9 GB of available space, as shown in the window. If you don't, copy as many files as necessary to another volume (not another folder on the same volume) and delete the originals. Deletion isn't complete until you empty the Trash again. Do this until the available space is more than 9 GB. Then restart as usual (i.e., not in safe mode.)
    If the startup process hangs again, the problem is likely caused by a third-party system modification that you installed. Post for further instructions.
    Step 8
    Launch Disk Utility in Recovery mode (see Step 1.) Select the startup volume, then run Repair Disk. If any problems are found, repeat until clear. If Disk Utility reports that the volume can't be repaired, the drive has malfunctioned and should be replaced. You might choose to tolerate one such malfunction in the life of the drive. In that case, erase the volume and restore from a backup. If the same thing ever happens again, replace the drive immediately.
    This is one of the rare situations in which you should also run Repair Permissions, ignoring the false warnings it may produce. Look for the line "Permissions repair complete" at the end of the output. Then restart as usual.
    Step 9
    If the startup device is an aftermarket SSD, it may need a firmware update and/or a forced "garbage collection." Instructions for doing this with a Crucial-branded SSD were posted here. Some of those instructions may apply to other brands of SSD, but you should check with the vendor's tech support.  
    Step 10
    Reinstall the OS. If the Mac was upgraded from an older version of OS X, you’ll need the Apple ID and password you used to upgrade.
    Step 11
    Do as in Step 9, but this time erase the startup volume in Disk Utility before installing. The system should automatically restart into the Setup Assistant. Follow the prompts to transfer the data from a Time Machine or other backup.
    Step 12
    This step applies only to models that have a logic-board ("PRAM") battery: all Mac Pro's and some others (not current models.) Both desktop and portable Macs used to have such a battery. The logic-board battery, if there is one, is separate from the main battery of a portable. A dead logic-board battery can cause a startup failure. Typically the failure will be preceded by loss of the settings for the startup disk and system clock. See the user manual for replacement instructions. You may have to take the machine to a service provider to have the battery replaced.
    Step 13
    If you get this far, you're probably dealing with a hardware fault. Make a "Genius" appointment at an Apple Store, or go to another authorized service provider.

  • Materialized View with No Logging Option;;; THX

    Hi all,
    What's the diffrence between a :
    Materialized View with No Logging Option
    Materialized View with Logging Option
    thank you

    LOGGING|NOLOGGING specifies that subsequent Direct Loader (SQL*Loader) and direct-load INSERT operations against a nonpartitioned index, a range or hash index partition, or all partitions or subpartitions of a composite-partitioned index will be logged (LOGGING) or not logged (NOLOGGING) in the redo log file.
    In NOLOGGING mode, data is modified with minimal logging (to mark new extents invalid and to record dictionary changes). When applied during media recovery, the extent invalidation records mark a range of blocks as logically corrupt, because the redo data is not logged. Therefore, if you cannot afford to lose this index, you must take a backup after the operation in NOLOGGING mode.
    If the database is run in ARCHIVELOG mode, media recovery from a backup taken before an operation in LOGGING mode will re-create the index. However, media recovery from a backup taken before an operation in NOLOGGING mode will not re-create the index.
    An index segment can have logging attributes different from those of the base table and different from those of other index segments for the same base table.
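
    To make that concrete, a minimal sketch of the NOLOGGING variant (the MV and column names are made up; only direct-path operations such as the initial build or a non-atomic complete refresh are affected by the LOGGING/NOLOGGING choice):
    -- NOLOGGING variant: the build generates minimal redo, so back it up afterwards
    CREATE MATERIALIZED VIEW mv_sales_sum
      NOLOGGING
      BUILD IMMEDIATE
    AS
    SELECT prod_id, SUM(amount) AS total_amount FROM sales GROUP BY prod_id;

    -- a non-atomic complete refresh (truncate + direct-path insert) also honours NOLOGGING
    EXEC DBMS_MVIEW.REFRESH('MV_SALES_SUM', method => 'C', atomic_refresh => FALSE);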

  • Force logging in archivelog mode

    Hi !!!
    What happens if I have "force_logging" set to true when the database is in archivelog mode?
    Thanks.

    rarain wrote:
    Hi Juamd,
    You should only use this option when it is really required, because it will forcibly generate redo for all NOLOGGING operations; that means you may find more archive logs, and you will need to set up more space for the archives.
    Normally we use this option when we need to replicate data changes from one database to another, as in a standby configuration, GoldenGate replication, etc. I would suggest you monitor the amount of redo generated after enabling this option and estimate the archive and backup space accordingly.
    Thanks...
    Ah, I don't agree with that at all. You can compromise your recovery if you happen to want to restore to a point in time when a NOLOGGING operation was going on. Fine if it's an index, but if it happens to be on a table...
    (Yes, been there, done that - with a non-Production database, thankfully)
    This is one of the 'must haves', IMO, for Production - set it at the database-level and it overrides any tablespace or object setting.
    Archive logs are generated for a reason. If you have a particular operation that really does benefit massively from NOLOGGING and is something you are sure you can simply re-run/re-create yourself, fine. If not, by default, you really should use FORCE LOGGING.
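
    For reference, a short sketch of enabling it and keeping an eye on the extra redo it produces:
    -- database-level setting; overrides any NOLOGGING at tablespace or object level
    ALTER DATABASE FORCE LOGGING;
    SELECT force_logging, log_mode FROM v$database;

    -- watch how much archived redo is produced per day afterwards
    SELECT TRUNC(first_time) AS day,
           ROUND(SUM(blocks * block_size) / 1024 / 1024) AS mb
    FROM   v$archived_log
    GROUP  BY TRUNC(first_time)
    ORDER  BY 1;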

  • Why is TB re-syncing/re-load with GMail all messages each time I open TB? Slows down TB by much each time.

    Why is TB re-syncing/re-load with GMail all messages each time I open TB?
    It does slow things down quite heavily, e.g. it takes almost 5 minutes just to write a simple short email, since it "freezes" every 3-4 characters.
    All other programs on the computer work fine in the meantime, though. Including word processor. So it is really a TB issue.
    Or is it a Gmail issue, forcing a download every time a connection is created?
    I am on Win8.1 on an ASUS AiO.
    Thanks

    IMAP mail accounts see a remote view of the server.
    You subscribe to see server Folders.
    Headers are downloaded, and when you select an email to view, it is downloaded to a temporary cache to enable quicker access; but when you exit Thunderbird that temp cache is lost, so you cannot read the emails if offline. These emails are not stored on your computer.
    If you synchronise subscribed folders, then a copy of the folder is downloaded to Thunderbird Profile folder and stored in an mbox file. This means you can read them even if offline.
    However, the folder is set up to synchronise with the server when changes occur, so that your copy and the folder on the server are updated to be the same.
    I would recommend that you do not allow folders to get too big, as they would take longer to sync.
    more info on IMAP:
    https://support.mozilla.org/en-US/kb/imap-synchronization
    Can you see an 'All Mail' folder?
    This is Gmail's copy of all your emails, so it is the Gmail archive of your emails.
    This folder can get huge. As it only shows you what you already have in other folders, it also doubles the size of your Profile and can take a while to download. It is recommended that you do not subscribe to see this folder.
    Read the info at the link below, under the 'All Mail' section:
    http://kb.mozillazine.org/Gmail

  • Direct load insert  vs direct path insert vs nologging

    Hello. I am trying to load data from table A (only 4 columns) to table B. Table B is new. I have 25 million records in table A. I have been debating between direct load insert, direct path insert and NOLOGGING. What is the difference between the three methods of data load? What is the best approach?

    Hello,
    The fastest way to move data from table A to table B is to use a direct path insert with the NOLOGGING option turned on for table B. This produces minimal logging, so in a DR scenario you might not be able to recover the data in table B. A direct path insert is the equivalent of loading data from a flat file using the direct load method. Generally the conventional method has six phases to move your data from the source (table, flat file) to the target (table), but direct path/load cuts this down to three, and if in addition you use the PARALLEL hint on the select and the insert you may get a faster result.
    INSERT /*+ APPEND */ INTO TABLE_B SELECT * FROM TABLE_A;
    Regards
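
    A minimal sketch of that approach, assuming table_b already exists with matching columns (remember to back up table_b afterwards, since a NOLOGGING load cannot be recovered from the redo):
    ALTER TABLE table_b NOLOGGING;
    ALTER SESSION ENABLE PARALLEL DML;

    INSERT /*+ APPEND PARALLEL(b,4) */ INTO table_b b
    SELECT /*+ PARALLEL(a,4) */ * FROM table_a a;
    COMMIT;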

  • Direct load insert internals

    Hi,
    I usually post threads regarding the internals of Oracle; today the topic is direct load inserts.
    Here is the main concept:
    in a direct load insert Oracle bypasses the buffer cache and inserts directly into the data file. By bypassing the buffer cache Oracle avoids redo log generation and other overhead. Oracle builds a block in memory and inserts it into the data file above the high water mark.
    Here are the questions:
    Where does Oracle build the data block in memory - is it in the server process PGA?
    Does Oracle build more than one data block at a time?
    Does increasing the size of the PGA memory have any effect on direct load inserts?
    If Oracle bypasses the buffer cache, are rollback segments still generated? What is the mechanism under direct load inserts?
    regards
    Nick

    Nick Naughty wrote:
    from the above document it seems that Oracle streams and array size play an important role in performance, but that document is about SQL*Loader; what about this statement:
    insert /*+ append */ into dest_table
    select * from source_table;
    here we cannot set the stream or array size mentioned in the document - kindly elaborate with respect to the above statement.
    Those settings are for SQL*Loader only; they do not apply to the INSERT /*+ APPEND */ ... SELECT above.
    Perhaps you could explain what you are trying to do, or add some context for the problem. This would aid with responses.
    Regards,
    Greg Rahn
    http://structureddata.org
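
    On the undo question: a direct load still generates a small amount of undo (for the extent and dictionary changes rather than for every row), and you can measure it per session like this:
    -- check session undo and redo statistics before and after the insert
    SELECT sn.name, ms.value
    FROM   v$mystat ms JOIN v$statname sn ON sn.statistic# = ms.statistic#
    WHERE  sn.name IN ('undo change vector size', 'redo size');

    INSERT /*+ APPEND */ INTO dest_table SELECT * FROM source_table;
    COMMIT;
    -- re-run the query above: the 'undo change vector size' delta stays small
    -- compared with a conventional insert of the same rows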
