Tablespace, table logging and nologging

Hi, I have a tablespace which is in LOGGING mode.
I have a table in that tablespace which I want to place in NOLOGGING mode.
I do that using the following command:
ALTER TABLE SCOTT.XYZ NOLOGGING;
I want to know whether the above statement will work even though my tablespace is in LOGGING mode, or whether I should place my tablespace in NOLOGGING mode as well?
Thanks in advance

The tablespace and table logging mode (specified during CREATE TABLESPACE and CREATE TABLE) DOES NOT affect the logging of ordinary inserts, deletes and updates.
Inserts, deletes and updates will be logged regardless, with the exception of direct-path operations such as "INSERT /*+ APPEND */ ...".
The following sentence from the blog should be re-worded because it is ambiguous:
"Actually the real meaning of NOLOGGING is that whatever operations are performed on the object with the NOLOGGING option will NOT be recorded in the logfiles."
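If it helps to verify this, here is a minimal sketch comparing redo generation for a conventional insert and a direct-path insert into a NOLOGGING table. SCOTT.XYZ is the table from the question; scott.xyz_source is a hypothetical source table, and the session needs access to v$mystat/v$statname. Exact redo figures will vary by system.

-- Minimal sketch (scott.xyz_source is a hypothetical source table).
ALTER TABLE scott.xyz NOLOGGING;

-- Redo generated so far in this session.
SELECT b.value AS redo_size
  FROM v$statname a, v$mystat b
 WHERE a.statistic# = b.statistic#
   AND a.name = 'redo size';

-- Conventional insert: fully logged even though the table is NOLOGGING.
INSERT INTO scott.xyz SELECT * FROM scott.xyz_source;
COMMIT;

-- Direct-path insert: the only kind of DML that can skip redo for the table data.
INSERT /*+ APPEND */ INTO scott.xyz SELECT * FROM scott.xyz_source;
COMMIT;

-- Re-run the redo size query after each insert and compare the deltas.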

Similar Messages

  • Recording Differences in Long Text Changes via Table Logging and AUT10

    Hello,
    I am trying to record changes to Long Text generated in a DMS DIR. (This field is the Language Dependent Description field.) We have set the system profile parameter rec/client. We have enabled table logging for STXH & STXL in SE11. When we make a change to the Long Text, we try to display the results in AUT10. A change is indicated, but when displayed, the text seems identical.
    When displaying the logging status in RSTBHIST, the Active box for STXL is not checked despite the fact that we have set the "Log changes" box in SE11.
    Note #573291 talks about installing a PH-ELR add-on. Is that necessary if we are already running SAP_APPL 6.04?
    Also, is table logging the best approach to capture these changes to Long Text? Is there any way we could use Change Docs?
    The field width of DBTABLOG-LOGDATA = 16000, if that matters.
    Thanks for any responses.

    Tobias,
    "You created a dynpro based transaction which allows you to change the content of a text. The text to change is not part of the dynpro screen, but you want to set it into an ITS field where you can access it using Business HTML. Is this correct?"
    This is correct.
    Below is the portion of HTML template code in which I'm defining the web screen pushbuttons, providing the pushbutton names, and linking them to the function codes defined in the R/3 transaction.
    This is my first experience with ITS, so if I've missed something glaringly obvious, I apologize in advance.
    Thanks!
    <table>
      <tr>
        <td><input type="submit" name="~OKCode=ANAL" value="Analysis Long Text"></td>
      </tr>
      <tr>
        <td><input type="submit" name="~OKCode=ROOT" value="Root Cause Long Text"></td>
      </tr>
      <tr>
        <td><input type="submit" name="~OKCode=RMCA" value="Remedial Corrective Action Plan Long Text"></td>
      </tr>
      <tr>
        <td><input type="submit" name="~OKCode=CRAP" value="Corrective Action Plan Long Text"></td>
      </tr>
      <tr>
        <td><input type="submit" name="~OKCode=PRAP" value="Preventative Action Plan Long Text"></td>
      </tr>
      <tr>
        <td><input type="submit" name="~OKCode=COMM" value="Comments"></td>
      </tr>
    </table>

  • External table log and bad files

    Hi, I have defined the following access parameters for my external table and set REJECT LIMIT to 0:
    ACCESS PARAMETERS
    RECORDS DELIMITED by NEWLINE
    BADFILE BAD_DIR:'CARDS.bad'
    LOGFILE LOG_DIR:'CARDS.log'
    NODISCARDFILE
    FIELDS TERMINATED BY ","
    OPTIONALLY ENCLOSED BY '"'
    READSIZE 1048576
    LRTRIM
    MISSING FIELD VALUES ARE NULL
    REJECT ROWS WITH ALL NULL FIELDS
    I want to know whether, every time I query the external table, my log file and bad file will be overwritten or appended to. I want them to be overwritten.

    Hi,
    Yeah, well, be in for a surprise here:
    "The external tables feature is a complement to existing SQL*Loader functionality."
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/utility.htm#sthref1800
    http://www.oracle.com/pls/db102/search?remark=quick_search&word=external+table&tab_id=&format=ranked
    Have you actually read/tried anything?
    You can download an XE DB for free and play with that.

  • Should we use LOGGING or NOLOGGING for table, lob segment, and indexes?

    We have a DML performance issue with 'enq: CF - contention' on tables that also include LOB segments. In this case, should we define LOGGING on the tables, LOB segments, and/or indexes?
    Based on the MetaLink note <Performance Degradation as a Result of 'enq: CF - contention' [ID 1072417.1]>, it looks like we need to turn on logging for at least the table and LOB segment. What about the indexes?
    Thanks!

    >
    These tables that have NOLOGGING are likely from the application team. Yes, we need to switch the tables and LOB segments from NOLOGGING to LOGGING. What about the indexes?
    >
    Indexes only get modified when the underlying table is modified. When you need recovery you don't want to do things that can interfere with Oracle's ability to perform its normal recovery. For indexes there will never be loss of data that can't be recovered by rebuilding the index.
    But use of NOLOGGING means that NO RECOVERY is possible. For production objects you should ALWAYS use LOGGING. And even for those use cases where use of NOLOGGING is appropriate for a table (loading a large amount of data into a staging table) the indexes are typically dropped (or at least disabled) before the load and then rebuilt afterward. When they are rebuilt NOLOGGING is used during the rebuild. Normal index operations will be logged anyway so for these 'offline' staging tables the setting for the indexes doesn't really matter. Still, as a rule of thumb you only use NOLOGGING during the specific load (for a table) or rebuild (for an index) and then you would ALTER the setting to LOGGING again.
    This is from Tom Kyte in his AskTom blog from over 10 years ago and it still applies today.
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:5280714813869
    >
    NO NO NO -- it does not make sense to leave objects in NOLOGGING mode in a production
    instance!!!! it should be used CAREFULLY, and only in close coordination with the guys
    responsible for doing backups -- every non-logged operation performed makes media
    recovery for that segment IMPOSSIBLE until you back it up.
    >
    Use of NOLOGGING is a special-case operation. It is mainly used in Datawarehouse (OLAP systems) data processing during truncate-and-load operations on staging tables. Those are background or even offline operations and the tables are NOT accessible by end users; they are work tables used to prepare the data that will be merged to the production tables.
    1. TRUNCATE a table
    2. load the table with data
    3. process the data in the table
    In those operations the table load is seldom backed up and rarely needs recovery. So use of NOLOGGING enhances the performance of the data load and the data can be recovered, if necessary, from the source it was loaded from to begin with.
    Use of NOLOGGING is rarely, if ever, used for OLTP systems since that data needs to be recovered.
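    To make that staging pattern concrete, here is a minimal sketch of the sequence described above. stage_sales, stage_sales_ix and external_sales_src are hypothetical names; the point is that NOLOGGING is used only around the load and rebuild, and switched back to LOGGING afterwards.
    -- Hedged sketch of the truncate-and-load pattern (all object names are hypothetical).
    ALTER TABLE stage_sales NOLOGGING;
    TRUNCATE TABLE stage_sales;
    ALTER INDEX stage_sales_ix UNUSABLE;          -- skip index maintenance during the load
    INSERT /*+ APPEND */ INTO stage_sales         -- direct-path load, minimal redo
      SELECT * FROM external_sales_src;
    COMMIT;
    ALTER INDEX stage_sales_ix REBUILD NOLOGGING; -- rebuild the index without redo
    ALTER TABLE stage_sales LOGGING;              -- switch back to LOGGING afterwards
    ALTER INDEX stage_sales_ix LOGGING;
    -- Coordinate with whoever does backups: back up the affected tablespace after any
    -- NOLOGGING operation, otherwise media recovery of that segment is impossible until you do.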

  • Flashback database and nologging tablespaces

    DB versions:
    Oracle 11.2.0.3
    2 other DBs at 11.2.0.3 with 10.2.0.3 compatibility mode.
    We are using Flashback Database in our dev/test environments so we can flash back and apply builds again. I am OK if we lose data and can't roll forward. We are running out of archive space. Lately we have been going over a month or more between builds. Is it possible to set tablespaces to NOLOGGING and still have flashback work? I don't need the data. I just need to be able to reset to the structure before the last build. I am 100% OK with losing test data. I have no say over when builds get done.
    The reason I am asking rather than just testing (I did do some Google searches):
    Testing this isn't that simple due to process issues. I have to get a build scheduled, which can take a week or more (no extra DB for me to test this), and the build team generally doesn't listen, so if I tell them to wait for me for the test, it generally doesn't happen; then it can be another week before I get a build, and so on. Restoring from backup isn't an easy change because that is a process change, and I have to go through incredible amounts of 'process' and approval to get any kind of change to a process (think Dilbert on steroids). So I have to ask ahead of even running a test.

    Hi,
    Doc ref: ALTER TABLESPACE
    Changing Tablespace Logging Attributes: Example. The following example changes the default logging attribute of a tablespace to NOLOGGING:
    ALTER TABLESPACE tbs_03 NOLOGGING;
    Altering a tablespace's logging attribute has no effect on the logging attributes of the existing schema objects within the tablespace. The tablespace-level logging attribute can be overridden by logging specifications at the table, index, and partition levels.
    So if you want NOLOGGING, you have to set it at the table level.
    I'd suggest using different locations for the archive logs and the flashback logs, and cleaning the archive area with an OS job. Flashback Database requires only the flashback log files during a restore.
    HTH
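    If it helps, a minimal sketch for checking the current logging/flashback settings and separating the two destinations; the paths and sizes below are placeholders, and the views used (dba_tablespaces, v$database) are standard.
    -- Hedged sketch: check current logging attributes and separate the two destinations
    -- (paths and sizes are placeholders, adjust to your environment).
    SELECT tablespace_name, logging, force_logging FROM dba_tablespaces;
    SELECT force_logging, flashback_on FROM v$database;

    -- Keep archived logs and flashback logs in separate locations so an OS job
    -- can clean the archive area without touching the flashback logs.
    ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/u02/arch' SCOPE=BOTH;
    ALTER SYSTEM SET db_recovery_file_dest_size = 50G SCOPE=BOTH;
    ALTER SYSTEM SET db_recovery_file_dest = '/u03/fra' SCOPE=BOTH;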

  • Reading the data from table control and write log.

    Hi all,
    In the VA01 transaction I have the table control 'All items'.
    I want to write the values of some columns (article no, order, plant, and so on) into the eCATT log file after saving the transaction, for all rows that have an article number.
    Is there any possibility in eCATT other than going to the GETGUI function, which is static to a specific field?
    Regards,
    Sree

    Hi Sreedhar,
    There are two types of variable values you find in transactions: system-generated values (generally the unique values) and static field values.
    When you want the static field values, you can use GETGUI. You can use the same GETGUI any number of times according to the situation (like in loops etc.), and the system-generated messages can be handled in the message blocks:
    MESSAGE.
    ENDMESSAGE.
    In the message block, make a rule for the message that you are expecting, like 'E' MSGNR (the message number), and give a variable in the fields MSGV1/MSGV2 wherever you are getting the unique generated value (according to the log); you can then use that variable for the log.
    Let me know whether you were looking for this or something else.
    Best regards,
    Harsha

How to overwrite a log and bad file in an external table in Oracle 10g

    Hi,
    I have used an external table in Oracle 10g. Whenever a select query is run against the external table, Oracle internally creates a log file in the specified directory, but this log file keeps growing. How can I overwrite the log file (replace the old one with the new)? I need to overwrite the log and bad file of the external table.
    kindly give the solutions.
    By
    Siva

    I don't believe that is possible with the LOGFILE clause, but it may be with the BADFILE clause. Here is an excerpt from the documentation :
    The LOGFILE clause names the file that contains messages generated by the external tables utility while it was accessing data in the datafile. If a log file already exists by the same name, the access driver reopens that log file and appends new log information to the end. This is different from bad files and discard files, which overwrite any existing file. NOLOGFILE is used to prevent creation of a log file.
    If you specify LOGFILE, you must specify a filename or you will receive an error.
    If neither LOGFILE nor NOLOGFILE is specified, the default is to create a log file. The name of the file will be the table name followed by _%p and it will have an extension of .log.
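    So if the growing log file is the concern, one option based on the excerpt above is to suppress it entirely with NOLOGFILE (the bad file is overwritten on each access anyway). A minimal sketch, assuming a directory object EXT_DIR, a file cards.csv and a two-column layout, all hypothetical:
    -- Minimal sketch (EXT_DIR, cards.csv and the column list are hypothetical).
    CREATE TABLE cards_ext (
      card_no   VARCHAR2(20),
      card_name VARCHAR2(60)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY ext_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        BADFILE ext_dir:'cards.bad'     -- bad file is overwritten on each access
        NOLOGFILE                       -- no log file is written at all
        NODISCARDFILE
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        MISSING FIELD VALUES ARE NULL
      )
      LOCATION ('cards.csv')
    )
    REJECT LIMIT 0;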

  • Reorganization of tablespaces, tables and indexes.

    Hello Experts,
    What is the concept of tablespaces, statistics, tables and indexes in SAP/Oracle? Where are they used and what are they meant for?
    What is the concept of and procedure for performing a reorganization of tablespaces, tables and indexes? Why do we need it?
    Requested to revert at the earliest as it's urgent. Points guaranteed.
    Regards,
    Somya

    Hello Somya,
    It is probably difficult to explain all of this in one thread, but you will definitely get good information from the following link:
    http://help.sap.com/saphelp_47x200/helpdata/en/0d/d2fafd4a0c11d182b80000e829fbfe/frameset.htm
    Please drill down through menus, and you will be able to get good information.
    Also you can check the following SAP notes
    666061     FAQ: Database objects, segments and extents
    912620     FAQ: Oracle indexes
    588668     FAQ: Database statistics
    592393     FAQ: Oracle
    541538     FAQ: Reorganizations
    Regards,
    Madhukar

  • Clean-up of workflow log entries (WFMC) from tables CMFP and CMFK

    Hi,
    I am cleaning up the tables CMFP and CMFK of workflow log entries with application ID WFMC. I used the cleanup programs RSCLNAFP and RSCLCMFP, but the entries are still seen in the tables. Can anyone please advise?

    Hey,
    I think notes 627257 and 758952 would help.
    if not, please read notes 52114, 617634, and:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/25c1f5d1-0901-0010-d495-e96d02a0cb01
    The link above advises running transaction NACE:
    To avoid unnecessary growth of tables CMFP and CMFK, you can prevent the creation of processing logs by following these steps:
    1. Call up transaction NACE (“Conditions for Output Control”).
    2. Choose the desired applications and then “Output Types”.
    3. Double click on the output type to go to the detail view where you can make the necessary settings. To make the settings you must enter the change mode.
    4. Set the indicator “do not write processing log” and save your settings.
    This setting is only applicable to the individual application and output type. If it is set, processing logs will be collected in the main memory, but they will not be written to the database. Other output types are not affected by this setting. You have to repeat the aforementioned steps for each output type individually. It is not possible to switch off the processing log for all output types at the same time. For more information on the setting “do not write processing log” see the corresponding documentation.

  • Difference between Delta "Change Log" and "Active Table (Without Archive)"?

    In a BI 7.0 environment, we perform our Delta loads (in the DTP settings under the Extraction tab there is a field called Extraction Mode, and its value is set to "Delta") every day across all our DSOs.
    There is a section called "Delta Init. Extraction From..." under the same tab in DTP, there are four radio buttons:
    Active Table (With Archive)
    Active Table (Without Archive)
    Archive (Full Extraction Only)
    Change Log
    Then what is the difference between "Change Log" and "Active Table (Without Archive)" if the Extraction Mode is "Delta" for both Delta loads?
    Thanks!

    Hi,
    The new options in SP16 are (check Note 1096771):
    Active Table (with Archive)
    The data is read from the active table and from the archive or from a near-line storage if one exists. You can choose this option even if there is no active data archiving process yet for the DataStore object.
    Active Table (Without Archive)
    The data is only read from the active table. If there is data in the archive or in a near-line storage at the time of extraction, this data is not extracted.
    Archive (Only Full Extraction)
    The data is only read from the archive or from a near-line storage. Data is not extracted from the active table.
    Change Log
    The data is read from the change log of the DataStore object.
    The delta will always be picked from the change log table. Only during initialization can you choose between getting the data from the change log or the active table. If you are doing the load for the first time and are initializing the delta in subsequent data targets, then pulling data from the active table will get a smaller volume of data than it would from the change log table. All subsequent deltas will be picked up from the change log. And when we need to reload data into the data target (which would be a full load), we use the active table.
    From the change log, you can use the following as targets:
    1) Cube 2) DSO with Addition as the update for the key figures
    From the active table, you can use the following as targets:
    1) Cube, if and only if the records are never changed in the source after creation
    2) DSO with Addition as the update for the key figures, if and only if the records are never changed in the source after creation
    3) DSO with Overwrite as the update for the key figures (in case deletions are not happening in the source system)
    Pls check this link
    http://help.sap.com/saphelp_nw70ehp1/helpdata/en/47/e8c56ecd313c86e10000000a42189c/frameset.htm
    Regards,
    CSM Reddy

  • Advantage of FORCE LOGGING over NOLOGGING

    Hi,
    Can you please help me with the advantages of using FORCE LOGGING mode with a standby database and its effect on indexes, etc.? It would also help if you could share ideas on the difference between the two modes.
    Thanks,
    Jennah

    >> Can you help me what factors would be sacrificed
    This really depends on your system; in most cases you will not be able to see a difference. However, I did a small test:
    - drop index, restart db
    - create index with LOGGING (measure time/redo size)
    - drop index, restart db
    - create index with NOLOGGING (measure time/redo size)
    Result:
    logging   - Elapsed: 00:02:40.68 / Redo size: ~800 MB
    nologging - Elapsed: 00:02:20.29 / Redo size: ~1.5 MB
    Here the full test:
    SQL> select a.name, b.value from v$statname a, v$mystat b where a.statistic# = b.statistic# and a.name = 'redo size';
    NAME                                                                  VALUE
    redo size                                                             28304
    SQL> CREATE UNIQUE INDEX "SAPR3"."CDCLS~0" ON "SAPR3"."CDCLS"
    ("MANDANT", "OBJECTCLAS", "OBJECTID", "CHANGENR", "PAGENO")
      PCTFREE 10 INITRANS 2 MAXTRANS 255
      STORAGE(INITIAL 65536 NEXT 65536 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "PSAPCLUI" LOGGING;
    Index created.
    Elapsed: 00:02:40.68
    SQL> select a.name, b.value from v$statname a, v$mystat b where a.statistic# = b.statistic# and a.name = 'redo size';
    NAME                                                                  VALUE
    redo size                                                         834714816
    SQL> select segment_name, bytes/1024/1024 "Size_MB" from dba_segments where segment_name = 'CDCLS~0'
    SEGMENT_NAME            Size_MB
    CDCLS~0                     800
    drop index / db restart here
    SQL> select a.name, b.value from v$statname a, v$mystat b where a.statistic# = b.statistic# and a.name = 'redo size';
    NAME                                                                  VALUE
    redo size                                                             28992
    SQL> CREATE UNIQUE INDEX "SAPR3"."CDCLS~0" ON "SAPR3"."CDCLS"
    ("MANDANT", "OBJECTCLAS", "OBJECTID", "CHANGENR", "PAGENO")
      PCTFREE 10 INITRANS 2 MAXTRANS 255
      STORAGE(INITIAL 65536 NEXT 65536 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "PSAPCLUI" NOLOGGING; 
    Index created.
    Elapsed: 00:02:20.29
    SQL> select a.name, b.value from v$statname a, v$mystat b where a.statistic# = b.statistic# and a.name = 'redo size';
    NAME                                                                  VALUE
    redo size                                                           1520824
    SQL> select segment_name, bytes/1024/1024 "Size_MB" from dba_segments where segment_name = 'CDCLS~0';
    SEGMENT_NAME            Size_MB
    CDCLS~0                     800
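    Coming back to the original question about FORCE LOGGING with a standby: its point is that it overrides any NOLOGGING attribute, so an operation like the index creation above generates full redo and the standby stays consistent. A minimal sketch for checking and enabling it (standard commands; PSAPCLUI is just the tablespace from the test above):
    -- Check the current setting at database and tablespace level.
    SELECT force_logging FROM v$database;
    SELECT tablespace_name, force_logging FROM dba_tablespaces;

    -- Enable FORCE LOGGING for the whole database (commonly recommended with a standby);
    -- NOLOGGING operations will then generate redo regardless of the object setting.
    ALTER DATABASE FORCE LOGGING;

    -- Or limit it to a single tablespace.
    ALTER TABLESPACE psapclui FORCE LOGGING;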

  • Where are  tablespace, table structure, package physically stored?

    Running Oracle 11g on Linux.
    My disk settings are:
    /dev/sda1 (boot)
    /dev/sda2 (/)
    /dev/sda3 (/u01) for Oracle software. Base: /u01/app/oracle
    /dev/sda4 (swap)
    /dev/sdb1 ASM disk for DATA
    /dev/sdc1 ASM disk for DATA
    /dev/sdd1 ASM disk for FRA
    1. Where are the tablespace, table structure (such as table name, column name, column type, primary key, and index), package, procedure, trigger, and function PHYSICALLY stored? I want to know the common directory path and disk location.
    2. When the Fast Recovery Area (FRA) does the backup job, will the tablespace, table structure (column, type, key, index), package, procedure, trigger, and function also be backed up to the Fast Recovery Area (FRA), in addition to the data files, control files, redo logs and archived logs?
    Thanks

    You need to understand the difference between, and the concepts of, physical storage structures and logical storage structures.
    http://docs.oracle.com/cd/E11882_01/server.112/e16508/physical.htm
    http://docs.oracle.com/cd/E11882_01/server.112/e16508/logical.htm
    /dev/sdb1 ASM disk for DATA
    /dev/sdc1 ASM disk for DATA
    /dev/sdd1 ASM disk for FRA
    1. Where are the tablespace, table structure (such as table name, column name, column type, primary key, and index), package, procedure, trigger, and function PHYSICALLY stored? I want to know the common directory path and disk location.
    Probably your database is stored on an ASM filesystem, which is using /dev/sdb1 and /dev/sdc1. So if your database files are in the ASM "DATA" diskgroup, then all the objects are there.
    Query V$DATAFILE, V$CONTROLFILE, V$LOGFILE, V$TEMPFILE
    2. When the Fast Recovery Area (FRA) does the backup job, will the tablespace, table structure (column, type, key, index), package, procedure, trigger, and function also be backed up to the Fast Recovery Area (FRA), in addition to the data files, control files, redo logs and archived logs?
    The FRA is used to hold backups of the database, so your objects are backed up there too.
    Use RMAN to check it:
    LIST BACKUP;
    LIST ARCHIVELOG ALL;
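    A minimal sketch of those checks; the views are standard, and SCOTT.XYZ is only an example segment name:
    -- Hedged sketch: see where the physical files live (they will show ASM paths
    -- like +DATA/... if the database is on ASM).
    SELECT name FROM v$datafile;
    SELECT name FROM v$controlfile;
    SELECT member FROM v$logfile;
    SELECT name FROM v$tempfile;

    -- Logical-to-physical mapping for one segment (SCOTT.XYZ is just an example).
    SELECT segment_name, tablespace_name, file_id, blocks
      FROM dba_extents
     WHERE owner = 'SCOTT' AND segment_name = 'XYZ';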

  • DataGuard and NoLogging

    I would like to know something pertaining to Data Guard and NOLOGGING.
    I read that NOLOGGING operations on the primary cause block corruption, which can be a problem on the standby DB.
    But when I tried to simulate this on my primary/standby, it was not observed.
    create table xyz(name varchar2(20));
    insert into xyz values ('myname');
    [did this to insert 10 rows]
    Then I created an index:
    create index ind_xyz on xyz(name) nologging; <-- this should be causing a problem
    Then the same log was transferred to the standby.
    But when I query on the standby (in read-only mode), it works fine. The select query works fine.
    So where's the problem with NOLOGGING?

    SQL> select tablespace_name, force_logging from dba_tablespaces;
    TABLESPACE_NAME FORCE_LOGGING
    SYSTEM NO
    UNDOTBS NO
    TEMP NO
    SQL> select force_logging from v$database;
    FOR
    NO
    So where is the catch?
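    For anyone hitting the same question: a plausible explanation is that the select simply did not touch the index blocks (a full table scan reads only the table data, which was logged). A minimal, hedged sketch for checking whether unlogged operations occurred and for forcing the index to be used; xyz/ind_xyz are the objects from the test above:
    -- Hedged sketch: on the primary, check whether unlogged (NOLOGGING) changes exist.
    SELECT file#, unrecoverable_change#, unrecoverable_time
      FROM v$datafile
     WHERE unrecoverable_change# > 0;

    -- On the standby, a query forced through the NOLOGGING-built index
    -- (rather than a full table scan) would be expected to fail with
    -- ORA-01578 / ORA-26040 on the affected index blocks.
    SELECT /*+ INDEX(xyz ind_xyz) */ name FROM xyz WHERE name = 'myname';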

  • A question regarding database table partitioning and table indexes in 10g

    We are considering partitioning a large table in our 10g database, in order to improve response time. I believe I understand the various partitioning options, but am wondering about the indexes built over the table. When the table is partitioned, will the indexes also be partitioned "automatically"? Or do I need to also partition the indexes as well?
    Thank you in advance to any and all who respond to this question.

    Hello,
    When you build your partitioned table you just need to create the indexes as LOCAL and they will be partitioned automatically; see the following example:
    CREATE TABLE YY_EVENT (
      PART_KEY       DATE               NOT NULL,
      SUBPART_VALUE  NUMBER                 NULL,
      EVENT_NAME     VARCHAR2(30 BYTE)      NULL,
      EVENT_VALUE    NUMBER                 NULL
    )
    TABLESPACE TEST_DATA
    PARTITION BY RANGE (PART_KEY)
    (
      PARTITION Y_EVENT_200901 VALUES LESS THAN (TO_DATE(' 2009-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        TABLESPACE TEST_DATA,
      PARTITION Y_EVENT_200902 VALUES LESS THAN (TO_DATE(' 2009-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
        TABLESPACE TEST_DATA
    );
    -- This will create partitioned (LOCAL) index partitions automatically
    CREATE INDEX MY_IDX ON YY_EVENT (EVENT_NAME)
      TABLESPACE TEST_DATA
      LOGGING
      LOCAL;
    Regards

  • How to identify the size of table logging (SCU3)

    hi SAP Expert,
    Currently I'm activating table logging via the parameter rec/client = ALL. I understand that this log will accumulate over time. Is there any way I can identify how big the log already is? I'm concerned the log would fill up my hard disk. Thanks for any response.
    Regards
    Hariyono

    Table logging writes to table DBTABPRT; just check the size of the table in DB02 (or the number of entries in SE16). You can track the growth of the table and wipe the contents once the data has been consumed by whoever requested it.
    You can check in DB02 which tablespace the table is located in and make sure that there's enough space in the filesystem for the table to grow without causing problems.
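    If you prefer to check from the database side, a minimal SQL sketch; it assumes the log table is DBTABLOG (as referenced earlier in this thread) and the SAPR3 schema owner, so substitute DBTABPRT or a different owner if that is what your system uses:
    -- Hedged sketch: size and tablespace of the table-logging table
    -- (DBTABLOG is assumed here; older releases may use DBTABPRT).
    SELECT segment_name, tablespace_name, ROUND(bytes/1024/1024) AS size_mb
      FROM dba_segments
     WHERE segment_name = 'DBTABLOG';

    SELECT COUNT(*) FROM sapr3.dbtablog;   -- number of log entries (owner may differ, e.g. SAPSR3)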
    Regards
    Juan
