INSERT WITH NOLOGGING CLAUSE

CAN WE ROLLBACK A TRANSACTION IF WE INSERT WITH THE NOLOGGING CLAUSE ???

ORACLE 'STUDENT',
IT IS APPRECIATED THAT YOU STUDY ORACLE BY READING THE MANUALS AND TRYING THINGS YOURSELF, INSTEAD OF ASKING TO BE SPOON-FED THE ANSWER TO EVERY QUESTION YOU HAVE.
ALSO, IT IS APPRECIATED IF YOU DON'T Y E L L YOUR QUESTIONS!!!
Sybrand Bakker
Senior Oracle DBA
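For the record, the answer is yes: NOLOGGING affects redo generation (and only for direct-path operations), not undo, so an uncommitted insert into a NOLOGGING table can still be rolled back. A minimal sketch (the table name t is just an example):

```sql
CREATE TABLE t (id NUMBER) NOLOGGING;

INSERT INTO t VALUES (1);
ROLLBACK;                 -- undo still exists, so the insert is undone

SELECT COUNT(*) FROM t;   -- returns 0
```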

Similar Messages

  • Insert with Where Clause

    Hi,
    Can we write an INSERT with a 'WHERE' clause? I'm looking for something similar to the below (which is giving me an error):
    insert into PS_AUDIT_OUT (AUDIT_ID, EMPLID, RECNAME, FIELDNAME, MATCHVAL, ERRORMSG)
    Values ('1','10000139','NAMES','FIRST_NAME',';','')
    Where AUDIT_ID IN
    (  select AUDIT_ID from PS_AUDIT_FLD where AUDIT_ID ='1' and RECNAME ='NAMES'
    AND FIELDNAME = 'FIRST_NAME' AND MATCHVAL = ';' );
    Thanks
    Durai

    It is not clear what you are trying to do, but it looks like:
    insert
      into PS_AUDIT_OUT(
                        AUDIT_ID,
                        EMPLID,
                        RECNAME,
                        FIELDNAME,
                        MATCHVAL,
                        ERRORMSG
                       )
    select  '1',
            '10000139',
            'NAMES',
            'FIRST_NAME',
            ';',
            ''
      from  PS_AUDIT_FLD
      where AUDIT_ID = '1'
        and RECNAME = 'NAMES'
        and FIELDNAME = 'FIRST_NAME'
        and MATCHVAL = ';';
    SY.

  • NOLOGGING CLAUSE

    Hi,
    I would like to know: if we have our database in FORCE LOGGING mode with standby databases, could creating objects with the NOLOGGING clause cause some kind of conflict, something like corrupting the standby databases?
    The objects that we would like to create with the NOLOGGING clause are temporary tables, for example, for which we don't need a log. And it seems it could help performance if we set those objects to NOLOGGING.
    But we are not sure whether this clause will simply be ignored or whether it could cause problems.
    Thanks in advance,

    >
    CREATE TABLE TABLE_NAME (column_name varchar(2)...) NOLOGGING
    Now, using this, the table will be populated, it will be used, and then the information will be deleted.
    It doesn't matter if the information is not logged and cannot be recovered in a recovery database, because for us it is like a temporary table.
    So, the question is: will there be any impact on the database or on database recovery if we use the object above with the NOLOGGING clause?
    >
    As the table that you are creating is a permanent table, and the database is in FORCE LOGGING mode, as written by you in your very first post, the redo will be generated. The information will be logged into the redo, so it will also be archived and applied to the DR site. The NOLOGGING clause won't have any effect.
    Secondly, if the table is going to be populated and then the whole table is going to be deleted, why not use a TEMPORARY TABLE?
    Refer to [http://download.oracle.com/docs/cd/B14117_01/server.101/b10739/tables.htm#sthref1862]
    Anand
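    The temporary-table suggestion above looks like this in Oracle (a sketch; the table name is made up). A global temporary table generates no redo for its own data (only for its undo), and its rows disappear automatically:

    ```sql
    -- Rows are private to the session and vanish at COMMIT (use
    -- ON COMMIT PRESERVE ROWS to keep them until the session ends).
    CREATE GLOBAL TEMPORARY TABLE work_stage (
      id    NUMBER,
      name  VARCHAR2(40)
    ) ON COMMIT DELETE ROWS;
    ```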

  • Use of nologging clause

    Hi, I was trying to use the NOLOGGING clause to improve the performance of DML on one of my tables; however, I observed that the table with the NOLOGGING option was actually slightly slower :(
    Please refer to the following log.
    SQL> create table test_log(id int, name char(40))
    2 /
    Table created.
    Elapsed: 00:00:00.03
    SQL> create table test_nolog(id int, name char(40)) nologging
    2 /
    Table created.
    Elapsed: 00:00:00.00
    SQL> insert into test_log select ROWNUM*-1,DBMS_RANDOM.STRING('A',1) FROM DUAL CONNECT BY LEVEL <=1000000
    2 /
    1000000 rows created.
    Elapsed: 00:00:13.46
    SQL> insert into test_nolog select ROWNUM*-1,DBMS_RANDOM.STRING('A',1) FROM DUAL CONNECT BY LEVEL <=1000000
    2 /
    1000000 rows created.
    Elapsed: 00:00:16.95
    SQL> update test_log set id=100
    2 /
    1000000 rows updated.
    Elapsed: 00:00:46.35
    SQL> update test_nolog set id=100
    2 /
    1000000 rows updated.
    Elapsed: 00:00:49.43

    Inserts and updates are unaffected by whether the table was created with the NOLOGGING or LOGGING clause:
    they generate the same amount of redo for INSERT statements as well as UPDATE statements.
    NOLOGGING can help only for the following:
    1. CTAS
    2. SQL*Loader in direct mode
    3. INSERT /*+ APPEND */ ...
    SYSTEM@rman 15/12/2008> truncate table  test_log;
    Table truncated.
    Elapsed: 00:00:01.49
    SYSTEM@rman 15/12/2008> truncate table test_nolog;
    Table truncated.
    Elapsed: 00:00:00.67
    SYSTEM@rman 15/12/2008> insert into test_nolog select ROWNUM*-1,DBMS_RANDOM.STRING('A',1) FROM DUAL CONNECT BY LEVEL <=1000000;
    1000000 rows created.
    Elapsed: 00:00:39.80
    Execution Plan
    Plan hash value: 1731520519
    | Id  | Operation                     | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | INSERT STATEMENT              |      |     1 |     2   (0)| 00:00:01 |
    |   1 |  COUNT                        |      |       |            |          |
    |*  2 |   CONNECT BY WITHOUT FILTERING|      |       |            |          |
    |   3 |    FAST DUAL                  |      |     1 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - filter(LEVEL<=1000000)
    Statistics
           3081  recursive calls
          41111  db block gets
           8182  consistent gets
              0  physical reads
        60983504  redo size
            674  bytes sent via SQL*Net to client
            638  bytes received via SQL*Net from client
              3  SQL*Net roundtrips to/from client
              2  sorts (memory)
              0  sorts (disk)
        1000000  rows processed
    SYSTEM@rman 15/12/2008> commit;
    Commit complete.
    Elapsed: 00:00:00.03
    SYSTEM@rman 15/12/2008> insert into test_log select ROWNUM*-1,DBMS_RANDOM.STRING('A',1) FROM DUAL CONNECT BY LEVEL <=1000000;
    1000000 rows created.
    Elapsed: 00:00:38.79
    Execution Plan
    Plan hash value: 1731520519
    | Id  | Operation                     | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | INSERT STATEMENT              |      |     1 |     2   (0)| 00:00:01 |
    |   1 |  COUNT                        |      |       |            |          |
    |*  2 |   CONNECT BY WITHOUT FILTERING|      |       |            |          |
    |   3 |    FAST DUAL                  |      |     1 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - filter(LEVEL<=1000000)
    Statistics
           3213  recursive calls
          41323  db block gets
           8261  consistent gets
              2  physical reads
        60993120  redo size
            674  bytes sent via SQL*Net to client
            636  bytes received via SQL*Net from client
              3  SQL*Net roundtrips to/from client
              2  sorts (memory)
              0  sorts (disk)
        1000000  rows processed
    SYSTEM@rman 15/12/2008> commit;
    They simply generate the same amount of redo.
    If you use the APPEND hint you can reduce the INSERT statement timings:
    SYSTEM@rman 15/12/2008> truncate table test_nolog;
    Table truncated.
    Elapsed: 00:00:00.28
    SYSTEM@rman 15/12/2008> INSERT /*+ APPEND */ into test_nolog select ROWNUM*-1,DBMS_RANDOM.STRING('A',1) FROM DUAL CONNECT BY LEVEL <=1000000;
    1000000 rows created.
    Elapsed: 00:00:28.19
    Execution Plan
    ERROR:
    ORA-12838: cannot read/modify an object after modifying it in parallel
    SP2-0612: Error generating AUTOTRACE EXPLAIN report
    Statistics
           3125  recursive calls
           8198  db block gets
            929  consistent gets
              0  physical reads
         161400  redo size
            660  bytes sent via SQL*Net to client
            652  bytes received via SQL*Net from client
              3  SQL*Net roundtrips to/from client
              2  sorts (memory)
              0  sorts (disk)
        1000000  rows processed
    SYSTEM@rman 15/12/2008>
    You can also notice a significant difference in time and redo generated between a plain INSERT and an INSERT with APPEND on a NOLOGGING table.
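    To reproduce this kind of comparison yourself, one approach (a sketch, assuming your account can read the standard dynamic performance views v$mystat and v$statname) is to snapshot your session's 'redo size' statistic before and after each test statement and subtract:

    ```sql
    -- Current session's cumulative redo generation, in bytes.
    -- Run before and after a statement; the difference is that
    -- statement's redo.
    SELECT s.value AS redo_bytes
    FROM   v$mystat  s
           JOIN v$statname n ON n.statistic# = s.statistic#
    WHERE  n.name = 'redo size';
    ```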

  • Create table with logging clause....

    hi,
    just reading this http://docs.oracle.com/cd/B19306_01/server.102/b14231/tables.htm#ADMIN01507
    it mention under Consider Using NOLOGGING When Creating Tables:
    The NOLOGGING clause also specifies that subsequent direct loads using SQL*Loader and direct load INSERT operations are not logged. Subsequent DML statements (UPDATE, DELETE, and conventional path insert) are unaffected by the NOLOGGING attribute of the table and generate redo.
    Help me with my understanding. Does it mean that when you create a table with the NOLOGGING clause and then do a direct load, the reduced redo applies only to that activity, and the NOLOGGING attribute has no effect on conventional DML operations?

    sybrand_b wrote:
    Nologging basically applies to the INSERT statement with the APPEND hint. Direct load means using this hint.
    All other statements always generate logging, regardless of any table setting.
    Sybrand Bakker
    Senior Oracle DBA
    I did a few tests:
    create table test
    (id number) nologging;
    Table created.
    insert into test values (1);
    1 row created.
    commit;
    Commit complete.
    delete from test;
    1 row deleted.
    rollback;
    Rollback complete.
    select * from test;
            ID
             1
    There is no NOLOGGING at table level or tablespace level. So what I am doing is checking -> "Subsequent DML statements (UPDATE, DELETE, and conventional path insert) are unaffected by the NOLOGGING attribute of the table and generate redo."
    The above confuses me, because isn't rollback related to the UNDO tablespace, not the redo log? So one can still roll back even when the table is NOLOGGING.
    The REDO log is for rolling forward, i.e. recovery when there is a system crash...

  • Table with nologging on

    Guys,
    My Oracle procedure has an insert into a table, where the insert is a huge volume of data selected from a query on a few other tables. Hence it takes a very long time.
    Well, I don't care about redo generation; so if I alter the table to turn on NOLOGGING and insert using the /*+ APPEND */ hint, would that do, and would there be any ill effects?
    Thanks for your guidance in advance!!!
    Regards,
    Bhagat

    Hello
    It's easy enough to test:
    tylerd@DEV2> CREATE TABLE dt_test_redo(id number)
      2  /
    Table created.
    Elapsed: 00:00:00.00
    tylerd@DEV2>
    tylerd@DEV2> CREATE TABLE dt_test_redo_nolog(id number) nologging
      2  /
    Table created.
    Elapsed: 00:00:00.03
    tylerd@DEV2>
    tylerd@DEV2> set autot trace stat
    tylerd@DEV2>
    tylerd@DEV2> INSERT
      2  INTO
      3      dt_test_redo
      4  SELECT
      5      rownum
      6  FROM
      7      dual
      8  CONNECT BY
      9      LEVEL <= 10000
    10  /
    10000 rows created.
    Elapsed: 00:00:00.03
    Statistics
             69  recursive calls
            153  db block gets
             35  consistent gets
              0  physical reads
         145600 redo size
            677  bytes sent via SQL*Net to client
            622  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              3  sorts (memory)
              0  sorts (disk)
          10000  rows processed
    tylerd@DEV2> COMMIT
      2  /
    Commit complete.
    Elapsed: 00:00:00.00
    tylerd@DEV2> INSERT
      2  INTO
      3      dt_test_redo_nolog
      4  SELECT
      5      rownum
      6  FROM
      7      dual
      8  CONNECT BY
      9      LEVEL <= 10000
    10  /
    10000 rows created.
    Elapsed: 00:00:00.04
    Statistics
             69  recursive calls
            151  db block gets
             36  consistent gets
              0  physical reads
         145276 redo size
            677  bytes sent via SQL*Net to client
            628  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              3  sorts (memory)
              0  sorts (disk)
          10000  rows processed
    tylerd@DEV2> COMMIT
      2  /
    Commit complete.
    Elapsed: 00:00:00.00
    Virtually no difference for the insert when the table has NOLOGGING specified and the insert is done via the conventional path.
    tylerd@DEV2> INSERT /*+ append */
      2  INTO
      3      dt_test_redo
      4  SELECT
      5      rownum
      6  FROM
      7      dual
      8  CONNECT BY
      9      LEVEL <= 10000
    10  /
    10000 rows created.
    Elapsed: 00:00:00.03
    Statistics
            133  recursive calls
             92  db block gets
             42  consistent gets
              0  physical reads
           8128 redo size
            661  bytes sent via SQL*Net to client
            632  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              2  sorts (memory)
              0  sorts (disk)
          10000  rows processed
    tylerd@DEV2> COMMIT
      2  /
    Commit complete.
    Elapsed: 00:00:00.01
    tylerd@DEV2> INSERT /*+ append */
      2  INTO
      3      dt_test_redo_nolog
      4  SELECT
      5      rownum
      6  FROM
      7      dual
      8  CONNECT BY
      9      LEVEL <= 10000
    10  /
    10000 rows created.
    Elapsed: 00:00:00.04
    Statistics
            133  recursive calls
             89  db block gets
             42  consistent gets
              0  physical reads
           8180 redo size
            661  bytes sent via SQL*Net to client
            638  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              2  sorts (memory)
              0  sorts (disk)
          10000  rows processed
    tylerd@DEV2> COMMIT
      2  /
    Commit complete.
    Elapsed: 00:00:00.00
    Again, virtually no difference in the amount of redo between the table with logging and the one with nologging; the append hint, however, had a large impact for both tables, as the insert was done using the direct path.
    Now create indexes with logging on each table to see what impact that has on redo
    tylerd@DEV2> CREATE INDEX dt_test_redo_i1 ON dt_test_redo(id)
      2  /
    Index created.
    Elapsed: 00:00:00.06
    tylerd@DEV2> CREATE INDEX dt_test_redo_nolog_i1 ON dt_test_redo_nolog(id)
      2  /
    Index created.
    Elapsed: 00:00:00.06
    tylerd@DEV2> INSERT
      2  INTO
      3      dt_test_redo
      4  SELECT
      5      rownum
      6  FROM
      7      dual
      8  CONNECT BY
      9      LEVEL <= 10000
    10  /
    10000 rows created.
    Elapsed: 00:00:00.07
    Statistics
            273  recursive calls
            946  db block gets
            188  consistent gets
             21  physical reads
        1155872 redo size
            677  bytes sent via SQL*Net to client
            622  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              2  sorts (memory)
              0  sorts (disk)
          10000  rows processed
    tylerd@DEV2> COMMIT
      2  /
    Commit complete.
    Elapsed: 00:00:00.01
    tylerd@DEV2> INSERT
      2  INTO
      3      dt_test_redo_nolog
      4  SELECT
      5      rownum
      6  FROM
      7      dual
      8  CONNECT BY
      9      LEVEL <= 10000
    10  /
    10000 rows created.
    Elapsed: 00:00:00.09
    Statistics
            259  recursive calls
            933  db block gets
            176  consistent gets
             21  physical reads
        1155372 redo size
            677  bytes sent via SQL*Net to client
            628  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              2  sorts (memory)
              0  sorts (disk)
          10000  rows processed
    tylerd@DEV2> COMMIT
      2  /
    Commit complete.
    Elapsed: 00:00:00.00
    Again, virtually no difference when the append hint is not used.
    tylerd@DEV2> INSERT /*+ append */
      2  INTO
      3      dt_test_redo
      4  SELECT
      5      rownum
      6  FROM
      7      dual
      8  CONNECT BY
      9      LEVEL <= 10000
    10  /
    10000 rows created.
    Elapsed: 00:00:00.07
    Statistics
            133  recursive calls
            254  db block gets
             44  consistent gets
              0  physical reads
         339856 redo size
            661  bytes sent via SQL*Net to client
            632  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              3  sorts (memory)
              0  sorts (disk)
          10000  rows processed
    tylerd@DEV2> COMMIT
      2  /
    Commit complete.
    Elapsed: 00:00:00.01
    tylerd@DEV2> INSERT /*+ append */
      2  INTO
      3      dt_test_redo_nolog
      4  SELECT
      5      rownum
      6  FROM
      7      dual
      8  CONNECT BY
      9      LEVEL <= 10000
    10  /
    10000 rows created.
    Elapsed: 00:00:00.06
    Statistics
            133  recursive calls
            250  db block gets
             44  consistent gets
              0  physical reads
         339872 redo size
            661  bytes sent via SQL*Net to client
            638  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              3  sorts (memory)
              0  sorts (disk)
          10000  rows processed
    tylerd@DEV2> COMMIT
      2  /
    Commit complete.
    Elapsed: 00:00:00.01
    The append hint has significantly reduced the amount of redo generated, but it is almost the same for both tables. This is because redo is still being generated for the indexes, as they were not created with the NOLOGGING option. So now repeat the test using indexes with NOLOGGING:
    tylerd@DEV2> DROP INDEX dt_test_redo_i1;
    Index dropped.
    Elapsed: 00:00:00.03
    tylerd@DEV2> DROP INDEX dt_test_redo_nolog_i1;
    Index dropped.
    Elapsed: 00:00:00.03
    tylerd@DEV2>
    tylerd@DEV2> CREATE INDEX dt_test_redo_i1 ON dt_test_redo(id) NOLOGGING
      2  /
    Index created.
    Elapsed: 00:00:00.09
    tylerd@DEV2> CREATE INDEX dt_test_redo_nolog_i1 ON dt_test_redo_nolog(id) NOLOGGING
      2  /
    Index created.
    Elapsed: 00:00:00.09
    tylerd@DEV2> INSERT
      2  INTO
      3      dt_test_redo
      4  SELECT
      5      rownum
      6  FROM
      7      dual
      8  CONNECT BY
      9      LEVEL <= 10000
    10  /
    10000 rows created.
    Elapsed: 00:00:00.15
    Statistics
            320  recursive calls
           1445  db block gets
            240  consistent gets
             43  physical reads
        1921940 redo size
            677  bytes sent via SQL*Net to client
            622  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              3  sorts (memory)
              0  sorts (disk)
          10000  rows processed
    tylerd@DEV2> COMMIT
      2  /
    Commit complete.
    Elapsed: 00:00:00.01
    tylerd@DEV2> INSERT
      2  INTO
      3      dt_test_redo_nolog
      4  SELECT
      5      rownum
      6  FROM
      7      dual
      8  CONNECT BY
      9      LEVEL <= 10000
    10  /
    10000 rows created.
    Elapsed: 00:00:00.09
    Statistics
            292  recursive calls
           1408  db block gets
            218  consistent gets
             42  physical reads
        1887772 redo size
            678  bytes sent via SQL*Net to client
            628  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              3  sorts (memory)
              0  sorts (disk)
          10000  rows processed
    tylerd@DEV2> COMMIT
      2  /
    Commit complete.
    Elapsed: 00:00:00.01
    Again, very little change in the redo between the table with and without logging.
    tylerd@DEV2> INSERT /*+ append */
      2  INTO
      3      dt_test_redo
      4  SELECT
      5      rownum
      6  FROM
      7      dual
      8  CONNECT BY
      9      LEVEL <= 10000
    10  /
    10000 rows created.
    Elapsed: 00:00:00.04
    Statistics
            133  recursive calls
            278  db block gets
             65  consistent gets
              0  physical reads
         313104 redo size
            663  bytes sent via SQL*Net to client
            632  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              3  sorts (memory)
              0  sorts (disk)
          10000  rows processed
    tylerd@DEV2> COMMIT
      2  /
    Commit complete.
    Elapsed: 00:00:00.01
    tylerd@DEV2> INSERT /*+ append */
      2  INTO
      3      dt_test_redo_nolog
      4  SELECT
      5      rownum
      6  FROM
      7      dual
      8  CONNECT BY
      9      LEVEL <= 10000
    10  /
    10000 rows created.
    Elapsed: 00:00:00.04
    Statistics
            133  recursive calls
            274  db block gets
             66  consistent gets
              0  physical reads
         313096 redo size
            663  bytes sent via SQL*Net to client
            638  bytes received via SQL*Net from client
              4  SQL*Net roundtrips to/from client
              3  sorts (memory)
              0  sorts (disk)
          10000  rows processed
    tylerd@DEV2> COMMIT
      2  /
    Commit complete.
    Elapsed: 00:00:00.01
    tylerd@DEV2>
    So from these tests you can see that the NOLOGGING clause specified at the table level has very little impact. The APPEND hint has more impact, and as you scale the numbers up, making sure the indexes have the NOLOGGING clause specified and using the APPEND hint gives the least amount of redo generated. Of course, if you can do the insert without any indexes on the table and you use the APPEND hint, that will give you the best throughput.
    HTH
    David
    Message was edited by:
    David Tyler
    fixed the formatting
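    A common variation on David's last point (a sketch; the table name big_t and index name big_t_i1 are made up) is to mark the indexes UNUSABLE before the bulk load and rebuild them NOLOGGING afterwards, so neither the load nor the index maintenance generates full redo:

    ```sql
    -- Skip index maintenance during the direct-path load,
    -- then rebuild the index with minimal redo.
    ALTER INDEX big_t_i1 UNUSABLE;
    ALTER SESSION SET skip_unusable_indexes = TRUE;

    INSERT /*+ APPEND */ INTO big_t
    SELECT ROWNUM, 'x' FROM dual CONNECT BY LEVEL <= 100000;
    COMMIT;

    ALTER INDEX big_t_i1 REBUILD NOLOGGING;
    ```

    As with any NOLOGGING operation, the affected segments are unrecoverable from the redo stream until the next backup.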

  • Create Materialized View with Compress clause

    Hi,
    Oracle 9i R2 has an option to use the COMPRESS clause while creating a table, a materialized view, etc. I just wanted some tips to be followed when creating a materialized view with the COMPRESS clause and refreshing it. Do I need to follow any special procedures? Any help greatly appreciated.
    Thanks,
    Chak.

    I read in a book that when inserting you should use /*+ APPEND */, otherwise the data in a table with COMPRESS will not actually be compressed. I am doing a materialized view refresh in fast mode, and data will be inserted as per the logs residing at the master site. While inserting into the materialized view, do I have to set anything up specially, since a fast refresh is going to insert data into the existing materialized view?
    Thanks,

  • Merge with where clause after matched and unmatched

    Hai,
    Can anybody give me one example of merge statement with
    where clause after matched condition and after the unmatched condition.
    MERGE INTO V1 VV1
    USING (SELECT A.CNO XXCNO, A.SUNITS XXSU, A.DDATE XXDD, XX.SUM_UNITS SUMMED
             FROM V1 A,
                  (SELECT SUM(SUNITS) SUM_UNITS
                     FROM V1 B
                    GROUP BY CNO) C
            WHERE A.DDATE=0 AND A.SUNITS <>0
              AND A.ROWID=(SELECT MAX(ROWID) FROM V1)) XX
    ON (1=1)
    WHEN MATCHED THEN UPDATE SET
         VV1.SUNITS=XX.SUMMED
         WHERE XX.XXDD=0
    WHEN NOT MATCHED THEN INSERT
         (VV1.CNO, VV1.SUNITS, VV1.SUNITS)
         VALUES (XX.XXCNO, XX.XXSU, XX.XXDD)
         WHERE XX.XXDD<>0
    i am getting the error
    WHERE XX.XXDD=0
    ERROR at line 13:
    ORA-00905: missing keyword
    Thanks,
    Pal

    There is an example here:
    http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10759/statements_9016.htm#sthref7014
    What Oracle version do you use?
    Besides, the ON condition (1=1) is non-deterministic;
    I would expect an exception there like "unable to get a stable set of rows".
    Rgds.
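    For reference, a minimal sketch of the requested shape (the table names emp_tgt and emp_src and the conditions are made up; WHERE clauses after the matched and unmatched branches require Oracle 10g or later):

    ```sql
    MERGE INTO emp_tgt t
    USING emp_src s
    ON (t.empno = s.empno)
    WHEN MATCHED THEN
      UPDATE SET t.sal = s.sal
      WHERE s.sal > t.sal          -- update only when the new salary is higher
    WHEN NOT MATCHED THEN
      INSERT (t.empno, t.sal)
      VALUES (s.empno, s.sal)
      WHERE s.sal > 1000;          -- insert only rows above a threshold
    ```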

  • Multi table insert with error logging

    Hello,
    Can anyone please post an example of a multitable insert with an error logging clause?
    Thank you,

    "Please assume that I check the documentation before asking a question in the forums." Well, apparently you have not.
    From the docs in question:
    multi_table_insert ::=
    { ALL insert_into_clause
          [ values_clause ] [ error_logging_clause ]
          [ insert_into_clause
            [ values_clause ] [ error_logging_clause ] ]...
    | conditional_insert_clause
    } subquery
    Regards
    Peter
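    To give the concrete example that was asked for (a sketch; the table names are made up, and DBMS_ERRLOG.CREATE_ERROR_LOG is the standard way to create the err$_ log tables first):

    ```sql
    -- One-time setup for the error log tables:
    -- EXEC DBMS_ERRLOG.CREATE_ERROR_LOG('ORDERS_2008');
    -- EXEC DBMS_ERRLOG.CREATE_ERROR_LOG('ORDERS_2009');

    INSERT ALL
      INTO orders_2008 (order_id, order_date) VALUES (order_id, order_date)
           LOG ERRORS INTO err$_orders_2008 ('batch1') REJECT LIMIT UNLIMITED
      INTO orders_2009 (order_id, order_date) VALUES (order_id, order_date)
           LOG ERRORS INTO err$_orders_2009 ('batch1') REJECT LIMIT UNLIMITED
    SELECT order_id, order_date FROM all_orders;
    ```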

  • Direct load insert  vs direct path insert vs nologging

    Hello. I am trying to load data from table A (only 4 columns) to table B. Table B is new. I have 25 million records in table A. I have been debating between direct load insert, direct path insert, and NOLOGGING. What is the difference between the three methods of data load? What is the best approach?

    Hello,
    The fastest way to move data from table A to table B is by using a direct path insert with the NOLOGGING option turned on for table B. This will produce minimal logging, and in case of DR you might not be able to recover the data in table B. Direct path insert is the equivalent of loading data from a flat file using the direct load method. Generally, with the conventional method there are six phases to move your data from the source (table, flat file) to the target (table), but with direct path/load this is cut down to three; and if in addition you use the PARALLEL hint on the select and the insert, you may get a faster result.
    INSERT /*+ APPEND */ INTO TABLE_B SELECT * from TABLE_A;
    Regards
    Correction to select statement
    Edited by: OrionNet on Feb 19, 2009 11:28 PM
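    A sketch of the parallel variant mentioned above (degree 4 is an arbitrary choice; parallel DML must be enabled at the session level first):

    ```sql
    ALTER SESSION ENABLE PARALLEL DML;

    INSERT /*+ APPEND PARALLEL(b, 4) */ INTO table_b b
    SELECT /*+ PARALLEL(a, 4) */ * FROM table_a a;

    -- A direct-path insert must be committed before the
    -- session can query the table again (ORA-12838 otherwise).
    COMMIT;
    ```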

  • Multi-table INSERT with PARALLEL hint on 2 node RAC

    A multi-table INSERT statement with parallelism set to 5 works fine and spawns multiple parallel
    servers to execute. It's just that it sticks to only one instance of a 2-node RAC. The code I
    used is given below.
    create table t1 ( x int );
    create table t2 ( x int );
    insert /*+ APPEND parallel(t1,5) parallel (t2,5) */
    when (dummy='X') then into t1(x) values (y)
    when (dummy='Y') then into t2(x) values (y)
    select dummy, 1 y from dual;
    I can see multiple sessions using the query below, but on only one instance. This happens not
    only for the above statement but also for statements where real tables (tables with more
    than 20 million records) are used.
    select p.server_name,ps.sid,ps.qcsid,ps.inst_id,ps.qcinst_id,degree,req_degree,
    sql.sql_text
    from Gv$px_process p, Gv$sql sql, Gv$session s , gv$px_session ps
    WHERE p.sid = s.sid
    and p.serial# = s.serial#
    and p.sid = ps.sid
    and p.serial# = ps.serial#
    and s.sql_address = sql.address
    and s.sql_hash_value = sql.hash_value
    and qcsid=945
    Won't parallel servers be spawned across instances for multi-table insert with parallelism on RAC?
    Thanks,
    Mahesh

    Please take a look at these 2 articles below
    http://christianbilien.wordpress.com/2007/09/12/strategies-for-rac-inter-instance-parallelized-queries-part-12/
    http://christianbilien.wordpress.com/2007/09/14/strategies-for-parallelized-queries-across-rac-instances-part-22/
    thanks
    http://swervedba.wordpress.com

  • Using bind variable with IN clause

    My application runs a limited number of straight up queries (no stored procs) using ODP.NET. For the most part, I'm able to use bind variables to help with query caching, etc... but I'm at a loss as to how to use bind variables with IN clauses. Basically, I'm looking for something like this:
    int objectId = 123;
    string[] listOfValues = { "a", "b", "c"};
    OracleCommand command = new OracleCommand();
    command.Connection = conn;
    command.BindByName = true;
    command.CommandText = @"select blah from mytable where objectId = :objectId and somevalue in (:listOfValues)";
    command.Parameters.Add("objectId", objectId);
    command.Parameters.Add("listOfValues", listOfValues);
    I haven't had much luck yet using an array as a bind variable. Do I need to pass it in as a PL/SQL associative array? Cast the values to a TABLE?
    Thanks,
    Nick

    Nevermind, found this
    How to use OracleParameter whith the IN Operator of select statement
    which contained this, which is a brilliant solution
    http://oradim.blogspot.com/2007/12/dynamically-creating-variable-in-list.html
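    The usual server-side shape of that solution (a sketch; the collection type name t_varchar_list is made up) is to bind a single collection and expand it with the TABLE operator, so the statement text stays constant regardless of how many values are in the list:

    ```sql
    -- One-time setup: a SQL-level collection type for the IN-list values.
    CREATE OR REPLACE TYPE t_varchar_list AS TABLE OF VARCHAR2(4000);
    /

    -- The query then binds one collection parameter instead of a
    -- variable number of scalar binds:
    SELECT blah
    FROM   mytable
    WHERE  objectId = :objectId
    AND    somevalue IN (SELECT column_value
                         FROM   TABLE(CAST(:listOfValues AS t_varchar_list)));
    ```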

  • Export (expdp) with where clause

    Hello Gurus,
    I am trying to export with a WHERE clause, and I am getting the error below.
    Here is my export command:
    expdp "'/ as sysdba'" tables=USER1.TABLE1 directory=DATA_PUMP dumpfile=TABLE1.dmp logfile=TABLE1.log query="USER1.TABLE1:where auditdate>'01-JAN-10'"
    Here is the error:
    [keeth]DB1 /oracle/data_15/db1> DATA_PUMP dumpfile=TABLE1.dmp logfile=TABLE1.log query= USER1.TABLE1:where auditdate>'01-JAN-10'                    <
    Export: Release 11.2.0.3.0 - Production on Tue Mar 26 03:03:26 2013
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "SYS"."SYS_EXPORT_TABLE_03":  "/******** AS SYSDBA" tables=USER1.TABLE1 directory=DATA_PUMP dumpfile=TABLE1.dmp logfile=TABLE1.log query= USER1.TABLE1:where auditdate
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 386 MB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/TRIGGER
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    ORA-31693: Table data object "USER1"."TABLE1" failed to load/unload and is being skipped due to error:
    ORA-00933: SQL command not properly ended
    Master table "SYS"."SYS_EXPORT_TABLE_03" successfully loaded/unloaded
    Dump file set for SYS.SYS_EXPORT_TABLE_03 is:
      /oracle/data_15/db1/TABLE1.dmp
    Job "SYS"."SYS_EXPORT_TABLE_03" completed with 1 error(s) at 03:03:58
    Version:
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production

    Hello,
    You should use a parameter file. Also, I can see you are using 11g; why not use Data Pump? Data Pump is faster and has more features and enhancements than the regular imp and exp.
    You can do the following:
    sqlplus / as sysdba
    Create directory DPUMP_DIR3 as '<type here the OS path that you want to export to>';
    then touch a file:
    touch par.txt
    In this file, type the following:
    tables=schema.table_name
    dumpfile=yourdump.dmp
    DIRECTORY=DPUMP_DIR3
    logfile=Your_logfile.log
    QUERY=abs.texp:"where hiredate>'01-JAN-13' "
    then do the following:
    expdp username/password parfile='par.txt'
    If you will import from Oracle 11g into version 10g, then you have to add the parameter "version=10" to the parameter file above.
    BR
    Mohamed ELAzab
    http://mohamedelazab.blogspot.com/

  • Outer Join with Where Clause in LTS

    HI all,
    I have a requirement like this in ANSI SQL:
    select p1.product_id, p1.product_name, p2.product_group
    from product p1 left outer join product_group p2 on p1.product_id = p2.product_id
    and p2.product_group = 'NEW'
    In Regular SQL:
    select p1.product_id, p1.product_name, p2.product_group
    from product p1, product_group p2
    WHERE p1.product_id *= p2.product_id and p2.product_group = 'NEW'
    In OBIEE, I am using a left outer join between these two in the Logical Table Source, and also gave
    p2.product_group = 'NEW' in the WHERE clause of the LTS.
    This doesn't seem to solve the purpose.
    Do you have any idea how to convert WHERE clause in physical query that OBIEE is generating to something like
    product p1 left outer join product_group p2 on p1.product_id = p2.product_id AND p2.product_group = 'NEW'
    I am using Version 10.1.3.4.1
    Creating an Opaque view would be my last option though.

    Hello
    I have read your post and the responses as well, and I understand that you have issues with the Outer Join with WHERE clause in the LTS.
    Try this solution, which worked for me (using your example):
    1. In the Physical Layer, create a Complex Join between the PRODUCT and PRODUCT_GROUP tables and use this join relationship:
    PRODUCT.PROD_ID = PRODUCT_GROUP.PROD_ID AND PRODUCT_GROUP.GROUP_NAME = 'MECHANICAL'
    2. In the General tab of the PRODUCT table's LTS, add the PRODUCT_GROUP table and select Join Type as Left Outer Join.
    3. Check consistency and make sure there are no errors.
    When you run a request, you should see the following query generated:
    select distinct T26908.PROD_ID as c1,
         T26908.PROD_NAME as c2,
         T26912.GROUP_NAME as c3
    from
         PRODUCT T26908 left outer join PRODUCT_GROUP T26912 On T26908.PROD_ID = T26912.PROD_ID and T26912.GROUP_NAME = 'MECHANICAL'
    order by c1, c2, c3
    Hope this works for you. If it does, please mark this response as 'Correct'.
    Good Luck.
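    The distinction the steps above rely on can be shown in plain SQL: a filter inside the outer join's ON clause keeps unmatched left-side rows, while the same filter in the WHERE clause silently turns the join into an inner join. A small sketch using SQLite from Python (table and column names borrowed from the example above; SQLite stands in for the Oracle source here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE product (prod_id INTEGER, prod_name TEXT);
CREATE TABLE product_group (prod_id INTEGER, group_name TEXT);
INSERT INTO product VALUES (1, 'Bolt'), (2, 'Gear');
INSERT INTO product_group VALUES (1, 'MECHANICAL'), (2, 'ELECTRICAL');
""")

# Filter inside the ON clause: unmatched left rows survive with NULLs
on_rows = con.execute("""
    SELECT p.prod_id, g.group_name
    FROM product p LEFT OUTER JOIN product_group g
      ON p.prod_id = g.prod_id AND g.group_name = 'MECHANICAL'
    ORDER BY p.prod_id
""").fetchall()

# Same filter in the WHERE clause: NULL fails the test, so the
# left outer join degenerates into an inner join
where_rows = con.execute("""
    SELECT p.prod_id, g.group_name
    FROM product p LEFT OUTER JOIN product_group g
      ON p.prod_id = g.prod_id
    WHERE g.group_name = 'MECHANICAL'
""").fetchall()

print(on_rows)     # [(1, 'MECHANICAL'), (2, None)]
print(where_rows)  # [(1, 'MECHANICAL')]
```

    This is why putting the condition in the LTS WHERE clause drops the unmatched products, and why moving it into the join relationship restores them.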

  • Select query with UNION clause in database adapter

    Friends,
    I have a SQL query with two SELECTs joined by a UNION clause within it, like:
    select a,b,c
    from a
    union
    select a,b,c
    from b
    The schema generated is like below in sequence
    <element>a</element>
    <element>b</element>
    <element>c</element>
    <element>a</element>
    <element>b</element>
    <element>c</element>
    So the columns from the different SELECT queries joined with the UNION clause all appear in the schema, instead of just the distinct columns.
    Is there any workaround for this issue, or will I need to use a DB function/procedure?

    I think I know what you are saying, but your example doesn't make sense; your SQL should produce something like the following.
    I had to replace a, b, c with elementA, elementB, elementC, as a and b are reserved HTML tags.
    <elementA>DataA</elementA>
    <elementB>DataB</elementB>
    <elementC>DataC</elementC>
    ...What is the result of the query when you run it in SQL*Plus? Is it what you expect?
    cheers
    James
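    One possible workaround (my own suggestion, not something from the adapter documentation) is to wrap the UNION in an inline view and alias each column exactly once, so the adapter's introspection sees a single column list rather than one per SELECT branch. Sketched with SQLite from Python standing in for the real database (table and alias names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE a (a, b, c);
CREATE TABLE b (a, b, c);
INSERT INTO a VALUES (1, 2, 3);
INSERT INTO b VALUES (4, 5, 6);
""")

# Wrapping the UNION in an inline view exposes one aliased column list
cur = con.execute("""
    SELECT u.a AS col_a, u.b AS col_b, u.c AS col_c
    FROM (SELECT a, b, c FROM a
          UNION
          SELECT a, b, c FROM b) u
    ORDER BY col_a
""")
names = [d[0] for d in cur.description]  # column names a client sees
rows = cur.fetchall()

print(names)  # ['col_a', 'col_b', 'col_c']
print(rows)   # [(1, 2, 3), (4, 5, 6)]
```

    If the adapter still duplicates the elements after this, a view or pipelined function on the database side would be the fallback.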

Maybe you are looking for

  • Decode statement in Select line of a View Object Query

    I attempted to create a view object in expert mode with a customized query. The query had a decode statement in the select line of the query. The view object compiled correctly but gave an error when run. ex: select .... decode(CrpSchools.SCHOOLS_ID,

  • Nano bluetooth won't connect to or find devices

    Hi, I have a new ipod nano that I can't connect to anything.  When searching for BT devices it doesn't ever find anything.  I used my tablet to pair the device so I could at least see the tablet in the BT list, but I'm not able to connect to it. I al

  • Requests cannot be released during the upgrade

    Hi guys, During an Upgrade that is currently on Preprocessing stage, a developer needs to release an urgent change. We are not able to proceed with the Upgrade due Networking issues related to IP Multicast (Kernel 740 I tried running tp unlocksys and

  • Nokia 3600 Screen went Black...!!! Please help...

    My Nokia 3600 Slide phone suddenly has a black, glowing screen. Whenever I switch it on, I hear all the sounds and the phone is also working fine with PC suite and OVI suite, but the display has turned Black...!!! Any ideas or Solutions???

  • Can't query view but have table privileges (not found in ALL_VIEWS)

    Hi guys, I have a strange (to me~) problem where I have a view querying 4 tables. the creator and another user can call the view but a third user cannot. If I query ALL_OBJECTS or ALL_VIEWS with this user the view is not listed. However, the user has