Parallel Statement Queueing

I know that parallel statement queueing is enabled when you use automatic degree of parallelism. I am trying to determine whether it is possible to use parallel statement queueing without auto DOP. My current environment does not use auto DOP, and I was looking to use queueing to solve a performance variance issue I am seeing. I will submit a request to support as well, but I wanted to see if anyone has experience with doing this, or if it is even possible. Thanks for your time.

According to the Oracle documentation, advanced parallel execution features such as automatic DOP, statement queuing, and in-memory parallel execution depend on the value of the PARALLEL_DEGREE_POLICY parameter. Statement queuing is only available if the parameter is set to AUTO.
Here is the document: http://docs.oracle.com/cd/E11882_01/server.112/e25523/parallel002.htm#CIHEFJGC
Since automatic DOP itself depends on many other factors to work (such as the DEFAULT degree set for the objects), it's theoretically possible to use queuing and still explicitly specify the DOP for the majority of objects, through either hints or the parallel degree property.
Hope it helps.
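A minimal sketch of that combination (assuming 11gR2; the sales table is invented for illustration): queueing is switched on by Auto DOP at the system level, while each statement still carries an explicit DOP via a hint, so the optimizer's computed degree is overridden but the statement remains subject to queueing.

```sql
-- Enables Auto DOP and, with it, parallel statement queueing (11gR2+).
ALTER SYSTEM SET parallel_degree_policy = AUTO;

-- The statement-level hint still fixes the DOP explicitly at 8;
-- the statement can nevertheless be queued when slaves are scarce.
SELECT /*+ PARALLEL(8) */ COUNT(*) FROM sales;
```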

Similar Messages

  • What is a Parallelized statement....

    In the "Restrictions on Functions" topic,
    it states: when called from a SELECT statement or a parallelized UPDATE or DELETE statement, the function cannot query or modify any database tables.
    What is meant by a parallelized UPDATE or DELETE statement?
    Please explain it with an example.
    Thanks
    Ankur R
    http://ankurraheja.tripod.com

    There is a section in the Application Developer's Guide on parallel query http://download-west.oracle.com/docs/cd/B10501_01/appdev.920/a96590/adg10pck.htm#20844.
    One way of creating a parallelized statement is to use the PARALLEL hint, though the link above goes into others.
    UPDATE /*+ PARALLEL */ <<myTable>>
       SET <<column name>> = <<value>>
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
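    A concrete, hedged illustration (table and column names are made up): the same hint applied to a DELETE makes it a parallelized DML statement, the kind the quoted restriction refers to. Note that parallel DML additionally has to be enabled at the session level.

    ```sql
    -- Parallel DML must be enabled per session before UPDATE/DELETE
    -- statements can actually execute in parallel.
    ALTER SESSION ENABLE PARALLEL DML;

    -- A parallelized DELETE: the work is split across parallel slaves.
    DELETE /*+ PARALLEL(orders, 4) */ FROM orders
    WHERE  order_date < DATE '2000-01-01';
    ```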

  • Proper using of index for parallel statement execution

    Hi all,
    I've created an index for my table:
    CREATE INDEX ZOO.rep184_med_arcdate ON ZOO.rep184_mediate(arcdate);
    That was before I started to think about parallel statement execution. As far as I've heard, I should alter my index for proper use with the parallel hint. Could you please suggest the way to go?

    marco wrote:
    Hi all,
    I've created an index for my table:
    CREATE INDEX ZOO.rep184_med_arcdate ON ZOO.rep184_mediate(arcdate);
    That was before I started to think about parallel statement execution. As far as I've heard, I should alter my index for proper use with the parallel hint. Could you please suggest the way to go?
    When all else fails, Read The Fine Manual:
    http://download.oracle.com/docs/cd/E11882_01/server.112/e17118/sql_elements006.htm#autoId63
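    For what it's worth, a plain B-tree index normally needs no rebuild to coexist with parallel execution; the degree can be set on the index, or the statement can simply be hinted. A sketch of both options (no guarantee the optimizer picks a parallel index access path for this particular query):

    ```sql
    -- Option 1: set a default degree on the index (used when the optimizer
    -- chooses a parallel-capable access path such as a fast full scan):
    ALTER INDEX ZOO.rep184_med_arcdate PARALLEL 4;

    -- Option 2: leave the index alone and hint the statement instead:
    SELECT /*+ PARALLEL(t, 4) */ MAX(arcdate)
    FROM   ZOO.rep184_mediate t;
    ```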

  • Why is my sql running parallel ?

    Hi,
    NOTE: first, I thought the SQL Developer tool was causing this and I opened a thread in that category. Thanks to @rp0428's warnings and advice, I realized that something else is happening, so I reopened this thread in this category. I also need an admin to delete the other one:
    https://forums.oracle.com/forums/thread.jspa?threadID=2420515&tstart=0
    thanks.
    so my problem is:
    I have a table partitioned by range (no subpartitions) on a DATE column, by month. It has almost 100 partitions. I run a query on that table based on the partition column:
      select *
      from   hareket_table
      where  islem_tar between to_date('01/05/2012', 'dd/mm/yyyy')
                           and to_date('14/07/2012', 'dd/mm/yyyy')  -- ISLEM_TAR is my partition column
    When I run this query from SQL Developer, it runs in parallel. I didn't just get the execution plan via the SQL Developer interface; first I used an "EXPLAIN PLAN FOR" statement (which I always do, I generally don't use developer tools' interfaces), then the developer interface (just to be sure). Still not satisfied, I then ran the query and got the real execution plan via:
    select * from table(dbms_xplan.display_cursor(sql_id => '7cm8cz0k1y0zc', cursor_child_no => 0, format => 'OUTLINE'));
    The same execution plan again, with PARALLELISM. Yet the indexes and the table have no parallelism (the DEGREE column in DBA_INDEXES and DBA_TABLES is set to 1).
    As far as I know (if I'm wrong, please correct me), there is no reason for Oracle to run this query in parallel (I also did not give any hint). So I got worried and ran the same steps in PL/SQL Developer, and the query ran noparallel (interface, EXPLAIN PLAN FOR, dbms_xplan.display_cursor). SQL*Plus autotrace was the same (just autotrace, didn't try dbms_xplan etc.). Based on that, I decided SQL Developer was causing this (edit: but I was wrong, TOAD did the same thing).
    So I focused on SQL Developer and disabled parallel query using:
    alter session disable parallel query;
    Then I ran the statement again and there was no parallelism (as expected).
    So I looked at the execution plans:
    I ran the query twice, once as normal and once with parallel query disabled in the session, and looked at the executed plan for both (child 0 and 1).
    -- WHEN PARALLEL QUERY IS ENABLE, SESSION DEFAULT
    -- JUST CONNECTED TO DATABASE
      select * from table(dbms_xplan.display_cursor('7cm8cz0k1y0zc', 0, 'OUTLINE'));
    | Id  | Operation            | Name          | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT     |               |       |       |  2025 (100)|          |       |       |        |      |            |
    |   1 |  PX COORDINATOR      |               |       |       |            |          |       |       |        |      |            |
    |   2 |   PX SEND QC (RANDOM)| :TQ10000      |  7910K|  1267M|  2025   (2)| 00:00:01 |       |       |  Q1,00 | P->S | QC (RAND)  |
    |   3 |    PX BLOCK ITERATOR |               |  7910K|  1267M|  2025   (2)| 00:00:01 |    90 |    92 |  Q1,00 | PCWC |            |
    |*  4 |     TABLE ACCESS FULL| HAREKET_TABLE |  7910K|  1267M|  2025   (2)| 00:00:01 |    90 |    92 |  Q1,00 | PCWP |            |
    Outline Data
      /*+
          BEGIN_OUTLINE_DATA
          IGNORE_OPTIM_EMBEDDED_HINTS
          OPTIMIZER_FEATURES_ENABLE('11.2.0.2')
          DB_VERSION('11.2.0.2')
          OPT_PARAM('query_rewrite_enabled' 'false')
          OPT_PARAM('optimizer_index_cost_adj' 30)
          OPT_PARAM('optimizer_index_caching' 50)
          OPT_PARAM('optimizer_dynamic_sampling' 6)
          ALL_ROWS
          OUTLINE_LEAF(@"SEL$1")
          FULL(@"SEL$1" "HAREKET_TABLE"@"SEL$1")
          END_OUTLINE_DATA
    Predicate Information (identified by operation id):
       4 - access(:Z>=:Z AND :Z<=:Z)
           filter(("ISLEM_TAR">=TO_DATE(' 2012-05-14 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "ISLEM_TAR"<=TO_DATE('
                  2012-07-14 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
    --WHEN DISABLED PARALLEL QUERY
    --AFTER CONNECTED, EXECUTED "ALTER SESSION DISABLE PARALLEL QUERY"
    select * from table(dbms_xplan.display_cursor('7cm8cz0k1y0zc', 1, 'OUTLINE'));
    | Id  | Operation                | Name          | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT         |               |       |       | 36504 (100)|          |       |       |
    |   1 |  PARTITION RANGE ITERATOR|               |  7910K|  1267M| 36504   (2)| 00:00:04 |    90 |    92 |
    |*  2 |   TABLE ACCESS FULL      | HAREKET_TABLE |  7910K|  1267M| 36504   (2)| 00:00:04 |    90 |    92 |
    Outline Data
      /*+
          BEGIN_OUTLINE_DATA
          IGNORE_OPTIM_EMBEDDED_HINTS
          OPTIMIZER_FEATURES_ENABLE('11.2.0.2')
          DB_VERSION('11.2.0.2')
          OPT_PARAM('query_rewrite_enabled' 'false')
          OPT_PARAM('optimizer_index_cost_adj' 30)
          OPT_PARAM('optimizer_index_caching' 50)
          ALL_ROWS
          OUTLINE_LEAF(@"SEL$1")
          FULL(@"SEL$1" "HAREKET_TABLE"@"SEL$1")
          END_OUTLINE_DATA
    Predicate Information (identified by operation id):
       2 - filter(("ISLEM_TAR">=TO_DATE(' 2012-05-14 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "ISLEM_TAR"<=TO_DATE(' 2012-07-14 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
    As you can see, when I've just connected to the database (no statements run yet), OPT_PARAM('optimizer_dynamic_sampling' 6) is in my outline.
    When I disable parallel query in the session, it is not there...
    The value of optimizer_dynamic_sampling is 2 in the DB. So why does this query run in parallel? I don't want that.
    Thanks for answers

    >
    NOTE: first, I thought sql developer tool is causing this and I opened a thread on that category, thanks to @rp0428's warnings and advises, I realized that something else is happening. so reopened this thread in that category, I also need an admin to delete other one:
    https://forums.oracle.com/forums/thread.jspa?threadID=2420515&tstart=0
    as you can see, when I've just connected to database (no any statements run) OPT_PARAM('optimizer_dynamic_sampling' 6) is in my stats.
    when I disable parallel query on the session, this is not in stats...
    value of optimizer_dynamic_sampling is 2 in DB. so why is that query runs parallel ? I don't want that.
    >
    I answered this question in that other thread, which is now gone. I pointed you to, and quoted from, a blog that tells you EXACTLY why that is happening. And I also gave you a link to an article by Oracle ACE and noted author Jonathan Lewis. You either didn't see the links or didn't read them.
    Maria Colgan is an Oracle developer and a member of the optimizer development team. She has many articles on the net that talk about the optimizer, how it works, and how to use it.
    The one I pointed you to, and quoted from, is titled 'Dynamic sampling and its impact on the Optimizer':
    https://blogs.oracle.com/optimizer/entry/dynamic_sampling_and_its_impact_on_the_optimizer
    >
    For serial SQL statements the dynamic sampling level will depend on the value of the OPTIMIZER_DYNAMIC_SAMPLING parameter and will not be triggered automatically by the optimizer. The reason for this is that serial statements are typically short running and any overhead at compile time could have a huge impact on their performance. Whereas we expect parallel statements to be more resource intensive, so the additional overhead at compile time is worth it to ensure we get the best execution plan.
    In our original example the SQL statement is serial, which is why we needed to manually set the value for the OPTIMIZER_DYNAMIC_SAMPLING parameter. If we were to issue a similar style of query against a larger table that had the parallel attribute set, we would see dynamic sampling kicking in.
    You should also note that setting OPTIMIZER_FEATURES_ENABLE to 9.2.0 or earlier will disable dynamic sampling altogether.
    When should you use dynamic sampling? DS is typically recommended when you know you are getting a bad execution plan due to complex predicates. However, you should try to use an alter session statement to set the value for the OPTIMIZER_DYNAMIC_SAMPLING parameter, as it can be extremely difficult to come up with a system-wide setting.
    When is it not a good idea to use dynamic sampling? If the queries' compile times need to be as fast as possible, for example, unrepeated OLTP queries where you can't amortize the additional cost of compilation over many executions.
    >
    If you read the article, and particularly the first two paragraphs above, Maria explains why dynamic sampling was used in your case. And for a table with as many partitions as yours, Oracle chose to use sampling level six (256 blocks, see the article); a level of two would only sample 64 blocks and you have 90+ partitions. Oracle needs a good sample of partitions.
    The Jonathan Lewis article is titled 'Dynamic Sampling'
    http://jonathanlewis.wordpress.com/2010/02/23/dynamic-sampling/
    This article can also shed light on sampling as he shows how it appears that sampling isn't being used and then shows that it actually is
    >
    We can see that we have statistics.
    We can see that we delete 9002 rows
    We can see that we have 998 rows left
    We can see that the plan (and especially the cardinality of the full tablescan) doesn’t change even though we included a table-level hint to do dynamic sampling.
    Moreoever – we can’t see the usual note that you get when the plan is dependent on a dynamic sample (” – dynamic sampling used for this statement”).
    It looks as if dynamic sampling hasn’t happened.
    However, I “know” that dynamic sampling is supposed to happen unconditionally when you use a table-level hint – so I’m not going to stop at this point. There are cases where you just have to move on from explain plan (or autotrace) and look at the 10053 trace.
    So the optimizer did do dynamic sampling, but then decided that it wasn’t going to use the results for this query.
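    If the goal is simply to stop this one query from running in parallel, a hedged sketch (the hint alias matches the query's table): either hint the statement serial, or disable parallel query for the session as the thread already demonstrated.

    ```sql
    -- Statement-level: force a serial plan for just this query.
    SELECT /*+ NO_PARALLEL(h) */ *
    FROM   hareket_table h
    WHERE  islem_tar BETWEEN TO_DATE('01/05/2012', 'dd/mm/yyyy')
                         AND TO_DATE('14/07/2012', 'dd/mm/yyyy');

    -- Session-level: no statement in this session will run in parallel.
    ALTER SESSION DISABLE PARALLEL QUERY;
    ```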

  • Oracle 9i and parallelism

    Can someone tell me if Oracle 9i on Linux supports multi-processors, partitioning, parallel statements...?
    Also, what is the best Linux/hardware combination on which 9i runs? Is there a place I can find benchmarks?
    Thanks

    Call Oracle Support;
    I think they will give you all the answers.

  • ORA-12827: insufficient parallel query slaves

    Hi All,
    We are hitting ORA-12827 after setting up parallelism-related parameters. It was a suggestion from Oracle, just to boost performance by setting up auto parallelism and a few other complementing parameters. We made the changes on two databases, but we are hitting this on only one of them, which is similar in hardware configuration (32 CPUs, gobs of RAM, sufficient I/O) but has a smaller SGA (7G versus 30G on the other database, which seems fine as of now). ORA-12827 clearly suggests that the server does not have sufficient parallel slaves, perhaps because they are already in use by other users. My question: what do you think is causing this issue, is it the number of users, memory, or something else? I know parallel execution does not scale with an increasing number of users. Also, since we are on 11gR2, we could go for "parallel statement queuing", which would not fail us when resources are completely exhausted, right? Do you think that would be the right approach, or would you go a different way? Thanks a lot for all of your help in the past.
    OS -- Enterprise Linux Server release 5.8 (Tikanga)
    DB --- 11.2.0.3.0
    ------------- Parameters suggested by Oracle -----------------------------
    change parallel_min_servers=8
    change parallel_max_servers=128
    change parallel_degree_limit=8
    change parallel_degree_policy=AUTO
    change parallel_min_percent=50
    ----------------------------------- Current SGA and Parallel Settings ---------------------
    NAME                                 TYPE                              VALUE
    fast_start_parallel_rollback         string                            LOW
    parallel_adaptive_multi_user         boolean                           TRUE
    parallel_automatic_tuning            boolean                           FALSE
    parallel_degree_limit                string                            8
    parallel_degree_policy               string                            AUTO
    parallel_execution_message_size      integer                           16384
    parallel_force_local                 boolean                           FALSE
    parallel_instance_group              string
    parallel_io_cap_enabled              boolean                           FALSE
    parallel_max_servers                 integer                           128
    parallel_min_percent                 integer                           50
    parallel_min_servers                 integer                           8
    parallel_min_time_threshold          string                            AUTO
    parallel_server                      boolean                           FALSE
    parallel_server_instances            integer                           1
    parallel_servers_target              integer                           128
    parallel_threads_per_cpu             integer                           2
    recovery_parallelism                 integer                           0
    SQL> show parameter sga_
    NAME                                 TYPE                              VALUE
    sga_max_size                         big integer                       7G
    sga_target                           big integer                       7G
    Regards

    >why posted this question then?
    Fran, it's not about the ORA-12827 only. I also asked what reason one can think of when comparing with the other database, which has more memory but is otherwise pretty much identical. Please go through my question once again and you will realize what the question actually was.
    Regards
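    One detail worth noting in the settings quoted above (an observation, not a definitive diagnosis): PARALLEL_MIN_PERCENT=50 is exactly what turns a slave shortage into ORA-12827 instead of a silent DOP downgrade. A sketch:

    ```sql
    -- With parallel_min_percent = 50, a statement requesting DOP 8 raises
    -- ORA-12827 unless at least 4 slaves can be allocated.
    -- The default value 0 lets the statement run (possibly downgraded)
    -- with whatever slaves are available instead of failing:
    ALTER SESSION SET parallel_min_percent = 0;
    ```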

  • Is parallel DML transactionally equivalent to non parellel DML?

    So I've got a whole bunch of insert into select statements where I have parallel hints on my inserts.
    if I have parent object A and child object B
    I fetch all the A's I want to move and use FORALL (with FETCH LIMIT, i.e. batches)
    to insert all the A's into the parent table. I also store the PKs of batch A so that I can use those in a join to identify the B's for this batch of A's.
    INSERT /*+ parallel(arch,4) */ INTO ARCHIVED_A arch
    SELECT *
    FROM   NONARCHIVED_A non_arch
    WHERE  EXISTS (
             SELECT 1
             FROM   ids i
             WHERE  non_arch.ot_id = i.ot_id);
    something like that.
    I am finding that this works fine when I don't use parallel DML.
    Whenever I use parallel DML, I end up with the A's archived but no B's.

    From: http://download.oracle.com/docs/cd/B28359_01/server.111/b28313/usingpe.htm#i1006876
    A session that is enabled for parallel DML may put transactions in the session in a special mode: If any DML statement in a transaction modifies a table in parallel, no subsequent serial or parallel query or DML statement can access the same table again in that transaction. This means that the results of parallel modifications cannot be seen during the transaction.
    Serial or parallel statements that attempt to access a table that has already been modified in parallel within the same transaction are rejected with an error message.
    So it's not quite "transactionally equivalent", in the sense that you cannot read the changes made by a parallel DML statement: you must commit them first.
    Could your code be losing the error message (that you will get if you try to read your changes) in some WHEN OTHERS handler?
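    A minimal sketch of that restriction (table names follow the thread; the error raised is ORA-12838):

    ```sql
    ALTER SESSION ENABLE PARALLEL DML;

    INSERT /*+ PARALLEL(arch,4) */ INTO archived_a arch
    SELECT * FROM nonarchived_a non_arch
    WHERE  EXISTS (SELECT 1 FROM ids i WHERE non_arch.ot_id = i.ot_id);

    -- Touching the same table again before COMMIT raises
    -- ORA-12838: cannot read/modify an object after modifying it in parallel
    SELECT COUNT(*) FROM archived_a;

    COMMIT;  -- after the commit, reads succeed again
    ```

    So if the B-inserts ran in the same transaction after a parallel insert touched a table they read, the error (possibly swallowed by an exception handler) would explain the missing B's.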

  • "Documents" folder is empty, all files and folders disappeared.

    Hello there.
    I was trying to install a software, and since it couldn't work, I was advised to remove different files.
    Since then, seems all my files and folders from "Documents" have all disappeared. They're not in the trash, they're not anywhere.
    I'm actually using MacKeeper to try finding them among deleted stuff. But so far, nothing has showed up.
    Before anyone asks, I did not change my folder user name. Can't be this.
    Thanks for your help. I'm in real ****.
    Friendly,
    Cyrille.

    No I don't have any backups.
    The software I am talking about is Parallels Desktop.
    I had to remove those files :
    rm -rfd /Users/~Cyberpen~/Library/Preferences/com.parallels*
    rm -rfd /Users/~You~/Library/Preferences/Parallels/*
    rm -rfd /Users/~You~/Library/Preferences/Parallels
    sudo rm -rfd /private/var/db/Parallels/Stats/*
    sudo rm -rfd /private/var/db/Parallels/Stats
    sudo rm -rfd /private/var/db/Parallels
    sudo rm -rfd /Library/Logs/parallels.log
    sudo rm -rfd /Library/Preferences/Parallels/*
    sudo rm -rfd /Library/Preferences/Parallels
    sudo rm -rfd /private/var/db/Parallels
    sudo rm -rfd /private/var/.Parallels_swap
    sudo rm -rfd /private/var/db/receipts/'com.parallels*'
    sudo rm -rfd /private/tmp/qtsingleapp-*-lockfile
    sudo rm -rfd /private/tmp/com.apple.installer*/*
    sudo rm -rfd /private/tmp/com.apple.installer*
    sudo rm -rfd /private/var/root/Library/Preferences/com.parallels.desktop.plist
    I did this manually (didn't work with Terminal) so I had to go deep into the macbook and sometimes change permissions to access those specific files (I was probably not supposed to touch).

  • Select in current mode

    Hi,
    is there a way to get rows by performing a select in current mode (not consistent mode, which implies using undo)?
    I need a fast way to retrieve rows as they are, and I don't care about read consistency.
    Is it possible?
    Regards

    >
    There are 4 sessions inserting into a table T. It is a bulk insert performed using INSERT INTO T SELECT ...
    Meanwhile I have to generate a report on the same table T.
    >
    What mechanism are you using to do a bulk insert? Are you really doing a bulk insert or are you just inserting a lot of data at once?
    The term 'bulk insert' generally refers to a direct-path load and has a very specific meaning and implications.
    There is no APPEND hint in the query you show and you make no mention of PARALLEL so there is no indication that you are using direct-path loads unless you are using sql*loader.
    See 'Enabling Direct-Path INSERT' in the DBA guide
    http://docs.oracle.com/cd/B28359_01/server.111/b28310/tables004.htm#ADMIN01509
    Is the report generation from a fifth session? Then, as damorgan mentioned, you need to consider the restrictions mentioned in the SQL Language doc in the 'Conventional and Direct-Path INSERT' section.
    http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_9014.htm
    Two of the ones that damorgan was alluding to are
    >
    Queries that access the same table, partition, or index are allowed before the direct-path INSERT statement, but not after it.
    If any serial or parallel statement attempts to access a table that has already been modified by a direct-path INSERT in the same transaction, then the database returns an error and rejects the statement.
    >
    Those are another indication that you are not really using bulk inserts (direct-path load).
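    To make that distinction concrete, a hedged sketch (generic table names): only the hinted INSERT ... SELECT form is a candidate for direct path, and the execution plan betrays which mode was actually used.

    ```sql
    -- Candidate for direct path: APPEND hint + subquery form.
    -- The plan shows LOAD AS SELECT when direct path is actually used.
    INSERT /*+ APPEND */ INTO target_t
    SELECT * FROM staging_t;

    -- A plain insert stays conventional; the plan shows
    -- LOAD TABLE CONVENTIONAL.
    INSERT INTO target_t
    SELECT * FROM staging_t;
    ```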

  • Why do these insert vary so much in performance?

    I have a table and a package similar to those shown in below DDL.
    Table TABLE1 is populated in chunks of 10000 records from a remote database, thru TABLE1_PKG
    by receiving arrays of data for three of its fields and a scalar value for a set identifier
    in column named NUMBER_COLUMN_3.
    I have two databases with following record count in the table:
         DATABASE1: 55862629
         DATABASE2: 64225247
    When I executed the procedure to move 50000 records to each of the two databases, it took 20 seconds to
    populate DATABASE1 and 150 seconds to populate DATABASE2.  The network was ruled out, as I recorded
    in the package how long each of the five 10000-record chunks took to insert in each of the two databases, as follows:
    Records Being Inserted  Time it took in DATABASE1     Time it took in DATABASE2
    First  10000             3 seconds                    27 seconds
    Second 10000             4 seconds                    26 seconds
    Third  10000             6 seconds                    40 seconds
    Fourth 10000             4 seconds                    31 seconds
    Fifth  10000             4 seconds                    26 seconds
    When I look at the explain plan in both databases I see following:
    | Id  | Operation                | Name | Cost  |
    |   0 | INSERT STATEMENT         |      |     1 |
    |   1 |  LOAD TABLE CONVENTIONAL |      |       |
    My questions:
         1) Does the explain plan indicate that direct load was not used?
         2) If the answer to 1 is yes, is it possible to use direct load or a faster insert method in this case?
         3) Any ideas what could be causing the 7.5-to-1 difference between the two databases?
    Please note that these two databases are non production so load is negligible.
    CREATE TABLE TABLE1
    (
      TABLE1_ID                VARCHAR2(255)   NOT NULL,
      NUMBER_COLUMN_1          NUMBER,
      NUMBER_COLUMN_2          NUMBER,
      NUMBER_COLUMN_3          NUMBER
    );
    ALTER TABLE TABLE1 ADD CONSTRAINT TABLE1_PK PRIMARY KEY (TABLE1_ID);
    CREATE INDEX NUMBER_COLUMN_3_IX ON TABLE1(NUMBER_COLUMN_3);
    CREATE OR REPLACE PACKAGE TABLE1_PKG IS
      TYPE VARCHAR2_ARRAY      IS TABLE OF VARCHAR2(4000);
      TYPE NUMBER_ARRAY        IS TABLE OF NUMBER;
      TYPE DATE_ARRAY          IS TABLE OF DATE;
      PROCEDURE Insert_Table1
      (
        Table1_Id_Array_In         TABLE1_PKG.VARCHAR2_ARRAY,
        Number_Column1_Array_In    TABLE1_PKG.NUMBER_ARRAY,
        Number_Column2_In          TABLE1_PKG.NUMBER_ARRAY,
        NUMBER_COLUMN_3_In         NUMBER
      );
    END;
    CREATE OR REPLACE PACKAGE BODY TABLE1_PKG IS
      PROCEDURE Insert_Table1
      (
        Table1_Id_Array_In         TABLE1_PKG.VARCHAR2_ARRAY,
        Number_Column1_Array_In    TABLE1_PKG.NUMBER_ARRAY,
        Number_Column2_In          TABLE1_PKG.NUMBER_ARRAY,
        NUMBER_COLUMN_3_In         NUMBER
      )
      IS
      BEGIN
        FORALL I IN 1..Table1_Id_Array_In.Count
          INSERT /*+ APPEND */ INTO TABLE1 (TABLE1_ID, NUMBER_COLUMN_1, NUMBER_COLUMN_2, NUMBER_COLUMN_3)
          VALUES (Table1_Id_Array_In(I), Number_Column1_Array_In(I), Number_Column2_In(I), NUMBER_COLUMN_3_In);
      END Insert_Table1;
    END;
    Thanks,
    Thomas

    I found the answer to why Direct Path is not used when I do an INSERT INTO TABLE1@SOMEDATABASE SELECT...:
      http://docs.oracle.com/cd/E11882_01/server.112/e41084/statements_9014.htm#i2163698
    Direct-path INSERT is subject to a number of restrictions. If any of these
    restrictions is violated, then Oracle Database executes conventional INSERT serially
    without returning any message, unless otherwise noted:
    - You can have multiple direct-path INSERT statements in a single transaction, with or without other DML statements.
    - Queries that access the same table, partition, or index are allowed before the direct-path INSERT statement, but not after it.
    - If any serial or parallel statement attempts to access a table that has already been modified by a direct-path INSERT in the same transaction, then the database returns an error and rejects the statement.
    - The target table cannot be part of a cluster.
    - The target table cannot contain object type columns.
    - Direct-path INSERT is not supported for an index-organized table (IOT) if it is not partitioned, if it has a mapping table, or if it is referenced by a materialized view.
    - Direct-path INSERT requires the APPEND hint with the subquery syntax, or the APPEND_VALUES hint with the VALUES clause of an INSERT statement.
    - The target table cannot have any triggers or referential integrity constraints defined on it.
    - The target table cannot be replicated.
    - A transaction containing a direct-path INSERT statement cannot be or become distributed.
    My table is being replicated, and I am trying it via a distributed transaction.
    I am still puzzled as to why it took 2 minutes and 44 seconds to insert 10000 rows in our production database, but that's something I'll investigate if time permits.  For now I've rewritten the process to use INSERT ... SELECT if the number of records in the batch is less than or equal to a configured number (currently set at 400000); otherwise it will move the data in chunks, for now using BULK COLLECT in the source to pass arrays of data and FORALL inserts in the target.  If time allows in the future, I will try to rewrite it to use chunking combined with INSERT ... SELECT.
    Thanks to all for your help,
    Thomas
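    As a side note (a sketch, not a fix for the replication and distributed-transaction blockers above): from 11gR2, FORALL ... VALUES inserts can request direct path with the APPEND_VALUES hint; the plain APPEND hint applies only to the INSERT ... SELECT form and is ignored in the VALUES form, as in the package body above.

    ```sql
    -- Direct-path candidate for the VALUES form (11gR2+); it still silently
    -- falls back to a conventional insert when any restriction applies,
    -- e.g. replication or a distributed transaction.
    FORALL i IN 1 .. table1_id_array_in.COUNT
      INSERT /*+ APPEND_VALUES */ INTO table1
             (table1_id, number_column_1, number_column_2, number_column_3)
      VALUES (table1_id_array_in(i), number_column1_array_in(i),
              number_column2_in(i), number_column_3_in);
    ```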

  • Forcing DIRECT PATH INSERT to go CONVENTIONAL.

    According to Oracle, to force a statement to avoid using DIRECT-PATH insert it must fall into the following:
    Direct-path INSERT is subject to a number of restrictions. If any of these restrictions is violated, then Oracle Database executes conventional INSERT serially without returning any message, unless otherwise noted:
        *     You can have multiple direct-path INSERT statements in a single transaction, with or without other DML statements. However, after one DML statement alters a particular table, partition, or index, no other DML statement in the transaction can access that table, partition, or index.
        *      Queries that access the same table, partition, or index are allowed before the direct-path INSERT statement, but not after it.
        *      If any serial or parallel statement attempts to access a table that has already been modified by a direct-path INSERT in the same transaction, then the database returns an error and rejects the statement.
        *      The target table cannot be part of a cluster.
        *      The target table cannot contain object type columns.
        *      Direct-path INSERT is not supported for an index-organized table (IOT) if it is not partitioned, if it has a mapping table, or if it is reference by a materialized view.
        *      Direct-path INSERT into a single partition of an index-organized table (IOT), or into a partitioned IOT with only one partition, will be done serially, even if the IOT was created in parallel mode or you specify the APPEND hint. However, direct-path INSERT operations into a partitioned IOT will honor parallel mode as long as the partition-extended name is not used and the IOT has more than one partition.
        *      The target table cannot have any triggers or referential integrity constraints defined on it.
        *      The target table cannot be replicated.
    *      A transaction containing a direct-path INSERT statement cannot be or become distributed.
    Are there any others that are not documented here? We have a vendor-based app and want to avoid DIRECT PATH INSERT and have it go CONVENTIONAL. We tried the TRIGGER approach, but that did not help at all.

    Why are you wanting to force conventional ?
    Are you sure the application uses direct path ?
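    If the SQL can be touched at all, the documented way to override APPEND is the NOAPPEND hint; and per the restriction list quoted above, an enabled referential integrity constraint on the target should also force conventional mode. A sketch (generic names, offered as possibilities rather than a verified fix for a vendor app):

    ```sql
    -- NOAPPEND overrides the APPEND hint and parallel mode for this insert:
    INSERT /*+ NOAPPEND */ INTO target_t SELECT * FROM source_t;

    -- Alternatively, an enabled foreign key constraint on the target
    -- disqualifies it from direct-path INSERT entirely:
    ALTER TABLE target_t ADD CONSTRAINT target_fk
      FOREIGN KEY (parent_id) REFERENCES parent_t (id);
    ```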

  • High level design options

    Hi, I'm trying to decide on the best way to design an application that will be responsible for the following:
    -Display and control of 5 temperature zones (analogue/digital)
    -Display and control of 4 Mass Flow Controllers (analogue)
    -Control of various serial devices
    -Logging and other standard application features
    I have no problem writing small VIs to control one of the temp zones or one of the MFCs, but putting it all together is proving to be harder than I expected. I am using LV 7.1 with PCI 6024E and PCI 6602 DAQ cards. I've looked at application examples like the 'Cookie factory', which have been very useful, but I'm worried about how to do all the I/O.
    I've noticed that if I write a separate VI to control each temp zone, they don't all seem to be able to access the hardware at the same time (not surprising), but does this mean that I'll have to do all my I/O at the same time and therefore in the same VI?
    I've also thought about using the DAQ OPC server in conjunction with the DSC module, which would allow my VIs to just read and write to tags as opposed to trying to read directly from the DAQ. Does the DAQ OPC server work with DAQmx? I haven't been able to see any of my DAQmx tags in the OPC server.
    Any help would be much appreciated. Thanks.

    I do not have any experience with the DSC stuff, so I will not comment on that. I had delayed responding, thinking that someone with more experience along those lines might make some suggestions. Anyway with the cautions out of the way, here goes my opinions.
    I like state machine architectures. In particular I have separate, parallel state machines for DAQ, GUI, and data processing. I use queues and functional globals to transfer data and commands among the state machines. In your case I might further subdivide the DAQ into a machine for each of the cards and another for the serial communications.
    For example if all the temperature zones (A, B, C, D, E) were monitored and controlled through the PCI 6024, I might have a loop which reads and writes to the device (PCI 6024). Inputs would be commands sent over a queue and would be of the form "Read Temperature " or "Set Temperature <150>" where the part in <> brackets is a parameter. The output (in separat...

  • Replace Snow Leopard Server OS on Mini Server with Snow Leopard non-Server?

    I have a Mid-2010 Mini Server which came preinstalled with Snow Leopard Server. I'm wondering if anyone has had experience with replacing the OS X Server software with non-Server OS X in order to run Parallels Desktop. If this worked, then one should be able to re-install the OS X Server software as a guest under Parallels. Parallels states that it supports OS X Server as a guest, but the real question is whether one can install non-Server OS X on a Mini which came with OS X Server preinstalled.

    Hi
    Your problem is going to be finding a Client OS that will actually boot and install on the MacMini:
    http://support.apple.com/kb/HT2186
    http://support.apple.com/kb/HT1159
    According to MacTracker, the build version of OS X Server 10.6.3 shipping with the MacMini Servers is 10D2235. It may work if you have a comparable client build that's fully updated to 10.6.4. One way of finding out is to boot an appropriately updated Mac in Target Disk Mode and connect it to the MacMini Server. If you can, select the system on that unit as the Startup Disk and see if it boots and works successfully with no kernel panics.
    Tony

  • Help! I need to go back!

    I recently installed OS X 10.6. I set up Boot Camp with Vista in hopes of getting rid of Parallels. Now, however, Parallels states it cannot work with 10.6. How do I go back? My 10.4 install disc won't work. Will restoring from Time Capsule work?

    Hi Derek,
    the Parallels DMG file contains an uninstaller for Parallels, but you will need to delete the virtual machine files yourself.
    Also, Parallels has an update that claims to be compatible with Snow Leopard 10.6: http://www.parallels.com/download/desktop/
    Regards
    Stefan

  • Macbook Pro doubt!

    Hello everyone, I recently bought this computer on the Apple Store:
    Macbook Pro 17"
    2.66GHz Intel Core i7
    8GB 1066MHz DDR3 SDRAM - 2X4GB
    128GB Solid State Drive
    MacBook Pro 17-inch Hi-Resolution Antiglare Widescreen display
    Apple LED Cinema Display (27" flat panel)
    My objective is to start university next year, and I'm required to use mostly Adobe programs, so I bought the CS5 Master Collection.
    I know that some students' computers kept crashing because they couldn't handle those heavy programs.
    I don't want that to happen so I would like to know if my purchase is enough to manage all those programs smoothly.
    Thanks a lot for further replies and sorry about my English =)
    ps. I will also be using Microsoft Office 2010 and Microsoft Project (I haven't found a Mac version for this last one yet).
    Message was edited by: Adriano Martins

    Given that you first stated "I will be also using Microsoft office 2010 and Microsoft Project" and then later asked "Are there any applications on the Mac that could just open a .mpp file in read only?", I will try to address the options you might have for both using and just reading MS Project (.mpp) files on your MBP.
    You could get Parallels Desktop for Mac (http://www.parallels.com/products/desktop/), install a virtual MS Windows machine, and then install MS Project. I think this would be a pretty cool setup. Go to the Parallels site; there is plenty of info / documentation there, plus a trial download. Given your 8GB RAM, high-end processor, and SSD, and if Parallels lives up to their claims, you would have one sweet, multi-OS, cross-platform, high-end computing environment with shared access to files. The thing I like about VMs is that if you don't need one, you can just get rid of it. They also allow you to set up multiple configurations of OS / apps, which can make for great multi-use and test environments. And, all the time, running nicely "in parallel" with the Mac (I am basing that on what Parallels states, not actual experience, but this would be one of the first apps I would choose for a new MBP, based on my use).
    There is also iTaskX ("the project management powerhouse for Macs" - http://www.itaskx.com/software/en/iTaskX2_info.asp). Essentially, this product is designed to be the MS Project replacement for Macs. Google for reviews: some say it is, others say it is not quite as robust. Either way, it still looks pretty handy, and you can view screenshots and a product video on the site. If you want something like this, the nice thing is that "iTaskX offers a high degree of compatibility, letting you easily exchange your documents over Industry-Standard formats like MS Project MPP (read), MS Project XML or MS Project MPX." The key point there is the "MPP (read)" you were looking for. Also, the current version is supposedly compatible with MS Project 2010. They also offer an evaluation license so you can try it out.
    Then, there is Steelray Project Viewer (http://www.steelray.com/spv_2010.html), which claims to be "The Only Viewer That Supports Microsoft Project 2010". Maybe they don't count iTaskX as just a project viewer, since it is also more than that, in which case, in lieu of further searching, the claim seems correct. They also have demos, tutorials, and a free trial.
    So, I am not sure exactly what you need / want, but there are three options to investigate. If you have the time, you could try them all out and see what works for you.
    To me, if they deliver as stated, they all seem reasonably priced for what they do. I don't currently use MS Project but did intensely at one time. The only advice I can give there is: use versioning and backups. I just recall how time-consuming it was to recreate a previous version of a project or start from scratch when something did not work right, and I doubt this has changed. Would really love to hear how any of these work out, in particular if you try Parallels Desktop for Mac.
    Hope this helps. Best of luck at university.
