Parallel query processing. (Max Processors)

Hi Guys,
We have a system running on SQL Server 2000, and it is configured to use all available processors.
(Server properties>Processor)
I am about to change it to use 1 processor as per the Microsoft recommendation, but there is another option when you select a fixed number of processors instead of all available processors, which is
"Minimum query plan threshold for considering queries for parallel execution(cost estimate)"
http://picasaweb.google.com/lh/photo/8N0-WQt3m23vEiupxdjtwQ?feat=directlink
Can anybody suggest how to estimate this number? Or could you post the current threshold value in your SQL Server 2000 environment? (I don't see this option in SQL Server 2005; probably it's handled automatically.)
Thanks Guys

Hi Fan Yu
SAP on Microsoft SQL Server
In the above document, on page 44 under the section "Max degree of parallelism", Microsoft recommends this value be set to 1.
(Reasons are discussed in the white paper)
Like I said, SQL Server 2000 has one more option related to max degree of parallelism, i.e. "Minimum query plan threshold for considering queries for parallel execution (cost estimate)".
http://picasaweb.google.com/lh/photo/8N0-WQt3m23vEiupxdjtwQ?feat=directlink
Probably I was not clear in my question: this is not about the number of processors and data files, but about the number of processors used by the query engine at a given time (max degree of parallelism).
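For reference, the setting in that screenshot corresponds to the sp_configure option 'cost threshold for parallelism' (default 5). Below is a minimal sketch of setting both options from Query Analyzer; the values simply follow the white paper's MAXDOP recommendation and the default threshold, so adjust to your workload. Once max degree of parallelism is 1 the threshold has little practical effect, since serial plans are produced anyway (unless a query overrides the setting with a MAXDOP hint).

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
-- estimated elapsed time (in seconds) of the serial plan above which a parallel plan is considered
EXEC sp_configure 'cost threshold for parallelism', 5;
RECONFIGURE WITH OVERRIDE;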

Similar Messages

  • Parallel Query processing in SQL server

    In a stored procedure we are processing nearly 10 update statements within a transaction. Because of synchronous query processing it is taking a long time to process all these queries. I need to execute all the queries with asynchronous processing; can anyone
    help me out with how to process the queries asynchronously?

    Hi,
    In most cases, the need for asynchronous statement processing is addressed using Service Broker.
    SQL Server Service Broker provides native support for messaging and queuing applications in the SQL Server Database Engine. Using Service Broker, internal or external processes can send and receive guaranteed, asynchronous messages by using extensions to Transact-SQL.
    SQL Server Service Broker
    http://msdn.microsoft.com/en-us/library/bb522893.aspx
    An Introduction to SQL Server Service Broker
    http://technet.microsoft.com/en-us/library/ms345108(v=sql.90).aspx
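    A minimal Service Broker sketch of this pattern (all object names and the message body are illustrative, not taken from your procedure); an activation procedure on the queue would then RECEIVE each message and run the corresponding update:

    CREATE MESSAGE TYPE UpdateRequest VALIDATION = WELL_FORMED_XML;
    CREATE CONTRACT UpdateContract (UpdateRequest SENT BY INITIATOR);
    CREATE QUEUE UpdateQueue;
    CREATE SERVICE UpdateService ON QUEUE UpdateQueue (UpdateContract);
    GO
    -- instead of running an update inline, queue a request and return immediately
    DECLARE @h UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE UpdateService
        TO SERVICE 'UpdateService'
        ON CONTRACT UpdateContract
        WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @h
        MESSAGE TYPE UpdateRequest (N'<update id="1" />');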

  • ORA-12801: error signaled in parallel query server P007

    Hi friends,
    I am running an update on a big table with a parallel clause.
    I got the error below. Can someone help with what I should do next? This query takes a long time to execute (nearly 1 hour).
    I searched on Google and Metalink but no success yet.
    =======
    ORA-12801: error signaled in parallel query server P007
    ORA-00001: unique constraint (VEL5APPO.BL1_CHARGE_PK) violated
    =======
    Cheers,
    Kunwar

    Kunwar wrote:
    (original post quoted above)
    Not sure why you think this is a complex, unusual, and difficult-to-understand error. It is very clear what is happening.
    WHAT: your code violates constraint VEL5APPO.BL1_CHARGE_PK that says that one or more columns must contain unique values.
    Who says it? That is specified by the first line in the error stack.
    WHO: parallel query process 7 ran into the constraint error
    So what's happening? Your SQL is executed in parallel, with each PX slave running the SAME SQL for DIFFERENT rowid ranges. PX slave P007 attempted an insert/update that would have resulted in a duplicate row. The database constraint protecting the integrity of that table prevented it.
    So the error is very clear as to what is wrong, why it is wrong, and where it is happening.
    I fail to understand how you searched Google and Metalink and failed to find answers, when the error is this meaningful.
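    If it helps, a short sketch for locating the offending rows; the table name is only inferred from the constraint name and the key columns are placeholders, so check the dictionary first:

    -- which columns does the constraint cover?
    SELECT column_name, position
    FROM   all_cons_columns
    WHERE  owner = 'VEL5APPO'
    AND    constraint_name = 'BL1_CHARGE_PK';

    -- then check whether the values your UPDATE assigns to those columns collide
    -- (new_key_expr is a placeholder for whatever your SET clause produces)
    SELECT new_key_expr, COUNT(*)
    FROM   vel5appo.bl1_charge      -- table name assumed from the constraint name
    GROUP  BY new_key_expr
    HAVING COUNT(*) > 1;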

  • Is parallel measuregroup processing supported in Business Intelligence (BI) edition?

    Hi all,
    I know that parallel measuregroup processing is not supported in Standard Edition of SQL Server Analysis Services. But what about Business Intelligence Edition?
    I can't find this in the
    feature comparison matrix for SQL Server 2014. I think the entry shown in the picture is for the relational engine only. I haven't found any other entry which would describe this feature.
    It is also not mentioned in
    Processing Options and Settings (Analysis Services).
    Thank you for your help.
    Best regards,
    Gerald

    Hi Gerald,
    According to your description, you need to know if parallel processing is supported in Business Intelligence Edition of SQL Server Analysis Service, right?
    In SSAS, the engine can process and query OLAP partitions in parallel, leading to potentially better performance and scalability. However, not every edition supports this feature. Parallel query processing on partitioned tables and indexes is only supported
    in Enterprise edition. Please refer to the link below.
    http://msdn.microsoft.com/en-us/library/cc645993.aspx#SSAS
    If you have any concerns about this feature, you can submit feedback at
    http://connect.microsoft.com/SQLServer/Feedback and hope it is included in the next service pack or product release. Your feedback enables Microsoft to make its software and services the best they can be, and Microsoft might consider adding this feature
    in a following release after official confirmation.
    Regards,
    Charlie Liao
    TechNet Community Support

  • Data Driven Subscriptions Error - the query processor could not start the necessary thread resources for parallel query execution

    Hi,
    We are getting the following error when certain data driven subscriptions are fired off: "the query processor could not start the necessary thread resources for parallel query execution".  I've read other posts that have the same error, and
    the solution usually involves adjusting MaxDOP to limit the number of queries that are fired off in parallel.  
    Unfortunately, we cannot change this setting on our server for performance reasons (outside of data driven subscriptions, it negatively impacts our ETL processing times).  We tried putting query hints like "OPTION (MAXDOP 2);" in the reports
    that are causing the error, but it did not resolve the problem.
    Are there any settings within Reporting Services that can be adjusted to limit the number of subscriptions that get fired off in parallel?
    Any help is appreciated - thanks!

    Yes, that is correct.  It's a painful problem, because you don't know which specific subscription failed. For example, we have a data driven subscription that sends out about 800 emails. Lately, we've been having a handful of them fail. You don't know
    which ones out of the 800 failed though, even from the RS log files - all it tells you is that "the
    query processor could not start the necessary thread resources for parallel query execution".
    Thanks, I'll try changing <MaxQueueThreads> and will let you know if it works.
    On a side note: I've noticed that it is only reports with cascading parameters (ex. where parameter 2 is dependent on the selection from parameter 1) that get this error message...

  • Apply process is aborting with:ORA-12801: error signaled in parallel query

    hi,
    We created a queue of a specific type, plus a capture and an apply process on the queue. Then we started the queue, the capture, and the apply process. The problem is that the apply process gets enabled and within moments goes into an aborted state. We are getting the following error:
    ORA-12801: error signaled in parallel query server P000
    ORA-00600: internal error code, arguments: [kwqiceval:anyconv], [], [], [], [], [], [], []
    Any idea what could have gone wrong?
    scripts:
    exec dbms_aqadm.create_queue_table(queue_table=>'qt_anc',queue_payload_type=>'type_anc',multiple_consumers=> true, compatible => '9.0');
    exec dbms_aqadm.create_queue (queue_name => 'q_anc',queue_table=>'qt_anc');
    EXEC DBMS_AQADM.START_QUEUE (queue_name => 'q_anc');
    DECLARE
    emp_rule_name_dml VARCHAR2(300);
    emp_rule_name_ddl VARCHAR2(300);
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'ops.t_anc',
    streams_type => 'capture',
    streams_name => 'capture_anc',
    queue_name => 'strmadmin.q_anc',
    include_dml => true,
    include_ddl => false,
    source_database => null,
    dml_rule_name => emp_rule_name_dml,
    ddl_rule_name => emp_rule_name_ddl);
    DBMS_APPLY_ADM.SET_ENQUEUE_DESTINATION(
    rule_name => emp_rule_name_dml,
    destination_queue_name => 'strmadmin.q_anc');
    END;
    DECLARE
    emp_rule_name_dml VARCHAR2(300);
    emp_rule_name_ddl VARCHAR2(300);
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'ops.t_anc',
    streams_type => 'apply',
    streams_name => 'apply_anc',
    queue_name => 'strmadmin.q_anc',
    include_dml => true,
    include_ddl => false,
    source_database => null,
    dml_rule_name => emp_rule_name_dml,
    ddl_rule_name => emp_rule_name_ddl);
    DBMS_APPLY_ADM.SET_ENQUEUE_DESTINATION(
    rule_name => emp_rule_name_dml,
    destination_queue_name => 'strmadmin.q_anc');
    END;
    BEGIN
    DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'apply_anc');
    END;
    BEGIN
    DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'capture_anc');
    END;
    /

    Hello
    The above configuration is not supported. Implicit capture expects the queue payload to be SYS.ANYDATA, and the same applies to implicit apply.
    However, you can use the Streams messaging capability to achieve this; you need to wrap the messages with SYS.ANYDATA for it to work. Implicit capture uses a persistent LogMiner session to generate LCRs, wraps them with SYS.ANYDATA, and enqueues them into the capture queue, from where they are propagated to the apply queue. You can generate LCRs yourself, wrap them with SYS.ANYDATA, and enqueue them into the capture queue, and apply will then recognise the messages.
    Here is an example on creating LCRs (tested in 10g):
    CREATE TABLE lcr_test (col1 NUMBER);
    DECLARE
         l_lcr SYS.LCR$_ROW_RECORD;
    BEGIN
         l_lcr := SYS.LCR$_ROW_RECORD.CONSTRUCT(
                       source_database_name => SYS_CONTEXT('USERENV','DB_NAME'),
                       command_type         => 'INSERT',
                       object_owner         => USER,
                       object_name          => 'LCR_TEST');
         l_lcr.ADD_COLUMN('new', 'col1', SYS.AnyData.ConvertNumber(99));
         l_lcr.EXECUTE(TRUE);
         COMMIT;
    END;
    /
    SELECT * FROM lcr_test;
    Converting to SYS.ANYDATA:
    DECLARE
         l_lcr     SYS.LCR$_ROW_RECORD;
         l_anydata SYS.ANYDATA;
    BEGIN
         l_lcr := SYS.LCR$_ROW_RECORD.CONSTRUCT(
                       source_database_name => SYS_CONTEXT('USERENV','DB_NAME'),
                       command_type         => 'INSERT',
                       object_owner         => USER,
                       object_name          => 'LCR_TEST');
         l_lcr.ADD_COLUMN('new', 'col1', SYS.AnyData.ConvertNumber(99));
         l_anydata := SYS.ANYDATA.ConvertObject(l_lcr);
         ENQ_PROC(l_anydata);
         COMMIT;
    END;
    /
    Thanks,
    Rijesh

  • ORA-12801 error signalled in parallel query with ORA-00018 max # session

    Hello,
    I've just run dbca to create a new database. On completion I received the following:
    ORA-12801: error signaled in parallel query server
    ORA-00018: maximum number of sessions exceeded
    ORA-06512: at SYS.UTL_RECOMP, line 760
    ORA-06512: at SYS.UTL_RECOMP, line 773
    ORA-06512: at SYS.UTL_RECOMP, line 1
    Processes is set to 100 and sessions is set to 115.
    Can you tell me how to address this error and resolve the problem? I'm thinking that I should rerun UTL_RECOMP.

    Hi,
    You didn't mention your Oracle version or OS.
    You might have hit a bug.
    Check the link below for reference:
    https://metalink2.oracle.com/metalink/plsql/f?p=130:14:8051847734518941130::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,21564.1,1,1,1,helvetica
    Regards,
    Taj
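    If it turns out not to be the bug, the straightforward fix is to raise the limits and recompile without parallel slaves. A sketch with illustrative values only (in 10g the default derivation is sessions = 1.1 * processes + 5):

    SHOW PARAMETER processes
    SHOW PARAMETER sessions

    ALTER SYSTEM SET processes = 300 SCOPE = SPFILE;
    ALTER SYSTEM SET sessions  = 335 SCOPE = SPFILE;
    -- restart the instance, then recompile invalid objects serially
    EXEC UTL_RECOMP.RECOMP_SERIAL();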

  • Parallel query feature

    Hi, to utilize the parallel query option in a 9i or 10g database, how should we modify the SQL statements or the procedures? Will it use the parallel query feature without giving the parallel query hint?
    with regards
    ramya

    When we have to handle a lot of data in operations, like full table scans of large tables, or the creation of large indices, we can divide the work. Instead of using a single process for one statement, the work can be spread to multiple processes. This is called parallel execution or parallel processing. Parallel execution normally reduces response time when you are accessing large amount of data, but it is not always faster than a serial operation.
    Parallel execution is useful for many types of operations like:
    * Queries requiring large table scans, joins, or partitioned index scans
    * Creation of large indices and large tables
    * Bulk inserts, updates and deletes
    * Aggregations
    We have the following hints to influence the use of parallel execution
    * PARALLEL
    * NOPARALLEL
    * PQ_DISTRIBUTE
    * PARALLEL_INDEX
    * NOPARALLEL_INDEX
    From the session where the query runs, type:
    select * from v$pq_sesstat;
    SELECT slave_name,status, cpu_secs_total
    FROM v$pq_slave;
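    A minimal sketch of the two usual ways to request parallel execution (table name and degree of 4 are illustrative):

    -- declare a default degree on the object ...
    ALTER TABLE big_table PARALLEL 4;

    -- ... or ask for it per statement with a hint
    SELECT /*+ PARALLEL(t, 4) */ COUNT(*)
    FROM   big_table t;

    -- revert to serial when done experimenting
    ALTER TABLE big_table NOPARALLEL;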
    Performance summary:
    It cannot be stressed strongly enough that parallel query is not always quicker than serial. Some queries are more suited to parallel operations than others. If you are using parallel execution, then you should endeavour to maximize I/O throughput to your disk. Ensure that:
    * you have enough slaves to retrieve the data efficiently
    * you do not have too many slaves (avoid maxing out the CPU)
    * you set realistic memory parameters (sort_area_size etc.) so that you do not max out memory and cause swapping
    * the data is spread over multiple disk spindles so that slaves do not contend for I/O resource
    * the type of query you are attempting to execute in parallel is suitable for parallelisation.
    * watch for non-uniform loading on slave processes as this could indicate data skew.
    http://www.jlcomp.demon.co.uk/pqcosts.html
    courtesy from oracle metalink.
    SJH
    OCP DBA

  • Problem with temp space allocation in parallel query

    Hello
    I've got a query which matches two large result sets (25m+ rows) against each other and does some basic filtering and aggregation. When I run this query in serial it takes about 30 mins and completes successfully. When I specify a parallel degree of 4 for each result set, it also completes successfully in about 20 minutes. However, when I specify that it should be run in parallel but don't specify a degree for each result set, it spawns 10 parallel servers and after a couple of minutes, bombs out from one of the parallel servers with:
    ORA-12801: error signaled in parallel query server P000
    ORA-01652: unable to extend temp segment by 64 in tablespace TEMP
    This appears to be when it is about to perform a large hash join. The execution plan does not change whether the parallel degree is specified or not, and there is several GB of temp space available.
    I'm at a bit of a loss as to how to track down specifically what is causing this problem. I've looked at v$sesstat for all of the sessions involved and it hasn't really turned anything up. I've tried tracing the main session and that hasn't really turned up much either. From what I can tell, one of the sessions seems to try to allocate massive amounts of temp space that it just does not need, but I can't figure out why.
    Any ideas of how to approach finding the cause of the problem?
    David

    Hello
    I've finally resolved this and the resolution was relatively simple - and was also the main thing that Mark Rittman said he did in his article: reduce the size of the hash join.
    After querying v$sql_workarea_active I could see what was happening, which was that the sum of the temp space for all of the parallel slaves was exceeding the total amount of temp space available on the system. When run in serial, it was virtually at the limit. I guess the extra was just the overhead of each slave maintaining its own hash table.
    I also made the mistake of misreading the execution plan, assuming that the data being pushed to the hash join was filtered to eliminate the data that was not of interest. Upon reflection, this was a rather stupid assumption on my part. Anyway, I used subquery factoring with the materialize hint to ensure that the hash join was only working on the data it should have been. This significantly reduced the size of the hash table and therefore the amount of temp space required.
    I did speak to oracle support and they suggested using pga_aggregate_target rather than the separate *area_size parameters.  I found that this had very little impact as the problem was related to the volume of data rather than whether it was being processed in memory or not.  That said, I did try upping the hash_area_size for the session with some initial success, but ultimately it didn't prove to be scalable.  We are however now using pga_aggregate_target in prod.
    So that's that. Problem sorted. And as the title of Mark Rittman's article suggests, I was trying to be too clever! :-)
    HTH
    David
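    For anyone hitting the same issue, a sketch of the two steps described above; the v$ columns are standard, but the table and column names in the rewrite are illustrative:

    -- watch how much temp each (parallel) work area is using while the query runs
    SELECT sid, operation_type, actual_mem_used, tempseg_size
    FROM   v$sql_workarea_active
    ORDER  BY tempseg_size DESC NULLS LAST;

    -- shape of the materialized subquery factoring that shrank the hash join input
    WITH filtered AS (
         SELECT /*+ MATERIALIZE */ key_col, amount
         FROM   big_source
         WHERE  of_interest = 'Y'
    )
    SELECT f.key_col, SUM(f.amount)
    FROM   filtered f
    JOIN   other_big o ON o.key_col = f.key_col
    GROUP  BY f.key_col;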

  • Partitioning For Optimal Parallel Query Execution

    Hi All,
    We are trying to design an architecture that benefits from partitioning and parallel query to obtain the best query response times for our system.
    Let me start by describing the main table which has five columns:
    Columns:
    1) DocId ------- Numeric Primary Key Constraint (Unique)
    2) Text ------- CLOB of XML Content
    3) SCode ------- Varchar 12 ( service code Can be one of 200 values)
    4) A_Date ------- Oracle Date ( The arrival date )
    5) A_Month ------- Numeric partition key, the month number (1-12)
    We insert 100,000 records daily so a month of data will contain 3,000,000 rows. The Text varies from 4k to 200k bytes and on average is around 30k bytes per document. A_Date is obtained when the C++ application receives a client connection. After document transmission is complete the DocId is obtained from an Oracle sequence. It is true that A_Date and DocId increase together. However because of varying document sizes and transmission rates, there is no guarantee that consistent order between the two columns exists.
    For Example: If time (t) increases and the connection times are: t1, t2, t3, t4 and the document at t2 took long to transmit, we can have:
    A_Date -------- DocId (From Oracle Sequence)
    t2 -------------- 1004
    t4 -------------- 1003
    t3 -------------- 1002
    t1 -------------- 1001
    A_Month is simply the month number (1-12) extracted from the transmission entry timestamp. It is also our partition key (see below). When the year wraps around from 2006 to 2007, data for January will simply fall into the 1st partition bucket, and so on.
    QUERY NEEDS: Our queries are centered around a DateTime interval Where the left endpoint is current day/time. They can query the current day, current to 1 month back, current to 2 months back, .. current to 15 months back. We MUST RETURN a list of DocId's sorted in DESCENDING ORDER for screen display purposes.
    In General we need to return sub-second for the 1st month. As we query further back in time longer response times between 1 and 4 seconds are acceptable.
    PARTITIONING AND INDEXES:
    The table is partitioned by A_Month as shown below:
    Create Table IndexTable
    PARTITION BY RANGE (A_Month)
    ( PARTITION p1 VALUES LESS THAN 1.1 ...
    PARTITION p2 VALUES LESS THAN 2.1 ...
    PARTITION p3 vALUES LESS THAN 3.1 ...
    There are GLOBAL INDEXES on DocId, Text(Domain Index), and SCode.
    A_Date is a LOCAL INDEX.
    QUERY STRUCTURE:
    My Query structure looks like this (for a 2 month query):
    SELECT DocId from IndexTable WHERE
    Contains (Text, 'BUSH and EARNINGS') > 0 AND
    SCode in ('S1', 'S2', 'S3', 'S4') AND
    A_Date Between '2006-01-15 11:00:00' AND '2006-03-15 11:00:00'
    Order By DocId DESC;
    QUESTIONS (THERE ARE THREE)
    #### QUESTION 1 ####
    As I examine various explain plans, the PSTART and PSTOP do not reflect consistency with my A_Date range. It seems to always display:
    PStart PStop
    RowId RowId
    no matter what my A_Date range is.. I don't see it pruning my partitions. I cannot find documentation as to what RowId means or how it affects the optimizer's plan.
    #### QUESTION 2 ####
    I have tried parallelization hints on the table and indexes such as
    /*+ PARALLEL( IndexTable, 4) */ and on the A_Date index
    /*+ PARALLEL_INDEX( IndexTable, A_Date_Index, 4) */.
    I can't really tell if the parallel hints make a difference.
    However, the FIRST_ROWS hint makes a big difference if I nest the query inside a rownum clause as:
    SELECT * from
    ( Select /*+ first_rows */ ... WHERE CONTAINS... > 0 AND... )
    Where ROWNUM <=20;
    #### QUESTION 3 ####
    I'm running close to the latest RedHat Linux and Oracle 10g2 and I have read about Parallel Slave Processes in Tom Kyte's Expert Oracle Architecture book in which he says you should see processes like:
    ora...p000..
    ora...p001..
    ora...pnnn..
    Which are the parallel slave processes. I have NEVER seen any oracle processes numbered as such running on my system when I run test queries. I have seen some q_ processes and others, but not the pnnn processes..
    Can I benefit from parallel query without ever seeing these processes??
    I Greatly Appreciate Any Advice To Any Of These Questions..
    Joe

    Well I can tell you this much. You will never get partition pruning if you don't have the partition key in the where clause.
    I'm not sure about the parallel query. There was a time that this wasn't supported with Text indexes, but not sure if that still applies today.
    In theory there are two levels of partitioning that can be defined that you may want to test out.
    1st, range partition the base table and create a partitioned text index based on that. This is what you are currently doing.
    2nd, in the storage preference for the $I table specify a range or hash partition (I've never tried this, but in theory it should work) on the token_text column. Try this out and see if it works.
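    On question 1, a sketch of what a pruning-friendly predicate could look like; the month list simply mirrors the A_Date range and would have to be generated by the application (dates and months here are illustrative):

    SELECT DocId
    FROM   IndexTable
    WHERE  CONTAINS(Text, 'BUSH and EARNINGS') > 0
    AND    SCode IN ('S1', 'S2', 'S3', 'S4')
    AND    A_Date BETWEEN TO_DATE('2006-01-15 11:00:00', 'YYYY-MM-DD HH24:MI:SS')
                      AND TO_DATE('2006-03-15 11:00:00', 'YYYY-MM-DD HH24:MI:SS')
    AND    A_Month IN (1, 2, 3)   -- partition key repeated so the optimizer can prune
    ORDER  BY DocId DESC;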

  • Error on Parallel Query

    Hi,
    My application throws the following error when I am processing a parallel query on my database. Can anyone help me resolve this?
    ERROR at line 1:
    ORA-29903: error in executing ODCIIndexFetch() routine
    ORA-20000: Oracle Text error:
    DRG-50857: oracle error in drekrgm (one piece lob read)
    ORA-01555: snapshot too old: rollback segment number with name "" too small
    ORA-22924: snapshot too old
    Regards
    Venkat

    Hi,
    Thanks for the article. I read the article but my problem is still not addressed. I have enough disk space and rollback segments:
    7 rollback segments (1 default of 200 MB + 6 created for my execution with 1 GB each). All of them are auto extended by 10 MB per block.
    Apart from this I have 25 GB of free space left in the drive. In spite of all of this, I am still getting this error.
    I am not able to understand the "drekrgm" part.
    Can you throw some light on this part?

  • Parallel  query in Oracle 11g

    We use an Oracle 11g database on Windows 2008 R2.
    We wrote a very long and heavy SELECT SQL query using several table joins and sub-queries, and it takes a very long time to get the result.
    We have done SQL statement tuning as much as we can so far. (I will get access rights for the execution plan, ADDM, etc.)
    Today I noticed the parallel query feature.
    Where should I write parallel hint phrases in a very long SELECT SQL query?
    In each selected table, like below?
    (Sample from the Oracle documentation:)
    SELECT /*+ PARALLEL(employees 4) PARALLEL(departments 4)
           USE_HASH(employees) ORDERED */ MAX(salary), AVG(salary)
    FROM employees, departments
    WHERE employees.department_id = departments.department_id
    GROUP BY employees.department_id;

    You need to be careful with some of the examples in the docs. The example you quote includes an unnecessary join. Before considering parallel query, it should have been re-written to this:
    SELECT MAX(salary), AVG(salary)
    FROM employees
    WHERE department_id is not null
    GROUP BY department_id;
    Later versions of the CBO may do this re-write for you, but it is important to understand why the SQL is inefficient before throwing parallel servers at it. Are you sure that all your joins are actually necessary? (I know you are just going to say "yes", but you might want to think about it first.)
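    On the original placement question: object-level hints go in a single comment right after the SELECT keyword, one PARALLEL entry per table alias; from 11g Release 2 onward a statement-level form also exists. A sketch with an illustrative degree of 4:

    SELECT /*+ PARALLEL(e, 4) PARALLEL(d, 4) */ MAX(e.salary), AVG(e.salary)
    FROM   employees e
    JOIN   departments d ON d.department_id = e.department_id
    GROUP  BY e.department_id;

    -- 11gR2 statement-level form: name no objects, fix only the degree
    SELECT /*+ PARALLEL(4) */ MAX(salary), AVG(salary)
    FROM   employees
    GROUP  BY department_id;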

  • Problem in running parallel query

    Hi All,
    We have an Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 installed on Microsoft Windows 2000 Advanced Server 5.0.2195 Service Pack 4 Build 2195.
    And we have a stored procedure which does lots of works.
    In this procedure, there is a select statement which joins a huge table with millions of records (let's call it table X) to some other almost-as-huge tables. This table X
    is partitioned and parallel processing has been enabled on it. Everything was fine until a week ago. Since then, we have been getting the following error in that procedure:
    ORA-12801: error signaled in parallel query server P014
    ORA-04030: out of process memory when trying to allocate 4194328 bytes (QERHJ hash-joi,QERHJ list array)
    We have made some changes in that select statement which are very common changes like adding some additional where clauses and selecting some additional columns. The From List of select statement has not been changed.
    I don't exactly know what is really going on.
    The available memory is about 2 GB when the mentioned procedure crashes.
    Any Idea?
    My Best,
    Alireza Vali

    Is it a 32bit installation (32bit Windows, 32bit Oracle) ?
    What are your MEMORY_TARGET, SGA_TARGET, SGA_MAX_SIZE and PGA_AGGREGATE_TARGET parameters set to ? (If you have MEMORY_TARGET set, it is not necessary that the other parameters be set).
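    A quick way to pull those settings (run as a suitably privileged user):

    SELECT name, value
    FROM   v$parameter
    WHERE  name IN ('memory_target', 'sga_target', 'sga_max_size', 'pga_aggregate_target');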
    ORA-4030 relates to PGA memory, not SGA memory. But Oracle on Windows is a single, multi-threaded process. Therefore, all Oracle "processes" (threads in the Windows process) share the same memory (address) space, even the SGA and PGA.
    Hemant K Chitale

  • Error signaled in parallel query server p005 DATE format comparison

    Hello All,
    I have a data like this.
    {code}
    j_id   s_id     b_id    lc   t_date                             my_val1     my_val2
    100    200    300     prs   2013-07-17 16:01:47         myval1     myval2
    100    200   300     prs    2013-07-17 16:01:47         myCval1   myCval2
    {code}
    When i am running a query like this
    {code}
    update my_tab b
            set my_col = 'X'
            where rowid <> ( select max(t.rowid) from my_tab t
                             where
                                    t.J_ID        = b.J_ID   and
                                    t.S_ID = b.S_ID and
                                    t.B_ID      = b.B_ID and
                                    t.LC   = b.LC
                                  and TO_TIMESTAMP(trim(t.t_DATE), 'YYYY-MM-DD HH24:MI:SS.FF')
                                   = TO_TIMESTAMP(trim(b.t_DATE), 'YYYY-MM-DD HH24:MI:SS.FF') );
    {code}
    I know I have a DATE column, but I am converting it to TIMESTAMP because my data is random and could contain the timestamp portion as well.
    My concern is that when I run the above update statement I get this error:
    {code}
    ORA-12801: error signaled in parallel query server P005
    ORA-01862: the numeric value does not match the length of the format item
    {code}
    But when I do like below, it runs fine. Not sure what I am missing here or what I am doing wrong. Please help.
    {code}
    select to_timestamp('2013-07-08 17:58:47', 'YYYY-MM-DD HH24:MI:SS.FF') from dual
    where
    to_timestamp('2013-07-08 17:58:47', 'YYYY-MM-DD HH24:MI:SS.FF') = to_timestamp('2013-07-08 17:58:47', 'YYYY-MM-DD HH24:MI:SS.FF')
    {code}
    Thanks!

    user10647455 wrote:
    (original post quoted above)
    If you have a date column, converting that to a timestamp isn't going to magically add more information to the date.
    Date data types hold time information (not to the fractional precision like timestamps, but up to the second) ... if you are having a problem seeing that information, it's likely because of your NLS_DATE_FORMAT setting (whatever client you are using to view the data isn't showing you all of the information, but it's still there).
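    A small sketch of the NLS_DATE_FORMAT point (format string is illustrative); once you can see that the DATE column already carries the time, the correlated condition can simply be t.t_date = b.t_date with no TO_TIMESTAMP at all:

    ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD HH24:MI:SS';

    -- the time portion the client was hiding is now visible
    SELECT t_date FROM my_tab WHERE ROWNUM <= 5;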
    So basically, this boils down to your code not "making sense" at the moment
    Cheers,

  • ORA-12801: error signaled in parallel query server P000

    Hello All,
    A week ago, one of the APPLY processes ABORTED with the following error:
    ORA-12801: error signaled in parallel query server P000
    ORA-04031: unable to allocate 104 bytes of shared memory ("streams pool","unknown object","apply shared t","knalfGetTxn:lcr")
    We are using ORACLE 10.2.0.4.0 on HP unix B.11.23
    For now, I have started the APPLY process again and it's working properly.
    When I looked into the trace file, it shows the following:
    A001: [enq: TM - contention] name|mode=544d0002, object #=2a67, table/partition=0
    *** 2009-06-15 10:53:57.897
    A001: warning -- apply server 1, sid 302 waiting on user sid 267 for event (since 302 seconds):
    A001: [enq: TM - contention] name|mode=544d0002, object #=2a67, table/partition=0
    *** 2009-06-15 10:58:58.792
    A001: warning -- apply server 1, sid 302 waiting on user sid 267 for event (since 603 seconds):
    A001: [enq: TM - contention] name|mode=544d0002, object #=2a67, table/partition=0
    *** 2009-06-15 12:14:36.679
    A001: [enq: TX - row lock contention] name|mode=54580004, usn<<16 | slot=90028, sequence=1b743
    *** 2009-06-15 12:19:36.961
    A001: warning -- apply server 1, sid 302 waiting on user sid 181 for event (since 300 seconds):
    A001: [enq: TX - row lock contention] name|mode=54580004, usn<<16 | slot=90028, sequence=1b743
    *** 2009-06-15 12:24:37.417
    A001: warning -- apply server 1, sid 302 waiting on user sid 181 for event (since 600 seconds):
    A001: [enq: TX - row lock contention] name|mode=54580004, usn<<16 | slot=90028, sequence=1b743
    *** 2009-06-15 12:29:37.906
    A001: warning -- apply server 1, sid 302 waiting on user sid 181 for event (since 901 seconds):
    A001: [enq: TX - row lock contention] name|mode=54580004, usn<<16 | slot=90028, sequence=1b743
    *** 2009-06-15 12:34:37.428
    A001: warning -- apply server 1, sid 302 waiting on user sid 181 for event (since 1201 seconds):
    A001: [enq: TX - row lock contention] name|mode=54580004, usn<<16 | slot=90028, sequence=1b743
    *** 2009-06-19 11:26:44.601
    A001: [enq: TX - row lock contention] name|mode=54580006, usn<<16 | slot=8001e, sequence=1a4af
    *** 2009-06-19 11:31:43.753
    A001: warning -- apply server 1, sid 302 waiting on user sid 212 for event (since 300 seconds):
    A001: [enq: TX - row lock contention] name|mode=54580006, usn<<16 | slot=8001e, sequence=1a4af
    *** 2009-06-19 11:36:44.149
    A001: warning -- apply server 1, sid 302 waiting on user sid 212 for event (since 600 seconds):
    A001: [enq: TX - row lock contention] name|mode=54580006, usn<<16 | slot=8001e, sequence=1a4af
    *** 2009-06-19 11:41:43.775
    A001: warning -- apply server 1, sid 302 waiting on user sid 212 for event (since 900 seconds):
    A001: [enq: TX - row lock contention] name|mode=54580006, usn<<16 | slot=8001e, sequence=1a4af
    *** 2009-06-23 16:55:24.002
    A001: [enq: TM - contention] name|mode=544d0004, object #=2c05, table/partition=0
    *** 2009-06-29 09:48:58.166
    A001: [enq: TM - contention] name|mode=544d0004, object #=2c05, table/partition=0
    *** 2009-07-01 06:02:37.236
    A001: [enq: TM - contention] name|mode=544d0004, object #=2c05, table/partition=0
    *** 2009-07-01 11:46:43.672
    error 12801 in STREAMS process
    ORA-12801: error signaled in parallel query server P000
    ORA-04031: unable to allocate 104 bytes of shared memory ("streams pool","unknown object","apply shared t","knalfGetTxn:lcr")
    OPIRIP: Uncaught error 447. Error stack:
    ORA-00447: fatal error in background process
    ORA-12801: error signaled in parallel query server P000
    ORA-04031: unable to allocate 104 bytes of shared memory ("streams pool","unknown object","apply shared t","knalfGetTxn:lcr")
    Can you please help provide details on how I can fix this problem permanently?
    Any suggestions would be great!
    Thanks,
    Nick

    It seems that you are using Oracle Database and not Berkeley DB.
    To fix ORA-04031 errors on the streams pool, you need to increase the streams pool size with:
    alter system set streams_pool_size = <the right value>;
    See the recommendations in http://download.oracle.com/docs/cd/B19306_01/server.102/b14229/strms_mprep.htm#i1006278
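    Two queries that can help size the pool before changing the parameter (a sketch; both views exist in 10gR2):

    -- how big is the streams pool right now?
    SELECT pool, name, bytes
    FROM   v$sgastat
    WHERE  pool = 'streams pool';

    -- estimated spill/unspill activity at other candidate sizes
    SELECT size_for_estimate, estd_spill_count, estd_unspill_count
    FROM   v$streams_pool_advice;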
