Parallel queries are suspending
A procedure starts two concurrent jobs. Each of them runs an identical query; only the partition name of the destination table and the bind variables defining the partitions of the source tables differ.
On schema A (test) the procedure completes in a few minutes; on schema B (business) the queries hang.
Any suggestions greatly appreciated.
Query generated by job:
BEGIN
MERGE /*+ APPEND USE_HASH(s d)*/ INTO D_SERVICE_FACT_BUF ::PART_DEF d
USING (
SELECT /*+ USE_NL( TSRF DSUB ) USE_NL( TSRF NTRR ) USE_NL( TSRF NBLP ) USE_NL( TSRF NTKG1 ) USE_NL( TSRF NTKG2 ) */
COL1,
COL2
FROM
<SCHEMA_NAME>.T_SERVICE_FACT TSRF,
<SCHEMA_NAME>.D_SUBSCRIPTION_ACT DSUB,
<SCHEMA_NAME>.N_TARIFF_RULE_ON_CALLS NTRR,
<SCHEMA_NAME>.N_BILLING_PROCEDURE NBLP,
<SCHEMA_NAME>.N_TRUNKGROUP NTKG1,
<SCHEMA_NAME>.N_TRUNKGROUP NTKG2,
<SCHEMA_NAME>.N_NUMZONE NNMZ
WHERE
1 = 1 AND
TSRF.SRC = DSUB.SRC AND
TSRF.SRC = NTRR.SRC(+) AND
TSRF.SRC = NBLP.SRC(+) AND
TSRF.SRC = NNMZ.SRC(+) AND
TSRF.SRF_RF_SUBSCRIPTION = DSUB.SUB_ID AND
TSRF.SRF_RF_TARIFF_RULE = NTRR.TRR_ID(+) AND
TSRF.SRF_DT_START BETWEEN DSUB.FD AND DSUB.TD AND
TSRF.SRF_DT_START BETWEEN NTRR.FD(+) AND NTRR.TD(+) AND
TSRF.SRF_DT_START BETWEEN NTKG1.FD(+) AND NTKG1.TD(+) AND
TSRF.SRF_DT_START BETWEEN NTKG2.FD(+) AND NTKG2.TD(+) AND
TSRF.SRF_DT_START BETWEEN NNMZ.FD(+) AND NNMZ.TD(+) AND
TSRF.SRF_RF_BILLING_PROC = NBLP.BLP_ID(+) AND
TSRF.SRF_RF_TRUNKGROUP_IN = NTKG1.TKG_ID(+) AND
TSRF.SRF_RF_TRUNKGROUP_OUT = NTKG2.TKG_ID(+) AND
TSRF.SRF_RF_NUMZONE = NNMZ.NMZ_ID(+) AND
:LDN = :LDN AND
:LDN_UP = :LDN_UP AND
TSRF.HASH_KEY = :SRF_HK AND
1 = 1 ) s
ON ( 1 = 0 )
WHEN MATCHED THEN UPDATE SET
d.src = 0
WHEN NOT MATCHED THEN INSERT
<DST_COLS_LIST>
VALUES
<RES_COL_LIST>
END;
Hi,
By "suspended", do you mean you eventually get an error (if so, which error?), or does the query simply hang and never return output? If it hangs, please post the wait event from V$SESSION_WAIT, or a session trace.
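For reference, a minimal sketch for checking what a hung session is currently waiting on (replace :sid with the suspect session's SID):

```sql
-- Current wait event for one session; STATE = 'WAITING' means it is
-- stuck on the event right now rather than burning CPU.
SELECT sid, event, state, seconds_in_wait
FROM   v$session_wait
WHERE  sid = :sid;
```

If the event is an enqueue wait, the session is blocked by a lock; events starting with 'PX Deq' relate to parallel execution message passing.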
Regards
Anurag Tibrewal
Similar Messages
-
Parallel queries are failing in 8 node RAC DB
While running queries with parallel hints, the queries are failing with
ORA-12805 parallel query server died unexpectedly
Upon checking the alert logs, I couldn't find anything about ORA-12805, but I did find this error. Please help me fix this problem:
Fatal NI connect error 12537, connecting to:
(LOCAL=NO)
VERSION INFORMATION:
TNS for Linux: Version 11.1.0.7.0 - Production
Oracle Bequeath NT Protocol Adapter for Linux: Version 11.1.0.7.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.1.0.7.0 - Production
Time: 15-MAY-2012 16:49:15
Tracing not turned on.
Tns error struct:
ns main err code: 12537
TNS-12537: TNS:connection closed
ns secondary err code: 12560
nt main err code: 0
nt secondary err code: 0
nt OS err code: 0
ORA-609 : opiodr aborting process unknown ospid (18807_47295439087424)
Tue May 15 16:49:16 2012
A couple of thoughts come immediately to mind:
1. When I read ... "Tracing not turned on" ... I wonder to myself ... why not turn on tracing?
2. When I read ... "Version 11.1.0.7.0" ... I wonder to myself ... why not apply all of the patches Oracle has created in the last 3 years and see if having a fully patched version addresses the issue?
3. When I read ... "parallel query server died" ... I wonder whether you have gone to support.oracle.com and looked up the causes and solutions for Parallel Query Server dying?
Of course I also wonder why you have an 8 node cluster as that is adding substantial complexity and which leads me to wonder ... "is it happening on only one node or all nodes?"
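One quick way to check that, sketched here assuming you can query the GV$ views while the failing workload is running:

```sql
-- Parallel execution servers per RAC instance and their status;
-- compare the counts across nodes to see whether slaves exist
-- (and die) on one node only or on all of them.
SELECT inst_id, status, COUNT(*) AS servers
FROM   gv$px_process
GROUP  BY inst_id, status
ORDER  BY inst_id, status;
```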
Hope this helps. -
How to detect sessions that are currently running parallel queries?
Hi everyone,
How do I detect sessions that are currently running parallel queries?
- The only way I can think of is querying PDML_STATUS from GV$SESSION.
- Is there a better way to do this?
Follow-up question:
After detecting sessions that are running parallel queries, how do I identify which sessions are slaves of which session?
thanks!
Start with V$PX_SESSION; also take a look at the other V$PQ_* and V$PX_* views.
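For example, a sketch mapping slaves to their coordinator (in V$PX_SESSION, QCSID and QCINST_ID identify the query coordinator session each slave belongs to):

```sql
-- List every PX session; rows where SID <> QCSID are slaves,
-- grouped under the coordinator identified by QCSID/QCINST_ID.
SELECT qcinst_id, qcsid, inst_id, sid, serial#,
       server_group, server_set
FROM   gv$px_session
ORDER  BY qcinst_id, qcsid, server_group, server_set;
```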
-
Gather_Plan_Statistics + DBMS_XPLAN A-rows for parallel queries
Looks like gather_plan_statistics + dbms_xplan displays incorrect A-rows for parallel queries. Is there any way to get the correct A-rows for a parallel query?
Version details:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit on HPUX
Create test tables:
-- Account table
create table test_tacprof
parallel (degree 2) as
select object_id ac_nr,
object_name ac_na
from all_objects;
alter table test_tacprof add constraint test_tacprof_pk primary key (ac_nr);
-- Account revenue table
create table test_taccrev
parallel (degree 2) as
select apf.ac_nr ac_nr,
fiv.r tm_prd,
apf.ac_nr * fiv.r ac_rev
from (select rownum r from all_objects where rownum <= 5) fiv,
test_tacprof apf;
alter table test_taccrev add constraint test_taccrev_pk primary key (ac_nr, tm_prd);
-- Table to hold query results
create table test_4accrev as
select apf.ac_nr, apf.ac_na, rev.tm_prd, rev.ac_rev
from test_taccrev rev,
test_tacprof apf
where 1=2;
Run query with parallel dml/query disabled:
ALTER SESSION DISABLE PARALLEL QUERY;
ALTER SESSION DISABLE PARALLEL DML;
INSERT INTO test_4accrev
SELECT /*+ gather_plan_statistics */
apf.ac_nr,
apf.ac_na,
rev.tm_prd,
rev.ac_rev
FROM test_taccrev rev, test_tacprof apf
WHERE apf.ac_nr = rev.ac_nr AND tm_prd = 4;
SELECT *
FROM TABLE (DBMS_XPLAN.display_cursor (NULL, NULL, 'ALLSTATS LAST'));
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Use
|* 1 | HASH JOIN | | 1 | 30442 | 23412 |00:00:00.27 | 772 | 1810K| 1380K| 2949K (0)|
| 2 | TABLE ACCESS FULL| TEST_TACPROF | 1 | 26050 | 23412 |00:00:00.01 | 258 | | |
|* 3 | TABLE ACCESS FULL| TEST_TACCREV | 1 | 30441 | 23412 |00:00:00.03 | 514 | | |
ROLLBACK ;
A-Rows are reported correctly with parallel execution disabled.
Run query with parallel dml/query enabled:
ALTER SESSION enable PARALLEL QUERY;
alter session enable parallel dml;
insert into test_4accrev
select /*+ gather_plan_statistics */ apf.ac_nr, apf.ac_na, rev.tm_prd, rev.ac_rev
from test_taccrev rev,
test_tacprof apf
where apf.ac_nr = rev.ac_nr
and tm_prd = 4;
select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-M
| 1 | PX COORDINATOR | | 1 | | 23412 |00:00:00.79 | 6 | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10001 | 0 | 30442 | 0 |00:00:00.01 | 0 | | | |
|* 3 | HASH JOIN | | 0 | 30442 | 0 |00:00:00.01 | 0 | 2825K| 1131K| |
| 4 | PX BLOCK ITERATOR | | 0 | 30441 | 0 |00:00:00.01 | 0 | | | |
|* 5 | TABLE ACCESS FULL | TEST_TACCREV | 0 | 30441 | 0 |00:00:00.01 | 0 | | |
| 6 | BUFFER SORT | | 0 | | 0 |00:00:00.01 | 0 | 73728 | 73728 | |
| 7 | PX RECEIVE | | 0 | 26050 | 0 |00:00:00.01 | 0 | | | |
| 8 | PX SEND BROADCAST | :TQ10000 | 0 | 26050 | 0 |00:00:00.01 | 0 | | | |
| 9 | PX BLOCK ITERATOR | | 0 | 26050 | 0 |00:00:00.01 | 0 | | | |
|* 10 | TABLE ACCESS FULL| TEST_TACPROF | 0 | 26050 | 0 |00:00:00.01 | 0 | | | |
rollback;
A-Rows are zero except for the final step.
I'm sorry for posting the following long test case.
But it's the most convenient way to explain something. :-)
Here is my test case, which is quite similar to yours.
Note on the difference between "parallel select" and "parallel dml(insert here)".
(I know that Oracle implemented psc(parallel single cursor) model in 10g, but the details of the implementation is quite in mystery as Jonathan said... )
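As an aside, one way to see actual per-slave row counts for a parallel query is V$PQ_TQSTAT, which is only populated in the same session, immediately after the parallel statement completes:

```sql
-- Rows and bytes that flowed through each table queue (TQ), broken
-- down by producer/consumer slave; query this right after the
-- parallel statement, in the same session, or it returns nothing.
SELECT dfo_number, tq_id, server_type, process, num_rows, bytes
FROM   v$pq_tqstat
ORDER  BY dfo_number, tq_id, server_type, process;
```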
SQL> select * from v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
SQL>
SQL> alter system flush shared_pool;
System altered.
SQL>
SQL> alter table t parallel 4;
Table altered.
SQL>
SQL> select /*+ gather_plan_statistics */ count(*) from t t1, t t2
2 where t1.c1 = t2.c1 and rownum <= 1000
3 order by t1.c2;
COUNT(*)
1000
SQL>
SQL> select sql_id from v$sqlarea
where sql_text like 'select /*+ gather_plan_statistics */ count(*) from t t1, t t2%';
SQL_ID
bx61bkyh9ffb6
SQL>
SQL> select * from table(dbms_xplan.display_cursor('&sql_id',null,'allstats last'));
Enter value for sql_id: bx61bkyh9ffb6
PLAN_TABLE_OUTPUT
SQL_ID bx61bkyh9ffb6, child number 0 <-- Coordinator and slaves shared the cursor
select /*+ gather_plan_statistics */ count(*) from t t1, t t2 where t1.c1 = t2.c
1 and rownum <= 1000 order by t1.c2
Plan hash value: 3015647771
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.62 | 6 | | | |
|* 2 | COUNT STOPKEY | | 1 | | 1000 |00:00:00.62 | 6 | | | |
| 3 | PX COORDINATOR | | 1 | | 1000 |00:00:00.50 | 6 | | | |
| 4 | PX SEND QC (RANDOM) | :TQ10002 | 0 | 16M| 0 |00:00:00.01 | 0 | | | |
|* 5 | COUNT STOPKEY | | 0 | | 0 |00:00:00.01 | 0 | | | |
|* 6 | HASH JOIN BUFFERED | | 0 | 16M| 0 |00:00:00.01 | 0 | 1285K| 1285K| 717K (0)|
| 7 | PX RECEIVE | | 0 | 10000 | 0 |00:00:00.01 | 0 | | | |
| 8 | PX SEND HASH | :TQ10000 | 0 | 10000 | 0 |00:00:00.01 | 0 | | | |
| 9 | PX BLOCK ITERATOR | | 0 | 10000 | 0 |00:00:00.01 | 0 | | | |
|* 10 | TABLE ACCESS FULL| T | 0 | 10000 | 0 |00:00:00.01 | 0 | | | |
| 11 | PX RECEIVE | | 0 | 10000 | 0 |00:00:00.01 | 0 | | | |
| 12 | PX SEND HASH | :TQ10001 | 0 | 10000 | 0 |00:00:00.01 | 0 | | | |
| 13 | PX BLOCK ITERATOR | | 0 | 10000 | 0 |00:00:00.01 | 0 | | | |
|* 14 | TABLE ACCESS FULL| T | 0 | 10000 | 0 |00:00:00.01 | 0 | | | |
38 rows selected.
SQL>
SQL> select sql_id, child_number, executions, px_servers_executions
2 from v$sql where sql_id = '&sql_id';
SQL_ID CHILD_NUMBER EXECUTIONS
PX_SERVERS_EXECUTIONS
bx61bkyh9ffb6 0 1
8
SQL>
SQL> insert /*+ gather_plan_statistics */ into t select * from t;
10000 rows created.
SQL>
SQL> select sql_id from v$sqlarea
where sql_text like 'insert /*+ gather_plan_statistics */ into t select * from t%';
SQL_ID
9dkmu9bdhg5h0
SQL>
SQL> select * from table(dbms_xplan.display_cursor('&sql_id', null, 'allstats last'));
Enter value for sql_id: 9dkmu9bdhg5h0
PLAN_TABLE_OUTPUT
SQL_ID 9dkmu9bdhg5h0, child number 0 <-- Coordinator Cursor
insert /*+ gather_plan_statistics */ into t select * from t
Plan hash value: 3050126167
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
| 1 | PX COORDINATOR | | 1 | | 10000 |00:00:00.20 | 3 |
| 2 | PX SEND QC (RANDOM)| :TQ10000 | 0 | 10000 | 0 |00:00:00.01 | 0 |
| 3 | PX BLOCK ITERATOR | | 0 | 10000 | 0 |00:00:00.01 | 0 |
|* 4 | TABLE ACCESS FULL| T | 0 | 10000 | 0 |00:00:00.01 | 0 |
SQL_ID 9dkmu9bdhg5h0, child number 1 <-- Slave(s)
insert /*+ gather_plan_statistics */ into t select * from t
PLAN_TABLE_OUTPUT
Plan hash value: 3050126167
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
| 1 | PX COORDINATOR | | 0 | | 0 |00:00:00.01 | 0 |
| 2 | PX SEND QC (RANDOM)| :TQ10000 | 0 | 10000 | 0 |00:00:00.01 | 0 |
| 3 | PX BLOCK ITERATOR | | 1 | 10000 | 2628 |00:00:00.20 | 16 |
|* 4 | TABLE ACCESS FULL| T | 4 | 10000 | 2628 |00:00:00.02 | 16 |
SQL>
SQL> select sql_id, child_number, executions, px_servers_executions
2 from v$sql where sql_id = '&sql_id'; <-- 2 child cursors here
SQL_ID CHILD_NUMBER EXECUTIONS
PX_SERVERS_EXECUTIONS
9dkmu9bdhg5h0 0 1
0
9dkmu9bdhg5h0 1 0
4
SQL>
SQL> set serveroutput on
-- check mismatch
SQL> exec print_table('select * from v$sql_shared_cursor where sql_id = ''&sql_id''');
Enter value for sql_id: 9dkmu9bdhg5h0
SQL_ID : 9dkmu9bdhg5h0
ADDRESS : 6AD85A70
CHILD_ADDRESS : 6BA596A8
CHILD_NUMBER : 0
UNBOUND_CURSOR : N
SQL_TYPE_MISMATCH : N
OPTIMIZER_MISMATCH : N
OUTLINE_MISMATCH : N
STATS_ROW_MISMATCH : N
LITERAL_MISMATCH : N
SEC_DEPTH_MISMATCH : N
EXPLAIN_PLAN_CURSOR : N
BUFFERED_DML_MISMATCH : N
PDML_ENV_MISMATCH : N
INST_DRTLD_MISMATCH : N
SLAVE_QC_MISMATCH : N
TYPECHECK_MISMATCH : N
AUTH_CHECK_MISMATCH : N
BIND_MISMATCH : N
DESCRIBE_MISMATCH : N
LANGUAGE_MISMATCH : N
TRANSLATION_MISMATCH : N
ROW_LEVEL_SEC_MISMATCH : N
INSUFF_PRIVS : N
INSUFF_PRIVS_REM : N
REMOTE_TRANS_MISMATCH : N
LOGMINER_SESSION_MISMATCH : N
INCOMP_LTRL_MISMATCH : N
OVERLAP_TIME_MISMATCH : N
SQL_REDIRECT_MISMATCH : N
MV_QUERY_GEN_MISMATCH : N
USER_BIND_PEEK_MISMATCH : N
TYPCHK_DEP_MISMATCH : N
NO_TRIGGER_MISMATCH : N
FLASHBACK_CURSOR : N
ANYDATA_TRANSFORMATION : N
INCOMPLETE_CURSOR : N
TOP_LEVEL_RPI_CURSOR : N
DIFFERENT_LONG_LENGTH : N
LOGICAL_STANDBY_APPLY : N
DIFF_CALL_DURN : N
BIND_UACS_DIFF : N
PLSQL_CMP_SWITCHS_DIFF : N
CURSOR_PARTS_MISMATCH : N
STB_OBJECT_MISMATCH : N
ROW_SHIP_MISMATCH : N
PQ_SLAVE_MISMATCH : N
TOP_LEVEL_DDL_MISMATCH : N
MULTI_PX_MISMATCH : N
BIND_PEEKED_PQ_MISMATCH : N
MV_REWRITE_MISMATCH : N
ROLL_INVALID_MISMATCH : N
OPTIMIZER_MODE_MISMATCH : N
PX_MISMATCH : N
MV_STALEOBJ_MISMATCH : N
FLASHBACK_TABLE_MISMATCH : N
LITREP_COMP_MISMATCH : N
SQL_ID : 9dkmu9bdhg5h0
ADDRESS : 6AD85A70
CHILD_ADDRESS : 6B10AA00
CHILD_NUMBER : 1
UNBOUND_CURSOR : N
SQL_TYPE_MISMATCH : N
OPTIMIZER_MISMATCH : N
OUTLINE_MISMATCH : N
STATS_ROW_MISMATCH : N
LITERAL_MISMATCH : N
SEC_DEPTH_MISMATCH : N
EXPLAIN_PLAN_CURSOR : N
BUFFERED_DML_MISMATCH : N
PDML_ENV_MISMATCH : N
INST_DRTLD_MISMATCH : N
SLAVE_QC_MISMATCH : N
TYPECHECK_MISMATCH : N
AUTH_CHECK_MISMATCH : N
BIND_MISMATCH : N
DESCRIBE_MISMATCH : N
LANGUAGE_MISMATCH : N
TRANSLATION_MISMATCH : N
ROW_LEVEL_SEC_MISMATCH : N
INSUFF_PRIVS : N
INSUFF_PRIVS_REM : N
REMOTE_TRANS_MISMATCH : N
LOGMINER_SESSION_MISMATCH : N
INCOMP_LTRL_MISMATCH : N
OVERLAP_TIME_MISMATCH : N
SQL_REDIRECT_MISMATCH : N
MV_QUERY_GEN_MISMATCH : N
USER_BIND_PEEK_MISMATCH : N
TYPCHK_DEP_MISMATCH : N
NO_TRIGGER_MISMATCH : N
FLASHBACK_CURSOR : N
ANYDATA_TRANSFORMATION : N
INCOMPLETE_CURSOR : N
TOP_LEVEL_RPI_CURSOR : N
DIFFERENT_LONG_LENGTH : N
LOGICAL_STANDBY_APPLY : N
DIFF_CALL_DURN : Y <-- Mismatch here. diff_call_durn
BIND_UACS_DIFF : N
PLSQL_CMP_SWITCHS_DIFF : N
CURSOR_PARTS_MISMATCH : N
STB_OBJECT_MISMATCH : N
ROW_SHIP_MISMATCH : N
PQ_SLAVE_MISMATCH : N
TOP_LEVEL_DDL_MISMATCH : N
MULTI_PX_MISMATCH : N
BIND_PEEKED_PQ_MISMATCH : N
MV_REWRITE_MISMATCH : N
ROLL_INVALID_MISMATCH : N
OPTIMIZER_MODE_MISMATCH : N
PX_MISMATCH : N
MV_STALEOBJ_MISMATCH : N
FLASHBACK_TABLE_MISMATCH : N
LITREP_COMP_MISMATCH : N
PL/SQL procedure successfully completed. -
Precision Changes when running parallel queries in Oracle?
I am trying to speed up our SQL queries and database draws by running parallel queries. However, we are noticing a slight difference in one of our queries. We have identified two possible reasons; one of them is that the parallel queries, for some reason, return results with either less or more precision than we were getting before.
Has this ever been reported before? Is it even possible? Thanks for any help.
"One of those reasons is parallel queries for some reason have either less or more precision than what we were doing."
What do you mean? Show us an example of that happening and exactly what you mean.
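For what it's worth, one plausible mechanism, sketched with a hypothetical table T holding a BINARY_DOUBLE column VAL: floating-point addition is not associative, and parallel execution does not guarantee the order in which partial sums are combined, so an aggregate over IEEE floats can come out slightly different between serial and parallel runs:

```sql
-- Hypothetical illustration: the serial and parallel sums may differ
-- in the last few bits because the slaves' partial sums are combined
-- in an unspecified order. NUMBER arithmetic is decimal and does not
-- normally show this effect.
SELECT /*+ NO_PARALLEL(t) */ SUM(val) AS serial_sum   FROM t;
SELECT /*+ PARALLEL(t, 4) */ SUM(val) AS parallel_sum FROM t;
```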
-
How many parallel queries?
We are planning to install Ultra Search on Linux SuSE 8.1 with:
Oracle web server: Apache
ca. 30 GB, gigabit ethernet,
3 GB RAM
search engine: Oracle Ultra Search
We have ca. 10,000 users who are interested in searching on our web server.
9iAS, the index database and the crawler will be installed on the same machine.
Is it possible to answer in general:
How many parallel queries does Ultra Search support?
Performance - Parallel Queries
For query performance reasons, I know we can execute a query sequentially or in parallel. But I want to increase the number of work processes used for query execution. How can I do that?
thanks
In table RSADMIN, parameter QUERY_MAX_WP_DIAG.
The maximum value can be changed to a value between 1 and 100 in the QUERY_MAX_WP_DIAG entry in table RSADMIN.
<b>Rationale -</b>
The actual degree to which queries are executed in parallel depends on the load on the system at any given time and lies between 1 (sequential processing) and the maximum value. If the number of sub-queries is greater than the maximum level of parallelism, all existing sub-queries are divided between the work processes determined by the degree of parallelism.
The results of all sub-queries are collected at a synchronization point and collated to form an overall result.
In sequential processing, the sub-queries are processed one after another. The interim result is immediately passed on to the analytic engine.
Hope it Helps
Chetan
@CP.. -
Fixed asset Write up Error. ABZU(Parallel depreciation area 32 is not poste
Hi Sap Gurus,
I am running into an error when trying to post a write up via "ABZU". Here is the error message "Parallel depreciation area 32 is not posted". Message no. AA565.
Diagnosis
The asset to be posted does not manage parallel depreciation area 32, or transaction type that you are using is limited to certain depreciation areas, and does not contain depreciation area 32. This is incorrect.
System Response
Posting is rejected.
Procedure
Check the asset and the transaction type.
I tried "Limit Transaction Types to Depreciation Areas" in t-code OAXE for depreciation areas 01 and 32, but I am still getting the same error. I am using transaction type 700. When I look into the depreciation tab in the asset master record, depreciation area 32 is there. Is there a configuration piece I am missing? We are using area 32 as IFRS in the group, which is a copy of 01.
Any help will be appreciated.
Thanks and Regards,
Babu
Hi,
There are the following reasons:
1) The wrong transaction type is being used. Please verify that you are using transaction type in the 700 series for your write-up.
2) The transaction type is restricted to certain depreciation areas. In transaction OAXE the transaction type can be restricted to post only to certain areas. If the asset manages a parallel area you have to extend the posting to include the parallel area.
3) The asset was created before you introduced parallel currencies and does not manage the parallel area. In this case, the parallel area must be created on the asset or the values need to be transferred to a new asset that manages the parallel area.
4) Transaction types 600 and 601 can only be used for assets that have a calculation key in their depreciation area that allows no automatic calculation (for example, depreciation key MANU). The unplanned transaction types, however, can be used with assets that allow automatic calculation.
You also need to verify that there are entries in Table ANLB for the asset in question for the depreciation area.
5) Your company code uses parallel currencies (currency types 10-co.code currency, 50-Index based currency, 30-Group currency...for example). The assets created in this company code must have the depreciation areas for parallel currencies active. You can only do this in activating the depreciation areas with parallel currencies for all the asset classes used in this company code.
Check transaction OAYZ to make sure the areas are activated for all asset classes in the co.code. If not, you will need to activate these areas and ensure the asset has all areas assigned.
regards Bernhard -
Why 3 queries are generated while running a report?
Hi All,
I'm in the SA production support team. Currently I'm curious to know why 3 queries are generated in the Siebel Analytics server log while I'm running a pivot report in Answers.
Can anyone explain it? Is there any issue with the RPD itself?
Thanks
Sudipta
Usually I'd say this is expected behaviour. You're probably seeing one query for the table view, one for grand totals (assuming they are switched on) and one for the pivot. Basically, when the BI server can't perform it all in one statement, you see multiple statements and a 'stitch' performed in memory. The idea is that it ships as much processing to the database as possible; usually this is more efficient than in-memory operations performed on the OBIEE server.
-
How to find out which queries are being used ?
We have a number of InfoSet Queries which the users are calling from SQ00. Some of them are very old.
I would like to find out which queries are being used, so we can have the idle queries decommissioned.
How can this be done ?
Best regards,
Peter
Hi,
It's been a while since I've done this, but one method is to use ST03N and look for the execution of programs with an AQ* prefix. The general structure is AQZZ/<query user group><query name>.
You could probably get the same info from the audit log but I've not checked that.
Cheers, -
Queries are not displaying in EP
HI,
when a USER (power user) tries to access BW queries via the EP, none of the queries are displayed. Other users are also experiencing the same issue.
I ran the web template in the browser and it shows nothing. I have only one web item in the template definition, i.e. "Role Menu Item".
Pls suggest what could be the problem.
Regards,
Sha
Hi Sha,
Try to launch a web template in the Portal from the Menu option in WAD. If it does not launch in the Portal, check the connections between BW and the Portal.
Regards
Tom. -
How to determine which queries are in which workbooks?
HI Folks.
I am attempting to determine whether there is a method by which I can find out which queries are used in which workbooks in an SAP BW system.
I found table RSRWBINDEXT, which gives me the workbook ID and name. However, I am not able to locate the table that gives me workbook ID and query ID.
Please let me know if you know this table.
Thanks
Uday
Hi,
In BEx... copy workbook technical name.
Afterwards, go to RSA1, open the Metadata Repository, click Workbook, and find your workbook.
If you click that workbook, it gives you the "receives information from" list of all its queries.
Thanks,
Mirza -
Spatial Queries are CPU bound and show very heavy use of query buffers
Hi,
Spatial Queries:
When using tkprof to analyse spatial queries, it is clear that
there are implicit queries being done by Oracle Spatial which
use vast amounts of buffers and seem unable to cache basic
information from query to query, resulting in our machine
being CPU bound when stress-testing Oracle Spatial. For example,
the excerpt below shows how information which is fixed for a
table, and not likely to change very often, is being retrieved
inefficiently (note the 26729 query buffers used for 6
executions of what should be immediately available!):
TKPROF: Release 8.1.7.0.0 - Production on Tue Oct 16 09:43:38
2001
(c) Copyright 2000 Oracle Corporation. All rights reserved.
SELECT ATTR_NO, ATTR_NAME, ATTR_TYPE_NAME, ATTR_TYPE_OWNER
FROM
ALL_TYPE_ATTRS WHERE OWNER = :1 AND TYPE_NAME = :2 ORDER BY
ATTR_NO
call count cpu elapsed disk query rows
Parse 6 0.00 0.01 0 0 0
Execute 6 0.00 0.01 0 0 0
Fetch 6 0.23 0.41 0 26729 5
total 18 0.23 0.43 0 26729 5
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 37 (NAGYE)
Rows Row Source Operation
0 SORT ORDER BY
0 FILTER
1 NESTED LOOPS
1 NESTED LOOPS
290 NESTED LOOPS
290 NESTED LOOPS
290 NESTED LOOPS
290 NESTED LOOPS
290 TABLE ACCESS FULL ATTRIBUTE$
578 TABLE ACCESS CLUSTER TYPE$
578 TABLE ACCESS CLUSTER TYPE$
578 INDEX UNIQUE SCAN (object id 255)
578 TABLE ACCESS BY INDEX ROWID OBJ$
578 INDEX RANGE SCAN (object id 35)
578 TABLE ACCESS CLUSTER USER$
578 INDEX UNIQUE SCAN (object id 11)
289 TABLE ACCESS BY INDEX ROWID OBJ$
578 INDEX RANGE SCAN (object id 35)
0 TABLE ACCESS CLUSTER USER$
0 INDEX UNIQUE SCAN (object id 11)
0 FIXED TABLE FULL X$KZSPR
0 NESTED LOOPS
0 FIXED TABLE FULL X$KZSRO
0 INDEX RANGE SCAN (object id 101)
error during parse of EXPLAIN PLAN statement
ORA-01039: insufficient privileges on underlying objects of the
view
and again:
SELECT diminfo, nvl(srid,0)
FROM
ALL_SDO_GEOM_METADATA WHERE OWNER = 'NAGYE' AND TABLE_NAME =
NLS_UPPER('TILE_MED_LINES_MBR') AND '"'||COLUMN_NAME||'"'
= '"GEOM"'
call count cpu elapsed disk query
current rows
Parse 20 0.00 0.04 0
0 0 0
Execute 20 0.00 0.00 0
0 0 0
Fetch 20 0.50 0.50 0 5960
100 20
total 60 0.50 0.54 0 5960
100 20
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 37 (NAGYE) (recursive depth: 1)
Rows Row Source Operation
1 FILTER
2 TABLE ACCESS BY INDEX ROWID SDO_GEOM_METADATA_TABLE
2 INDEX RANGE SCAN (object id 24672)
1 UNION-ALL
1 FILTER
1 NESTED LOOPS
1 NESTED LOOPS
1 NESTED LOOPS OUTER
1 NESTED LOOPS OUTER
1 NESTED LOOPS OUTER
1 NESTED LOOPS OUTER
1 NESTED LOOPS
1 TABLE ACCESS FULL OBJ$
1 TABLE ACCESS CLUSTER TAB$
1 INDEX UNIQUE SCAN (object id 3)
0 TABLE ACCESS BY INDEX ROWID OBJ$
1 INDEX UNIQUE SCAN (object id 33)
0 INDEX UNIQUE SCAN (object id 33)
0 TABLE ACCESS CLUSTER USER$
1 INDEX UNIQUE SCAN (object id 11)
1 TABLE ACCESS CLUSTER SEG$
1 INDEX UNIQUE SCAN (object id 9)
1 TABLE ACCESS CLUSTER TS$
1 INDEX UNIQUE SCAN (object id 7)
1 TABLE ACCESS CLUSTER USER$
1 INDEX UNIQUE SCAN (object id 11)
0 FILTER
0 NESTED LOOPS
0 NESTED LOOPS OUTER
0 NESTED LOOPS
0 TABLE ACCESS FULL USER$
0 TABLE ACCESS BY INDEX ROWID OBJ$
0 INDEX RANGE SCAN (object id 34)
0 INDEX UNIQUE SCAN (object id 97)
0 INDEX UNIQUE SCAN (object id 96)
0 FIXED TABLE FULL X$KZSPR
0 NESTED LOOPS
0 FIXED TABLE FULL X$KZSRO
0 INDEX RANGE SCAN (object id 101)
0 FIXED TABLE FULL X$KZSPR
0 NESTED LOOPS
0 FIXED TABLE FULL X$KZSRO
0 INDEX RANGE SCAN (object id 101)
error during parse of EXPLAIN PLAN statement
ORA-01039: insufficient privileges on underlying objects of the
view
Note: The actual query being performed is:
select a.id, a.geom
from
tile_med_lines_mbr a where sdo_relate(a.geom,mdsys.sdo_geometry
(2003,NULL,
NULL,mdsys.sdo_elem_info_array
(1,1003,3),mdsys.sdo_ordinate_array(151.21121,
-33.86325,151.21132,-33.863136)), 'mask=anyinteract
querytype=WINDOW') =
'TRUE'
call count cpu elapsed disk query
current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.08 0.08 0 4 0 0
Fetch 5 1.62 21.70 0 56 0 827
total 7 1.70 21.78 0 60 0 827
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 37 (NAGYE)
Rows Row Source Operation
827 TABLE ACCESS BY INDEX ROWID TILE_MED_LINES_MBR
828 DOMAIN INDEX
Rows Execution Plan
0 SELECT STATEMENT GOAL: CHOOSE
827 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'TILE_MED_LINES_MBR'
828 DOMAIN INDEX OF 'TILE_MLINES_SPIND'
CPU: none, I/O: none
call count cpu elapsed disk query
current rows
Parse 1 0.00 0.00 0 92
Execute 1 0.00 0.00 0 22
Fetch 1 0.00 0.00 38 236
total 3 0.00 0.00 38 350
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 37 (NAGYE)
Rows Row Source Operation
12 TABLE ACCESS BY INDEX ROWID ROADELEMENT_MBR
178 DOMAIN INDEX
Rows Execution Plan
0 SELECT STATEMENT GOAL: CHOOSE
12 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'ROADELEMENT_MBR'
178 DOMAIN INDEX OF 'RE_MBR_SPIND'
CPU: none, I/O: none
Can Oracle improve the performance of Oracle Spatial by changing
the implementation so that these implicit queries do not use such
vast amounts of memory?
Cheers
Alex Eadie
Hi Ravi,
Thankyou for your reply.
Here are some more details for you:
Yes, the queries are cached in the sense that they get their data
from RAM and not from disk; however, the number of buffers used
internally by the Oracle RDBMS/Spatial is rather large (namely
>5000 per query, or >40 MByte) and results in significant CPU
usage, which I'm sure you'd agree. Those numerous internal queries
take >10ms CPU time each, which is cumulative.
A single real query of ours takes between 180ms and 580ms,
depending on the number of results returned.
An example query is:
select a.id, a.geom
from tile_med_lines_mbr a where sdo_relate
(a.geom,mdsys.sdo_geometry
(2003,NULL, NULL,mdsys.sdo_elem_info_array
(1,1003,3),mdsys.sdo_ordinate_array(151.21121,
-33.86325,151.21132,-33.863136)), 'mask=anyinteract
querytype=WINDOW') = 'TRUE'
Our 500Mhz PC Server database can only execute 3 processes
running these queries simultaneously to go to 100% CPU loaded.
The disk is hardly utilized.
The data is the main roads in Sydney, Australia.
The tables, data and indexes were created as shown below:
1. Create the Oracle tables:
create table tile_med_nodes_mbr (
id number not null,
geom mdsys.sdo_geometry not null,
xl number not null,
yl number not null,
xh number not null,
yh number not null);
create table tile_med_lines_mbr (
id number not null,
fromid number not null,
toid number not null,
geom mdsys.sdo_geometry not null,
xl number not null,
yl number not null,
xh number not null,
yh number not null);
2. Use the sqlldr Oracle loader utility to load the data
into Oracle.
% sqlldr userid=csiro_scats/demo control=nodes.ctl
% sqlldr userid=csiro_scats/demo control=lines.ctl
3. Determine the covering spatial extent for the tile
mosaic and use this to create the geometry metadata.
% sqlplus
SQLPLUS> set numw 12
SQLPLUS> select min(xl), min(yl), max(xh), max(yh)
from (select xl, yl, xh, yh
from tile_med_nodes_mbr union
select xl, yl, xh, yh
from tile_med_lines_mbr);
insert into USER_SDO_GEOM_METADATA
(TABLE_NAME, COLUMN_NAME, DIMINFO)
VALUES ('TILE_MED_NODES_MBR', 'GEOM',
MDSYS.SDO_DIM_ARRAY
(MDSYS.SDO_DIM_ELEMENT('X', 151.21093421,
151.21205421, 0.000000050),
MDSYS.SDO_DIM_ELEMENT('Y', -33.86347146,
-33.86234146, 0.000000050)));
insert into USER_SDO_GEOM_METADATA
(TABLE_NAME, COLUMN_NAME, DIMINFO)
VALUES ('TILE_MED_LINES_MBR', 'GEOM',
MDSYS.SDO_DIM_ARRAY
(MDSYS.SDO_DIM_ELEMENT('X', 151.21093421,
151.21205421, 0.000000050),
MDSYS.SDO_DIM_ELEMENT('Y', -33.86347146,
-33.86234146, 0.000000050)));
4. Validate the data loaded:
create table result
(UNIQ_ID number, result varchar2(10));
execute sdo_geom.validate_layer
('TILE_MED_NODES_MBR','GEOM','ID','RESULT');
select result, count(result)
from RESULT
group by result;
truncate table result;
execute sdo_geom.validate_layer
('TILE_MED_LINES_MBR','GEOM','ID','RESULT');
select result, count(result)
from RESULT
group by result;
drop table result;
5. Fix any problems reported in the result table.
6. Create a spatial index, use the spatial index advisor to
determine the sdo_level.
create index tile_mlines_spind on
tile_med_lines_mbr (geom) indextype is
mdsys.spatial_index parameters
( 'sdo_level=7,initial=1M,next=1M,pctincrease=0');
7. Analyse table:
analyze table TILE_MED_LINES_MBR compute statistics;
8. Find the spatial index table name:
select sdo_index_table, sdo_column_name
from user_sdo_index_metadata
where sdo_index_name in
(select index_name
from user_indexes
where ityp_name = 'SPATIAL_INDEX'
and table_name = 'TILE_MED_LINES_MBR');
9. Analyse spatial index table:
analyze table TILE_MLINES_SPIND_FL7$
compute statistics;
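On later releases, DBMS_STATS is the preferred replacement for ANALYZE when gathering optimizer statistics; a hedged equivalent of steps 7 and 9 (same table names as above):

```sql
-- Gather statistics on the data table and on the spatial index
-- table found in step 8, via DBMS_STATS instead of ANALYZE.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'TILE_MED_LINES_MBR');
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'TILE_MLINES_SPIND_FL7$');
END;
/
```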
I hope this helps.
Cheers
Alex Eadie -
Regarding parallel queries in ABAP same as in oracle 10g
Hi,
Is there any way we can write parallel queries in ABAP, the same way we do in Oracle 10g? Kindly see below:
alter table emp parallel (degree 4);
select degree from user_tables where table_name = 'EMP';
select count(*) from emp;
alter table emp noparallel;
SELECT /*+ PARALLEL(emp,4) */ COUNT(*)
FROM emp;
The idea here is to distribute the load of select query in multiple CPUs for load balancing & performance improvement.
Kindly advise.
Thanks:
Gaurav
Hi,
> Is there any way we can write parallel queries in ABAP, in the same way we do in oracle 10g.
sure. Since it is just a hint...
SELECT *
FROM t100 INTO TABLE it100
%_HINTS ORACLE 'PARALLEL(T100,4)'.
will give you such an execution plan for example:
SELECT STATEMENT ( Estimated Costs = 651 , Estimated #Rows = 924.308 )
4 PX COORDINATOR
3 PX SEND QC (RANDOM) :TQ10000
( Estim. Costs = 651 , Estim. #Rows = 924.308 )
Estim. CPU-Costs = 33.377.789 Estim. IO-Costs = 646
2 PX BLOCK ITERATOR
( Estim. Costs = 651 , Estim. #Rows = 924.308 )
Estim. CPU-Costs = 33.377.789 Estim. IO-Costs = 646
1 TABLE ACCESS FULL T100
( Estim. Costs = 651 , Estim. #Rows = 924.308 )
Estim. CPU-Costs = 33.377.789 Estim. IO-Costs = 646
PX = Parallel eXecution...
But be sure that you know what you are doing with the parallel execution option... it is not scalable.
Kind regards,
Hermann -
Parallel depreciation area 04 is not posted.(Assets)
Hi All,
I found the below-mentioned error at the time of advance knocking-off (F-54):
Parallel depreciation area 04 is not posted.
The error message number is AA565.
The asset to be posted does not manage parallel depreciation area 04, or the transaction type that you are using is limited to certain depreciation areas and does not contain depreciation area 04. This is incorrect.
Please help out with this issue.
Thanks and Regards,
Ram
Hi Ram,
With regards to the issue, if you have activated the use of parallel currencies, all assets created in this company code must have the depreciation areas for parallel currencies active. You can only do this by activating the depreciation areas with parallel currencies for all the asset classes used in this company code. Check transaction OAYZ to make sure the areas are activated for all asset classes in the co.code . If not, you will need to activate these areas and ensure the asset has
all areas assigned.
Another reason why the error message appeared could be that the asset was created before you introduced parallel currencies and does not manage the parallel area. In this case, the parallel area must be created on the asset, or the values need to be transferred to a new asset that manages the parallel area.
I hope this information helps.
Best regards
George