Regarding parallel queries in ABAP same as in oracle 10g
Hi,
Is there any way we can write parallel queries in ABAP, in the same way we do in Oracle 10g? Kindly see below:
alter table emp parallel (degree 4);
select degree from user_tables where table_name = 'EMP';
select count(*) from emp;
alter table emp noparallel;
SELECT /*+ PARALLEL(emp,4) */ COUNT(*)
FROM emp;
The idea here is to distribute the load of the SELECT query across multiple CPUs for load balancing and better performance.
Kindly advise.
Thanks:
Gaurav
Hi,
> Is there any way we can write parallel queries in ABAP, in the same way we do in oracle 10g.
Sure. Since it is just an Oracle hint, you can pass it through:
SELECT *
FROM t100 INTO TABLE it100
%_HINTS ORACLE 'PARALLEL(T100,4)'.
will give you such an execution plan for example:
SELECT STATEMENT ( Estimated Costs = 651 , Estimated #Rows = 924.308 )
4 PX COORDINATOR
3 PX SEND QC (RANDOM) :TQ10000
( Estim. Costs = 651 , Estim. #Rows = 924.308 )
Estim. CPU-Costs = 33.377.789 Estim. IO-Costs = 646
2 PX BLOCK ITERATOR
( Estim. Costs = 651 , Estim. #Rows = 924.308 )
Estim. CPU-Costs = 33.377.789 Estim. IO-Costs = 646
1 TABLE ACCESS FULL T100
( Estim. Costs = 651 , Estim. #Rows = 924.308 )
Estim. CPU-Costs = 33.377.789 Estim. IO-Costs = 646
PX = Parallel eXecution...
But be sure that you know what you are doing with the parallel execution option... it is not scalable.
Kind regards,
Hermann
Similar Messages
-
Trace queries from abap to a custom oracle database via dblink
I'm connecting to a database by dblink (name magiap).
I would like to know if somewhere I can trace all the queries from ABAP to Oracle in this specific session, to dbs = 'MAGIAP'.
For instance, I would like that the query
SELECT "DESPARTY1"
into :v_DESPARTY1
FROM "T040PARTY"
WHERE "CODPARTY" = '305142941'
will be stored somewhere (in a file?).
I would like that the parameter - w_CODPARTY - will be substituted and stored in the trace file with the value (305142941), as shown above.
Here is the piece of code (a very short example, of course):
DATA : dbs LIKE dbcon-con_name,
v_CODPARTY(15),
v_DESPARTY1(60).
data : w_CODPARTY(15) value '305142941'.
dbs = 'MAGIAP'.
TRY.
EXEC SQL.
CONNECT TO :dbs
ENDEXEC.
IF sy-subrc <> 0.
EXEC SQL.
CONNECT TO :dbs
ENDEXEC.
ENDIF.
IF sy-subrc <> 0.
* RAISE err_conn_aea.
ENDIF.
EXEC SQL.
set connection :dbs
ENDEXEC.
EXEC SQL .
SELECT "DESPARTY1"
into :v_DESPARTY1
FROM "T040PARTY"
WHERE "CODPARTY" =
:w_CODPARTY
ENDEXEC.
IF sy-subrc NE 0.
* rc = 4.
ENDIF.
EXEC SQL.
DISCONNECT :dbs
ENDEXEC.
ENDTRY.
Hi Silvana,
The SQL statements are stored in the SQL cursor cache on the database, and they remain available until they are invalidated. You can access the statements on the 'MAGIAP' side and see the last executed queries in the cache.
You can also access the bind variables by querying the V$SQL_BIND_CAPTURE view.
Alternatively, you can activate tracing with the statement below:
ALTER SYSTEM SET sql_trace = true SCOPE=MEMORY;
The SQL statements will then be available in the user trace file. Please note that you should execute and investigate all the statements noted above on the remote side. Also, as far as I know, it is not possible to distinguish the records by the dblink name; you have to check all the statements and work out which queries were executed remotely.
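For example, on the remote side the cursor-cache lookup could be sketched like this (the T040PARTY filter comes from the example above; V$SQL and V$SQL_BIND_CAPTURE are standard 10g dynamic performance views, and '&sql_id' is a SQL*Plus substitution variable):

```sql
-- Run on the MAGIAP database (the remote side).
-- Find recently executed statements against T040PARTY:
SELECT sql_id, last_active_time, sql_text
  FROM v$sql
 WHERE UPPER(sql_text) LIKE '%T040PARTY%'
 ORDER BY last_active_time DESC;

-- Then show the captured bind values for one of those statements:
SELECT name, position, value_string
  FROM v$sql_bind_capture
 WHERE sql_id = '&sql_id';
```

Note that bind values are only sampled into V$SQL_BIND_CAPTURE periodically, so a very recent execution may not show its binds yet.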
Best regards,
Orkun Gedik -
Can we have Multiple Instance on same Node in Oracle 10g RAC
Hi All,
I am planning to implement RAC in Oracle 10g. Before that, I have one doubt regarding RAC.
My question is: "Can we create multiple instances on the same node (server)?"
Is it possible?
Any ideas or thoughts would be appreciated.
Thanks in advance.
Anwar
This is where it is important to keep the separation between 'database' and 'instance'.
A database is the set of files that contains the data (and the redo, control files, etc). A database does nothing by itself, other than take up lots of disk space.
An instance is the CPU cycles (running software) and the memory that control the database.
In Oracle RAC, you can have as many instances controlling one database [at the same time] as you want (within reason). Each instance must be able to access the disk(s) that contains the database.
These multiple instances can be on the same computer (effectively taking up a lot of server memory and CPU for nothing) or they can be on separate computers.
If they are on separate computers, the disk subsystems must be able to be shared across computers - this is occasionally done using operating system clusterware and is the main reason why clusterware is required at all. (This is also the toughest part of the pre-requisites in setting up a RAC and is very vendor dependent unless you use ASM.)
These instances need a communication connection to coordinate their work (usually a separate network card for each computer) so they do not corrupt the disk when they are trying to access the same file, and possibly the same block, at the same time.
In a RAC configuration, instances can be added, started, running, stopped and removed independent of each other (allowing a lot of high availability) or can be started and stopped as a group.
Each instance gets its own SID, which is in no way different from a non-RAC SID. It's just the name of a service that can be invoked. The neat thing is that the SID
a) helps the DBA keep things straight by letting us talk about 'instance A' (the Oracle software running over on computer A) vs 'instance B' when starting, stopping and managing;
b) helps the application by providing targets that can be listed in the TNSNAMES.ORA [against one service alias], which is used by Oracle Networking to provide automated load balancing or failover (instance/SID A is not available, I guess I'll try the next in the list)
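Such a service alias might look roughly like the following tnsnames.ora sketch (the alias, host names and service name here are made up; LOAD_BALANCE and FAILOVER are the relevant parameters):

```
RACDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = ON)
      (FAILOVER = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = nodeA)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = nodeB)(PORT = 1521))
    )
    (CONNECT_DATA = (SERVICE_NAME = RACDB))
  )
```

With FAILOVER = ON, Oracle Networking tries the next address in the list when a connect attempt to one node fails.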
Hope that helps the concept level a bit. -
How to run the same instance of oracle 10g on another computer
Hello everyone, sorry for my ignorance. I have downloaded Oracle 10g Express Edition on 2 computers; however, when I log in using SQL Developer I get two different instances of Oracle databases. What file do I need to change on one of the computers so that when I log in through SQL Developer I'm always on the same database? Is there a tnsnames.ora file that I need to make the same on my other machine? Any help would be appreciated.
user633029 wrote:
I have downloaded oracle 10g express edition on 2 computers however when i login using sql developer i get two different instances of oracle databases. What file do i need to change on one of the computers so that when i log in through sql developer im always on the same database.
Hi!
I'm not sure I understand your requirement correctly, but if so:
You have 2 computers, and from both you want to access one single Oracle database.
If this is correct, then you need to install the XE server on only one machine, and ONLY the XE client on the other.
After that you need to copy tnsnames.ora file from computer where XE server resides to according directory where XE client resides.
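The copied entry might look something like this sketch (the host name is a placeholder; XE is the default service name of Express Edition):

```
XE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = xe-server-host)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = XE)
    )
  )
```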
HTH -
How to insert two queries in the same workbook in new BI 7.0 Bex web analy
Hi
Can you please advise how to insert multiple queries in the same workbook in BI 7.0 web analyser.
Sarah
Hi,
You need to switch to design mode (BEx Analyser Menu - Design Toolbar - Design Mode). For each query that you want to embed, insert an analysis grid item in the workbook. Link each analysis grid to its corresponding data provider (query).
Hope this helps.
Regards,
Rajkandula -
Filter the content on different queries for the same infoprovider and user
Hello,
We are trying to make the following security scenario in BI, and have
problem with the analysis object concept to filter at query level.
The idea is to permit to :
- user A
- to execute query Q1 and view information about sites 1,2,3
- to execute query Q2 and view information about sites 4,5,6
but for example for another user :
- user B
- to execute query Q1 and view information about site 1,3
- to execute query Q2 and view information about site 5,6,7
Q1 and Q2 are queries from the same infoprovider.
The idea is to make an automatic generation of analysis objects based
on the standard program : RSEC_GENERATE_AUTHORIZATIONS.
During tests, we faced a problem with the object 0TCTQUERY, which we
thought would permit us to filter at the query level; but unless we add
the name of the query to a role in the S_RS_COMP authorization object
(field RSZCOMPID), the query is not granted to the user.
The fact that we use both authorization objects - one for the query
definition, and another for the analysis authorization concept
(S_RS_AUTH, field BIAUTH) - has a disastrous effect: all values given in
the analysis objects apply to all queries of a given InfoProvider.
With that system, it is then not possible to propose dynamically different
views of the same data (i.e. from the same InfoProvider) based on the
authorization concept, unless using the technique of customer-exit variables;
but with variables you will have a problem with old queries that don't
have a variable and that will therefore show all data given in the new
authorization objects.
Does another object exist to filter at the query level in the
analysis objects? If not, what can be done to reach
our goal with the new authorization concept?
Thank you in advance for your help.
Best regards,
Gaël.
The data is protected on InfoProvider level and not on the query level, so when two queries are built from the same InfoProvider, the authorizations are the same.
To achieve what you want to do, the queries must be built on different providers. This can be achieved by placing the InfoProvider in 2 different MultiProviders and building the queries and authorizations separately on these.
Parallel Processing through ABAP program
Hi,
We are trying to do the parallel processing through ABAP. As per SAP documentation we are using the CALL FUNCTION STARTING NEW TASK DESTINATION.
We have one Z function Module and as per SAP we are making this Function module (FM)as Remote -enabled module.
In this FM we would like to process data which we get it from internal table and would like to send back the processed data(through internal table) to the main program where we are using CALL FUNCTION STARTING NEW TASK DESTINATION.
Please suggest how to achieve this.
We tried the EXPORT/IMPORT option, meaning we used EXPORT of the internal table in the FM with some memory ID, and in the main program IMPORT of the internal table with the same memory ID. But this option is not working, even though the memory ID and the name of the internal table match.
Also, SAP documentation says that we can use RECEIVE RESULTS FROM FUNCTION 'RFC_SYSTEM_INFO'
IMPORTING RFCSI_EXPORT = INFO in conjunction with CALL FUNCTION STARTING NEW TASK DESTINATION. Documentation also specifies that "RECEIVE is needed to gather IMPORTING and TABLE returns of an asynchronously executed RFC function module". But while creating the remote-enabled FM we can't have EXPORT or IMPORT parameters.
Please help !
Thanks in advance
Santosh<i>We tried out EXPORT -IMPORT option meaning we used EXPORT internal table in the FM with some memory ID and in the main program using IMPORT internal table with the same memory ID. But this option is not working even though ID and name of the internal table is not working</i>
I think that this is not working because that memory does not work across sessions/tasks. I think that
IMPORT FROM SHARED BUFFER and EXPORT TO SHARED BUFFER would work. I have used these in the past and they work pretty well.
Also,
here is a quick sample of the "new task" and "receive" functionality. You cannot specify the IMPORTING parameters when calling the FM; you specify them at the receiving end.
report zrich_0001 .
data: session(1) type c.
data: ccdetail type bapi0002_2.
start-of-selection.
* Call the BAPI in another session... control will stop
* in the calling program and wait for the response from the other session
call function 'BAPI_COMPANYCODE_GETDETAIL'
starting new task 'TEST' destination 'NONE'
performing set_session_done on end of task
exporting
companycodeid = '0010'.
* IMPORTING
* COMPANYCODE_DETAIL = ccdetails
* COMPANYCODE_ADDRESS =
* RETURN =
* wait here till the other session is done
wait until session = 'X'.
write:/ ccdetail.
* FORM SET_session_DONE
form set_session_done using taskname.
* Receive results into messtab from function.......
* this will also close the session
receive results from function 'BAPI_COMPANYCODE_GETDETAIL'
importing
companycode_detail = ccdetail.
* Set session as done.
session = 'X'.
endform.
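The shared-buffer alternative mentioned above could look roughly like the following sketch (the id 'ZRESULT' is made up, and SFLIGHT is just a stand-in line type; both sides must use the same cluster area and ID):

```abap
* In the RFC-enabled function module, after filling lt_result:
DATA: lt_result TYPE TABLE OF sflight.
EXPORT tab = lt_result TO SHARED BUFFER indx(zz) ID 'ZRESULT'.

* Back in the calling program (e.g. after the task has finished):
IMPORT tab = lt_result FROM SHARED BUFFER indx(zz) ID 'ZRESULT'.
```

Unlike plain ABAP memory (EXPORT TO MEMORY), the shared buffer lives on the application server and is visible across sessions, which is why it works with STARTING NEW TASK.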
Hope this helps.
Rich Heilman -
0PA_C01 different results from two queries on the same cube ....
HI
Can you please help with this problems ...
I am running two queries with the same restrictions, e.g.
Sep 08 for employee 222345.
One report shows the pay scale level as A1, while the other report shows pay scale level A2.
Looking at the master data in 0EMPLOYEE, the first report is right. This is how the data looks in 0EMPLOYEE:
Employee Valid from To Pay scale Level
222345 2007-11-03 2008-09-30 A1
222345 2008-10-01 9999-12-31 A2
Can someone please shed some light on this? I'm thinking it has something to do with the update rule, but even that is supposed to use the last date of the month, not the 1st day of the following month. The cube is the standard cube 0PA_C01, and so is the update rule.
Hi,
Please check in the cube whether the data for the employee arrives with two values, as shown in your question:
Employee Valid from To Pay scale Level
222345 2007-11-03 2008-09-30 A1
222345 2008-10-01 9999-12-31 A2
and also check when the data was loaded to the cube.
There may be a difference in the report structure, for example a column-wise or row-wise restriction present in one of the reports.
Please also check the structure of the report.
With Regards,
Ravi Kanth -
Hi,
I have an internal table that contains object references. Each item in the table is independent of the others. I want to extract info from each object and convert it into an internal table so that I can pass it to an RFC function.
So how can I do this extraction of the info from the objects in the internal table in parallel?
To use STARTING NEW TASK, I created a function module that is RFC-enabled... but then I can't pass the object reference to this module. So how can I do this?
Also, I read that this function module call creates a main or external session, which has a limit of 6 per user session. Is this correct?
If the above can be done, I also want to restrict the number of parallel processes executing at any point in time to 5 or so.
thanks in advance
Murugesh
Hi Murugesh,
Parallel processing can be implemented in the application reports that are to run in the background. You can implement parallel processing in your own background applications by using the function modules and ABAP keywords.
Refer following docs.
<b>Parallel Processing in ABAP</b>
/people/naresh.pai/blog/2005/06/16/parallel-processing-in-abap
<b>Parallel Processing with Asynchronous RFC</b>
http://help.sap.com/saphelp_webas610/helpdata/en/22/0425c6488911d189490000e829fbbd/frameset.htm
<b>Parallel-Processing Function Modules</b>
http://help.sap.com/saphelp_nw04s/helpdata/en/fa/096ff6543b11d1898e0000e8322d00/frameset.htm
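To cap the number of concurrent tasks at 5 (as asked above), a common pattern is a counter plus WAIT UNTIL; WAIT processes the ON END OF TASK callbacks, which free the slots. A rough sketch, in which the function module Z_PROCESS_CHUNK and its parameters IV_KEY/ET_RESULT are hypothetical names:

```abap
REPORT z_parallel_sketch.

CONSTANTS: c_max_tasks TYPE i VALUE 5.

DATA: gt_keys    TYPE TABLE OF char10,   " work packages to process
      gv_key     TYPE char10,
      gt_all     TYPE TABLE OF string,   " collected results
      gt_part    TYPE TABLE OF string,   " result of one task
      gv_running TYPE i,                 " tasks currently in flight
      gv_index   TYPE i,
      gv_task(8) TYPE c.

START-OF-SELECTION.
  LOOP AT gt_keys INTO gv_key.
*   block until one of the 5 slots is free; the counter is
*   decremented in the ON END OF TASK form below
    WAIT UNTIL gv_running < c_max_tasks.
    gv_index = gv_index + 1.
    gv_task = gv_index. CONDENSE gv_task.
    CALL FUNCTION 'Z_PROCESS_CHUNK'
      STARTING NEW TASK gv_task
      DESTINATION IN GROUP DEFAULT
      PERFORMING task_done ON END OF TASK
      EXPORTING
        iv_key                = gv_key
      EXCEPTIONS
        communication_failure = 1
        system_failure        = 2
        resource_failure      = 3.
    IF sy-subrc = 0.
      gv_running = gv_running + 1.
    ENDIF.
  ENDLOOP.
* wait for the remaining tasks to finish
  WAIT UNTIL gv_running = 0.

FORM task_done USING p_taskname.
* RECEIVE gathers the IMPORTING/TABLES results and closes the task
  RECEIVE RESULTS FROM FUNCTION 'Z_PROCESS_CHUNK'
    TABLES
      et_result = gt_part.
  APPEND LINES OF gt_part TO gt_all.
  gv_running = gv_running - 1.
ENDFORM.
```

A production version would also handle the RESOURCE_FAILURE exception (no free dialog work process) by waiting and retrying the same key instead of skipping it.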
Don't forget to reward pts, if it helps ;>)
Regards,
Rakesh. -
How many parallel queries?
We are planning to install Ultra Search on Linux SUSE 8.1 with:
Oracle web server: Apache
ca. 30 GB, gigabit ethernet,
3 GB RAM
search engine: Oracle Ultra Search
We have ca. 10,000 users who are interested in searching on our web server.
9iAS, the index database, and the crawler will be installed on the same machine.
Is it possible to answer in general: how many parallel queries does Ultra Search allow?
Issue regarding Parallel dynamic block
Dear All,
I need some help regarding Parallel dynamic process issue.
My Process structure as follows.
Process
->Sequential Block
--->Action1
--->Parallel Dynamic Block
>Sequential Sub Block
>Action2
I am able to trigger the tasks to multiple users.
Action2 has to be performed by multiple users.
Action2 contains a Web Dynpro callable object.
The Web Dynpro CO view contains two buttons, namely "Approve" and "Send Back".
My requirement is:
1) If all users select Approve, then Action2 has to complete and control moves to Action1.
2) If anyone selects Send Back, then Action2 will be completed and control moves to Action1. This process continues until all the users approve.
I am struggling with this requirement, so please help me.
Hi Rajesh,
Question 1:
I am trying to call Action1 in the Sequential Sub Block from Action2. At that time I am getting the following error:
Transition between blocks cannot be defined because there is a parallel block in the path.
Answer 1:
Please include "Action2" and "Action1 (On Send Back from Action2)" in the sub-block.
If both actions are in the same sequential block (sub-block), then you can set the result state.
Question 2:
Please suggest what to set as the result state of Action2 when it is completed, and how to call Action3.
Tell me, does Action3 come under the Sequential Block?
Answer 2:
Action2 must have two result states:
1. Completed -- assign it "Terminal"
2. Send Back -- assign it "Action1 (On Send Back from Action2)"
Action3 is outside Parallel Dynamic Block. The structure of your sequential block will look like this
Process
->Sequential Block
--->Action1
--->Parallel Dynamic Block
--->Action3
Regards,
Pratik -
Multiple running queries at the same time
Hi!
I looked around (and RTM) for this but didn't find anything, so I'm asking here.
I have quite a few long-running queries (data loading and such things, warehousing stuff), and I need to be able to run multiple queries/statements at the same time. In TOAD I can do this: start a procedure, and while it is running I can run SQL statements in another session tab (it supports threaded sessions - it starts queries in their own background thread/session).
When I start a long-running procedure or query in SQL Developer, I cannot do anything until the execution finishes. Is there any way (setting/preference) to enable SQL Developer to run multiple queries at the same time?
I really would like to move away from TOAD, but this is a major showstopper for me.
Thanx for any tips.
Alex
Hi!
This post is going to be a little longer, but I have to clarify things out.
I did not mean to throw any wild accusations, because I did my fair share of RTFM-ing and searching the help. I can tell you that if you put any of these in the help search box:
session
non shared
non-shared
connection
concurrent <- I guess this one should yield something
multiple
spawn
you won't find anything useful; the article that comes closest is this:
"Sharing of Connections
By default, each connection in SQL Developer is shared when possible. For example, if you open a table in the Connections navigator and two SQL Worksheets using the same connection, all three panes use one shared connection to the database. In this example, a commit operation in one SQL Worksheet commits across all three panes. If you want a dedicated session, you must duplicate your connection and give it another name. Sessions are shared by name, not connection information, so this new connection will be kept separate from the original."
It does not mention any spawning of non-shared connections from the current one, nor does it mention using an accelerator key combo. But since something could have been written about it, I guess you could call it a documentation bug, because it provides no clue to this functionality. The help is definitely of no help in this case. As you can see, I do not throw accusations without trying to find out something first. I guess if someone is not as deep into SQL Developer as you are, there is no way for him/her to know this.
OK, I tried your suggestion, and (sadly) it does not work as I suppose it should.
Here's what I did:
- start a new connection, and enter the following code in SQL Worksheet:
declare
j number;
begin
for i in 1..1000000
LOOP
j := sin(i);
end LOOP;
end;
As you can see, it doesn't do much besides holding the connection busy for a while when executed.
- start a new non-shared connection from the first one using CTRL-SHIFT-N (as you suggested) and put the following statement in the new SQL Worksheet (with "__1" appended to connection name)
select sysdate from dual;
- go to the first SQL Worksheet and execute the procedure
- while the procedure is executing, go to the second SQL Worksheet and hit F9.
The sysdate is returned as soon as the first SQL Worksheet finishes and not any sooner. It may run in separate session, but the result is not returned before the other session is finished doing what it is doing. I guess the correct behaviour would be to return the sysdate immediately.
I verified this behaviour repeating it 3 times starting with a new instance of SQL Developer, each time connecting to another schema and spawning the new non-shared session. The database used was Oracle 10.2.0.3 EE on RHEL 4 UPD3.
The concurrent execution lacks concurrency. The statements might be executed concurrently on the database (I did not go the extra mile to verify this), but the returning of results is just not independent of other sessions. To the end user this is as much serial as it is concurrent execution.
I hope developers get this issue straightened out soon, as I said, I'd love to move away from Toad, but I'll have to wait until they fix this out.
Is there anything else that can be done to make it behave correctly?
Kind regards
Alex -
How to Add 3 queries in the same work book?
Hi Gurus,
Can any one tell How to Add 3 queries in the same work book?
Example, daily report,Monhly and yearly reports for sales should be in the same workbook.
Please help me if any one have a pointer or a how to doc if available.
<<Text removed>>
Thanks
James
Edited by: Matt on Apr 26, 2010 9:36 AM
Hi James,
According to the BI 7.0 version, here are the steps to create a workbook and insert more than one query into it:
When you run a query and it opens in Bex Analyzer you can click the save button and pick "Save as Workbook".
Once you save it as a workbook Click on the "Design Mode" button in the Bex toolbar (looks like an A).
Click in the sheet where you want the new query to go, click the "Analysis Grid" button. It will add the analysis grid to your new sheet.
Right click on the Analysis grid and go to properties.
Click on button to change data provider and select the query you want to attach.
Exit design mode and you should be all set. -
Gather_Plan_Statistics + DBMS_XPLAN A-rows for parallel queries
Looks like gather_plan_statistics + dbms_xplan displays incorrect A-rows for parallel queries. Is there any way to get the correct A-rows for a parallel query?
Version details:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bi on HPUX
Create test tables:
-- Account table
create table test_tacprof
parallel (degree 2) as
select object_id ac_nr,
object_name ac_na
from all_objects;
alter table test_tacprof add constraint test_tacprof_pk primary key (ac_nr);
-- Account revenue table
create table test_taccrev
parallel (degree 2) as
select apf.ac_nr ac_nr,
fiv.r tm_prd,
apf.ac_nr * fiv.r ac_rev
from (select rownum r from all_objects where rownum <= 5) fiv,
test_tacprof apf;
alter table test_taccrev add constraint test_taccrev_pk primary key (ac_nr, tm_prd);
-- Table to hold query results
create table test_4accrev as
select apf.ac_nr, apf.ac_na, rev.tm_prd, rev.ac_rev
from test_taccrev rev,
test_tacprof apf
where 1=2;
Run query with parallel dml/query disabled:
ALTER SESSION DISABLE PARALLEL QUERY;
ALTER SESSION DISABLE PARALLEL DML;
INSERT INTO test_4accrev
SELECT /*+ gather_plan_statistics */
apf.ac_nr,
apf.ac_na,
rev.tm_prd,
rev.ac_rev
FROM test_taccrev rev, test_tacprof apf
WHERE apf.ac_nr = rev.ac_nr AND tm_prd = 4;
SELECT *
FROM TABLE (DBMS_XPLAN.display_cursor (NULL, NULL, 'ALLSTATS LAST'));
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Use
|* 1 | HASH JOIN | | 1 | 30442 | 23412 |00:00:00.27 | 772 | 1810K| 1380K| 2949K (0)|
| 2 | TABLE ACCESS FULL| TEST_TACPROF | 1 | 26050 | 23412 |00:00:00.01 | 258 | | |
|* 3 | TABLE ACCESS FULL| TEST_TACCREV | 1 | 30441 | 23412 |00:00:00.03 | 514 | | |
ROLLBACK ;
A-rows are correctly reported with no parallel.
Run query with parallel dml/query enabled:
ALTER SESSION enable PARALLEL QUERY;
alter session enable parallel dml;
insert into test_4accrev
select /*+ gather_plan_statistics */ apf.ac_nr, apf.ac_na, rev.tm_prd, rev.ac_rev
from test_taccrev rev,
test_tacprof apf
where apf.ac_nr = rev.ac_nr
and tm_prd = 4;
select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-M
| 1 | PX COORDINATOR | | 1 | | 23412 |00:00:00.79 | 6 | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10001 | 0 | 30442 | 0 |00:00:00.01 | 0 | | | |
|* 3 | HASH JOIN | | 0 | 30442 | 0 |00:00:00.01 | 0 | 2825K| 1131K| |
| 4 | PX BLOCK ITERATOR | | 0 | 30441 | 0 |00:00:00.01 | 0 | | | |
|* 5 | TABLE ACCESS FULL | TEST_TACCREV | 0 | 30441 | 0 |00:00:00.01 | 0 | | |
| 6 | BUFFER SORT | | 0 | | 0 |00:00:00.01 | 0 | 73728 | 73728 | |
| 7 | PX RECEIVE | | 0 | 26050 | 0 |00:00:00.01 | 0 | | | |
| 8 | PX SEND BROADCAST | :TQ10000 | 0 | 26050 | 0 |00:00:00.01 | 0 | | | |
| 9 | PX BLOCK ITERATOR | | 0 | 26050 | 0 |00:00:00.01 | 0 | | | |
|* 10 | TABLE ACCESS FULL| TEST_TACPROF | 0 | 26050 | 0 |00:00:00.01 | 0 | | | |
rollback;
A-rows are zero except for the final step.
I'm sorry for posting the following long test case,
but it's the most convenient way to explain something. :-)
Here is my test case, which is quite similar to yours.
Note the difference between "parallel select" and "parallel DML (insert, here)".
(I know that Oracle implemented the PSC (parallel single cursor) model in 10g, but the details of the implementation are quite a mystery, as Jonathan said...)
SQL> select * from v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
SQL>
SQL> alter system flush shared_pool;
System altered.
SQL>
SQL> alter table t parallel 4;
Table altered.
SQL>
SQL> select /*+ gather_plan_statistics */ count(*) from t t1, t t2
2 where t1.c1 = t2.c1 and rownum <= 1000
3 order by t1.c2;
COUNT(*)
1000
SQL>
SQL> select sql_id from v$sqlarea
where sql_text like 'select /*+ gather_plan_statistics */ count(*) from t t1, t t2%';
SQL_ID
bx61bkyh9ffb6
SQL>
SQL> select * from table(dbms_xplan.display_cursor('&sql_id',null,'allstats last'));
Enter value for sql_id: bx61bkyh9ffb6
PLAN_TABLE_OUTPUT
SQL_ID bx61bkyh9ffb6, child number 0 <-- Coordinator and slaves shared the cursor
select /*+ gather_plan_statistics */ count(*) from t t1, t t2 where t1.c1 = t2.c
1 and rownum <= 1000 order by t1.c2
Plan hash value: 3015647771
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.62 | 6 | | | |
|* 2 | COUNT STOPKEY | | 1 | | 1000 |00:00:00.62 | 6 | | | |
| 3 | PX COORDINATOR | | 1 | | 1000 |00:00:00.50 | 6 | | | |
| 4 | PX SEND QC (RANDOM) | :TQ10002 | 0 | 16M| 0 |00:00:00.01 | 0 | | | |
|* 5 | COUNT STOPKEY | | 0 | | 0 |00:00:00.01 | 0 | | | |
|* 6 | HASH JOIN BUFFERED | | 0 | 16M| 0 |00:00:00.01 | 0 | 1285K| 1285K| 717K (0)|
| 7 | PX RECEIVE | | 0 | 10000 | 0 |00:00:00.01 | 0 | | | |
| 8 | PX SEND HASH | :TQ10000 | 0 | 10000 | 0 |00:00:00.01 | 0 | | | |
| 9 | PX BLOCK ITERATOR | | 0 | 10000 | 0 |00:00:00.01 | 0 | | | |
|* 10 | TABLE ACCESS FULL| T | 0 | 10000 | 0 |00:00:00.01 | 0 | | | |
| 11 | PX RECEIVE | | 0 | 10000 | 0 |00:00:00.01 | 0 | | | |
| 12 | PX SEND HASH | :TQ10001 | 0 | 10000 | 0 |00:00:00.01 | 0 | | | |
| 13 | PX BLOCK ITERATOR | | 0 | 10000 | 0 |00:00:00.01 | 0 | | | |
|* 14 | TABLE ACCESS FULL| T | 0 | 10000 | 0 |00:00:00.01 | 0 | | | |
38 rows selected.
SQL>
SQL> select sql_id, child_number, executions, px_servers_executions
2 from v$sql where sql_id = '&sql_id';
SQL_ID CHILD_NUMBER EXECUTIONS
PX_SERVERS_EXECUTIONS
bx61bkyh9ffb6 0 1
8
SQL>
SQL> insert /*+ gather_plan_statistics */ into t select * from t;
10000 rows created.
SQL>
SQL> select sql_id from v$sqlarea
where sql_text like 'insert /*+ gather_plan_statistics */ into t select * from t%';
SQL_ID
9dkmu9bdhg5h0
SQL>
SQL> select * from table(dbms_xplan.display_cursor('&sql_id', null, 'allstats last'));
Enter value for sql_id: 9dkmu9bdhg5h0
PLAN_TABLE_OUTPUT
SQL_ID 9dkmu9bdhg5h0, child number 0 <-- Coordinator Cursor
insert /*+ gather_plan_statistics */ into t select * from t
Plan hash value: 3050126167
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
| 1 | PX COORDINATOR | | 1 | | 10000 |00:00:00.20 | 3 |
| 2 | PX SEND QC (RANDOM)| :TQ10000 | 0 | 10000 | 0 |00:00:00.01 | 0 |
| 3 | PX BLOCK ITERATOR | | 0 | 10000 | 0 |00:00:00.01 | 0 |
|* 4 | TABLE ACCESS FULL| T | 0 | 10000 | 0 |00:00:00.01 | 0 |
SQL_ID 9dkmu9bdhg5h0, child number 1 <-- Slave(s)
insert /*+ gather_plan_statistics */ into t select * from t
PLAN_TABLE_OUTPUT
Plan hash value: 3050126167
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
| 1 | PX COORDINATOR | | 0 | | 0 |00:00:00.01 | 0 |
| 2 | PX SEND QC (RANDOM)| :TQ10000 | 0 | 10000 | 0 |00:00:00.01 | 0 |
| 3 | PX BLOCK ITERATOR | | 1 | 10000 | 2628 |00:00:00.20 | 16 |
|* 4 | TABLE ACCESS FULL| T | 4 | 10000 | 2628 |00:00:00.02 | 16 |
SQL>
SQL> select sql_id, child_number, executions, px_servers_executions
2 from v$sql where sql_id = '&sql_id'; <-- 2 child cursors here
SQL_ID        CHILD_NUMBER EXECUTIONS PX_SERVERS_EXECUTIONS
------------- ------------ ---------- ---------------------
9dkmu9bdhg5h0            0          1                     0
9dkmu9bdhg5h0            1          0                     4
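The pattern above generalizes: for parallel statements, child 0 is the query coordinator's cursor (EXECUTIONS counted there) and an extra child is built for the PX slaves (PX_SERVERS_EXECUTIONS counted there). A hedged sketch for finding such statements system-wide, using only V$SQL columns that exist in 10g:

```sql
-- Sketch: list statements where child cursors were driven by PX servers,
-- i.e. candidates for the coordinator/slave child-cursor split shown above.
select sql_id, child_number, executions, px_servers_executions
from   v$sql
where  px_servers_executions > 0
order  by sql_id, child_number;
```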
SQL>
SQL> set serveroutput on
-- check mismatch
SQL> exec print_table('select * from v$sql_shared_cursor where sql_id = ''&sql_id''');
Enter value for sql_id: 9dkmu9bdhg5h0
SQL_ID : 9dkmu9bdhg5h0
ADDRESS : 6AD85A70
CHILD_ADDRESS : 6BA596A8
CHILD_NUMBER : 0
UNBOUND_CURSOR : N
SQL_TYPE_MISMATCH : N
OPTIMIZER_MISMATCH : N
OUTLINE_MISMATCH : N
STATS_ROW_MISMATCH : N
LITERAL_MISMATCH : N
SEC_DEPTH_MISMATCH : N
EXPLAIN_PLAN_CURSOR : N
BUFFERED_DML_MISMATCH : N
PDML_ENV_MISMATCH : N
INST_DRTLD_MISMATCH : N
SLAVE_QC_MISMATCH : N
TYPECHECK_MISMATCH : N
AUTH_CHECK_MISMATCH : N
BIND_MISMATCH : N
DESCRIBE_MISMATCH : N
LANGUAGE_MISMATCH : N
TRANSLATION_MISMATCH : N
ROW_LEVEL_SEC_MISMATCH : N
INSUFF_PRIVS : N
INSUFF_PRIVS_REM : N
REMOTE_TRANS_MISMATCH : N
LOGMINER_SESSION_MISMATCH : N
INCOMP_LTRL_MISMATCH : N
OVERLAP_TIME_MISMATCH : N
SQL_REDIRECT_MISMATCH : N
MV_QUERY_GEN_MISMATCH : N
USER_BIND_PEEK_MISMATCH : N
TYPCHK_DEP_MISMATCH : N
NO_TRIGGER_MISMATCH : N
FLASHBACK_CURSOR : N
ANYDATA_TRANSFORMATION : N
INCOMPLETE_CURSOR : N
TOP_LEVEL_RPI_CURSOR : N
DIFFERENT_LONG_LENGTH : N
LOGICAL_STANDBY_APPLY : N
DIFF_CALL_DURN : N
BIND_UACS_DIFF : N
PLSQL_CMP_SWITCHS_DIFF : N
CURSOR_PARTS_MISMATCH : N
STB_OBJECT_MISMATCH : N
ROW_SHIP_MISMATCH : N
PQ_SLAVE_MISMATCH : N
TOP_LEVEL_DDL_MISMATCH : N
MULTI_PX_MISMATCH : N
BIND_PEEKED_PQ_MISMATCH : N
MV_REWRITE_MISMATCH : N
ROLL_INVALID_MISMATCH : N
OPTIMIZER_MODE_MISMATCH : N
PX_MISMATCH : N
MV_STALEOBJ_MISMATCH : N
FLASHBACK_TABLE_MISMATCH : N
LITREP_COMP_MISMATCH : N
SQL_ID : 9dkmu9bdhg5h0
ADDRESS : 6AD85A70
CHILD_ADDRESS : 6B10AA00
CHILD_NUMBER : 1
UNBOUND_CURSOR : N
SQL_TYPE_MISMATCH : N
OPTIMIZER_MISMATCH : N
OUTLINE_MISMATCH : N
STATS_ROW_MISMATCH : N
LITERAL_MISMATCH : N
SEC_DEPTH_MISMATCH : N
EXPLAIN_PLAN_CURSOR : N
BUFFERED_DML_MISMATCH : N
PDML_ENV_MISMATCH : N
INST_DRTLD_MISMATCH : N
SLAVE_QC_MISMATCH : N
TYPECHECK_MISMATCH : N
AUTH_CHECK_MISMATCH : N
BIND_MISMATCH : N
DESCRIBE_MISMATCH : N
LANGUAGE_MISMATCH : N
TRANSLATION_MISMATCH : N
ROW_LEVEL_SEC_MISMATCH : N
INSUFF_PRIVS : N
INSUFF_PRIVS_REM : N
REMOTE_TRANS_MISMATCH : N
LOGMINER_SESSION_MISMATCH : N
INCOMP_LTRL_MISMATCH : N
OVERLAP_TIME_MISMATCH : N
SQL_REDIRECT_MISMATCH : N
MV_QUERY_GEN_MISMATCH : N
USER_BIND_PEEK_MISMATCH : N
TYPCHK_DEP_MISMATCH : N
NO_TRIGGER_MISMATCH : N
FLASHBACK_CURSOR : N
ANYDATA_TRANSFORMATION : N
INCOMPLETE_CURSOR : N
TOP_LEVEL_RPI_CURSOR : N
DIFFERENT_LONG_LENGTH : N
LOGICAL_STANDBY_APPLY : N
DIFF_CALL_DURN : Y <-- mismatch here (DIFF_CALL_DURN)
BIND_UACS_DIFF : N
PLSQL_CMP_SWITCHS_DIFF : N
CURSOR_PARTS_MISMATCH : N
STB_OBJECT_MISMATCH : N
ROW_SHIP_MISMATCH : N
PQ_SLAVE_MISMATCH : N
TOP_LEVEL_DDL_MISMATCH : N
MULTI_PX_MISMATCH : N
BIND_PEEKED_PQ_MISMATCH : N
MV_REWRITE_MISMATCH : N
ROLL_INVALID_MISMATCH : N
OPTIMIZER_MODE_MISMATCH : N
PX_MISMATCH : N
MV_STALEOBJ_MISMATCH : N
FLASHBACK_TABLE_MISMATCH : N
LITREP_COMP_MISMATCH : N
PL/SQL procedure successfully completed.
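Note that print_table used above is not a built-in; it is the well-known helper popularized by Tom Kyte. A minimal sketch of it, built on the standard DBMS_SQL API, in case you want to reproduce the output:

```sql
-- Sketch of a print_table helper: runs an arbitrary query via DBMS_SQL and
-- prints each column as "NAME : value", one per line, like the output above.
create or replace procedure print_table( p_query in varchar2 )
authid current_user
is
    l_cursor integer default dbms_sql.open_cursor;
    l_desc   dbms_sql.desc_tab;
    l_cols   number;
    l_value  varchar2(4000);
begin
    dbms_sql.parse( l_cursor, p_query, dbms_sql.native );
    dbms_sql.describe_columns( l_cursor, l_cols, l_desc );
    -- fetch every column as VARCHAR2 for simplicity
    for i in 1 .. l_cols loop
        dbms_sql.define_column( l_cursor, i, l_value, 4000 );
    end loop;
    if dbms_sql.execute( l_cursor ) >= 0 then
        while dbms_sql.fetch_rows( l_cursor ) > 0 loop
            for i in 1 .. l_cols loop
                dbms_sql.column_value( l_cursor, i, l_value );
                dbms_output.put_line( rpad( l_desc(i).col_name, 30 )
                                      || ': ' || l_value );
            end loop;
        end loop;
    end if;
    dbms_sql.close_cursor( l_cursor );
end;
/
```

With set serveroutput on, exec print_table('select * from v$sql_shared_cursor where sql_id = ''...''') then produces one NAME : value line per column, as in the listing above.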