Parallel job
Hi,
Assume I have a table A that holds a SQL statement and a running sequence ID.
SQL> SELECT * FROM A;
ID SQL_STMT
1 insert into AA SELECT * FROM DUAL
2 insert into BB SELECT * FROM DUAL
3 insert into CC SELECT * FROM DUAL
4 insert into DD SELECT * FROM DUAL
5 insert into FF SELECT * FROM DUAL
Currently the procedure is designed like this:
create or replace procedure dp_proc_run
is
begin
  for i in (select id, sql_stmt from a order by 1)
  loop
    execute immediate i.sql_stmt;
  end loop;
  commit;
end;
The above procedure executes the statements one by one. I want to run them in parallel; please help me to do it in parallel.
thanks in advance.
my db version
SQL> select * from v$version;
BANNER
Oracle Database 10g Release 10.2.0.4.0 - 64bit Production
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Solaris: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
Edited by: babaravi on Feb 8, 2011 10:52 AM
You cannot have PL/SQL create new sessions. You have to create the sessions explicitly -- e.g. by running SQL*Plus from the OS.
As for the procedure, you'd either
a. Have the procedure read a parameter that is passed to it and use that parameter to identify which line from the table it should run (the parameter has to be passed in from another procedure or from the command line).
OR
b. Have 5 different procedures, each procedure hard-coded for 1 line from the table
OR
c. Have each session mark the line it is working on (e.g. set an "I am running this line, do not run it" flag in an additional column) to prevent another session from running the same line -- the session has to commit that flag before running the line and delete it afterwards.
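A minimal sketch of option (a), assuming the table A from the question (the procedure name and the launch commands are illustrative, not a tested implementation):

```sql
-- Sketch: run a single statement chosen by parameter; each OS-level
-- SQL*Plus session then calls it with a different ID, e.g.
--   nohup sqlplus user/pwd @run_one.sql 1 &
--   nohup sqlplus user/pwd @run_one.sql 2 &
create or replace procedure dp_proc_run_one (p_id in a.id%type)
is
  v_stmt a.sql_stmt%type;
begin
  select sql_stmt into v_stmt from a where id = p_id;
  execute immediate v_stmt;
  commit;
end dp_proc_run_one;
/
```

With 5 rows in A, 5 such sessions run the 5 inserts concurrently instead of one after the other.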
Hemant K Chitale
Similar Messages
-
Query to Report on Parallel Jobs Running
Morning!
I would like to get a query that reports on my parallel jobs.
For each minute that a procedure is running I would like to know what stages are running.
I log the whole procedure in a table called run_details and the start and end of each stage in a table called incident.
I'm running Oracle 9i
Here is some sample data based on 2 threads; the expected output is at the bottom.
SQL>CREATE TABLE run_details
2 (run_details_key NUMBER(10)
3 ,start_time DATE
4 ,end_time DATE
5 ,description VARCHAR2(50)
6 );
SQL>CREATE TABLE incident
2 (run_details_key NUMBER(10)
3 ,stage VARCHAR2(20)
4 ,severity VARCHAR2(20)
5 ,time_stamp DATE
6 );
SQL>INSERT INTO run_details
2 VALUES (1
3 ,TO_DATE('08/10/2007 08:00','DD/MM/YYYY HH24:MI')
4 ,TO_DATE('08/10/2007 08:10','DD/MM/YYYY HH24:MI')
5 ,'Test'
6 );
SQL>INSERT INTO incident
2 VALUES (1
3 ,'Stage1'
4 ,'START'
5 ,TO_DATE('08/10/2007 08:00','DD/MM/YYYY HH24:MI')
6 );
SQL>INSERT INTO incident
2 VALUES (1
3 ,'Stage1'
4 ,'END'
5 ,TO_DATE('08/10/2007 08:08:53','DD/MM/YYYY HH24:MI:SS')
6 );
SQL>INSERT INTO incident
2 VALUES (1
3 ,'Stage2'
4 ,'START'
5 ,TO_DATE('08/10/2007 08:00','DD/MM/YYYY HH24:MI')
6 );
SQL>INSERT INTO incident
2 VALUES (1
3 ,'Stage2'
4 ,'END'
5 ,TO_DATE('08/10/2007 08:04:23','DD/MM/YYYY HH24:MI:SS')
6 );
SQL>INSERT INTO incident
2 VALUES (1
3 ,'Stage3'
4 ,'START'
5 ,TO_DATE('08/10/2007 08:04:24','DD/MM/YYYY HH24:MI:SS')
6 );
SQL>INSERT INTO incident
2 VALUES (1
3 ,'Stage3'
4 ,'END'
5 ,TO_DATE('08/10/2007 08:10','DD/MM/YYYY HH24:MI')
6 );
SQL>select * from incident;
RUN_DETAILS_KEY STAGE SEVERITY TIME_STAMP
1 Stage1 START 08/10/2007 08:00:00
1 Stage1 END 08/10/2007 08:08:53
1 Stage2 START 08/10/2007 08:00:00
1 Stage2 END 08/10/2007 08:04:23
1 Stage3 START 08/10/2007 08:04:24
1 Stage3 END 08/10/2007 08:10:00
So stages 1 and 2 run in parallel from 08:00; then at 08:04:23 stage 2 stops and a second later stage 3 starts.
Set some variables:
SQL>define start_time = null
SQL>col start_time new_value start_time
SQL>define end_time = null
SQL>col end_time new_value end_time
SQL>
SQL>SELECT start_time-(1/(24*60)) start_time
2 ,end_time
3 FROM run_details
4 WHERE run_details_key = 1;
START_TIME END_TIME
08/10/2007 07:59:00 08/10/2007 08:10:00
Get every minute that the process is running for:
SQL>WITH t AS (SELECT TRUNC(TO_DATE('&start_time','dd/mm/yyyy hh24:mi:ss'),'MI') + rownum/24/60 tm
2 FROM dual
3 CONNECT BY ROWNUM <= (TO_DATE('&end_time','dd/mm/yyyy hh24:mi:ss')
4 -TO_DATE('&start_time','dd/mm/yyyy hh24:mi:ss')
5 )*24*60
6 )
7 SELECT tm
8 FROM t;
old 1: WITH t AS (SELECT TRUNC(TO_DATE('&start_time','dd/mm/yyyy hh24:mi:ss'),'MI') + rownum/24/60 tm
new 1: WITH t AS (SELECT TRUNC(TO_DATE('08/10/2007 07:59:00','dd/mm/yyyy hh24:mi:ss'),'MI') + rownum/24/60 tm
old 3: CONNECT BY ROWNUM <= (TO_DATE('&end_time','dd/mm/yyyy hh24:mi:ss')
new 3: CONNECT BY ROWNUM <= (TO_DATE('08/10/2007 08:10:00','dd/mm/yyyy hh24:mi:ss')
old 4: -TO_DATE('&start_time','dd/mm/yyyy hh24:mi:ss')
new 4: -TO_DATE('08/10/2007 07:59:00','dd/mm/yyyy hh24:mi:ss')
TM
08/10/2007 08:00:00
08/10/2007 08:01:00
08/10/2007 08:02:00
08/10/2007 08:03:00
08/10/2007 08:04:00
08/10/2007 08:05:00
08/10/2007 08:06:00
08/10/2007 08:07:00
08/10/2007 08:08:00
08/10/2007 08:09:00
08/10/2007 08:10:00
11 rows selected.
Get stage, start and end times, and duration:
SQL>SELECT ai1.stage
2 ,ai1.time_stamp start_time
3 ,ai2.time_stamp end_time
4 ,SUBSTR(numtodsinterval(ai2.time_stamp-ai1.time_stamp, 'DAY'), 12, 8) duration
5 FROM dw2.incident ai1
6 JOIN dw2.incident ai2
7 ON ai1.run_details_key = ai2.run_details_key
8 AND ai1.stage = ai2.stage
9 WHERE ai1.severity = 'START'
10 AND ai2.severity = 'END'
11 AND ai1.run_details_key = 1
12 ORDER BY ai1.time_stamp
13 /
STAGE START_TIME END_TIME DURATION
Stage1 08/10/2007 08:00:00 08/10/2007 08:08:53 00:08:52
Stage2 08/10/2007 08:00:00 08/10/2007 08:04:23 00:04:22
Stage3 08/10/2007 08:04:24 08/10/2007 08:10:00 00:05:36
Then combine both (or do something else) to get this:
TM THREAD_1 THREAD_2
08/10/2007 08:00:00 Stage1 Stage2
08/10/2007 08:01:00 Stage1 Stage2
08/10/2007 08:02:00 Stage1 Stage2
08/10/2007 08:03:00 Stage1 Stage2
08/10/2007 08:04:00 Stage1 Stage2
08/10/2007 08:05:00 Stage1 Stage3
08/10/2007 08:06:00 Stage1 Stage3
08/10/2007 08:07:00 Stage1 Stage3
08/10/2007 08:08:00 Stage1 Stage3
08/10/2007 08:09:00 Stage3
08/10/2007 08:10:00 Stage3
Ideally I'd like this to work for n-threads, as I want this to run on different environments that have different numbers of CPUs.
Thank you for your time.
> Ideally I'd like this to work for n-threads, as I want this to run on
> different environments that have different numbers of CPUs.
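As a starting point for the combining query itself, here is a rough, untested sketch using only the tables defined above. It lists each running stage per minute rather than pinning stages to fixed THREAD_n columns (slot assignment needs extra logic):

```sql
-- Sketch (untested): one row per (minute, running stage) for run 1.
with mins as (
  select (select trunc(start_time,'MI') from run_details
          where run_details_key = 1) + (level-1)/24/60 tm
  from   dual
  connect by level <= (select (end_time - start_time)*24*60 + 1
                       from run_details where run_details_key = 1)
),
stages as (
  select s.stage, s.time_stamp start_ts, e.time_stamp end_ts
  from   incident s
  join   incident e on  s.run_details_key = e.run_details_key
                    and s.stage = e.stage
  where  s.severity = 'START' and e.severity = 'END'
  and    s.run_details_key = 1
)
select m.tm, st.stage
from   mins m
join   stages st on m.tm between st.start_ts and st.end_ts
order  by m.tm, st.stage;
```

Pivoting the stage rows into THREAD_1..THREAD_n columns then needs a rule for assigning each stage to a free slot, e.g. MAX(DECODE(...)) over a computed slot number.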
The number of CPUs is not always a good indication of the processing load that a platform can take - especially when the processing load involves a lot of I/O.
You can have 99% CPU idle time with 1,000 active processes... as that idle time is in fact time spent waiting on I/O completion, courtesy of a severely strained I/O channel that is the bottleneck.
Another factor is memory (resources). Say you have 4 CPUs with 8GB of physical memory, and a single process (typically a Java VM for a complex process) grabs a huge amount of memory. Assuming 4 threads/CPU, or even 1 thread/CPU, can then be a severe overestimate because of the amount of memory needed. Getting this wrong leads to excessive virtual-memory paging and drastically reduces the platform's performance.
CPU count alone is a very poor basis for deciding on a platform's capacity to run parallel processes. -
Problem in table locking while running the parallel jobs (deadlock?)
Hi,
I am trying to delete the entries from a custom table. If I schedule parallel jobs, I get the following dump while deleting the entries from the table. This happens for a custom table (in a CRM system).
Exception : DBIF_RSQL_SQL_ERROR
It complains that a deadlock occurred. I am not sure what that means.
Could you please help me out, how can we solve this issue?
Thanks,
Sandeep
Hello Sandeep,
I would suggest you use record-based lock objects;
take a look at the link below to get a general idea:
<link removed>
Hope it helps,
Thanks and Regards,
Edited by: Suhas Saha on Aug 19, 2011 11:27 AM -
Load Distribution Versus Parallel Jobs processing
Hello,
I would request feedback.
In Industry Solutions - PS, for the mass activity Automatic Clearing run, there is a Technical Settings tab.
This tab has load distribution settings and a value that can be defined for Number of Jobs.
If I put in a value greater than 1, that many jobs get executed in the background.
I would request clarification on the following:
1. What is the use / benefit of defining an automatic load distribution?
2. Does this setting equal the config settings under IMG - Financial Accounting - Contract AR - Technical Settings - Prepare Mass.
Here, we can set the parallel jobs that can be initiated by providing a maximum number of jobs under the job control panel.
Does this setting override the load distribution settings or vice versa, or is it a different setting?
3. What is the difference between load distribution and parallel jobs?
Any feedback is most welcome and appreciated.
I am a functional consultant and hence unable to distinguish between the two.
I am not sure if this query needs to be posted here, but I have also posted the same in the IS forum.
Regards
Bala
email : [email protected]
Hi Bala,
In any mass activity you will find a Technical Settings tab. There you will find Parallel Processing Object and Load Distribution. Under Parallel Processing Object, you select the object by which the job will be divided, depending on the input. For example, GPART is used when the job has to be divided by business partner. Under Maintain Variants you create variants: you give a variant name and a value in either Interval Length or Number of Intervals. Interval Length decides the maximum number of objects processed in a single part; Number of Intervals tells into how many parallel processes the job will be divided.
For example, say you are running a parallel process for 1000 business partners and choose GPART as the object. If you put 200 in Interval Length, the job is divided into 5 parallel processes. If you put 10 in Number of Intervals, the job is divided into 10 parallel processes of 100 business partners each.
After creating the variant, you use it in the technical settings.
Say you have already defined a variant that divides the job into 30 parts, but you want a maximum of 6 parallel processes executed at a time: then put 6 in the Number of Jobs field under automatic load distribution.
Hope this clears up your question. If you still have any doubt, feel free to ask me.
Thanks and regards,
Jyotishankar Dutta -
Billing Set up - Parallel jobs and size of set up table
Hello All,
I want to do a billing (13) set up using comp code, sales org and document number range as selection criteria.
Would it be okay to run the jobs in parallel? I will be setting up jobs for different variants containing different document ranges. Would there be a conflict in running parallel jobs if they belong to the same company codes / sales orgs?
Q.2 Is there a limit to which we can fill up the set up table and will need to empty it after transferring the data into BI?
Thanks!
Edited by: BI Quest on Oct 7, 2009 6:22 PM
You can run multiple concurrent setup jobs without an issue. I'd recommend that you only execute 4 concurrently, however, unless you have a huge amount of memory on your source R3/ECC server(s). If you're using the billing document number as part of the selection criteria, there shouldn't be a conflict. In fact, if billing document number is the selection criterion for your multiple setups, the Company Code and Sales Org designations really aren't necessary -- unless billing document numbering in your source R3/ECC environment has been configured per Company Code and/or Sales Org, in which case you could have duplicate billing document numbers, and the way to distinguish between them is to further qualify by Company Code and/or Sales Org.
-
How to increase parallel jobs for TA SGEN
Hi all,
we did an SGEN load on an AIX application server and a Linux application server to see which one has the longer runtime.
We saw that the Linux server took more than twice as long as the AIX server... But we also noticed in the job log that
AIX had 37 parallel jobs for SGEN and Linux only 20...
I don't know where to configure Linux to also use 37 parallel jobs for SGEN...
I found a note on help.sap.com that transaction RZ12 could perhaps help me, but how!?
Kind regards
Andreas
Hi,
SGEN uses the configured quota for asynchronous RFC. Do the following:
RZ12 => select group "parallel_generators" (probably the group already exists, otherwise create it and assign to the instance(s) you wish to participate in this group)
For each instance that is a member of the parallel_generators group:
- Double-click on the entry
- The relevant quota are "Max. Number of WPs Used (%)" and "Minimum Number of Free WPs". The first is treated as a percentage of the number of DIA processes. The second indicates how many DIA processes must be kept free.
Save the changes you make.
Example: assume that an instance has 20 DIA processes and you configure "Max. Number of WPs Used (%)" = 50 and "Minimum Number of Free WPs" = 5. This means that SGEN will start 10 parallel jobs (50% of 20 processes), unless so many DIA processes are busy on other activities that the number of free DIA would fall below 5.
Hope this helps,
Mark -
[solved] make -j (parallel jobs) in PKGBUILD ?
Hi,
I just rediscovered the -j option of make, which lets make run parallel jobs. On my computer (4 cores, SSD), it speeds things up.
I was wondering if it would be clean/permitted/a good idea to use this in PKGBUILDs, by automatically adjusting the -j parameter to the number of cores, or half the number of cores?
Cheers,
Charles
Last edited by cgo (2014-03-12 12:48:45)
This should not be added to PKGBUILDs. Makepkg already sets this if the user has opted for it in /etc/makepkg.conf. If you override this, you will be using a setting that works best on your machine to override a setting the user has found works best on their own machine.
For your own use, just set -j4 in makepkg.conf on your system.
Last edited by Trilby (2014-03-12 12:11:04) -
Pin dbms_cube.build parallel jobs to specific node on RAC
Is there a way to pin dbms_cube.build parallel jobs to a specific node on RAC? Currently a job with, say, parallelism of 10 spans all nodes of the RAC.
Is there a way to control it so the child jobs run on a subset of the nodes? I am unable to see how to tie job classes and services to dbms_cube.build.
Any suggestions will be hugely appreciated.
Used the undocumented JOB_CLASS parameter and it seems to be working fine.
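For reference, a hypothetical sketch of how the job class can be wired to a node-restricted service (the job class name, service name, and build script below are made up for illustration; JOB_CLASS is undocumented, so use at your own risk):

```sql
-- Create a job class tied to a service that runs only on the target node(s),
-- then hand it to DBMS_CUBE.BUILD (names are illustrative).
begin
  dbms_scheduler.create_job_class(
    job_class_name => 'CUBE_NODE1_CLASS',
    service        => 'cube_svc_node1');  -- service pinned to node 1 via srvctl
  dbms_cube.build(
    script      => 'MYCUBE USING (LOAD, SOLVE)',
    parallelism => 10,
    job_class   => 'CUBE_NODE1_CLASS');
end;
/
```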
SQL> desc sys.dbms_cube.build
Parameter Type Mode Default?
SCRIPT VARCHAR2 IN
METHOD VARCHAR2 IN Y
REFRESH_AFTER_ERRORS BOOLEAN IN Y
PARALLELISM BINARY_INTEGER IN Y
ATOMIC_REFRESH BOOLEAN IN Y
AUTOMATIC_ORDER BOOLEAN IN Y
ADD_DIMENSIONS BOOLEAN IN Y
SCHEDULER_JOB VARCHAR2 IN Y
MASTER_BUILD_ID BINARY_INTEGER IN Y
REBUILD_FREEPOOLS BOOLEAN IN Y
NESTED BOOLEAN IN Y
JOB_CLASS VARCHAR2 IN Y -
Hi ABAP Experts,
Anybody can give a pointer on......
"Program should create a parallel job for X number of invoices at a time"
Hi,
The logic behind this is something like this:
1 - Read the file from the application layer or from the local PC into an internal table.
2 - Read 1000 records of that internal table into another internal table.
3 - Schedule a job using JOB_OPEN for the first 1000 records.
4 - Once the job is created, read the next set of 1000 records to create one more job.
This will create the jobs in parallel from your program.
please close the thread, if solved
Regards,
Aditya
Edited by: aditya on Mar 3, 2010 5:30 PM -
SGE - Failed To Execute openmp parallel job
Hi, I'm having problems executing a simple parallel openmp job.
The job is scheduled and gets stuck in the 't' state; afterwards the sgeexecd daemon dies. I already tried changing the allocation_rule, but nothing changed.
I have no idea what it could be; I googled it but couldn't find anything, and even reinstalled everything.
Here is the exec messages:
09/28/2009 13:43:57| main|node10|I|starting up SGE 6.2u3 (lx24-amd64)
Here are the qmaster messages:
09/28/2009 13:46:28| timer|node10|W|failed to deliver job 24.1 to queue "all.q@node10"
09/28/2009 13:46:28|listen|node10|E|commlib error: got read error (closing "node10/execd/1")
Any help is much appreciated!
Thanks in advance.
Here is my PE:
# Version: 6.2u3
# DO NOT MODIFY THIS FILE MANUALLY!
pe_name smp
slots 4
user_lists NONE
xuser_lists NONE
start_proc_args NONE
stop_proc_args NONE
allocation_rule $pe_slots
control_slaves TRUE
job_is_first_task FALSE
urgency_slots min
accounting_summary TRUE
And here is my script:
#$ -N dotPRODUCT
#$ -S /bin/bash
#$ -o ~
#$ -e ~
#$ -q all.q
#$ -pe smp 2
#$ -v OMP_NUM_THREADS=$NSLOTS
cd /tmp
gcc -fopenmp dot_product.c -lm
mv a.out dot_product
./dot_product > dot_product_output_nt2.txt
echo "Program output written to dot_product_output_nt2.txt"
rm dot_product
Marcos.
You'll probably find the GridEngine mailing lists more responsive: http://gridengine.sunsource.net/ds/viewForums.do
-
How to prevent parallel job scheduling through code?
Hi,
If a user schedules a job to run in the background, I want to prevent the same job from being scheduled again before the first run finishes. Is there a function module or code available for that purpose?
Please help.
Moderator message: duplicate post locked.
Edited by: Thomas Zloch on Apr 27, 2011 3:37 PM
Hi,
You can use the function module SHOW_JOBSTATE; it lets you know whether the job is finished or active.
If it's active, you have to put in a WAIT, and put all of this inside a WHILE loop.
Regards,
Gabriela -
Oracle 11g R2 impdp parallel job does not spawn
Trying to import through Data Pump, but parallel is not working; sequence creation is running.
Any guess why parallel is not working?
Oracle 11g R2, ASM + RAC, Linux.
Thanks in advance.
Better late than never...
We have seen a few issues with expdp/impdp in 11gr2 on linux.
Problem description: an impdp that was triggered to run with PARALLEL=4 failed to load the following table. This table was specified in the QUERY option.
Error:
ORA-31693: Table data object "DBATEST"."TEST" failed to load/unload and is being skipped due to error:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
There was a patch for this: patch 8393456.
Also we saw the export would not work correctly in parallel.
9243068: EXPDP RETURNS ORA-00922: MISSING OR INVALID OPTION WHEN USING CONSISTENT=TRUE
We did not see an error here, but parallel would not spawn any workers.
Hope this helps -
Running a PERFORM parallel jobs
Hi Experts,
I have a dynamic internal table which gets one lakh (100,000) records filled at run time. I apply some logic to format data based on the records filled. When I execute this program it takes a lot of time, because it is a dynamic internal table and the data volume is very large.
So I need to run this logic in a PERFORM and call it in the background.
Can you guys help me out with how to split the data and call this in the background?
Thanks in Advance
Gow
Hi,
You can do this if you want to execute in background mode. Follow the steps below:
1) Create a second program. This program will be executed in the background when called from the first program using the SUBMIT statement. The data will be imported into this second program using the IMPORT statement into an internal table. Write your required logic using that internal table's entries.
2) In your first program, as you said you want to split the records and process them in the background, split and populate the records into a separate internal table and then call the second program using the SUBMIT statement. Before calling the second program, you have to export the internal table using the EXPORT statement.
If the data gets successfully processed in the second program, then a spool will be generated for every call. You can check the background job status in transaction SM37.
thanks,
sksingh -
We want to block parallel jobs by using the interception feature in CPS?
Dear Gurus,
We want to block parallel jobs. When more than 2 of the critical/resource-consuming jobs run at the same time, the 3rd one must not become active; it must be changed to hold status and wait. Can this be done using the interception feature in CPS-Redwood?
Does CPS support this requirement?
Has anyone used the interception feature in CPS to block parallel jobs?
Please advise.
Best regards,
Supat Jupraphat.
Hello Supat,
If I understand you correctly, you want CPS to run at most 2 jobs at the same time. The third job, which is by then already triggered/scheduled/waiting, needs to wait.
We do this as follows (we "told" CPS how many jobs can run at the same time - this is called app server load balancing):
Navigate to Environment. There select "Process Servers".
Edit your process server.
In the edit window, you can set the "execution size". (test yourself for the needed value)
For these settings to work, you must enable app server load balancing (you can find this too under "Environment" - "SAP": edit "SAP" and tick the checkbox "App server load balancing").
Check the CPS AdminGuide for "load balancing"
Hope you will find the appropriate solution!
(I have never used the interception feature - I cannot tell you anything about it, and I doubt that it is the feature you're looking for.)
Parallel run of the same function from multiple jobs
Hello, everyone!
I have a function which accepts a date range, reads invoices from a partitioned by date table and writes output to a partitioned by invoice table. Each invoice can have records only with one date, so both tables may have one invoice only in one partition, i.e. partitions do not overlap. Function commits after processing each date. The whole process was running about 6 hrs with 46 million records in source table.
We are expecting source table to grow over 150 million rows, so we decided to split it into 3 parallel jobs and each job will process 1/3 of dates, and, as a result, 1/3 of invoices.
So, we call this function from 3 concurrent UNIX jobs and each job passes its own range of dates.
What we noticed is that even if we run 3 jobs concurrently, they do not run that way! When the 1st job ends after 2 hrs of run, the number of committed rows in the target table equals the number of rows inserted by that job. When the 2nd job ends after 4 hrs, the number of rows in the target table equals the sum of the two jobs. And the 3rd job ends only after 6 hrs.
So, instead of improving a process by splitting it into 3 parallel jobs we ended up having 3 jobs instead of one with the same 6 hrs until target table is loaded.
My question is: how do we make it work? It looks like Oracle 11g is smart enough to recognize that all 3 jobs are calling the same function and executes the function only once at a time. I.e., it looks as if only one copy of the function is loaded into memory at a time, even when it is called by 3 different sessions.
The function itself has very complicated logic, does a lot of verification by joining to other tables, and we do not want to maintain 3 copies of the same code under different names. Besides, the plan is that if we hit a performance problem at 150 mln rows, we will split the work across more concurrent jobs, for example 6 or 8. Obviously we do not want to maintain that many copies of the same code by copying this function under other names.
I was monitoring the jobs by querying V$SESSION and V$SQLAREA (ROWS_PROCESSED and EXECUTIONS) and I can see that each job has its own set of SIDs (i.e. runs up to 8 parallel processes), but the number of committed rows is always equal to the number of rows from the 1st job, then 2nd+1st, etc. So it looks like all processes of the 2nd and 3rd jobs are waiting until the 1st one is done.
Any ideas?
OK, this is my SQL and results (some output columns are omitted as irrelevant):
SELECT
TRIM ( SESS.OSUSER ) "OSUser"
, TRIM ( SESS.USERNAME ) "OraUser"
, NVL(TRIM(SESS.SCHEMANAME),'------') "Schema"
, SESS.AUDSID "AudSID"
, SESS.SID "SID"
, TO_CHAR(SESS.LOGON_TIME,'HH24:MI:SS') "Sess Strt"
, SUBSTR(SQLAREA.FIRST_LOAD_TIME,12) "Tran Strt"
, NUMTODSINTERVAL((SYSDATE-TO_DATE(SQLAREA.FIRST_LOAD_TIME,'yyyy-mm-dd hh24:mi:ss')),'DAY') "Tran Time"
, SQLAREA.EXECUTIONS "Execs"
, TO_CHAR(SQLAREA.ROWS_PROCESSED,'999,999,999') "Rows"
, TO_CHAR(TRAN.USED_UREC,'999,999,999') "Undo Rec"
, TO_CHAR(TRAN.USED_UBLK,'999,999,999') "Undo Blks"
, SQLAREA.SORTS "Sorts"
, SQLAREA.FETCHES "Fetches"
, SQLAREA.LOADS "Loads"
, SQLAREA.PARSE_CALLS "Parse Calls"
, TRIM ( SESS.PROGRAM ) "Program"
, SESS.SERIAL# "Serial#"
, TRAN.STATUS "Status"
, SESS.STATE "State"
, SESS.EVENT "Event"
, SESS.P1TEXT||' '||SESS.P1 "P1"
, SESS.P2TEXT||' '||SESS.P2 "P2"
, SESS.P3TEXT||' '||SESS.P3 "P3"
, SESS.WAIT_CLASS "Wait Class"
, NUMTODSINTERVAL(SESS.WAIT_TIME_MICRO/1000000,'SECOND') "Wait Time"
, NUMTODSINTERVAL(SQLAREA.CONCURRENCY_WAIT_TIME/1000000,'SECOND') "Wait Concurr"
, NUMTODSINTERVAL(SQLAREA.CLUSTER_WAIT_TIME/1000000,'SECOND') "Wait Cluster"
, NUMTODSINTERVAL(SQLAREA.USER_IO_WAIT_TIME/1000000,'SECOND') "Wait I/O"
, SESS.ROW_WAIT_FILE# "Row Wait File"
, SESS.ROW_WAIT_OBJ# "Row Wait Obj"
, SESS.USER# "User#"
, SESS.OWNERID "OwnerID"
, SESS.SCHEMA# "Schema#"
, TRIM ( SESS.PROCESS ) "Process"
, NUMTODSINTERVAL(SQLAREA.CPU_TIME/1000000,'SECOND') "CPU Time"
, NUMTODSINTERVAL(SQLAREA.ELAPSED_TIME/1000000,'SECOND') "Elapsed Time"
, SQLAREA.DISK_READS "Disk Reads"
, SQLAREA.DIRECT_WRITES "Direct Writes"
, SQLAREA.BUFFER_GETS "Buffers"
, SQLAREA.SHARABLE_MEM "Sharable Memory"
, SQLAREA.PERSISTENT_MEM "Persistent Memory"
, SQLAREA.RUNTIME_MEM "RunTime Memory"
, TRIM ( SESS.MACHINE ) "Machine"
, TRIM ( SESS.TERMINAL ) "Terminal"
, TRIM ( SESS.TYPE ) "Type"
, SQLAREA.MODULE "Module"
, SESS.SERVICE_NAME "Service name"
FROM V$SESSION SESS
INNER JOIN V$SQLAREA SQLAREA
ON SESS.SQL_ADDRESS = SQLAREA.ADDRESS
and UPPER(SESS.STATUS) = 'ACTIVE'
LEFT JOIN V$TRANSACTION TRAN
ON TRAN.ADDR = SESS.TADDR
ORDER BY SESS.OSUSER
,SESS.USERNAME
,SESS.AUDSID
,NVL(SESS.SCHEMANAME,' ')
,SESS.SID
AudSID SID Sess Strt Tran Strt Tran Time Execs Rows Undo Rec Undo Blks Sorts Fetches Loads Parse Calls Status State Event P1 P2 P3 Wait Class Wait Time Wait Concurr Wait Cluster Wait I/O Row Wait File Row Wait Obj Process CPU Time Elapsed Time Disk Reads Direct Writes Buffers Sharable Memory Persistent Memory RunTime Memory
409585 272 22:15:36 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITED SHORT TIME PX Deq: Execute Reply sleeptime/senderid 200 passes 2 0 Idle 0 0:0:0.436000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 7 21777 22739 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 203 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.9674000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 25 124730 4180 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 210 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.11714000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 24 124730 22854 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 231 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.4623000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 46 21451 4178 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 243 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITED SHORT TIME PX qref latch function 154 sleeptime 13835058061074451432 qref 0 Other 0 0:0:0.4000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 35 21451 3550 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 252 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.19815000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 49 21451 22860 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 273 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.11621000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 22 124730 4182 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 277 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING db file parallel read files 20 blocks 125 requests 125 User I/O 0 0:0:0.242651000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 39 21451 4184 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 283 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.2781000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 42 21451 3552 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 295 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.24424000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 40 21451 22862 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 311 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.15788000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 31 21451 22856 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409586 242 22:15:36 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITED KNOWN TIME PX Deq: Execute Reply sleeptime/senderid 200 passes 1 0 Idle 0 0:0:0.522344000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 28 137723 22736 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409586 192 22:29:20 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.14334000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 31 21462 4202 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409586 222 22:29:20 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.16694000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 37 21462 4194 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409586 233 22:29:20 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.7731000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 44 21462 4198 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409586 253 22:29:20 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING db file parallel read files 21 blocks 125 requests 125 User I/O 0 0:0:0.792518000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 39 21462 4204 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409586 259 22:29:20 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.2961000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 35 21462 4196 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409586 291 22:29:20 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.9548000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 35 21462 4200 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409587 236 22:15:36 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq: Table Q Normal sleeptime/senderid 200 passes 2 0 Idle 0 0:0:0.91548000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 25 124870 22831 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409587 207 22:30:30 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq: Execution Msg sleeptime/senderid 268566527 passes 3 0 Idle 0 0:0:0.644662000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 43 21423 4208 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409587 241 22:30:30 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq: Execution Msg sleeptime/senderid 268566527 passes 3 0 Idle 0 0:0:0.644594000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 47 21423 4192 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409587 297 22:30:30 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING db file parallel read files 20 blocks 109 requests 109 User I/O 0 0:0:0.793261000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 12 21316 4206 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
Here I found one interesting query: http://www.pythian.com/news/922/recent-spike-report-from-vactive_session_history-ash/
But it does not help me.
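One quick check worth making (a sketch; assumes access to V$SESSION on 10g or later): see whether the sessions of the 2nd and 3rd jobs are actually blocked by the 1st job, rather than genuinely running.

```sql
-- Who is blocking whom (each row: a waiter and its blocker).
select blocker.sid  blocker_sid,
       waiter.sid   waiter_sid,
       waiter.event,
       waiter.seconds_in_wait
from   v$session waiter
join   v$session blocker
       on waiter.blocking_session = blocker.sid
where  waiter.blocking_session is not null;
```

If rows show up here while the jobs run, the serialization is lock contention (e.g. an explicit DBMS_LOCK or a row-level lock inside the function), not Oracle refusing to run the same function in more than one session.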