Running multiple jobs (or parallelism) in non-project mode
So, I have just converted my GUI-based project-mode project to a Tcl-based non-project mode flow.
I have 8 OOC IP modules that I synthesise before the main design using synth_ip. This happens sequentially, rather than in parallel as it did when I had a run for each module in the GUI; there I would just set the number of jobs to 8 when launching the run, which was far quicker.
Without creating runs for each IP, can I achieve the same kind of parallelism with my non-project Tcl script?
No. There are sort of two issues.
The first is that in non-project batch mode there is not (supposed to be) a mechanism for managing files and filesets. Each synthesis run that you want to launch has its own fileset, and there would normally be no way (in non-project mode) of managing this within the Tcl environment. However, in the case of IP, this sort of isn't true - IP are almost by definition little projects, so this problem doesn't really apply here.
The second is that in non-project mode, there is a single flow of execution - there is no concept of "background jobs", which is what project mode uses; there is a single thread of Tcl execution that runs linearly. The processes invoked by this thread may use multiple processors (the place_design and route_design processes do), but only one process runs at a time. Furthermore, in non-project batch, there is no equivalent to fork/join (which is essentially what launch_runs/wait_on_run is).
So, you have two choices. One is to compile the IP outside your main Vivado run: before you launch it, use your OS to launch 8 separate Vivado processes, each of which has its own script to compile one of your IP.
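The first option can be sketched as a small OS-level launcher. Everything here is an assumption for illustration (the per-IP script names synth_ip_N.tcl, the log names, and the 8-way split are not from the post), and the script falls back to a dry run via echo when vivado is not on the PATH:

```shell
#!/bin/sh
# Launch one Vivado batch process per OOC IP, in parallel, then wait for all.
# synth_ip_N.tcl is a hypothetical per-IP script (e.g. read_ip + synth_ip).
VIVADO=vivado
command -v "$VIVADO" >/dev/null 2>&1 || VIVADO=echo   # dry run if Vivado absent

pids=""
for i in 1 2 3 4 5 6 7 8; do
  "$VIVADO" -mode batch -source "synth_ip_${i}.tcl" -log "ip_${i}.log" &
  pids="$pids $!"
done

failures=0
for p in $pids; do
  wait "$p" || failures=$((failures + 1))   # collect each job's exit status
done
echo "IP synthesis finished with $failures failure(s)"
```

With the real vivado on the PATH, this is essentially the same fork/join that the GUI performs when you launch runs with 8 jobs, just done by the operating system instead of by Vivado.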
The other is to compile the IP once, and keep the generated products around from run to run; your IP does not need to be synthesized each time - each synthesis run should end up with exactly the same netlist. You can even revision control the IP directory (with all its targets). This way, during "normal" runs, you skip the IP synthesis entirely and go straight to synthesizing your design.
Avrum
Similar Messages
-
Running SETUP jobs in parallel in Production R/3
Hi All,
I have identified the number ranges(for Billing) for which I have to run SETUP.
For example, I have 5 parameters/variables defined in 'oli9bw' (1 variable per range in 'oli9bw'), and say all are new runs.
How can I run these variables/parameters in parallel in order to reduce the downtime of the production system?
Do I need ABAP code written to trigger these variables (all at once), which in turn fills the setup tables?
Or is there a standard ABAP program where I can specify these variables and then kick off the program?
Note: I have already taken steps to have enough processes to run these 5 jobs in parallel.
Any help is much appreciated.
Thanks,
Jb
Abhijit,
I have an alternate approach to running SETUPs for historical data, since I am afraid that running SETUPs for 80 jobs may take a long time in PRODUCTION, and I want to avoid any complications.
Please let me know if this works
For future data:
1. Clear qRFC and RSA7 by running the delta twice for the Billing DSO, so that the delta queue for 2LIS_13_VDITM is empty before transporting the new structure.
2. Send the new structure of 2LIS_13_VDITM to R/3 Production while R/3 Prod is down.
3. Check the delta queue for this DataSource and see if it is active with the new structure.
4. Turn on job control so that users can post new documents, then turn the BW process chain back on.
This way, we don't run SETUPs for historical data, but going forward we have data with the new structure coming into BW.
In order to capture History:
1. I thought of building a generic DataSource based on a view on VBRK and VBRP with a selection on Bill_date = 03/2008 to date.
2. I suppose the view can handle 80k billing documents.
3. Then dump the data into a new DSO (the keys in the DSO are Bill_doc and Bill_item; the data fields are just, say, 'xx' and 'yy', the new fields that have been added to the extract structure).
4. Then load as a full load to the Billing ODS (Bill_doc and Bill_item are the keys in the Billing DSO).
5. This way, all the data from the generic DS will overwrite the Billing DSO with the new fields 'xx' and 'yy'.
By the way, can we have a repair full load from DSO to DSO, so that the deltas won't get disturbed in the Billing DSO?
Please let me know if this works
Thanks,
Jb -
Running Multiple Scenarios in Parallel
Hello all,
I am trying to figure out how to execute multiple interfaces/scenarios inside a single parent session. I found some useful guidance in the thread below.
Link: Running multiple interfaces within a package in parallel
That led me to use ODIStartScen in asynchronous mode. This works for the most part, but this method generates a new session number for every child thread. Is there a way to run the child sessions inside the parent?
Thanks in advance
Mike
Hi Mike,
The child sessions are always tagged to the parent session. You can find this in the work repository's SNP_SESSION table; the field name is PARENT_SESS_NO.
To keep the parent's session id available in the children, try the below:
1. Declare a variable with the refresh query: SELECT <%=odiRef.getInfo("SESS_NO")%>
2. Refresh this variable in the Parent Package.
3. In all the child packages, just declare this variable. This helps ODI identify the variable, and since the value has already been refreshed, it will be used during execution.
4. Use this variable and do what you are interested in.
Hope this helps.
Thanks,
Nithesh B -
How can I run multiple MIDlets in parallel via the Raspberry Pi command-line interface?
I am trying to run multiple MIDlets at the same time on my Raspberry Pi.
From NetBeans I am able to deploy and run them. But for my application I want to auto-start multiple MIDlets on the Raspberry Pi.
The Java ME 8 command-line interface has commands to install a MIDlet and run a MIDlet,
but I am not able to start multiple MIDlets.
Can someone explain to me how I can do this?
Hi!
Could you please clarify a bit: do you want auto-start behavior for your applications, or to be able to start them manually via the CLI?
"Auto start" means that applications are started by Java once you start Java itself, and to make that happen you must mark your applications appropriately (check the MEEP spec for the "MIDlet-<n>-Type" application attribute value "autostart"). If this is the case, have you added this attribute?
If you want to run them manually, you should be able to do so via the CLI the same way you start one application (via the "ams-run" command).
Either way, could you also please clarify what the symptoms of the problem are? E.g., on the CLI, what messages do you receive back (if the start is unsuccessful, what is the error message, etc.)?
In addition, please mind that there is a limit on the number of simultaneous VM tasks. If I'm not mistaken, by default it is 6: 2 of them are used by system code, and each running MIDlet consumes 1 VM task. If you want to increase it, please change the value of the "MAX_ISOLATES" property in the jwc_properties.ini file. The maximum supported value is 16.
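As a concrete sketch of that property change (the exact file location and surrounding entries vary by Java ME 8 installation, so treat this as an assumption):

```ini
# jwc_properties.ini - limit on simultaneous VM tasks (default 6, max 16);
# 2 tasks are reserved for system code, each running MIDlet uses 1
MAX_ISOLATES = 16
```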
Regards,
Andrey -
Using MIG in OOC non-project mode
We're trying to use the MIG with Vivado v2015.2 in an OOC non-project workflow for a Kintex UltraScale device.
However, trying to build the MIG itself out-of-context fails at the opt_design stage with an error that the MIG configuration is incomplete.
Putting the MIG in the top-level design means that as soon as a routed design checkpoint is loaded, building fails with:
ERROR: [Mig 66-106] The MIG I/O ports have been moved since the last time the design was implemented. Due to a current limitation, Vivado cannot execute opt_design in this situation. In order to proceed with the current task, you must first save and close the current design in memory. The flow must be restarted with a synthesized design by either loading the synthesized design checkpoint or re-running synthesis.
I've tried all sorts of things - read_checkpoint + synth_design, or creating a synthesized design with black boxes and then add_files & link_design - but in the end the only way to avoid this issue seems to be to give up on OOC implementation altogether.
Are there any known workarounds for these issues?
-
Running multiple queries in parallel in a stored procedure
Hi,
I have two queries, one on table1 (group bys) and a second on table2 (again group bys), and then I need to compare the results.
Is it possible to execute both of them in parallel, as they have no dependencies? Since this is going to be called from a Java program, right now the only way I know is
to execute them in multiple threads, but it would be great to have ideas on how to do it in a stored procedure, since a series of steps (all SQL queries) has to be carried out
and the data has to be locked for that period.
Just for clarity the steps are :
Step 1: select-for-update cursor to lock the rows from table1
Step 2: open the cursor, then select data from table1 and perform validations
Step 3: select data using group bys on table1 and select data using group bys on table2 (this needs to be done in parallel, just to gain time)
Step 4: compare data
Step 5: insert statement
Step 6: close the cursor, then commit and exit
This might be a really silly question, but please pardon my ignorance as I am not much of a SQL guy!
Thanks,
Neetesh
Edited by: user13312817 on 10 Nov, 2011 8:27 AM
Maybe something like this (not tested)?
SELECT t1.referenceno
, t1.col1
, t1.col2
, t1.entityparent
, t1.class
, t1.annual - t2.annual
FROM (
SELECT table1.Referenceno
, table1.col1
, table1.col2
, entitycodes.entityparent
, classes.class
, SUM(nvl(table1.AnnualAmt,0)) as Annual
FROM table1
JOIN entitycodes ON entitycodes.entitycode = table1.fundentity
JOIN classes ON classes.accountcode = table1.account
GROUP BY table1.Referenceno
, table1.col1
, table1.col2
, entitycodes.entityparent
, classes.class
) t1
JOIN (
SELECT table2.Referenceno
, table2.col1
, table2.col2
, entitycodes.entityparent
, classes.class
, SUM(nvl(table2.AnnualAmt,0)) as Annual
FROM table2
JOIN entitycodes ON entitycodes.entitycode = table2.fundentity
JOIN classes ON classes.accountcode = table2.account
GROUP BY table2.Referenceno
, table2.col1
, table2.col2
, entitycodes.entityparent
, classes.class
) t2 ON t2.referenceno = t1.referenceno
AND t2.col1 = t1.col1
AND t2.col2 = t1.col2
AND t2.entityparent = t1.entityparent
AND t2.class = t1.class -
I am looking to use multithreading in order to run multiple tests in parallel on one UUT.
I am looking to multithread multiple tests in parallel on one UUT. I looked on the main site and all the examples are in zip files that I can't seem to download successfully. Does anyone have any example files or white papers on this subject that I can view?
Solved!
Go to Solution.
Put one test in a subsequence, then call that subsequence using New Thread as the Execution Option:
jigg
CTA, CLA
teststandhelp.com
~Will work for kudos and/or BBQ~ -
Run Multiple SSIS Packages in parallel using SQL job
Hi ,
We have a File Watcher process to detect the arrival of some files in a particular location. If the files arrive, another package has to be kick-started.
There are around 10 such File Watcher processes looking for 10 different categories of files. All 10 File Watchers have to be started at the same time.
These could be automated by creating 10 different SQL jobs, which is the safer option. But if a file of a new category arrives, then another job has to be created; somehow I feel this is not the right approach.
Another option is to create one more package and execute all 10 File Watcher packages as 10 different Execute Package tasks in parallel.
But in case they don't execute in parallel, i.e., if any of the packages waits for resources, it does not satisfy our functional requirement. I have to be 100% sure that all 10 execute in parallel.
NOTE: There are 8 logical processors in this server.
(SELECT cpu_count FROM sys.dm_os_sys_info)
i.e., 10 tasks can run in parallel, but somehow I suspect that only 2 would run exactly in parallel with the other tasks waiting, so I just don't want to try this option.
Can someone please suggest a better way to automate these 10 File Watcher processes in a single job?
Thanks in advance,
Raksha
Raksha
Hi Jim,
For each file type there is a separate package which needs to be run.
For example, package-A processes FileType-A and package-B processes FileType-B. All these are independent processes which run in parallel as of now.
The current requirement is to have a File Watcher process for each of these packages. So FileWatcher-A polls for FileType-A and, if any FileType-A file is found, it kick-starts package-A. In the same way there is FileWatcher-B, which looks for FileType-B and starts package-B when any FileType-B file is found. There are 10 such File Watcher processes.
These File Watcher processes are independent and need to start daily at 7 AM and run for 3 hrs.
Please let me know if it is possible to run multiple packages in parallel using a SQL job.
NOTE: Somehow I see it as a risk to run these packages in parallel using Execute Package tasks and call that master package in a job. I suspect only 2 packages would run in parallel while the other packages wait for resources.
Thanks,
Raksha
Raksha -
Running jobs sequentially instead of in parallel
Hi All,
My script runs multiple jobs by iterating over them in a loop, one by one. They are getting executed in parallel, whereas I want them to run sequentially, each starting only after the previous one has finished. To ensure this I have added this logic in my code:
[Code for Job name reading, parameters setting goes here]
jcsSession.persist();
while ((infoJob.getStatus().toString().matches("Scheduled")) || (infoJob.getStatus().toString().matches("Running"))) {
jcsOut.println("infojob still running"+infoJob.getStatus().toString());
This should run after each job's persist statement.
Ideally the loop should end as soon as the job reaches the 'Error' or 'Completed' state, i.e. when the job ends, but when I run the script this while loop runs forever, causing an infinite loop.
Because of this, the first job ends but the script does not move forward to the next job.
Please tell me what I am doing wrong here, or if there is any other way to make the jobs run sequentially through Redwood.
Thanks,
Archana
Hi Archana,
How about jcsSession.waitForJob(infoJob);
Regards,
HP -
Submitting multiple jobs on the same table via trigger
Hi All,
I have a trigger that runs multiple jobs using dbms_job on the same table. I am trying to do a complete refresh of two materialized views via dbms_job.
The issue is that when data is inserted into the NET_CAB table, the trigger kicks off both procedures, but only the first materialized view is refreshed, not the other one.
Below are the trigger, the procedures, and the materialized views.
<pre>
create or replace
TRIGGER NET_CAB_TRG
AFTER INSERT OR UPDATE OR DELETE ON NET_CAB
DECLARE
pbl NUMBER;
pbl1 number;
BEGIN
SYS.DBMS_JOB.SUBMIT( JOB => pbl,what => 'P_CAB_PROC;' ) ;
SYS.DBMS_JOB.SUBMIT( JOB => pbl1,what => 'P_CABAS_PROC;') ;
END;
</pre>
<pre>
create or replace
procedure P_CAB_PROC
is
BEGIN
dbms_mview.REFRESH('P_CAB','C');
COMMIT;
END;
</pre>
<pre>
create or replace
procedure P_CABAS_PROC
is
BEGIN
dbms_mview.REFRESH('P_CABAS','C');
COMMIT;
END;
</pre>
<pre>
CREATE MATERIALIZED VIEW P_CAB
BUILD DEFERRED
USING INDEX
REFRESH COMPLETE ON DEMAND
USING DEFAULT LOCAL ROLLBACK SEGMENT
DISABLE QUERY REWRITE
AS
SELECT
seq_nextval AS ID,
NAME,
SEGMENT_ID,
reproject(geometry) AS GEOMETRY
FROM NET_CAB
where sdo_geom.validate_geometry(geometry,0.005) = 'TRUE'
</pre>
<pre>
CREATE MATERIALIZED VIEW P_CABAS
BUILD DEFERRED
USING INDEX
REFRESH COMPLETE ON DEMAND
USING DEFAULT LOCAL ROLLBACK SEGMENT
DISABLE QUERY REWRITE
AS
SELECT
seq_nextval AS ID,
NAME,
SEGMENT_ID,
reproject(geometry) AS GEOMETRY
FROM NET_CAB
where sdo_geom.validate_geometry(geometry,0.005) = 'TRUE'
AND cis > 4;
</pre>
Edited by: CrackerJack on May 22, 2012 8:58 PM
I can run many procedures in a job:
BEGIN
SYS.DBMS_SCHEDULER.CREATE_JOB
( job_name => 'JOB_REPORT_FPD'
,start_date => TO_TIMESTAMP_TZ('2012/05/31 23:30:00.000000 +07:00','yyyy/mm/dd hh24:mi:ss.ff tzh:tzm')
,repeat_interval => 'FREQ=MONTHLY;BYMONTHDAY=-1'
,end_date => NULL
,job_class => 'DEFAULT_JOB_CLASS'
,job_type => 'PLSQL_BLOCK'
,job_action => '
DECLARE
BEGIN
ibox_file.fpd_nbot_report;
ibox_file.fpd_nbot_report(''NBOT'');
ibox_file.order_report;
COMMIT;
EXCEPTION
WHEN OTHERS THEN ROLLBACK;
END;'
,comments => 'USED FOR REPORTING FPD');
SYS.DBMS_SCHEDULER.SET_ATTRIBUTE
( name => 'JOB_REPORT_FPD'
,attribute => 'RESTARTABLE'
,value => FALSE);
SYS.DBMS_SCHEDULER.SET_ATTRIBUTE
( name => 'JOB_REPORT_FPD'
,attribute => 'LOGGING_LEVEL'
,value => SYS.DBMS_SCHEDULER.LOGGING_RUNS);
SYS.DBMS_SCHEDULER.SET_ATTRIBUTE_NULL
( name => 'JOB_REPORT_FPD'
,attribute => 'MAX_FAILURES');
SYS.DBMS_SCHEDULER.SET_ATTRIBUTE_NULL
( name => 'JOB_REPORT_FPD'
,attribute => 'MAX_RUNS');
BEGIN
SYS.DBMS_SCHEDULER.SET_ATTRIBUTE
( name => 'JOB_REPORT_FPD'
,attribute => 'STOP_ON_WINDOW_CLOSE'
,value => FALSE);
EXCEPTION
-- could fail if program is of type EXECUTABLE...
WHEN OTHERS THEN
NULL;
END;
SYS.DBMS_SCHEDULER.SET_ATTRIBUTE
( name => 'JOB_REPORT_FPD'
,attribute => 'JOB_PRIORITY'
,value => 3);
SYS.DBMS_SCHEDULER.SET_ATTRIBUTE_NULL
( name => 'JOB_REPORT_FPD'
,attribute => 'SCHEDULE_LIMIT');
SYS.DBMS_SCHEDULER.SET_ATTRIBUTE
( name => 'JOB_REPORT_FPD'
,attribute => 'AUTO_DROP'
,value => FALSE);
SYS.DBMS_SCHEDULER.ENABLE
(name => 'JOB_REPORT_FPD');
END;
/ -
While running synchronization jobs I am getting an error and the program terminates
Dear All,
While running the synchronization jobs I am getting an ABAP dump (SAPMSSY1 and CL_GRAC_USER_REP) in the GRC system.
Has somebody had such a problem?
Regards,
Abhishek
Dear Colleen,
That was correct! I was running multiple jobs at the same time and they may have been trying to access the same table.
I am surprised that the SAP ST22 dumps also state that I must send a message to SAP.
Regards,
Abhishek -
Hi,
I have tried creating and running multiple jobs using the following statements. The jobs execute the given procedure, but USER_SCHEDULER_JOBS.last_start_date is null.
-- creation of the job
DBMS_SCHEDULER.CREATE_JOB (job_name => 'demo'||i,job_type => 'STORED_PROCEDURE',number_of_arguments => 2,job_action => 'POPULATE_DATA');
-- running the job using these commands
dbms_scheduler.set_job_argument_value(PROCESS_NAME,1,''||START_RANGE);
dbms_scheduler.set_job_argument_value(PROCESS_NAME,2,''||END_RANGE);
dbms_scheduler.set_job_argument_value(PROCESS_NAME,3,''||PROCESS_NAME);
DBMS_SCHEDULER.RUN_JOB(PROCESS_NAME);
I am able to see data getting populated by each job, but USER_SCHEDULER_JOBS has last_start_date as null.
Any help will be highly appreciated.
Thanks,
Vaseem Saeed.
Hi, read this link; there is some explanation about start_date being null:
http://docs.oracle.com/cd/E11882_01/appdev.112/e16760/d_sched.htm
Parallel run of the same function from multiple jobs
Hello, everyone!
I have a function which accepts a date range, reads invoices from a table partitioned by date, and writes output to a table partitioned by invoice. Each invoice can have records with only one date, so each table holds a given invoice in only one partition, i.e. the partitions do not overlap. The function commits after processing each date. The whole process ran about 6 hrs with 46 million records in the source table.
We are expecting source table to grow over 150 million rows, so we decided to split it into 3 parallel jobs and each job will process 1/3 of dates, and, as a result, 1/3 of invoices.
So, we call this function from 3 concurrent UNIX jobs and each job passes its own range of dates.
What we noticed is that even though we run the 3 jobs concurrently, they do not run that way! When the 1st job ends after 2 hrs of running, the number of committed rows in the target table equals the number of rows inserted by that job. When the 2nd job ends after 4 hrs, the number of rows in the target table equals the sum of the two jobs. And the 3rd job ends only after 6 hrs.
So, instead of improving the process by splitting it into 3 parallel jobs, we ended up having 3 jobs instead of one, with the same 6 hrs until the target table is loaded.
My question is: how do we make this work? It looks as if Oracle 11g is smart enough to recognize that all 3 jobs are calling the same function, and executes the function only once at a time - as if only one copy of the function is loaded into memory at a time even when it is called by 3 different sessions.
The function itself has very complicated logic and does a lot of verification by joining to other tables, and we do not want to maintain 3 copies of the same code under different names. Besides, the plan is that if we have a performance problem with 150 mln rows, we will split the work across more concurrent jobs, for example 6 or 8. Obviously we do not want to maintain that many copies of the same code by copying the function under other names.
I was monitoring the jobs by querying V$SESSION and V$SQLAREA (ROWS_PROCESSED and EXECUTIONS) and I can see that each job has its own set of SIDs (i.e. runs up to 8 parallel processes), but the number of committed rows always equals the number of rows from the 1st job, then the 2nd + 1st, etc. So it looks like all processes of the 2nd and 3rd jobs wait until the 1st one is done.
Any ideas?
OK, this is my SQL and results (some output columns are omitted as irrelevant):
SELECT
TRIM ( SESS.OSUSER ) "OSUser"
, TRIM ( SESS.USERNAME ) "OraUser"
, NVL(TRIM(SESS.SCHEMANAME),'------') "Schema"
, SESS.AUDSID "AudSID"
, SESS.SID "SID"
, TO_CHAR(SESS.LOGON_TIME,'HH24:MI:SS') "Sess Strt"
, SUBSTR(SQLAREA.FIRST_LOAD_TIME,12) "Tran Strt"
, NUMTODSINTERVAL((SYSDATE-TO_DATE(SQLAREA.FIRST_LOAD_TIME,'yyyy-mm-dd hh24:mi:ss')),'DAY') "Tran Time"
, SQLAREA.EXECUTIONS "Execs"
, TO_CHAR(SQLAREA.ROWS_PROCESSED,'999,999,999') "Rows"
, TO_CHAR(TRAN.USED_UREC,'999,999,999') "Undo Rec"
, TO_CHAR(TRAN.USED_UBLK,'999,999,999') "Undo Blks"
, SQLAREA.SORTS "Sorts"
, SQLAREA.FETCHES "Fetches"
, SQLAREA.LOADS "Loads"
, SQLAREA.PARSE_CALLS "Parse Calls"
, TRIM ( SESS.PROGRAM ) "Program"
, SESS.SERIAL# "Serial#"
, TRAN.STATUS "Status"
, SESS.STATE "State"
, SESS.EVENT "Event"
, SESS.P1TEXT||' '||SESS.P1 "P1"
, SESS.P2TEXT||' '||SESS.P2 "P2"
, SESS.P3TEXT||' '||SESS.P3 "P3"
, SESS.WAIT_CLASS "Wait Class"
, NUMTODSINTERVAL(SESS.WAIT_TIME_MICRO/1000000,'SECOND') "Wait Time"
, NUMTODSINTERVAL(SQLAREA.CONCURRENCY_WAIT_TIME/1000000,'SECOND') "Wait Concurr"
, NUMTODSINTERVAL(SQLAREA.CLUSTER_WAIT_TIME/1000000,'SECOND') "Wait Cluster"
, NUMTODSINTERVAL(SQLAREA.USER_IO_WAIT_TIME/1000000,'SECOND') "Wait I/O"
, SESS.ROW_WAIT_FILE# "Row Wait File"
, SESS.ROW_WAIT_OBJ# "Row Wait Obj"
, SESS.USER# "User#"
, SESS.OWNERID "OwnerID"
, SESS.SCHEMA# "Schema#"
, TRIM ( SESS.PROCESS ) "Process"
, NUMTODSINTERVAL(SQLAREA.CPU_TIME/1000000,'SECOND') "CPU Time"
, NUMTODSINTERVAL(SQLAREA.ELAPSED_TIME/1000000,'SECOND') "Elapsed Time"
, SQLAREA.DISK_READS "Disk Reads"
, SQLAREA.DIRECT_WRITES "Direct Writes"
, SQLAREA.BUFFER_GETS "Buffers"
, SQLAREA.SHARABLE_MEM "Sharable Memory"
, SQLAREA.PERSISTENT_MEM "Persistent Memory"
, SQLAREA.RUNTIME_MEM "RunTime Memory"
, TRIM ( SESS.MACHINE ) "Machine"
, TRIM ( SESS.TERMINAL ) "Terminal"
, TRIM ( SESS.TYPE ) "Type"
, SQLAREA.MODULE "Module"
, SESS.SERVICE_NAME "Service name"
FROM V$SESSION SESS
INNER JOIN V$SQLAREA SQLAREA
ON SESS.SQL_ADDRESS = SQLAREA.ADDRESS
and UPPER(SESS.STATUS) = 'ACTIVE'
LEFT JOIN V$TRANSACTION TRAN
ON TRAN.ADDR = SESS.TADDR
ORDER BY SESS.OSUSER
,SESS.USERNAME
,SESS.AUDSID
,NVL(SESS.SCHEMANAME,' ')
,SESS.SID
AudSID SID Sess Strt Tran Strt Tran Time Execs Rows Undo Rec Undo Blks Sorts Fetches Loads Parse Calls Status State Event P1 P2 P3 Wait Class Wait Time Wait Concurr Wait Cluster Wait I/O Row Wait File Row Wait Obj Process CPU Time Elapsed Time Disk Reads Direct Writes Buffers Sharable Memory Persistent Memory RunTime Memory
409585 272 22:15:36 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITED SHORT TIME PX Deq: Execute Reply sleeptime/senderid 200 passes 2 0 Idle 0 0:0:0.436000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 7 21777 22739 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 203 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.9674000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 25 124730 4180 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 210 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.11714000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 24 124730 22854 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 231 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.4623000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 46 21451 4178 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 243 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITED SHORT TIME PX qref latch function 154 sleeptime 13835058061074451432 qref 0 Other 0 0:0:0.4000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 35 21451 3550 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 252 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.19815000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 49 21451 22860 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 273 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.11621000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 22 124730 4182 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 277 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING db file parallel read files 20 blocks 125 requests 125 User I/O 0 0:0:0.242651000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 39 21451 4184 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 283 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.2781000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 42 21451 3552 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 295 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.24424000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 40 21451 22862 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409585 311 22:30:01 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.15788000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 31 21451 22856 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409586 242 22:15:36 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITED KNOWN TIME PX Deq: Execute Reply sleeptime/senderid 200 passes 1 0 Idle 0 0:0:0.522344000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 28 137723 22736 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409586 192 22:29:20 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.14334000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 31 21462 4202 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409586 222 22:29:20 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.16694000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 37 21462 4194 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409586 233 22:29:20 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.7731000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 44 21462 4198 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409586 253 22:29:20 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING db file parallel read files 21 blocks 125 requests 125 User I/O 0 0:0:0.792518000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 39 21462 4204 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409586 259 22:29:20 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.2961000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 35 21462 4196 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409586 291 22:29:20 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq Credit: send blkd sleeptime/senderid 268566527 passes 1 qref 0 Idle 0 0:0:0.9548000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 35 21462 4200 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409587 236 22:15:36 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq: Table Q Normal sleeptime/senderid 200 passes 2 0 Idle 0 0:0:0.91548000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 25 124870 22831 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409587 207 22:30:30 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq: Execution Msg sleeptime/senderid 268566527 passes 3 0 Idle 0 0:0:0.644662000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 43 21423 4208 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409587 241 22:30:30 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING PX Deq: Execution Msg sleeptime/senderid 268566527 passes 3 0 Idle 0 0:0:0.644594000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 47 21423 4192 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
409587 297 22:30:30 22:15:36 0 0:14:52.999999999 302 383,521 305 0 1 3598 WAITING db file parallel read files 20 blocks 109 requests 109 User I/O 0 0:0:0.793261000 0 0:0:1.124995000 0 0:0:0.0 0 1:56:15.227863000 12 21316 4206 0 0:25:25.760000000 0 2:17:1.815044000 526959 0 25612732 277567 56344 55448
Here I found one interesting query: http://www.pythian.com/news/922/recent-spike-report-from-vactive_session_history-ash/
But it does not help me -
We are using VS 2013, and I need to run multiple Coded UI ordered tests in parallel on different agents.
My requirement :
Example: I have 40 Coded UI test scripts in a single solution/project. I want to run them in different OS environments (say, 5 OSes). I have created 5 ordered tests with the same 40 test cases.
I have one controller machine and 5 test agent machines. Now I want my tests to be distributed such that every agent gets 1 ordered test to execute.
Machine_C = Controller (Controls Machine_1,2,3,4,5)
Machine_1 = Test Agent 1 (should execute Ordered Test 1, e.g. OS - WIN 7)
Machine_2 = Test Agent 2 (should execute Ordered Test 2, e.g. OS - WIN 8)
Machine_3 = Test Agent 3 (should execute Ordered Test 3, e.g. OS - WIN 2008 server)
Machine_4 = Test Agent 4 (should execute Ordered Test 4, e.g. OS - WIN 2012 server)
Machine_5 = Test Agent 5 (should execute Ordered Test 5, e.g. OS - WIN 2003 server)
I have changed the "MinimumTestsPerAgent" app setting value to '1' in the controller's configuration file (QTController.exe.config).
When I run the Ordered tests from the test explorer all Test agent running with each Ordered test and showing the status as running. but with in the 5 Test Agents only 2 Agents executing the test cases remaining all 3 agents not executing the test cases but
status showing as 'running' still for long time (exp: More then 3 hr) after that all so its not responding.
I need to know how to configure my controller, or how to tell it to execute these tests in parallel on different test agents. This would help me reduce the script execution time.
I am not sure what steps I am missing.
It would be of great help if someone could guide me on how this can be achieved.
--> One more thing: can I run one Coded UI Ordered Test on one specific test agent?
Example: run Ordered Test 1 on the Windows 7 OS (Test Agent 1) only.
Thanks in advance.

Hi Divakar,
Thank you for posting in MSDN forum.
As far as I know, we cannot specify that a Coded UI Ordered Test runs on a specific test agent; it is the test controller that decides which Ordered Test is assigned to which agent.
In general, to run multiple Coded UI Ordered Tests over multiple test agents in parallel using a test controller:
We need to change the MinimumTestsPerAgent property to 1 in the test controller configuration file (QTController.exe.config), as you said.
Then we need to set bucketSize to (number of tests / number of machines) in the test settings.
For more information about how to set this bucketSize value, please refer the following blog.
http://blogs.msdn.com/b/aseemb/archive/2010/08/11/how-to-run-automated-tests-on-different-machines-in-parallel.aspx
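As a hedged sketch of the controller-side setting described above (the appSettings key name comes from this thread; the surrounding file layout shown here is an assumption and may differ between Visual Studio versions):

```xml
<!-- QTController.exe.config (sketch): MinimumTestsPerAgent = 1 tells the
     controller it may hand out as little as one test per agent instead of
     keeping tests batched on fewer machines. -->
<configuration>
  <appSettings>
    <add key="MinimumTestsPerAgent" value="1" />
  </appSettings>
</configuration>
```

The bucketSize change goes in the test settings file as described in the blog above; with 5 Ordered Tests and 5 agents, a bucket size of 1 would match the one-test-per-agent goal.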
You can also refer to Jack's suggestion in this thread about running Coded UI tests simultaneously on different test agents, in a lab environment or a load test:
https://social.msdn.microsoft.com/Forums/vstudio/en-US/661e73da-5a08-4c9b-8e5a-fc08c5962783/run-different-codedui-tests-simultaneously-on-different-test-agents-from-a-single-test-controller?forum=vstest
Best Regards,
PowerShell: using Start-Job to run multiple code blocks at the same time
I will be working with many thousands of names in a list, performing multiple functions on each name for test labs.
I noticed that while it is running the functions on each name I am using almost no CPU or memory. That led me to research whether I can run multiple threads at once in a PowerShell program, which led me to articles suggesting Start-Job would do just what I am looking for.
As a test I put this together. It is a simple action, as an exercise to see if this is indeed the best approach. However, it appears to me as if it is still only running the actions on one name at a time.
Is there a way to run multiple blocks of code at once?
Thanks
Start-Job {
    $csv1 = (Import-Csv "C:\Copy AD to test Lab\data\Usergroups1.csv").username
    foreach ($name1 in $csv1) { Write-Output "Job1 $name1" }
}
Start-Job {
    $csv2 = (Import-Csv "C:\Copy AD to test Lab\data\Usergroups2.csv").username
    foreach ($name2 in $csv2) { Write-Output "Job2 $name2" }
}
# Wait for both jobs to finish before collecting their output
Get-Job | Wait-Job | Receive-Job
Lishron, you say your testing shows that you are using very little CPU or memory while processing each name, which suggests that processing a single name is a relatively trivial task.
You need to understand that a background job spins up another instance of PowerShell; if you do that per name, what used to require a relatively insignificant amount of memory will take around 60 MB per job.
Background jobs are not really well suited to multi-threading short-running, trivial tasks: the overhead of setting up and tearing down the job session can be more than the task itself.
Background jobs are good for long-running tasks. For multi-threading short, trivial tasks, runspaces or a workflow would probably be a better choice.
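A rough sketch of the runspace-pool approach mentioned above (the names, the pool size, and the per-name work are placeholders, not from this thread):

```powershell
# Minimal runspace-pool sketch: fan short, trivial per-name work out across
# a pool of in-process runspaces instead of paying Start-Job's per-job
# process cost.
$names = 'alice', 'bob', 'carol'            # placeholder data
$pool  = [runspacefactory]::CreateRunspacePool(1, 4)
$pool.Open()

# Queue one pipeline per name; BeginInvoke starts it without blocking.
$jobs = foreach ($name in $names) {
    $ps = [powershell]::Create()
    $ps.RunspacePool = $pool
    # Placeholder work; real code would do the lab setup for each name.
    [void]$ps.AddScript({ param($n) "Processed $n" }).AddArgument($name)
    [pscustomobject]@{ Shell = $ps; Handle = $ps.BeginInvoke() }
}

# Collect results; EndInvoke blocks until that runspace finishes.
$results = foreach ($job in $jobs) {
    $job.Shell.EndInvoke($job.Handle)
    $job.Shell.Dispose()
}
$pool.Close()
$results
```

Unlike Start-Job, all of these pipelines share one PowerShell process, so the per-item overhead stays small even for thousands of names.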