Why does a string-type shared variable take more time to update on the client?
I am using shared variables to share data between a master and the client PCs connected in a network (network-published, no buffering).
I have created an integer-type shared variable (I32) and a string-type shared variable (data size is 60 bytes) in the master, and I subscribe to the same in the client PCs. In the master PC, I modify the data in this order: update the data in the string-type variable, then update the data in the integer variable.
But in the client PCs, due to the size difference between the variables, I receive the integer data first and only about 3-4 seconds later do I receive the string data. Can any optimization be done to reduce this latency? Would a data type other than string reduce this delay?
Please suggest. Thanks in advance.
Latency has a lot to do with your network, though 3-4 seconds is a long time. This could also be due to the larger data size of your string; the integer data size is definitely not 60 bytes. If you're looking for better performance, I would highly recommend looking into DataSocket communication or TCP/IP communication. (There are shipping examples for both.)
When it comes to performance of throughput and efficiency, network-published variables are lower on the totem pole.
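To sketch why a raw TCP approach sidesteps the per-variable ordering problem: if the master writes both updates onto a single stream with a simple length prefix, the client necessarily receives them in the order they were written. A minimal illustration in Python (LabVIEW would use its TCP VIs; the helper names here are invented for the sketch):

```python
import struct

# Length-prefixed framing: each message is a 4-byte big-endian length
# followed by the payload, so the string and the integer arrive in the
# order they were written, with no per-type delay.
def pack_message(payload: bytes) -> bytes:
    return struct.pack(">I", len(payload)) + payload

def unpack_messages(stream: bytes):
    """Split a byte stream back into its framed payloads."""
    messages, offset = [], 0
    while offset < len(stream):
        (length,) = struct.unpack_from(">I", stream, offset)
        offset += 4
        messages.append(stream[offset:offset + length])
        offset += length
    return messages

# Master writes the 60-byte string first, then the I32; the client
# decodes them in exactly that order.
string_data = b"x" * 60
int_data = struct.pack(">i", 42)
wire = pack_message(string_data) + pack_message(int_data)
received = unpack_messages(wire)
```

The point of the sketch is only the ordering guarantee: one stream, one write order, one read order.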
I hope this helps,
Kevin S.
Applications Engineer
National Instruments
Similar Messages
-
Takes more time to start & shutdown the database
Hi All,
I have created a database in Oracle 9i by following the manual steps. Everything was created successfully and I am able to start and shut down the database.
But the problem is that the startup command takes more time to start the database, and the same happens during shutdown. Can anyone help me?
The following are the pfile specifications:
db_name=practice
instance_name=practice
control_files= 'E:\practice\control\control1.ctl',
'D:\practice\control\control2.ctl'
db_block_size=2048
db_cache_size=20m
shared_pool_size=20m
background_dump_dest='E:\practice\bdump'
user_dump_dest='E:\practice\udump'
Thanks in advance.
> Everything was created successfully and I am able to start the database and shutdown also.
Please restate the above.
> The problem is that the startup command takes more time to start the database, and the same during the shutdown.
How have you compared? Could it be O/S resources or installation of additional software? You have not mentioned the O/S and the complete version of your database.
You can review the following, although I am a bit unclear:
http://download.oracle.com/docs/cd/B10501_01/server.920/a96533/instreco.htm#440322
Adith -
Delete DML statement takes more time than Update or Insert
I want to know whether a delete statement takes more time than an update or insert DML command. Please help in resolving this doubt.
Regards.
> I do not get good answers sometimes, so I ask again.
I think Alex's answer to your post was quite complete. If you missed some information, continue in the same post instead of opening a new thread with the same subject and content.
You should be satisfied with the answers you get. I also answered your question about global indexes, and I do think my answer was very complete. You may ask more if you want, but please stop multiposting. It is quite annoying.
Ok, have a nice day -
App-V 5 Full Infrastructure apps take a long time to stream to the client
Hi, I was wondering if anyone has the same issue as I am having, or knows a fix for it. Below is my problem and the troubleshooting I have done.
Overview of problem
App-V 5 apps delivered via App-V 5 full infrastructure take a long time to stream to the client and this means the user has to wait if they try and run an application before it has streamed to the client. Users sometimes have to
wait 2 or 3 minutes for an application to stream and this is about 40 times slower than basic SMB and HTTP transfer tests show the system is capable of (see performance results below).
App-V 4.6 apps delivered via App-V 4.6 full infrastructure and HTTP streaming are fine.
Overview of environment
App-V 5.0 SP1 Full Infrastructure.
App-V servers are running Server 2012 on Hyper-V 3 or ESX 5.1 with 2 x vCPU and 4GB RAM.
SQL servers are a SQL 2012 cluster.
Separate servers for SQL, management, publishing, content and reporting.
Management, Publishing and Content servers have two servers per role and NLB to provide load balancing. So 7 servers (2 x Man, 2 x Pub, 2 x Content, 1 x Reporting)
Two further sites with 2 x Pub and 2 x Content each. All publishing servers pointed at the load balance address for management.
Content delivered via HTTP
Clients are physical desktops and laptops running Windows 7 SP1 x86 and Windows 8 x86
App-V client is 5.0SP1
Clients are pointed at their nearest publishing server NLB via a script which looks up the client IP address and uses PowerShell to configure the publishing server
Content is streamed from the nearest content server NLB by setting the PackageSourceRoot to the nearest content NLB (via the same PowerShell script above).
App-V apps delivered per-user via AD group. One AD group per application. Approximately 200 App-V apps published so far - will eventually reach 400 as we sequence more. About 9000 users.
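The site-selection lookup described above can be sketched as follows (our script uses PowerShell; this Python sketch, with invented subnet and server names, only illustrates the "nearest NLB by client subnet" logic):

```python
import ipaddress

# Hypothetical site subnets mapped to each site's publishing NLB.
# Addresses and hostnames are invented for illustration.
SITE_NLBS = {
    ipaddress.ip_network("10.1.0.0/16"): "appv-pub-site1.example.local",
    ipaddress.ip_network("10.2.0.0/16"): "appv-pub-site2.example.local",
    ipaddress.ip_network("10.3.0.0/16"): "appv-pub-site3.example.local",
}

def nearest_publishing_server(client_ip: str) -> str:
    """Pick the publishing NLB for the site whose subnet contains the client."""
    addr = ipaddress.ip_address(client_ip)
    for network, nlb in SITE_NLBS.items():
        if addr in network:
            return nlb
    return "appv-pub-site1.example.local"  # fall back to the main site

server = nearest_publishing_server("10.2.34.7")
```

The same lookup result would then be written to the client's publishing server and PackageSourceRoot settings.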
Analysis performed so far
Servers not heavily loaded. CPU averages 5%. Lots of RAM free. Very low disk IO. Problem also occurs out-of-hours so we are 99.9% certain that server resources are not a cause.
Streaming performance is the same from all 6 content servers and all 3 NLB addresses (tested by changing the value of PackageSourceRoot). Wireshark was used to confirm packages are really streaming from the correct location, reinforcing our belief that the problem isn't at the server end (unless all 6 servers are affected).
Streaming via both HTTP and SMB2.1 is approximately the same (tested by changing the value of PackageSourceRoot between http://xxxx and \\server\AppVContent).
Wireshark used to confirm we really are using the protocol we think we are using.
All clients exhibit the same behaviour. Issue reported by many users. 5 test PCs chosen at random at all 3 sites confirmed to have the slow streaming problem.
Slow streaming from both Hyper-V and VMware ESX servers.
Client not heavily loaded.
Affects all App-V apps although it obviously affects the larger ones more.
All App-V apps have a Feature Block 1 setup.
If we copy the ".appv" file from the server to the client via either HTTP or SMB then it's reasonably quick (up to 480Mb/s). So we don't believe the network or servers are at fault. For example:
We can copy a 149MB .appv file via SMB from the content server to the client in 5 seconds.
We can download the .appv file via HTTP from the content server using IE on the client in 5 seconds.
But if you ask the App-V 5 client to fully download the sequence then it takes 2 - 3 minutes.
The App-V 4.6 client takes about 8 - 10 seconds to fully download a similar sized application.
App-V 5 publishing works fine - when a new user logs on they get their list of applications straight away; it's just the streaming which is slow.
Once the App-V app has streamed locally it runs fine and with a decent performance.
Looking at a Wireshark trace of the streaming you can see that the slow performance is due to the transfer stopping and starting a lot. You only notice this when you zoom into the performance graph a fair bit.
Each time the HTTP server stops sending traffic, it doesn't start again until the client sends a "TCP Window update". Each "stop" is of a different length, but just taking a few from the middle I get 0.06s, 0.11s,
0.13s wasted etc.
I can see that it's the client stopping the transfer by reducing its advertised TCP Window Size. I'll provide an example:
Server sends 9 x 1514 bytes. Client responds with an ACK and a Window size of 54016 bytes (256x211)
Server sends 11 x 1514 bytes. Client responds with an ACK and a Window size of 37888 bytes (256x148)
Server sends 10 x 1514 bytes. Client responds with an ACK and a Window size of 23296 bytes (256x91)
Server sends 15 x 1514 bytes. Client responds with an ACK and a Window size of 1280 bytes (5 x 256)
Server stops sending (I'm guessing because the client advertised Window size was less than a single packet's worth of bytes)
<0.1 seconds passes>
Client sends a "TCP Window Update" re-advertising a TCP window size of 65536 (256x256).
Server starts transmitting again
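As a side note, the repeated x256 factors in the capture are consistent with a negotiated TCP window scale of 8, i.e. the 16-bit value in the header shifted left by 8 bits (this is an inference from the trace, not something we confirmed in the handshake). A quick sketch checking the numbers quoted above:

```python
# The repeated "x256" factors in the trace are consistent with a TCP
# window scale option of 8: the 16-bit raw value in the header is
# shifted left by 8 bits to get the effective window.
WINDOW_SCALE = 8  # assumed from the x256 pattern in the trace

def effective_window(raw_value: int) -> int:
    return raw_value << WINDOW_SCALE

# Raw values and effective windows taken from the capture notes above.
observed = {211: 54016, 148: 37888, 91: 23296, 5: 1280, 256: 65536}
for raw, expected in observed.items():
    assert effective_window(raw) == expected
```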
So the way I see this is that the App-V 5 client is controlling the transfer speed by utilising TCP Window flow control. The trace was taken at the client end so there's no room for anything on the network to be fiddling with flow
control (and we've confirmed there are no traffic shapers in the loop).
We've also tried streaming directly from the local client by copying some App-V 5 apps down to the client, creating a SMB share on the client and changing PackageSourceRoot to \\localhost\AppVContent (i.e. so we are streaming directly
from the client to the client - to remove the network from the equation) and there is only an improvement of 5 to 10 seconds. So we know it's nothing to do with the network or the servers.
We've tried turning off TCP auto-tuning on the client with:
netsh interface tcp set global autotuninglevel=disabled
and turning off TCP chimney offloading (which is off anyway because the NIC doesn't support it and Netstat -t output shows "InHost" for offload state for all connections) with:
netsh int tcp set global chimney=disabled
and nothing has improved.
So we've now focussed on the extraction of the .appv (ZIP) file on the client.
Using Windows Explorer it takes 75 seconds to extract the ZIP file
Using 7ZIP it takes 9 seconds to extract the ZIP file
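To take the shell ZIP engines out of the equation entirely, you can time a plain programmatic extraction yourself; a small Python sketch using a synthetic in-memory archive standing in for the .appv (an .appv is a ZIP-based container):

```python
import io
import time
import zipfile

# Build a small in-memory ZIP standing in for an .appv package.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    for i in range(20):
        zf.writestr(f"payload_{i}.bin", b"\x00" * 4096)

# Time a full extraction pass, analogous to timing Explorer vs 7-Zip.
buf.seek(0)
start = time.perf_counter()
with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()
    data = [zf.read(name) for name in names]
elapsed = time.perf_counter() - start
```

On a real .appv file, the same pattern (open, enumerate, read every member) gives a baseline extraction time that is independent of whichever ZIP engine Explorer or the App-V client uses internally.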
Yeah we've always known that the Explorer ZIP engine is terrible. That's why we use 7ZIP or WinRAR on our clients.
So we've started to wonder if the problem with the slow App-V 5 streaming is because the client is downloading the .appv file and extracting it as it goes along in a single thread. If the App-V 5 client is using the same terrible
ZIP engine that Explorer does then that would explain the slow performance. The "download" appears to take a long time because the client is using TCP flow control to slow the transfer since it's extracting the .appv file using a very slow ZIP engine
and it's all in a single thread.
Guys,
Just wanted to give you a brief update and basically close this thread as Answered.
We had submitted 4 App-V 5 bugs to Microsoft; these were reproducible, and an explanation of workarounds was given. Microsoft sent down an App-V developer to have a look at our problems. They said they will try to include the bug fixes in SP2, which should be out in a few weeks, or they will definitely be included in SP3.
In regards to the slow streaming it all came down to the Disk IO.
We found that you could simply enable "Turn off Windows write-cache buffer flushing on the device", start streaming the app, and then disable the option again immediately afterwards (we don't want to leave it on), and that basically fixed the issue.
But a normal user would not have permissions to do this, so code was written to enable and disable this option.
Apologies for not going into as much detail as in my opening post; it's very late. But if you would like a detailed analysis, please message me.
I would like to thank the talented consultant who designed and implemented our App-V infrastructure, who found the bugs and created all the workarounds, and who also emailed the detailed analysis of the problems to Microsoft that got them interested:
Simon Bond from Ultima Business Solutions.
Thank you -
Bind variable code takes more time to complete?
Hello, My database is oracle11g.
I have the same PL/SQL code twice: the first version is without a bind variable and the second is with a bind variable. Usually, the bind variable version should take less time. But here the bind variable version takes more time than the regular code. Can anyone please explain why?
SQL> alter system flush shared_pool;
System altered.
SQL> declare
2 cursor c1 is select * from emp where rownum < 50;
3 l_start NUMBER DEFAULT DBMS_UTILITY.GET_TIME;
4 v_cnt number;
5 begin
6 for i in c1 loop
7 SELECT count(*) into v_cnt
8 FROM rate
9 WHERE rate_id IN (SELECT rate_id
10 FROM ratedetail
11 WHERE benefit_id = i.benefit_id)
12 AND effective_date =
13 TO_DATE ('2011-01-23 00:00:00', 'yyyy-MM-dd HH24:MI:SS')
14 AND rate_type_id = 1;
15 end loop;
16 DBMS_OUTPUT.PUT_LINE('total minutes....'||ROUND(ROUND((DBMS_UTILITY.GET_TIME - l_start)/100, 2)
/60,3));
17 end;
18 /
total minutes.....06
PL/SQL procedure successfully completed.
SQL> alter system flush shared_pool;
System altered.
SQL>
SQL> declare
2 cursor c1 is select benefit_id from emp where rownum < 50;
3 l_start NUMBER DEFAULT DBMS_UTILITY.GET_TIME;
4 v_cnt number;
5 begin
6 for i in c1 loop
7 execute immediate 'SELECT count(*)
8 FROM rate
9 WHERE rate_id IN (SELECT rate_id
10 FROM ratedetail
11 WHERE benefit_id = :x)
12 AND effective_date = trunc(sysdate)-202
13 AND rate_type_id = 1'
14 into v_cnt using i.benefit_id;
15 end loop;
16 DBMS_OUTPUT.PUT_LINE('total minutes....'||ROUND(ROUND((DBMS_UTILITY.GET_TIME - l_start)/100, 2)
/60,3));
17 end;
18 /
total minutes.....061
PL/SQL procedure successfully completed.
SQL>
Shrinika wrote:
> Thanks for the clarification. Now I understand.
> One final question on this thread before I close it.
> My database is set to CURSOR_SHARING=FORCE for some reason.
> My question is: when we use a bind variable, does it parse the SQL code every time, or does it reuse the execution plan? In my database, it reuses the execution plan. Just checking. When we set CURSOR_SHARING=FORCE, it should generate the execution plan for every unique SQL statement. Is that correct, or am I confused?
It seems somebody applied a "quick and dirty fix" to a "database is slow" problem. BAD PRACTICE.
If by "parse" you mean a "hard parse" (which generates the execution plan), then the answer is NO. As you observed, it reuses the execution plan.
For example, with the CURSOR_SHARING=FORCE setting, the following SQL statements:
select employee_no, first_name, last_name from employees where dept_no = 10 ;
and
select employee_no, first_name, last_name from employees where dept_no = 20 ;
would tend to reuse the same execution plan, since both of these will be rewritten by Oracle (before execution) as
select employee_no, first_name, last_name from employees where dept_no = :SYS01 ;
Hope this helps.
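As an analogy only (using sqlite3 rather than Oracle, since statement-text reuse is easiest to show in a self-contained sketch): with literals each department produces a distinct statement text, while with a bind variable one statement text covers every lookup, which is exactly why the rewritten :SYS01 form can share a plan.

```python
import sqlite3

# sqlite3 stand-in for the Oracle behaviour described above: with bind
# variables only the parameter value changes, so a single statement
# text (and hence a single cached plan) covers every lookup.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (dept_no INTEGER, first_name TEXT)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?)",
    [(10, "Ann"), (10, "Bob"), (20, "Cid")],
)

# Literal SQL: a distinct statement text per department value.
literal_texts = {
    f"SELECT first_name FROM employees WHERE dept_no = {d}" for d in (10, 20)
}

# Bind-variable SQL: one shared statement text for all departments.
bound_text = "SELECT first_name FROM employees WHERE dept_no = ?"
rows_10 = conn.execute(bound_text, (10,)).fetchall()
rows_20 = conn.execute(bound_text, (20,)).fetchall()
```

Two literal texts versus one parameterized text is the whole point of the rewrite that CURSOR_SHARING=FORCE performs.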
Edited by: user503699 on Aug 14, 2010 3:55 AM -
Save for Web - system takes more time
Hi, we use Adobe Photoshop (Creative Cloud) to optimize images for one of our blogs. We use the "Save for Web" option in Photoshop, but it takes a long time. What is the reason, and how can we solve this issue?
A lot more information about your hardware and software is needed.
BOILERPLATE TEXT:
If you give complete and detailed information about your setup and the issue at hand,
such as your platform (Mac or Win),
exact versions of your OS, of Photoshop (not just "CC", but something like CC2014.v.2.2) and of Bridge,
your settings in Photoshop > Preference > Performance
the type of file you were working on,
machine specs, such as total installed RAM, scratch file HDs, total available HD space, video card specs, including total VRAM installed,
what troubleshooting steps you have taken so far,
what error message(s) you receive,
if having issues opening raw files also the exact camera make and model that generated them,
if you're having printing issues, indicate the exact make and model of your printer, paper size, image dimensions in pixels (so many pixels wide by so many pixels high). if going through a RIP, specify that too.
a screen shot of your settings or of the image could be very helpful too,
etc.,
someone may be able to help you (not necessarily this poster, who is not a Windows user).
Please read this FAQ for advice on how to ask your questions correctly for quicker and better answers:
http://forums.adobe.com/thread/419981?tstart=0
Thanks! -
Query takes more time from client
Hi,
I have a select query (which refers to views and calls a function) that fetches results in 2 seconds when executed on the database server, but takes more than 10 minutes from the client.
The tkprof output for the call from the client is given below. Could you please suggest what is going wrong and how this can be addressed?
The index IDX_table1_1 is on col3.
Trace file: trace_file.trc
Sort options: exeela
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
SELECT ROUND(SUM(NVL((col1-col2),(SYSDATE - col2)
FROM
table1 WHERE col3 = :B1 GROUP BY col3
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 7402 0.27 7.40 0 0 0 0
Fetch 7402 1.13 59.37 1663 22535 0 7335
total 14804 1.40 66.77 1663 22535 0 7335
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 32 (ORADBA) (recursive depth: 1)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
0 SORT (GROUP BY NOSORT)
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'table1'
(TABLE)
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'IDX_table1_1'
(INDEX)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 1663 1.37 57.71
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 0 0.00 0.00 0 0 0 0
Misses in library cache during parse: 0
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 16039 3.09 385.04
db file scattered read 34 0.21 1.42
latch: cache buffers chains 26 0.34 2.14
SQL*Net break/reset to client 2 0.05 0.05
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 79.99 79.99
SQL*Net message to dblink 1 0.00 0.00
SQL*Net message from dblink 1 0.00 0.00
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 7402 0.27 7.40 0 0 0 0
Fetch 7402 1.13 59.37 1663 22535 0 7335
total 14804 1.40 66.77 1663 22535 0 7335
Misses in library cache during parse: 0
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 1663 1.37 57.71
1 user SQL statements in session.
0 internal SQL statements in session.
1 SQL statements in session.
1 statement EXPLAINed in this session.
Trace file: trace_file.trc
Trace file compatibility: 10.01.00
Sort options: exeela
1 session in tracefile.
1 user SQL statements in trace file.
0 internal SQL statements in trace file.
1 SQL statements in trace file.
1 unique SQL statements in trace file.
1 SQL statements EXPLAINed using schema:
ORADBA.prof$plan_table
Default table was used.
Table was created.
Table was dropped.
84792 lines in trace file.
4152 elapsed seconds in trace file.
Edited by: agathya on Feb 26, 2010 8:39 PM
> I have a select query (which refers to views and calls a function), which fetches results in 2 secs when executed from database. But takes more than 10 mins from the client.
You are providing proof for the latter part of your statement above.
But not for the former part (fetches in 2 secs when exec'd from db).
It would have been nice if you had also provided the SQL trace information for that.
Without it we cannot help you much, other than making the observation that you obviously have a query that is I/O bound, and that I/O on your system is rather slow: on average an I/O takes 0.04 seconds (66.77 divided by 1663). -
af:commandNavigationItem takes more time to perform action on screen
In my project there is a dynamic implementation of af:commandNavigationItem, and the actions are also bound at runtime.
This code was working fine with JDeveloper 11.1.1.0.2 (OWS 10.3.0), but after migrating the code to JDeveloper 11.1.1.2.0 (OWS 10.3.2), it shows an hourglass for a long time (which is unusual) and then performs the operation.
I ran the project in debug mode and found that it takes more time to reach the breakpoint.
It writes the below information in console
<UnifiedDialogTag><setVisible> property "visible" setter is using a no-op implementation. Used in extreme cases when the property value, beyond the default value, results in unwanted behavior.
Experts: please throw some light on this issue, as I am looking for a lead on where to start investigating.
Hi,
can you file a bug or provide a testcase ?
Frank -
Why does import of a change request into production take more time than into quality?
Hello All,
Why does import of a change request into production take more time than import into quality?
Hi Jahangeer,
I believe it takes the same time to import a request in both quality and production, as they will be in sync.
Even then, if it takes more time in production, that may depend on the change request.
Thanks
Pavan -
Why does a view have no stored data? And why does querying a view take more time?
Why does a view have no stored data? And why does querying a view take more time? What happens if a view has stored data?
user12941450 wrote:
> I want to know the reason why querying a view is slower than querying a normal table.
Untrue.
> For example, take a table with 2 lakh records and a view on that table. If I make a query like (select name, address from table), it works faster than (select name, address from view).
You are incorrectly interpreting the results.
A view is a SQL statement. The only difference is that the SQL statement is stored in the database's dictionary. Let's consider the following view:
create or replace view foo_view as select * from emp
When you use the view as follows:
select * from foo_view
Oracle sees it as follows:
select * from (select * from emp)
This is no slower, and no faster, than providing the following SQL to Oracle:
select * from emp
So if you observe a difference in performance between using plain SQL versus using that same SQL via a view, there are other reasons for that difference in performance. The reason is NOT that views are slower.
Why does Garbage Collection take more time on JRockit?
My company uses:
BEA WebLogic 8.1.2
JRockit version 1.4.2
Windows 2003 32-bit
RAM 4 GB
-Xms = 1300
-Xmx = 1300
and we are running an EJB application.
My problem is why JRockit takes more time, and how I can solve this problem, because my application will go down again.
This is my information on JRockit:
GC Algorithm: JRockit Garbage Collection System currently running strategy: Single generational, parallel mark, parallel sweep.
Total Garbage Collection Count: 10340
Last GC End: Wed May 10 13:55:37 ICT 2006
Last GC Start: Wed May 10 13:55:35 ICT 2006
Total Garbage Collection Time: 2:53:13.1
GC Handles Compaction: true
Concurrent: false
Generational: false
Incremental: false
Parallel: true
Hi,
I will suggest you to check a few places where you can see the status
1) SM37 job log (In source system if load is from R/3 or in BW if its a datamart load) (give request name) and it should give you the details about the request. If its active make sure that the job log is getting updated at frequent intervals.
Also see if there is any 'sysfail' for any datapacket in SM37.
2) SM66 get the job details (server name PID etc from SM37) and see in SM66 if the job is running or not. (In source system if load is from R/3 or in BW if its a datamart load). See if its accessing/updating some tables or is not doing anything at all.
3) RSMO see what is available in details tab. It may be in update rules.
4) ST22 check if any short dump has occured.(In source system if load is from R/3 or in BW if its a datamart load)
5) SM58 and BD87 for pending tRFCs and IDOCS.
Once you identify you can rectify the error.
If all the records are in PSA you can pull it from the PSA to target. Else you may have to pull it again from source infoprovider.
If its running and if you are able to see it active in SM66 you can wait for some time to let it finish. You can also try SM50 / SM51 to see what is happening in the system level like reading/inserting tables etc.
If you feel its active and running you can verify by checking if the number of records has increased in the data tables.
SM21 - System log can also be helpful.
Also RSA7 will show LUWS which means more than one record.
Thanks,
JituK -
Shared Variables for Real-Time Robot Control
I'm really stuck in my efforts to use LV real-time in my hardware control application. I have a 6-axis industrial robot arm that I must control programmatically from my PC. To do this I've developed a dynamic link library of functions for various robot control commands that I can call using Code Interface Nodes in LV (using 8.5). This has worked great, that is, until I tried to port parts of the application to a real-time controller. As it turns out, because the robot control dll is linked with and relies so heavily upon several Windows libraries, it is not compatible with use on a RT target, as verified by the the "DLL Checker" application I downloaded from the NI site. When the robot is not actually executing movements, I am constantly reading/writing analog and digital I/O from various sensors, etc.....
This seemed to suggest that I should simply segregate my robot commands from the I/O activities, using my host PC for the former, and my deterministic RT loop on the target machine for the latter. I set up a Robot Controller Server (RCS) vi running on my host PC that is continuously looking for (in a timed loop) a flag (a boolean) to initiate a robot movement command. Because several parameters are used to specify the robot movement, I created a custom control cluster (which includes the boolean variable) that I then used to make a Network Shared Variable that can be updated by either the RT target or the host PC running the RCS. I chose NOT to use buffering, and FIFO is not available with shared variables based on custom controls.
Here's sequence of events I'd like to accomplish:
1) On my host PC I deploy the RCS, which continuously polls (in a timed loop) a boolean variable in the control cluster that indicates the robot should move. The shared variable cluster is initialized in the RCS and the timed loop begins.
2) I deploy the RT vi, which should set the boolean flag in the control cluster, then update the shared variable cluster.
3) an instance of the control cluster node in the RCS should update, thereby initiating a sequence of events in a case structure. (This happens on some occasions, but very few.)
4) robot movement commands are executed, after which the boolean in the control cluster is set back to its original value.
5) the RT vi (which is polling in a loop) should see this latest change in the boolean as a loop stop condition and continue with the RT vi execution.
With the robot controller running in a timed loop, it occasionally "sees" and responds to a change of value in members of the shared variable cluster, but most times it does not. Furthermore, when the robot controller VI tries to signal that the movement has completed by changing a boolean in the control cluster, the RT VI never sees it and does not respond.
1) Bad or inappropriate use of network shared variables?
2) a racing issue?
3) slow network?
4) should I buffer the control cluster?
5) a limitation of a custom control?
6) too many readers/writers?
7) should I change some control cluster nodes to relative, rather than absolute?
8) why can't I "compile" my RT vi into an executable?
Any help would be greatly appreciated. Unfortunately, I'm writing this from home and cannot attach vi files or pictures, but would be happy to do so at work tomorrow. I'm counting on the collective genius in the universe of LV users and veterans to save my bacon.....
David
Hi David,
I'm curious why you decided to build a CIN instead of developing the code in LabVIEW. Is there some functionality that LabVIEW couldn't provide? Can you provide some more information about the LabVIEW Real-Time target you're using? What type of IO are you using?
It is impossible to get LabVIEW Real-Time performance on a desktop PC running
an OS other than LabVIEW Real-Time. Even running a timed loop in LabVIEW
for Windows won't guarantee a jitter free application. Also, no TCP based
network communication can be deterministic. This means Network Shared
Variables are also not deterministic (they use a TCP for data transport) and I
advise against using them as a means to send time critical control data between
a Windows host and a LabVIEW Real-Time application.
In general, I would architect most LabVIEW-based control applications as
follows:
- Write all control logic and IO operations in LabVIEW Real-Time. The
LabVIEW Real-Time application would accept set points and/or commands from the
'host' (desktop PC). The Real-Time controller should be capable of
running independently or automatically shutting down safely if communication to
the PC is lost.
- Write a front-end user interface in LabVIEW that runs on the desktop
PC. Use Shared Variables with the RT-FIFO option enabled to send new set
points and/or commands to the LabVIEW Real-Time target.
Shared variable buffering and RT-FIFOs can be a little confusing. Granted
not all control applications are the same, but I generally recommend against
using buffering in control applications and in LabVIEW Real-Time applications
recommend using the RT-FIFO option. Here's why: Imagine you have a
Real-Time application with two timed loops. Time-loop 'A' calculates the
time critical control parameters that get written to hardware output in
timed-loop 'B'. Loop 'A' writes the outputs to a RT-FIFO enabled variable
with a RT-FIFO length of 50. Loop 'B' reads the outputs from the shared
variable, but for some reason, if loop 'B' gets behind then the shared variable
RT-FIFO will now contain several extra elements. Unless loop 'B' runs
extra fast to empty the RT-FIFO, loop 'B' will now start outputting values that
it should have output on previous cycles. The actual desired behavior is
that loop 'B' should output the most recent control settings, which means you
should turn off buffering and set the RT-FIFO length to 1.
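The "latest value only" semantics of an RT-FIFO of length 1 can be sketched in ordinary code; Python is used here purely to illustrate the behaviour (the real mechanism is the shared variable's RT-FIFO option):

```python
from collections import deque

# A single-slot FIFO: the writer overwrites the pending value, so the
# reader (loop 'B') always consumes the most recent set point instead
# of working through a backlog of stale ones.
class LatestValueFifo:
    def __init__(self):
        self._slot = deque(maxlen=1)  # analogue of RT-FIFO length 1

    def write(self, value):
        self._slot.append(value)      # silently replaces any unread value

    def read(self):
        return self._slot[-1] if self._slot else None

fifo = LatestValueFifo()
for setpoint in [1.0, 2.0, 3.0]:      # loop 'A' runs ahead of loop 'B'
    fifo.write(setpoint)
latest = fifo.read()                  # loop 'B' wakes up late, gets 3.0
```

A length-50 FIFO would instead hand loop 'B' the stale 1.0 and 2.0 first, which is exactly the backlog behaviour described above.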
There is also a clear distinction between buffering and the RT-FIFO
option. The RT-FIFO option is used to add a non-blocking layer between
network communication and time-critical code in LabVIEW Real-Time
applications. It also provides a safe mechanism to share data between two
loops running in a Real-Time application without introducing unnecessary
jitter. Network buffering is a feature that allows a client to receive
data change updates from the server even if the client is reading the variable
slower than the server is writing to it. In the example I presented above
you don't need to enable networking because the shared variable is used
entirely within the Real-Time application. However, it would be
appropriate to send control set points from a Windows PC to the Real-Time
application using network published shared variables with the RT-FIFO option
enabled. If it is critical that the Real-Time application executed all
commands in the sequence they were sent then you could enable an appropriate
buffer. If the control application only needs the latest set point
setting from the Windows host then you can safely disable network buffering
(but you should still enable the RT-FIFO option with a length of 1 element.)
Network buffering is especially good if the writer is 'bursty' and the reading
rate is relatively constant. In the robot application I can imagine buffering
would be useful if you wanted to send a sequence of timed movements to the
Real-Time controller using a cluster of timestamp and set point. In this
case, you may write the sequence values to the variable very quickly, but the
Real-Time controller would read the set points out as it proceeded through the movements.
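That buffered, ordered hand-off can be sketched with a bounded queue in plain Java (a conceptual analogy, not LabVIEW code): every update is preserved in order, so a slower reader drains a backlog instead of losing samples.

```java
import java.util.concurrent.ArrayBlockingQueue;

// Bounded FIFO of 50 elements, analogous to enabling buffering on a
// shared variable: a bursty writer's updates are all preserved, and the
// reader works through them in the order they were sent.
public class BufferedDemo {
    public static void main(String[] args) throws InterruptedException {
        ArrayBlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(50);

        // Bursty writer: three set points arrive back to back.
        buffer.put(10);
        buffer.put(20);
        buffer.put(30);

        // Slower reader: drains the backlog in order (prints 10, 20, 30).
        while (!buffer.isEmpty()) {
            System.out.println(buffer.take());
        }
    }
}
```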
The following document presents a good overview of shared variable
options: http://zone.ni.com/devzone/cda/tut/p/id/4679
-Nick
LabVIEW R&D
-
Oracle Coherence first read/write operation takes more time
I'm currently testing with the Oracle Coherence Java and C++ versions, and in both versions the first read/write operation against any local, distributed, or near cache takes more time than the subsequent read/write operations. Is this because of setup operations happening inside the actual HashMap, or serialization, or the memory-mapped implementation? What techniques can we use to improve the performance of this first read/write operation?
Currently I'm doing a single read/write operation after fetching the NamedCache instance. Please let me know whether there are any other techniques available for boosting Coherence cache performance.
In which case, why bother using Coherence? You're not really gaining anything, are you?
What I'm trying to explain is that you're probably not going to get that "micro-second" level performance on a fully configured Coherence cluster, running across multiple machines, going via proxies for c++ clients. Coherence is designed to be a scalable, fault-tolerant, distributed caching/processing system. It's not really designed for real-time, guaranteed, nano-second/micro-second level processing. There are much better product stacks out there for that type of processing if that is your ultimate goal, IMHO.
As you say, just writing to a small, local Map (or array, List, Set, etc.) in a local JVM is always going to be very fast - literally as fast as the processor running in the machine. But that's not really the focus of a product like Coherence. It isn't trying to "out gun" what you can achieve on one machine doing simple processing; Coherence is designed for scalability rather than outright performance. Of course, the use of local caches (including Coherence's near caching or replicated caching), can get you back some of the performance you've "lost" in a distributed system, but it's all relative.
If you wander over to a few of the CUG presentations and attend a few CUG meetings, one of the first things the support guys will tell you is "benchmark on a proper cluster" and not "on a localised development machine". Why? Because the difference in scalability and performance will be huge. I'm not really trying to deter you from Coherence, but I don't think it's going to meet your requirements when fully configured in a cluster of "1 Micro seconds for 100000 data collection" on a continuous basis.
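As a rough illustration of why first-operation timings mislead, here is a hypothetical sketch in plain Java against a local HashMap (not the Coherence API): even locally, the first operation pays one-time costs such as class loading and JIT compilation; on a real Coherence NamedCache the first call additionally pays for cluster join, connection setup, and serializer initialization.

```java
import java.util.HashMap;
import java.util.Map;

// Compare the cost of the very first put against the steady-state average.
// The exact numbers vary by machine; the point is the methodology of
// warming up before measuring.
public class WarmupDemo {
    public static void main(String[] args) {
        Map<String, String> cache = new HashMap<>();

        long t0 = System.nanoTime();
        cache.put("key-0", "value-0");          // first operation: one-time costs
        long first = System.nanoTime() - t0;

        long t1 = System.nanoTime();
        for (int i = 1; i <= 1000; i++) {       // steady state after warm-up
            cache.put("key-" + i, "value-" + i);
        }
        long avg = (System.nanoTime() - t1) / 1000;

        System.out.println("first op: " + first + " ns, steady-state avg: " + avg + " ns");
    }
}
```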
Just my two cents.
Cheers,
Steve
NB. I don't work for Oracle, so maybe they have a different opinion. :) -
Automatic DOP takes more time to execute query
We upgraded the database to Oracle 11gR2. While testing the Automatic DOP feature with our existing query, it takes more time than with the manual parallel hint.
Note: No constraints or indexes were created on the table, to gain performance while loading data (5000 records/sec)
Os : Sun Solaris 64bit
CPU = 8
RAM = 7456M
Default parameter settings:
parallel_degree_policy string MANUAL
parallel_degree_limit string CPU
parallel_threads_per_cpu integer 2
cpu_count integer 8
resource_manager_cpu_allocation integer 8
Query:
SELECT COUNT(*)
from (
SELECT
/*+ FIRST_ROWS(50), PARALLEL */
Query gets executed in 22minutes : execution plan
COUNT(*)
9600
Elapsed: 00:22:10.71
Execution Plan
Plan hash value: 3765539975
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 1 | 21 | 2164K (1)| 07:12:52 | | |
| 1 | SORT AGGREGATE | | 1 | 21 | | | | |
| 2 | PARTITION RANGE OR| | 89030 | 1825K| 2164K (1)| 07:12:52 |KEY(OR)|KEY(OR)|
|* 3 | TABLE ACCESS FULL| SUBSCRIBER_EVENT | 89030 | 1825K| 2164K (1)| 07:12:52 |KEY(OR)|KEY(OR)|

Automatic DOP Query: parameters set
alter session set PARALLEL_DEGREE_POLICY = limited;
alter session force parallel query;

Query:
SELECT COUNT(*)
from (
SELECT /*+ FIRST_ROWS(50), PARALLEL*/
This query takes more than 2hrs to execute
COUNT(*)
9600
Elapsed: 02:07:48.81
Execution Plan
Plan hash value: 127536830
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart|Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 1 | 21 | 150K (1)| 00:30:01 | | | | | |
| 1 | SORT AGGREGATE | | 1 | 21 | | | | | | | |
| 2 | PX COORDINATOR | | | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | 21 | | | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 21 | | | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 89030 | 1825K| 150K (1)| 00:30:01 |KEY(OR)|KEY(OR)| Q1,00 | PCWC | |
|* 6 | TABLE ACCESS FULL| SUBSCRIBER_EVENT | 89030 | 1825K| 150K (1)| 00:30:01 |KEY(OR)|KEY(OR)| Q1,00 | PCWP | |
Note
- automatic DOP: Computed Degree of Parallelism is 16 because of degree limit

Can someone help us find out where we went wrong? Any pointer would be really helpful to resolve this issue.
Edited by: Sachin B on May 11, 2010 4:05 AM

Generated AWR report for ADOP:
Foreground Wait Events DB/Inst: HDB/hdb Snaps: 158-161
-> s - second, ms - millisecond - 1000th of a second
-> Only events with Total Wait Time (s) >= .001 are shown
-> ordered by wait time desc, waits desc (idle events last)
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Avg
%Time Total Wait wait Waits % DB
Event Waits -outs Time (s) (ms) /txn time
direct path read 522,173 0 125,051 239 628.4 99.3
db file sequential read 663 0 156 235 0.8 .1
log file sync 165 0 117 712 0.2 .1
Disk file operations I/O 267 0 63 236 0.3 .1
db file scattered read 251 0 36 145 0.3 .0
control file sequential re 217 0 32 149 0.3 .0
library cache load lock 2 0 10 4797 0.0 .0
cursor: pin S wait on X 3 0 9 3149 0.0 .0
read by other session 5 0 2 429 0.0 .0
kfk: async disk IO 613,170 0 2 0 737.9 .0
sort segment request 1 100 1 1007 0.0 .0
os thread startup 16 0 1 43 0.0 .0
direct path write temp 1 0 1 527 0.0 .0
latch free 51 0 0 2 0.1 .0
kksfbc child completion 1 100 0 59 0.0 .0
latch: cache buffers chain 19 0 0 2 0.0 .0
latch: shared pool 36 0 0 1 0.0 .0
PX Deq: Slave Session Stat 21 0 0 1 0.0 .0
library cache: mutex X 45 0 0 1 0.1 .0
CSS initialization 2 0 0 6 0.0 .0
enq: KO - fast object chec 1 0 0 11 0.0 .0
buffer busy waits 3 0 0 1 0.0 .0
cursor: pin S 9 0 0 0 0.0 .0
CSS operation: action 2 0 0 1 0.0 .0
direct path write 1 0 0 2 0.0 .0
jobq slave wait 17,554 100 8,942 509 21.1
PX Deq: Execute Reply 4,060 95 7,870 1938 4.9
SQL*Net message from clien 96 0 5,756 59962 0.1
PX Deq: Execution Msg 618 56 712 1152 0.7
KSV master wait 11 0 0 2 0.0
PX Deq: Join ACK 16 0 0 1 0.0
PX Deq: Parse Reply 14 0 0 1 0.0
Background Wait Events DB/Inst: HDB/hdb Snaps: 158-161
-> ordered by wait time desc, waits desc (idle events last)
-> Only events with Total Wait Time (s) >= .001 are shown
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Avg
%Time Total Wait wait Waits % bg
Event Waits -outs Time (s) (ms) /txn time
control file sequential re 6,249 0 2,375 380 7.5 55.6
control file parallel writ 2,003 0 744 371 2.4 17.4
db file parallel write 1,604 0 503 313 1.9 11.8
log file parallel write 861 0 320 371 1.0 7.5
db file sequential read 363 0 151 415 0.4 3.5
db file scattered read 152 0 64 421 0.2 1.5
Disk file operations I/O 276 0 21 77 0.3 .5
os thread startup 316 0 15 48 0.4 .4
ADR block file read 24 0 11 450 0.0 .3
rdbms ipc reply 17 12 7 403 0.0 .2
Data file init write 6 0 6 1016 0.0 .1
direct path write 21 0 6 287 0.0 .1
log file sync 7 0 6 796 0.0 .1
ADR block file write 10 0 4 414 0.0 .1
enq: JS - queue lock 1 0 3 2535 0.0 .1
ASM file metadata operatio 1,801 0 2 1 2.2 .0
db file parallel read 30 0 1 40 0.0 .0
kfk: async disk IO 955 0 1 1 1.1 .0
db file single write 1 0 0 415 0.0 .0
reliable message 10 0 0 23 0.0 .0
latch: shared pool 75 0 0 2 0.1 .0
latch: call allocation 26 0 0 2 0.0 .0
CSS initialization 7 0 0 6 0.0 .0
asynch descriptor resize 352 100 0 0 0.4 .0
undo segment extension 2 100 0 5 0.0 .0
CSS operation: action 9 0 0 1 0.0 .0
CSS operation: query 42 0 0 0 0.1 .0
latch: parallel query allo 4 0 0 0 0.0 .0
rdbms ipc message 37,948 97 104,599 2756 45.7
DIAG idle wait 16,762 100 16,927 1010 20.2
ASM background timer 1,724 0 8,467 4912 2.1
shared server idle wait 282 100 8,465 30019 0.3
pmon timer 3,123 90 8,465 2711 3.8
wait for unread message on 8,381 100 8,465 1010 10.1
dispatcher timer 141 100 8,463 60019 0.2
Streams AQ: qmn coordinato 604 50 8,462 14010 0.7
Streams AQ: qmn slave idle 304 0 8,462 27836 0.4
smon timer 35 71 8,382 239496 0.0
Space Manager: slave idle 1,621 99 8,083 4986 2.0
PX Idle Wait 2,392 99 4,739 1981 2.9
class slave wait 46 0 623 13546 0.1
KSV master wait 2 0 0 27 0.0
SQL*Net message from clien 7 0 0 1 0.0
Wait Event Histogram DB/Inst: HDB/hdb Snaps: 158-161
-> Units for Total Waits column: K is 1000, M is 1000000, G is 1000000000
-> % of Waits: value of .0 indicates value was <.05%; value of null is truly 0
-> % of Waits: column heading of <=1s is truly <1024ms, >1s is truly >=1024ms
-> Ordered by Event (idle events last)
% of Waits
Total
Event Waits <1ms <2ms <4ms <8ms <16ms <32ms <=1s >1s
ADR block file read 24 100.0
ADR block file write 10 100.0
ADR file lock 12 100.0
ASM file metadata operatio 1812 99.0 .3 .4 .2 .1
CSS initialization 9 100.0
CSS operation: action 11 90.9 9.1
CSS operation: query 54 100.0
Data file init write 6 16.7 16.7 16.7 50.0
Disk file operations I/O 533 88.7 2.6 .6 1.5 .2 6.4
PX Deq: Signal ACK EXT 4 100.0
PX Deq: Signal ACK RSG 2 100.0
PX Deq: Slave Session Stat 21 42.9 28.6 28.6
SQL*Net break/reset to cli 6 100.0
SQL*Net message to client 102 100.0
SQL*Net more data to clien 4 100.0
asynch descriptor resize 527 100.0
buffer busy waits 4 75.0 25.0
control file parallel writ 2003 9.3 .5 .0 .1 90.0
control file sequential re 6466 10.6 .0 .0 .0 .1 .2 89.0
cursor: pin S 9 100.0
cursor: pin S wait on X 3 33.3 33.3 33.3
db file parallel read 30 6.7 30.0 63.3
db file parallel write 1604 7.4 .1 .6 16.5 75.5
db file scattered read 403 3.7 .2 2.5 13.6 14.9 3.5 61.5
db file sequential read 1017 12.3 .8 2.3 7.3 6.6 2.0 68.8
db file single write 1 100.0
direct path read 522.2K 2.2 2.1 .1 .0 1.8 17.9 75.9
direct path write 22 4.5 4.5 90.9
direct path write temp 1 100.0
enq: JS - queue lock 1 100.0
enq: KO - fast object chec 1 100.0
enq: PS - contention 1 100.0
kfk: async disk IO 614.1K 100.0 .0
kksfbc child completion 1 100.0
latch free 58 46.6 27.6 15.5 10.3
latch: cache buffers chain 19 36.8 10.5 52.6
latch: call allocation 26 76.9 11.5 7.7 3.8
latch: parallel query allo 4 100.0
latch: shared pool 111 44.1 28.8 27.0
library cache load lock 2 100.0
library cache: mutex X 45 84.4 8.9 4.4 2.2
log file parallel write 861 10.0 .1 .1 89.5 .2
log file sync 172 6.4 90.1 3.5
os thread startup 332 100.0
rdbms ipc reply 18 72.2 11.1 16.7
read by other session 5 100.0
reliable message 11 81.8 9.1 9.1
sort segment request 1 100.0
undo segment extension 2 50.0 50.0
ASM background timer 1724 .8 .6 .1 .6 97.9
DIAG idle wait 16.8K 100.0
KSV master wait 13 7.7 23.1 61.5 7.7
PX Deq: Execute Reply 4060 .4 .0 .0 .1 3.4 96.0
PX Deq: Execution Msg 617 34.7 1.5 2.4 1.5 1.5 .2 .8 57.5
PX Deq: Join ACK 16 93.8 6.3
PX Deq: Parse Reply 14 71.4 7.1 14.3 7.1
PX Idle Wait 2384 .0 .6 99.3
SQL*Net message from clien 103 82.5 1.0 1.9 1.0 13.6
Space Manager: slave idle 1621 .2 99.8
Streams AQ: qmn coordinato 604 50.0 50.0
Edited by: Sachin B on May 11, 2010 4:52 AM -
A block of code takes more time in JRE 6_20 but less in the previous version
while (entries.hasMoreElements()) {
    ZipEntry zipEntry = (ZipEntry) entries.nextElement();
    is = zipFile.getInputStream(zipEntry);
    File file = new File(unzipDir, zipEntry.getName());
    if (is.available() == 0) {
        file.mkdir();
        is.close();
    } else {
        file.createNewFile();
        fos = new FileOutputStream(file);
        CommonUtils.connectIO(is, fos, -1, true);
    }
}
Sorry, a typo. The above code takes more time when I run with JDK 6_20, but takes less time with the previous version. I cannot figure out what's wrong. Here is the full method:
/**
 * Unzips the specified file into the specified directory. The optional file names list allows the caller
 * to specify the actual files that get unzipped.
 * @param srcFile the file to unzip
 * @param unzipDir the directory where the unzipped files will be put
 * @param fileNames the optional list of name strings for the zip entries to unzip or <code>null</code> to unzip
 * all entries
 * @throws NullPointerException if either the source file or the unzip directory is <code>null</code>
 * @throws javax.faces.FacesException if an I/O error occurs while unzipping
 */
public static void unzipFile(File srcFile, File unzipDir, List fileNames) {
    if (srcFile == null) {
        throw new NullPointerException("The zip file argument is null");
    }
    if (unzipDir == null) {
        throw new NullPointerException("The unzip directory argument is null");
    }
    ZipFile zipFile = null;
    InputStream is = null;
    FileOutputStream fos = null;
    try {
        zipFile = new ZipFile(srcFile);
        Enumeration entries;
        if (fileNames != null) {
            // Use a vector only so we can abstract away the zip entries enumeration...
            Vector v = new Vector();
            Iterator it = fileNames.iterator();
            while (it.hasNext()) {
                String name = (String) it.next();
                v.add(new ZipEntry(name));
            }
            entries = v.elements();
        } else {
            entries = zipFile.entries();
        }
        while (entries.hasMoreElements()) {
            ZipEntry zipEntry = (ZipEntry) entries.nextElement();
            is = zipFile.getInputStream(zipEntry);
            File file = new File(unzipDir, zipEntry.getName());
            if (is.available() == 0) {
                file.mkdir();
                is.close();
            } else {
                file.createNewFile();
                fos = new FileOutputStream(file);
                CommonUtils.connectIO(is, fos, -1, true);
            }
        }
    } catch (IOException e) {
        throw new FacesException("Problem unzipping file " + srcFile.getAbsolutePath(), e);
    } finally {
        try {
            if (is != null) {
                is.close();
            }
            if (fos != null) {
                fos.close();
            }
            if (zipFile != null) {
                zipFile.close();
            }
        } catch (IOException e) {
            throw new FacesException("Problem closing resources when unzipping file " + srcFile.getAbsolutePath(), e);
        }
    }
}
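One thing worth noting about the method above: using `is.available() == 0` to detect directory entries is fragile, since `available()` is only an estimate and can legitimately return 0 for an ordinary zero-length file, and its behavior may differ between JRE versions. A sketch of the more reliable check (entry names here are made up for illustration):

```java
import java.util.zip.ZipEntry;

// ZipEntry.isDirectory() checks whether the entry name ends with '/',
// which is how directories are recorded in the zip format, instead of
// relying on InputStream.available().
public class ZipEntryCheck {
    public static void main(String[] args) {
        ZipEntry dir  = new ZipEntry("docs/");      // trailing slash => directory
        ZipEntry file = new ZipEntry("docs/a.txt"); // regular entry
        System.out.println(dir.isDirectory());  // prints true
        System.out.println(file.isDirectory()); // prints false
    }
}
```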