Best Practices for unit testing an Xlet?
I'd like to be able to unit test an Xlet using JUnit, but have been stumped by how this can be done correctly.
My first inclination is to mock the XletContext, but that seems to get out of control quickly.
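To make the question concrete, here is roughly what I have been sketching - a hand-rolled fake rather than a mocking framework. Since javax.tv.xlet.XletContext (or javax.microedition.xlet.XletContext) isn't on a plain JVM classpath, this mirrors its shape with a local interface purely for illustration; the names below are mine, not from any spec:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for the real XletContext interface.
interface XletContextShape {
    Object getXletProperty(String key);
    void notifyDestroyed();
    void notifyPaused();
    void resumeRequest();
}

// A hand-rolled fake that serves canned properties and records
// lifecycle notifications so a JUnit test can assert on them.
class FakeXletContext implements XletContextShape {
    private final Map<String, Object> props = new HashMap<>();
    boolean destroyed;
    boolean paused;

    void setProperty(String key, Object value) {
        props.put(key, value);
    }

    @Override public Object getXletProperty(String key) { return props.get(key); }
    @Override public void notifyDestroyed() { destroyed = true; }
    @Override public void notifyPaused() { paused = true; }
    @Override public void resumeRequest() { /* nothing to do in tests */ }
}
```

A test would then pass the fake into the Xlet's initXlet(...), drive startXlet()/pauseXlet(), and assert on the recorded flags - but maybe there is a less manual way?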
Does anyone have any tips on this they can point me to?
Thanks.
Hi Jean-Paul Smit,
Here is a good article about how to unit-test a SQL Server 2008 database using Visual Studio 2010; please see:
http://blogs.msdn.com/b/atverma/archive/2010/07/28/how-to-unit-test-sql-server-2008-database-using-visual-studio-2010.aspx
Thanks,
Eileen
TechNet Subscriber Support
Similar Messages
-
Best practice for the test environment & DBA plan Activities Documents
Dear all,
In our company, we did the sizing for the hardware.
We have three environments (Test/Development, Training, Production).
But the test environment has fewer servers than the production environment.
My question is:
What is the best practice for the test environment?
(Are there any recommendations from Oracle related to this? Any PDF files would help me.)
Also, can I have a detailed document regarding the DBA plan activities?
I appreciate your help and advice
Thanks
Edited by: user4520487 on Mar 3, 2009 11:08 PM
Follow your build document for the same steps you used to build production.
You should know where all your code is. You can use the deployment manager to export your configurations. Export customized files from MDS. Just follow the process again, and you will have a clean instance not containing production data.
It only takes a lot of time if your client is lacking documentation or if you're not familiar with all the parts of the environment. What's 2-3 hours compared to all the issues you will run into if you copy databases or import/export schemas?
-Kevin -
Best practice for GWT testing?
I was wondering what the best way is to test my GWT client logic, which calls into my server and processes the results. I don't want to have a server running for these tests; that will come under my functional/performance testing. Is there a way to mock a server?
Specifically some code like
String url = "http://localhost:8080/myApp/login";
RequestBuilder builder = new RequestBuilder( RequestBuilder.POST, url );
builder.sendRequest( "My data String", new MyResponseHandler() );
How should I test this without there being a service available at the given URL, and instead have a means by which I can send specific data back to the client code under test?
The unit testing support of GWT does not work for you?
http://code.google.com/webtoolkit/doc/latest/tutorial/JUnit.html
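If GWTTestCase does not fit, one other common approach (my own sketch, not an official GWT API - the Transport and Callback names below are made up) is to hide the RequestBuilder call behind a small interface, so plain JUnit can inject a fake transport and no server is needed:

```java
// Hypothetical abstraction over the GWT HTTP call, so tests need no server.
interface Transport {
    void post(String url, String data, Callback callback);
}

interface Callback {
    void onResponse(String body);
}

// Tests use a fake that records the request and replies with canned data.
class FakeTransport implements Transport {
    private final String cannedResponse;
    String lastUrl;
    String lastData;

    FakeTransport(String cannedResponse) {
        this.cannedResponse = cannedResponse;
    }

    @Override
    public void post(String url, String data, Callback callback) {
        lastUrl = url;
        lastData = data;
        callback.onResponse(cannedResponse); // no network involved
    }
}

// The client logic under test depends only on the Transport interface.
class LoginClient {
    private final Transport transport;
    String lastResult;

    LoginClient(Transport transport) {
        this.transport = transport;
    }

    void login(String data) {
        transport.post("http://localhost:8080/myApp/login", data,
                body -> lastResult = body);
    }
}
```

The production implementation of Transport would wrap RequestBuilder.sendRequest() exactly as in the snippet above, so only that thin adapter stays untested by plain JUnit.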
If not, running a server might be the only option you have. Perhaps JBoss Arquillian can help you there.
http://www.jboss.org/arquillian
Arquillian supports Jetty as a container for example, which boots really fast. -
Hello friends - we are doing the EBS configuration, and it includes search strings as well. These changes are applicable to around 40-50 bank accounts.
Should I ask the bank to send us test bank statements for all of these accounts?
Can you please share best practices for testing the EBS setup?
Thanks
Hello!
You don't have to test EBS for each bank account. The best approach is to identify the typical bank statement cases and test those. For instance, if you have bank statements from five banks with several bank accounts in each bank, you need to test one bank statement for one bank account from each bank. Similarly, if you have different types of bank accounts (e.g. current account, deposit account, transfer account, etc.), you will have different operation types in the bank statements for these accounts, so you also have to test bank statements from the different account types.
To sum up: test typical bank statements from each bank, and different bank statements from each account type if applicable.
Hope this will help you!
Best regards! -
Best practice for creating test classes
I am quite new to Java, using NetBeans 6.1 with JDK 1.6, and for some of the functions I created I want to write test classes that run automated tests to check consistency when I make changes.
I read up on this topic and got confused by the different approaches, which depend on the JDK version / NetBeans version used; I also found very different-looking examples on the net.
Now I am very confused about how to continue. Any hint would be helpful.
Thanks,
Martin.
georgemc wrote:
Can you expand on that? I suspect you're talking about JUnit < 4 vs JUnit 4, since JUnit 4 can only work with Java versions from 5 onwards.
Yes, I am talking about JUnit. In NetBeans the default was 3.8.2, if I remember right, and I installed 4.1.
The problem is where to start: creating a JUnit Test, a Test for Existing Class, or a Test Suite.
Trying to create tests for an existing class, it shows me something like:
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.*;

public class TestsTest {

    public TestsTest() {
    }

    @Before
    public void setUp() {
    }

    @After
    public void tearDown() {
    }
}
using 4.1
but
import junit.framework.TestCase;

public class TestsTest extends TestCase {

    public TestsTest(String testName) {
        super(testName);
    }

    protected void setUp() throws Exception {
        super.setUp();
    }

    protected void tearDown() throws Exception {
        super.tearDown();
    }
}
using 3.8.2
so, first of all, the two versions seem to work completely differently. The older version seems more understandable for me as a beginner. Is it safe/stable to use the newer JUnit already? How good is the support in NetBeans for the newer version? I mean, do I have to expect problems if I don't go with the older one? -
Best practice for test reports location -multiple installers
Hi,
What is the recommended best practice for saving test reports with multiple installers of different applications?
For example, if I have 3 different TestStand installers (Installer1, Installer2 and Installer3) and I want to save the test reports of each installer at:
1. C:\Reports\Installer1\TestReportfilename
2. C:\Reports\Installer2\TestReportfilename
3. C:\Reports\Installer3\TestReportfilename
How could I do this programmatically, so that all reports land in the proper folder when the TestStand installers are deployed to a test PC?
Thanks,
Frank
There's no single recommended best practice for what you're suggesting. The example here shows how to programmatically modify a report path, and this Knowledge Base article describes how you can change a report's file path based on test results.
-Mike
Applications Engineer
National Instruments
Best practice for ASA Active/Standby failover
Hi,
I have configured a pair of Cisco ASA in Active/ Standby mode (see attached). What can be done to allow traffic to go from R1 to R2 via ASA2 when ASA1 inside or outside interface is down?
Currently this happens only when ASA1 is down (shutdown). Is there any recommended best practice for such network redundancy? Thanks in advance!
Hi Vibhor,
I tested a ping from R1 to R2, and the ping drops when I shut down either the inside (g1) or outside (g0) interface of the active ASA. Below are the outputs of 'show failover' and 'show run':
ASSA1# conf t
ASSA1(config)# int g1
ASSA1(config-if)# shut
ASSA1(config-if)# show failover
Failover On
Failover unit Primary
Failover LAN Interface: FAILOVER GigabitEthernet2 (up)
Unit Poll frequency 1 seconds, holdtime 15 seconds
Interface Poll frequency 5 seconds, holdtime 25 seconds
Interface Policy 1
Monitored Interfaces 3 of 60 maximum
Version: Ours 8.4(2), Mate 8.4(2)
Last Failover at: 14:20:00 SGT Nov 18 2014
This host: Primary - Active
Active time: 7862 (sec)
Interface outside (100.100.100.1): Normal (Monitored)
Interface inside (192.168.1.1): Link Down (Monitored)
Interface mgmt (10.101.50.100): Normal (Waiting)
Other host: Secondary - Standby Ready
Active time: 0 (sec)
Interface outside (100.100.100.2): Normal (Monitored)
Interface inside (192.168.1.2): Link Down (Monitored)
Interface mgmt (0.0.0.0): Normal (Waiting)
Stateful Failover Logical Update Statistics
Link : FAILOVER GigabitEthernet2 (up)
Stateful Obj xmit xerr rcv rerr
General 1053 0 1045 0
sys cmd 1045 0 1045 0
up time 0 0 0 0
RPC services 0 0 0 0
TCP conn 0 0 0 0
UDP conn 0 0 0 0
ARP tbl 2 0 0 0
Xlate_Timeout 0 0 0 0
IPv6 ND tbl 0 0 0 0
VPN IKEv1 SA 0 0 0 0
VPN IKEv1 P2 0 0 0 0
VPN IKEv2 SA 0 0 0 0
VPN IKEv2 P2 0 0 0 0
VPN CTCP upd 0 0 0 0
VPN SDI upd 0 0 0 0
VPN DHCP upd 0 0 0 0
SIP Session 0 0 0 0
Route Session 5 0 0 0
User-Identity 1 0 0 0
Logical Update Queue Information
Cur Max Total
Recv Q: 0 9 1045
Xmit Q: 0 30 10226
ASSA1(config-if)#
ASSA1# sh run
: Saved
ASA Version 8.4(2)
hostname ASSA1
enable password 2KFQnbNIdI.2KYOU encrypted
passwd 2KFQnbNIdI.2KYOU encrypted
names
interface GigabitEthernet0
nameif outside
security-level 0
ip address 100.100.100.1 255.255.255.0 standby 100.100.100.2
ospf message-digest-key 20 md5 *****
ospf authentication message-digest
interface GigabitEthernet1
nameif inside
security-level 100
ip address 192.168.1.1 255.255.255.0 standby 192.168.1.2
ospf message-digest-key 20 md5 *****
ospf authentication message-digest
interface GigabitEthernet2
description LAN/STATE Failover Interface
interface GigabitEthernet3
shutdown
no nameif
no security-level
no ip address
interface GigabitEthernet4
nameif mgmt
security-level 0
ip address 10.101.50.100 255.255.255.0
interface GigabitEthernet5
shutdown
no nameif
no security-level
no ip address
ftp mode passive
clock timezone SGT 8
access-list OUTSIDE_ACCESS_IN extended permit icmp any any
pager lines 24
logging timestamp
logging console debugging
logging monitor debugging
mtu outside 1500
mtu inside 1500
mtu mgmt 1500
failover
failover lan unit primary
failover lan interface FAILOVER GigabitEthernet2
failover link FAILOVER GigabitEthernet2
failover interface ip FAILOVER 192.168.99.1 255.255.255.0 standby 192.168.99.2
icmp unreachable rate-limit 1 burst-size 1
asdm image disk0:/asdm-715-100.bin
no asdm history enable
arp timeout 14400
access-group OUTSIDE_ACCESS_IN in interface outside
router ospf 10
network 100.100.100.0 255.255.255.0 area 1
network 192.168.1.0 255.255.255.0 area 0
area 0 authentication message-digest
area 1 authentication message-digest
log-adj-changes
default-information originate always
route outside 0.0.0.0 0.0.0.0 100.100.100.254 1
timeout xlate 3:00:00
timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
timeout tcp-proxy-reassembly 0:01:00
timeout floating-conn 0:00:00
dynamic-access-policy-record DfltAccessPolicy
user-identity default-domain LOCAL
aaa authentication ssh console LOCAL
http server enable
http 10.101.50.0 255.255.255.0 mgmt
no snmp-server location
no snmp-server contact
snmp-server enable traps snmp authentication linkup linkdown coldstart warmstart
telnet timeout 5
ssh 10.101.50.0 255.255.255.0 mgmt
ssh timeout 5
console timeout 0
tls-proxy maximum-session 10000
threat-detection basic-threat
threat-detection statistics access-list
no threat-detection statistics tcp-intercept
webvpn
username cisco password 3USUcOPFUiMCO4Jk encrypted
prompt hostname context
no call-home reporting anonymous
call-home
profile CiscoTAC-1
no active
destination address http https://tools.cisco.com/its/service/oddce/services/DDCEService
destination address email [email protected]
destination transport-method http
subscribe-to-alert-group diagnostic
subscribe-to-alert-group environment
subscribe-to-alert-group inventory periodic monthly
subscribe-to-alert-group configuration periodic monthly
subscribe-to-alert-group telemetry periodic daily
crashinfo save disable
Cryptochecksum:fafd8a885033aeac12a2f682260f57e9
: end
ASSA1# -
Best Practice for utility in Sol Man 4.0
We have software component ST-ICO of release 150_700 with Patch level 5
We want a template selection for the utility industry. I checked the Service Marketplace and found that 'Baseline Package United Kingdom V1.50, Template: BP_BLKU150' is available in the above software component.
But we are not getting any templates other than 'BP_UTUS147 - Best Practices for Water Utility' in the 'SOLAR_PROJECT_ADMIN' transaction.
Kindly suggest whether any patch needs to be applied or some configuration needs to be done.
Regards
Mani
Hi Mani,
Could you please give me the link to where you found the template BP_BLKU150?
It will be helpful for me.
Thanks
Senthil -
Best Practices for Defining NDS Java Projects...
We are doing a Proof of Concept on using NDS to develop non-SAP Java applications. We are attempting to determine if we can replace our current Java development tools with NDS/WAS.
We are struggling with SAP's terminology and "plumbing" for setting up/defining Java projects. For example, what are Tracks, Software Components, Development Components, etc., and when do you define them? All of these terms are totally foreign to us and do not relate to our current Java environment (at least not that we can see). We are also struggling with how the DTR and activities tie in to those components.
If any one has defined best practices for setting up Java projects or has struggled with and overcome these same issues, please provide us with some guidance. This is a very frustrating and time-consuming issue for us.
Thank you!!
Hi Peggy,
In the component model, we divide software projects into small components. Components can use other components in a well-defined manner.
A development object is a part of a component that can be changed or developed in some way; it provides the component with a certain part of its functionality. A development object may be a Java class, a Web Dynpro view, a table definition, a JSP page, and so on. Development objects are always stored as sources in a repository.
A development component can be defined as a frame shared by a number of objects, which are part of the software.
Software components combine development components (DCs) into larger units for delivery and deployment.
A track comprises the configurations and runtime systems required for developing software component versions. It ensures stable states of deliverables used by subsequent tracks.
The Design Time Repository (DTR) provides versioned source-code management, distributed development of software in teams, and the transport and replication of sources.
You can also find lot of support in SDN for the above concepts with tutorials.
Refer to this link for an overview of the Java Development Infrastructure (JDI):
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/webas/java/java development infrastructure jdi overview.pdf
To understand further
Working with the NetWeaver Development Infrastructure:
http://help.sap.com/saphelp_nw04/helpdata/en/03/f6bc3d42f46c33e10000000a11405a/content.htm
In the above link you can find all the concepts clearly explained. You can also find the required tutorials for development.
Regards,
Vijith -
Networking "best practice" for setting up a farm
Hi all.
We would like to set up an OracleVM farm, and I have a question about "best practice" for
configuring the network. Some background:
- The hardware I have is comprised of machines with 4 gig-eth NICs each.
- The storage will be coming primarily from a backend NAS appliance (Netapp, FWIW).
- We have already allocated a separate VLAN for management.
- We would like to have HA capable VMs using OCFS2 (on top of NFS.)
I'm trying to decide between 2 possible configurations. The first would keep physical separation
between the mgt/storage networks and the DomU networks. The second would just trunk
everything together across all 4 NICs, something like:
Config 1:
- eth0 - management/cluster-interconnect
- eth1 - storage
- eth2/eth3 => bond0 - 8021q trunked, bonded interfaces for DomUs
Config 2:
- eth0/1/2/3 => bond0
Do people have experience or recommendation about the best configuration?
I'm attracted to the first option (perhaps naively) because CI/storage would benefit
from dedicated bandwidth and this configuration might also be more secure.
Regards,
Robert.
user1070509 wrote:
Option #4 (802.3ad) looks promising, but I don't know if this can be made to work across separate switches.
It can, if your switches support cross-switch trunking. Essentially, 802.3ad (also known as LACP, or EtherChannel on Cisco devices) requires your switch to be properly configured to allow trunking across the interfaces used for the bond. I know that the high-end Cisco and Juniper switches do support LACP across multiple switches; in the Cisco world, this is called MEC (Multichassis EtherChannel).
If you're using low-end commodity-grade gear, you'll probably need to use active/passive bonds if you want to span switches. Alternatively, you could use one of the balance algorithms for some bandwidth increase. You'd have to run your own testing to determine which algorithm is best suited for your workload.
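For illustration, an 802.3ad bond on Linux of that era was typically declared along these lines (a sketch only - file locations and option syntax vary by distribution, and the switch ports must be configured for LACP as described above):

```
# /etc/modprobe.conf (illustrative)
alias bond0 bonding
options bonding mode=802.3ad miimon=100 lacp_rate=fast xmit_hash_policy=layer3+4

# /etc/sysconfig/network-scripts/ifcfg-eth2 (and likewise for eth3)
DEVICE=eth2
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```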
The Linux Foundation's Net:Bonding article has some great information on bonding in general, particularly on the various bonding methods for high availability:
http://www.linuxfoundation.org/en/Net:Bonding -
Could you please share your best practices for setting up a new Windows Server 2012 r2 Hyper-V Virtualized AD DC, that will be running on a new WinSrv 2012 r2 host server. (This
will be for a brand new network setup, new forest, domain, etc.)
Specifically, your best practices regarding:
the sizing of non virtual and virtual volumes/partitions/drives,
the use of sysvol, logs, & data volumes/drives on hosts & guests,
RAID levels for the host and the guest(s),
IDE vs SCSI and drivers both non virtual and virtual and the booting there of,
disk caching settings on both host and guests.
Thanks so much for any information you can share.
A bit of non-essential additional info:
We are a small-to-midrange school district which, after close to 20 years on Novell networks, has decided to design and create a new Microsoft network and migrate all of our data and services over to the new infrastructure. We are planning on rolling out 2012 R2 servers with as much Hyper-V virtualization as possible.
During the last few weeks we have been able to find most of the information we need for this project, and most of it was pretty solid with little ambiguity, except for the information regarding virtualizing the DCs, which has been a bit inconsistent.
Yes, we have read all the documents that most of these posts tend to point to, but found that some, if not most, still refer to doing this under Server 2008 R2; we haven't really seen all that much on Server 2012 R2.
We have read these and others:
Introduction to Active Directory Domain Services (AD DS) Virtualization (Level 100),
Virtualized Domain Controller Technical Reference (Level 300),
Virtualized Domain Controller Cloning Test Guidance for Application Vendors,
Support for using Hyper-V Replica for virtualized domain controllers.
Again, thanks for any information, best practices, cookie cutter or otherwise that you can share.
Chas. -
Best practice for a deployment (EAR containing WAR/EJB) in a productive environment
Hi there,
I'm looking for some hints regarding best-practice deployment in a productive environment (currently we are not using a WLS cluster).
We are using ANT for building, packaging, and (dynamic) deployment (via weblogic.Deployer) in the development environment, and this works fine (in the meantime).
From my point of view, I would prefer this kind of deployment not only for development but also for the productive system.
But I found some hints in some books whose authors prefer static deployment for the p-system.
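For reference, our dynamic deployment boils down to a weblogic.Deployer invocation roughly like this (host, credentials and target names are placeholders, and the exact flags differ between WLS releases):

```sh
# Illustrative only - adjust adminurl, credentials and target to your domain.
java weblogic.Deployer \
  -adminurl t3://adminhost:7001 \
  -username weblogic -password welcome1 \
  -deploy -name myapp \
  -targets myserver \
  -source ./dist/myapp.ear
```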
My question now:
Could anybody provide me with some links to whitepapers regarding best practices for deployment into a p-system?
What is your experience with the new two-phase deployment coming with WLS 7.0?
Is it really a good idea to use static deployment? What is the advantage of this kind of deployment?
Thanks in advance
-Martin
Hi Siva,
Which best practices are you looking for? If you can be more specific in your question, we could provide an appropriate response.
From my Basis experience, here are some of the best practices:
1) Productive landscape should have high availability to business. For this you may setup DR or HA or both.
2) It should have backups configured, for which restores have already been tested
3) It should have all the monitoring setup viz application, OS and DB
4) Productive client should not be modifiable
5) Users in Production landscape should have appropriate authorization based on SOD. There should not be any SOD conflicts
6) Transport to Production should be highly controlled. Any transport to Production should be moved only with appropriate Change Board approvals.
7) Relevant Database and OS security parameters should be tested before golive and enabled
8) Pre-Golive , Post Golive should have been performed on Production system
9) EWA should be configured at least for the Production system
10) Production system availability using DR should have been tested
Hope this helps.
Regards,
Deepak Kori -
Best practice for "Quantity" field in Asset Master
Hi
I want to know the best practice for the "Quantity" field in the asset master: should it be display-only or a required field in asset master creation?
Initially I made this field a required entry, so the user entered a quantity of 1. At the time of posting F-90 he entered a quantity again, so the quantity in my asset master got increased. Hence I decided to make the field display-only in asset master creation.
Now that I have made the field display-only in asset master creation, the quantity field does not appear at all at the time of posting F-90. I checked my field status group for the posting key as well as the GL account: it is an optional field. In spite of that, the user is able to post in F-90 without it, and now the quantity field remains '0' in the asset master even though there is some value in the asset.
Please advise on the best practice for the quantity field: should it be open in the asset master, or display-only?
Hi:
The SAP standard does not recommend updating the quantity field in the asset master data manually. Just leave the quantity field blank and mention the unit of measure as EA; when you post an acquisition through F-90 or MIGO, this field will be updated in the asset master data automatically. Hope this helps.
Regards -
Best Practice for Customization of ESS 50.4
Hi ,
We have implemented ESS 50.4 on EP 6.0 SP14 and R/3 4.6C. I want to know the best practice for minor modifications of ESS transactions. For example, I need to hide the Change button on the Personal Information screen.
Please let me know.
PS : Guaranteed award points
Aneez
@Aneez
"Best Practice" is just going to be good ole' ITS custom development. All the "old" ESS services are all ITS based. What can not be done through config is then done by developing custom version of the ESS services. For what you describe (ie. the typical "hide a button" scenario) it is simply a matter of:
(1) Create a custom version (i.e. a "Z" version) of the standard service. The service file will still call the same backend transaction via the ITS parameter ~transaction.
(2) Since you are NOT making changes that require anything to be changed in the backend transaction (such as adding new fields or changing business logic), you luckily ONLY have to change the web templates. Locate the web template in your new custom service file that corresponds to the screen in the transaction where the "CHANGE" button appears. The ITS naming convention for web templates is <sapprogramname>_<screennumber>.
(3) After locating the web template that corresponds to your needed screen, simply locate in the HTMLb where the "CHANGE" button code is and comment it out. Just that easy!
(4) Publish your new customized service and test it out directly through ITS. ie. via the direct URL to it: http://<yourdomain>/scripts/wgate/<yourservice>!
(5) Once you see that it works, you can then make an iView for it in your portal (or simply change the iView you have to point to your custom ITS service).
LOTS and LOTS more info on ITS development all around this site and in the ITS-specific forum.
Hope this helps!
Award points or save them...I really don't care. I think the points system here is one of the dumbest ideas since square wheels. =) -
Best practice for use of spatial operators
Hi All,
I'm trying to build a .NET toolkit to interact with Oracle's spatial operators. The most common use of this toolkit will be to find results that lie within a given geometry - for example, selecting the parish boundaries within a county.
Our boundary data is high-detail, commonly containing upwards of 50,000 vertices for a county-sized polygon.
I've currently been experimenting with queries such as:
select *
from
uk_ward a,
uk_county b
where
UPPER(b.name) = 'DORSET COUNTY' and
sdo_relate(a.geoloc, b.geoloc, 'mask=coveredby+inside') = 'TRUE';
However the speed is unacceptable, especially as most of the implementations of the toolkit will be web based. The query above takes around a minute to return.
Any comments or thoughts on best practices for using Oracle Spatial in this way will be warmly welcomed. I'm looking for a solution that is as quick and efficient as possible.
Thanks again for the reply... the query currently takes just under 90 seconds to return. Here are the results of the execution plan, run in SQL*Plus:
Elapsed: 00:01:24.81
Execution Plan
Plan hash value: 598052089
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 156 | 46956 | 76 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | 156 | 46956 | 76 (0)| 00:00:01 |
|* 2 | TABLE ACCESS FULL | UK_COUNTY | 2 | 262 | 5 (0)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| UK_WARD | 75 | 12750 | 76 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | UK_WARD_SX | | | | |
Predicate Information (identified by operation id):
2 - filter(UPPER("B"."NAME")='DORSET COUNTY')
4 - access("MDSYS"."SDO_INT2_RELATE"("A"."GEOLOC","B"."GEOLOC",'mask=coveredby+inside')='TRUE')
Statistics
20431 recursive calls
60 db block gets
22432 consistent gets
1156 physical reads
0 redo size
2998369 bytes sent via SQL*Net to client
1158 bytes received via SQL*Net from client
17 SQL*Net roundtrips to/from client
452 sorts (memory)
0 sorts (disk)
125 rows processed
The wards table has 7545 rows, the county table has 207.
We are currently on release 10.2.0.3.
All I want to do with this is generate results that fall within a particular geometry. Most of my testing has been successful; I just seem to run into issues when querying against a county-sized polygon - I guess due to the number of vertices.
Also looking through the forums now for tuning topics...
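One rewrite I am going to try next (my own sketch, based on general SDO_RELATE tuning advice rather than anything authoritative) is to force the single filtered county row to be the driving "window" geometry, so the domain index on the wards table is probed just once:

```sql
-- Sketch: uk_county first in the FROM list plus the ORDERED hint makes the
-- filtered county row the window geometry for the sdo_relate probe.
select /*+ ordered */ a.*
from
  uk_county b,
  uk_ward a
where
  upper(b.name) = 'DORSET COUNTY' and
  sdo_relate(a.geoloc, b.geoloc, 'mask=coveredby+inside') = 'TRUE';
```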