High performance website, best practices?

Hello all,
I'm working on a system with a web service/Hibernate front-end (Java code linking web pages to the database) which is expected to process up to 12,000 transactions per second with zero downtime. We're at the development/demonstration stage for phase 1 functionality, but I don't think there has been much of a planning stage to make sure the metrics can be reached. I've not worked on a system with this many transactions before, and I've always had downtime windows where database and application patches can be applied. I've had a quick look into the technologies available for Oracle High Availability and, since we are using 11g with RAC, I know we have at least paid for them even if we're not using them.
There isn't a lot of programming logic in the system (no 1000-line packages accessing dozens of tables, in fact there are only about 20 tables) and there are very few updates. It's mostly inserts and small queries getting a piece of data for use in the front-end.
What I'd like to know is the best-practice development approach for this type of system. As far as I know, the only person on the team with authority and an opinion on technical architecture wants to use the database as a store of data and move all the logic into the front-end. The thinking behind this is:
1) it's easier to load balance or increase capacity in the front-end
2) the database will be the bottleneck in the system so should have as little demand placed on it as possible
3) PL/SQL packages cannot always be updated without downtime (I'm not sure if this is true or if it can be managed -- the concern is that packages become invalid whilst the upgrade script is running -- or how updates in the front-end could be managed any better, especially if they need to be coordinated with changes to tables)
4) reference tables can be cached in the front-end to cut down on data access (a rough sketch of what this could look like follows this list)
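To illustrate point 4, a minimal read-through cache might look like the sketch below. This is plain Java with made-up names (in practice a caching library or Hibernate's second-level cache would likely do this job):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal read-through cache for reference data that rarely changes.
// "ReferenceDataCache" and the loader function are hypothetical names for illustration.
public class ReferenceDataCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> loader; // e.g. a Hibernate or JDBC lookup

    public ReferenceDataCache(Function<K, V> loader) {
        this.loader = loader;
    }

    public V get(K key) {
        // Hits the database only on first access; later reads are served from memory.
        return cache.computeIfAbsent(key, loader);
    }

    public void invalidate(K key) {
        cache.remove(key); // call this if the underlying reference row changes
    }
}

Usage would be something like new ReferenceDataCache<String, Country>(code -> countryDao.findByCode(code)), where countryDao is whatever data-access object the front-end already has.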
Views please!

A couple of thoughts:
- Zero downtime (or at least very close to it) is achievable, but there is a rapidly diminishing return on the cost of squeezing the last few percent out of uptime; if you can have the odd planned maintenance window you can make your life a lot easier.
- If you decide ahead of time that the database is going to be the bottleneck, then it probably will be!
- I can understand where they are coming from with their thinking: the web tier will be easier to scale out, but eventually all that data still needs to get into the database. The database layer is where you need to start the design to get the most out of the platform. Can it handle 12,000 TPS? If it can't, then it doesn't matter how quickly your application layer can service those requests.
- If this is mainly inserts, could these be queued in some sort of message queue? Allow the clients to get an (almost) instant 'Done' confirmation, with the database becoming eventually consistent. It very much depends on what this is being used for, of course, but it could help with both the performance (at least the 'perceived' performance) and the uptime requirement -- a rough sketch of the idea follows this list.
- Caching fairly static data sounds like a good idea to me.
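A rough sketch of the queued-insert idea, using the standard javax.jms API (the JNDI names and the message payload here are assumptions, not anything from your system):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

// Hypothetical producer: the web tier enqueues the transaction and acknowledges the
// client immediately; a separate consumer drains the queue and does the actual inserts.
public class TransactionEnqueuer {

    public void enqueue(String transactionJson) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory"); // assumed JNDI name
        Queue queue = (Queue) ctx.lookup("jms/TransactionQueue");                       // assumed queue name

        Connection conn = cf.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            producer.setDeliveryMode(DeliveryMode.PERSISTENT); // survive a broker restart
            producer.send(session.createTextMessage(transactionJson));
        } finally {
            conn.close();
        }
    }
}

(In a real system you would cache the connection factory and reuse sessions rather than looking them up per message; this is only to show the shape of the approach.)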
Carl

Similar Messages

  • Performance Tuning Best Practices/Recommendations

    We recently went live on an ECC 6.0 system. We have 3 application servers that are showing a lot of swaps in ST02.
    Our buffers were initially set based off of SAP Go-Live Analysis checks.  But it is becoming apparent that we will need to enlarge some of our buffers.
    Are there any tips and tricks I should be aware of when tuning the buffers? 
    Does making them too big decrease performance?
    I am just wanting to adjust the system to allow the best performance possible, so any recommendations or best practices would be appreciated.
    Thanks.

    Hi,
    Please increase the value of parameters in small increments. If you set the parameters too large, memory is wasted. This can result in paging if too much memory is taken from the operating system and allocated to SAP buffers.
    For example, if abap/buffersize is 500000, change this to 600000 or 650000. Then analyze the performance and adjust parameters accordingly.
    Please check out the following link and all embedded links; the documentation provided there is fairly elaborate: http://help.sap.com/saphelp_nw04/helpdata/en/c4/3a6f4e505211d189550000e829fbbd/content.htm
    Moreover, the thread mentioned by Prince Jose is very good as a guideline as well.
    Best regards

  • Collaborative Websites: Best Practice?

    Good morning! Afternoon? I'm starting to delve into more advanced topics in SharePoint and am aiming to make a collaborative website between various groups.
    I'm rather confused about the concept of site collections, wiki, etc.
    What I was hoping to create is a site with basic information: news, contacts, etc. But one tab on the navigation bar would lead to a wiki, and another to a personalizable site. So I suppose I have two separate questions:
    1) Does the wiki have to stand as its own site collection, under the same web application?
    2) Same question for the personalizable site. Additionally, are there any resources you would recommend? I'm having a hard time finding things in 'beginner's' terms.
    Many thanks, my friends.
    Edit: To clarify on the personalizable site, each person with editing rights on the SharePoint site can have their own page (similar to a Facebook profile) to which they can add whatever their heart desires.

    Hi  Catherine,
    Firstly, you need only one site collection to get your portal provisioned. However, the sizing and number of site collections are decided based on users and the volume of data growth.
    I would suggest you create a team site collection, which is a collaboration template, and you can explore possibilities of hosting blogs with community sites.
    For the wiki, you can choose the Wiki site template, and it can be a site under the team site collection. Alternatively, you could also use a Team site as a wiki site by using a wiki library.
    For the personalizable site, the best option would be My Site, which is equivalent to Facebook and offers more capabilities on the enterprise social front.
    The My Site Host itself is a site collection, so this would go into a separate site collection. My Site has dependencies on User Profiles and other services, so you may need to plan accordingly.
    Here are links for reference:
    Overview of sites and site collections in SharePoint 2013
    http://technet.microsoft.com/en-us/library/cc262410(v=office.15).aspx
    Configure My Sites in SharePoint Server 2013
    http://technet.microsoft.com/en-us/library/ee624362(v=office.15).aspx
    Differences between Enterprise Wiki and Wiki Page Library in SharePoint 2013
    http://bernado-nguyen-hoan.com/2013/05/10/differences-between-enterprise-wiki-and-wiki-page-library-in-sharepoint-2013/
    Create and edit a wiki
    http://office.microsoft.com/en-us/office365-sharepoint-online-small-business-help/create-and-edit-a-wiki-HA102775321.aspx
    Plan sites and site collections in SharePoint 2013
    http://technet.microsoft.com/en-us/library/cc263267(v=office.15).aspx
    Hope this helps!
    Ram - SharePoint Architect
    Blog - SharePointDeveloper.in
    Please vote or mark your question answered, if my reply helps you

  • ETL processing Performance and best practices

    I have been tasked with enhancing an existing ETL process. The process includes dumping data from a flat file to staging tables and processing records from the initial tables into the permanent tables. The first step, extracting data from the flat file to the staging tables, is done by BizTalk; no problems there. The second part, processing records from the staging tables and updating/inserting the permanent tables, is done in .NET. I find this process inefficient and prone to deadlocks because the code loads the data from the initial tables (using stored procs), loops through each record in .NET, makes several subsequent calls to stored procedures to process the data and then updates the record. I see a variety of problems here: the process is very chatty with the database, which is a big red flag. I need some opinions from ETL experts, so that I can convince my co-workers that this is not the best solution.
    Anonymous

    I'm not going to call myself an ETL expert, but you are right on the money that this is not an efficient way to work with the data. Indeed, very chatty. Once you have the data in SQL Server, keep it there. (Well, if you are interacting with another data source, it's a different game.)
    Erland Sommarskog, SQL Server MVP, [email protected]
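    As a footnote: if some of the per-row work really has to stay in application code, batching the statements at least cuts the number of round trips. A rough JDBC sketch (Java here, since the original .NET code isn't shown; the table, columns and StagingRow type are made up):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.util.List;

    public class StagingLoader {

        // Made-up row type standing in for whatever the staging table holds.
        public record StagingRow(long id, String payload) {}

        // One round trip per batch of 1000 rows instead of several calls per row.
        public void copyToPermanent(Connection conn, List<StagingRow> rows) throws Exception {
            String sql = "INSERT INTO permanent_table (id, payload) VALUES (?, ?)"; // hypothetical table
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                int count = 0;
                for (StagingRow row : rows) {
                    ps.setLong(1, row.id());
                    ps.setString(2, row.payload());
                    ps.addBatch();
                    if (++count % 1000 == 0) {
                        ps.executeBatch(); // flush every 1000 rows
                    }
                }
                ps.executeBatch(); // flush the remainder
                conn.commit();
            }
        }
    }

    That said, Erland's point stands: a set-based statement run inside SQL Server will normally beat anything that ships the rows out to the application and back.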

  • Best practice to configure a web server hosting 1000+ websites

    We are deploying a new server for our shared clients. We have a new branded server with a good config. The server will host IIS, MySQL and a mail server -- is this OK, or should I keep the mail server on a different server? Also, should I change the installation path of IIS and the other applications, or is installing apps on the C drive and keeping all websites and databases on the D drive good for the server?
    Akshay Pate

    "Server is hosting IIS,MySQL,Mailserver is this ok."
    Impossible to tell. You have not given us any indication of the configuration of the server nor the expected workload. Yes, you say 1000+ web sites, but if each web site is looked at once a day, that's a lot different than if each web site is accessed 10,000 times a day. So we can't tell if it makes sense to install different roles on different physical servers, though that is often considered a best practice. Similarly, it is generally a good idea to install applications/data to drives other than the C: drive, but without a lot more information (too much for a technical forum), it is pretty difficult to say anything specific.
    If you do not feel comfortable making the decisions yourself, you would be better served by hiring a consultant to evaluate your proposed configuration and perform some benchmarking based on your expected workloads and traffic patterns.
    . : | : . : | : . tim

  • Oracle Best Practices for generating Transactions IDs in high OLTP systems

    We are in the process of designing a high OLTP system using Oracle 11g Database with the following NFRs:
    1) 1 million transactions per day
    2) 100,000 concurrent users
    There are about 160-180 entities in the database, and we want to know the best approach/practice for deriving the transaction IDs for the OLTP system. Our preferences are given below:
    1) Use Oracle Sequence starting with 1,000,000,000 (1 billion) - This is to make the TXN ID look meaningful when it starts with 1 billion instead of starting it with 1.
    2) Use timestamp and cast it to number instead of using Oracle sequence.
    Note: Transaction IDs must appear in sequence as they are inserted - be it sequence/timestamp
    I would like to know the pros/cons of the above methods and their impact on performance. Also, I'd appreciate it if you could share any best practices/methods that Oracle supports.
    Thanks in advance.
    Ken R

    Ken R wrote:
    I did a quick PoC using both Oracle Sequence & Timestamp for 1 million inserts in a Non-RAC environment. Code used is given below:
    create sequence testseq start with 1 cache 10000 order;
    create table test1 (txnid number, txndate timestamp(9));
    create table test2 (txnid number, txndate timestamp(9));
    begin
      for i in 1..1000000
      loop
        insert into test1 values (testseq.nextval, systimestamp);
      end loop;
      commit;
    end;
    /
    begin
      for i in 1..1000000
      loop
        insert into test2 values (to_number(to_char(systimestamp, 'yyyymmddhh24missff9')), systimestamp);
      end loop;
      commit;
    end;
    /
    Here are the results:
    select max(txndate)-min(txndate) from test1;
    Result >> 0 0:3:3.514891000
    select max(txndate)-min(txndate) from test2;
    Result >> 0 0:1:32.386923000
    It appears that Timestamp is faster than sequence... Any thought is highly appreciated...
    Interesting that your sequence timing is so slow. You say this was a non-RAC environment, but I wonder if you had Oracle linked in RAC mode even though you were running single instance - this would result in the ORDERed sequence running through RAC's "DFS Lock Handle" mechanism which might account for the timing anomaly.
    Unfortunately your test is not particularly relevant. As DomBrooks points out there are lots of problems with sequence-based or time-based columns, especially in RAC, and most particularly if you think you want a "no-gap" sequence. On top of this, of course, your test doesn't include an index on the relevant column, and it's single user and doesn't test for any concurrency effects.
    Typical performance problems are: your RAC instances spend all their time negotiating who gets to use the next value; the index you use to enforce uniqueness suffers from massive contention on the "high-value" block unless you create a reverse-key index - at which point you have to be able to cache the entire index to minimise I/O overheads; you can hash partition the index to avoid using the reverse-key option - but that costs a lot of money if you don't already license the partitioning option.
    Regards
    Jonathan Lewis

  • Best Practice while configuring Traffic Manager for Azure Website

    Hi Team,
    I want to understand what the best practice is when configuring Traffic Manager for an Azure website.
    To give you some background, let me explain my requirement. I have one website for which 40% of the target audience would be in the East US, 40% in the UK, and the remaining 20% in Asia-Pacific.
    Now, what I want is a Failover + Performance based Traffic Manager configuration.
    My thinking:
    1) We need to create one website with two instances in each region (East US, East Asia, West US, for example), so three deployments of the website in total (with a region-based URL for each).
    2) Create a traffic manager based on performance and add those 3 instances; that would become website-tmonperformance.
    3) Create a traffic manager based on failover and add those 3 instances; that would become website-tmonfailover.
    4) Create a traffic manager and ?? I don't know the criteria, but add both of the above traffic managers here and take its final URL for the end user.
    I am not sure (1) whether this is the right approach, and (2) if it is, which criterion we should select in the 4th step when creating the final traffic manager: round-robin, performance or failover?
    After all this, if a user tries to access the site from the US, will the traffic manager divert them to the US data centre, or will it wait for failover, and until then will they be served from East Asia if East Asia is my first instance in the configuration?
    Regards, Brijesh Shah

    Hi Jonathan,
    Thanks for your quick reply. Actually the question is a bit different; let me explain it another way.
    I was asking for a recommendation from the Azure Traffic Manager team on whether my understanding is correct or not. We want Performance with Failover.
    So, we have one Azure website: take todoapp as an example. I deployed it in 3 different regions. Now, I want performance-based routing as well as failover-based routing, but obviously I can't give two URLs to my end users, so on top of that I will require one more traffic manager. So:
    Step 1: I will create one traffic manager with the performance criterion, named TMForPerformance.trafficmanager.com, where I will add all 3 instances (all are from different regions, so it won't create any issue).
    Step 2: I will create one more traffic manager with the failover criterion, named TMForFailover.trafficmanager.com, where I will add all 3 instances (all are from different regions, so it won't create any issue).
    Step 3: I will create one final traffic manager with the performance criterion, named todoapp.trafficmanager.com, where I will add these two traffic managers instead of the 3 different regions' websites.
    Question 1) Is this the correct structure if we want to achieve Performance with Failover, or is there a better solution?
    Question 2) In step 3, which criterion should we select: performance, round robin or failover?
    Regards, Brijesh Shah

  • CE Benchmark/Performance Best Practice Tips

    We are in the early stages of starting a CE project where we expect a high volume of web service calls per day (e.g. customer master service, material master service, pricing service, order creation service etc).
    Are there any best-practice guidelines which could be taken into account to avoid any possible performance problems within the web service "infrastructure"?
    Should master data normally residing in the backend ECC server be duplicated outside ECC?
    e.g. if individual reads of the master data in the backend system take 2 seconds per call, would it be more efficient to duplicate the master data on the SAP AS Java server, or elsewhere - if the master data is expected to be read thousands of times each day.
    Also, what kind of benchmarking tools (SAP std or 3rd party) are available to assess the performance of the different layers of the infrastructure during integration + volume testing phases?
    I've tried looking for any such documentation on SDN, OSS, help.sap.com, but to no avail.
    Many thanks in advance for any help.
    Ali Crawshaw

    Hi Ali,
    For performance and benchmarking have you had a look at Wily Introscope?
    The following presentation has some interesting information [Wiley Introscope supports CE 7.1|http://www.google.co.za/url?sa=t&source=web&ct=res&cd=7&ved=0CCEQFjAG&url=http%3A%2F%2Fwww.thenewreality.be%2Fpresentations%2Fpdf%2FDay2Track6%2F265CTAC.pdf&ei=BUGES-yyBNWJ4QaN7KzXAQ&usg=AFQjCNE9qA310z2KKSMk4d42oyjuXJ_TfA&sig2=VD1iQvCUmWZMB5OB-Z4gEQ]
    With regards to best practice guidelines, if you are using PI for service routing try to keep to asynch services as far as possible, asynch with acknowledgments if need be. Make sure your CE Java AS is well tuned according to the SAP best practice.
    Will you be using SAP Global Data Types for your service development? If so, the one performance tip I have regarding the use of GDTs is to keep your GDT structures as small (in number of fields) as possible, as large GDT structures have an impact on memory consumption at runtime.
    Cheers
    Phillip

  • Is there a Mac OS X manual/best practice/performance enhancement guide?

    Hi
    I just got all Mac'ed up and am pretty new to it. I've been a PC loser for ages, but at least I knew what I was doing with it! Although my Mac is super fast and efficient now, I am paranoid that, like all the PCs I've ever owned, this "new car smell" state will not last forever unless some maintenance is kept up. Is there any kind of manual or maybe a website out there that can tell me about best practice with Macs? Stuff like keeping the registry clean (like on a PC), the best way to remove applications (entirely!), managing memory for best performance, etc. I'd also like to know how using non-Apple-made/branded applications affects the integrity of the system. Basically I just want to find out how Mac software works and how best to use it.
    Thanks in advance.

    Start with these:
    Switching from Windows to Mac OS X,
    Basic Tutorials on using a Mac,
    MacFixIt Tutorials,
    MacTips, and
    Switching to the Mac: The Missing Manual, Leopard Edition.
    For maintenance, see these:
    Macintosh OS X Routine Maintenance
    Mac OS X speed FAQ
    Maintaining OS X
    Additionally, *Texas Mac Man* recommends:
    Quick Assist.
    Welcome to the Switch To A Mac Guides, and
    A guide for switching to a Mac.

  • ASM on SAN datafile size best practice for performance?

    Is there a 'best practice' for datafile size for performance?
    In our current production we have 25GB datafiles for all of our tablespaces in ASM on 10gR1, but I was wondering what the difference would be if I used, say, 50GB datafiles? Is 25GB a kind of mid point so the data can be striped across multiple datafiles for better performance?

    We will be using Red Hat Linux AS 4 update u on 64-bit AMD Opterons. The complete database will be on ASM... not the binaries though. All of the datafiles we currently have in our production system are 25GB files. We will be using RMAN-->Veritas tape backup and RMAN-->disk backup. I just didn't know if anybody out there was using smallfile tablespaces with 50GB datafiles or not. I can see that one of our tablespaces will probably be close to 4TB.

  • What is the best Practice to improve MDIS performance in setting up file aggregation and chunk size

    Hello Experts,
    In our project we have planned to make some parameter changes to improve MDIS performance, and we want to know the best practice for setting up file aggregation and chunk size when importing large numbers of small files (one file contains one record, and each file is 2 to 3KB) through the automatic import process.
    Below are the current settings in production:
    Chunk Size = 2000
    No. of Chunks Processed in Parallel = 40
    File Aggregation = 5
    Records per Minute Processed = 37
    And these are the settings we made in the Development system:
    Chunk Size = 70000
    No. of Chunks Processed in Parallel = 40
    File Aggregation = 25
    Records per Minute Processed = 111
    After making the above changes the import process improved, but we want to get an expert opinion before making these changes in production, because there is a huge difference between what is in prod and the change we made in Dev.
    thanks in advance,
    Regards
    Ajay

    Hi Ajay,
    The SAP default values are as below
    Chunk Size=50000
    No of Chunks processed in parallel = 5
    File aggregation: this depends largely on the data. If you have only one or two records being sent at a time, it is better to cluster them together and send them in one shot, instead of sending one record at a time.
    Records per minute Processed - Same as above
    Regards,
    Vag Vignesh Shenoy

  • Best practice to monitor 10gR3 OSB performance using JMX API?

    Hi guys,
    I need some advice on the best practice to monitor 10gR3 OSB performance using JMX API.
    Just to show I have done my homework, I managed to get the JMX sample code from
    http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/jmx_monitoring/example.html#wp1109828
    working.
    The following is the list of options I am think about:
    * Set up: I have a cluster of 1 admin server with 2 managed servers; each managed server runs an instance of OSB
    * What I try to achieve:
    - use JMX API to collect OSB stats data periodically as in sample code above then save data as a record to a
         database table
    Options/ideas:
    1. Simplest approach: Run the modified version of JMX sample on the Admin Server to save stats data to database
    regularly. I can't see problems with this one ...
    2. Use WLI to schedule the Task of collecting stats data regularly. May be overkill if option 1 above is good for production
    3. Deploy a simple web app on Admin Server, say a simple servlet that displays a simple page to start/stop and configure
    data collection interval for the timer
    What approach would you experts recommend?
    BTW, the caveats of using JMX in http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/jmx_monitoring/concepts.html#wp1095673
    says
         Oracle strongly discourages using this API in a concurrent manner with more than one thread or process. This is because a reset performed in
         one thread or process is not visible to another threads or processes. This caveat also applies to resets performed from the Monitoring Dashboard of
         the Oracle Service Bus Console, as such resets are not visible to this API.
    Under what scenario would I be breaking this rule? I am a little worried about its statement
         discourages using this API in a concurrent manner with more than one thread or process
    Thanks in advance,
    Sam

    Hi Manoj,
    Thanks for getting back. I'm afraid configuring the aggregation interval from the Dashboard doesn't solve the problem, as I need to collect stats data per endpoint URI on an hourly or daily basis, then output it to CSV files so line graphs can be drawn for chosen applications.
    Just for those who may be interested. It's not possible to use SQL to query database tables to extract OSB stats for a specified time period, say 9am - 5pm. I raised a support case already and the response I got back is 'No'.
    That means using JMX API will be the way to go :)
    Has anyone actually done this kind of OSB stats report and care to give some pointers?
    I am thinking of using 7 or 1 days as the aggregation interval set in the Dashboard of the OSB admin console, then collecting stats data hourly using JMX (as described in the previous link) with the WebLogic Server JMX Timer Service as described in
    http://download.oracle.com/docs/cd/E12840_01/wls/docs103/jmxinst/timer.html instead of Java's Timer class.
    Not sure if this is the best practice.
    Thanks,
    Regards,
    Sam
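    For what it's worth, option 1 from the original post can be as small as a scheduled task that calls the (modified) sample code and stores a snapshot. A minimal sketch, where collectOsbStats() is assumed to wrap the JMX calls from Oracle's sample and saveToDatabase() is whatever JDBC insert you choose:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class OsbStatsCollector {

        public static void main(String[] args) {
            // Single-threaded scheduler, which also keeps clear of the documented caveat
            // about using the statistics API from more than one thread or process.
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(() -> {
                try {
                    saveToDatabase(collectOsbStats());
                } catch (Exception e) {
                    e.printStackTrace(); // keep the schedule alive on transient failures
                }
            }, 0, 1, TimeUnit.HOURS);
        }

        // Placeholder: would wrap the JMX calls from the sample code linked above.
        static String collectOsbStats() { return "stats-snapshot"; }

        // Placeholder: would insert the snapshot as a row via JDBC.
        static void saveToDatabase(String stats) { }
    }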

  • Function Module performance in Crystal Reports - Best practices

    Hi all,
    We are following a function module based approach for our crystal reporting needs. We tried to follow an infoset approach, but found that most of the critical fields required for reports were retrieved from function modules and bapis.
    Our reports contain some project filters/parameter fields based on which the task reports would be created. I was wondering what would be the best approach/best practices to be considered while designing the FM so as not to impact the crystal report performance? 
    We created a sample FM in our test system with just the table descriptions (without the input parameters) which would retrieve all the projects, and found that Crystal Reports crashed while trying to retrieve all the records. I am not sure if this is the right approach, since this is our first project using FMs for Crystal Reports.
    Thank you
    Vinnie

    Yes. We did try following the infoset approach against the tables; however, since our project reports contain long text fields and status texts (retrieved via FMs), we opted for the FM approach. Do you know how texts can be handled from ABAP to Crystal Reports?

  • Reflection Performance / Best Practice

    Hi List
    Is reflection best practice in the following situation, or should I head down the factory path? Having read http://forums.sun.com/thread.jspa?forumID=425&threadID=460054 I'm now wondering.
    I have a Web servlet application with a backend database. The servlet currently handles 8 different types of JSON data (there is one JSON data type for each table in the DB).
    Because JSON data is well structured, I have been able to write a simple handler, all using reflection, to dynamically invoke the Data Access Object and CRUD methods. So one class replaces 8 DAO's and 4 CRUD methods = 32 methods - this will grow as the application grows.
    Works brilliantly. It's also dynamic. I can add a new database table by simply subclassing a new DAO.
    Question is, is this best practice? Is there a better way? There are two sets of Class.forName(), newInstance(), getClass().getMethod(), invoke() ; one for getting the DAO and one for getting the CRUD method.....
    What is best practice here. Performance is important.
    Thanks, Len

    bocockli wrote:
    What is best practice here. Performance is important.
    I'm going to ignore the meat of your question (sorry, there are others who probably have better insights there) and focus on this point, because I think it's important.
    A best practice, when it comes to performance is: have clear, measurable goals.
    If your only performance-related goal is "it has to be fast", then you never know when you're done. You can always optimize some more. But you almost never need to.
    So you need to have a goal that can be verified. If your goal is "I need to be able to handle 100 update requests for Foo and 100 update requests for Bar and 100 read-only queries for Baz at the same time per second", then you have a definite goal and can check if you reached it (or how far away you are).
    If you don't have such a goal, then you'll be optimizing until the end of time and still won't be "done".
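    On Len's original reflection question: a common middle ground is to keep the dynamic dispatch but cache the reflective lookups, so Class.forName()/getMethod() run once per DAO rather than once per request. A minimal sketch with made-up names and an assumed single-argument CRUD method signature:

    import java.lang.reflect.Method;
    import java.util.concurrent.ConcurrentHashMap;

    public class DaoDispatcher {

        private final ConcurrentHashMap<String, Object> daoCache = new ConcurrentHashMap<>();
        private final ConcurrentHashMap<String, Method> methodCache = new ConcurrentHashMap<>();

        // Resolves the DAO instance and CRUD method once, then reuses them on every call.
        public Object invoke(String daoClassName, String crudMethod, Object jsonPayload) throws Exception {
            Object dao = daoCache.computeIfAbsent(daoClassName, name -> {
                try {
                    return Class.forName(name).getDeclaredConstructor().newInstance();
                } catch (ReflectiveOperationException e) {
                    throw new IllegalArgumentException("Unknown DAO: " + name, e);
                }
            });
            Method method = methodCache.computeIfAbsent(daoClassName + "#" + crudMethod, key -> {
                try {
                    return dao.getClass().getMethod(crudMethod, Object.class); // assumed signature
                } catch (NoSuchMethodException e) {
                    throw new IllegalArgumentException("Unknown method: " + key, e);
                }
            });
            return method.invoke(dao, jsonPayload);
        }
    }

    Whether that beats a hand-written factory is exactly the kind of thing the measurable goal above would tell you.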

  • Best practice for highly available management / publishing servers

    I am testing a highly available App-V 5.0 environment, which will deploy App-V packages to a XenApp farm. I have two SQL 2012 servers configured as an availability group for the backend, and two publishing/management servers for the front end.
    What is the best practice to configure the publishing / management servers for high availability?  Should I configure them as an NLB cluster, which I have tested and does seem to work, or should I just use the GPO to configure the clients to use both
    publishing servers, which I have also tested and appears to work?
    Thanks,
    Patrick Sullivan

    In App-V 5.0 the Management and Publishing Servers are hosted in IIS, so use the same approach for HA as you would any web application.
    If NLB is all that's available to you, then use that; otherwise I would recommend a proper load balancing solution such as Citrix NetScaler or KEMP LoadManager.
    Please remember to click "Mark as Answer" or "Vote as Helpful" on the post that answers your question (or click "Unmark as Answer" if a marked post does not actually
    answer your question). This can be beneficial to other community members reading the thread.
    This forum post is my own opinion and does not necessarily reflect the opinion or view of my employer, Microsoft, its employees, or other MVPs.
    Twitter:
    @stealthpuppy | Blog:
    stealthpuppy.com |
    The Definitive Guide to Delivering Microsoft Office with App-V
