Best 17" or 19" Monitor

Hi there,
I am looking for people's opinions on which 17" or 19" flat-screen monitor they think is the best, and why!
I am hoping to purchase a new monitor very soon and am looking for some advice from people who use their monitor for tasks similar to what I will be doing: programming and gaming.
Therefore it must have the best of both worlds - great graphics display capabilities and very sharp text!
I'd appreciate any and all comments that people can put forth.
Cheers!
J-Ral.

If you are purchasing it for gaming, then be prepared to spend quite a bit of money on it to avoid blurry motion in games.
This one from Samsung, for instance, has a 12 ms response time:
http://www.amazon.com/exec/obidos/tg/detail/-/B0000VD3KS/ref=ase_interactiveda38-20/103-9119645-0395819?v=glance&s=electronics
Anything higher than that won't be great. But it is very subjective, and the ms rating is only a guide. Don't forget to check the contrast ratio; good values are 600:1 or 700:1. Also, when you look at the response time of an LCD, make sure that the specifications give both the rise and the fall time.
Obviously your best bet is to head down to the local "bricks & mortar" outlet, fire up a DVD or game that has lots of action, and see it for yourself.
Alternatively you could always just opt for this one :p
http://www.apple.com/displays/ for just $3,299 + $599 for the graphics card :-)

Similar Messages

  • What are the best apps to monitor texts and emails on a child's phone

    What are the best apps to monitor texts and emails on a child's phone?

    You could simply log into your child's email with the password that the two of you agree upon, and monitor the emails manually. As for the texts, you could purchase a separate device and use the child's Apple ID to monitor the iMessages. For SMS/MMS, there is no way to monitor them. You'll have to use the app called 'Random Parental Inspections'.

  • Best practice on monitoring Endeca health / defining outage

    (This is a double post from the Endeca Experience Management forum)
    I am looking for best practice on how to define an Endeca service outage and monitor the health of the system. I understand this depends on your user requirements and it may vary from customer to customer. Specifically, what criteria do you use to notify your engineers that there is a problem? We have our load balancers pinging dgraphs on an interval. However, the ping operation is not sufficient in our use case. We are also experimenting with running a "low cost" query against the dgraphs on an interval and using some query-latency thresholds to determine outage. I want to hear from people in the field running large commercial web sites about your best practice for monitoring and notifying on the health of the system.
    Thanks.

    The performance metrics should help you analyse the queries and fine-tune the system.
    Here are a few best practices:
    1. Reduce the number of components per page
    2. Avoid complex LQL queries
    3. Keep the LQL threshold small
    4. Display the minimum number of columns needed
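    For illustration, here is a minimal Java sketch of the "low-cost query on an interval" idea described above. The probe URL, latency threshold, polling interval and notification hook are all placeholder assumptions, not Endeca specifics; substitute whatever cheap query your dgraph actually serves and however you page your engineers.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Minimal sketch: poll a cheap dgraph query on an interval and alert when the
    // probe fails or exceeds a latency threshold. URL and thresholds are assumptions.
    public class DgraphHealthCheck {

        private static final URI PROBE_URI =
                URI.create("http://dgraph-host:15000/admin?op=ping"); // placeholder endpoint
        private static final Duration TIMEOUT = Duration.ofSeconds(5);
        private static final long LATENCY_THRESHOLD_MS = 500;        // tune to your SLA

        public static void main(String[] args) {
            HttpClient client = HttpClient.newBuilder().connectTimeout(TIMEOUT).build();
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

            scheduler.scheduleAtFixedRate(() -> {
                long start = System.nanoTime();
                try {
                    HttpRequest request = HttpRequest.newBuilder(PROBE_URI)
                            .timeout(TIMEOUT).GET().build();
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                    if (response.statusCode() != 200 || elapsedMs > LATENCY_THRESHOLD_MS) {
                        notifyEngineer("dgraph degraded: status=" + response.statusCode()
                                + " latency=" + elapsedMs + " ms");
                    }
                } catch (Exception e) {
                    // A refused connection or timeout counts as an outage signal too.
                    notifyEngineer("dgraph probe failed: " + e.getMessage());
                }
            }, 0, 30, TimeUnit.SECONDS);   // probe every 30 seconds
        }

        private static void notifyEngineer(String message) {
            // Placeholder: wire this to email, a pager, Nagios, etc.
            System.err.println(message);
        }
    }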

  • Best practice to monitor 10gR3 OSB performance using JMX API?

    Hi guys,
    I need some advice on the best practice for monitoring 10gR3 OSB performance using the JMX API.
    Just to show I have done my homework, I managed to get the JMX sample code from
    http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/jmx_monitoring/example.html#wp1109828
    working.
    The following is the list of options I am thinking about:
    * Set up: I have a cluster of 1 admin server with 2 managed servers; each managed server runs an instance of OSB
    * What I am trying to achieve:
    - use the JMX API to collect OSB stats data periodically as in the sample code above, then save the data as a record to a database table
    Options/ideas:
    1. Simplest approach: run the modified version of the JMX sample on the Admin Server to save stats data to the database regularly. I can't see problems with this one ...
    2. Use WLI to schedule the task of collecting stats data regularly. This may be overkill if option 1 above is good enough for production
    3. Deploy a simple web app on the Admin Server, say a servlet that displays a simple page to start/stop collection and configure the data collection interval for the timer
    What approach would you experts recommend?
    BTW, the caveats of using JMX in http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/jmx_monitoring/concepts.html#wp1095673 say:
         Oracle strongly discourages using this API in a concurrent manner with more than one thread or process. This is because a reset performed in
         one thread or process is not visible to another threads or processes. This caveat also applies to resets performed from the Monitoring Dashboard of
         the Oracle Service Bus Console, as such resets are not visible to this API.
    Under what scenario would I be breaking this rule? I am a little worried about its statement
         discourages using this API in a concurrent manner with more than one thread or process
    Thanks in advance,
    Sam

    Hi Manoj,
    Thanks for getting back. I am afraid configuring the aggregation interval from the Dashboard doesn't solve the problem, as I need to collect stats data per endpoint URI on an hourly or daily basis, then output it to CSV files so line graphs can be drawn for chosen applications.
    Just for those who may be interested. It's not possible to use SQL to query database tables to extract OSB stats for a specified time period, say 9am - 5pm. I raised a support case already and the response I got back is 'No'.
    That means using the JMX API will be the way to go :)
    Has anyone actually done this kind of OSB stats report and care to give some pointers?
    I am thinking of using 7 days or 1 day as the aggregation interval set in the Dashboard of the OSB admin console, then collecting stats data hourly using JMX (as described in the previous link) via the WebLogic Server JMX Timer Service described in
    http://download.oracle.com/docs/cd/E12840_01/wls/docs103/jmxinst/timer.html instead of Java's Timer class.
    Not sure if this is the best practice.
    Thanks,
    Regards,
    Sam
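    For what it's worth, here is a rough Java sketch of option 1: collect the stats on a fixed interval from a single process and append them to a CSV file (or push them to a database instead). It uses a plain ScheduledExecutorService as a stand-in for the WebLogic JMX Timer Service mentioned above, and collectEndpointStats() is only a placeholder for the working ServiceDomainMBean code from the Oracle sample; the file path and CSV layout are assumptions. Keeping everything on one single-threaded scheduler is also a simple way to respect the "don't use this API from more than one thread or process" caveat quoted earlier.
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;
    import java.time.Instant;
    import java.util.List;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class OsbStatsCollector {

        private static final Path CSV_FILE = Path.of("/var/log/osb-stats.csv"); // hypothetical path

        public static void main(String[] args) {
            // Single-threaded scheduler: all JMX calls stay on one thread.
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(OsbStatsCollector::collectAndStore, 0, 1, TimeUnit.HOURS);
        }

        private static void collectAndStore() {
            try {
                for (String row : collectEndpointStats()) {
                    String line = Instant.now() + "," + row + System.lineSeparator();
                    Files.writeString(CSV_FILE, line, StandardCharsets.UTF_8,
                            StandardOpenOption.CREATE, StandardOpenOption.APPEND);
                }
            } catch (IOException e) {
                e.printStackTrace(); // replace with proper logging/alerting
            }
        }

        // Placeholder: return one "endpointURI,metric,value" row per endpoint,
        // produced by the ServiceDomainMBean code you already have working.
        private static List<String> collectEndpointStats() {
            return List.of();
        }
    }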

  • What is the best way of monitoring HD content in FCP?

    I hope to get the Mac Pro listed below. What is the best way of monitoring my HD edit from FCP?
    Can I output to an HDTV, or is a computer monitor better?
    Do I need a capture card to output the signal, or is my system below adequate? I don't need one for capturing, as I will be using a Panasonic AG-HVX200 with a P2 card. Can I use this camera to output to a TV/monitor?

    Well, given your situation, the best option is the Matrox MXO and Apple Cinema Display. I have a full review here as to why it is a good solution.
    http://library.creativecow.net/articles/ross_shane/MXO.php
    This will cost you $995 for the MXO and $900 for the Apple display, so under $2k. The other options are more expensive. Broadcast-quality HD LCDs start at $3,500, and other capture cards range from $295 (Decklink Intensity) to $3,500 (AJA and Decklink). An HDTV will still require a capture card, like the Intensity with HDMI out, and won't be suitable for broadcast quality. It will, however, be perfectly fine for seeing what your HD footage looks like on the kind of HD set most people will have in their homes. So I take back my initial statement and say that if you have a tower, the Intensity and an HDTV is your best option... if you don't need full broadcast quality and just need to see what it looks like.
    Shane

  • Best practice for monitoring MXE3500 v3.2.1

    Hi all
    We have recently upgraded to v3.2.1 on our MXE3500. All has gone well, but I am looking for the best way to monitor the system health of the device for our support teams.
    The Show and Share and DMM servers come with SNMP monitoring, but I can't see the equivalent for the MXE.
    Due to the locked-down nature of the Windows VM in the 3.2.1 software, I do not want to install our standard OS monitoring software, which is BMC Patrol, as Cisco have advised not to install any additional software.
    Has anyone got any ideas on this?
    Adam

    Hello,
    Of course, making it too sensitive might cause failover to happen when some packets get lost, but remember the whole purpose of this is to keep downtime on your network as low as possible.
    Now, if you tune these parameters, what happens is that failover will be triggered on a different time basis.
    This is taken from a Cisco document (if you tune the SLA process as shown, 3 packets will be sent every 10 seconds, so all 3 of them need to fail for the SLA to trigger). This Cisco configuration example looks good, but there are network engineers who would rather use a tighter timeline than that.
    sla monitor 123
    type echo protocol ipIcmpEcho 10.0.0.1 interface outside
    num-packets 3
    frequency 10
    Regards,
    Remember to rate all of the helpful posts ( If you need assistance knowing how to rate a post just let me know )

  • What is the best way to monitor traffic across a Campus?

    I am trying to find the best way/ways to monitor traffic across a campus network. The two solutions I have thought of are using Netflow or ERSPAN. However, neither are supported by the devices in this network. Here is a quick overview of the network...
    Core Switches (3750 Stacks) using Layer 3
    |
    Distribution Switches (3750 & 3650s) using L3 towards Core and L2 towards Access
    |
    Access Switches (Mostly 3500s) using L2
    What are the best options for monitoring traffic on this type of network? All links between switches are Gig, so we have plenty of bandwidth. I would really like to be able to setup snort/ntop or something similar.
    Are there any solutions where I could use RSPAN and a monitoring computer at the access switches and have them report back to a central monitoring machine? I would prefer a centralized solution.
    Thanks,
    Garrett

    Hello Garrett
    Each monitoring package has its own limitations and specifications.
    If you want to monitor the traffic/protocols running on your network on a constant basis, you will have to use NetFlow. You can use a simple NetFlow collector, collect reports, and analyze the application traffic on your LAN/WAN. Not sure if this will help too much in troubleshooting, since it is used more for trending your applications. You can probably discover new applications which aren't used much on your network this way.
    But for real troubleshooting, you will need something like a syslog server. You can configure logging levels and push important errors/updates from the Cisco gear to this box. If a device goes down or has issues, its system log messages will have been dumped to this server, which makes it a very useful tool for troubleshooting (e.g. Kiwi CatTools, SolarWinds, 3CDaemon, and lots of other freeware).
    I would ideally have both these components on my network, for trending and troubleshooting.
    Apart from this, if you have other advanced-technology products, like wireless, application acceleration, etc., there are other network management solutions available.
    Hope this helps. All the best. Rate replies if found useful.
    Raj
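    (Purely to illustrate what those syslog tools do under the hood, here is a toy Java sketch of a syslog collector: it listens on UDP/514 and prints whatever the switches send. For production, use one of the dedicated tools named above; the port, buffer size and output are assumptions, and binding to 514 usually needs elevated privileges.)
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.nio.charset.StandardCharsets;

    public class TinySyslogListener {
        public static void main(String[] args) throws Exception {
            try (DatagramSocket socket = new DatagramSocket(514)) { // standard syslog UDP port
                byte[] buffer = new byte[2048];
                while (true) {
                    DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                    socket.receive(packet);
                    String message = new String(packet.getData(), packet.getOffset(),
                            packet.getLength(), StandardCharsets.UTF_8);
                    // Real collectors parse the <PRI> facility/severity and write to disk.
                    System.out.println(packet.getAddress().getHostAddress() + " " + message);
                }
            }
        }
    }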

  • Best way to monitor the ON time of something in a minute ?!

    Greetings everybody,
    I first have to thank everybody who offers help to others here.
    I have a question regarding the best way to monitor the ON time of something in a minute.
    Say I have an On/Off switch and I want to know how many seconds it was ON in the last minute, and report that to a file or database each minute. So every minute I send a report to the database with the number of seconds the switch was ON in the last minute.
    I have already made a solution, but I don't think it's that good and there is a problem with it. Please check my VI, as it describes the solution better than my words here.
    Any comment is appreciated.
    Thanks in advance.
    Ayman Mohammad Metwally
    Automation Engineer
    Egypt - Cairo
    Attachments:
    On timet.vi ‏127 KB

    Hello Ayman,
    I attached a changed version of your VI. It uses two parallel loops.
    The communication is done via a local variable and controlled by a flag.
    Just have a look to get the idea.
    You can also do the communication in different ways, like queues...
    Best regards,
    GerdW
    CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
    Kudos are welcome
    Attachments:
    OnTime 2.vi ‏37 KB
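    For anyone outside LabVIEW, here is a rough Java analogue of the same two-parallel-loop idea: one task samples the switch, the other reports once a minute. readSwitch() is only a placeholder for the real digital input, the 100 ms sample period is an assumption, and the report goes to stdout instead of a database.
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    public class OnTimePerMinute {

        private static final AtomicLong onMillisThisMinute = new AtomicLong();

        public static void main(String[] args) {
            ScheduledExecutorService pool = Executors.newScheduledThreadPool(2);

            // "Acquisition loop": sample every 100 ms and accumulate ON time.
            pool.scheduleAtFixedRate(() -> {
                if (readSwitch()) {
                    onMillisThisMinute.addAndGet(100);
                }
            }, 0, 100, TimeUnit.MILLISECONDS);

            // "Reporting loop": every minute, publish and reset the counter.
            pool.scheduleAtFixedRate(() -> {
                long seconds = onMillisThisMinute.getAndSet(0) / 1000;
                // Replace with an INSERT into your database table.
                System.out.println("Switch was ON for " + seconds + " s in the last minute");
            }, 1, 1, TimeUnit.MINUTES);
        }

        // Placeholder for the real digital-input read.
        private static boolean readSwitch() {
            return Math.random() < 0.5; // simulated switch state
        }
    }
    Note that sampling quantizes the measurement to the sample period; capturing a timestamp on each switch edge would be more accurate, at the cost of a slightly more involved design.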

  • What is the best non-Apple monitor for a Mac mini

    What is the best non-Apple monitor that is 21.5", costs under $200, and can be used with a new Mac mini?

    I like both my older $200 LG Flatron and newer $160 Samsung SyncMaster monitors. Both have (digital) DVI input, the LG is on the Apple Supplied HDMI to DVI Adapter and the Samsung is on a MiniDisplay Port to DVI adapter.
    Just beware that you can only use one older (analog) VGA monitor on the Thunderbolt/DisplayPort with Apple's Mini DisplayPort to VGA adapter, so you should look at ones that support (digital) HDMI, DVI and/or DisplayPort inputs.

  • Best program for monitoring and blocking internet use on my kids' mobile devices (iPads, iPhones) from my Mac

    What's the best program for monitoring and blocking internet sites and usage for my kids on their mobile devices (iPads, iPhones and iPods), all from my desktop Mac?

    Can't be done from your Mac remotely.
    But you can enable parental controls directly on iOS devices.
    iOS: Understanding Restrictions (parental controls)

  • Best Practice for monitoring database targets configured for Data Guard

    We are in the process of migrating our DB targets to 12c Cloud Control. 
    In our current 10g environment the Primary Targets are monitored and administered by OEM GC A, and the Standby Targets are monitored by OEM GC B.  Originally, I believe this was because of proximity and network speed, and over time it evolved to a Primary/Standby separation.  One of the greatest challenges in this configuration is keeping OEM jobs in sync on both sides (in case of switchover/failover).
    For our new OEM CC environment we are setting up CC A and CC B. However, I would like to determine if it would be smarter to monitor all DB targets (Primary and Standby) from the same CC console. In other words, monitor and administer DB Primary and Standby from the same OEM CC Console. I am trying to determine the best practice. I am not sure whether administering a switchover from Cloud Control from Primary to Standby requires that both targets are monitored in the same environment or not.
    I am interested in feedback.   I am also interested in finding good reference materials (I have been looking at Oracle documentation and other documents online).   Thanks for your input and thoughts.  I am deliberately trying to keep this as concise as possible.

    OMS is a tool; it is not required in order to monitor your primary and standby, which is what I meant by the comment.
    The reason you want the same OMS to monitor both the primary and the standby is that the Data Guard administration screen will then show both targets. You will also have the option of doing switchovers and failovers, as well as converting the primary or standby. One of the options is also to move all the jobs that are scheduled against the primary over to the standby during a switchover or failover.
    There is no document that states that you need to have all targets on one OMS, but that is the best method, and it is the point of having OMS: a tool that keeps all targets in a central repository. If you start having different OMS servers and OMS repositories, you will need to log into separate OMS consoles to administer the targets.

  • Best Way to monitor standby, primary databases, including alert logs, etc.

    Hi, guys. I finally cut over the new environment to the new Linux Red Hat, and everything is working great so far (the primary/standby).
    Now I would like to set up monitoring scripts to monitor it automatically so I can let it run by itself.
    What is the best way?
    I talked to another DBA friend outside of the company, and he told me his shop does not use any cron jobs to monitor; they use Grid Control.
    We have no Grid Control. I would like to see what the best option is here. Should we set up Grid Control?
    Also, in the meantime, I would appreciate any good ideas for cron job scripts.
    Thanks

    Hello;
    I came up with this, which I run on the Primary daily. Since it's SQL you can add any extras you need.
    SPOOL OFF
    CLEAR SCREEN
    SPOOL /tmp/quickaudit.lst
    PROMPT
    PROMPT -----------------------------------------------------------------------|
    PROMPT
    SET TERMOUT ON
    SET VERIFY OFF
    SET FEEDBACK ON
    PROMPT
    PROMPT Checking database name and archive mode
    PROMPT
    column NAME format A9
    column LOG_MODE format A12
    SELECT NAME,CREATED, LOG_MODE FROM V$DATABASE;
    PROMPT
    PROMPT -----------------------------------------------------------------------|
    PROMPT
    PROMPT
    PROMPT Checking Tablespace name and status
    PROMPT
    column TABLESPACE_NAME format a30
    column STATUS format a10
    set pagesize 400
    SELECT TABLESPACE_NAME, STATUS FROM DBA_TABLESPACES;
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    PROMPT
    PROMPT Checking free space in tablespaces
    PROMPT
    column tablespace_name format a30
    SELECT tablespace_name ,sum(bytes)/1024/1024 "MB Free" FROM dba_free_space WHERE
    tablespace_name <>'TEMP' GROUP BY tablespace_name;
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    PROMPT
    PROMPT Checking freespace by tablespace
    PROMPT
    column dummy noprint
    column  pct_used format 999.9       heading "%|Used"
    column  name    format a16      heading "Tablespace Name"
    column  bytes   format 9,999,999,999,999    heading "Total Bytes"
    column  used    format 99,999,999,999   heading "Used"
    column  free    format 999,999,999,999  heading "Free"
    break   on report
    compute sum of bytes on report
    compute sum of free on report
    compute sum of used on report
    set linesize 132
    set termout off
    select a.tablespace_name                                              name,
           b.tablespace_name                                              dummy,
           sum(b.bytes)/count( distinct a.file_id||'.'||a.block_id )      bytes,
           sum(b.bytes)/count( distinct a.file_id||'.'||a.block_id ) -
           sum(a.bytes)/count( distinct b.file_id )              used,
           sum(a.bytes)/count( distinct b.file_id )                       free,
           100 * ( (sum(b.bytes)/count( distinct a.file_id||'.'||a.block_id )) -
                   (sum(a.bytes)/count( distinct b.file_id ) )) /
           (sum(b.bytes)/count( distinct a.file_id||'.'||a.block_id )) pct_used
    from sys.dba_free_space a, sys.dba_data_files b
    where a.tablespace_name = b.tablespace_name
    group by a.tablespace_name, b.tablespace_name;
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    PROMPT
    PROMPT Checking Size and usage in GB of Flash Recovery Area
    PROMPT
    SELECT
      ROUND((A.SPACE_LIMIT / 1024 / 1024 / 1024), 2) AS FLASH_IN_GB,
      ROUND((A.SPACE_USED / 1024 / 1024 / 1024), 2) AS FLASH_USED_IN_GB,
      ROUND((A.SPACE_RECLAIMABLE / 1024 / 1024 / 1024), 2) AS FLASH_RECLAIMABLE_GB,
      SUM(B.PERCENT_SPACE_USED)  AS PERCENT_OF_SPACE_USED
    FROM
      V$RECOVERY_FILE_DEST A,
      V$FLASH_RECOVERY_AREA_USAGE B
    GROUP BY
      SPACE_LIMIT,
      SPACE_USED ,
      SPACE_RECLAIMABLE ;
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    PROMPT
    PROMPT Checking free space In Flash Recovery Area
    PROMPT
    column FILE_TYPE format a20
    select * from v$flash_recovery_area_usage;
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    PROMPT
    PROMPT Checking last sequence in v$archived_log
    PROMPT
    clear screen
    set linesize 100
    column STANDBY format a20
    column applied format a10
    --select max(sequence#), applied from v$archived_log where applied = 'YES' group by applied;
    SELECT  name as STANDBY, SEQUENCE#, applied, completion_time from v$archived_log WHERE  DEST_ID = 2 AND NEXT_TIME > SYSDATE -1;
    prompt
    prompt----------------Last log on Primary--------------------------------------|
    prompt
    select max(sequence#) from v$archived_log where NEXT_TIME > sysdate -1;
    PROMPT
    PROMPT ------------------------------------------------------------------------|
    PROMPT
    PROMPT
    PROMPT Checking switchover status
    PROMPT
    select switchover_status from v$database;
    I run it from a shell script and email myself quickaudit.lst.
    Alert logs are a great source of information when you have an issue or just want to check something.
    Best Regards
    mseberg

  • Best 23" LCD Monitor Under $150

    I'm looking for a 23" LCD monitor. Any recommendations? I'm not looking to spend any more than $150-$170, and I'm not very flexible on the price. Thanks in advance!
    Best,
    Coander15

    Microcenter typically has decent (maybe not best, but decent) prices and a good selection.  For $170 and lower, not much there ...
    http://www.microcenter.com/search/search_results.phtml?sortby=pricelow&N=4294966896%2B116

  • Best, lowest price monitor for iMac

    Opinions please:
    What is the best, lowest-price monitor for an early 2009 iMac?
    Thanks

    Thanks Rudegar. The problem is my iMac's converter went away and I cannot afford to have it replaced, but I thought a monitor would do. I live in the USA. I am nearly blind and need a monitor I can see (a lighter screen is needed). I looked at Dells and they seem to be OK, but so do a lot of them. I only need a 20-22" screen. The plan is to get a Mac mini eventually (desk space is at a premium around here), so I would need a monitor that is good for that too. I have all the peripherals I need and just need a good monitor to put in place.
    C

  • How best to calibrate monitor using Display Calibrator Assistant?

    I have had my 20" iMac since January 2007, but have not yet calibrated the monitor which I understand is a good thing to do periodically. I would like to do this using the Display Calibrator Assistant available via Mac System Preferences.
    My Internet research on this subject has uncovered some controversy on how to perform the calibration. For example, a preponderance of opinion seems to be that the Gamma setting should be 2.2 instead of the 1.8 recommended by Apple for Mac OS computers. There are also differing opinions on whether to use the Expert Mode when calibrating.
    I would appreciate an authoritative response on how best to perform the monitor calibration. For your information my primary interest regarding calibration is making sure that colors are correct when I am viewing and editing photographic digital images that I store in iPhoto.
    Bob

    Hey, Pete!
    With my Nikon I can set the color space to either Adobe RGB or sRGB. Ken Rockwell (kenrockwell.com) is very knowledgeable in this area, and in his User's Guide for the D200, strongly advises the default sRGB setting unless the user really knows what he is doing with Adobe RGB and does his own printing. I don't agree with all of Mr. Rockwell's pronouncements, but on this issue I opted to follow his advice.
    I don't find noise to be a significant issue with the D200 from ISO 100 through 1600.  Noise gets pretty bad at ISO 3200, but the Noise Ninja program is very helpful in reducing noise to acceptable levels. Noise is worse in underexposed areas, but on balance I still prefer underexposure--within reason--to overexposure. 
    I started out my use of the D200 shooting everything in RAW + JPEG (Large Normal Optimal Quality), but in most instances I didn't feel that RAW gave me noticeably better results than JPEG. Also, RAW images chew up a lot of hard drive space. I now shoot most stuff JPEG, but employ RAW in situations that might present a problem, like when shooting sporting events in school gymnasiums which typically have abominable lighting conditions. With RAW images white balance is easier to adjust than with JPEGs. On the Internet I have read a number of impassioned arguments in favor of RAW over JPEG, but I still consider RAW to be a digital image format and not a religion! Therefore, I don't fear consignment to hades as a penalty for shooting JPEGs!
    Bob
