Shared NFS Disk causing high Wait CPU on dd command

Hi,
we have mounted the partition /OVS on an NFS shared disk, and we are running a virtual machine from this shared disk.
Running the following command in the virtual machine, we measure the time needed to write a file to the NFS disk with block size = 1M. We also tried different block sizes.
time dd if=/dev/zero of=/tmp/1GB.test bs=1M count=1024
1024+0 records in
1024+0 records out
real     0m4.085s
user     0m0.000s
If we check the virtual machine's CPU usage during this process, wait CPU time increases sharply, as shown below (regardless of block size).
Screenshot of the high wait CPU: http://img69.imageshack.us/i/highwaitcpu.png/
On the other hand, executing the same command on a local disk, wait CPU is really low and normal.
Is this an NFS issue? Is this the right way to work? Do we have a wrong virtual machine configuration?
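One thing worth checking: by default dd returns as soon as the data is in the client's page cache, so the 4-second figure above may not reflect the NFS server at all, and the high wait CPU then shows up while the kernel flushes dirty pages over the network. A minimal sketch of how to compare (the /OVS path is from this setup; the flags require GNU coreutils dd):

```shell
# Cached write - completes when the data is in RAM, not on the NFS server:
time dd if=/dev/zero of=/OVS/1GB.test bs=1M count=1024

# Force a flush to stable storage before the timer stops:
time dd if=/dev/zero of=/OVS/1GB.test bs=1M count=1024 conv=fdatasync

# Bypass the page cache entirely (O_DIRECT):
time dd if=/dev/zero of=/OVS/1GB.test bs=1M count=1024 oflag=direct
```

If the fdatasync number is much larger than the plain one, the fast runs were measuring the cache, not the network.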
Thanks in advance,
Marc
Edited by: Marc Caubet on Nov 23, 2009 8:20 AM

Hi Herb,
thanks for your fast reply. Elapsed time using NFS should be lower than using the local disk, since throughput is higher. At least in theory.
I took some samples using NFS and got the following numbers:
0.00user 3.75system 0:12.46elapsed 30%CPU (0avgtext+0avgdata 0maxresident)k
1920inputs+2097160outputs (0major+159minor)pagefaults 0swaps
0.00user 4.14system 0:08.83elapsed 46%CPU (0avgtext+0avgdata 0maxresident)k
112inputs+2097152outputs (0major+159minor)pagefaults 0swaps
0.00user 4.24system 0:04.46elapsed 94%CPU (0avgtext+0avgdata 0maxresident)k
8inputs+2097152outputs (0major+159minor)pagefaults 0swaps
0.00user 4.29system 2:06.02elapsed 3%CPU (0avgtext+0avgdata 0maxresident)k
88inputs+2097160outputs (0major+160minor)pagefaults 0swaps
0.00user 4.08system 0:04.14elapsed 98%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+2097160outputs (0major+159minor)pagefaults 0swaps
0.00user 4.12system 0:04.43elapsed 92%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+2097152outputs (0major+160minor)pagefaults 0swaps
0.00user 4.25system 0:04.34elapsed 97%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+2097152outputs (0major+160minor)pagefaults 0swaps
0.00user 4.29system 3:44.51elapsed 1%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+2097160outputs (0major+160minor)pagefaults 0swaps
0.00user 4.10system 0:04.18elapsed 97%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+2097160outputs (0major+159minor)pagefaults 0swaps
0.00user 3.96system 0:04.00elapsed 98%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+2097152outputs (0major+159minor)pagefaults 0swaps
0.00user 4.06system 0:04.16elapsed 97%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+2097152outputs (0major+159minor)pagefaults 0swaps
0.00user 4.06system 0:04.15elapsed 97%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+2097152outputs (0major+160minor)pagefaults 0swaps
Some examples with local disk:
0.00user 3.52system 0:07.46elapsed 47%CPU (0avgtext+0avgdata 0maxresident)k
96inputs+2097160outputs (0major+159minor)pagefaults 0swaps
0.00user 3.77system 0:04.51elapsed 83%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+2097160outputs (0major+160minor)pagefaults 0swaps
0.00user 3.85system 0:08.65elapsed 44%CPU (0avgtext+0avgdata 0maxresident)k
104inputs+2097160outputs (0major+160minor)pagefaults 0swaps
0.00user 3.80system 0:04.73elapsed 80%CPU (0avgtext+0avgdata 0maxresident)k
8inputs+2097160outputs (0major+160minor)pagefaults 0swaps
0.00user 3.90system 0:04.79elapsed 81%CPU (0avgtext+0avgdata 0maxresident)k
16inputs+2097160outputs (0major+159minor)pagefaults 0swaps
0.00user 3.91system 0:04.37elapsed 89%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+2097160outputs (0major+159minor)pagefaults 0swaps
0.00user 4.03system 0:19.57elapsed 20%CPU (0avgtext+0avgdata 0maxresident)k
160inputs+2097160outputs (0major+160minor)pagefaults 0swaps
0.00user 3.44system 0:05.53elapsed 62%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+2097152outputs (0major+160minor)pagefaults 0swaps
0.00user 3.44system 0:05.20elapsed 66%CPU (0avgtext+0avgdata 0maxresident)k
8inputs+2097160outputs (0major+159minor)pagefaults 0swaps
0.00user 3.43system 0:07.04elapsed 48%CPU (0avgtext+0avgdata 0maxresident)k
120inputs+2097160outputs (0major+159minor)pagefaults 0swaps
0.00user 3.42system 0:06.34elapsed 53%CPU (0avgtext+0avgdata 0maxresident)k
56inputs+2097160outputs (0major+159minor)pagefaults 0swaps
0.01user 3.59system 0:13.35elapsed 26%CPU (0avgtext+0avgdata 0maxresident)k
160inputs+2097160outputs (1major+175minor)pagefaults 0swaps
The elapsed time in the NFS case is sometimes really huge. Does this indicate network problems? Maybe we should increase bandwidth. Currently we have a 1 Gbit network connection shared among 10 hypervisors.
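A back-of-the-envelope check is telling here: 1 GiB over a dedicated 1 Gbit/s link needs at least roughly 8.6 s on the wire, so any run that finishes in ~4 s cannot have reached the server yet and must have landed in the page cache. A rough sketch of the arithmetic:

```shell
# How long must 1 GiB take over a 1 Gbit/s link, ignoring protocol overhead?
bytes=$((1024 * 1024 * 1024))       # 1 GiB written by dd
link_bps=$((1000 * 1000 * 1000))    # 1 Gbit/s link, here shared by 10 hypervisors
min_secs=$((bytes * 8 / link_bps))  # integer floor of the wire-time minimum
echo "wire minimum: ~${min_secs}s"  # prints ~8s; faster runs were cached writes
```

The multi-minute outliers are then plausibly the runs that happened to wait for a real flush over the shared link.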
Edited by: Marc Caubet on Nov 24, 2009 6:38 AM

Similar Messages

  • Will debug ip ospf adj cause high CPU?

    Hello All,
    We are seeing an OSPF flapping issue on Cisco 3750-X switches. To isolate the issue, TAC has suggested "debug ip ospf adj".
    Will this command cause high CPU utilization?
    There are 12 OSPF neighbours connected to that switch, and we have a hello interval of 1 second.
    Any help would be appreciated.
    Regards,
    Thiyagu

    It may, but it's unlikely. There is always a risk with debug commands that produce large output, especially since you have hello intervals set to 1 second. I would advise doing it out of hours if necessary to mitigate the risk of impacting production.
    What version of IOS are you on? Please post the output of 'show proc cpu sorted'.
    I have had a similar experience where multicast was the problem: high CPU and dropped OSPF adjacencies. We downgraded to 12.2(55)SE8. It helped CPU utilisation but didn't fix it. Since then we have been using 6509Es in VSS.

  • Browser's javascript engine causing high CPU load and lost swipe events

    I'm having performance issues on some websites. I think the Playbook's javascript engine is causing high CPU usage, which means the device misses swipe gestures.
    Visit: http://www.bbc.co.uk/news/world-middle-east-15814035
    Once the page is loaded swipe to scroll up and down. On my device (and my friend's) it is incredibly slow. Sometimes it refuses to register any swipe events.
    Now turn off javascript and reload the page. Swipe to scroll up and down. Smooth as silk!
    My device is a new 16Gb with version 1.0.7.3312. Thanks

    I'm having the same problem on a number of websites with my 32GB Playbook. On the same sites where I notice slow, sometimes barely responsive swipes for scrolling, my wife's iPad 2 handles them with ease. Disabling javascript fixes the problem; however, a lot of websites, including Gmail, require javascript to be enabled in your browser to load. I notice the problem quite a bit when using webmail to check my Gmail account, which is rather frustrating because it is my only option until the PB gets a native email client. My PB is running software 1.0.8.4295, but the problem was also occurring when it was running 1.0.7.

  • BTServer causing high CPU utilization

    Just got a new MacBook Pro with OS X Lion and updated to 10.7.1. I've seen the BTServer process cause high CPU utilization twice now. One morning I also noticed that the fan was running loudly while the lid was closed and the computer was locked up; I had to force power it off. Anyone seeing similar issues? Doing a search for BTServer on Google or even here in the Apple Support Communities does not bring up anything related to Lion at all, only iPhone-related articles.

    Yup, I had this problem too. I have a new 2.2GHz MacBook Pro, and while at a coffee shop I noticed my battery draining very quickly; a look at Activity Monitor showed that BTServer was running away with high CPU utilization. I killed the process and didn't notice any loss of functionality.
    Not sure what it was supposed to be doing with so many CPU cycles!
    Is this for Bluetooth? I don't even have Bluetooth on. I'm not happy that this may be something I need to keep an eye on when I'm trying to maximize my battery life.

  • Can Ciscoview cause high cpu spikes?

    I was using LMS 3.2, Ciscoview to open a view on a 6500 that has 2 WS-SUP720-3B supervisors and is running s72033-adventerprisek9_wan-mz.122-33.SXH4.bin.
    The device was taking a bit to open up, so I clicked it a couple more times and a couple of minutes later it finally opened.
    At the same time I had a telnet session to the device and noted thru sh proc cpu sorted that the SNMP ENGINE had spiked to 95% utilization.
    Could Ciscoview be the cause of this? I have found through testing that multiple clicks can cause a small CPU jump (6% - 12%), but I haven't yet reproduced the 95% utilization I saw that day.
    Is there an initial discovery that Ciscoview does first to gather its information that is more intensive than subsequent polling?
    -Dave

    A couple more questions. Does Ciscoview perform snmpwalks or snmpgets on the device when it is the first time it's contacting it? In the trace logs, I noted:
    2009/10/16 22:07:42 EDT TRACE CvDevice: Orig Device Info:
    id=192.168.255.21, ipAddr=/192.168.255.21
    2009/10/16 22:07:42 EDT TRACE CvDevice: New Device Info:
    id=192.168.255.21, ipAddr=192.168.255.21
    2009/10/16 22:07:42 EDT TRACE CvDevice: Opening device
    192.168.255.21, polling interval=60...
    2009/10/16 22:07:42 EDT TRACE Vector: Getting existing CT....
    2009/10/16 22:07:42 EDT TRACE Vector: Index of matched CT is -1
    2009/10/16 22:07:42 EDT TRACE Vector: Containment tree not found in the
    cache
    2009/10/16 22:07:42 EDT TRACE Util: Request: ReqId=47, Cmd=Get,
    Session=192.168.255.21Session,
    I've noted that attempts to open a device for the first time take a while, which I assume is the snmpwalk/gets being performed on the device to update its cache. When is this cache cleared? On server reboot?

  • High wait on Library Cache Lock while doing ALTER TABLE EXCHANGE PARTITION

    We are observing a very high wait time on "Library cache lock" while performing Exchange partition.
    Here we go
    ALTER TABLE PSALESREG EXCHANGE PARTITION P123
    WITH TABLE PBKPSALESREF WITHOUT VALIDATION
    call      count  cpu   elapsed   disk  query  current  rows
    Parse         1  0.00     0.00      0      0        0     0
    Execute       1  0.11  6684.73      2      9        2     0
    Fetch         0  0.00     0.00      0      0        0     0
    total         2  0.11  6684.73      2      9        2     0
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Elapsed times include waiting on following events:
    Event waited on       Times Waited  Max. Wait  Total Waited
    library cache lock            2274       3.12       6681.32
    Is it a bug? Has anyone else experienced the same issue?
    Rgds

    Maurice Muller wrote:
    Hi,
    As far as I remember, an exchange partition can only be done when no other query is accessing the table.
    So you should check if any other queries are executed against the table PSALESREG while you do the exchange partition.
    Maurice,
    queries won't block the exchange operation but continue to read from the "original", exchanged segment; this "cross-DDL" read consistency is a feature that was introduced a long time ago.
    But any kind of (long-running) DML should effectively prevent the exchange operation. Still, I would assume this shouldn't show up as "library cache lock"; you would get an "ORA-00054: resource busy" error instead.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Extremely high system CPU load on my new 2012 Macbook Pro

    I just brought my brand new 2012 Macbook Pro (non-retina) home from the store and noticed that it was running INCREDIBLY slow. I opened up the activity monitor and found that the system's CPU load is consistently around 70%-90%. I haven't installed any software outside of google chrome, although the issue was happening even before I installed google chrome.
    Here are screenshots of my macbook's specs
    Here's a screenshot of the processes sorted by %CPU
    Here's a screenshot of the processes sorted by CPU Time
    Any idea what could cause such a high system CPU load? Again, I haven't installed any software other than Google Chrome, and I don't have anything hooked up to my MacBook. This issue happens whether my MacBook is plugged in or unplugged, and I haven't noticed any overheating or excessive fan use.

    harkiamsuperman,
    your first image shows Chrome using over half of the CPU time. Note that Chrome on Mavericks is more CPU-intensive than other browsers, since Google does not provide a 64-bit version of the app for OS X.
    Since you’ve just brought your MacBook Pro home, perhaps Spotlight is doing its initial indexing of your internal disk; that could be one possible explanation. If that is the cause, then your CPU usage should settle down after it finishes.
    If you’d like to try something simple and immediate, you could reset your MacBook Pro’s System Management Controller, to see if that makes a difference.

  • Swiping causes spikes in CPU utilization.

    Using app to monitor CPU on iPhone 5 and iPad mini. Swiping causes spikes in CPU utilization from 5% to 65% for every swipe. Heavy swiping causes processor to run near 100%. Seems abnormal.
    Reduce Motion turned on.
    IOS 7.1
    Anything to make this not run so high? Battery drain seems much faster with just general usage.

    Hey Apple, this is likely a major issue with battery drain. Constant CPU spikes will cause the CPU to heat up, which increases battery drain. Swiping should not peg the CPU. I suspect it has to do with how the screen is rendered to the user. Somebody at Apple needs to look into this; it has been an issue for at least the last few iOS versions.

  • DI job causing high levels of I/O on database server

    We have a DI job that is loading a SQL Server 2005 database. When the facts are loaded, it causes a high level of I/O on the database server, which slows the DI job down. No more than 5 facts are loaded concurrently. The fact dataflows all have a SQL transform to run the select query against the DB, a few query transforms to do lookups to get dimension keys, and all do inserts into the target. The DBA says there are too many DB connections open and DI is not closing them. My thinking was that DI would manage the open connections for lookups etc. and close them properly when the dataflow is complete.
    Any thoughts on what else would cause high levels of DB I/O?
    Additional Info:
    - Run the DI job, source and target tables are in SQL Server, and it takes 5 hours.
    - Run the same DI job again on the same data set, and it takes 12+ hours. This run shows high levels of DB I/O.
    - But if SQL Server is stopped and restarted, the job will again take 5 hours the first time it runs.
    Edited by: Chris Sam on Apr 15, 2009 3:43 PM

    There are a lot of areas of a DI Job that can be tuned for performance, but given the fact that your job runs fine after the database is restarted, it sounds like a problem with the database server and not the Data Integrator job.
    There are a lot of resources out there for dealing with SQL Server disk I/O bottlenecks. As a minimum first step, all of them will recommend putting your .mdf and .ldf files on separate drives and using RAID 10 for the .mdf file.

  • 2008 R2/ Win7 Offline Files Sync causing High Load on server

    Hi Everyone,
    I have recently been investigating extremely high CPU Usage from the System Process on my company's main File Cluster.
    We managed to track down SRV2.sys threads causing high CPU load within the System process, but we had trouble identifying why this was the case.
    As per Microsoft's direction via a support call, we have installed the latest SRV2.sys hotfixes, but this does not appear to have alleviated the issue we are experiencing. We have also added more CPU and memory to both nodes, which has not helped either.
    We have since managed to create a system dump and is being sent to MS Support for analysis.
    I have noticed the following that appears to happen on our cluster:
    Whenever our CAD/Design department run certain functions within their apps running on a windows 7 client (apps include MicroStation, Revit, AutoCAD etc) we see a massive spike and flatline of the system process.
    We found several users with Windows 7 clients that had configured Offline Files to sync an entire network volume (some volumes are 2TB plus, so they would never fit on a user's computer anyway; I was quite shocked when I found this). We spotted this through Resource Monitor: the System process was crawling through all the folders and files in a given volume (it was reading every single folder). While this was the System process, we could identify the user by using the Open Files view in Server Manager's Share and Storage Management tool.
    I have done a fair bit of research and found that a lot of CAD/Drawing applications in the market have issues with using SMB2 (srv2.sys). When reviewing the products that we use, I noticed that a lot of these products actually recommended disabling SMB2
    and reverting to SMB1 on File Server and/or the clients.
    I have gone down the path of disabling SMB2 on all Windows 7 clients that have these CAD applications installed to help lower the load (our other option is to potentially shift the CAD volumes off our main file cluster to further isolate these performance issues). We will be testing this again tomorrow to confirm that the issue is resolved when large numbers of CAD users access data on these volumes via their CAD applications.
    We only noticed the issue with Offline Files today, with a client trying to sync an ENTIRE volume. My questions are:
    Should Offline Files syncs really cause this much load on a file server?
    Would the size of the volume the sync is trying to complete create this additional load within the System process?
    What is the industry-considered "best practice" for Offline Files setup on volumes which could have 1000+ users attached? (My personal opinion is that Offline Files should only sync users' "Personal/Home" folders, as the users themselves have a 1-to-1 relationship with this data.)
    Is there an easier way to identify which users have Offline Files enabled and actually in use (from my understanding, Sync Center and Offline Files are enabled by default, but you obviously have to add the folders/drives you wish to sync)?
    If I disable the ability to use Offline Files on the volumes, what will the user experience be when/if they try to sync their Offline Files config after this has been disabled?
    Hoping for some guidance regarding this setup with Offline Files.
    Thanks in Advance!
    Simon

    Hi Everyone,
    Just thought I would give an update on this.
    While we're still deploying http://support.microsoft.com/kb/2733363/en-us to the remainder of our Windows 7 SP1 fleet, according to some network traces and XPerf data that were sent to MS Support, it looks as though the issue with the Offline Files is now resolved.
    However, we are still having some issues with High CPU with the System Process in particular. Upon review of the traces, they have found a lot of ABE related traffic, leading them to believe that the issue may also be caused by ABE Access on our File Cluster.
    We already have the required hotfix for ABE on 2008 R2 installed (please see http://support.microsoft.com/kb/2732618/en-us), although we have it set to a value that MS believes may be too high. Our current value is "5", as that is the lowest level at which any type of permission is set (i.e. where the lowest level of inheritance is broken).
    I think I will mark this as resolved regarding the Offline Files issue (as it did resolve the problem with the Offline Files)...
    Fingers crossed we can identify the rest of the high load within the System Process!!!

  • High load CPU X process

    Hi,
    I'm using ArchLinux 64-bit with KDE 4.3.4 on a Sony Vaio. Everything works fine, but after some normal work I see that the X process takes more than 90% of CPU and never slows down. Even if I close all my applications, it stays at 90%. I don't know how to see which application could be causing this.
    My first impression was that it happens when I use samba and move some files from my machine to a Windows machine, but today it went to high CPU load without using samba, so I don't know what else to do. Any good advice?
    thanks a lot
    cesare

    Hi thank you very much for your reply, I'm sorry to reply just now.
    Anyway, I have a Sony Vaio VGN-SR21M, using the open-source driver xf86-video-radeonhd with hardware acceleration on. My graphics card is a Radeon HD 3400.
    I used the closed driver until I switched to Xorg 1.7; since I can't use it anymore, I switched to the free one.
    Should I disable desktop effects?
    Thanks again
    Cesare
    Last edited by cesare (2010-01-25 08:48:01)

  • Shared Server converted to Dedicated (WAIT(RECEIVE))

    Hi,
    I have an Oracle 9.2.0.7 database using MTS. Does anyone know why, when I start my JBoss application, 20 shared servers are in status WAIT(RECEIVE)? I saw on Metalink that this status means my shared servers were converted to dedicated! How can I turn those connections back to shared?
    Thanks in advance,
    Paulo Almeida
    São Paulo/SP- Brazil

    Hi Paulo,
    Do you have a RAM shortage on your server?
    http://www.dba-oracle.com/t_mts_multithreaded_servers_shared.htm
    My experience concurs that Oracle shared servers should not be used without a compelling reason (i.e. super-high connect/disconnect rates on an instance with limited resources) and that the vast majority of Oracle databases will run more efficiently without shared servers. Dedicated server connects are far faster than multi-threaded server connections, and 64-bit Oracle combined with the low cost of RAM has driven down the rare cases where shared servers are justified.
    Oracle's Tom Kyte notes:
    http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:5269794407347
    - "Unless you have a real reason to use MTS -- don't."
    - "a shared server connection is by design "slower" than a dedicated server (more stuff goes on, more complex) it is most likely only getting in the way."
    Hope this helps. . .
    Don Burleson
    Oracle Press author
    Author of “Oracle Tuning: The Definitive Reference”
    http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm

  • Shared Virtual Disk

    When creating a virtual shared disk in Oracle VM Manager -> Resources -> Shared Virtual Disks, the only options you have are:
    Virtual Disk Name:
    Virtual Disk Size:
    Server Pool name:
    Group Name:
    Description:
    My question is: how do you know what this virtual disk is mapped to, i.e. SAN, NFS, local disk?
    Also, how do you specify which disk to use? Ideally I would like the virtual disk to use a 150GB LUN from my SAN.

    With the exception of multipath devices (and plain files, obviously), there is no user interface for other kinds of back-end storage devices for DomUs, but Xen supports them. You have to edit the config yourself. It lives in /OVS/running_pool/<name>/vm.cfg
    Look for the disk = line and edit it to something like this:
    disk = [
    'phy:/dev/DomUDisk2/LVOracleApp,xvda,w',
    ]
    In this case, I used a local LVM logical volume as back-end storage (/dev/DomUDisk2) and mapped it to the paravirtual device in the VM (xvda). You have to do something similar. I don't have a SAN/HBAs, but assuming your LUN shows up in Dom0 as /dev/sda, you might try something like:
    disk = [ 'file:/OVS/running_pool/myvm/System.img,hda,w', # or whatever you want for the boot/root disk
    'phy:/dev/sda,xvda,w' ]
    Once this works, you might want to configure multipath if your hardware supports it.
    Best of luck, keep us posted.

  • Full (Level 0) backup to local/NFS disk

    I am new to SAP's BR*Tools, and need advice in configuring it to use RMAN.  
    1. My backups are to be written to a staging area on our SAN.
    2. What specific parameters do I need to configure in the initSID.sap file to achieve a Full (Level 0) backup to local/NFS disk - preferably using RMAN?
    3. I have successfully performed a Full (Level 0) backup without RMAN to local disk.  Also, I have successfully done an incremental (Level 1) backup using BRTools with RMAN.  I want to get BRTools + RMAN based Full (Level 0) backup to local/NFS disk.
    --VJ

    You need to set values for the following parameters:
    tape_address
    archive_copy_dir
    tape_size
    backup_mode
    backup_dev_type
    backup_type
    backup_root_dir
    volume_archive
    volume_backup
    tape_pos_cmd
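    As an illustration only (the values are assumptions - check the BR*Tools documentation for your release), a disk-based profile in initSID.sap might look like:

    ```
    backup_mode = all                  # back up the whole database
    backup_type = online               # or offline, as performed above
    backup_dev_type = disk             # write to a filesystem path instead of tape
    backup_root_dir = /backup/stage    # hypothetical SAN staging directory
    ```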
    Cheers
    Shaji

  • How to create Shared RAW disk on solaris 10?

    Hi,
    I have built two Solaris machines using VMware. I am going to configure RAC. How do I create shared raw disks for OCR, the voting disk and ASM?
    OS version = 5.10 32 bit
    Thanks

    Use Openfiler (a software-based SAN) if you do not have a shared disk storage system. It is an appliance-like piece of software that can behave as SAN storage; it can be installed on VMware as a guest machine, and the rest of the configuration remains the same for storage and your RAC deployment.
    http://www.openfiler.com/community/download
    Alternatively, you can follow the method below:
    http://startoracle.com/2007/09/30/so-you-want-to-play-with-oracle-11gs-rac-heres-how/
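    For a quick lab setup, a zero-filled raw image file can also serve as the backing store for a shared disk (a sketch only; the path and size are made up, and you must still attach the file to both VMs with the appropriate shared-disk settings in VMware):

    ```shell
    # Pre-allocate a 512 MB raw image to present to both RAC nodes as shared storage
    dd if=/dev/zero of=/vmstore/shared_ocr.img bs=1M count=512
    ```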
