LR 5: 100% CPU with offline files

Hi all,
I'm experiencing a problem with LR5, something that I can live with, but I'd like to share the experience.
I'm using LR5 on a Mac. The root folder used to contain subfolders named after the year (2008, 2009, etc.), but after migrating to LR5, I turned on smart previews and replaced (most of) the folders with soft links.
This setup roughly works for my needs: the soft links point to a network path on a NAS, which may or may not be mounted (the only downside is that "synchronize folder" thinks all the files are missing...).
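For concreteness, a link of this kind is created along these lines (the volume and folder names are examples only, not my actual paths):
ln -s /Volumes/NAS/Photos/2008 ~/Pictures/Lightroom/2008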
When I mount the network storage with the original files, and start up LR5, everything works as expected: LR reports "original + smart preview" for the pictures, etc.
But the NEXT time I start LR in "offline mode" (i.e. without the NAS available), LR uses both CPUs at ~100% for 3-4 minutes. It's still (kinda) responsive, but obviously less than perfect.
Did anyone have a similar experience?
MH

Similar Messages

  • Problem with Offline Files with Cisco NSS322 (firmware 1.3)

    Hi
    We've recently bought a new NSS322 NAS box to replace an ageing Windows SBS 2003 file server. We've upgraded the firmware to 1.3 and begun moving files across from the Windows server to the new NAS box. However, we're having a problem with Offline Files on the NAS: when we attempt to make directories available offline on Windows clients (only tried Vista Business 32-bit so far), we get loads of errors stating that these files cannot be made available offline "because they are in use" (or something like that).
    Any suggestions as to why this might be? It doesn't appear to affect all files, but most of them.
    I've read somewhere that the "allow oplocks" setting can affect this if disabled, but it is already enabled on the device, so this can't be the cause of the problem.
    Any help you could provide would be much appreciated, as at the moment this issue is rendering our NAS unusable as we rely heavily on the offline files functionality.
    Thanks in advance.
    I have some more information: this appears to be related to the ownership permissions of the files on the NAS. New files created on the NAS can be made available offline, and they have the ownership of the person who created the file. However, the files we moved across from the server are all listed as being owned by the user "guest" (I think this may be because, when we moved the files over, the user was logged in as "admin" rather than as a user on the AD domain). I should be able to resolve this by changing the owner of the files on the NAS, but I can't seem to do that, as I get an "access denied" error each time I try to change ownership.
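    For reference, the kind of ownership change I'm attempting looks roughly like this from a Windows client (the path and account name are placeholders; whether the NAS honours it is exactly the question):
    takeown /F \\nas\share\folder /R /D Y
    icacls \\nas\share\folder /setowner "DOMAIN\user" /T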
    Is there any chance someone's going to be able to help me with this? I've currently spent £500 on a box that I can't use. If it can't handle Offline Files satisfactorily, it's no use to us.
    I've been looking at this a bit more: it appears that each user can make files they have created available offline, but files created by other people cannot. The error message that appears is: "The process cannot access the file because it is being used by another process".
    I'm getting desperate here...can anyone help?

    Sorry, I don't speak English very well.
    I tried the quick test you suggested: forward all ports and make sure you can get into the FTP server.
    The result is the same, though.
    I think the problem is in the firmware, because I know several other people who own this router with the same firmware version (2.0.1.3), and they all have the same problem I do.
    I have DISABLED the firewall on my router and the problem persists.
    If I replace the Cisco Linksys WRVS4400N with an Ovislink router or any other, the issue is gone.
    The configuration on the Cisco Linksys WRVS4400N and the configuration on any other router is the same, but an FTP connection via the Cisco Linksys always crashes.
    When I connect to any other FTP server, it sometimes asks me for an FTP login and password. That is OK, but after I enter them and connect to the FTP server, I can see the folders and files on the server, and then the connection drops for no reason.
    A few seconds later, if I connect to the same FTP server again, it sometimes doesn't even ask for the login and password, and the connection drops again.

  • Excel 2010 changes relative link paths to absolute in files synced with Offline Files in Windows 7

    Hello! I'm wondering if anyone else has seen this problem. I have a large number of Excel 2010 and 2003 files in a folder on my file server. This whole folder is also synced to my computer using Offline Files in Windows 7. I have a lot of references between cells in different Excel files, and all referenced workbooks are physically in the same folder. This all works nicely when I create these files at work: all file paths referenced in the cells are created as relative paths, and the documents open correctly. This is, I understand, the expected and default behavior when Excel creates links.
    When I edit these files at home, nothing seems odd until I get back to work and sync the files back to the file server. At that point I discover that Excel 2010 has, when I saved the files while away from the corporate network, changed all the cell references in any offline-edited Excel files to point at absolute paths, and that these absolute paths point to somewhere in my %APPDATA% structure. So whenever I come to work and try to open an Excel file that I have recently worked with offline, I get a bunch of error messages about referenced files that are missing, although they clearly exist in the same folder as the file I've opened. I must then edit all the file references again, whereupon they are again created correctly as relative paths (since all files exist in the same folder), which are promptly mangled into absolute C:\....\Offline Files\.....\..... paths whenever I save them at home (and since that works too, I don't notice it again until I come back to work and the offline files are synced back to the real network location).
    This seems to be a case of Windows 7's Offline Files not being able to fool Excel 2010 into believing it is working on a file server: apparently Excel 2010 can see through the fakery and decides on its own to "fix" the problem (which obviously isn't a problem, since the paths are relative to begin with) by saving the paths as absolute paths instead. Yes, really clever, Excel. The expected behavior according to the MSKB is that links are created as relative paths, so why does Excel change them to absolute whenever Offline Files are involved? I know Offline Files only syncs files; it doesn't actually change them, so I can conclude that Excel is the program at fault here.
    Is there a fix for this, or a known workaround? Frankly, this bug makes it impossible for me to work in any advanced manner with linked Excel files. The sad thing is that this worked perfectly fine with Office 2003 and Windows XP. Is there a patch for this problem that I might have missed (I am running the latest Service Pack and I get Office updates from Microsoft Update)? If not, is there a workaround I can use to prevent Excel from corrupting my links when I edit the files offline?
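    Until there is a proper fix, one conceivable workaround (an untested sketch, not a Microsoft-sanctioned fix; the macro name is mine) is a small VBA macro that repoints any link Excel rewrote into the Offline Files cache back at the workbook's own folder, relying on the fact that all referenced workbooks live in the same folder:
    Sub RepairOfflineMangledLinks()
        ' Untested sketch: repoint external links that Excel rewrote into the
        ' Offline Files cache back at this workbook's own folder.
        ' Assumes all referenced workbooks live in the same folder as this one.
        Dim links As Variant, i As Long, fixed As String
        links = ActiveWorkbook.LinkSources(xlExcelLinks)
        If IsEmpty(links) Then Exit Sub
        For i = LBound(links) To UBound(links)
            If InStr(1, links(i), "Offline Files", vbTextCompare) > 0 Then
                ' Keep only the file name, then anchor it to this workbook's folder.
                fixed = ActiveWorkbook.Path & "\" & Mid$(links(i), InStrRev(links(i), "\") + 1)
                ActiveWorkbook.ChangeLink Name:=links(i), NewName:=fixed, Type:=xlLinkTypeExcelLinks
            End If
        Next i
    End Sub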

    Hello danceswithwindows,
    Thank you for your post.
    This is a quick note to let you know that we are performing research on this issue.
    Sincerely
    Rex Zhang
    TechNet Community Support

  • Finder and loginwindow use 100% cpu with 10.6.8

    I recently installed the update to OS X 10.6.8, and now the Finder is very unstable and uses 100% CPU, and the loginwindow uses 100% CPU also. Yes, I have Parallels 6 on this machine, but I already unchecked the "Show Windows applications in Dock" option, which is the recommended solution; however, this did not fix the problem. I do not know how to determine what actual process is causing this Finder/loginwindow issue. I would very much appreciate it if anyone has a suggestion how to fix this problem.
    This is on a mid-2011 MacBookPro.
    thank you.

    TBauer, I had the same problem with 10.6.8 using 100% of the CPU with Parallels 6 installed.  The following steps solved this problem for me:
    0.  Take a screen shot of your Dock if you wish to remember it as currently set
    1. Launch Activity Monitor and click on the CPU column so you can see Dock using the CPU
    2.  Move /Users/.../Library/Preferences/com.apple.dock.plist to the trash
    3.  Click on Dock and click on Quit Process to stop the Dock
    OS X will then rebuild the default Dock
    4.  Put the items back in the Dock that you want
    That solved the problem for me yesterday morning.  I've launched Parallels 6 several times now and moved the machine from my work network to home and the 100% CPU problem hasn't returned.
    Hope that works for you, too!
    I don't think using the combo updater will solve the problem. The problem appears to be that a dock .plist generated under OS X 10.6.7 and earlier can contain icons larger than the Apple-documented size, which causes the CPU looping. Rebuilding the dock.plist under OS X 10.6.8 appears to solve the problem.
    Parallels has issued a knowledge bulletin saying they are looking into the problem.  On their web site they have another approach involving moving the offending icon out of the dock to solve the problem.  I like my solution better but theirs works, too.

  • Drive with Offline Files folder not available for 2 minutes after being connected to the network.

    Hi
    We use Windows 7 SP1 with one folder from the user home directory set to be available offline.
    The problem we encounter is that when going from offline to online via VPN, there is a 2-minute delay before the drive where the offline folder is located becomes available again. Other drives mapped at VPN connection time are available immediately after we applied KB2705233; only the drive that contains the folder we make available offline still has this delay issue.
    I found KB2663418, but it does not apply, as we have Windows 7 SP1. However, the symptoms are the same:
    "This issue occurs because the client-side caching (CSC) interface tries to reconnect to the source before the network is actually available. This behavior causes the SMB cache to enter a negative state. In this situation, the CSC interface tries to reconnect to the source on the next 2-minute retry interval."
    Does anyone know how to force a connectivity check, or is there another fix available?
    thanks in advance
    bruno

    Hi,
    Why didn't KB2663418 apply to your computer? It applies to Windows 7 Service Pack 1, when used with:
    Windows 7 Enterprise
    Windows 7 Professional
    Windows 7 Ultimate
    Did you use Windows 7 Home Basic or Home Premium? Note, however, that Windows 7 Home Basic and Home Premium do not support Offline Files.
    So please try that hotfix.
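    To verify whether the hotfix is already present on a machine, something along these lines should work from a command prompt:
    wmic qfe get HotFixID | findstr 2663418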
    Karen Hu
    TechNet Community Support

  • File server migration with Offline files involved

    Hi,
    We are planning a file server migration in the coming weeks.
    This morning, our customer came out with the good old "Oh, and I just thought about something else."
    Here's the scenario:
    -They are using 1 network drive
    -That network drive is made available offline for all laptop users
    -Those users are spread out over several countries. No VPN connection
    -They have been working for months on their offline network drive, right in the middle of the woods, no internet connection; it was already hard enough for them to find a power supply for their laptops...
    ...nevermind
    -The day they come back to the office, the file server the network drive points to will be offline.
    Now the million-dollar question: what happens to their "dirty" files?
    Yep, exactly: the ones they changed 6 months ago, that they have no clue about if you ask them, but certainly will the day I clear the damn cache.
    My first analysis:
    -The new file server will have another name; no alias or re-use of the old name is possible (the customer doesn't want that)
    -I can't tell those laptops "hey, for that offline cache, please use this new network drive"
    So:
    >> Those users have to manually identify the files they changed while offline, copy them locally onto their machines, and work that way until they come back to the main office.
    >> When they finally show up, clear the cache, make the new network drive available offline, and put back the files they copied locally.
    >> If no internet connection is available in the branch office, let them work locally; it's still better than this hybrid nonsense of a 6-month offline folder "solution". If an internet connection is mostly available remotely, propose some Citrix/View/RDS setup, which is, to me, a more professional-looking solution.
    Does someone have another (better?) idea/solution?

    Hi,
    I suggest you ask users to connect their laptops to the internet, then start Offline Files synchronization against the old file server. After that, use Robocopy to copy the data from the old server to the new server. Since the offline files cache cannot be re-targeted at the new file server, we need to synchronize the data first.
    If the old server cannot be brought back up, as you mentioned, you might need to ask users to copy their changed files to the new file server.
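    For example, a sketch of such a copy (server and share names are placeholders; adjust the options to your environment):
    robocopy \\OLDSERVER\share \\NEWSERVER\share /E /COPYALL /R:2 /W:5 /LOG:C:\robocopy-migration.log
    Here /E copies subdirectories including empty ones, /COPYALL preserves attributes, timestamps, security, owner and auditing information (this requires backup privileges), and /LOG keeps a record you can verify afterwards.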
    Regards, 
    Mandy

  • Folder redirection with Offline files enabled - Can I change the location of the locally cached files?

    I have a 2012 r2 server setup with folder redirection and offline files enabled. All this works perfectly.
    As you probably know, the local cache for offline files is stored at c:\windows\csc\v2.0.6\namespace\servername\
    The problem I have run into is that one user (who cannot be told to delete files, cough CEO cough) has a very large Documents folder and a small SSD as his C: drive, so the offline files are filling up his SSD. He wants all his files to be synced, so decreasing the maximum disk usage is also not an option.
    What I would like to do is move the Offline files to his D drive which is a large drive, however I have been unable to find any official method for doing this. Is there any provision to change this?
    If not, would it work to move the entire \servername\ path to the d drive and then create a junction at c:\windows\csc\v2.0.6\namespace\servername\ that points to d:\servername\?
    Thanks,
    Travis

    Hi,
    The following article is for Windows Vista, but it should still work at least in Windows 7:
    How to change the location of the CSC folder by configuring the CacheLocation registry value in Windows Vista
    http://support.microsoft.com/kb/937475
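    Per that article, the change amounts to a single registry value; as a sketch (D:\CSC as the new location is only an example, and a reboot plus a full re-sync is needed afterwards):
    reg add "HKLM\System\CurrentControlSet\Services\CSC\Parameters" /v CacheLocation /t REG_SZ /d "\??\D:\CSC" /f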
    Meanwhile, creating a symbolic link should also work, for example:
    mklink /d "C:\Windows\CSC" "D:\CSC"
    Note: mklink will create the D:\CSC folder itself, so you do not need to create it manually.
    Note 2: As mentioned above, you may need to re-sync the offline files afterwards. Personally, I also think Robocopy will not work for moving the cache.

  • 100% CPU spike opening file

    Hi there,
    I'm using JDev 10.1.3.0.4(SU4) under Windows 2000, configured to use the default 512MB of memory. When I attempt to load one of several files JDev spikes to 100% CPU utilization, very reproducibly (I've been restarting JDeveloper all afternoon trying to work around the problem), even when I've freshly rebooted and all extensions have been turned off!
    The file is as so:
    -- begin
    package blah;
    import blah;
    public class bleh implements blah
    public bleh()
    --end
    If I edit the file from DOS and add a linefeed before the closing brace, the file loads fine, but one of its brethren that's just like it still won't load. I've turned many switches on and off this afternoon -- is there something I can do so I don't have to discover which files need hand-tweaking by restarting the whole IDE?
    thanks,
    James
    PS the actual file has normal indentation not showing up here

  • 100% CPU with DAQmxRegisterSignalEvent(.., DAQmx_Val_CounterOutputEvent,..) / NIDaqmx 8.71 C client

    Hi All,
    Please see the code below for my problem. I removed error checking to make it easily readable. Whenever I enable callbacks using DAQmx_Val_CounterOutputEvent, the following code uses 100% CPU, even with the 10 Hz counter! Am I missing something here? I tried all the options for interrupt-based processing instead of busy polling, to no avail. This is just unacceptable; these days one can't deliver an app that permanently runs at 100% CPU without raising some eyebrows...
    I must be missing something...
    My application requires that I generate TTL-level trigger pulses and, at the rising edge of each trigger pulse, also execute some very short C code. This code should start running within 0.5 ms of the trigger edge, which should be no problem for a modern CPU. This is why I am using DAQmx_Val_CounterOutputEvent. However, I cannot have NI-DAQmx implement this using busy polling (which I assume it's doing, since it's eating 100% CPU). It is simply not acceptable to be running at 100% CPU, for several reasons. The documentation and this support site talk about interrupt-based processing, and the hardware is obviously capable of it.
    So, the 2 questions I have are:
    1. What do I have to do in order to have NIDaq use interrupt based processing instead of busy polling (for the DAQmx_Val_CounterOutputEvent callbacks in my application)?
    2. In addition to (1), can I make it so that my callback only gets called on the rising edge of the trigger signal instead of both the rising and falling edges?  Perhaps I can wire the counter output to a general purpose input and then generate interrupts based on the rising edge that the input measures? 
    Please help,
    Philip
    // Currently the callback simply does nothing, to make sure it's not causing the 100% CPU problem.
    // We get called once for every rising and once for every falling edge (20 Hz);
    // we would prefer to be called only on the rising edge. Is that possible?
    static int32 StaticTriggerCallback(TaskHandle taskHandle, int32 signalID, void *callbackData)
    {
        return 0;
    }
    void TTCameras::StartTimer(void)
    {
        // Configure DAQmx ctr3 as our spin camera trigger, 10 Hz. (taskHandle is a class member.)
        DAQmxCreateTask("", &taskHandle);
        DAQmxCreateCOPulseChanFreq(taskHandle, "/Dev1/ctr3", "CameraSpinTrigger", DAQmx_Val_Hz, DAQmx_Val_Low, 0.0, 10.0, 0.5);
        DAQmxCfgImplicitTiming(taskHandle, DAQmx_Val_ContSamps, 1000);
        DAQmxSetReadWaitMode(taskHandle, DAQmx_Val_WaitForInterrupt);
        // I tried this but it does not help:
        //DAQmxSetRealTimeWaitForNextSampClkWaitMode(taskHandle, DAQmx_Val_WaitForInterrupt);
        // If I register this callback, CPU usage goes to 100%. This callback needs to be interrupt-driven!
        DAQmxRegisterSignalEvent(taskHandle, DAQmx_Val_CounterOutputEvent, 0, (DAQmxSignalEventCallbackPtr)StaticTriggerCallback, NULL);
        DAQmxStartTask(taskHandle);
    }
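    One avenue worth exploring for question 2 (an untested sketch, not an NI-confirmed answer): wire the counter output to a digital input line and use DAQmx change detection configured for rising edges only, e.g.:
    // Untested sketch: react only to rising edges of the trigger, assuming
    // /Dev1/ctr3's output is physically wired to digital line Dev1/port0/line0.
    TaskHandle diTask;
    DAQmxCreateTask("", &diTask);
    DAQmxCreateDIChan(diTask, "Dev1/port0/line0", "", DAQmx_Val_ChanPerLine);
    // Rising-edge lines only; the falling-edge list is left empty.
    DAQmxCfgChangeDetectionTiming(diTask, "Dev1/port0/line0", "", DAQmx_Val_ContSamps, 1);
    DAQmxRegisterSignalEvent(diTask, DAQmx_Val_ChangeDetectionEvent, 0, (DAQmxSignalEventCallbackPtr)StaticTriggerCallback, NULL);
    DAQmxStartTask(diTask);
    Whether the driver services change detection via interrupts rather than polling depends on the device and driver version, so this may or may not cure the CPU load itself.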

    Duplicate post. Look here.
    Regards,
    Chris Delvizis
    National Instruments

  • 100% cpu load on file open

    Hi,
    very recently I have developed a very strange problem: whenever I open a file dialog on Windows from a Java program, the CPU load goes to 100% and I have to kill the process. It does not make a difference which Java application I choose; it happens with DbVisualizer, FreeMind, or LDAP Browser all the same.
    Here's what I tried already:
    * uninstall recent security patches
    * upgrade from 1.5.0_06 to 1.5.0_07
    * kill all running processes and services (firewall, ClearCase, virus scanner, all kinds of other stuff) that are not required to run the machine (and boy, was my computer fast after I did this! lol)
    All with no effect whatsoever.
    Facts:
    * It's not related to the Java program
    * It's not related to the JVM
    It's somewhere in the integration with Windows, but where is it broken, and what can I do to fix it? Any idea what I could do is really appreciated!
    Help :-)

    Yesterday I was close; today I found the reason. IT'S SO F***ING STUPID. It's actually another bug, which is only fixed in Mustang:
    http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6317789
    For some reason I indeed had a shortcut on my desktop pointing to the desktop itself. We have a document management system where you can check documents in and out, and I configured it to check them out to the desktop; apparently that program created the shortcut on my desktop to the checkout folder!
    Well, anyway, I have all the latest security patches and drivers now, and Java is working again. Can I now assign the duke dollars to myself? :-)

  • Ora_m000 consuming 100% cpu with ORA-00600 [kcbz_check_objd_typ_3], [0]

    Hello,
    I have been trying to troubleshoot high cpu usage in our Ebiz R12 environment where cpu is running 100% by ora_m000 process and I see ORA-00600 in alert log.
    I have tried the workaround from Metalink note 466049.1 and even applied the fixes for patches 4430244 and 5752105, but it hasn't fixed the issue.
    Before going through opening a ticket with support, I thought try my luck here. I'd really appreciate any suggestions to fix this problem.
    content of trace file.
    *** KEWROCISTMTEXEC - encountered error: (ORA-00600: internal error code, arguments: [kcbz_check_objd_typ_3], [0], [0], [1], [], [], [], [])
    *** SQLSTR: total-len=176, dump-len=176,
    STR={insert into wrh$_sysstat   (snap_id, dbid, instance_number, stat_id, value)  select    :snap_id, :dbid, :instance_number, stat_id, value  from    v$sysstat  order by    stat_id}
    *** KEWRAFM1: Error=13509 encountered by kewrfteh

    What's your database version, in four decimal places?
    ORA-600 is an Oracle internal error; you need to work with Oracle Support to fix the problem.
    Also use "Troubleshoot an ORA-600 or ORA-7445 Error Using the Error Lookup Tool".
    The lookup tool will show you the known bugs related to the error message.
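    For reference, the full version string can be pulled with a query like:
    SELECT banner FROM v$version;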

  • Win 7 pro 100% cpu with only a few programs running

    I have a 4.2 rating with an AMD Athlon 64 3400+ running at 2.41 GHz and 2.5 GB RAM on 32-bit Windows 7 Pro. My motherboard (Asus K8N) doesn't have Windows 7 drivers, which may contribute to the issue, so I have no illusions that this computer should be lightning quick, but it is very difficult to keep my patience due to the constantly high CPU usage. I run about the same load as I did in XP, and I didn't have nearly this high CPU usage there. Outside of antivirus (AVG Free), LogMeIn, Creative CamTray, and TVersity running all the time, if I then open Outlook 2007, Firefox with more than one tab, and iTunes, my CPU is at 100%; it's still usable, but with a definite slowdown. I've set the visual effects to best performance and that really hasn't made a difference. What suggestions can you offer other than stopping LogMeIn, the CamTray, and TVersity, all of which are hardly using any CPU? It's constantly flipping between Firefox and iTunes primarily.

    Hi Cougar78,
    First, could you confirm with Task Manager which process shows the high CPU usage?
    You may update the BIOS and the hardware drivers first to check the issue.
    If that does not work, I would also suggest you test the issue in Safe Mode and with a Clean Boot.
    What are the results in both modes?
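    If Task Manager is unclear, per-process CPU can also be sampled from a command prompt with something like this (the output is point-in-time and can be noisy):
    wmic path Win32_PerfFormattedData_PerfProc_Process get Name,PercentProcessorTime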
    Regards,
    Arthur Li - MSFT

  • Dispatcher using 100% CPU with 4GB space "used"

    A short story with a happy ending ...
    A user tried to create a "big" table with a possibly somewhat Cartesian select, resulting in the "oh no you don't: that would exceed 4GB" error/cancel scenario.
    They cancelled and went back to working on their few hundred megs of data, but "the system seems very slow...". The dispatcher d000 was consuming 99.9% CPU. After a fair amount of looking around on the system and in this excellent forum (some hints but no exact match, hence this post), I decided that the problem was the TEMP tablespace being stuck at nearly 4GB in size (5MB used) but unshrinkable.
    Solution: I squeezed SYSAUX and USERS till they squealed to get a couple of extra available MB, then created a new 10MB temporary tablespace and made it the default temp (I would have altered any users set with a non-default temporary tablespace, but assumed there were none ;-), bounced the database, shrank/fixed the original temp, and made it the default again (with a lower max-size constraint and a longing for the production version, where temp will not count ;-)
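    In SQL terms, the sequence was roughly the following (a sketch; the file names and sizes are illustrative, not the exact commands used):
    CREATE TEMPORARY TABLESPACE temp2 TEMPFILE 'temp2.dbf' SIZE 10M;
    ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;
    -- bounce the instance, then rebuild the original TEMP at a sane size
    DROP TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;
    CREATE TEMPORARY TABLESPACE temp TEMPFILE 'temp.dbf' SIZE 100M AUTOEXTEND ON NEXT 10M MAXSIZE 1G;
    ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;
    DROP TABLESPACE temp2 INCLUDING CONTENTS AND DATAFILES;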
    For info: OS/Environment Linux SuSE 10 x86_64 : Opteron with 8GB Mem and
    lots of disk ...
    (No relevant info in the trace files/alert log etc so not posted - everything up and unremarkable but slow and with 1 very busy dispatcher process)
    Otherwise pretty standard XE install running excellently for months.
    Thanks everyone!

    Greetings Dietmar,
    thanks for linking this to a better and clearer exposition of the space issues
    (<excuse>it was late here and home was already phoning to say the dinner was in the cat ;-)</excuse>.
    My reason for posting remains the strange 99.9% CPU use by the dispatcher while the system was in the "on-the-edge-of-4GB" state. This behaviour survived an instance bounce and produced no other sign (trace file or alert log) than the evidence of ps/top and the obvious degradation in performance. Could be there's a little bug for the developers to squash in the marginal case (which could even apply in production if you fill, or almost fill, the data quota counted in production?). Whatever the case, the problem is pretty benign and easily remedied when you know how. I guess the GUI shrink facility and a touch of documentation should help out in the production release as well.
    Best Regards,
    P-)

  • With Latest Update, Premiere CC Crashing at 100% Export, with Increased File Sizes

    I'm a bit baffled, and hoping someone can help. With the latest update of Premiere CC, I am having trouble exporting to Vimeo. As a filmmaker and educator in Chicago, I do a lot of exporting to Vimeo, up until now with great success. Over the past few days, not only is Premiere crashing at 100% export, but it seems to be exporting larger files than before (and with fewer Vimeo options, I notice, in Media Encoder). Not only did this happen with a 30-minute class exercise straight out of a Canon camcorder, which inexplicably was being exported at around 3 or 4 GB, but more importantly: I am trying to replace the video file on Vimeo of a feature film of mine, and the EXACT same feature film, at the exact same length (82 minutes), with the exact same export stats, that was 3 GB in all its past successful exports, is now inexplicably exporting as a 9 GB file. It's Vimeo HD 1080, always is. Nothing has changed at all on my end.
    This is a problem, as I have several items to export to Vimeo in the coming days. What's going on?
    Thanks,
    Stephen

    One other specific change I've noticed in the export settings is that export is now defaulting to H.264 Match Settings - High Bitrate, whereas before it defaulted to whatever settings I had previously used.

  • 100% CPU usage with XP Cooler from PCAlert4

         Any idea why I would be running 100% CPU usage when running XP Cooler from PC Alert 4? I noticed the PC was sluggish when running it, so CTRL+ALT+DEL, and under Performance it showed 100% CPU with 245MB RAM usage. No other TSRs running; several applications open, but under 10% CPU usage without PC Alert and XP Cooler.
         I was also wondering if anyone had any ideas on the video. I have the K7N2G with 768MB DDR PC2100 (3x256), an XP2200+ at 266FSB running 1804MHz, a Maxtor 30GB 7200RPM drive, and a 350W PSU (Raidmax - yea yea I know, don't say a word though LOL) with onboard video, and here is the problem: graphics are great, but they get choppy running NFS Hot Pursuit 2 at 32-bit 800x600 (it doesn't seem to matter; even when running 16-bit 640x480 it still does it). What setting am I missing? Thanks in advance.

    Deathstalker (Richard),
         I did what you suggested in memory management and also changed out the memory on the board; it did make a difference, not as much as I had hoped, but it did.
    Raven,
         I have used Fraps in the past, forgot about it, but went ahead and downloaded it to get a better look.
    To the Both of you,
         Thanks so much for your replies, sometimes it just takes someone outside looking in to see the things overlooked.  But I went ahead and also ran all of the latest updates for the board (semi new system and I only installed with the CD that came with the Mobo) and everything is running better than expected.  Thanks again for your thoughts - keep 'em flowin, it's amazing whom we could help.  Take care.
    Don
