Weblogic.kernel.Default not using all threads

We are running WebLogic 8.1 SP4 on HP Tru64 UNIX 5.1.
We have an admin server and 3 managed servers in a cluster.
When monitoring the weblogic.kernel.Default execute threads through the WebLogic console, threads 0 to 15 have zero total requests and their user is marked '<WLS Kernel>'. Threads 16 to 24 share the entire workload, with thousands of requests spread between them, and their user is marked 'n/a' (when idle).
I have done a thread dump, and threads 0 to 15 just seem to be idle.
This is the case for the admin server, server 1 and server 3. But server 2 actually shows all threads 0 to 24 as sharing the workload, and the user is 'n/a' for all threads.
Is there a reason for this?
Below is a snippet of the thread dump showing a thread whose user is '<WLS Kernel>':
**** ExecuteThread: '3' for queue: 'weblogic.kernel.Default' thread 0x207d8e80 Stack trace:
pc 0x03ff805c2cc8 sp 0x200026ecc00 __hstTransferRegisters (line -1)
pc 0x03ff805b2384 sp 0x200026ecc00 __osTransferContext (line -1)
pc 0x03ff805a3d20 sp 0x200026ecd90 __dspDispatch (line -1)
pc 0x03ff805a309c sp 0x200026ecde0 __cvWaitPrim (line -1)
pc 0x03ff805a0518 sp 0x200026ece80 __pthread_cond_wait (line -1)
pc 0x03ffbff31558 sp 0x200026ecea0 monitor_wait (line 625)
pc 0x03ffbff5ea48 sp 0x200026ecf10 JVM_MonitorWait (line 1037)
pc 0x00003001304c sp 0x200026ecf20 -1: java/lang/Object.wait(J)V
pc 0x0000302047dc sp 0x200026ecfd0 2: java/lang/Object.wait()V
pc 0x000030204784 sp 0x200026ecfe0 8: weblogic/kernel/ExecuteThread.waitForRequest()V
pc 0x0000302044ec sp 0x200026ed000 43: weblogic/kernel/ExecuteThread.run()V
pc 0x03ffbff91b28 sp 0x200026ed030 unpack_and_call (line 100)
pc 0x03ffbff89ca4 sp 0x200026ed040 make_native_call (line 331)
pc 0x03ffbff27bb0 sp 0x200026ed0f0 interpret (line 368)
pc 0x03ffbff36bf0 sp 0x200026ed9d0 jni_call (line 583)
pc 0x03ffbff378d8 sp 0x200026eda60 jni_CallVoidMethodA (line 70)
pc 0x03ffbff37980 sp 0x200026eda90 jni_CallVoidMethod (line 88)
pc 0x03ffbff2ecf4 sp 0x200026edb10 java_thread_start (line 592)
pc 0x03ffbff2e720 sp 0x200026edb30 thread_body (line 460)
pc 0x03ff805cf278 sp 0x200026edc00 __thdBase (line -1)
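The idle '<WLS Kernel>' threads above are simply parked in ExecuteThread.waitForRequest(), blocking in Object.wait() until work arrives. A minimal sketch of that pattern in plain Java (class and method names are invented for illustration; this is not WebLogic's actual code):

```java
import java.util.LinkedList;
import java.util.Queue;

// Sketch of an execute-thread-style worker: it blocks in Object.wait()
// while idle, which is exactly how an idle pool thread shows up in a
// thread dump (sitting inside a waitForRequest()-like method).
class SimpleExecuteQueue {
    private final Queue<Runnable> requests = new LinkedList<>();

    // Called by the server side to hand work to the pool.
    public synchronized void submit(Runnable r) {
        requests.add(r);
        notifyAll(); // wake an idle worker
    }

    // Called by a worker thread; blocks until a request is available.
    public synchronized Runnable waitForRequest() throws InterruptedException {
        while (requests.isEmpty()) {
            wait(); // an idle worker parks here: Object.wait() in the stack
        }
        return requests.poll();
    }
}

public class ExecuteThreadDemo {
    public static void main(String[] args) throws Exception {
        SimpleExecuteQueue queue = new SimpleExecuteQueue();
        StringBuilder log = new StringBuilder();

        Thread worker = new Thread(() -> {
            try {
                // Take one request and execute it (a real worker loops).
                queue.waitForRequest().run();
            } catch (InterruptedException ignored) { }
        });
        worker.start();

        queue.submit(() -> log.append("handled"));
        worker.join();
        System.out.println(log); // prints "handled"
    }
}
```

A thread blocked like this has done zero requests and is harmless; it only becomes a problem if work queues up while threads stay idle.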

Grand Central Dispatch is still in its infancy and not really doing its job yet. I don't think apps have to be written specifically for hyper-threading, but they do have to stop doing things they used to, such as preventing threads from going to sleep or being parked.
High usage is not necessarily high efficiency; often it's the opposite.
Windows 7 seems to be optimized for multi-core thanks to a lot of reworking. Intel knows it isn't possible to hand-code for every configuration and that the hardware has to be smarter too. But the OS also has a job to do, and right now I don't think it schedules threads, or handles memory, properly.
Gulftown's 12 MB cache will help, and overall it should be about 20% more efficient at its work.
With dual processors, and seemingly without two QuickPath links, data shuffling has led to memory thrashing. It used to be page thrashing from not having enough memory; then came core thrashing, from having the cores but not having them integrated (the 2008 design is often touted as the greatest so far, but it was really four dual-cores; 2009 was the first with a processor that was a genuinely new, natively quad-core design).
One core should be owned by the OS so it is always available for its own work and housekeeping.
The iTunes audio bug last year showed how damaging badly implemented code can be: a thread could usurp processing and drive up CPU temperature while doing essentially nothing, a sort of denial-of-service attack on the processor (those 80°C temperatures people saw).
All these new technologies, OpenCL, GCD and even OpenGL, are still under development; they are not yet tested and mature, but rather 1.0 foundations for the future, a year ahead of readiness.

Similar Messages

  • Why are the weblogic.kernel.Default Execute Threads used by WLS Kernel

    In my Admin Console, 10 of the 15 weblogic.kernel.Default execute threads are shown as used by the <WLS Kernel> user. The total requests column for these threads shows 0. The other 5 threads show 20K to 40K requests. Why is the <WLS Kernel> user hogging these threads and not allowing the applications to use them?

    Hi,
    As work enters a WebLogic Server instance, it is placed in an execute queue. The work is then assigned to a thread within the queue that performs it.
    By default, a new server instance is configured with a default execute queue, weblogic.kernel.Default, that contains 15 threads.
    Go through the following link for useful information on this issue:
    http://e-docs.bea.com/wls/docs81/ConsoleHelp/domain_executequeuetable.html
    Regards
    Anilkumar kari

  • Weblogic.kernel.Default

    I have two servers, serverOne and serverTwo, that are running different applications on WL815, Solaris 10, x86 platform.
    Both have ThreadCount=25, Threads Max=400, and Threads Increase = 15.
    1. I have noticed when I monitor the weblogic.kernel.Default queue that, no matter how much load is on the server, there are never more than 25 threads in the queue. Why is that?
    2. On serverOne, only 6 of the 25 threads are used by <WLS Kernel> and the rest are available to the applications. However, on serverTwo, about 19 of the 25 threads are used by <WLS Kernel>, and the rest are used by the application. FYI, there are 4 application queues deployed on serverTwo. What is the reason for so many threads being used by <WLS Kernel> on serverTwo?
    Thanks for all your answers in advance!

    Can you post the complete stacktrace? That may yield a clue.

  • ExecuteThread for queue: 'weblogic.kernel.Default' stuck

    Hello!
    I have a web application deployed on a WebLogic 8.1 SP4 cluster which is seriously loaded (~30 concurrent users), and periodically I get messages like this written to the log file:
    <Apr 2, 2007 10:11:54 AM EDT> <Error> <WebLogicServer> <BEA-000337> <ExecuteThread: '51' for queue: 'weblogic.kernel.Default' has been busy for "233" seconds working on the request "Http Request: /myurl/some_jsp.jsp", which is more than the configured time (StuckThreadMaxTime) of "180" seconds.>
    As far as I could tell from the BEA documentation, these messages mean that some threads (ExecuteThread #51 here) have been running for too long.
    I made a thread dump and found that for quite a while (10-15 min) the thread had been doing the same work:
    "ExecuteThread: '51' for queue: 'weblogic.kernel.Default'" daemon prio=1 tid=0x6ca004b8 nid=0x3094 runnable [0x6c9fe000..0x6c9ff868]
         at java.net.SocketOutputStream.socketWrite0(Native Method)
         at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
         at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
         at weblogic.servlet.internal.ChunkUtils.writeChunkNoTransfer(ChunkUtils.java:280)
         at weblogic.servlet.internal.ChunkUtils.writeChunks(ChunkUtils.java:241)
         at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:311)
         at weblogic.servlet.internal.ChunkOutput.checkForFlush(ChunkOutput.java:387)
         at weblogic.servlet.internal.ChunkOutput.write(ChunkOutput.java:254)
         at weblogic.servlet.internal.MultibyteOutput.write(ChunkOutput.java:482)
         at weblogic.servlet.internal.ChunkOutputWrapper.write(ChunkOutputWrapper.java:125)
         at weblogic.servlet.jsp.JspWriterImpl.write(JspWriterImpl.java:455)
    As you can see, the thread has been writing to the socket for 10-15 minutes!
    Can you help with some suggestions? Can this problem occur because of heavy overloading of the server?
    Thanks for all your responses.
    Thanks
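    For reference, the "StuckThreadMaxTime ... 180 seconds" in the log message is a server-level setting. In WebLogic 8.1 it can be changed in the console or in config.xml; a sketch of the relevant attributes is below (the server name and values are illustrative only, and note that raising the threshold only hides the symptom, it does not fix a blocked socket write):

    ```xml
    <!-- Illustrative config.xml fragment only: raise the stuck-thread
         threshold to 600 seconds and check every 60 seconds.
         "myserver" is a placeholder server name. -->
    <Server Name="myserver"
            StuckThreadMaxTime="600"
            StuckThreadTimerInterval="60"
            ListenPort="7001"/>
    ```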

    I have installed the Spring application on WebLogic 8.1 SP4 on Red Hat Linux 3.0.
    These messages don't appear for WebLogic 8.1 SP4 on Windows.
    2007-05-10 10:22:50,356 [l.Default'] DEBUG org.springframework.web.servlet.DispatcherServlet: Rendering view [org.springframework.web.servlet.view.JstlView: unnamed; URL [/util/noAccess.jsp]] in DispatcherServlet with name 'spring'
    2007-05-10 10:22:50,376 [l.Default'] DEBUG org.springframework.web.servlet.DispatcherServlet: Could not complete request
    java.net.SocketException: Connection reset
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:96)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
    at weblogic.servlet.internal.ChunkUtils.writeChunkTransfer(ChunkUtils.java:267)
    at weblogic.servlet.internal.ChunkUtils.writeChunks(ChunkUtils.java:239)
    at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:311)
    at weblogic.servlet.internal.ChunkOutput.checkForFlush(ChunkOutput.java:387)
    at weblogic.servlet.internal.ChunkOutput.write(ChunkOutput.java:254)
    at weblogic.servlet.internal.ChunkOutputWrapper.write(ChunkOutputWrapper.java:125)
    at weblogic.servlet.jsp.JspWriterImpl.write(JspWriterImpl.java:455)
    at jsp_servlet._util.__noaccess._writeText(__noaccess.java:74)
    at jsp_servlet._util.__noaccess._jspService(__noaccess.java:283)
    at weblogic.servlet.jsp.JspBase.service(JspBase.java:33)
    at weblogic.servlet.internal.ServletStubImpl$ServletInvocationAction.run(ServletStubImpl.java:1006)
    at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:419)
    at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:315)
    at weblogic.servlet.internal.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:322)
    at org.springframework.web.servlet.view.InternalResourceView.renderMergedOutputModel(InternalResourceView.java:111)
    at org.springframework.web.servlet.view.AbstractView.render(AbstractView.java:250)
    at org.springframework.web.servlet.DispatcherServlet.render(DispatcherServlet.java:965)
    at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:744)
    at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:663)
    at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:394)
    at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:348)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:740)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
    at weblogic.servlet.internal.ServletStubImpl$ServletInvocationAction.run(ServletStubImpl.java:1006)
    at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:419)
    at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:315)
    at weblogic.servlet.internal.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:322)
    at weblogic.servlet.internal.ServletResponseImpl.sendError(ServletResponseImpl.java:531)
    at weblogic.servlet.internal.ServletResponseImpl.sendError(ServletResponseImpl.java:383)
    at weblogic.servlet.FileServlet.findSource(FileServlet.java:281)
    at weblogic.servlet.FileServlet.service(FileServlet.java:184)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
    at weblogic.servlet.internal.ServletStubImpl$ServletInvocationAction.run(ServletStubImpl.java:1006)
    at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:419)
    at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:315)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:6718)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
    at weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:3764)
    at weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:2644)
    at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:219)
    at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:178)

  • How to get rid of the weblogic.kernel.Default errors and warning?

    Hi there,
    I'm running my application deployed on WLS 8.1 SP3 and the application runs fine, but the following error and warnings are thrown.
    2004-08-05 11:26:30,453 [ExecuteThread: '13' for queue: 'weblogic.kernel.Default'] ERROR com.bea.wlw.runtime.core.util.Config - Failed to obtain connection to datasource=cgDataSource, using generic DB properties
    <Aug 5, 2004 11:26:30 AM SGT> <Error> <WLW> <000000> <Failed to obtain connection to datasource=cgDataSource, using generic DB properties>
    2004-08-05 11:27:12,281 [ExecuteThread: '13' for queue: 'weblogic.kernel.Default'] WARN org.apache.jcs.config.OptionConverter - Could not find value for key jcs.default.elementattributes
    2004-08-05 11:27:12,282 [ExecuteThread: '13' for queue: 'weblogic.kernel.Default'] WARN org.apache.jcs.engine.control.CompositeCacheConfigurator - Could not instantiate eAttr named 'jcs.default.elementattributes', using defaults.
    2004-08-05 11:27:12,308 [ExecuteThread: '13' for queue: 'weblogic.kernel.Default'] WARN org.apache.jcs.config.OptionConverter - Could not find value for key jcs.system.groupIdCache.elementattributes
    2004-08-05 11:27:12,308 [ExecuteThread: '13' for queue: 'weblogic.kernel.Default'] WARN org.apache.jcs.engine.control.CompositeCacheConfigurator - Could not instantiate eAttr named 'jcs.system.groupIdCache.elementattributes', using defaults.
    2004-08-05 11:27:12,386 [ExecuteThread: '13' for queue: 'weblogic.kernel.Default'] WARN org.apache.jcs.config.OptionConverter - Could not find value for key jcs.region.CodeTableCache.elementattributes
    2004-08-05 11:27:12,386 [ExecuteThread: '13' for queue: 'weblogic.kernel.Default'] WARN org.apache.jcs.engine.control.CompositeCacheConfigurator - Could not instantiate eAttr named 'jcs.region.CodeTableCache.elementattributes', using defaults.
    2004-08-05 11:27:13,527 [ExecuteThread: '13' for queue: 'weblogic.kernel.Default'] WARN com.bea.wlw.netui.script.el.NetUIReadVariableResolver - Could not create a ContextFactory for type "com.bea.netuix.servlets.script.PortalVariableResolver$PortalContextFactory" because the ContextFactory implementation class could not be found.
    May I know why this is so? How can I get rid of these errors and warnings?
    Thanks
    Derek
    Message was edited by derekchan at Aug 4, 2004 8:41 PM
    Message was edited by derekchan at Aug 4, 2004 8:42 PM

    You don't seem to have configured the datasource / connection pool / database control with the proper DB properties.
    Check your configuration to see that you have done it right, then check your application and the property settings in the controls to ensure they are right.
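    In WebLogic 8.1 the pool and datasource are defined in config.xml. A sketch of what the cgDataSource wiring might look like (the driver class, URL, credentials, and target name below are placeholders you must replace with your own):

    ```xml
    <!-- Illustrative only: placeholder driver/URL/credentials/target. -->
    <JDBCConnectionPool Name="cgPool"
        DriverName="oracle.jdbc.driver.OracleDriver"
        URL="jdbc:oracle:thin:@dbhost:1521:MYSID"
        Properties="user=wlw"
        Password="changeit"
        InitialCapacity="5"
        MaxCapacity="15"
        Targets="myserver"/>
    <JDBCTxDataSource Name="cgDataSource"
        JNDIName="cgDataSource"
        PoolName="cgPool"
        Targets="myserver"/>
    ```

    If the pool cannot reach the database at startup, WLW falls back to generic DB properties, which matches the "Failed to obtain connection to datasource=cgDataSource" error above.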

  • Weblogic.kernel.Default Queue problems

    Hello,
    We have a problem on our production server where the threads in the weblogic.kernel.Default queue simply hang.
    The Stuck Thread Max Time and the Stuck Thread Timer Interval have been configured for 60 s, but many times the threads simply don't become unstuck. :(
    Is there any way to kill the threads without restarting the server itself?
    thanks,
    sg

    We generated various thread dumps and found a number of threads in a "locked" state with LDAP:
    "ExecuteThread: '15' for queue: 'weblogic.kernel.Default'" daemon pri
         at java.lang.Object.wait(Native Method)
         - waiting on <73e5b9d0> (a com.sun.jndi.ldap.LdapRequest)
         at com.sun.jndi.ldap.Connection.readReply(Connection.java:418)
         - locked <73e5b9d0> (a com.sun.jndi.ldap.LdapRequest)
         at com.sun.jndi.ldap.LdapClient.getSearchReply(LdapClient.java:701)
    The thing is, there is nothing wrong with LDAP. We are sending new test requests to LDAP, and LDAP responds in less than 1 s! But the thread dump indicates the threads are hung in LDAP. :(
    The LDAP admin did a thorough check and came up with nothing... Is there anything else I can check?
    thnx,
    sg
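    One thing worth ruling out: by default the JNDI LDAP client waits indefinitely for a reply, which matches the readReply frames in the dump above. A connect timeout can be set through the environment passed to InitialDirContext; as far as I know, the corresponding read timeout property (com.sun.jndi.ldap.read.timeout) only exists on newer JVMs, not the 1.4 JVM that WebLogic 8.1 runs on. A sketch (host and port are placeholders):

    ```java
    import java.util.Hashtable;

    // Sketch: build a JNDI environment with an LDAP connect timeout so a
    // dead or unreachable LDAP server cannot hang a thread indefinitely
    // during connection setup. The host/port are placeholders.
    public class LdapEnvSketch {
        public static Hashtable<String, String> buildLdapEnv() {
            Hashtable<String, String> env = new Hashtable<>();
            env.put("java.naming.factory.initial",
                    "com.sun.jndi.ldap.LdapCtxFactory");
            env.put("java.naming.provider.url", "ldap://ldaphost:389");
            // Fail connection attempts after 5 seconds instead of blocking.
            env.put("com.sun.jndi.ldap.connect.timeout", "5000");
            return env;
        }

        public static void main(String[] args) {
            Hashtable<String, String> env = buildLdapEnv();
            // new javax.naming.directory.InitialDirContext(env) would go
            // here; omitted so the sketch needs no live LDAP server.
            System.out.println(env.get("com.sun.jndi.ldap.connect.timeout"));
        }
    }
    ```

    Note that a timeout does not explain why the replies stall in the first place; it only stops stalled requests from pinning execute threads forever.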

  • 'weblogic.kernel.Default' has become "unstuck"

    Running WL 8.1 sp2 on Windows XP Pro. with 2Gb. of memory.
    I have an EJB 2.0 CMP application accessing an Oracle 8.1.7 database.
    I retrieve 1,440 rows and then start processing each row, submitting "jobs" to a third-party product using web services.
    It had only 3 rows remaining to process when I got:
    <03-Feb-2004 13:45:47 o'clock GMT> <Info> <WebLogicServer> <BEA-000339> <ExecuteThread:
    '10' for queue: 'weblogic.kernel.Default' has become "unstuck".>
    Any pointers?
    Thanks.

    Increase the JVM heap size; if that doesn't work, turn on verbose GC output and take a look.

  • Cannot use all the tools in my drawing markups; any ideas on getting them to work?


    Hi tonys60181,
    Could you please let me know what version of Adobe Acrobat you are using?
    Which drawing tools are you unable to access?
    Is this issue specific to one PDF file, or does it affect all of them?
    What exactly happens when you try to use a drawing markup?
    Please let me know.
    Regards,
    Anubha

  • Premiere Pro CC not using all resources... 4k

    I just put together a new workstation machine for editing 1D-C 4k files in Premiere Pro and After Effects.
    Specs...
    i7 3770 @ 3.4 GHz
    Gigabyte Z77 mobo
    32 GB Corsair Vengeance DDR3 1600 RAM
    250 GB Samsung SSD
    4x 2 TB Seagate 7200 rpm HDDs in RAID 0 (yes, I know it isn't backed up; we have that part covered)
    Nvidia Quadro K4000 (3 GB VRAM)
    Anyway, we transferred a project with all the media files to our RAID media drive.
    It opened up very quickly and loaded all the media much faster than our old workstation.
    Then I hit the spacebar (preview) and the problems started.
    It won't play back smoothly at all,
    AND it is only running the CPU at 10-20%, the GPU at 10-50%, RAM at 25%, and it is not really taxing the drives very hard either...
    Why won't PP use all the resources at 100% like I have previously seen it do on other machines?
    ** The project we are trying to play is 2 layers of 4k .MOV footage in a 1080 timeline... with sharpening, Fast color corrector, and RGB curves.
    ** It also drops every fourth frame when playing back just one layer of 4k .mov footage in a 1080 timeline with NO Effects applied. (Much smoother, but still not using all resources)
    What I am asking, is why would it be giving up so quickly? Before even taxing the resources very hard?

    I too have a recent 1D-C project I'm editing.  That 1D-C is something, yes?  Most filmic images I've ever seen from a DSLR.  But I digress. 
    I'm following a workflow closer to the one used by Philip Bloom and Shane Hurlbut. Motion JPEG files will not play well at all on your system; it's strictly a capture format, not an editing one.
    I converted all my 1D-C files to ProRes 422 at 4K size using MPEG Streamclip. Those files imported beautifully and edit well on my Retina MacBook Pro laptop. I usually set playback to 1/2 quality, by the way.
    I have had problems using CUDA, so I'm sticking to software mode.  But I don't see any problem with being able to cut my film and finish on my computer.
    Also - my sequence is a 4K sequence right in Premiere.  I simply chose a clip and "made a sequence" from that.  Why scale when I don't have to?  I've exported to 1080p easily from the sequence. 
    If anyone finds this workflow less than ideal, I'm all ears.  Thanks. 

  • HELP: Premiere Pro CC not using all CPU and RAM during rendering and export

    Hello,
    I am using Premiere Pro CC on Windows 7. My timeline is quite simple, with two videos: one with the movie (MPEG) and the other with the subtitles (AVI).
    When I render the sequence in PP or export, the rendering time is way too slow and it only uses around 15-20% of the CPU and 3 GB of RAM.
    My hardware config is :
    - CPU : i7-4770k 3.50Ghz
    - RAM : 8 GB
    - Disk : 2 x 3 TB SATA (no raid)
    Neither RAM nor disk access is the bottleneck.
    I have tried rendering and exporting the same project on an iMac (with an i5 2.7 GHz, 4 GB RAM, and only 1 disk) and the result is 4x faster!!!
    The CPU usage there is close to 100%, as is RAM usage.
    So how come PP uses all the resources available on an iMac and not on Windows 7?
    Is there any known bug or software bottleneck on Windows 7?
    My machine is brand new, with nothing much installed besides Adobe products.
    Any help is very much appreciated.
    Thanks,

    I just rendered out a 2-minute sequence with about 100 clips in it and Colorista effects on everything, using the Vimeo 1080 H.264 preset. It took about 5 minutes to render straight from Premiere, and it used all the resources it could; my CPU was running at nearly 100%, same with my RAM and GPU. I was happy.
    Then I did another render with Red Giant Denoiser, and it now wants to take 30 minutes while only using about 20% of the resources available. My problem isn't that it's taking longer with Denoiser but that it's not using all of my computer's CPU and GPU.
    I'm rendering at maximum render quality and bit depth to H.264 (I'm happy to wait the extra time); if I try to use 2-pass VBR it encodes one pass at a time and wants to take up to 40 minutes.
    I would appreciate some advice on this.
    Premiere Pro CC 2013
    2.6 GHz Intel Core i7
    NVIDIA GeForce GT 750M 2048 MB (CUDA GPU enabled in Premiere)
    16 GB 1600 MHz DDR3
    OS X 10.9.4 (13E28)

  • SAP job not using all dialog processes that are available for parallel processing

    Hi Experts,
    The customer is running a job which is not using all the dialog processes available for parallel processing. It appears to use up the parallel processes (60) for the first 4-5 minutes of the job and then maxes out at about 3-5 processes for the remainder.
    How do I analyze the job to find out the issue from a Basis perspective?
    Thanks,
    Zahra

    Hi Daniel,
    Thanks for replying!
    I don't believe it's a standard job.
    I was thinking of starting a trace using ST05 before the job runs. What do you think?
    Thanks,
    Zahra

  • MBP not using all memory while rendering video

    Hey,
    When I render in Adobe Premiere Pro CS6, my Mac is not using all the available RAM: 2773 MB used... 5416 MB free
    Memory Pages:
                                  1658 MB active....1099 MB wired
                                  5420 MB inactive....11.2 MB free
    it also says that:       3 swap files at peak usage
                                   256 MB total swap space (20 MB used)  
    350 GB free storage space( total 750 GB)
    I have a late-2011 MBP (OS X Lion) with an i7 2.8 GHz and 8 GB RAM.
    Adobe Premiere Pro is set to use 6.5 GB of RAM, but it has never used that much memory, even while working in the program.
    During rendering, it is the only app running except for Google Chrome.
    If this info is not enough, just ask for whatever you need to know...
    PLEASE HELP, rendering takes forever.
    Thanks,
      kund

    You never said what model Mac you have; since this is the MacBook Pro forum, I assume you have some version of the MBP. 390% CPU would indicate CPU-boundedness for a quad-core CPU without active hyperthreading, or a dual-core CPU with hyperthreading active.
    Given the high CPU usage and the low swap usage, it seems unlikely that memory is the problem, but I'm not a Premiere Pro user, so I couldn't guess what is.
    You might try posting your question in an Adobe forum for Premiere Pro.

  • System not using all avail. mem.

    Hi all, did a lot of forum searching and haven't seen this topic.
    I'm using PP CS5 (in Production Premium). While rendering, the program is not using all available memory. Before I go on, here is the system I am using:
    Win7pro
    EVGA X58FTW mobo
    I7 970
    6x4gig Kingston hyperX 1600 RAM
    Quadro fx4000
    WD 140gig Raptor sys. drive
    2x1TB RAID 0 (3gig SATA) data
    2x2TB RAID 0 (6gig SATA) data
    see image:
    I have 21 GB allocated to the program.
    Is the bottleneck the cache memory?
    This is what I'm outputting:
    Thanks for looking.

    Bob Sackter wrote:
    I am encoding H.264 640x360
    Source is 1920x1080p60 HDV 8 sources,  With a timecode window
    bitrate set at 2.0 1 pass VBR
    I'm making a multicam reference.
    There is a little contradiction in the information here: HDV is 1440x1080 at 60i, or possibly 30p. I do not know of any camera that shoots 60p HDV.

  • Lightroom 6 / CC2015 - Facial Recognition Terribly Slow, not using all of CPU or GPU, still keeps going when paused.

    These are actually a couple of issues, but I'm wondering if others are experiencing them and have worked through them.
    I am using a dual-Xeon 3.2 GHz Mac Pro with the catalog on an SSD, image files on a mirrored pair, a dedicated GPU (the max I can install in my version of Mac Pro), and hardware acceleration enabled in Lightroom.
    I watched the Lightroom 6 facial recognition tutorial, which leaves out a lot of the bulk-editing workflow and basically says to let it loose on your whole catalog. NOT recommended.
    I started out with a couple of small portrait galleries that identified a couple hundred people in total, to seed facial recognition so it didn't suggest everyone as the first person I confirmed (which it will do otherwise). I have also optimized my catalog multiple times.
    I have encountered the following serious performance issues and bugs with Facial Recognition.:
    Lightroom facial recognition slows to a ridiculous crawl after about 2000 images to be confirmed (i.e., 2000-2300 confirmed in a couple of hours, then 800-1200 in the next 12 hours).
    Lightroom becomes largely unresponsive once there are a fair number of images to be confirmed, even after pausing address and facial recognition lookups. Just selecting 4 rows of images can take 5 minutes, with several long pauses.
    Once I select images and click Confirm, it takes up to 2 minutes to update the "to be confirmed" list again.
    When I click on an individual at the top of the page and pause facial recognition and address lookup, it still continues to "look for similar faces" [BUG!!!!!!!!], even though all I want to do is confirm some individuals more quickly in bulk with the images already identified, not continue looking for more, as a workaround for the painfully slow responsiveness of the module.
    The odd part is that with all of these performance issues, Lightroom will not use more than 20-30% of my two Xeon CPUs, barely touches my GPU (<10% load, 30% of its memory), and uses no more than 35% of my system memory. Computer temperatures are also barely above startup levels, 15-25 degrees cooler than when I run other applications, which will consume my entire CPU and memory if I let them. I have explored Lightroom's settings but seen nothing further I can configure to speed it up. I have also attempted the operation on images on the SSD, on my Drobo (known to be slow), on an independent fast disk, and on a pair of RAIDed disks, with the same issues.
    I will also note that all of my other applications seem to continue to operate just fine; the slowness seems to be contained to the Lightroom application itself.
    Lightroom version: 6.0 [1014445]
    Operating system: Mac OS 10 Version: 10.10 [3]
    Application architecture: x64
    Logical processor count: 8
    Processor speed: 3.2 GHz
    Built-in memory: 18,432.0 MB
    Real memory available to Lightroom: 18,432.0 MB
    Real memory used by Lightroom: 5,537.5 MB (30.0%)
    Virtual memory used by Lightroom: 32,240.6 MB
    Memory cache size: 4,342.0 MB
    Maximum thread count used by Camera Raw: 8
    Camera Raw SIMD optimization: SSE2
    Displays: 1) 2048x1152
    Graphics Processor Info:
    NVIDIA GeForce GTX 285 OpenGL Engine
    Check OpenGL support: Passed
    Vendor: NVIDIA Corporation
    Version: 3.3 NVIDIA-10.0.31 310.90.10.05b12
    Renderer: NVIDIA GeForce GTX 285 OpenGL Engine
    LanguageVersion: 3.30
    Application folder: /Applications/Adobe Lightroom
    Library Path: /Users/DryClean/Documents/Lightroom_Catalog/MyCat_LR6.lrcat
    Settings Folder: /Users/DryClean/Library/Application Support/Adobe/Lightroom
    Anyone have any suggestions?

    A big problem continues to be that once you wait for it to index all your faces you find it missed over half of them. There are many cases where it missed the subject of the photo but managed to find a tiny waiter off in the shadows. I don't know how this will ever get fixed; it seems it'll require an update that lets you rerun the indexing a few times, maybe with different levels of granularity. I doubt that's coming.
    It stands to reason that a system that hands you thousands of false positives for every face can't recognize if something is or isn't a face in over half the cases. Faces in profile or tilted down, especially with the eyes looking down, are bypassed completely. I have directories with a thousand people shots in them, many with multiple people, and instead of LR6 returning an index of 1.5k or so, it gives me 385. I'm not sure how valuable the search advantages will be in this case; I can see it not returning some favorite shots of people.
    Anyway, for anyone looking to get past the spinning wheels, work on one directory at a time. Then once a directory is done, keep it selected so you have the same thumbnails of identified faces in the confirmed area, and control-select the next directory. You won't have to reseed the confirmed faces area this way and things move much faster when you're not working with the entire library. It also helps to click each person in the confirmed faces and work in that view sometimes. It'll return faces of family members of that person and you can rename those. You can watch it focus its top results as you confirm but, annoyingly, it doesn't narrow the total suggestions but actually expands them the more faces you confirm and the fewer correct positives remain. (This seems opposite to how it should work. It should give you fewer faces the more you confirm as it gets a better idea what the person looks like and has fewer shots left unconfirmed of that person.)  Yet, even with the expanded results, faces will still escape it and pop up in other people's results as false positives.
    Bottom line: It's only a little better than manual tagging, and not as thorough because of the poor hit rate of the initial indexing. But it works better if you stick to isolated directories and occasionally individual people. At least that way you don't have to wait for it to re-sort tens of thousands of results with every click.

  • Lightroom 5.2 64-bit not using all of computer's resources?

    Hi,
    I've got a question. I have the following custom-built Lightroom and Photoshop machine:
    -Windows 8 64-bit
    -8092 MB 1600 MHz RAM
    -Intel i7 2600K quad-core processor with 8 logical processors running at 4 GHz
    -Corsair SSD that stores the Lightroom catalog.
    When I am creating 1:1 previews from a Nikon D800 (huge 74 MB RAW files), Lightroom only uses about 25% of my CPU power (sometimes it tops out at around 50% for a second) and about 3.3 GB of RAM,
    which results in relatively slow performance. Why isn't Lightroom using more resources? It is barely using the workhorse components I put in this machine.
    I've added a screenshot from the performance monitor taken during the generation of 500+ 1:1 previews.
    Is there anything I can do to help it? Generating 500+ previews from full 14-bit D800 RAW files could surely benefit from the computer's full computing capabilities?
    Thanks,
    Morgan

    Check the CPU area of Resource Monitor and see what the individual core usage is, and whether the cores are being used simultaneously or not.
    LR appears to export images sequentially; however, when an individual image is being rendered, 7 of the 8 logical cores of my quad-core Xeon are in use at 80-90% for a moment, so at least the CPU-intensive part of the process uses multiple threads.
    The source and destination of the images are on a USB2-attached hard drive, so there is an I/O bottleneck, but total CPU usage jumps between 33% and 66% most of the time, and in Resource Monitor I can see the usage of 7 of the 8 cores rising and falling in unison, so it's not one busy thread hopping between cores; almost all of them are genuinely in use.
    What doesn't appear to happen is the rendering phase of one image overlapping the I/O phase of another image; you have to run multiple exports in parallel to make that occur.
    Generally, images are processed one by one whenever you tell LR to perform an operation on a group of images, but within the processing of a specific image, certain CPU-intensive phases are highly multithreaded.
    I, too, wish I could tell LR to do more in parallel when multiple images are involved, but that requires a different way of organizing operations on multiple images, and apparently LR is not architected that way.
    The way Exports work on my computer, I see images written in order, one by one, and I see CPU usage across most of the cores peaking once per image. This suggests the following processing occurs for each image, and that the entire process for one image must complete before the next image is started:
    Select next image to work on, read-image, multithread-render, write-image, repeat until no more images.
    Think about what LR could do to keep things busier.  There could be three processing threads each performing a portion of the Export process for each image, and the pacing set by each phase not getting too far ahead of the subsequent phase:
    1. Read thread: wait until fewer than X images are waiting to be rendered, read a new image into memory, send it to the render queue; repeat until no more images.
    2. Render thread: wait until there is at least one image to render but fewer than Y images waiting to be written, multithread-render the waiting image, send it to the write queue; repeat until no more images.
    3. Write thread: wait until there is at least one image to write, write the next image from the write queue; repeat until no more images.
    There are two tuning parameters, X as the number of images to read into memory, and Y as the number of images to render into memory before waiting for previously-rendered images to be written out.
    If the reading is slower than the CPU rendering (an I/O bottleneck), then the process would appear single-threaded, because the CPU would always be waiting for more data to be read in. The fact that you can speed up exporting by running two or three exports simultaneously suggests reading is not the slowest step.
    The X and Y limits are needed in case rendering or writing is much slower than reading; otherwise, since reading and rendering each hold some memory per image, all the memory on the computer would be exhausted waiting for the writing stage to catch up, memory swapping would start, and that would interfere with the writing and grind things to a halt.
    If X and Y were each set to 1 or 2 images, the process would be reasonably memory-efficient, because we wouldn't queue up too many images waiting to be rendered or written, but we would still overlap the reading and writing of images with the rendering of images, keeping things as busy as possible.
    Another factor is that memory is being used to store image data when it is passed between the various stages, so if memory is slow, then it wouldn't matter how much overlapping of the I/O and CPU there was.
    Different image sizes, simple or complex operations being applied, and different system configurations will change the balance of I/O, CPU, and memory, and it would be nice for LR to be able to adapt to each, automatically or at least through power-user tuning parameters, using parallel image processing at least as sophisticated as outlined above, if it doesn't already do so.
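    The three-stage pipeline with the X and Y limits can be sketched as below. This is a minimal illustration in Python, not anything LR actually does; the read, render, and write functions are hypothetical stand-ins supplied by the caller, and the bounded queues provide exactly the "don't get too far ahead of the next phase" pacing described above.

    ```python
    import queue
    import threading

    X = 2  # max images buffered between the read and render stages
    Y = 2  # max images buffered between the render and write stages

    def pipeline(filenames, read_image, render_image, write_image):
        # Bounded queues: put() blocks when a stage gets too far ahead,
        # which caps memory use exactly as the X/Y parameters describe.
        render_q = queue.Queue(maxsize=X)
        write_q = queue.Queue(maxsize=Y)
        DONE = object()  # sentinel marking the end of the stream

        def reader():
            for name in filenames:
                render_q.put(read_image(name))   # blocks if rendering is behind
            render_q.put(DONE)

        def renderer():
            while (item := render_q.get()) is not DONE:
                write_q.put(render_image(item))  # blocks if writing is behind
            write_q.put(DONE)

        results = []
        def writer():
            while (item := write_q.get()) is not DONE:
                results.append(write_image(item))

        threads = [threading.Thread(target=f) for f in (reader, renderer, writer)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return results
    ```

    With trivial stand-in stage functions, the reading of image N+1 overlaps the rendering of image N and the writing of image N-1, while output order is preserved because each stage is a single thread draining a FIFO queue.
    
    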
