Limit CPU Usage
I have a Swing program that creates a number of rather large HTML files in a loop, around 50KB each. While the program is writing the files, my CPU usage stays at a constant 100%. I feel like this is a bad thing. Is there any way to limit this, or any insight that will make me feel better about what's going on? I'm running Windows 2000 Professional. Thanks.
My computer has 256MB of RAM and a Pentium 4. The code is a few nested for loops that output 12 large dynamic HTML pages, picking data out of double[][] arrays. There is definitely a lot of data stored in memory, but memory is cheap, right?
I basically want to know the risks of having a program that demands so much CPU during execution. The CPU usage is at 100% for less than 10 seconds while the files are written. Is this bad for the computer? Will it cause problems on computers with less memory and slower processors? A broad answer would be fine.
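A common reassurance here: a CPU-bound loop using 100% of an otherwise idle CPU is normal and does no harm. If the concern is keeping the machine responsive, run the work on a background thread at reduced priority rather than on the Swing event thread. A minimal sketch, with illustrative names (ReportWriter and startGeneration are not from the original post):

```java
// Sketch: run the heavy file-writing loop on a low-priority background
// thread so the scheduler favors interactive programs. The loop will
// still use 100% of an idle CPU -- that is expected and harmless.
public class ReportWriter {

    // Start the generation work off the Swing event-dispatch thread.
    public static Thread startGeneration(Runnable generateFiles) {
        Thread worker = new Thread(generateFiles, "report-writer");
        worker.setPriority(Thread.MIN_PRIORITY); // deprioritize, don't cap
        worker.setDaemon(true);
        worker.start();
        return worker;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t = startGeneration(() -> {
            // placeholder for the nested loops that write the HTML files
        });
        t.join();
        System.out.println("generation finished");
    }
}
```

Lowering the priority doesn't cap usage at some percentage; it just tells the scheduler to serve other processes first, which is usually what you actually want.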
Similar Messages
-
[SOLVED] How to limit cpu usage by makepkg?
I tried
cpulimit -l 30 makepkg
but CPU usage still reaches 100% + 100% (I have a 2-core CPU; the figures are from conky). OK, I've found out that cpulimit does not restrict child processes, but cgroups are promised to do that. So I installed it from the AUR and did the following:
sudo systemctl enable cgconfig
sudo systemctl start cgconfig
sudo cgcreate -t my_name:my_name -a my_name:my_name -g memory,cpu:my_build
sudo echo 100000000 > /sys/fs/cgroup/memory/my_build/memory.limit_in_bytes
sudo echo 300 > /sys/fs/cgroup/cpu/my_build/cpu.shares
cgexec -g 'memory,cpu:my_build' makepkg
But CPU usage is nevertheless 100% + 100%.
Am I doing something wrong? And how do I fix it?
Last edited by Next7 (2015-01-16 00:40:18)
I've found the solution to my problem. First of all, the "cpu.shares" parameter is useless for one task and one cgroup, which is my case, since it specifies a relative share of CPU among different cgroups.
So for my purpose the following parameters are needed: "cpu.cfs_period_us" and "cpu.cfs_quota_us". The execution of the following commands resulted in 10% + 10% CPU load.
echo 1000000 > /sys/fs/cgroup/cpu/my_build/cpu.cfs_period_us
echo 200000 > /sys/fs/cgroup/cpu/my_build/cpu.cfs_quota_us
And there's another important point: the "cgexec" command should be executed with the "--sticky" option so that child processes are restricted as well.
cgexec -g memory,cpu:my_build --sticky makepkg
-
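Put together, the working recipe above can be sketched as follows. One pitfall worth noting: `sudo echo 300 > file` (as tried earlier in the thread) silently fails to run as root, because the `>` redirection is performed by the unprivileged calling shell; writing through `sudo tee` avoids that. Paths assume cgroup v1 mounted at /sys/fs/cgroup and an existing group named my_build:

```shell
# Quota/period pair: 200 ms of CPU time per 1 s period = 20% of one core.
PERIOD=1000000   # microseconds
QUOTA=200000     # microseconds

# Write as root via tee; a plain `sudo echo ... > file` would not work
# because the redirection runs in the caller's shell, not under sudo.
echo "$PERIOD" | sudo tee /sys/fs/cgroup/cpu/my_build/cpu.cfs_period_us
echo "$QUOTA"  | sudo tee /sys/fs/cgroup/cpu/my_build/cpu.cfs_quota_us

# --sticky keeps makepkg's child processes (compilers etc.) in the cgroup.
cgexec -g cpu:my_build --sticky makepkg
```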
How to limit CPU usage on webcam stream handling?
Hi, I am working on a robotics project where I use Java and JMF for image processing which is passed through JNI to OpenCV. My project pages are at: http://robot.lonningdal.net
I am having problems with performance - the VIA CN13000 board is not exactly a racer, but a good and cheap alternative for robotics projects. The two main CPU consumers are speech recognition and webcamera stream handling.
After some profiling I see that 50% of the CPU is busy just decoding the stream, even though I discard almost all of the pictures, because my robot really only needs 1 frame per second to operate. I have looked at the JMF samples and found that you can adjust the framerate, but this doesn't work on my Logitech Pro 4000. If I could only limit the number of pictures the webcamera sends down the stream, I would free up quite a lot of CPU resources.
So my questions are: can anyone recommend a webcamera that does allow me to set this framerate through JMF, to limit the actual stream from the camera? (It's important not to confuse this with adjusting the visual framerate, which just discards pictures.) Or is there an alternative API I can use to grab single images, if webcameras support this feature? It's important that this doesn't require a lot of overhead processing per picture, but that I can e.g. do a getImage() call to the API and get the image immediately.
Any help would be greatly appreciated. Thank you.
Hi John,
I saw your update to my original thread in
http://forum.java.sun.com/thread.jspa?threadID=570463&start=0&tstart=0
I'm always glad to hear people using the code :-)
You can set the framerate at which JMF captures the video stream in the VideoFormat object.
I've had some success with this approach before.
So, assuming you still kept the setFormat() method from my code, here's a simple hard-coded modification where you set the framerate in the code.
Ps. just curious, whereabouts you are in the world ?
regards,
Owen
public void setFormat( VideoFormat selectedFormat )
{
    if ( formatControl != null )
    {
        player.stop();
        currentFormat = selectedFormat;
        // ... rest of the original method
    }
}

replace with

public void setFormat( VideoFormat selectedFormat )
{
    float frameRate = 2.0f; // 2 frames per second, alter as you wish
    if ( formatControl != null )
    {
        player.stop();
        VideoFormat selectedFormatPlusFrameRate =
            new VideoFormat( selectedFormat.getEncoding(),
                             selectedFormat.getSize(),
                             selectedFormat.getMaxDataLength(),
                             selectedFormat.getDataType(),
                             frameRate );
        currentFormat = selectedFormatPlusFrameRate;
        // ... rest of the original method
    }
}

Edited posted code: I had commented out player.stop(), but you really do need that.
-
Problem: 100% CPU usage with TCP server DLL
Hello,
I am trying to write a DLL with LabWindows/CVI that allows me to create a TCP server. This DLL is integrated in LabVIEW. I created the DLL from an example provided by LabWindows/CVI (rtserver.dll).
Description of my problem: when I execute this DLL in a While Loop in LabVIEW, the TCP server waits for a connection and 100% CPU usage occurs. However, when a client is connected to the server, the CPU is used normally, because the program stops when it meets the timeout of the tcpread() function. I would like to know how I could limit CPU usage while the server is awaiting a client in the LabVIEW While Loop.
I know I could use a Delay() to limit CPU usage, but I would like to know if there are any other solutions.
Thank you.
I don't know your exact application, but I generally use a queue to transfer data to the TCP loop in my program. It helps me in two ways:
1. It automatically restricts iteration when there is no data (less CPU usage, less unnecessary traffic).
2. A queue can eliminate problems arising from non-synchronized loops.
Tushar Jambhekar
[email protected]
Jambhekar Automation Solutions
LabVIEW Consultancy, LabVIEW Training
Rent a LabVIEW Developer, My Blog -
Hello Experts,
I have a query regarding CPU usage by Essbase Server. Can it be limited to a certain percentage of the whole server by a setting?
Thanks in advance.
Regards,
Sudhir
Hi,
Is this not the same question as: Query about limiting the essbase application use of CPU and RAM?
There is no Essbase-specific configuration to limit CPU usage. Depending on your OS you could look at trying to limit the CPU usage of a process, but I am not sure how well that would work in practice.
Another thread, Re: "Dedicated" CPU for Essbase service, may be useful to you; it is not the same question, but it is about processor usage.
Cheers
John
http://john-goodwin.blogspot.com/ -
CPU usage by SophosWebIntelligence
SophosWebIntelligence uses up to 90% and more of the CPU when using Safari and visiting different sites, resulting in the fans speeding up and unbearable noise. MacBook Pro Mid 2009, 2.8 GHz Intel Core 2 Duo, 8GB RAM, OS X 10.9.5.
Any solution available for this?
-
ORA-02393 Exceeded Call Limit on CPU Usage
I have created a Profile and attached it to a user, in this example:
CREATE PROFILE percall LIMIT
  CPU_PER_CALL 10
  IDLE_TIME 5;
I have attached it to one user - USER1
When USER1 runs a SQL Statement -
SELECT COUNT(*) FROM TABLE1 A WHERE A.EFFDT = (SELECT MAX(B.EFFDT) FROM TABLE1 B WHERE B.EMPLID = A.EMPLID AND B.EFFDT <= SYSDATE);
I get an error (Which I want to receive) ORA-02393 Exceeded Call Limit on CPU Usage.
The SQL statement shows up in the table DBA_COMMON_AUDIT_TRAIL, but it shows success even though the user received the ORA-02393 error.
What I want is a way for a DBA to be able to report on those ORA-02393 errors. I don't see any entries in the log files, and I don't notice any errors in the Oracle tables.
I would like to be able to show the user (a week later, when they bring up the issue) what the SQL statement was and why it exceeded the CPU usage. Ideally the error would place the SQL statement in a table, or display it in an error log, so we could verify that THIS is the statement which exceeded the limit.
Thank you
Aaron
Can you modify the procedure in which the SELECT resides?
If so, trap and log the error.
-
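A minimal sketch of the trap-and-log idea, assuming the SELECT can be wrapped in PL/SQL; the error_log table, its columns, and the statement text are hypothetical placeholders:

```sql
-- Sketch only: catch ORA-02393 where the query runs and record it.
-- error_log is a hypothetical table you would create beforehand.
DECLARE
  v_count           NUMBER;
  cpu_call_exceeded EXCEPTION;
  PRAGMA EXCEPTION_INIT(cpu_call_exceeded, -2393);
BEGIN
  SELECT COUNT(*) INTO v_count FROM table1;  -- the monitored statement
EXCEPTION
  WHEN cpu_call_exceeded THEN
    INSERT INTO error_log (logged_at, username, sql_text)
    VALUES (SYSTIMESTAMP, USER, 'SELECT COUNT(*) FROM table1');
    COMMIT;
    RAISE;  -- re-raise so the caller still sees the failure
END;
/
```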
How to remove the cpu usage limit?
I have to run a C program in Terminal as fast as possible. However, there seems to be a CPU usage limit for the terminal: the program is supposed to run in around 15 seconds on a Linux machine with a similar configuration, where CPU usage sits at 85-95%, but it runs for one minute on my MacBook Pro with CPU usage below 15%. Finally, my question is: how do I utilize all of the 85% idle CPU for this program, or at least most of it?
Per se, Mac OS X does not impose any CPU usage limits other than those from the processor scheduling priorities. Standard Unix scheduling priorities go from -20 to +20, with the default being 0. If you have administrator privileges, you can increase your process' priority (a more negative value) with the nice or renice commands. See their man pages. On a four-core 15" or 17" MBP, even setting the maximum -20 priority should not impact the rest of the system too much.
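The nice/renice usage described above can be sketched briefly (the PID and program name below are placeholders):

```shell
# Positive nice values mean *lower* priority; negative values
# (e.g. -20, the highest priority) require administrator rights.
# `nice` with no operands prints the current niceness.
nice -n 10 sh -c 'echo "running at niceness: $(nice)"'

# Re-prioritize an already-running process by PID (1234 is a placeholder):
#   sudo renice -n -5 -p 1234
```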
You may also want to go over and discuss these things in the Unix forums:
https://discussions.apple.com/community/mac_os/mac_os_x_technologies -
Exceeded session limit on CPU usage
Hi All,
We are getting a message while generating some reports; please see the error text below. For the time being we have bumped the session limit to unlimited to take care of the problem. But the question is: "Is there a way in MII to refresh (cycle) the data source connection, so that the DB session limit can be kept unaltered?"
When we searched different forums, we found a solution we had already implemented (bumping the session limit to unlimited). But we are looking for a solution on the MII side.
Any help will be appreciated
Regards,
Rajesh.
Error Text:
Error occurred while processing data stream, A SQL Error has occurred on query, ORA-02392: exceeded session limit on CPU usage, you are being logged off . com.lighthammer.Illuminator.logging.LHException: Error occurred while processing data stream, A SQL Error has occurred on query, ORA-02392: exceeded session limit on CPU usage, you are being logged off . at com.lighthammer.Illuminator.logging.ErrorHandler.handleError(Unknown Source) at com.lighthammer.Illuminator.logging.ErrorHandler.handleError(Unknown Source) at com.lighthammer.Illuminator.connectors.Proxy.Proxy.processRequest(Unknown Source) at com.lighthammer.Illuminator.services.handlers.IlluminatorService.processRequest(Unknown Source) at com.lighthammer.Illuminator.services.ServiceManager.runQuery(Unknown Source) at com.lighthammer.Illuminator.servlet.Illuminator.service(Unknown Source) at javax.servlet.http.HttpServlet.service(HttpServlet.java:856) at com.lighthammer.Illuminator.servlet.ServletRunner.run(Unknown Source) at com.lighthammer.Illuminator.servlet.ServletRunner.runAsXmlQuery(Unknown Source) at com.lighthammer.xacute.actions.illuminator.queries.IlluminatorQueryObject.LoadDocument(Unknown Source) at com.lighthammer.xacute.actions.illuminator.queries.IlluminatorQueryObject.Invoke(Unknown Source) at com.lighthammer.xacute.core.Action.Process(Unknown Source) at com.lighthammer.xacute.engine.TransactionEngine.Process(Unknown Source) at com.lighthammer.xacute.core.ActionSequence.Process(Unknown Source) at com.lighthammer.xacute.engine.TransactionEngine.Process(Unknown Source) at com.lighthammer.xacute.core.ActionSequence.Process(Unknown Source) at com.lighthammer.xacute.engine.TransactionEngine.Process(Unknown Source) at com.lighthammer.xacute.core.Conditional.Process(Unknown Source) at com.lighthammer.xacute.engine.TransactionEngine.Process(Unknown Source) at com.lighthammer.xacute.core.ActionSequence.Process(Unknown Source) at com.lighthammer.xacute.engine.TransactionEngine.Process(Unknown Source) 
at com.lighthammer.xacute.core.ActionSequence.Process(Unknown Source) at com.lighthammer.xacute.engine.TransactionEngine.Process(Unknown Source) at com.lighthammer.xacute.core.ActionSequence.Process(Unknown Source) at com.lighthammer.xacute.engine.TransactionEngine.Process(Unknown Source) at com.lighthammer.xacute.engine.TransactionEngine.Execute(Unknown Source) at com.lighthammer.Illuminator.connectors.Xacute.XacuteRequestHandler.processQueryRequest(Unknown Source) at com.lighthammer.Illuminator.connectors.Xacute.XacuteRequestHandler.QueryRequest(Unknown Source) at com.lighthammer.Illuminator.connectors.Xacute.XacuteConnector.processRequest(Unknown Source) at com.lighthammer.Illuminator.services.handlers.IlluminatorService.processRequest(Unknown Source) at com.lighthammer.Illuminator.services.ServiceManager.runQuery(Unknown Source) at com.lighthammer.Illuminator.servlet.Illuminator.service(Unknown Source) at javax.servlet.http.HttpServlet.service(HttpServlet.java:856) at com.newatlanta.servletexec.SERequestDispatcher.forwardServlet(SERequestDispatcher.java:638) at com.newatlanta.servletexec.SERequestDispatcher.forward(SERequestDispatcher.java:236) at com.newatlanta.servletexec.SERequestDispatcher.internalForward(SERequestDispatcher.java:283) at com.newatlanta.servletexec.SEFilterChain.doFilter(SEFilterChain.java:96) at com.lighthammer.cms.system.CMSFilter.doFilter(Unknown Source) at com.newatlanta.servletexec.SEFilterChain.doFilter(SEFilterChain.java:60) at com.newatlanta.servletexec.ApplicationInfo.filterApplRequest(ApplicationInfo.java:2159) at com.newatlanta.servletexec.ApplicationInfo.processApplRequest(ApplicationInfo.java:1823) at com.newatlanta.servletexec.ServerHostInfo.processApplRequest(ServerHostInfo.java:937) at com.newatlanta.servletexec.ServletExec.ProcessRequest(ServletExec.java:1091) at com.newatlanta.servletexec.ServletExec.ProcessRequest(ServletExec.java:973) at 
com.newatlanta.servletexec.ServletExecService.processServletRequest(ServletExecService.java:167) at com.newatlanta.servletexec.ServletExecService.Run(ServletExecService.java:204) at com.newatlanta.servletexec.HttpServerRequest.run(HttpServerRequest.java:487)
Hi,
Kindly try out the below option from database side.
Error : ORA-02392: exceeded session limit on CPU usage, you are being logged off
Cause : An attempt was made to exceed the maximum CPU usage allowed by the CPU_PER_SESSION clause of the user profile.
Action : If this happens often, ask the database administrator to increase the CPU_PER_SESSION limit of the user profile.
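On the database side, the DBA action described above might look like the following sketch (the profile name and new value are illustrative; check DBA_PROFILES for the actual profile first):

```sql
-- Find which profile limits the session and its current value.
SELECT profile, limit
  FROM dba_profiles
 WHERE resource_name = 'CPU_PER_SESSION';

-- Raise the limit for that profile; the value is in hundredths
-- of a second of CPU time per session.
ALTER PROFILE reporting_profile LIMIT CPU_PER_SESSION 1000000;
```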
If you are looking for a solution on the MII side:
Check the log files with your SAP MII administrator.
Check the Data Server tab for configuration details (e.g. Pool Size, Pool Max, etc.).
Kindly let us know the version of SAP MII.
Thanks
Rajesh Sivaprakasam. -
Need to limit ram per user +cpu usage % and internet speed
I go into Windows System Resource Manager but I only see limiting by process. I need to be able to limit, say, "greg" to a total of 5GB of RAM and 10% of the total available CPU. I saw some people suggesting setting up proxies for speed limiting but have no idea how to go about doing that. Or if anyone has software ideas: I've tried NetLimiter 3 before, but it was buggy and stopped other things from working properly on my PC, and I don't know if I can use it on a server.
So, just to clarify: if they are running 1 program they can only use the 5GB of RAM, and if they're using 5 different programs they're still limited to 5GB in total.
Thanks all in advance,
Blue
Hi,
You can use Windows System Resource Manager if you are using Server 2008 R2 or a previous server edition. You can set an upper limit on the working set of a matched process, but regarding RAM limits there are some additional considerations:
• Do not use memory limits in Windows System Resource Manager to manage applications or processes that modify their own memory limits dynamically. This can interfere with the correct operation of Windows System Resource Manager and the managed application.
• As a best practice, use CPU targets to manage resources. Apply memory limits selectively to applications that exhibit memory-consumption issues. Excessively limiting the memory that is available to an application can increase the time it takes the application to complete a task, and it can increase disk usage.
For more detail, please refer to the following related KB:
Understanding Memory Management in Windows System Resource Manager
http://technet.microsoft.com/en-us/library/cc753446.aspx
More information:
Can a process be limited on how much physical memory it uses?
http://blogs.technet.com/b/clinth/archive/2012/10/11/can-a-process-be-limited-on-how-much-physical-memory-it-uses.aspx
Install Windows System Resource Manager
http://technet.microsoft.com/en-us/library/cc753939.aspx
Hope this helps.
-
ORA-02393: exceeded call limit on CPU usage -- Concept Understanding is req
In our System CPU_PER_CALL is set to 1.5 Hours for Reporting Users.
I can see some queries run for 10-15 hours and complete successfully, while some queries fail after exactly 1.5 hours.
I want to understand what CPU_PER_CALL means. On what basis is it calculated (fetch, execute, parse)? How does a query accumulate CPU time?
With the same profile options some queries run for 10 hours but some queries fail after 1.5 hours.
Regards
Sourabh Gupta
The short answer is that different queries wait on different sorts of events. Let's assume that the only 2 wait events in the world are waits for CPU and waits for I/O (there are many other types of waits but most reporting queries will primarily be waiting for these two resources). If you have a query that runs for 15 hours but spends 14.5 hours waiting on I/O and only 0.5 hours on the CPU doing comparisons and/or calculations, the CPU usage for that query is only 0.5 hours. Another query might run for 1.51 hours and do 0.01 hours of I/O and spend 1.5 hours on the CPU calculating various aggregate values for that data. The second query would use 1.5 hours of CPU (and thus exceed your CPU_PER_CALL) while the first query would only use a third as much CPU.
Oracle profiles allow you to specify a number of different limits so that you can specify limits on CPU usage (CPU_PER_CALL/ CPU_PER_SESSION) or I/O usage (LOGICAL_READS_PER_CALL/ LOGICAL_READS_PER_SESSION) or a combination of the two (COMPOSITE_LIMIT).
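The profile options mentioned above can be combined in one profile; a sketch with illustrative names and values (CPU_PER_CALL is measured in hundredths of a second, so 540000 = 5400 s = the 1.5 hours described in the question):

```sql
-- Sketch: combine CPU and logical-I/O limits in one profile.
-- Profile name, user name, and values are illustrative.
CREATE PROFILE reporting_users LIMIT
  CPU_PER_CALL           540000
  LOGICAL_READS_PER_CALL 10000000
  COMPOSITE_LIMIT        UNLIMITED;

ALTER USER report_user PROFILE reporting_users;

-- Profile resource limits are enforced only when this is TRUE:
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;
```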
Justin -
SBS 2011 - High CPU usage - Help me Microsoft forums! You're my only hope!
My company supports a client that has an SBS 2011 server. For about the past year, we've been fighting a recurring performance issue on this server. There are about ten local users and four remote users. The server's CPU idles at about 60%-80% but usually runs at 80%-100% under any load. If you do anything on the console, it will stay pretty much at 100%. These are not power users by any means. The server is used for file/printer sharing, Exchange 2010, and one flat-file database application (non-SQL). SharePoint is not utilized.
Needless to say, our client is frustrated. When opening files, using their database application, or doing anything Exchange-related, there is a large amount of lag on the client side. First, here are the server's specs:
Make:Dell PowerEdge T420
OS: SBS 2011 Standard SP1
CPU: 2 - Intel Xeon E5-2407
Memory: 32GB
RAID: RAID 1 - Operating System (C:)/Data Volume (E:) | RAID 5 - Data Volume (D:)
Here is what we have tried to resolve this to finality:
* Doubled resources - Initially the server had a single physical processor and 16GB of memory. While these specs alone should have been fine, and were fine when the server was installed, we had periods of time where the server would just sit all day at
100% usage. We doubled the resources and while this seemed like it would fix the issue, we are still seeing abnormally high processor usage.
* Removed all monitoring tools, antivirus, and backup software - As part of our testing, we removed our monitoring agent (LabTech) and antivirus (GFI Vipre). Mozy is utilized for an off-site backup so that was disabled. No dice.
* Verified updates - We made absolutely sure the server was 100% patched.
* Malware/Virus/Rootkit checks - We have run scans checking for any potential security issues.
* Ran MBSA and MBCA to fix any issues with the server's configuration.
There is no single process using all of the CPU, or we would simply be able to narrow it down. Our calls to Microsoft support have yielded no answers. The last call ended with Microsoft stating that an SBS server should always be running at high CPU usage. Meanwhile, we have many other clients with less beefy servers, and more users, who have no issues like these.
So, I'm turning to you all. I will gladly provide logs, configuration settings, even remote assistance sessions if you all can help shed some light on what might be causing my issues.
Thank you!
Some comments/ideas:
How long was the server running before this screenshot? I ask because store.exe has only got 1GB of RAM, which is really low; it should grab most of the RAM within a few hours.
The server was up for about 12 hours. I believe an adjustment was made before to limit the Exchange memory usage.
Strange that SearchIndexer (wsearch service) is so high although that may be a startup condition.
The LT* processes seem to be a 3rd party monitoring tool - no idea why it would ever need that much CPU though (I thought you disabled this?).
We had, but we cannot go forever without monitoring our client's server. It has been pulled off in the past and results on performance are pretty much the same.
The taskmgr process run by amnet_admin has used a lot of total CPU Time. What is it? (can't see the command line).
That's the user I was logged in as when I took the screenshot. Even the task manager seems to eat up the CPU.
The sqlserver process right above it is also busy - may want to look at the command line and figure out which SQL database that is (SBS has 3 - WSUS, Sharepoint, and SBS monitoring)
I believe that's the SharePoint database. They don't currently use their site. Would you recommend a removal and reinstallation? I would not completely remove as I know SBS doesn't like you removing parts of the complete package.
Strange that vds.exe is 10% - that is the interface to the disk management interface IIRC. Perhaps your monitoring service has gone awry here - definitely lose it.
I'll see about pulling it off and I'll see if there are any improvements.
-- Al -
100% CPU Usage Overhead running EM DBConsole 11g on OEL-5.2
After upgrading to OEL-5.2 and relinking all Oracle binaries, my old Oracle 11g installation, installed several months before on OEL-5.1, had been working well, including Enterprise Manager Database Console working nicely as always with respectable performance. Unfortunately, that lasted just a few days.
Yesterday I decided to uninstall 11g completely and perform a new clean installation (software and database) with the same configuration options and settings as before, including EM dbconsole, all configured using dbca. After completing the installation (EM was started automatically by dbca), oracle continued to consume 80-85% CPU time. Within a few more minutes, CPU utilization rose to 99% due to one single client process (always the same PID): "oracleorcl (LOCAL=NO)". For the first ten minutes I didn't care too much, since I always enable Automatic Management in dbca. But after two hours I started to worry. The process was still running, consuming a sustained 99% of CPU power. No other system activity, no database activity, no disk activity at all!
I was really puzzled, since I have installed and reinstalled 11g at least 20 times on OEL-5.0 and 5.1, experimenting with ASM, raw devices, loopback devices and various combinations of installation options, but never experienced such behaviour. It took me 3 minutes to log in to EM dbconsole, as it was almost unusable and performing very slowly. After three hours the CPU temperature was nearly 60 degrees Celsius. I decided to shut down EM, and after that everything became quiet; Oracle was running normally. Started EM again, and the problem was back. With tracing enabled, it filled a 350 MB trace file in just 20 minutes. Reinstalling the software and database once again didn't help. Whenever EM is up, the 99% CPU usage overhead persists.
Here is an approximately 23-minute session summary report taken from EM dbconsole's Performance page. The trace file is too big to list here, but it shows the same.
Host CPU: 100%
Active Sessions: 100%
The details for the selected 5-minute interval (the last 5-minute interval) are shown below:
TOP SESSIONS: SYSMAN, Program: OMS
Activity: 100%
TOP MODULES: OEM.CacheModeWaitPool, Service: orcl
Activity: 100%
TOP CLIENT: Unnamed
Activity: 99.1%
TOP ACTIONS: Unnamed (OEM.CacheModeWaitPool) (orcl)
Activity: 100%
TOP OBJECTS: SYSMAN.MGMT_JOB_EXEC_SUMMARY (Table)
Activity: 100%
TOP PL/SQL: SYSMAN.MGMT_JOB_ENGINE.INSERT_EXECUTION
PL/SQL Source: SYSMAN.MGMT_JOB_ENGINE
Line Number: 7135
Activity: 100%
TOP SQL: SELECT EXECUTION_ID, STATUS, STATUS_DETAIL FROM MGMT_JOB_EXEC_SUMMARY
WHERE JOB_ID = :B3 AND TARGET_LIST_INDEX = :B2 AND EXPECTED_START_TIME = :B1;
Activity: 100%
STATISTICS SUMMARY (approx. 23-minute session with no other system activity)

                     Total       Per Execution  Per Row
Executions           105,103     1              10,510.30
Elapsed Time (sec)   1,358.95    0.01           135.90
CPU Time (sec)       1,070.42    0.01           107.04
Buffer Gets          85,585,518  814.30         8,558,551.80
Disk Reads           2           <0.01          0.20
Direct Writes        0           0.00           0.00
Rows                 10          <0.01          1
Fetches              105,103     1.00           10,510.30
----------------------------------------
Wow!!! Note: no disk, no database activity!
Has anyone experienced this or similar behaviour after clean 11g installation on OEL-5.2? If not, anyone has a clue what the hell is going on?
Thanks in advance.
Hi Tommy,
I didn't want to experiment further with already working OEL-5.2, oracle and dbconsole on this machine, specially not after googling the problem and finding out that I am not alone in this world. There are another two threads on OTN forums (Database General) showing the same problem even on 2GB machines:
DBConsole easting a CPU
11g stuck. 50-100% CPU after fresh install
So, I took another, smaller free machine I've got at home (1GB RAM, 2.2GHz Pentium 4, three 80GB disks), which I use to experiment with new software releases (this is the machine on which I installed 11g for the first time, when it was released, on OEL-5.0, and I can recall that everything was OK with EM). This is what I did:
1. I installed OEL-5.0 on the machine, adjusted linux and kernel parameters, and performed full 11g installation. Database and EM dbconsole worked nice with acceptable performance. Without activity in the database, %CPU = zero !!! The whole system was perfectly quiet.
2. Since everything was OK, I shutdown EM and oracle, and performed the full upgrade to OEL-5.2. When the upgrade finished, restarted the system, relinked all oracle binaries, and started oracle and EM dbconsole. Both worked perfectly again, just as before the upgrade. I repeated restarting the database and dbconsole several times, always with the same result - it really rocks. Without database activity, %CPU = zero%.
3. Using dbca, I dropped the database and created a new one with the same configuration options. Wow! I'm in trouble again. Half an hour after the creation of the database, %CPU rose to 99%. That's it.
The crucial question here is: what is there in OEL-5.2, not present in 5.0, that trips up the dbca/EM scripts at the time of EM agent configuration?
Here are the outputs you asked for, taken 30 minutes after starting the database and EM dbconsole (sustained 99% CPU utilization). Note that this is just a 1GB machine.
Kernel command line: ro root=LABEL=/ elevator=deadline rhgb quiet
[root@localhost ~]# cat /proc/meminfo
MemTotal: 1034576 kB
MemFree: 27356 kB
Buffers: 8388 kB
Cached: 609660 kB
SwapCached: 18628 kB
Active: 675376 kB
Inactive: 287072 kB
HighTotal: 130304 kB
HighFree: 260 kB
LowTotal: 904272 kB
LowFree: 27096 kB
SwapTotal: 3148700 kB
SwapFree: 2940636 kB
Dirty: 72 kB
Writeback: 0 kB
AnonPages: 328700 kB
Mapped: 271316 kB
Slab: 21136 kB
PageTables: 14196 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 3665988 kB
Committed_AS: 1187464 kB
VmallocTotal: 114680 kB
VmallocUsed: 5860 kB
VmallocChunk: 108476 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 4096 kB
[root@localhost ~]# cat /proc/slabinfo
slabinfo - version: 2.1
# name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
rpc_buffers 8 8 2048 2 1 : tunables 24 12 8 : slabdata 4 4 0
rpc_tasks 8 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
rpc_inode_cache 6 7 512 7 1 : tunables 54 27 8 : slabdata 1 1 0
ip_conntrack_expect 0 0 96 40 1 : tunables 120 60 8 : slabdata 0 0 0
ip_conntrack 68 68 228 17 1 : tunables 120 60 8 : slabdata 4 4 0
ip_fib_alias 7 113 32 113 1 : tunables 120 60 8 : slabdata 1 1 0
ip_fib_hash 7 113 32 113 1 : tunables 120 60 8 : slabdata 1 1 0
fib6_nodes 22 113 32 113 1 : tunables 120 60 8 : slabdata 1 1 0
ip6_dst_cache 13 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
ndisc_cache 1 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
RAWv6 4 5 768 5 1 : tunables 54 27 8 : slabdata 1 1 0
UDPv6 9 12 640 6 1 : tunables 54 27 8 : slabdata 2 2 0
tw_sock_TCPv6 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
request_sock_TCPv6 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
TCPv6 1 3 1280 3 1 : tunables 24 12 8 : slabdata 1 1 0
jbd_1k 0 0 1024 4 1 : tunables 54 27 8 : slabdata 0 0 0
dm_mpath 0 0 28 127 1 : tunables 120 60 8 : slabdata 0 0 0
dm_uevent 0 0 2460 3 2 : tunables 24 12 8 : slabdata 0 0 0
dm_tio 0 0 16 203 1 : tunables 120 60 8 : slabdata 0 0 0
dm_io 0 0 20 169 1 : tunables 120 60 8 : slabdata 0 0 0
jbd_4k 1 1 4096 1 1 : tunables 24 12 8 : slabdata 1 1 0
scsi_cmd_cache 10 10 384 10 1 : tunables 54 27 8 : slabdata 1 1 0
sgpool-128 36 36 2048 2 1 : tunables 24 12 8 : slabdata 18 18 0
sgpool-64 33 36 1024 4 1 : tunables 54 27 8 : slabdata 9 9 0
sgpool-32 34 40 512 8 1 : tunables 54 27 8 : slabdata 5 5 0
sgpool-16 35 45 256 15 1 : tunables 120 60 8 : slabdata 3 3 0
sgpool-8 60 60 128 30 1 : tunables 120 60 8 : slabdata 2 2 0
scsi_io_context 0 0 104 37 1 : tunables 120 60 8 : slabdata 0 0 0
ext3_inode_cache 4376 8216 492 8 1 : tunables 54 27 8 : slabdata 1027 1027 0
ext3_xattr 165 234 48 78 1 : tunables 120 60 8 : slabdata 3 3 0
journal_handle 8 169 20 169 1 : tunables 120 60 8 : slabdata 1 1 0
journal_head 684 1008 52 72 1 : tunables 120 60 8 : slabdata 14 14 0
revoke_table 18 254 12 254 1 : tunables 120 60 8 : slabdata 1 1 0
revoke_record 0 0 16 203 1 : tunables 120 60 8 : slabdata 0 0 0
uhci_urb_priv 0 0 28 127 1 : tunables 120 60 8 : slabdata 0 0 0
UNIX 56 112 512 7 1 : tunables 54 27 8 : slabdata 16 16 0
flow_cache 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
cfq_ioc_pool 0 0 92 42 1 : tunables 120 60 8 : slabdata 0 0 0
cfq_pool 0 0 96 40 1 : tunables 120 60 8 : slabdata 0 0 0
crq_pool 0 0 44 84 1 : tunables 120 60 8 : slabdata 0 0 0
deadline_drq 140 252 44 84 1 : tunables 120 60 8 : slabdata 3 3 0
as_arq 0 0 56 67 1 : tunables 120 60 8 : slabdata 0 0 0
mqueue_inode_cache 1 6 640 6 1 : tunables 54 27 8 : slabdata 1 1 0
isofs_inode_cache 0 0 368 10 1 : tunables 54 27 8 : slabdata 0 0 0
hugetlbfs_inode_cache 1 11 340 11 1 : tunables 54 27 8 : slabdata 1 1 0
ext2_inode_cache 0 0 476 8 1 : tunables 54 27 8 : slabdata 0 0 0
ext2_xattr 0 0 48 78 1 : tunables 120 60 8 : slabdata 0 0 0
dnotify_cache 2 169 20 169 1 : tunables 120 60 8 : slabdata 1 1 0
dquot 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
eventpoll_pwq 1 101 36 101 1 : tunables 120 60 8 : slabdata 1 1 0
eventpoll_epi 1 30 128 30 1 : tunables 120 60 8 : slabdata 1 1 0
inotify_event_cache 1 127 28 127 1 : tunables 120 60 8 : slabdata 1 1 0
inotify_watch_cache 23 92 40 92 1 : tunables 120 60 8 : slabdata 1 1 0
kioctx 135 135 256 15 1 : tunables 120 60 8 : slabdata 9 9 0
kiocb 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
fasync_cache 0 0 16 203 1 : tunables 120 60 8 : slabdata 0 0 0
shmem_inode_cache 553 585 436 9 1 : tunables 54 27 8 : slabdata 65 65 0
posix_timers_cache 0 0 88 44 1 : tunables 120 60 8 : slabdata 0 0 0
uid_cache 5 59 64 59 1 : tunables 120 60 8 : slabdata 1 1 0
ip_mrt_cache 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
tcp_bind_bucket 32 203 16 203 1 : tunables 120 60 8 : slabdata 1 1 0
inet_peer_cache 1 59 64 59 1 : tunables 120 60 8 : slabdata 1 1 0
secpath_cache 0 0 32 113 1 : tunables 120 60 8 : slabdata 0 0 0
xfrm_dst_cache 0 0 384 10 1 : tunables 54 27 8 : slabdata 0 0 0
ip_dst_cache 6 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
arp_cache 2 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
RAW 2 7 512 7 1 : tunables 54 27 8 : slabdata 1 1 0
UDP 3 7 512 7 1 : tunables 54 27 8 : slabdata 1 1 0
tw_sock_TCP 3 30 128 30 1 : tunables 120 60 8 : slabdata 1 1 0
request_sock_TCP 4 30 128 30 1 : tunables 120 60 8 : slabdata 1 1 0
TCP 43 49 1152 7 2 : tunables 24 12 8 : slabdata 7 7 0
blkdev_ioc 3 127 28 127 1 : tunables 120 60 8 : slabdata 1 1 0
blkdev_queue 23 24 956 4 1 : tunables 54 27 8 : slabdata 6 6 0
blkdev_requests 137 161 172 23 1 : tunables 120 60 8 : slabdata 7 7 0
biovec-256 7 8 3072 2 2 : tunables 24 12 8 : slabdata 4 4 0
biovec-128 7 10 1536 5 2 : tunables 24 12 8 : slabdata 2 2 0
biovec-64 7 10 768 5 1 : tunables 54 27 8 : slabdata 2 2 0
biovec-16 7 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
biovec-4 8 59 64 59 1 : tunables 120 60 8 : slabdata 1 1 0
biovec-1 406 406 16 203 1 : tunables 120 60 8 : slabdata 2 2 300
bio 564 660 128 30 1 : tunables 120 60 8 : slabdata 21 22 204
utrace_engine_cache 0 0 32 113 1 : tunables 120 60 8 : slabdata 0 0 0
utrace_cache 0 0 32 113 1 : tunables 120 60 8 : slabdata 0 0 0
sock_inode_cache 149 230 384 10 1 : tunables 54 27 8 : slabdata 23 23 0
skbuff_fclone_cache 20 20 384 10 1 : tunables 54 27 8 : slabdata 2 2 0
skbuff_head_cache 86 210 256 15 1 : tunables 120 60 8 : slabdata 14 14 0
file_lock_cache 22 40 96 40 1 : tunables 120 60 8 : slabdata 1 1 0
Acpi-Operand 1147 1196 40 92 1 : tunables 120 60 8 : slabdata 13 13 0
Acpi-ParseExt 0 0 44 84 1 : tunables 120 60 8 : slabdata 0 0 0
Acpi-Parse 0 0 28 127 1 : tunables 120 60 8 : slabdata 0 0 0
Acpi-State 0 0 44 84 1 : tunables 120 60 8 : slabdata 0 0 0
Acpi-Namespace 615 676 20 169 1 : tunables 120 60 8 : slabdata 4 4 0
delayacct_cache 233 312 48 78 1 : tunables 120 60 8 : slabdata 4 4 0
taskstats_cache 12 53 72 53 1 : tunables 120 60 8 : slabdata 1 1 0
proc_inode_cache 622 693 356 11 1 : tunables 54 27 8 : slabdata 63 63 0
sigqueue 8 27 144 27 1 : tunables 120 60 8 : slabdata 1 1 0
radix_tree_node 6220 8134 276 14 1 : tunables 54 27 8 : slabdata 581 581 0
bdev_cache 37 42 512 7 1 : tunables 54 27 8 : slabdata 6 6 0
sysfs_dir_cache 4980 4992 48 78 1 : tunables 120 60 8 : slabdata 64 64 0
mnt_cache 36 60 128 30 1 : tunables 120 60 8 : slabdata 2 2 0
inode_cache 1113 1254 340 11 1 : tunables 54 27 8 : slabdata 114 114 81
dentry_cache 11442 18560 136 29 1 : tunables 120 60 8 : slabdata 640 640 180
filp 7607 10000 192 20 1 : tunables 120 60 8 : slabdata 500 500 120
names_cache 19 19 4096 1 1 : tunables 24 12 8 : slabdata 19 19 0
avc_node 14 72 52 72 1 : tunables 120 60 8 : slabdata 1 1 0
selinux_inode_security 814 1170 48 78 1 : tunables 120 60 8 : slabdata 15 15 0
key_jar 14 30 128 30 1 : tunables 120 60 8 : slabdata 1 1 0
idr_layer_cache 170 203 136 29 1 : tunables 120 60 8 : slabdata 7 7 0
buffer_head 38892 39024 52 72 1 : tunables 120 60 8 : slabdata 542 542 0
mm_struct 108 135 448 9 1 : tunables 54 27 8 : slabdata 15 15 0
vm_area_struct 11169 14904 84 46 1 : tunables 120 60 8 : slabdata 324 324 144
fs_cache 82 177 64 59 1 : tunables 120 60 8 : slabdata 3 3 0
files_cache 108 140 384 10 1 : tunables 54 27 8 : slabdata 14 14 0
signal_cache 142 171 448 9 1 : tunables 54 27 8 : slabdata 19 19 0
sighand_cache 127 135 1344 3 1 : tunables 24 12 8 : slabdata 45 45 0
task_struct 184 246 1360 3 1 : tunables 24 12 8 : slabdata 82 82 0
anon_vma 3313 5842 12 254 1 : tunables 120 60 8 : slabdata 23 23 0
pgd 84 84 4096 1 1 : tunables 24 12 8 : slabdata 84 84 0
pid 237 303 36 101 1 : tunables 120 60 8 : slabdata 3 3 0
size-131072(DMA) 0 0 131072 1 32 : tunables 8 4 0 : slabdata 0 0 0
size-131072 0 0 131072 1 32 : tunables 8 4 0 : slabdata 0 0 0
size-65536(DMA) 0 0 65536 1 16 : tunables 8 4 0 : slabdata 0 0 0
size-65536 2 2 65536 1 16 : tunables 8 4 0 : slabdata 2 2 0
size-32768(DMA) 0 0 32768 1 8 : tunables 8 4 0 : slabdata 0 0 0
size-32768 9 9 32768 1 8 : tunables 8 4 0 : slabdata 9 9 0
size-16384(DMA) 0 0 16384 1 4 : tunables 8 4 0 : slabdata 0 0 0
size-16384 6 6 16384 1 4 : tunables 8 4 0 : slabdata 6 6 0
size-8192(DMA) 0 0 8192 1 2 : tunables 8 4 0 : slabdata 0 0 0
size-8192 5 5 8192 1 2 : tunables 8 4 0 : slabdata 5 5 0
size-4096(DMA) 0 0 4096 1 1 : tunables 24 12 8 : slabdata 0 0 0
size-4096 205 205 4096 1 1 : tunables 24 12 8 : slabdata 205 205 0
size-2048(DMA) 0 0 2048 2 1 : tunables 24 12 8 : slabdata 0 0 0
size-2048 260 270 2048 2 1 : tunables 24 12 8 : slabdata 135 135 0
size-1024(DMA) 0 0 1024 4 1 : tunables 54 27 8 : slabdata 0 0 0
size-1024 204 204 1024 4 1 : tunables 54 27 8 : slabdata 51 51 0
size-512(DMA) 0 0 512 8 1 : tunables 54 27 8 : slabdata 0 0 0
size-512 367 464 512 8 1 : tunables 54 27 8 : slabdata 58 58 0
size-256(DMA) 0 0 256 15 1 : tunables 120 60 8 : slabdata 0 0 0
size-256 487 495 256 15 1 : tunables 120 60 8 : slabdata 33 33 0
size-128(DMA) 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
size-128 2242 2490 128 30 1 : tunables 120 60 8 : slabdata 83 83 0
size-64(DMA) 0 0 64 59 1 : tunables 120 60 8 : slabdata 0 0 0
size-32(DMA) 0 0 32 113 1 : tunables 120 60 8 : slabdata 0 0 0
size-64 1409 2950 64 59 1 : tunables 120 60 8 : slabdata 50 50 0
size-32 3596 3842 32 113 1 : tunables 120 60 8 : slabdata 34 34 0
kmem_cache 145 150 256 15 1 : tunables 120 60 8 : slabdata 10 10 0
[root@localhost ~]# slabtop -d 5
Active / Total Objects (% used) : 97257 / 113249 (85.9%)
Active / Total Slabs (% used) : 4488 / 4488 (100.0%)
Active / Total Caches (% used) : 101 / 146 (69.2%)
Active / Total Size (% used) : 15076.34K / 17587.55K (85.7%)
Minimum / Average / Maximum Object : 0.01K / 0.16K / 128.00K
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
25776 25764 99% 0.05K 358 72 1432K buffer_head
16146 15351 95% 0.08K 351 46 1404K vm_area_struct
15138 7779 51% 0.13K 522 29 2088K dentry_cache
9720 9106 93% 0.19K 486 20 1944K filp
7714 7032 91% 0.27K 551 14 2204K radix_tree_node
5070 5018 98% 0.05K 65 78 260K sysfs_dir_cache
4826 4766 98% 0.01K 19 254 76K anon_vma
4824 3406 70% 0.48K 603 8 2412K ext3_inode_cache
3842 3691 96% 0.03K 34 113 136K size-32
2190 2174 99% 0.12K 73 30 292K size-128
1711 1364 79% 0.06K 29 59 116K size-64
1210 1053 87% 0.33K 110 11 440K inode_cache
1196 1147 95% 0.04K 13 92 52K Acpi-Operand
1170 814 69% 0.05K 15 78 60K selinux_inode_security
936 414 44% 0.05K 13 72 52K journal_head
747 738 98% 0.43K 83 9 332K shmem_inode_cache
693 617 89% 0.35K 63 11 252K proc_inode_cache
676 615 90% 0.02K 4 169 16K Acpi-Namespace
609 136 22% 0.02K 3 203 12K biovec-1
495 493 99% 0.25K 33 15 132K size-256
480 384 80% 0.12K 16 30 64K bio
440 399 90% 0.50K 55 8 220K size-512
312 206 66% 0.05K 4 78 16K delayacct_cache
303 209 68% 0.04K 3 101 12K pid
290 290 100% 0.38K 29 10 116K sock_inode_cache
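To see which caches dominate without slabtop, the slabinfo columns above (name, active_objs, num_objs, objsize, ...) can be reduced with a one-liner. A self-contained sketch follows, using a two-line embedded sample instead of a live /proc/slabinfo so the arithmetic is visible; on a real box you would feed it `tail -n +3 /proc/slabinfo` instead:

```shell
# Estimate per-cache memory (num_objs * objsize) from slabinfo-style rows
# and list the biggest consumers first. Sample rows taken from the dump above.
printf '%s\n' \
  'buffer_head 38892 39024 52 72 1' \
  'dentry_cache 11442 18560 136 29 1' |
awk '{ printf "%d KB\t%s\n", $3 * $4 / 1024, $1 }' | sort -rn
```

This matches the CACHE SIZE column slabtop reports (dentry_cache near 2 MB, buffer_head near 2 MB in the listing above), minus slab-management overhead.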
[root@localhost ~]# cat /etc/sysctl.conf
# Kernel sysctl configuration file for Red Hat Linux
# Controls IP packet forwarding
net.ipv4.ip_forward=0
# Controls source route verification
net.ipv4.conf.default.rp_filter=1
# Do not accept source routing
net.ipv4.conf.default.accept_source_route=0
# Oracle
net.ipv4.ip_local_port_range=1024 65000
net.core.rmem_default=4194304
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=262144
net.ipv4.tcp_rmem=4096 65536 4194304
net.ipv4.tcp_wmem=4096 65536 4194304
# Keepalive Oracle
net.ipv4.tcp_keepalive_time=3000
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_keepalive_probes=15
net.ipv4.tcp_retries2=3
net.ipv4.tcp_syn_retries=2
net.ipv4.tcp_sack=0
net.ipv4.tcp_timestamps=0
net.ipv4.tcp_window_scaling=0
# Oracle
fs.file-max = 6553600
fs.aio-max-nr=3145728
kernel.shmmni=4096
kernel.sem=250 32000 100 142
kernel.shmmax=2147483648
kernel.shmall=3279547
kernel.msgmnb=65536
kernel.msgmni=2878
kernel.msgmax=8192
kernel.exec-shield=0
# Controls the System Request debugging functionality of the kernel
kernel.sysrq=1
kernel.panic=60
kernel.core_uses_pid=1
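One consistency check worth running on a file like the above (a sketch, not an official Oracle validation): kernel.shmall is counted in pages while kernel.shmmax is in bytes, so shmall should be at least shmmax divided by the page size. The values below come from the sysctl.conf shown; the 4096-byte page size is a typical x86 default and is an assumption here:

```shell
# Check that shmall (pages) can cover a single segment of shmmax (bytes).
shmmax=2147483648      # kernel.shmmax from the file above
shmall=3279547         # kernel.shmall from the file above
page_size=4096         # assumed x86 page size
needed_pages=$((shmmax / page_size))
if [ "$shmall" -ge "$needed_pages" ]; then
    echo "shmall OK ($shmall pages >= $needed_pages pages)"
else
    echo "shmall too small: need at least $needed_pages pages"
fi
```

After editing the file, `sysctl -p` (as root) reloads it and reports any rejected keys.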
[root@localhost ~]# free | grep Swap
Swap: 3148700 319916 2828784
[root@localhost ~]# cat /etc/fstab | grep "/dev/shm"
tmpfs /dev/shm tmpfs size=1024M 0 0
[root@localhost ~]# df | grep "/dev/shm"
tmpfs 1048576 452128 596448 44% /dev/shm
NON-DEFAULT DB PARAMETERS:
db_block_size 8192
memory_target 633339904 /* automatic memory management */
open_cursors 300
processes 256
disk_async_io TRUE
filesystemio_options SETALL
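A quick cross-check between the df output and the parameter list above (a sketch using the numbers shown, not a general-purpose script): with automatic memory management, Oracle backs the SGA with files in /dev/shm, so memory_target must fit inside the tmpfs or instance startup fails with ORA-00845:

```shell
# Verify memory_target fits in /dev/shm, using the values printed above.
memory_target=633339904    # bytes, from the non-default parameter list
shm_kb=1048576             # /dev/shm size in 1K blocks, from df
shm_bytes=$((shm_kb * 1024))
if [ "$memory_target" -le "$shm_bytes" ]; then
    echo "OK: memory_target fits in /dev/shm"
else
    echo "WARNING: /dev/shm too small for memory_target (ORA-00845 likely)"
fi
```

Here 604 MB comfortably fits in the 1024 MB tmpfs, so /dev/shm sizing is not the bottleneck in this configuration.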
Lightroom Mobile Sync - Extremely High CPU Usage/Sync Process Causes LR To Lag
Since my other thread doesn't seem to be getting any responses, I'm pasting what I've found here. Please keep in mind I am not a beginner with Lightroom and consider myself very familiar with Lightroom's features excluding the new mobile sync.
1st message:
I'm on Lr 5.5 and using the 30-day trial of Adobe CC to try syncing one collection of slightly more than 1000 images. Despite already having generated the Smart Previews, I can see my CPU crunching through image after image (the rolling-hills pattern in the task manager) while doing the sync. I was assuming, since I already created the Smart Previews, that the sync of this collection would begin immediately and be done by simply uploading all of the existing Smart Previews. The Smart Previews folder of the catalog is 871MB and has stayed the same size despite the CPU obviously doing *something*. As it is now, the sync progress is incredibly slow, almost at a pace like it's actually exporting full-res JPGs from the RAW images (as a comparison only, I know this should not be what it's actually doing).
Another side effect of this is that I'm basically unable to use my computer for other tasks due to the high CPU utilization.
Win 7 x64 / Lightroom 5.5
Intel i5 2500k OC'd 4.5GHz
16GB RAM
SSD for OS, separate SSD for working catalog and files
2nd message:
As a follow-up, Lightroom now thinks all 1026 photos are synced (as shown in the "All Sync Photographs" portion of the Catalog), though all images after the 832nd show the per-image sync icon stuck at "Building Previews for Lightroom Mobile", and the status in the top-left corner has been stuck at "Syncing 194 photos" for over 12 hours. Is there no option to force another sync via Lightroom Desktop, and also to force the iOS app to refresh manually (perhaps by pulling down on the collections view, like refreshing in the Mail app)?
3rd message:
One more update: I went into Preferences and deleted all mobile data, which automatically signed me out of Adobe CC, then signed back in. Please keep in mind the Smart Previews had long been generated before I even started the trial, and I have also manually re-generated them many times (it ran through quickly since it found they already existed). Now that I'm re-syncing my collection of 1026 images, I can clearly see Lightroom using the CPU to regenerate Smart Previews that already exist. I have no idea why it's doing this, except that it makes uploading the Smart Previews extremely slow. I hope this time around it will at least sync all 1026 images to the cloud.
4th message:
All 1026 images synced just fine and I could run through my culling workflow on the iPad/iPhone perfectly. Now I'm on a new catalog (my current workflow unfortunately uses one catalog per event) and I see the same problem: Smart Previews already generated but when syncing, Lightroom seems to re-generate them again anyway (or take up a lot of CPU simply to upload the existing Smart Previews). Can anyone else chime in on what their CPU utilization is like during the sync process when Smart Previews are already created?
New information:
Now I'm editing a catalog of images that is synced to Lightroom Mobile and notice that my workflow has gotten even slower between photos (relative to what it was before, this is not a discussion about how fast/slow LR should perform). Obviously Lightroom is syncing the edited settings to the cloud, but I can see my CPU running intensively (all 4 cores) on every image I edit and the CPU utilization graph looks different than before I started using LR mobile sync. It still feels like every change isn't simply syncing an SQLite database change but re-generating a Smart Preview to go with it (I'm not saying this is definitely what's happening, but something is intensively using the CPU that wasn't prior to using LR Mobile).
For example: I only update the tint +5 on an image. I see the CPU spike up to around 30-40%, then falls back down, then back up to 100%, then back down to another smaller spike while Lightroom says "Syncing 1 photo". I've attached a screenshot of my CPU graph when doing this edit on just one image. During this entire time, if I try to move onto edit another image, the program is noticeably slower to respond than it was prior to using LR mobile, due to the fact that there appear to be much more CPU intensive tasks running to sync the previous edit. This is proven by un-syncing the collection and immediately the lag goes away.
I'd be happy to test/try anything you have in mind, because it's my understanding that re-syncing edited photos that are already in the cloud should simply update the database file rather than require regenerating any Smart Previews or other image data. If that is indeed what it should be doing, then some other part of LR is causing the massive CPU usage. If this continues, I will probably not proceed with a subscription, despite the fact that I think LR Mobile adds a lot of value and would boost my workflow significantly if it weren't causing the program to lag so badly in the process.
I know this message was incredibly long and probably tedious to read through, so thanks in advance to anyone who gets through it.
-Jeff
Thanks for reporting. Just passed your info along to some of our devs. One of the things that needs to be created (besides Smart Previews) during an initial sync is thumbnails + previews for the LrM app - Guido
Hi Guido,
Thanks for pointing this out. I realized the same thing when I tried syncing a collection for offline mode and found out the required space sounded more like Previews + Smart Previews rather than just the Smart Previews.
greule wrote:
Hi Jeff, are your images particularly large or do you make a lot of changes which you save to the original file as part of your workflow?
The CPU usage is almost certainly from us uploading JPEG previews, not the Smart Previews - particularly during develop edits, since these force new JPEG previews to be sent from Lightroom desktop but would not force new Smart Previews to be sent (unless the develop edits modify the original file, making us think the Smart Preview is out of date).
Guido
My images are full-resolution ~22mp Canon 5D Mark III RAW files so they're fairly large. Even if I only make one basic change such as exposure changes, I saw the issue. By "save to the original file" I'm assuming metadata changes such as timestamps, otherwise edits to the images aren't actually written to the original file. I'm only doing develop module edits so I shouldn't be touching the original file at all at this point in my workflow.
I think it makes sense now that you mention that new JPEG previews need to be generated and sent to the cloud due to updated develop edits. My concern is that this seems to be done in real-time as opposed to how Lightroom Desktop works (which is to render a new Standard Preview or 1:1 Preview on demand, which means only one is being rendered at any given time while viewing it in Loupe View or possibly 2 in Compare View). If I edit, for example, 10 images quickly in a row, once the sync kicks in a few seconds later, editing the 11th image is severely hindered due to the previous 10 images' JPEG previews being rendered and sync'd to the cloud (I'm assuming the upload portion doesn't take much CPU, but the JPEG render will utilize CPU resources to the fullest if it can). Rendering Standard/1:1 Previews locally and being able to walk away while the process finishes works because it is at the start of my workflow, but having to deal with on-the-fly preview rendering while I'm editing greatly impacts my ability to edit. Perhaps there can be a way to limit max CPU utilization for background sync tasks?
It may help to know that I'm running a dual-monitor setup, with Lightroom on a 27" 2560x1440 display maximized to fit the display (2nd display not running LR's 2nd monitor). Since I'm using a retina iPad, the optimal Standard Previews resolution should be the same at 2880 pixels.
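On the wish above for capping CPU used by background sync: no such Lightroom setting is mentioned in this thread, but a generic OS-level mitigation is to lower the process's scheduling priority so interactive work preempts it (the task still consumes idle CPU, it just yields first). On Windows the analogue is Task Manager's "Set priority" or `start /low`; a minimal Unix-style sketch with a stand-in busy loop in place of the real process:

```shell
# Run a CPU-heavy task at the lowest scheduling priority (nice 19).
# The busy loop here is a placeholder for the actual CPU-bound work.
nice -n 19 sh -c 'i=0; while [ "$i" -lt 1000 ]; do i=$((i+1)); done; echo done'
```

For an already-running process, `renice 19 -p <pid>` applies the same idea after the fact. This does not reduce total work, only its ability to starve the foreground.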
Thanks again for the help!
Occasionally SQL Developer gets into a state where CPU usage is 50% (on a dual-core machine) and, according to jvisualvm, "Background Parser" and InsightThread are using the CPU - and only those two.
This continues also after closing all connections.
I don't really know what lead up to this.
Version 3.2.09/Build MAIN-09.23
Edited by: jnp1234 on Aug 28, 2012 7:01 AM
Hi,
This temporary high CPU usage is most likely related to the size of your SQL History. Look for the preference that controls the size limit in Tools | Preferences | Database | Worksheet.
Regards,
Gary
SQL Developer Team