Fade in/out effect too slow

Hi!
I'm trying to do a fade in/out effect between two images using the following code:
public void render(Graphics g, Image imageFrom, Image imageTo, int width, int height) {
    // gc is assumed to be a GraphicsConfiguration field of the enclosing class
    Image buffer = gc.createCompatibleVolatileImage(width, height);
    Graphics2D g2 = (Graphics2D) buffer.getGraphics();
    int counter = 0;
    while (counter < 90) {
        counter = counter + 2;
        double alphaScalar = Math.sin(Math.toRadians(counter));
        g2.setComposite(AlphaComposite.SrcOver);
        g2.drawImage(imageFrom, 0, 0, null);
        g2.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, (float) alphaScalar));
        g2.drawImage(imageTo, 0, 0, null);
        g.drawImage(buffer, 0, 0, null);
        try {
            Thread.sleep(1);
        } catch (InterruptedException ex) {
        }
    }
    g.dispose();
    g2.dispose();
    buffer.flush();
}
The problem is that it's too slow and jerky, even on fast computers. I need a smoother transition. Can someone help me, please?
Before you ask: I've put in the Thread.sleep(1) line to let another thread, which plays background music, breathe.
Thanks in advance

Have you tried Full-Screen Exclusive mode?
import java.awt.*;
import java.awt.event.*;
import java.awt.image.*;
import java.io.*;
import java.net.*;
import javax.imageio.*;

public class FlipExample {

    public static void main(String[] args) throws IOException {
        new FlipExample().animate();
        System.exit(0);
    }

    private boolean quit = false;
    private BufferedImage im1, im2;
    private Window w;

    public FlipExample() throws IOException {
        w = new Window(new Frame());
        w.setIgnoreRepaint(true);
        w.addMouseListener(new MouseAdapter() {
            public void mousePressed(MouseEvent e) {
                if (e.getClickCount() > 1)
                    quit = true;
            }
        });
        String urlPrefix = "http://www3.us.porsche.com/english/usa/carreragt/modelinformation/experience/desktop/bilder/icon";
        String urlSuffix = "_800x600.jpg";
        im1 = ImageIO.read(new URL(urlPrefix + "3" + urlSuffix));
        im2 = ImageIO.read(new URL(urlPrefix + "4" + urlSuffix));
    }

    public void animate() {
        GraphicsDevice gd = w.getGraphicsConfiguration().getDevice();
        try {
            gd.setFullScreenWindow(w);
            gd.setDisplayMode(new DisplayMode(800, 600, 32, 0));
            w.createBufferStrategy(2);
            BufferStrategy bs = w.getBufferStrategy();
            for (int times = 0; times < 5; ++times) {
                render(bs, im1, im2);
                render(bs, im2, im1);
            }
        } finally {
            gd.setFullScreenWindow(null);
        }
    }

    private void render(BufferStrategy bs, BufferedImage from, BufferedImage to) {
        for (int i = 0, UB = 50; !quit && i <= UB; ++i) {
            Graphics2D g = (Graphics2D) bs.getDrawGraphics();
            g.drawRenderedImage(from, null);
            g.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, (float) i / UB));
            g.drawRenderedImage(to, null);
            g.dispose(); // release the Graphics obtained from the buffer strategy
            bs.show();
        }
    }
}
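A further tweak, offered only as a sketch on top of the class above: drive the alpha value from elapsed wall-clock time rather than from a loop counter, so the fade always takes the same number of milliseconds regardless of machine speed, and sleep a little between frames so a background-music thread still gets CPU time. The method below is hypothetical (it reuses the quit field and a BufferStrategy prepared as in animate()):

    private void crossFade(BufferStrategy bs, BufferedImage from, BufferedImage to, long durationMillis) {
        long start = System.nanoTime();
        float alpha = 0f;
        while (alpha < 1f && !quit) {
            long elapsedMillis = (System.nanoTime() - start) / 1000000L;  // wall-clock time so far
            alpha = Math.min(1f, (float) elapsedMillis / durationMillis); // 0.0 .. 1.0 over the fade
            Graphics2D g = (Graphics2D) bs.getDrawGraphics();
            try {
                g.drawRenderedImage(from, null);
                g.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, alpha));
                g.drawRenderedImage(to, null);
            } finally {
                g.dispose(); // always release the Graphics from the buffer strategy
            }
            bs.show();
            try {
                Thread.sleep(15); // roughly 60 fps, and lets other threads breathe
            } catch (InterruptedException ex) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

With a time-based fade, changing durationMillis is all it takes to make the transition slower or faster, and the smoothness no longer depends on how fast the drawing loop happens to run.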

Similar Messages

  • Hi all. When I press play and make some changes to a loop (e.g. fade in/fade out), they are very slow to take effect, and the loops from the library are also very slow to play. Does Logic correct this by itself, or is that just how Logic is?

    Hey there Logic Pro21,
    It sounds like you are seeing some odd performance issues with Logic Pro X. I recommend the troubleshooting steps from the following article to help pin down what is happening:
    Logic Pro X: Troubleshooting basics
    http://support.apple.com/kb/HT5859
    Verify that your computer meets the system requirements for Logic Pro X
    See Logic Pro X Technical Specifications.
    Test using the computer's built-in audio hardware
    If you use external audio hardware, try setting Logic Pro X to use the built-in audio hardware on your computer. Choose Logic Pro X > Preferences > Audio from the main menu and click the Devices tab. Choose the built in audio hardware from the Input Device and Output Device pop-up menus. If the issue is resolved using built-in audio, refer to the manufacturer of your audio interface.
    Start Logic with a different project template
    Sometimes project files can become damaged, causing unexpected behavior in Logic. If you use a template, damage to the template can cause unexpected results with any project subsequently created from it. To create a completely fresh project choose File > New from Template and select Empty Project in the template selector window. Test to see if the issue is resolved in the new project.
    Sometimes, issues with the data in a project can be repaired. Open an affected project and open the Project Information window with the Project Information key command. Click Reorganize Memory to attempt to repair the project. When you reorganize memory, the current project is checked for any signs of damage, structural problems, and unused blocks. If any unused blocks are found, you will be able to remove these, and repair the project. Project memory is also reorganized automatically after saving or opening a project.
    Delete the user preferences
    You can resolve many issues by restoring Logic Pro X back to its original settings. This will not impact your media files. To reset your Logic Pro X user preference settings to their original state, do the following:
    In the Finder, choose Go to Folder from the Go menu.
    Type ~/Library/Preferences in the "Go to the folder" field.
    Press the Go button.
    Remove the com.apple.logic10.plist file from the Preferences folder. Note that if you have programmed any custom key commands, this will reset them to the defaults. You may wish to export your custom key command as a preset before performing this step. See the Logic Pro X User Manual for details on how to do this. If you are having trouble with a control surface in Logic Pro X, then you may also wish to delete the com.apple.logic.pro.cs file from the preferences folder.
    If you have upgraded from an earlier version of Logic Pro, you should also remove ~/Library/Preferences/Logic/com.apple.logic.pro.
    Restart the computer.
    Isolate an issue by using another user account
    For more information see Isolating an issue by using another user account.
    Reinstall Logic Pro X
    Another approach you might consider is reinstalling Logic Pro X. To do this effectively, you need to remove the application, then reinstall Logic Pro X. You don't have to remove everything that was installed with Logic Pro X. Follow the steps below to completely reinstall a fresh copy of Logic Pro X.
    In the Finder, choose Applications from the Go menu.
    Locate the Logic Pro X application and drag it to the trash.
    Open the Mac App Store
    Click the Purchases button in the Mac App Store toolbar.
    Sign in to the Mac App Store using the Apple ID you first used to purchase Logic Pro X.
    Look for Logic Pro X in the list of purchased applications in the App Store. If you don't see Logic Pro X in the list, make sure it's not hidden. See Mac App Store: Hiding and unhiding purchases for more information.
    Click Install to download and install Logic Pro X.
    Thank you for using Apple Support Communities.
    Cheers,
    Sterling

  • How can I make a fade out effect at the end of the song in garageband for iPad?

    I want to make this effect, but honestly I have no idea what I have to do. Please help.

    ClaudioWalrus wrote:
    How can I make a fade out effect at the end of the song in garageband for iPad?
    GB for iPad doesn't have volume automation; you'd need to import the project into GB on a Mac to create the fade with a volume curve.
    As an alternative, finish your song and export the audio file, then import the audio file into an audio editor and create a volume fade with that:
    http://www.bulletsandbones.com/GB/GBFAQ.html#audioeditors
    (Let the page FULLY load. The link to your answer is at the top of your screen)

  • Creating fade in fade out effect in catalyst cs5

    Hi there, I've just downloaded the Flash Catalyst CS5 trial version and am trying to create a fade in/fade out effect with my pictures so I can upload them to my website. I've been on it for two days now and am getting nowhere. Does anybody have any suggestions on how to do it? I've tried a lot of tutorials on the net but nothing seems to work.
    Please help, thanks.

    To make matters worse, all the tutorials I use seem to have a different CS5 install than mine; they seem to have more features. They say click on this and click on that, but I don't even have what they mention in my menus. Man, why can't it be easy: just upload the photos, click fade in/fade out and be done with it.

  • Converting NTSC to PAL in CS5, the audio is out of sync-too slow. What do I need to adjust?

    Hi. I had to purchase a new Sony video camera in the USA (NTSC 29.9 fps). We are PAL, so most of my holiday movie is PAL 16:9 widescreen. When I imported the clips from the NTSC camera, I converted the clips to PAL 25 fps, but the audio is out of sync and appears to be too slow, e.g. a male voice is high pitched and not quite audible. Is it the 'bit rate' difference causing the problem and, if so, how do I change that in CS5? The PAL clips off my old camera are imported as MPG files. The new camera files are MTS. I am not sure if this is important, but I would appreciate some help. Thanks.

    Adobe doesn't have any tools that perform adequate NTSC<=>PAL conversions, in my view.  You may be better off creating a separate video for the NTSC stuff, keeping it as NTSC.  Your player should be able to handle it fine.

  • Core Audio : Disk too slow (prepare)

    Can anyone help me? I have created a track that will not bounce because it always comes up with this error message: Core Audio: disk too slow (prepare). I cannot bounce tracks individually for the same reason. I have tried deleting some tracks that I don't need, but it still does the same. I have 700-and-something MB of RAM. Is this not enough? I have Logic Pro 7.1 and am using a few Ultrabeats in the song. Are there any tricks I can use to get around this problem easily? Any help would be greatly appreciated. Thanks in advance.

    The drives in the rear positions are on an ATA-100 Bus, those in the front are on an ATA-66 bus. The CD is on an ATA-33 bus, and no Hard drives should be sharing the CD cable. ATA-66 and ATA-100 are plenty fast enough not to be a bottleneck, as long as you are not experiencing low-level correctable Disk Errors.
    A Disk Error will result in retries, both Hardware and Software, which often produces the data, corrected, after a delay.
    My recommendation would be to find a time when you can back up all data from the troublesome drive, and take the several hours needed to Initialize with the "Zero all Data" option. This writes zeroes to every data block, then does a read-after-write to check the data.
    If the Drive's error correction was required to recover the just-written zeroed data, that block is declared bad, and the drive will make a near-permanent substitution of a nearby spare block for the one found defective. This process has the effect of laundering out marginal blocks and returning the drive to "factory fresh" condition of all good blocks.

  • TCP-IP ICOM is too slow?

    Hello everybody,
    I am new to DasyLab (I have good experience with LabVIEW though) and am trying to build up a TCP/IP connection between a PLC and DasyLab 10 using the ICOM module. So far I have managed to build up a connection and send measured data to DasyLab. There it gets decoded correctly and it even displays correctly after some fine-tuning in the options of that ICOM module. But the problem is, the ICOM module works too slowly for my TCP server.
    I have two channels so far and I send packets with 2 floats in them every millisecond. No delimiter, no string, nothing else, just 8 bytes, so it's around 8 kB/s. I had different delimiters in it, but some people pointed out that this could slow down the interpretation process.
    I tinkered around with the packet size in DasyLab as well as sending more than 2 floats per TCP packet, with no effect. The TCP connection gets shut down (the server does it, because of an error) within a few seconds after the buffer of DasyLab fills up.
    I got it working so far for a sample rate of 1 kHz and a packet size of 1, with maximum CPU load on one core (with the "One Measurement per" option: Gathermode), but only when the line plotter is minimized. Then it behaves almost as intended. When I increase the packet size within the ICOM module, then it gets laggy again, but the CPU load drops significantly. It seems to process the data considerably slower than with a packet size of 1. It takes the ICOM module between 2-10 seconds to process 1 second of data depending on the packet size, with higher packet sizes resulting in slower processing. The global data rate and packet size do not seem to have any influence on that lagginess.
    The goal is actually a sample rate of 2.5 kHz with 25 channels, which would be around 250 kB/s on a direct crossover Fast Ethernet connection. It should be possible to save all the data to the HDD as well as have some sort of live view of the data like a line plotter. It would be okay if the plotter only refreshed every 10th second, as long as it shows the data live (with no lag, correct timestamp) and completely (every data point sent by the server is in there, no duplicates).
    The wiring diagram so far only includes one ICOM module plus a digital instrument and a line plotter. I have attached it to this message.
    Is my task really impossible with DasyLab? Do I need a faster PC? It has a Core 2 Duo 2 GHz CPU. I feel like I am missing something important to get this up and running.
    Thanks in advance. Any help is appreciated.
    Have a nice day
    Philipp Michalski
    Attachments:
    Messdaten-1khz.DSB ‏18 KB

    Philipp,
    The module is simply not designed for high-speed data. You have tried all of the settings... it suggests that DASYLab cannot keep up with the data stream even if you tell it that it's coming in at a rapid rate and to use a larger block.
    The only other option is to tackle writing a driver, or having someone write a driver for you. We have two ways to do that - the DASYLab 13 Full version has the Script module, or you can purchase the Extension Toolkit to write a module using C.
    - cj
    Measurement Computing (MCC) has free technical support. Visit www.mccdaq.com and click on the "Support" tab for all support options, including DASYLab.

  • Aperture - Too slow to be really efficient?

    I have been thinking about whether Aperture might be a way to go for me as far as working with my photographs. The ads etc. made it seem like it offered some great features, and I don't like the Nikon software that has come with my cameras. iPhoto is OK for my family snapshots, but I want more capabilities for my more serious stuff.
    So when I got an email mentioning the 30-day trial, I figured that was a great opportunity to check it out. I was hoping I'd be pleased enough over those 30 days that spending the $300 to buy it would be justified, despite the fact that I have Photoshop CS2 and a few other ways to organize my photos. Thus far $300 seems way too steep simply to duplicate capabilities I already have, and it doesn't come close to offering the capabilities of a full-blown image editing app like PS.
    So my question is this. Is this thing really supposed to be as slow as it is? The app is running so slowly on my Mac G5 dual 2.5 with Radeon 9600 and 2 GB of Apple-installed RAM that I can't see suffering through using it long enough to even see if it does some of the things I hope it does. I certainly couldn't see using it routinely as an app for PRODUCTIVITY. There's nothing productive about an app that takes a good 5 seconds merely to remove a master from the library. Even clicking on an image to look at it takes way too long as the preview refines its resolution.
    I did a quick search on this issue and saw lots of stuff out there. I have waded through some of it; some of it is enlightening and some just adds to the confusion. Apparently there may be an issue when running dual monitors. Am I understanding that correctly? Seems to me that most pros and heavy-duty amateurs run 2 monitors (and for good reason - most of the apps these days need two screens to display all the necessary palettes etc.). So if 2 displays hurt performance, I can't see how this is an app that is going to work for me. I certainly don't want to go reconfiguring my Mac simply to go through the latest batch of pics. It also seems to me that a large library could be an issue - while I don't have huge numbers of images, I do have a couple of thousand that I keep active on my machine (and a ton more on backup DVD).
    So anyway, I'd like to hear from people with a similar setup to mine . . . in your experience, is trying to use this app worthwhile on such a system (which is by no means a slow and incapable Mac at all - I push heavy pixels doing HD work in After Effects and Final Cut all day long and never feel like the apps are too slow to even use)? I am not going to run out and buy a new system simply to run Aperture. Even if all the bells and whistles are what they seem, they aren't enough to justify it. I also don't want to waste time using this trial version and learning its ins and outs if, going in, I know the app won't run efficiently enough.
    So am I crazy? Is this thing really as slow as it seems? Are there really as many speed issues as a search of the forum reveals? Is there something I am missing?

    So am I crazy?
    I have no such evidence.
    Well, give it time.
    In my personal experience, for all practical purposes it's unusable on my configuration. A Mac user since 1986, I purchased my Mac Pro configured specifically for Aperture, including an upgraded video card. The outcome has been underwhelming.
    As I can't buy a faster computer, I can only assume Aperture is suited to hardware yet to be released. Aperture is the lowest point in my experience as a Mac user.
    I am actually hearing that sentiment more than I would like to as I ask around. Too bad. On the surface it looks quite good. I find it odd that the company that has taken on such great things in the video world can't make this a more efficient piece of software. Perhaps they see much bigger money in the volume that comes from focusing on phones with video and computers for your TV, and thus don't really care about an app like Aperture, which at its current pricing would really just appeal to dedicated photographers and probably doesn't represent enough revenue - even amongst photogs with cash to burn.
    I'm evaluating Aperture (which I have purchased at full retail price) alongside the beta of another professional photo app which is yet to be released. The jury is still out, but growing restless.
    Yeah, I have played with the beta from one of the "others". It was interesting. It was a while ago and that version didn't seem as potentially "robust" as Aperture did. But I think it may be time to go back and take another look.

  • EXPDP is too slow even though the value of cursor_sharing changed to EXACT.

    Hi
    We have a 10g Standard Edition database (10.2.0.4) on Solaris 5, which is RAC with ASM. In fact we are planning to migrate it to Linux x86-64 and to 11.2.0.3. The database size is around 1.3 TB. We are planning to go with an expdp backup and impdp into the new server and new version database.
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Release 10.2.0.4.0 - Production
    PL/SQL Release 10.2.0.4.0 - Production
    CORE 10.2.0.4.0 Production
    TNS for Solaris: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    SQL> !uname -a
    SunOS ibmxn920 5.10 Generic_127128-11 i86pc i386 i86pc
    As per the plan I started the expdp. But unfortunately the processing of tables continued for one and a half days and the backup didn't even start. After going through a few docs I found that CURSOR_SHARING should be EXACT to make the expdp faster (previously it was SIMILAR). So I changed the parameter to EXACT on one of the nodes and started the backup again yesterday night on the same node where I changed the parameter. When I came back today the processing was still going on. I checked the job status and found that the table processing is still running. It is not hung at all, but it is too slow.
    What could be the reason? Here are the memory details and kernel parameter details.
    Mem
    Memory: 24G phys mem, 6914M free mem, 31G swap, 31G free swap
    Kernel Parameters
    forceload: sys/msgsys
    forceload: sys/semsys
    forceload: sys/shmsys
    set noexec_user_stack=1
    set msgsys:msginfo_msgmax=65535
    set msgsys:msginfo_msgmnb=65535
    set msgsys:msginfo_msgmni=2560
    set msgsys:msginfo_msgtql=2560
    set semsys:seminfo_semmni=3072
    set semsys:seminfo_semmns=6452
    set semsys:seminfo_semmnu=3072
    set semsys:seminfo_semume=240
    set semsys:seminfo_semopm=100
    set semsys:seminfo_semmsl=1500
    set semsys:seminfo_semvmx=327670
    set shmsys:shminfo_shmmax=4294967295
    set shmsys:shminfo_shmmin=268435456
    set shmsys:shminfo_shmmni=4096
    set shmsys:shminfo_shmseg=1024
    set noexec_user_stack = 1
    set noexec_user_stack_log = 1
    #Non-administrative users cannot change file ownership.
    rstchown=1
    Do I need to make changes to any of the above? The dump is going to a local file system.

    Hi,
    I'd be looking at doing this in parallel over a database link and skipping sending anything to NFS completely - it will make the whole process quicker (you effectively skip the export part and everything is an import into the new instance).
    I ran a 600 GB impdp this way over a DB link and it maybe took 12 hours (can't remember exactly) - a lot of that time is index build in the new database, so make sure your PGA etc. is set up correctly for that.
    LOB data massively slows down Data Pump, so that could be the issue here also. You should be able to achieve the whole process in less than a day (if you have no LOBs...).
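    For illustration only, a network-link import of the kind described above might look roughly like this; the link, schema and connect names are placeholders rather than values from this thread, the database link has to be created in the target database first, and a PARALLEL clause can be added if your edition supports it:

        -- run in the new target database: create a link back to the 10.2 source
        CREATE DATABASE LINK src10g_link CONNECT TO system IDENTIFIED BY password USING 'SRC10G';

        # then import straight over the link, with no dump file written to NFS or local disk
        impdp system@TARGETDB SCHEMAS=app_owner NETWORK_LINK=src10g_link DIRECTORY=DATA_PUMP_DIR LOGFILE=net_import.log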
    Cheers,
    Harry

  • "Disk too slow or System Overload" ... hardly

    Hi all!
    I hung onto my 2007 MacPro until just before summer, when I upgraded to be on the safe side. I do orchestral work and have been accustomed to running heavy VSL projects on one single machine pretty effortlessly. On my new 12-core MacPro, things are working even smoother, EXCEPT for this "Disk too slow or System Overload" happening from time to time on fades. The projects in which I experience this behaviour are audio only, orchestral mixes of between 60 and 70 audio tracks, and I get the message when executing fades on all tracks simultaneously. I can't remember getting this on my old MacPro, which had a fraction of the CPU power and not nearly as much RAM.
    My specs are 2x2.66 6-Core with 32 GB RAM, and all audio files, fade files and other project files are written to two 2 TB 7200 disks in the internal disk slots, configured as a striped RAID. This gives more than enough speed and I still have 1.64 TB of free space on the RAID set.
    This issue comes and goes and I can't seem to figure out what triggers Logic's problem reading fades fast enough. Just now I had some corrupted fade files, rebuilt them, and now Logic can't get past the fades at all unless I start playing in the middle of them.
    I can't understand why this problem should be introduced on a configuration much, much faster than my previous MacPro, where this problem hardly ever occurred. I even doubled the I/O buffer from the 512 I was using on my old Mac to 1024, with no difference at all. Increasing the Process Buffer Size to "Large" doesn't have any effect either.
    Any clues anyone?
    Best regards,
    Ginge

    Good point, nice link!
    But the thing is, I'm not using any software instruments, and apart from one EQ on a track here and there, it's just two Tube-Tech plugins and two sends to Altiverb (of which one is inactive). This kind of load was not a problem on the old heap and shouldn't be a challenge for a 12-core... Also, without changing anything apart from the move above, it is now playing with only one pixel-high movement on the meter, like you would expect it to.
    BTW the quirk is now back on the project that was fixed. I did new fades at another position and CPU 1 is now maxing out again. A new set of fades means equal fades on regions on 63 tracks playing simultaneously.
    As I'm writing I'm becoming aware of one interesting aspect: the project file contains imported AAF data, and to save space I'm leaving the media files at the original location where they were put by Pyramix, which exported the AAF. I figure there shouldn't be a difference whether the audio files reside in a folder called "Media files" or "Audio files". These folders are sitting on the same disk, albeit not in the same subfolder. If anything, I'd assume it would minimize the potential for error caused by having several copies of files with identical names in different locations on the disks. But now it seems the issue is less likely to appear if I save the project including assets, copying external audio files (on any disk), or at least that is how it looks right now. New fades are working fine after I've done this.
    Doesn't make sense to me but it seems to make a difference...
    Ginge

  • CoreAudio: Disk is too slow or System Overload. (-10010)

    I keep getting this error when I'm playing a song that has 16 tracks and effects on each track, but each track is frozen (!): CoreAudio: Disk is too slow or System Overload. (-10010). I'm running a G4 and have 2G RAM. Is this common? What can I do to prevent this? Thanks for any advice you can give!
    -David

    In addition to Rohan's suggestions you have more options depending on how you work and what phases of the project you are in.
    Something overlooked very often is the polyphony enabled for the EXS24: some patches are set to 64. Drop the polyphony to the minimum you will need, or less (if less, you will notice older notes cutting out, but you can set it back to the necessary number upon bounce or export). Some mono patches from sound designers are set to 32 or 64; even if you enable unison this is often worthless.
    Up your buffer size. I usually have my PowerBook at 512; I mostly do offline editing, location recording and sound design with the book anyway.
    Export the track.
    Set virtual memory in the EXS prefs to 'on' with the largest RAM allocation (slow disk, lots of activity).
    If you use certain EXS patches often but only parts of them, trimming the excess off the instrument can improve performance. If you have a piano program with good samples, you can initially create a lite version with 2 velocity layers and then, when it is time to get things sounding good, revert the instrument to the original 5 layers. Obviously this makes a huge difference depending on what you are using the piano for; leave it if it is critical.
    Save the channel strip and export as an audio file while keeping the sequence in your session. This is like freezing, but you can manually trim the file start position and set the bit depth. Freeze files render from the start until the last audible sample and then truncate the tails, meaning even if you can't hear it, it is still pulling "silence" from disk as long as there is sound after it. Exporting as 24 or 16 bit will effectively reduce your file size by 25 (24-bit) or 50 (16-bit) percent. If you know you don't have to edit a performance for weeks to come, exporting it will solve a lot of problems for the CPU.
    Someone should back me up on this one, but at the moment I want to say EXS samples load into memory regardless of freeze settings, reducing your resources.
    Disable the filter (if live or during playback; N/A with Freeze) - it is often not critical for some patches. More synthetic sounds, or sounds with fewer/shorter samples, rely on it more.
    Disable Unison.
    Most sample libraries were not recorded at 88.2/96 kHz - there is no benefit in running them in sessions at these sampling rates as far as the EXS sound quality goes.
    That should keep you busy for a while, just some tuning to up your system performance.

  • Fade to color effect on a Title

    I can't get a satisfactory Fade to Color effect on a title. I tried various "Midpoint" and "Hold" settings, but none of them gives me a slow fade-in of the title. Any suggestion? Maybe this is not the right way to obtain what I'm searching for?

    So it could be just the Basic Title. Just do a slow cross dissolve, which will fade the title out to black. Put the next title adjacent to it, but not touching it, and cross dissolve to fade that up from black.

  • [solved] fan too slow, cpu temperature is 80 degrees

    ThinkPad X200T | x86_64 | Linux 3.6.8.1-ARCH | systemd | gnome3.6.2
    lm_sensors is already installed, and sensors.service is already enabled and started. I have read the wiki (the Fan Speed Control and lm_sensors pages), but I don't understand it.
    BUT the fan is too slow. It won't automatically increase its speed.
    Below are the commands I used and their output.
    $ sudo modprobe coretemp
    $ sensors
    acpitz-virtual-0
    Adapter: Virtual device
    temp1: +79.0°C (crit = +100.0°C)
    thinkpad-isa-0000
    Adapter: ISA adapter
    fan1: 4877 RPM
    temp1: +79.0°C
    temp2: +0.0°C
    temp3: +0.0°C
    temp4: +0.0°C
    temp5: +0.0°C
    temp6: +0.0°C
    temp7: +0.0°C
    temp8: +0.0°C
    coretemp-isa-0000
    Adapter: ISA adapter
    Core 0: +80.0°C (high = +95.0°C, crit = +105.0°C)
    Core 2: +79.0°C (high = +95.0°C, crit = +105.0°C)
    I don't know why fan1's speed is always around 4877 RPM.
    $ cat /sys/class/hwmon/hwmon1/device/pwm1
    255
    In the following output, only one suitable driver is found: coretemp.
    $ sudo sensors-detect
    [sudo] password for l:
    # sensors-detect revision 6085 (2012-10-30 18:18:45 +0100)
    # System: LENOVO 3093RZ6 [ThinkPad] (laptop)
    This program will help you determine which kernel modules you need
    to load to use lm_sensors most effectively. It is generally safe
    and recommended to accept the default answers to all questions,
    unless you know what you're doing.
    Some south bridges, CPUs or memory controllers contain embedded sensors.
    Do you want to scan for them? This is totally safe. (YES/no):
    Module cpuid loaded successfully.
    Silicon Integrated Systems SIS5595... No
    VIA VT82C686 Integrated Sensors... No
    VIA VT8231 Integrated Sensors... No
    AMD K8 thermal sensors... No
    AMD Family 10h thermal sensors... No
    AMD Family 11h thermal sensors... No
    AMD Family 12h and 14h thermal sensors... No
    AMD Family 15h thermal sensors... No
    AMD Family 15h power sensors... No
    Intel digital thermal sensor... Success!
    (driver `coretemp')
    Intel AMB FB-DIMM thermal sensor... No
    VIA C7 thermal sensor... No
    VIA Nano thermal sensor... No
    Some Super I/O chips contain embedded sensors. We have to write to
    standard I/O ports to probe them. This is usually safe.
    Do you want to scan for Super I/O sensors? (YES/no):
    Probing for Super-I/O at 0x2e/0x2f
    Trying family `National Semiconductor/ITE'... No
    Trying family `SMSC'... No
    Trying family `VIA/Winbond/Nuvoton/Fintek'... No
    Trying family `ITE'... No
    Probing for Super-I/O at 0x4e/0x4f
    Trying family `National Semiconductor/ITE'... No
    Trying family `SMSC'... No
    Trying family `VIA/Winbond/Nuvoton/Fintek'... No
    Trying family `ITE'... No
    Some hardware monitoring chips are accessible through the ISA I/O ports.
    We have to write to arbitrary I/O ports to probe them. This is usually
    safe though. Yes, you do have ISA I/O ports even if you do not have any
    ISA slots! Do you want to scan the ISA I/O ports? (YES/no):
    Probing for `National Semiconductor LM78' at 0x290... No
    Probing for `National Semiconductor LM79' at 0x290... No
    Probing for `Winbond W83781D' at 0x290... No
    Probing for `Winbond W83782D' at 0x290... No
    Lastly, we can probe the I2C/SMBus adapters for connected hardware
    monitoring devices. This is the most risky part, and while it works
    reasonably well on most systems, it has been reported to cause trouble
    on some systems.
    Do you want to probe the I2C/SMBus adapters now? (YES/no):
    Using driver `i2c-i801' for device 0000:00:1f.3: Intel 3400/5 Series (PCH)
    Module i2c-dev loaded successfully.
    Next adapter: SMBus I801 adapter at 1880 (i2c-0)
    Do you want to scan it? (YES/no/selectively):
    Client found at address 0x50
    Probing for `Analog Devices ADM1033'... No
    Probing for `Analog Devices ADM1034'... No
    Probing for `SPD EEPROM'... Yes
    (confidence 8, not a hardware monitoring chip)
    Probing for `EDID EEPROM'... No
    Client found at address 0x51
    Probing for `Analog Devices ADM1033'... No
    Probing for `Analog Devices ADM1034'... No
    Probing for `SPD EEPROM'... Yes
    (confidence 8, not a hardware monitoring chip)
    Client found at address 0x5c
    Probing for `Analog Devices ADT7462'... No
    Probing for `SMSC EMC1072'... No
    Probing for `SMSC EMC1073'... No
    Probing for `SMSC EMC1074'... No
    Next adapter: i915 gmbus ssc (i2c-1)
    Do you want to scan it? (yes/NO/selectively): y
    Next adapter: i915 gmbus vga (i2c-2)
    Do you want to scan it? (yes/NO/selectively): y
    Next adapter: i915 gmbus panel (i2c-3)
    Do you want to scan it? (yes/NO/selectively): y
    Next adapter: i915 gmbus dpc (i2c-4)
    Do you want to scan it? (yes/NO/selectively): y
    Next adapter: i915 gmbus dpb (i2c-5)
    Do you want to scan it? (yes/NO/selectively): y
    Next adapter: i915 gmbus dpd (i2c-6)
    Do you want to scan it? (yes/NO/selectively): y
    Next adapter: DPDDC-C (i2c-7)
    Do you want to scan it? (YES/no/selectively):
    Now follows a summary of the probes I have just done.
    Just press ENTER to continue:
    Driver `coretemp':
    * Chip `Intel digital thermal sensor' (confidence: 9)
    Do you want to overwrite /etc/conf.d/lm_sensors? (YES/no):
    Unloading i2c-dev... OK
    Unloading cpuid... OK
    $ ls /sys/class/hwmon/hwmon0/
    name power subsystem temp1_crit temp1_input uevent
    $ ls /sys/class/hwmon/hwmon1/device
    driver modalias pwm1 temp1_input temp4_input temp7_input
    fan1_input name pwm1_enable temp2_input temp5_input temp8_input
    hwmon power subsystem temp3_input temp6_input uevent
    $ cat /sys/class/hwmon/hwmon1/device/pwm1_enable
    2
    Why is the file "pwm1_enable" set to 2?
    $ cat /sys/class/hwmon/hwmon1/device/fan1_input
    4851
    What else is required? Please reply to me; I will provide more information in this post. Thank you!
    Last edited by blue sea & blue sail (2012-12-09 07:53:51)

    No one has helped me! OK, I'll be self-reliant as always.
    blue sea & blue sail wrote:
    Last night I watched a movie for two hours, and after closing the player I found the GPU temperature had soared to over 80°C. I could feel the heat just resting my hands on the keyboard, and the copper fins at the fan outlet were burning hot. For the health of my ThinkPad I decided to do something about it.
    My machine is a Thinkpad T61p. When installing Ubuntu, thinkpad_acpi should have been installed by default, but by default the fan speed stays in "automatic" mode. To switch to manual control you have to add a file thinkpad_acpi.conf under /etc/modprobe.d/ (older versions may call it options) containing:
    options thinkpad_acpi fan_control=1
    Then reload the module:
    sudo modprobe -r thinkpad_acpi && sudo modprobe thinkpad_acpi
    After that you can change the fan speed directly by writing to /proc/acpi/ibm/fan:
    echo level auto | sudo tee /proc/acpi/ibm/fan
    cat /proc/acpi/ibm/fan shows how the fan can be commanded:
    commands: level <level> (<level> is 0-7, auto, disengaged, full-speed)
    commands: enable, disable
    commands: watchdog <timeout> (<timeout> is 0 (off), 1-120 (seconds))
    With this you can control the fan according to your needs.
    If you want to automate it there are several options. A fairly simple one is to install thinkfan and set the temperature thresholds and corresponding speed levels (0-7) in /etc/thinkfan.conf. By default thinkfan runs in daemon mode. One drawback is that it apparently can only drive levels 0-7, not auto or full-speed.
    Another way to automate it is with a script, but from the source code it also seems to control only levels 0-7.
    Actually levels 0-7 or auto are enough for ordinary tasks, where the temperature usually stays between 40-60°C, but resource-hungry tasks like watching movies start pushing the temperature up. Levels 0-7 are not enough (level 7 is probably only a bit over 3000 RPM), and these tools and scripts seem to read only the CPU temperature. So even when the GPU temperature climbs to 80°C, the fan still idles along at two or three thousand RPM. Solving this is actually quite easy, because the temperature-monitoring interface is at /sys/devices/platform/thinkpad_hwmon/temp*_input, and the IDs corresponding to each component's temperature can be found on that web page.
    With all that, writing a bash script suited to your own ThinkPad shouldn't be much of a problem. Of course the best option is to take existing code and adapt it slightly.
    –EOF–
    This is the original link: http://www.conanblog.me/notes/control-f … kpad-acpi/

  • RECORDING ERROR MESSAGE-'DISC TOO SLOW'

    Every once in a while, after I have recorded around 12+ tracks, I get a "Disc too slow" message with a black block at the end of the recording. The black block, when played back, is just a loud noise. What is this, and what setting might I have wrong? I have the new iMac 27" with quad cores. The cores meter is barely moving and I hardly have any processors on. HELP.

    sand box wrote:
    Erik.......do you feel it is better to record directly to my external HD or the iMac HD?
    External. IF it is Firewire. USB is less suitable, but it'll do too. The startup disk is also busy handling RAM for virtual memory, so it is doubly stressed when recording to it, and will 'refuse' to record, or drop out, or say "disk too slow" sooner.
    What are the best settings for buffer and the other recording parameters?
    Depends on your machine; rule of thumb: keep it as low as possible when recording for minimum latency (32, 64, 128 or 256), and set it higher when playing back (256-1024) to avoid "system overload".
    The best recording format imo is 24 bit/44.1 kHz (or 48 kHz). If you record 10+ tracks that are all subtle acoustic recordings, it may ever so slightly improve sound/mix quality to go with 24 bit/96 kHz, but that will also double the overhead for the CPU. And imo the difference even then is hardly perceptible, save for the most highly trained professional ears.
    I record basic rock with 16 tracks or less and not overly complicated effects. Also I still don't understand what "flattening" of a track means. Thanx.
    Okay, so 24/44.1 is enough for that. Flattening a track means that you make a new audio file (solo the track, and bounce) that includes the plugin effects you applied to it. It is an (old fashioned) way of freeing up CPU (because you can switch off the plugins afterwards and use the 'flattened' audio file).
    Freezing provides a better alternative for freeing up CPU though. Look it up in the manual, it is a simple and effective feature.
    regards, Erik.

  • Java web start application runs too slow...

    Hello,
    I am new to Java Web Start. I have created a Java Web Start application, and when I enable Web Start from local execution it works perfectly well. But when I upload it to the server and then download the application, it is far too slow... I mean it takes minutes to get the output after clicking some button. My JNLP file is as under:
    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <jnlp codebase="http://(web server code base)" href="launch.jnlp" spec="1.0+">
    <information>
    <title>ERD</title>
    <vendor>Deepika Gohil</vendor>
    <homepage href="http://appframework.dev.java.net"/>
    <description>A simple java desktop application based on Swing Application Framework</description>
    <description kind="short">ER Deign Tools</description>
    </information>
    <update check="always"/>
    <security>
    <all-permissions/>
    </security>
    <resources>
    <j2se version="1.5+"/>
    <jar href="ERD_1_2.jar" main="true"/>
    <jar href="lib/appframework-1.0.3.jar"/>
    <jar href="lib/swing-worker-1.1.jar"/>
    <jar href="lib/jaxb-impl.jar"/>
    <jar href="lib/jaxb-xjc.jar"/>
    <jar href="lib/jaxb1-impl.jar"/>
    <jar href="lib/activation.jar"/>
    <jar href="lib/jaxb-api.jar"/>
    <jar href="lib/jsr173_api.jar"/>
    <jar href="lib/ant-contrib-1.0b3.jar"/>
    <jar href="lib/jaxb-impl.jar"/>
    <jar href="lib/jaxb-xjc.jar"/>
    <jar href="lib/FastInfoset.jar"/>
    <jar href="lib/gmbal-api-only.jar"/>
    <jar href="lib/http.jar"/>
    <jar href="lib/jaxws-rt.jar"/>
    <jar href="lib/jaxws-tools.jar"/>
    <jar href="lib/management-api.jar"/>
    <jar href="lib/mimepull.jar"/>
    <jar href="lib/policy.jar"/>
    <jar href="lib/saaj-impl.jar"/>
    <jar href="lib/stax-ex.jar"/>
    <jar href="lib/streambuffer.jar"/>
    <jar href="lib/woodstox.jar"/>
    <jar href="lib/jaxws-api.jar"/>
    <jar href="lib/jsr181-api.jar"/>
    <jar href="lib/jsr250-api.jar"/>
    <jar href="lib/saaj-api.jar"/>
    <jar href="lib/activation.jar"/>
    <jar href="lib/jaxb-api.jar"/>
    <jar href="lib/jsr173_api.jar"/>
    </resources>
    <application-desc main-class="erd.screen1">
    </application-desc>
    </jnlp>
    I don't understand the reason. Could you please help me out?
    Thank you,
    Deepika Gohil.

    Check your web server's access logs to see how many requests web start is sending for each jar. After you've loaded the application the first time, for each subsequent launch, if you've got everything configured right, you should only see requests for the JNLP file and maybe some gifs because web start should load everything else out of the cache (if you're using the version-based download protocol). Or if you're using the basic download protocol, then you might see requests for each jar file, but even in this case, if your web server is prepared to evaluate the last-updated attribute for each jar request and for jars that have not changed, respond with no actual payload and a header value of Not-Modified, then that should run almost as fast.
    You might also want to consider changing the "check" attribute of the "update" element from "always" to "background" for a couple of reasons. It should allow your app to start sooner (but this means that you might have to launch once or twice after an update is applied to the web server before the update shows up on the workstation). Also, my impression is that "always" is broken and prevents web start from ever checking to see if your jnlp file has been updated if you launch your app from a web start shortcut - launching from a browser is less likely to have this problem, depending on how often your browser is configured to check for updated resources.
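    For reference, that suggestion amounts to changing a single line in the JNLP posted above (everything else stays the same):

        <update check="background"/>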

Maybe you are looking for

  • Slow query for one person

    I have a simple query that runs fast for all users except one. He has the same version of software, the same tablespaces, privileges, etc. as other users, but the query returns 4 minutes slower for him than for others. Any idea what is wrong?

  • Printer compatability for my G4

    I have a PowerMac G4 using OS X 10.4.11, also loaded with System 9, which I still use for some of my older applications. I've been using Macs for years but am now a latecomer to OS X. I'm looking at an HP 8500 jet printer to use with this computer.

  • MPLS to support multicast traffic

    Dear gurus, does EoMPLS pass multicast traffic? Previously my customer could pass multicast traffic (video) through our Metro Ethernet network. What we did is migrate the connection to our EoMPLS network, and then suddenly the video stopped working. Thanks.

  • Shopping Cart Table name

    Hi experts, I'd like to know which tables are related to the Shopping Cart in SRM 5.0. Thanks, Andrea

  • Temp and kernel panic relation?

    Is there a relationship between temperature and kernel panics? I've spent the day restarting the computer, trying everything I've found at Apple discussions and support to fix the nonstop kernel panic restart requests. After I left the thing off for an