Too High Wakeups-from-idle per second

I have been using Arch for the past few months and I have noticed that the battery drains much more quickly than it used to on Windows.
I used powertop and found that the "Wakeups-from-idle per second" value is way too high. It is ~300 per second on average, and once I even saw numbers like 8000!
The page at http://www.linuxpowertop.org/powertop.php says it is possible to get down to about 3 wakeups per second while running a full GNOME desktop, and notes that 193 is a lot more than 3.
But in my case the numbers are far higher than expected. I run Arch with Xfce 4.
Can somebody explain why I am seeing such high numbers, and whether this is the reason for my battery draining?

nachiappan wrote: Can somebody explain why I am seeing such high numbers, and whether this is the reason for my battery draining?
Powertop shows what is causing the wakeups. If it is the kernel, trying a different frequency governor might help.
More than likely, tweaking a few different settings together will get you more battery life; the wakeups alone won't be doing all the battery draining.
There's lots of helpful hints on power saving here:
https://wiki.archlinux.org/index.php/La … ttery_life
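If you want to see which governor each core is currently using before experimenting, you can read it straight from sysfs. A minimal sketch (assuming a Linux box that exposes cpufreq under /sys; the class name and fallback string are just illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class GovernorCheck {
    // Reads the governor name from a cpufreq scaling_governor file,
    // returning "unavailable" if the file does not exist (e.g. in a VM
    // without cpufreq support).
    static String readGovernor(Path governorFile) {
        try {
            return Files.readString(governorFile).trim();
        } catch (IOException e) {
            return "unavailable";
        }
    }

    public static void main(String[] args) {
        Path p = Paths.get("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor");
        System.out.println("cpu0 governor: " + readGovernor(p));
    }
}
```

From a shell you can of course just `cat` that file; the point is only that the governor is plainly visible in /sys and easy to check before and after tweaking.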
edit: it seems your CPU isn't being monitored correctly; 3000% seems very wrong.
What kernel and cpu are you using?
Last edited by moetunes (2012-02-23 00:12:46)

Similar Messages

  • Blu-ray Error: "fatal error". Code: "6". "Video buffer underflows. Total bitrate is too high near time = 1.456333 seconds."

    Every time I try to burn a Blu-ray, this message appears and I can't proceed with the process.
    Can anybody help me? I need it for my job.

    I could swear I posted this yesterday, but the forum did not keep it. Here is the recovered version...
    From http://forums.adobe.com/message/6004943#6004943
    You will find a number of other threads with this error message.
    I think it usually reflects a real data-rate problem, but sometimes it happens because there are spikes that should not have occurred with the transcode settings used.
    Either way, the solution/workaround is to reduce the data rate.
    There have also been times when changing the type of transcoding (CBR, or VBR one or two pass) fixed the issue for a particular project.
    In this thread, Jon Geddes recommends a maximum of 27 or less.
    http://forums.adobe.com/message/5214691#5214691
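As a rough sanity check on "total bitrate is too high": the video and audio rates simply have to sum to something under the BD-Video ceiling. A hypothetical sketch (the 40 Mbps figure is the BD-Video maximum *video* rate, used here as a conservative total budget; the helper name is made up):

```java
public class BitrateCheck {
    // Rough total-mux check for a Blu-ray project. 40 Mbps is used as a
    // conservative ceiling; authoring tools may enforce stricter limits,
    // so treat this as an approximation, not a guarantee.
    static boolean fitsBluRay(double videoMbps, double audioMbps) {
        return videoMbps + audioMbps <= 40.0;
    }

    public static void main(String[] args) {
        // 27 Mbps video (the maximum recommended in the linked thread)
        // plus roughly 1.5 Mbps of stereo PCM audio.
        System.out.println(fitsBluRay(27.0, 1.5)); // true: safely under the ceiling
    }
}
```

This is why dropping the video rate from 30 to 27 (or lower) so often makes the error go away: it restores headroom for the audio track and for transient spikes.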

  • Blu-ray error message "code 6, audio buffer underflows. Total bitrate is too high near time = 000000 seconds."

    Hello,
    I'm trying to create Blu-rays using Encore and I keep getting the following error message: "code 6, audio buffer underflows. Total bitrate is too high near time = 000000 seconds."
    For info, I created an H.264 Blu-ray NTSC 24fps master from an Apple ProRes HQ file in Adobe Media Encoder. Duration: 52 min; size: 11 GB. I'm on Mac OS X 10.9.5, the Blu-ray burner is a Samsung SE-506CB/RSWD, and the discs are TDK Blu-ray Disc 50 Spindle 25GB 4X BD-R (printable).
    I looked around in forums and tried the following without success: renaming the disc to a shorter name without spaces and creating the Blu-ray without a menu frame; I also tried with an MPEG-2 Blu-ray master. I tried to export a new master, but I can't seem to be able to change the audio bitrate.
    Can anyone please help ?
    Thanks.

    Hi Stan, thanks for getting back.
    I tried to create a new master from Media Encoder, but I can only export audio as PCM; I don't get a Dolby option. See pic below.
    I tried a new project in Encore and chose PCM instead of Dolby in the preferences menu, but I still got the same error. Should I try again limiting the bitrate to 15? I was on 30 before.
    Please help, I've already wasted 10 Blu-ray discs and this is getting really frustrating!
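One reason the audio format matters here: Blu-ray PCM is uncompressed, so its bitrate is simply sample rate x bit depth x channels, which dwarfs a typical Dolby Digital track. A small illustrative calculation (48 kHz / 16-bit / stereo is an assumption about the project's audio, and the 448 kbps Dolby figure is just a common preset):

```java
public class AudioBitrate {
    // Bitrate of uncompressed PCM audio in Mbps:
    // sampleRate * bitDepth * channels.
    static double pcmMbps(int sampleRateHz, int bitDepth, int channels) {
        return sampleRateHz * (double) bitDepth * channels / 1_000_000.0;
    }

    public static void main(String[] args) {
        // 48 kHz, 16-bit stereo PCM vs a typical 448 kbps Dolby Digital track.
        double pcm = pcmMbps(48_000, 16, 2); // 1.536 Mbps
        System.out.printf("PCM: %.3f Mbps, Dolby Digital: 0.448 Mbps%n", pcm);
    }
}
```

So switching from PCM to Dolby frees roughly 1 Mbps of mux budget, which can be enough to stop an "audio buffer underflow" when the video rate is already near the limit.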

  • When trying to project video from their Win 7 laptop, the CTS1300 TP unit states that the resolution is too high

    Hi,
    I have a customer who reports the issue described in the discussion title:
    When trying to project video from their Win 7 laptop, the CTS1300 TP unit states that the resolution is too high.
    It had worked fine up until the upgrade to Windows 7.
    The customer advises that colleagues can connect and project OK at another site using a Win7 laptop; however, that site uses the CTS3010 and CTS1000 and, as yet, it is unconfirmed which of those they used to test on.
    I could raise a case with TAC, but I wondered if anyone knows of any issue with Windows 7 and the CTS1300.
    Many Thanks
    Nigel

    I've had issues with Win7 laptops before, but it all depends on the model of the laptop, what its display is capable of, and whether the end user is using it in desktop extension / duplication / remote only mode, etc.
    In most cases, switching it to the 2nd-screen-only mode fixes the issue; worst case, rebooting it while it's set this way and connected fixes the problem.
    So, from my experience, it's pretty well always a problem with the laptop/user, and very rarely (if ever) a problem with the TelePresence device.
    Wayne
    Please remember to rate responses and to mark your question as answered if appropriate.

  • Active Directory Performance Counter Searches per second is High

    We are running Windows Server 2008 R2 (without integrated DNS). We have 6 domain controllers, and one of them is showing high searches per second, hanging around 800, while the others are around 200.
    Can someone please advise how can I find out which server, or service, is generating the high number of searches on that particular DC?
    I'm using SCOM 2012 SP1 for the performance monitoring. 
    Thank you.

    Give the ADInsight tool from the Sysinternals suite a try:
    http://technet.microsoft.com/en-in/sysinternals/bb897539
    You can also try the performance tuning guidelines suggested in the article below:
    http://msdn.microsoft.com/en-us/library/windows/hardware/dn529134
    Reliability Monitor might also be able to help you:
    http://technet.microsoft.com/en-us/library/cc771692%28v=ws.10%29.aspx
    http://support.microsoft.com/kb/983386
    Awinish Vishwakarma - MVP
    My Blog: awinish.wordpress.com

  • Console reporting webkit having hundreds of wakeups per second

    I have been noticing frequent computer slowdowns on my 2011 MacBook Pro, always fully patched.
    Checking Console, I found dozens of reports from WebKit saying things like "140 wakeups per second for 194 seconds".
    I think these are related! Can someone tell me how to resolve this?

    Hello Antonio,
    This forum is used to discuss and ask questions about the .NET Framework Base Class Library (BCL), such as Collections, I/O, Registry, Globalization, and Reflection, as well as the other Microsoft libraries that are built on or extend the .NET Framework, including the Managed Extensibility Framework (MEF), Charting Controls, CardSpace, Windows Identity Foundation (WIF), Point of Sale (POS), and Transactions.
    It seems your issue is more of a web application issue, so I suggest posting it to:
    http://forums.iis.net/1158.aspx
    And you can check this article which may be helpful:
    http://aspalliance.com/1184_Avoiding_Blocking_Issues_in_ASPNET_Session_State_Databases
    Thanks for understanding.

  • I installed the latest update on my iPad 2; after the upgrade the tablet runs slowly. Probably the system requirements are too high for my device. I think users with older models should be able to install iOS 6.x.x on their devices or consider buying devices

    I installed the latest update on my iPad 2, and after the upgrade the tablet runs slowly. Probably the system requirements are too high for my device. I think that users with older models should be able to install iOS 6.x.x on their devices, or they will consider buying devices from other manufacturers...
    A lot of people are dissatisfied with the new update, including some of my friends and acquaintances. Is it reasonable to wait until Apple implements the possibility to roll back from iOS 7 to iOS 6? And how soon could that be done?
    Sorry for my English, I am from Russia.

    I have a similar issue. I recently purchased an iPad mini, and I started off by restoring it from the backup of the apps I had on my iPhone. Now I have a number of apps that never downloaded and that I don't want on the iPad: some are apps that I don't need, while others are the iPhone versions of apps. There are something like 23 apps that I have no use for, but iTunes tries to install them every time I connect. I've also tried clicking the 'Will Install' buttons. As far as I can see, there's no way to delete these apps from the iPad without first installing them, and there's no way to NOT install them in iTunes.
    It's not just annoying, it's causing problems with the installation of larger files and movies. Now, every time I sync to the computer, I get a warning that I'm over capacity by 580 MB, and larger apps will not install! Clicking Revert and a bunch of other things doesn't work either. And I can't just delete some of the apps, because I or my wife use them on our phones!

  • Can 10,000 samples per second be acquired from a sensor through cRIO?

    I just want to know: can I acquire 10,000 samples per second per channel, or more, from a sensor through the cRIO when the program runs in real time?
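For perspective, the raw data rate at 10,000 samples per second is tiny by cRIO standards; the practical limits are the module's maximum sampling rate and how the FPGA streams data to the real-time host. A rough sizing sketch (the 16-bit sample width and the channel count are assumptions for illustration):

```java
public class ThroughputEstimate {
    // Bytes per second produced by nChannels sampled at rateHz,
    // with bytesPerSample-wide samples.
    static long bytesPerSecond(int rateHz, int nChannels, int bytesPerSample) {
        return (long) rateHz * nChannels * bytesPerSample;
    }

    public static void main(String[] args) {
        // 10,000 samples/s, 4 channels, 2 bytes/sample = 80,000 bytes/s,
        // well within what a cRIO DMA FIFO can stream to the host.
        System.out.println(bytesPerSecond(10_000, 4, 2) + " bytes/s");
    }
}
```

In other words, the question is less "can the cRIO move 10 kS/s" (it comfortably can) and more whether the specific C-series module supports that per-channel rate.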

    Bikash wrote:
    Can I apply this method in simulation loop. Where I want make my control algorithm. 
    It sounds like a perfect fit for your needs. Remember to buy powerful hardware so you don't have to worry about performance.
    http://sine.ni.com/nips/cds/view/p/lang/en/nid/210400
    Or something similar performance with a PXI solution.
    Br,
    /Roger

  • Memory pages per sec is too high

    Hi, I see the value 5000 in the monitor "Memory Pages Per Second", in the tab "Threshold Comparison > Specify Threshold value to compare". Is this value in MB, or in pages/sec that the system writes/reads?
    Please help me understand.
    -Satya

    I don't think 5000 is a value in MB.
    Taken from
    http://blogs.technet.com/b/askperf/archive/2008/01/29/an-overview-of-troubleshooting-memory-issues-part-two.aspx
    The relevant part is quoted below:
    "Pages/sec is the rate at which pages are read from or written to disk to resolve hard page faults. This counter is a primary indicator of the kinds of faults that cause system-wide delays. It is the sum of Memory\Pages Input/sec and Memory\Pages Output/sec. It is counted in numbers of pages, so it can be compared to other counts of pages, such as Memory\Page Faults/sec, without conversion. It includes pages retrieved to satisfy faults in the file system cache (usually requested by applications) and in non-cached mapped memory files."
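So the counter is a plain sum of two page counters, and converting it to bytes only makes sense if you multiply by the page size yourself. A small illustrative calculation (the sample values and the 4 KB page size are assumptions, not from the thread):

```java
public class PagesPerSec {
    // Pages/sec is the sum of pages read in and pages written out to
    // resolve hard page faults; the unit is pages, not bytes or MB.
    static double pagesPerSec(double pagesInputPerSec, double pagesOutputPerSec) {
        return pagesInputPerSec + pagesOutputPerSec;
    }

    public static void main(String[] args) {
        // Hypothetical samples: 3200 pages in + 1800 pages out per second.
        double total = pagesPerSec(3200, 1800);          // 5000 pages/sec
        double mbPerSec = total * 4096 / (1024 * 1024);  // ~19.5 MB/s at 4 KB pages
        System.out.println(total + " pages/sec = " + mbPerSec + " MB/s");
    }
}
```

This also shows why a threshold of 5000 is not "5000 MB": at a 4 KB page size it corresponds to roughly 20 MB/s of paging traffic.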
    Thanks, S K Agrawal

  • I bought a 1-month VIP membership, but by mistake I also bought a 1-year VIP membership from the same place. How can I cancel the second one?

    I bought a 1-month VIP membership, but by mistake I also bought a 1-year VIP membership from the same place. How can I cancel the second one?

    Order ID: MHFQLWV6S7
    Receipt Date: 17/11/12
    Billed To: Visa
    Item 1: PPTV, PPTV 一个月(31天)VIP会员资格 (1-month/31-day VIP membership), PPLive Corporation, In App Purchase, ¥12.00
    Item 2: PPTV, PPTV十二个月VIP会员资格 (12-month VIP membership), PPLive Corporation, In App Purchase, ¥118.00
    Order Total: ¥130.00
    The first item (¥12) is for 1 month and the second is for one year. Who can help me cancel the second (1-year) purchase?

  • Help. trying to burn a dvd from fcpx. says bitrate too high

    I'm using FCPX for a montage with effects like generators etc., and I'm not sure if this is why it says the bitrate is too high. I want to burn a DVD, and the burn stops at 66% saying the bitrate is too high. I've seen an answer from 2011 and am wondering if anyone has anything new. My computer is a MacBook Pro 15-inch with the latest software. I tried Compressor, but the final product doesn't let me put in a title, and I'm worried about final quality. Thanks for any help.

    I would say usually it's about the same. If the settings are adjusted properly, it can sometimes improve the results; conversely, there's always the potential to get terrible results if the settings are wrong. (FCP X's Share kind of takes that risk out of the process.)
    Where the software can often add value is when you have long videos that you're trying to fit on a disc, or when you're trying to compress certain HD material with complex scenes. Sometimes you'll also run into players that choke on a DVD, and the adjustments in Compressor give you a chance to adjust the bitrate to a level that widens the disc's playability. And there are audio and video effects that can be added in Compressor without re-editing in FCP.
    BTW, I don't believe you've yet told us how long your project is.
    Russ
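For the "fit it on the disc" case Russ mentions, the arithmetic is just disc capacity divided by running time. A hypothetical sketch (the single-layer 4.7 GB capacity and 90-minute project length are assumptions; DVD-Video also caps the video rate at roughly 9.8 Mbps regardless of what would fit):

```java
public class DvdBudget {
    // Total average bitrate (Mbps) that fits capacityGB of disc into
    // durationSeconds of video + audio. 1 GB is taken as 10^9 bytes here.
    static double budgetMbps(double capacityGB, int durationSeconds) {
        double bits = capacityGB * 1e9 * 8;
        return bits / durationSeconds / 1e6;
    }

    public static void main(String[] args) {
        // Single-layer DVD (~4.7 GB), hypothetical 90-minute montage.
        double budget = budgetMbps(4.7, 90 * 60);
        System.out.printf("budget ~ %.2f Mbps%n", budget); // ~6.96 Mbps
    }
}
```

When the project is short, the budget comes out well above the DVD maximum and the encoder's ceiling is what matters; for long projects, this quotient is the number Compressor lets you dial in.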

  • I have taken a photo from an airplane and cannot get the 'map' function on the exifwizard app. Is it because I was too high? Altitude that is.

    I have taken a photo from an airplane and cannot get the 'map' function on the exifwizard app. Is it because I was too high? Altitude that is. Other photos allow me to use the 'map' function.

    Most likely, if you were inside the airplane rather than sitting on the wing, the iPhone couldn't get a GPS lock, or you didn't wait long enough for it to get one.
    EDIT: Lawrence is right... forgot about that.

  • Disk Transfer (reads and writes) Latency is Too High

    I keep getting this error:
    The Logical Disk\Avg. Disk sec/Transfer performance counter threshold has been exceeded.
    I get these errors on the following servers:
    Active Directory
    SQL01 (2 SQL servers, clustered)
    CAS03 (4 CAS servers, load-balanced)
    HUB01
    MBX02 (clustered)
    A little info on our environment:
    * Using SAN storage.
    * The disks are new and working fine.
    * The servers have good hardware (16-32 GB RAM; Xeon or quad-core...).
    I keep getting these notifications every day. I searched on the internet and found the cause to be one of two things:
    1) a disk hardware issue (uncommon, rarely the cause)
    2) the queue time on the hard disk (the time to write to the disk)
    If anyone can assist me with the following:
    1) Is this a serious issue that will affect our environment?
    2) Is it a good idea to change the monitoring interval to 10 minutes (instead of the default 5)?
    3) Is there any solution to prevent these annoying (useless?) notifications?
    4) What is the cause of this queue delay? FYI, sometimes this happens when nothing and no one is using the server (i.e., the server is almost idle).
    Regards

    The problem is exactly what the knowledge of the alert says is wrong. It is very simple: your disk latency is too high at times.
    This is likely due to overloading the capabilities of the disk, so that during peak times the disk underperforms. Or it could be that occasionally, due to the design of your disks, you get a very large spike in disk latency, and this trips the "average" counter. You could change this monitor to a consecutive-sample threshold monitor, and that would likely quiet it down; but only by analyzing a perfmon capture of several disks over 24 hours would you be able to determine specifically what's going on.
    SCOM did exactly what it is supposed to do: it alerted you, proactively, to the possible existence of an issue. Now you, using the knowledge already in the alert, use that information to investigate further and determine what corrective action to take.
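The consecutive-sample idea can be illustrated in a few lines: only raise the alert when N samples in a row exceed the threshold, so a single latency spike no longer trips the monitor. This is a sketch of the logic only, not SCOM's actual override mechanism; the sample values are made up:

```java
public class ConsecutiveThreshold {
    // Returns true only if `needed` consecutive samples exceed thresholdMs,
    // so isolated spikes do not trigger an alert.
    static boolean shouldAlert(double[] latenciesMs, double thresholdMs, int needed) {
        int run = 0;
        for (double sample : latenciesMs) {
            run = (sample > thresholdMs) ? run + 1 : 0;
            if (run >= needed) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        double[] spiky  = {5, 80, 6, 4, 90, 7};   // isolated spikes: no alert
        double[] loaded = {5, 60, 70, 65, 80, 7}; // sustained latency: alert
        System.out.println(shouldAlert(spiky, 25, 3));  // false
        System.out.println(shouldAlert(loaded, 25, 3)); // true
    }
}
```

The trade-off is responsiveness: requiring three consecutive 5-minute samples means a genuine sustained problem is reported roughly 15 minutes later than with the default averaging.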
    Summary
    The Avg. Disk sec/Transfer (LogicalDisk\Avg. Disk sec/Transfer) counter for the logical disk has exceeded the threshold. The logical disk, and possibly even overall system performance, may significantly diminish, resulting in poor operating system and application performance.
    The Avg. Disk sec/Transfer counter measures the average time of the disk transfer requests (I/O request packets (IRPs)) that are executed on a specific logical disk. This is one measure of storage subsystem throughput.
    Causes
    A high Avg. Disk sec/Transfer performance counter value may occur due to a burst of disk transfer requests by either an operating system or application.
    Resolutions
    To increase the available storage subsystem throughput for this logical disk, do one or more of the following:
    • Upgrade the controllers or disk drives.
    • Switch from RAID-5 to RAID-0+1.
    • Increase the number of actual spindles.
    Be sure to set this threshold value appropriately for your specific storage hardware. The threshold value will vary according to the disk’s underlying storage subsystem. For example, the “disk” might be
    a single spindle or a large disk array. You can use MOM overrides to define exception thresholds, which can be applied to specific computers or entire computer groups.
    Additional Information
    The Avg. Disk sec/Transfer counter is useful in gathering throughput data. If the average time is long enough, you can analyze a histogram of the array’s response to specific loads (queues, request sizes, and so on). If possible, you should
    observe workloads separately.
    You can use throughput metrics to determine:
    • The behavior of a workload running on a given host system. You can track the workload requirements for disk transfer requests over time. Characterization of workloads is an important part of performance analysis and capacity planning.
    • The peak and sustainable levels of performance that are provided by a given storage subsystem. A workload can be used to artificially or naturally push a storage subsystem (in this case, a given logical disk) to its limits. Determining these limits provides useful configuration information for system designers and administrators.
    However, without thorough knowledge of the underlying storage subsystem of the logical disk (for example, knowing whether it is a single spindle or a massive disk array), it can be difficult to provide an optimized one size fits all threshold value.
    You must also consider the Avg. Disk sec/Transfer counter in conjunction with other transfer request characteristics (for example, request size and randomness/sequentially) and the equivalent counters for write disk requests.
    If the Avg. Disk sec/Transfers counter is tracked over time and if it increases with the intensity of the workloads that are driving the transfer requests, it is reasonable to suspect that the logical disk is saturated if throughput does not increase and
    the user experiences degraded system throughput.
    For more information about storage architecture and driver support, see the Storage - Architecture and Driver Support Web site at
    http://go.microsoft.com/fwlink/?LinkId=26156.

  • How many of these objects should I be able to insert per second?

    I'm inserting these objects using default (not POF) serialization with putAll(myMap). I receive about 4000 new quotes per second to put in the cache. I try coalescing them to various degrees, but my other apps still slow down while these inserts are taking place. The applications listen to the cache these inserts go into using CQCs, and the apps may also be doing get()s on the cache.
    What is the ideal size for the putAll? If I chop up myMap into batches of 100 or 200 objects, it increases the responsiveness of the other apps but slows down the overall time to complete the putAll. Maybe I need a different cache topology? Currently I have 3 storage-enabled cluster nodes and 3 proxy nodes. The quotes go to a distributed-scheme cache. I have tried both having the quote-inserting app use Extend and having it become a TCMP cluster member; similar issues either way.
    Thanks,
    Andrew
    import java.io.Serializable;

    public class Quote implements Serializable {
        public char type;
        public String symbol;
        public char exch;
        public float bid = 0;
        public float ask = 0;
        public int bidSize = 0;
        public int askSize = 0;
        public int hour = 0;
        public int minute = 0;
        public int second = 0;
        public float last = 0;
        public long volume = 0;
        public char fastMarket; // askSource for NBBO
        public long sequence = 0;
        public int lastTradeSize = 0;

        public String toString() {
            return "type='" + type + "'\tsymbol='" + symbol + "'\texch='" + exch + "'\tbid=" +
                    bid + "\task=" + ask +
                    "\tsize=" + bidSize + "x" + askSize + "\tlast=" + lastTradeSize + " @ " + last +
                    "\tvolume=" + volume + "\t" +
                    hour + ":" + (minute < 10 ? "0" : "") + minute + ":" + (second < 10 ? "0" : "") + second +
                    "\tsequence=" + sequence;
        }

        public boolean equals(Object object) {
            if (this == object) {
                return true;
            }
            if (!(object instanceof Quote)) {
                return false;
            }
            final Quote other = (Quote) object;
            if (!(symbol == null ? other.symbol == null : symbol.equals(other.symbol))) {
                return false;
            }
            if (exch != other.exch) {
                return false;
            }
            return true;
        }

        public int hashCode() {
            final int PRIME = 37;
            int result = 1;
            result = PRIME * result + ((symbol == null) ? 0 : symbol.hashCode());
            result = PRIME * result + (int) exch;
            return result;
        }

        public Object clone() throws CloneNotSupportedException {
            Quote q = new Quote();
            q.type = this.type;
            q.symbol = this.symbol;
            q.exch = this.exch;
            q.bid = this.bid;
            q.ask = this.ask;
            q.bidSize = this.bidSize;
            q.askSize = this.askSize;
            q.hour = this.hour;
            q.minute = this.minute;
            q.second = this.second;
            q.last = this.last;
            q.volume = this.volume;
            q.fastMarket = this.fastMarket;
            q.sequence = this.sequence;
            q.lastTradeSize = this.lastTradeSize;
            return q;
        }
    }

    Well, firstly, I'm surprised you are using "float" fields in a financial object, but that's a different debate... :)
    Second, why aren't you using POF? Much more compact in my testing, and better performance too.
    I've inserted similar objects (but with BigDecimal for the numeric types) and seen insert rates of 30-40,000/second (single machine, one node). Obviously you take a hit when you start the second node (backups being maintained, plus that node is probably on a separate server, so you are introducing network latency). Still, I would have thought 10-20,000/second would be easily doable.
    What are the thread counts on the services you are using? I've found this to be quite a choke point on high-throughput caches. What stats are you getting back from JMX for the Coherence components? What stats from the server (CPU, memory, swap, etc.)? What spec of machines are you using? Which JVM are you using? How is the JVM configured? What are the GC times looking like? Are your CQC queries using indexes? Are your get()s using indexes, or just keys? Have you instrumented your own code to get some stats from it? Are you doing excessive logging? There are so many variables here; it is very difficult to say what the problem is with so little insight into your system.
    Also, maybe look at using a multi-threaded "feeder" client program for your quotes. That's what I do (as well as upping the thread count on the cache service), and it seems to run fine (with smaller batch sizes per thread, say 50). We "push" as well as fully "process" trades (into positions) at a rate of about 7-10,000/sec on a 4-server set-up (two cache storage nodes per server, two proxies per server). The machines are dual-socket, quad-core 3 GHz Xeons. The clients use CQCs and get()s, similar to your set-up.
    Steve
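The batching Andrew describes can be sketched with plain java.util maps standing in for the NamedCache (Coherence's NamedCache also exposes putAll(Map), so the slicing logic carries over directly; the batch size of 200 is just the figure from the thread, and the helper name is made up):

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class BatchedPut {
    // Splits `updates` into chunks of at most `batchSize` entries and
    // applies each chunk with a single putAll, letting other cache
    // clients interleave their reads between chunks. Returns the number
    // of putAll calls made.
    static <K, V> int putInBatches(Map<K, V> cache, Map<K, V> updates, int batchSize) {
        Map<K, V> batch = new LinkedHashMap<>();
        int batches = 0;
        for (Map.Entry<K, V> e : updates.entrySet()) {
            batch.put(e.getKey(), e.getValue());
            if (batch.size() == batchSize) {
                cache.putAll(batch); // one round-trip per chunk against Coherence
                batch.clear();
                batches++;
            }
        }
        if (!batch.isEmpty()) {
            cache.putAll(batch);
            batches++;
        }
        return batches;
    }

    public static void main(String[] args) {
        Map<Integer, String> cache = new HashMap<>();
        Map<Integer, String> quotes = new LinkedHashMap<>();
        for (int i = 0; i < 450; i++) {
            quotes.put(i, "quote-" + i);
        }
        System.out.println(putInBatches(cache, quotes, 200) + " batches, "
                + cache.size() + " entries"); // 3 batches, 450 entries
    }
}
```

Splitting this loop across several feeder threads, as suggested above, is then just a matter of giving each thread its own slice of the update map.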

  • PGC...data rate too high

    Hello,
    the message in thread
    nunew33, "Mpeg not valid error message" #4, 31 Jan 2006 3:29 pm, describes a certain error message. That user had problems with an imported MPEG movie.
    Now I receive the same message, but the MPEG that is causing the problem was created by Encore DVD itself!?
    I am working with the german version, but here is a rough translation of the message:
    "PGC 'Weitere Bilder' has an error at 00:36:42:07.
    The data rate of this file is too high for DVD. You must replace the file with one of a lower data rate. - PGC Info: Name = Weitere Bilder, Ref = SApgc, Time = 00:36:42:07"
    My test project has two menus and a slide show with approx. 25 slides and blending as the transition. The menus are OK; I verified that beforehand.
    First I thought it was a problem with the audio I use in the slide show. Because I am still learning how to use the application, I use some test data; the audio tracks are MP3s. I have already learned that it is better to convert the MP3s to WAV files with certain properties.
    I did that, but the DVD generation was still not successful.
    Then I deleted all slides from the slide show except the first. Now the generation worked!? Since a single slide (an image file) cannot have a bitrate per second, there was no sound any more, and the error message appears AFTER the slide shows are generated, while Encore DVD is importing video and audio just before the burning process, I think that the MPEG containing the slide show is the problem.
    But this MPEG is created by Encore DVD itself. Can Encore DVD create data that is not compliant with the DVD specs?
    For the last two days I had to find the cause of a "general error". Eventually I found out that image names must not be too long. Now there is something else, and I still have to waste time finding solutions for apparent bugs in Encore DVD. Why doesn't the project check find and report such problems? The problem is that the errors appear at the end of the generation process, so I always have to wait, in my case, approx. 30 minutes.
    If the project check had told me beforehand that there were files with names that are too long, I wouldn't have had to search for this for two days.
    Now I get this PGC error (what is PGC, by the way?) and still have no clue, because again the project check didn't mention anything.
    Any help would be greatly appreciated.
    Regards,
    Christian Kirchhoff

    Hello,
    thanks, Ruud and Jeff, for your comments.
    The images are all scans of ancient paintings, and they are all rather dark. They are not "optimized", meaning they are JPGs right now (RGB), and they are bigger than the resolution for PAL 4:3 would require. I just found out that if I choose "None" as scaling there is no error, and the generation of the DVD is much, much faster.
    A DVD with a slide show containing two slides and a 4-second transition takes about 3 minutes to generate when the scaling is set to something other than "None". Without scaling it takes approx. 14 seconds. The resulting movie's size is the same (5.35 MB).
    I wonder why the times differ so much. Obviously the images have to be scaled to the target size, but it seems the scaled versions of the source images are not cached and reused for the blend effect; instead, the source images seem to be scaled again for every frame.
    So I presume that the scaling unfortunately has an effect on the resulting movie too, and thus influences the success of the DVD generation process.
    basic situation:
    good image > 4 secs blend > bad image => error
    variations:
    other blend times don't cause an error:
    good image > 2 secs blend > bad image => success
    good image > 8 secs blend > bad image => success
    other transitions cause an error, too:
    good image > 4 secs fade to black > bad image => error
    good image > 4 secs page turn > bad image => error
    changing the image order prevents the error:
    bad image > 4 secs blend > good image => success
    Changing the format of the bad image to TIFF doesn't prevent the error.
    Changing the colors/brightness of the bad image: a drastic change prevents the error. I adjusted the histogram and made everything much lighter. Just a gamma correction with values between 1.2 and 2.0 didn't help.
    Changing the image size prevents the error. I decreased the size; the resulting image was still bigger than the monitor area, so it still had to be scaled a bit by Encore DVD, but with this smaller version the error didn't occur. The original image is approx. 2000 px x 1400 px; decreasing the size by 50% helped. Less scaling (I tried 90%, 80%, 70% and 60% too) didn't help.
    Using a slightly blurred version (Gaussian blur, 2 px, in Photoshop CS) of the bad image prevents the error.
    My guess is that the error depends on rather subtle image properties. The blur doesn't change the image's average brightness, the balance of colors, or the size of the image, but still the error was gone afterwards.
    The problem is that I will work with slide shows that contain more than two images. It would be too time-consuming to generate the DVD over and over again, look at which slide an error occurs, change that slide, and then generate again. Even the testing I am doing right now has already "eaten" a couple of days of my working time.
    The only thing I can do is use a two-image slide show and test image couple after image couple. If n is the number of images, I will spend (n - 1) times 3 minutes (the average time to create a two-slide slide show with a blend). But of course I will try to prepare the images and make them as big as the monitor resolution, so Encore DVD doesn't have to scale the images any more; that will make the whole generation process much shorter.
    If I use JPGs or TIFFs, the pixel aspect ratio is not preserved when the image is imported. I scaled one of the images in Photoshop, using a modified menu file that was installed with Encore DVD, because it already has the correct size for PAL, the pixel aspect ratio, and the guides for the safe areas. I saved the image as TIFF and as PSD and imported both into Encore DVD. The TIFF is rendered with a 1:1 pixel aspect ratio and NOT with the D1/DV PAL aspect ratio that is stored in the TIFF; thus the image gets narrowed and isn't displayed the way I wanted any more. Only the PSD looks correct. But I think I saw this already in another thread...
    I cannot really understand why the MPEG encoding engine would produce bitrates that are illegal and that are not accepted afterwards, when Encore DVD is putting together all the material. Why does the MPEG encoding engine itself not throw an error during the encoding process? That would save so much time. Instead you have to wait until the end, thinking everything went right, only to find out that there was a problem.
    Still, if sometime somebody finds out more about the whole matter I would be glad about further explanations.
    Best regards,
    Christian
