Average Queuing delay too high!

Below is my server's perfdump output. The server is an Intel Pentium 4 at 3.06 GHz with 1 GB of RAM, running Windows 2000.
Can anyone help?
Process 4028 started Thu Feb 19 09:56:34 2004
ConnectionQueue:
Current/Peak/Limit Queue Length 0/23/4096
Total Connections Queued 39754
Average Queueing Delay 1179,77 milliseconds
ListenSocket group1:
Address http://213.232.7.201:80
Acceptor Threads 2
Default Virtual Server https-jo.smsnet.com.tr
KeepAliveInfo:
KeepAliveCount 185/1024
KeepAliveHits 17554
KeepAliveFlushes 0
KeepAliveRefusals 0
KeepAliveTimeouts 12830
KeepAliveTimeout 30 seconds
SessionCreationInfo:
Active Sessions 1
Total Sessions Created 48/128
CacheInfo:
enabled yes
CacheEntries 934/1024
Hit Ratio 33616/46930 ( 71,63%)
Maximum Age 60
Native pools:
NativePool:
Idle/Peak/Limit 22/22/128
Work Queue Length/Peak/Limit 0/20/0
Server DNS cache disabled
Async DNS disabled
Performance Counters:
Average Total Percent
Total number of requests: 35776
Request processing time: 0,2735 9785,5781
default-bucket (Default bucket)
Number of Requests: 35776 (100,00%)
Number of Invocations: 474693 (100,00%)
Latency: 0,0149 532,5323 ( 5,44%)
Function Processing Time: 0,2586 9253,0459 ( 94,56%)
Total Response Time: 0,2735 9785,5781 (100,00%)
Sessions:
Process Status Function
4028 response service-dump

Web Server 6.1 will sometimes dramatically misreport the average queueing delay, particularly on SMP systems or systems with fast CPUs. This is almost certainly what is occurring on your server.
This problem should be fixed in 6.1 SP2.
Apart from a low cache hit rate -- perhaps you are making extensive use of CGIs or other dynamic content? -- the rest of the performance data seems reasonable. Unless you are experiencing performance problems, I recommend ignoring the reported average queueing delay.
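If you want to sanity-check the plausibility yourself, you can compare the reported queueing delay against the other counters in the same dump. Below is a minimal sketch in Python using the values from the perfdump above; the variable names and the plausibility rule are mine, not anything the server itself reports:

```python
# Sanity-check the perfdump numbers from the post above.
# Locale commas in the output become decimal points here.

avg_queueing_delay_ms = 1179.77            # "Average Queueing Delay"
avg_response_time_s = 0.2735               # "Request processing time" (average, seconds)
cache_hits, cache_lookups = 33616, 46930   # "Hit Ratio"

avg_response_time_ms = avg_response_time_s * 1000.0
hit_ratio = cache_hits / cache_lookups

# A request cannot, on average, sit in the connection queue four times
# longer than the entire request takes to process end to end; that
# inconsistency is the hint that the counter is being misreported.
suspicious = avg_queueing_delay_ms > avg_response_time_ms

print(f"avg response time: {avg_response_time_ms:.1f} ms")
print(f"cache hit ratio:  {hit_ratio:.2%}")
print(f"queueing delay implausible: {suspicious}")
```

With the numbers above, the average response time works out to roughly 274 ms, so a 1179 ms average queueing delay is inconsistent with the rest of the dump.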

Similar Messages

  • The Average Wait Time of SQL instance "CONFIGMGRSEC" on computer "SEC_SITE_SERVER" is too high

    I have a SCCM 2012 SP1 CU4 environment with SCOM monitoring installed.
    I also have 4 secondary sites installed below my primary. The secondaries use the default SQL Server 2012 Express instance deployed by the secondary site installation.
    My SCOM monitoring is generating tickets with the following message:
    The Average Wait Time of SQL instance "CONFIGMGRSEC" on computer "<SEC_SITE_SERVER>" is too high
    How can I solve this? Or do I need to ignore it?

    Never ignore messages, but tune them.
    In this specific case you might want to take a look at this:
    http://social.technet.microsoft.com/Forums/en-US/ffeefe0d-0ef7-49a3-862e-9be27989dc5d/scom2012-alert-sql-2008-db-average-wait-time-recompilationis-too-high?forum=operationsmanagergeneral
    My Blog: http://www.petervanderwoude.nl/
    Follow me on twitter: pvanderwoude

  • SCOM2012 Alert: SQL 2008 DB Average Wait Time & Recompilationis too high

    We have SCOM 2012 SP1 CU3 installed.
    I continuously receive the critical alerts below from the SQL servers; please help me resolve this issue:
    SQL 2008 DB Average Wait Time is too high
    SQL 2008 DB SQL Recompilation is too high

    I don't know about anyone else, but overriding those monitors and rules didn't work for me. I had to override:
    SQL Re-Compilation monitor for SQL 2012 DB Engine
    SQL Re-Compilation monitor for SQL 2008 DB Engine
    Average Wait Time monitor for SQL 2012 DB
    Average Wait Time monitor for SQL 2008 DB
    Now I am wondering whether other monitors are valid as well; in particular, I have multiple alerts for:
    Buffer Cache Hit Ratio monitor for SQL 2008 DB Engine is too low
    Page Life Expectancy (s) for 2008 DB Engine is too low
    Is anyone else seeing these issues as well?
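For context on why these monitors fire so easily: counters of this kind are ratio counters, typically derived from two samples of cumulative values, i.e. (wait-time delta) divided by (waiting-tasks delta), so a short burst of blocking between two samples is enough to push the average over the threshold. A rough sketch with invented sample values (the numbers and the 250 ms threshold are illustrative, not SCOM's actual defaults):

```python
# Two samples of cumulative wait counters, taken one interval apart.
# All numbers are invented for illustration.
sample1 = {"wait_time_ms": 1_200_000, "waiting_tasks": 9_000}
sample2 = {"wait_time_ms": 1_500_000, "waiting_tasks": 10_000}

delta_wait_ms = sample2["wait_time_ms"] - sample1["wait_time_ms"]
delta_tasks = sample2["waiting_tasks"] - sample1["waiting_tasks"]

# Average wait per waiting task over this interval only.
avg_wait_ms = delta_wait_ms / delta_tasks

THRESHOLD_MS = 250  # hypothetical monitor threshold
alert = avg_wait_ms > THRESHOLD_MS
print(f"average wait: {avg_wait_ms:.0f} ms -> alert: {alert}")
```

Because the average is computed per interval, a single burst of blocking can trip the monitor even when the server is otherwise healthy, which is why tuning or overriding the threshold (as discussed above) is often the right response.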

  • Disk Transfer (reads and writes) Latency is Too High

    I keep getting this error:
    The Logical Disk\Avg. Disk sec/Transfer performance counter threshold has been exceeded.
    I got these errors on the following servers:
    active directory
    SQL01 (I have 2 SQL servers, clustered)
    CAS03 (4 CAS servers, load balanced)
    HUB01
    MBX02 (clustered)
    A little info on our environment:
    * Using SAN storage.
    * Disks are new and working fine.
    * The servers have good hardware (16-32 GB RAM; Xeon or quad-core CPUs, etc.).
    I keep getting these notifications every day. I searched the internet and found the cause to be one of two things:
    1) a disk hardware issue (uncommon, rare)
    2) the queue time on the hard disk (time to write to the disk)
    If anyone can assist me with the following:
    1) Is this a serious issue that will affect our environment?
    2) Is it good to change the monitoring interval to 10 minutes (instead of the default 5)?
    3) Is there any solution for this (to prevent these annoying, possibly useless, notifications)?
    4) What is the cause of this queue delay? FYI, sometimes this happens when nothing and no one is using the server (i.e., the server is almost idle).
    Regards

    The problem is exactly what the knowledge of the alert says is wrong. It is very simple: your disk latency is too high at times.
    This is likely due to overloading the capabilities of the disk, so that during peak times the disk is underperforming. Or it could be that occasionally, due to the design of your disks, you get a very large spike in disk latency, and this trips the "average" counter. You could change this monitor to a consecutive sample threshold monitor, and that would likely quiet it down, but only an analysis of a perfmon trace of several disks over 24 hours would let you determine specifically what's going on.
    SCOM did exactly what it is supposed to do: it alerted you, proactively, to the possible existence of an issue. Now you, using the knowledge already in the alert, use that information to investigate further and determine the corrective action to take.
    Summary
    The Avg. Disk sec/Transfer (LogicalDisk\Avg. Disk sec/Transfer) for the logical disk has exceeded the threshold. The logical disk and possibly even overall system performance may significantly diminish which will result in poor operating system and application
    performance.
    The Avg. Disk sec/Transfer counter measures the average time, in seconds, of each disk transfer (I/O request packet, or IRP) on a specific logical disk. This is one measure of storage subsystem performance.
    Causes
    A high Avg. Disk sec/Transfer performance counter value may occur due to a burst of disk transfer requests by either an operating system or application.
    Resolutions
    To increase the available storage subsystem throughput for this logical disk, do one or more of the following:
    • Upgrade the controllers or disk drives.
    • Switch from RAID-5 to RAID-0+1.
    • Increase the number of actual spindles.
    Be sure to set this threshold value appropriately for your specific storage hardware. The threshold value will vary according to the disk’s underlying storage subsystem. For example, the “disk” might be
    a single spindle or a large disk array. You can use MOM overrides to define exception thresholds, which can be applied to specific computers or entire computer groups.
    Additional Information
    The Avg. Disk sec/Transfer counter is useful in gathering throughput data. If the average time is long enough, you can analyze a histogram of the array’s response to specific loads (queues, request sizes, and so on). If possible, you should
    observe workloads separately.
    You can use throughput metrics to determine:
    • The behavior of a workload running on a given host system. You can track the workload requirements for disk transfer requests over time. Characterization of workloads is an important part of performance analysis and capacity planning.
    • The peak and sustainable levels of performance that are provided by a given storage subsystem. A workload can be used to artificially or naturally push a storage subsystem (in this case, a given logical disk) to its limits. Determining these limits provides useful configuration information for system designers and administrators.
    However, without thorough knowledge of the underlying storage subsystem of the logical disk (for example, knowing whether it is a single spindle or a massive disk array), it can be difficult to provide an optimized one size fits all threshold value.
    You must also consider the Avg. Disk sec/Transfer counter in conjunction with other transfer request characteristics (for example, request size and randomness/sequentially) and the equivalent counters for write disk requests.
    If the Avg. Disk sec/Transfers counter is tracked over time and if it increases with the intensity of the workloads that are driving the transfer requests, it is reasonable to suspect that the logical disk is saturated if throughput does not increase and
    the user experiences degraded system throughput.
    For more information about storage architecture and driver support, see the Storage - Architecture and Driver Support Web site at
    http://go.microsoft.com/fwlink/?LinkId=26156.
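The "consecutive sample threshold" change suggested above is easy to picture: instead of alerting on any single sample over the threshold, the monitor only fires when N samples in a row exceed it, so an isolated latency spike no longer generates a ticket. A minimal sketch (the threshold, sample count, and sample values are invented for illustration):

```python
def consecutive_threshold_alert(samples, threshold, required):
    """Return True if `required` consecutive samples exceed `threshold`."""
    streak = 0
    for value in samples:
        streak = streak + 1 if value > threshold else 0
        if streak >= required:
            return True
    return False

# Avg. Disk sec/Transfer samples in seconds (invented):
# one isolated spike vs. sustained high latency.
spike     = [0.005, 0.004, 0.180, 0.006, 0.005]
sustained = [0.005, 0.060, 0.070, 0.065, 0.004]

print(consecutive_threshold_alert(spike, 0.050, 3))      # isolated spike: no alert
print(consecutive_threshold_alert(sustained, 0.050, 3))  # sustained latency: alert
```

This is the trade-off the reply describes: the consecutive-sample form suppresses noise from bursts, at the cost of reacting a few samples later to genuinely sustained latency.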

  • Internal Email recipient Status: 400 4.4.7 Message delayed, too many concurrent deliveries of this message at this time

    Hi All,
    I've read the other threads on this but not found one exactly like it. We are running Exchange 2010 SP3 Update 1. We recently sent out an internal email to a distribution that had a couple of dynamic DL's in it and a few regular DL's. 
    A lot of people got the initial email and others didn't until the following day. There were around 3,000 users in that group. When investigating with the message tracking log, a lot of users had "400 4.4.7 Message delayed, too many concurrent deliveries of this message at this time" as the status.
    Could this be to do with message throttling? I can find very little information on this on the Internet.
    Thanks in advance,
    Chris

    This is not throttling.
    How many HT servers do you have? Which queue are they all residing in?
    Cheers,
    Gulab Prasad
    Technology Consultant
    Thanks for your response Gulab,
    I have two HT servers and I'm not sure what queue they were residing in as they eventually all went through. Is there any way I can tell this with the message tracking logs?
    Thanks again,
    Chris

  • ORA-25307: Enqueue rate too high, flow control enabled

    I am stuck. I have my Streams setup, and it was previously working well on two of my dev environments. Now when I set up the streams, the CAPTURE process has a state of "CAPTURING CHANGES" for about 10 seconds and then changes to "PAUSED FOR FLOW CONTROL". I believe this is happening because the PROPAGATION process is showing the error "ORA-25307: Enqueue rate too high, flow control enabled".
    I don't know what to tweak to get rid of this error message. The two environments are dev databases with minimal activity, so I don't think it's a case of the APPLY process lagging behind the PROPAGATION process. Has anyone run into this issue? I've verified that my db link works and that my Streams admin user has DBA access. Any help or advice would be greatly appreciated.
    thanks, dave

    As a rule of thumb, you don't need to set GLOBAL_NAME=TRUE as long as you are 100% global-name compliant.
    So setting GLOBAL_NAME=TRUE will have no effect if your dblink is not global-name compliant, and if your installation is global-name compliant, you don't need to set GLOBAL_NAME=TRUE.
    The first thing when you diagnose is to get the exact facts.
    Please run these queries on both source and target to see what is in the queues and where.
    Run them multiple times to see if the figures evolve.
    - If they are fixed, then your Streams is stuck in its last stage. As a cheap and good starting point, just stop/start the capture, propagation and target apply processes. Also check the alert.log on both sites; when you have a propagation problem, they do contain information. If you have re-bounced everything and see no improvement, then the real diagnostic work must start here, but then we know that the message is wrong and the problem is elsewhere.
    - If they are not fixed, then your apply really lags behind for whatever reason, but this is usually easy to find.
    set termout off
    col version new_value version noprint
    col queue_table format A26 head 'Queue Table'
    col queue_name format A32 head 'Queue Name'
    select substr(version,1,instr(version,'.',1)-1) version from v$instance;
    col mysql new_value mysql noprint
    col primary_instance format 9999 head 'Prim|inst'
    col secondary_instance format 9999 head 'Sec|inst'
    col owner_instance format 99 head 'Own|inst'
    COLUMN MEM_MSG HEADING 'Messages|in Memory' FORMAT 99999999
    COLUMN SPILL_MSGS HEADING 'Messages|Spilled' FORMAT 99999999
    COLUMN NUM_MSGS HEADING 'Total Messages|in Buffered Queue' FORMAT 99999999
    set linesize 150
    select case
      when &version=9 then ' distinct a.QID, a.owner||''.''||a.name nam, a.queue_table,
                  decode(a.queue_type,''NORMAL_QUEUE'',''NORMAL'', ''EXCEPTION_QUEUE'',''EXCEPTION'',a.queue_type) qt,
                  trim(a.enqueue_enabled) enq, trim(a.dequeue_enabled) deq, x.bufqm_nmsg msg, b.recipients
                  from dba_queues a , sys.v_$bufqm x, dba_queue_tables b
            where
                   a.qid = x.bufqm_qid (+) and a.owner not like ''SYS%''
               and a.queue_table = b.queue_table (+)
               and a.name not like ''%_E'' '
       when &version=10 then ' a.owner||''.''|| a.name nam, a.queue_table,
                  decode(a.queue_type,''NORMAL_QUEUE'',''NORMAL'', ''EXCEPTION_QUEUE'',''EXCEPTION'',a.queue_type) qt,
                  trim(a.enqueue_enabled) enq, trim(a.dequeue_enabled) deq, (NUM_MSGS - SPILL_MSGS) MEM_MSG, spill_msgs, x.num_msgs msg,
                  x.INST_ID owner_instance
                  from dba_queues a , sys.gv_$buffered_queues x
            where
                   a.qid = x.queue_id (+) and a.owner not in ( ''SYS'',''SYSTEM'',''WMSYS'')  order by a.owner ,qt desc'
       end mysql
    from dual
    set termout on
    select &mysql
    /
    B. Polarski

  • PGC...data rate too high

    Hello,
    The thread nunew33, "Mpeg not valid error message" #4, 31 Jan 2006 3:29 pm describes a certain error message. The user had problems with an imported MPEG movie.
    Now I receive the same message, but the MPEG causing the problem was created by Encore DVD itself!?
    I am working with the german version, but here is a rough translation of the message:
    "PGC 'Weitere Bilder' has an error at 00:36:42:07.
    The data rate of this file is too high for DVD. You must replace the file with one of a lower data rate. - PGC Info: Name = Weitere Bilder, Ref = SApgc, Time = 00:36:42:07"
    My test project has two menus and a slide show with approx. 25 slides and blending as transition. The menus are ok, I verified that before.
    First I thought it was a problem with the audio I use in the slide show. Because I am still in the state of learning how to use the application, I use some test data. The audio tracks are MP3s. I learned already that it is better to convert the MP3s to WAV files with certain properties.
    I did that, but still the DVD generation was not successful.
    Then I deleted all slides from the slide show except the first. Now the generation worked!? Since a single slide (an image file) cannot really have a data rate of its own, there was no sound any more, and the error message appears AFTER the slide shows are generated, while Encore DVD is importing video and audio just before the burning process, I think the MPEG containing the slide show is the problem.
    But this MPEG is created by Encore DVD itself. Can Encore DVD create Data that is not compliant to the DVD specs?
    The last two days I had to find the cause of a "general error". Eventually I found out that image names must not be too long. Now there is something else, and I still have to waste time finding solutions for apparent bugs in Encore DVD. Why doesn't the project check find and tell me about such problems? The problem is that the errors appear at the end of the generation process, so I always have to wait, in my case, approx. 30 minutes.
    If the project check had told me beforehand that there were files with names that are too long, I wouldn't have had to search for this for two days.
    Now I get this PGC error (what is PGC, by the way?), and still have no clue, because again the project check didn't mention anything.
    Any help would be greatly appreciated.
    Regards,
    Christian Kirchhoff

    Hello,
    Thanks, Ruud and Jeff, for your comments.
    The images are all scans of ancient paintings, and they are all rather dark. They are not "optimized", meaning they are JPGs right now (RGB), and they are bigger than the resolution PAL 4:3 would require. I just found out that if I choose "None" as scaling, there is no error, and the generation of the DVD is much, much faster.
    A DVD with a slide show containing two slides and a 4-second transition takes about 3 minutes to generate when the scaling is set to something other than "None". Without scaling it takes approx. 14 seconds. The resulting movie's size is the same (5.35 MB).
    I wonder why the time differs so much. Obviously the images have to be scaled to the target size. But it seems the images are not scaled only once, with the scaled versions cached and reused to generate the blend effect; rather, for every frame the source images seem to be scaled again.
    So I presume that the scaling unfortunately has an effect on the resulting movie too, and thus influences the success of the DVD generation process.
    basic situation:
    good image > 4 secs blend > bad image => error
    variations:
    other blend times don't cause an error:
    good image > 2 secs blend > bad image => success
    good image > 8 secs blend > bad image => success
    other transitions cause an error, too:
    good image > 4 secs fade to black > bad image => error
    good image > 4 secs page turn > bad image => error
    changing the image order prevents the error:
    bad image > 4 secs blend > good image => success
    changing the format of the bad image to TIFF doesn't prevent the error.
    changing colors/brightness of the bad image: a drastic change prevents the error. I adjusted the histogram and made everything much lighter.
    Just a gamma correction with values between 1.2 and 2.0 didn't help.
    changing the image size prevents the error. I decreased the size. The resulting image was still bigger than the monitor area, thus it still had to be scaled a bit by Encore DVD, but with this smaller version the error didn't occur. The original image is approx. 2000 px x 1400 px. Decreasing the size by 50% helped. Less scaling (I tried 90%, 80%, 70% and 60%, too) didn't help.
    using a slightly blurred version (gaussian blur, 2 px, in Photoshop CS) of the bad image prevents the error.
    My guess is that the error depends on rather subtle image properties. The blur doesn't change the images average brightness, the balance of colors or the size of the image, but still the error was gone afterwards.
    The problem is that I will work with slide shows that contain more images than two. It would be too time consuming to try to generate the DVD over and over again, look at which slide an error occurs, change that slide, and then generate again. Even the testing I am doing right now already "ate" a couple of days of my working time.
    Only thing I can do is to use a two image slide show and test image couple after image couple. If n is the number of images, I will spend (n - 1) times 3 minutes (which is the average time to create a two slides slide how with a blend). But of course I will try to prepare the images and make them as big as the monitor resolution, so Encore DVD doesn't have to scale the images any more. That'll make the whole generation process much shorter.
    If I use JPGs or TIFFs, the pixel aspect ratio is not preserved when the image is imported. I scaled one of the images in Photoshop, using a modified menu file that was installed with Encore DVD, because it already has the correct size for PAL, the pixel aspect ratio, and the guides for the safe areas. I saved the image as TIFF and as PSD and imported both into Encore DVD. The TIFF is rendered with a 1:1 pixel aspect ratio and NOT with the D1/DV PAL aspect ratio that is stored in the TIFF. Thus the image gets narrowed and isn't displayed the way I wanted any more. Only the PSD looks correct. But I think I saw this already in another thread...
    I cannot really understand why the MPEG encoding engine would produce bit rates that are illegal and that are not accepted afterwards, when Encore DVD is putting together all the stuff. Why is the MPEG encoding engine itself not throwing an error during the encoding process? This would save the developer so much time. Instead they have to wait until the end, thinking everything went right, and find out then that there was a problem.
    Still, if sometime somebody finds out more about the whole matter I would be glad about further explanations.
    Best regards,
    Christian

  • No Server Name displayed in mail notification alert generated by the "Total CPU Utilization Percentage is too high" monitor

    Hello Everyone,
    I need your assistance with one of the issues with the mail notification alerts generated by SCOM.
    Mail notification received as below:
    Alert: Total CPU Utilization Percentage is too high
    Severity: 2
    ServerName:Microsoft Windows Server 2008 R2 Standard  - The threshold for the Processor\% Processor Time\_Total performance counter has been exceeded. The values that exceeded the threshold are: 97.091921488444015% CPU and a processor queue length of 16.
    Last modified by: System
    The Alert was generated by "Total CPU Utilization Percentage"
    The problem with the above mail notification is that it doesn't mention the affected server. I would like to know how to tweak the monitor to include the server name.
    Thanks & Regards,
    VROAN

    Hi,
    You can add the alert source to the email format on the SCOM server.
    Refer to the link below for the parameters:
    http://blogs.technet.com/b/kevinholman/archive/2007/12/12/adding-custom-information-to-alert-descriptions-and-notifications.aspx
    Regards
    sridhar v

  • Ms word95 to pdf - "security level too high"?

    I'm a brand new user of Acrobat 9.0 (on an XP system) with all kinds of problems (including a major file crash and loss during 9.0 installation; more on that later). For now, I need to get started immediately on converting some MS Word .doc files to PDFs. What I get from 9.0 is the message that "The Security Level is Too High". If it's referring to the MS Word docs, they are unprotected (I've checked and tried several; Word shows they are not protected, and as far as I know, never have been). They were originally created on a Mac with MacWord, but were not protected, and were converted to Windows with MacDrive7. They open in MS Word in good condition and unprotected. What do I need to do to get them into Acrobat and into PDF format? I've also checked the knowledge base here and elsewhere without any clues, except one chap who seemed to be having similar problems (along with serious crashes) using 8.1. Other than that, I'm mystified.
    I've also tried using the context menu 'convert to PDF' method, and also creating a new blank PDF and inserting them. In both cases the security message aborted the process. I need to do this right away. I'm not technically skilled, so if someone can give me some clear instructions I'd be grateful. - red

    Thank you all for responding so quickly. First, I'll mention the serious message and a warning. DO NOT INSTALL ACROBAT 9.0 IN AN ENVIRONMENT WITH WORD 7.0 (or any old(er) MS Word version before 2k).  The consequences are ghastly, including the deletion of half or more of your program files (including your email clients, av software and other primary programs), the corruption of your browser, registry (including restore points) and other not so nice events - worse than most bad viruses.  That's a problem Adobe and I will probably be taking a look at next week. Mean time, they indicate that they are going to add the matter to their KB and elswhere so that users have a heads-up on the issue.
    As for the conversion problem from Word 7.0 .doc to .pdf - Bill, you just about nailed it. It was, indeed, a problem that could be circumvented by going to the printer dialog and setting the printer to 'Adobe PDF' (something a novice wouldn't think of, nor front-line tech support, for that matter). As far as the Word/PDF 'printer' is concerned, you're just printing the file. However, as I understand things, that's how Adobe attaches the Word documents: it does it through the printer interface. Once that setting is changed to the 'Adobe PDF printer', the file is simply picked up from the print queue (or before) and loaded into A9. Save it from A9, and the job is done. So, Bill, if Adobe hadn't found the answer, I do believe you would have been telling me exactly how to do it after a few more posts. The credit, though, goes to Neo Johnson, tech-support supervisor in New Delhi. The last two days (almost 9 hours of phone time) were spent with various tech-support agents at Adobe, but he was the one who finally thought about the interaction between A9 and Word and figured it out.
    Ok -that's the brief.  The rest is a little history/background for whomever is interested (skip, otherwise - not important).  The problem begins with failure to install - first, setup can't find the msi file - it was there, and I browsed it, so that was solved. Then 'invalid licensing - process stopped' messages appear. That was a little tougher and http://kb2.adobe.com/cps/405/kb405970.html  and some other articles had me doing repair, reinstall, and other complex (for me) procedures. One of the problems was that flexnet had failed to install, which was a stumper for me (I couldn't find it to download separately - barely knew what it was/did - and finally understood that Adobe was supposed to install it. After that,  I did several uninstalls, to no effect. Finally I did a few moderate and then deep uninstalls (with Revo) and several reinstalls. Things got progressively worse.  On one reboot, my desktop came up and all the program icons were broken links.  I examined targets and such and then went to my 'program files' directory. To my horror, nearly all my primary program (including thunderbird email client, AVG etc.)  files had disappeared. The folders were simply empty.  Firefox still loaded, but the tabs were non functional.  Several checks and some light disc analysis indicated the files vanished. No trace. However, my document folders and data were intact (also backed-up). I went to restore and found that all the old restore points (including the one's Revo sets before uninstalling) were gone.  If it had been a virus, it couldn't have done a better job at making a mess of things.  At that point, I knew the registry had been toasted and I was facing a complete OS reinstall.  Instead, I opted for reinstalling some of the critical programs (and because the document files appeared to be intact).  After the first few - thunderbird, firefox etc.  
- I was relieved to find that they were picking up on the old settings and restoring themselves to their previous states. I still have a number of these to do, and a few must be re-configured. But that's going OK.
    Then the saga of Adobe: several phone calls; several times the phone connection was cut off and I had to call again and start over from the beginning with a new person. The matter always had to be escalated to the next tier: more time, more queues, no solutions. They went over the Firefox settings and the Adobe settings. They were puzzled about the broken links. Attempts to open doc files (after a fresh install of WinWord) were resulting in 'invalid win32 application'. All kinds of problems made progress difficult. We cleared up the 'invalid...' messages by repairing the file associations (in XP folder options) and then opening the docs in Word and resaving them as something else. It was a labor. Finally, there was simply no answer except, like the post here, that Word 7 is simply too old and uses different scripting. The only solution was to either buy (ugh, ouch!) Word 2007 (and hope that it would load them and save them in an A9-usable form), or try installing Word 2000 (which I have) and processing them through that, and then use Acrobat 8.x to load those and save the PDFs for A9 to use. However, when Adobe said they could not provide me with a free (even trial) version of 8.x to do the job - licensing problems etc. -- it seemed like a really ugly solution. Finally, I'm begging Adobe to give me a free copy of 8.x, and in steps Neo. He can't provide the free copy, but he asks a few questions himself. We go into Adobe and reset some of the security settings (something the other agents didn't know or think of). No dice - still can't load the docs. But then he says, open up Word. OK. Load the file and then hit 'print' - OK, the print dialog comes up. 'Now,' he says, 'open the properties and see what printers are listed.' OK, I do that, and I'll be... 'Adobe pdf printer' is among them. "Just what I thought," he said, "Adobe was hooking up with Word, but didn't have its printer attached."
So we set 'Adobe pdf' as the printer and lo and behold, the docs loaded into Adobe as pdfs.  End of that story. (so bill, you had it too - wish you had answered the phone in the first place!)
    Clean up.  So, there's a few simple solutions, I think (though i'm no techie and you folks will certainly have better ideas). First, I don't buy the story that early versions of Word are either 1) unsupported by MS or, 2) nobody uses them, as valid reasons why not to fix the problem of the "unloadable" docs.  I figure there are at least a couple of aproaches and easy patches that will correct the matter. One is from the Word  side - to is to set the current printer setting to use 'Adobe printer', get the file and then reset the printer back to what it was - default.   The other is to patch A9 to detect legacy source applications and bypass things that would normally make the file unloadable, unless, of course, they were actually protected or, read only files. In that case, Adobe could simply inform the user to 'unprotect' them, the same as it now does with its   'Security Setting too High' message for later versions.  I'm sure there are even better ways. But, that would fix things as far as file loading and conversion.
    As to the installation and crash problems - those need to be addressed. Even if its only a few dozen people that might have the same problem, it needs 1) to be given as a noticable warning and keyword in Adobe documents (which now simply indicate that it can process .doc files);  2) it needs to be examined to  insure systems that have Word 7.x or older can install without problem, and certainly without harming their system.  Adobe has a good reputation and does a good job. That's worth protecting with all customers, even if Marketing can't quite see why and the bean counters can't find much profit in the task.  It's what I expect from professionals and to do less certainly subtracts from Adobe's standing. That should be worth a great deal, I would imagine.
    Anyway, thanks folks - got to get some sleept, and then get those pdfs done and sent to people who are waiting for them. - best to you all, red.

  • java.lang.OutOfMemoryError: class allocation ("Your Java heap size might be set too high")

    I am getting the error below while deploying some SOA applications in WLS 10.3 on the Solaris platform. As a workaround,
    the server instance has to be restarted every time I make a deployment and hit this OOM situation.
    I usually face this issue from the first deployment onwards,
    so every time I make the next deployment the server instance has to be restarted first.
    The server is configured with a 1 GB JVM and runs on JRockit jrrt-4.0.0-1.6.0.
    MEM_ARGS="-Xms1g -Xmx1g -Xgcprio:throughput -Xverbose:gc -XX:+HeapDumpOnOutOfMemoryError -XXcompressedRefs:enable=true"
    Can anybody please provide any pointers to track/solve this issue?
    Log extract:
    ####<Oct 10, 2010 4:27:31 PM PST> <Error> <Deployer> <BOX1> <SERVER2> <[ACTIVE] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <1256893524756> <BEA-149265> <Failure occurred in the execution of deployment request with ID '1256893524756' for task '3'. Error is: 'weblogic.application.ModuleException: '
    weblogic.application.ModuleException:
    at weblogic.servlet.internal.WebAppModule.startContexts(WebAppModule.java:1373)
    at weblogic.servlet.internal.WebAppModule.start(WebAppModule.java:468)
    at weblogic.application.internal.flow.ModuleStateDriver$3.next(ModuleStateDriver.java:204)
    at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:37)
    at weblogic.application.internal.flow.ModuleStateDriver.start(ModuleStateDriver.java:60)
    at weblogic.application.internal.flow.ScopedModuleDriver.start(ScopedModuleDriver.java:201)
    at weblogic.application.internal.flow.ModuleListenerInvoker.start(ModuleListenerInvoker.java:118)
    at weblogic.application.internal.flow.ModuleStateDriver$3.next(ModuleStateDriver.java:205)
    at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:37)
    at weblogic.application.internal.flow.ModuleStateDriver.start(ModuleStateDriver.java:60)
    at weblogic.application.internal.flow.StartModulesFlow.activate(StartModulesFlow.java:28)
    at weblogic.application.internal.BaseDeployment$2.next(BaseDeployment.java:636)
    at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:37)
    at weblogic.application.internal.BaseDeployment.activate(BaseDeployment.java:212)
    at weblogic.application.internal.EarDeployment.activate(EarDeployment.java:16)
    at weblogic.application.internal.DeploymentStateChecker.activate(DeploymentStateChecker.java:162)
    at weblogic.deploy.internal.targetserver.AppContainerInvoker.activate(AppContainerInvoker.java:79)
    at weblogic.deploy.internal.targetserver.operations.AbstractOperation.activate(AbstractOperation.java:569)
    at weblogic.deploy.internal.targetserver.operations.ActivateOperation.activateDeployment(ActivateOperation.java:140)
    at weblogic.deploy.internal.targetserver.operations.ActivateOperation.doCommit(ActivateOperation.java:106)
    at weblogic.deploy.internal.targetserver.operations.AbstractOperation.commit(AbstractOperation.java:323)
    at weblogic.deploy.internal.targetserver.DeploymentManager.handleDeploymentCommit(DeploymentManager.java:820)
    at weblogic.deploy.internal.targetserver.DeploymentManager.activateDeploymentList(DeploymentManager.java:1223)
    at weblogic.deploy.internal.targetserver.DeploymentManager.handleCommit(DeploymentManager.java:436)
    at weblogic.deploy.internal.targetserver.DeploymentServiceDispatcher.commit(DeploymentServiceDispatcher.java:164)
    at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.doCommitCallback(DeploymentReceiverCallbackDeliverer.java:181)
    at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.access$100(DeploymentReceiverCallbackDeliverer.java:12)
    at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer$2.run(DeploymentReceiverCallbackDeliverer.java:68)
    at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:516)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    java.lang.OutOfMemoryError: class allocation, 21977184 loaded, 20M footprint in check_alloc (classalloc.c:213) 1412 bytes requested.
    Java heap 1G reserved, 1G committed
    Paged memory=18014398507524232K/16781304K.
    Your Java heap size might be set too high.
    Try to reduce the Java heap
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:124)
    at weblogic.utils.classloaders.GenericClassLoader.defineClass(GenericClassLoader.java:335)
    at weblogic.utils.classloaders.GenericClassLoader.findLocalClass(GenericClassLoader.java:288)
    at weblogic.utils.classloaders.GenericClassLoader.findClass(GenericClassLoader.java:256)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:303)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
    at weblogic.utils.classloaders.GenericClassLoader.loadClass(GenericClassLoader.java:176)
    at java.lang.Class.getDeclaredConstructors0(Native Method)
    at java.lang.Class.privateGetDeclaredConstructors(Class.java:2389)
    at java.lang.Class.getConstructor0(Class.java:2699)
    at java.lang.Class.newInstance0(Class.java:326)
    at java.lang.Class.newInstance(Class.java:308)
    at com.sun.faces.config.ConfigureListener.configure(ConfigureListener.java:1058)
    at com.sun.faces.config.ConfigureListener.configure(ConfigureListener.java:1129)
    at com.sun.faces.config.ConfigureListener.configure(ConfigureListener.java:542)
    at com.sun.faces.config.ConfigureListener.contextInitialized(ConfigureListener.java:435)
    at weblogic.servlet.internal.EventsManager$FireContextListenerAction.run(EventsManager.java:465)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(Unknown Source)
    at weblogic.servlet.internal.EventsManager.notifyContextCreatedEvent(EventsManager.java:175)
    at weblogic.servlet.internal.WebAppServletContext.preloadResources(WebAppServletContext.java:1784)
    at weblogic.servlet.internal.WebAppServletContext.start(WebAppServletContext.java:2999)
    at weblogic.servlet.internal.WebAppModule.startContexts(WebAppModule.java:1371)
    at weblogic.servlet.internal.WebAppModule.start(WebAppModule.java:468)
    at weblogic.application.internal.flow.ModuleStateDriver$3.next(ModuleStateDriver.java:204)
    at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:37)
    at weblogic.application.internal.flow.ModuleStateDriver.start(ModuleStateDriver.java:60)
    at weblogic.application.internal.flow.ScopedModuleDriver.start(ScopedModuleDriver.java:200)
    at weblogic.application.internal.flow.ModuleListenerInvoker.start(ModuleListenerInvoker.java:117)
    at weblogic.application.internal.flow.ModuleStateDriver$3.next(ModuleStateDriver.java:204)
    at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:37)
    at weblogic.application.internal.flow.ModuleStateDriver.start(ModuleStateDriver.java:60)
    at weblogic.application.internal.flow.StartModulesFlow.activate(StartModulesFlow.java:27)
    at weblogic.application.internal.BaseDeployment$2.next(BaseDeployment.java:635)
    at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:37)
    at weblogic.application.internal.BaseDeployment.activate(BaseDeployment.java:212)
    at weblogic.application.internal.EarDeployment.activate(EarDeployment.java:16)
    at weblogic.application.internal.DeploymentStateChecker.activate(DeploymentStateChecker.java:162)
    at weblogic.deploy.internal.targetserver.AppContainerInvoker.activate(AppContainerInvoker.java:79)
    at weblogic.deploy.internal.targetserver.operations.AbstractOperation.activate(AbstractOperation.java:569)
    at weblogic.deploy.internal.targetserver.operations.ActivateOperation.activateDeployment(ActivateOperation.java:140)
    at weblogic.deploy.internal.targetserver.operations.ActivateOperation.doCommit(ActivateOperation.java:106)
    at weblogic.deploy.internal.targetserver.operations.AbstractOperation.commit(AbstractOperation.java:323)
    at weblogic.deploy.internal.targetserver.DeploymentManager.handleDeploymentCommit(DeploymentManager.java:820)
    at weblogic.deploy.internal.targetserver.DeploymentManager.activateDeploymentList(DeploymentManager.java:1227)
    at weblogic.deploy.internal.targetserver.DeploymentManager.handleCommit(DeploymentManager.java:436)
    at weblogic.deploy.internal.targetserver.DeploymentServiceDispatcher.commit(DeploymentServiceDispatcher.java:163)
    at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.doCommitCallback(DeploymentReceiverCallbackDeliverer.java:181)
    at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.access$100(DeploymentReceiverCallbackDeliverer.java:12)
    at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer$2.run(DeploymentReceiverCallbackDeliverer.java:67)
    at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:516)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)

    It is running on the Solaris 10 platform. The server is configured with a 1 GB JVM and JRockit jrrt-4.0.0-1.6.0.
    MEM_ARGS="-Xms1g -Xmx1g -Xgcprio:throughput -Xverbose:gc -XX:+HeapDumpOnOutOfMemoryError -XXcompressedRefs:enable=true"
    Let me know if you need any more information to track this issue.
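    One thing worth drawing out of the log above (my reading, not a verified fix for this server): the failure happens in check_alloc (classalloc.c), meaning JRockit ran out of native memory for class data, not Java heap - which is exactly why the JVM itself prints "Your Java heap size might be set too high. Try to reduce the Java heap." If this is a 32-bit JVM, a 1 GB committed heap plus the ~21 MB of loaded classes can exhaust the process address space, and the fact that the OOM appears after repeated redeployments also suggests old deployment classes may not be unloading. A sketch of the usual workaround, shrinking the heap to leave native room (the 768m figure is an illustrative assumption, not a tuned value):

```shell
# Reduce -Xms/-Xmx so more native address space remains for class storage.
# 768m is an assumed starting point; tune against your actual live-data size.
# If the OOM still recurs after several redeploys, suspect a classloader leak
# from the redeployed applications rather than heap sizing.
MEM_ARGS="-Xms768m -Xmx768m -Xgcprio:throughput -Xverbose:gc \
-XX:+HeapDumpOnOutOfMemoryError -XXcompressedRefs:enable=true"
export MEM_ARGS
echo "$MEM_ARGS"
```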

  • Bit rate too high DVDSP, build stops.

    DVD Studio Pro used to be DVD Maestro from Discreet, but I was thinking since Apple bought it long ago, it would have a forum.  I searched for DVD and DVDSP and only came up with this forum. 
    If anybody in here uses DVDSP and if you know the solution I would appreciate help with this.
    When building this, I did lower the bit rate after getting the error message.
    As of now I have the bit rate between 4.2 and 5.2 as the max.
    I'm concerned that if I lower it any more it will start looking like a pixelated mess. Does anybody know how low you can take the rate and still have a good-looking project?
    The project was taped in HDV. However, I encoded it using Compressor, so I don't think it's still an HD file that DVDSP can't read. In the settings you'll see "SD".
    I'm using DVD Studio Pro 4.
    The settings I used for compressor were:
    Video:
    Name: MPEG-2 6.2Mbps 2-pass
    Description: Fits up to 90 minutes of video with Dolby Digital audio at 192 Kbps or 60 minutes with AIFF audio on a DVD-5
    File Extension: m2v
    Estimated size: 2.79 GB/hour of source
    Type: MPEG-2 video elementary stream
              Usage:SD DVD
    Video Encoder
              Format: M2V
              Width and Height: Automatic
                        Selected: 720 x 480
              Pixel aspect ratio: NTSC CCIR 601/DV (16:9)
              Crop: None
              Padding: None
              Frame rate: (100% of source)
                        Selected: 29.97
              Frame Controls Automatically selected:
                        Retiming: (Fast) Nearest Frame
                        Resize Filter: Linear Filter
                        Deinterlace Filter: Fast (Line Averaging)
                        Adaptive Details: On
                        Antialias: 0
                        Detail Level: 0
                        Field Output: Same as Source
              Start timecode from source
              Aspect ratio: Automatic
                        Selected 16:9
              Field dominance: Automatic:
                         Selected Progressive
              Average bit rate: 6.2 (Mbps)
              2 Pass VBR enabled
                        Maximum bit rate: 7.7 (Mbps)
              High quality
              Best motion estimation
              Closed GOP Size: 15, Structure: IBBP
              DVD Studio Pro meta-data enabled
    Audio:
    Name: AIFF 48:16
    Description: 48kHz, 16-bit AIFF stereo audio
    File Extension: aiff
    Estimated size: unknown
    Audio Encoder
              16-bit Integer (Big Endian), Stereo (L R), 48.000 kHz
    Error message from DVDSP
    Starting DVD Build BIG_DADDY_DVD...
    Compiler Initializing...
    Precompiling Project BIG_DADDY_DVD
    Compiling VMG Information...
    Created 8 PGCs in VTSM1
    Created 8 PGCs in VMG.
    1 Menu(s) will be created...
    Compiling Menu PGCs...
    Compiling Menu#1 (Menu 1)...
    Rendering Menu:Menu 1,Language:1...
    Generating Transition: VTSM #01, VOB #1...
    Writing VIDEO_TS.VOB
    1 VTSs and 1 Titles will be created...
    Compiling VTS#1 (Big Daddy compress video)...
    Muxing VTS_01_1.VOB
    Video Bitrate Too High
    Build cancelled
    Building was not successful. See log.
    I tried the same project in IDVD and it worked like a charm the first time

    There is still a DVDSP forum - it's under Final Cut Studio at https://discussions.apple.com/community/professional_applications/final_cut_studio
    Select "refine this list" in the blue button to make it easier to find DVDSP-related issues.
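    For what it's worth, the arithmetic behind DVDSP's check can be sketched (my numbers, not from the thread; the ~9.8 Mbps ceiling is an assumption about the practical DVD-Video mux limit). With uncompressed AIFF audio, the 7.7 Mbps video peak leaves well under 1 Mbps of headroom, so transient peaks and mux overhead can still trip "Video Bitrate Too High"; Dolby Digital at 192 kbps, as the preset description itself suggests, leaves roughly 2 Mbps:

```python
# Rough DVD bitrate budget for the Compressor preset quoted above
# (assumption: ~9.8 Mbps practical DVD-Video mux ceiling).
MUX_CEILING_MBPS = 9.8
video_peak_mbps = 7.7                     # "Maximum bit rate" from the preset
aiff_mbps = 48_000 * 16 * 2 / 1_000_000   # 48 kHz / 16-bit stereo AIFF = 1.536 Mbps
ac3_mbps = 0.192                          # Dolby Digital at 192 kbps

with_aiff = video_peak_mbps + aiff_mbps   # 9.236 Mbps -> under 0.6 Mbps of headroom
with_ac3 = video_peak_mbps + ac3_mbps     # 7.892 Mbps -> comfortable margin
print(f"peak video + AIFF: {with_aiff:.3f} Mbps")
print(f"peak video + AC-3: {with_ac3:.3f} Mbps")
```

    In other words, encoding the audio as AC-3 (or trimming the video's maximum bit rate a little) usually restores enough headroom, without dropping the average rate to where SD video starts to pixelate.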

  • Too High Wakeups-from-idle per second

    I have been using Arch for the past few months and I have noticed that the battery drains much quicker than it used to under Windows.
    I used powertop and found that the "Wakeups-from-idle per second" figure is way too high. It is ~300 per second on average, and once I even saw numbers like 8000!
    The page http://www.linuxpowertop.org/powertop.php says it is possible to get down to 3 wakeups running a full GNOME desktop, and that 193 is a lot more than 3.
    But in my case the numbers are far higher than expected. I run Arch with XFCE4.
    Can somebody explain why I am seeing such high numbers, and is this the reason my battery is draining?

    nachiappan wrote: Can somebody explain why I am seeing such high numbers, and is this the reason my battery is draining?
    Powertop shows what is causing the wakeups. If it is the kernel, trying a different frequency governor might help.
    More than likely, tweaking a few different settings together will get you more battery life; the wakeups alone won't be doing all the battery draining.
    There are lots of helpful hints on power saving here:
    https://wiki.archlinux.org/index.php/La … ttery_life
    edit: it seems your CPU isn't being monitored correctly. 3000% seems very wrong.
    What kernel and cpu are you using?
    Last edited by moetunes (2012-02-23 00:12:46)
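    Not from the thread, but for anyone diagnosing the same thing, the usual powertop workflow looks roughly like this (a sketch assuming powertop 2.x run as root; older versions use interactive keys instead of these flags):

```shell
# Measure for 30 seconds and write a report with wakeup sources ranked
# by how often each process, driver, or timer fires.
powertop --time=30 --html=powertop-report.html

# After identifying the worst offenders, apply all of powertop's
# suggested power-saving tunables in one go:
powertop --auto-tune
```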

  • ADSL NOT AVAILABLE AS TRANSMISSION LOSS TOO HIGH?

    Hi, I'm hoping someone can offer some advice. I have lived in the same street for the past 5 years and have tried to get ADSL broadband for that entire time. I live in a small regional coastal town in Northern NSW with a population of about 8000. There are houses 2 doors away that have ADSL broadband, but I have been knocked back several times when applying for ADSL, on the grounds that the "transmission loss was too high" and I'm "too far from the exchange". Can anyone offer any advice on what to do to try to get broadband? I have already changed my residential landline account to a business account (linked to our factory broadband/phone line accounts with Telstra), as Telstra advised us that this would give us priority in the queue. I have to put up with wireless broadband (and at one time the more expensive option of satellite broadband), which disconnects often while I'm trying to work from home. In our business, as in most others, we rely heavily on the ability to send/receive emails and work on the net for research. I went to www.pacific.net.au/servicequal to see what it would say regarding ADSL service options for my phone number. This is what it said: AAPT SQ failed
    Telstra Compatibility Check: passed
    Compatibility Result: B2B Service Number: ******** Product Compatibility: DSL-L2, DSL-L2 compatible with existing products
    SQ Check: passed
    SQ Result: B2B SQ: DSL-L2 Service Number: ******** SQ Response: Pass
    SQ Results:
    Result: 256Kbps/64Kbps (Zone 3), Pass, Y002, Product is supported.
    Result: 512Kbps/512Kbps (Zone 3), Pass, Y002, Product is supported.
    Result: 512Kbps/128Kbps (Zone 3), Pass, Y002, Product is supported.
    Result: 1.5Mbps/256Kbps (Zone 3), Pass, Y002, Product is supported.
    Result: Open 1 (Zone 3), Pass, Y002, Product is supported.
    Result: Open 2 (Zone 3), Pass, Y002, Product is supported.
    SQ Reference: 169937631

    Hi barbara_c 
    Unfortunately there isn't anything that can be done in these cases. 
    The transmission loss is actually tested to determine how high it is, and if it's too high we can't supply the service, as the signal will be too degraded to be usable.
    In addition, it's important to remember that you can't see your line length. So you might be closer to your exchange than a friend as the crow flies, or even driving, however your line doesn't usually take the most direct path from the exchange due to the streets and locations it has to cover to get to your property.

  • Timeline bitrate too high error

    Hi folks,
    I am new here, so hopefully I am asking in the right place. I am new to Encore and trying to understand this error, as it makes no sense to me. I have a .m2v video file that is 4:30 long, 720x480, 29.97 fps, 180 MB. It plays at decent quality on my computer. When I open the file in Bitrate Viewer, I see an average bitrate of 5499 kbps with a peak of 5810 kbps.
    I have that video on a timeline in Encore, and three audio tracks that the viewer can choose from.
    The audio files are .ac3, Dolby Digital only, at 192 kbps. When I go to build the DVD, I get a "timeline bitrate too high" error for that timeline in Encore... What the heck am I missing?

    mbirame wrote:
    Yes, I am certain.  I first tried this with PCM (.wav) files as the math still seemed like it would go lower than 9.6 Mbps.  I then tried with the 192's. 
    3 PCM files would indeed have tripped the error.
    I wonder if Encore still thinks there are PCM audio streams instead of DD?
    Can you please try killing the project and starting over with just the AC3?
    Alternatively, it may help to delete all cache files, as it is quite possible the PCM files are still the ones being referenced here.
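    The arithmetic behind that reply can be sketched (my numbers, using the peak from Bitrate Viewer above; the ~9.8 Mbps ceiling is an assumption about the practical DVD-Video mux limit):

```python
# Why three PCM tracks trip the "timeline bitrate too high" check,
# while three AC-3 tracks do not (assumed ~9.8 Mbps DVD-Video mux ceiling).
MUX_CEILING_MBPS = 9.8
video_peak_mbps = 5.810                   # peak reported by Bitrate Viewer
pcm_mbps = 48_000 * 16 * 2 / 1_000_000    # one 48 kHz/16-bit stereo PCM track = 1.536 Mbps
ac3_mbps = 0.192                          # one Dolby Digital track at 192 kbps

with_pcm = video_peak_mbps + 3 * pcm_mbps   # 10.418 Mbps -> over the ceiling
with_ac3 = video_peak_mbps + 3 * ac3_mbps   # 6.386 Mbps -> comfortably under
print(f"3x PCM: {with_pcm:.3f} Mbps; 3x AC-3: {with_ac3:.3f} Mbps")
```

    So if Encore's cache still references the earlier PCM versions, the budget check would fail even though the AC-3 files themselves are well within bounds.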

  • Video Bitrate Too High-DVD

    I have edited many videos and burned them to DVD in FCPX 10.1.3 without any problem until now, when I hit the dreaded warning message: "Video bitrate too high." The video is around 1 hour and 8 minutes. I had no problem sharing other videos as long as this one to DVD. I have searched Google for this warning message; most of the results relate to older versions of FCPX. I have tried to use Compressor 4 without success, and I can't find any solutions there due to the different features in the current FCPX. How do I fix it? Is there any way I can reduce the video bitrate by tweaking the software? Any suggestions? Your help will be much appreciated.

    C4 is Compressor? Do you have 4.1 or later, or are you still on version 4?
    Duplicate the MPEG-2 preset and, in the video tab, adjust the maximum and average bit rates. See if this helps you get started working with the application:
    http://www.fcpxbook.com/articles/compressor41/
