Too high terminal resolution

Hi!
I have installed AL onto a USB flash drive (preparing for a home NAS). At some point during boot, the console switches to a very high resolution mode. How can I eliminate all this video tuning and keep the raw terminal mode (without loading unneeded modules, and so on)?

Thanks for the tips!
Yes, adding '<driver name>.modeset=0' to the kernel options prevents the mode switching. On the other hand, both the video driver and the drm module still load. Since X11 will not be used at all, I think there should be a more radical way to prevent all these video-related steps. I'd like to keep the console in the state it was in before the video-related modules load. Something must be disabled.
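For example, on a GRUB setup you might combine the kernel parameter with a modprobe blacklist so the DRM stack never loads at all. A minimal sketch, where i915 stands in for whatever driver lsmod shows on your box:

```
# /etc/default/grub -- keep the kernel from switching video modes at boot
GRUB_CMDLINE_LINUX_DEFAULT="nomodeset"
# (run grub-mkconfig -o /boot/grub/grub.cfg afterwards)

# /etc/modprobe.d/blacklist-video.conf -- stop the DRM modules loading at all
# (i915 is an example; check lsmod for the modules your hardware pulls in)
blacklist i915
blacklist drm_kms_helper
blacklist drm
```

With both in place the console should stay in the mode the firmware left it in.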

Similar Messages

  • NB200-10z: background is too big for resolution - taskbar is hidden

    Hi,
    I have a problem with the display on my netbook.
The background is too big for the display and the taskbar is hidden below the bottom of the screen. (I can see this if I take a screenshot.)
    I have tried all the usual settings on the display but cannot get the background to fit inside the display.
    The driver is correct and I have the latest bios update.
It's probably a simple fix, but I am a bit stuck, so any ideas would be welcome.
    Thanks in advance.

    Hi
Did you try to change the screen resolution?
Do this; mostly this happens if too high a screen resolution has been chosen.
I don't know what OS you have, but here is a nice video from Microsoft:
    Change your screen resolution
    http://windows.microsoft.com/en-US/windows7/Change-your-screen-resolution

  • Accidentally set my screen resolution too high

I was playing around with screen resolutions and accidentally set one too high, and now every time I load up my computer it says 'Input not supported'. Please help me.

    It would help greatly if you:
A) Identified the installed operating system
B) Identified your HP desktop PC (look on the service tag and post the complete p/n)
If your OS is Windows 7, start the operating system in Safe Mode: press the power button and, before Windows begins to boot, start tapping the F8 key, then choose Safe Mode. In Safe Mode, click on the Start button and type msconfig in the "search all programs and files" box. Click on the msconfig icon that appears above to invoke msconfig. In the msconfig dialog, choose the Boot tab. Under Boot options, put a tick in the box to the left of Base video, then click Apply and OK. Restart your PC, right-click an unused area of the desktop, and set your resolution; choose the recommended resolution. After you do that, go back to msconfig, remove the tick from Boot options --> Base video, click OK, and restart your PC. All should be back to normal again.
The desktop I am using has a dual display setup, so yours will appear a bit different. The operating system will recommend an optimal resolution and designate it as recommended.
    Best regards,
    erico
    2015 Microsoft MVP - Windows Experience Consumer

  • Are JPEG 3504x2336 pixel pictures too high resolution for the photo books?

    Hi,
I am done creating my photo book and I am ready to order, but I'd like to know if my pictures are too high resolution. They are professional wedding pictures, so they are 7.4 MB each, JPEG, 3504x2336 pixels. (Last time, I printed through MyPublisher, but the pages with multiple pictures didn't turn out well, and they said it was because the resolution was too high, so when they compressed the file for the pages with multiple pictures it didn't turn out well.) Will I run into the same problem if I print directly through iPhoto?
If I do need to lower the resolution, how do I do it, given that I've already created the book?
    Thanks for any help you can give me!

    Welcome to the Apple Discussions.
    When you order the book it will be compressed and a PDF will be created. That PDF is what gets uploaded to Apple (via Kodak) for printing. So the best way to check your book before ordering is to create a PDF, then inspect it. Chances are, what you see in your PDF is what you will get printed.
    To do this, select your book and do File > Print. You'll get an "Assembling book" progress bar. That can take several minutes. When done, you get the Print Dialog box. Click the PDF button and choose "Save as PDF..." Choose a name and location for the file (the Desktop is convenient) and Save. Then switch to the Finder and open that PDF in Preview. You might select a page or two and do a test print yourself, to see if the quality looks good. The PDF saves you from having to print out the whole book yourself to check for errors.
    Regards.

  • When trying to project video from their Win 7 laptop, the CTS1300 TP unit states that the resolution is too high

    Hi,
    Have a customer who advises as per the discussion title :-
    When trying to project video from their Win 7 laptop, the CTS1300 TP unit states that the resolution is too high
It had worked fine up until the upgrade to Windows 7.
The customer advises that colleagues can connect and project OK at another site using a Win7 laptop; however, that site uses the CTS3010 and CTS1000 and, as yet, it is unconfirmed which they used to test on.
    I could raise a case with TAC but wondered if anyone knows of any issue with Windows 7 and the CTS1300
    Many Thanks
    Nigel

I've had issues with Win7 laptops before, but it all depends on the model of the laptop, what its display is capable of, and whether the end user is using it in desktop extension / duplication / remote-only mode, etc.
In most cases, switching it to the second-screen-only mode fixes the issue; worst case, rebooting it while it's set this way and connected fixes the problem.
So, from my experience, it's pretty well always a problem with the laptop/user, and very rarely (if ever) a problem with the TelePresence device.
    Wayne

  • Resolution too high for monitor

hi again, I had some problems installing Arch Linux, so I quit and tried Ubuntu, but it seems it doesn't like me and my ATI card, so I couldn't get the cube and all the effects working.
so here I am again; my problem was:
i install arch
make all updates
install gnome and gnome extras etc
then, i run "x -configure"
create an account
and i activate gdm (through "rc.config", I think)
Now I get to the problem: when I try to start Arch and it comes to the graphical environment, the screen goes blank and says it can't support the resolution, that it's too high. Should that be automatic? How can I solve the problem? A driver problem? Should I use the CD that came with my TFT monitor? help me please :(

DarkForte wrote: Try editing your xorg.conf by hand and removing the unsupported resolutions.
    This can be done with a line that looks something like this:
    Modes "1280x1024" "1024x768" "800x600" "640x480"
    Make sure the highest resolution is the one your monitor supports.
    Edit: that line goes below the "Depth" entries in the "Screen" section (usually near the bottom) of xorg.conf.
    Last edited by peets (2008-01-04 02:16:55)
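Put together, the relevant fragment of xorg.conf might look like this (a sketch; the Identifier names and depth must match what `X -configure` generated for you):

```
Section "Screen"
    Identifier   "Screen0"
    Device       "Card0"
    Monitor      "Monitor0"
    DefaultDepth 24
    SubSection "Display"
        Depth  24
        Modes  "1280x1024" "1024x768" "800x600" "640x480"
    EndSubSection
EndSection
```

X picks the first listed mode the monitor accepts, so putting your panel's native resolution first is the safest ordering.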

  • Alert: Logical disk transfer (reads and writes) latency is too high Resolution state

    Hi 
We are getting the following errors for my two virtual servers, and we are getting this alert continuously. My setup is a Windows 2008 R2 SP1 two-node Hyper-V cluster, which hosts 7 guest OSes; I am facing this problem with two of them. Since this alert started,
my backup runs slow.
    Alert: Logical disk transfer (reads and writes) latency  is too high
    Source: E:
    Path: Servername.domain.com
    Last modified by: System
Last modified time: 4/23/2013 4:15:47 PM
Alert description: The threshold for the Logical Disk\Avg. Disk sec/Transfer performance counter has been exceeded.
    Alert view link: "http://server/OperationsManager?DisplayMode=Pivot&AlertID=%7bca891ba3-e9f2-421f-9994-7b4d6e867b33%7d"
    Notification subscription ID generating this message: {F71E01AF-0BE6-8377-7BE5-5CB6F5C037A1}
Regards
    Mahesh

    Hi,
Please see if the following helps:
Disk transfer (reads and writes) latency is too high
The threshold for the Logical Disk\Avg. Disk sec/Transfer performance counter has been exceeded
If they are of no help, try asking this question in the Operations Manager - General forum, since these alerts are generated by SCOM.
    Regards, Santosh

  • Bit rate too high

I have been encoding a DVD all day, using the same settings (DVDSP's factory default for an SD DVD). I then had to replace some footage in one of the videos, and that is when I started getting the message about the bit rate being too high. The replaced footage was the same duration as the previous media, and no additional video effects or audio were added.
I searched forums, which suggested that I should delete a dvdstudiopro.plist file, but I can't find this file! I am running OS X Lion 10.7.4 and the latest version of DVD Studio Pro.
    Any ideas? (Thanks in anticipation)
    Mike

    The file is in: your user name/Library/Preferences.
    Alternatively, you can download this free tool, which will remove the file for you and provides the option to backup your FCS Preference files.
    Mac OS X Lion hides the Library directory by default, to access it temporarily bring Finder to the front, hold the Option key and select it from the Go menu bar item.
    To have permanent access type the following into Terminal:
    chflags nohidden ~/Library/
    execute the command by pressing Return.
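If you prefer to do the whole thing in Terminal, a small helper that backs the file up before deleting is safer than a bare rm. A sketch; the exact plist name below is an assumption, so check ~/Library/Preferences for the real one first:

```shell
# remove_with_backup: copy a file to <name>.bak alongside it, then delete it.
remove_with_backup() {
  f="$1"
  [ -f "$f" ] || return 0   # nothing to do if the file is absent
  cp "$f" "$f.bak"          # keep a copy in case the reset misbehaves
  rm "$f"
}

# Example against the (hypothetical) DVD Studio Pro preference file:
# remove_with_backup "$HOME/Library/Preferences/com.apple.dvdstudiopro.plist"
```

If deleting the plist doesn't help, the .bak copy lets you restore your preferences unchanged.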

  • Can't start MAIL or STORE because of "too low screen resolution"

I can't start either the Mail app or the Store app because of a "too low screen resolution".
My netbook (eMachines eM350) has a maximum of 1024x600; these apps require at least 1024x768.

I have developed a lot of code, and while more recent machines have higher resolutions, I find it prudent not to assume everyone has an 8K panel.
No one cares much what you have or have not done; this thread is not about you. If you'd actually developed any Modern apps you'd know what the minimum resolution for them is, as has been stated numerous times here already. Stating that you develop for 640x480 displays clearly indicates that you've never developed a Modern app. Additionally, it should be obvious even to you that a minimum resolution of 1024x768 does not assume that everyone has "an 8K panel".
Go write some shaders; you clearly have no idea whatsoever about development.
I can make programs rescale to fit any screen size, even if you can't figure that out; programmatically designed dialogs can adapt.

  • PGC...data rate too high

Hello,
The message in nunew33, "Mpeg not valid error message" #4, 31 Jan 2006 3:29 pm, describes a certain error message. That user had problems with an imported MPEG movie.
Now I receive the same message, but the MPEG that is causing the problem was created by Encore DVD itself!?
I am working with the German version, but here is a rough translation of the message:
"PGC 'Weitere Bilder' has an error at 00:36:42:07.
The data rate of this file is too high for DVD. You must replace the file with one of a lower data rate. - PGC Info: Name = Weitere Bilder, Ref = SApgc, Time = 00:36:42:07"
My test project has two menus and a slide show with approx. 25 slides and a blend as the transition. The menus are OK; I verified that before.
First I thought it was a problem with the audio I use in the slide show. Because I am still learning how to use the application, I use some test data. The audio tracks are MP3s. I have already learned that it is better to convert the MP3s to WAV files with certain properties.
I did that, but still the DVD generation was not successful.
Then I deleted all slides from the slide show but the first. Now the generation worked!? Since a single slide (a still image) cannot have a data rate of its own, there was no sound any more, and the error message appears AFTER the slide shows are generated (while Encore DVD is importing video and audio just before the burning process), I think that the MPEG containing the slide show is the problem.
But this MPEG is created by Encore DVD itself. Can Encore DVD create data that is not compliant with the DVD specs?
The last two days I had to find the cause of a "general error". Eventually I found out that image names must not be too long. Now there is something else, and I still have to waste time finding solutions for apparent bugs in Encore DVD. Why doesn't the project check find and tell me such problems? The problem is that the errors appear at the end of the generation process, so I always have to wait for, in my case, approx. 30 minutes.
If the project check had told me beforehand that there are files with names that are too long, I wouldn't have had to search for this for two days.
Now I get this PGC error (what is PGC, by the way?), and still have no clue, because again the project check didn't mention anything.
    Any help would be greatly appreciated.
    Regards,
    Christian Kirchhoff

Hello,
thanks, Ruud and Jeff, for your comments.
The images are all scans of ancient paintings, and they are all rather dark. They are not "optimized", meaning they are JPGs right now (RGB), and they are bigger than the resolution for PAL 4:3 would require. I just found out that if I choose "None" as scaling, there is no error, and the generation of the DVD is much, much faster.
A DVD with a slide show containing two slides and a 4-second transition takes about 3 minutes to generate when the scaling is set to something other than "None". Without scaling it takes approx. 14 seconds. The resulting movie's size is the same (5.35 MB).
I wonder why the time differs so much. Obviously the images have to be scaled to the target size. But it seems the images are not scaled only once, with the scaled versions cached and reused to generate the blend effect; instead, for every frame the source images seem to be scaled again.
So I presume that the scaling unfortunately has an effect on the resulting movie, too, and thus influences the success of the DVD generation process.
    basic situation:
    good image > 4 secs blend > bad image => error
    variations:
    other blend times don't cause an error:
    good image > 2 secs blend > bad image => success
    good image > 8 secs blend > bad image => success
    other transitions cause an error, too:
    good image > 4 secs fade to black > bad image => error
    good image > 4 secs page turn > bad image => error
    changing the image order prevents the error:
    bad image > 4 secs blend > good image => success
    changing the format of the bad image to TIFF doesn't prevent the error.
    changing colors/brightness of the bad image: a drastic change prevents the error. I adjusted the histogram and made everything much lighter.
    Just a gamma correction with values between 1.2 and 2.0 didn't help.
    changing the image size prevents the error. I decreased the size. The resulting image was still bigger than the monitor area, thus it still had to be scaled a bit by Encore DVD, but with this smaller version the error didn't occur. The original image is approx. 2000 px x 1400 px. Decreasing the size by 50% helped. Less scaling (I tried 90%, 80%, 70% and 60%, too) didn't help.
    using a slightly blurred version (gaussian blur, 2 px, in Photoshop CS) of the bad image prevents the error.
My guess is that the error depends on rather subtle image properties. The blur doesn't change the image's average brightness, the balance of colors, or the size of the image, but still the error was gone afterwards.
The problem is that I will work with slide shows that contain more than two images. It would be too time-consuming to generate the DVD over and over again, look at which slide an error occurs on, change that slide, and then generate again. Even the testing I am doing right now has already "eaten" a couple of days of my working time.
The only thing I can do is use a two-image slide show and test image pair after image pair. If n is the number of images, I will spend (n - 1) times 3 minutes (the average time to create a two-slide slide show with a blend). But of course I will try to prepare the images to be as big as the monitor resolution, so Encore DVD doesn't have to scale them any more. That will make the whole generation process much shorter.
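The (n - 1) times 3 minutes estimate is easy to check; a quick shell sketch, using the ~25 slides and ~3 minutes per build quoted in this thread:

```shell
# Cost of pairwise-testing a slide show: one two-slide build per adjacent
# image pair, at roughly 3 minutes per build (figures quoted in the thread).
n=25                      # slides in the show
builds=$(( n - 1 ))       # adjacent pairs to test
minutes=$(( builds * 3 ))
echo "$builds test builds, about $minutes minutes"
```

So a full pairwise test of this show costs over an hour of generation time, which is why avoiding the rescaling step matters so much here.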
If I use JPGs or TIFFs, the pixel aspect ratio is not preserved when the image is imported. I scaled one of the images in Photoshop, using a modified menu file that was installed with Encore DVD, because it already has the correct size for PAL, the pixel aspect ratio, and the guides for the safe areas. I saved the image as TIFF and as PSD and imported both into Encore DVD. The TIFF is rendered with a 1:1 pixel aspect ratio and NOT with the D1/DV PAL aspect ratio that is stored in the TIFF. Thus the image gets narrowed and isn't displayed the way I wanted any more. Only the PSD looks correct. But I think I saw this already in another thread...
I cannot really understand why the MPEG encoding engine would produce bit rates that are illegal and are not accepted afterwards, when Encore DVD is putting together all the material. Why is the MPEG encoding engine itself not throwing an error during the encoding process? This would save the developer so much time. Instead you have to wait until the end, thinking everything went right, only to find out that there was a problem.
    Still, if sometime somebody finds out more about the whole matter I would be glad about further explanations.
    Best regards,
    Christian

  • Disk Transfer (reads and writes) Latency is Too High

I keep getting this error:
the Logical Disk\Avg. Disk sec/Transfer performance counter has been exceeded.
I got these errors on the following servers:
    active directory
    SQL01 (i have 2 sql clustered)
    CAS03 (4 cas server loadbalanced)
    HUB01
    MBX02(Clustered)
A little info on our environment:
* Using SAN storage.
* Disks are new and working fine.
* The servers have good hardware components (16-32 GB RAM, Xeon or quad-core...).
I keep getting these notifications every day; I searched on the internet and found the cause to be one of two things:
1) a disk hardware issue (not common; rare)
2) the queue time on the hard disk (time to write to the hard disk)
If anyone can assist me with the following:
1) Is this a serious issue that will affect our environment?
2) Is it good to change the monitoring interval to 10 minutes (instead of the default 5)?
3) Is there any solution for this (to prevent these annoying, useless(??) notifications)?
4) What is the cause of this queue delay? FYI, sometimes this happens when nothing and no one is using the server (i.e. the server is almost idle).
    Regards

The problem is... exactly what the knowledge of the alert says is wrong. It is very simple: your disk latency is too high at times.
This is likely due to overloading the capabilities of the disk, so that during peak times the disk is underperforming. Or it could be that occasionally, due to the design of your disks, you get a very large spike in disk latency, and this trips the "average" counter. You could change this monitor to be a consecutive-sample threshold monitor, and that would likely quiet it down... but only by analyzing a perfmon capture of several disks over 24 hours would you be able to determine specifically what's going on.
SCOM did exactly what it is supposed to do: it alerted you, proactively, to the possible existence of an issue. Now you, using the knowledge already in the alert, use that information to investigate further and determine the corrective action to take.
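The consecutive-sample idea is simple: fire only when several samples in a row breach the limit, so a single spike no longer trips the alert. A minimal shell sketch with illustrative numbers (this is not SCOM's actual implementation, just the logic):

```shell
# Fire only when 3 consecutive latency samples (seconds) exceed the threshold.
# Sample values below are illustrative, not taken from any real counter.
samples="0.01 0.05 0.06 0.07 0.02"
threshold=0.04
consecutive=0
alert=no
for s in $samples; do
  if awk -v a="$s" -v t="$threshold" 'BEGIN{exit !(a > t)}'; then
    consecutive=$(( consecutive + 1 ))          # streak continues
    [ "$consecutive" -ge 3 ] && alert=yes       # sustained breach -> alert
  else
    consecutive=0                               # any good sample resets it
  fi
done
echo "alert=$alert"
```

With an average-based monitor the single 0.07 spike could trip the alert on its own; with the streak logic it takes a sustained run of bad samples.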
    Summary
    The Avg. Disk sec/Transfer (LogicalDisk\Avg. Disk sec/Transfer) for the logical disk has exceeded the threshold. The logical disk and possibly even overall system performance may significantly diminish which will result in poor operating system and application
    performance.
    The Avg. Disk sec/ Transfer counter measures the average rate of disk Transfer requests (I/O request packets (IRPs)) that are executed per second on a specific logical disk. This is one measure of storage subsystem throughput.
    Causes
    A high Avg. Disk sec/Transfer performance counter value may occur due to a burst of disk transfer requests by either an operating system or application.
    Resolutions
To increase the available storage subsystem throughput for this logical disk, do one or more of the following:
• Upgrade the controllers or disk drives.
• Switch from RAID-5 to RAID-0+1.
• Increase the number of actual spindles.
Be sure to set this threshold value appropriately for your specific storage hardware. The threshold value will vary according to the disk's underlying storage subsystem. For example, the "disk" might be a single spindle or a large disk array. You can use MOM overrides to define exception thresholds, which can be applied to specific computers or entire computer groups.
    Additional Information
The Avg. Disk sec/Transfer counter is useful in gathering throughput data. If the average time is long enough, you can analyze a histogram of the array's response to specific loads (queues, request sizes, and so on). If possible, you should observe workloads separately.
You can use throughput metrics to determine:
• The behavior of a workload running on a given host system. You can track the workload requirements for disk transfer requests over time. Characterization of workloads is an important part of performance analysis and capacity planning.
• The peak and sustainable levels of performance that are provided by a given storage subsystem. A workload can be used to artificially or naturally push a storage subsystem (in this case, a given logical disk) to its limits. Determining these limits provides useful configuration information for system designers and administrators.
However, without thorough knowledge of the underlying storage subsystem of the logical disk (for example, knowing whether it is a single spindle or a massive disk array), it can be difficult to provide an optimized one-size-fits-all threshold value. You must also consider the Avg. Disk sec/Transfer counter in conjunction with other transfer request characteristics (for example, request size and randomness/sequentiality) and the equivalent counters for write disk requests.
If the Avg. Disk sec/Transfer counter is tracked over time and it increases with the intensity of the workloads driving the transfer requests, it is reasonable to suspect that the logical disk is saturated if throughput does not increase and the user experiences degraded system throughput.
    For more information about storage architecture and driver support, see the Storage - Architecture and Driver Support Web site at
    http://go.microsoft.com/fwlink/?LinkId=26156.

  • [Solved] Powerline font 1px too high, using fontconfig w/ infinality

    Hey guys,
    I'm using the Termite terminal emulator which uses fontconfig for font rendering options. I'm also using vim powerline.
    I use the Monaco font and have installed the otf-powerline-symbols-git package.
    This is what my powerline looks like up close. As you can see the symbols appear to be 1px too high.
    Here's my font config
    <?xml version="1.0"?>
    <!DOCTYPE fontconfig SYSTEM "fonts.dtd">
    <fontconfig>
    <!-- Use Monaco for everything monospace -->
    <match target="pattern">
    <test qual="any" name="family"><string>Consolas</string></test>
    <edit name="family" mode="assign" binding="same">
    <string>Monaco</string>
    </edit>
    </match>
    <match target="pattern">
    <test qual="any" name="family"><string>Courier</string></test>
    <edit name="family" mode="assign" binding="same">
    <string>Monaco</string>
    </edit>
    </match>
    <match target="pattern">
    <test qual="any" name="family"><string>Courier New</string></test>
    <edit name="family" mode="assign" binding="same">
    <string>Monaco</string>
    </edit>
    </match>
    <alias>
    <family>monospace</family>
    <prefer>
    <family>Monaco</family>
    <family>Consolas</family>
    </prefer>
    </alias>
<!-- Monaco is a little bit too large, making fallback fonts look small -->
    <match target="font">
    <test name="family"><string>Monaco</string></test>
    <edit name="pixelsize" mode="assign">
<times><name>pixelsize</name><double>0.9</double></times>
    </edit>
    </match>
    </fontconfig>
    It doesn't seem like there's a fontconfig option to move the font's baseline down. Any ideas?
    Last edited by EvanPurkhiser (2013-08-10 02:45:35)

    Thanks bohoomil, I tried changing this setting (it was already set to true since I'm using your infinality bundle!) but it didn't seem to have an effect.
    It looks like I can fix this alignment issue by actually editing the OTF symbols font with fontforge. But it would be much nicer if there was some kind of setting I could change so I could use the upstream font.
    Edit: Reading through the issues, it looks like it's common practice to just accept that the font needs to be edited on a per-system basis.
    Last edited by EvanPurkhiser (2013-08-10 01:21:06)

  • I/O too high and it almost reach 100%

    Experts:
I use Berkeley DB to store some index information; the key values are strings, usually 20 chars.
The hit rate shows 98%, and db_stat shows no more pages being paged out or in, while the I/O rate is too high. I only run the search command.
    2GB Total cache size
    1 Number of caches
    1 Maximum number of caches
    2GB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    3245477 Requested pages found in the cache (98%)
    57652 Requested pages not found in the cache
    32 Pages created in the cache
    57616 Pages read into the cache
    48199 Pages written from the cache to the backing file
    0 Clean pages forced from the cache
    0 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    57665 Current total page count
    56884 Current clean page count
    781 Current dirty page count
    262147 Number of hash buckets used for page location
    3360554 Total number of times hash chains searched for a page
    1 The longest hash chain searched for a page
    3245235 Total number of hash chain entries checked for page
    41241 The number of hash bucket locks that required waiting (0%)
    40939 The maximum number of times any hash bucket lock was waited for (1%)
    1872 The number of region locks that required waiting (3%)
    0 The number of buffers frozen
    0 The number of buffers thawed
    0 The number of frozen buffers freed
    57676 The number of page allocations
    0 The number of hash buckets examined during allocations
    0 The maximum number of hash buckets examined for an allocation
    0 The number of pages examined during allocations
    0 The max number of pages examined for an allocation
    23 Threads waited on page I/O
    Pool File: DG_11_23_1
    4096 Page size
    0 Requested pages mapped into the process' address space
    2954110 Requested pages found in the cache (98%)
    57629 Requested pages not found in the cache
    0 Pages created in the cache
    57611 Pages read into the cache
    48072 Pages written from the cache to the backing file
    Pool File: DG_20_24_1
    8192 Page size
    0 Requested pages mapped into the process' address space
    291367 Requested pages found in the cache (99%)
    23 Requested pages not found in the cache
    32 Pages created in the cache
    5 Pages read into the cache
    127 Pages written from the cache to the backing file
    2GB Total cache size
    1 Number of caches
    1 Maximum number of caches
    2GB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    14M Requested pages found in the cache (99%)
    58927 Requested pages not found in the cache
    32 Pages created in the cache
    58891 Pages read into the cache
    48199 Pages written from the cache to the backing file
    0 Clean pages forced from the cache
    0 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    58940 Current total page count
    54443 Current clean page count
    4497 Current dirty page count
    262147 Number of hash buckets used for page location
    14M Total number of times hash chains searched for a page (14039718)
    1 The longest hash chain searched for a page
    13M Total number of hash chain entries checked for page (13921739)
    251044 The number of hash bucket locks that required waiting (0%)
    250230 The maximum number of times any hash bucket lock was waited for (2%)
    1872 The number of region locks that required waiting (3%)
    0 The number of buffers frozen
    0 The number of buffers thawed
    0 The number of frozen buffers freed
    58951 The number of page allocations
    0 The number of hash buckets examined during allocations
    0 The maximum number of hash buckets examined for an allocation
    0 The number of pages examined during allocations
    0 The max number of pages examined for an allocation
    23 Threads waited on page I/O
    Pool File: DG_11_23_1
    4096 Page size
    0 Requested pages mapped into the process' address space
    13M Requested pages found in the cache (99%)
    58904 Requested pages not found in the cache
    0 Pages created in the cache
    58886 Pages read into the cache
    48072 Pages written from the cache to the backing file
    Pool File: DG_20_24_1
    8192 Page size
    0 Requested pages mapped into the process' address space
    618586 Requested pages found in the cache (99%)
    23 Requested pages not found in the cache
    32 Pages created in the cache
    5 Pages read into the cache
    127 Pages written from the cache to the backing file
I cannot understand why the I/O rate is so high; can any expert give some help?
    thanks
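As an aside, the hit rate db_stat prints can be recomputed from the raw found / not-found counts quoted above, which makes it easy to track across snapshots, e.g.:

```shell
# Cache hit rate from the first db_stat snapshot in this post.
found=3245477     # "Requested pages found in the cache"
missed=57652      # "Requested pages not found in the cache"
rate=$(awk -v f="$found" -v m="$missed" 'BEGIN{printf "%.1f", 100 * f / (f + m)}')
echo "cache hit rate: ${rate}%"
```

Note that even a ~98% hit rate leaves ~57k page reads going to disk, which is consistent with the high I/O being observed despite the good-looking percentage.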

    This issue is discussed internally. If the user agrees, he can post the resolution on this topic.
    Bogdan Coman

  • Error message Translational currency is too high when posting GR for a PO

    Hi,
I am trying to post a GR for a PO with document currency in GBP and local currency in EUR. When I try to post the GR, the system gives the message "Translational currency is too high", and the value updated for previous GRs (for a qty of 10) is very high, like 37,056,361,610.68 GBP. I want to know how this value is picked by the system. I know the exchange rate table maintained plays an important role, but the value I am getting is far too high and not at all related to the rates we maintained in table TCURR.
I want to know why the system is giving such a high value.
Further, the PO quantity is 17,000 and the net price is 1 GBP.
We posted GRs successfully earlier, but now we are facing this high value and error in the GR.
    with regards,
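As a sanity check on the figures in the question: at 17,000 pieces and a net price of 1 GBP, the GR should be worth 17,000 GBP, so the 37-billion figure points at the currency translation rather than the quantities. The GBP-to-EUR rate below is an assumed illustration, not a value from TCURR:

```shell
# Expected GR value: quantity x net price, then translated at the exchange rate.
qty=17000
price_gbp=1
rate=1.17   # assumed GBP->EUR rate for illustration only
gbp=$(( qty * price_gbp ))
eur=$(awk -v g="$gbp" -v r="$rate" 'BEGIN{printf "%.2f", g * r}')
echo "expected GR value: $gbp GBP (about $eur EUR)"
```

Any translated value orders of magnitude above this suggests the rate type or the ratio factors behind the TCURR entry are being applied incorrectly, which is what the OSS note in the reply addresses.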

    Hi,
    Good afternoon and greetings,
    Wishing you a Happy New Year 2007
    Please go through the following OSS Note
    Note 191927 - Posting logic: GR for foreign currency PO
    Thanking you
    With kindest regards
    Ramesh Padmanabhan

  • Error Rounding Difference too high when reset cleared document using FBRA

My client posted a clearing document with T-Code F-51. This clearing document cleared 6040 open items.
Now they want to reset and reverse that cleared document.
We get the error "Rounding difference too high" when we reset the cleared document using FBRA.
We kindly need your advice.
Many thanks in advance.
    Regards,

    Hello,
    Please let me know the ERROR number.
    Regards,
    Ravi

Maybe you are looking for

  • Error while running PAPI API

Hi, I am getting the following error while running PAPI code on one of the machines. It runs fine on another machine. I am using JRE 1.5 on both machines. Any ideas?? Exception in thread "main" java.lang.NullPointerException      at java.util

  • Compiler replaces final static

Hi! I've got the following problem: I need to check the version of an external jar, which is contained in an attribute of a class called Version. The problem with this is that the attribute is static final and therefore is replaced at compile-t

  • Executing java program

I've installed the JDK, JRE 1.5.0_04 and 1.5.0_07, and NetBeans IDE 4.0 and 7.0. I am able to compile my Java program but not able to run it. When I try to run it, I get the error ""Exception in thread "main" java.lang.NoClassDefFoundError:" followed by fil

  • Windows 8.1 Pro Upgrade

Hi, whenever I try to install Windows 8.1 Pro, it asks to uninstall HP Security Tools, Face Recognition, and Theft Recovery. Is there any other way to install 8.1 without uninstalling the above HP software?

  • Should I Get the High Resolution 17" MBP Screen

We have one designer in our firm who is a staunch PC person, but given that everyone else has switched to a Mac at this point, we've convinced her to try a Mac. I'm going to get her a 17" MacBook Pro, but my question is whether I should get her the higher