URGENT: Frequent expensive full GCs (after the first full GC) with JDK 1.5.0_06

On load testing the application, we see that:
- Everything works fine for the first 35 minutes, until we hit the first full GC. After this first full GC, which happened after 35 minutes (about 500 minor GCs), the full GCs strangely start happening very frequently, about once every 2-3 minutes (once every 5-6 minor GCs). This degrades performance significantly.
- The frequency of minor GCs remains the same, about one every 15-30 seconds.
- Monitoring with visualGC and JConsole shows that until the first full GC, the minor GCs were collecting a majority of the newly allocated objects, with only a few spilling over to the survivor spaces and subsequently to the old generation. Hence the old generation was filling up very slowly, and it took 35 minutes for it to fill up and trigger the first full GC.
- However, immediately after this first full GC (note that the load on the servers remained the same), the minor GCs started collecting far fewer objects: a huge number of objects spilled over to the survivor spaces (which filled up completely) and subsequently to the old generation, which also filled up within 5-6 minor GCs, leading to frequent major GCs that were very expensive (~20 seconds each).
Here are our JVM parameters:
-XX:+PrintGCDetails -XX:+PrintGCApplicationConcurrentTime -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCTimeStamps
-XX:+PrintClassHistogram -XX:+PrintTenuringDistribution -Xms2560m -Xmx2560m -XX:NewSize=896m -XX:MaxNewSize=896m
-XX:-UseAdaptiveSizePolicy -XX:SurvivorRatio=8 -XX:InitialSurvivorRatio=8 -XX:MinSurvivorRatio=8 -XX:CompileThreshold=8000 -XX:PermSize=256m -XX:MaxPermSize=512m
This is an urgent issue, and help is very much appreciated.

Cross-posted; see a suggestion made at:
http://forum.java.sun.com/thread.jspa?threadID=5175759&messageID=9862052#9862052

Similar Messages

  • Frequent expensive full GCs (after the first full GC) with JDK_1.5.0_06


    Did you try the -XX:+UseConcMarkSweepGC collector?
    One possible, somewhat contrived explanation for the frequent full GCs after the first one is the following. Until the first full GC, most objects die in the survivor spaces and very few get promoted (your observations). At the first full GC, the entire heap is collected and all survivors end up in the tenured (old) generation, irrespective of age. This can include objects that are relatively quite young. The application may store references to newer, younger objects in these relatively young but already tenured objects. Suddenly there are many more references from the old generation to the young generation, which keeps the referenced objects artificially "alive": the tenured objects may already have become unreachable, but they will not be collected and will keep the young-generation objects from being collected. So you now have artificially longer object lifetimes, resulting in a larger volume of survivors. If the survivor spaces overflow, this can become a self-perpetuating cycle as more young objects get tenured prematurely.
    So I suggest two independent tacks here:
    (1) Enable adaptive sizing of the generations (I noticed you had turned this off; why?). This might allow the GC to resize the generations and survivor spaces to avoid, or at least mitigate, this badness. You might also consider experimenting with -XX:+UseParallelOldGC, which might make the full GCs quicker.
    (2) Try the concurrent collector, -XX:+UseConcMarkSweepGC.
    If you happen to do any of these experiments, I'd be interested to
    know how the results turn out.
    All the best!
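    For reference, the two tacks map onto launch lines roughly like the sketch below. This is only a sketch: MainClass is a placeholder, the heap sizes are the poster's own, and with adaptive sizing you would normally also drop the fixed -XX:NewSize / -XX:SurvivorRatio settings (the -XX:+Print* diagnostic flags can stay as before):

    ```shell
    # Tack (1): re-enable adaptive generation sizing; parallelize old-gen collection.
    java -Xms2560m -Xmx2560m \
         -XX:+UseAdaptiveSizePolicy -XX:+UseParallelOldGC \
         MainClass

    # Tack (2): concurrent mark-sweep collection of the old generation.
    java -Xms2560m -Xmx2560m \
         -XX:+UseConcMarkSweepGC \
         MainClass
    ```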

  • Only the first page is printed reduced as setup. All pages afterward are full size with information cut off. Recent problem. Can print from Safari just fine. How can I get all pages to be reduced size?

    Question
    Printing internet pages
    Reduced print size
    Only the first page is reduced and printed as desired
    All pages after the first page are full size with information cut off
    Recent problem never seen before
    Can print from Safari and other software just fine
    How can I get all pages to be reduced size?

    One suggestion from the "Firefox prints incorrectly" link mentioned above by mha007 worked. I'm thrilled, since this has been annoying me for weeks. Thanks, mha007!
    Reset all Firefox printer settings
    1. Open your profile folder: on the menu bar, click on the Help menu and select Troubleshooting Information. The Troubleshooting Information tab will open.
    2. Under the Application Basics section, click on Show in Finder. A window with your profile folder will open.
       (Note: if you are unable to open or use Firefox, follow the instructions in "Finding your profile without opening Firefox".)
    3. On the menu bar, click on the Firefox menu and select Quit Firefox.
    4. In your profile folder, copy the prefs.js file to another folder to make a backup of it.
    5. Open the original prefs.js file in a text editor (such as TextEdit).
    6. Remove all lines in prefs.js that start with print. and save the file.
    7. If something goes wrong when you open Firefox, close it again and overwrite prefs.js with the backup you made.
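    The remove-and-save steps can also be done from a terminal. A minimal sketch, assuming each saved printer setting is a `user_pref("print...` line in prefs.js (the function name here is my own, not a Mozilla tool):

    ```shell
    # Filter Firefox printer preferences out of a prefs.js stream.
    # Each saved printer setting appears as: user_pref("print....", value);
    strip_print_prefs() {
        grep -v '^user_pref("print'
    }

    # Usage (run from your profile folder, after backing up prefs.js):
    #   strip_print_prefs < prefs.js.bak > prefs.js
    ```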

  • After updating our phones with the latest update, we have noticed that the battery life has diminished considerably. I now have to charge my phone overnight and two or three times a day. Prior to the update, my battery life lasted me at least a full day

    After updating our phones with the latest update, we have noticed that the battery life has diminished considerably. I now have to charge my phone overnight and two or three times a day. Prior to the update, my battery life lasted me at least a full day. We have several phones in our office, and the ones that have updated (4) now have issues holding a charge/battery life. I really liked this phone and cannot believe that you are now going to charge us $79 a battery to fix what is most definitely a problem with your latest update. I know other people outside of our company who are having the same problem. Not to mention, when I called AT&T it was confirmed to me that they are hearing the same issue (and then some) from other customers as well. Your own people, once I talked to them earlier today, told me they are seeing a history of issues showing up after the update was put in place. Of course she tried to say, "Maybe the age of the battery and the update both contributed". Whatever.
    I want you all to know how disappointed I am in your company for the handling of this issue. I always thought "Apple" was the line I didn't have to worry about having any type of issue with, and that you all would stand behind your product 100%. Now I am not so sure.
    I would love to hear back from your company on how you perceive the issue with all of these phones that had no issues prior to the update and how, after the update, THEY ARE NOW having issues. I do not believe this was an issue due to the age of a battery, and it was pretty lame to say so. It was fine, and now it's not.
    Please feel free to contact me and help me figure out a way to pay for all of the batteries that will be needed for our company to continue doing business as needed.
    Thank you.

    Sorry, this is a user-to-user technical forum. There is NO APPLE here, as stated in the terms of use when you signed up for this forum.
    Here are some battery tips:
    http://osxdaily.com/2013/09/19/ios-7-battery-life-fix/
    http://www.apple.com/batteries/iphone.html

  • Returns PO appearing in VL10B  transaction even after delivery is created with full PO qty

    Hi All,
    The returns PO is appearing in the VL10B transaction even after the delivery is created with the full PO quantity.
    Regards
    ab

    Hi Ayub,
    Please go to the return PO item details and check the Delivery tab; make sure the "Delivery Completed" checkbox is selected. If not, please select the checkbox and save the line item.
    If there are multiple line items, do the same for all of them.
    Then check in VL10B and confirm back to me.

  • Trigger Compilation Errors after Full Import with Datapump

    Hello All,
    We did a full import with Oracle Datapump, and encountered some errors related to triggers:
    ORA-39082: Object type TRIGGER:"CONVERT3"."CUBCNVT_AUDIT_RESET" created with compilation warnings
    ORA-39082: Object type TRIGGER:"CONVERT3"."CUBCNVT_AUDIT_RESET" created with compilation warnings
    ORA-39082: Object type TRIGGER:"CONVERT3"."CUBCNVT_AUDIT" created with compilation warnings
    ORA-39082: Object type TRIGGER:"CONVERT3"."CUBCNVT_AUDIT" created with compilation warnings
    ORA-39082: Object type TRIGGER:"CONVERT3"."CURCNVT_AUDIT_RESET" created with compilation warnings
    ORA-39082: Object type TRIGGER:"CONVERT3"."CURCNVT_AUDIT_RESET" created with compilation warnings
    ORA-39082: Object type TRIGGER:"CONVERT3"."CURCNVT_AUDIT" created with compilation warnings
    ORA-39082: Object type TRIGGER:"CONVERT3"."CURCNVT_AUDIT" created with compilation warnings
    We are wondering if there is some bug with Datapump on Oracle 10.2.0.2. What caused these errors, and how can we resolve this trigger issue?
    Thanks!

    Hello,
    Run SHOW ERRORS after creating the trigger and check whether any of the dependent objects is missing, resulting in an error at compilation.
    You can also try fixing the issue manually:
    CREATE OR REPLACE TRIGGER table1_trg
       AFTER INSERT
       ON table1
       REFERENCING NEW AS new OLD AS old
       FOR EACH ROW
    DECLARE
       tmpvar   NUMBER;
    BEGIN
       -- trigger code goes here
       NULL;
    EXCEPTION
       WHEN OTHERS
       THEN
          -- Consider logging the error and then re-raise
          RAISE;
    END table1_trg;
    /
    SHOW ERRORS;
    Regards

  • Problems after doing full restore with Time Machine

    If I reinstall Leopard and do a complete restore with Time Machine, I have to erase my Time Machine backup drive and do a complete TM backup from scratch, because TM will not pick up where it left off doing incremental backups. It wants to start over as if it had never done any backups at all. It's a pain because it takes me 4 or 5 hours to back up 270 GB. I've had to do this twice so far: once because I had a problem with my startup drive, and just today because I made an external hard drive with Leopard and restored all my files via TM.
    Is there any way to avoid starting over from scratch when I do a full restore with TM?

    The following procedure works for me if your restore happens to be from the last backup that Time Machine completed. (I'm not sure if it would work just as well if you restore from an older point in the Time Machine history -- but it certainly should!):
    After you complete the full restore (which will force a Restart when it completes), log in and then immediately go to System Preferences / Time Machine and Turn Off Time Machine to prevent any premature additional backups.
    Now go to System Preferences / Spotlight / Privacy and drag the icons for all your hard drives (including your main hard drive and the Time Machine hard drive) from the Desktop into the Privacy list. This will stop Spotlight indexing of all these drives to speed things up for what follows. You will do the indexing later (see below).
    If you have any sort of automatic virus protection active, disable it at this point to speed up what follows.
    Then Restart again (to get things into a fresh state), and then immediately do a Repair Permissions for your main hard drive (using Applications / Utilities / Disk Utility). Be patient. This will take 30 minutes or more, and the progress bar may not advance until the very end. Do not be alarmed when several hundred notifications come up; most of them are minor tidying-up items, but some are significant. For example, you will see the ownership get adjusted for every help file in every language for every Lexmark printer the system knows about (minor). You will also see the permissions adjusted for the root directory of your main hard drive (significant).
    And all of these permission and ownership repairs will happen EVEN THOUGH the files you backed up into your Time Machine may have had no such problems. They are, apparently, a result of the method that Time Machine uses to rebuild your file system during the restore.
    When the permission repair eventually completes, Quit Disk Utility and Shut Down the computer.
    Now reset the Parameter memory (PRAM). PRAM holds copies of certain system settings for rapid access. To do this, hold down the 4 keys Apple-Option-P-R continuously and press and release the power button. When you hear the SECOND startup chime, release those 4 keys. The system will continue to boot up normally. This makes sure the system's Parameter memory is in sync with the System Preferences resulting from the restore you just completed. It probably would have been anyway, but this makes sure. Among other things, this makes sure the system takes proper note of your "computer name" (System Preferences / Sharing) which is crucial to Time Machine's ability to recognize and use your previous backup database on the Time Machine hard drive.
    Now log in and fire up Mail to let it automatically finish the restore of its mailboxes by importing the necessary mail lists. Quit Mail when it finishes.
    If you have any other, application specific tasks to perform to complete the restore for any other applications, now is the time to do them.
    Finally, go back into System Preferences / Spotlight / Privacy, select the line showing your main hard drive in the list, and click the "-" on the bottom to remove it from the list. Repeat this for every other hard drive EXCEPT for your Time Machine hard drive. Exit System Preferences. Spotlight will now begin to re-index those hard drives from scratch. Watch this by clicking on the Spotlight icon in the menu bar. Wait for indexing to finish.
    Your restore is now at the point where you can let Time Machine do a new backup.
    I suggest you Restart again to get things into a fresh state (not truly necessary, but it is what I do). Then go into System Preferences / Time Machine and, at long last, Turn On Time Machine again. Then do a Back Up Now (right click on the Time Machine icon in the dock and select Back Up Now from the pop up menu).
    Because of the restore, Time Machine will now do a Deep Traversal of your entire file system looking for EVERYTHING that has changed compared to the last backup on its hard drive (rather than depending on the file system transaction logs as it normally does to make incremental backups happen much faster). The "Preparing" stage for this will take a long time -- about as long as a Repair Permissions pass in Disk Utility. Eventually Time Machine will start transferring files. This will be a backup of significant size because all the permissions repairs you did above, etc., count as changes as far as Time Machine is concerned, not to mention that certain portions of the file system are rebuilt during the restore. But it should be WELL SHORT of actually doing a complete backup of everything on your system. I.e., it is just a particularly large, but nevertheless incremental, backup added on to the previous stuff on your Time Machine disk.
    Crucial to this is that Time Machine recognizes the prior database on its hard drive as applying to your computer. Thus the permissions repair and PRAM resetting steps above.
    When that backup eventually completes, go into System Preferences / Spotlight and remove your Time Machine drive from the Privacy list. Exit System Preferences and wait for Spotlight to finish re-indexing your Time Machine drive.
    Restart once again, just to get things into a fresh state, and then re-enable any antivirus "live protection" stuff you disabled above.
    You are done.
    From this point on, Time Machine should do "normal" incremental backups, and the previous history of Time Machine backups should be accessible and used by Time Machine just as before.
    --Bob

  • "Full Load" and "Full Load with Repair Full Request"

    Hello Experts,
    Can any body share with me what is the difference between a "Full Load" and "Full load with Repair full request"?
    Regards.

    Hi,
    "What is the function of a full repair? What does it do?"
    "How do I delete the init from the scheduler? I don't see any option like that in the InfoPackage."
    For both of your questions there is OSS Note 739863 - Repairing data in BW. Read the following:
    Symptom
    Some data is incorrect or missing in the PSA table or in the ODS object (Enterprise Data Warehouse layer).
    Other terms
    Restore data, repair data
    Reason and Prerequisites
    There may be a number of reasons for this problem: Errors in the relevant application, errors in the user exit, errors in the DeltaQueue, handling errors in the customers posting procedure (for example, a change in the extract structure during production operation if the DeltaQueue was not yet empty; postings before the Delta Init was completed, and so on), extractor errors, unplanned system terminations in BW and in R/3, and so on.
    Solution
    Read this note in full BEFORE you start actions that may repair your data in BW. Contact SAP Support for help with troubleshooting before you start to repair data.
    BW offers you the option of a full upload in the form of a repair request (as of BW 3.0B). If you want to use this function, we recommend that you use the ODS object layer.
    Note that you should only use this procedure if you have a small number of incorrect or missing records. Otherwise, we always recommend a reinitialization (possibly after a previous selective deletion, followed by a restriction of the Delta-Init selection to exclude areas that were not changed in the meantime).
    1. Repair request: Definition
    If you flag a request as a repair request with full update as the update mode, it can be updated to all data targets, even if these already contain data from delta initialization runs for this DataSource/source system combination. This means that a repair request can be updated into all ODS objects at any time without a check being performed. The system supports loading by repair request into an ODS object without a check being performed for overlapping data or for the sequence of the requests. This action may therefore result in duplicate data and must thus be prepared very carefully.
    The repair request (of the "Full Upload" type) can be loaded into the same ODS object in which the 'normal' delta requests run. You will find this request under the "Repair Request" option in the InfoPackage (Maintenance) menu.
    2. Prerequisites for using the "Repair Request" function
    2.1. Troubleshooting
    Before you start the repair action, you should carry out a thorough analysis of the possible cause of the error to make sure that the error cannot recur when you execute the repair action. For example, if a key figure has already been updated incorrectly in the OLTP system, it will not change after a reload into BW. Use transaction RSA3 (Extractor Checker) in the source system for help with troubleshooting. Another possible source of the problem may be your user exit. To ensure that the user exit is correct, first load a Probe-Full request into the PSA table and check whether the data is correct. If it is not correct, search for the error in the user exit. If you do not find it, we recommend that you deactivate the user exit for testing purposes and request a new full upload. If the data then arrives correctly, it is highly probable that the error is indeed in the user exit.
    We always recommend that you load the data into the PSA table in the first step and check the result there.
    2.2. Analyze the effects on the downstream targets
    Before you start the Repair request into the ODS object, make sure that the incorrect data records are selectively deleted from the ODS object. However, before you decide on selective deletion, you should read the Info Help for the "Selective Deletion" function, which you can access by pressing the extra button on the relevant dialog box. The activation queue and the ChangeLog remain unchanged during the selective deletion of the data from the ODS object, which means that the incorrect data is still in the change log afterwards. After the selective deletion, you therefore must not reconstruct the ODS object if it is reconstructed from the ChangeLog. (Reconstruction is usually from the PSA table but, if the data source is the ODS object itself, the ODS object is reconstructed from its ChangeLog). You MUST read the recommendations and warnings about this (press the "Info" button).
    You MUST also take into account the fact that the delta for the downstream data targets is created from the changelog. If you perform selective deletion and then reload data into the deleted area, this may result in data inconsistencies in the downstream data targets.
    If you only use MOVE and do not use ADD for updates in the ODS object, selective deletion may not be required in some cases (for example, if incorrect records only have to be changed, rather than deleted). In this case, the DataMart delta also remains intact.
    2.3. Analysis of the selections
    You must be very precise when you perform selective deletion: Some applications do not provide the option of selecting individual documents for the load process. Therefore, you must first ensure that you can load the same range of documents into BW as you would delete from the ODS object. This note provides some application-specific recommendations to help you "repair" the incorrect data records.
    If you updated the data from the ODS object into the InfoCube, you can also delete it there using the "Selective deletion" function. However, if it is compressed at document level there and deletion is no longer possible, you must delete the InfoCube content and fill the data in the ODS object again after repair.
    You can only perform this action after a thorough analysis of all effects of selective data deletion. We naturally recommend that you test this first in the test system.
    The procedure generally applies for all SAP applications/extractors. The application determines the selections. For example, if you cannot use the document number for selection but you can select documents for an entire period, then you are forced to delete and then update documents for the entire period in the data target. Therefore, it is important to look first at the selections in the InfoPackage exactly before you delete data from the data target.
    Some applications have additional special features:
    Logistics cockpit: As preparation for the repair request, delete the SetUp table (if you have not already done so) and fill it selectively with concrete document numbers (or other possible groups of documents determined by the selection). Execute the Repair request.
    Caution: You can currently use the transactions that fill SetUp tables with reconstruction data to select individual documents or entire ranges of documents (at present, it is not possible to select several individual documents if they are not numbered in sequence).
    FI: The Repair request for the Full Upload is not required here. The following efficient alternatives are provided: In the FI area, you can select documents that must be reloaded into BW again, make a small change to them (for example, insert a period into the assignment text) and save them -> as a result, the document is placed in the delta queue again and the previously loaded document under the same number in the BW ODS object is overwritten. FI also has an option for sending the documents selectively from the OLTP system to the BW system using correction programs (see note 616331).
    3. Repair request execution
    How do you proceed if you want to load a repair request into the data target? Go to the maintenance screen of the InfoPackage (Scheduler), set the type of data upload to "Full", and select the "Scheduler" option in the menu -> Full Request Repair -> Flag request as repair request -> Confirm. Update the data into the PSA and then check that it is correct. If the data is correct, continue to update into the data targets.
    And also search in forum, will get discussions on this
    Full repair loads
    Regarding Repair Full Request
    Instead of doing all these steps, can't I just reload the failed request again?
    If something goes wrong with the delta loads, it is always better to re-init: delete the init flag, do a full repair, and so on. If the target is an InfoCube, you can also go for a full update instead of a full repair.
    Full Upload:
    In a full upload all the data records are fetched; it is similar to a full repair. In the case of an InfoCube, you can run a full upload to recover missed delta records. An ODS, however, does not support full upload and delta upload in parallel, so in that case you have to go for a full repair; otherwise the delta mechanism will get corrupted.
    Suppose your ODS activation is failing because there is a full upload request in the target; then you can convert the full upload to a full repair using the program RSSM_SET_REPAIR_FULL_FLAG.
    Hope this helps.
    Regards,
    Debjani

  • I have PS 4,5,and 6 installed on my Macbook Pro as well as Lightroom 4 and 5. Can I remove PS 4 and 5 and LR 4?  PS 4 is the total install and PS 5 and 6 are upgrades. LR 4 is the full package with LR an upgrade.

    I have PS 4,5,and 6 installed on my Macbook Pro as well as Lightroom 4 and 5. Can I remove PS 4 and 5 and LR 4? PS 4 is the total install and PS 5 and 6 are upgrades. LR 4 is the full package with LR an upgrade.
    I need space!

    First, let me disabuse you of a misconception: all Adobe upgrades are FULL installers. Whether you buy them as upgrades or new versions does not make an iota of difference. The upgrade installers do not need to find a prior version installed; they just prompt you for the new serial number as well as the old one, and they do not rely on the old version in the least.
    Second, the time to uninstall an older version is BEFORE upgrading to a newer one. Otherwise, the uninstaller from the older versions will mess up your new install. There are a gazillion files that are named the same and in the same locations as in the new one.
    Your best bet now is to uninstall ALL the Photoshop versions you have installed, including the newest one, one by one, then run the Adobe CS Cleaner Tool, then run Repair Permissions from Apple's Disk Utility, and finally re-install the latest version.
    The other alternative, if your boot HD is large enough (very doubtful in the case of a laptop like a MacBook), is to leave all versions on it.
    Your choice.

  • Full optimize with Compress DB throws error

    Hi,
    We are using SAP BPC 7.0 MS version. We have an SSIS package scheduled every morning for a Full Optimize with Compress DB. For the past few days this package has been failing after running for 2 hours, with the following error message:
    [Error][OSoft.Services.Application.OptimizeManage.OptimizeManageCtrl]
    When we tried doing a Full optimize without compress DB or Index defragmentation from the Admin Console, it completes successfully.
    But when we try doing a Full Optimize with Compress database checked, it runs for about 2 hours and throws the following error :
    An error has occurred during processing
    Error Message : Thread was being aborted
    After the above process it leaves all the records in the fact table with source column 1.
    Can anyone please share your ideas on this issue if you have faced this before and a way to fix this.
    We do not have bad or invalid records in the fact table.

    Hi,
    Please recheck if you have any calculated members or invalid dimension members.
    Calculated members will create such problems.
    Please check you have enough free disk space on your server which has Database and Analysis Services.
    Please make sure you have enabled 3GB support in Everest Update component, if you have more than 4GB RAM in your Application server.
    Please check if you have setup the Analysis Server settings with a minimum recommended value for the
    Threadpool \ Query \ MaxThreads parameter.
    The recommended value is the higher of 10 or (number of Analysis Services databases + twice the number of processor cores).
    If you have 4 dual core processors in your Analysis Services server (8 cores) and 5 Analysis Services DB, then you must set the value as 5+(2*8)=21.
    The source column will be updated as zero when the full optimization is complete. It remains as 1 since your full optimization is not completed.
    Since it says thread is being aborted, I believe the Analysis Services server settings change might resolve your issue.
    Karthik AJ
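    The sizing rule above is just max(10, databases + 2 x cores); a quick check of the arithmetic for the quoted example of 5 databases and 8 cores:

    ```shell
    # MaxThreads recommendation: the higher of 10 or (databases + 2 * cores).
    db_count=5   # Analysis Services databases
    cores=8      # processor cores (4 dual-core CPUs)
    threads=$(( db_count + 2 * cores ))
    if [ "$threads" -lt 10 ]; then threads=10; fi
    echo "$threads"   # prints 21
    ```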

  • How do I watch BBC iPlayer in full screen with flash player 10.1

    I was able to watch BBC iPlayer in full screen with Firefox. I can now only watch full screen with IE.
    Why can I not watch iPlayer in full screen with Firefox anymore?
    == After Flashplayer 10.1 installed

    I have tried in safe mode and, as expected, it makes no difference. I am still not able to view in full screen; unlike in IE, it just shows a blank white screen. I was able to view full screen with the same extensions before, so removing them has not made any difference.
    Thank you for your reply anyway.

  • Gallery Images: How to make clear PNG's open in full screen with a white background?

    Hi,
    So in the Life on Earth textbook, there is an image of an ant that has no background, so when you pick it up, you can see the text below without the standard "white box" around it.
    I've scoured the internet looking for answers but couldn't find any.
    Please see the photos snapped from my phone of this:
    Unfortunately, when I try to achieve this, my images open on a black background.
    I can achieve the pick-up.
    Would really appreciate the help. Thanks...
    Grant

    Well, if I put a white background in my image, it opens in full screen with a white background, but it has a white square when I pick it up:
    This is with a 2028 x 1496 image, the same as above, just with no background (it came out of Illustrator). This one has the pick-up effect I want, as in the first post, but it still opens to full screen with a black background.
    Thanks for the help... Hope to get this one figured out!

  • Indesign: full page with bleed?!

    Hi all, if I'm given a specific ad size of only:
    - Full Page with bleed - 8 x 10.5 (inches)
    - Non bleed size - 7.25 x 9.8 (inches)
    Does it mean:
    - page size: 8 x 10.5 (inches) 
    - bleed & slug: 0 
    - Margin: 0.375 (inches) for all sides 
    I'm puzzled about whether a trim area is needed when no measurement for it is provided.
    Hope someone could help me out, please!

    (Wow, I said "marginally more likely" -- no pun was intended, but I think it's pretty funny!!)
    Manish (do you work for Adobe?) wrote:
    In the post there are 2 sizes:
    - Full Page with bleed - 8 x 10.5 (inches) 
    - Non bleed size - 7.25 x 9.8 (inches)
    The second is without bleed and the first one with it, hence we can work out the bleed size from them. Can't we?
    So, the whole point of a bleed is to define three regions: 1) the interior space, bounded by the margin, where you're guaranteed not to come near the trim; 2) the bleed area, which is a danger zone: once a page object touches this area, it is in danger of being trimmed, but that might never happen; 3) the actual trim area (and slug), where the content is guaranteed to be cut off.
    If we were to just subtract the provided 7.25 x 9.8 from the provided 8 x 10.5 and say that our bleed area was 3/8" on a side, then that would mean that if we put an image touching the margin, it would be touching the bleed area. That doesn't really make sense, because the standard practice with bleeds is that if you touch the bleed line, you should go all the way to the trim line.
    The other reason this doesn't make sense is that 3/4" is 9.4% of 8". That is a lot of page to waste on a bleed. And indeed, 3/8" is a rather large amount for a bleed, at least at these sizes. (If you were cutting billboards, it would be a different story.)
    Oh, and also, knowledge of magazine layouts gives us some more information, as Peter mentioned. It's quite common that a full-page ad is allowed to bleed all the way, but otherwise all content on the page (advertising as well as real content) is kept within a margin, like 3/8". So there is a 3/8" border between the page edge and any content.
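As a quick check of the subtraction discussed above (a hypothetical helper, just to make the arithmetic explicit):

```python
def per_side_inset(trim_size: float, live_size: float) -> float:
    """Half the difference between the trim dimension and the live (non-bleed) dimension."""
    return (trim_size - live_size) / 2

# 8 x 10.5 trim vs 7.25 x 9.8 live area from the post:
print(per_side_inset(8.0, 7.25))            # horizontal inset per side: 0.375
print(round(per_side_inset(10.5, 9.8), 3))  # vertical inset per side: 0.35
```

Note the vertical inset comes out to 0.35", not quite the 3/8" horizontal figure, which fits the reading above: these insets describe a content margin inside the trim, not a bleed allowance outside it.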

  • Photosmart 7510 did full bleed with PC, won't do it with new Mac

    For years I used my Photosmart 7510 with my old HP PC laptop and did full bleed just fine. Now I can't get it to do so with any of my Mac applications. Right now I am struggling with Pages, but the same was true for MS Word. I've tried all the fixes (including making the page a custom size larger than the paper), and still it always leaves about 1/8th of an inch of margin on all sides. Please tell me how to do a full bleed with my Mac. Thank you.

    Hello lnipps,
    Welcome to the HP Forums!
    I understand your Photosmart 7510 will not print a full bleed using your Mac computer. I will do my best to assist you! First, I need to ask some questions:
    What is your operating system on this computer? Mac.
    What type of paper are you using?
    What type of full bleed printing are you trying to achieve?
    What program are you printing from?
    Please verify this information and I will assist you further! Have a great night!
    I work on behalf of HP.

  • Delta update data error. Can we do a full load with bad records?

    Hello,
    We are working with SAP BW 7.0, and we had a problem with a delta update. The delta update itself ran correctly; however, one record was loaded incorrectly, even though the record was correct in the source system.
    We have already reloaded just that record, and the master data is now correct. But we must now update the InfoCube. The data originally came in with a delta load, and to load only this record we must run a full load (with only 1 record) and then remove the wrong record from the InfoCube with a selective deletion.
    The problem is that we are unsure how this would affect the delta loads: loads are scheduled every day, and we are afraid the delta mechanism will lose track of where to start the next load, causing problems with the delta loads over the following days.
    Thank you.

    hi,
    What is your delta extractor (LIS or not LIS)? And what is your target (DSO or cube)?
    Depending on your source and target the procedure is not the same, but you can do it in every case:
    In case of a non-LIS extractor:
    just reload with a full InfoPackage to the PSA
    In case of LIS:
    delete the setup tables
    rerun the setup (restructure) for your record (if you can, with a selection condition)
    In case of a cube in BW:
    do a simple full upload
    In case of a DSO:
    do a full upload in repair mode (if the dataflow is 3.x); otherwise just use a DTP.
    But if your target is a DSO and that DSO loads other InfoCubes afterwards, be sure that they are not corrupted.
    Cyril
