GC not releasing memory - Out of Memory Exception

WebLogic 5.1 SP8, JDK 1.2.2, Unix.
We have our application running on the environment above, with a 1024 MB heap
allocation. We have three instances of WebLogic Server running inside a cluster.
The server runs GC[0] regularly. Once 100% of the memory is used, the server
runs GC[1] and releases roughly 50% to 60% of the heap.
But the amount released keeps shrinking, and at some point GC[1] is
not able to release any memory at all.
We tried to get some thread dumps with kill -3, and that didn't help us much.
We see this behaviour in all three WebLogic instances.
Our application had been running for quite a long time, and we didn't experience
any memory problems before this occurred.
Is there anything I can do to see what objects are live inside the JVM?
I don't have any performance tool installed.
Can the WebLogic console give some information on the heap used?
Is there any Java utility program I can use to inspect the internals of the JVM?
Thanks
Krish.
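For anyone who lands here with the same question: even without a profiler, the JVM itself can report heap occupancy. A minimal sketch (the class name is mine) using java.lang.Runtime, whose totalMemory()/freeMemory() methods exist even on JDK 1.2:

```java
// HeapLog.java - minimal heap snapshot using only java.lang.Runtime,
// which is available even on JDK 1.2.2. Note that freeMemory() reports
// free space inside the currently committed heap, not headroom up to
// the -mx cap, so treat the numbers as approximate.
public class HeapLog {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long total = rt.totalMemory();  // bytes currently committed to the heap
        long free  = rt.freeMemory();   // unused bytes within that committed heap
        long used  = total - free;
        System.out.println("heap used: " + (used / 1024) + " KB of "
                + (total / 1024) + " KB committed");
    }
}
```

Starting the server JVM with the -verbosegc flag will also make each GC[0]/GC[1] cycle print before/after heap sizes to stdout, which shows directly whether the reclaimed fraction really is shrinking over time.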

You can find out why the 'locale' tag fails by looking at
ServletException.getRootCause() (in your error JSP page, if
you have one).
Krish <[email protected]> wrote:
I had the same problem recently, and I have some information from just before the
server ran out of memory.
Our application uses JSPs for presentation. Before the server ran out of memory,
one of our JSPs became corrupted: the client never got the
full page on screen. We believe the JSP was corrupted and the server served
the corrupted file. After investigating, we ran the "touch" command on
the file; it then recompiled, and the client was able to view the full page.
This serving of corrupted JSP output started the same day the out-of-memory
problem occurred. I would like to know whether this corruption leaves something
in memory that never gets cleaned up by GC[1].
Following is the exception thrown when the client tries to access the JSP:
javax.servlet.ServletException: runtime failure in custom tag 'locale'
     at java.lang.Throwable.fillInStackTrace(Native Method)
     at java.lang.Throwable.fillInStackTrace(Compiled Code)
     at java.lang.Throwable.<init>(Compiled Code)
     at java.lang.Exception.<init>(Compiled Code)
     at javax.servlet.ServletException.<init>(Compiled Code)
The custom tag 'locale' just translates a string into the local language, with
some attributes passed in. Almost all our JSPs use this tag, so I don't think the
problem is in the tag itself.
All I need to know is: if there is some kind of exception during JSP compilation,
will the objects stay in memory and never get cleaned up?
Thanks in advance
Krish.
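A failed JSP compilation by itself should not pin objects in memory, but the pattern described above (GC[1] reclaiming a little less each cycle) is the classic signature of objects that stay strongly reachable, for example from a static collection. A hypothetical sketch (class and field names are mine, not from this application) of that kind of leak, written JDK 1.2-style with raw collection types:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical leak sketch: anything reachable from a static field survives
// every full GC, so a cache that is only ever appended to shows exactly the
// reported pattern - each GC[1] reclaims a little less than the last.
public class TranslationCache {
    // static => reachable for the lifetime of the class loader;
    // GC can never free these entries, no matter how many cycles run
    private static final List entries = new ArrayList();

    public static void remember(Object translated) {
        entries.add(translated); // grows forever unless something removes entries
    }

    public static int size() {
        return entries.size();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10000; i++) {
            remember("translated-string-" + i);
        }
        System.gc(); // reclaims nothing here: all 10000 entries are still reachable
        System.out.println("entries still live after GC: " + size());
    }
}
```

If a tag handler, or anything held in application scope, caches objects this way, they will never be collected, whether or not a JSP failed to compile.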
"Mike Reiche" <[email protected]> wrote:
JProbe will provide information about Java objects in the JVM heap.
Your application was running for a long time, then all of a sudden it started
running out of memory? Look closely at the last code change you made.
Mike
"Krish" <[email protected]> wrote:
WL5.1,SP8,JDK1.2.2,Unix.
We have our application running on the above mentioned Env.
We have a memory allocation of 1024MB for the heap. We have three instances
of
weblogic server running inside a cluster.
The server calls GC[0] regularly. Once the 100% of the memory is used
the server
calls GC[1] and releases memory upto 50% to 60%
appx. But this release of memory keeps reducing and at one point oftime
it is
not able to release any of its memory.
We tried to get some thread dumps by doing kill -3 and that din't help
us much.
we see this kind of behaviour in all the three weblogic instances.
Our application was running quiet a long time and we din't experience
any problem
of memory issue before this occured.
Is their anything that I can do to see what objects are their inside
the JVM.
I don't have any performance tool installed.
Does weblogic console can give some information on the heap used.
Is their any java utility program that i can use to find the inside's
of JVM
Thanks
Krish.
Dimitri

Similar Messages

  • Can't open sequence: "Operation not allowed. Out of memory"

    I have a project with several sequences. The master sequence started freezing on me. I restarted FCP and the computer, but now it won't let me open this sequence. It will let me open others in the project, but not this one. It's even smaller than others. I get the error message, "Operation not allowed. Out of memory." How do I fix this if I can't open it? I've cleaned out render files as well.
    Thanks for the help!

    The graphics/stills don't reside in your project or in a Sequence - those are just pointers to the original media. The images should be on a hard drive wherever you stored them.
    The problem could also be due to corrupt render files. You could try trashing the render files for the project and see if it opens afterward.
    -DH

  • Cannot open Sequence:  "Operation Not Allowed" then "Out of Memory"

    I quit FCS1 for about 5 hours and returned. When I opened the project I was working on I got these messages: "Operation Not Allowed" then "Out of Memory" when I attempt to open my master working sequence. Other sequences (small nests) do open within the project. The project opens, so the file is not corrupt. A back-up copy of the same project file on a separate physical disc generates the same results. I deleted the prefs and User Data folder, even ran FCP Rescue. Same results. Then I resorted to the Autosave Vault. The two most recent saves resulted in the same error messages. Fortunately the next did not. I lost a considerable amount of work. The system says my memory is OK (6GB). Any ideas? (An attempt to open on a FCS2 machine {8GB RAM} resulted in the same two error messages.)
    The project is fairly lengthy (90 mins) and does have numerous 1 and 2 frame audio edits. Nothing particularly unusual in the video tracks--although I am mixing DV and HDV. It doesn't seem to me these attributes would matter, just throwing them out there.

    ...removing and re-rendering is a great idea, if you've got the time.
    And if during render you get the same out of memory error, try rendering "in-to-out", rendering small chunks until you get to a chunk which brings up the error. Then, you've found an offending "corrupt" file.

  • SAP DBTech JDBC: [4]: cannot allocate enough memory: Out of memory

    Hello,
    I am currently experiencing this error when making requests to the HANA database via JDBC: SAP DBTech JDBC: [4]: cannot allocate enough memory: Out of memory
    I guess I am running out of space, so I wanted to delete some data from unused tables. The problem is that I cannot connect with Studio anymore ("Error when connecting to system").
    Is there another way to delete data from some tables when getting this particular error?
    Has anyone experienced such a situation?
    Thank you,
    Benoît

    Hi Benoit,
    Please plan to migrate from Rev 38 to the latest version before the one-year license included with Rev 38 expires in Oct 2013. You need to export your content and import it in the latest version.
    As you are aware, a lot of bugs have been fixed between Rev 38 and the newer releases, along with the new SPS5 features, and HANA One is adding new web-based management features in each HANA One release. These no longer need SSH to stop/start HANA, as it can be done in a browser starting in Rev 52. Please stay tuned for the upcoming version, which will have additional management features to manage your instance.
    Thanks,
    Swapan

  • Error: Not Found Error: Out of Memory

    Hello,
    I'm trying to use a few Photoshop files in Final Cut Pro 7.0.3 and I can't seem to shake these error messages.
    I've never had trouble using Photoshop files in the past, but these ones are being uncooperative.
    I've imported the .psd file and edited it into the sequence. The sequence settings match the clip settings. Then I added the "Page Peel" transition to the edit point. Immediately the canvas goes red and says something like "close and open window to restore preview." After this, I try to play the sequence and it says "Error: Not Found." After closing and reopening the sequence, my attempts to play back my work result in the error "Error: Not Found" followed immediately by "Out of Memory."
    I opened Activity Monitor to see if I was running out of RAM, and I've got loads left. I also have plenty of room on the hard disk I'm working from.
    Any thoughts?
    Thanks,
    Ryan

    Check your PSD documents, make sure they are RGB 8 Bits/Channel.
    MtD

  • Sequence not Opening: File not found and Out of Memory Error Message

    Hello
    Attempting to open sequence in FC gives me the error: File not found.
    Clicking on okay in the message gives another error: Out of Memory.
    Final Cut Pro version is 6.02
    Anybody have a solution on how to get this sequence working again?
    Thanks

    The project itself opens fine.
    Can a sequence file within a project become corrupted?
    Luckily, I always make a duplicate of the sequence I'm working on at the start of the day. And I save often when I'm working.
    The earlier duplicated version of the sequence works, so not everything is lost.
    Unfortunately, autosave was disabled in my profile; back on it goes.
    I was wondering if this has happened to anyone before, and whether there is a fix - if it is not a corrupted file?

  • I cannot open the timeline and the canvas cannot display because of "out of memory"

    I cannot open the timeline and the canvas cannot display. I get a pop-up saying "error out of memory". What does it mean, and what can I do?

    Usually this is because you've got graphics that are CMYK or grayscale rather than RGB, or that are larger than 4000 by 4000 pixels. Move them on your hard disk so FCP can't find them, then make them RGB and resize them to an acceptable pixel dimension.

  • Not Found Error: Out of Memory

    ::sigh:: what a frustrating set of dialogue boxes.
    So I've read a lot and found no answers to this question on this forum.
    My situation is that I have a project with multiple sequences; some will open and some will not. The ones that do not open produce the wonderful dialogue boxes that appear in my subject line.
    I have erased all preference files and render files for the project. I don't have any CMYK images in my project. I have over 150 GB of space on an external LaCie d2 500 GB drive and 2.5 GB of RAM in a G5 dual 2.5 GHz. I am running FCP 5.1.4.
    Any advice would be very helpful.

    Try taking your media drive offline and reopening the project and sequences without reconnecting. If you can open them without the media, then there is something about your clips or renders that is tripping up the sequence.
    I get this message when I have clips in an unsupported codec in my sequences. Adding the missing codecs to /Library/Quicktime/ will usually do the trick. If you can't find a codec to add, you may be able to pre-export the clips to a more compatible codec using FCP if you can, or QT Player, or Compressor. I have found each of these methods works slightly differently with clips in "strange" codecs.
    Hope this helps -
    Max Average

  • Reader XI w/ win7 x64 8GB RAM, pdf shows 'out of memory'

    Good afternoon; the title says it all. I gather this is nothing new, but I thought I'd throw my hat into the ring. For what it's worth, this approx. 500 KB PDF not only gives "out of memory" in Reader XI but also shows only a blank page in Foxit.
    The page is at
    https://workspaces.acrobat.com/?d=HCs5d4qzwoVFHbMOwP3Ppw
    See what you think. Thanks to all
    Scott

    It's pretty comprehensively broken, not least by some HTML injected at the end of the file.
    I suspect it was served by a web server which injected some code related to AJAX, whatever that is, despite the fact that this would break the PDF. The PDF data itself, especially some JPEG image data, was mangled far beyond repair.

  • CTEQ !Out of Memory

    Hello,
    I have had the Creative SB Audigy 2 ZS here since '06.
    Around the same time I went to Creative's All Downloads page at http://support.creative.com/downloads/searchdownloads.aspx?nLanguageLocale=033&filename=SB0350&nPage=2 and downloaded SBAX_WBUP2_LB_2_09_006.exe.
    It was working like a charm up until last week when, happy-go-lucky, I clicked on Graphic Equalizer and, instead of it opening as usual, I got a message: CTEQ !Out of Memory.
    I came here to the forum and noticed many others with the same problem.
    Has anyone found a solution to this? Is there a patch?
    Over.

    Solution
    In Windows XP Pro, the Creative Windows driver simply needs to be reinstalled:
    Creative SB Audigy 2 ZS (WDM) appears in Device Manager under "Sound, video and game controllers".
    Right-click it, choose Properties, then the Driver tab, Update Driver, install from a specific location, "Don't search, I will choose the driver from a list", show compatible hardware, select Creative Audigy Audio Processor (WDM), Next, and continue anyway when warned the driver is not signed.
    The out-of-memory error disappears on restart, and the Creative equalizer resumes operation.
    d.

  • [svn] 2692: Bug: BLZ-227 - When using JMS Destination, MessageClient and FlexClient not released from memory when the session times out.

    Revision: 2692
    Author: [email protected]
    Date: 2008-07-31 13:05:35 -0700 (Thu, 31 Jul 2008)
    Log Message:
    Bug: BLZ-227 - When using JMS Destination, MessageClient and FlexClient not released from memory when the session times out.
    QA: Yes
    Doc: No
    Checkintests: Pass
    Details: Fixed a memory leak with JMS adapter. Also a minor tweak to QA build file to not to start the server if the server is already running.
    Ticket Links:
    http://bugs.adobe.com/jira/browse/BLZ-227
    Modified Paths:
    blazeds/branches/3.0.x/modules/core/src/java/flex/messaging/services/messaging/adapters/JMSAdapter.java
    blazeds/branches/3.0.x/qa/build.xml


  • Garbage Collection not releasing memory properly

    Hello. In my web application, memory is not being released properly; after some time the application throws an out-of-memory error. We are using Tomcat 5 and JDK 5, and my heap size is half of the RAM.

    sabre150 wrote:
    punter wrote:
    georgemc wrote:
    Yep. Couldn't be a problem with your code, it must be that garbage collection doesn't work and nobody else noticed. Raise a bug with Sun.
    Are you serious?
    You need to replace your sarcasm detector.
    I have blown my sarcasm meter and am limping in blindness!!!! :(
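    Since this poster is on JDK 5, the JVM can report heap and collector statistics without any external profiler, via the java.lang.management API. A minimal sketch (the class name is mine):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.util.List;

// Minimal heap/GC report using java.lang.management, which shipped in JDK 5,
// so it is available here without installing any tool.
public class GcReport {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        System.out.println("heap used/committed/max (KB): "
                + heap.getUsed() / 1024 + " / "
                + heap.getCommitted() / 1024 + " / "
                + heap.getMax() / 1024);

        // One MXBean per collector (typically young and old generation)
        List<GarbageCollectorMXBean> gcs = ManagementFactory.getGarbageCollectorMXBeans();
        for (GarbageCollectorMXBean gc : gcs) {
            System.out.println(gc.getName() + ": " + gc.getCollectionCount() + " collections");
        }
    }
}
```

    On most JDK 5 distributions the jmap tool can also print a class histogram (jmap -histo <pid>), which answers "what objects are on the heap" directly.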

  • SQL Server Memory Usage Peaks to 95% and its not releasing SQL 2012

    We are currently running SQL 2012 64-bit with Lock Pages in Memory enabled and 128 GB of RAM. We set Max Server Memory to 112000 MB and Min Server Memory to 0. We have a SQL maintenance job which takes a full backup of our server around 11 PM, and at that time SQL Server memory peaks to almost 96%. Even during full business hours we are constantly in the range of 56 to 60%, but after 11 PM, within 10 minutes of the job starting, it peaks to 95 or 96 percent, and even after the job completes it does not release the memory back. I have to manually stop the SQL service and restart it, at which point it comes back to normal.
    Any suggestions/any help really appreciated.

    Here are the details. Memory is at 96% and I still didn't restart... Please, any help appreciated... The memory details are for 24 hours; I restarted the service around 11 AM, before I posted my previous message.
    Memory Used by SQL Server: 119329
    Locked Pages Used by SQL Server: 105969
    Total VAS in MB: 8388607
    Process Physical Memory Low: 0
    Process Virtual Memory Low: 0
    Max Memory set to: 112000
    Percent memory used over the last 24 hours (total memory is 128.0 GB):
    Date        Time       Memory Used (%)
    19-Dec-14   11:46 PM   96.24659602
    20-Dec-14   12:46 AM   96.24578349
    20-Dec-14   1:46 AM    96.25146739
    20-Dec-14   2:46 AM    96.24345652
    20-Dec-14   3:46 AM    96.27311834
    20-Dec-14   4:46 AM    96.28947067
    20-Dec-14   5:46 AM    96.18931325
    20-Dec-14   6:46 AM    96.09323502
    20-Dec-14   7:46 AM    96.07915497
    20-Dec-14   8:46 AM    96.07906977
    20-Dec-14   9:46 AM    96.0784111
    20-Dec-14   10:46 AM   96.07415009
    20-Dec-14   11:46 AM   26.03422141
    20-Dec-14   12:46 PM   33.57474359
    20-Dec-14   1:46 PM    39.466561
    20-Dec-14   2:46 PM    41.85940742
    20-Dec-14   3:46 PM    43.89071274
    20-Dec-14   4:46 PM    45.80877368
    20-Dec-14   5:46 PM    46.49493281
    20-Dec-14   6:46 PM    46.68486468
    20-Dec-14   7:46 PM    46.69701958
    20-Dec-14   8:46 PM    46.69994736
    20-Dec-14   9:46 PM    57.5012455
    20-Dec-14   10:46 PM   96.25695419
    I verified it's the SQL job, and my memory is still at 95%.
    It peaked at 10:46 PM; here are the details of the SQL job, which started at 10:30:
    Progress: 2014-12-20 22:30:04.39  Source: Check Database Integrity Task  Executing query "USE [DATASTORE]  ".: 50% complete  End Progress
    Progress: 2014-12-20 22:43:06.10  Source: Check Database Integrity Task  Executing query "DBCC CHECKDB(N'DATASTORE')  WITH NO_INFOMSGS  ".: 100% complete  End Progress
    Progress: 2014-12-20 22:43:06.11  Source: Check Database Integrity Task  Executing query "USE [ETL_Proc]  ".: 50% complete  End Progress
    Progress: 2014-12-20 22:46:52.56  Source: Check Database Integrity Task  Executing query "DBCC CHECKDB(N'ETL_Proc')  WITH NO_INFOMSGS  ".: 100% complete  End Progress
    Progress: 2014-12-20 22:46:52.64  Source: Back Up Database Task  Executing query "EXECUTE master.dbo.xp_create_subdir N'P:\SQL_Backu...".: 20% complete  End Progress
    Progress: 2014-12-20 22:46:52.64  Source: Back Up Database Task  Executing query "EXECUTE master.dbo.xp_create_subdir N'P:\SQL_Backu...".: 40% complete  End Progress
    Progress: 2014-12-20 22:46:52.64  Source: Back Up Database Task  Executing query "EXECUTE master.dbo.xp_create_subdir N'P:\SQL_Backu...".: 60% complete  End Progress
    Progress: 2014-12-20 22:46:52.64  Source: Back Up Database Task  Executing query "EXECUTE master.dbo.xp_create_subdir N'P:\SQL_Backu...".: 80% complete  End Progress
    Progress: 2014-12-20 22:46:52.64  Source: Back Up Database Task  Executing query "EXECUTE master.dbo.xp_create_subdir N'P:\SQL_Backu...".: 100% complete  End Progress
    Progress: 2014-12-20 22:46:55.63  Source: Back Up Database Task  Executing query "BACKUP DATABASE [ReportServer] TO  DISK = N'P:\SQL...".: 100% complete  End Progress
    Progress: 2014-12-20 22:46:56.55  Source: Back Up Database Task  Executing query "BACKUP DATABASE [ReportServerTempDB] TO  DISK = N'...".: 100% complete  End Progress
    Progress: 2014-12-20 22:46:57.35  Source: Back Up Database Task  Executing query "BACKUP DATABASE [dbamaint] TO  DISK = N'P:\SQL_Bac...".: 100% complete  End Progress
    Progress: 2014-12-20 22:51:13.08  Source: Back Up Database Task  Executing query "BACKUP DATABASE [DATASTORE] TO  DISK = N'P:\SQL_Ba...".: 100% complete  End Progress
    Progress: 2014-12-20 22:51:52.72  Source: Back Up Database Task  Executing query "BACKUP DATABASE [ETL_Proc] TO  DISK = N'P:\SQL_Bac...".: 100% complete  End Progress
    Progress: 2014-12-20 22:51:54.87  Source: Rebuild Index Task  Executing query "USE [ReportServer]  ".: 0% complete  End Progress
    Progress: 2014-12-20 22:51:54.88  Source: Rebuild Index Task  Executing query "ALT...  The package executed successf...  The step succeeded.

  • ORA-27102: out of memory SVR4 Error: 12: Not enough space

    We got an image copy of one of our production servers, which runs Solaris 9; our SA guys restored it and handed it over to us (DBAs). There is only one database running on the source server. I have to bring up the database on the new server. While starting up the database I get the following error.
    ====================================================================
    SQL*Plus: Release 10.2.0.1.0 - Production on Fri Aug 6 16:36:14 2010
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    Connected to an idle instance.
    SQL> startup
    ORA-27102: out of memory
    SVR4 Error: 12: Not enough space
    SQL>
    ====================================================================
    ABOUT THE SERVER AND DATABASE
    Server:
    uname -a
    SunOS ush** 5.9 Generic_Virtual sun4u sparc SUNW,T5240
    Database: Oracle 10.2.0.1.0
    I'm giving the "top" command output below:
    Before attempt to start the database:
    load averages: 2.85, 9.39, 5.50 16:35:46
    31 processes: 30 sleeping, 1 on cpu
    CPU states: 98.9% idle, 0.7% user, 0.4% kernel, 0.0% iowait, 0.0% swap
    Memory: 52G real, 239G free, 49M swap in use, 16G swap free
    the moment I run the "startup" command
    load averages: 1.54, 7.88, 5.20 16:36:44
    33 processes: 31 sleeping, 2 on cpu
    CPU states: 98.8% idle, 0.0% user, 1.2% kernel, 0.0% iowait, 0.0% swap
    Memory: 52G real, 224G free, 15G swap in use, 771M swap free
    and I compared the Semaphores and Kernel Parameters in /etc/system . Both are Identical.
    and ulimit -a gives as below..
    root@ush**> ulimit -a
    time(seconds) unlimited
    file(blocks) unlimited
    data(kbytes) unlimited
    stack(kbytes) 8192
    coredump(blocks) unlimited
    nofiles(descriptors) 256
    memory(kbytes) unlimited
    root@ush**>
    and ipcs shows nothing as below:
    root@ush**> ipcs
    IPC status from <running system> as of Fri Aug 6 19:45:06 PDT 2010
    T ID KEY MODE OWNER GROUP
    Message Queues:
    Shared Memory:
    Semaphores:
    Finally Alert Log gives nothing, but "instance starting"...
    Please let us know where else I should check for route cause ... Thank You.

    "and I compared the Semaphores and Kernel Parameters in /etc/system. Both are identical." Is an initSID.ora or spfile being used to start the DB?
    The clues indicate Oracle is requesting more shared memory than the OS can provide.
    Do any additional clues exist within the alert_SID.log file?

  • Dynamic memory not released to host

    Dear Techies,
    Ours is a small Hyper-V virtual server infrastructure with three Dell PowerEdge physical hosts (Windows Server 2012 Datacenter) and around 15 virtual machines running on top of it. The hosts are added to a failover cluster. Each host has 95 GB of RAM. All the VMs are running Windows Server 2012 Standard edition.
    We have installed terminal services(TS licensing, TS connection broker, TS session host) in four VMs with the following dynamic memory settings:
    Start-up RAM : 2048 MB
    Minimum RAM : 2048 MB
    Maximum RAM : 8192 MB
    Below mentioned applications are configured in the server:
    Nova Application Installed
    SQL Developer Tool is Configured (ODBC Connection established for Database communication)
    FTPs communication allowed for File Transfer
    McAfee Agent is configured (Virus Scanner)
    Nimsoft Robot Agent Configured – Monitoring
    Terminal Service
    Enabled Multiple terminal sessions based on customer requirement
    BGinfo tool configured through group policy for customized desktop background
    The average memory utilization in the terminal servers is 3.6 GB. Under dynamic allocation, the maximum RAM requirement/allocation to date is 4 GB. As seen in the Hyper-V console, the current RAM demand is 2300 MB and the assigned memory is 2800 MB.
    However, the RAM assigned earlier is ballooned/faked to the VM as driver-locked memory. This is by design. Despite the memory being released back to the host, the server still shows the 4 GB, which makes the memory utilization report from monitoring tools look like 80% (3.2 GB out of 4 GB).
    As a result, the memory utilization report is always based on the currently dynamically allocated RAM and not calculated against the maximum assigned RAM (8 GB in this case). To make it clear: if the currently assigned RAM is 4 GB and utilization is 3.2 GB, the utilization is reported as 80%. However, if calculated against the maximum RAM capacity of the server, it would be 40% ((3.2/8)*100).
    Is there any way to release the driver-locked memory from the VM?
    Regards, 
    Auditya N

    I am not really clear on the point of your question.
    Allocated RAM is what is currently in use / allocated to a VM out of the physical RAM pool. It is Demand + Buffer. The demand is based on the applications in the VM, what they think they need, and how efficiently they return unused memory to the machine. This has nothing to do with in-application paging (which happens a lot with terminal servers).
    So yes, the memory utilization is accurate in relation to physical RAM utilization.
    Dynamic Memory is about efficiency, not about over-allocation. Hyper-V can never give VMs more RAM than is in the physical RAM pool. The VMs can be configured to have more RAM than is in the physical RAM pool, but they will hit the top of the pool and not be allowed to have any more. There is no ballooning or paging to disk.
    So your maximum allocated could go beyond what is possible. But that would mean your utilization would be artificially low if this was used in the calculation.
    Brian Ehlert
    http://ITProctology.blogspot.com
    Learn. Apply. Repeat.
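    The two percentages discussed in this exchange differ only in the denominator; a tiny sketch of the arithmetic, using the numbers from the question:

```java
// The two utilization figures from this thread differ only in the denominator:
// currently assigned RAM (dynamic) vs. configured Maximum RAM.
public class Utilization {
    static double percent(double usedGb, double denominatorGb) {
        return usedGb / denominatorGb * 100.0;
    }

    public static void main(String[] args) {
        double used = 3.2;      // GB in use inside the VM
        double assigned = 4.0;  // GB currently assigned by Dynamic Memory
        double maximum = 8.0;   // GB configured as Maximum RAM
        System.out.println("vs assigned RAM: " + percent(used, assigned) + "%"); // ~80
        System.out.println("vs maximum RAM:  " + percent(used, maximum) + "%");  // ~40
    }
}
```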
