Help - JVM memory issue with RH3 and Apache (Oracle 11.5.9)

We have an issue where we need to increase the memory of our JVM on a 32-bit Linux OS past the 1.9 GB addressable limit. 64-bit is not an option and neither is hugetlb. Are there some settings I am missing that would let the JVM address more than 1.9 GB? I have set -Xmx and can allocate 2.6 GB, but the JVM cannot address more than 1.9 GB. This is a huge issue that could hold up a critical go-live.
Any ideas?
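One quick sanity check (not from the original thread): ask the running JVM what heap ceiling it actually accepted, to tell whether the 1.9 GB wall is the -Xmx value being clamped or the 32-bit process address space running out. A minimal sketch:

    // Minimal check of the heap ceiling the JVM actually granted for the -Xmx
    // passed on the command line (values printed in megabytes).
    public class HeapCeiling {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            System.out.println("max heap   : " + rt.maxMemory() / (1024 * 1024) + " MB");
            System.out.println("total heap : " + rt.totalMemory() / (1024 * 1024) + " MB");
            System.out.println("free heap  : " + rt.freeMemory() / (1024 * 1024) + " MB");
        }
    }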

11.5.9 does not use AK for any OAF pages; we migrated all the AK data well before 11.5.9 was released.
Don't worry about the region name and the URL parameter; there will be a region map to specify the XML that replaced this AK region.
Look for one of my threads to understand how to get the XML for the AK region that appears in the URL.
Thanks
Tapash

Similar Messages

  • JVM memory issues.

    Hi All,
    This is about the JVM memory issues I am getting on CentOS 5.2.
    I have the following configuration: export CATALINA_OPTS="-Xms2048m -Xmx2048m -XX:PermSize=512m -XX:MaxPermSize=1024m", and this line is added in the catalina.sh file.
    When I run the process, after around 12 hours I get the following error: java.lang.OutOfMemoryError: Requested array size exceeds VM limit. This looks to be a JVM error.
    What is the workaround for preventing the above error? My RAM size is 4 GB and the JVM heap size is configured to be 2 GB.
    Another query: if I do a top -p on the particular Java process, I see that RES grows beyond the configured heap size; with 2 GB configured in this case, I have seen the process RES exceed 2 GB. I am not clear about this.
    Let me know if I can provide more information.
    Thanks in advance.
    -Raju

    seige wrote:
    ... CentOS 5.2.
    export CATALINA_OPTS="-Xms2048m -Xmx2048m -XX:PermSize=512m -XX:MaxPermSize=1024m"
    I get the following error code java.lang.OutOfMemoryError: Requested array size exceeds VM limit
    What is the work around for preventing the above error?
    Use a 64-bit processor, a 64-bit CentOS, and 64-bit Java.
    When I run the process after say around 12hrs
    Huh? So it does not happen always?
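    For what it is worth, that particular OutOfMemoryError message points at a single oversized array request rather than general heap exhaustion. A minimal sketch that provokes the same message on most HotSpot JVMs (the array size here is purely illustrative, not taken from the thread):

        // Requesting one array larger than the VM's per-array limit triggers
        // "Requested array size exceeds VM limit" regardless of the -Xmx setting.
        public class ArrayLimitDemo {
            public static void main(String[] args) {
                try {
                    long[] huge = new long[Integer.MAX_VALUE]; // beyond the VM's array ceiling
                    System.out.println("Allocated " + huge.length + " elements");
                } catch (OutOfMemoryError e) {
                    // Prints the message text, e.g. "Requested array size exceeds VM limit"
                    System.out.println("OutOfMemoryError: " + e.getMessage());
                }
            }
        }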

  • Help! Memory issues - programme won't open

    Hello,
    I have an issue when trying to open Premiere Elements 3. A message pops up stating:
    "The instruction at '0x0a7bl0ez' referenced memory at '0x00000018'. The memory could not be 'read'."
    I have tried to free up more memory and have at least 55 GB free on the D drive and 8 GB free on the C drive. Can anyone offer any advice?
    Thanks

    Memory is not at all the same thing as hard drive space.
    Memory is what the program uses to "think" with; hard drive space is the "suitcase" it uses to carry things.
    As Hunt (sorta) said, you have something wrong in your configuration and/or in software that starts when you turn your computer on... sorta like looking at a bird in flight so you can't think about your grocery list.
    This is aimed at Premiere Pro, but may help
    Work through all of the steps (ideas) listed at http://ppro.wikia.com/wiki/Troubleshooting
    If your problem isn't fixed after you follow all of the steps, report back with ALL OF THE DETAILS asked for in the FINALLY section

  • Jrockit JVM GC issue - weblogic performance and crashes at times

    On enabling verbose GC for memory debugging, we have observed the following, and we frequently face a JVM issue: the JVM becomes unresponsive due to a GC pause. On checking, we found the following in the GC log.
    [memdbg ][Tue Jul 13 01:02:12 2010][26381] GC reason: TLA allocation failed, cause: Get TLA From Nursery
    [memdbg ][Tue Jul 13 01:02:12 2010][26381] Stopping of javathreads took 2.234 ms
    As of now the following is the TLA size:-
    [memdbg ][Tue Jul 13 01:00:10 2010][26381] Minimum TLA size is 2048 bytes
    [memdbg ][Tue Jul 13 01:00:10 2010][26381] Preferred TLA size is 65536 bytes
    [memdbg ][Tue Jul 13 01:00:10 2010][26381] Large object limit is 2048 bytes
    After consultation with the Oracle support team, they asked us to increase the TLA size. We did so as follows, but we still see the same message.
    tried setting
    -XXlargeObjectLimit:16k -XXminBlockSize:16k -XXtlaSize:min=16k,preferred=32k
    it was still a problem, tried
    -XXlargeObjectLimit:32k -XXminBlockSize:32k -XXtlaSize:min=32k,preferred=64k
    and we still see the following message.
    [memdbg ][Wed Jul 21 03:14:06 2010][11864] f0 3.75Gb
    [memdbg ][Wed Jul 21 03:14:06 2010][11864] Minimum TLA size is 16384 bytes
    [memdbg ][Wed Jul 21 03:14:06 2010][11864] Preferred TLA size is 32768 bytes
    [memdbg ][Wed Jul 21 03:14:06 2010][11864] Large object limit is 16384 bytes
    [memdbg ][Wed Jul 21 03:14:06 2010][11864] Minimal blocksize on the freelist is 16384 bytes
    [memdbg ][Wed Jul 21 03:14:06 2010][11864] Initial and maximum number of gc threads: 8, of which 8 parallel threads, 4 concurrent threads, and 8 yc threads.
    [memdbg ][Wed Jul 21 03:14:06 2010][11864] Preferred free list cache percentage 10%.
    [memdbg ][Wed Jul 21 03:14:06 2010][11864] Maximum nursery percentage of free heap is: 95.
    [nursery][Wed Jul 21 03:14:06 2010][11864] Optimal nursery size: 536870912, free heap: 1073741824
    [nursery][Wed Jul 21 03:14:06 2010][11864] Setting mmNurseryMarker[0] to 0x12affff8
    [nursery][Wed Jul 21 03:14:06 2010][11864] Setting mmNurseryMarker[1] to 0x1aaffff0
    [nursery][Wed Jul 21 03:14:06 2010][11864] Nursery size increased from 0kb to 524288kb. Parts: 1
    [memdbg ][Wed Jul 21 03:14:06 2010][11864] Prefetch distance in balanced tree: 4
    [compact][Wed Jul 21 03:14:06 2010][11864] Compactset limit: 7600010, Using matrixes: 0, Static: 0
    [memory ][Wed Jul 21 03:14:06 2010][11864] GC mode: Garbage collection optimized for throughput, initial strategy: Generational Parallel Mark & Sweep
    [memory ][Wed Jul 21 03:14:06 2010][11864] heap size: 1048576K, maximal heap size: 1048576K, nursery size: 524288K
    [memory ][Wed Jul 21 03:14:06 2010][11864] <s>-<end>: GC <before>K-><after>K (<heap>K), <pause> ms
    [memory ][Wed Jul 21 03:14:06 2010][11864] <s/start> - start time of collection (seconds since jvm start)
    [memory ][Wed Jul 21 03:14:06 2010][11864] <end> - end time of collection (seconds since jvm start)
    [memory ][Wed Jul 21 03:14:06 2010][11864] <before> - memory used by objects before collection (KB)
    [memory ][Wed Jul 21 03:14:06 2010][11864] <after> - memory used by objects after collection (KB)
    [memory ][Wed Jul 21 03:14:06 2010][11864] <heap> - size of heap after collection (KB)
    [memory ][Wed Jul 21 03:14:06 2010][11864] <pause> - total sum of pauses during collection (milliseconds)
    [memory ][Wed Jul 21 03:14:06 2010][11864] run with -Xverbose:gcpause to see individual pauses
    [memdbg ][Wed Jul 21 03:14:39 2010][11864] GC reason: TLA allocation failed, cause: Get TLA From Nursery
    [memdbg ][Wed Jul 21 03:14:39 2010][11864] Stopping of javathreads took 1.627 ms
    [nursery][Wed Jul 21 03:14:39 2010][11864] KeepAreaStart: 0x1aaffff0 KeepAreaEnd: 0x22b00000
    [nursery][Wed Jul 21 03:14:39 2010][11864] Young collection 1 started. This YC is running while the OC is in phase: not running.
    [memdbg ][Wed Jul 21 03:14:39 2010][11864] A pinned object was found: 0x11a4d4a0
    [memdbg ][Wed Jul 21 03:14:39 2010][11864] A pinned object was found: 0x11a30010
    [nursery][Wed Jul 21 03:14:39 2010][11864] Found pinned object: 0x11a4d4a0 - 0x11a4f4b0
    Are there any standard tuning recommendations for the JRockit JVM to overcome this GC issue? At present, we are using the following Java options.
    -XXlargeObjectLimit:32k -XXminBlockSize:32k -XXtlaSize:min=32k,preferred=64k -verbose:gc -Xverboselog:/tmp/gc.log -Xverbose:memory,gcpause,memdbg,compaction,gc -Xverbosetimestamp -Xgcreport
    -RR
    Regards
    Ranga

    If you want to optimize for pause time, you can use, for example:
    -Xms512m -Xmx512m -Xns256m -XXkeepAreaRatio:25 -Xgcprio:pausetime -XpauseTarget:200ms
    The parameters -Xms and -Xmx can be adjusted to your wishes.
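    Not from the thread, but as a supplement to the verbose GC log, a small in-process monitor can confirm how much wall-clock time the collectors are consuming. A sketch using the standard java.lang.management API (the 10-second sampling interval is an arbitrary choice):

        import java.lang.management.GarbageCollectorMXBean;
        import java.lang.management.ManagementFactory;

        // Periodically reports cumulative GC count and time, as a cross-check
        // against what -Xverbose:gcpause shows in the log file.
        public class GcWatch {
            public static void main(String[] args) throws InterruptedException {
                while (true) {
                    for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                        System.out.printf("%s: %d collections, %d ms total%n",
                                gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
                    }
                    Thread.sleep(10000); // sample every 10 seconds (arbitrary)
                }
            }
        }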

  • Please help :: JVM Memory Calculation

    Hi Guys,
    I've developed a Struts-based web application. I get java.lang.OutOfMemoryError when I try to retrieve 2000 records on the login page itself. Of course, I get the same error in the listing of my modules too.
    I am still confused about the JVM size and how much it should be increased. Presently I have a JVM size of 32 MB with JDK 1.5, Tomcat 5.5, and a Linux OS with 1 GB RAM.
    Please suggest.

    Well, firstly, I feel that holding 2000 records at a time is certainly unwise, as sooner or later it will eat up all of your JVM heap.
    A better solution is data paging: divide the result set into pages and fetch one page at a time.
    For example:
    select count(*) from table_name where <condition> gives you the total number of records present.
    Say you have divided the total records into 100 per virtual page.
    Fetch the first 100 records at a stretch using features like ROWNUM (Oracle), the LIMIT clause (MySQL), and so on.
    Hold the current page pointer in a session-scoped bean and then fetch the other pages accordingly; see the JDBC sketch at the end of this reply. This saves a lot of memory, though there is some overhead involved.
    The other option is the use of other caching techniques.
    Maybe the article below will help you in this regard:
    http://java.about.com/library/weekly/uc_querycache1.htm
    And if you are still stuck on increasing the JVM size, you might have to configure it in the app server's startup/runtime parameters.
    Check out the articles below on how to configure a few app servers to increase the JVM heap:
    http://www.chemaxon.com/jchem/doc/admin/tomcat.html
    http://www.redhat.com/docs/manuals/jboss/jboss-eap-4.2/doc/Server_Configuration_Guide/ch02s02s02.html
    http://e-docs.bea.com/wls/docs81/perform/JVMTuning.html
    http://www.ibm.com/developerworks/websphere/library/techarticles/0706_sun/0706_sun.html
    and check out the article below to find out different techniques used to solve memory leak problems:
    http://support.bea.com/application_content/product_portlets/support_patterns/wls/InvestigatingOutOfMemoryMemoryLeakProblemsPattern.html
    Hope this might help :)
    REGARDS,
    RaHuL
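    A rough illustration of the ROWNUM-based paging described above. The table and column names, plus the connection details, are made up for the example; the page size of 100 follows the reply; and try-with-resources assumes a newer JDK than the 1.5 mentioned in the question:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        // Fetches one "virtual page" of rows at a time instead of loading all
        // records into the heap at once. Table and column names are hypothetical.
        public class RecordPager {
            static final int PAGE_SIZE = 100;

            public static void main(String[] args) throws Exception {
                int page = Integer.parseInt(args[0]); // 1-based page number
                String sql =
                    "SELECT id, name FROM (" +
                    "  SELECT t.id, t.name, ROWNUM rn FROM (" +
                    "    SELECT id, name FROM records ORDER BY id" +
                    "  ) t WHERE ROWNUM <= ?" +
                    ") WHERE rn > ?";
                try (Connection con = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password");
                     PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setInt(1, page * PAGE_SIZE);       // upper bound for this page
                    ps.setInt(2, (page - 1) * PAGE_SIZE); // rows belonging to earlier pages
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getInt("id") + " " + rs.getString("name"));
                        }
                    }
                }
            }
        }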

  • HELP!!  Issues with iTunes and Windows 8.1.

    So I recently upgraded to a new laptop with Windows 8.1 and I have been experiencing some issues that I need assistance with. So far the issues I need help with are:
    When I try to get album artwork for my songs, it gets through a small fraction and then just kind of freezes. I have a lot of songs and albums, but I have never had this problem before.
    When I try to connect to the iTunes Store, it takes quite a while and then I am left with either a blank screen or album/song/etc. names and no pictures. And when I click on another link, it takes forever (again) and I experience the same issue while trying to get to the new page.
    I don't know what else to do and I can't figure out a solution. Please help me!!!

    Hello RavensFan94
    For both your issues, it sounds like you are having a hard time connecting to the iTunes Store to get album artwork and viewing content. Check out the article below for some troubleshooting options to get it working again.
    Can't connect to the iTunes Store
    http://support.apple.com/kb/TS1368
    Regards,
    -Norm G.

  • Urgent help with memory issue (ORA-04031)

    Hi Geeks,
    I came across the error below in the alert log of one of our databases. Interestingly, this error only pops up while the database backup runs. Below are the memory parameters.
    memory_max_target big integer 14G
    memory_target big integer 13G
    pga_aggregate_target big integer 0
    sga_target big integer 0
    shared_pool_reserved_size big integer 192M
    shared_pool_size big integer 0
    ORA-04031: unable to allocate 5792 bytes of shared memory
    ("shared pool","unknown object","sga heap(1,1)","ges resource ")
    Errors in file /apps/opt/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_lmd0_14123.trc
    I have tried specifying a value for shared_pool_size but with no luck. Please help; I am not sure how to resolve this.

    Hi, ORA-4031 in this case can be caused by unrestricted growth of the PGA, thus forcefully reducing the SGA.
    We've had this in the past also.
    The solution in older versions was to increase the shared_pool_size, as you mentioned, but with ASMM this is no longer an option. You can, however, restrict the growth of the PGA by setting SGA_TARGET.
    On our system this now looks like:
    SQL> show parameter _target
    NAME                                 TYPE                 VALUE
    db_flashback_retention_target        integer              1440
    memory_max_target                    big integer          10G
    memory_target                        big integer          10G
    pga_aggregate_target                 big integer          0
    sga_target                           big integer          8G
    See how we in fact set the maximum growth of the PGA to 2 GB (10 GB - 8 GB).
    Try this and see if it works for you as well.
    Success!!
    FJFranken
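    Not from the reply itself, but for illustration, the suggested SGA floor can be applied dynamically. A minimal sketch; the 8G value mirrors the example output above, and the connection details are placeholders:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        // Gives the SGA a fixed floor under automatic memory management, which
        // in turn limits how far the PGA can grow at the SGA's expense.
        // The 8G figure and connection details are placeholders only.
        public class SetSgaFloor {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/ORCL", "system", "password");
                     Statement st = con.createStatement()) {
                    st.execute("ALTER SYSTEM SET sga_target = 8G SCOPE = BOTH");
                }
            }
        }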

  • Help with memory issue - abap memory vs roll area

    Hi experts,
    With our customer we are using TRM and WebDynpro technology to process tax forms, with portal integration. Recently, due to performance reasons, we were forced to analyze memory consumption, and we are unable to understand what is happening. In the Memory Inspector we see a much higher total memory than the ABAP total.
    For example, running a simple application for a form bundle search produces the following memory consumption:
    We do not understand why the Total section is about 12 MB higher when all the ABAP code is just 7 MB.
    Another example: opening a form bundle (FB) leads to roughly 32 MB of consumption (no matter what type of FB we are opening). Visiting all pages of the FB leads to 52 MB (for a specific FB; this depends on the FB type). Again, when we invoke the Memory Inspector, a lot of memory is in the roll area.
    It looks like the roll area is almost always twice as big as the actual ABAP application.
    Please, any suggestions as to what is happening?
    Maybe it is just a misunderstanding of the roll area on our side. To our knowledge, the roll area is the area of the internal session managed by the memory manager (usually mostly created from extended memory). But why is this area so large when the actual application needs half of the space?
    Some discussions suggest that this is due to many dead objects being released when the Memory Inspector is started. But why is this space still available to the application and not given back to the extended pool? Is there any way to identify those dead objects?
    Thanks a lot


  • Please help with several issues-Pictures, Videos and More.

    I'm on my 3rd iPhone 4 in 48 days. A few weeks ago I switched to AT&T from Sprint and the EVO because my company forced me to (they pay for my line). I have a bunch of issues. I won't get into the poor reception and dropped data connection, since that is a hardware problem. OK: when I take pictures and send them in an email, they don't send straight. Even if I rotate them and save them that way, when I attach the picture it's not straight. This also happens with video. If I send them in a text, they are correct. If anyone has tried a Samsung Galaxy S, I would like feedback on that too. Probably a long shot in an iPhone forum, but I am desperate. I'm not pleased with this phone. I may have to return it.

    The AT&T rep has visited our workplace twice. Several of us have had issues. He told us we would be able to return our phones even after the 30 days because of all the service tickets open on our phones. If we wanted to move to something else, we would just pay the difference. So if I wanted to change to the Galaxy S it wouldn't cost me a thing, since it's $199, which is what I paid for my iPhone. For some reason, when I double-tap the space bar for a period, most of the time it types the letter B. Why is this?

  • Coldfusion Multi instance Memory Issues

    Hello, we recently got brand new servers with 8 GB of RAM and 64-bit Windows 2008. We have about 7 instances created on these servers, and I am noticing something extremely disturbing. On 2 of the instances I just created today, which have absolutely no sites running on them yet (we are still migrating sites), ColdFusion immediately consumes 700 to 900 MB of working set memory. This happens for all instances, which then makes my server seem like it is out of memory. On the old box it only took the amount of working memory that was needed, and this would grow over time, but not immediately upon starting the server. I started one of these instances and literally watched as it took 850 MB of RAM within 2 minutes of starting, and it doesn't get released.
    I do have the JVM set to 2 GB, and for now all instances share the same JVM settings. I am just curious whether anyone else running 64-bit Windows 2008 is having the same issue and whether it is just normal behavior with 64-bit systems. We moved to these beefy servers to help with memory issues, but it seems I am still plagued with the issue even when there is no site allocated to the instance.
    Any ideas and thoughts would be appreciated.
    thank you.

    If you have a minimum memory parameter in your jvm.config (-Xms, or something like that), then each instance will immediately reserve that amount of memory on start.
    This is a Java thing, not CF. With CF8, I don't think having the min memory value matters with the most recent Java versions. In previous versions you would occasionally get out-of-memory errors if you didn't set it, but I haven't heard of this since CF8 came out.
    They may have also fixed something in 8 to help alleviate that issue.
    So chalk it up as normal. With 7 instances you'll probably run into more CPU issues.
    Byron Mann
    Software Architect
    hosting.com | hostmysite.com
    http://www.hostmysite.com/?utm_source=bb
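    Not part of the original reply, but if you want to see the reserved-versus-used distinction from inside one of those instances, the standard management API exposes it. A minimal sketch (nothing ColdFusion-specific is assumed):

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryUsage;

        // Prints the heap figures the JVM reports: "committed" is what the process
        // has reserved from the OS (driven by the minimum heap setting at startup),
        // "used" is what live objects actually occupy, and "max" is the -Xmx ceiling.
        public class HeapFigures {
            public static void main(String[] args) {
                MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
                System.out.printf("init=%dM used=%dM committed=%dM max=%dM%n",
                        heap.getInit() >> 20, heap.getUsed() >> 20,
                        heap.getCommitted() >> 20, heap.getMax() >> 20);
            }
        }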

  • Oracle 11g - memory issue.

    hi,
    Recently we upgraded to Oracle 11.2.0.1 from 10.2.0.4. The memory issue below shows up and the system crashes.
    We included the parameter MEMORYIMM_MODE_WITHOUT_AUTOSGA=FALSE in the init file. Despite this, it still happens.
    ORA-04031: unable to allocate 264 bytes of shared memory ("shared pool","unknown object","CCUR^4e9657a","kglob")
    Fri May 20 18:43:45 2011
    Sweep [inc][64733]: completed
    Sweep [inc][64732]: completed
    We increased the db_cache and shared_pool memory; it does not help.
    Would appreciate any feedback on this.
    Thanks,
    RajS

    Hi Raj,
    In my experience, I have noticed these memory issues (ORA-04031) with 11.2.0.1;
    there are some bugs that aren't easy to fix.
    Better to upgrade to 11.2.0.2.
    Maybe not the proper answer to your question, just a tip.
    BR
    Venkat

  • CF10 Update 14 and possible memory issues

    One of my associates is complaining that since I applied ColdFusion 10 Update 14, we have been experiencing memory issues. Has anyone else had issues with Update 14?
    Just a System Admin fighting the good fight!

    I'll throw in that even if changing the JVM helped, it would still leave open the question of whether and why Chad experienced a change in heap usage solely from updating CF.
    Chad, really? Nothing else changed? I just find that so odd. I've not yet heard about it (or seen it) being an issue. Of course, Update 14 did do quite a bit: beyond bug fixes it also updated Tomcat. I would be surprised if that could lead to memory leaks (assuming that's what this is, if really NOTHING else changed).
    What about the database you're using? Update 14 did change the JDBC drivers for Postgres. Are you using that DBMS?
    Just trying to think what else could contribute to this, if indeed nothing else changed for you.
    It is possible that something else changed, either in the config or the code (and you didn't know it), or perhaps in the load against the server. I see that all the time: someone adds a new site, perhaps brought over from another server, and they assume "it doesn't get much traffic", but they don't realize how heavily spiders and bots may hit that newly added site, which could definitely put pressure on the heap, whether from increased sessions, caching, and so on.
    Of course, you can always uninstall the update easily, in the same CF Admin page where you installed it. That would help you prove whether the update alone was responsible. (Just be sure to rebuild the connectors back to the version matching the CF update you revert to. I don't think it's appropriate to run the Update 14 connector with an earlier update.)
    Finally, FWIW, if you really wanted to go nuts, you could change CF to use Java 8. That's another thing added in Update 14: support for Java 8. But to be clear, it does not change the JVM for you, so that's not what happened here. But just as the two Carls proposed changing the JVM to see if it "would help", you could consider moving to Java 8. That's all the more worth considering if indeed the issue is that something changed in your environment (config/code/load) and you simply need more heap (in Java 7).
    Of course, CF will use the same GC you have specified even if you update it to use Java 8, so you may need to make some changes to see a real impact. But, for instance, one thing Java 8 does by default is no longer use the permanent generation. That should have no effect on your observed use of heap. I am just saying that Java 8 is indeed different, and you never know whether updating to it could help (or hurt: it's so new that CF supports it only in 10, with support only coming in Update 3 of CF 11, so there's relatively little known experience with the combination).
    Anyway, do let us know if you find more.
    /charlie

  • HT5316 I've been working with Mainstage 2.1.3 and have experienced concerts crashing immediately after loading. I'm now installing Mainstage 2.2.2 to see if that helps alleviate the issue. My question is: should I uninstall Mainstage 2.1.3 now?

    Hi all,
    I've been working with Mainstage 2.1.3 and have experienced concerts crashing immediately after loading.  In an attempt to solve crash issue, I've installed Mainstage 2.2.2 to see if that helps alleviate the issue.  My question is: should I uninstall Mainstage 2.1.3 now in order to save memory?
    thanks,
    korriefromca

    Hi
    korriefromca wrote:
    My question is: should I uninstall Mainstage 2.1.3 now in order to save memory?
    You would only save about 500 MB by deleting MS 2.1.3. You may find it useful to keep the application, or perhaps to make a zip of it and then delete the original.
    CCT

  • StringBuffer and memory issue

    I am having trouble. I have a fairly large StringBuffer that I use instead of a file. The StringBuffer can grow to 5-10 MB.
    When it grows, it seems that the memory of the JVM basically doubles each time.
    To grow the StringBuffer, I append to it each time, and I have tried making its initial size 1-2 MB for testing purposes.
    Is there some way to prevent the JVM from doubling its memory each time? It seems wasteful, and when many users are logged in, 10 MB per 10 users gets to be a lot. Please advise.

    A StringBuffer will indeed double the storage it uses every time an overflow occurs. Given that it has no idea how large its contents will ultimately be, doubling its size on overflow is statistically the best possible approach.
    Seems to me, by choosing a StringBuffer instead of a file, you have considered the memory/speed trade-off and chosen higher speed, higher memory instead of lower speed, lower memory. And isn't it true that the issue is mostly that you are using multi-megabytes of memory, rather than that more megabytes of memory are potentially being wasted? If you are only concerned about the extra allocation that StringBuffer makes, you could call its setLength() method to clean up the waste after it reaches its maximum size.
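    A small sketch of the growth behavior being described. The sizes are illustrative, and the trimming call at the end uses trimToSize() rather than setLength(), since appends on their own only ever grow the backing array:

        // Demonstrates StringBuffer capacity roughly doubling on overflow, and
        // trimming the unused tail once the buffer has reached its final size.
        public class BufferGrowth {
            public static void main(String[] args) {
                StringBuffer sb = new StringBuffer(1024 * 1024); // ~1 MB initial capacity
                String chunk = new String(new char[64 * 1024]);  // 64K characters per append
                for (int i = 0; i < 100; i++) {
                    int before = sb.capacity();
                    sb.append(chunk);
                    if (sb.capacity() != before) {
                        System.out.println("capacity " + before + " -> " + sb.capacity());
                    }
                }
                sb.trimToSize(); // releases the slack once no further appends are expected
                System.out.println("final length=" + sb.length() + " capacity=" + sb.capacity());
            }
        }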

  • Planning - Memory Issue. Help!

    We are using Planning and recently added approximately 100 new members and some new web forms. Suddenly, in Planning, I cannot delete or move a member. It looks as though it is processing, but then nothing happens. Our internal IT checked the memory settings and sees a giant spike in CPU usage. We had this problem a year ago, and Oracle recommended we update our Java heap settings from 1024 to 1536, which at the time resolved our issues. We are running Planning on 32-bit Windows.
    Now we are being told that 1536 is too high and to set it to 1024. Our development server has not seen this memory issue, so we put the new recommendations in place in development, and development no longer works. I put the heap size back to 1536 in development and it works. For some reason production does not.
    Anyone have any ideas about the Java heap size settings, or any other recommendations?

    Hi Jeannette,
    I agree with JG; it's time to start moving towards a 64-bit OS. I would urge you to run in parallel with a 64-bit server, so that when your 32-bit environment starts buckling under the added pressure the newer requirements are putting on your system, you at least have a failover.
    With regard to 1536 being too high, I would +1 that too. The problem is that the JVM really likes a chunk of contiguous memory, and Windows likes to eat up a chunk of memory slap dab in the middle of things, making ~1200 about the upper limit for a 32-bit JVM (perhaps by design; I'm assuming Windows because they probably wouldn't tell you to back off the heap size on U/LiNIX).
    Another way to look at this is to see whether you can break some of the more complex web forms into simpler ones. Also make sure you don't have any member functions that return the better part of a dimension (e.g. @iDescendants(Entity)) only to be shrunk back to a usable size using zero suppression.
    Regards,
    Robb Salzmann
