ORA-32690: Hash Table Infrastructure ran out of memory issue

I am getting an ORA-32690: Hash Table Infrastructure ran out of memory error while executing an Informatica mapping against an Oracle database (test environment).
The partition creation is as shown below.
TABLESPACE MAIN_LARGE_DATA1
PARTITION BY LIST (MKTCD)
(
  PARTITION AAM VALUES ('AAM') TABLESPACE MAIN_LARGE_DATA1,
  PARTITION AHT VALUES ('AHT') TABLESPACE MAIN_LARGE_DATA1,
  PARTITION GIM VALUES ('GIM') TABLESPACE MAIN_LARGE_DATA1,
  PARTITION CNS VALUES ('CNS') TABLESPACE MAIN_LARGE_DATA1,
  PARTITION AOBE VALUES ('AOBE') TABLESPACE MAIN_LARGE_DATA1,
  PARTITION DBM VALUES ('DBM') TABLESPACE MAIN_LARGE_DATA1
)
Could you please provide a solution to this problem as soon as possible?

What are the SQL statement and execution plan? Was a server-side trace file created for the session?
From the brief description, it sounds like bug 6471770; see Metalink for details. The workaround for this particular bug is either to disable hash group-by by setting "_gby_hash_aggregation_enabled" to FALSE (using an ALTER SESSION statement), or to use a NO_USE_HASH_AGGREGATION hint.
I suggest you research this problem on Metalink (aka My Oracle Support, https://support.oracle.com).
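
For reference, here is a minimal JDBC sketch of both workarounds. The connection details are hypothetical, and in the Informatica case the ALTER SESSION would normally be issued as a pre-SQL command on the session rather than from standalone Java; the parameter and hint names are the ones documented for this bug.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HashAggWorkaround {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details for a test database.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/TESTDB", "user", "password");
             Statement stmt = con.createStatement()) {

            // Workaround 1: disable hash group-by for this session only.
            // Hidden (underscore) parameters must be double-quoted.
            stmt.execute(
                "ALTER SESSION SET \"_gby_hash_aggregation_enabled\" = FALSE");

            // Workaround 2 (alternative): leave the parameter alone and hint
            // the individual statement instead, e.g.
            //   SELECT /*+ NO_USE_HASH_AGGREGATION */ MKTCD, COUNT(*)
            //     FROM some_table GROUP BY MKTCD
        }
    }
}

The session-level change affects every GROUP BY in that session, while the hint is scoped to a single statement, so the hint is usually the safer first try.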

Similar Messages

  • Hash Table Infrastructure ran out of memory

    Hi Gurus,
    We are using Oracle 10.2.0.3 on 64-bit SPARC Solaris with 16 CPUs and 32 GB of physical memory. SGA_TARGET is 12 GB and PGA_AGGREGATE_TARGET is 15 GB.
    We are receiving:
    ORA-32690: Hash Table Infrastructure ran out of memory
    We have identified a query that runs at parallel degree 8 and uses a large hash group-by; the error may be due to that.
    We are doing something like:
    select x, y, z from abc partition(Q) group by x, y, z;
    where the partition is a hash partition.
    But here are some questions:
    1. How do we find out why this error occurs? Would a trace dump or PGA dump help? I believe the query was not using temp space at all, and we have 2 TB of temp space free.
    2. I thought a hash group-by spills to temp when it cannot get memory, so why does this error occur?
    3. How can we monitor this while the error is happening? (See the monitoring sketch after this list.)
    4. How do we dig into the issue using dumps, and what should we look for?
    5. Does PGA_AGGREGATE_TARGET limit each parallel process to only 5% of the target?
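
    As a starting point for questions 1 and 3, you can poll v$sql_workarea_active from a second session while the parallel query runs: it shows each active work area's size and whether it has spilled (number_passes, tempseg_size), and therefore whether the hash group-by is spilling to temp at all. The v$ view and its columns are standard in 10g; the JDBC wrapper and connection details below are hypothetical.

    import java.sql.*;

    public class PgaMonitor {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details; run this from a second session
            // while the parallel hash group-by is executing.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/PROD", "user", "password")) {
                String sql =
                    "SELECT sql_id, operation_type, " +
                    "       ROUND(work_area_size/1048576) AS wa_mb, " +
                    "       ROUND(max_mem_used/1048576)   AS max_mb, " +
                    "       number_passes, " +
                    "       NVL(ROUND(tempseg_size/1048576), 0) AS temp_mb " +
                    "  FROM v$sql_workarea_active " +
                    " ORDER BY work_area_size DESC";
                try (Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(sql)) {
                    while (rs.next()) {
                        // number_passes = 0 with temp_mb = 0 on a very large hash
                        // work area means the hash table is growing in memory
                        // instead of spilling, the pattern behind ORA-32690.
                        System.out.printf("%s %s size=%dMB max=%dMB passes=%d temp=%dMB%n",
                            rs.getString("sql_id"), rs.getString("operation_type"),
                            rs.getLong("wa_mb"), rs.getLong("max_mb"),
                            rs.getInt("number_passes"), rs.getLong("temp_mb"));
                    }
                }
            }
        }
    }

    On question 5: as a rule of thumb in 10g, a single serial work area is capped at roughly 5% of PGA_AGGREGATE_TARGET, and a parallel operation as a whole at roughly 30% divided across the slaves; the exact caps come from hidden parameters and vary by version.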


  • SharePoint 2013 Search - Zip - Parser server ran out of memory - Processing this item failed because of a IFilter parser error

    Moving content databases from 2010 to 2013 August CU. I have 7 databases attached and ready to go, and all the content is crawled successfully except zip files. I am getting errors such as:
    Processing this item failed because of a IFilter parser error. ( Error parsing document 'http://sharepoint/file1.zip'. Error loading IFilter for extension '.zip' (Error code is 0x80CB4204). The function encountered an unknown error.; ; SearchID = 7A541F21-1CD3-4300-A95C-7E2A67B2563C
    Processing this item failed because the parser server ran out of memory. ( Error parsing document 'http://sharepoint/file2.zip'. Document failed to be processed. It probably crashed the server.; ; SearchID = 91B5D685-1C1A-4C43-9505-DA5414E40169 )
    This is a single-instance, out-of-the-box SharePoint 2013 installation. I didn't install custom IFilters, as 2013 supports zip natively. No other extensions have this issue. The zips range in size from 60-90 MB and contain mp3 files. I can download and unzip the files as needed.
    Should I care that the index isn't being populated with these items, since they contain no metadata? I am thinking I should just omit them from the crawl.

    This issue came back up for me, as my results aren't displaying since this data is not part of the search index.
    Does anyone know of a way to increase the parser server memory in SharePoint 2013 search?
    http://sharepoint/materials-ca/HPSActiveCDs/Votrevieprofessionnelleetvotrecarrireenregistrement.zip
    Processing this item failed because the parser server ran out of memory. ( Error parsing document 'http://sharepoint/materials-ca/HPSActiveCDs/Votrevieprofessionnelleetvotrecarrireenregistrement.zip'. Document failed to be processed. It probably crashed the
    server.; ; SearchID = 097AE4B0-9EB0-4AEC-AECE-AEFA631D4AA6 )
    http://sharepoint/materials-ca/HPSActiveCDs/Travaillerauseindunequipemultignrationnelle.zip
    Processing this item failed because of a IFilter parser error. ( Error parsing document 'http://sharepoint/materials-ca/HPSActiveCDs/Travaillerauseindunequipemultignrationnelle.zip'. Error loading IFilter for extension '.zip' (Error code is 0x80CB4204). The
    function encountered an unknown error.; ; SearchID = 4A0C99B1-CF44-4C8B-A6FF-E42309F97B72 )

  • Flash CS6, QuickTime: The export operation failed because it ran out of memory.

    Exporting a QuickTime movie file seems to be an old issue with previous versions too. In CS6 I always get the error message: "The export operation failed because it ran out of memory." I've got tons of memory and have allocated increasing amounts to the cache with no success.
    My small and fairly simple file was created with Flash CS5 which seemed to work fine at the time of creation. I've tried everything recommended online but with no luck so far. BTW, I reinstalled CS5 and tried that too but it just hangs now, and there's no error message.
    Does anyone have a solution?
    (OS X 10.7.4)

    If I may join this discussion...  I loaded up CS6 two days ago and have just completed my first 60 second animation using it.  Like everybody else here, it all went fine until I tried to export it to QuickTime – something that has never been problematic before. 
    I ended up exporting it as a swf and opening that in CS4.  Then I could export it, but the resulting QuickTime movie was over 500 MB in size!  Okay, so then I opened it in QuickTime Pro and exported it from there.  This brought the file size down to a far more reasonable 35 MB, but now the soundtrack has gone!  So now I am having to replace the sound in Final Cut Pro. 
    This is a real palaver – and not conducive to heaping praise on CS6 when questioned.  I note that there is talk of Adobe rectifying this "bug", but it would be really good to hear from somebody at Adobe with:
    a) An apology
    b) An estimated time of delivery
    c) A best-practice workaround
    But I'm guessing that anybody that works at Adobe will be the last person to read this...

  • DB error - Ran out of memory retrieving results - (Code: 200,302) (Code: 209,879)

    I am encountering an error while running a big job of about 2.5 million records through the EDQ cleansing/match process.
    Process failed: A database error has occurred : Ran out of memory retrieving query results.. (Code: 200,302) (Code: 209,879)
    The server has 8 GB of memory, with 3 GB allocated to Java for processing. I could not see any PostgreSQL configuration files to tune any parameters, so I think I need some help with configuring the PostgreSQL database. Appreciate any suggestions!

    Hi,
    This sounds very much like a known issue with the latest maintenance releases of EDQ (9.0.7 and 9.0.8), where the PostgreSQL driver we ship with EDQ was updated to support later versions of PostgreSQL but has been seen to use far more memory.
    The way to resolve this is to replace the PostgreSQL driver that ships with EDQ with the conventional PostgreSQL version:
    1. Go to the PostgreSQL JDBC download page and download the JDBC4 PostgreSQL driver, version 9.1-902.
    2. Put this into the tomcat/webapps/dndirector/WEB-INF/lib folder.
    3. Remove/rename the existing postgresql.jar in the same location.
    4. Rename the newly downloaded driver postgresql.jar.
    5. Restart the 3 services in the following order: Director database, Results database, Application Server.
    With this version of the driver, the memory issues have not been seen.
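
    After swapping the jar, one way to confirm which driver version is actually being loaded is to ask the driver itself through JDBC metadata. A hedged sketch: the host, port, database name, and credentials for the EDQ repository database are hypothetical and depend on your installation.

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;

    public class CheckPgDriver {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details for the EDQ Director database.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/dndirector", "user", "password")) {
                DatabaseMetaData md = con.getMetaData();
                // Should report the 9.1 (build 902) driver once the
                // replacement jar is in place.
                System.out.println("Driver: " + md.getDriverName()
                        + " " + md.getDriverVersion());
                System.out.println("Server: " + md.getDatabaseProductVersion());
            }
        }
    }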
    Note that there are two reasons why we do not ship this driver as standard, so you may wish to be aware of the impact of these if you use the standard driver:
    a. Drilldown performance from some of the results views from the Parse processor may be a little slower.
    b. There is a slim possibility of hitting deadlocks in the database when attempting to insert very wide columns.
    Regards,
    Mike

  • Out of memory Issues

    Hi,
    Weblogic version is 10.3, DB - Oracle
    Our environment has 4 servers: one server hosts the Admin server plus 4 managed servers, and each of the remaining 3 servers hosts 4 managed servers.
    Each managed server has 2 GB of memory.
    Connection pools are set up with initial capacity 0 and maximum capacity 15.
    Our applications are developed on Pega. Currently we are getting out-of-memory issues, and the F5 nodes send alerts like:
    SEVERITY: Error
    Alert(432526): Trap received from ttnny-cse-f5node1: bigipServiceDown -- Bindings: sysUpTimeInstance = 1589988172, bigipNotifyObjMsg = Pool member 172.22.110.45:8002 monitor status down., bigipNotifyObjNode = 172.22.110.45, bigipNotifyObjPort = 8002 (Fri. 02/12/2010 15:01 America/New_York - Sat. 02/13/2010 15:59 America/New_York)
    SEVERITY: Error
    Alert(432524): Trap received from ttnny-cse-f5node2: bigipServiceDown -- Bindings: sysUpTimeInstance = 1589982333, bigipNotifyObjMsg = Pool member 172.22.110.45:8002 monitor status down., bigipNotifyObjNode = 172.22.110.45, bigipNotifyObjPort = 8002 (Fri. 02/12/2010 14:59 America/New_York - Sat. 02/13/2010 15:59 America/New_York)
    SEVERITY: Error
    Alert(432527): Trap received from ttnny-cse-f5node1: bigipServiceUp -- Bindings: sysUpTimeInstance = 1589988572, bigipNotifyObjMsg = Pool member 172.22.110.45:8002 monitor status up., bigipNotifyObjNode = 172.22.110.45, bigipNotifyObjPort = 8002 (Fri. 02/12/2010 15:01 America/New_York - Sat. 02/13/2010 15:59 America/New_York)
    SEVERITY: Error
    Alert(432525): Trap received from ttnny-cse-f5node2: bigipServiceUp -- Bindings: sysUpTimeInstance = 1589982733, bigipNotifyObjMsg = Pool member 172.22.110.45:8002 monitor status up., bigipNotifyObjNode = 172.22.110.45, bigipNotifyObjPort = 8002 (Fri. 02/12/2010 14:59 America/New_York - Sat. 02/13/2010 15:59 America/New_York)
    When we checked at that time, the server was up and running with some Pega exceptions; the JVM showed 10% heap usage, and after some time it went to 30%.
    The alert below confirms the JVM is down, so we restart the server at this point:
    SEVERITY: Alert
    Alert(432565): Threshold triggered -- ttappapp01's 8003's Port Availability: 0.00 Percent < 100 Percent averaged over 1.00 minutes (Fri. 02/12/2010 17:15 America/New_York - Fri. 02/12/2010 17:15 America/New_York)
    SEVERITY: Alert
    Alert(432564): Threshold triggered -- ttappapp01's 8003's Port Availability: 0.00 Percent != 100 Percent averaged over 1.00 minutes (Fri. 02/12/2010 17:15 America/New_York - Fri. 02/12/2010 17:15 America/New_York)
    We took a thread dump and a heap dump at that time. Can anyone please give some suggestions on why the server is going out of memory?
    1. Is there any issue with the connection pools?
    2. Any suggestions on the design?
    Thanks,
    Raj.

    Hi Raj,
    Did you check the System.out and WebLogic managed server logs?
    You should also check the GC logs to see whether there is a memory problem.
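
    For the GC-log check: on a Sun/HotSpot JVM, adding something like the following to the managed server's Java options produces a log you can inspect for full-GC frequency and for how much heap each collection actually reclaims (this is a sketch; JRockit, also common under WebLogic 10.3, uses -Xverbose:gc instead):

    -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/path/to/gc.log

    If the old generation stays near capacity even after full collections, the heap dump you already took should show which objects are being retained.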

  • Lightroom 3.2 out of memory issues

    I had been using the beta version of Lightroom 3 without issues.  Once I installed the shipping version, I get out-of-memory messages all the time.  First I noticed this when I went to export some images.  I can get this message when I export just one image, or partway through a set of images (this weekend it made it through 4 of 30 images before it died).  If I restart Lightroom, it's hit or miss whether I can proceed or not. I've even tried restarting the box with only Lightroom running and still get the out-of-memory issue.
    I've also had problems printing.  I go to print an image and it looks like it will print but nothing does.  This does not generate an error message it just doesn't do anything.  So far restarting Lightroom seems to fix this problem.
    When in the develop module and click on an image to see it 1:1 at times the image is out of focus.  If I click on another image and then go back to the original it might be in focus.
    I have no idea if any of this is related, but I thought I'd throw it out there.  I've been using Lightroom since version 1.0 and have had very good luck with the program.  It is getting very frustrating trying to get anything done.  I searched through the forum, but the memory issues I found were with older versions. I'd be very grateful if anyone could point me in the right direction.
    Ken
    System:
    i7 860
    4g memory
    XP SP3

    Hi,
    You can get the HeapDump Analyzer for analyzing IBM AIX heapdumps from the links below.
    http://www.alphaworks.ibm.com/tech/heapanalyzer
    http://www-1.ibm.com/support/docview.wss?uid=swg21190608
    Prerequisites for obtaining a heapdump:
    1. Add -XX:+HeapDumpOnOutOfMemoryError to the Java options of the server (see notes 710146 and 1053604) to get a heap dump automatically when an OutOfMemoryError occurs.
    2. You can also generate heapdumps on request:
    Add -XX:+HeapDumpOnCtrlBreak to the Java options of the server (see note 710146), then send the SIGQUIT signal to the jlaunch process representing the server, e.g. using kill -3 <jlaunch pid> (see note 710154).
    The heap dump will be written to the output file java_pid<pid>.hprof.<millitime> in the /usr/sap/<SID>/<instance>/j2ee/cluster/server<N> directory.
    Both parameters can also be set together to get the benefit of both approaches.
    Regards,
    Sandeep.
    Edited by: Sandeep Sehgal on Mar 25, 2008 6:51 PM

  • CC 2014 Progs leave me plagued with out of memory issues which the previous versions don't exhibit.

    The new CC 2014 suite of programs seems rather memory hungry. I am plagued with out-of-memory issues trying to use them, while the old CC programs work just fine! Is there a new minimum memory spec now? For now I am forced to use the old versions, as the new ones are just unusable... some 'upgrade'!
    Phil

    Me too!  Seems whenever I run more than one CC app I get out of memory errors.  I have Win 7 with 32GB ram.  Only have this problem with CC 2014, not CS6.

  • HELP! FLASH CS6 "The export operation failed because it ran out of memory"

    Hey guys, really need your help on this one. I can't seem to export my flash animation video. It kept failing and prompting that "The export operation failed because it ran out of memory"
    Anyone knows a solution to this. It will be very much appreciated.
    Thank you very much.

    Hi,
    It depends on what content you are trying to export. The reason is as simple as the message says: the content might be so large that the export ran out of memory. Any info you can provide on the kind of stuff you have in the file?
    - Hemanth

  • I ran out of memory on my iPhone and downloaded iCloud but still don't have any usage available. Why?

    I ran out of memory on my iPhone, so I downloaded iCloud and backed everything up to it.  I still have no available usage. Do I delete everything off of my phone because it's on iCloud? I can't do any updates to my phone because I'm out of memory, and I only have 3 apps downloaded. Why??

    Backing up to iCloud doesn't allow you to free up space on your phone.  If you delete any of the iCloud data from your phone, it will also be deleted from iCloud and it will be lost.  If you need to free up space on your phone, you'll have to delete or sync some data off of it.
    If you aren't sure how much space you have available on your phone or what is using the space, go to Settings>General>Usage and wait for the apps list to open.

  • Ran out of memory on my 16 gig iPad

    I have a 16 gig iPad. It's in good condition and only a year old. I ran out of memory/space. What are my options besides purchasing a new iPad with more gigs or deleting apps?

    How much space is Other using? You may be able to reduce it.
    How Do I Get Rid Of The “Other” Data Stored On My iPad Or iPhone?
    http://tinyurl.com/85w6xwn
    With an iOS device, the “Other” space in iTunes is used to store things like documents, settings, caches, and a few other important items. If you sync lots of documents to apps like GoodReader, DropCopy, or anything else that reads external files, your storage use can skyrocket. With iOS 5/6, you can see exactly which applications are taking up the most space. Just head to Settings > General > Usage, and tap the button labeled Show All Apps. The storage section will show you the app and how much storage space it is taking up. Tap on the app name to get a description of the additional storage space being used by the app’s documents and data. You can remove the storage-hogging application and all of its data directly from this screen, or manually remove the data by opening the app. Some applications, especially those designed by Apple, will allow you to remove stored data by swiping from left to right on the item to reveal a Delete button.
     Cheers, Tom

  • IPhone ran out of memory when recording video...can it be saved?

    I was recording a video of my friend getting engaged and my iPhone 4S ran out of memory and stopped recording. Afterwards the video couldn't be played on my iPhone and I am encountering the same problem when trying to play the file on my laptop.
    I've tried a few different MOV and MP4 fix websites, but nothing seems to work. The file size is 377MB so I know something is there, it just can't play.
    Is there some sort of code that's written into the video file at the end that didn't happen because my iPhone ran out of memory?
    Any help to save this very important memory would be appreciated.
    Thanks
    -Justin

    Now this is very strange. I uploaded the video file to my Dropbox and now it's playing through the app.
    I also sent a link and it's playing via that as well.
    But when I download the file from Dropbox and try to play it, no dice.
    Strangest thing I've ever seen...but at least it's working!

  • Result Set Causing out of memory issue

    Hi,
    I am having trouble fixing a memory issue caused by a result set. I am using JDK 1.5 and SQL Server 2000 as the backend. When I execute a statement, the result set returns a minimum of 400,000 records, and I have to go through each record one by one, apply some business logic, and update the row; after updating around 1,000 rows my application goes out of memory. Here is the original code:
    Statement st = con.createStatement();
    ResultSet rs = st.executeQuery("Select * from database tablename where field= 'done'");
    while (rs != null && rs.next()) {
        System.out.println("doing some logic here");
    }
    rs.close();
    st.close();
    I am planning to fix the code in this way:
    Statement st = con.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                       ResultSet.CONCUR_UPDATABLE);
    st.setFetchSize(50);
    ResultSet rs = st.executeQuery("Select * from database tablename where field= 'done'");
    while (rs != null && rs.next()) {
        System.out.println("doing some logic here");
    }
    rs.close();
    st.close();
    But one of my colleagues told me that the setFetchSize() method does not work with the SQL Server 2000 driver.
    So Please suggest me how to fix this issue. I am sure there will be a way to do this but I am just not aware of it.
    Thanks for your help in advance.
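
    Your colleague has a point: in its default select method, the old Microsoft SQL Server 2000 JDBC driver materializes the entire result set on the client, so setFetchSize() alone does not help. That driver documents a SelectMethod=cursor connection property that switches it to a server-side cursor, after which rows stream and the fetch size behaves as expected (the free jTDS driver is another common way out). A hedged sketch with hypothetical connection details, written with try-with-resources for brevity (on JDK 1.5, use finally blocks instead):

    import java.sql.*;

    public class StreamRows {
        public static void main(String[] args) throws Exception {
            // SelectMethod=cursor makes the driver fetch through a server-side
            // cursor instead of buffering all 400,000 rows in the JVM.
            Class.forName("com.microsoft.jdbc.sqlserver.SQLServerDriver");
            try (Connection con = DriverManager.getConnection(
                    "jdbc:microsoft:sqlserver://dbhost:1433;DatabaseName=mydb;SelectMethod=cursor",
                    "user", "password");
                 Statement st = con.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                                    ResultSet.CONCUR_READ_ONLY)) {
                st.setFetchSize(50); // honored once the cursor select method is active
                try (ResultSet rs = st.executeQuery(
                        "SELECT * FROM tablename WHERE field = 'done'")) {
                    while (rs.next()) {
                        // apply the business logic to one row at a time; do the
                        // updates through a separate PreparedStatement rather than
                        // an updatable result set, so nothing accumulates in memory
                    }
                }
            }
        }
    }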

    Here is the full-fledged code. The TeamConnect and TopLink APIs are being used. The code has already been developed; it works for 2-3 hours and then fails. I just have to fix the memory issue. Please suggest something:
    Statement st = con.createStatement();
    ResultSet rs = st.executeQuery("Select * from database tablename where field= 'done'");
    while (rs != null && rs.next()) {
        // where vo is the value object obtained from the rs row by row
        if (updateInfo(vo, user)) {
            logger.info("updated : " + rs.getString("number_string"));
            projCount++;
        }
    }
    rs.close();
    st.close();
    private boolean updateInfo(CostCenter vo, YNUser tcUser) {
              boolean updated;
              UnitOfWork unitOfWork;
              updated = false;
              unitOfWork = null;
              List projList_m = null;
              try {
                   logger.info("Before vo.getId() HERE i AM" + vo.getId());
                   unitOfWork = FNClientSessionManager.acquireUnitOfWork(tcUser);
                   ExpressionBuilder expressionBuilder = new ExpressionBuilder();
                   Expression ex1 = expressionBuilder.get("application")
                             .get("projObjectDefinition").get("uniqueCode").equal(
                                       "TABLE-NAME");
                   Expression ex2 = expressionBuilder.get("primaryKey")
                             .equal(vo.getPrimaryKey());// primaryKey;
                   Expression finalExpression = ex1.and(ex2);
                   ReadAllQuery projectQuery = new ReadAllQuery(FQUtility
                             .classForEntityName("EntryTable"), finalExpression);
                   List projList = (List) unitOfWork.executeQuery(projectQuery);
                   logger.info("list value1" + projList.size());
                   TNProject project_hist = (TNProject) projList.get(0); // primary key
                   // value
                   logger.info("vo.getId1()" + vo.getId());
                   BNDetail detail = project_hist.getDetailForKey("TABLE-NAME");
                   project_hist.setNumberString(project_hist.getNumberString());
                   project_hist.setName(project_hist.getName());
                   String strNumberString = project_hist.getNumberString();
                   TNHistory history = FNHistFactory.createHistory(project_hist,
                             "Proj Update");
                   history.addDetail("HIST_TABLE-NAME");
                   history.setDefaultCategory("HIST_TABLE-NAME");
                   BNDetail histDetail = history.getDetailForKey("HIST_TABLE-NAME");
                   String strName = project_hist.getName();
                   unitOfWork.registerNewObject(histDetail);
                   setDetailCCGSHistFields(strNumberString, strName, detail,
                             histDetail);
                   logger.info("No Issue");
                   TNProject project = (TNProject) projList.get(0);
                   project.setName(vo.getName());
                   logger.info("vo.getName()" + vo.getName());
                   project.setNumberString(vo.getId());
                   BNDetail detailObj = project.getDetailForKey("TABLE-NAME"); // required
                   setDetailFields(vo, detailObj);//this method gets the value from vo and sets in the detail_up object
                   FNClientSessionManager.commit(unitOfWork);
                   updated = true;
                   unitOfWork.release();
              } catch (Exception e) {
                   logger.warn("update: caused exception, " + e.getMessage());
                   unitOfWork.release();
              }
              return updated;
         }
    Now I have tried to change the code a little bit, and I added the following lines:
                        updated = true;
                     FNClientSessionManager.release(unitOfWork);
                     project_hist=null;
                     detail=null;
                     history=null;
                     project=null;
                     detailObj=null;
                        unitOfWork.release();
                        unitOfWork=null;
                     expressionBuilder=null;
                     ex1=null;
                     ex2=null;
                     finalExpression=null;
    I also added code to request the garbage collector after every 5th update:
    if (updateInfo(vo, user)) {
        logger.info("project update : " + rs.getString("number_string"));
        projCount++;
        // call garbage collector every 5th record update
        if (projCount % 5 == 0) {
            System.gc();
            logger.debug("Called Garbage Collector on " + projCount + "th update");
        }
    }
    But now the code won't even update a single record. So please look into the code and suggest something so that I can stop banging my head against the wall.

  • Top Link Causes out of memory issue when millions of records need to update

    Hello everyone,
    I am using TopLink 9.0.4 in a batch process. The batch process reads from a temp table (the temp table has millions of records, one month's worth of data, which need to be updated). The database being used is SQL Server 2005. Below is a snippet of the code. It works for 6-7 hours and then crashes due to running out of memory:
    ExpressionBuilder expressionBuilder = new ExpressionBuilder();
    Statement st = con.createStatement();
    ResultSet rs = st.executeQuery("Select * from database tablename where field= 'done'");
    while (rs != null && rs.next()) {
        // where vo is the value object obtained from the rs row by row
        if (updateInfo(vo, user, expressionBuilder)) {
            logger.info("updated : " + rs.getString("col_name"));
            projCount++;
        }
    }
    rs.close();
    st.close();
    private boolean updateInfo(ProjectVO vo, YNUser tcUser, ExpressionBuilder expressionBuilder) {
        boolean updated = false;
        try {
            // ... update logic elided in the original post ...
            updated = true;
        } catch (Exception e) {
            logger.warn("update: caused exception, " + e.getMessage());
        }
        return updated;
    }
    Edited by: user8981696 on Jan 14, 2010 1:00 PM

    Thanks for your reply.
    Please find below the answers to your suggestions/concerns:
    You seem to be using raw JDBC to select all of the records in a single result set, not sure if this may be causing a memory issue. You could try paging through the results instead.
    Ans: I have modified the code to get 1000 records each time, and I am getting the ResultSet by using a PreparedStatement instead of a regular Statement object.
    What type of caching are you using?
    Ans: No caching is being used. If you have some thoughts on caching, please suggest or share some sample code. Again, no app server is being used; it's just a regular Java batch process, so I don't know how to do caching in a simple Java process.
    You may also wish to try the latest 9.0.4 patch release, or try the 10.1.3 version, or the latest EclipseLink 2.0 release.
    Ans: Where can I find the latest 9.0.4 patch release?
    Any help/suggestion is really appreciated!
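
    On the paging point: instead of one SELECT that returns the whole month, you can page on an indexed, monotonically increasing key so that no single result set (and no single unit of work) ever holds millions of rows. A hedged sketch in plain JDBC against SQL Server 2005; the table name, key column, and connection details are all hypothetical:

    import java.sql.*;

    public class KeysetPager {
        static final int PAGE_SIZE = 1000; // must match the TOP clause below

        public static void main(String[] args) throws Exception {
            // Hypothetical names throughout; 'id' stands for any indexed,
            // monotonically increasing key on the temp table.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:sqlserver://dbhost:1433;databaseName=mydb", "user", "password")) {
                long lastId = 0;
                while (true) {
                    int rows = 0;
                    try (PreparedStatement ps = con.prepareStatement(
                            "SELECT TOP 1000 id, col_name FROM temp_table" +
                            " WHERE field = 'done' AND id > ? ORDER BY id")) {
                        ps.setLong(1, lastId);
                        try (ResultSet rs = ps.executeQuery()) {
                            while (rs.next()) {
                                lastId = rs.getLong("id");
                                rows++;
                                // run the update logic for this row here
                            }
                        }
                    }
                    if (rows < PAGE_SIZE) break; // last page processed
                }
            }
        }
    }

    Committing and releasing the TopLink unit of work at each page boundary, rather than once at the end, also keeps its change-tracking caches from accumulating across the whole run.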

  • Out of memory issues with PSE 8

    I am using PSE 8 on a 64-bit Dell desktop computer with an Intel(R) Core(TM) i7 CPU 920 at 2.67 GHz and 8 GB of RAM. My operating system is Windows 7, 64-bit.
    My problem is that I get out-of-memory or insufficient-RAM messages in the PSE Editor with some PSE tools when my resource utilization reaches 37 to 38%. In other words, even though my computer is telling me I have almost 4 GB of memory left, PSE is saying it does not have enough memory to complete the operation. It looks to me as if PSE is only using 4 GB of my 8 GB of RAM. Is this true, and what do I need to do to allow PSE to utilize all of my available RAM?

    Thanks, that does answer what the problem is, but not necessarily with a solution. I like working with 8 to 10 pictures (files) in the editor tray at a time. I make whatever changes are needed to each and then group 4 or 5 into an 8.5 x 11 collage. Each picture in the collage is a separate layer, and each separate picture may have multiple layers of its own. I print the collage on 8.5 x 11 photo paper and then put the page in a photo album. I like the pictures in different sizes, orientations, and sometimes shapes, so the album and multiple-picture options offered in PSE are not much help. My process eats a lot of memory, which, I mistakenly thought, my 8 GB of RAM would solve.
    Anyway, now that I know the limitations, I can adjust the process to avoid the memory issue; hopefully, a future version of Elements will accommodate 64-bit.
    I am wondering whether I need to look at other programs, or whether I am missing a PSE function that would make my chore easier.
