Running script consuming memory

Hi,
To create many thousands of icons I've written a pretty simple script that turns the various layers and group objects in my AI file on and off and exports the icon as a GIF. The script works, insofar as it creates all the icon permutations I'm after; however, the memory being consumed keeps growing. Using Windows Task Manager I can see that illustrator.exe is using ~101,000 KB and page file usage is at 915 MB prior to running the script. After running the script, illustrator.exe is using three times that memory and page file usage is up around 1.28 GB.
Is there something I should be looking into regarding my script to better manage memory usage?

Hey David
I have run into this as well, except I start to get a Param error (which is usually memory-related). Check out the scripting reference for the $ global object (I think that's what it's called) and find a place in your script to run the garbage collector, $.gc(). That should get rid of empty references that are taking up memory. Let us know if that helps.
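For what it's worth, here's a rough sketch (in Illustrator ExtendScript) of where a $.gc() call could sit in an export loop like yours - the document structure, layer handling, GIF options and output folder below are invented placeholders, not your actual script:

    // Hypothetical sketch only: turn each layer on, export a GIF, turn it off,
    // and ask ExtendScript to collect garbage every so often. All names are made up.
    var doc = app.activeDocument;
    var outFolder = Folder("~/Desktop/icons");   // assumed output location
    if (!outFolder.exists) outFolder.create();

    // start with every layer hidden
    for (var i = 0; i < doc.layers.length; i++) {
        doc.layers[i].visible = false;
    }

    for (var i = 0; i < doc.layers.length; i++) {
        doc.layers[i].visible = true;

        var opts = new ExportOptionsGIF();       // GIF export, as in the original post
        doc.exportFile(File(outFolder + "/icon_" + i + ".gif"), ExportType.GIF, opts);

        doc.layers[i].visible = false;

        if (i % 50 === 0) {
            $.gc();                              // release unused script references periodically
        }
    }

Whether this actually shrinks Illustrator's footprint depends on what the real script holds onto between exports; the main thing is to avoid keeping references to page items or documents in long-lived arrays.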
A

Similar Messages

  • Users moved out of POWL but it is still running and consuming memory.

    We are on SRM 6.0 and we have issues with POWL session close. Some users run POWL without any selection, which in turn runs the POWL forever. Even after the users log off from the system, the POWL keeps running and consumes a lot of memory. I am looking for ways to kill the session automatically once the users log off or close the portal. Do let me know if you have any inputs.
    Thanks, Sachin

    Hi Sachin
    SAP Note 1134640 - SRM 6.0 / 7.0 POWL: no refresh when clicking on query name
    SAP Note 1134761 - SRM 6.0 / 7.0 POWL: Display 'Query has been changed'
    SAP Note 1410793 - SRM Shopping Carts (SC) header POWL query -- open the file SC_header_POWL_query.pdf (given by richardo cavendi) for more info.
    Regarding your BBP_DOCUMENT_TAB invoice entry: leave it as it is; deleting it may harm your business objects. You can delete it after some two years, but even SAP does not recommend deleting it, in case an accident happens during that time. The SRM system is very robust in this matter: the clean-up job verifies the document, and if the end user changed anything before the clean-up job ran, it makes the entry. It is not an SAP problem; whenever the end user or the procurement department makes a mistake, these entries stick permanently. This is my opinion.
    Regarding your new UOM query: the SMOF_DOWNLOAD report helps to transfer the change whenever you change a UOM on the material master, so it is updated immediately in SRM via the CRM* table settings.
    Muthu

  • Running out of memory building csv file

    I'm attempting to write a script that does a query on my
    database. It will generally be working with about 10,000 - 15,000
    records. It then checks to see if a certain file exists. If it
    does, it will add the record to an array. When it's done looping
    over all the records, it takes the array that was created and
    outputs a CSV file (usually with about 5,000 - 10,000 lines).
    But... before that ever happens, it runs out of memory. What can I
    do to make it not run out of memory?

    quote: Originally posted by nozavroni
    I'm attempting to write a script that does a query on my
    database. It will generally be working with about 10,000 - 15,000
    records. It then checks to see if a certain file exists.

    Sounds pretty inefficient to me. Is there no way you can
    modify the query so that it only selects the records for which the
    file exists?
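    If the file-system check really has to stay in the script, another option is to write each matching record straight to the CSV file as you loop, instead of building the whole array (and the final CSV string) in memory first. A rough sketch of that pattern in JavaScript/Node - the original thread is not JavaScript, and the field names and paths here are invented:

        // Illustrative only: stream matching records to disk instead of accumulating them.
        var fs = require("fs");

        function exportCsv(records, csvPath) {
            var out = fs.createWriteStream(csvPath);   // write to disk as we go
            out.write("id,name,path\n");               // header row

            for (var i = 0; i < records.length; i++) {
                var rec = records[i];
                if (fs.existsSync(rec.path)) {         // keep only records whose file exists
                    out.write(rec.id + "," + rec.name + "," + rec.path + "\n");
                }
                // nothing besides the current record is held between iterations
            }
            out.end();
        }

    Filtering in the query, as suggested above, is still the cheaper fix when the existence information lives in the database.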

  • Running out of memory... what to do?

    I am using a mid-2012 non-retina 13" MacBook Pro that I upgraded with a large SSD and 16 gigs of RAM, but I find myself running out of memory every day. I am developing a DirectX11 game engine in a virtualized Windows 7 environment, and my memory needs apparently exceed the 16GB I have. Is there any way to add more memory to my MacBook Pro, or can any Apple notebook hold more than 16 gigs of RAM? (officially or unofficially)
    I can't afford a Mac Pro (a shame, since it would have been right at home in my chain of Thunderbolt devices), but if I could upgrade my MacBook Pro with 32GB of memory or more, that would be a lifesaver. I am starting to think it is unreasonable of me to expect my consumer notebook to do the work of a professional workstation, but I figured it couldn't hurt to ask around.
    Thank you for your time!
    -Faye

    FayeCS wrote:
    Is there any way to add more memory to my MacBook Pro, or can any Apple notebook hold more than 16 gigs of RAM? (officially or unofficially)
    I'm sorry, but 16 GB of RAM is the maximum amount of RAM a MacBook Pro can use.
    As you are using Windows in a virtual machine, this RAM problem would be solved if you installed Windows through Boot Camp. By doing this, your Mac will run faster because you will not be using all the RAM, and you will get the maximum performance in Windows.
    Your Mac supports 64-bit Windows 7, Windows 8 and Windows 8.1, so you can install any of them. Then, follow the steps to install Windows on your computer > http://manuals.info.apple.com/MANUALS/1000/MA1636/en_US/boot_camp_install-setup_10.8.pdf
    I do not recommend using Windows in virtual machines if you want to do tasks like game development, because they require your GPU and CPU to work at their best performance.

  • IE 11 Restricting Running Scripts and ActiveX Controls on Local Files

    I have been coding for my MediaWiki pages and needed to test these pages with IE 11. However, whenever I open a file in IE 11, an annoying message appears saying "IE 11 restricted this website from running scripts and ActiveX controls." How do I do a one-time fix to stop this from happening to local files in the future?

    To test your WEBSITE, publish it to a web server, e.g. localhost/test.local.
    Web servers and browsers use http(s), not the file: protocol.
    Rob^_^

  • Loading script "script path-name" failed (0xC0000006) error when running scripts from DFS

    We have this issue where any number of random scripts that execute at startup will produce the following error when run from DFS:
    The status code for this error equates to STATUS_IN_PAGE_ERROR - The instruction at 0x%p referenced memory at 0x%p. The required data was not placed into memory because of an I/O error status of 0x%x.
    This does not always happen each time, and the affected script can be random. If we move the script(s) to a non-DFS share, we do not see this. I believe the issue is caused by a momentary disconnect from the DFS share for whatever script is attempting to execute at that moment. This happens across a few (consistent) offices on different computers.
    Any ideas?

    Hi,
    According to your description, my understanding is that you run scripts from a shared folder which is added to a DFS Namespace and the error appears, but you can run the scripts when you access the shared folder directly.
    What scripts did you run? Is there any error message in the Event Log? If so, please provide us with the detailed error message for our further research.
    Best Regards,
    Mandy 

  • Using domain templates (where's the run scripts step?)

    Hello,
    I'm using 9.2.1 for ALSB.
    I have manually created an ALSB domain. I want to create a similar domain on a different server; every setting is the same except the username for the datasource.
    I used config_builder.sh to create a jar template from the first ALSB domain. I then copied the jar to the other server and used it with the domain creation wizard (config.sh). Everything behaved exactly the way I expected: all the settings were the same, and I only had to change the username for the datasource.
    However, there is one problem: the run scripts phase is missing. I even went and found the scripts "reporting_runtime_drop.sql" and "reporting_runtime.sql" and added them during template creation. Interestingly enough, they did not show up in any run scripts phase.
    Also, I took a look at the jar contents, and the SQL files I added in the add SQL phase were not contained in the jar.
    Am I looking at this the wrong way? Is there any way to associate the SQL I want to run in a template after the datasources are configured? Or is this a limitation of templates, so I must manually log into the database and set things up?
    M@

    Q005, I am not certain that I fully understand your question.  However, here is the "Motherboard Specifications" page for the computer.  At the bottom of the page is a Motherboard layout.  It shows the connectors at the rear of the case.  You will use # 4-LAN to connect your computer by wire to the router.  I have had several computers connected in this way.
    Please click "Accept as Solution" if your problem is solved.
    Signature:
    HP TouchPad - 1.2 GHz; 1 GB memory; 32 GB storage; WebOS/CyanogenMod 11(Kit Kat)
    HP 10 Plus; Android-Kit Kat; 1.0 GHz Allwinner A31 ARM Cortex A7 Quad Core Processor ; 2GB RAM Memory Long: 2 GB DDR3L SDRAM (1600MHz); 16GB disable eMMC 16GB v4.51
    HP Omen; i7-4710QH; 8 GB memory; 256 GB San Disk SSD; Win 8.1
    HP Photosmart 7520 AIO
    ++++++++++++++++++
    **Click the Thumbs Up+ to say 'Thanks' and the 'Accept as Solution' if I have solved your problem.**
    Intelligence is God given; Wisdom is the sum of our mistakes!
    I am not an HP employee.

  • InDesign running low on memory

    Hi,
    I'm trying to develop a script that adds a number of AI files to a new ID document, saves the document and then exports the ID document as PDF. Most of it seems to be working fine, but when I reach page 30-35 (depending on which files I add), ID runs out of memory.
    It's using ~1.3 GB of memory, which is quite a lot, so the error is understandable.
    What I don't understand is why it takes that much. At first I tried exporting the entire file at once (which gave me the above-mentioned error and no output file at all). I then added a while loop that exported each page into its own file (called 1.pdf, 2.pdf, 3.pdf, etc.). Same result.
    I tried adding $.gc(); twice for every 10 pages that get exported, but that doesn't seem to change anything whatsoever. The code can be seen here (I removed the PDF prefs, since I don't think they are relevant):
    "Counter" is a variable starting at 1, and "numberOfPages" is a variable containing the number of pages in the document. I'm at a bit of a loss here. Shouldn't InDesign release the memory once it has exported each page?
        while (counter <= numberOfPages) {
            if (counter == 10 || counter == 20 || counter == 30) {
                $.gc();
                $.gc();
            }
            with (app.pdfExportPreferences) {
                // set the page range to the current page (counter) - convert to a string so pageRange accepts the input
                myString = counter + "";
                pageRange = myString;
                // lots of other PDF preferences go here - deleted for readability
            }
            try {
                myDocument.exportFile(ExportFormat.pdfType, File(myFolder + "/" + counter + ".pdf"), false, myPDFExportPreset);
            } catch (e) {
                alert(e);
            }
            counter = counter + 1;
        }
    Let me know if you need further info.
    Thanks in advance,
    Thomas

    You're probably correct. The files do tend to get quite heavy. But I was expecting InDesign to be able to release the memory after exporting a single page. Alas, it doesn't do that. I could also delete the page, but we would like to save the .indd file as well.
    But I tried, just for the fun of it, to close the document and open it again after every 10 pages - and et voilà, it worked! It's not beautiful and I'm not gonna win any awards for it, but for now it'll have to do (a rough sketch of the idea follows below).
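    For anyone hitting the same wall, here is a minimal ExtendScript sketch of that close-and-reopen workaround. The document path, output folder and the "[High Quality Print]" preset are assumptions for illustration, not the poster's actual names:

        // Hypothetical sketch: export one page per PDF, closing and reopening the
        // document every 10 pages so InDesign can release the memory it has built up.
        var myDocPath = File("~/Desktop/book.indd");            // assumed .indd location
        var myFolder  = Folder("~/Desktop/pdf-output");         // assumed output folder
        var myPDFExportPreset = app.pdfExportPresets.item("[High Quality Print]"); // assumed preset

        var myDocument = app.open(myDocPath);
        var numberOfPages = myDocument.pages.length;

        for (var counter = 1; counter <= numberOfPages; counter++) {
            app.pdfExportPreferences.pageRange = counter + "";  // export only the current page
            try {
                myDocument.exportFile(ExportFormat.pdfType,
                                      File(myFolder + "/" + counter + ".pdf"),
                                      false, myPDFExportPreset);
            } catch (e) {
                alert(e);
            }
            // every 10 pages: close without saving, collect garbage, reopen
            if (counter % 10 === 0 && counter < numberOfPages) {
                myDocument.close(SaveOptions.NO);
                $.gc();
                myDocument = app.open(myDocPath);
            }
        }
        myDocument.close(SaveOptions.NO);

    Closing with SaveOptions.NO assumes the .indd was already saved before the export run; use SaveOptions.YES instead if the script also makes changes that need to persist.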

  • System running out of memory

    I have deployed Windows Embedded Standard 7 on an x64 machine. My answer file includes the File Based Write Filter, and my system has 8 GB of RAM installed. I have excluded some working folders for a specific piece of software, and other than that no big change would happen in the system. I have set the overlay size of FBWF to 1 GB.
    Now my problem is that after the system works for some time, the amount of free memory starts to decline, and after around 7-8 hours the available memory reaches a critical amount, the system is unusable and I have to reset it manually. I have increased the size of the overlay to 2 GB, but this happens again.
    Is it possible that this problem is due to FBWF? If I set the overlay size to 2 GB, the system should not touch any more than that 2 GB, so I would never run out of memory with 8 GB of installed RAM. Am I right?

    Would you please take a look at my situation and give me a possible diagnosis:
    1- I have "File Based Write Filter" on Windows Embedded Standard 7 x64 SP1.
    2- The installed RAM is 8GB and size of overlay of FBWF is set to 2GB.
    3- When the system is giving the critical memory message the conditions are as follows:
    a) The consumed memory in task manager is somewhere around 4 to 4.5 GB out of 8GB
    b) A process, schedule.exe (from our software), is running more than a hundred times and is consuming memory, but its .exe file is located inside an unprotected folder.
    c) executing fbwfmgr.exe /overlaydetail is reporting that only 135MB of overlay volume is full!
    Memory consumed by directory structure: 35.6 MB
    Memory consumed by file data: 135 MB
    d) The CPU usage is normal
    I don't know what exactly is full. Memory has free space, the FBWF overlay volume has free space, so which memory is full?
    p.s.: I checked my answer file and paging file is disabled as required.

  • Physical Oracle connections consume memory continuously

    I am running Oracle Database 10.2.0.1 on a Solaris x86-64bit system. We have recently stumbled upon an issue where the physical Oracle DB connections' footprint continues to grow over time. If you do a 'ps' you'll see:
    oracle 13074 1 0 Nov 27 ? 0:00 oracleAMS (LOCAL=NO)
    oracle 13076 1 1 Nov 27 ? 146:20 oracleAMS (LOCAL=NO)
    oracle 13459 1 1 Nov 27 ? 144:39 oracleAMS (LOCAL=NO)
    oracle 13463 1 1 Nov 27 ? 144:22 oracleAMS (LOCAL=NO)
    oracle 13457 1 0 Nov 27 ? 0:00 oracleAMS (LOCAL=NO)
    oracle 19847 1 0 Nov 11 ? 0:00 oracleAMS (LOCAL=NO)
    oracle 13088 1 1 Nov 27 ? 145:52 oracleAMS (LOCAL=NO)
    oracle 19925 1 0 Nov 11 ? 0:19 oracleAMS (LOCAL=NO)
    oracle 13461 1 1 Nov 27 ? 144:43 oracleAMS (LOCAL=NO)
    The connections that seem to be taking up the most memory also have a large value in the 'TIME' column of the output from 'ps'.
    Output from top:
    13463 oracle 11 49 0 2646M 2561M sleep 144:22 0.46% oracle
    13465 oracle 11 59 0 2646M 2561M sleep 144:35 0.27% oracle
    13461 oracle 11 59 0 2645M 2560M sleep 144:43 0.74% oracle
    13459 oracle 11 49 0 2645M 2560M sleep 144:40 0.59% oracle
    13088 oracle 11 59 0 2645M 2560M cpu/0 145:52 0.40% oracle
    13076 oracle 11 59 0 2645M 2560M sleep 146:20 0.28% oracle
    13090 oracle 11 59 0 2645M 2559M sleep 143:47 0.53% oracle
    Notice from the size column that these connections are consuming ~2.5 GB of RAM each, but they start out around 800 MB.
    We've looked through our application code and have been unable to find any glaring coding issues (i.e. failure to close a PreparedStatement or ResultSet). Does anyone know if there is/was a bug in the version of Oracle we're using that would cause this to happen? Has anyone else seen this issue?
    Any feedback is greatly appreciated!

    Generally, an idle connection/session (a session can have more than one connection) uses about 0.5 MB of memory to hold session-related information, and then, depending upon what your sessions are doing (SQL queries), it consumes more memory. One memory parameter you might want to take a look at and investigate is pga_aggregate_target. The following query might give you some more insight.
    SELECT vses.username || ':' || vsst.SID || ',' || vses.serial# username,
           vstt.NAME, MAX(vsst.VALUE) VALUE
      FROM v$sesstat vsst, v$statname vstt, v$session vses
     WHERE vstt.statistic# = vsst.statistic#
       AND vsst.SID = vses.SID
       AND vstt.NAME IN ('session pga memory', 'session pga memory max',
                         'session uga memory', 'session uga memory max',
                         'session cursor cache count', 'session cursor cache hits',
                         'session stored procedure space', 'opened cursors current',
                         'opened cursors cumulative')
       AND vses.username IS NOT NULL
     GROUP BY grouping sets(vses.username), vsst.SID, vses.serial#, vstt.NAME
     ORDER BY vses.username, vsst.SID, vses.serial#, vstt.NAME;

  • System hanging when it runs out of memory

    Hello,
    my system has a finite amount of RAM and swap (it doesn't matter, to my purpose, if it's 16GB or 128MB, I'm not interested in increasing it anyway).
    Sometimes my apps completely use all the available memory. I would expect that in these cases the kernel kills some apps to keep working correctly. The OOM Killer exists just for this, doesn't it?
    What happens, instead, is that the system hangs. Even the mouse stops working. Sometimes it manages to get back to life in a few seconds/minutes, other times hours pass by and nothing changes.
    I don't want to add more memory, I just want that the kernel kills some application when it's running out of memory.
    Why isn't this happening?
    Right now I'm writing a bash script that will kill the most memory-hungry process when available memory gets below 10%, because I'm sick of freezing my machine.
    But why do I have to do this? Why do I need a user-space tool polling memory usage and sentencing applications according to a cheap policy? What the hell is wrong with my OOM killer, why isn't it doing its job?!

    Alright, you won, now quit pointing out my ignorance
    Your awkish oom killer is a lot cooler than mine, switching to it, thanks!
    I did some testing (initially just to test the OOM-killing script) and found out that if a program tries to allocate all the memory it can, it eventually gets killed by Linux's OOM killer. If instead it stops allocating new memory when there is less than 4 MB of free memory (or a similar value), the OOM killer won't do anything, and the system will get stuck, as if a forkbomb were running.
    That is, this program with MINSIZE=1 will be killed, while with MINSIZE=4MB it will force me to hard reboot:
    #include <string.h>
    #include <stdlib.h>

    #define MINSIZE (1024*1024*4) // 4MB

    int main( )
    {
        int block = 1024*1024*1024; // 1GB
        void *p;

        while( 1 ) {
            p = malloc( block );
            if( p ) {
                memset( p, 85, block );   // touch the pages so they are actually committed
            } else if( block > MINSIZE ) {
                block /= 2;               // allocation failed: retry with a smaller block
            } else {
                break;                    // even MINSIZE-sized allocations fail: stop
            }
        }
        return 0;
    }
    Guess I'd need to go deeper to understand why Linux's OOM killer works like that, but I won't (assuming the OOM-killing script behaves).

  • Nodemgr process running away with memory

    We have 3 different applications that this has happened to. All of a sudden, the nodemgr for the application environment will start consuming memory on the Unix Solaris 2.6 box. The nodemgr process gets up to 2 GB of virtual memory usage and seems to 'die', not to mention sometimes consuming all the available memory on the box, locking it up and requiring a box reboot. I'm getting paged when a process starts consuming memory, so at least a box reboot has been avoided lately.
    One application is using 3G2, two are using 3F2, and one is a Web SDK application.
    I suspect that there is a bug in Forte causing this to happen, maybe a bug in the way Forte handles something in the version of Solaris we're using.
    I've been working with a Forte tech rep, and they have said they haven't seen any other customers with the problem and don't have a reason for it.
    Has anyone else seen this happen and can you please shed some light
    on it?
    Thanks,
    Peggy Adrian
    [email protected]

    Peggy:
    I have DEFINITELY seen this problem and strongly believe that it is related to
    3G2 running on Solaris 2.6. As you know, 3G2 is not supported on Solaris 2.6.
    We upgraded to 3J1 and Solaris 2.6 in late November, and have never seen this
    problem since. However, we did see this problem a number of times when trying
    to run 3G2 on Solaris 2.6.
    By the way, we did find that if we killed the nodemgr and then restarted it
    VERY quickly, it would rebind properly and all would be well (except for
    having to start all the router partitions again.)
    -Martin

  • Media Server Consuming Memory

    My project is forwarding an RTMP stream to an RTMFP (multicast) stream using a server-side ActionScript. In forwarding this stream, the AMSCore process keeps consuming memory, which requires the system to be rebooted repeatedly. How can this be prevented?

    I did not think I could use the sample multicast application with my
    project's software. Instead,  I compared the outbound NetConnection and
    NetStream code in the sample application with the outbound NetConnection
    and NetStream code in my project's multicast server actionscript. One of
    differences I discovered is that my project's application connected to the
    rtmfp group with the following call:
    nc_connect..connect("rtmfp:");
    while the sample called:
    nc.connect(resetUriProtocol(streamContext.client.uri, "rtmfp"));
    Therefore, in my project's actionscript, I replaced connection to the
    "rtmfp:" serverless group with a connection to the application uri:
    net_connect.connect("rtmfp://127.0.0.1:" + port + "/" + application.name)
    This change fixed the excessive memory consumption. So, why does the
    connection to a serverless network endpoint consume memory on Windows Server
    2008 R2 to the point that all of the virtual memory is used or it crashes
    the system?
    Scott F. Wilson
    Principal Software Engineer
    Raytheon SAS
    Marlborough, MA  01752
    Phone: 508-490-3123
    Fax:     508-490-1366

  • Running out of memory? Help!

    Hi everyone,
    I am editing a movie and keep running out of memory (or so it seems). Here are the symptoms:
    First the obvious. Sometimes when I try to render, I get a red X telling me I am out of memory.
    Sometimes I try to preview my work and the video and audio are all choppy. This happens whether I have rendered my work or not.
    The program crashes or becomes unresponsive.
    I run a very clean machine with what I think is adequate power. Here are the specs:
    Premiere Elements 7
    Windows XP Pro SP3   
    Dell Precision Workstation T3400
    Intel Core Duo @ 2.66GHz
    4GB of RAM
    Nvidia Quadro NVS290 dual monitor video card with 256MB of RAM
    My project is a five-minute movie (corporate video) set up as follows:
    HD1080i movie at 29.97 FPS
    Square pixels (1.0)
    Upper Field First
    20 fps Drop-Frame Timecode
    Audio sample rate 48000 Hz
    I had my video footage on a couple of external drives connected via USB. Most of the video is coming from a couple of shoots. Those files vary in size between 15GB and 40GB. Stills were on a shared external drive connected via gigabit Ethernet. The movie project is on a secondary 250GB hard drive in my computer (separate from the OS) which still has plenty of room available (133GB). I moved the video footage to the secondary drive (and re-synced it) but that didn't help. Premiere's scratch disks are pointed to the secondary drive where the project resides (the one with the 133GB of available space).
    Hopefully I didn't miss any important information. I'll be glad to provide anything additional if needed. Any help would be greatly appreciated!
    Jose

    Hi Steve,
    Indeed, I have a stack of 60 photos. Each is around 3MB. All are JPGs, all 72 DPI, all the same dimensions (I worked on them in Photoshop to make sure they were all consistent). Images right now are 2880 x 1620. Do you think that reducing them to 2000 x 1500 px will make that much of a difference? Maybe I can optimize them for web to reduce file size?
    I have to work in high-def. Yes, eventually I will output to Blu-ray. But for my immediate needs, I'm outputting to Flash video. From there, I'm converting to an SWF loop, which the client will play at a trade show. The client is insisting on maximum quality, so I know they would not like a downsampled (is that the right term?) movie.
    I hear ya on the "modestly powerful computer." I run CS4 with no problem. Thinking that Premiere Pro would certainly be too much for this system, I thought that PE (because it's a consumer product) would run OK. Maybe I'm just pushing the limits... of everything (my computer, the software, the pile of images, etc.)!
    I'm almost done (been working on it for days), so I think I'm gonna leave the images as they are for now. For the time being, I'll just keep on rendering and saving (Save As) at every step. And, when all else fails, close and restart!
    Thanks
    Jose

  • I am running out of memory on my hard drive and need to delete files. How can I see all the files/applications on my hard drive so I can see what is taking up a lot of room?

    I am running out of memory on my hard drive and need to delete files. How can I see all the files/applications on my hard drive so I can see what is taking up a lot of room?
    Thanks!
    David

    Either of these should help.
    http://grandperspectiv.sourceforge.net/
    http://www.whatsizemac.com/
    Or search 'disk size' in the App Store.
    Be careful with what you delete and have a backup BEFORE you start; you may also want to reboot to try to free any memory that may have been written to disk.
