Memory Capacity when using DHTML Hotspot/pop-ups

I use RoboHelp v5. I have begun reformatting my predecessor's topics to make them more robust and have been exploring RoboHelp's features. Perhaps I am not technically savvy enough to use these features properly; if that's the case, please advise me and I'll see if my boss will send me to some official RoboHelp training. Our end users seem to like what I've been doing, however, and it is indeed making our Help system more usable.
What I'm doing is embedding screen captures as pop-ups using the "DHTML - Create Drop-down Hotspot and Text" menu option. The links work and all of that is good; however, in my larger topics I'm noticing problems while working in them. One of two things occurs frequently enough to be a nuisance:
1. The topic just "dies" - the cursor freezes and my scroll
bar cuts off the bottom portion of the topic. None of the data is
lost, but I have to close it and reopen it. Part of the topic just
disappears including the workspace it occupies.
2. I run out of memory - I actually get a dialog that says
"Insufficient Memory" and the topic aborts automatically. Again,
none of the data is lost, but I have to reopen it. Sometimes, this
occurs every couple of "saves".
In my smaller topics, it's a non-issue.
Is this related to the number of hotspots I'm embedding, is it an issue with my local drive, or something else? I truly appreciate any guidance here! Thanks.

Thanks for responding.
1 - is 1.8KB and contains 12 pop-ups, some of which point to the same image.
2 - is 2.7KB and contains 50 pop-ups, some of which point to the same image.
These topics also have images embedded within the actual topic rather than as pop-ups. Another thing I'm using is image maps (which I didn't mention before because I didn't think it would matter).
Are there other options for using images that reduce the size of a topic? I found that using pop-ups makes the topics easier to read (they were previously entirely too monstrous because the screen captures sat within the text). I now use only the primary captures in the body of the text and embed the subsidiary ones as pop-ups.

Similar Messages

  • Memory problems when using alias channel strips (3.0.2)

    I'm having memory issues when using alias channel strips. What is happening is I create one instance of the plugin, say Massive, then make a bunch of other aliases on different patches. When looking at the memory usage, every time I add an alias it increases the memory usage by an amount the same as the original plugin, i.e. add 3 aliases and you have 3 times more memory usage. See attached photos.
    Original plugin instance: ~80 MB memory usage
    Adding 3 alias channel strips: ~330 MB memory usage
    By comparison, here is the usage after adding 3 brand-new instances of the plugin: ~160 MB memory usage
    By comparison, it is using less memory to have 4 separate instances of the plugin than to use 3 aliases of the original. This doesn't make sense, since the whole point of an alias is to save memory by loading only one instance of the plugin.
    Anyone have any thoughts?
    Jon

    What's interesting is that if I, starting with a blank concert, add a Massive to a patch and then make an alias, the memory is as you say. But if I make a duplicate of the initial patch and then make aliases of that, the memory is lower.
    Are you seeing the same with the built-in plug-ins?
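    In plain programming terms, an alias is supposed to work like the flyweight idea: many references to one underlying instance, so extra aliases should cost almost nothing. Here is a minimal Java sketch of that expectation (Plugin and AliasDemo are our illustrative names, not MainStage internals):

    public class AliasDemo {
        // Stand-in for a plugin that owns a large sample buffer.
        static class Plugin {
            final byte[] samples = new byte[80 * 1024 * 1024]; // ~80 MB, as in the report
        }

        public static void main(String[] args) {
            Plugin original = new Plugin();

            // Aliases: extra *references* to the same instance.
            // Expected cost: still ~80 MB, since nothing new is allocated.
            Plugin alias1 = original, alias2 = original, alias3 = original;

            // Separate instances: three more allocations, ~320 MB in total.
            Plugin copy1 = new Plugin(), copy2 = new Plugin(), copy3 = new Plugin();

            System.out.println(alias1 == original); // true: aliases share storage
            System.out.println(copy1 == original);  // false: copies do not
        }
    }

    If the aliases truly shared one instance, memory would stay nearly flat as aliases were added; the numbers reported above suggest each alias is paying close to the full instance cost instead.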

  • High Virtual memory usage when using Pages 2.0.2

    Hey there,
    I was just wondering whether there had been any other reports of unusually high memory usage when using Pages 2.0.2, specifically virtual memory. I am running iWork 06 on the Mac listed below and Pages has been running really slowly recently. I checked Activity Monitor and Pages is using hardly any physical memory but loads of virtual memory, so much so that the page outs are almost as high as the page ins (roughly 51500 page ins / 51000 page outs).
    Any known problems, solutions or comments for this problem? Thanks in advance

    I don't know if this is specifically what you're seeing, but all Cocoa applications, such as Pages, have an effectively infinite Undo. If you have any document that you've been working on for a long time without closing, that could be responsible for a large amount of memory usage.
    While it's good practice to save on a regular basis, if you're making large amounts of changes it's also a good idea to close and reopen your document every once in a while, simply to clear the undo history. I've heard of some people seeing sluggish behavior after working on a document for several days, which cleared up when the document was closed and reopened.
    Titanium PowerBook   Mac OS X (10.4.8)  
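    As a toy illustration of why an unbounded undo history grows memory (a sketch of the general mechanism only, not how Pages actually implements undo):

    import java.util.ArrayDeque;
    import java.util.Deque;

    // An editor whose undo stack keeps every snapshot reachable.
    // Until the document is closed and the stack cleared, no snapshot
    // can be garbage-collected, so memory grows with every edit.
    public class UndoDemo {
        private final Deque<String> undoStack = new ArrayDeque<>();
        private String text = "";

        void edit(String newText) {
            undoStack.push(text); // the old state stays reachable forever
            text = newText;
        }

        void undo() {
            if (!undoStack.isEmpty()) text = undoStack.pop();
        }

        void close() {
            undoStack.clear(); // closing the document releases the history
        }

        public static void main(String[] args) {
            UndoDemo doc = new UndoDemo();
            for (int i = 0; i < 100000; i++) doc.edit("state " + i); // history grows without bound
            doc.close(); // now all snapshots can be collected
        }
    }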

  • Need help with internal HD memory problems when using Premiere Pro?

    When using PP I keep losing memory on my HD.
    Now this seems strange to me, since I have everything, all my video and audio files, on external HDs.
    Each time I make a new project I end up with less space on my internal HD.
    Information related to these projects is somehow remaining on my internal HD.
    Anyone got any ideas about what I might be doing wrong?
    Dimitrije

    Premiere will slowly compile various files to help the project along, and the default place is usually your internal hard drive. Make sure your scratch disks are pointed to an external hard drive if that is what you want, also, make sure the Media Cache Files are being created on your external as well (and not the default location which is on your local drive).
    Premiere Preferences > Media
    Media Cache Files & Media Cache Database should be changed to an external disk if you don't want them created on your local disk. There are many tutorials and explanations about all of these aspects of Premiere on these forums and from other sources. Hope that helps!

  • Bug report & possible patch: Wrong memory allocation when using BerkeleyDB in concurrent processes

    When using the BerkeleyDB shared environment in parallel processes, the processes get "out of memory" error, even when there is plenty of free memory available. This results in possible database corruption.
    A typical use case where this bug manifests is when BerkeleyDB is used by rpm, which is installing an rpm package into a custom location, or calls another rpm instance during the installation process.
    The bug seems to originate in the env/env_region.c file: (version of the file from BDB 4.7.25, although the culprit code is the same in newer versions too):
    330     /*
    331      * Allocate room for REGION structures plus overhead.
    332      *
    333      * XXX
    334      * Overhead is so high because encryption passwds, replication vote
    335      * arrays and the thread control block table are all stored in the
    336      * base environment region.  This is a bug, at the least replication
    337      * should have its own region.
    338      *
    339      * Allocate space for thread info blocks.  Max is only advisory,
    340      * so we allocate 25% more.
    341      */
    342     memset(&tregion, 0, sizeof(tregion));
    343     nregions = __memp_max_regions(env) + 10;
    344     size = nregions * sizeof(REGION);
    345     size += dbenv->passwd_len;
    346     size += (dbenv->thr_max + dbenv->thr_max / 4) *
    347         __env_alloc_size(sizeof(DB_THREAD_INFO));
    348     size += env->thr_nbucket * __env_alloc_size(sizeof(DB_HASHTAB));
    349     size += 16 * 1024;
    350     tregion.size = size;
    Usage from the rpm's perspective:
    Line 346 calculates how much memory we need for DB_THREAD_INFO structures. We allocate a DB_THREAD_INFO structure for every process calling the db4 library. We don't deallocate these structures, but when the number of processes is greater than dbenv->thr_max we try to reuse a structure from a process that is already dead (or no longer uses db4). However, the DB_THREAD_INFOs live in hash buckets, and we can reuse a DB_THREAD_INFO only if it is in the same hash bucket as the new DB_THREAD_INFO. So line 346 should contain:
    346     size += env->thr_nbucket * (dbenv->thr_max + dbenv->thr_max / 4) *
    347         __env_alloc_size(sizeof(DB_THREAD_INFO));
    Why didn't we encounter this problem earlier? There are some magic reserves, as you can see on line 349, and some additional space is created by aligning to blocks. But if we have two processes running at the same time, and these processes end up in the same hash bucket, and we repeat this process many times to fill all hash buckets with two DB_THREAD_INFOs, then we have 2 * env->thr_nbucket (37) = 74 DB_THREAD_INFOs, which is much more than dbenv->thr_max (8) + dbenv->thr_max (8) / 4 = 10; add the allocation from dbc_put and we are out of memory.
    And how do we create two processes that end up in the same hash bucket? We can start one process (rpm -i) and then, in a scriptlet, start many processes (rpm -q ...) in a loop; one of them will land in the same hash bucket as the first process (rpm -i).
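    To make the sizing argument concrete, here is a small Java rendering of the arithmetic (thrMax and nBuckets mirror dbenv->thr_max and env->thr_nbucket from the report; the class itself is ours for illustration):

    public class ThreadInfoSizing {
        public static void main(String[] args) {
            int thrMax = 8;    // dbenv->thr_max in the report
            int nBuckets = 37; // env->thr_nbucket in the report

            // Original line 346: thr_max plus 25% slack, counted once globally.
            int original = thrMax + thrMax / 4;
            // Proposed fix: the same allowance for *every* hash bucket,
            // because a slot can only be reused within its own bucket.
            int fixed = nBuckets * (thrMax + thrMax / 4);

            // Worst case from the report: every bucket ends up holding
            // two DB_THREAD_INFOs that cannot be reused from elsewhere.
            int worstCase = 2 * nBuckets;

            System.out.println("original capacity: " + original);  // 10
            System.out.println("worst case needed: " + worstCase); // 74
            System.out.println("fixed capacity:    " + fixed);     // 370
        }
    }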
    I would like to know your opinion on this issue, and if the proposed fix would be acceptable.
    Thanks in advance for answers.

    The attached patch for db-4.7 makes two changes:
      it allows enough for each bucket to have the configured number of threads, and
      it initializes env->thr_nbuckets, which previously had not been initialized.
    Please let us know how it works for you.
    Regards,
    Charles

  • Memory leak when "Use JSSE SSL" is enabled

    I'm investigating a memory leak that occurs in WebLogic 11g (10.3.3 and 10.3.5) when "Use JSSE SSL" is checked using the Sun/Oracle JVM and JCE/JSSE providers. The leak is reproducible just by hitting the WebLogic Admin Console login page repeatedly using SSL. Running the app server under JProfiler shows byte arrays (among other objects) leaking from the socket handling code. I thought it might be a general problem with the default JSSE provider, but Tomcat does not exhibit the problem.
    Anyone else seeing this?

    Yes, we are seeing it as well on Oracle 11g while running a GWT 2.1.1 application using GWT RPC. Our current fix is to remove the JSSE SSL configuration check, however this might not be an option if you really need it for your application. Have you found anything else about it?

  • InDesign CS5 'out of memory' error when using preflight

    I have been regularly getting an 'out of memory' error when I choose to use my bespoke preflight profile.
    I have 4 GB of RAM and run InDesign CS5 on OS 10.6.8.
    Does anyone know a workaround?
    As soon as I select from the basic default profile, I get the beach ball from hell for 10 minutes, then it kindly lets me know that I am out of memory, sends a crash report to Adobe and then asks if I want to relaunch. I'm stuck in a vicious circle. I must have sent my 4th crash report by now and have had no feedback from anyone at Adobe.

    I have replaced my preferences, but still the problem persists. I have tried switching my view from typical display to fast display before I selected a profile; I thought this might give me the extra memory I needed to avoid the inevitable crash. I learnt that 2 files were indeed RGB instead of CMYK before it crashed again. So I switched them to CMYK and tried again, selected my bespoke profile, but yet again it crashed. I think the problem lies with the file, not InDesign, as I have tried the same profile on a different file and the program doesn't crash and runs as it should. So if in future I need to use said crashing file again, firstly I will need to try Peter's isolate fix method. Otherwise I'll never be able to progress to a successful PDF.

  • Memory leak when using Threads?

    I did an experiment and noticed a memory leak when I was using threads. Here's what I did.
    ======================================
    // needs: import java.util.Scanner;
    while (true) {
        Scanner sc = new Scanner(System.in);
        String answer;
        System.out.print("Press Enter to continue...");
        answer = sc.next(); // waits for input before spawning the next thread
        new TestThread();
    }
    ========================================
    And TestThread is the following:
    ========================================
    import java.io.*;
    import java.net.*;
    public class TestThread extends Thread {
        public TestThread() { start(); }
        public void run() { } // does nothing; the thread exits immediately
    }
    =====================================
    When I open Windows Task Manager, every time a new thread starts and stops, java.exe increases its Mem Usage. It's a memory leak!? What is going on in this situation? If I start a thread and then it stops, and then I start a new thread, why does it use more memory?
    -Brian

    Move "Scanner sc = new Scanner(System.in);" out of the loop:
        Scanner sc = new Scanner(System.in);
        while (true) {

    That won't matter in any meaningful way.
    Every loop iteration creates a new Scanner, but it also makes a Scanner eligible for GC, so the net memory requirement of the program is constant.
    Now, of course, it's possible that the VM won't bother GCing until 64 MB worth of Scanners have been created, but we don't care about that. If we're allowing the GC 64 MB, then we don't care how it uses it or when it cleans it up.
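    To check that finished threads really do become garbage, a small probe along these lines can help (a sketch; note that Task Manager reports the heap the JVM has reserved from the OS, not the live objects inside it, which is why its number rarely goes down):

    public class ThreadGcProbe {
        public static void main(String[] args) throws InterruptedException {
            Runtime rt = Runtime.getRuntime();
            for (int i = 0; i < 10000; i++) {
                Thread t = new Thread(new Runnable() {
                    public void run() { } // does nothing, then dies
                });
                t.start();
                t.join(); // wait so threads never pile up
            }
            System.gc(); // request a collection (the JVM may ignore it)
            long usedKb = (rt.totalMemory() - rt.freeMemory()) / 1024;
            System.out.println("used heap after GC: " + usedKb + " KB");
        }
    }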

  • Are there any memory restrictions when using Invoke-Command?

    Hi, I'm using the Invoke-PSCommandSample runbook to run a batch file inside a VM.
    The batch file inside the VM runs a Java program.
    The batch file works fine, when I run it manually in the VM.
    However, when I use the Invoke-PSCommandSample runbook to run the batch file, I get the following error:
    Error occurred during initialization of VM
    Could not reserve enough space for object heap
    errorlevel=1
    Press any key to continue . . .
    Does anybody know if there are any memory restrictions when invoking commands inside a VM via runbook?
    Thanks in advance.

    Hi Joe, I'll give some more background information. I'm doing load testing with JMeter in Azure and I want to automate the task. This is my runbook:
    workflow Invoke-JMeter {
        $Cmd = "& 'C:\Program Files (x86)\apache-jmeter-2.11\bin\jmeter-runbook.bat'"
        $Cred = Get-AutomationPSCredential -Name "[email protected]"
        Invoke-PSCommandSample -AzureSubscriptionName "mysubscription" -ServiceName "myservice" -VMName "mymachine" -VMCredentialName "myuser" -PSCommand $Cmd -AzureOrgIdCredential $Cred
    }
    This is my batch file inside the VM:
    set JAVA_HOME=C:\Program Files\Java\jdk1.7.0_71
    set PATH=%JAVA_HOME%\bin;%PATH%
    %COMSPEC% /c "%~dp0jmeter.bat" -n -t build-web-test-plan.jmx -l build-web-test-plan.jtl -j build-web-test-plan.log
    Initially I tried to run JMeter with "-Xms2048m -Xmx2048m". As that didn't work, I lowered the memory allocation, but even with "-Xms128m -Xmx128m" it does not work. I have tried with local PowerShell ISE as you suggested, but I'm running into certification issues; I'm currently having a look at this. Here's my local script:
    Add-AzureAccount
    Select-AzureSubscription -SubscriptionName "mysubscription"
    $Uri = Get-AzureWinRMUri -ServiceName "myservice" -Name "mymachine"
    $Credential = Get-Credential
    $Cmd = "& 'C:\Program Files (x86)\apache-jmeter-2.11\bin\jmeter-runbook.bat'"
    Invoke-command -ConnectionUri $Uri -credential $Credential -ScriptBlock {
    Invoke-Expression $Args[0]
    } -Args $Cmd
    With this, I get the following error (translated from German):
    [myservice.cloudapp.net] Connecting to the remote server "myservice.cloudapp.net" failed with the following error: the server certificate on the destination computer (myservice.cloudapp.net:666) contains the following errors:
    The SSL certificate is signed by an unknown certificate authority. For more information, see the about_Remote_Troubleshooting Help topic.
        + CategoryInfo          : OpenError: (myservice.cloudapp.net:String) [], PSRemotingTransportException
        + FullyQualifiedErrorId : 12175,PSSessionStateBroken
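    Since the failure is the JVM being unable to reserve its heap inside the remote session, one quick diagnostic is to run a tiny probe through the same batch file and compare the result against the -Xmx you requested. This is a hypothetical helper (class name and output format are ours):

    // Sketch: print the heap the JVM was actually able to reserve.
    public class HeapProbe {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            System.out.println("max heap (MB):   " + rt.maxMemory() / (1024 * 1024));
            System.out.println("total heap (MB): " + rt.totalMemory() / (1024 * 1024));
        }
    }

    If even this small program fails with "Could not reserve enough space for object heap", the remote shell's memory quota (for example WinRM's MaxMemoryPerShellMB setting) is the likely suspect rather than JMeter itself.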

  • Total system memory *decreases* when using NVIDIA 9600M GPU

    I have 4 GB of RAM in my MBP. I understand that when using the integrated video processor (NVIDIA 9400M), 250 MB of system RAM are set aside for video memory. So when my total system RAM showed 3.8 GB, that made sense. However, when I switched to the 9600M GPU, which has its own 256 MB of dedicated VRAM, total system memory actually decreased to 3729 MB!
    How can that be, and why doesn't the total amount of installed RAM show as 4096 MB?

    Hi Odysseus,
    I am not completely sure here, but it sounds like you are seeing an issue related to 32-bit addressing. I am not well versed in Mac architecture, but I am when it comes to PC architecture. A system that uses a 32-bit kernel (like Mac OS X and 32-bit versions of Windows) can natively access up to 4 GB of RAM (2^32 bytes). Here is where I get sketchy: I know that 32-bit Windows (including Vista) uses what is termed a flat memory model. In order for the system to access the video RAM, the kernel maps the video memory over the upper addresses of the 4 GB memory space. It does it backwards, so for a 256 MB video card it takes the addresses from 3840 to 4096 MB. This prevents those addresses from being used by the system, so you effectively lose that RAM.

    There are two traditional methods of fixing this. The first is to use more than 32 bits of address space and allow the video RAM to be mapped above the 4 GB barrier; this is what the Intel Santa Rosa chipset (used in all older MBPs that support 4 GB of RAM) does. The second fix is to use a 64-bit kernel, which puts the address space up to the 2^64 range (around 17.2 billion GB of RAM).

    As previously mentioned, I am not well versed in Mac architecture, but I wonder if the behavior you are seeing is some combination of the 9400M taking its system memory and the 9600 GT taking up addresses due to an addressing limitation? We won't really know for sure until Snow Leopard comes out, as it finally makes the OS X kernel 64-bit, which should ease any addressing limitations. Good luck!
    Rich S.
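    As a back-of-the-envelope check of that mapping argument (our own toy arithmetic, not Apple's actual memory map):

    public class AddressSpace {
        public static void main(String[] args) {
            long space = 1L << 32;          // 4 GiB of 32-bit address space
            long vram = 256L * 1024 * 1024; // 256 MiB of VRAM mapped below 4 GiB
            long visible = space - vram;
            // Prints 3840: close to the ~3729 MB the poster observes once
            // other reserved regions are subtracted as well.
            System.out.println("addressable RAM left (MB): " + visible / (1024 * 1024));
        }
    }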

  • Use of lightbox pop-ups to collect business email addresses?

    With more pressure than ever before for us to acquire explicit email opt-in permission (well, for those marketing to Canadians at least!), I'm wondering to what extent B2B marketers are deploying a technique that's proved very successful for B2C marketers: modal window website pop-ups for email acquisition. Do you do this? Does it work well for you? I'd love to hear your experiences and see any examples from the B2B marketplace.

    Great question...I'd be interested, too.  Wondering how/if this could help (I agree with the Canada comment, too!)

  • HT3576 How can I get rid of "Connect to iTunes to use push notifications" pop-ups? I just had to replace my iPhone 5, and when I did a restore from backup most of my apps will not let me in because I cannot get past the "Connect to iTunes..." message.

    My iPhone 5 was destroyed, so I replaced it with a new iPhone 5 yesterday. I did a restore from a backup that I had performed a week ago. When I tried to use the phone, several of my apps would not let me in because they continually gave me a popup message: "Connect to iTunes to use push notifications". I click OK and the popup comes back. I have shut my phone off several times. I have completely closed the apps several times. I have even deleted the apps from my phone and reinstalled them. All to no avail. Please help - does anyone know how to fix this?

    Yes - I connected my phone to my computer / iTunes and went into the apps section, but from there I have no idea how to manage the push notifications. I even tried going into iTunes installed on my phone. I still cannot find any place to manage these popups. I have also gone into Settings - Notifications - and tried turning all notifications for these apps off, but that didn't work either. Any guidance is MUCH appreciated - I'm not sure where to go from here.

  • Massive memory usage when using latest version of Final Cut, 10 minutes after restart

    Hey, I am really struggling with Final Cut Pro X and I suspect there is some software issue.
    I am editing a one-minute video clip; the video files are 1080 h264. I restart my computer and open only Final Cut. After 10 minutes of editing I can barely navigate my computer anymore, it is going so slow, until eventually I have to force-reboot it. I can scarcely export anything out of Final Cut; when I do, my computer feels like it's about to die, and if it does export I seem to get broken frames and/or missing media.
    I reinstalled my Mac only about 2 months ago.
    System specs.
    MacBook Pro
    13-inch, Late 2011
    Processor  2.4 GHz Intel Core i5
    Memory  4 GB 1333 MHz DDR3
    Graphics  Intel HD Graphics 3000 384 MB
    On my previous MacBook Pro (early 2010, 4 GB RAM), running Snow Leopard and previous versions of Final Cut, I was able to edit three 1-hour documentaries with very little problem, but now I can't even manage a one-minute clip.
    Not sure what to do. Here is a screen shot of my memory usage. I'm getting massive swap memory and lots of inactive memory. It seems very strange, especially after only 10 minutes running only Final Cut.

    FCP is only using 409 MB of memory out of your 4 GB. But whatever else you have open is using up virtually all the physical RAM you have. This is forcing the memory manager to page memory out to the disk-based VM file. That's a major reason for slowdowns.
    You have 4 GB of RAM. How many applications are you running concurrently? If you look at Page outs, you will note the positive number in parentheses. That means your computer is hitting the hard drive a lot because the memory demands are too high. If possible, cut down on how many concurrent applications you run, or put more RAM into the computer if that's possible.
    About OS X Memory Management and Usage
    Using Activity Monitor to read System Memory & determine how much RAM is used
    Memory Management in Mac OS X
    Performance Guidelines- Memory Management in Mac OS X
    A detailed look at memory usage in OS X
    Understanding top output in the Terminal
    The amount of available RAM for applications is the sum of Free RAM and Inactive RAM. This will change as applications are opened and closed or change from active to inactive status. The Swap figure represents an estimate of the total amount of swap space required for VM if used, but does not necessarily indicate the actual size of the existing swap file.

    If you are really in need of more RAM, that would be indicated by how frequently the system uses VM. If you open the Terminal and run the top command at the prompt, you will find information reported on Pageins () and Pageouts (). Pageouts () is the important figure. If the value in the parentheses is 0 (zero), then OS X is not making instantaneous use of VM, which means you have adequate physical RAM for the system with the applications you have loaded. If the figure in parentheses is running positive and your hard drive is constantly being used (thrashing), then you need more physical RAM.
    Adding RAM only makes it possible to run more programs concurrently.  It doesn't speed up the computer nor make games run faster.  What it can do is prevent the system from having to use disk-based VM when it runs out of RAM because you are trying to run too many applications concurrently or using applications that are extremely RAM dependent.  It will improve the performance of applications that run mostly in RAM or when loading programs.

  • Memory leakage when using Ini-file VIs

    I'm using the Configuration File VIs to read and write data to different .ini files. The files contain both standard keys and clusters written as a segment using the OpenG toolkit. Instead of opening the files and keeping them in the memory of the Config VIs, I'm just using them to read and write, decode and encode... the references are all closed using Close Config Data.vi. The problem is that even though I immediately close the config data, the application keeps grabbing more and more memory: every time a configuration file is opened, read or written to, and then closed, everything from 4K to 50K of additional memory has been allocated by the application (this is a stripped-down application that only deals with the config files, so there are no other sources for the memory leak).
    Has anyone else experienced this? How can you repeatedly open and close config files like this without it continuously allocating more memory?
    Attached is a copy of the VIs; the directory structure must be kept intact if the ini file is to be read correctly.
    I've been staring so hard at this the whole day that I might just be overlooking something obvious...
    In the full application the VI init and write operations are only done when the user reconfigures the system, which may be a couple of times per month... so the memory leak would not cause a problem right away, but it would not be healthy to leave it there...
    MTO
    Attachments:
    Memory_Leak_Demo.zip 1391 KB

    Could you post a 6.1 version?
    LV7 is still about two weeks away for me.
    Does the problem show up in 6.1?
    I ran across an error while writing to an FP output that was not configured, which would cause a "drop of memory" to leak every time the VI performed the write. The leak did not show up in the profiler, but Windows would show the memory footprint growing continually as long as the writes continued. The workaround was "don't do that!".
    I bring this up because I found and reported this just prior to the LV7 release, and the feature may still be present in LV7. I also believe that Jean-Pierre used a "write and check" method to deal with the unknown data types of complex data structures.
    If you just read, does it leak?
    If you just use simple data types, does it leak?
    Is the ini file growing?
    I really appreciate the effort you have been putting into the Dev-Exchange, Mads! I wish I could do more to help.
    Keep us posted.
    Ben

  • I am experiencing a memory leak when using ajax calls, only in Firefox 4.

    I have been developing an intranet application that uses jquery's ajax function to refresh certain parts of a page. I have tested this application in Firefox 4, IE9 and Chrome 11 on a Windows 7 64-bit machine.
    Only Firefox is seeing an increase in memory usage, leading me to believe there may be an issue with how Firefox is handling these types of calls.
    I have been investigating this issue for two solid days now and have come across solutions to similar issues for slightly different configurations, but none have solved the problem I am experiencing.
    The software and versions I have been using are:
    Windows 7 64-bit
    Firefox 4.0.1
    jQuery 1.6.1
    Any assistance or even a clarification that the issue exists would be helpful.

    The behaviour you describe is strange. Something to try: open a reference to the file one time before entering the loop, then inside the loop use that reference to do all the writing. Finally, close the reference when the loop completes. If this doesn't work, post your code in 6.0 format and I'll be glad to look at it. Mike...
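    In Java terms, the open-once pattern described above looks like this (our sketch, not LabVIEW code):

    import java.io.FileWriter;
    import java.io.IOException;

    // Open the file reference once, write inside the loop, close exactly
    // once at the end, instead of opening and closing per iteration.
    public class OpenOnceDemo {
        public static void main(String[] args) throws IOException {
            try (FileWriter w = new FileWriter("log.txt")) { // opened once
                for (int i = 0; i < 1000; i++) {
                    w.write("entry " + i + "\n"); // reuse the same reference
                }
            } // closed here when the loop completes
        }
    }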

Maybe you are looking for

  • Auto print invoice with delivery notes

    I am trying to find a way when we print invoices to have it print the matching delivery note as well.  Currently as you are aware you have to do the process in a separate manner which is not satisfactory for us. Thank you in advance, Chad

  • Change the status profile assigned to the line item from PROFA TO PROFB

    Hi Experts, The issue we are having relates more to the fact that the code we have written is changing the item category, however the status profile has already been retrieved from configuration based on the original item category and therefore the s

  • Mac os x cd has been in processing for 4 days.

    Is this normal for it to be in the processing stage for so long? Please reply quick.

  • Difference between FI & CO-PA???

    Hi Experts, We confirmed an production order (related to sales order production). We confirmed zero activity and afterwards zero quantity has been delivered (goods receipt created with zero amount) to stock. By one activity I see following ... Plan q

  • About the file size of Default.png

    Hallo forum members, Is there a 2 MB restriction on [email protected]  as there is on normal JPEG pictures? When I save the [email protected] for a widget the filesize goes up to more than 2 MB, as I save the png with 24 Bit. Using 8 bit depth to sav