Question regarding Memory Consumption in Collections

I read this on some website:
Memory for collections is stored in the program global area (PGA), not the system global area (SGA). SGA memory is shared by all sessions connected to Oracle Database, but PGA memory is allocated for each session. Thus, if a program requires 5MB of memory to populate a collection and there are 100 simultaneous connections, that program causes the consumption of 500MB of PGA memory, in addition to the memory allocated to the SGA.
My question is:
If I use collections in my procedures and they consume 5 MB of space, is the memory still allocated to the session even after the procedure completes its execution?
I mean, is the memory released for other sessions after procedure completion, or only at session termination?

user9276238 wrote:
If I use collections in my procedures and they consume 5 MB of space, is the memory still allocated to the session even after the procedure completes its execution?
Yes. How does the session know that your very next instruction will not run that very same procedure again (with different parameters)? It will again need 5 MB of memory, and if it had released that memory, it would need to reallocate it. If all processes worked like this, immediately releasing memory and then grabbing it again, that could cause contention and other issues.
Memory management is more complex than this. For example, let's say your session required a few KB of space. So the process's memory looks something like the following:
xxxx
Next your session needs that 5MB for the (enormous) collection:
xxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
And during the processing, it needs a few KB more:
xxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyzz
Okay, so can it now release the 5 MB of memory (indicated by the y's) back to the kernel? It cannot, as there is trailing memory allocated and in use (indicated by the z's). It would need to reorganise that memory in order to release the 5 MB worth of y's.
So it is not as straightforward as one would like to think. The basic rule for PL/SQL is to tread softly with memory. Why? Because this is server code. Memory used in PL/SQL is allocated as private memory to that process. Unlike shared memory, no one else can share in its use. Thus this memory must be considered expensive.
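To make this concrete, here is a minimal PL/SQL sketch (the table my_big_table is made up for illustration) of explicitly emptying a collection and asking Oracle to return unused PGA. Even then, as described above, the memory manager may be unable to shrink the PGA if later allocations trail the freed block:

declare
  type t_rows is table of my_big_table%rowtype;  -- my_big_table is hypothetical
  l_rows t_rows;
begin
  select * bulk collect into l_rows from my_big_table;  -- PGA grows with the collection
  -- ... process l_rows ...
  l_rows.delete;                          -- empty the collection
  dbms_session.free_unused_user_memory;   -- ask Oracle to return unused PGA to the OS
end;
/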
There's also the issue of scalability. If your PL/SQL code needs 5 MB of private memory from the server, what happens when 100 client sessions all run your code? How well does your code scale in a multi-process and multi-server environment? One common way to keep per-session PGA bounded is to fetch in limited batches instead of loading everything at once, as sketched below.
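A minimal sketch of that bounded-fetch pattern (the cursor and table are again hypothetical); per-session memory is capped by the LIMIT regardless of how large the table grows:

declare
  cursor c is select * from my_big_table;
  type t_rows is table of c%rowtype;
  l_rows t_rows;
begin
  open c;
  loop
    fetch c bulk collect into l_rows limit 100;  -- never more than 100 rows in PGA at a time
    exit when l_rows.count = 0;
    -- ... process the batch ...
  end loop;
  close c;
end;
/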
So what memory should you ideally use in PL/SQL then? Shared memory: the SGA. This means it is better for scalability and server resources to store large data structures for PL/SQL in GTTs (global temporary tables, where the table definition is shared but each session sees only its own rows).
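A hedged sketch of the GTT approach (the names are illustrative); the rows live in the session's temporary segment rather than in private process memory:

create global temporary table big_work_set (
  id   number,
  data varchar2(4000)
) on commit preserve rows;  -- rows persist for the session, not just the transaction

insert into big_work_set
  select id, data from my_big_table;  -- each session sees only its own rows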
I mean, is the memory released for other sessions after procedure completion, or only at session termination?
Session termination is a "sure thing" in terms of memory being released back to the kernel as free memory, assuming this is a dedicated server session: the actual server process will terminate and its PGA will be released. A shared server process services multiple sessions during its lifetime, and only when the shared server process terminates will its PGA be released (though PGAs can be shrunk as and when their memory managers decide it is feasible).

Similar Messages

  • Question regarding Memory in Xperia C and updating Xperia C to 4.3

    Dear Sir,
    When shall we get the update to version 4.3? Further, please tell me what the difference is between device memory and internal storage. Can internal storage be used in place of device memory? The phone runs slow due to low device memory.
    In my mobile the following is the status:
    1. Total device memory: 0.98 GB; used device memory: 503 MB; remaining device memory: 282 MB
    2. Total RAM: 973 MB; available RAM: 314 MB
    3. Internal storage: 1.21 GB; available internal storage: 1.2 GB
    Regards
    Pummy

    http://talk.sonymobile.com/t5/FAQ/Memory-in-Android-devices/m-p/348064#U348064
    "I'd rather be hated for who I am, than loved for who I am not." Kurt Cobain (1967-1994)

  • Question regarding memory on Linux platforms

    Customer recently upgraded from OCS 9i to 10g. The mail store database for OCS is a RAC 10.1.0.5 two node database. I have questions about which method is best to use to have a SGA bigger than 1.7GB on Linux x86 servers.
    On the new OCS 10g mail store database, HW (high water mark) enqueues are found to impact database performance while the mail store receives very large quantities of email in a short time. Oracle support and Enterprise Manager's ADDM have recommended increasing the size of the SGA to help reduce HW enqueue waits. The max SGA size is currently 1648 MB.
    Servers are Red Hat 4 update 5. Each server in the two-node RAC has 16 GB of RAM, with 4 processors (8 cores on each server). The kernel is the 2.6 SMP kernel: Linux prcodb01.its.calpoly.edu 2.6.9-55.0.9.ELsmp #1 SMP Tue Sep 25 02:17:24 EDT 2007 i686 i686 i386 GNU/Linux
    Here are the customer's questions:
    Oracle states that 1.7 GB is the current limit with my system's current configuration. When the Oracle documents reference 1.7 GB, can I safely assume that 1.7 GB = 1740 MB? Would I be safe using a max SGA size of 1739 MB?
    I've read Metalink notes 260152.1 and 329378.1. Looking at the options listed in 260152.1, two options are given for Red Hat 4.0, "configuration 3" and "configuration 4", which require use of the hugemem kernel. Besides the difference in the available SGA sizes between those two options, are there any drawbacks to choosing one option over the other? Is there any drawback in using the hugemem kernel over the plain smp kernel?

    Hi,
    Regarding the question "Is there any drawback in using the hugemem kernel over the plain smp kernel?", please refer to Note 264236.1.
    Regards
    Jason

  • A question regarding memory clock on GTX 580 Twin Frozr II OC

    Hi, I just installed the video card and Afterburner shows the memory clock to be 2048 MHz. The specifications list the memory clock at 4276 MHz. Why is the card underclocked according to Afterburner? Also, the memory clock slider only goes up to 2665 MHz. What's going on?

    You have to double those numbers to get the effective clock.
    i.e. 2665 x 2 = 5330 if you were to try to use that, which I would advise against.

  • I have a question regarding memory...

    Hello fellow Apple Users,
    I am currently experiencing a few technical glitches which I believe may be related to an error I am having with my computer's memory. Recently I have pushed my memory near its maximum. To solve my problem I not only deleted files and movies not being used, but I also moved my iPhoto library to an external drive and then deleted it from my computer. (In case you are wondering, I have, of course, restarted my computer and emptied my trash.) My memory still seems to be full, but I am not sure why. Twice this last week my screen has gone completely black, showing nothing but my mouse and giving me a panic attack. The only temporary fix has been to shut down my computer for a short time before turning it back on.
    Below are two screen shots from my activity monitor:
    I like to think of myself as competent, but by no means am I a computer expert. If I understand correctly, it is the disk usage which more accurately reflects whether my storage is still full. As you can see, both screens indicate that I have missed something.
    The next step I took to remedy my problem was running software (Whatsize) to scan my files and show their size. This does include hidden files. Below is a screenshot of what I found.
    Bottom line: like many others out there, I am simply at a loss over what is probably (hopefully) a very simple problem. If anyone has any ideas why my memory still appears full, or if anyone feels I have incorrectly self-diagnosed the problem, please share.
    Thank you in advance for your time and assistance!! ^~^

    Well this is embarrassing, in addition to my outlandish grammatical errors I did not properly upload my screenshots. Nevertheless here they are in the same respective order.

  • Question regarding DocumentDB RU consumption when inserting documents & write performance

    Hi guys,
    I do have some questions regarding the DocumentDB Public Preview capacity and performance quotas:
    My use case is the following:
    I need to store about 200.000.000 documents per day with a maximum of about 5000 inserts per second. Each document has a size of about 200 bytes.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1000 per second with a batch insert using a stored procedure. This would result in the need of at least 5 CUs just to handle the inserts.
    Since one CU consists of 2000 RUs I would expect the RU usage to be about 4 RUs per single document insert or 100 RUs for a single SP execution with 50 documents.
    When I look at the actual RU consumption I get values I don't really understand:
    Batch insert of 50 documents: about 770 RUs
    Single insert: about 17 RUs
    Example document:
    {"id":"5ac00fa102634297ac7ae897207980ce","Type":0,"h":"13F40E809EF7E64A8B7A164E67657C1940464723","aid":4655,"pid":203506,"sf":202641580,"sfx":5662192,"t":"2014-10-22T02:10:34+02:00","qg":3}
    The consistency level is set to “Session”.
    I am using the SP from the example C# project for batch inserts and the following code snippet for single inserts:
    await client.CreateDocumentAsync(documentCollection.DocumentsLink, record);
    Is there any flaw in my assumptions (ok…obviously) regarding the throughput calculation, or could you give me some advice on how to achieve the throughput stated in the documentation?
    With the current performance I would need to buy at least 40 CUs (at about 17 RUs per single insert, 5000 inserts per second needs roughly 85000 RUs, i.e. 42-43 CUs of 2000 RUs each), which wouldn't be an option at all.
    I have another question regarding document retention:
    Since I would need to store a lot of data per day, I would also need to delete as much data per day as I insert:
    The data is valid for at least 7 days (it actually should be 30 days, depending on my options with DocumentDB).
    I guess there is nothing like a retention policy for documents (this document is valid for X days and will automatically be deleted after that period)?
    Since I guess deleting data on a single-document basis is no option at all, I would like to create a document collection per day and delete the collection after a specified retention period.
    Those historic collections would never change but would only receive queries. The only problem I see with creating collections per day is the missing throughput:
    As I understand it, the throughput is split equally according to the number of available collections, which would result in "missing" throughput on the actual hot collection (hot meaning the only collection I would actually insert documents into).
    Is there any (better) way to handle this use case than buying enough CUs so that the actual hot collection gets the needed throughput?
    Example: 
    1 CU -> 2000 RUs
    7 collections -> 2000 / 7 = 286 RUs per collection (per CU)
    Needed throughput for hot collection (values from documentation): 20.000
    => 70 CUs (20.000 / 286)
    vs. 10 CUs when using one collection and batch inserts or 20 CUs when using one collection and single inserts.
    I know that DocumentDB is currently in preview and that it is not possible to handle this use case as is because of the current limit of 10 GB per collection. I am just trying to do a POC so I can switch to DocumentDB when it is publicly available.
    Could you give me any advice on whether this kind of use case can, or should, be handled with DocumentDB? I currently use Table Storage for this case (currently with a maximum of about 2500 inserts per second) but would like to switch to DocumentDB, since with Table Storage I had to optimize for writes per second and get horrible query execution times because of full table scans.
    Once again my desired setup:
    200.000.000 inserts per day / maximum of 5000 writes per second
    Collection 1.2 -> Hot Collection: All writes (max 5000 p/s) will go to this collection. Will also be queried.
    Collection 2.2 -> Historic data, will only be queried; no inserts
    Collection 3.2 -> Historic data, will only be queried; no inserts
    Collection 4.2 -> Historic data, will only be queried; no inserts
    Collection 5.2 -> Historic data, will only be queried; no inserts
    Collection 6.2 -> Historic data, will only be queried; no inserts
    Collection 7.2 -> Historic data, will only be queried; no inserts
    Collection 1.1 -> Old, so delete whole collection
    As a matter of fact the perfect setup would be to have only one (huge) collection with automatic document retention… but I guess this won't be an option at all?
    I hope you understand my problem; please give me some advice on whether this is at all possible, or will be possible in the future, with DocumentDB.
    Best regards and thanks for your help

    Hi Aravind,
    first of all thanks for your reply regarding my questions.
    I sent you a mail a few days ago but since I did not receive a response I am not sure it got through.
    My main question regarding the actual usage of RUs when inserting documents is still my main concern, since I cannot insert nearly as many documents per second and CU as expected.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1000 per second with a batch insert using a stored procedure (20 batches per second containing 50 documents each).
    As described in my post, the actual usage is multiple (actually 6-7) times higher than expected, even when running the C# examples provided at:
    https://code.msdn.microsoft.com/windowsazure/Azure-DocumentDB-NET-Code-6b3da8af/view/SourceCode
    I tried all the ideas Steve posted (manual indexing & lazy indexing mode) but was not able to reduce RU consumption to a point where 500 inserts per second was anywhere near possible.
    Here again my findings regarding RU consumption for batch inserts:
    Automatic indexing on: 777 RUs for 50 documents
    Automatic indexing off & mandatory path only: 655 RUs for 50 documents
    Automatic indexing off & IndexingMode Lazy & mandatory path only: 645 RUs for 50 documents
    Expected result: approximately 100 RUs (2000 RUs => 20x batch insert of 50 => 100 RUs per batch)
    Since DocumentDB is still in preview I understand that it is not yet capable of handling my use case regarding throughput, collection size, number of collections and possible CUs, and I am fine with that.
    If I am able to (at least nearly) reach the stated performance of 500 inserts per second per CU I am totally fine for now. If not, I have to move on and look for other options… which would also be "fine". ;-)
    Is there actually any working example code that manages to do 500 single inserts per second with one CU's 2000 RUs, or is this a purely theoretical value? Or is it just because this is a preview and the stated values are targets that are planned to work later?
    Regarding your feedback:
    ...another thing to consider is if you can amortize the request rate over the average of 200 M requests/day = 2000 requests/second, then you'll need to provision 16 capacity units instead of 40 capacity units. You can do this by catching "RequestRateTooLargeExceptions" and retrying after the server specified retry interval…
    Sadly this is not possible for me because I have to query the data in near real time for my use case… so queuing is not an option.
    We don't support a way to distribute throughput differently across hot and cold collections. We are evaluating a few solutions to enable this scenario, so please do propose it as a feature at http://feedback.azure.com/forums/263030-documentdb as this helps us prioritize feature work. Currently, the best way to achieve this is to create multiple collections for hot data, and shard across them, so that you get more proportionate throughput allocated to it.
    I guess I could circumvent this by clustering not into "hot" and "cold" collections but into "hot" and "cold" databases with one or multiple collections each (if 10 GB remains the limit per collection), if there were a way to (automatically?) scale the CUs via an API. Otherwise I would have to manually scale down the DBs holding historic data. I also added a feature request as proposed by you.
    Sorry for the long post, but I am planning the future architecture for one of our core systems and want to be sure I am on the right track.
    So if you were able to answer just one question, it would be this:
    How can I achieve the stated throughput of 500 single inserts per second with one CU's 2000 RUs in reality? ;-)
    Best regards and thanks again

  • Memory consumption of queries in workbooks

    We have an issue with the execution of a workbook which contains several queries. The queries require a great deal of memory, which finally leads to a short dump (TSV_TNEW_PAGE_ALLOC_FAILED). We found that during execution of the workbook the memory is not released after a query has been executed, and therefore at some point in time the dump occurs. However, if the queries are refreshed manually one after the other in the workbook, the memory is released and the workbook can be executed via this workaround.
    My question is whether anyone has an idea if it is possible to apply a setting somewhere so that the queries release their memory after execution when they are all refreshed together in the workbook?
    Thanks a lot in advance for any hint & Kind regards,
    Hans-Jörg

    Hi,
    Try this:
    You may be able to work around the problem by increasing the free memory available via the parameter em/initial_size_MB (contact your Basis team or refer to note 835474).
    Also look at the parameter ztta/roll_extension (refer to note 146289).
    Try increasing the parameter abap/heap_area_dia from transaction RZ11.
    Also check the following notes in detail:
    649327     Analysis of memory consumption
    425207     SAP memory management, current parameter ranges
    369726     TSV_TNEW_PAGE_ALLOC_FAILED
    185185     Application: Analysis of memory bottlenecks
    If the issue persist, please review SAP Note 779123 and query design.
    check this,
    http://scn.sap.com/thread/288222
    http://www.sapfans.com/forums/viewtopic.php?f=3&t=109557
    regards,
    anand.

  • J2EE Engine memory consumption (Usage)

    Dear experts,
    We have a J2EE Engine (a Java stack). When I run routine monitoring via the browser and look at the memory consumption, I am met with a chart that shows a sawtooth-like graph. Every hour from 19:00 to 02:00 the memory consumption rises by approx. 200 MB; after 7 hours, all of a sudden, the memory consumption drops down to the normal idle level and starts over again. I can confirm that at that time there are no users on the system.
    My question is: what is the J2EE engine doing, since there is no user activity? Is the J2EE engine running some system applications? Is it filling up the log files and then emptying (storing) them?
    I hope some of the experts can answer.
    I just want to understand what's going on on the system. If there is some documentation/white paper on how to interpret/read the J2EE monitor, I would be grateful if you dropped the information or a link here.
    Mike

    Hi Mike
    To understand what exactly is being executed in the Java engine, I'd suggest you perform a thread dump analysis as per:
    http://help.sap.com/saphelp_smehp1/helpdata/en/10/3ca29d9ace4b68ac324d217ba7833f/frameset.htm
    Generally 4-5 thread dumps are triggered at the interval of 20-25 seconds for better analysis.
    Here are some useful SAP notes related to thread dump analysis:
    710154 - How to create a thread dump for the J2EE Engine 6.40/7.0
    1020246 - Thread Dump Viewer for SAP Java Engine
    742395 - Analyzing High CPU usage by the J2EE Engine
    Kind regards,
    Ved

  • Question about memory expansion on Satellite A80-117

    I have a question regarding memory compatibility.
    (The A80-117 is a 1.6 GHz Centrino (Pentium M processor 730) with a 533 MHz FSB, 2 MB of second-level cache, and the Intel 915GM Express chipset.)
    At the moment I have one 512 MB module (333 MHz, CL 2.5) installed. I plan to upgrade to 1.5 or 2 GB. So what bothers me is: can I install 533 MHz DDR2 modules in this laptop (in that case the old 333 MHz one goes out)?
    At the official Toshiba site only 333 MHz DDR modules are listed, while the 915GM Express chipset allows faster ones.
    If 533 is not possible I guess I'll just buy an additional 1 GB of 333 MHz DDR.
    Thanks in advance!
    Bostjan

    Hello
    At first I want to say that it is always recommended to use compatible and tested RAM modules. That way you can be sure everything will work well and without any problems.
    According to the notebook specification you can upgrade the RAM to 2 GB max, and the compatible modules are: PC2700 512 MB (PA3312U-2M51) and PC2700 1024 MB (PA3313U-2M1G).
    You can use faster RAM modules, but in my opinion it is really not necessary. The notebook specification is a 333 MHz bus speed, and faster modules will not make the notebook run faster. It is better to buy 2 x PA3313U-2M1G; you will have no problems and the notebook will run much, much faster than before.

  • High Eden Java Memory Usage/Garbage Collection

    Hi,
    I am trying to make sure that my ColdFusion server is optimised to the max, and to find out what the normal limits are.
    Basically it looks like at times my servers can run slow, but it is possible that this is caused by a very old, bloated code base.
    JRun can sometimes have very high CPU usage, so I purchased FusionReactor to see what is going on under the hood.
    Here are my current Java settings (running v6u24):
    java.args=-server -Xmx4096m -Xms4096m -XX:MaxPermSize=256m -XX:PermSize=256m -Dsun.rmi.dgc.client.gcInterval=600000 -Dsun.rmi.dgc.server.gcInterval=600000 -Dsun.io.useCanonCaches=false -XX:+UseParallelGC -Xbatch ........
    With regard to memory, the only space that seems to be running a lot of garbage collection is the Eden space. It climbs to nearly 1.2GB in total just under every minute, at which time it looks like GC kicks in and the usage drops to about 100MB.
    Survivor memory grows to about 80-100MB over the space of 10 minutes but drops to 0 after the scheduled full GC runs. Old Gen memory fluctuates between 225MB and 350MB with small steps (~50MB) up or down when full GC runs every 10 minutes.
    I had the heap set to 2GB in total initially, giving about 600MB to the Eden space. When I looked at the graphs from FusionReactor I could see that there was (minor) garbage collection about 2-3 times a minute when the memory usage maxed out the entire 600MB, which seemed a high frequency to my untrained eye. I then upped the memory to 4GB in total (~1.2GB automatically given to the Eden space) to see the difference, and saw that GC happened 1-2 times per minute.
    Is it normal in ColdFusion that the Eden memory would grow so quickly and have garbage collection run so often? I.e. do these graphs look normal?
    Also should I somehow redistribute the memory available to give the Eden memory more since it seems to be where all the action is?
    Any other advice for performance improvements would be much appreciated.
    Note: These graphs are not from a period where jrun had high CPU.
    Here are the graphs:
    PS Eden Space Graph
    PS Survivor Space Graph
    PS Old Gen Graph
    PS Perm Gen Graph
    Heap Memory Graph
    Heap/Non Heap Memory Graph
    CPU Graph
    Request Average Execution Time Graph
    Request Activity Graph
    Code Cache Graph

    Hi,
    >Is it normal in Coldfusion that the Eden memory would grow so quickly and have garbage collection run so often?
    Yes normal to garbage collect Eden often. That is a minor garbage collection.
    >Also should I somehow redistribute the memory available to give the Eden memory more since it seems to be where all the action is?
    Sometimes it is good to set Eden (Eden and its two Survivor spaces combined make up the New, or Young, Generation part of the JVM heap) to a smaller size. I know what you're thinking: why make it less when I want to make it bigger? Give less a try (sometimes less = more, and bigger is not always better) and monitor the situation. I like to use the -Xmn switch; some sources say to use other methods. Perhaps you could try java.args=-server -Xmx4096m -Xms4096m -Xmn172m etc. I'd better mention: make a backup copy of jvm.config before applying changes. Having said that, now you know how you can set the size bigger if you want.
    I think the JVM is perhaps making some poor decisions in sizing the heap. With Eden growing to 1GB and then being evacuated, not many objects are surviving, and therefore few are being promoted to the Old Generation. This ultimately means objects will need to be loaded into Eden again later rather than being referenced in the Old Generation part of the heap. That adds up to poor performance.
    >Any other advice for performance improvements would be much appreciated.
    You are using the Parallel garbage collector. Perhaps you could explicitly set it to run multi-threaded, reducing the duration of the garbage collections, with jvm args ...-XX:+UseParallelGC -XX:ParallelGCThreads=N etc., where N = CPU cores (e.g. quad core = 4), as in the sketch below.
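    For example, a hypothetical jvm.config line combining these suggestions (the -Xmn value and thread count are illustrative only; adjust them to your core count and monitoring results):
    java.args=-server -Xmx4096m -Xms4096m -Xmn172m -XX:MaxPermSize=256m -XX:PermSize=256m -XX:+UseParallelGC -XX:ParallelGCThreads=4 ........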
    HTH, Carl.

  • Hi, I want to downgrade from OSX LEOPARD to OSX TIGER but I have a few questions regarding this. My iMac is originally from 2007 it came preloaded with tiger. I have original install tiger discs version 10.4.10. Is it safe to downgrade or not please help

    Hi, I want to downgrade from OS X Leopard to OS X Tiger, but I have a few questions regarding this. My iMac is originally from Sep 2007; it came preloaded with Tiger. I have the original (2) Tiger install discs, version 10.4.10. I want to know if it is safe and what the necessary steps are to do so. Also, by downgrading I'm wondering if a lot of apps nowadays still support Tiger; for example I have Photoshop versions 5 and 4, and these are very important to me. One last question: does anyone know of any reliable virus protection for Mac that doesn't slow down your computer? I have read that a lot of them do. If anyone can help me I would greatly appreciate it! Here are the specs for my iMac:
    Model Name: iMac
    Model Identifier: iMac7,1
    Processor Name: Intel Core 2 Duo
    Processor Speed: 2 GHz
    Number of Processors: 1
    Total Number of Cores: 2
    L2 Cache: 4 MB
    Memory: 2 GB
    Bus Speed: 800 MHz

    Most of the time a perception of general slow performance is the result of installing third party junk alleged to speed up, "clean" or "optimize" your Mac, or to look for viruses that don't exist. Ideally you would know what you installed so you can uninstall it, but if you don't know or aren't sure there are techniques such as Safe Mode and creating a temporary user account to confirm that suspicion.
    If you open Activity Monitor it may show a process, or processes, that occupy a lot of your system's time.
    Slowness confined solely to web browser activity is often the result of an inexorable progress toward websites that demand ever more processor-intensive tasks. If your slow performance is strictly limited to web browsing, you might try disabling Flash by either uninstalling it, or use utilities such as ClickToFlash that allow you to control what Flash content gets loaded. Flash in itself is not inherently evil, but there is nothing to stop websites or the advertisers who pay for them from writing horrible Flash code that can do everything from hogging 100% of your CPU's time to causing random crashes. You can watch Activity Monitor as in the above to correlate these troublesome web pages with performance degradation.
    You are correct; if your computer shipped with Tiger you may certainly revert to it. I forgot that Tiger was shipping on new Macs as recently as five years ago. To downgrade it would be necessary to completely erase your hard disk and boot with the Tiger installation DVD, followed by installing it anew. Such drastic measures are not necessary and you are unlikely to be satisfied with the results anyway.
    Assuming your system is free of third party parasitic junk attached to OS X in an ill-conceived attempt to improve upon it, that your hard disk drive is sound and the boot volume has enough free space to work with, by far the best performance-enhancing improvement would be to add more memory. Buy as much as your computer can use and that you can afford. 2 GB is not that much any more.
    Read the following for some recommended troubleshooting techniques from Apple:
    General purpose Mac troubleshooting guide: Isolating issues in Mac OS X
    Creating a temporary user to isolate user-specific problems: Isolating an issue by using another user account
    Memory limitations: Using Activity Monitor to read System Memory and determine how much RAM is being used
    Identifying resource hogs and other tips: Runaway applications can shorten battery runtime
    Starting the computer in "safe mode": Mac OS X: What is Safe Boot, Safe Mode?

  • Problems updating projects to new versions of Premiere (CS5 to CC and CC to CC 2014) Memory consumption during re-index and Offline MPEG Clips in CC 2014

    I have 24GB of RAM in my 64 bit Windows 7 system running on RAID 5 with an i7 CPU.
    A while ago I updated from Premiere CS5 to CC and then from Premiere CC to CC 2014. I updated all my then current projects to the new version as well.
    Most of the projects contained 1080i 25fps (1080x1440 anamorphic) MPEG clips originally imported (captured from HDV tape) from a Sony HDV camera using Premiere CS5 or CC.
    Memory consumption during re-indexing.
    When updating projects I experienced frequent crashes going from CS5 to CC and later going from CC to CC 2014. Updating projects caused all clips in the project to be re-indexed. The crashes were due to the re-indexing process causing excessive RAM consumption and I had to re-open each project several times before the re-index would eventually complete successfully. This is despite using the setting to limit the RAM consumed by Premiere to much less than the 24GB RAM in my system.
    I checked that clips played; there were no errors generated; no clips showed as Offline.
    Some clips now "Offline: Importer" in CC 2014
    Now, after some months editing one project I found some of the MPEG clips have been flagged as "Offline: Importer" and will not relink. The error reported is "An error occurred decompressing video or audio".
    The same clips play perfectly well in, for example, Windows Media Player.
    I still have the earlier Premiere CC and the project file and the clips that CC 2014 importer rejects are still OK in the Premiere CC version of the project.
    It seems that the importer in CC 2014 has a bug that causes it to reject MPEG clips with which earlier versions of Premiere had no problem.
    It's not the sort of problem expected with a premium product.
    After this experience, I will not be updating premiere mid-project ever again.
    How can I get these clips into CC 2014? I can't go back to the version of the project in Premiere CC without losing hours of work/edits in Premiere CC 2014.
    Any help appreciated. Thanks.

    To answer my own question: I could find no answer to this myself and, with there being no replies in this forum, I have resorted to re-capturing the affected HDV tapes from scratch.
    Luckily, I still had my HDV camera and the source tapes and had not already used any of the clips that became Offline in Premiere Pro CC 2014.
    It seems clear that the MPEG importer in Premiere Pro CC 2014 rejects clips that Premiere Pro CC once accepted. It's a pretty horrible bug that ought to be fixed. Whether Adobe have a workaround or at least know about this issue and are working on it is unknown.
    It also seems clear that the clip re-indexing process that occurs when upgrading a project (from CS5 to CC and also from CC to CC 2014) has a bug which causes memory consumption to grow continuously while it runs. I have 24GB RAM in my system and regardless of the amount of RAM I allocated to Premiere Pro, it would eventually crash. Fortunately, on restarting Premiere Pro and re-loading the project, re-indexing would resume where it left off, and, depending on the size of the project (number of clips to be indexed), after many repeated crashes and restarts re-indexing would eventually complete and the project would be OK after that.
    It also seems clear that Adobe support isn't the greatest at recognising and responding when there are technical issues, publishing "known issues" (I could find no Adobe reference to either of these issues) or publishing workarounds. I logged the re-index issue as a bug and had zero response. Surely I am not the only one who has experienced these particular issues?
    This is very poor support for what is supposed to be a premium product.
    Lesson learned: I won't be upgrading Premiere again mid project after these experiences.

  • High memory consumption in XSL transformations (XSLT)

    Hello colleagues!
    We have the problem of very high memory consumption when transforming XML files with CALL TRANSFORMATION.
    Code example:
    CALL TRANSFORMATION /ipro/wml_translate_cls_ilfo
                SOURCE XML lx_clause_text
                RESULT XML lx_temp.
    lx_clause_text is a WordML xstring (i.e. it is a Microsoft Word file in XML format) and can therefore not easily be split into several parts. Unfortunately this string can get very large (e.g. 50 MB). The problem is that CALL TRANSFORMATION seems to allocate memory for the source and result xstrings but doesn't free them after the transformation.
    So in this example this would mean that the transformation allocates ~100 MB of memory (50 MB for the source, ~50 MB for the result) and doesn't free it. Multiply this by a couple of transformations and a good number of users and you see how we get into trouble.
    I found note 1081257 regarding the problem, but we couldn't figure out how it could be solved in our case. The note proposes to "use several short-running programs". What is meant by this? By the way, our application is built with Web Dynpro for ABAP.
    Thank you very much!
    With best regards,
    Mario Düssel

    Hi,
    q1: How come the RAM consumption increased to 99% on all three boxes?
    If we continue with the theory that network connectivity was lost between the hosts, the Coherence servers on the local hosts would form their own clusters. Prior to the "split", each cache server would hold 1/12 of the primary and 1/12 of the backup (assuming you have one backup). Since Coherence avoids selecting a backup on the same host as the primary when possible, the 4 servers on each host would hold 2/3 of the cache. After the split, each server would hold 1/6 of the primary and 1/6 of the backup, i.e., twice the memory it previously consumed for the cache. It is also possible that a substantial portion of the missing 1/3 of the cache may be restored from the near caches, in which case each server would then hold 1/4 of the primary and 1/4 of the backup, i.e., thrice the memory it previously consumed for the cache.
    q2: Where is the cache data stored in the Coherence servers, i.e. in which memory?
    The cache data is typically stored in the JVM's heap memory area.
    Have you reviewed the logs?
    Regards,
    Harv

  • Dbxml memory consumption

    I have a query that returns about 10MB worth of data when run against my db -- it looks something like the following:
    'for $doc in collection("VcObjStore")/doc
    where $doc[@type="Foo"]
    return <item>{$doc}</item>'
    When I run this query in dbxml.exe, I see the memory footprint (of dbxml.exe) increase by 125MB. Once the query finishes, it comes back down.
    I expected memory consumption to be somewhat larger than what the query actually returns, but this seems quite extreme.
    Is this behavior expected? What is a general rule of thumb on memory usage with respect to result size (is it really 10x)? Any way to make it less of a hog?
    Thanks

    Hi Ron,
    Thanks for a quick reply!
    - I wasn't actually benchmarking DBXML. We've observed large memory consumption during query execution in our test application and verified the same issue with dbxml.exe. Since dbxml.exe is well understood by everyone familiar with DBXML, I thought it would help to start with that.
    - Yes, an environment was created for this db. Here is the code we used to set it up
    EnvironmentConfig envConfig = new EnvironmentConfig();
    envConfig.setInitializeLocking(true);       // enable the locking subsystem
    envConfig.setInitializeCache(true);         // enable the shared memory cache
    envConfig.setAllowCreate(true);             // create the environment if it does not exist
    envConfig.setErrorStream(System.err);
    envConfig.setCacheSize(1024 * 1024 * 100);  // 100 MB cache
    - I'd like an explanation of the reasons behind the performance difference between these two queries:
    Query 1:
    dbxml> time query 'for $doc in collection("VcObjStore")/doc
    where $doc[@type="VirtualMachine"]
    return $doc'
    552 objects... <snip>
    Time in seconds for command 'query': 0.031
    Query 2:
    dbxml> time query 'for $doc in collection("VcObjStore")/doc
    where $doc[@type="VirtualMachine"]
    return <val>{$doc}</val>'
    552 objects... <snip>
    Time in seconds for command 'query': 5.797
    - Any way to make query #2 go as fast as #1?
    Thanks!

  • BW data model and impacts to HANA memory consumption

    Hi All,
    As I consider how to create BW models where HANA is the DB for a BW application, it makes sense to move the reporting target from cubes to DSOs. Now the next logical progression of thought is that the DSO should store the lowest granularity of data (document level). So a consolidated data model that reports on cross-functional data would combine sales, inventory and purchasing data, all stored at document level. In this scenario:
    Will a single report execution that requires data from all 3 DSOs use more memory vs the 3 DSOs aggregated, say, at site/day/material? (Lower granularity data = higher memory consumption per report execution.)
    I'm thinking that more memory is required to aggregate the data in HANA before sending it to BW. Is aggregation still necessary to manage execution memory usage?
    Regards,
    Dae Jin

    Let me rephrase.
    I got an EarlyWatch report that said the dimensions on one of my cubes were too big. I ran SAP_INFOCUBE_DESIGNS in SE38 in my development box and that confirmed it.
    So I redesigned the cube, reactivated it and reloaded it. I then ran SAP_INFOCUBE_DESIGNS again. The cube doesn't even show up in it. I suspect I have to trigger something in BW to make the statistics populate for that cube. How do I make that happen manually?
    Thanks.
    Dave
