[svn] 3930: Step 1 of memory improvements

Revision: 3930
Author: [email protected]
Date: 2008-10-28 09:30:24 -0700 (Tue, 28 Oct 2008)
Log Message:
Step 1 of memory improvements
GraphicElement now optimizes away its AdvancedLayoutFeatures as long as it has a trivial (translation-only) transform matrix.
Added a new OnDemandEventDispatcher class that only allocates memory for event dispatcher functionality when it gets its first listener.
GraphicElement, OverrideBase, and LayoutBase extend OnDemandEventDispatcher.
Modified Paths:
flex/sdk/trunk/frameworks/projects/flex4/src/mx/graphics/graphicsClasses/GraphicElement.as
flex/sdk/trunk/frameworks/projects/flex4/src/mx/layout/LayoutBase.as
flex/sdk/trunk/frameworks/projects/framework/src/FrameworkClasses.as
flex/sdk/trunk/frameworks/projects/framework/src/mx/graphics/LinearGradient.as
flex/sdk/trunk/frameworks/projects/framework/src/mx/graphics/LinearGradientStroke.as
flex/sdk/trunk/frameworks/projects/framework/src/mx/graphics/RadialGradient.as
flex/sdk/trunk/frameworks/projects/framework/src/mx/graphics/RadialGradientStroke.as
flex/sdk/trunk/frameworks/projects/framework/src/mx/states/OverrideBase.as
Added Paths:
flex/sdk/trunk/frameworks/projects/framework/src/mx/utils/OnDemandEventDispatcher.as
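
The on-demand pattern described in the log (allocate event-dispatcher state only when the first listener arrives) can be sketched in Python; the real class is ActionScript, and the names below are illustrative, not the actual Flex API:

```python
class OnDemandEventDispatcher:
    """Defers allocating listener storage until the first listener is added."""

    _listeners = None  # class-level default: no per-instance dict allocated yet

    def add_event_listener(self, event_type, listener):
        # Allocate the listener map lazily, on first use.
        if self._listeners is None:
            self._listeners = {}
        self._listeners.setdefault(event_type, []).append(listener)

    def dispatch_event(self, event_type, *args):
        if not self._listeners:
            return  # nothing was ever allocated, nothing to do
        for listener in self._listeners.get(event_type, []):
            listener(*args)

    @property
    def has_listeners(self):
        return bool(self._listeners)
```

The point of the design is that classes like GraphicElement, OverrideBase, and LayoutBase can inherit dispatcher behavior while instances that never register a listener pay no allocation cost.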

Thanks again Koen,
I am developing remotely on a development server, so when I run SSDT I am running it on the development server box. My current technology/environment mix means I can't run SSDT from my local desktop.
I am pretty sure that scanning the rows is causing the performance issue: if I just click on the data source component, SSDT freezes for several minutes. If I use a cut-down version of the Excel file, say the first 5,000 rows, then the response of SSDT is acceptable.
Also, when I have SSDT pointing at the full spreadsheet and I click around in the associated Data Flow, I notice devenv.exe *32 slowly grabbing 4 GB of RAM and releasing it, according to Windows Task Manager.
Kind Regards,
Kieran.
Kieran Patrick Wood http://www.innovativebusinessintelligence.com http://uk.linkedin.com/in/kieranpatrickwood http://kieranwood.wordpress.com/

Similar Messages

  • [svn:osmf:] 11238: Cue point sample improvements and added a work around for FM-171 to the TemporalFacet class.

    Revision: 11238
    Author:   [email protected]
    Date:     2009-10-28 12:45:35 -0700 (Wed, 28 Oct 2009)
    Log Message:
    Cue point sample improvements and added a work around for FM-171 to the TemporalFacet class.
    Ticket Links:
        http://bugs.adobe.com/jira/browse/FM-171
    Modified Paths:
        osmf/trunk/apps/samples/framework/CuePointSample/src/CuePointSample.mxml
        osmf/trunk/framework/MediaFramework/org/osmf/metadata/TemporalFacet.as

  • Rescuing SVN BDB Steps

    Hey Guys,
    Don't know about you guys, but I had a wonderful start to my day: a 2-year-old SVN repository that has never been backed up started failing. We can no longer use it, all our work is locked in the BDB files, and we're a little shafted!
    Have spent all day with this. This is where I am.
    svnadmin verify - fails after revision 17 though there are 4700 revisions.
    svnadmin restore - doesn't work
    Then I got looking at direct BDB functions
    db_restore - didn't do anything at all which was odd
    db_verify strings - worked, returned a whole load of Unreferenced Pages
    Am currently running db_dump -r into a file which I intend to feed to db_load later.
    But there is more than one db file corrupt here .. can I individually dump and reload them??
    -- Additional Info
    Getting a lot of this type of thing from the SVN changes and representations files, whilst strings gives just Unreferenced Pages.
    dlx35203:/var/svn/db# db4.4_verify changes
    db4.4_verify: Page 5315: item 0 of unrecognizable type
    db4.4_verify: Page 5315: item 1 of unrecognizable type
    db4.4_verify: Page 5315: item 2 of unrecognizable type
    db4.4_verify: Page 5315: item 4 of unrecognizable type
    db4.4_verify: Page 5315: item 6 of unrecognizable type
    db4.4_verify: Page 5315: item 7 of unrecognizable type
    db4.4_verify: Page 5315: item 9 of unrecognizable type
    db4.4_verify: Page 5315: item 10 of unrecognizable type
    db4.4_verify: Page 5315: item 11 of unrecognizable type
    db4.4_verify: Page 5315: item 12 of unrecognizable type
    db4.4_verify: Page 5315: item 13 of unrecognizable type
    db4.4_verify: Page 5315: item 14 of unrecognizable type
    db4.4_verify: Page 5315: item 16 of unrecognizable type
    db4.4_verify: Page 5315: item 17 of unrecognizable type
    db4.4_verify: Page 5315: item 18 of unrecognizable type
    db4.4_verify: Page 5315: item 19 of unrecognizable type
    db4.4_verify: Page 5315: item 20 of unrecognizable type
    db4.4_verify: Page 5315: item 21 of unrecognizable type
    db4.4_verify: Page 5315: item 22 of unrecognizable type
    db4.4_verify: Page 5315: item 24 of unrecognizable type
    db4.4_verify: Page 5315: gap between items at offset 1136
    db4.4_verify: Page 5315: gap between items at offset 1260
    db4.4_verify: Page 5315: gap between items at offset 2212
    db4.4_verify: Page 5315: gap between items at offset 3032
    db4.4_verify: Page 5315: gap between items at offset 3384
    db4.4_verify: Page 5315: gap between items at offset 3624
    Should this work do we think?
    Christ, I need this SVN repo back, any help / tricks appreciated!
    Edited by: user11958941 on Sep 30, 2009 6:58 AM

    Hi,
    You can probably get your data back using the Berkeley DB [db_dump|http://www.oracle.com/technology/documentation/berkeley-db/db/api_reference/C/db_dump.html] utility. I'd suggest trying db_dump with or without the -r and -R options and seeing how that works.
    More details on how to dump and reload the database are in Chapter 22 of the reference guide: [Dumping and Reloading Databases|http://www.oracle.com/technology/documentation/berkeley-db/db/programmer_reference/dumpload.html]
    user11958941 wrote:
    db_restore - didn't do anything at all which was odd
    You are talking about db_recover, not db_restore, right? How did you configure Berkeley DB? Do you have any log files around? Are you using an environment? The Berkeley DB flags that you use when you open the database and the environment may help us understand whether you can run recovery.
    user11958941 wrote:
    But there is more than 1 db file corrupt here .. can i individually dump and reload them??
    Yes, you can. If you have more databases in the same file, you can use the -s option for db_dump.
    Bogdan Coman
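
The dump-and-reload loop described above can be sketched as follows. This Python sketch (illustrative, not from the thread) only assembles the db_dump/db_load command lines per file rather than running them; the directory and file names are examples:

```python
from pathlib import Path

def dump_reload_commands(db_dir, dump_dir, db_files, aggressive=False):
    """Build db_dump/db_load command lines for each corrupt BDB file.

    -r salvages data from a possibly-corrupt database; -R is even more
    aggressive and may recover deleted items along with some garbage.
    """
    flag = "-R" if aggressive else "-r"
    commands = []
    for name in db_files:
        dump_file = str(Path(dump_dir) / (name + ".dump"))
        commands.append((
            # dump the damaged database to a flat-text file
            ["db_dump", flag, "-f", dump_file, str(Path(db_dir) / name)],
            # reload the dump into a fresh database file
            ["db_load", "-f", dump_file, str(Path(db_dir) / (name + ".new"))],
        ))
    return commands
```

Each corrupt file (changes, representations, ...) gets its own dump/load pair, which is exactly the "individually dump and reload" approach confirmed above.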

  • [svn] 3543: Asc front end performance improvements & bug fixes

    Revision: 3543
    Author: [email protected]
    Date: 2008-10-09 11:54:47 -0700 (Thu, 09 Oct 2008)
    Log Message:
    Asc front end performance improvements & bug fixes
    This set of Asc parser/scanner/inputbuffer updates contains changes that simplify the parser's lookahead/match FSM.
    A method, shift(), has been added that replaces match() when the token to be consumed is known.
    Also, a simplified version of lookahead has been added that returns the lookahead token, which allows use of switch code when the lookahead set is large.
    Simple InputBuffer changes (switching to a String, so that we can use substring instead of valueOf) seem to result in about a 2% performance improvement.
    Fixes for:
    ASC-3519
    ASC-2292
    ASC-3545
    All being overlapping bugs related to regexp recognition in slightly differing contexts.
    QA: Yes
    Doc:
    Tests: checkintests, Performance tests, tamarin, asc-tests, mx-unit
    Ticket Links:
    http://bugs.adobe.com/jira/browse/ASC-3519
    http://bugs.adobe.com/jira/browse/ASC-2292
    http://bugs.adobe.com/jira/browse/ASC-3545
    Modified Paths:
    flex/sdk/trunk/modules/asc/src/java/macromedia/asc/parser/InputBuffer.java
    flex/sdk/trunk/modules/asc/src/java/macromedia/asc/parser/Parser.java
    flex/sdk/trunk/modules/asc/src/java/macromedia/asc/parser/Scanner.java
    flex/sdk/trunk/modules/asc/src/java/macromedia/asc/parser/States.java
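
The shift()/match()/lookahead split described in the log can be sketched in Python (the real code is Java; the class and token names below are illustrative):

```python
class Scanner:
    """Minimal token stream illustrating the lookahead/shift/match split."""

    def __init__(self, tokens):
        self.tokens = list(tokens)
        self.pos = 0

    def lookahead(self):
        # Return the next token without consuming it, so callers can
        # switch on it when the lookahead set is large.
        return self.tokens[self.pos] if self.pos < len(self.tokens) else "EOF"

    def shift(self):
        # Consume the next token unconditionally: used when the caller
        # already knows (via lookahead) which token is there.
        tok = self.lookahead()
        self.pos += 1
        return tok

    def match(self, expected):
        # Consume the next token only if it is the expected one.
        if self.lookahead() != expected:
            raise SyntaxError(f"expected {expected}, got {self.lookahead()}")
        return self.shift()
```

shift() skips the comparison that match() performs, which is the small win the commit describes when the parser has already branched on the lookahead token.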

    In reference to this change in the Custom Reports... "Better experience when exporting data - to prevent customer confusion when exporting data from Mac computers, we have removed the export to Excel option and now export in CSV format by default."
    What is the customer confusion we are trying to stop here? I've got even more confused customers at the moment, because all of a sudden they can't find the export to Excel option but know it exists if they log in on a PC.
    Mark

  • Will more memory improve performance in this situation?

    I upgraded from a 3 GHZ Windows machine with 1 GB memory to a Mac Pro 8-core, Leopard, with 2 GB RAM. Most everything runs a lot faster but a few things, like converting layers to smart objects still are a bit pokey. I expected I would never see the Mac equivalent of the hourglass, which seems to be a wristwatch, again, but I still do.
    I have PS CS3 prefs set to 71% RAM for PS, out of 1963 MB available.
    Most all of my images start at 15 MB in RAW format.
    I have kept an eye on the efficiency reading at the bottom of the screen in the status area. It consistently stays at 100%. I presume that means my work isn't getting complex enough to go to virtual disk.
    In this situation, is there anything to be gained for PS CS3 performance by adding more RAM to my machine?
    I couldn't find the answer to this in other forum messages.

    While Photoshop cannot directly address all the RAM itself, it works together with the OS to in effect use far beyond the 2-GB limit imposed by the 32-bit OS and beyond the 3-GB the Photoshop engineers were able to wrangle through a workaround (* SEE FOOTNOTE), bringing into RAM stuff that would otherwise be written to disk.
    The advice is totally correct. You are just a tad behind the times. :) It has been discussed earlier, though some of the threads may have been deleted.
    FOOTNOTE:
    Elementary computing math. Both Panther and Tiger, under which Photoshop CS3 was developed and released, are 32-bit operating systems.
    The 2^32 limitation still holds for all 32-bit applications (minus overhead/application footprint, 2^31).
    Because one bit is reserved, the calculation has to be 2^31, which works out to 2147483648 bytes, and that's why, up through CS2, the Photoshop limit was around 2 GB, as it was for any other application, because of the limitation imposed by the 32-bit OS.
    In CS3, the Photoshop engineers devised a workaround to circumvent the limitations of the 32-bit OS to let it access about 3 GB.
    So, be thankful for the resourcefulness of the Adobe folks, rather than rant against the forces of nature and mathematics.
    Then, in Tiger (and presumably in the currently lame Leopard too), the OS began allowing Photoshop to take advantage beyond the 3 GB, as described in the first paragraph above, even though Photoshop is not directly addressing the RAM.
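
The footnote's arithmetic can be checked directly:

```python
GIB = 2 ** 30  # one gibibyte in bytes

address_space_32bit = 2 ** 32   # full 32-bit address space: 4 GiB
signed_limit = 2 ** 31          # one bit reserved: the per-process ceiling

assert signed_limit == 2147483648
print(signed_limit // GIB, "GiB")   # the roughly 2 GB CS2-era Photoshop limit
```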

  • EP6 sp12 Performance Issue, Need help to improve performance

    We have a Portal development environment with EP6.0 sp12.
    What we are experiencing is a performance issue. It's not extremely slow, but slow compared to normal (compared to our prod box). For example, after entering the username and password and clicking the <Log on> button, it takes more than 10 seconds for the first home page to appear. Also, we currently have the Portal hooked up to 3 xApps systems and one BW system. The time taken for a BW query to appear (with selection screen) is also more than 10 seconds. However, access to one of the other xApps is comparatively faster.
    Do we have a simple-to-use guide (not a very elaborate one) with step-by-step guidance to immediately improve the performance of the Portal?
    A simple guide, easy to implement, with immediate effect, is what we are looking for in the short term.
    Thanks
    Arunabha

    Hi Eric,
      I have searched but didn't find the Portal Tuning and Optimization Guide you suggested. Can you help me find it?
    Subrato,
      This is good and I will obviously read through it, but the issue is that this one covers only the network.
      Do you know of any other guide, which is very basic (maybe 10 steps) and shows the process step by step? That would be very helpful. I already have some information from the thread Portal Performance - page loads slow, client cache reset/cleared too often
    But I am really looking for an answer (steps to do it quickly and effectively) instead of a list of various guides.
    It would be very helpful if you or anybody who has actually done some performance tuning could send a basic list of steps that I can do immediately, instead of reading through these large guides.
    I know I am looking for a shortcut, but this is the need of the hour.
    Thanks
    Arun

  • Out of Memory Error - JBOSS - JRocket

    I continue to see my JRockit/JBoss/Linux servers crash daily in production with out-of-memory errors. When we trace back the stack, we find a query that does not appear to have nearly enough data to bring the server to an out-of-memory state. The heap usage pattern does not indicate any leak in the application.
    Using RedHat Linux ES - rel. 4 - Nahant Update 4
    Startup with -Xms/-Xmx 1536m
    Any thoughts or previous experience? This is a Java application servicing a Flex UI.

    Check your Linux kernel version.
    If you have an unpatched Linux kernel which is not compliant, you will see these errors for all JRockit versions.
    The 2.6.9-5 kernel is not compliant without patches; see:
    http://e-docs.bea.com/jrockit/jrdocs/suppPlat/prodsupp.html#wp999048
    excerpt
    The JRockit JDK is supported on the default Linux kernel, which varies with the distribution and architecture. See the Notes field in the Summary of Supported Configurations by Release for details. If you require a different kernel, such as the hugemem kernel, contact Oracle Support for current support status.
    Also, see references to x86 64-bit:
    http://jira.jboss.org/jira/browse/JBPAPP-158
    This is a 32-bit ES - rel 4, correct?
    This link also shows a fairly excellent step-by-step out-of-memory troubleshooting session.

  • Sql memory usage increase each time win app runs

    Hi,
    SQL memory usage increases each time my win app runs. Why does it work like this? Is it normal?
    I restarted SQL Server 2012. Only my win app uses SQL Server.
    First run of winapp.
    start memory usage : 211.800 KB
    close memory usage: 528.136 KB
    Second run of xaf app.
    start memory usage : 528.136 KB
    close memory usage: 996.844 KB
    Third run of xaf app
    start memory usage : 996.844 KB
    close memory usage: 997.640 KB
    Fourth run of xaf app
    start memory usage : 997.640 KB
    close memory usage: 1.104.864 KB

    Hi,
    Sql memory usage increases each time when win app runs. Why does it work like this? Is it normal ?
    Yes, it is perfectly normal for SQL Server to acquire and hold onto large amounts of memory indefinitely.  This memory improves performance by avoiding disk I/O, query plan compilation and costly memory management. 
    On a dedicated SQL Server you should usually let SQL Server manage memory dynamically. It will release memory if it detects memory pressure. But if you often run other applications on the server that need significant amounts of memory (e.g. IIS, application services), you may want to set max server memory as suggested.
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • Windows 8.1 RDP Virtual Memory Error

    I just upgraded to Windows 8.1 and am encountering the same problem as mentioned here:
    http://social.technet.microsoft.com/Forums/lync/en-US/068228a3-78da-4bcf-acb5-3cf5c63dee48/windows-81-preview-remote-desktop-failing-with-low-virtual-memory-error?forum=w81previtpro
    I am trying to use the RDP desktop client to connect to a site (which uses RD Gateway). It says "Initiating Connection" and then fails with no error or notification (there is nothing in the Event Viewer).
    If I try changing settings (removing sound from RDP, reducing resolution), I eventually get this message:
    [Window Title]
    Remote Desktop Connection
    [Content]
    This computer can't connect to the remote computer.
    The problem might be due to low virtual memory on your computer. Close your other programs, then try connecting again. If the problem continues, contact your network administrator or technical support.
    Connection worked fine on Windows 8 - is this a known 8.1 bug? It is pretty critical for me!
    Thanks

    Hi,
    Since I cannot repro this issue, I considered that the following factors may cause it:
    The paging file value is set too low.
    A RAM issue.
    First, I suggest trying the following steps to check the issue:
    Step 1: Increase page file size:
    How to Change The Size of Virtual Memory (pagefile.sys) on Windows 8 or Windows Server 2012
    http://blogs.technet.com/b/danstolts/archive/2013/01/07/how_2d00_to_2d00_change_2d00_the_2d00_size_2d00_of_2d00_virtual_2d00_memory_2d00_pagefile_2d00_sys_2d00_on_2d00_windows_2d00_8_2d00_or_2d00_windows_2d00_server_2d00_2012.aspx
    Step 2: Run Memory diagnostic tool:
    From Start, type mdsched.exe, and then press Enter. Normally, text that you type on Start is entered into the Apps Search box by default.
    Choose whether to restart the computer and run the tool immediately or schedule the tool to run at the next restart.
    Windows Memory Diagnostic runs automatically after the computer restarts and performs a standard memory test. If you want to perform fewer or more tests, press F1, use the up and down arrow keys to set the Test Mix as Basic, Standard, or Extended, and then press F10 to apply the desired settings and resume testing.
    When testing is complete, the computer restarts. You’ll see the test results when you log on.
    If the issue still persists after the above steps, I recommend doing the following test and letting me know the results:
    Start Windows in clean boot mode, then open Remote Desktop and see what happens:
    How to perform a clean boot to troubleshoot a problem in Windows 8, Windows 7, or Windows Vista
    http://support.microsoft.com/kb/929135  
    Keep us posted.
    Regards,
    Kate Li

  • Important Note: Memory Management

    If you encounter low memory messages including “Available app space” or “Storage space running out” please check the software on your tablet:
    1) Go to Settings
    2) Select About Tablet
    3) Refer to SW Version
    If the SW Version on your Ellipsis 7 tablet is:
    MV7A_31D16_422  OR
    MV7A_31D18_422  OR
    MV7A_31D26_422
    Then moving to software version MV7A_31D25_422A will enhance the available storage on your device. You are eligible for a software update on the tablet. Please call our Customer Support at 1 (800) 837-4966, and they will support you through this process. You will be required to ship your device for this software enhancement.
    If your tablet is on MV7A_31D25_422A or MV7B_31D41_422 , then your device already has all the memory improvements and you will need to free up memory space. Please uninstall apps that are not needed and move media files to an SD card or to your PC.

    To clarify:  This software upgrade is being handled by the Verizon TECH dept (it took 20+ min on hold).  Once there, the tech employee had no idea what I was talking about... so I gave her this thread number and a bunch of information (and my Ellipsis 7's software version).  Now things are getting warmer. After 5 more min, she calls back, saying that this would be the procedure (as unbelievable as it will sound): 
    "Verizon will ship you a box. (two days).  Put the tablet in the prepaid postage box and send it to us (2-3 days), we'll fix it (3-5 days), and send it back to you (1-2 days)."  I said, "You've gotta be kidding!! How about you email me a shipping label, just like Amazon and Lands' End all do, and I'll send it to you today??"  Nope. Sorry. 
    Gee, Verizon, are you still living in 1970? I hope some MBA class analyzes how much money and time you waste by doing it this way. Might explain my $250+ monthly bill for the last 16+ years.

  • JDeveloper running out of Heap Space using SVN

    JDeveloper: 11.1.1.1.0
    SVN : 1.6.3
    When checking out from SVN , JDeveloper runs out of memory.
    I have changed the JDeveloper startup target to include the following: -Xmn512M -Xms2048M -Xmx2048M, but this has not helped.
    What should I do?
    Howie

    Alternatively, you can set it in jdev.conf as well.
    -Arun
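
For reference, the jdev.conf equivalent would look something like the following sketch (the AddVMOption syntax is the standard jdev.conf convention; verify the exact file location for your install, typically under jdeveloper/jdev/bin/):

```
# Illustrative jdev.conf fragment; values mirror the startup-target flags above
AddVMOption -Xms2048M
AddVMOption -Xmx2048M
```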

  • How to free up memory while running?

    Hello,
    I'm using a custom OPUI written in LV.
    In there I start TestStand 3.5.
    The TestStand model then starts two parallel sequences (as "New Execution") that are loaded in Sequence-File-Load and loop all the time.
    One parallel sequence checks the buttons of the OPUI and the other controls a climatic chamber. When a specific temperature is reached, it executes "Test UUT" of the model sequence (also as "New Execution").
    Everything works fine, but something is eating memory. Every time a Test UUT is done the memory goes up, and after it is finished the memory is not released.
    So my question is: how can I free up unused memory (like a garbage collector) in TestStand, or in LV if a special action is needed there?
    Or perhaps the test-step results are not removed, so that I create hundreds of new executions and they all keep their results?
    I have already clicked "Disable result recording for all steps" in the two parallel sequences and in all steps in the model, so I think only the MainSequence callback is creating results.
    Does anyone have any ideas what I can do?
    Thanks for everything

    If you have result collection disabled for all of your steps, then the memory leak is likely due to something else. You might not be closing a reference that you should be. Try to narrow down which part(s) of your sequence are leaking the memory, either by stepping through things one step at a time and watching memory usage, or by cutting out parts of your sequence until the problem goes away. Also, if it's not your sequence, it might be your custom UI that is leaking the memory; to determine this, try running your sequence in the sequence editor and/or one of the UIs that ship with TestStand and see if the problem goes away.
    Hope this helps,
    -Doug

  • LR2 Vista Memory VERY LOW

    I have Vista SP1 with 4 GB of memory. OK, I know I really only have access to a little over 3 GB, but LR is only making use of 712 MB, which is crazy low...
    WHY?
    And please don't tell me it has anything to do with fragmentation, because it's not.
    Lightroom version: 2.0 [481478]
    Operating system: Windows Vista Professional Service Pack 1 (Build 6001)
    Version: 6.0 [6001]
    Application architecture: x86
    System architecture: x86
    Physical processor count: 2
    Processor speed: 2.1 GHz
    Built-in memory: 3069.6 MB
    Real memory available to Lightroom: 716.8 MB
    Real memory used by Lightroom: 271.4 MB (37.8%)
    Virtual memory used by Lightroom: 258 MB
    Memory cache size: 0.3 MB
    Application folder: C:\Program Files\Adobe\Adobe Photoshop Lightroom 2
    Library Path: C:\Users\Michael\Pictures\Lightroom\Lightroom 2 Catalog.lrcat

    If you have more than 3 GB in your 32-bit Vista system, you can override the app memory limit (2048 MB) to allow LR and other apps like PS to allocate more memory:
    bcdedit /set IncreaseUserVa 3072
    reboot
    Not sure whether the extra memory improves LR performance, but it will now report 1228.8 MB of real memory available. Lower values don't appear to work (e.g. 2560).
    This can be backed out if you aren't convinced:
    bcdedit /deletevalue increaseuserva
    reboot
    XP has another way of dealing with this (refer to the /3GB switch in boot.ini).

  • Is my project using too much memory?

    I'm having lots of problems w/ a project. When I checked the project info I found some startling numbers:
    Midi Regions: 29 objects, 4567934 events, memory: 77137218
    and Undo Steps: 31 objects, memory: 78043126
    Are those numbers abnormally large? Could they be responsible for my project acting super sluggishly?

    Seems to be 10 times larger than mine. Here is my information:
    Midi Regions:195 - ram 401040
    environment objects: 240 - ram 221180
    Undo steps: 31 - ram 701712
    Regards, K

  • Improving Performance of a multidatabase report

    Hi All,
    This is regarding a multiple-database report.
    I am getting one query from MySQL like this:
    SELECT * FROM OBJSETTING_DATA
    and another query from Oracle like this:
    select country, empno from HO_USERS
    and another, also from Oracle, like this:
    select linemanager, empno from hr_apps
    The parameters are year, division, and status, and I am linking empno using the CR Links tab.
    I will probably have only some hundreds of records.
    In my report I need to show country, empno, name, linemanager, grade, and status.
    So country and linemanager come from Oracle and the rest all come from MySQL.
    Please suggest what steps to follow to improve performance.

    Hi Abhilash and Sastry,
    Instead of linking tables in the Links tab, I did it like this, and I was able to improve performance:
    I created the main report using the MySQL query.
    Then I created 2 subreports using the Oracle DB with an empno parameter, linked the empno field to the empno parameter using the subreport Links tab, and placed the subreports in the Details section of the main report as per my requirement.
    I am getting somewhat better performance compared to earlier.
    Please suggest
