Memory optimized DLLs not being cleaned up

Hi,
From BOL, my understanding is that DBAs do not need to administer the DLLs created for memory-optimized tables or natively compiled stored procedures: they are recompiled automatically when the SQL Server service starts and are removed when no longer needed.
But I am seeing that even after a memory-optimized table has been dropped and the service restarted, the DLLs still exist in the file system AND are still loaded into SQL Server memory and attached to the process. This can be seen from the fact that they are still visible in sys.dm_os_loaded_modules, and they are locked in the file system if you try to delete them while the SQL Server service is running.
Is this a bug? Or are they cleaned up at a later date? If at a later date, what triggers the clean-up, if it isn't an instance restart?
Pete

Most likely the DLLs are still needed during database recovery, as there are still remnants of the tables in the checkpoint files. A couple of cycles of checkpoints and log truncation (e.g., by taking a log backup) need to happen before the old checkpoint files are cleaned up and the remnants of the dropped tables are removed from disk.
The following blog post details all the state transitions a checkpoint file goes through:
http://blogs.technet.com/b/dataplatforminsider/archive/2014/01/23/state-transition-of-checkpoint-files-in-databases-with-memory-optimized-tables.aspx
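If you want to verify this on your own instance, something along these lines should work (the database name, backup path and the LIKE filter on the module name are placeholders/assumptions; adjust for your environment):

    -- List the in-memory OLTP DLLs still loaded into the SQL Server process
    SELECT name, description
    FROM sys.dm_os_loaded_modules
    WHERE name LIKE '%xtp%';

    -- Drive a couple of checkpoint / log-truncation cycles so the old
    -- checkpoint files (and the DLLs they reference) become eligible for cleanup
    USE InMemoryDb;
    CHECKPOINT;
    BACKUP LOG InMemoryDb TO DISK = N'X:\Backups\InMemoryDb_1.trn';
    CHECKPOINT;
    BACKUP LOG InMemoryDb TO DISK = N'X:\Backups\InMemoryDb_2.trn';

Once the old checkpoint files have been retired, the dropped table's DLLs should no longer be loaded after the next service restart.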

Similar Messages

  • [svn] 4533: Bug: BLZ-301 - Selector expressions are not being cleaned up properly on unsubscribe

    Revision: 4533
    Author: [email protected]
    Date: 2009-01-14 15:55:31 -0800 (Wed, 14 Jan 2009)
    Log Message:
    Bug: BLZ-301 - Selector expressions are not being cleaned up properly on unsubscribe
    QA: Yes
    Doc: No
    Ticket Links:
    http://bugs.adobe.com/jira/browse/BLZ-301
    Modified Paths:
    blazeds/trunk/modules/core/src/flex/messaging/services/messaging/SubscriptionManager.java

  • Session not being cleaned up by JRun

    My application is using iPlanet Web Server and the JRun 3.02 application server. I am having a problem with active sessions not getting cleaned up by the app server. When the user goes through the application and finishes the process, I invalidate the session by calling session.invalidate(). I have also set a 30-minute timeout value in the JRun global.properties file to invalidate the session if the user starts but does not finish going through the application. However, the number of active sessions in the JRun log doesn't seem to go down. After a few days I run out of sessions and the application hangs. I keep a few objects on the session, including a fairly big 'pdfObject' that I use to create a PDF document on the fly.
    Any idea why JRun is not able to clean up the sessions after the 30-minute timeout has passed? Does the fact that I have stored objects on the session prevent JRun from invalidating and cleaning up the session?
    Thanks in advance.

    Hi afikru
    According to the Servlet specification, the session.invalidate() method should unbind any objects associated with the session. However, I'm not conversant with the JRun application server, so I can only provide some pointers here to help you out.
    Firstly, try locating some documentation specific to your application server, which may throw some light on why this is happening.
    Secondly, I'd suggest running the server inside a profiling tool so that you can see what objects are being created and how many of them. Try explicitly running the garbage collector and see if the session count comes down.
    Keep me posted on your progress.
    Good Luck!
    Eshwar R.
    Developer Technical Support
    Sun microsystems
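    One way to see whether the container is actually unbinding your session objects at invalidation/timeout is to wrap the big object in a holder that implements HttpSessionBindingListener and log the callbacks. This is only a sketch (the class and attribute names are made up, not from the original post):

        // Hypothetical holder for the large per-session object; the servlet
        // container calls valueUnbound() when the session is invalidated or times out.
        import javax.servlet.http.HttpSessionBindingEvent;
        import javax.servlet.http.HttpSessionBindingListener;

        public class PdfObjectHolder implements HttpSessionBindingListener {
            private final byte[] pdfData;

            public PdfObjectHolder(byte[] pdfData) {
                this.pdfData = pdfData;
            }

            public void valueBound(HttpSessionBindingEvent event) {
                System.out.println("pdfObject bound to session " + event.getSession().getId());
            }

            public void valueUnbound(HttpSessionBindingEvent event) {
                System.out.println("pdfObject unbound from session " + event.getSession().getId());
            }
        }

    In the code that finishes the process you can also remove the heavy attribute explicitly before invalidating, e.g. session.removeAttribute("pdfObject"); session.invalidate(); so the large object becomes collectable even if the container holds on to the session object a little longer.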

  • Obsolete jdb not being cleaned up

    Hi,
    Setup:
    * We are using Oracle NoSQL 1.2.123.
    * We have 3 replication groups with 3 replication nodes each.
    Problem:
    * 2 of the slaves (in 2 different replication groups) occupy much more space in JDB files (10 times more) than all the others. As these are slaves, and writes always go through the master, and all nodes in a replication group have the same data (eventually), I assume that this is stale data that has not been cleaned up by the BDB garbage collection (cleaner threads). Unfortunately the logs do not show anything new (since Dec. last year) and the oldest JDB files are from February.
    Questions:
    * Any ideas what could have gone wrong?
    * What can I do to trigger the cleaners to cleanup the old data? Is that safe to do in production environment and without downtime?
    * Is it really safe to assume that the current data within a replication group is really the same?
    Thank you in advance
    Dimo
    PS. A thread dump shows 2 cleaner threads that do nothing.

    1) The simplest and fastest way to correct the replica node is to restore it from the master node. We will send you instructions for doing this later today. Here are directions for refreshing the data storage files (.jdb files) on a target node. NoSQL DB will automatically refresh the storage files from another node, after we manually stop the target node, delete its storage files, and finally restart it, as described below. Thanks to Linda Lee for these directions.
    First, be sure to make a backup.
    Suppose you want to remove the storage files from rg1-rn3 and make it refresh its files from rg1-rn1. First check where the storage files for the target replication node are located, using the show topology command in the Admin CLI. Start the Admin CLI this way:
        java -jar KVHOME/lib/kvstore.jar runadmin -host <host> -port <port>
    Find the directory containing the target Replication Node's files:
        kv-> show topology -verbose
        store=mystore  numPartitions=100 sequence=108
          dc=[dc1] name=MyDC repFactor=3
          sn=[sn1]  dc=dc1 localhost:13100 capacity=1 RUNNING
            [rg1-rn1] RUNNING  c:/linda/work/smoke/KVRT1/dirB
                         single-op avg latency=0.0 ms   multi-op avg latency=0.67391676 ms
          sn=[sn2]  dc=dc1 localhost:13200 capacity=1 RUNNING
            [rg1-rn2] RUNNING  c:/linda/work/smoke/KVRT2/dirA
                      No performance info available
          sn=[sn3]  dc=dc1 localhost:13300 capacity=1 RUNNING
            [rg1-rn3] RUNNING  c:/linda/work/smoke/KVRT3/dirA
                         single-op avg latency=0.0 ms   multi-op avg latency=0.53694165 ms
          shard=[rg1] num partitions=100
            [rg1-rn1] sn=sn1 haPort=localhost:13111
            [rg1-rn2] sn=sn2 haPort=localhost:13210
            [rg1-rn3] sn=sn3 haPort=localhost:13310
            partitions=1-100
    In this example, rg1-rn3's storage is located in c:/linda/work/smoke/KVRT3/dirA. Stop the target service using the stop-service command:
        kv-> plan stop-service -service rg1-rn3 -wait
    In another command shell, remove the files for the target Replication Node:
        rm c:/linda/work/smoke/KVRT3/dirA/rg1-rn3/env/*.jdb
    In the Admin CLI, restart the service:
        plan start-service -service rg1-rn3 -wait
    The service will restart and will populate its missing files from one of the other two nodes in the shard. You can use the "verify" or the "show topology" command to check on the status of the store.
    --mark

  • Aq$_tab_p, aq$_tab_d filling and not being cleaned up

    Hi all.
    I have a simple one-way Streams replication setup (two nodes) based on the examples. Replication seems to be working.
    However, the AQ$_TAB_P and AQ$_TAB_D tables (on the capture side only) continue to fill, as do the number of messages in the queue and the spilled LCRs in v$buffered_queues. Nothing should be spilling, since the only things I'm sending are single-row updates to a heartbeat table and the Streams pools are a few hundred meg.
    I have tried aq_tm_processes unset, as well as set to 2, and the tables continue to grow.
    The MSG_STATE values in aq$tab are either DEFERRED or DEFERRED SPILLED. As mentioned, all of the heartbeat updates (as well as small test transactions) replicate just fine, so the transactions are being captured, propagated and applied.
    For reference, I am running 10.2.0.2 on Solaris 10 with no Streams-related one-off patches to speak of. My propagation did not specify queue_to_queue.
    I'm wondering if there is a step I may have missed, or what else I can look at to ensure that these tables are cleaned up?
    Thanks.
    Edited by: user599560 on Oct 28, 2008 12:39 PM

    Hello
    I forgot to mention that you should check v$propagation_receiver on the destination and v$propagation_sender on the source. v$propagation_receiver on the source will not have records unless you are using bi-directional Streams.
    The aq_tm_processes parameter should be set on all the databases that use Streams. This parameter is responsible for spawning the queue monitor slaves, which actually perform the spilling and remove the spilled messages that are no longer needed.
    It is suggested to remove this parameter from the spfile; however, SHOW PARAMETER will still show it as 0. Hence you should check v$spparameter to confirm whether it was actually removed. If you remove it from the spfile, the required number of slaves should be spawned automatically by the autotune feature in 10g. However, I would always suggest setting this parameter to 1, so that one slave process is always spawned even if Streams is not in use, and SHOW PARAMETER will then always show it as 1.
    If you find the slaves are not spawned, check your alert.log to see whether any errors are reported. You also need to check that the queue monitor coordinator process (QMNC) is spawned. If QMNC itself is not spawned (by default it should always be spawned), then no q00 slaves will be spawned. If you remove the parameter from the spfile and you see that no q00 slaves are spawned even though you are using Streams (capture, propagation or apply), then you should log an SR with Oracle Support to investigate. You can check the QMNC and q00 slaves at the OS level using the following command:
    ps -ef | grep $ORACLE_SID | grep [q][m0][n0]
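    For the v$spparameter check mentioned above, a query and commands along these lines should do it (a sketch; run as a privileged user and adjust to your environment):

        -- Confirm whether aq_tm_processes is really absent from the spfile
        SELECT name, value, isspecified
        FROM   v$spparameter
        WHERE  name = 'aq_tm_processes';

        -- Either remove it from the spfile (takes effect at the next restart) ...
        ALTER SYSTEM RESET aq_tm_processes SCOPE=SPFILE SID='*';

        -- ... or set it explicitly to 1, as suggested above
        ALTER SYSTEM SET aq_tm_processes = 1 SCOPE=BOTH;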
    Please mark this thread as answered if all your questions have been answered; otherwise let me know.
    Thanks,
    Rijesh

  • My memory sticks are not being recognized by the computer...

    I just installed 4 1GB sticks of memory, in pairs and all from the same maker. I already had 2.5GB of memory installed. Once I turned on my computer and clicked the Apple menu, it showed that I only have 2GB of memory installed! Why is it not showing the other 4 gigs? It says those slots are empty, even though all 8 slots are occupied. I have a total of 6GB but am only showing 2GB. I restarted my computer twice and keep getting the same result. All memory sticks are seated firmly in the slots as well.
    Any suggestions?
    I bought the memory sticks from a reliable vendor that sells Mac products.

    I don't have the slightest idea what you are talking about, LOL! How do I do a cold boot? How do I reset the NVRAM?
    Another odd thing I noticed is that I took all the memory out of the computer and installed just the 1GB sticks, totaling 4GB. Once I turned on the computer, my desktop would not show up on the screen; I got a solid blue screen instead. Once I removed all those sticks and replaced them with the sticks previously installed, my desktop returned! The new memory sticks are brand spanking new. Perhaps I had them installed incorrectly, as I had them all in the same bank. I will experiment again tomorrow.
    Message was edited by: DVX100Shooter

  • When I try to load a new version of iTunes I get an error message about the file MSVCR80.dll not being available, and a Windows error 126 message. Any idea what to do?

    When I try to load a new version of iTunes on my Windows XP PC, I get a message that the file MSVCR80.dll is missing, as well as a Windows error 126 message.
    Any idea what to do?

    Solving the iTunes Installation Problems in Windows
    1. Apple has posted their solution here: iTunes 11.1.4 for Windows- Unable to install or open - MSVCR80 issue.
    2. If the Apple article does not fully resolve the problem for you, then try Troubleshooting issues with iTunes for Windows updates - MSVCR80.

  • Old recovery points seem not to be getting cleaned up

    I'm running a Windows Server 2012 server with DPM 2012 SP1, acting as a secondary DPM server for a couple of primary servers. However, over the last 5-6 weeks it has begun to behave very strangely. Suddenly I get a lot of "Recovery Point volume threshold exceeded", "DPM does not have sufficient storage space available on the recovery point volume to create new recovery points" and "The used disk space on the computer running DPM for the recovery point volume of SQL Server 2008 database XXXXX\DB(servername.domain.com) has exceeded the threshold value of 90% (DPM accounts 600 MB for internal usage in addition to free space available). If you do not allocate more disk space, synchronization jobs may fail due to insufficient disk space. (ID 3169)" alerts.
    All of these alerts seem to have a common source - disk space, of course - but there is currently 8 TB free in the DPM disk pool. However, I have a feeling that all of this started when we added another DPM disk to the storage pool. Could it be that DPM no longer cleans up expired disk data correctly?
    /Amir

    Hi,
    If the pruneshadowcopiesdpm201.ps1 script is not completing, hangs, or crashes, then that needs to be addressed, as that will definitely cause storage usage problems.
    In the meantime you can use the PowerShell script below to delete old recovery points and help free disk space. It will prompt you to select a data source, then a date, and will delete all recovery points made before that time.
    #Author : Ruud Baars
    #Date : 11/09/2008
    #Edited : 11/15/2012 By: Wilson S.
    #edited : 11:27:2012 By: Mike J.
    # NOTE: Update script to only remove recovery points on Disk. Recovery points removed will be from the oldest one up to the date
    # entered by the user while the script is running
    #deletes all recovery points before 'now' on selected data source.
    $version="V4.7"
    $ErrorActionPreference = "silentlycontinue"
    add-pssnapin sqlservercmdletsnapin100
    Add-PSSnapin -Name Microsoft.DataProtectionManager.PowerShell
    #display RP's to delete and ask to continue.
    #Check & wait data source to be idle else removal may fail (in Mojito filter on 'intent' to see the error)
    #Fixed prune default and logfile name and some logging lines (concatenate question + answer)
    #Check dependent recovery points do not pass BEFORE date and adjust selection to not select those ($reselect)
    #--- Fixed reselect logic to keep adjusting reselect for as long as older than BEFORE date
    #--- Fixed post removal rechecking logic to match what is done so far (was still geared to old logic)
    #--- Modified to remove making RP and ask for pruning, fixed logic for removal rechecking logic
    $MB=1024*1024
    $logfile="DPMdeleteRP.LOG"
    $wait=10 #seconds
    $confirmpreference = "None"
    function Show_help
    cls
    $l="=" * 79
    write-host $l -foregroundcolor magenta
    write-host -nonewline "`t<<<" -foregroundcolor white
    write-host -nonewline " DANGEROUS :: MAY DELETE MANY RECOVERY POINTS " -foregroundcolor red
    write-host ">>>" -foregroundcolor white
    write-host $l -foregroundcolor magenta
    write-host "Version: $version" -foregroundcolor cyan
    write-host "A: User Selects data source to remove recovery points for" -foregroundcolor green
    write-host "B: User enters date / time (using 24hr clock) to Delete recovery points" -foregroundcolor green
    write-host "C: User Confirms deletion after list of recovery points to be deleted is displayed." -foregroundcolor green
    write-host "Appending to log file $logfile`n" -foregroundcolor white
    write-host "User Accepts all responsibilities by entering a data source or just pressing [Enter] " -foregroundcolor white -backgroundcolor blue
    "**********************************" >> $logfile
    "Version $version" >> $logfile
    get-date >> $logfile
    show_help
    $DPMservername=&"hostname"
    "Selected DPM server = $DPMservername" >> $logfile
    write-host "`nConnnecting to DPM server retrieving data source list...`n" -foregroundcolor green
    $pglist = @(Get-ProtectionGroup $DPMservername) # WILSON - Created PGlist as array in case we have a single protection group.
    $ds=@()
    $tapes=$null
    $count = 0
    $dscount = 0
    foreach ($count in 0..($pglist.count - 1))
    # write-host $pglist[$count].friendlyname
    $ds += @(get-datasource $pglist[$count]) # WILSON - Created DS as array in case we have a single protection group.
    # write-host $ds
    # write-host $count -foreground yellow
    if ( Get-Datasource $DPMservername -inactive) {$ds += Get-Datasource $DPMservername -inactive}
    $i=0
    write-host "Index Protection Group Computer Path"
    write-host "---------------------------------------------------------------------------------"
    foreach ($l in $ds)
    "[{0,3}] {1,-20} {2,-20} {3}" -f $i, $l.ProtectionGroupName, $l.psinfo.netbiosname, $l.logicalpath
    $i++
    $DSname=read-host "`nEnter a data source index from list above - Note co-located datasources on same replica will be effected"
    if (!$DSname)
    write-host "No datasource selected `n" -foregroundcolor yellow
    "Aborted on Datasource name" >> $logfile
    exit 0
    $DSselected=$ds[$DSname]
    if (!$DSselected)
    write-host "No datasource selected `n" -foregroundcolor yellow
    "Aborted on Datasource name" >> $logfile
    exit 0
    $rp=get-recoverypoint $DS[$dsname]
    $rp
    # $DoTape=read-host "`nDo you want to remove when recovery points are on tape ? [y/N]"
    # "Remove tape recovery point = $DoTape" >> $logfile
    write-host "`nCollecting recoverypoint information for datasource $DSselected.name" -foregroundcolor green
    if ($DSselected.ShadowCopyUsedspace -gt 0)
    while ($DSSelected.TotalRecoveryPoints -eq 0)
    { # "still 0"
    #this is on disk
    $oldShadowUsage=[math]::round($DSselected.ShadowCopyUsedspace/$MB,1)
    $line=("Total recoverypoint usage {0} MB on DISK in {1} recovery points" -f $oldShadowUsage ,$DSselected.TotalRecoveryPoints )
    $line >> $logfile
    write-host $line`n -foregroundcolor white
    #this is on tape
    #$trptot=0
    #$tp= Get-RecoveryPoint($dsselected) | where {($_.Datalocation -eq "Media")}
    #foreach ($trp in $tp) {$trptot += $trp.size }
    #if ($trptot -gt 0 )
    # $line=("Total recoverypoint usage {0} MB on TAPE in {1} recovery points" -f ($trptot/$MB) ,$DSselected.TotalRecoveryPoints )
    # $line >> $logfile
    # write-host $line`n -foregroundcolor white
    [datetime]$afterdate="1/1/1980"
    #$answer=read-host "`nDo you want to delete recovery points from the beginning [Y/n]"
    #if ($answer -eq "n" )
    # [datetime]$afterdate=read-host "Delete recovery points AFTER date [MM/DD/YYYY hh:mm]"
    [datetime]$enddate=read-host "Delete ALL Disk based recovery points BEFORE and Including date/time entered [MM/DD/YYYY hh:mm]"
    "Deleting recovery points until $enddate" >>$logfile
    write-host "Deleting recovery points until and $enddate" -foregroundcolor yellow
    $rp=get-recoverypoint $DSselected
    if ($DoTape -ne "y" )
    $RPselected=$rp | where {($_.representedpointintime -le $enddate) -and ($_.Isincremental -eq $FALSE)-and ($_.DataLocation -eq "Disk")}
    else
    $RPselected=$rp | where {($_.representedpointintime -le $enddate) -and ($_.Isincremental -eq $FALSE)}
    if (!$RPselected)
    write-host "No recovery points found!" -foregroundcolor yellow
    "No recovery points found, aborting...!" >> $logfile
    exit 0
    $reselect = $enddate
    $adjustflag = $false
    foreach ($onerp in $RPselected)
    $rtime=[string]$onerp.representedpointintime
    $rsize=[math]::round(($onerp.size/$MB),1)
    $line= "Found {0}, RP size= {1} MB (If 0 MB, co-located datasource cannot be computed), Incremental={2} "-f $rtime, $rsize,$onerp.Isincremental
    $line >> $logfile
    write-host "$line" -foregroundcolor yellow
    #Get dependent rp's for data source
    $allRPtbd=$DSselected.GetAllRecoveryPointsToBeDeleted($onerp)
    foreach ($oneDrp in $allRPtbd)
    if ($oneDrp.IsIncremental -eq $FALSE) {continue}
    $rtime=[string]$oneDrp.representedpointintime
    $rsize=[math]::round(($oneDrp.size/$MB),1)
    $line= ("`t...is dependancy for {0} size {1} `tIncremental={2}" -f $rtime, $rsize, $oneDrp.Isincremental)
    $line >> $logfile
    if ($oneDrp.representedpointintime -ge $enddate)
    #stick to latest full ($oneDrp = dependents, $onerp = full)
    $adjustflag = $true
    $reselect = $onerp.representedpointintime
    "<< Dependents newer than BEFORE date >>>" >> $logfile
    Write-Host -nonewline "`t <<< later than BEFORE date >>>" -foregroundcolor white -backgroundcolor red
    write-host "$line" -foregroundcolor yellow
    else
    #Ok, include current latest incremental
    $reselect = $oneDrp.representedpointintime
    write-host "$line" -foregroundcolor yellow
    if ($reselect -lt $oneDrp.representedpointintime)
    #we adjusted further backward than latest incremental within selection
    $reselect = $rtime
    $line = "Adjusted BEFORE date to be $reselect to include dependents to $enddate"
    $line >> $logfile
    Write-Host $line -foregroundcolor white -backgroundcolor blue
    $line="`n<<< SECOND TO LAST CHANCE TO ABORT - ONE MORE PROMPT TO CONFIRM. >>>"
    write-host $line -foregroundcolor white -backgroundcolor blue
    $line >> $logfile
    $line="Above recovery points within adjusted range will be permanently deleted !!!"
    write-host $line -foregroundcolor red
    $line >> $logfile
    $line="These RP's include dependent recovery points and may contain co-located datasource(s)"
    write-host $line -foregroundcolor red
    $line >> $logfile
    $line="Data source activity = " + $DSselected.Activity
    $line >> $logfile
    write-host $line -foregroundcolor white
    $DoDelete=""
    while (($DoDelete -ne "N" ) -and ($DoDelete -ne "Y"))
    $line="Continue with deletion (must answer) Y/N? "
    write-host $line -foregroundcolor white
    $DoDelete=read-host
    $line = $line + $DoDelete
    $line >> $logfile
    if (!$DSselected.Activity -eq "Idle")
    $line="Data source not idle, do you want to wait Y/N ? "
    write-host $line -foregroundcolor yellow
    $Y=read-host
    $line = $line + $Y
    $line >> $logfile
    if ($Y -ieq "Y")
    Write-Host "Waiting for data source to become idle..." -foregroundcolor green
    while ($DSselected.Activity -ne "Idle")
    ("Waiting {0} seconds" -f $wait) >>$logfile
    Write-Host -NoNewline "..." -ForegroundColor blue
    start-sleep -s $wait
    if ($DoDelete -eq "Y")
    foreach ($onerp in $RPselected)
    #reselect is adjusted to safe range relative to what was requested
    #--- if adjustflag not set then all up to including else only older because we must keep the full
    if ((($onerp.representedpointintime -le $reselect) -and ($adjustflag -eq $false)) -or ($onerp.representedpointintime -lt $reselect))
    $rtime=[string]$onerp.representedpointintime
    write-host `n$line -foregroundcolor red
    $line >>$logfile
    if (($onerp ) -and ($onerp.IsIncremental -eq $FALSE)) { remove-recoverypoint -RecoveryPoint $onerp -confirm:$True} # >> $logfile}
    $line =("---`nDeleting recoverypoint -> " + $rtime)
    $line >>$logfile
    "All Done!" >> $logfile
    write-host "`nAll Done!`n`n" -foregroundcolor white
    $line="Do you want to View DPMdeleteRP.LOG file Y/N ? "
    write-host $line -foregroundcolor white
    $Y=read-host
    $line = $line + $Y
    $line >> $logfile
    if ($Y -ieq "Y")
    Notepad DPMdeleteRP.LOG
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. Regards, Mike J. [MSFT]
    This posting is provided "AS IS" with no warranties, and confers no rights.
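    Before running the script above, it can help to list what recovery points a data source currently has. A rough sketch using the same DPM cmdlets the script relies on (run in the DPM Management Shell; the server name is a placeholder):

        # List recovery points for the first data source of the first protection group
        $pg = Get-ProtectionGroup -DPMServerName "DPMSERVER01"
        $ds = @(Get-Datasource -ProtectionGroup $pg[0])
        Get-RecoveryPoint -Datasource $ds[0] |
            Sort-Object RepresentedPointInTime |
            Format-Table RepresentedPointInTime, DataLocation, IsIncremental -AutoSize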

  • Passivation table ps_txn not being cleaned up

    ADF 11gR1 PS1
    Hello
    I have a small application using one unbounded task flow and one bounded task flow.
    Each task flow uses a different application module.
    The unbounded task flow calls the bounded task flow in a modeless inline popup via a button.
    When running the application and clicking the button, the bounded task flow is called and a new row is inserted into the ps_txn table.
    However, when the inline popup is closed via the "x" on the popup window, the row is not removed from the ps_txn table.
    If the button is clicked again, a new row is added to the ps_txn table.
    Is this the normal behaviour? Looking at section 40.5.3 in the Dev Guide, it would seem that the record should be deleted or reused.
    I understand that there are scripts for cleaning up the table, but shouldn't it be automatic?
    What am I missing ?
    Regards
    Paul

    Hi Paul,
    Do you use failover (jbo.dofailover)?
    If not, I would expect records to be deleted from PS_TXN at activation.
    I tested with the ADF BC Component Browser, selecting the Save/Restore Transaction State menu items, with jbo.debugoutput=console:
    [277] (0) OraclePersistManager.deleteAll(2126) **deleteAll** collid=17461
    [278] (0) OraclePersistManager.deleteAll(2140)    stmt: delete "PS_TXN" where collid=:1
    [279] (0) OraclePersistManager.commit(217) **commit** #pending ops=1
    But I have also already noticed orphaned records in the table.
    Do you use jbo.internal_connection so that the same connection is used whichever AM instance is passivated/activated, or do you have an instance of the PS_TXN table in each AM's connection?
    Regards,
    Didier.
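    If you want a rough idea of whether passivation snapshots are piling up, you can count rows per collection id in the table that the delete statement above refers to (a sketch; run it against the schema that owns PS_TXN):

        -- How many passivation snapshots exist per collection id
        SELECT collid, COUNT(*) AS snapshots
        FROM   ps_txn
        GROUP  BY collid
        ORDER  BY snapshots DESC;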

  • Expired updates not being cleaned up

    Hi,
    I've been trying to clean up old expired updates on my SCCM 2012 SP1 server, and for whatever reason it seems that the update files never actually get removed.
    At first I tried the instructions at
    http://blogs.technet.com/b/configmgrteam/archive/2012/04/12/software-update-content-cleanup-in-system-center-2012-configuration-manager.aspx
    When I run the script they provide, it appears to go through all the updates but never actually deletes any of them. The script always seems to say it found an existing folder, and then later it says that it is excluding the same folder because it is active.
    Then I read that SP1 for SCCM 2012 is actually supposed to do the clean-up process automatically. But in that case, do I need to do anything like expire the updates manually, or does SCCM now do that? How can I see what is preventing either the manual script or the automatic clean-up process from actually removing the unneeded files and folders?
    And does anything need to be done with superseded updates as well?
    Also, I've always thought that when you use SCCM 2012 to do your updates you should never go to the WSUS console and do anything, but I read
    http://blog.coretech.dk/kea/house-of-cardsthe-configmgr-software-update-point-and-wsus/ and he is going into the WSUS console and doing a clean-up there as well.
    Thanks in advance,
    Nick

    Hi Xin,
    In the wsyncmgr.log file I see lots of log entries like this:
    Skipped update 2d8121b4-ba5c-4492-ba6e-1c70e9382406 - Update for Windows Vista (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.777+420><thread=4172 (0x104C)>
    Skipped update 24d18083-0417-4273-9a5e-1fc3cd37f1d4 - Update for Windows Embedded Standard 7 for x64-based Systems (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.791+420><thread=4172 (0x104C)>
    Skipped update 954f2ad2-369e-469e-97a0-3efd0a831111 - Update for Windows 8.1 (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.805+420><thread=4172 (0x104C)>
    Skipped update f81d2820-721a-431c-a262-4878a42f0115 - Update for Windows Vista for x64-based Systems (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.822+420><thread=4172 (0x104C)>
    Skipped update 7c82171f-025c-46af-849c-63764ba44382 - Update for Windows Server 2008 x64 Edition (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.836+420><thread=4172 (0x104C)>
    Skipped update 36c29163-b78a-410f-8bd0-7370b35a24f1 - Update for Windows Server 2012 (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.850+420><thread=4172 (0x104C)>
    Skipped update 6146260e-5c34-4483-962d-834250d84c79 - Update for Windows 7 (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.864+420><thread=4172 (0x104C)>
    Skipped update e6e7f357-7011-4bfd-8b14-8be61e43fa51 - Update for Windows Server 2003 (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.877+420><thread=4172 (0x104C)>
    Skipped update 2ed5e49f-3295-4b89-8a0b-9a38c0027d6d - Update for Windows Server 2008 R2 for Itanium-based Systems (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.890+420><thread=4172 (0x104C)>
    Skipped update 62778a2a-11d8-4cb1-9970-9c3f45202d04 - Update for Windows Server 2008 R2 x64 Edition (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.905+420><thread=4172 (0x104C)>
    And I also see the following entries:
    Sync time: 0d00h41m29s  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 01:51:51.388+420><thread=3440 (0xD70)>
    Wakeup by SCF change  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 02:05:42.535+420><thread=3440 (0xD70)>
    Wakeup for a polling cycle  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 03:05:49.050+420><thread=3440 (0xD70)>
    Deleting old expired updates...  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 03:05:49.130+420><thread=3440 (0xD70)>
    Deleted 17 expired updates  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 03:05:57.067+420><thread=3440 (0xD70)>
    Deleted 134 expired updates  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 03:06:06.487+420><thread=3440 (0xD70)>
    Deleted 168 expired updates  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 03:06:07.595+420><thread=3440 (0xD70)>
    Deleted 168 expired updates total  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 03:06:07.651+420><thread=3440 (0xD70)>
    Deleted 10 orphaned content folders in package P0100005 (Endpoint Protection Definition Updates)  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 03:06:07.875+420><thread=3440 (0xD70)>
    Deleted 5 orphaned content folders in package P0100007 (Automatic Deployment Rule for Exchange Servers)  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 03:06:07.953+420><thread=3440 (0xD70)>
    Thread terminated by service request.  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 03:06:51.039+420><thread=3440 (0xD70)>
    So it seems like it might be skipping updates, and then it says it deleted 168 expired updates, for example?
    But if I look at the drive where all the update packages are stored, it hasn't changed size.
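    If you do decide to run the WSUS-side clean-up discussed in the coretech post above, on Windows Server 2012 the UpdateServices module exposes it as a cmdlet. A sketch (run on the server hosting the software update point, after a backup, and ideally outside sync windows):

        # Run the WSUS server cleanup options from PowerShell
        Get-WsusServer |
            Invoke-WsusServerCleanup -DeclineExpiredUpdates -DeclineSupersededUpdates `
                -CleanupObsoleteUpdates -CleanupUnneededContentFiles -CompressUpdates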

  • Will installing more RAM improve performance, if the laptop has "available memory" not being used?

    My laptop has 2x 1GB of RAM.
    Right now, and usually, the Physical Memory section in Task Manager shows that there is "available memory" (currently about 500 MB worth), and "system cache" says 300 MB.
    There is also, right now, 2.5GB of page file in use.
    If I were to replace the 2GB of RAM with 4GB (of the same MHz), would there be a noticeable increase in performance?
    (Windows XP 32-bit OS)

    A matched set of DDR2 RAM will perform a lot faster than memory that is not matched; it's required for the double data rate to work. Hence 2+2 is the max you'd need/want in a 32-bit machine. Windows 7 was built for large memory footprints. XP was built to run in as little as 64MB but really needed a minimum of 128MB to support a network connection and printing. It was never able to truly go a whole lot further than the few machines built capable of 1GB. There was even a bug, later patched, that caused problems if 1GB or more was available. Just because XP can see the memory does not mean it actually uses it! The kernel by default is swapped in and out of RAM even on larger-memory systems. Why? It simply is not aware of the extra memory and does not change its behaviour from that of a 512MB machine.
    As for the 1333MHz RAM, its timings may be faster than those of the other memory used, so it moves data faster on the bus - fewer delays in various cycles, including RAS/CAS etc. - but that's all under the hood, so to speak.
    T520 Model 4239 Intel(R) Core(TM) i7-2860QM CPU @ 2.50GHz
    Intel Sandy Bridge & Nvidia NVS 4200M graphics Intel N 6300 Wi-Fi adapter
    Windows 7 Home Prem - 64bit w/8GB DDR3

  • 3 ?'s: 1. Message today warning of lack of memory when using Word (files in Documents), something about "idisc not working"; 2. Message a week ago: "Files not being backed up to Time Capsule"; 3. When using Mac Mail I'm prompted for a password but none work. TKS - J

    3 ?'s:
    1. Message today warning of lack of memory when using Word (files in Documents), something about "idisc not working"
    2. Message a week ago: "Files not being backed up to Time Capsule"
    3. When using Mac Mail I'm prompted for a password but none work
    Thanks - J

    Thanks Allan for your quick response to my amateur questions.
    Allan: I'm running Mac OS X Version 10.6.8. PS: the processor is a 2.4 GHz Intel Core i5.
    Memory: 4 GB 1067 MHz DDR3. ™ and © 1983-2011 Apple Inc.
    I just "Updated Software" as prompted.
    Thanks for helping me!    - John Garrett
    PS.
    Hardware Overview:
      Model Name:          MacBook Pro
      Model Identifier:          MacBookPro6,2
      Processor Name:          Intel Core i5
      Processor Speed:          2.4 GHz
      Number Of Processors:          1
      Total Number Of Cores:          2
      L2 Cache (per core):          256 KB
      L3 Cache:          3 MB
      Memory:          4 GB
      Processor Interconnect Speed:          4.8 GT/s
      Boot ROM Version:          MBP61.0057.B0C
      SMC Version (system):          1.58f17
      Serial Number (system):          W8*****AGU
      Hardware UUID:          *****
      Sudden Motion Sensor:
      State:          Enabled
    <Edited By Host>

  • CF 8 JVM memory is not being garbage collected.

    I am baffled by something I am seeing on my QA server. I have an app that we load tested, but when the test completed the JVM memory used was not released. I used CF Server Monitor to watch the memory usage, and sometimes it spiked to the max and either the app failed or I got timeout exceptions.
    This is the only app running on this server and the testing completed over an hour ago, but the memory has not been released yet.
    CF Admin settings:
    Maximum JVM Heap Size (MB): 512
    The CF Server JVM Setting arguments include: -server -Dsun.io.useCanonCaches=false -XX:MaxPermSize=192m -XX:+UseParallelGC
    I found a script that uses java.lang.Runtime and java.lang.management.ManagementFactory to dump a JVM memory usage profile. The latest dump follows:
    JVM Monitor - ColdFusion Server - Enterprise v8,0,1,195765
    JVM Memory Monitor - struct
    Heap Memory Usage - Committed 481 MB
    Heap Memory Usage - Initial 0.00 MB
    Heap Memory Usage - Max 493 MB
    Heap Memory Usage - Used 437 MB
    JVM - Free Memory 44.0 MB
    JVM - Max Memory 493 MB
    JVM - Total Memory 481 MB
    JVM - Used Memory 449 MB
    Memory Pool - Code Cache - Used 8.80 MB
    Memory Pool - PS Eden Space - Used 6.37 MB
    Memory Pool - PS Old Gen - Used 428 MB
    Memory Pool - PS Perm Gen - Used 52.4 MB
    Memory Pool - PS Survivor Space - Used 3.50 MB
    Non-Heap Memory Usage - Committed 62.8 MB
    Non-Heap Memory Usage - Initial 18.3 MB
    Non-Heap Memory Usage - Max 240 MB
    Non-Heap Memory Usage - Used 61.2 MB
    According to the CF Server Monitor, JVM memory usage builds up to 477 MB and then the app fails or times out.
    Session Scope memory usage: 0.27 KB
    Application Scope memory usage: 1.370 KB
    Server Scope memory usage: 3.12 KB
    Since the test ended, JVM memory usage has dropped back to only 438 MB?
    Besides CF Admin, nothing else is running on this CF server. I've read several other memory-related topics but none of them have helped.
    Can someone tell me why the memory isn't being released? How can I further troubleshoot the problem?
    Thx
    pwp

    > Adam Cameron wrote:
    > The maximum stable heap size I've managed to get is around 1.0-1.2GB, on a win32 system. On Solaris (running a 32-bit JVM), about 1.4GB. It *seems* like GC doesn't actually clear out RAM properly if more than that much RAM is being addressed.
    >
    > Yes, there is a well-known
    > http://kb.adobe.com/selfservice/viewContent.do?externalId=tn_19359&sliceId=1
    Not really what I was talking about. One might be able to get the CF instance to *start* with 1.8GB allocated to the heap, but it won't actually work. I've managed to get a server to idle for a reasonable length of time on 1.5GB, but as soon as the thing started to ramp up, it face-planted, once it started actually trying to *use* the higher end of the RAM allocated to it. At 1.2GB, it'll seem to run OK for a reasonable length of time, but eventually it starts leaking memory; at around 1GB, it was pretty stable.
    Hence my comment about it being *stable* at that allocation. Not that "it simply won't start if more than 1.8GB is allocated to it".
    My point was that your rule of thumb:
    maximum heap size (Xmx) = RAM (in MB) / (2 * number of servers using the JVM)
    is not a very good one. Plug 4GB RAM (so a small server) and one CF instance into that equation. Your rule suggests I should be allocating 2GB to the heap. Which - as you yourself pointed out - won't work.
    Adam
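    For reference, the heap size CF8 uses is set in jvm.config under the ColdFusion runtime's bin directory. A sketch of what the relevant line might look like with a roughly 1GB heap, the region described above as stable on win32 (the value is illustrative, not a recommendation; the other arguments are taken from the original post):

        # jvm.config - java.args is a single line; restart the CF service after editing
        java.args=-server -Xmx1024m -Dsun.io.useCanonCaches=false -XX:MaxPermSize=192m -XX:+UseParallelGC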

  • Applets and memory not being released by Java Plug-in

    Hi.
    I am experiencing strange memory-management behaviour in the Java Plug-in with Java applets. The Java Plug-in seems not to release the memory allocated for non-static member variables of the applet-derived class upon destroy() of the applet itself.
    I have built a simple "TestMemory" applet, which allocates a 55-megabyte byte array upon init(). The byte array is a non-static member of the applet-derived class. With the standard Java Plug-in configuration (64 MB of max JVM heap space), this applet executes correctly the first time, but it throws an OutOfMemoryException when pressing the "Reload / Refresh" browser button or when pressing the "Back" and then the "Forward" browser buttons. In my opinion, this is not expected behaviour. When the applet is destroyed, the non-static byte array member should be automatically invalidated and recollected. Shouldn't it?
    Here is the complete applet code:
    // ===================================================
    import java.awt.*;
    import javax.swing.*;

    public class TestMemory extends JApplet {

      private JLabel label = null;
      private byte[] testArray = null;

      // Construct the applet
      public TestMemory() {
      }

      // Initialize the applet
      public void init() {
        try {
          // Initialize the applet's GUI
          guiInit();
          // Instantiate a 55 MB array
          // WARNING: with the standard Java Plug-in configuration (i.e., 64 MB of
          // max JVM heap space) the following line of code runs fine the FIRST time the
          // applet is executed. Then, if I press the "Back" button on the web browser,
          // then press "Forward", an OutOfMemoryException is thrown. The same result
          // is obtained by pressing the "Reload / Refresh" browser button.
          // NOTE: the OutOfMemoryException is not thrown if I add "testArray = null;"
          // to the destroy() applet method.
          testArray = new byte[55 * 1024 * 1024];
          // Do something on the array...
          for (int i = 0; i < testArray.length; i++)
            testArray[i] = 1;
          System.out.println("Test Array Initialized!");
        }
        catch (Exception e) {
          e.printStackTrace();
        }
      }

      // Component initialization
      private void guiInit() throws Exception {
        setSize(new Dimension(400, 300));
        getContentPane().setLayout(new BorderLayout());
        label = new JLabel("Test Memory Applet");
        getContentPane().add(label, BorderLayout.CENTER);
      }

      // Start the applet
      public void start() {
        // Do nothing
      }

      // Stop the applet
      public void stop() {
        // Do nothing
      }

      // Destroy the applet
      public void destroy() {
        // If the line below is uncommented, the OutOfMemoryException is NOT thrown
        // testArray = null;
      }

      // Get Applet information
      public String getAppletInfo() {
        return "Test Memory Applet";
      }
    }
    // ===================================================
    Everything works fine if I set the byte array to "null" upon destroy(), but does this mean that I have to manually set to null all of the applet's member variables upon destroy()? I believe this should not be a requirement for non-static members...
    I am able to reproduce this problem on the following PC configurations:
    * Windows XP, both JRE v1.6.0 and JRE v1.5.0_11, both with MSIE and with Firefox
    * Linux (Sun Java Desktop), JRE v1.6.0, Mozilla browser
    * Mac OS X v10.4, JRE v1.5.0_06, Safari browser
    Your comments would be really appreciated.
    Thank you in advance for your feedback.
    Regards,
    Marco.

    Hi Marco,
    my guess as to why the JPI would keep references around, if it does keep them, is that it is probably an implementation side effect. A lot of things are cached in the name of performance, and it is easy to leave things lying around in your cache. Maybe the page with the associated images/applets is kept in the browser cache until the browser needs some memory, and if the browser memory manager is not co-operating with the JPI/JVM memory manager, the browser is not out of memory, and thus not releasing its caches, while the JVM may be out of memory. Thus the browser indirectly keeps a reference that it really does not need. This reference could be indirect, through some 'applet context' or whatever the browser uses to interact with the JPI; I don't really know any of these details, I'm just imagining what must/could be going on there. Browsers are amazingly complicated beasts.
    This behaviour that you are observing, whether its origin is something like I speculated or not, is not nice, but I would not expect it to be fixed even if you filed a bug report. I guess we are left with releasing all significant memory structures in destroy(). A simple way to code this is not to store anything in the member fields of the applet but in a separate class; then all one has to do is null that one reference from the applet to that class in the destroy() method, and everything will be released when necessary. This way it is not easy to forget to release things.
    Hey, here is a simple, imaginary way in which the browser could cause this problem:
    The browser, of course, needs a reference to the applet; call it m_Applet here. Presume the following helper function:
    Applet instantiateAndInit(Class appletClass) {
      Applet applet = appletClass.newInstance();
      applet.init();
      return applet;
    }
    When the browser sees the applet tag it instantiates and inits the new applet as follows:
    m_Applet = instantiateAndInit(appletClass);
    As you can readily see, the second time the instantiation occurs, m_Applet holds the reference to the old applet until after the new instance is created and initialized. This would not cause a memory leak, but it would mean that twice the memory needed by the applet is required to avoid OutOfMemory. I guess it is not fair to call this sort of thing a bug, but it is questionable design. In real life this is probably not this blatant, but it could happen. You could try, if you like, allocating less than 32 MB in your init(). If you then do not run out of memory, that is an indication that there are at most two instances of your applet around, and thus it could well be something like I've speculated here.
    br Kusti
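    A small sketch of the pattern suggested above (the class and field names are made up): keep all heavy state in one holder object and drop the single reference in destroy(), so everything the applet allocated becomes collectable at once.

        import javax.swing.JApplet;

        public class TestMemoryHolder extends JApplet {
          // All heavy, per-applet state lives in this one holder class
          private static class AppletState {
            byte[] testArray = new byte[55 * 1024 * 1024];
          }

          private AppletState state;

          public void init() {
            state = new AppletState();
          }

          public void destroy() {
            // One assignment releases everything the applet allocated
            state = null;
          }
        }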

  • Where clause on one query not being pushed down

    I am having a problem where I cannot get a certain "optional" parameter to be pushed down to the query. The parameter gets applied in memory after the result set is returned, but it is not pushed down to the DB. The function is as follows:
    declare function getFoo($key as xs:string, $optinalInput as xs:string*) as element(b:bar)* {
      for $foo in f:getFoo()
      where $key = $foo/key
      where not(exists($optinalInput)) or $foo/optional = $optinalInput  (: <- does not get pushed down to the query :)
      return $foo
    };
    If I make $optinalInput an xs:string instead of xs:string*, the optional parameter does get pushed down to the query. The problem is that for this optional parameter I could get anywhere from 0 to 50 values in the sequence. It seems that when the parameter is a sequence it doesn't get applied to the query. Is there any way around this?

    Mike,
    I understand the difference between * and ?, and I was one of the people working on the "string-length not getting pushed" problem, so I am very familiar with it. I tried the solution that you mentioned below and it still did not push the where clause to the query. I know I could achieve this with an ad-hoc query, but I wanted to do a pure XQuery implementation of this component because of the benefits it could have when interacting with other components in our ODSI project... such as SQL joining and potentially pushing additional where clauses down from components that call this component. The only way I did get this to kind of work is to do this:
    return
    (
      for $o in $optinalInput
      for $foo in f:getFoo()
      where $key = $foo/key
      where $o = $foo/optional
      return $foo
      ,
      for $count in 1
      where not(exists($optinalInput))
      for $foo in f:getFoo()
      where $key = $foo/key
      return $foo
    )
    By putting the optional parameter into a FOR clause above the table call, it guarantees that at least one value exists, and that's why the optional parameter gets pushed properly to the DB as a parameterized query. The problem is that even though a parameterized query gets pushed, it will call the SQL statement multiple times!... not good.
    Another solution that was suggested to me would be to create a separate value for each item in the sequence and treat each item as its own parameter. For example, if you know that the schema limits the sequence to 0..50 items, then you could make 50 items and 50 where clauses... probably not an optimal solution either, but it would achieve properly pushing the where clause to the DB.
    For now I am satisfied with this optional parameter not being pushed to the DB, because the performance is still good, but it would be nice if there were a maintainable pure XQuery solution to this problem.
    Thanks for the help, it's always appreciated!
    Mike
