Sessions not being cleaned up by JRun

My application uses the iPlanet Web Server and the JRun 3.02 application server. I am having a problem with active sessions not getting cleaned up by the app server. When the user goes through the application and finishes the process, I invalidate the session by calling session.invalidate(). I have also set a 30-minute timeout value in the JRun global.properties file to invalidate the session if the user starts but does not finish going through the application. However, the active session count in the JRun log doesn't seem to go down. After a few days, I ran out of sessions and the application hung. I keep a few objects on the session, including a pretty big 'pdfObject' that I use to create a PDF document on the fly.
Any idea why JRun is not able to clean up the sessions after the 30-minute timeout has passed? Does the fact that I have stored objects on the session prevent JRun from invalidating and cleaning up the session?
Thanks in advance.

Hi afikru
According to the Servlet specification, the session.invalidate() method should unbind any objects associated with the session. However, I'm not conversant with the JRun application server, so I can only provide some pointers here to help you out.
Firstly, try locating some documentation specific to your application server, which may throw some light on why this is happening.
Secondly, I'd suggest running the server within a profiling tool so that you can see what objects are being created, and how many of them. Try explicitly running the garbage collector and see if the session count comes down.
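Thirdly, as a quick check that unbinding really happens, you could make the large object itself a binding listener and log when the container lets go of it. A minimal Servlet 2.2-era sketch (JRun 3.x predates HttpSessionListener; the PdfObject class name here is hypothetical, standing in for your 'pdfObject'):

    import javax.servlet.http.HttpSessionBindingEvent;
    import javax.servlet.http.HttpSessionBindingListener;

    // Hypothetical wrapper for the large PDF-generation object kept in the session.
    public class PdfObject implements HttpSessionBindingListener {

        public void valueBound(HttpSessionBindingEvent event) {
            System.out.println("bound: " + event.getName() + " at " + new java.util.Date());
        }

        // Called when the attribute is removed, the session is invalidated,
        // or the session times out.
        public void valueUnbound(HttpSessionBindingEvent event) {
            System.out.println("unbound: " + event.getName() + " at " + new java.util.Date());
        }
    }

If valueUnbound() is never logged for sessions that should have timed out, the container is not expiring them; if it is logged but the active session count still grows, something else is holding session references. It may also help to call session.removeAttribute("pdfObject") before session.invalidate(), so the big object is eligible for garbage collection even if the container is slow to release the session object itself.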
Keep me posted on your progress.
Good Luck!
Eshwar R.
Developer Technical Support
Sun Microsystems

Similar Messages

  • [svn] 4533: Bug: BLZ-301 - Selector expressions are not being cleaned up properly on unsubscribe

    Revision: 4533
    Author: [email protected]
    Date: 2009-01-14 15:55:31 -0800 (Wed, 14 Jan 2009)
    Log Message:
    Bug: BLZ-301 - Selector expressions are not being cleaned up properly on unsubscribe
    QA: Yes
    Doc: No
    Ticket Links:
    http://bugs.adobe.com/jira/browse/BLZ-301
    Modified Paths:
    blazeds/trunk/modules/core/src/flex/messaging/services/messaging/SubscriptionManager.java

  • Portal session not being terminated; browser "unload" event

    This line of code is in the portallauncher.default and eventually causes the problem:
    EPCM.subscribeEvent("urn:com.sapportals.portal:browser", "unload", releaseProducerSessions);
    releaseProducerSessions eventually calls a portal component, WSRPSessionRelease, which is causing the problem.
    When we upgraded from EP 6.0 to NW 2004, users started receiving the NetWeaver login screen when they logged out and logged back in, in the same browser. We think this error occurs because NW 2004 implements Web Services for Remote Portlets (WSRP) functionality.
    We are using SiteMinder as a third party session management tool.
    What we found was that the SiteMinder session was being killed but the portal session was not. Therefore, when users logged back in they would see the generic NetWeaver login screen, and they could actually just hit "enter" and continue to the portal.
    On a successful logoff, users clicked the logoff button, the DSM terminator was called, killing the portal session; then a form was submitted redirecting the users to the SiteMinder logoff page, which logged the users off SiteMinder.
    When the logoff failed, we found that after the DSM terminator was called and before the page was redirected, a portal component (WSRPSessionRelease) was called, which in turn RECREATED the portal session. So the user never actually gets logged off from the portal.
    We found that the WSRPSessionRelease component is attached to a browser "unload" event when the portallauncher.default component is first loaded. This is the same component that is called when the user clicks the "X" to force-close the browser.
    The WSRPSessionRelease component is not called before the redirect to the SiteMinder logoff page every time. Sometimes it is called after the redirect, and we find that in that case the logoff is successful.
    The component is:
    irj/servlet/prt/portal/prtroot/com.sap.portal.wsrp.coreconsumer.WSRPSessionRelease

    Hello Michael,
    The 'log off' issue is a known issue with the Portal since EP 6.
    We faced a similar issue, and SAP suggests redirecting the 'log off' link to another non-SAP site, like your company intranet site.
    This will help the session to break.
    There are 1-2 SAP Notes on this as well.
    Hope this helps.
    Regards,
    Ritu

  • Obsolete jdb not being cleaned up

    Hi,
    Setup:
    * We are using Oracle NoSQL 1.2.123.
    * We have 3 replication groups with 3 replication nodes each.
    Problem:
    * 2 of the slaves (in 2 different replication groups) occupy much more space in JDB files (10 times more) than all the others. As these are slaves, and writes always go through the master, and all nodes in a replication group have the same data (eventually), I assume that this is stale data that has not been cleaned up by the BDB garbage collection (cleaner threads). Unfortunately the logs do not show anything new (since December last year) and the oldest JDB files are from February.
    Questions:
    * Any ideas what could have gone wrong?
    * What can I do to trigger the cleaners to clean up the old data? Is that safe to do in a production environment and without downtime?
    * Is it really safe to assume that the current data within a replication group is the same on all nodes?
    Thank you in advance
    Dimo
    PS. A thread dump shows 2 cleaner threads that do nothing.

    1) The simplest and fastest way to correct the replica node is to restore it from the master node. We will send you instructions for doing this later today. Here are directions for refreshing the data storage files (.jdb files) on a target node. NoSQL DB will automatically refresh the storage files from another node after we manually stop the target node, delete its storage files, and finally restart it, as described below. Thanks to Linda Lee for these directions.
    First, be sure to make a backup.
    Suppose you want to remove the storage files from rg1-rn3 and make it refresh its files from rg1-rn1. First, check where the storage files for the target Replication Node are located, using the show topology command in the Admin CLI. Start the Admin CLI this way:
        java -jar KVHOME/lib/kvstore.jar runadmin -host <host> -port <port>
    Then find the directory containing the target Replication Node's files:
        kv-> show topology -verbose
        store=mystore  numPartitions=100 sequence=108
          dc=[dc1] name=MyDC repFactor=3
          sn=[sn1]  dc=dc1 localhost:13100 capacity=1 RUNNING
            [rg1-rn1] RUNNING  c:/linda/work/smoke/KVRT1/dirB
                         single-op avg latency=0.0 ms   multi-op avg latency=0.67391676 ms
          sn=[sn2]  dc=dc1 localhost:13200 capacity=1 RUNNING
            [rg1-rn2] RUNNING  c:/linda/work/smoke/KVRT2/dirA
                      No performance info available
          sn=[sn3]  dc=dc1 localhost:13300 capacity=1 RUNNING
            [rg1-rn3] RUNNING  c:/linda/work/smoke/KVRT3/dirA
                         single-op avg latency=0.0 ms   multi-op avg latency=0.53694165 ms
          shard=[rg1] num partitions=100
            [rg1-rn1] sn=sn1 haPort=localhost:13111
            [rg1-rn2] sn=sn2 haPort=localhost:13210
            [rg1-rn3] sn=sn3 haPort=localhost:13310
            partitions=1-100
    In this example, rg1-rn3's storage is located in c:/linda/work/smoke/KVRT3/dirA.
    Stop the target service using the stop-service command:
        kv-> plan stop-service -service rg1-rn3 -wait
    In another command shell, remove the files for the target Replication Node:
        rm c:/linda/work/smoke/KVRT3/dirA/rg1-rn3/env/*.jdb
    In the Admin CLI, restart the service:
        kv-> plan start-service -service rg1-rn3 -wait
    The service will restart and will populate its missing files from one of the other two nodes in the shard. You can use the "verify" or the "show topology" command to check on the status of the store.
    --mark

  • Aq$_tab_p, aq$_tab_d filling and not being cleaned up

    Hi all.
    I have a simple 1 way streams replication setup (two node) based on examples. Replication seems to be working.
    However, the AQ$_TAB_P and AQ$_TAB_D tables (on the capture side only) continue to fill, as do the number of messages in the queue and the spilled LCRs in v$buffered_queues. Nothing should be spilling, since the only things I'm sending are 1-row updates to a heartbeat table and the Streams pools are a few hundred MB.
    I have tried aq_tm_processes unset, as well as set to 2, and the tables continue to grow.
    The MSG_STATE values in aq$tab are either DEFERRED or DEFERRED SPILLED. As mentioned, all of the heartbeat updates (as well as small test transactions) replicate just fine, so the transactions are being captured, propagated and applied.
    I am running 10.2.0.2 on Solaris 10 with no Streams-related one-off patches to speak of, for reference. My propagation did not specify queue_to_queue.
    I'm wondering if there is a step I may have missed, or what else I may be able to look at to ensure that these tables are cleaned up?
    Thanks.
    Edited by: user599560 on Oct 28, 2008 12:39 PM

    Hello
    I forgot to mention that you should check v$propagation_receiver on the destination and v$propagation_sender on the source. v$propagation_receiver on the source will not have records unless you are using bi-directional streams.
    The aq_tm_processes parameter should be set on all the databases that use Streams. This parameter is responsible for spawning the queue monitor slaves, which actually perform the spilling and remove the spilled messages that are no longer needed.
    It is suggested to remove this parameter from the spfile; however, SHOW PARAMETER will still show it as 0, so you should check v$spparameter to confirm whether it was actually removed. If you remove it from the spfile, the required number of slaves should be spawned automatically by the autotune feature in 10g. However, I would always suggest setting this parameter to 1 so that one slave process is always spawned even if Streams is not in use, and SHOW PARAMETER will then show it as 1.
    If you find the slaves are not spawned, you should check your alert.log to see whether any errors are reported. You also need to check that the queue monitor coordinator process (qmnc) is spawned. If qmnc itself is not spawned (by default it should always be spawned), then no q00 slaves will be spawned. If you remove the parameter from the spfile and you see that no q00 slaves are spawned even though you are using Streams (capture, propagation or apply), you should log an SR with Oracle Support to investigate. You can check the qmnc and q00 slaves at the OS level using the following command:
    ps -ef | grep $ORACLE_SID | grep [q][m0][n0]
    Please mark this thread as answered if all your questions are answered well else let me know.
    Thanks,
    Rijesh

  • Old recovery points do not seem to be getting cleaned up

    I'm running a Windows Server 2012 server with DPM 2012 SP1, acting as a secondary DPM server for a couple of primary servers. However, over the last 5-6 weeks it has begun to behave very strangely. Suddenly, I get a lot of "Recovery Point volume threshold exceeded", "DPM does not have sufficient storage space available on the recovery point volume to create new recovery points", and "The used disk space on the computer running DPM for the recovery point volume of SQL Server 2008 database XXXXX\DB(servername.domain.com) has exceeded the threshold value of 90% (DPM accounts 600 MB for internal usage in addition to free space available). If you do not allocate more disk space, synchronization jobs may fail due to insufficient disk space. (ID 3169)" alerts.
    All of these alerts seem to have a common source, disk space of course, but there is currently 8 TB free in the DPM disk pool. However, I have a feeling that all of this started when we added another DPM disk to the storage pool. Could it be that DPM no longer cleans up expired disk data correctly?
    /Amir

    Hi,
    If the pruneshadowcopiesDpm2010.ps1 script is not completing, hangs, or crashes, then that needs to be addressed, as that will definitely cause storage usage problems.
    In the meantime you can use this PowerShell script to delete old recovery points to help free disk space. It will prompt you to select a datasource, then a date, and delete all recovery points made before that time.
    #Author : Ruud Baars
    #Date : 11/09/2008
    #Edited : 11/15/2012 By: Wilson S.
    #edited : 11/27/2012 By: Mike J.
    # NOTE: Update script to only remove recovery points on Disk. Recovery points removed will be from the oldest one up to the date
    # entered by the user while the script is running
    #deletes all recovery points before 'now' on selected data source.
    $version="V4.7"
    $ErrorActionPreference = "silentlycontinue"
    add-pssnapin sqlservercmdletsnapin100
    Add-PSSnapin -Name Microsoft.DataProtectionManager.PowerShell
    #display RP's to delete and ask to continue.
    #Check & wait data source to be idle else removal may fail (in Mojito filter on 'intent' to see the error)
    #Fixed prune default and logfile name and some logging lines (concatenate question + answer)
    #Check dependent recovery points do not pass BEFORE date and adjust selection to not select those ($reselect)
    #--- Fixed reselect logic to keep adjusting reselect for as long as older than BEFORE date
    #--- Fixed post removal rechecking logic to match what is done so far (was still geared to old logic)
    #--- Modified to remove making RP and ask for pruning, fixed logic for removal rechecking logic
    $MB=1024*1024
    $logfile="DPMdeleteRP.LOG"
    $wait=10 #seconds
    $confirmpreference = "None"
    function Show_help {
      cls
      $l="=" * 79
      write-host $l -foregroundcolor magenta
      write-host -nonewline "`t<<<" -foregroundcolor white
      write-host -nonewline " DANGEROUS :: MAY DELETE MANY RECOVERY POINTS " -foregroundcolor red
      write-host ">>>" -foregroundcolor white
      write-host $l -foregroundcolor magenta
      write-host "Version: $version" -foregroundcolor cyan
      write-host "A: User Selects data source to remove recovery points for" -foregroundcolor green
      write-host "B: User enters date / time (using 24hr clock) to Delete recovery points" -foregroundcolor green
      write-host "C: User Confirms deletion after list of recovery points to be deleted is displayed." -foregroundcolor green
      write-host "Appending to log file $logfile`n" -foregroundcolor white
      write-host "User Accepts all responsibilities by entering a data source or just pressing [Enter] " -foregroundcolor white -backgroundcolor blue
      "**********************************" >> $logfile
      "Version $version" >> $logfile
      get-date >> $logfile
    }
    show_help
    $DPMservername=&"hostname"
    "Selected DPM server = $DPMservername" >> $logfile
    write-host "`nConnecting to DPM server, retrieving data source list...`n" -foregroundcolor green
    $pglist = @(Get-ProtectionGroup $DPMservername) # WILSON - Created PGlist as array in case we have a single protection group.
    $ds=@()
    $tapes=$null
    $count = 0
    $dscount = 0
    foreach ($count in 0..($pglist.count - 1)) {
      # write-host $pglist[$count].friendlyname
      $ds += @(get-datasource $pglist[$count]) # WILSON - Created DS as array in case we have a single protection group.
      # write-host $ds
      # write-host $count -foreground yellow
    }
    if ( Get-Datasource $DPMservername -inactive) {$ds += Get-Datasource $DPMservername -inactive}
    $i=0
    write-host "Index Protection Group Computer Path"
    write-host "---------------------------------------------------------------------------------"
    foreach ($l in $ds) {
      "[{0,3}] {1,-20} {2,-20} {3}" -f $i, $l.ProtectionGroupName, $l.psinfo.netbiosname, $l.logicalpath
      $i++
    }
    $DSname=read-host "`nEnter a data source index from list above - Note co-located datasources on same replica will be effected"
    if (!$DSname) {
      write-host "No datasource selected `n" -foregroundcolor yellow
      "Aborted on Datasource name" >> $logfile
      exit 0
    }
    $DSselected=$ds[$DSname]
    if (!$DSselected) {
      write-host "No datasource selected `n" -foregroundcolor yellow
      "Aborted on Datasource name" >> $logfile
      exit 0
    }
    $rp=get-recoverypoint $DS[$DSname]
    $rp
    # $DoTape=read-host "`nDo you want to remove when recovery points are on tape ? [y/N]"
    # "Remove tape recovery point = $DoTape" >> $logfile
    write-host "`nCollecting recoverypoint information for datasource $($DSselected.name)" -foregroundcolor green
    if ($DSselected.ShadowCopyUsedspace -gt 0) {
      while ($DSselected.TotalRecoveryPoints -eq 0) {
        # "still 0"
      }
      # this is on disk
      $oldShadowUsage=[math]::round($DSselected.ShadowCopyUsedspace/$MB,1)
      $line=("Total recoverypoint usage {0} MB on DISK in {1} recovery points" -f $oldShadowUsage ,$DSselected.TotalRecoveryPoints )
      $line >> $logfile
      write-host $line`n -foregroundcolor white
    }
    # this is on tape
    # $trptot=0
    # $tp= Get-RecoveryPoint($dsselected) | where {($_.Datalocation -eq "Media")}
    # foreach ($trp in $tp) {$trptot += $trp.size }
    # if ($trptot -gt 0 )
    #   $line=("Total recoverypoint usage {0} MB on TAPE in {1} recovery points" -f ($trptot/$MB) ,$DSselected.TotalRecoveryPoints )
    #   $line >> $logfile
    #   write-host $line`n -foregroundcolor white
    [datetime]$afterdate="1/1/1980"
    # $answer=read-host "`nDo you want to delete recovery points from the beginning [Y/n]"
    # if ($answer -eq "n" )
    #   [datetime]$afterdate=read-host "Delete recovery points AFTER date [MM/DD/YYYY hh:mm]"
    [datetime]$enddate=read-host "Delete ALL Disk based recovery points BEFORE and Including date/time entered [MM/DD/YYYY hh:mm]"
    "Deleting recovery points until $enddate" >> $logfile
    write-host "Deleting recovery points until $enddate" -foregroundcolor yellow
    $rp=get-recoverypoint $DSselected
    if ($DoTape -ne "y" ) {
      $RPselected=$rp | where {($_.representedpointintime -le $enddate) -and ($_.Isincremental -eq $FALSE) -and ($_.DataLocation -eq "Disk")}
    } else {
      $RPselected=$rp | where {($_.representedpointintime -le $enddate) -and ($_.Isincremental -eq $FALSE)}
    }
    if (!$RPselected) {
      write-host "No recovery points found!" -foregroundcolor yellow
      "No recovery points found, aborting...!" >> $logfile
      exit 0
    }
    $reselect = $enddate
    $adjustflag = $false
    foreach ($onerp in $RPselected) {
      $rtime=[string]$onerp.representedpointintime
      $rsize=[math]::round(($onerp.size/$MB),1)
      $line= "Found {0}, RP size= {1} MB (If 0 MB, co-located datasource cannot be computed), Incremental={2} " -f $rtime, $rsize, $onerp.Isincremental
      $line >> $logfile
      write-host "$line" -foregroundcolor yellow
      # Get dependent rp's for data source
      $allRPtbd=$DSselected.GetAllRecoveryPointsToBeDeleted($onerp)
      foreach ($oneDrp in $allRPtbd) {
        if ($oneDrp.IsIncremental -eq $FALSE) {continue}
        $rtime=[string]$oneDrp.representedpointintime
        $rsize=[math]::round(($oneDrp.size/$MB),1)
        $line= ("`t...is dependency for {0} size {1} `tIncremental={2}" -f $rtime, $rsize, $oneDrp.Isincremental)
        $line >> $logfile
        if ($oneDrp.representedpointintime -ge $enddate) {
          # stick to latest full ($oneDrp = dependents, $onerp = full)
          $adjustflag = $true
          $reselect = $onerp.representedpointintime
          "<< Dependents newer than BEFORE date >>>" >> $logfile
          Write-Host -nonewline "`t <<< later than BEFORE date >>>" -foregroundcolor white -backgroundcolor red
          write-host "$line" -foregroundcolor yellow
        } else {
          # Ok, include current latest incremental
          $reselect = $oneDrp.representedpointintime
          write-host "$line" -foregroundcolor yellow
        }
      }
      if ($reselect -lt $oneDrp.representedpointintime) {
        # we adjusted further backward than latest incremental within selection
        $reselect = $rtime
        $line = "Adjusted BEFORE date to be $reselect to include dependents to $enddate"
        $line >> $logfile
        Write-Host $line -foregroundcolor white -backgroundcolor blue
      }
    }
    $line="`n<<< SECOND TO LAST CHANCE TO ABORT - ONE MORE PROMPT TO CONFIRM. >>>"
    write-host $line -foregroundcolor white -backgroundcolor blue
    $line >> $logfile
    $line="Above recovery points within adjusted range will be permanently deleted !!!"
    write-host $line -foregroundcolor red
    $line >> $logfile
    $line="These RP's include dependent recovery points and may contain co-located datasource(s)"
    write-host $line -foregroundcolor red
    $line >> $logfile
    $line="Data source activity = " + $DSselected.Activity
    $line >> $logfile
    write-host $line -foregroundcolor white
    $DoDelete=""
    while (($DoDelete -ne "N" ) -and ($DoDelete -ne "Y")) {
      $line="Continue with deletion (must answer) Y/N? "
      write-host $line -foregroundcolor white
      $DoDelete=read-host
      $line = $line + $DoDelete
      $line >> $logfile
    }
    if (!($DSselected.Activity -eq "Idle")) {
      $line="Data source not idle, do you want to wait Y/N ? "
      write-host $line -foregroundcolor yellow
      $Y=read-host
      $line = $line + $Y
      $line >> $logfile
      if ($Y -ieq "Y") {
        Write-Host "Waiting for data source to become idle..." -foregroundcolor green
        while ($DSselected.Activity -ne "Idle") {
          ("Waiting {0} seconds" -f $wait) >> $logfile
          Write-Host -NoNewline "..." -ForegroundColor blue
          start-sleep -s $wait
        }
      }
    }
    if ($DoDelete -eq "Y") {
      foreach ($onerp in $RPselected) {
        # reselect is adjusted to safe range relative to what was requested
        # --- if adjustflag not set then all up to including, else only older because we must keep the full
        if ((($onerp.representedpointintime -le $reselect) -and ($adjustflag -eq $false)) -or ($onerp.representedpointintime -lt $reselect)) {
          $rtime=[string]$onerp.representedpointintime
          $line =("---`nDeleting recoverypoint -> " + $rtime)
          write-host `n$line -foregroundcolor red
          $line >> $logfile
          if (($onerp) -and ($onerp.IsIncremental -eq $FALSE)) { remove-recoverypoint -RecoveryPoint $onerp -confirm:$True } # >> $logfile
        }
      }
    }
    "All Done!" >> $logfile
    write-host "`nAll Done!`n`n" -foregroundcolor white
    $line="Do you want to View DPMdeleteRP.LOG file Y/N ? "
    write-host $line -foregroundcolor white
    $Y=read-host
    $line = $line + $Y
    $line >> $logfile
    if ($Y -ieq "Y") {
      Notepad DPMdeleteRP.LOG
    }
    Regards, Mike J. [MSFT]
    This posting is provided "AS IS" with no warranties, and confers no rights.

  • Passivation table ps_txn not being cleaned up

    Adf 11gR1PS1
    Hello
    I have a small application using one unbounded task flow and one bounded task flow.
    Each task flow uses a different application module.
    The unbounded task flow calls the bounded task flow in a modeless inline-popup via a button.
    When running the application and clicking on the button, the bounded task flow is called and a new row is inserted into the ps_txn table.
    However, when the inline-popup is closed via the "x" on the popup window, the row is not removed from the ps_txn table.
    If the button is clicked again, a new row is added to the ps_txn table.
    Is this the normal behaviour? Looking at section 40.5.3 in the Dev Guide, it would seem that the record should be deleted or reused.
    I understand that there are scripts for cleaning up the table, but shouldn't it be automatic?
    What am I missing ?
    Regards
    Paul

    Hi Paul,
    Do you use failover (jbo.dofailover)?
    If not, I would expect records to be deleted from PS_TXN at activation.
    I tested with the ADF BC Component Browser, selecting the Save/Restore Transaction State menus, with jbo.debugoutput=console:
    [277] (0) OraclePersistManager.deleteAll(2126) **deleteAll** collid=17461
    [278] (0) OraclePersistManager.deleteAll(2140)    stmt: delete "PS_TXN" where collid=:1
    [279] (0) OraclePersistManager.commit(217) **commit** #pending ops=1
    But I have also already noticed orphaned records in the table.
    Do you use jbo.internal_connection so that the same connection is used whatever AM instance is passivated/activated, or do you have an instance of the PS_TXN table in each AM's connection?
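    If you want to experiment outside the Component Browser, a rough harness along these lines can help. Treat it as a hedged sketch: the AM name, the configuration name, and passing jbo.dofailover as a system property are all assumptions about your setup (your bc4j.xcfg may override the property).

        import oracle.jbo.ApplicationModule;
        import oracle.jbo.client.Configuration;

        public class PsTxnCheck {
            public static void main(String[] args) {
                // Assumption: with failover off, a PS_TXN snapshot should only be
                // written on explicit passivation, not on every request.
                System.setProperty("jbo.dofailover", "false");
                ApplicationModule am =
                    Configuration.createRootApplicationModule("model.AppModule", "AppModuleLocal");
                // ... exercise the task flow's view objects here, then query PS_TXN ...
                // Releasing with remove=true discards the AM state instead of
                // keeping it for reuse by the pool.
                Configuration.releaseRootApplicationModule(am, true);
            }
        }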
    Regards,
    Didier.

  • Memory optimized DLLs not being cleaned up

    Hi,
    From BOL, my understanding is that DBAs do not need to administer the DLLs created for memory-optimized tables or natively compiled stored procedures, as they are recompiled automatically when the SQL Server service starts and are removed when no longer needed.
    But I am witnessing that, even after a memory-optimized table has been dropped and the service restarted, the DLLs still exist in the file system AND are still loaded into SQL memory and attached to the process. This can be witnessed by the fact that they are still visible in sys.dm_os_loaded_modules, and are locked in the file system if you try to delete them whilst the SQL Service is running.
    Is this a bug? Or are they cleaned up at a later date? If at a later date, what triggers the clean-up, if it isn't an instance restart?
    Pete

    Most likely the DLLs are still needed during DB recovery, as there are still remnants of the tables in the checkpoint files. A couple of cycles of checkpoints and log truncation (e.g., by doing a log backup) need to happen to clean up the old checkpoint files and remove the remnants of the dropped tables from disk.
    The following blog post details all the state transitions a checkpoint file goes through:
    http://blogs.technet.com/b/dataplatforminsider/archive/2014/01/23/state-transition-of-checkpoint-files-in-databases-with-memory-optimized-tables.aspx

  • Concurrent sessions not being released in CRS2008

    We have a servlet trying to connect to Crystal Reports Server 2008 using RAS Java API to open unmanaged reports.
    We have 5 CALs, and the connection type of the Guest user is configured to use Concurrent User in the Crystal Reports Server. We run the reports from our web application with the same user logged on. We were able to run about 2-3 reports successfully. After the total sessions reached 5, it fails at the very beginning of ReportAppSession.initialize(). The logged error message in the Crystal Reports Server is:
    ErrorLog 2010  1  7 16:29:25.187 5164 3432 (:46) (..\cdtsagent.cpp:3303): CDTSagent::doOneRequest reqId=154:CSResultException thrown.   ErrorSrc:"Analysis Server" FileName:"..\cdtsagent2.cpp" LineNum:448 ErrorCode:-2147217397 ErrorMsg:"" DetailedErrorMsg:""     ErrorSrc:"COM" FileName:"..\cdtsagent2.cpp" LineNum:443 ErrorCode:-2147210992 ErrorMsg:"All of your system's 5 Concurrent Access Licenses are in use at this time or your system's license key has expired. Try again later or contact your administrator to obtain additional licenses. (FWB 00014)" DetailedErrorMsg:""
    We are using Tomcat and have tried the following configuration in the web.xml of infoviewapp and cmcapp, but with no luck:
    (1) Locate the pattern "logontoken.enabled" and change the value from the existing 'true' to 'false' :
    <context-param>
    <param-name>logontoken.enabled</param-name>
    <param-value>false</param-value>
    </context-param>
    (2) Make sure these lines are uncommented:
    <listener>
    <listener-class>com.businessobjects.sdk.ceutils.SessionCleanupListener</listener-class>
    </listener>
    In a past thread, it was mentioned that we might try various SDK code offerings to manage sessions. Could you provide some sample code, using the CRS SDK or CMS configuration, to release the sessions?
    Here is the code:
    try {
        ReportAppSession reportAppSession = new ReportAppSession();
        reportAppSession.createService("com.crystaldecisions.sdk.occa.report.application.ReportClientDocument");
        reportAppSession.setReportAppServer("myCRServer");
        // This is where the exception is thrown.
        reportAppSession.initialize();
        ReportClientDocument lo_ReportClientDoc = new ReportClientDocument();
        lo_ReportClientDoc.setReportAppServer(reportAppSession.getReportAppServer());
        lo_ReportClientDoc.open(asReportName, OpenReportOptions._openAsReadOnly);
        ReportServerControl control = new ReportServerControl();
        control.setReportSource(lo_ReportClientDoc.getReportSource());
    } catch (Exception exc) {
        System.out.println(exc);
    }
    Edited by: Bonita Diemoz on Jan 12, 2010 8:17 PM

    Recommendation is to publish the report to the Server, and use managed reporting.
    You'd have more control over the EnterpriseSession that way.
    Unmanaged RAS does use the Guest account for logon, and you don't have any control over the EnterpriseSession at all.
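    A minimal sketch of the managed logon/logoff cycle (standard Enterprise SDK calls; the CMS name, credentials and authentication type below are placeholders):

        import com.crystaldecisions.sdk.framework.CrystalEnterprise;
        import com.crystaldecisions.sdk.framework.IEnterpriseSession;

        IEnterpriseSession enterpriseSession = null;
        try {
            // A named logon you control, instead of the unmanaged Guest logon:
            enterpriseSession = CrystalEnterprise.getSessionMgr().logon(
                    "reportUser", "password", "myCRServer", "secEnterprise");
            // ... open the managed report through this session here ...
        } finally {
            if (enterpriseSession != null) {
                enterpriseSession.logoff(); // frees the license immediately
            }
        }

    With logoff() called deterministically, the CAL is released as soon as the request finishes instead of waiting for the session to time out.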
    It would be better to upgrade the CAL licensing, if you require additional users.
    Sincerely,
    Ted Ueda

  • Expired updates not being cleaned up

    Hi,
    I've been trying to clean up old expired updates on my SCCM 2012 SP1 server, and for whatever reason it seems that the update files are never actually getting removed.
    At first I tried the instructions at
    http://blogs.technet.com/b/configmgrteam/archive/2012/04/12/software-update-content-cleanup-in-system-center-2012-configuration-manager.aspx
    When I run the script they provide, it appears to go through all the updates but never actually deletes any of them. The script always seems to say that it found an existing folder, and then later it says that it is excluding the same folder because it is active.
    Then I read that SP1 for SCCM 2012 is actually supposed to do the clean-up process automatically. But in this case do I need to do anything like expire the updates manually, or does SCCM now do that? How can I see what is preventing either the manual script or the automatic clean-up process from actually removing the unneeded files and folders?
    And does anything need to be done with superseded updates as well?
    Also, I've always thought that when you use SCCM 2012 to do your updates you should never go into the WSUS console and do anything, but I read
    http://blog.coretech.dk/kea/house-of-cardsthe-configmgr-software-update-point-and-wsus/ and he is going into the WSUS console and doing a clean-up there as well.
    Thanks in advance,
    Nick

    Hi Xin,
    In the wsyncmgr.log file I see lots of log entries like this:
    Skipped update 2d8121b4-ba5c-4492-ba6e-1c70e9382406 - Update for Windows Vista (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.777+420><thread=4172 (0x104C)>
    Skipped update 24d18083-0417-4273-9a5e-1fc3cd37f1d4 - Update for Windows Embedded Standard 7 for x64-based Systems (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.791+420><thread=4172 (0x104C)>
    Skipped update 954f2ad2-369e-469e-97a0-3efd0a831111 - Update for Windows 8.1 (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.805+420><thread=4172 (0x104C)>
    Skipped update f81d2820-721a-431c-a262-4878a42f0115 - Update for Windows Vista for x64-based Systems (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.822+420><thread=4172 (0x104C)>
    Skipped update 7c82171f-025c-46af-849c-63764ba44382 - Update for Windows Server 2008 x64 Edition (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.836+420><thread=4172 (0x104C)>
    Skipped update 36c29163-b78a-410f-8bd0-7370b35a24f1 - Update for Windows Server 2012 (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.850+420><thread=4172 (0x104C)>
    Skipped update 6146260e-5c34-4483-962d-834250d84c79 - Update for Windows 7 (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.864+420><thread=4172 (0x104C)>
    Skipped update e6e7f357-7011-4bfd-8b14-8be61e43fa51 - Update for Windows Server 2003 (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.877+420><thread=4172 (0x104C)>
    Skipped update 2ed5e49f-3295-4b89-8a0b-9a38c0027d6d - Update for Windows Server 2008 R2 for Itanium-based Systems (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.890+420><thread=4172 (0x104C)>
    Skipped update 62778a2a-11d8-4cb1-9970-9c3f45202d04 - Update for Windows Server 2008 R2 x64 Edition (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.905+420><thread=4172 (0x104C)>
    And I also see the following entries:
    Sync time: 0d00h41m29s  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 01:51:51.388+420><thread=3440 (0xD70)>
    Wakeup by SCF change  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 02:05:42.535+420><thread=3440 (0xD70)>
    Wakeup for a polling cycle  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 03:05:49.050+420><thread=3440 (0xD70)>
    Deleting old expired updates...  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 03:05:49.130+420><thread=3440 (0xD70)>
    Deleted 17 expired updates  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 03:05:57.067+420><thread=3440 (0xD70)>
    Deleted 134 expired updates  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 03:06:06.487+420><thread=3440 (0xD70)>
    Deleted 168 expired updates  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 03:06:07.595+420><thread=3440 (0xD70)>
    Deleted 168 expired updates total  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 03:06:07.651+420><thread=3440 (0xD70)>
    Deleted 10 orphaned content folders in package P0100005 (Endpoint Protection Definition Updates)  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 03:06:07.875+420><thread=3440 (0xD70)>
    Deleted 5 orphaned content folders in package P0100007 (Automatic Deployment Rule for Exchange Servers)  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 03:06:07.953+420><thread=3440 (0xD70)>
    Thread terminated by service request.  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 03:06:51.039+420><thread=3440 (0xD70)>
    So it seems like it might be skipping updates? And then it says it deleted 168 expired updates, for example?
    But if I look at the drive where all the update packages are stored, it hasn't changed size.

  • JRun session id being re-used

    We are using JRun 4.0 on our server in conjunction with MS IIS 6.0 to support dynamic JSP pages and Java servlets. We are using URL encoding to support session handling. In the jrun-web.xml file we have the following parameters to disable the use of cookies for session handling:
    <session-config>
    <cookie-config>
    <active>false</active>
    </cookie-config>
    </session-config>
    With these parameters defined in the jrun-web.xml file, and with the use of the response.encodeURL() function, we see that JRun automatically appends a jsessionid=xxxxxxx parameter to the URLs.
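    For reference, the rewriting only covers links that actually pass through the API. A minimal sketch of the pattern we use in our servlets (the path and link text are just examples):

        // Inside doGet(HttpServletRequest request, HttpServletResponse response):
        // every emitted link must go through encodeURL() so the container can
        // append the jsessionid when cookies are disabled.
        String url = response.encodeURL("/store/catalog.jsp");
        out.println("<a href=\"" + url + "\">Catalog</a>");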
    This has been working well for us for a long time. Recently we noticed that these jsessionid values are being re-used by JRun for different session instances. This means that if a user logs in to the website and is assigned a sessionid, say 101011, and after a while logs out, another user who logs in some time later may be assigned the same sessionid value (101011) for his session. If the first user has bookmarked a page on the website, the bookmark will include the sessionid parameter (value 101011), and if the first user accesses the website from the bookmark while the second user is logged in, the first user will get access to the second user's session, which is very insecure.
    This phenomenon is referred to as session fixation and can be used by a hijacker to get access to any other user's session. Is there a way to prevent JRun from re-using these session id values, or to increase the time period after which JRun re-uses these session ids?
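    The application-level mitigation we are evaluating is to force a fresh session id at login, so that a bookmarked jsessionid can never be attached to a new authenticated session. A sketch, where authenticate() is a hypothetical stand-in for our login check:

        // In the login servlet: retire any session that arrived with the
        // request, then let the container issue a brand-new session id.
        if (authenticate(userName, password)) {
            HttpSession old = request.getSession(false); // session bound to the old jsessionid, if any
            if (old != null) {
                old.invalidate();
            }
            HttpSession fresh = request.getSession(true); // new session, new id
            fresh.setAttribute("user", userName);
        }

    Whether this helps depends on JRun not handing the invalidated id straight back to the next caller, which is exactly the behaviour we are unsure about.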

    Dax Trajero wrote:
    ... how do I prevent a user who's just ordered, from returning to the site and re-using the same session ref ?
    Deny a returning paying(!) customer his session? Yours might be the only shop in town doing that.
    If your session housekeeping is any good, then the session variables pertaining to shopping cart, payment and delivery would have been cleared or re-initialized. Often, starting a new session means logging in again. There are a number of reasons why that can be undesirable.
    I did an e-commerce course for a year, and learned some strange things. It is in fact to your advantage that a returning customer should keep his session, even after ordering.
    For example, it is well known that the chances of a returning customer placing a new order are much higher when he is already logged in than when he has to log in afresh. You could test that hypothesis yourself. Psychologists have also found that e-shoppers often return to the shop to gloat at the goodies they've just ordered. You wouldn't want to deny them their gloating session, would you?

  • The "Roman" font is not being recognized in Firefox 4.0. As such, I cannot read any previously posted topics or post any new topics on websites using this font.

    The "Roman" font is not being recognized in Firefox 4.0. As such, I cannot read any previously posted topics or post any new topics on websites using this font.

    I have had a similar problem with my system. I just recently (within a week of this post) built a brand new desktop. I installed Windows 7 64-bit Home and had a clean install, no problems. Using IE, I downloaded an anti-virus program and then, because it was the latest version, downloaded and installed Firefox 4.0. As I began to search the internet for other programs to install, after about 10-15 minutes my computer crashed: blank screen (yet the monitor was still receiving a signal from the computer) and completely frozen (I couldn't even toggle the caps and num lock on the keyboard). I thought I had perhaps forgotten to reboot after an update, so I did a manual reboot and it started up fine.
    Whenever I got on the internet (still using Firefox) it would crash after anywhere between 5 and 15 minutes. Since I've had good experiences with FF in the past, I thought it must be either the drivers or a hardware problem. So in between crashes I updated all the drivers. Still had the same problem. I took the computer to a friend who knows more about computers than I do, and he made sure all the drivers were updated; same problem. We thought that it might be a hardware problem (bad video card, chipset, overheating issues, etc.), but after my friend played around with my computer for a day he found that when he didn't start FF at all it worked fine, even after watching a movie or going through a playlist on YouTube.
    At the time of this posting I'm going to try to uninstall FF 4.0 and download and install FF 3.6.16, which is currently on my laptop and works like a dream. Hopefully that will do the trick, because I love using FF and would hate to have to switch to another browser. Hopefully Mozilla will work out the kinks with FF 4 so I can continue to use it.
    I apologize for the lengthy post. Any feedback would be appreciated, but is not necessary. I will try to post back after I try FF 3.6.16.

  • iPod not being recognised, but with a twist

    Hi,
    My iPod nano 5th gen is not being recognised on my computer. It is now sometimes being recognised by the computer itself, but not by iTunes. However, when I plug my iPod into my partner's laptop, it's fine. More to the point, when they plug theirs into my laptop, it recognises them. To me, this suggests that both my laptop and my iPod, individually at least, are working.
    I have been through the iPod support page and gone through all the steps, to no avail.
    Final details: I'm on Windows 8, and my iTunes is totally up to date. I have re-installed iTunes, and have reset my iPod to factory settings on another computer. It still doesn't work; please help if you know anything about this!!
    Cheers,
    Ralf

    I have the same issues. iTunes 5.0.1.4 doesn't recognize my mini. The iPod updater doesn't recognize my mini either. I have uninstalled iTunes 5 and reinstalled 4.9, but it still didn't work. Something in iTunes 5 changed something in my system that reinstalling 4.9 doesn't fix. The updater just shows that I have no iPod connected. If I try to get the updater to check for my iPod during the updater installation, it just sits there and says "waiting for iPod". It's not my USB port, because I can read and write to my iPod. I did the entire list of chores that Apple suggests on the site, including disabling all other functions and services, reinstalling the COM port, uninstalling everything, reinstalling everything, getting Windows Install Clean Up, e v e r y t h i n g! Nothing worked.
    I then installed iTunes 4.9 and updater 1.4 on my other laptop. I was able to update to firmware 1.4 using that computer. When I put the iPod into that computer with iTunes 4.9, everything works fine. Of course, it asks me if I want to link that laptop with my iPod. I said no, because I want to use my original laptop with the iPod.
    Anybody have a solution for this? It's like we all have the same symptoms and go to the same doctor, and we end up having to diagnose ourselves. This is extremely annoying and inefficient.

  • Index not being used in group by.

    Here is the scenario, with examples: a big table, 333 to 500 million rows. Statistics are gathered. Histograms are there. The index is not being used though. Why?
      CREATE TABLE "XXFOCUS"."some_huge_data_table"
       (  "ORG_ID" NUMBER NOT NULL ENABLE,
      "PARTNERID" VARCHAR2(30) NOT NULL ENABLE,
      "EDI_END_DATE" DATE NOT NULL ENABLE,
      "CUSTOMER_ITEM_NUMBER" VARCHAR2(50) NOT NULL ENABLE,
      "STORE_NUMBER" VARCHAR2(10) NOT NULL ENABLE,
      "EDI_START_DATE" DATE,
      "QTY_SOLD_UNIT" NUMBER(7,0),
      "QTY_ON_ORDER_UNIT" NUMBER(7,0),
      "QTY_ON_ORDER_AMT" NUMBER(10,2),
      "QTY_ON_HAND_AMT" NUMBER(10,2),
      "QTY_ON_HAND_UNIT" NUMBER(7,0),
      "QTY_SOLD_AMT" NUMBER(10,2),
      "QTY_RECEIVED_UNIT" NUMBER(7,0),
      "QTY_RECEIVED_AMT" NUMBER(10,2),
      "QTY_REQUISITION_RDC_UNIT" NUMBER(7,0),
         "QTY_REQUISITION_RDC_AMT" NUMBER(10,2),
         "QTY_REQUISITION_RCVD_UNIT" NUMBER(7,0),
         "QTY_REQUISITION_RCVD_AMT" NUMBER(10,2),
         "INSERTED_DATE" DATE,
         "UPDATED_DATE" DATE,
         "CUSTOMER_WEEK" NUMBER,
         "CUSTOMER_MONTH" NUMBER,
         "CUSTOMER_QUARTER" NUMBER,
         "CUSTOMER_YEAR" NUMBER,
         "CUSTOMER_ID" NUMBER,
         "MONTH_NAME" VARCHAR2(3),
         "ORG_WEEK" NUMBER,
         "ORG_MONTH" NUMBER,
         "ORG_QUARTER" NUMBER,
         "ORG_YEAR" NUMBER,
         "SITE_ID" NUMBER,
         "ITEM_ID" NUMBER,
         "ITEM_COST" NUMBER,
         "UNIT_PRICE" NUMBER,
          CONSTRAINT "some_huge_data_table_PK" PRIMARY KEY ("ORG_ID", "PARTNERID", "EDI_END_DATE", "CUSTOMER_ITEM_NUMBER", "STORE_NUMBER")
      USING INDEX TABLESPACE "xxxxx"  ENABLE,
          CONSTRAINT "some_huge_data_table_CK_START_DATE" CHECK (edi_end_date - edi_start_date = 6) ENABLE
    SQL*Plus: Release 11.2.0.2.0 Production on Fri Sep 14 12:11:16 2012
    Copyright (c) 1982, 2010, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> SELECT num_rows FROM user_tables s WHERE s.table_name = 'some_huge_data_table';
      NUM_ROWS                                                                     
    333338434                                                                     
    SQL> SELECT MAX(edi_end_date)
      2    FROM some_huge_data_table p
      3   WHERE p.org_id = some_number
      4     AND p.partnerid = 'some_string';
    MAX(EDI_E                                                                      
    13-MAY-12                                                                      
    Elapsed: 00:00:00.00
    SQL> explain plan for
      2  SELECT MAX(edi_end_date)
      3    FROM some_huge_data_table p
      4   WHERE p.org_id = some_number
      5     AND p.partnerid = 'some_string';
    Explained.
    SQL> /
    PLAN_TABLE_OUTPUT                                                                                  
    Plan hash value: 2104157595                                                                        
    | Id  | Operation                    | Name        | Rows  | Bytes | Cost (%CPU)| Time     |       
    |   0 | SELECT STATEMENT             |             |     1 |    22 |     4   (0)| 00:00:01 |       
    |   1 |  SORT AGGREGATE              |             |     1 |    22 |            |          |       
    |   2 |   FIRST ROW                  |             |     1 |    22 |     4   (0)| 00:00:01 |       
    |*  3 |    INDEX RANGE SCAN (MIN/MAX)| some_huge_data_table_PK |     1 |    22 |     4   (0)| 00:00:01 |       
    SQL> explain plan for
      2  SELECT MAX(edi_end_date),
      3         org_id,
      4         partnerid
      5    FROM some_huge_data_table
      6   GROUP BY org_id,
      7            partnerid;
    Explained.
    PLAN_TABLE_OUTPUT                                                                                  
    Plan hash value: 3950336305                                                                        
    | Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |                    
    |   0 | SELECT STATEMENT   |          |     2 |    44 |  1605K  (1)| 05:21:03 |                    
    |   1 |  HASH GROUP BY     |          |     2 |    44 |  1605K  (1)| 05:21:03 |                    
    |   2 |   TABLE ACCESS FULL| some_huge_data_table |   333M|  6993M|  1592K  (1)| 05:18:33 |                    
    -------------------------------------------------------------------------------
    Why wouldn't it use the index in the group by? If I write a loop to query for each different partnerid (there are only three), the whole thing takes less than a second. Any help is appreciated.
    btw, I gave the index hint too. Didn't work. Version mentioned in the example.
    Edited by: RPuttagunta on Sep 14, 2012 11:24 AM
    Edited by: RPuttagunta on Sep 14, 2012 11:26 AM
    the actual names are 'scrubbed' for obvious reasons. Don't worry, I didn't name the tables in mixed case.

    Jonathan,
    Thank you for your input. I forgot about this issue since I ended up creating an MV, as the view was slower. But either way, I am curious. Here are the results for your questions.
    SQL> SELECT last_analyzed,
      2         blocks
      3    FROM user_tables s
      4   WHERE s.table_name = 'huge_data';
    LAST_ANAL     BLOCKS
    14-MAY-12    5869281
    SQL> SELECT last_analyzed,
      2         leaf_blocks
      3    FROM user_indexes i
      4   WHERE i.table_name = 'huge_data';
    LAST_ANAL LEAF_BLOCKS
    14-MAY-12     2887925
    SQL>
    It looks like stale statistics from the last_analyzed, but they really aren't. This is a development database and that was the last time it was refreshed. And the stats are right (at least the approximate number of blocks and num_rows etc.).
    No other data came into the table after.
    Also,
    1). I thought I didn't have any particular optimizer parameters set, but checking back I do: _fast_full_scan_enabled = false. Could that be it?
    SQL> SELECT a.name,
      2         a.value,
      3         a.display_value,
      4         a.isdefault,
      5         a.isses_modifiable
      6    FROM v$parameter a
      7   WHERE a.name LIKE '\_%' ESCAPE '\';
    NAME                           VALUE                          DISPLAY_VALUE   ISDEFAULT       ISSES
    _disable_fast_validate         TRUE                           TRUE            FALSE           TRUE
    _system_trig_enabled           TRUE                           TRUE            FALSE           FALSE
    _sort_elimination_cost_ratio   5                              5               FALSE           TRUE
    _b_tree_bitmap_plans           FALSE                          FALSE           FALSE           TRUE
    _fast_full_scan_enabled        FALSE                          FALSE           FALSE           TRUE
    _index_join_enabled            FALSE                          FALSE           FALSE           TRUE
    _like_with_bind_as_equality    TRUE                           TRUE            FALSE           TRUE
    _optimizer_autostats_job       FALSE                          FALSE           FALSE           FALSE
    _connect_by_use_union_all      OLD_PLAN_MODE                  OLD_PLAN_MODE   FALSE           TRUE
    _trace_files_public            TRUE                           TRUE            FALSE           FALSE
    10 rows selected.
    SQL>
    As you might have guessed, I am not the DBA for this db. I should pay more attention to these optimizer parameters.
    I know why we had to set the _connect_by_use_union_all parameter (due to a bug in 11gR2).
    I also vaguely remember something about _disable_fast_validate (something about another major db bug in 11gR2 again), but I am not sure why those other parameters are set.
    2). Also, I have tried this
    SQL> SELECT /*+ index_ss(huge_data_pk) gather_plan_statistics*/
      2   MAX(edi_end_date),
      3   org_id,
      4   partnerid
      5    FROM huge_data
      6   GROUP BY org_id,
      7            partnerid;
    MAX(EDI_E     ORG_ID PARTNERID
    2 rows
    SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(null,null,'ALLSTATS LAST'));
    PLAN_TABLE_OUTPUT
    SQL_ID  f3kk8skdyvz7c, child number 0
    SELECT /*+ index_ss(huge_data_pk) gather_plan_statistics*/
    MAX(edi_end_date),  org_id,  partnerid   FROM huge_data  GROUP BY
    org_id,           partnerid
    Plan hash value: 3950336305
    | Id  | Operation          | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
    PLAN_TABLE_OUTPUT
    |   0 | SELECT STATEMENT   |          |      1 |        |      2 |00:05:11.31 |    5905K|   5897K|    |  |          |
    |   1 |  HASH GROUP BY     |          |      1 |      2 |      2 |00:05:11.31 |    5905K|   5897K|   964K|   964K| 2304K (0)|
    |   2 |   TABLE ACCESS FULL| hug_DATA |      1 |    333M|    334M|00:04:31.44 |    5905K|   5897K|    |  |          |
    16 rows selected.
    But then I tried this too:
    SQL> alter session set "_fast_full_scan_enabled"=true;
    Session altered.
    SQL> SELECT MAX(edi_end_date),
      2         org_id,
      3         partnerid
      4    FROM hug_data
      5   GROUP BY org_id,
      6            partnerid;
    MAX(EDI_E     ORG_ID PARTNERID
    2 rows
    And this took around 5 minutes too.
    PS: This has nothing to do with the original question, but is it possible to derive the 'huge_data' table name from the sql_id? Just curious.
