Obsolete JDB files not being cleaned up

Hi,
Setup:
* We are using Oracle NoSQL 1.2.123.
* We have 3 replication groups with 3 replication nodes each.
Problem:
* 2 of the slaves (in 2 different replication groups) occupy much more space in JDB files (about 10 times more) than all the others. As these are slaves, writes always go through the master, and all nodes in a replication group (eventually) hold the same data, I assume this is stale data that has not been cleaned up by the BDB garbage collection (cleaner threads). Unfortunately, the logs do not show anything new (since December last year) and the oldest JDB files are from February.
Questions:
* Any ideas what could have gone wrong?
* What can I do to trigger the cleaners to clean up the old data? Is that safe to do in a production environment and without downtime?
* Is it really safe to assume that the current data within a replication group is really the same on all nodes?
Thank you in advance
Dimo
PS. A thread dump shows 2 cleaner threads that do nothing.
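A quick way to sanity-check that assumption is to look at log utilization with the BDB JE DbSpace utility (a minimal sketch, assuming je.jar is available under KVHOME/lib and using the env directory that "show topology" reports for the oversized node; run it against a stopped node or a copy of the env directory to avoid environment-lock issues):
    java -cp KVHOME/lib/je.jar com.sleepycat.je.util.DbSpace -h <storage-dir>/rgX-rnY/env
Consistently low utilization on the old files would confirm that the space is obsolete data the cleaner should have reclaimed, rather than extra live data.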

1) The simplest and fastest way to correct the replica node is to restore it from the master node. We will send you instructions for doing this later today. Here are directions for refreshing the data storage files (.jdb files) on a target node. NoSQL DB will automatically refresh the storage files from another node after we manually stop the target node, delete its storage files, and finally restart it, as described below. Thanks to Linda Lee for these directions.
First, be sure to make a backup.
Suppose you want to remove the storage files from rg1-rn3 and make it refresh its files from rg1-rn1. First, check where the storage files for the target Replication Node are located using the show topology command in the Admin CLI. Start the Admin CLI this way:
    java -jar KVHOME/lib/kvstore.jar runadmin -host <host> -port <port>
Find the directory containing the target Replication Node's files:
    kv-> show topology -verbose
    store=mystore  numPartitions=100 sequence=108
      dc=[dc1] name=MyDC repFactor=3
      sn=[sn1]  dc=dc1 localhost:13100 capacity=1 RUNNING
        [rg1-rn1] RUNNING  c:/linda/work/smoke/KVRT1/dirB
                     single-op avg latency=0.0 ms   multi-op avg latency=0.67391676 ms
      sn=[sn2]  dc=dc1 localhost:13200 capacity=1 RUNNING
        [rg1-rn2] RUNNING  c:/linda/work/smoke/KVRT2/dirA
                  No performance info available
      sn=[sn3]  dc=dc1 localhost:13300 capacity=1 RUNNING
        [rg1-rn3] RUNNING  c:/linda/work/smoke/KVRT3/dirA
                     single-op avg latency=0.0 ms   multi-op avg latency=0.53694165 ms
      shard=[rg1] num partitions=100
        [rg1-rn1] sn=sn1 haPort=localhost:13111
        [rg1-rn2] sn=sn2 haPort=localhost:13210
        [rg1-rn3] sn=sn3 haPort=localhost:13310
        partitions=1-100
In this example, rg1-rn3's storage is located in
    c:/linda/work/smoke/KVRT3/dirA
Stop the target service using the stop-service command:
    kv-> plan stop-service -service rg1-rn3 -wait
In another command shell, remove the files for the target Replication Node:
    rm c:/linda/work/smoke/KVRT3/dirA/rg1-rn3/env/*.jdb
In the Admin CLI, restart the service:
    kv-> plan start-service -service rg1-rn3 -wait
The service will restart and will populate its missing files from one of the other two nodes in the shard. You can use the "verify" or the "show topology" command to check on the status of the store.
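In short, the whole refresh is the following sequence (the same commands as above, shown together; substitute your own host, port and storage directory):
    java -jar KVHOME/lib/kvstore.jar runadmin -host <host> -port <port>
    kv-> show topology -verbose
    kv-> plan stop-service -service rg1-rn3 -wait
    (in another shell)  rm c:/linda/work/smoke/KVRT3/dirA/rg1-rn3/env/*.jdb
    kv-> plan start-service -service rg1-rn3 -wait
    kv-> show topology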
--mark

Similar Messages

  • [svn] 4533: Bug: BLZ-301 - Selector expressions are not being cleaned up properly on unsubscribe

    Revision: 4533
    Author: [email protected]
    Date: 2009-01-14 15:55:31 -0800 (Wed, 14 Jan 2009)
    Log Message:
    Bug: BLZ-301 - Selector expressions are not being cleaned up properly on unsubscribe
    QA: Yes
    Doc: No
    Ticket Links:
    http://bugs.adobe.com/jira/browse/BLZ-301
    Modified Paths:
    blazeds/trunk/modules/core/src/flex/messaging/services/messaging/SubscriptionManager.java

  • Session not being cleaned up by JRun

    My application is using iPlanet Web Server and the JRun 3.02 application server. I am having a problem with active sessions not getting cleaned up by the app server. When the user goes through the application and finishes the process, I invalidate the session by calling session.invalidate(). I have also set a 30-minute timeout value in the JRun global.properties file to invalidate the session if the user starts but does not finish going through the application. However, the number of active sessions in the JRun log doesn't seem to go down. After a few days, I run out of sessions and the application hangs. I keep a few objects on the session, including a pretty big 'pdfObject' that I use to create a PDF document on the fly.
    Any idea why JRun is not able to clean up the sessions after the 30-minute timeout has passed? Does the fact that I have stored objects on the session prevent JRun from invalidating and cleaning up the session?
    Thanks in advance.

    Hi afikru
    According to the Servlet specification, the session.invalidate() method should unbind any objects associated with the session. However, I'm not conversant with the JRun application server, so I can only provide some pointers here to help you out.
    Firstly, try locating some documentation specific to your application server which may throw some light on why this may be happening.
    Secondly, I'd suggest running the Server within a Profiling tool so that you can see what objects are being created and how many of those. Try explicitly running the Garbage Collector and see if the sessions come down.
    Keep me posted on your progress.
    Good Luck!
    Eshwar R.
    Developer Technical Support
    Sun Microsystems

  • Aq$_tab_p, aq$_tab_d filling and not being cleaned up

    Hi all.
    I have a simple 1 way streams replication setup (two node) based on examples. Replication seems to be working.
    However, the AQ$_TAB_P and AQ$_TAB_D tables (on the capture side only) continue to fill, as do the number of messages in the queue and the spilled LCRs in v$buffered_queues. Nothing should be spilling, since the only things I'm sending are single-row updates to a heartbeat table and the Streams pools are a few hundred MB.
    I have tried leaving aq_tm_processes unset, as well as setting it to 2, and the tables continue to grow.
    The MSG_STATE values in AQ$TAB are either DEFERRED or DEFERRED SPILLED. As mentioned, all of the heartbeat updates (as well as small test transactions) replicate just fine, so the transactions are being captured, propagated and applied.
    For reference, I am running 10.2.0.2 on Solaris 10 with no Streams-related one-off patches to speak of. My propagation did not specify queue_to_queue.
    I'm wondering if there is a step I may have missed, or what else I may be able to look at to ensure that these tables are cleaned up?
    Thanks.
    Edited by: user599560 on Oct 28, 2008 12:39 PM

    Hello
    I forgot to mention that you should check v$propagation_receiver on the destination and v$propagation_sender on the source. v$propagation_receiver on the source will not have records unless you are using bi-directional streams.
    The aq_tm_processes parameter should be set on all the databases that use Streams. This parameter is responsible for spawning the queue monitor slaves, which actually perform the spilling and remove spilled messages that are no longer needed.
    It is suggested to remove this parameter from the spfile; however, SHOW PARAMETER will still show it as 0, so you should check v$spparameter to confirm whether it was actually removed. If you remove it from the spfile, the required number of slaves should be spawned automatically by the autotune feature in 10g. That said, I would always suggest setting this parameter to 1 so that one slave process is always spawned even if you don't use Streams, and SHOW PARAMETER will then show it as 1.
    If you find the slaves are not spawned, check your alert.log to see whether any errors are reported. You also need to check that the Queue Monitor Coordinator process (QMNC) is spawned. If QMNC itself is not spawned (by default it should always be), then no q00 slaves will be spawned. If you remove the parameter from the spfile and you see that no q00 slaves are spawned even though you are using Streams (capture, propagation or apply), you should log an SR with Oracle Support to investigate. You can check the qmnc and q00 slaves at the OS level using the following command:
    ps -ef | grep $ORACLE_SID | grep [q][m0][n0]
    Please mark this thread as answered if all your questions are answered; otherwise, let me know.
    Thanks,
    Rijesh
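    As a minimal sketch of the two checks described above (the query runs in SQL*Plus on each database; the ps command is the same one shown above):
        SQL> SELECT name, value, isspecified FROM v$spparameter WHERE name = 'aq_tm_processes';
        $ ps -ef | grep $ORACLE_SID | grep [q][m0][n0]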

  • Old recovery points do not seem to be getting cleaned up

    I'm running a Windows Server 2012 server with DPM 2012 SP1, acting as a secondary DPM server for a couple of primary servers. However, in the last 5-6 weeks it has begun to behave very strangely. Suddenly, I get a lot of "Recovery point volume threshold exceeded", "DPM does not have sufficient storage space available on the recovery point volume to create new recovery points" and "The used disk space on the computer running DPM for the recovery point volume of SQL Server 2008 database XXXXX\DB(servername.domain.com) has exceeded the threshold value of 90% (DPM accounts 600 MB for internal usage in addition to free space available). If you do not allocate more disk space, synchronization jobs may fail due to insufficient disk space. (ID 3169)" alerts.
    All of these alerts seem to have a common source, disk space of course, but there is currently 8 TB free in the DPM disk pool. However, I have a feeling that all of this started when we added another DPM disk to the storage pool. Could it be that DPM no longer cleans up expired disk data correctly?
    /Amir

    Hi,
    If pruneshadowcopiesdpm201.ps1 is not completing, hangs, or crashes, then that needs to be addressed, as it will definitely cause storage usage problems.
    In the meantime you can use this PowerShell script to delete old recovery points to help free disk space. It will prompt you to select a datasource, then a date; all recovery points made before that time will be deleted.
    #Author : Ruud Baars
    #Date : 11/09/2008
    #Edited : 11/15/2012 By: Wilson S.
    #edited : 11:27:2012 By: Mike J.
    # NOTE: Update script to only remove recovery points on Disk. Recovery points removed will be from the oldest one up to the date
    # entered by the user while the script is running
    #deletes all recovery points before 'now' on selected data source.
    $version="V4.7"
    $ErrorActionPreference = "silentlycontinue"
    add-pssnapin sqlservercmdletsnapin100
    Add-PSSnapin -Name Microsoft.DataProtectionManager.PowerShell
    #display RP's to delete and ask to continue.
    #Check & wait data source to be idle else removal may fail (in Mojito filter on 'intent' to see the error)
    #Fixed prune default and logfile name and some logging lines (concatenate question + answer)
    #Check dependent recovery points do not pass BEFORE date and adjust selection to not select those ($reselect)
    #--- Fixed reselect logic to keep adjusting reselect for as long as older than BEFORE date
    #--- Fixed post removal rechecking logic to match what is done so far (was still geared to old logic)
    #--- Modified to remove making RP and ask for pruning, fixed logic for removal rechecking logic
    $MB=1024*1024
    $logfile="DPMdeleteRP.LOG"
    $wait=10 #seconds
    $confirmpreference = "None"
    function Show_help
    cls
    $l="=" * 79
    write-host $l -foregroundcolor magenta
    write-host -nonewline "`t<<<" -foregroundcolor white
    write-host -nonewline " DANGEROUS :: MAY DELETE MANY RECOVERY POINTS " -foregroundcolor red
    write-host ">>>" -foregroundcolor white
    write-host $l -foregroundcolor magenta
    write-host "Version: $version" -foregroundcolor cyan
    write-host "A: User Selects data source to remove recovery points for" -foregroundcolor green
    write-host "B: User enters date / time (using 24hr clock) to Delete recovery points" -foregroundcolor green
    write-host "C: User Confirms deletion after list of recovery points to be deleted is displayed." -foregroundcolor green
    write-host "Appending to log file $logfile`n" -foregroundcolor white
    write-host "User Accepts all responsibilities by entering a data source or just pressing [Enter] " -foregroundcolor white -backgroundcolor blue
    "**********************************" >> $logfile
    "Version $version" >> $logfile
    get-date >> $logfile
    show_help
    $DPMservername=&"hostname"
    "Selected DPM server = $DPMservername" >> $logfile
    write-host "`nConnnecting to DPM server retrieving data source list...`n" -foregroundcolor green
    $pglist = @(Get-ProtectionGroup $DPMservername) # WILSON - Created PGlist as array in case we have a single protection group.
    $ds=@()
    $tapes=$null
    $count = 0
    $dscount = 0
    foreach ($count in 0..($pglist.count - 1))
    # write-host $pglist[$count].friendlyname
    $ds += @(get-datasource $pglist[$count]) # WILSON - Created DS as array in case we have a single protection group.
    # write-host $ds
    # write-host $count -foreground yellow
    if ( Get-Datasource $DPMservername -inactive) {$ds += Get-Datasource $DPMservername -inactive}
    $i=0
    write-host "Index Protection Group Computer Path"
    write-host "---------------------------------------------------------------------------------"
    foreach ($l in $ds)
    "[{0,3}] {1,-20} {2,-20} {3}" -f $i, $l.ProtectionGroupName, $l.psinfo.netbiosname, $l.logicalpath
    $i++
    $DSname=read-host "`nEnter a data source index from list above - Note co-located datasources on same replica will be effected"
    if (!$DSname)
    write-host "No datasource selected `n" -foregroundcolor yellow
    "Aborted on Datasource name" >> $logfile
    exit 0
    $DSselected=$ds[$DSname]
    if (!$DSselected)
    write-host "No datasource selected `n" -foregroundcolor yellow
    "Aborted on Datasource name" >> $logfile
    exit 0
    $rp=get-recoverypoint $DS[$dsname]
    $rp
    # $DoTape=read-host "`nDo you want to remove when recovery points are on tape ? [y/N]"
    # "Remove tape recovery point = $DoTape" >> $logfile
    write-host "`nCollecting recoverypoint information for datasource $DSselected.name" -foregroundcolor green
    if ($DSselected.ShadowCopyUsedspace -gt 0)
    while ($DSSelected.TotalRecoveryPoints -eq 0)
    { # "still 0"
    #this is on disk
    $oldShadowUsage=[math]::round($DSselected.ShadowCopyUsedspace/$MB,1)
    $line=("Total recoverypoint usage {0} MB on DISK in {1} recovery points" -f $oldShadowUsage ,$DSselected.TotalRecoveryPoints )
    $line >> $logfile
    write-host $line`n -foregroundcolor white
    #this is on tape
    #$trptot=0
    #$tp= Get-RecoveryPoint($dsselected) | where {($_.Datalocation -eq "Media")}
    #foreach ($trp in $tp) {$trptot += $trp.size }
    #if ($trptot -gt 0 )
    # $line=("Total recoverypoint usage {0} MB on TAPE in {1} recovery points" -f ($trptot/$MB) ,$DSselected.TotalRecoveryPoints )
    # $line >> $logfile
    # write-host $line`n -foregroundcolor white
    [datetime]$afterdate="1/1/1980"
    #$answer=read-host "`nDo you want to delete recovery points from the beginning [Y/n]"
    #if ($answer -eq "n" )
    # [datetime]$afterdate=read-host "Delete recovery points AFTER date [MM/DD/YYYY hh:mm]"
    [datetime]$enddate=read-host "Delete ALL Disk based recovery points BEFORE and Including date/time entered [MM/DD/YYYY hh:mm]"
    "Deleting recovery points until $enddate" >>$logfile
    write-host "Deleting recovery points until and $enddate" -foregroundcolor yellow
    $rp=get-recoverypoint $DSselected
    if ($DoTape -ne "y" )
    $RPselected=$rp | where {($_.representedpointintime -le $enddate) -and ($_.Isincremental -eq $FALSE)-and ($_.DataLocation -eq "Disk")}
    else
    $RPselected=$rp | where {($_.representedpointintime -le $enddate) -and ($_.Isincremental -eq $FALSE)}
    if (!$RPselected)
    write-host "No recovery points found!" -foregroundcolor yellow
    "No recovery points found, aborting...!" >> $logfile
    exit 0
    $reselect = $enddate
    $adjustflag = $false
    foreach ($onerp in $RPselected)
    $rtime=[string]$onerp.representedpointintime
    $rsize=[math]::round(($onerp.size/$MB),1)
    $line= "Found {0}, RP size= {1} MB (If 0 MB, co-located datasource cannot be computed), Incremental={2} "-f $rtime, $rsize,$onerp.Isincremental
    $line >> $logfile
    write-host "$line" -foregroundcolor yellow
    #Get dependent rp's for data source
    $allRPtbd=$DSselected.GetAllRecoveryPointsToBeDeleted($onerp)
    foreach ($oneDrp in $allRPtbd)
    if ($oneDrp.IsIncremental -eq $FALSE) {continue}
    $rtime=[string]$oneDrp.representedpointintime
    $rsize=[math]::round(($oneDrp.size/$MB),1)
    $line= ("`t...is dependancy for {0} size {1} `tIncremental={2}" -f $rtime, $rsize, $oneDrp.Isincremental)
    $line >> $logfile
    if ($oneDrp.representedpointintime -ge $enddate)
    #stick to latest full ($oneDrp = dependents, $onerp = full)
    $adjustflag = $true
    $reselect = $onerp.representedpointintime
    "<< Dependents newer than BEFORE date >>>" >> $logfile
    Write-Host -nonewline "`t <<< later than BEFORE date >>>" -foregroundcolor white -backgroundcolor red
    write-host "$line" -foregroundcolor yellow
    else
    #Ok, include current latest incremental
    $reselect = $oneDrp.representedpointintime
    write-host "$line" -foregroundcolor yellow
    if ($reselect -lt $oneDrp.representedpointintime)
    #we adjusted further backward than latest incremental within selection
    $reselect = $rtime
    $line = "Adjusted BEFORE date to be $reselect to include dependents to $enddate"
    $line >> $logfile
    Write-Host $line -foregroundcolor white -backgroundcolor blue
    $line="`n<<< SECOND TO LAST CHANCE TO ABORT - ONE MORE PROMPT TO CONFIRM. >>>"
    write-host $line -foregroundcolor white -backgroundcolor blue
    $line >> $logfile
    $line="Above recovery points within adjusted range will be permanently deleted !!!"
    write-host $line -foregroundcolor red
    $line >> $logfile
    $line="These RP's include dependent recovery points and may contain co-located datasource(s)"
    write-host $line -foregroundcolor red
    $line >> $logfile
    $line="Data source activity = " + $DSselected.Activity
    $line >> $logfile
    write-host $line -foregroundcolor white
    $DoDelete=""
    while (($DoDelete -ne "N" ) -and ($DoDelete -ne "Y"))
    $line="Continue with deletion (must answer) Y/N? "
    write-host $line -foregroundcolor white
    $DoDelete=read-host
    $line = $line + $DoDelete
    $line >> $logfile
    if (!$DSselected.Activity -eq "Idle")
    $line="Data source not idle, do you want to wait Y/N ? "
    write-host $line -foregroundcolor yellow
    $Y=read-host
    $line = $line + $Y
    $line >> $logfile
    if ($Y -ieq "Y")
    Write-Host "Waiting for data source to become idle..." -foregroundcolor green
    while ($DSselected.Activity -ne "Idle")
    ("Waiting {0} seconds" -f $wait) >>$logfile
    Write-Host -NoNewline "..." -ForegroundColor blue
    start-sleep -s $wait
    if ($DoDelete -eq "Y")
    foreach ($onerp in $RPselected)
    #reselect is adjusted to safe range relative to what was requested
    #--- if adjustflag not set then all up to including else only older because we must keep the full
    if ((($onerp.representedpointintime -le $reselect) -and ($adjustflag -eq $false)) -or ($onerp.representedpointintime -lt $reselect))
    $rtime=[string]$onerp.representedpointintime
    write-host `n$line -foregroundcolor red
    $line >>$logfile
    if (($onerp ) -and ($onerp.IsIncremental -eq $FALSE)) { remove-recoverypoint -RecoveryPoint $onerp -confirm:$True} # >> $logfile}
    $line =("---`nDeleting recoverypoint -> " + $rtime)
    $line >>$logfile
    "All Done!" >> $logfile
    write-host "`nAll Done!`n`n" -foregroundcolor white
    $line="Do you want to View DPMdeleteRP.LOG file Y/N ? "
    write-host $line -foregroundcolor white
    $Y=read-host
    $line = $line + $Y
    $line >> $logfile
    if ($Y -ieq "Y")
    Notepad DPMdeleteRP.LOG
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. Regards, Mike J. [MSFT]
    This posting is provided "AS IS" with no warranties, and confers no rights.

  • Passivation table ps_txn not being cleaned up

    Adf 11gR1PS1
    Hello
    I have a small application using one unbounded task flow and one bounded task flow.
    Each task flow uses a different application module.
    The unbounded task flow calls the bounded task flow in a modeless inline-popup via a button.
    When running the application and clicking on the button, the bounded task flow is called and a new row is inserted into the ps_txn table.
    However, when the inline-popup is closed via the "x" on the popup window, the row is not removed from the ps_txn table.
    If the button is clicked again, a new row is added to the ps_txn table.
    Is this the normal behaviour? Looking at section 40.5.3 in the Dev Guide, it would seem that the record should be deleted or reused.
    I understand that there are scripts for cleaning up the table but shouldn't it be automatic ?
    What am I missing ?
    Regards
    Paul

    Hi Paul,
    Do you use the failover (jbo.dofailover) ?
    If not, I would expect records to be deleted from PS_TXN at activation.
    I tested with the ADF BC Component Browser, selecting menus Save/Restore Transaction State, with jbo.debugoutput=console:
    [277] (0) OraclePersistManager.deleteAll(2126) **deleteAll** collid=17461
    [278] (0) OraclePersistManager.deleteAll(2140)    stmt: delete "PS_TXN" where collid=:1
    [279] (0) OraclePersistManager.commit(217) **commit** #pending ops=1
    But I also already noticed orphaned records in the table.
    Do you use jbo.internal_connection so that the same connection is used regardless of which AM instance is passivated/activated, or do you have an instance of the PS_TXN table in each AM's connection?
    Regards,
    Didier.
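    If you want to see whether rows really are piling up, a simple check against the table the trace above deletes from is (a hedged sketch; run it as the owner of PS_TXN):
        SELECT collid, COUNT(*) AS row_count
        FROM   ps_txn
        GROUP  BY collid
        ORDER  BY row_count DESC;
    Collids that remain after their sessions have ended correspond to the orphaned records mentioned above.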

  • Memory optimized DLLs not being cleaned up

    Hi,
    From BOL, my understanding is that DBAs do not need to administer the DLLs created for memory-optimized tables or natively compiled stored procedures, as they are recompiled automatically when the SQL Server service starts and are removed when no longer needed.
    But I am witnessing that even after a memory-optimized table has been dropped, and the service restarted, the DLLs still exist in the file system AND are still loaded into SQL Server memory and attached to the process. This can be seen from the fact that they are still visible in sys.dm_os_loaded_modules, and are locked in the file system if you try to delete them while the SQL Server service is running.
    Is this a bug? Or are they cleaned up at a later date? If at a later date, what triggers the clean-up, if it isn't an instance restart?
    Pete

    Most likely the DLLs are still needed during DB recovery, as there are still remnants of the tables in the checkpoint files. A couple of cycles of checkpoints and log truncation (e.g., by doing a log backup) need to happen to clean up the old checkpoint files and remove the remnants of the dropped tables from disk.
    The following blog post details all the state transitions a checkpoint file goes through:
    http://blogs.technet.com/b/dataplatforminsider/archive/2014/01/23/state-transition-of-checkpoint-files-in-databases-with-memory-optimized-tables.aspx
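    To nudge that along manually, you can cycle a checkpoint and a log backup yourself a couple of times (a hedged sketch; the database name and backup path are placeholders):
        USE MyDatabase;
        CHECKPOINT;
        BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase.trn';
    After a few such cycles the checkpoint files belonging to the dropped table should transition out, and the corresponding DLLs should then be unloaded and removed.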

  • Expired updates not being cleaned up

    Hi,
    I've been trying to clean up old expired updates on my SCCM 2012 SP1 server, and for whatever reason it seems that the update files are never actually getting removed.
    At first I tried the instructions at
    http://blogs.technet.com/b/configmgrteam/archive/2012/04/12/software-update-content-cleanup-in-system-center-2012-configuration-manager.aspx
    When I run the script they provide, it appears to go through all the updates but never actually deletes any of them. The script always seems to say it found an existing folder and then later says that it is excluding the same folder because it is active.
    Then I read that SP1 for SCCM 2012 is actually supposed to do the cleanup process automatically. But in that case do I need to do anything like expire the updates manually, or does SCCM now do that? How can I see what is preventing either the manual script or the automatic cleanup process from actually removing the unneeded files and folders?
    And does anything need to be done with superseded updates as well?
    Also, I've always thought that when you use SCCM 2012 to do your updates you should never go to the WSUS console and do anything, but I read
    http://blog.coretech.dk/kea/house-of-cardsthe-configmgr-software-update-point-and-wsus/ and he goes to the WSUS console and does a cleanup there as well.
    Thanks in advance,
    Nick

    Hi Xin,
    In the wsyncmgr.log file I see lots of log entries like this:
    Skipped update 2d8121b4-ba5c-4492-ba6e-1c70e9382406 - Update for Windows Vista (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.777+420><thread=4172 (0x104C)>
    Skipped update 24d18083-0417-4273-9a5e-1fc3cd37f1d4 - Update for Windows Embedded Standard 7 for x64-based Systems (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.791+420><thread=4172 (0x104C)>
    Skipped update 954f2ad2-369e-469e-97a0-3efd0a831111 - Update for Windows 8.1 (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.805+420><thread=4172 (0x104C)>
    Skipped update f81d2820-721a-431c-a262-4878a42f0115 - Update for Windows Vista for x64-based Systems (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.822+420><thread=4172 (0x104C)>
    Skipped update 7c82171f-025c-46af-849c-63764ba44382 - Update for Windows Server 2008 x64 Edition (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.836+420><thread=4172 (0x104C)>
    Skipped update 36c29163-b78a-410f-8bd0-7370b35a24f1 - Update for Windows Server 2012 (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.850+420><thread=4172 (0x104C)>
    Skipped update 6146260e-5c34-4483-962d-834250d84c79 - Update for Windows 7 (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.864+420><thread=4172 (0x104C)>
    Skipped update e6e7f357-7011-4bfd-8b14-8be61e43fa51 - Update for Windows Server 2003 (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.877+420><thread=4172 (0x104C)>
    Skipped update 2ed5e49f-3295-4b89-8a0b-9a38c0027d6d - Update for Windows Server 2008 R2 for Itanium-based Systems (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.890+420><thread=4172 (0x104C)>
    Skipped update 62778a2a-11d8-4cb1-9970-9c3f45202d04 - Update for Windows Server 2008 R2 x64 Edition (KB2998527) because it is up to date.  $$<SMS_WSUS_SYNC_MANAGER><10-31-2014 01:50:02.905+420><thread=4172 (0x104C)>
    And I also see the following entries:
    Sync time: 0d00h41m29s  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 01:51:51.388+420><thread=3440 (0xD70)>
    Wakeup by SCF change  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 02:05:42.535+420><thread=3440 (0xD70)>
    Wakeup for a polling cycle  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 03:05:49.050+420><thread=3440 (0xD70)>
    Deleting old expired updates...  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 03:05:49.130+420><thread=3440 (0xD70)>
    Deleted 17 expired updates  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 03:05:57.067+420><thread=3440 (0xD70)>
    Deleted 134 expired updates  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 03:06:06.487+420><thread=3440 (0xD70)>
    Deleted 168 expired updates  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 03:06:07.595+420><thread=3440 (0xD70)>
    Deleted 168 expired updates total  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 03:06:07.651+420><thread=3440 (0xD70)>
    Deleted 10 orphaned content folders in package P0100005 (Endpoint Protection Definition Updates)  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 03:06:07.875+420><thread=3440 (0xD70)>
    Deleted 5 orphaned content folders in package P0100007 (Automatic Deployment Rule for Exchange Servers)  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 03:06:07.953+420><thread=3440 (0xD70)>
    Thread terminated by service request.  $$<SMS_WSUS_SYNC_MANAGER><10-30-2014 03:06:51.039+420><thread=3440 (0xD70)>
    So it seems like it might be skipping updates?  And then it says it deleted 168 expired updates for example?
    But if I look at the drive where all the update packages are stored it hasn't changed size.

  • The "Roman" font is not being recognized in Firefox 4.0. As such, I cannot read any previously posted topics or post any new topics on websites using this font.

    The "Roman" font is not being recognized in Firefox 4.0. As such, I cannot read any previously posted topics or post any new topics on websites using this font.

    I have had a similar problem with my system. I just recently (within a week of this post) built a brand new desktop. I installed Windows 7 64-bit Home and had a clean install, no problems. Using IE downloaded an anti-virus program, and then, because it was the latest version, downloaded and installed Firefox 4.0. As I began to search the internet for other programs to install after about maybe 10-15 minutes my computer crashes. Blank screen (yet monitor was still receiving a signal from computer) and completely frozen (couldn't even change the caps and num lock on keyboard). I thought I perhaps forgot to reboot after an update so I did a manual reboot and it started up fine.
    When ever I got on the internet (still using firefox) it would crash after anywhere between 5-15 minutes. Since I've had good experience with FF in the past I thought it must be either the drivers or a hardware problem. So in-between crashes I updated all the drivers. Still had the same problem. Took the computer to a friend who knows more about computers than I do, made sure all the drivers were updated, same problem. We thought that it might be a hardware problem (bad video card, chipset, overheating issues, etc.), but after my friend played around with my computer for a day he found that when he didn't start FF at all it worked fine, even after watching a movie, or going through a playlist on Youtube.
    At the time of this posting I'm going to try to uninstall FF 4.0 and download and install FF 3.6.16 which is currently on my laptop and works like a dream. Hopefully that will do the trick, because I love using FF and would hate to have to switch to another browser. Hopefully Mozilla will work out the kinks with FF 4 so I can continue to use it.
    I apologize for the lengthy post. Any feedback would be appreciated, but is not necessary. I will try and post back after I try FF 3.6.16.

  • iPod not being recognised, but with a twist

    Hi,
    My iPod nano 5th gen is not being recognised on my computer. It is now sometimes being recognised by the computer itself, but not by iTunes. However, when I plug my iPod into my partner's laptop, it's fine. More to the point, when they plug theirs into my laptop, it recognises them. To me, this suggests that both my laptop and my iPod, individually at least, are working.
    I have been through the ipod support page, and gone through all the steps, and to no avail.
    Final details: I'm on Windows 8, and my iTunes is totally up to date. I have re-installed iTunes and have reset my iPod to factory settings on another computer. It still doesn't work; please help if you know anything about this!!
    Cheers,
    Ralf

    I have the same issues. iTunes 5.0.1.4 doesn't recognize my mini. The iPod updater doesn't recognize my mini either. I have uninstalled iTunes 5 and reinstalled 4.9, but it still didn't work. Something in iTunes 5 changed something in my system that reinstalling 4.9 doesn't fix. The updater just shows that I have no iPod connected. If I try to get the updater to check for my iPod during the updater installation, it just sits there and says "waiting for iPod". It's not my USB port, because I can read and write to my iPod. I did the entire list of chores that Apple suggests on the site, including disabling all other functions and services, reinstalling the COM port, uninstalling everything, reinstalling everything, getting Windows Install Clean Up, e v e r y t h i n g! Nothing worked.
    I then installed iTunes 4.9 and updater 1.4 on my other laptop. I was able to update to firmware 1.4 using that computer. When I put the iPod into that computer with iTunes 4.9, everything works fine. Of course, it asks me if I want to link that laptop with my iPod. I said no, because I want to use my original laptop with the iPod.
    Anybody have a solution for this? It's like we all have the same symptoms and go to the same doctor, and we end up having to diagnose ourselves. This is extremely annoying and inefficient.

  • Can't do Google searches or get tabs to open without ringo, and I get the alert: "The operation cannot be completed because of an internal failure. A secure network communication has not been cleaned up correctly."

    I can't get tabs to open or Google search to operate without constant ringo, and I get the alert: "The operation cannot be completed because of an internal failure. A secure network communication has not been cleaned up correctly." I have the 8.0 Firefox version.

    This is a known bug and it is being worked on.
    The relevant bug report is [https://bugzilla.mozilla.org/show_bug.cgi?id=588511 Bug 588511], but please do not comment on the bug report.

  • Can't Delete Private Cloud - tbl_WLC_PhysicalObject not being updated

    I am having issues with our SCVMM instance where I can't delete Private Clouds... even if they're empty.
    When I right-click the Private Cloud and click Delete, the "Jobs" panel says it finished successfully; however, the Private Cloud is not deleted.
    After doing some research, I believe it's because entries in the tbl_WLC_PhysicalObject database table are not being updated correctly when a VM is moved from one Private Cloud to another. After determining the "CloudID" of the Private Cloud I am trying to delete, I still see resources assigned to this Private Cloud in the tbl_WLC_PhysicalObject table, even though from the VMM Console the Private Cloud shows up empty.
    For testing purposes, I assigned a VM back to the Private Cloud I am trying to delete, only to move it out again and gather some tracing/logging. When I moved the VM back out of the Private Cloud, I had a SQL Profiler trace running in the background, capturing the SQL statements on the DB server. Looking at the "exec dbo.prc_WLC_UpdatePhysicalOBject" statement, I see the @CloudID variable is assigned the "CloudID" of the Private Cloud the VM is currently assigned to (the Private Cloud I am trying to delete) and is NOT the CloudID of the Private Cloud the VM is being moved to/assigned to.
    Instead of having the VMM Console GUI do the Private Cloud assignment/change, I copied the PowerShell commands out so I can run them manually. It looks like the script gets 4 variables ($VM, $OperatingSystem, $CPUType, and $Cloud) and then runs the "Set-SCVirtualMachine" cmdlet. For the $Cloud variable, it does return the proper "CloudID" of the Private Cloud I am trying to move the VM to (I ran it separately and then ran an ECHO $Cloud to look at its value). When I run the "Set-SCVirtualMachine" cmdlet, the output has values for "CloudID" and "Cloud", and these are still the values of the source Private Cloud (the Private Cloud I am moving the VM out of and ultimately want to delete).
    Has anyone run into this? Is something not processing right in the "Set-SCVirtualMachine" cmdlet?

    I've been slowly looking into this, and this is where I am at:
    I built a development SCVMM 2012 R2 instance that mimics our production environment (minus all the VMs... just the networking configuration, and all the private clouds have been mocked up). From there, I started at SCVMM 2012 R2 GA and, one by one, installed the 4 rollup patches in order. At each new patch level I monitored the queries coming in through SQL Profiler as I moved a VM between private clouds and created new VMs within clouds. As I created new VMs and moved the VMs between clouds, the stored procedure "prc_WLC_UpdatePhysicalOBject" always had a value of NULL for the CloudID column... so a CloudID isn't even associated with the physical objects (basically the VHDX files and any mounted ISOs I have on the VMs).
    I did find out this SCVMM instance was upgraded from SCVMM 2008 (I took over after the 2012 R2 upgrade was completed).
    I am thinking at this point that nothing is wrong with SCVMM 2012 R2 if you build and recreate it from scratch with a new DB. I am thinking this might be a deprecated field from SCVMM 2008. The only other thing we did was put in a SAN and move VMs from stand-alone hosts to the new CSVs (a mixture of 2008 R2 and 2012 non-R2 hosts).
    At this point, since we don't have self-service enabled yet, it will be a day's work to rebuild a new instance of SCVMM 2012 R2, migrate the hosts/VMs to it and start from a clean slate.
    I know the DB structure isn't really published... but does anybody have any other insights into this?
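    For anyone wanting to confirm the same symptom, the check described above boils down to something like this against the VMM database (a hedged sketch; the table and CloudID column are the ones named in this thread, and the value is a placeholder for the CloudID of the cloud you are trying to delete):
        SELECT *
        FROM   tbl_WLC_PhysicalObject
        WHERE  CloudID = '<CloudID-of-the-private-cloud-being-deleted>';
    Rows still coming back for a cloud that looks empty in the VMM Console match the behaviour described above.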

  • Mouse settings are not being saved

    I just did a clean install of Lion (10.7.5) on a new SSD in a Mac Pro 1,1 and am having problems with my mouse settings. The settings are not being saved. When I change any of the buttons, nothing changes.
    It's not a hardware problem with the mouse. If I boot up to my previous OS drive, Lion 10.7.5, the mouse settings are correct and function properly.

    Forgot to add, this is the basic wired mouse. I've also tried the wireless Bluetooth Mighty Mouse as well, with the exact same results.
    I've tried removing the plist files with no success.

  • I had two G5 2.2GHz machines donated, and they were not wiped clean, and I don't have a Leopard CD. Anyone know what I can do? I am a non-profit.

    I would greatly appreciate some help here. I have two PowerPCs that were donated to my community center, and they were not wiped clean. (insert major frowny face) I am at a loss on what to do; being a non-profit I don't have much money to spend on having these cleaned off at a local "Authorized Mac Store." I have my fingers crossed that someone here might be able to offer me some guidance.
    Thanks in advance,
    Brandon

    Some folks have experienced difficulties loading a new system.
    Restore Tiger 10.4 & Leopard 10.5  DVDs are available from Apple by calling 800-767-2775 as of January 20, 2013.
    https://discussions.apple.com/thread/4720126?tstart=0
    (0)
    Be careful when deleting files. A lot of people have trashed their system when deleting things.
    Place things in trash. Reboot & run your applications. Empty trash.
    Go after large files that you have created & know what they are. Do not delete small files that are in a folder if you do not know what the folder is for. Anything that is less than a megabyte is a small file these days.
    (1)
    Empty the trash.  Space isn't reclaimed until you empty the trash in three places!
    The trash can...
    --  in the dock
    -- for iPhoto
    -- with Mail
    (2)
    Run
    OmniDiskSweeper
    "The simple, fast way to save disk space"
    OmniDiskSweeper is now free!
    http://www.omnigroup.com/applications/omnidisksweeper/download/
    Requirements for Mac OS X v10.5
    http://support.apple.com/kb/HT3759
    TenFourFox -- It's a port of the latest FireFox to run on older hardware and software.
    "World's most advanced web browser. Finely tuned for the Power PC."
        --  works for me on 10.4.  Supports 10.5
    http://www.floodgap.com/software/tenfourfox/
    alternative download site:
    http://www.macupdate.com/app/mac/37761/tenfourfox
    OmniWeb uses the latest Safari framework: the open source WebKit. Other browsers like Safari and iCab use the OS version of WebKit. The OmniWeb download dmg includes its own copy of the latest WebKit.
    http://www.omnigroup.com/products/omniweb/
    Safari 4.1.3 for Tiger
    http://support.apple.com/kb/DL1069
    Safari 5.0.6 Leopard.
    http://support.apple.com/kb/dl1422

  • Structured XMLIndex is not being used

    I have a table defined as "TABLE OF XMLTYPE" with XML binary storage and a structured XMLIndex, under Oracle 11.2.0.3.4. The query that I am using on this table is virtually the same as the XMLIndex definition, but the index is not being used. I searched the forums for similar issues and found this:
    XMLIndex is not getting used
    However, the post is a couple of years old, and I think that the solution was really specific to the problem. Not that mine isn't. ;)
    Per the forum guidelines, the data/DDL is confidential, and should not be posted, so I opened a SR for it - SR 3-7160281751.
    May I please have some help understanding why the structured XMLIndex is not being used?
    Thanks...

    OK, I did the best I could to clean this up. The last query in this script is the one not using the index. The tables "book_master" and "book_join_temp" are populated, though I didn't include the data here. Do you see anything wrong?
    --Create tables...
    CREATE TABLE book_master OF XMLTYPE
    XMLTYPE STORE AS SECUREFILE BINARY XML
    VIRTUAL COLUMNS (
      isbn_nbr AS ( XmlCast(
                    XmlQuery('declare namespace plh="http://www.mrbook.com/InventoryData/Listing";
                              declare namespace invtdata="http://www.mrbook.com/Inventory";
                              /invtdata:INVENTORY/plh:LIST/plh:ISBN_NBR'
                              PASSING object_value RETURNING CONTENT) AS VARCHAR2(64)) ),
      book_id AS ( XmlCast(
                    XmlQuery('declare namespace plh="http://www.mrbook.com/InventoryData/Listing";
                              declare namespace invtdata="http://www.mrbook.com/Inventory";
                              /invtdata:INVENTORY/plh:LIST/plh:BOOK_ID'
                              PASSING object_value RETURNING CONTENT) AS VARCHAR2(64)) )
    );
    CREATE GLOBAL TEMPORARY TABLE book_join_temp
    (
       isbn_nbr VARCHAR2(64),
       book_id VARCHAR2(64),
       row_num INT,
       PRIMARY KEY(row_num)
    ) ON COMMIT DELETE ROWS;
    --Create indices....
    CREATE INDEX bkm_xmlindex_ix ON book_master (OBJECT_VALUE) INDEXTYPE IS XDB.XMLIndex PARAMETERS ('PATH TABLE path_tab');
    BEGIN
       DBMS_XMLINDEX.registerParameter(
          'myparam',
          'ADD_GROUP GROUP book_record
             XMLTable bk_idx_tab
             XmlNamespaces(''http://www.mrbook.com/InventoryData/Listing'' AS "plh",
                      ''http://www.mrbook.com/Inventory'' AS "invtdata",
                      ''http://www.mrbook.com/BookInfo'' AS "idty",
                      ''http://www.mrbook.com/References'' AS "lclone",
                      ''http://www.mrbook.com/Publishing'' AS "trd",
                      ''http://www.mrbook.com'' AS "mrbook"),
             ''/invtdata:INVT_DATA''
               COLUMNS
                    xml_id    RAW(16)     PATH ''/@XML_ID'',
                    isbn_nbr  VARCHAR(64) PATH ''/plh:LIST/plh:ISBN_NBR'',
                    book_id   VARCHAR(64) PATH ''/plh:LIST/plh:BOOK_ID'',
                    seller_loc_id NUMBER(13,0) PATH ''/plh:LIST/plh:SELLER_LOC_ID'',
                    catg_typ_cd NUMBER(7,0) PATH ''/idty:BK_INFO/idty:CATG_TYP_CD'',
                    CTRY_MKT_LOC NUMBER(7,0) PATH ''/idty:BK_INFO/idty:CTRY_MKT_LOC'',
                    bk_out_of_print_cd NUMBER(7,0) PATH ''/idty:BK_INFO/idty:BK_OUT_OF_PRINT_CD'',
                    reprint_isbn_nbr VARCHAR2(64) PATH ''/idty:BK_INFO/idty:REPRINT_ISBN_NBR'',
                    reprint_book_id VARCHAR2(64) PATH ''/idty:BK_INFO/idty:REPRINT_BOOK_ID'',
                    orig_ed_isbn_nbr VARCHAR2(64) PATH ''/lclone:REFERENCES/lclone:PRINT[child::lclone:PRINT_TYP_CD="160"]/lclone:ISBN_NBR'',
                    orig_ed_book_id VARCHAR2(64) PATH ''//lclone:REFERENCES/lclone:PRINT[child::lclone:PRINT_TYP_CD="160"]/lclone:BOOK_ID'',
                    last_mod_dt TIMESTAMP PATH ''/node()[local-name()="LAST_MOD_DT"]'',
                    subject_catg_code  NUMBER(7) PATH ''/idty:BK_INFO/idty:SUBJECT_CATG_CODE[child::idty:CATG_REF_LVL=1]/idty:SUBJECT_CATG_CODE'',
                    catg_code VARCHAR2(48) PATH ''/idty:BK_INFO/idty:SUBJECT_CATG_CODE[child::idty:CATG_REF_LVL=1]/idty:CATG_CODE'',
                    pub_summ  XMLType   PATH ''/trd:PUB_SUMM'' VIRTUAL
                XMLTable trd_summ_entr_ix
                XmlNamespaces(''http://www.mrbook.com/InventoryData/Listing'' AS "plh",
                      ''http://www.mrbook.com/Inventory'' AS "invtdata",
                      ''http://www.mrbook.com/BookInfo'' AS "idty",
                      ''http://www.mrbook.com/References'' AS "lclone",
                      ''http://www.mrbook.com/Publishing'' AS "trd",
                      ''http://www.mrbook.com'' AS "mrbook"),
                   ''/trd:PUB_SUMM/trd:PUBLC'' PASSING pub_summ
                COLUMNS
                    pub_yrmo  VARCHAR2(6) PATH ''/@PUBLC_YRMO''');
    END;
    ALTER INDEX bk_xmlindex_ix PARAMETERS('PARAM myparam');
    CREATE INDEX ejt_isbn ON book_join_temp(isbn_nbr);
    CREATE INDEX ejt_book ON book_join_temp(book_id);
    --Using the PATH table instead of structured index???
    SELECT
        ej.row_num,
        e.xml_id,
        e.isbn_nbr,
        e.book_id,
        e.seller_loc_id,
        e.seller_loc_id AS mkt_seller_id,
        e.catg_typ_cd,
        e.CTRY_MKT_LOC,
        e.bk_out_of_print_cd,
        e.reprint_isbn_nbr,
        e.reprint_book_id,
        e.orig_ed_isbn_nbr,
        e.orig_ed_book_id,
        g.OBJECT_VALUE AS invt_data
    FROM
        book_master g,
        book_join_temp ej,
        XmlTable(
        XmlNamespaces('http://www.mrbook.com/InventoryData/Listing' AS "plh",
                      'http://www.mrbook.com/Inventory' AS "invtdata",
                      'http://www.mrbook.com/BookInfo' AS "idty",
                      'http://www.mrbook.com/References' AS "lclone",
                      'http://www.mrbook.com' AS "mrbook"),
          '/invtdata:INVENTORY'
        PASSING g.OBJECT_VALUE
        COLUMNS
           xml_id PATH '@XML_ID',
           isbn_nbr VARCHAR2(64) PATH 'plh:LIST/plh:ISBN_NBR',
           book_id VARCHAR2(64) PATH 'plh:LIST/plh:BOOK_ID',
           seller_loc_id NUMBER PATH 'plh:LIST/plh:SELLER_LOC_ID',
           catg_typ_cd NUMBER PATH 'idty:BK_INFO/idty:CATG_TYP_CD',
           CTRY_MKT_LOC NUMBER PATH 'idty:BK_INFO/idty:CTRY_MKT_LOC',
           bk_out_of_print_cd NUMBER PATH 'idty:BK_INFO/idty:BK_OUT_OF_PRINT_CD',
           reprint_isbn_nbr NUMBER PATH 'idty:SUBJ_DTL/idty:SUCSR_DUNS_NBR',
           reprint_book_id NUMBER PATH 'idty:SUBJ_DTL/idty:SUCSR_SUBJ_ID',
           orig_ed_isbn_nbr VARCHAR2(64) PATH '/lclone:REFERENCES/lclone:PRINT[child::lclone:PRINT_TYP_CD="160"]/lclone:ISBN_NBR',
           orig_ed_book_id VARCHAR2(64) PATH '/lclone:REFERENCES/lclone:PRINT[child::lclone:PRINT_TYP_CD="160"]/lclone:BOOK_ID'
    ) e
    WHERE
         ej.isbn_nbr = e.isbn_nbr
    OR  ej.book_id = e.book_id;
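    One way to see what the optimizer is actually choosing for that last query is to look at its plan (a hedged sketch; prefix the statement above with EXPLAIN PLAN FOR, then display the plan):
        EXPLAIN PLAN FOR <the query above>;
        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    If the plan shows the XMLIndex path table rather than the bk_idx_tab group table, that matches the "--Using the PATH table instead of structured index???" comment above.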
