Enabling SNMP on HA Secondary Server

Hi All,
I would like to enable SNMP on my HA secondary server.
When I set the SNMP string and tick the "apply on all nodes" box, it does not get applied to the secondary server.
I was just wondering if this is something to do with it being the secondary server.
Is it OK to log on to the secondary server in Serviceability and just set it up there, or would this cause problems between the primary and secondary server?
Sorry for the newbie question, but I have not used servers in HA mode before.
thanks for any help
Regards
Amanda Lalli-Cafini

/etc/snmp/conf is the config directory for SNMP. All you have to do is add the SNMP startup script to your rc3.d directory.
This is what mine looks like (an install sketch follows the script):
#!/sbin/sh
# Copyright (c) 1997-2001 by Sun Microsystems, Inc.
# All rights reserved.
#ident "@(#)init.snmpdx 1.13 01/11/02 SMI"

SNMP_RSRC=/etc/snmp/conf/snmpdx.rsrc

case "$1" in
'start')
        # Only start the master agent if the resource file exists and is non-trivial
        if [ -f ${SNMP_RSRC} -a -x /usr/lib/snmp/snmpdx ]; then
                if /usr/bin/egrep -v '^[          ]*(#|$)' ${SNMP_RSRC} > \
                    /dev/null 2>&1; then
                        /usr/lib/snmp/snmpdx -y -c /etc/snmp/conf
                else
                        # Do not start snmpdx if snmpdx.rsrc contents are
                        # trivial.
                        exit 0
                fi
        fi
        ;;
'stop')
        /usr/bin/pkill -9 -x -u 0 '(snmpdx|snmpv2d|mibiisa)'
        ;;
*)
        echo "Usage: $0 { start | stop }"
        exit 1
        ;;
esac
exit 0
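To hook it into the run level, the usual Solaris pattern is to put the script in /etc/init.d and link it into /etc/rc3.d with an S-prefixed name. A minimal sketch, assuming the stock file names above (adjust the path and sequence number for your box):
# install the init script and make it executable
cp init.snmpdx /etc/init.d/init.snmpdx
chmod 744 /etc/init.d/init.snmpdx
# link it into run level 3 so snmpdx starts at boot
ln -s /etc/init.d/init.snmpdx /etc/rc3.d/S76snmpdx
# start it by hand to test
/etc/init.d/init.snmpdx start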

Similar Messages

  • Automatically add new SQL data sources on secondary server

    Hi there. I have found an issue that I cannot resolve.
    We have two DPM servers set up to be secondary servers for each other. It has worked well for months and I was very happy with the setup.
    What I had not realized was that if a new SQL database was created and picked up by the local DPM server, the secondary server did not pick up the new data source.
    Is there a way to get a secondary server to refresh the list of data sources it should be backing up, without having to manually update the Protection Group once a week?
    Many thanks
    Chris

    Hi,
    Have you attempted the following? From the secondary DPM server, modify the protection group in question.
    From the Select Group Members step, expand the needed SQL server.
    Select the SQL instance (ensure the check box next to the SQL instance is selected), then right-click.
    Now click Turn on auto protection (Figure 1) to enable it.
    Figure 1: Enabling SQL auto protection from the secondary DPM server.
    Figure 2: After SQL auto protection is enabled.
    The below may also assist:
    https://technet.microsoft.com/en-us/library/hh758042.aspx
    Regards, Dwayne Jackson II [MSFT]

  • How to start SNMP on LDAP Directory Server? Does it run on port 161?

    When I click the 'SNMP' tab in the configuration it says "The snmp service has been started". But when I try to stop or restart it, I get "Error occurred while stopping/restarting subagent. Check the configuration data you entered".
    Where do I check the configuration data?
    Do I need to enable SNMP somewhere?
    I want to browse the SNMP MIB from our application.
    For that I'm trying to run the SNMP agent on the directory server.
    Any help?
    Thanks,
    Ramesh

    The SNMP service can be stopped from the configuration tab, but that does not stop the SNMP subagent; it has to be stopped explicitly. If you are running on a Windows machine, go to the Control Panel and stop the service there.
    If you are running the Directory Server on a UNIX host, the subagent must be given the hostname where the master agent resides and its port number so it can contact it.
    Regarding the port number SNMP runs on: on a UNIX host, check the /etc/services file for the SNMP service entry.
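    For example, a quick way to confirm the registered SNMP ports on a UNIX host (a hedged sketch; the exact entries vary by system):
    # look up the SNMP-related entries in /etc/services
    grep -iw snmp /etc/services
    # typical output includes a line such as:  snmp  161/udp
    # an agent already bound to 161 can be spotted with, e.g.:  netstat -an | grep 161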

  • Disable SNMP agent in Job Server

    The SNMP agent was enabled in the job server. Since it is enabled, we cannot access the Windows SNMP agent; Windows SNMP and DS SNMP both use port 161. We have an application which shows statistics for all of our servers, but for the DS server it is not showing all the statistics.
    So we are planning to disable the SNMP agent in the job server. Are there any implications if I disable the SNMP agent?
    DS Version 4.0

    It should not create any issues.

  • Webserver Plugin not choosing secondary server

    Has anyone any idea why my web server plugin will sometimes choose a secondary server and sometimes not?
    I've got no cluster issues (well, no lost multicast packets) that I can see, and the plugin debug information indicates that servers are available, yet I only have a single JVM chosen and stored in my cookie.
    This is what I'm seeing from the cluster debug information - just a primary chosen, and everything else looks normal.
    Has anyone any suggestions?
    Pete
    Properties loaded from path: '\\?\C:\IISPlugin\PRD\iisproxy.ini'
    Query String: '__WebLogicBridgeConfig'
    WebLogic Cluster List:
    1. Host '172.23.1.12' Port: 7030 SecurePort: 7031 Primary
    Dynamic Server List:
    1. Host: '172.23.1.12' Port: 7030 SecurePort: 7031 Status: OK
    2. Host: '172.23.1.12' Port: 7040 SecurePort: 7041 Status: OK
    3. Host: '172.23.1.11' Port: 7010 SecurePort: 7011 Status: OK
    4. Host: '172.23.1.11' Port: 7020 SecurePort: 7021 Status: OK
    Static Server List:
    1. Host: '172.23.1.11' Port: 7010 SecurePort: 7010
    2. Host: '172.23.1.11' Port: 7020 SecurePort: 7020
    3. Host: '172.23.1.12' Port: 7030 SecurePort: 7030
    4. Host: '172.23.1.12' Port: 7040 SecurePort: 7040

    Thanks Pete. If you are saying that your JSESSIONID cookie only contains a single JVM ID, then this is usually an indication that the WL cluster node that created the session did not successfully create a replicated session on its chosen secondary server (or it believes it is not configured to replicate sessions at all -- although your config looks good). As you probably know, it is the WL managed server that determines whether a secondary JVM ID is populated in the session cookie, based upon successful replication. The web server plugin will only use the secondary JVM ID in the session cookie if it fails to communicate with the primary server identified in the session cookie. If the web server is able to communicate with the primary owner of the session (the dynamic server list includes the server and the network is good), it will always route the request to the primary.
    A couple of other baseline questions:
    (a) What version of WebLogic are you running?
    (b) Have you been able to confirm that replication of session data is truly occurring between your nodes? Depending on your version of WL, this can be verified in the admin console by clicking on Clusters, clicking on your cluster name, then selecting the Monitoring tab. If replication of session information is occurring, for each server that has an active session you will see information in the form "managed_server_name:number_of_sessions" under the "Secondary Distribution Names" column. This is an indication of sessions that have been replicated. If replication is not enabled or not working properly, this column will be blank. You can also enable cluster debugging to see replication messages logged to your server log: in the admin console go to Servers, pick a managed server, click on the Debug tab, expand the "weblogic -> core" part of the tree and select "cluster". This will enable all cluster debug messaging.
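    For reference, when replication has succeeded the session cookie carries two JVM ID hashes rather than one. A hedged illustration of the general shape of the cookie (the values are placeholders, not taken from your environment):
    JSESSIONID=<sessionId>!<primaryJvmIdHash>!<secondaryJvmIdHash>
    If only the first hash is present, the plugin has nothing to fail over to, which matches the single-JVM behaviour you are seeing.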

  • Enabled SNMP trap

    Hi Experts,
    When I configure SNMP traps on switches it shows a list of commands. What exactly are these?
    snmp-server enable traps snmp authentication linkdown linkup coldstart warmstart
    snmp-server enable traps tty
    snmp-server enable traps vtp
    snmp-server enable traps vlancreate
    snmp-server enable traps vlandelete
    snmp-server enable traps stpx
    snmp-server enable traps port-security
    snmp-server enable traps config
    snmp-server enable traps entity
    snmp-server enable traps copy-config
    snmp-server enable traps fru-ctrl
    snmp-server enable traps flash insertion removal
    snmp-server enable traps syslog
    snmp-server enable traps bridge
    snmp-server enable traps envmon fan shutdown supply temperature status
    snmp-server enable traps hsrp
    snmp-server enable traps bgp
    snmp-server enable traps pim neighbor-change rp-mapping-change invalid-pim-message
    snmp-server enable traps ipmulticast
    snmp-server enable traps msdp
    snmp-server enable traps rtr
    snmp-server enable traps vlan-membership
    Does it cause more CPU utilization? Do I need to enable SNMP traps to monitor the network using SolarWinds NPM? I have configured the community string and the snmp-server host address.
    Thanks
    Vipin

    Hi Vipin,
    We usually configure SNMP traps to monitor network reachability and availability. Whenever anything goes wrong on the device, whether it is a link-down situation or an issue with a protocol running on the device, it generates an SNMP trap and notifies you, provided you have configured an snmp-server host to receive the notifications/traps.
    To configure the SNMP host, the command you need is (a fuller, hypothetical example is sketched a little further below):
    (config)#snmp-server host
    E.g.:
    snmp-server enable traps snmp authentication linkdown linkup coldstart warmstart
    The trap above will send you the following information:
    > Authentication: if anyone tries to poll the device using the wrong community string
    > linkdown/linkup: if any of the interfaces/ports/links goes down, it will notify you
    snmp-server enable traps bgp
    The trap above will send you notifications regarding problems with BGP running on your device.
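    Putting it together, a hypothetical trap-receiver configuration might look like the sketch below (the IP address and community string are placeholders, not values from your network):
    (config)#snmp-server community MYCOMMUNITY ro
    (config)#snmp-server host 192.0.2.10 version 2c MYCOMMUNITY
    (config)#snmp-server enable traps snmp authentication linkdown linkup coldstart warmstart
    With that in place, the enabled trap classes are sent to the receiver at 192.0.2.10 (your SolarWinds NPM server, in your case).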
    Now to the CPU part: yes, it may spike the CPU at times if regular polling is done from the management server or if a MIB has a long output. You can use SolarWinds NPM for this; however, if it causes high CPU you should open a TAC case with the NMS team to have the issue resolved.
    To learn more about the MIBs on your device you can execute 'show snmp mib' on the device, and you can translate the MIBs using the following link:
    http://tools.cisco.com/Support/SNMP/do/BrowseOID.do?local=en&translate=Translate&objectInput=1.3.6.1.2.1.15
    Kindly let me know in case you have any other doubts.
    Thanks & Regards,
    Nikhil Gakhar

  • Enable snmp traps on Nexus1000V

    Hello community,
    I have the following issue. I want to configure SNMP traps regarding stpx (root-inconsistency, loop-inconsistency) on a Cisco Nexus 1000V.
    The command "show snmp traps" lists stpx as a trap type that could be configured but which is not enabled at the moment.
    MKBE1NX1# sh snmp trap
    Trap type                                           Enabled
    entity               : entity_mib_change               Yes         
    entity               : entity_module_status_change     Yes         
    entity               : entity_power_status_change      Yes         
    entity               : entity_module_inserted          Yes         
    entity               : entity_module_removed           Yes         
    entity               : entity_unrecognised_module      Yes         
    entity               : entity_fan_status_change        Yes         
    entity               : entity_power_out_change         Yes         
    link                 : linkDown                        Yes         
    link                 : linkUp                          Yes         
    link                 : extended-linkDown               Yes         
    link                 : extended-linkUp                 Yes         
    link                 : cieLinkDown                     No          
    link                 : cieLinkUp                       No          
    link                 : connUnitPortStatusChange        Yes         
    rf                   : redundancy_framework            Yes         
    aaa                  : server-state-change             No          
    license              : notify-license-expiry           Yes         
    license              : notify-no-license-for-feature   Yes         
    license              : notify-licensefile-missing      Yes         
    license              : notify-license-expiry-warning   Yes         
    zone                 : unsupp-mem                      No          
    upgrade              : UpgradeOpNotifyOnCompletion     No          
    upgrade              : UpgradeJobStatusNotify          No          
    feature-control      : FeatureOpStatusChange           No          
    rmon                 : risingAlarm                     No          
    rmon                 : fallingAlarm                    No          
    rmon                 : hcRisingAlarm                   No          
    rmon                 : hcFallingAlarm                  No          
    config               : ccmCLIRunningConfigChanged      No          
    snmp                 : authentication                  No          
    bridge               : topologychange                  No          
    bridge               : newroot                         No          
    stp                  : inconsistency                   No          
    stpx                 : loop-inconsistency              No          
    stpx                 : root-inconsistency              No   
    The only snmp traps that can be configured on the system are:
      aaa              Module notifications enable
      config           Module notifications enable
      entity           Module notifications enable
      feature-control  Module notifications enable
      license          Module notifications enable
      link             Module notifications enable
      rf               Module notifications enable
      rmon             Module notifications enable
      snmp             Module notifications enable
      upgrade          Module notifications enable
      zone             Module notifications enable
    Nothing about stpx... Is there some other way to configure more traps?
    Thanks in advance,
    Katerina

    Looks like you could be hitting CSCsz96130, "show snmp trap displays five unconfigurable types bridge / stpx", and it looks like support has been added starting with 5.0(2)N1(1).
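    Assuming you get onto a release where the stpx trap class is actually configurable (an assumption based on the bug note above, not something verified on the 1000V), enabling it would presumably follow the same pattern as the other trap classes:
    conf t
    snmp-server enable traps stpx
    Until then there is no CLI knob for it on the affected releases, which matches the list you posted.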

  • Got error: Location Code is not enabled in the XML Gateway Server

    Hi,
    When I perform compliance rule screening on GTM (Global Trade Management), which I have defined as a Trading Partner on EBS R12.1.3,
    GTM can send a response to EBS, but I see the following error message via the Transaction Monitor in the EBS workflow:
    [The Standard:OAG, Transaction Type:ITM , Transaction SubType:EXPORT_COMPLIANCE and Location Code GTM6.2 is not enabled in the XML Gateway Server. Pls check your Setup.]
    On EBS, I defined an ITM Partner for GTM named 'GTM6.2', according to the document created by you: ITM Setup.doc,
    and I also finished the following steps:
    1. ITM Application Users
    2. ITM Partner Service Types
    3. ITM Parameter Setup
    4. Order Type Creation: ITM Only
    5. Customer Creation: create a customer named 'GTM6.2', and also enter 'GTM6.2' as the EDI Location
    6. Define Transactions on XML Gateway as follows:
    <Header>:
    Party Type: Customer
    Transaction Type: ITM
    Transaction Sub Type: EXPORT_COMPLIANCE
    <External Transactions (Lines)>:
    Standard Code: OAG
    External Transaction Type: ITM
    External Transaction Sub Type: EXPORT_COMPLAINCE
    Queue: APPLSYS.ECX_IN_OAG_Q
    7. Define Trading Partners on XML Gateway as follows:
    <Header>:
    Trading Partner Type: Customer
    Trading Partner Name: GTM6.2
    <Trading Partner Details>:
    Transaction Type: ITM
    External Transaction Type: ITM
    External Transaction Sub Type: EXPORT_COMPLAINCE
    Map: WSHITEIN
    Source Trading Partner Location Code: GTM6.2 (same as the value what defined in EDI Location for customer)
    I think the combination of External Transaction Type, External Transaction Subtype, Standard Code, and Source Trading Partner Location Code
    should exist once the above steps are done, so I have no idea why the error is raised.
    Can someone tell me how to trace the error?
    Thanks,
    Rambler

    user12254038 wrote:
    Can you send any supporting documents on ITM Setup? My client is implementing ITM for the first time and I wanted to know more details on how it works in the R12.1.3 version of EBS?
    Oracle International Trade Management (ITM) Partner Integration: Specifications [ID 572524.1]
    International Trade Management Integration [ID 259691.1]
    How Can I Tell if International Trade Management is Installed or Not? [ID 742539.1]
    What are the Required Setups for International Trade Management (ITM) Flows in Shipping Execution? [ID 782861.1]
    What Patches Provide the Latest Fixes and Enhancements for ITM Adapter? [ID 465122.1]
    How to Generate Debug Information From ITM Adapter for XML Processing [ID 738925.1]
    How to Generate Debug File for International Trade Management (ITM) [ID 1294853.1]
    Thanks,
    Hussein

  • Primary and Secondary Server - odbc.ini entry

    Hi BI Experts,
    We are using OBIEE 10g and the source is Teradata 13. We have to configure the DSN in such a way that if the primary server A goes down, the BI application should point to secondary server B, on which the report can run. Please note that servers A and B will always be in sync. Is there any way to achieve this?
    Kindly help me on this.
    Thanks.

    "Hello Torsten, if you not able to give proper solution, please dont start to reply"
    Why the rudeness? If your problem was simple or the solution obvious, it wouldn't happen in the first place. Replication is one of the most complex things in all of software to troubleshoot and can be caused by an infinite number of things. On top of that, you're giving no details whatsoever, so there's almost no way to know what's going on -- we don't have crystal balls or know the magic incantation that will miraculously make it work. You have to help us help you; simply saying it's broken, and answering yes/no to probing questions, is pretty much useless. You need to provide as much detail as possible so that we can help. The keyword here is "help" though; we can't do it for you.
    Jason | http://blog.configmgrftw.com

  • Script to redirect a Application to secondary server if primary server is not available in a .ini file

    Hi Team,
    I want to write a script for the .ini file of an Excel add-in on the users' Windows systems, so that if the primary server is not available it automatically redirects to the secondary server.
    Can someone help me with this?

    Hi Jrv,
    As I said, it's an add-in (Twuploader) in Excel which uses an ini file to connect to a server. I want the add-in to connect to another available server when the primary is not available. This is the ini file content below:
    [TradeWeb]
    Host=Server1
    Port=8002
    User=abcd
    [Upload]
    AutoMode=N
    UserInt=5
    [Toolbar]
    Visible=Y
    Row=26
    Pos=1
    Top=-32000
    Left=-32000
    Let me know how to edit this ini file so that if Server1 is unavailable it connects to Server2 instead.

  • 3.3.3 to 4.0.1.42 not working on secondary server

    I upgraded from 3.3.3 to 4.0.1.42 but still with old Base Image for now:
    Cisco Secure ACS 4.0.1.42
    Appliance Management Software 4.0.1.42
    Appliance Base Image 3.3.1.8
    CSA build 4.0.1.543.2 (Patch: 4_0_1_543)
    I upgraded my master server first; that worked fine (after resolving some problems). When I tried to upgrade the "secondary" server, I got the following (from the CLI, because the web interface does not work while the services are not started):
    CSAdmin starting
    CSAuth starting
    CSDbSync starting
    CSLog stopping
    CSMon starting
    CSRadius starting
    CSTacacs stopped
    CSAgent stopped
    Appliance upgrade in progress...
    The "upgrade in progress" is for almost 60 minutes now.
    How do I recover the Web access?

    Hi.
    OK, it is no longer showing "upgrade in progress" and the CPU went down from 100% to 0%. Some services are still in these states:
    CSAdmin stopped
    CSAuth stopped
    CSDbSync starting
    CSLog starting
    CSMon starting
    CSRadius starting
    CSTacacs stopped
    I tried stopping/starting/restarting them without success, getting this message:
    xxxxxxx> restart all
    CSAdmin is not running
    CSAuth is not running
    Error: Failed to stop service: rc = [1052] The requested control is not valid for this service.
    So I tried to do a shutdown. After coming back up, CPU is back to 100% and services look this way:
    CSAdmin starting
    CSAuth starting
    CSDbSync starting
    CSLog stopping
    CSMon starting
    CSRadius starting
    CSTacacs stopped

  • JavaScript is required. Enable JavaScript to use OAM Server.

    I want to open an Excel spreadsheet stored on a WebDAV server using OAM (Oracle Access Manager) authentication.
    It works fine on every PC or Mac with Excel 2010 or 2013, but it doesn't work in Excel 2007.
    Excel 2007 always gives the error "JavaScript is required. Enable JavaScript to use OAM Server.", so I cannot distribute the file.
    The system with Excel 2007 has ActiveX scripting enabled, but I cannot discover how to open the file without the error.
    Any help is appreciated.

    Hi,
    We had a customer that ran into the exact same issue and symptoms described in this thread. The issues occurred after some upgrades were made to their browsers. Our customer was using Forms/Reports 11.1.2.1 (11gR2) and OAM 11.1.1.5. I'm not sure what version of OAM you are currently using?
    The issue was caused by a bug in OAM 11.1.1.5. The problem is exactly as pbell was explaining: looking at the failed HTML/JavaScript code generated by OAM, it was simply poorly generated code. However, install Bundle Patch 2 (BP02) onto OAM and you'll be fine! This updates your OAM to version 11.1.1.5.2.
    Oracle Support documents on the issue and bug:
    - There is an Oracle Support article describing the issue: 1447194.1
    - Oracle Support bug number: 13254371
    To fix the issue:
    - Apply OAM patch 13115859. It's a generic patch that will work on any environment type.
    - If you use WebGates for your deployment, look into installing patch 13453929 as well.
    I wrote an article on how to install the patch: http://pitss.com/us/2013/04/04/oam-error-enable-javascript-to-use-oam-server/
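    For reference, applying the patch generally follows the standard OPatch flow; a hedged sketch, with the Oracle home path and downloaded zip name as placeholders for your environment:
    # stop the OAM managed server(s) and the AdminServer first
    export ORACLE_HOME=/u01/oracle/Middleware/Oracle_IDM1   # placeholder path
    unzip p13115859_<version>_Generic.zip && cd 13115859
    $ORACLE_HOME/OPatch/opatch apply
    $ORACLE_HOME/OPatch/opatch lsinventory   # confirm the patch is listed
    Then restart the servers and re-test from the Excel 2007 client.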
    I hope this helps!
    Thank you,
    Gavin

  • SQL Server Agent Jobs High Availability - Crash on server 1, automatically resume from secondary server

    Hi All,
    1) We have 2 physical servers where SSIS is installed (not clustered). If the machine where jobs are running (server 1) crashes, I want the SQL Server Agent on the secondary server to automatically resume the job - is that possible?
    2) Can I AUTOMATICALLY resume the job from the secondary SSIS server on a crash of my primary SSIS server?
    3) Can it resume from the same point, or do we have to restart from the beginning?

    The functionality which you are looking for is not available in SQL Server as a built-in feature.
    You need to do this programmatically.
    You need to put savepoints in your SSIS packages/jobs, logging the steps that have completed and the steps that have started. Any time the job is restarted it first has to check the logging table, see how far it got, and restart from the step where it left off. The packages/jobs have to be built with that ability.
    You can also set up another job, which only ever runs on SQL Agent start-up, that checks for all jobs which were stopped/cancelled and sees whether each one completed or stopped partway through. This will make sure that if SQL fails over, or the agent is restarted, all the jobs which were cancelled are restarted.
    Regards, Ashwin Menon. My blog - http://sqllearnings.com
    This seems to be one of the solutions, but as already mentioned by others you have to build this logic yourself - you won't get this functionality out of the box.
    Sarabpreet Singh Anand, SQL Server MVP

  • Replication overwrites the AAA servers table in the secondary server

    Hi,
    I've configured two ACS servers with replication, but I noticed that when the replication takes place it overwrites the AAA Servers table configured in the network configuration of the secondary server, and that makes the next replication fail because the two servers end up with the same AAA Servers configuration. If I uncheck "Network Configuration Device tables" and "Network Access Profiles" (which include the AAA Servers table) in the "Database Replication Setup", I also lose the replication of new network devices that are added on the master server.
    Do you know how I can exclude only the AAA Servers table from the replication?
    The other thing is that I configured the outbound replication as "Automatically triggered cascade". I'm not sure if this means that at the exact moment there is a change on the primary server it will replicate it to the secondary, because if that is the case it is not doing so.
    Thanks in advance for your help

    Hi,
    I understand, thanks a lot for making that clear!
    I now have another situation and I was wondering if you can help me. I made some changes to the AAA servers trying to solve it, but I wasn't able to, so I put the servers back exactly the way they were configured when the replication was working; however, now it is not. On the master server I get this message:
    ERROR ACS 'LACSLVBCDVAS007' has denied replication request
    and on the secondary server I get this:
    ERROR Inbound database replication from ACS 'lacslvbcpvas011' denied - shared secret mismatch
    I've checked that the same key is configured on both and they match; I've deleted the AAA servers, configured them again, and restarted the services, but the problem remains. Do you have any idea what this could be?
    Thanks in advance for your help.
    Best Regards,

  • In log shipping, what is the load delay period on the secondary server - "Skipping log backup file since load delay period has not expired ...."

    During log shipping, the job on the secondary server ran successfully BUT gives this information:
    "Skipping log backup file since load delay period has not expired ...."
    What is this "load delay period"? Can we configure it somehow, somewhere?
    NOTE: The value on the "Restore Transaction Log" tab, "Delay restoring backups at least", is the default (zero minutes).
    Thanks

    How to get the LSBackup, LSCopy, and LSRestore jobs back in sync...
    Last I posted, the issue was that my trn backups were not being copied from primary to secondary.
    Upon further inspection of the LS-related tables in msdb, I found the following likely candidates for adjustment:
    1) dbo.log_shipping_monitor_secondary, column last_copied_file.
    Change the last_copied_file column to something older than the file that is stuck. For example, the value in the table was
    E:\SQLLogShip\myDB_20140527150001.trn
    I changed last_copied_file to E:\SQLLogShip\myDB_20140525235000.trn. Note that this is just a made-up file name that is a few minutes before the actual file that I would like to restore (myDB_2014525235428.trn). 4 minutes and 28 seconds before, to be exact.
    LSCopy runs and voila! Now it is copied from primary to secondary. That appears to be the only change needed to get the copy going again.
    2) For LSRestore, see the msdb table dbo.log_shipping_monitor_secondary and change
    last_restored_file
    Again I used the made-up file E:\SQLLogShip\myDB_20140525235000.trn.
    LSRestore runs and my just-copied myDB_2014525235428.trn is restored.
    ** Note that
    dbo.log_shipping_secondary_databases also has a last_restored_file column - this did not seem to have any effect, though I see that it updates after completing the above and LSRestore has run successfully, so now it is correct as well.
    3) The LSBackup job is still not running; the job still has a last run date in the future. I could just leave it and eventually it would come right, but I made a fairly significant time change, plus it's all an experiment... back to msdb.
    Look at dbo.sysjobs and get the job_id of your LSBackup job.
    Edit dbo.sysjobschedules - change next_run_date and next_run_time as needed to a date/time before the current time, or to when you would like the job to start running.
    I wouldn't be so cavalier with data that was important, but that's the benefit of being in Test, and it appears that these time comparisons are very rudimentary - a value in the relevant log shipping table and the name of the trn file. That said, if you were facing a problem of this nature due to lost trn files, corruption, or some similar scenario, this wouldn't fix your problem, though it _might_ allow you to continue. But in my case I know I have all the trn files; it's just the time that changed, in this case on my primary server, and thus the names of the trn logs got out of sync.
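    For anyone wanting to script the first two adjustments rather than edit the tables by hand, a hedged sketch using sqlcmd (the server name is a placeholder, and the database and file names are the ones from this example; adjust for your own environment, and only do this in a test system):
    sqlcmd -S SecondaryServer -E -Q "UPDATE msdb.dbo.log_shipping_monitor_secondary SET last_copied_file = N'E:\SQLLogShip\myDB_20140525235000.trn', last_restored_file = N'E:\SQLLogShip\myDB_20140525235000.trn' WHERE secondary_database = N'myDB'"
    After that, run the LSCopy and LSRestore jobs and verify that the stuck trn file is copied and restored.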
