EOL clarification

   When looking at the EoL notes for, say, a 3750G switch, and it says End of Vulnerability/Security Support: OS SW, January 31, 2016 - does this mean there will be no more IOS updates for this switch model after that date, even for security holes or vulnerabilities? The hardware says it is not EoL until 2018. There seems to be some contradiction in the Cisco docs: if you look at the EoL for the 12.2.55SE software it says EoL isn't until 2018, while the hardware EoL SW date is 2016 for the 3750G series.

Glen
Does this mean there will be no more IOS updates for this switch model after this date even for security holes or vulnerabilities ?  
Yes, that's what it means.
In terms of the software EoL announcement: because the IOS image can run on multiple switch models, it is not tied to the hardware. You can see from the EoL for the software that it covers a lot more switch models.
So if a vulnerability in the software specifically affected the 3750G (or the other models in that EoL announcement), you are out of luck.
If the vulnerability affected other switches not included in that EoL, then it would be addressed up until 2018.
So you could get lucky, i.e. if other switch models were also affected then the software would be patched, but, as far as I know, what they wouldn't do is then test it on your specific model.
So you would be running it at your own risk.
That has always been my understanding of how it works anyway, so if anyone knows differently hopefully they will add to this.
Of course it may just be an inaccuracy in their EoL documents :-)
Jon

Similar Messages

  • How to delete the EOL in Vim

    This might sound really stupid, but how do you delete a line break in Vim? Deleting the EOL after the cursor, when you're at the end of a line, seems to be 'dw', as that will delete to the next word and therefore delete the EOL along the way. However, I can't figure out how to delete the EOL before the cursor, when you're at the start of a line. 'db' would delete the EOL, but would also delete to the beginning of the last word on the previous line. The only way I've found is entering Insert mode and pressing Backspace. Surely this isn't the best method for doing this...

    I don't think any of those does what OP wanted - joining upwards.
    @bernarcher
    I can't get the negative numbers to work. The manual says only:
    gJ    Join [count] lines, with a minimum of two lines.
          Don't insert or remove any spaces. {not in Vi}
    Does e.g. '-3gJ' really work for you? Maybe I have some weird settings in my .vimrc.
    EDIT: I was thinking about something like
    map II i<Backspace><Esc>
    Of course you can change the 'II' mapping and maybe add '0':
    map II i<Backspace><Esc>0
    or whatever tells vim to go to the first char of the line.
    '[ count ]II' won't work.
    Last edited by karol (2011-01-24 21:43:05)
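    A minimal sketch of the upward join itself (not from the thread; the mapping name below is arbitrary): in normal mode, kJ joins the current line onto the previous one with a single space, and kgJ does the same without adding a space, so a .vimrc mapping could look like:
    " join the current line onto the previous one without inserting a space
    nnoremap <silent> <leader>j kgJ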

  • Domain Guideance and Clarification using SVN and an Export suggestion

    Hello Oracle SQL Data Modeler Support,
    Apologies if this has been documented somewhere and I have missed reading it, but I have gone through the User Guide and cannot find the clarification I want regarding domains.
    1) WHAT IS BEST PRACTICE TO SAVE WHEN USING SVN
    From the forum I have picked up that the domains file is in the following directory:
    ~\datamodeler\datamodeler\types
    File name is 'defaultdomains.xml'
    When I come to save the file using SVN I get 'Choose versioned folder for storing system types'
    I assume this is where the domains file is stored.
    I require the Domains to be available centrally to all Designs I create, what should I do?
    a) Set the folder to ~\datamodeler\datamodeler\types
    b) Create a design called 'Domains' and store it in this folder
    c) Any thing you may suggest
    2) EXPORT OF DOMAIN FILE SUGGESTION
    This should be a quick win for you: can you please add an Export Domains function? It seems this needs to do no more than make a copy of the defaultdomains.xml file and create it in a specified export directory.
    That will avoid having to go through the forum to find out that the defaultdomains.xml file needs to be copied and transferred over for new SQL Data Modeler installations.

    Hello,
    I require the Domains to be available centrally to all Designs I create, what should I do?
    The default location is fine if SVN is not used and if all designs are used only on that computer.
    If versioning is used then it's better to have a separate directory for domains, and this directory shouldn't be part of any design's directory - i.e. for designs you can have directories c:\des_1, c:\des_2 ... c:\des_n - one directory per design, and that directory will contain the design DMD file and design folder. For domains you can have a directory c:\DM_Sys_types, and you need to set this directory in "Tools>Preferences>Data Modeler>system types directory" - logical types, RDBMS sites and scripts will also be stored there.
    Philip
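    As a manual workaround for the export suggestion above, a one-line sketch (paths are hypothetical; adjust to where your Data Modeler install keeps its types directory):
    copy "C:\datamodeler\datamodeler\types\defaultdomains.xml" "D:\DM_export\defaultdomains.xml"
    Copying the file back into the types directory of a new installation is the transfer the original post describes.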

  • BI Java Installation: Clarification Needed!

    I'm wondering if someone could help clear this up for me:
    We have installed NW2004s SPS08 on Solaris with only the usage type EP (AS Java/EP). A while back I was tasked with connecting our BW ABAP system to our Portal ("Integration into the Portal" - transaction SPRO etc.).
    After starting this task I noticed I was missing things that the instructions were telling me to configure, i.e. items related to BI that weren't in the Portal, such as certain roles, the BI Repository Manager, etc.
    Reading around, it seemed like I need to install the BI usage type.
    I have now been tasked with another installation of the Portal (NW2004s SR1 this time), but am trying to head off the problems I'm experiencing trying to connect the Portal to our BW ABAP system. This will be a Java only installation. I've read that BI-Java requires EP and AS Java, and that if I install the BI-Java usage type, EP and AS-Java will be installed automatically.
    My question is, if I do the BI-Java installation and it automatically installs EP/AS-Java, will the Portal still act the same way as it does in my EP/AS-Java only installation I already have? We have many plans to use the Portal as an entry point for all of our backend systems, so if the Portal's capabilities are not what we see already (in our first installation of just EP/AS-Java) then we will have problems.
    Any clarification is greatly appreciated and I will award points accordingly.
    THANKS!
    Beau.

    I think that's what I needed to know.
    I was actually wondering about DI as well. Our developers are having problems deploying .EAR files to our original EP-only install. They can deploy .PAR files with no problems, but .EAR files always error out. Maybe having DI will solve this problem as well? I'm a little concerned about the hardware capacity of this box with BI, EP and DI all installed on it. I had contacted SAP about installing DI a while back and basically they told me to install it on a separate server, by itself. We're running a Sun Enterprise 420R with 4 GB of memory and a 450 MHz processor for this new installation. Do you think this box is capable of handling EP, BI and DI (AS-Java)?
    Thanks for your help!

  • Clarifications in Asset Accounting

    Dear Experts,
    Please clarify below questions.
    1) What is the difference between Depreciation Area and Depreciation Key?
    2) What is the importance of Recalculate value button in Asset Accounting?
    3) Suppose I have 1000 assets, if I want to run depreciation only for 200 assets how can I do that?
    4) If suppose I have 5 Depreciation areas, I am able to see the book depreciation values only, where I
        can see the other depreciation values? If we can’t see for which purpose we are using other
        depreciation areas?
    5) Do Vendor and Customer balances get updated regularly, or once a month or year?
    6) Where we have to create number ranges either in Production or Development Environment?
    7) How can we transfer GLs from one environment to another?
    Full points will be assigned as way of thanks
    Regards,
    Vineela

    Hi Krishna,
    Thanks for your reply, but I still need some more clarifications; please respond.
    2) What is the importance of the Recalculate value button in Asset Accounting?
    (A) It recalculates depreciation when asset parameters are changed.
    Where will it be, as the Recalculate button is not there in AFAB?
    3) Suppose I have 1000 assets; if I want to run depreciation only for 200 assets, how can I do that?
    (A) Select those 200 and run depreciation.
    Here the Assets selection option is there only in the test run, not in the update run.
    4) If suppose I have 5 depreciation areas, I am able to see the book depreciation values only; where can I see the other depreciation values? If we can't see them, for which purpose are we using the other depreciation areas? (A) Use AW01N - you can see all depreciation areas.
    In AW01N only the book depreciation values are displayed; how can I see the other depreciation area values?
    Regards
    Vineela
    Edited by: Vineela Siri on Apr 9, 2008 7:22 AM

  • Even though my email accounts seem to be set up, when I press the email icon it just lists iCloud, EOL etc. and asks for a new account to be set up

    Even though my email accounts seem to be set up, when I press the email icon it comes up with iCloud, EOL etc. as though they are not set up. When you press on iCloud and put in the details it says the account is already set up.
    Email had been set up and was receiving and sending, but it seems to be no longer working. Have I bumped something, and how do I fix it?

    This is for an iPad rather than an iPhone that I am having this problem with.

  • EOL/EOS report generation fails

    Hi,
    I am using LMS version 3.2 and I am not able to generate the EOS/EOL report; it fails with the error "no connection to Cisco".
    I saw an update in the LMS portal as follows:
    Now Available! LMS 3.2: Patch for un-interrupted service of Cisco.com download for Device/Software/PSIRT/EOX updates (To be applied on or before 15-June-2011)
    So I applied the patch cwcs33x-win-CSCto46927-0.zip and restarted the daemon as described in the readme file for the patch.
    Now the job execution status always shows running; it neither fails nor passes.
    Why is this so? Files attached...
    Any inputs ???
    Thanks
    Richard

    Hi,
    There are a few bugs, caused by the following exception, that were filed against LMS 3.2 and fixed in LMS 3.2.1.
    Exceptions found :-
    EOS_EOL report: com.cisco.nm.rmeng.inventory.reports.util.IRException: Cisco.com Exception
        at com.cisco.nm.rmeng.inventory.reports.datagenerators.EOS_EOL_RDG.getData(EOS_EOL_RDG.java:234)
        at com.cisco.nm.rmeng.inventory.reports.datagenerators.DataGenRequestHandler.getData(DataGenRequestHandler.java:44)
        at com.cisco.nm.rmeng.inventory.reports.job.JobExecutor.generateReportData(JobExecutor.java:1693)
        at com.cisco.nm.rmeng.inventory.reports.job.JobExecutor.runReport(JobExecutor.java:894)
        at com.cisco.nm.rmeng.inventory.reports.job.JobExecutor.main(JobExecutor.java:2514)
    [ Thu Aug 04  11:56:20 IST 2011 ],ERROR,[main],com.cisco.nm.rmeng.inventory.reports.job.JobExecutor,generateReportData,1709,IRException throwncom.cisco.nm.rmeng.inventory.reports.util.IRException: Cisco.com Exception
        at com.cisco.nm.rmeng.inventory.reports.datagenerators.EOS_EOL_RDG.getData(EOS_EOL_RDG.java:283)
        at com.cisco.nm.rmeng.inventory.reports.datagenerators.DataGenRequestHandler.getData(DataGenRequestHandler.java:44)
        at com.cisco.nm.rmeng.inventory.reports.job.JobExecutor.generateReportData(JobExecutor.java:1693)
        at com.cisco.nm.rmeng.inventory.reports.job.JobExecutor.runReport(JobExecutor.java:894)
        at com.cisco.nm.rmeng.inventory.reports.job.JobExecutor.main(JobExecutor.java:2514)
    BUGS :-
    1>  CSCta76147
    2> CSCta76147
    Upgrade LMS 3.2 to LMS 3.2.1 and it should be fixed.
    Note: kindly take a backup of CiscoWorks before any upgrade. Also you need to make sure that RME is running 4.3.1 and Campus Manager is 5.2.1.
    here is the location to download LMS 3.2.1
    http://www.cisco.com/cisco/software/release.html?mdfid=282635181&flowid=16561&softwareid=280775102&os=Windows&release=3.2.1&relind=AVAILABLE&rellifecycle=&reltype=latest
    Many Thanks,
    Gaganjeet

  • Error Message in EoL/EoS Report

    Currently running LMS 3.2 and getting the following error message when running the EoL/EoS report: under Reason for Failure it says "unable to get data from Cisco.com, try again later". I am using the same login that is used to log in to Cisco.com.

    The online version of this report is currently broken.  We have been working to fix it for a while now, but unfortunately, I do not have an ETA.  The workaround is to use the offline report mode, and download the data from http://www.cisco.com/cgi-bin/tablebuild.pl/cw2000-rme .

  • I need a clarification : Can I use EJBs instead of helper classes for better performance and less network traffic?

    My application was designed based on the MVC architecture, but I made some changes to it based on my requirements. The servlet invokes helper classes, and the helper classes use EJBs to communicate with the database. JSPs also use EJBs to fetch results back.
    I have two EJBs (stateless), one servlet, nearly 70 helper classes, and nearly 800 JSPs. The servlet acts as the controller and all database transactions are done through the EJBs only. The helper classes contain the business logic. Based on the request, the relevant helper class is invoked by the servlet, and all database transactions are done through the EJBs. Session scope is 'Page' only.
    Now I am planning to use EJBs (for the business logic) instead of the helper classes. But before doing that I need some clarification regarding network traffic and better usage of container resources.
    Please suggest which method (helper classes or EJBs) is preferable
    1) to get better performance,
    2) for less network traffic, and
    3) for better container resource utilization.
    I thought that if I use EJBs the network traffic will increase, because every call would be a remote call to the EJBs.
    Please give detailed explanation.
    thank you,
    sudheer

    <i>Please suggest which method (helper classes or EJBs) is preferable:
    1) to get better performance</i>
    EJBs have quite a lot of overhead associated with them to support transactions and remote access. A non-EJB helper class will almost always outperform an EJB, often considerably. If you plan on making your 70 helper classes EJBs, you should expect to see a dramatic decrease in maximum throughput.
    <i>2) for less network traffic</i>
    There should be no difference. Both architectures will probably make the exact same JDBC calls from the RDBMS's perspective. And since the EJBs and JSPs are co-located there won't be any additional overhead there either. (You are co-locating your JSPs and EJBs, aren't you?)
    <i>3) for better container resource utilization</i>
    Again, the EJB version will consume a lot more container resources.
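    To illustrate the co-location point, a minimal EJB 3-style sketch (hypothetical names, not from the thread): business logic exposed through a local interface, so calls from co-located servlets/JSPs stay in-VM and add no network traffic.
    // OrderServiceBean.java - sketch only; names and logic are illustrative
    import javax.ejb.Local;
    import javax.ejb.Stateless;

    @Local
    interface OrderService {
        double totalFor(String orderId);
    }

    @Stateless
    public class OrderServiceBean implements OrderService {
        // Logic that previously lived in a helper class; a call through the local
        // interface is a plain in-VM invocation, whereas a remote interface would
        // add serialization and potentially network hops.
        public double totalFor(String orderId) {
            // ... JDBC/JPA lookup would go here ...
            return 0.0;
        }
    }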

  • Need some clarification on Replacement Path with Variable

    Hello Experts,
    Need some clarification on Replacement Path with Variable.
    We have 2 options with replacement path for characteristic variables i.e.
    1) Replace with query
    2) Replace with variable.
    Now, when we use  "Replace with variable" we give the variable name. Then we get a list for "Replace with" as follows:
    1) Key
    2) External Characteristic Value Key
    3) Label
    4) Attribute value.
    I need a detailed explanation of the above-mentioned four options, with scenarios.
    Thanks in advance.
    Regards
    Lavanya

    Hi Lavanya,
    Please go through the below link.
    http://help.sap.com/saphelp_nw70/helpdata/EN/a4/1be541f321c717e10000000a155106/frameset.htm
    Hope this gives you a complete and detailed explanation.
    Regards,
    Reddy

  • Rebate clarification

    Hi all
    I need some clarification on rebates; kindly help me. I just want to know whether the rebate will be considered for free goods also - that is, whether the volume of goods supplied free of cost is also considered for the rebate calculation? As per my understanding it should not be considered, but I still need clarification on this. Kindly help me.

    Hi,
    Please follow the links below; they should help:
    http://help.sap.com/saphelp_46c/helpdata/en/5d/363eb7583f11d2a5b70060087d1f3b/content.htm
    http://www.erpgenie.com/publications/saptips/052005.pdf
    REgards,
    Krishna O

  • Clarification on Data Guard(Physical Standyb db)

    Hi guys,
    I have been trying to set up Data Guard with a physical standby database for the past few weeks and I think I have managed to set it up and also perform a switchover. I have been reading a lot of websites and even the Oracle docs for this.
    However I need clarification on the setup and whether or not it is working as expected.
    My environment is Windows 32bit (Windows 2003)
    Oracle 10.2.0.2 (Client/Server)
    2 Physical machines
    Here is what I have done.
    Machine 1
    1. Create a primary database using standard DBCA, hence the Oracle service(oradgp) and password file are also created along with the listener service.
    2. Modify the pfile to include the following:-
    oradgp.__db_cache_size=436207616
    oradgp.__java_pool_size=4194304
    oradgp.__large_pool_size=4194304
    oradgp.__shared_pool_size=159383552
    oradgp.__streams_pool_size=0
    *.audit_file_dest='M:\oracle\product\10.2.0\admin\oradgp\adump'
    *.background_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\bdump'
    *.compatible='10.2.0.3.0'
    *.control_files='M:\oracle\product\10.2.0\oradata\oradgp\control01.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control02.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control03.ctl'
    *.core_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='oradgp'
    *.db_recovery_file_dest='M:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=21474836480
    *.fal_client='oradgp'
    *.fal_server='oradgs'
    *.job_queue_processes=10
    *.log_archive_dest_1='LOCATION=E:\ArchLogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=oradgp'
    *.log_archive_dest_2='SERVICE=oradgs LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradgs'
    *.log_archive_format='ARC%S_%R.%T'
    *.log_archive_max_processes=30
    *.nls_territory='IRELAND'
    *.open_cursors=300
    *.pga_aggregate_target=203423744
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=612368384
    *.standby_file_management='auto'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\udump'
    *.service_names=oradgp
    The locations on the harddisk are all available and archived redo are created (e:\archlogs)
    3. I then add the necessary (4) standby logs on primary.
    4. To replicate the db on the machine 2(standby db), I did an RMAN backup as:-
    RMAN> run
    {allocate channel d1 type disk format='M:\DGBackup\stby_%U.bak';
    backup database plus archivelog delete input;}
    5. I then copied over the standby~.bak files created from machine1 to machine2 to the same directory (M:\DBBackup) since I maintained the directory structure exactly the same between the 2 machines.
    6. Then created a standby controlfile. (At this time the db was in open/write mode).
    7. I then copied this standby ctl file to machine2 under the same directory structure (M:\oracle\product\10.2.0\oradata\oradgp) and replicated the same ctl file into 3 different files such as: CONTROL01.CTL, CONTROL02.CTL & CONTROL03.CTL
    Machine2
    8. I created an Oracle service called the same as primary (oradgp).
    9. Created a listener also.
    9. Set the Oracle Home & SID to the same name as primary (oradgp) <<<-- I am not sure about the sid one.
    10. I then copied over the pfile from the primary to standby and created an spfile with this one.
    It looks like this:-
    oradgp.__db_cache_size=436207616
    oradgp.__java_pool_size=4194304
    oradgp.__large_pool_size=4194304
    oradgp.__shared_pool_size=159383552
    oradgp.__streams_pool_size=0
    *.audit_file_dest='M:\oracle\product\10.2.0\admin\oradgp\adump'
    *.background_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\bdump'
    *.compatible='10.2.0.3.0'
    *.control_files='M:\oracle\product\10.2.0\oradata\oradgp\control01.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control02.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control03.ctl'
    *.core_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='oradgp'
    *.db_recovery_file_dest='M:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=21474836480
    *.fal_client='oradgs'
    *.fal_server='oradgp'
    *.job_queue_processes=10
    *.log_archive_dest_1='LOCATION=E:\ArchLogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=oradgs'
    *.log_archive_dest_2='SERVICE=oradgp LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradgp'
    *.log_archive_format='ARC%S_%R.%T'
    *.log_archive_max_processes=30
    *.nls_territory='IRELAND'
    *.open_cursors=300
    *.pga_aggregate_target=203423744
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=612368384
    *.standby_file_management='auto'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\udump'
    *.service_names=oradgs
    log_file_name_convert='junk','junk'
    11. Use RMAN to restore the db as:-
    RMAN> startup mount;
    RMAN> restore database;
    Then RMAN created the datafiles.
    12. I then added the same number (4) of standby redo logs to machine2.
    13. Also added a tempfile: though the temp tablespace was created as part of the restore via RMAN, I think the actual file (temp01.dbf) didn't get created, so I manually created the tempfile.
    14. Ensuring the listener and Oracle service were running and that the database on machine2 was in MOUNT mode, I then started the redo apply using:-
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    It seems to have started the redo apply as I've checked the alert log and noticed that the sequence# was all "YES" for applied.
    ****However I noticed that in the alert log the standby was complaining about the online REDO log not being present****
    So copied over the REDO logs from the primary machine and placed them in the same directory structure of the standby.
    ########Q1. I understand that the standby database does not need online REDO Logs but why is it reporting in the alert log then??########
    I wanted to enable realtime apply so, I cancelled the recover by :-
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    and issued:-
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
    This too was successful and I noticed that the recovery mode is set to MANAGED REAL TIME APPLY.
    Checked this via the primary database also and it too reported that the DEST_2 is in MANAGED REAL TIME APPLY.
    Also performed a log switch on the primary and it got transported to the standby and was applied (YES).
    Also ensured that there are no gaps via some queries where no rows were returned.
    15. I now wanted to perform a switchover, hence issued:-
    Primary_SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
    All the archivers stopped as expected.
    16. Now on machine2:
    Stdby_SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
    17. On machine1:
    Primary_Now_Standby_SQL>SHUTDOWN IMMEDIATE;
    Primary_Now_Standby_SQL>STARTUP MOUNT;
    Primary_Now_Standby_SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
    17. On machine2:
    Stdby_Now_Primary_SQL>ALTER DATABASE OPEN;
    Checked by switching the logfile on the new primary and ensured that the standby received this logfile and was applied (YES).
    However, here are my questions for clarifications:-
    Q1. There is a question about ONLINE REDO LOGS within "#" characters.
    Q2. Do you see me doing anything wrong in regards to naming the directory structures? Should I have renamed the dbname directory in the Oracle Home to oradgs rather than oradgp?
    Q3. When I enabled real time apply does that mean, that I am not in 'MANAGED' mode anymore? Is there an un-managed mode also?
    Q4. After the switchover, I have noticed that the MRP0 process is "APPLYING LOG" status to a sequence# which is not even the latest sequence# as per v$archived_log. By this I mean:-
    SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
    MRP0 APPLYING_LOG 1 47 452 1024000
    but :
    SQL> select max(sequence#) from v$archived_log;
    46
    Why is that? Also I have noticed that one of the sequence#s is NOT applied but the later ones are:-
    SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
    42 NO
    43 YES
    44 YES
    45 YES
    46 YES
    What could be the possible reasons why sequence# 42 didn't get applied but the others did?
    After reading several documents I am confused at this stage because I have read that you can setup standby databases using 'standby' logs but is there another method without using standby logs?
    Q5. The log switch isn't happening automatically on the primary database, where I could see the whole process happening on its own - generation of a new logfile, it being transported to the standby, and then being applied on the standby.
    Could this be due to inactivity on the primary database as I am not doing anything on it?
    Sorry if I have missed out something guys but I tried to put in as much detail as I remember...
    Thank you very much in advance.
    Regards,
    Bharath
    Edited by: Bharath3 on Jan 22, 2010 2:13 AM

    Parameters:
    Missing on the Primary:
    DB_UNIQUE_NAME=oradgp
    LOG_ARCHIVE_CONFIG=DG_CONFIG=(oradgp, oradgs)
    Missing on the Standby:
    DB_UNIQUE_NAME=oradgs
    LOG_ARCHIVE_CONFIG=DG_CONFIG=(oradgp, oradgs)
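    A minimal sketch of setting those on each side (names taken from the thread; assumes an spfile - with a plain pfile just add the equivalent lines and restart):
    -- On the primary (oradgp)
    ALTER SYSTEM SET db_unique_name='oradgp' SCOPE=SPFILE;
    ALTER SYSTEM SET log_archive_config='DG_CONFIG=(oradgp,oradgs)' SCOPE=BOTH;
    -- On the standby (oradgs)
    ALTER SYSTEM SET db_unique_name='oradgs' SCOPE=SPFILE;
    ALTER SYSTEM SET log_archive_config='DG_CONFIG=(oradgp,oradgs)' SCOPE=BOTH;
    DB_UNIQUE_NAME is static, so it only takes effect after a restart.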
    You said: Also added a tempfile though the temp tablespace was created per the restore via RMAN, I think the actual file (temp01.dbf) didn't get created, so I manually created the tempfile.
    RMAN should have also added the temp file. Note that as of 11g RMAN duplicate for standby will also add the standby redo log files at the standby if they already existed on the Primary when you took the backup.
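    For reference, the 11g-style command being described is roughly this (a sketch only; it does not apply to the 10.2 setup in this thread):
    RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE;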
    You said: ****However I noticed that in the alert log the standby was complaining about the online REDO log not being present****
    That is just the weird error that the RDBMS returns when the database tries to find the online redo log files. You see that at the start of the MRP because it tries to open them and if it gets the error it will manually create them based on their file definition in the controlfile combined with LOG_FILE_NAME_CONVERT if they are in a different place from the Primary.
    Your questions (Q1 answered above):
    You said: Q2. Do you see me doing anything wrong in regards to naming the directory structures? Should I have renamed the dbname directory in the Oracle Home to oradgs rather than oradgp?
    Up to you. Not a requirement.
    You said: Q3. When I enabled real time apply does that mean, that I am not in 'MANAGED' mode anymore? Is there an un-managed mode also?
    You are always in MANAGED mode when you use the RECOVER MANAGED STANDBY DATABASE command. If you use manual recovery "RECOVER STANDBY DATABASE" (NOT RECOMMENDED EVER ON A STANDBY DATABASE) then you are effectively in 'non-managed' mode although we do not call it that.
    You said: Q4. After the switchover, I have noticed that the MRP0 process is "APPLYING LOG" status to a sequence# which is not even the latest sequence# as per v$archived_log. By this I mean:-
    Log 46 (in your example) is the last FULL and ARCHIVED log hence that is the latest one to show up in V$ARCHIVED_LOG as that is a list of fully archived log files. Sequence 47 is the one that is current in the Primary online redo log and also current in the standby's standby redo log and as you are using real time apply that is the one it is applying.
    You said: What could be the possible reasons why sequence# 42 didn't get applied but the others did?
    42 was probably a gap. Select the FAL columns as well and it will probably say 'YES' for FAL. We do not update the Primary's controlfile every time we resolve a gap. Try the same command on the standby and you will see that 42 was indeed applied. Redo can never be applied out of order, so the max(sequence#) from v$archived_log where applied = 'YES' tells you that every sequence before that number has to have been applied.
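    For example, a quick check to run on the standby (a sketch; FAL is a column of V$ARCHIVED_LOG):
    SELECT sequence#, applied, fal
    FROM v$archived_log
    ORDER BY sequence#;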
    You said: After reading several documents I am confused at this stage because I have read that you can setup standby databases using 'standby' logs but is there another method without using standby logs?
    Yes. If you do not have standby redo log files on the standby then we write directly to an archive log, which means potentially large data loss at failover and no real time apply. That was the old 9i method for ARCH. Don't do that. Always have standby redo logs (SRLs).
    You said: Q5. The log switch isn't happening automatically on the primary database where I could see the whole process happening on it own, such as generation of a new logfile, that being transported to the standby and then being applied on the standby.
    Could this be due to inactivity on the primary database as I am not doing anything on it?
    Log switches on the Primary happen when the current log gets full, when a log switch has not happened for the number of seconds you specified in the ARCHIVE_LAG_TARGET parameter, or when you issue ALTER SYSTEM SWITCH LOGFILE (or one of the other methods for switching log files). The heartbeat redo will eventually fill up an online log file, but it is about 13 bytes, so you do the math on how long that would take :^)
    You are shipping redo with ASYNC, so we send the redo as it is committed; there is no wait for the log switch. And we are in real time apply, so there is no wait for the log switch to apply that redo. In theory you could create an online log file large enough to hold an entire day's worth of redo and never switch for the whole day, and the standby would still be caught up with the primary.
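    For example (a sketch; the 1800 seconds is illustrative only):
    -- force a switch on the primary to watch the transport/apply cycle
    ALTER SYSTEM SWITCH LOGFILE;
    -- or guarantee a switch at least every 30 minutes
    ALTER SYSTEM SET archive_lag_target = 1800 SCOPE=BOTH;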

  • Access to trunk port clarification

    Hello-
    I am looking to clarify a point of confusion for myself regarding connecting an access port to a trunk port. Consider the following switchport config on switch 1:
    Switch#1
    interface GigabitEthernet0/5
     switchport
     switchport access vlan 6
    ....and the corresponding config on its neighbor:
    Switch#2
    Interface GigabitEthernet10/8
    switchport
    switchport mode trunk
    switchport trunk allowed vlan 1,6,100
    My first question is: is this a valid configuration? Secondly, what would the expected results be? I am curious about which VLANs would be allowed to pass through.
    Thanks in advance-
    Brian

    This would work, but it is not recommended.
    Also, only the native VLAN and VLAN 6 traffic would pass between the switches.
    SW1-----F0/1----------f0/1----SW2
    SW1#sh int trunk 
    Port        Mode         Encapsulation  Status        Native vlan
    Fa0/1       auto         n-802.1q       trunking      1
    Port        Vlans allowed on trunk
    Fa0/1       1-1005
    Port        Vlans allowed and active in management domain
    Fa0/1       1,6
    Port        Vlans in spanning tree forwarding state and not pruned
    Fa0/1       1,6
    SW1#
    SW2
    SW2#sh int trunk 
    Port        Mode         Encapsulation  Status        Native vlan
    Fa0/1       on           802.1q         trunking      1
    Port        Vlans allowed on trunk
    Fa0/1       1,6,100
    Port        Vlans allowed and active in management domain
    Fa0/1       1,6,100
    Port        Vlans in spanning tree forwarding state and not pruned
    Fa0/1       1,6,100
    SW2#
    2) Part of this config is that any VLANs which are configured on SW1 would be allowed through that access port.
    ex:
    SW1#sh int trunk 
    Port        Mode         Encapsulation  Status        Native vlan
    Fa0/1       auto         n-802.1q       trunking      1
    Port        Vlans allowed on trunk
    Fa0/1       1-1005
    Port        Vlans allowed and active in management domain
    Fa0/1       1,6,10,20,30,40,50,60,70,80,90,100
    Port        Vlans in spanning tree forwarding state and not pruned
    Fa0/1       1,6,10,20,30,40,50,60,70,80,90,100 ...>>>>>>>>>>all vlans are allowed here.
    b)
    Whereas on Switch 2, if you create all these VLANs and you don't allow them through the trunk interface you have configured, those VLANs won't flow through.
    eg;
    SW2#sh int tr
    Port        Mode         Encapsulation  Status        Native vlan
    Fa0/1       on           802.1q         trunking      1
    Port        Vlans allowed on trunk
    Fa0/1       1,6,100
    Port        Vlans allowed and active in management domain
    Fa0/1       1,6,100
    Port        Vlans in spanning tree forwarding state and not pruned
    Fa0/1       1,6,100   >>>>>>>>>>>>>>> Only 3 VLANs would be flowing through because they are explicitly defined, but if you allowed all then all VLANs would be shown here.
    I created all the VLANs above on SW2, but you can see only 3 VLANs are allowed, as you have explicitly defined them.
    Hope this clarifies your query.
    Regards
    Inayath
    *************Plz dont forget to rate posts***********
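    For comparison, a minimal sketch of the more conventional matching configuration on Switch#1 (interface names as in the original question; "switchport trunk encapsulation dot1q" is only needed on platforms that also support ISL):
    ! Option A - make both sides plain access ports in VLAN 6
    interface GigabitEthernet0/5
     switchport
     switchport mode access
     switchport access vlan 6
    !
    ! Option B - make both sides 802.1Q trunks with matching allowed lists
    interface GigabitEthernet0/5
     switchport
     switchport trunk encapsulation dot1q
     switchport mode trunk
     switchport trunk allowed vlan 1,6,100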

  • ZCM 11 SP3 and Windows 8.1, need clarification...

    I don't know if I just can't read, but I need some clarification...
    When SP3 arrived, there were problems with Win 8.1.
    I have customers that have machines that were upgraded to 8.1, and it has worked with the already installed 11.2.3a agent, even if it's not supported. They deployed the ZCM 11.3 agent to these machines that were upgraded to 8.1, and they all crashed and never started again. I made tests in my test environment, same result.
    Last week I started to check and found the update "ZENworks 11.3.0a Windows 8.1 Update" (https://www.novell.com/support/kb/doc.php?id=7014805 / http://download.novell.com/Download?...d=0yMdXrTonF8~). I downloaded that yesterday and was going to test it today. I looked for the instructions again today, and now it is not available anymore - obsolete. I have not deployed it yet, so that is no problem.
    Since yesterday there is a new update, "ZCM 11.3.0 WIN8.1 Patch 866736" (http://download.novell.com/Download?...d=OvBLs9qZhrU~).
    But the instructions for this talk about how to update machines that are already updated. Here the text mentions "11.3.0_WIN8.1". What is that? Is it the now obsolete "ZENworks 11.3.0a Windows 8.1 Update"? Or machines patched with the standard "Update for ZENworks (11 SP3)" that was created during the SP3 install? If it refers to the latter, it can't be done, because in both my customers' production environments and my test environment the Win 8.1 machines crashed and never came up.
    The next two methods in the description describe how to update "Windows 8.1 Update for ZENworks (11 SP3)", if already imported (zman supf "Windows 8.1 Update for ZENworks (11 SP3)" ZCM_11.3.0_WIN8.1_20140404_866736.zip).
    But for those who have not imported that, what is the way to go? You can't download "Windows 8.1 Update for ZENworks (11 SP3)" anymore, and the patch seems to be for that one?
    What is the way to get Win 8.1 to work with ZCM 11.3?
    Can anyone clarify this?
    I don't know if this belongs in the agent forum or the server forum, but I'll start here.
    /Stefan

    CRAIGDWILSON wrote:
    > New Versions of those patches are being rolled out.
    >
    > Normally it happens at the same time, but looks like timing was
    > slightly off.
    >
    >
    >
    > On 4/30/2014 11:31 AM, Niels Poulsen wrote:
    > > stesjo wrote:
    > >
    > > >
    > > > So, at this point, it means that there is no way to get ZCM 11.3
    > > > to work with Windows 8.1?
    > > >
    > > > Old patches removed, and just patches for the removed patches
    > > > available?
    > > >
    > > > One can assume that there is a reason why the 8.1 Update is
    > > > removed, and made obsolete?
    > > > /Stefan
    > >
    > > ... One would think so, yes. Not sure what's the reason...
    > >
    Cool :-)
    Niels
    A true red devil...

  • Clarification on Time Machine migration

    I am about to upgrade my trusty PowerBook G4 to a shiny new MacBook Pro and would just like some clarification on accessing my Time Machine files from my new computer. After reading some similar posts it sounds like some users have been able to access their files from their old system on new computers while others have had issues. If I transfer all of the contents of my old system to the new one via Migration Assistant, will Time Machine recognize the new computer as the old one? Should I do the transfer via Time Machine instead? Any assistance would be greatly appreciated. Thanks!

    Before you start migrating, be sure to deactivate/deauthorize the software on your PowerBook. If you'll have both computers in place, I think it is faster to put the PowerBook into FireWire target disk mode than to use Time Machine. Further, regardless of which method you use, because this is a PPC-to-Intel upgrade, I'd recommend that you only transfer the contents of your personal drive space and reinstall your software.
    There are two reasons I recommend reinstalling software: First, buying a new computer is about the only time I get rid of the cruft and junk that I accumulate and almost never use afterwards. I figure if it is a good idea for me, it is a good idea for others. Second, having done dozens of upgrades, I have a decent idea of what can be safely transferred and what can't, but long ago I figured it took almost as long to pick and choose what to transfer as it did to just reinstall everything.
