Node1 is down from production in RAC

Dear all,
We have a two-node Oracle RAC 10gR2 cluster. Because of a bug in a network package, we updated the OS kernel on node1 (2.6.9-42 to 2.6.9-55) after investigation with the OS team. The two nodes now run different kernel versions: node2 is on the older kernel, node1 on the slightly newer one, and node1 has been isolated from production.
We need to bring node1 back into production. What other solutions are there for overcoming the kernel version difference?
Problem: node1 is currently out of production, and its kernel is newer than node2's.
What is the solution for bringing node1 back up?
Old kernel (up)
[oracle@node2 ~]$ uname -r
2.6.9-42.0.10.ELsmp
[oracle@node2 ~]$ uname -a
Linux node2 2.6.9-42.0.10.ELsmp #1 SMP Fri Feb 16 17:13:42 EST 2007 x86_64 x86_64 x86_64 GNU/Linux
New kernel (down)
[root@node1 ~]# uname -r
2.6.9-55.ELsmp
[root@node1 ~]# uname -a
Linux node1 2.6.9-55.ELsmp #1 SMP Fri Apr 20 16:36:54 EDT 2007 x86_64 x86_64 x86_64 GNU/Linux
I need your assistance, guys.
Thanks,

Hi,
We have two nodes Oracle RAC 10gr2. Due to bug in network package, we happen to update node1 OS ( 2.6.9.42 to 2.6.9.55 ) by investigation with OS team, Now both node having kernel difference as above, node2 is older and node1 is little higher, node1 isolated from production, As your OS support carried out the research why can't you opt for scheduled restart of RAC instances. Certainly I am not sure what might have happen at your end.
Problem: Now the node1 is down from production its kernel is new from node2.As your new Kernel is up and running, does you OS comes up, then try to check whether at initial stage cluster services up and running or not ?
If not then try to check intially as you stated across bug in network whether it got resolved or not, If yes, then go head with Oracle dependent services are able to come up to not. Difference/Issue in network would not drop the network communication perhaps it would have been effected network transfers band width.
Any how, Once you cluster can up then carry on with proper shutdown of node2 and patch it up.
- Pavan Kumar N
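The checks suggested above can be sketched in shell. This is a minimal sketch, not a verified procedure: the clusterware command names are the usual 10gR2 ones, the database name `mydb` is a placeholder, and the CRS calls are guarded so the script is harmless on a host without the clusterware binaries:

```shell
#!/bin/sh
# Compare the two kernel strings reported by uname -r on each node
# (values taken from the post; sort -V orders version strings).
K_NODE1=2.6.9-55.ELsmp
K_NODE2=2.6.9-42.0.10.ELsmp
NEWEST=$(printf '%s\n' "$K_NODE1" "$K_NODE2" | sort -V | tail -1)
echo "newest kernel: $NEWEST"

# Clusterware sanity checks on node1 (10gR2 command names; run as the
# clusterware owner). Guarded so this is a no-op where CRS is absent.
if command -v crsctl >/dev/null 2>&1; then
    crsctl check crs                 # CSS/CRS/EVM daemon health
    crs_stat -t                      # resource state summary
    srvctl status database -d mydb   # 'mydb' is a placeholder DB name
fi
```

The kernel comparison only tells you which node is ahead; whether 10gR2 clusterware tolerates the specific kernel delta is a support question for Oracle/your OS vendor.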

Similar Messages

  • Cannot drill down from 2nd lowest level to lowest level of hierarchy

    In my item master hierarchy, I cannot drill down from the 2nd lowest level (Product Class) to the lowest level (Item Detail).
    When I add another level between these 2 levels, then I can drill down from Product Class to this new level, but I cannot drill down from this new level to the lowest level.
    Also, if I set the preferred drill path at any level to drill down to the lowest level, it instead drills down to the next level rather than to the lowest level.
    Any thoughts as to why I would not be able to drill to the lowest level of this hierarchy?
    Thanks,
    Travis

    OK, next check, any security in place on the presentation columns which would make the lowest level column unavailable to the user account?
    Are you running the report as Administrator?

  • Downloading the BOM & Routing Data from Production client

    Hi,
    Is there any method to download all material BOM & routing data to an Excel sheet from the Production client?
    Thanks
    shiv

    For the header material you can use CA51 for routings in the plant.
    For BOM header materials:
    Go to CS02 and press F4.
    Select the option "Material by bill of material".
    You will get the list of BOM materials in the plant.
    From that page you can download directly to Excel.
    Edited by: sukendar neelam on Mar 5, 2009 8:27 AM

  • What is the best way to mimic the data from production to other server?

    Hi,
    Here we use Streams and advanced replication to send the data for 90% of the tables from production to another production database server, so that if one goes down we can use the other. Is there any better option than Streams and replication? We are having a lot of problems with Streams these days; it keeps breaking and we get calls.
    I have heard about Data Guard but don't know what it is used for. Please advise on the best way to replicate the data.
    Thanks a lot.....

    RAC, Data Guard. The first one is active-active: you have two or more nodes accessing the same database on shared storage, and you get both HA and load balancing. The second is active-passive (unless you're on 11g with Active Data Guard or Snapshot Standby): one database is primary and the other is standby, which you normally cannot query or modify, but to which you can quickly switch if the primary fails. There's also Logical Standby - it's based on Streams technology and generally looks like what you seem to be using now (sort of), but it definitely has issues. You can also take a look at GoldenGate or SharePlex.
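    As a hedged aside: in any primary/standby setup, the quickest way to see which role a database currently holds is v$database. A minimal sketch that prints the query and runs it only where sqlplus is available:

    ```shell
    #!/bin/sh
    # v$database.database_role distinguishes PRIMARY from PHYSICAL STANDBY /
    # LOGICAL STANDBY; open_mode shows whether a standby is queryable.
    CHECK='select database_role, open_mode from v$database;'
    echo "SQL to run as SYSDBA: $CHECK"
    # Guarded: only attempt the query where the Oracle client is present.
    command -v sqlplus >/dev/null 2>&1 && printf '%s\n' "$CHECK" | sqlplus -s / as sysdba || true
    ```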

  • I have been shut down from my previous email address as I left my company's email system. I am now on Gmail and have changed my Apple ID and password successfully. Unfortunately I don't understand what I need to do to change my iCloud, FaceTime, etc. ID?

    I have been shut down from my previous Apple email ID as I left my company's email server. I have moved to Gmail and successfully reset my Apple ID accordingly. Unfortunately iCloud, FaceTime, etc. still pop up with the previous email address, and I don't remember the password, so I can't get in and change it to Gmail! If anyone out there knows what to do, I would be very grateful. Thanks. M

    First thing to try is just to sign out on all your devices. If you can do that you can just sign in with the new ID. However if you've set up 'Find My iPhone' you won't be able to.
    In that event go to http://iforgot.apple.com and sign in with your iCloud login. A new password will be sent to your associated email address. If this doesn't work you will have to contact Support. Go to https://getsupport.apple.com . Click 'See all products and services', then 'More Products and Services', then 'Apple ID', then 'Other Apple ID Topics', then 'Lost or forgotten Apple ID password'. If you have any problems with that try this form: https://www.apple.com/emea/support/itunes/contact.html

  • Database refresh from production to test -how to clean existing test env

    All,
    My environment is
    Both Production and Test databases are in two node RAC environment
    Oracle version - 10.2.0.4.0
    OS - RHEL5
    Production database size 80GB
    We need to refresh the test environment from the production database; all objects, data, etc. should be refreshed.
    We have a Data Pump export from the production environment, and I need to import this dump into the test environment.
    So far, I have only imported this kind of dump into a fresh database.
    Now we already have objects and data sitting in the test environment. How do I clean the existing test environment before importing the production Data Pump export dump?
    I thought of dropping all the tablespaces in test (other than SYSTEM, SYSAUX, UNDO and TEMP), but I am not able to drop a few of them; it reports that indexes exist in other tablespaces, dependency errors, etc.
    Which is the best method to clean the existing test database? Management is not interested in dropping and recreating the test database.

    I understand that you are a newbie, so let me give you simple steps.
    Follow these steps in the test environment:
    1. Drop only the application users that you want to refresh from production (do NOT drop built-in users such as SYS and SYSTEM, or the system tablespaces).
    2. Recreate the users that you dropped.
    3. Using import or Data Pump import, import the data.
    In case you want to import user "A"'s data into "B", use the REMAP_SCHEMA option.
    See the below link for data pump export/import with examples.
    http://oracleracexpert.blogspot.com/2009/08/oracle-data-pump-exportimport.html
    Hope this helps.
    Regards,
    Satishbabu Gunukula
    http://oracleracexpert.blogspot.com
    Click here for [How to add and remove OCR|http://oracleracexpert.blogspot.com/2009/09/how-to-add-and-remove-ocr-oracle.html]
    Click here in [ Making decisions and migrating your databases |http://oracleracexpert.blogspot.com/2009/08/download-sql-developer-migration.html]
    Click here to learn [ Static parameters change in SPFILE and PFILE|http://oracleracexpert.blogspot.com/2009/09/how-to-change-static-parameters-in.html]
    Edited by: Satishbabu Gunukula on Sep 14, 2009 5:09 PM
    Edited by: Satishbabu Gunukula on Sep 18, 2009 10:35 AM
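    The three steps above can be sketched as a single Data Pump command line. Every name here is a placeholder (the directory object, dump file and the A:B remap are assumptions, not values from the thread); the script only assembles and prints the command so it can be reviewed before being run on the test host:

    ```shell
    #!/bin/sh
    # Placeholders -- substitute your own directory object, dump file and
    # schema names before running the printed command.
    DIRECTORY=DATA_PUMP_DIR
    DUMPFILE=prod_export.dmp
    SCHEMAS=A
    REMAP=A:B            # only needed when importing schema A into schema B
    # Assemble the impdp invocation (dry run: printed, not executed).
    CMD="impdp system DIRECTORY=$DIRECTORY DUMPFILE=$DUMPFILE SCHEMAS=$SCHEMAS REMAP_SCHEMA=$REMAP LOGFILE=refresh_import.log"
    echo "$CMD"
    ```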

  • Transport planning book from Production to Dev/ Test system

    Hi APO Gurus,
    I need clarity on the point below:
    I have a correct planning book in the production system, but a different structure in the test and quality systems. I want to transport the planning book, with all its data views and associated macros, to the development system (or the test system).
    How do I do this? Please elaborate with details.
    Thanks in advance.

    I did the transport from Production to Dev client using TSOBJ last week. Basis team helped in connecting Prod and Dev clients.
    Everything went OK except the selection IDs. When I checked 'with selection transport' in TSOBJ, it ended in a short dump saying it timed out reading SQL table /sapapo/sdpsetde. We increased the system profile parameter "rdisp/max_wprun_time" from 1800 seconds to 3600 seconds, but it still timed out with the same message. So I created the transport without selection IDs and that worked.
    Next, I downloaded only the required selection IDs (498 in total) from the DP planning area in the production client to an Excel spreadsheet, using table /sapapo/TS_SELKO. Then I created another transport with TSOBJ; this time I used the 'Transport of Selections' section for the selection IDs, chose single-value selection from the drop-down, and uploaded the IDs with the clipboard option. That worked.
    Although my problem is solved, has anyone else by any chance encountered this TSOBJ timeout? Is there an OSS note for it?
    -Sachin

  • Prepare new test from Production

    How do I clone JD Edwards Enterprise One from Production to Test? Which document should I follow? Please advise, as this is very urgent.
    Thanks.

    I see you posted in both sections in the JD Edwards World category. Again, there is a separate forum for JD Edwards Enterprise One. If you go back to Forum Home and scroll down below the JD Edwards World category, you will find the JD Edwards Enterprise One category. That would be the better place to post this question (technically there is a good amount of difference between World and Enterprise One).
    John Dickey

  • Clone JDEdwards Enterprise One from Production to Test

    Please advise on the procedure for cloning JD Edwards Enterprise One from Production to Test.

    Uhh, this is the JD Edwards World forum. There is actually a separate forum for JD Edwards Enterprise One. Go back to the Forum Home and scroll down past JD Edwards World and you will find the Enterprise One forum. Technically there is a good amount of difference between World and Enterprise One. So to get better answers to your question, you need to post this on the Enterprise One forum. Good luck.
    John Dickey

  • Is it possible to transport an object from Production to DEV?

    I have some queries and workbooks in Production that I would like to archive.  I have not been able to find a method to archive, so I am wondering if I can transport the queries and workbooks from Production to DEV?  I realize the queries may not execute, but so long as I can open them up and see their definitions in DEV, I will be satisfied.
    I know this sounds a little bizarre, but is it possible to transport anything from Production to DEV?
    Thanks!

    hi
    Yes, it's possible, but it's not recommended.
    You can set the production system to allow changes to the objects: in RSA1, go to Transport Connection, click 'Object Changeability', find the object type, and set it to changeable.
    Please refer to the posts below:
    transport a query backwards, from production to development
    Re: Transporting Process Chains from Production DOWN to QA and Dev
    Assign points if useful.
    Shreya

  • Production order number range after taking the backup from production server

    Dear all,
    My client did a backup activity yesterday from the production server to a disaster recovery server.
    After completing this activity they shut down the production server to test the new server.
    A production user converted a planned order to a production order and got a number, say 2000064732, but the current number in the number range is 2000080139.
    The same user had created an order one day earlier on the production server, and that order number was 2000080138.
    After two hours (by which time he had created more than 50 orders) the system started assigning numbers according to the current number, e.g. 2000080140, 141, 142…
    As per my observation, the buffer is maintained as 20. By the buffering logic, the system should skip forward, not backward.
    But in our case the system skipped backward by nearly 16000.
    So please help me find out why the system assigned those numbers, and on what basis.
    Thanks for your help
    Regards
    Ramesh

    See SAP Note 62077: when your application server was shut down, the numbers left in the buffer (that is, those not yet assigned) were lost. As a result, there are gaps in the number assignment.
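    The buffering arithmetic behind the quoted note can be shown with a toy sketch. The buffer size of 20 and the current number are taken from the post; the "assigned 3 orders, then shut down" scenario is invented purely for illustration:

    ```shell
    #!/bin/sh
    # Toy illustration of number-range buffering per SAP Note 62077: an
    # application server reserves a block of numbers in memory; numbers not
    # yet assigned when the server shuts down are lost, leaving a gap.
    CURRENT=2000080139   # current number in the range (from the post)
    BUFFER=20            # buffer size observed by the poster
    ASSIGNED=3           # hypothetical: orders created before shutdown
    # The server had reserved CURRENT+1 .. CURRENT+BUFFER; after restart a
    # fresh block is fetched, so numbering resumes past the lost block.
    NEXT_SERVER_START=$((CURRENT + BUFFER + 1))
    echo "numbers lost in gap: $((BUFFER - ASSIGNED))"
    echo "numbering resumes at: $NEXT_SERVER_START"
    ```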

  • Production site RAC redundant architecture deployment

    We need advice from Oracle experts regarding our production-site redundant RAC architecture deployment.
    Due to a business constraint, we have only two NICs available for the redundant RAC deployment.
    So we planned to bond the two available NICs; after bonding we have a single bonded interface running at 1 Gbps.
    We have configured the public and private IP addresses on that single bonded interface.
    network configuration
    #bond0 (public network)
    10.106.1.246 rac1
    10.106.1.247 rac2
    #bond0:1(vip network 10.106.1.251)
    10.106.1.251 rac1-vip
    10.106.1.252 rac2-vip
    #bond0:11(private network 10.10.0.1)
    10.10.0.1 rac1-priv
    10.10.0.2 rac2-priv
    #bond0:2(scan ip)
    10.106.1.244 rac-scan
    Oracle Setup Details
    Oracle11g r2 RAC/GRID Software
    RAC/GRID: 2-Node Database Cluster
    Database Storage: ASM
    Please find the review of our Hardware and Software
    Hardware:
    IBM Servers and SAN Storage
    NIC Speed: 1Gbps
    Software: Redhat Enterprise Linux5.5 64bit
    Application Behavior:
    1.) High insert/update/delete/select activity on a single table.
    2.) Many sessions connecting and disconnecting.
    Can you please confirm that the above architecture will be supportable at a production site?
    What are the advantages and disadvantages of the above architecture at a production site?
    Can you please suggest the right way to deploy a redundant RAC configuration at a production site?

    This is one case where I would say that you are playing a very dangerous game with your production system. You asked for expert opinions, and you have been informed that this is a VERY BAD IDEA! While you may think it works now, don't come asking about node evictions when your bonded NICs get saturated.
    I can say that I am an expert, having installed, configured and spent time troubleshooting more than 75 clusters (2-6 nodes) on some very impressive hardware. The "big one" was 250 TB on a 3-node RAC on Sun 6900s (48 dual-core CPUs x 192 GB main memory, with 8 NICs using Sun IPMP and 8 HBAs for SAN connectivity). When you start having "weird" issues, Oracle will not support your configuration; you will need to fix it before they even begin troubleshooting it.
    Tell your manager that unless they spring for the appropriate configuration, they should execute the following command: "Alter manager update Resume;" - because it is not a question of IF it will fail, but WHEN. Trust me, you and your managers have put your system in a very precarious position.
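    For reference, the layout usually recommended (and the one the reply above is pushing toward) keeps the interconnect on its own physical, ideally bonded, interface on a non-routed subnet, rather than on an alias of the public bond. A sketch reusing the addresses from the post; bond1 and its slave NICs are assumed extra hardware that the poster would still need to obtain:

    ```
    #bond0 (public network)
    10.106.1.246 rac1
    10.106.1.247 rac2
    #bond0:1 (VIPs)
    10.106.1.251 rac1-vip
    10.106.1.252 rac2-vip
    #bond0:2 (SCAN)
    10.106.1.244 rac-scan
    #bond1 (private interconnect -- separate physical NICs, non-routed subnet)
    10.10.0.1 rac1-priv
    10.10.0.2 rac2-priv
    ```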

  • Add link from Product page to Documentation

    it would be nice if there were links from the Product pages e.g.
    http://otn.oracle.com/products/warehouse/index.html
    which have high level info, white papers, etc, to the formal documentation on OTN. Now you have to drill down the product tree, then drill down a different tree to see the documentation.

    http://labs.adobe.com/technologies/spry/samples/utils/URLUtilsSample.html

  • Dataguard configuration from 2-node RAC to single instance without ASM

    Hi Gurus,
    Oracle Version : 11.2.0.1
    Operating system:linux.
    Here I am trying to configure Data Guard from a 2-node RAC to a single-instance standby database. I have made all the changes in the parameter files for both the primary and the standby database, but when I try to duplicate my target database it gives the error shown below.
    [oracle@rac1 dbs]$ rman target / auxiliary sys/qfundracdba@poorna
    Recovery Manager: Release 11.2.0.1.0 - Production on Thu Jul 21 14:49:01 2011
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    connected to target database: QFUNDRAC (DBID=3138886598)
    connected to auxiliary database: QFUNDRAC (not mounted)
    RMAN> duplicate target database for standby from active database;
    Starting Duplicate Db at 21-JUL-11
    using target database control file instead of recovery catalog
    allocated channel: ORA_AUX_DISK_1
    channel ORA_AUX_DISK_1: SID=63 device type=DISK
    contents of Memory Script:
       backup as copy reuse
       targetfile  '/u01/app/oracle/product/11.2.0/db_1/dbs/orapwqfundrac1' auxiliary format
    '/u01/app/oracle/product/11.2.0/db_1//dbs/orapwpoorna'   ;
    executing Memory Script
    Starting backup at 21-JUL-11
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=10 instance=qfundrac1 device type=DISK
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of Duplicate Db command at 07/21/2011 14:49:29
    RMAN-03015: error occurred in stored script Memory Script
    RMAN-03009: failure of backup command on ORA_DISK_1 channel at 07/21/2011 14:49:29
    ORA-17629: Cannot connect to the remote database server
    ORA-17627: ORA-01017: invalid username/password; logon denied
    ORA-17629: Cannot connect to the remote database server
    Here I was able to connect to my auxiliary database, as shown below:
    [oracle@rac1 dbs]$ rman target /
    Recovery Manager: Release 11.2.0.1.0 - Production on Thu Jul 21 15:00:10 2011
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    connected to target database: QFUNDRAC (DBID=3138886598)
    RMAN> connect auxiliary sys/qfundracdba@poorna
    connected to auxiliary database: QFUNDRAC (not mounted)
    Can anyone please help me?
    Thanks & Regards
    Poorna Prasad.S

    Hi All,
    Can anyone please look through both of my parameter files and tell me if anything is wrong?
    Primary database parameters:
    qfundrac1.__db_cache_size=2818572288
    qfundrac2.__db_cache_size=3372220416
    qfundrac1.__java_pool_size=16777216
    qfundrac2.__java_pool_size=16777216
    qfundrac1.__large_pool_size=16777216
    qfundrac2.__large_pool_size=16777216
    qfundrac1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
    qfundrac2.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
    qfundrac1.__pga_aggregate_target=4294967296
    qfundrac2.__pga_aggregate_target=4294967296
    qfundrac1.__sga_target=4294967296
    qfundrac2.__sga_target=4294967296
    qfundrac1.__shared_io_pool_size=0
    qfundrac2.__shared_io_pool_size=0
    qfundrac1.__shared_pool_size=1375731712
    qfundrac2.__shared_pool_size=855638016
    qfundrac1.__streams_pool_size=33554432
    qfundrac2.__streams_pool_size=0
    *.audit_file_dest='/u01/app/oracle/admin/qfundrac/adump'
    *.audit_trail='db'
    *.cluster_database=true
    *.compatible='11.2.0.0.0'
    *.control_files='+ASM_DATA2/qfundrac/controlfile/current.256.754410759'
    *.db_block_size=8192
    *.db_create_file_dest='+ASM_DATA1'
    *.db_create_online_log_dest_1='+ASM_DATA2'
    *.db_domain=''
    *.DB_FILE_NAME_CONVERT='/u02/poorna/oradata/','+ASM_DATA1/','/u02/poorna/oradata','+ASM_DATA2/'
    *.db_name='qfundrac'
    *.db_recovery_file_dest_size=40770732032
    *.DB_UNIQUE_NAME='qfundrac'
    *.diagnostic_dest='/u01/app/oracle'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=qfundracXDB)'
    *.fal_client='QFUNDRAC'
    *.FAL_SERVER='poorna'
    qfundrac2.instance_number=2
    qfundrac1.instance_number=1
    *.LOG_ARCHIVE_CONFIG='DG_CONFIG=(qfundrac,poorna)'
    *.LOG_ARCHIVE_DEST_1='LOCATION=+ASM_FRA VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=qfundrac'
    *.LOG_ARCHIVE_DEST_2='SERVICE=boston ASYNC  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=poorna'
    *.LOG_ARCHIVE_DEST_STATE_1='ENABLE'
    *.LOG_ARCHIVE_DEST_STATE_2='ENABLE'
    *.LOG_ARCHIVE_FORMAT='%t_%s_%r.arc'
    *.LOG_ARCHIVE_MAX_PROCESSES=30
    *.LOG_FILE_NAME_CONVERT='/u02/poorna/oradata/','+ASM_DATA1/','/u02/poorna/oradata','+ASM_DATA2/'
    *.open_cursors=300
    *.pga_aggregate_target=4294967296
    *.processes=300
    *.remote_listener='racdb-scan.qfund.net:1521'
    *.REMOTE_LOGIN_PASSWORDFILE='EXCLUSIVE'
    *.sec_case_sensitive_logon=FALSE
    *.sessions=335
    *.sga_target=4294967296
    *.STANDBY_FILE_MANAGEMENT='AUTO'
    qfundrac2.thread=2
    qfundrac1.thread=1
    qfundrac1.undo_tablespace='UNDOTBS1'
    qfundrac2.undo_tablespace='UNDOTBS2'
    And my standby database parameter file:
    poorna.__db_cache_size=314572800
    poorna.__java_pool_size=4194304
    poorna.__large_pool_size=4194304
    poorna.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
    poorna.__pga_aggregate_target=343932928
    poorna.__sga_target=507510784
    poorna.__shared_io_pool_size=0
    poorna.__shared_pool_size=176160768
    poorna.__streams_pool_size=0
    *.audit_file_dest='/u01/app/oracle/admin/poorna/adump'
    *.audit_trail='db'
    *.compatible='11.2.0.0.0'
    *.control_files='/u01/app/oracle/oradata/poorna/control01.ctl','/u01/app/oracle/flash_recovery_area/poorna/control02.ctl'
    *.db_block_size=8192
    *.db_domain=''
    #*.db_name='poorna'
    #*.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
    *.db_recovery_file_dest_size=4039114752
    *.diagnostic_dest='/u01/app/oracle'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=poornaXDB)'
    *.local_listener='LISTENER_POORNA'
    *.memory_target=849346560
    *.open_cursors=300
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sec_case_sensitive_logon=FALSE
    *.undo_tablespace='UNDOTBS1'
    ############### STAND By PARAMETERS ########
    DB_NAME=qfundrac
    DB_UNIQUE_NAME=poorna
    LOG_ARCHIVE_CONFIG='DG_CONFIG=(poorna,qfundrac)'
    #CONTROL_FILES='/arch1/boston/control1.ctl', '/arch2/boston/control2.ctl'
    DB_FILE_NAME_CONVERT='+ASM_DATA1/','/u02/poorna/oradata/','+ASM_DATA2/','/u02/poorna/oradata'
    LOG_FILE_NAME_CONVERT= '+ASM_DATA1/','/u02/poorna/oradata/','+ASM_DATA2/','/u02/poorna/oradata'
    LOG_ARCHIVE_FORMAT=log%t_%s_%r.arc
    LOG_ARCHIVE_DEST_1= 'LOCATION=/u02/ARCHIVE/poorna  VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=poorna'
    LOG_ARCHIVE_DEST_2= 'SERVICE=qfundrac ASYNC  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)  DB_UNIQUE_NAME=qfundrac'
    LOG_ARCHIVE_DEST_STATE_1=ENABLE
    LOG_ARCHIVE_DEST_STATE_2=ENABLE
    REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
    STANDBY_FILE_MANAGEMENT=AUTO
    FAL_SERVER=qfundrac
    FAL_CLIENT=poorna
    Thanks & Regards,
    Poorna Prasad.S
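    For what it's worth, the ORA-17627: ORA-01017 pair during an active-database duplicate is classically a password-file mismatch rather than a parameter-file problem: the auxiliary has to authenticate SYS against the primary over the network. A hedged sketch of the usual check follows. The ORACLE_HOME is the one visible in the RMAN output; the helper only derives the conventional password-file path, and the orapwd/scp commands are printed rather than executed so they can be reviewed first (`<sys_password>` and `standby_host` are placeholders):

    ```shell
    #!/bin/sh
    # Derive the conventional password file path: $ORACLE_HOME/dbs/orapw<SID>
    pwfile() {
        echo "$1/dbs/orapw$2"
    }

    OH=/u01/app/oracle/product/11.2.0/db_1   # ORACLE_HOME from the RMAN output
    SRC=$(pwfile "$OH" qfundrac1)            # primary instance password file
    DST=$(pwfile "$OH" poorna)               # standby password file

    # Printed, not executed: recreate the file on the primary with a known SYS
    # password (ignorecase=y is often needed on 11.2.0.1 when
    # sec_case_sensitive_logon=FALSE, as in these parameter files), then copy
    # it to the standby host under the standby SID's name.
    echo "orapwd file=$SRC password=<sys_password> ignorecase=y"
    echo "scp $SRC standby_host:$DST"
    ```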

  • PARTNER DETERMINATION FROM PRODUCT

    Hi Experts,
    In a service transaction I want to determine one partner (vendor) from the product master.
    In the product master, under Relationships on the Vendor tab, I have maintained the details of the vendor. When the product ID is entered in the transaction, this vendor has to be determined. Is that possible? If you have any ideas, please tell me.
    Thanks in advance.
    Nadh

    Hi Nadh,
    Vendor determination from the product master can certainly be achieved.
    For your partner function Vendor, use an access sequence that picks the vendor from the product master:
    1. First create an access sequence, say ZVEN. Make a new entry in it and, in the drop-down, choose the entry that reads something like 'vendor from product master' (I am not sure of the exact name).
    2. Put this access sequence on the partner function Vendor of your partner determination procedure, and assign that procedure to the transaction type.
    Hope this helps.
    Regards,
    Shalini Chauhan
