Refreshing DBCP following database restarts

My DBA needed to restart the database server while I was away on leave, and the result was a Broken Pipe error being displayed to the users of one of my web applications. I believe others have experienced the same problem (http://freeroller.net/comments/founder_chen/Weblog/commons_dbcp_pool_broken_pipe).
Does anyone know a way of being able to recover from this error without having to restart the web application? Is there a way of refreshing the connection pool automatically when this error is encountered?
I am using Tomcat and Oracle via the Oracle JDBC driver.
Thanks,
Pete

Try adding a validation query to your connection pool and setting the testOnBorrow parameter to true.
I haven't tested it, but it should work...
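For example, here is a minimal sketch of that idea using Commons DBCP's BasicDataSource directly (assuming Commons DBCP 1.x and Oracle; the connect string, credentials and class name are placeholders, not taken from the original post):

     import javax.sql.DataSource;
     import org.apache.commons.dbcp.BasicDataSource;

     public class ValidatedPool {
         // Builds a DBCP pool that tests each connection as it is handed out, so a
         // connection broken by a database restart is discarded instead of reaching the app.
         public static DataSource create() {
             BasicDataSource ds = new BasicDataSource();
             ds.setDriverClassName("oracle.jdbc.driver.OracleDriver");
             ds.setUrl("jdbc:oracle:thin:@dbhost:1521:ORCL"); // placeholder connect string
             ds.setUsername("scott");                         // placeholder credentials
             ds.setPassword("tiger");
             ds.setValidationQuery("SELECT 1 FROM DUAL");     // cheap query run only to test the connection
             ds.setTestOnBorrow(true);                        // validate on every borrow
             return ds;
         }
     }

If the pool is defined as a JNDI Resource in Tomcat, the same two settings are exposed as the validationQuery and testOnBorrow attributes of the Resource element.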

Similar Messages

  • What is the procedure to be followed while restarting the Exchange server?

    Dear All,
    Please provide the procedure to be followed while restarting the Exchange server, i.e. what is the sequence for stopping and restarting the Exchange server?
    Regards
    R Nantha Ram

    As suggested above, there is no special procedure to follow unless it is a DAG.
    If you are running Exchange and a DC on the same OS, stop the Exchange services first and then restart the server (the commands below stop the Exchange-related services):
    net stop msexchangeadtopology /y
    net stop msexchangefba /y
    net stop msftesql-exchange /y
    net stop msexchangeis /y
    net stop msexchangesa /y
    net stop iisadmin /y
    net stop w3svc /y
    This is because the DC stops more quickly than Exchange; Exchange is then unable to write to the domain controller and has to be "killed" by the operating system.
    This repeated "killing" of the Exchange services, instead of allowing them to shut down gracefully, is not good for the database and is one of the prime reasons for recommending that Exchange is not installed on a domain controller.
    A better option is to stop the services before you begin to shut down the server. This also makes the server shut down more quickly, because it is not waiting for the services to time out, and can significantly decrease the shutdown/reboot time of the server.
    MAS

  • Database restarting daily, why?

    My RDBMS is Enterprise Edition Release 11.2.0.3.0 - 64bit Production running on RHEL 6 x86_64 on a virtual server. Each morning at 05:58 the database restarts. Looking at the various trace files generated at that time, I see messages such as:
    CELL communication is configured to use 0 interface(s):
    kcrfwy: minimum sleep is 9974 usecs (overhead is 8974 usecs)
    kgfmIsAppliance=0 (due to init)
    There is a vktm trace file:
    VKTM running at (1)millisec precision with DBRM quantum (100)ms
    [Start] HighResTick = 1383573491638525
    kstmrmtickcnt = 0 : ksudbrmseccnt[0] = 1383573491
    kwqmnich: current time:: 13: 58: 21: 0
    kwqmnich: instance no 0 repartition flag 1
    kwqmnich: initialized job cache structure
    *** 2013-11-04 05:58:23.677
    kwqinfy: Call kwqrNondurSubInstTsk
    PQQ: Skipping service checks
    kgsksysstop: blocking mode (2) timestamp: 1383573500394276
    kgsksysstop: successful
    kgsksysresume: successful
    *** 2013-11-04 05:59:14.924
    PQQ: Active Services changed
    PQQ: Old service table
    SvcIdx  SvcId Active ActDop
    PQQ: New service table
    SvcIdx  SvcId Active ActDop
         1      1      1      0
         2      2      1      0
         3      3      1      0
         4      4      1      0
    Control file enqueue hold time tracking dump at time: 1470029
      1: 1140ms (rw) file: kct.c line: 2522 count: 140733193388033 total: 1140ms time: 1469406
    The alert log shows the shutdown/startup and all looks normal. The audit trail file shows a connection as sysdba at this time (that is to be expected, I suppose).
    The vktm trace file suggests to me that the problem may be due to the virtual keeper of time, and I wondered if it is perhaps getting out of sync with the clock on the virtual server. I spoke with one of the system administrators, who suggested some checks. One is that the entry 'tinker panic 0' should be in the /etc/ntp.conf file, but I don't find that entry. The VM tools appear to be in place:
    -bash-3.2$ grep "buildNr =" /usr/bin/vmware-config-tools.pl
    $buildNr = '8.6.11 build-1180212';
    -bash-3.2$ /sbin/lsmod | grep vm
    vmmemctl 46424  0
    vmci            83804  1 vsock
    vmxnet                55556  0
    vmxnet3               88544  0
    At this point I am stuck and not sure what else to look at. I have done a bit of searching but have not found anything which seems to fit my situation. If anyone has encountered this or has any suggestions I would be most grateful.
    Thank you.
    Bill Wagman

    I understand your desire to 'plan' and why that might normally make sense, but if planning is needed before your org can change its most important passwords then you have a large, gaping security hole.
    One of the key tasks that ANY organization must be able to perform AT A MOMENT'S NOTICE is changing key passwords. Implicit in that task is the requisite documentation that shows the EXACT steps that need to be taken and clearly identifies ALL possible impacts (i.e. other password changes needed, batch process impacts, etc.).
    The IT manager, or lead DBA, should be able to put their hands on security documentation that identifies that process and the steps to implement it.
    As part of the process of resolving your current issue, I suggest you broach that LARGER deficiency with your management and make immediate plans to address it.

  • Queuing fails after database restart

    Hi All
    Currently we are on a test run of our application involving Advanced Queuing. The application involves queuing between two databases over a dblink.
    When we create queue tables, create queues, start queues, add subscribers and enable propagation on both databases, it works fine. But why doesn't queuing withstand a database restart?
    Propagation to queues over dblinks fails the moment I restart the databases. What could the error be?
    I have to recreate all queue tables and queues, add subscribers and enable propagation all over again if I restart the databases.
    My enqueuing process is initiated by a trigger on a table, with visibility on commit.
    And I have a dequeuing process based on dbms_aq.listen.
    Kindly respond.
    Regards
    Sushant

    Check this thread:
    Message Bridge - consumer count Increasing

  • Connection pool testing for dealing with database restarts

    How do you configure the app server to test JDBC connections before handing them to the application? The app server needs to accommodate database restarts while the app is running.
    We have an EAR file built with Ant (we use Eclipse 3.x to develop the app) and we use the deploy tool to install it on the engine at 6.40 SP15. We have set up a datasource in the project options, but I don't see how to configure it so that the engine tests JDBC connections before giving them to the application. We need to guard against database restarts, and other app servers that I have used have an option that takes care of this detail. I cannot get the deploy tool help to appear due to an invalid URL being requested by the tool.

    Craig,
    you are right; in Visual Admin it appears that any option to test a pool-served connection is absent.
    Remedy: in the Additional tab
    - check expiration control and set
    - connection lifetime to, say, 60 (seconds) and
    - cleanup thread to, say, 30
    This way you can't fully avoid a bad connection, but you diminish the likelihood of catching one. A defensive check in the application code can also help; see the sketch below.
    Regards
    Gregor
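    Since the engine itself offers no test-on-borrow option, the defensive check mentioned above can be done in application code: run a trivial test query on each borrowed connection and fetch a replacement if it fails. A minimal plain-JDBC sketch of that idea (the DataSource lookup and the Oracle-style test query are assumptions, not engine-specific API):

         import java.sql.Connection;
         import java.sql.SQLException;
         import java.sql.Statement;
         import javax.sql.DataSource;

         public class ConnectionChecker {
             // Borrow a connection and verify it with a cheap query, retrying once so that a
             // connection left broken by a database restart is discarded rather than used.
             public static Connection borrowChecked(DataSource pool) throws SQLException {
                 for (int attempt = 0; attempt < 2; attempt++) {
                     Connection con = pool.getConnection();
                     Statement stmt = null;
                     try {
                         stmt = con.createStatement();
                         stmt.executeQuery("SELECT 1 FROM DUAL"); // cheap validation query (Oracle syntax)
                         return con;                              // connection survived the test
                     } catch (SQLException broken) {
                         try { con.close(); } catch (SQLException ignore) { /* connection already dead */ }
                     } finally {
                         if (stmt != null) {
                             try { stmt.close(); } catch (SQLException ignore) { /* best effort */ }
                         }
                     }
                 }
                 throw new SQLException("No valid connection available after retry");
             }
         }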

  • Refreshing JDBC Connection Pools after database restart

     

    Hi Patricia,
    Did you try to turn on testOnReserve for your connection pool?
    Regards,
    Slava Imeshev
    "Patricia Albano" <[email protected]> wrote in message
    news:3c0e3bff$[email protected]..
    Hello,
    I have a similar problem.
    I have WLS 6.0 SP2 running on SunOS 5.6 Generic_105181-23 sun4u sparc SUNW,Ultra-60.
    I have a JDBC pool created in my WebLogic server, with the com.informix.jdbc.IfxDriver driver. However, in spite of the setting for the refresh period, the connections are not refreshed. Whenever the DB is down, or the connections fail for some other reason, they just don't recover on their own.
    We tried to use the WebLogic jDriver for Informix, but it doesn't solve our problem, because we have some requirements that only work with the com.informix.jdbc.IfxDriver.
    Is there some patch that we can use to solve our problem, like for the Oracle database?
    Thanks
    mail to: [email protected]

  • Refresh a dev database

    Hi
    I'm trying to refresh the dev database from prod through RMAN.
    Dev was already refreshed 3 months ago and we are planning it again, so the structure and datafile names are the same on both. Prod is about 100 GB.
    If I use RMAN, can it update the existing files, since the file structure is the same as prod? Or do I have to drop the database and follow the regular duplication process? (If I do that, I will have to keep the developers idle for a long time.)
    I'm trying to have very little downtime.
    Below is the script I'm using. Could anyone advise me?
    run{
    CONFIGURE AUXNAME FOR DATAFILE 1 TO '/u12/oradata/shark/system01.dbf';
    CONFIGURE AUXNAME FOR DATAFILE 2 TO '/u05/oradata/shark/undotbs01.dbf';
    CONFIGURE AUXNAME FOR DATAFILE 3 TO '/u20/oradata/shark/undotbs03.dbf';
    CONFIGURE AUXNAME FOR DATAFILE 4 TO '/u25/oradata/shark/undotbs02.dbf';
    CONFIGURE AUXNAME FOR DATAFILE 5 TO '/u05/oradata/shark/drsys01.dbf';
    CONFIGURE AUXNAME FOR DATAFILE 6 TO '/u18/oradata/shark/tools01.dbf';
    CONFIGURE AUXNAME FOR DATAFILE 7 TO '/u18/oradata/shark/users01.dbf';
    CONFIGURE AUXNAME FOR DATAFILE 8 TO '/u30/oradata/shark/xdb01.dbf';
    CONFIGURE AUXNAME FOR DATAFILE 9 TO '/u02/oradata/shark/u02_data.dbf';
    CONFIGURE AUXNAME FOR DATAFILE 10 TO '/u03/oradata/shark/u03_data.dbf';
    CONFIGURE AUXNAME FOR DATAFILE 11 TO '/u04/oradata/shark/u04_data.dbf';
    CONFIGURE AUXNAME FOR DATAFILE 12 TO '/u06/oradata/shark/u06_data.dbf';
    DUPLICATE shark DATABASE TO devl
    LOGFILE
    GROUP 1 (
    '/u04/oradata/shark/redo01.log',
    '/u06/oradata/shark/redo01b.rdo'
    ) SIZE 100M REUSE,
    GROUP 2 (
    '/u24/oradata/shark/redo02.log',
    '/u19/oradata/shark/redo02b.rdo'
    ) SIZE 100M REUSE,
    GROUP 3 (
    '/u16/oradata/shark/redo03.log',
    '/u20/oradata/shark/redo03b.rdo'
    ) SIZE 100M REUSE,
    GROUP 4 (
    '/u20/oradata/shark/redo04.log',
    '/u26/oradata/shark/redo04b.rdo'
    ) SIZE 100M REUSE,
    GROUP 5 (
    '/u26/oradata/shark/redo05.log',
    '/u19/oradata/shark/redo05b.rdo'
    ) SIZE 100M REUSE;
    Do you think the above would update the existing one...
    oops..
    ORACLE 9.2.0.7
    Sun os solaris 10
    Edited by: ORA_DBA2 on Sep 2, 2008 2:40 PM

    Or if you haven't got GRID
    1. Create A Backup Control File Script.
    First you need to obtain a script that will create a copy of the existing control file. This is usually carried out with the SVRMGRL utility using the following commands:
         CONNECT INTERNAL
         ALTER DATABASE BACKUP CONTROLFILE TO TRACE RESETLOGS;
    This creates a file in the trace file directory. The file usually has the extension '.trc' and will be located either in the directory defined by the parameter 'user_dump_dest', or if this parameter is undefined it will be in $ORACLE_HOME/rdbms/log. Edit this file with your favourite editor and remove the crud. Then rename it as "ctrl<NEW_SID>.sql," where <NEW_SID> will be the ORACLE_SID of the copied database.
    2. Modify The Script Created In The Previous Step.
    The CREATE CONTROLFILE command in the script ctrl<NEW_SID>.sql contains SQL, which might look something like this:
         CREATE CONTROLFILE REUSE DATABASE "OLD_SID" RESETLOGS ARCHIVELOG
         MAXLOGFILES 32
         MAXLOGMEMBERS 2
         MAXDATAFILES 32
         MAXINSTANCES 16
         MAXLOGHISTORY 1815
         LOGFILE
         GROUP 1 'E:\ORACLE\ORADATA\OLD_SID\REDO03.LOG' SIZE 1M,
         GROUP 2 'E:\ORACLE\ORADATA\OLD_SID\REDO02.LOG' SIZE 1M,
         GROUP 3 'E:\ORACLE\ORADATA\OLD_SID\REDO01.LOG' SIZE 1M
         DATAFILE
         'E:\ORACLE\ORADATA\OLD_SID\SYSTEM01.DBF',
         'E:\ORACLE\ORADATA\OLD_SID\RBS01.DBF',
         'E:\ORACLE\ORADATA\OLD_SID\TEMP01.DBF',
         'E:\ORACLE\ORADATA\OLD_SID\TOOLS01.DBF',
         'E:\ORACLE\ORADATA\OLD_SID\INDX01.DBF',
         'E:\ORACLE\ORADATA\OLD_SID\DR01.DBF',
         'E:\ORACLE\ORADATA\OLD_SID\WORK01.DBF',
         'E:\ORACLE\ORADATA\OLD_SID\TEMP02.DBF'
         CHARACTER SET WE8ISO8859P1
    Where the string <OLD_SID> is the Oracle SID of the original database. This should be changed to <NEW_SID>. Normally this will be contained somewhere in the full filespec (path + filename) of all redo logs, datafiles and control files. If it isn't, then it should have been. This entire document assumes that you have the SID somewhere in the full filespec of these crucial files, and furthermore that there are no embedded spaces or other weird characters in these filespecs. If you failed to observe these universal conventions when you set up your database, you should not try to use any of the procedures outlined in this document.
    3. Copy The Existing Database To The New Location.
    This will be a "cold" copy. So obviously you should make sure that the database is shutdown and all services are stopped before attempting to "cold" copy the database.
    If the copy is on the same host, you can use the DOS copy command (once the instance is shut down). If you lack the manual dexterity required for a keyboard, you can copy the files with a mouse. If the target is a remote host then you will have to copy to a mass storage device or copy across the network.
    On the target host you need to copy all parameter files and all files mentioned above to their new location. Make sure you preserve ownership and permissions. The copied init<OLD_SID>.ora should be renamed to init<NEW_SID>.ora, and any parameter files pointed to by an ifile parameter (e.g. parameter files such as config<OLD_SID>.ora) should be renamed to contain <NEW_SID> (e.g. config<NEW_SID>.ora).
    The datafiles and redo log files from the previous step also need to be renamed to contain the <NEW_SID> in the full filespec.
    4. Set Up Parameter Files For The New Database
    There may be several parameters that need to be edited in init<NEW_SID>.ora. In particular, you will need to edit the control_files parameter so that it points to the name and location that you want to use for the new control files. You will also have to change the DB_NAME parameter in init<NEW_SID>.ora; change it to the new name for your database. Usually this corresponds to the <NEW_SID>. Any 'ifile' parameters will need to be edited to point to the new name of the include file in the new location.
    5. Create The Control File For The New Database.
    Now edit the file ctrl<NEW_SID>.sql and strip out everything up to and including the STARTUP NOMOUNT command. Remove the ALTER DATABASE OPEN command and everything after it. This leaves a command which just creates the controlfile.
    Now change all the appropriate instances of <OLD_SID> to <NEW_SID>. Unless you have a very good reason for doing so, you should make the database name the same as <NEW_SID>. Save this script in an area where you will find it again.
    Make sure that your ORACLE_SID is set to <NEW_SID>. Then use the SVRMGRL utility to run the following commands:
         STARTUP NOMOUNT
         @ctrl<NEW_SID>
    6. Create The Services For NEW_SID
    Create the services "OracleService<Sid>" and the "OracleStart<Sid>" for "NEW_SID" with the following command:
    oradim -new -sid <NEW_SID> -intpwd <password> -startmode auto -pfile <path_name>
    7. Run 'CREATE CONTROLFILE' For <NEW_SID>
    Make sure that your current directory is the one that contains ctrl<NEW_SID>.sql
    Set your ORACLE_SID to <NEW_SID>
    Startup SVRMGRL and enter the following:
         CONNECT INTERNAL
         STARTUP NOMOUNT PFILE=<full path>\init<NEW_SID>.ora
         @ctrl<NEW_SID>
         ALTER DATABASE OPEN RESETLOGS;
    Automating This Process.
    Creating a clone of an Oracle database is the type of thing that you might wish to carry out regularly. You might do this on a regular basis because:
    * You wish to create a test instance of your production system.
    * You wish to create another working copy of your production system for reporting purposes or for some form of off-line processing.
    Obviously if you had to do this regularly you would write a script. If you are proficient at writing CMD scripts you could write a .cmd or .bat file to carry out these steps.
    You would have to clobber the <NEW_SID> instance before you created the clone. But that could be easily accomplished in a CMD script. However some of the things that would be cumbersome in a CMD script would be:
    * Changing the CREATE CONTROLFILE script when your database is altered (e.g. when you add a datafile to a tablespace).
    * Creating a third copy (e.g. NEW_SID1).
    * Creating a copy from an entirely different database.
    Obviously these contingencies can be handled ... but usually it means re-writing or re-creating the CMD script.
    Perl is so versatile that it does not have these problems.
    coldarch.pl
    The coldarch perl script creates a cold archive of an Oracle database on Windows NT/2000. The archive is a separate directory. It could be on the same machine as the host. However, in the interest of data integrity, it would make more sense to place it on a remote host (via the network).
    The script relies on site specific variables that are set in the common.pl file.
    The coldarch command is intended for incorporation into a script file. Ideally it should be incorporated into a routine backup schedule. However, if it is being invoked from a command line, the syntax would be as follows:
         coldarch intrnl_passwd db_name
    Where:
    intrnl_passwd is the Internal password of the database.
    db_name is the database_name (should also be the SID).
    The logic of the script is as follows:
    1. Validate the command line parameters and the common.pl variables.
    2. Create a lock file, to prevent a second copy of the program from running.
    3. Initialise strings for a logfile and a Header File (for the archive directory)
    4. Use SQL queries to gather information about control_files, datafiles, redo logs and locations of important files.
    5. Create a script that will run the CREATE CONTROLFILE command.
    6. Shutdown the database.
    7. Add the datafiles to the zip archive.
    8. Copy the parameter files and included files (ifiles) to the archive directory.
    9. Add the control file and redo logs to the zip archive.
    10. Depending on the value of the variable $OPEN_TYPE, open the database. Some sites may choose not to open the database, because the coldarch procedure is integrated into the backup schedule. The variable $OPEN_TYPE is hard-coded in the coldarch script.
    coldclone.pl
    The coldclone perl script restores a cold archive of an Oracle database to a Windows NT/2000 host. The Target SID must differ from the source SID.
    The script relies on the same site specific variables that the coldarch script relies on. These are set in the common.pl file.
    The coldclone command is intended for incorporation into a script file. If it is being invoked from the command line, the syntax would be as follows:
         coldclone src_dir intrnl_passwd db_name
    Where:
    src_dir is the directory where the cold archive resides.
    intrnl_passwd is the Internal password of the database.
    db_name is the database_name (should also be the SID).
    The logic of the script is as follows:
    1. Validate the command line parameters and the common.pl variables.
    2. Verify that the database is shutdown.
    3. Create a lock file, to prevent a second copy of the program from running.
    4. Initialise strings for a logfile.
    5. Parse the archive header, to determine the location of various files.
    6. Verify that the database is a clone -- this script has been expressly written to create a clone. You would need a different script to do a restore.
    7. Completely clobber the existing database (use the oradim utility to blow it away). You have passed the point of no return.
    8. Unpack the datafiles from the zip archive.
    9. Copy the parameter files and included files (ifiles) to the archive directory.
    10. Construct an SQL query from the trace file (created by coldarch), which contains the CREATE CONTROLFILE.
    11. Create the database anew with oradim. Start it up (NOMOUNT) and run the CREATE CONTROLFILE script. Open the database. (ALTER DATABASE OPEN).

  • Error in reports after database restart

    Hi,
    I am making reports using OBI with Oracle 10.1.0. I made some charts, pivot tables and some graphs. These work fine until I restart the database. When I restart the database, the tables are shown again on the dashboard, but none of the pictorial reports work until I make them again.
    It gives me an error like
    No connection could be made because the target machine actively refused it. [Socket:888]
    while a table built from the same query returns results. Only the graphical items give the error, even after I redraw them.
    Thanks

    Step 1: Create a Cachepurge.txt file with the following command:
    call SAPurgeAllCache();
    Step 2: Transfer this Cachepurge.txt file to this path (it can be copied anywhere):
    D:\oracleBI\server\Bin
    Step 3:
    Go to the Command Prompt
    cd D:\oracleBI\server\Bin
    nqcmd -d Analyticsweb -u Administrator -p Administrator -s D:\oracleBI\server\Bin\Cachepurge.txt -o D:\output.txt
    -u --> User
    -p --> Password
    If you installed OBIEE on C:, please replace D: with C:.
    Thanks,
    Chandra

  • System Crash and Database Restart

    Hi,
    I have a CRM 2007 system with MaxDB 7.7.
    The system had an unplanned restart during the night.
    Now I have rebooted and I am trying to get the database back online.
    Any dbmcli command I run, such as db_state or db_online, for example
    dbmcli -n lsadc1 -d DC1 -u SUPERDBA,xxxxxxx db_online
    just returns with the message:
    "Error! Connection failed to node lsadc1 for database DC1:
    Connection broken"
    Some dbmcli commands do work such as:
    E:\sapdb\DC1\db\pgm>dbmcli version
    OK
    version,os,dbroot,logon,code,swap
    "7.7.02","WIN32","E:\sapdb\DC1\db",True,ASCII,2
    Can anyone give an information on this?
    Thanks.
    Edited by: Peter O'Hara on Oct 30, 2008 2:02 PM
    Edited by: Peter O'Hara on Oct 30, 2008 2:03 PM
    A bit more information on this.
    I have found that if I run Xserver in the foreground and use the dbmcli command with an IP address rather than the server name, Xserver displays the following message:
    "18456 ERROR: Could not read from 'teo110_PipeBase::eo110_ReadPipe' pipe, rc = 109"
    Thanks again.
    Peter

    Hi Peter,
    as you're a SAP customer, please do open a support message for this.
    regards,
    Lars

  • How to schedule macros to open a report , refresh and update database?

    Hi,
    We have the below set of steps that work in a Deski environment. It works fine on XI R3.1 Fix Pack 1.5, but I am unable to schedule it and get the macro to run, although I've included the entire macro code in Private Sub DocumentAfterRefresh().
    Refresh of report
    Step 1 : The report is refreshed. The refreshed data contains the 10 report names to be opened by the macro and the prompt values that these reports need to be refreshed with.
    Macros Functionality:
    Step 1 : Opens the 1st of the 10 reports from the repository in the Deski thick client and updates the database by setting a processing flag to Y or N for the report being processed.
    Step 2 : Refreshes the report with the prompt values obtained from the first refresh and saves the refreshed data to a CSV or PDF at a location.
    Step 3 : Updates the database to record whether the report has been generated or not, and then deletes the local copy of the output CSV and PDF.
    The above steps repeat until all 10 reports have been opened, refreshed and the database updated.
    All of this works in the Deski thick client with refresh and macros.
    However, when I schedule it, the macro does not seem to run.
    Is the above scenario even possible to replicate via a scheduled process?
    Note : The reports need to be retained in Deski itself.
    Please help!

    Scheduling Background Jobs 
    Use
    You can define and schedule background jobs in two ways from the Job Overview:
    Directly from Transaction SM36. This is best for users already familiar with background job scheduling.
    The Job Scheduling Wizard. This is best for users unfamiliar with SAP background job scheduling. To use the Job Wizard, start from Transaction SM36 and either select Goto → Wizard version or simply use the Job Wizard button.
    Procedure
    Call Transaction SM36 or choose CCMS → Jobs → Definition.
    Assign a job name. Decide on a name for the job you are defining and enter it in the Job Name field.
    Set the job’s priority, or "Job Class":
    High priority: Class A
    Medium priority: Class B
    Low priority: Class C
    In the Target server field, indicate whether to use system load balancing.
    For the system to use system load balancing to automatically select the most efficient application server to use at the moment, leave this field empty.
    To use a particular application server to run the job, enter a specific target server.
    If spool requests generated by this job are to be sent to someone as email, specify the email address. Choose the Spool list recipient button.
    Define when the job is to start by choosing Start Condition and completing the appropriate selections. If the job is to repeat, or be periodic, check the box at the bottom of this screen.
    Define the job’s steps by choosing Step, then specify the ABAP program, external command, or external program to be used for each step.
    Save the fully defined job to submit it to the background processing system.
    When you need to modify, reschedule, or otherwise manipulate a job after you've scheduled it the first time, you'll manage jobs from the Job Overview.
    Note: Release the job so that it can run. No job, even those scheduled for immediate processing, can run without first being released.
    For a simple job scheduling procedure, see the R/3 Getting Started Guide.

  • TNS Listener -ORA-12514 error following database shutdown - Oracle 11g

    Hi
    I have hit a problem with my oracle development database.
    When in SQL*Plus I executed the shutdown command, but nothing happened for several minutes; it just hung. No messages were displayed on the screen. The only option was to close the command window. When I open SQL*Plus again and enter the username and password as sysdba, I get ORA-12514: TNS listener could not resolve the service in the descriptor. This is odd, as I could log on and use the database before. Thinking the services were still shutting down, I waited, and then, using the Windows snap-in tool for Oracle 11g, I restarted the services for the database, but this has had no effect.
    Even using EM made no difference because I couldn't log on to perform a startup or recovery.
    So, please does anyone know how I can fix this?
    Thanks

    user633278 wrote:
    So, please does anyone know how I can fix this?
    Open a command window and do EXACTLY as below, line for line:
    sqlplus
    / as sysdba
    shutdown abort
    startup
    exit
    COPY command & results from above then PASTE all back here

  • Refresh on logical database

    Hello Gurus,
    I need your help to solve one issue, please.
    I made a copy of the RM07RESL report as a Z report; this report gets its data via a logical database.
    I need to create a button for the user to refresh the data. I tried to use the FM LDB_PROCESS, but it returns code 3, because the logical database is already in use, being defined in the program attributes.
    Thanks in advanced.
    Best Regards.
    Paulo

    Hi,
    I don't think that it's possible. You have to go back to your selection screen and run it again. This is another reason not to use logical databases (I understand that you can't change that in this case).
    Cheers

  • OEM Repository Database Restarts Daily

    Greetings,
    I am running OEM 12.1.0.3.0 on a VM running Linux x86_64. The repository is Oracle 11.2.0.3.0 running on a separate VM host also running Linux x86_64.
    On a daily basis at 05:57:21 (give or take a few seconds) my OEM repository is restarted -
    Shutting down instance (immediate)
    Stopping background process SMCO
    Shutting down instance: further logons disabled
    <remaining shutdown steps displayed in the alert logs>
    and it immediately restarts. I am unable to determine why. The host is not restarted. I have looked at dba_jobs in the repository and see nothing running which would do this. I have looked at OEM jobs and don't see anything scheduled there which would do this. I am at a loss to explain this and am wondering if anyone else has encountered this or has an explanation. Is this expected behavior or is my setup an anomaly?
    Thank you.
    Bill Wagman

    Bill,
    This would not be expected behaviour. If your repository DB restarts, there is a chance that the OMS will also restart if it cannot access the repository DB for a long duration. This will affect console usage.
    Do you see any other errors before this shutdown?
    You could also check if there are any DB trace files created around the same time.
    Recommend posting this in the Database forum for further investigation.
    Regards,
    Smitha.

  • How to handle a (Oracle) database restart

    Hi,
    For a cold backup we stop and start the database.
    When using TopLink for database access, I have a problem when the database is restarted: "500 Internal Server Error: Exception [TOPLINK-4002] ... oracle.toplink.exceptions.DatabaseException ... Broken Pipe ... etc."
    Is there a simple solution? (Restarting the J2EE app solves the problem, but that is not our preferred way.)
    How is this handled when we use BC4J?
    Thanks,
    Stephan van Hoof

    Stephan,
    Can you post the configuration you are using (e.g. app server with TopLink, Java objects or EJB CMP, version of TopLink, with or without JDev, etc.) and the steps to reproduce the issue?
    Thanks,
    Anuj Jain
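    Until the root cause is found, one application-level stopgap (independent of TopLink's own API) is to treat the first failure after a database restart as retryable: release the dead connection and run the operation once more on a fresh one. A rough plain-JDBC sketch of that pattern, suitable only for idempotent operations; the DataSource and the Work callback here are illustrative, not part of TopLink:

         import java.sql.Connection;
         import java.sql.SQLException;
         import javax.sql.DataSource;

         public class RetryOnce {
             // A unit of work to run against a borrowed connection.
             public interface Work<T> {
                 T run(Connection con) throws SQLException;
             }

             // Runs the work, retrying exactly once if the first attempt fails (for example
             // with a "broken pipe" on a connection that died when the database restarted).
             public static <T> T withRetry(DataSource pool, Work<T> work) throws SQLException {
                 SQLException last = null;
                 for (int attempt = 0; attempt < 2; attempt++) {
                     Connection con = pool.getConnection();
                     try {
                         return work.run(con);     // succeeds on a healthy connection
                     } catch (SQLException e) {
                         last = e;                 // assume the first failure was a stale connection
                     } finally {
                         try { con.close(); } catch (SQLException ignore) { /* may already be dead */ }
                     }
                 }
                 throw last;                       // both attempts failed: report the real error
             }
         }

    The same idea can be applied at whatever layer catches the oracle.toplink.exceptions.DatabaseException in your setup.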

  • Database refresh from standby database

    All,
    OS: Sun. Version: 11.1.0.7
    I'm doing a database refresh using standby database datafiles on a different server.
    1) Disabled the Data Guard broker in the standby database.
    2) Copied all datafiles, with the standby in mount stage, to the target server (a different server).
    3) Created the controlfile and started the database in mount stage on the different server.
    4) Applied all archive log files.
    5) But the database is still not recovered.
    SQL> recover database using backup controlfile until cancel;
    ORA-00279: change 10401147296308 generated at 01/13/2012 09:03:43 needed for
    thread 1
    ORA-00289: suggestion :
    /u01/app/oracle/admin/qecgdeva/arch/qecgdeva_727091849_1_57879.arc
    ORA-00280: change 10401147296308 for thread 1 is in sequence #57879
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    /u01/app/oracle/admin/qecgdeva/arch/qecgdeva_727091849_1_57879.arc
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    /u01/app/oracle/admin/qecgdeva/arch/qecgdeva_727091849_1_57885.arc
    ORA-00279: change 10401156453328 generated at 01/13/2012 10:48:01 needed for
    thread 1
    ORA-00289: suggestion :
    /u01/app/oracle/admin/qecgdeva/arch/qecgdeva_727091849_1_57886.arc
    ORA-00280: change 10401156453328 for thread 1 is in sequence #57886
    ORA-00278: log file
    '/u01/app/oracle/admin/qecgdeva/arch/qecgdeva_727091849_1_57885.arc' no longer
    needed for this recovery
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    CANCEL
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01196: file 1 is inconsistent due to a failed media recovery session
    ORA-01110: data file 1:
    '/mounts/qecgdeva_data/oradata/qecgdeva/dbfiles/qecgprod_system.dbf'
    ORA-01112: media recovery not started
    SQL> alter database open resetlogs;
    alter database open resetlogs
    ERROR at line 1:
    ORA-01196: file 1 is inconsistent due to a failed media recovery session
    ORA-01110: data file 1:
    '/mounts/qecgdeva_data/oradata/qecgdeva/dbfiles/qecgprod_system.dbf'
    Am I missing any steps here? Please advise me on this.
    Thanks.

    Hi,
    When did you copy the files from the standby server to the target server? - Yesterday.
    Is the standby in sync with the primary, or is MRP still running? - Yes, the standby DB is running fine.
    You can even open the database with resetlogs after recreating the controlfile, with no need for recovery. -- > I was unable to open it, so I went on to apply the archive files.
    Do you have the archive for thread 1, sequence #57887? -- No, because sequence 57887 is not yet generated. I have applied all recently available archive log files.
    ORA-00289: suggestion :
    /u01/app/oracle/admin/qecgdeva/arch/qecgdeva_727091849_1_57886.arc
    ORA-00280: change 10401156453328 for thread 1 is in sequence #57886
    ORA-00278: log file
    '/u01/app/oracle/admin/qecgdeva/arch/qecgdeva_727091849_1_57885.arc' no longer
    needed for this recovery
    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    CANCEL
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01196: file 1 is inconsistent due to a failed media recovery session
    ORA-01110: data file 1:
    '/mounts/qecgdeva_data/oradata/qecgdeva/dbfiles/qecgprod_system.dbf'
    ORA-01112: media recovery not started
    SQL> alter database open resetlogs;
    alter database open resetlogs
    ERROR at line 1:
    ORA-01196: file 1 is inconsistent due to a failed media recovery session
    ORA-01110: data file 1:
    '/mounts/qecgdeva_data/oradata/qecgdeva/dbfiles/qecgprod_system.dbf'
    Edited by: 805877 on Jan 12, 2012 10:43 PM
