Createdb prob: orclrun.sh and (no) connect internal

I installed Oracle 8.1.5.0.2 successfully on a Debian potato machine, but I have a problem creating a database. I used the approach described by John Salvo: run dbassist to generate the database creation scripts.
The dbname = e8
The SID = orcl (also env ORACLE_SID=ORCL)
When running the first script (orclrun.sh), there is a problem after connecting as user internal:
Oracle8i Enterprise Edition Release 8.1.5.0.2 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
SVRMGR> connect internal
Connected.
SVRMGR> startup nomount pfile = /u01/app/oracle/admin/e8/pfile/initorcl.ora
ORACLE instance started.
ORA-01012: not logged on
SVRMGR>
It logs off right after the pfile is read.
Right now I cannot shut Oracle down, because the user internal can no longer connect (it is asking for a password), and the users sys and system may not connect (database initialization in progress).
The following illustrates my strange problem:
Oracle8i Enterprise Edition Release 8.1.5.0.2 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
SVRMGR> connect internal
Connected.
SVRMGR> shutdown
ORA-01012: not logged on
SVRMGR> connect internal
Password:
ORA-12705: invalid or unknown NLS parameter value specified
SVRMGR>
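The ORA-12705 above usually comes from an invalid NLS_LANG value in the client environment rather than from the database itself; a small sketch (plain sh; the fallback behaviour is an assumption worth verifying on your release) to inspect and clear it before retrying svrmgrl:

```shell
#!/bin/sh
# ORA-12705 is commonly caused by a bad NLS_LANG in the environment.
# Show what the Oracle client would pick up:
echo "NLS_LANG=${NLS_LANG:-<not set>}"
# Clearing it lets the client fall back to its built-in default
# (AMERICAN_AMERICA.US7ASCII), which is usually enough to connect:
unset NLS_LANG
echo "after unset: NLS_LANG=${NLS_LANG:-<not set>}"
```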
I'm going to start all over again, this time using ORCL for both the dbname and the SID.
By the way, did any of you experience strange things with the root.sh script? In my case I had to link awk to the right executable and edit the script to change ORACLE_OWNER from the Unix UID to the user name.
-- Yeb

Hmm, I am almost certain I picked the dedicated server process when using dbassist.
Anyway, yesterday, after creating the database, I wanted to shut down and restart using svrmgrl and had the same problem again:
oracle@enschede8:~$ echo $ORACLE_SID
test
oracle@enschede8:~$ svrmgrl
Oracle Server Manager Release 3.1.5.0.0 - Production
(c) Copyright 1997, Oracle Corporation. All Rights Reserved.
Oracle8i Enterprise Edition Release 8.1.5.0.2 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
SVRMGR> connect internal
Connected.
SVRMGR> shutdown
ORA-01034: ORACLE not available
SVRMGR> quit
Server Manager complete.
After looking at the create scripts I decided to try this, and it worked:
oracle@enschede8:~$ cat svr
#!/bin/sh
ORACLE_SID=test
export ORACLE_SID
svrmgrl
oracle@enschede8:~$ ./svr
Oracle Server Manager Release 3.1.5.0.0 - Production
(c) Copyright 1997, Oracle Corporation. All Rights Reserved.
Oracle8i Enterprise Edition Release 8.1.5.0.2 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
SVRMGR> connect internal
Connected.
SVRMGR> shutdown
Database closed.
Database dismounted.
ORACLE instance shut down.
SVRMGR>
I don't know what causes this. I am using
GNU bash, version 2.03.0(1)-release (i386-pc-linux-gnu)
Copyright 1998 Free Software Foundation, Inc.
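The wrapper script works because svrmgrl inherits ORACLE_SID from the exporting shell; a sketch of the same idea placed in the login profile (e.g. ~/.profile — the SID value is taken from the session above), so every interactive shell gets it without a wrapper:

```shell
#!/bin/sh
# Exporting in the profile has the same effect as the ./svr wrapper:
# every child process, including svrmgrl, inherits the SID.
ORACLE_SID=test
export ORACLE_SID
# Demonstrate that a child process sees the exported value:
sh -c 'echo "child sees ORACLE_SID=$ORACLE_SID"'
```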

Similar Messages

  • FaceTime between iPad2 and MacBook Air worked fine in the USA, but can't connect internationally now that the iPad2 is in Rome, Italy

    FaceTime between iPad2 and MacBook Air worked fine here in the USA. The iPad2 is now in Rome, Italy and we can't connect internationally. We can both hear the caller ringing in and can see who is trying to call, but after accepting the call we cannot connect. No voice, no video. The iPad is successfully sending and receiving emails, but no FaceTime connection. What's up?

    I am curious if possibly the hotel has some sort of VOIP block on.  Have you tried Skype?  Did you talk to the hotel staff?  My husband is traveling to Milan in a couple of days and I wonder if he will run into the same issue.  He has had no problems from Istanbul but he is staying in a corporate apartment there.  If you find an answer please post back.  Thanks

  • Where is the "+" sign when calling an international number? After connecting and opening the keypad, the "+" sign doesn't exist, only 0

    Where is the "+" sign when you call an international number? I first call a local number, and after connecting I open the keypad and see that the "+" sign doesn't exist, only 0.

    No, that does not work. Try calling some number, and once you are connected tap the keypad: you can see that the "+" sign doesn't exist.
    So I call a local number first; once connected I have the option to dial an international number, but how can I enter it like "+3193675xxxx"?
    Before, on iOS 6, I could do this.

  • Client dns and internet connection

    Hi,
    Running 10.5.5 Server with basic DNS for internal clients to access AFP/SMB and Wiki services via Open Directory. The server's IP address is 192.168.1.10.
    Clients currently receive internet access via DHCP from a router, and Network preferences autofills the DNS server and search domain with the router address 192.168.1.1 and RP614v4 respectively.
    If I add the server IP and search domain, I can't see the server via its DNS name unless I reverse the order of the fields (192.168.1.10 first, then ######.private). However, this slows the internet connection to a snail's pace.
    How can I get DNS to work for both the internal server and the internet connection?
    Thanks,
    Joel.

    If your DNS server is running correctly, there's no reason why it should resolve any slower than the router. The fact you're mentioning it implies that the delay is significant so I'll guess that your clients are requesting an address from your server, but that's timing out so they're falling back to the router before proceeding.
    You should check your DNS server to make sure it's set to be recursive (so it answers queries for non-local domains, too). That way the clients can get all lookups from your server and you should be good to go.
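    The recursion setting the reply describes lives in the server's BIND configuration; a minimal sketch of the relevant named.conf options block (the subnet and forwarder address are assumptions based on the addresses mentioned above):

    ```conf
    options {
        // Answer queries for non-local domains too, so LAN clients can use
        // this server for all lookups instead of falling back to the router.
        recursion yes;
        // Only allow recursion from the local network (assumed 192.168.1.0/24).
        allow-recursion { 127.0.0.1; 192.168.1.0/24; };
        // Optionally forward external lookups to the router or ISP resolvers.
        forwarders { 192.168.1.1; };
    };
    ```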

  • Phone line and BB connection keeps dropping - BT H...

    My broadband and phone connection keeps dropping. It had been fine for years, but since all the bad weather we have been having, the connection lasts a few minutes (sometimes more) and then all the blue lights start flashing orange, sometimes for a while, and then go back to blue. Could it be that there is a wider problem in the area where I live? I have tried to report this to BT via the Indian call centre, but the best they can offer me is an engineer visit on 20 January (!!), and I may have to pay £130 if the fault is found on my property, so I would like to know first if there is a wider problem affecting others in my area. Many thanks.

    Have you checked here https://www.bt.com/consumerFaultTracking/public/faults/tracking.do?pageId=31
    Are you connected to the test socket, to eliminate any problems caused by internal wiring?

  • JMQ cluster and unstable connections

    Hello all.
    I have a few architectural questions about building an OpenMQ message-passing infrastructure between multiple offices which do not always have on-line internet connections. We also need to distribute the MQ mesh configuration info.
    From the scale of my questions it seems that I or our developers don't fully understand MQ, because I think many of our problems and/or solution ideas (below) should already be addressed within the MQ middleware, and not by us from outside it.
    The potential client currently has a (relatively ugly) working solution which they wanted to revise for simplification, if possible, but this matter is not urgent and answers are welcome at any timeframe :)
    I'd welcome any insights, ideas and pointers as to why our described approach may be plain wrong :)
    To sum this post up, here's my short questionnaire:
    1) What is a good/best way to distribute MQ mesh config when not all nodes are available simultaneously?
    2) What are the limitations on number of brokers and queues in one logical mesh?
    3) Should we aim for separate "internal" and "external" MQ networks, or can they be combined into one large net?
    4) Should we aim for partial solution external to OpenMQ (such as integration with SMTP for messaging, or SVN for config distribution), or can this quest be solved within OpenMQ functionality?
    5) Can a clustered broker be forced to fully start without available master broker connection?
    6) Are broker clusters inherently local-network, or is there some standard solution (pattern) for geographically dispersed MQ clusters?
    7) How to enforce pushing of the messages from one broker to another? Are any priority assignments available for certain brokers and "their" queues?
    Detailed rumblings follow below...
    We are thinking about implementing JMQ in a geographically dispersed project, where it will be used for asynchronous communications to connect application servers in different branch offices with a central office. The problematic part is that the central and especially the branch offices are not expected to be always on-line, hence the MQ - whenever a connection is available, queued messages (requests, responses, etc.) are to be pushed to the other side's MQ broker. And if all goes well with the project, there may eventually be hundreds of such branch offices, more than one central office for failover, and a mesh of interconnecting MQ agreements.
    The basic idea is simple: an end-user of the app server in a branch generates a request, this request is passed via message queue to another branch or to a central office, then another app server processes it to generate a response and the answer is queued back to the requesting app server. At some time after the initial request, the end-user would see in his web-page that the request's status has been updated with a response value. A branch office's app server and MQ broker may be an appliance-server distributed as a relatively unmaintained "black box".
    During the POC we configured several JMQ broker instances in this manner and it worked. From what I gather from our developers, each branch office's request and response queues are separate destinations in the system, and requests (from a certain branch) may be subscribed to by any node, and responses (to a certain branch) may be submitted by any node. This may be restricted by passwords and/or certificate-based SSL tunnel channels, for example (suggestions welcome, though).
    However, we also wanted to simplify spreading the configuration of the MQ nodes' network by designating "master brokers" (as per JMQ docs) which keep track of the config and each other broker downloads the cluster config from its master. Perhaps it was wrong on our side, and a better idea is available to avoid manual reconfiguration of each MQ broker whenever another broker or a queue destination is added?
    Problem here is: it seems an "MQ cluster" is a local-network oriented concept. When we have a master broker in a central office, and the inter-connection is not up, branch offices loop indefinitely waiting for connection to a master, and reject client connections (published JMS port remains 0, and appropriate comments in the log files). In this case the branch office can not function until its JMQ broker connects to a central office, updates the MQ config, and permits client connections to itself.
    Also we are not certain (and it seems to be a popular question on Google, too) how to enforce a queued message to be pushed to another side - to a broker "nearest" to the target app server? Can this be done within OpenMQ config, or does this require an MQ client application to read and manipulate such messages somehow? For example, when a branch office's "request" queue has a message, and a connection to central office comes online, this request data should end up in the central office's broker. Apparently, a message which physically remains in the branch office broker when the interconnection goes offline, is of little use to the central appserver...
    I was thinking along the lines of different-priority brokers for certain destinations, so that messages would automatically flow from farther brokers to nearer ones - like water flows from higher ground to lower ground in an aqueduct. It would then be possible to easily implement transparent routing between branch offices (available at non-intersecting times) via the central office (always up).
    How many brokers and destinations can be interconnected at all (practically, or theoretically/hardcoded)?
    Possibly, there are other means to do some or all of this?
    Ideas we've discussed internally include:
    * Multiple networks of MQ brokers:
    Have an "internal" broker (cluster) in each branch office which talks to the app server, and a separate "external" broker which is clustered with the central office's "master broker". Some branch office application transfers messages between two brokers local to its branch. Thus the local appserver works okay, and remote queuing works whenever network is available.
    Possibly, the central office should also have separate internal and external broker setups?
    * Multi-tiered net of MQ brokers:
    Perhaps there can be "clusters of clusters" - with "external" tier-1 brokers being directly master brokers for local "internal" tier-2 clusters? Otherwise, this is the "multiple networks of MQ brokers" idea above, without an extra app to relay messages between the MQ brokers local to each branch.
    * Multi-protocol implementation of MQ+SMTP(+POP3/IMAP)
    Many of our questions are solvable by SMTP. That is, we can send messages to a mailbox residing on a specific server (local in each office), and local appserver clients retrieve them by POP3 from the local mailbox server, and then submit responses over SMTP. This is approximately how the client currently solves this task.
    We don't really want to reinvent the wheel, but maybe this approach can also be applied to JMQ (async traffic not over the MQ protocol, but over SMTP, like SOAP-SMTP vs. SOAP-HTTP web services)?
    * HTTP/RCS-based config file:
    The OpenMQ config allows for the detailed configuration file to be available in local filesystem or on a web server. It is possible to fetch the config file from a central office whenever the connection is up (wget, svn/cvs/etc.) and restart the branch broker.
    Why is this approach good or bad? Advocates welcome :)
    Thanks for reading up to the end,
    and thanks in advance for any replies,
    //Jim Klimov
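    For question 1, the master-broker mechanism discussed above is driven by a couple of broker properties; a minimal sketch of a branch broker's config.properties under an assumed central master (all host names below are placeholders):

    ```conf
    # Conventional (cluster-of-brokers) cluster; names are assumptions.
    imq.cluster.brokerlist=central.example.com:7676,branch1.example.com:7676
    # The master broker that holds the cluster configuration change record.
    imq.cluster.masterbroker=central.example.com:7676
    ```

    Note that this is exactly the mode where, as described above, a branch broker blocks client connections while the master is unreachable; it illustrates the mechanism, not a fix.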


  • Hard drive gets error -50 when connected internally, but not in USB enclosure.

    I have a SATA hard drive that works perfectly when connected through an external USB enclosure but whenever it's plugged into the internal SATA connector on my MacBook Pro I receive the error message "The operation can’t be completed because an unexpected error occurred (error code -50)." any time I attempt to move or copy a file to it.
    This doesn't happen with any other hard drive I connect internally, so I don't believe it's an issue with the internal cable.
    I have made sure all permissions are set to allow reading and writing to the drive as well as selecting the "Ignore ownership on this volume" option in the Get Info panel.

    If I were you, I would just replace the drive. Since other drives work, you're probably right in assuming that it's not a SATA cable issue, but rather an issue with that specific drive.
    Clinton
    MacBook Pro (15-inch Late 2011), OS Mavericks 10.9.4, 16GB Crucial RAM, Crucial M500 960GB SSD, 27” Apple Thunderbolt Display

  • Adaptec AVA-2906 is this the PCI card needed to connect internal HD's?

    Hi, I have an Adaptec AVA-2906; is this the PCI card needed to connect internal hard drives? It has a socket on the board that looks like the type of plug used to connect hard drives. On the backplate it has a 25-pin socket with screw holes, like on monitor plugs.
    If this is the right device can I fit a HD bigger than 120gb?
    Thanks
    Robert

    (A) do I have to fit a SATA card to put a third hard
    drive inside the G4 or is there a ribbon connector
    that has 3 plugs on it, like the one in that came
    with the machine with two plugs that plugs into the
    edge of the motherboard?
    Without an ATA or SATA card you can install up to two hard drives. These drives are connected to the ATA controller built into your computer's logic board. It can only support two drives, a master and a slave. You can't buy a ribbon cable with three plugs and attach three drives.
    (B) Looking at pics of the SATA plug connectors they
    appear to be a lot smaller than the ribbon cable ones
    in my Mac, do the ATA hard drives come with this
    small type of connector?
    SATA drives use a different data connector and a different power connector, although some SATA drives also come with a legacy molex power connector. The data connectors aren't interchangeable, ATA drives use a wide ribbon cable only, SATA drives use the much narrower cable only.
    (C) Is there any other way of putting another HD
    inside if I don't exceed 120gb?
    In theory you could add a third hard drive to the zip bay, beneath the optical drive. However, it's not recommended by Apple due to possible heat issues. The ATA bus used for the optical drive and zip drive is also slower and may affect performance.
    (D) Can 5,400 rev be mixed with 7200?
    Yes, the spindle speeds and cache sizes are purely internal to the drive. The computer and OS probably don't even 'know' or 'care' about them.

  • How to Set NLS parameters in SqlDeveloper for current and future connection

    Hi
    I've downloaded SQL Developer version 1.5.4 (build MAIN-5940).
    When I try to set NLS parameters as follows, it throws an error:
    Tools -> Preferences -> Database -> NLS Parameters
    In the right-hand pane I changed:
    SORT -----> BINARY_CI
    COMP -----> LINGUISTIC
    Then it generates the following log file:
    SEVERE     43     0     oracle.dbtools.raptor.nls.OracleNLSProvider     Error loading nls:ORA-00600: internal error code, arguments: [qctosop:like transform], [], [], [], [], [], [], []
    SEVERE     44     114     oracle.dbtools.raptor.nls.OracleNLSProvider     Error loading nls:Closed Connection
    SEVERE     45     7     oracle.dbtools.db.DBUtil     Closed Connection
    SEVERE     46     9     oracle.dbtools.db.DBUtil     
    (the same "Error loading nls:Closed Connection" and "Closed Connection" lines repeat many more times)
    please help me to set NLS parameters for current and future connections.
    thanks,
    harry

    Hi -K-
    Actually, I want SQL Developer to do case-insensitive searches.
    When I googled, I found that to make Oracle case-insensitive I need to set NLS_SORT to BINARY_CI and NLS_COMP to LINGUISTIC.
    The default SQL Developer NLS parameter values are NLS_SORT ----> BINARY and NLS_COMP ----> BINARY.
    Now I want to change these to NLS_SORT ----> BINARY_CI and NLS_COMP ----> LINGUISTIC.
    thanks,
    harry
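    An alternative to the preferences dialog, assuming the goal above (case-insensitive matching), is to set the two parameters per session; a sketch in plain SQL that can be run from a SQL Developer worksheet or a login script:

    ```sql
    -- Make comparisons linguistic and the linguistic sort case-insensitive.
    ALTER SESSION SET NLS_COMP = LINGUISTIC;
    ALTER SESSION SET NLS_SORT = BINARY_CI;
    ```

    This sidesteps the preferences screen that is hitting the ORA-00600 above, at the cost of having to set it in each session or connection startup script.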

  • ORA-00257: archiver error. Connect internal only

    Hello everyone,
    On 10gR2, if we face the above error, the system hangs for some time. During this time no one can connect; a SYSDBA session will hang and won't give you a SQL prompt quickly.
    I want to know if there is any way to remove this time gap.
    thanks

    Hello,
    The ORA-00257 is likely due to a lack of space, which prevents archived redo logs from being generated:
    ORA-00257: archiver error. Connect internal only, until freed.
    Cause: The archiver process received an error while trying to archive a redo log. If the problem
    is not resolved soon, the database will stop executing transactions. The most likely cause of this
    message is the destination device is out of space to store the redo log file.
    Action: Check archiver trace file for a detailed description of the problem. Also verify that the
    device specified in the initialization parameter ARCHIVE_LOG_DEST is set up properly for archiving.
    So check for a full file system and the use of the Flash Recovery Area (FRA).
    The following Note from MOS explains how to check the FRA and how to free space in it:
    How to Resolve ORA-00257: Archiver is Stuck Error in 10g? [ID 278308.1]
    "The database hangs for some time even after clearing the archive log destination. And it is this time that I want to reduce."
    I think the best way is to monitor the archive destination so that you can clear the space before the database hangs.
    If you use EM Database Control, you may set some Alert Notification. More over, you have the specific Archive Full Metric:
    http://download.oracle.com/docs/cd/B19306_01/em.102/b25986/oracle_database.htm#sthref540
    Hope this helps.
    Best regards,
    Jean-Valentin
    Edited by: Lubiez Jean-Valentin on Jul 24, 2011 10:46 PM
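    The monitoring suggested above can be as simple as a cron-run shell check on the archive destination's filesystem; a sketch (the path and threshold are placeholders, to be pointed at your log_archive_dest or FRA):

    ```shell
    #!/bin/sh
    # Warn before the archiver fills its destination and the database hangs.
    # ARCH_DEST and THRESHOLD are placeholders for your environment.
    ARCH_DEST=${ARCH_DEST:-/}
    THRESHOLD=${THRESHOLD:-90}
    # Portable df: column 5 is "Use%"; strip the % sign.
    USED=$(df -P "$ARCH_DEST" | awk 'NR==2 { gsub(/%/, ""); print $5 }')
    if [ "$USED" -ge "$THRESHOLD" ]; then
        echo "WARNING: $ARCH_DEST is ${USED}% full"
    else
        echo "OK: $ARCH_DEST is ${USED}% full"
    fi
    ```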

  • ORA-00257: archiver error. Connect internal only, until freed. (DBD ERROR:

    Dear All,
    A couple of days ago I changed my database to archivelog mode.
    The database server is 10gR2 on Linux.
    Today I got this error:
    ORA-00257: archiver error. Connect internal only, until freed.
    Kindly help me to start the DB again. How do I get rid of this error?
    Thanks, Imran

    This is not a temporary solution, but the way you should proceed.
    You should regularly back up the database and archive logs, and delete the latter, in order to prevent such problems.
    I would suggest monitoring how fast archive logs are generated and then implementing regular archive log backup and deletion. For some sites it is enough to back up and delete once per week; for others there will be a need to back up several times per day.
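    The backup-and-delete routine described above maps onto two RMAN commands; a sketch assuming a disk backup destination and a seven-day window (both placeholders to be tuned to the measured generation rate):

    ```
    RMAN> BACKUP ARCHIVELOG ALL DELETE INPUT;
    RMAN> DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-7';
    ```

    The first command backs up every archived log and removes each one once it is backed up; scheduling it from cron or Enterprise Manager at the required frequency implements the suggestion above.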

  • ORA-00257: archiver error. Connect internal only, until freed in RAC

    Hi All,
    I have installed a 2-node 10gR2 RAC on RHEL4 with an MSA 1000 as shared storage. I am using ASM to store the database files and archive log files in different diskgroups. The ASM diskgroup where the archive log files are stored is 100GB in size with RAID5 enabled. The database is archive enabled. I imported only one schema, 4GB in size, to the RAC db. Today I'm getting "ORA-00257: archiver error. Connect internal only, until freed". Now I want to stop archive log mode and remove the files.
    1. How can I change the archive log mode to NOARCHIVELOG mode in a RAC db?
    2. How can I delete archive log files from ASM after changing the archivelog mode? I don't want to take a backup of the database, as the data is not important.
    Please help... my RAC db is frozen...
    Thanks,
    Praveen

How can i rectify this permanently?
How do you rectify this "error" every time you get it? If you are clearing / deleting files in the db_recovery_file_dest location then you would know that you should either
a. Increase db_recovery_file_dest_size (ensuring that the filesystem does have that much space, else increase the filesystem size as well !)
b. retain fewer files in this location (reduce retention or redundancy)
If you aren't using a db_recovery_file_dest, or the archivelogs are going elsewhere and you are manually purging archivelogs, you should look at increasing the size of the available filesystem.
If you are retaining multiple days of archivelogs on disk and running daily full backups, reconsider why you have multiple days of archivelogs on disk.
If the problem occurs because of large batch jobs generating a large volume of redo, either buy enough disk space OR reconsider the jobs.
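For the original question of switching a RAC database to NOARCHIVELOG, a rough sketch follows. The database name `orcl` and the use of an SPFILE are assumptions; NOARCHIVELOG must be set while only one instance has the database mounted:

```sql
-- On one node, in SQL*Plus as SYSDBA:
-- temporarily make the database single-instance
ALTER SYSTEM SET cluster_database=FALSE SCOPE=SPFILE;

-- Stop all instances (srvctl stop database -d orcl), then from one node:
STARTUP MOUNT;
ALTER DATABASE NOARCHIVELOG;
ALTER SYSTEM SET cluster_database=TRUE SCOPE=SPFILE;
SHUTDOWN IMMEDIATE;
-- Restart normally: srvctl start database -d orcl

-- The archived logs in the ASM diskgroup can then be removed via RMAN,
-- which also cleans up the controlfile records:
-- RMAN> DELETE NOPROMPT ARCHIVELOG ALL;
```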

  • Svrmgrl connect internal fails after changing oracle os user passwd on 8.06

    Hello,
    I've this problem.
    I've installed oracle rdbms 8.0.6 on HP-UX 10.20.
I chose OS authentication with a password file.
In the password file I set the password of the OS user oracle which owns the installation.
At first, svrmgrl connect internal looked fine.
But when I changed the oracle OS user's password and retried connect internal,
I received the following message:
    SVRMGR> connect internal;
    Password:
    Password:
    ORA-01031: insufficient privileges
I've tried resetting the password to the old one, but it doesn't help.
    Could someone help me?

In the password file I set the password of the os user oracle which owns installation.
The password for the password file is supposed to be the SYS password, not that of the OS user oracle which owns the installation.
I do not think the oracle OS user password change is the problem. When you alter the SYS password using the ALTER USER command, it changes the password file automatically.
Try
SVRMGR> connect sys/sys_password
and see whether it works. If it does not, then check other issues.
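If the password file itself is suspect, it can simply be recreated. A sketch, with illustrative file name and password (the `orapwSID` naming and `dbs` location follow the usual Unix convention; adapt the SID to your own):

```sql
-- From the shell, as the oracle software owner:
--   orapwd file=$ORACLE_HOME/dbs/orapwSID password=new_sys_password

-- Or, once connected as a privileged user, change SYS's password so the
-- password file is updated automatically:
ALTER USER sys IDENTIFIED BY new_sys_password;
```

Note that REMOTE_LOGIN_PASSWORDFILE must be set to EXCLUSIVE (or SHARED) in the init.ora for password-file authentication to be used at all.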

  • Alert: WebServices connectivity (Internal) transaction failure - The credentials can't be used to test Web Services.

    Hi.
    Could you please help me to resolve this issue.
I have SCOM 2012 installed to monitor an environment with Exchange 2010 SP3. There are 2 sites with Exchange servers within the organization. Two test mailboxes were created to test both sites.
    I am getting following alert:
    Alert: WebServices connectivity (Internal) transaction failure - The credentials can't be used to test Web Services.
    description: The test mailbox was not initialized. Run new-TestCasConnectivityUser.ps1 to ensure that the test mailbox is created.
    Detailed information: 
    [Microsoft.Exchange.Monitoring.CasHealthUserNotFoundException]: The user wasn't found in Active Directory. UserPrincipalName: extest*****@****.local. Additional error information: [System.Security.SecurityException]:
    Logon failure: unknown user name or bad password.
    Diagnostic command: "Test-WebServicesConnectivity -MonitoringContext:$true -TrustAnySSLCertificate:$true -LightMode:$true"
    EventSourceName: MSExchange Monitoring WebServicesConnectivity Internal
I have tried the following steps:
1. Verified that the mailbox exists and is not locked (same for the second mailbox);
2. Deleted those mailboxes and created new ones using new-TestCasConnectivityUser.ps1; verified that the mailboxes are visible on all DCs across the forest
and that the temporary password was accepted;
3. Cleared the cache on the SCOM 2012 server;
4. Still getting the same alert.
I would really appreciate any help.
    Thanks.

    Hi,
    Hope these posts help you:
    http://thoughtsonopsmgr.blogspot.ca/2013/11/exchange-server-2010-mp-no-synthetic.html
    https://social.technet.microsoft.com/Forums/systemcenter/en-US/437f2bbb-cd96-40c3-8c56-6d4d176a9520/exchange-2010-mp-constantly-throws-webservices-connectivity-internal-transaction-failure?forum=operationsmanagermgmtpacks
    Natalya
    ### If my post helped you, please take a moment to Vote as Helpful and\or Mark as an Answer

  • From Azure unable to connect internal LAN network with windows RRAS site to site VPN

    Hi All,
    Below is my scenario.
    Our side.
    We have installed RRAS on Windows 2012 R2 on VMware and created a site to site VPN with azure.
    on RRAS server we have two interfaces
    eth0- 10.1.1.1
    eth1- 10.1.1.2
We have NATted (static NAT) the internal IP (eth0) 10.1.1.1 to the public IP 1.1.1.1 (e.g.).
    On Azure,
    We created a gateway, and two VMs.
    VM1 = 11.11.11.1
    VM2 = 11.11.11.2
    Both VMs can ping each other.
    VPN gateway on Azure and demand dial on RRAS server shows connected and, in and out data shows as well.
We can ping, tracert, and RDP to the RRAS server using both interface IPs [eth0 - 10.1.1.1, eth1 - 10.1.1.2].
But we are unable to ping, tracert, or RDP to our other internal LAN machines on 10.1.x.x.
So we can reach the Azure VMs from our RRAS server, and
we can reach the RRAS server from the Azure VMs.
But we cannot reach our other internal LAN machines from the Azure VMs, nor the Azure VMs from the other internal LAN machines.
    Please help?

    I will give you some pointers to check.
The reason for this could be one of two:
- the local site in the Azure virtual network is not configured correctly
- the route for the Azure subnet is not set up correctly on the RRAS server
Can you please validate the above?
Open the Routing and Remote Access UI and verify that there is a static route for the Azure subnet and that its interface is the demand-dial interface to the Azure gateway.
Also verify that you have a local site created with the on-premises subnet and added to the Azure virtual network.
What gateway is specified on the on-premises machines? It should be the IP of eth1, the IP that is not NATted.
Is NAT allowing all traffic in, or is it restricted to certain ports?
    This posting is provided "AS IS" with no warranties, and confers no rights
