Why are archived logs of one instance created on another instance?
Hi All,
It is a two-node RAC. I found that archived logs of instance#1 were created on instance#2. It is strange. Do you have any idea?
instance#1
[oracle@server1 dbs]$ ls -l /appl/erp/arch/orcl/
total 1501340
-rw-r----- 1 oracle erpdba 49469952 Jan 17 09:33 1_781_715695404.arc
-rw-r----- 1 oracle erpdba 50551808 Jan 17 09:34 1_782_715695404.arc
-rw-r----- 1 oracle erpdba 1024 Jan 17 09:39 1_785_715695404.arc
-rw-r----- 1 oracle erpdba 49051648 Jan 17 10:00 1_786_715695404.arc
-rw-r----- 1 oracle erpdba 83065344 Jan 17 10:00 2_615_715695404.arc
-rw-r----- 1 oracle erpdba 51148800 Jan 17 10:01 1_787_715695404.arc
-rw-r----- 1 oracle erpdba 1024 Jan 17 10:01 2_616_715695404.arc
-rw-r----- 1 oracle erpdba 48924672 Jan 17 10:01 1_788_715695404.arc
-rw-r----- 1 oracle erpdba 48935424 Jan 17 10:01 1_789_715695404.arc
-rw-r----- 1 oracle erpdba 23964160 Jan 17 10:06 1_790_715695404.arc
-rw-r----- 1 oracle erpdba 51796480 Jan 17 10:07 1_791_715695404.arc
-rw-r----- 1 oracle erpdba 52080640 Jan 17 10:08 1_792_715695404.arc
-rw-r----- 1 oracle erpdba 51973120 Jan 17 10:09 1_793_715695404.arc
instance#2
[oracle@server2 dbs]$ ls -l /appl/erp/arch/orcl/
total 1501340
-rw-rw---- 1 oracle erpdba 101348352 Jan 16 15:46 2_605_715695404.arc
-rw-rw---- 1 oracle erpdba 101351424 Jan 16 16:05 2_606_715695404.arc
-rw-rw---- 1 oracle erpdba 101347840 Jan 16 16:06 2_607_715695404.arc
-rw-r----- 1 oracle erpdba 101350400 Jan 16 16:55 2_596_715695404.arc
-rw-rw---- 1 oracle erpdba 3522560 Jan 16 18:13 2_609_715695404.arc
-rw-rw---- 1 oracle erpdba 47719424 Jan 17 09:33 2_610_715695404.arc
-rw-rw---- 1 oracle erpdba 7168 Jan 17 09:33 2_611_715695404.arc
-rw-rw---- 1 oracle erpdba 5175296 Jan 17 09:34 1_783_715695404.arc
-rw-rw---- 1 oracle erpdba 6170112 Jan 17 09:34 2_612_715695404.arc
-rw-rw---- 1 oracle erpdba 50551808 Jan 17 09:34 1_782_715695404.arc
-rw-rw---- 1 oracle erpdba 1024 Jan 17 09:34 1_784_715695404.arc
-rw-rw---- 1 oracle erpdba 102879232 Jan 17 09:36 2_613_715695404.arc
-rw-rw---- 1 oracle erpdba 103360512 Jan 17 09:38 2_614_715695404.arc
Regards,
nww
Hi Salman,
I don't know. I just modified your SQL a bit as below.
select inst_id, name, thread#, sequence#, first_time from gv$archived_log
where first_time like '16-JAN-13%'
and sequence# between 605 and 793
order by FIRST_TIME ;
I tried again with the following SQL. Do you know what's going on? Has FIRST_TIME changed?
select inst_id, name, thread#, sequence#, first_time from gv$archived_log
where first_time >= trunc(sysdate-1)
and sequence# between 605 and 793
order by FIRST_TIME ;
INST_ID NAME THREAD# SEQUENCE# FIRST_TIM
2 1 770 16-JAN-13
2 ORCLSB 1 770 16-JAN-13
1 ORCLSB 1 770 16-JAN-13
1 1 770 16-JAN-13
1 1 771 16-JAN-13
2 1 771 16-JAN-13
1 ORCLSB 1 771 16-JAN-13
2 ORCLSB 1 771 16-JAN-13
2 ORCLSB 1 772 16-JAN-13
1 1 772 16-JAN-13
2 1 772 16-JAN-13
1 ORCLSB 1 772 16-JAN-13
1 2 605 16-JAN-13
2 2 605 16-JAN-13
2 ORCLSB 2 605 16-JAN-13
1 ORCLSB 2 605 16-JAN-13
1 2 606 16-JAN-13
2 2 606 16-JAN-13
1 ORCLSB 2 606 16-JAN-13
2 ORCLSB 2 606 16-JAN-13
2 ORCLSB 2 607 16-JAN-13
2 2 607 16-JAN-13
1 2 607 16-JAN-13
1 ORCLSB 2 607 16-JAN-13
1 ORCLSB 2 608 16-JAN-13
2 ORCLSB 2 608 16-JAN-13
1 2 608 16-JAN-13
2 2 608 16-JAN-13
2 ORCLSB 1 773 16-JAN-13
2 1 773 16-JAN-13
1 ORCLSB 1 773 16-JAN-13
1 1 773 16-JAN-13
2 1 774 16-JAN-13
1 ORCLSB 1 774 16-JAN-13
2 ORCLSB 1 774 16-JAN-13
1 1 774 16-JAN-13
2 ORCLSB 2 609 16-JAN-13
1 ORCLSB 2 609 16-JAN-13
1 2 609 16-JAN-13
2 2 609 16-JAN-13
1 ORCLSB 1 775 16-JAN-13
1 1 775 16-JAN-13
2 1 775 16-JAN-13
2 ORCLSB 1 775 16-JAN-13
2 2 610 16-JAN-13
1 ORCLSB 2 610 16-JAN-13
1 2 610 16-JAN-13
2 ORCLSB 2 610 16-JAN-13
2 ORCLSB 1 776 16-JAN-13
2 1 776 16-JAN-13
1 ORCLSB 1 776 16-JAN-13
1 1 776 16-JAN-13
2 ORCLSB 1 777 17-JAN-13
1 1 777 17-JAN-13
1 ORCLSB 1 777 17-JAN-13
2 1 777 17-JAN-13
1 1 778 17-JAN-13
1 ORCLSB 1 778 17-JAN-13
2 1 778 17-JAN-13
2 ORCLSB 1 778 17-JAN-13
2 2 611 17-JAN-13
1 ORCLSB 2 611 17-JAN-13
2 ORCLSB 2 611 17-JAN-13
1 2 611 17-JAN-13
2 ORCLSB 1 779 17-JAN-13
1 1 779 17-JAN-13
1 ORCLSB 1 779 17-JAN-13
2 1 779 17-JAN-13
2 ORCLSB 1 780 17-JAN-13
2 1 780 17-JAN-13
1 ORCLSB 1 780 17-JAN-13
1 1 780 17-JAN-13
1 2 612 17-JAN-13
1 ORCLSB 2 612 17-JAN-13
2 2 612 17-JAN-13
2 ORCLSB 2 612 17-JAN-13
1 ORCLSB 1 781 17-JAN-13
1 1 781 17-JAN-13
2 1 781 17-JAN-13
2 ORCLSB 1 781 17-JAN-13
1 1 782 17-JAN-13
2 1 782 17-JAN-13
2 1 783 17-JAN-13
1 1 783 17-JAN-13
2 ORCLSB 1 783 17-JAN-13
1 ORCLSB 1 783 17-JAN-13
1 ORCLSB 1 784 17-JAN-13
1 1 784 17-JAN-13
2 1 784 17-JAN-13
2 ORCLSB 1 784 17-JAN-13
2 2 613 17-JAN-13
1 2 613 17-JAN-13
1 ORCLSB 2 613 17-JAN-13
2 ORCLSB 2 613 17-JAN-13
2 2 614 17-JAN-13
1 2 614 17-JAN-13
1 ORCLSB 2 614 17-JAN-13
2 ORCLSB 2 614 17-JAN-13
2 ORCLSB 2 615 17-JAN-13
1 2 615 17-JAN-13
1 ORCLSB 2 615 17-JAN-13
2 2 615 17-JAN-13
1 ORCLSB 1 785 17-JAN-13
1 1 785 17-JAN-13
2 1 785 17-JAN-13
2 ORCLSB 1 785 17-JAN-13
2 ORCLSB 1 786 17-JAN-13
2 1 786 17-JAN-13
1 ORCLSB 1 786 17-JAN-13
1 1 786 17-JAN-13
2 1 787 17-JAN-13
1 ORCLSB 1 787 17-JAN-13
2 ORCLSB 1 787 17-JAN-13
1 1 787 17-JAN-13
1 ORCLSB 2 616 17-JAN-13
2 ORCLSB 2 616 17-JAN-13
1 2 616 17-JAN-13
2 2 616 17-JAN-13
2 1 788 17-JAN-13
1 ORCLSB 1 788 17-JAN-13
1 1 788 17-JAN-13
2 ORCLSB 1 788 17-JAN-13
2 ORCLSB 1 789 17-JAN-13
1 1 789 17-JAN-13
1 ORCLSB 1 789 17-JAN-13
2 1 789 17-JAN-13
1 ORCLSB 1 790 17-JAN-13
1 1 790 17-JAN-13
2 1 790 17-JAN-13
2 ORCLSB 1 790 17-JAN-13
1 1 791 17-JAN-13
1 ORCLSB 1 791 17-JAN-13
2 ORCLSB 1 791 17-JAN-13
2 1 791 17-JAN-13
1 1 792 17-JAN-13
1 ORCLSB 1 792 17-JAN-13
2 ORCLSB 1 792 17-JAN-13
2 1 792 17-JAN-13
2 ORCLSB 1 793 17-JAN-13
2 1 793 17-JAN-13
1 ORCLSB 1 793 17-JAN-13
1 1 793 17-JAN-13
1 ORCLSB 2 617 17-JAN-13
1 2 617 17-JAN-13
2 2 617 17-JAN-13
2 ORCLSB 2 617 17-JAN-13
2 ORCLSB 2 618 17-JAN-13
1 /appl/erp/arch/ORCL/2_618_715695404.arc 2 618 17-JAN-13
1 ORCLSB 2 618 17-JAN-13
2 /appl/erp/arch/ORCL/2_618_715695404.arc 2 618 17-JAN-13
2 /appl/erp/arch/ORCL/2_619_715695404.arc 2 619 17-JAN-13
1 /appl/erp/arch/ORCL/2_619_715695404.arc 2 619 17-JAN-13
2 /appl/erp/arch/ORCL/2_620_715695404.arc 2 620 17-JAN-13
1 /appl/erp/arch/ORCL/2_620_715695404.arc 2 620 17-JAN-13
2 /appl/erp/arch/ORCL/2_623_715695404.arc 2 623 17-JAN-13
1 /appl/erp/arch/ORCL/2_622_715695404.arc 2 622 17-JAN-13
1 /appl/erp/arch/ORCL/2_623_715695404.arc 2 623 17-JAN-13
1 /appl/erp/arch/ORCL/2_621_715695404.arc 2 621 17-JAN-13
2 /appl/erp/arch/ORCL/2_621_715695404.arc 2 621 17-JAN-13
2 /appl/erp/arch/ORCL/2_622_715695404.arc 2 622 17-JAN-13
2 /appl/erp/arch/ORCL/2_624_715695404.arc 2 624 17-JAN-13
1 /appl/erp/arch/ORCL/2_624_715695404.arc 2 624 17-JAN-13
162 rows selected.
BTW, what are those records having a blank NAME?
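One way to look into the blank-NAME rows (a hedged sketch, not a definitive answer): the standard V$ARCHIVED_LOG view keeps one record per destination and also carries STATUS and DELETED columns, and NAME is typically blanked once the record refers to a file that has been deleted. Selecting those columns may explain the blanks; verify the column list against your release:

```sql
-- Sketch: expose STATUS ('A' available, 'D' deleted, 'X' expired, 'U'
-- unavailable) and DELETED alongside NAME for the same time window.
select inst_id, thread#, sequence#, name, status, deleted, dest_id
  from gv$archived_log
 where first_time >= trunc(sysdate - 1)
 order by thread#, sequence#;
```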
Regards,
nww
Similar Messages
-
Why does the archive log area fill up so fast?
Hi All,
I have installed 10g RAC on RHEL4 using an MSA1000 as shared storage, with ASM (external redundancy) on RAID5 for database files and archive log files. I allocated 100GB for archive log files. The database was also created successfully with archive logging enabled. After that I imported 3 schemas into the RAC database, each about 2GB in size. The import was successful, but when I tried to connect to a schema using SQL*Plus I got an archiver error: the archive log space was full. I checked in asmcmd as well; the 100GB is fully occupied. My doubt is: how did the 100GB of space fill up so fast?
Can anybody help me?
Thanks,
Praveen.
Hi Don,
No error, just code from an older release, right?
I found no reference to the TIME column of V$LOG_HISTORY in the 8i or 9i documentation. Maybe it is from Oracle 7?
Connected to:
Oracle8i Enterprise Edition Release 8.1.7.4.1 - Production
With the Partitioning option
JServer Release 8.1.7.4.1 - Production
SQL> select substr(time,1,5) day,
2 to_char(sum(decode(substr(time,10,2),'00',1,0)),'99') "00",
3 to_char(sum(decode(substr(time,10,2),'01',1,0)),'99') "01",
4 to_char(sum(decode(substr(time,10,2),'02',1,0)),'99') "02",
5 to_char(sum(decode(substr(time,10,2),'03',1,0)),'99') "03",
6 to_char(sum(decode(substr(time,10,2),'04',1,0)),'99') "04",
7 to_char(sum(decode(substr(time,10,2),'05',1,0)),'99') "05",
8 to_char(sum(decode(substr(time,10,2),'06',1,0)),'99') "06",
9 to_char(sum(decode(substr(time,10,2),'07',1,0)),'99') "07",
10 to_char(sum(decode(substr(time,10,2),'08',1,0)),'99') "08",
11 to_char(sum(decode(substr(time,10,2),'09',1,0)),'99') "09",
12 to_char(sum(decode(substr(time,10,2),'10',1,0)),'99') "10",
13 to_char(sum(decode(substr(time,10,2),'11',1,0)),'99') "11",
14 to_char(sum(decode(substr(time,10,2),'12',1,0)),'99') "12",
15 to_char(sum(decode(substr(time,10,2),'13',1,0)),'99') "13",
16 to_char(sum(decode(substr(time,10,2),'14',1,0)),'99') "14",
17 to_char(sum(decode(substr(time,10,2),'15',1,0)),'99') "15",
18 to_char(sum(decode(substr(time,10,2),'16',1,0)),'99') "16",
19 to_char(sum(decode(substr(time,10,2),'17',1,0)),'99') "17",
20 to_char(sum(decode(substr(time,10,2),'18',1,0)),'99') "18",
21 to_char(sum(decode(substr(time,10,2),'19',1,0)),'99') "19",
22 to_char(sum(decode(substr(time,10,2),'20',1,0)),'99') "20",
23 to_char(sum(decode(substr(time,10,2),'21',1,0)),'99') "21",
24 to_char(sum(decode(substr(time,10,2),'22',1,0)),'99') "22",
25 to_char(sum(decode(substr(time,10,2),'23',1,0)),'99') "23"
26 from v$log_history
27 group by substr(time,1,5);
group by substr(time,1,5)
ERROR at line 27:
ORA-00904: invalid column name
SQL> desc v$log_history
Name Null? Type
RECID NUMBER
STAMP NUMBER
THREAD# NUMBER
SEQUENCE# NUMBER
FIRST_CHANGE# NUMBER
FIRST_TIME DATE
NEXT_CHANGE# NUMBER
SQL>
Connected to:
Oracle9i Enterprise Edition Release 9.2.0.8.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production
SQL> select substr(time,1,5) day,
2 to_char(sum(decode(substr(time,10,2),'00',1,0)),'99') "00",
3 to_char(sum(decode(substr(time,10,2),'01',1,0)),'99') "01",
4 to_char(sum(decode(substr(time,10,2),'02',1,0)),'99') "02",
5 to_char(sum(decode(substr(time,10,2),'03',1,0)),'99') "03",
6 to_char(sum(decode(substr(time,10,2),'04',1,0)),'99') "04",
7 to_char(sum(decode(substr(time,10,2),'05',1,0)),'99') "05",
8 to_char(sum(decode(substr(time,10,2),'06',1,0)),'99') "06",
9 to_char(sum(decode(substr(time,10,2),'07',1,0)),'99') "07",
10 to_char(sum(decode(substr(time,10,2),'08',1,0)),'99') "08",
11 to_char(sum(decode(substr(time,10,2),'09',1,0)),'99') "09",
12 to_char(sum(decode(substr(time,10,2),'10',1,0)),'99') "10",
13 to_char(sum(decode(substr(time,10,2),'11',1,0)),'99') "11",
14 to_char(sum(decode(substr(time,10,2),'12',1,0)),'99') "12",
15 to_char(sum(decode(substr(time,10,2),'13',1,0)),'99') "13",
16 to_char(sum(decode(substr(time,10,2),'14',1,0)),'99') "14",
17 to_char(sum(decode(substr(time,10,2),'15',1,0)),'99') "15",
18 to_char(sum(decode(substr(time,10,2),'16',1,0)),'99') "16",
19 to_char(sum(decode(substr(time,10,2),'17',1,0)),'99') "17",
20 to_char(sum(decode(substr(time,10,2),'18',1,0)),'99') "18",
21 to_char(sum(decode(substr(time,10,2),'19',1,0)),'99') "19",
22 to_char(sum(decode(substr(time,10,2),'20',1,0)),'99') "20",
23 to_char(sum(decode(substr(time,10,2),'21',1,0)),'99') "21",
24 to_char(sum(decode(substr(time,10,2),'22',1,0)),'99') "22",
25 to_char(sum(decode(substr(time,10,2),'23',1,0)),'99') "23"
26 from v$log_history
27 group by substr(time,1,5);
group by substr(time,1,5)
ERROR at line 27:
ORA-00904: "TIME": invalid identifier
SQL> desc v$log_history
Name Null? Type
RECID NUMBER
STAMP NUMBER
THREAD# NUMBER
SEQUENCE# NUMBER
FIRST_CHANGE# NUMBER
FIRST_TIME DATE
NEXT_CHANGE# NUMBER
SQL>
Well, Gints has already checked the doc.
To be fair, I would agree that it could be a nightmare to maintain a bunch of scripts across different releases, but in that case the error at least gives evidence of the age of the script, if it was ever tested at all.
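For what it's worth, the old report can be rewritten against FIRST_TIME, which the DESCRIBE output above shows as a DATE in both 8i and 9i, so the same hourly breakdown works across releases. A sketch, abbreviated to a few hour columns (the remaining hours follow the same pattern):

```sql
-- Sketch: hourly log-switch counts using FIRST_TIME instead of the
-- long-gone TIME column. Repeat the DECODE line for hours '02'..'22'.
select to_char(first_time, 'DD-MON') day,
       to_char(sum(decode(to_char(first_time, 'HH24'), '00', 1, 0)), '99') "00",
       to_char(sum(decode(to_char(first_time, 'HH24'), '01', 1, 0)), '99') "01",
       to_char(sum(decode(to_char(first_time, 'HH24'), '23', 1, 0)), '99') "23"
  from v$log_history
 group by to_char(first_time, 'DD-MON');
```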
Nicolas.
Message was edited by:
N. Gasparotto -
Archive log files are not being created
I am doing some testing of the backup and recovery of our databases.
I have a database which is in archive log mode.
I have added some records to a table and I am expecting to see some archive files
being written to but nothing is being produced.
We are running Oracle Database 11g Release 11.2.0.2.0 - 64bit Production
The current archive information is as follows :
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 1
Next log sequence to archive 3
Current log sequence 3
SQL> show parameter db_recovery_file_dest;
NAME TYPE VALUE
db_recovery_file_dest string /export/flash_recovery_area
When I look in the directory as indicated above I see no files :
cd /export/flash_recovery_area/MRESTORE/archivelog/2013_04_16
(no files)
I'm wondering if I am missing something obvious here.
Thank you in advance.
user6502667 wrote:
I am doing some testing of the backup and recovery of our databases.
I have a database which is in archive log mode.
I have added some records to a table and I am expecting to see some archive files
being written to but nothing is being produced.
We are running Oracle Database 11g Release 11.2.0.2.0 - 64bit Production
The current archive information is as follows :
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 1
Next log sequence to archive 3
Current log sequence 3
SQL> show parameter db_recovery_file_dest;
NAME TYPE VALUE
db_recovery_file_dest string /export/flash_recovery_area
When I look in the directory as indicated above I see no files :
cd /export/flash_recovery_area/MRESTORE/archivelog/2013_04_16
(no files)
I'm wondering if I am missing something obvious here.
Thank you in advance.
There can be several reasons, but I'd say the most likely is that you simply didn't yet generate enough redo information to fill a redo log and thus trigger the writing of an archive log. -
Creating another OBIEE instance
Hi all,
I would like to create an additional instance of the OBIEE that I have, but I am not sure how to do it.
Could anybody help me by providing a link for that?
Thanks in adv.
The environments are both Windows and Linux.
Hi,
Refer section 2.2.3.2 - Installing Multiple, Standalone Oracle Business Intelligence Instances on a Single Computer - of the Oracle Fusion Middleware Installation Guide for Oracle Business Intelligence 11g Release 1 (11.1.1) E10539-02
The link is here: http://docs.oracle.com/cd/E21764_01/bi.1111/e10539/c2_scenarios.htm#CHDDIDGE
http://www.rittmanmead.com/2010/08/oracle-bi-ee-11g-vertical-clustering-fault-tolerance-multiple-bi-servers-in-a-box/
Thanks
Deva -
Why create a new instance of Main?
I see many code samples posted that create a new instance of Main, but none of them seem to use it. And yet the NetBeans IDE won't seem to run without it. What is it for? How can it be utilized? How can it be nullified if it's not needed?
Yes, that's a fair interpretation of my question. Why
would Netbeans, or other IDE's, create a new instance
of Main? Common logic says that there can only be one
Main. More than one creates confusion.
There can only be one main(String[] args) per class, but there's nothing which says that there can't be a main method in each class. I usually use main methods to show example code, or to have some code which performs some kind of tests.
Kaj -
Archive log mode in a 3-node RAC database
Sir, I am using Oracle 10.2.0.1 and have created a 3-node cluster setup. The OS I am using is Linux 4.
I am very confused about setting up this cluster in archive log mode, because why do we need to set the cluster_database parameter to false to put the cluster into archive log mode?
I have searched a lot of documents, but they all say we do not need to set this parameter to false from 10.2 onward. So what is the actual concept behind setting this parameter to false in earlier releases, and why do we no longer need to set it to false? Please help me.
I know how to set up archive log mode, but this parameter creates confusion.
Thanks sir in advance.
I also don't know all the details about this, but I'll try to explain what I know. Setting cluster_database to false in a RAC environment is usually done when you need to mount the database from one instance exclusively. This is still needed when you upgrade the catalog, for example during a release update (from 10.2.0.1 to 10.2.0.4, say). As for switching archivelog mode, someone at Oracle must have discovered that it is sufficient for the database to be mounted by only one instance, so you can actually save one step.
As I (and probably most of you) don't switch archiving on or off on a daily basis, I don't worry about this a lot; I simply turn it on when I create a new database and then leave it on forever.
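For reference, the pre-10.2 procedure being discussed looked roughly like this (a sketch only; the exact steps depend on release and on whether an SPFILE is in use):

```sql
-- Sketch: enabling archivelog mode in a pre-10.2 RAC.
alter system set cluster_database=false scope=spfile sid='*';
-- shut down all instances, then from a single instance:
startup mount;
alter database archivelog;
alter system set cluster_database=true scope=spfile sid='*';
shutdown immediate;
-- finally restart all instances normally
```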
Bjoern -
Problem with listener when I create a new instance
Hi
I cannot create two instances on my Oracle database; the listener is not running or the service is not registered. This is my state:
I have installed Oracle Database 11gR2 on OEL (Oracle Enterprise Linux) and I need two instances. I created my first instance and had some problems with my environment variables,
but now that is fixed. When I execute 'lsnrctl status' it answers:
LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 13-DEC-2010 16:48:03
Copyright (c) 1991, 2009, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=SPHYNX)(PORT=1521)))
STATUS of the LISTENER
Alias LISTENER
Version TNSLSNR for Linux: Version 11.2.0.1.0 - Production
Start Date 13-DEC-2010 15:09:39
Uptime 0 days 1 hr. 38 min. 23 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /home/oracle/app/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora
Listener Log File /home/oracle/app/oracle/diag/tnslsnr/SPHYNX/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=SPHYNX)(PORT=1521)))
The listener supports no services
The command completed successfully
where SPHYNX is the computer's name. When I read forums.oracle I see that the line that tells me
'The listener supports no services'
is wrong, but I don't understand the problem.
My Oracle environment variables are (env | grep ORA):
ORACLE_BASE=/home/oracle/app/oracle
ORACLE_BIN=/home/oracle/app/oracle/product/11.2.0/dbhome_1/bin
ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/dbhome_1
and my listener is the one created during the Oracle installation; it is:
# listener.ora Network Configuration File: /home/oracle/app/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora
# Generated by Oracle configuration tools.
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = SPHYNX)(PORT = 1521))
)
)
ADR_BASE_LISTENER = /home/oracle/app/oracle
my tnsnames.ora is:
# tnsnames.ora Network Configuration File: /home/oracle/app/oracle/product/11.2.0/dbhome_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.
INS1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = INS1)
)
)
It was created automatically when I deleted my initial instance orcl, created in the installation, and created the instance INS1.
My user is a special user called 'oracle' and it is not a sudoer.
When I try to create the second instance, the wizard tells me that the Enterprise Manager configuration failed because the listener is not active or the database's service is not registered with it. I am not writing the exact output because it is in Spanish; this is a translation.
regards,
Pablo
I am sorry if there is any inconsistency, but I am not the final user. I don't know if the final user had any connection problems, but I could access Enterprise Manager on the instance. I had to restart the computer many times after the first configuration, and the user never told me about any problems. I only access the instance with Enterprise Manager, and it is running.
I'm sorry, but I normally run Oracle on Windows and this is my first contact with Oracle on Linux. On Windows I never had these problems because Oracle did everything automatically.
The output that I receive with your commands is:
[oracle@SPHYNX ~]$ uname -a
Linux SPHYNX 2.6.18-194.el5xen #1 SMP Mon Mar 29 22:22:00 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
[oracle@SPHYNX ~]$ uptime
19:12:22 up 6:16, 1 user, load average: 0.27, 0.18, 0.17
[oracle@SPHYNX ~]$ id
uid=501(oracle) gid=501(oracle) grupos=501(oracle)
[oracle@SPHYNX ~]$ ps -ef | grep -i pmon
oracle 25336 25300 0 19:12 pts/1 00:00:00 grep -i pmon
[oracle@SPHYNX ~]$ env | sort
_=/bin/env
CVS_RSH=ssh
G_BROKEN_FILENAMES=1
HISTSIZE=1000
HOME=/home/oracle
HOSTNAME= SPHYNX
INPUTRC=/etc/inputrc
JAVA_HOME=/usr/java/jdk1.6.0_22/
LANG=es_ES.UTF-8
LESSOPEN=|/usr/bin/lesspipe.sh %s
LOGNAME=oracle
LS_COLORS=no=00:fi=00:di=01;34:ln=01;36:pi=40;33:so=01;35:bd=40;33;01:cd=40;33;01:or=01;05;37;41:mi=01;05;37;41:ex=01;32:*.cmd=01;32:*.exe=01;32:*.com=01;32:*.btm=01;32:*.bat=01;32:*.sh=01;32:*.csh=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.gz=01;31:*.bz2=01;31:*.bz=01;31:*.tz=01;31:*.rpm=01;31:*.cpio=01;31:*.jpg=01;35:*.gif=01;35:*.bmp=01;35:*.xbm=01;35:*.xpm=01;35:*.png=01;35:*.tif=01;35:
MAIL=/var/spool/mail/oracle
ORACLE_BASE=/home/oracle/app/oracle
ORACLE_BIN=/home/oracle/app/oracle/product/11.2.0/dbhome_1/bin
ORACLE_HOME=/home/oracle/app/oracle/product/11.2.0/dbhome_1
PATH=/home/oracle/app/oracle/product/11.2.0/dbhome_1/bin::/usr/java/jdk1.6.0_22/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin
PWD=/home/oracle
SHELL=/bin/bash
SHLVL=1
SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass
SSH_CLIENT=163.117.129.155 54971 22
SSH_CONNECTION=163.117.129.155 54971 163.117.129.170 22
SSH_TTY=/dev/pts/1
TERM=vt100
USER=oracle
[oracle@SPHYNX ~]$ cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 SPHYNX localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
[oracle@SPHYNX ~]$ date
lun dic 13 19:12:31 CET 2010 -
Where to keep the archived log in a cluster environment.
Hi
Where is it good to keep the archive log files: on shared storage or on local instance storage?
thanks.
Where it is good to keep the archive log file. on shared storage or on local instance.
It is always good to keep the archived log files in a shared storage location. That location is accessible to all instances, so in case of a failure you can recover the database from any of the instances.
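For example, pointing every instance at the same shared destination might look like this (the path is illustrative only; substitute your own shared mount):

```sql
-- Sketch: one shared archive destination for all RAC instances.
alter system set log_archive_dest_1='LOCATION=/shared/arch/orcl'
  scope=both sid='*';
```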
Rgds. -
I am using Oracle database 9.2.0.1.0, My OS is Linux AS4 Update version.
My database is in archive log mode, and the archive file size generated on disk is 100 MB. I want to find out why the amount of redo generated is so big.
Kindly suggest.
Regards
An archived log file will always be the same size as the redo log or less (never bigger than the redo log size).
ARCHIVE_LAG_TARGET is the reason (apart from manual archiving with ALTER SYSTEM ARCHIVE LOG CURRENT/ALL) why you see archived logs that are smaller than the redo log.
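For example, a 30-minute lag target forces a switch before the redo log fills, which is one common reason for smaller archived logs (a sketch):

```sql
-- Sketch: switch logs at least every 30 minutes (value is in seconds),
-- producing archived logs that are usually smaller than the redo logs.
alter system set archive_lag_target=1800 scope=both;
```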
Why does the archive log file size constantly change? -
My question is about JVM instances. How do you create additional instances and how do you make sure that you are using the same instance? For example, if I execute an app like so
java StartDeamon
how can I make sure that when I execute a second app that it is running in the same JVM so that it interacts with the first app?
java StopDeamon
And vice versa, how could I make sure that an app such as this one runs in a separate JVM so that it doesn't affect the processes in the previous JVM?
java KillAllDaemonsInThisJVMInstance
Does that make sense?
Would you mind elaborating on that? What options are
there for a custom solution? How do others do it if
not in Java? If I execute a second program, does it
automatically get put in the same JVM instance of the
first program? I'm assuming it does.
No. On Windows, for example, when you run a Java program you are really starting a new instance of java.exe. There is nothing built in for it to know to use an existing instance.
Do additional
JVM instances get created automatically? If so, what
is the criteria?
Always using a new instance is the MO. If you want some other behavior you need to implement it yourself.
It's not easy. Even if you find an existing instance of the JVM, you need some way of sending it a message telling it to run your main method. Also, how will you be sure that the JVM is one created by your application, and not some other application that won't appreciate your application crashing its private party?
Do a search on these forums and you will find a bunch of solutions (many of which are probably unusable.)
Off the top of my head, you could use memory mapped files to mark whether your application is running. If a second instance is started, send a message to a port that will tell the first instance to create a second instance of your app (assuming there is only one application.)
Is the application a GUI or non-visual? -
Question :
When creating a tablespace, why should we enable LOGGING when the database is already in ARCHIVELOG mode?
Example:
Create Tablespace
CREATE SMALLFILE TABLESPACE "TEST_DATA"
LOGGING
DATAFILE '+DG_TEST_DATA_01(DATAFILE)' SIZE 10G
AUTOEXTEND ON NEXT 500K MAXSIZE 31000M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;
LOGGING: Generate redo logs for creation of tables, indexes and partitions, and for subsequent inserts. Recoverable
Are they not logged and not recoverable if we do not enable LOGGING? What does ARCHIVELOG mode do?
What does ARCHIVELOG mode do?
Whenever your database is in archive log mode, Oracle backs up the redo log files in the form of archives so that we can recover the database to a consistent state in case of any failure.
Archive logging is essential for production databases where the loss of a transaction might be fatal.
Why Logging?
Logging is the safest method to ensure that all the changes made in the tablespace are captured and available for recovery in the redo logs.
It is just a question of the level at which we define it:
Force Logging at DB level
Logging at Tablespace Level
Logging at schema Level
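As a sketch of the top of that hierarchy, database-level force logging overrides anything set lower down:

```sql
-- Sketch: FORCE LOGGING at the database level; NOLOGGING at tablespace
-- or object level is then ignored.
alter database force logging;
select force_logging from v$database;
```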
Before the existence of FORCE LOGGING, Oracle provided LOGGING and NOLOGGING options. These two options have higher precedence at the schema object level than at the tablespace level; therefore, it was possible to override the logging settings at the tablespace level with a NOLOGGING setting at the schema object level. -
I've been handed an Oracle database to admin. I'm not an Oracle DBA yet (learning), and I hate to ask such a simple question, but it's a critical application and my resources are Google and a Metalink subscription (which I'll ask too). I think the insight given here when browsing the forums is very nice.
We are running Windows 2003 64-bit, Oracle 10g 10.2.0.3.0.
Enterprise manager states:
Archive Log 81% of archive area F:\InstanceNumber1
There are 2 instances running on F: Volume
F:\InstanceNumber1 (not currently backed up), using 2.77GB
(been up for several months)
Incidentally, it is also the one that gave me the notification alert in EM.
F:\InstanceNumber2 (backed up using RMAN; a 2-week-old instance), using 194GB,
It seems extremely large.
My question is: I ran "SHOW PARAMETER DB_RECOVERY_FILE_DEST_SIZE" on InstanceNumber1 and the result was 2G; I assume this is why I get that alert.
I ran the same on InstanceNumber2: 2G. But I have not viewed the EM console much on that one; it's still in implementation. Right now I am really concerned with the health of InstanceNumber1 (the production stuff).
Question: what occurs if I go over? (I pretty much assume it's gone over, with 194GB on Instance2.)
Note: we have archive logs on the F: drive and another drive (for redundancy, by default installation of the instances that I'm working with).
What's some advice/observations? I'm open to anything. I think I need to delete the archives AFTER they have been backed up (need to locate the RMAN backups for Instance2), then create a backup strategy for InstanceNumber1, then delete its archives too.
select flashback_on from v$database; (shows NO) not turned for the INSTANCE1 and INSTANCE2
Edited by: Norm on Oct 19, 2009 2:47 PM
Norm wrote:
I've been handed an Oracle Database to Admin, I'm not an Oracle DBA yet-learning, and I hate to ask such a simple question but it's a a critical application and my resources are GOOGLE & META-LINK subscription (which I'll ask too) Though I think the insiight given here when browsing the forums is very nice.
We are runing Windows 2003 64 bit oracle 10g 10.2.0.3.0
Enterprise manager states:
Archive Log 81% of archive area F:\InstanceNumber1
There are 2 instances running on F: Volume
F: \InstanceNumber1 (*Not backedup currently* ) using 2.77GB
(been up for several months)
This has been up for several months and never backed up?? That needs to be corrected. If you are not familiar with the process, do it through OEM. Don't worry about getting everything 'right' or 'optimal' or best .. just get a backup .. NOW!
** incidentally it is also the one that gave me notification alert in EM.
F: \InstanceNumber2 (*backedup using RMAN* 2 weekold instance) using 194GB,
It seems extremely large.
My question is I ran on InstanceNumber1: "SHOW PARAMETER DB_RECOVERY_FILE_DEST_SIZE" the result was 2G, this is assume is why I get that Alert.
I ran same on InstanceNumber2: 2G, but I have not viewed the EM console much on that one, it's still in implementation-- really concerned right now more with health of "INSTANCE1" (The production stuff)
Question What occurs if I go over??? (I pretty much assum it's gone over with 194GB on Instance2)
Are you in archivelog mode? If so, you won't go over; you'll just find the db 'hangs' when it can no longer write archivelogs. It will pick itself up again when it has space to continue writing archivelogs.
>
note: We have archive logs on F: drive and another drive (for redundancy by default installation of the instances that Im working with)
What's some advice/observations, Im open to anything. I think I need to delete the archives AFTER they have been backed up (Need to locate RMAN backups for instance2) .... then (Need to create a backup strategy for INSTANCE1) then delete archives too.
select flashback_on from v$database; (shows NO) not turned for the INSTANCE1 and INSTANCE2
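A starting point for the "what happens when it fills" question above is to look at recovery-area usage and, if needed, enlarge the area (a sketch; obsolete backups and archived logs should be cleaned out through RMAN, never at the OS level):

```sql
-- Sketch: see what is consuming the flash recovery area, then grow it.
select file_type, percent_space_used, percent_space_reclaimable
  from v$flash_recovery_area_usage;
alter system set db_recovery_file_dest_size=10g scope=both;
```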
Edited by: Norm on Oct 19, 2009 2:47 PM -
Archive logs created in wrong folder
Hello,
I'm facing an issue with the archive logs.
In my DB the parameters for archive logs are:
log_archive_dest_1 string LOCATION=/u03/archive/SIEB MANDATORY REOPEN=30
db_create_file_dest string /u01/oradata/SIEB/dbf
db_create_online_log_dest_1 string /u01/oradata/SIEB/rdo
But the archive logs are created in
/u01/app/oracle/product/9.2.0.6/dbs
Listed below:
bash-2.05$ ls -lrt *.arc
-rw-r----- 1 oracle dba 9424384 Jan 9 09:30 SIEB_302843.arc
-rw-r----- 1 oracle dba 7678464 Jan 9 10:00 SIEB_302844.arc
-rw-r----- 1 oracle dba 1536 Jan 9 10:00 SIEB_302845.arc
-rw-r----- 1 oracle dba 20480 Jan 9 10:00 SIEB_302846.arc
-rw-r----- 1 oracle dba 10010624 Jan 9 10:30 SIEB_302847.arc
-rw-r----- 1 oracle dba 104858112 Jan 9 10:58 SIEB_302848.arc
bash-2.05$
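One thing worth checking here (a sketch): whether the configured destination is actually valid from the archiver's point of view, since an erroring LOG_ARCHIVE_DEST_1 can leave the archiver falling back to the default location under $ORACLE_HOME/dbs.

```sql
-- Sketch: check the state of the first archive destination.
select dest_id, destination, status, error
  from v$archive_dest
 where dest_id = 1;
```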
Does anyone have an idea why this happens?
Is this a bug?
Thxs
But in another DB I have
log_archive_dest string
log_archive_dest_1 string LOCATION=/u03/archive/SIEB MANDATORY REOPEN=30
and my archivelogs are in
oracle@srvsdbs7p01:/u03/archive/SIEB/ [SIEB] ls -lrt /u03/archive/SIEB
total 297696
-rw-r----- 1 oracle dba 10010624 Jan 9 10:30 SIEB_302847.arc
-rw-r----- 1 oracle dba 21573632 Jan 9 11:00 SIEB_302848.arc
-rw-r----- 1 oracle dba 101450240 Jan 9 11:30 SIEB_302849.arc
-rw-r----- 1 oracle dba 6308864 Jan 9 12:00 SIEB_302850.arc
-rw-r----- 1 oracle dba 12936704 Jan 9 12:30 SIEB_302851.arc
oracle@srvsdbs7p01:/u03/archive/SIEB/ [SIEB] -
Why is the flashback log size smaller than the archived log?
Hi all, why is the flashback log size smaller than the archived log?
Lonion wrote:
hi, all . why the flashback log'size smaller than the archived log ?Both are different.
Flashback log size depends on the parameter DB_FLASHBACK_RETENTION_TARGET, i.e. how much history you want to keep.
An archive log file is a dumped copy of an online redo log file. It can be either the size of the online redo log or smaller, depending on how full the online redo log was when the switch occurred.
Some more information:-
Flashback log files can be created only under the Flash Recovery Area (which must be configured before enabling the Flashback Database functionality). RVWR creates flashback log files in a directory named “FLASHBACK” under the FRA. The size of every generated flashback log file is again under Oracle's control. In the current Oracle environment, during normal database activity flashback log files have a size of 8200192 bytes, which is very close to the current redo log buffer size. The size of a generated flashback log file can differ during shutdown and startup activities, and flashback log file sizes can also differ during write-intensive activity.
Source:- http://dba-blog.blogspot.in/2006/05/flashback-database-feature.html
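To see what the retention target translates to for the current workload, these can be checked (a sketch, run as SYSDBA):

```sql
-- Retention target in minutes
SHOW PARAMETER db_flashback_retention_target

-- Actual flashback log space used, and Oracle's estimate for the target
SELECT retention_target, flashback_size, estimated_flashback_size
  FROM v$flashback_database_log;
```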
Edited by: CKPT on Jun 14, 2012 7:34 PM
Archive log gap is created in standby whenever audit_trail is set to DB
Hi
I am a new DBA. I am facing a problem on the production server: whenever the audit_trail parameter is set to db, an archive log gap is created at the standby site.
My database version is 10.2.0.4
Os is windows 2003 R2
The audit_trail parameter is set to db only on the primary site. After setting the parameter to db, when I bounced the database and switched the logfile, an archive log gap was created on the standby. I am using LGWR mode of log transport.
Is there any relation between audit_trail and log transport?
Please note that the archive log locations at both sites have sufficient disk space and the drives are working fine. Also, my primary and standby are connected over a WAN.
Please help me with this. Any help will be highly appreciated.
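Before digging into traces, the gap itself can be confirmed from both sides (a sketch, run as SYSDBA):

```sql
-- On the standby: the gap the recovery process has detected, if any
SELECT thread#, low_sequence#, high_sequence#
  FROM v$archive_gap;

-- On primary and standby: compare the highest archived sequence per thread
SELECT thread#, MAX(sequence#) AS last_seq
  FROM v$archived_log
 GROUP BY thread#;
```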
Here is a trace file which may be helpful:
Dump file d:\oracle\admin\sbiofac\bdump\sbiofac_lns1_6480.trc
Tue Jun 05 13:46:02 2012
ORACLE V10.2.0.4.0 - Production vsnsta=0
vsnsql=14 vsnxtr=3
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Windows Server 2003 Version V5.2 Service Pack 2
CPU : 2 - type 586, 1 Physical Cores
Process Affinity : 0x00000000
Memory (Avail/Total): Ph:16504M/18420M, Ph+PgF:41103M/45775M, VA:311M/2047M
Instance name: sbiofac
Redo thread mounted by this instance: 1
Oracle process number: 21
Windows thread id: 6480, image: ORACLE.EXE (LNS1)
*** SERVICE NAME:() 2012-06-05 13:46:02.703
*** SESSION ID:(534.1) 2012-06-05 13:46:02.703
*** 2012-06-05 13:46:02.703 58902 kcrr.c
LNS1: initializing for LGWR communication
LNS1: connecting to KSR channel
Success
LNS1: subscribing to KSR channel
Success
*** 2012-06-05 13:46:02.750 58955 kcrr.c
LNS1: initialized successfully ASYNC=1
Destination is specified with ASYNC=61440
*** 2012-06-05 13:46:02.875 73045 kcrr.c
Sending online log thread 1 seq 2217 [logfile 1] to standby
Redo shipping client performing standby login
*** 2012-06-05 13:46:03.656 66535 kcrr.c
Logged on to standby successfully
Client logon and security negotiation successful!
Archiving to destination sbiofacdr ASYNC blocks=20480
Allocate ASYNC blocks: Previous blocks=0 New blocks=20480
Log file opened [logno 1]
*** 2012-06-05 13:46:44.046
Error 272 writing standby archive log file at host 'sbiofacdr'
ORA-00272: error writing archive log
*** 2012-06-05 13:46:44.078 62692 kcrr.c
LGWR: I/O error 272 archiving log 1 to 'sbiofacdr'
*** 2012-06-05 13:46:44.078 60970 kcrr.c
kcrrfail: dest:2 err:272 force:0 blast:1
*** 2012-06-05 13:47:37.031
*** 2012-06-05 13:47:37.031 73045 kcrr.c
Sending online log thread 1 seq 2218 [logfile 2] to standby
*** 2012-06-05 13:47:37.046 73221 kcrr.c
Shutting down [due to no more ASYNC destination]
Redo Push Server: Freeing ASYNC PGA buffer
LNS1: Doing a channel reset for next time around...OK
Great details thanks!!
Are the SDU/TDU settings configured in the Oracle Net files on both primary and standby? I will see if I have an example.
The parameters appear fine.
There was an Oracle document 386417.1 on this; I have not double-checked if it's still available. ( CHECK - Oracle 9, but worth a glance )
Will Check and post here.
I have these listed too. ( Will check all three and see if they still exist )
When to modify, when not to modify the Session data unit (SDU) [ID 99715.1] ( CHECK - still there but very old )
SQL*Net Packet Sizes (SDU & TDU Parameters) [ID 44694.1] ( CHECK - Best by far WOULD REVIEW FIRST )
Any chance your firewall limit the Packet size?
Best Regards
mseberg
Edited by: mseberg on Jun 6, 2012 12:36 PM
Edited by: mseberg on Jun 6, 2012 12:43 PM
Additional document
The relation between MTU (Maximum Transmission Unit) and SDU (Session Data Unit) [ID 274483.1]
Edited by: mseberg on Jun 6, 2012 12:50 PM
Still later
Not sure if this helps but I played around will this on Oracle 11 a little, here that example:
# listener.ora Network Configuration File: /u01/app/oracle/product/11.2.0/network/admin/listener.ora
# Generated by Oracle configuration tools.
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = yourdomain.com)(PORT = 1521))
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = STANDBY)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0)
      (SDU = 32767)
      (GLOBAL_DBNAME = STANDBY_DGMGRL.yourdomain.com)
    )
  )

ADR_BASE_LISTENER = /u01/app/oracle

INBOUND_CONNECT_TIMEOUT_LISTENER = 120
Edited by: mseberg on Jun 6, 2012 12:57 PM
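SDU is negotiated, so it only takes effect when set on both ends; the connect descriptor the primary uses for redo transport needs it too. A hypothetical tnsnames.ora entry matching that listener (host and service name are placeholders):

```
STANDBY =
  (DESCRIPTION =
    (SDU = 32767)
    (ADDRESS = (PROTOCOL = TCP)(HOST = yourdomain.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = STANDBY_DGMGRL.yourdomain.com)
    )
  )
```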
Also of interest
Redo transport and network best practices for 10gR2:
http://www.oracle.com/technetwork/database/features/availability/maa-wp-10gr2-dataguardnetworkbestpr-134557.pdf
Edited by: mseberg on Jun 6, 2012 1:11 PM