Too many archive logs
Hi,
I have a problem on a production server: our support partner told me that we are creating too many archive log files, but they are not at all sure of what is happening.
Please, do you know of any system view from which we could obtain more information about archive log usage?
Thanks in advance.
Best Regards,
Joan Padilla
Hi Joan,
The sensible number of archive logs is determined by your backup and recovery strategy. You can delete them once you have backed up the database and the archive logs. Further info is in the Backup and Recovery Concepts manual.
You can find information on them in v$archived_log.
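For example, a quick way to see how much archive volume the database is producing per day (a sketch; run as a privileged user, using the documented v$archived_log columns):

```sql
-- Daily archive log count and size over the last week
SELECT TRUNC(completion_time) AS day,
       COUNT(*)               AS logs,
       ROUND(SUM(blocks * block_size) / 1024 / 1024) AS mb
FROM   v$archived_log
WHERE  completion_time > SYSDATE - 7
GROUP  BY TRUNC(completion_time)
ORDER  BY day;
```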
If you decide to delete them without having read the aforementioned manual, I will pray you don't suffer a hard disk crash.
You may want to review the relationship with your support partner too.
Their advice is not exactly professional.
Sybrand Bakker
Senior Oracle DBA
Similar Messages
-
Create procedure is generating too many archive logs
Hi
The following procedure was run on one of our databases and it hung, since too many archive logs were being generated.
What would be the answer? The db must remain in archivelog mode.
I understand the nologging concept, but as far as I know it applies to creating tables, views, indexes and tablespaces. This script creates a procedure.
CREATE OR REPLACE PROCEDURE APPS.Dfc_Payroll_Dw_Prc(Errbuf OUT VARCHAR2, Retcode OUT NUMBER
,P_GRE NUMBER
,P_SDATE VARCHAR2
,P_EDATE VARCHAR2
,P_ssn VARCHAR2
) IS
CURSOR MainCsr IS
SELECT DISTINCT
PPF.NATIONAL_IDENTIFIER SSN
,ppf.full_name FULL_NAME
,ppa.effective_date Pay_date
,ppa.DATE_EARNED period_end
,pet.ELEMENT_NAME
,SUM(TO_NUMBER(prv.result_value)) VALOR
,PET.ELEMENT_INFORMATION_CATEGORY
,PET.CLASSIFICATION_ID
,PET.ELEMENT_INFORMATION1
,pet.ELEMENT_TYPE_ID
,paa.tax_unit_id
,PAf.ASSIGNMENT_ID ASSG_ID
,paf.ORGANIZATION_ID
FROM
pay_element_classifications pec
, pay_element_types_f pet
, pay_input_values_f piv
, pay_run_result_values prv
, pay_run_results prr
, pay_assignment_actions paa
, pay_payroll_actions ppa
, APPS.pay_all_payrolls_f pap
,Per_Assignments_f paf
,per_people_f ppf
WHERE
ppa.effective_date BETWEEN TO_DATE(p_sdate) AND TO_DATE(p_edate)
AND ppa.payroll_id = pap.payroll_id
AND paa.tax_unit_id = NVL(p_GRE, paa.tax_unit_id)
AND ppa.payroll_action_id = paa.payroll_action_id
AND paa.action_status = 'C'
AND ppa.action_type IN ('Q', 'R', 'V', 'B', 'I')
AND ppa.action_status = 'C'
--AND PEC.CLASSIFICATION_NAME IN ('Earnings','Alien/Expat Earnings','Supplemental Earnings','Imputed Earnings','Non-payroll Payments')
AND paa.assignment_action_id = prr.assignment_action_id
AND prr.run_result_id = prv.run_result_id
AND prv.input_value_id = piv.input_value_id
AND piv.name = 'Pay Value'
AND piv.element_type_id = pet.element_type_id
AND pet.element_type_id = prr.element_type_id
AND pet.classification_id = pec.classification_id
AND pec.non_payments_flag = 'N'
AND prv.result_value <> '0'
--AND( PET.ELEMENT_INFORMATION_CATEGORY LIKE '%EARNINGS'
-- OR PET.element_type_id IN (1425, 1428, 1438, 1441, 1444, 1443) )
AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN PET.EFFECTIVE_START_DATE AND PET.EFFECTIVE_END_DATE
AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN PIV.EFFECTIVE_START_DATE AND PIV.EFFECTIVE_END_DATE --dcc
AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN Pap.EFFECTIVE_START_DATE AND Pap.EFFECTIVE_END_DATE --dcc
AND paf.ASSIGNMENT_ID = paa.ASSIGNMENT_ID
AND ppf.NATIONAL_IDENTIFIER = NVL(p_ssn, ppf.NATIONAL_IDENTIFIER)
------------------------------------------------------------------TO get emp.
AND ppf.person_id = paf.person_id
AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN ppf.EFFECTIVE_START_DATE AND ppf.EFFECTIVE_END_DATE
------------------------------------------------------------------TO get emp. ASSIGNMENT
--AND paf.assignment_status_type_id NOT IN (7,3)
AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN paf.effective_start_date AND paf.effective_end_date
GROUP BY PPF.NATIONAL_IDENTIFIER
,ppf.full_name
,ppa.effective_date
,ppa.DATE_EARNED
,pet.ELEMENT_NAME
,PET.ELEMENT_INFORMATION_CATEGORY
,PET.CLASSIFICATION_ID
,PET.ELEMENT_INFORMATION1
,pet.ELEMENT_TYPE_ID
,paa.tax_unit_id
,PAF.ASSIGNMENT_ID
,paf.ORGANIZATION_ID;
BEGIN
DELETE cust.DFC_PAYROLL_DW
WHERE PAY_DATE BETWEEN TO_DATE(p_sdate) AND TO_DATE(p_edate)
AND tax_unit_id = NVL(p_GRE, tax_unit_id)
AND ssn = NVL(p_ssn, ssn);
COMMIT;
FOR V_REC IN MainCsr LOOP
INSERT INTO cust.DFC_PAYROLL_DW(SSN, FULL_NAME, PAY_DATE, PERIOD_END, ELEMENT_NAME, ELEMENT_INFORMATION_CATEGORY, CLASSIFICATION_ID, ELEMENT_INFORMATION1, VALOR, TAX_UNIT_ID, ASSG_ID,ELEMENT_TYPE_ID,ORGANIZATION_ID)
VALUES(V_REC.SSN,V_REC.FULL_NAME,v_rec.PAY_DATE,V_REC.PERIOD_END,V_REC.ELEMENT_NAME,V_REC.ELEMENT_INFORMATION_CATEGORY, V_REC.CLASSIFICATION_ID, V_REC.ELEMENT_INFORMATION1, V_REC.VALOR,V_REC.TAX_UNIT_ID,V_REC.ASSG_ID, v_rec.ELEMENT_TYPE_ID, v_rec.ORGANIZATION_ID);
COMMIT;
END LOOP;
END ;
So, how could I assist our developer with this, so that she can run it again without generating a ton of logs?
Thanks
Oracle 9.2.0.5
AIX 5.2
The amount of redo generated is a direct function of how much data is changing. If you insert 'x' number of rows, you are going to generate 'y' MB of redo. If your procedure is destined to insert 1000 rows, then it is destined to create a certain amount of redo. Period.
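Since the redo volume is fixed by the rows changed, the main thing that can be improved here is how the work is done. A sketch of a set-based alternative to the posted cursor loop (same target table; the cursor's SELECT is elided, not reproduced):

```sql
-- One statement, one commit: same rows and roughly the same redo,
-- but far less per-row overhead than the cursor loop.
INSERT INTO cust.dfc_payroll_dw
      (ssn, full_name, pay_date, period_end, element_name,
       element_information_category, classification_id,
       element_information1, valor, tax_unit_id, assg_id,
       element_type_id, organization_id)
SELECT /* the SELECT from cursor MainCsr, unchanged */ ...;
COMMIT;
```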
I would question the <i>performance</i> of the procedure shown: using a cursor loop with a commit after every row is going to be a drag on performance, but that doesn't change the fact that 'x' inserts will always generate 'y' redo. -
System I/O and Too Many Archive Logs
Hi all,
This is frustrating me. Our production database suddenly began to produce too many archived redo logs -- again. This happened before: two months ago our database was producing too many archive logs; just then we began to get async I/O errors. We consulted a DBA, and he restarted the database server, telling us it was caused by the system(???).
But after this restart the amount of archive logs decreased drastically. I was deleting the logs by hand (350 GB DB, 300 GB arch area), and after this the archive logs never exceeded 10% of the 300 GB archive area. Right now the logs are growing by 1% (3 GB) every 7-8 minutes, which is far too fast.
I checked from Enterprise Manager: the System I/O graph is continuous, and the details show processes like ARC0, ARC1 and LGWR (log file sequential read and db file parallel write are the most active events). Also, physical reads are very inconsistent and can exceed 30000 KB at times. The undo tablespace is full nearly all of the time, causing ORA-01555.
The above symptoms have all began today. The database is closed at 3:00 am to take offline backup and opened at 6:00 am everyday.
Nothing has changed on the database(9.2.0.8), applications(11.5.10.2) or OS(AIX 5.3).
What is the reason for this senseless behaviour? Please help me.
Thanks in advance.
Regards.
Burak
Hello Burak,
A high number of archive logs is being created because you may have massive redo generation on your database. Do you have an application that updates, deletes or inserts into any kind of table?
What is written in the alert.log file?
By the way, do you have the undo tablespace with the guaranteed retention option?
Have you ever checked the log file switch frequency map?
Please use the SQL below to determine the switch frequency:
SELECT * FROM (
SELECT * FROM (
SELECT TO_CHAR(FIRST_TIME, 'DD/MM') AS "DAY"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '00', 1, 0)), '999') "00:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '01', 1, 0)), '999') "01:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '02', 1, 0)), '999') "02:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '03', 1, 0)), '999') "03:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '04', 1, 0)), '999') "04:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '05', 1, 0)), '999') "05:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '06', 1, 0)), '999') "06:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '07', 1, 0)), '999') "07:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '08', 1, 0)), '999') "08:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '09', 1, 0)), '999') "09:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '10', 1, 0)), '999') "10:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '11', 1, 0)), '999') "11:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '12', 1, 0)), '999') "12:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '13', 1, 0)), '999') "13:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '14', 1, 0)), '999') "14:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '15', 1, 0)), '999') "15:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '16', 1, 0)), '999') "16:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '17', 1, 0)), '999') "17:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '18', 1, 0)), '999') "18:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '19', 1, 0)), '999') "19:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '20', 1, 0)), '999') "20:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '21', 1, 0)), '999') "21:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '22', 1, 0)), '999') "22:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '23', 1, 0)), '999') "23:00"
FROM V$LOG_HISTORY
WHERE extract(year FROM FIRST_TIME) = extract(year FROM sysdate)
GROUP BY TO_CHAR(FIRST_TIME, 'DD/MM')
) ORDER BY TO_DATE(extract(year FROM sysdate) || DAY, 'YYYY DD/MM') DESC
) WHERE ROWNUM < 8
Ogan -
Too many archive logs getting generated on 11.1.0.7
I can see heavy archive generation on the PROD instance although there is not much activity on PROD.
This is a fresh instance which went live a month ago, and archive logging was enabled a few weeks ago, but
I can see about 20 to 23 GB of archive generation daily although the database size is only around 90 GB.
I raised an SR; they told me the database is unable to purge statistics from the SYSAUX tablespace.
They asked me to run some queries and to run this:
exec dbms_stats.purge_stats(sysdate - 50);
which ran for long hours and just exited because of insufficient space,
although the retention policy is 31 days.
SQL> select DBMS_STATS.GET_STATS_HISTORY_RETENTION from dual;
GET_STATS_HISTORY_RETENTION
31
History is available for more than 90 days:
SQL> select dbms_stats.get_stats_history_availability from dual;
GET_STATS_HISTORY_AVAILABILITY
01-APR-13 11.00.07.250483000 AM +05:30
They asked me to apply patch 12683802, which I applied on the DEV instance,
but I can still see many archives being generated although there is no activity on the DEV instance.
Now when I run this script little by little:
exec dbms_stats.purge_stats(sysdate - 50);
it purges, but it takes ages and SYSAUX keeps filling up. The current SYSAUX on DEV has 3 datafiles of around 4000 MB each.
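One workaround sometimes used for this (a sketch, not from the SR; the day range is illustrative) is to purge the history a day at a time, oldest first, so each call needs less undo and temp space:

```sql
-- Purge optimizer stats history in daily steps, from ~90 days old
-- down toward the 31-day retention, oldest history first.
BEGIN
  FOR d IN REVERSE 32 .. 90 LOOP
    DBMS_STATS.PURGE_STATS(SYSDATE - d);  -- drops history older than d days
  END LOOP;
END;
/
```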
-- B4 applying patch
SQL> select trunc(first_time) on_date,
2 thread# thread,
3 min(sequence#) min_sequence,
4 max(sequence#) max_sequence,
5 max(sequence#) - min(sequence#) nos_archives,
6 (max(sequence#) - min(sequence#)) * log_avg_mb req_space_mb
7 from v$log_history,
8 (select avg(bytes/1024/1024) log_avg_mb
9 from v$log)
10 group by trunc(first_time), thread#, log_avg_mb
11 order by on_date
12 /
ON_DATE THREAD MIN_SEQUENCE MAX_SEQUENCE NOS_ARCHIVES REQ_SPACE_MB
24-JUN-13 1 1 3 2 2000
25-JUN-13 1 4 17 13 13000
26-JUN-13 1 18 30 12 12000
27-JUN-13 1 31 43 12 12000
28-JUN-13 1 44 51 7 7000
29-JUN-13 1 52 64 12 12000
30-JUN-13 1 65 77 12 12000
01-JUL-13 1 78 88 10 10000
-- after applying patch
ON_DATE THREAD MIN_SEQUENCE MAX_SEQUENCE NOS_ARCHIVES REQ_SPACE_MB
21-JUN-13 1 1 5 4 4000
22-JUN-13 1 6 20 14 14000
23-JUN-13 1 21 35 14 14000
24-JUN-13 1 36 85 49 49000
25-JUN-13 1 86 111 25 25000
26-JUN-13 1 112 127 15 15000
27-JUN-13 1 128 134 6 6000
28-JUN-13 1 135 143 8 8000
29-JUN-13 1 144 151 7 7000
30-JUN-13 1 152 158 6 6000
01-JUL-13 1 159 163 4 4000
The before and after results above are taken from TEST and DEV, which are cloned from the PROD instance; only DEV has the patch applied.
Here are the environment details:
EBS:21.1.3
Database:11.1.0.7
OS:RHEL 5.6
I am still not satisfied and want to know if any one of you has a solution for this.
Please help.
Zavi
Hi Amogh,
As suggested by support, I ran LogMiner; here is the output.
I followed note ID 1504755.1.
------------------------------log miner output for archs of 02.07.13 on PROD------------------------
-- following logs
Jul 2 10:59 archive_PROD_1_1446_807549584.arc
Jul 2 11:05 archive_PROD_1_1447_807549584.arc
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1446_807549584.arc', -
OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1447_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
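One step missing from the transcript above: V$LOGMNR_CONTENTS is only populated after the miner is started in the same session. A sketch using the online dictionary catalog:

```sql
EXECUTE DBMS_LOGMNR.START_LOGMNR( -
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
-- ... query V$LOGMNR_CONTENTS here ...
EXECUTE DBMS_LOGMNR.END_LOGMNR;
```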
SQL> select operation,seg_owner,seg_name,count(*) from v$logmnr_contents group by seg_owner,seg_name,operation;
OPERATION SEG_OWNER SEG_NAME COUNT(*)
INSERT APPLSYS FND_LOGINS 47
UPDATE PO RCV_TRANSACTIONS_INTERFACE 2
INSERT PO RCV_TRANSACTIONS 3
UNSUPPORTED INV MTL_SUPPLY 6
DELETE PO RCV_SUPPLY 3
UPDATE CSI CSI_ITEM_INSTANCES 3
UPDATE APPLSYS FND_CONC_RELEASE_CLASSES 17
INSERT JA JAI_RTP_POPULATE_T 3
INSERT GL GL_CODE_COMBINATIONS 1
UNSUPPORTED JA JAI_AP_TDS_INV_TAXES 7
UNSUPPORTED AP AP_INVOICE_LINES_ALL 8
DELETE ZX ZX_TRX_HEADERS_GT 4
INSERT INV MTL_ITEM_CATEGORIES 3
INSERT XLA XLA_AE_HEADERS_GT 3
UPDATE XLA XLA_AE_HEADERS_GT 3
UPDATE ENI DR$ENI_DEN_HRCHY_PAR_IM1$R 4
UPDATE APPLSYS FND_USER_DESKTOP_OBJECTS 10
INSERT APPLSYS FND_APPL_SESSIONS 3
UPDATE XLA XLA_TRANSFER_LOGS 2
INSERT GL GL_JE_HEADERS 2
DELETE GL GL_INTERFACE_CONTROL 1
DELETE GL GL_INTERFACE 3
INSERT CE CE_SECURITY_PROFILES_GT 1
INSERT PA PA_PROJECTS_FOR_ACCUM 8
INSERT PA PA_PJM_REQ_COMMITMENTS_TMP 1162
DELETE PA PA_PROJECT_ACCUM_COMMITMENTS 162
UPDATE PA PA_TXN_ACCUM 1749
UPDATE PA PA_RESOURCE_LIST_ASSIGNMENTS 13
INSERT ENI ENI_OLTP_ITEM_STAR 1
START 561
COMMIT 934
INSERT ICX ICX_SESSION_ATTRIBUTES 45
INSERT INV MTL_SUPPLY 8
UPDATE INV MTL_MATERIAL_TRANSACTIONS_TEMP 11
INSERT CSI CSI_TRANSACTIONS 3
INSERT CSI CSI_I_VERSION_LABELS 1
INSERT CSI CSI_I_VERSION_LABELS_H 1
INSERT JA JAI_RTP_TRANS_T 3
INSERT JA JAI_AP_INVOICE_LINES 1
UNSUPPORTED AP AP_INVOICE_DISTRIBUTIONS_ALL 9
DELETE BOM BOM_RESOURCE_CHANGES 2
ROLLBACK 3
INSERT XLA XLA_TRANSACTION_ENTITIES,AP 2
INSERT ENI MLOG$_ENI_OLTP_ITEM_STAR 7
UNSUPPORTED XLA XLA_TRANSACTION_ENTITIES,AP 4
INSERT XLA XLA_DISTRIBUTION_LINKS,AP 6
UPDATE QA QA_CHARS 1
UPDATE MRP MRP_SCHEDULE_DATES 7
UPDATE ENI ENI_DENORM_HIERARCHIES 4
UNSUPPORTED SYS SEG$ 1
INSERT INV MTL_TRANSACTION_ACCOUNTS 4
UPDATE APPLSYS FND_USER_PREFERENCES 5
DELETE SYS WRI$_OPTSTAT_HISTHEAD_HISTORY 235644
INSERT BNE BNE_DOC_USER_PARAMS 1
INSERT GL GL_INTERFACE 6
INSERT GL GL_JE_LINES 4
INSERT GL GL_JE_SEGMENT_VALUES 2
INSERT GL GL_IMPORT_REFERENCES 4
UPDATE PA PA_MAPPABLE_TXNS_TMP 3
UPDATE PA PA_PROJECT_ACCUM_COMMITMENTS 95
INSERT INV MLOG$_MTL_SYSTEM_ITEMS_B 1
INSERT INV MTL_SYSTEM_ITEMS_TL 1
UPDATE APPLSYS FND_CONFLICTS_DOMAIN 6040
INSERT APPLSYS MO_GLOB_ORG_ACCESS_TMP 97
UPDATE APPLSYS FND_CONCURRENT_QUEUES 117
UNSUPPORTED JA JAI_RCV_LINES 3
INSERT INV MLOG$_MTL_MATERIAL_TRANSAC 21
INSERT CSI CSI_ITEM_INSTANCES 1
INSERT JA JAI_AP_TDS_INV_TAXES 2
INSERT BOM BOM_RES_INSTANCE_CHANGES 2
INSERT PA PA_TXN_INTERFACE_AUDIT_ALL 4
INSERT PA PA_EXPENDITURE_COMMENTS 2
DELETE PA PA_TRANSACTION_XFACE_CTRL_ALL 1
INSERT AP AP_LINE_TEMP_GT 3
UPDATE ENI ENI_OLTP_ITEM_STAR 3
INSERT XLA XLA_EVENTS_GT 3
UPDATE XLA XLA_EVENTS_GT 3
UPDATE XLA XLA_AE_HEADERS,AP 8
DELETE XLA XLA_VALIDATION_LINES_GT 2
INSERT INV MTL_TXN_COST_DET_INTERFACE 2
UPDATE INV MTL_CST_TXN_COST_DETAILS 2
DELETE INV MTL_TXN_COST_DET_INTERFACE 2
UNSUPPORTED SYS HISTGRM$ 16
UPDATE INV MTL_MATERIAL_TRANSACTIONS 6
INSERT XLA XLA_EVENTS,CST 3
DELETE BNE BNE_DOC_ACTIONS 1
INSERT GL GL_INTERFACE_CONTROL 2
INSERT XLA XLA_TB_WORK_UNITS 1
UPDATE ZX ZX_TRX_HEADERS_GT 67
DDL SYS SYS_TEMP_0FD9D6611_EC264F91 1
DELETE PA PA_TXN_ACCUM_DETAILS 1327
UNSUPPORTED PA PA_TXN_ACCUM 523
INSERT PA PA_MAPPABLE_TXNS_TMP 2
DELETE PA PA_RESOURCE_LIST_PARENTS_TMP 2
DELETE PA PA_PROJECTS_FOR_ACCUM 17
UPDATE SYS SEQ$ 224
DELETE APPLSYS WF_DEFERRED 5
UPDATE APPLSYS FND_CONC_PROG_ONSITE_INFO 51
INSERT APPLSYS FND_CONCURRENT_REQUESTS 46
INSERT PO PO_SESSION_GT 7
INSERT INV MTL_MATERIAL_TRANSACTIONS_TEMP 5
INSERT INV MTL_ONHAND_QUANTITIES_DETAIL 3
UPDATE SYS SYS_FBA_BARRIERSCN 2
UPDATE MRP MRP_MESSAGES_TMP 29
DELETE MRP MRP_MESSAGES_TMP 29
UPDATE AP AP_INVOICES_ALL 15
UPDATE JA JAI_AP_TDS_INV_TAXES 43
INSERT BOM MLOG$_BOM_RESOURCE_CHANGES 4
UNSUPPORTED PA PA_TRANSACTION_INTERFACE_ALL 2
INSERT PA PA_EXPENDITURE_GROUPS_ALL 1
INSERT PA PA_EXPENDITURE_ITEMS_ALL 2
INSERT AP AP_INVOICE_LINES_ALL 3
INSERT APPLSYS FND_LOG_MESSAGES 29
INSERT ZX ZX_ITM_DISTRIBUTIONS_GT 71
UNSUPPORTED ZX ZX_TRX_HEADERS_GT 5
INSERT XLA XLA_EVENTS,AP 3
UPDATE AP AP_PREPAY_HISTORY_ALL 3
UPDATE AP AP_PREPAY_APP_DISTS 1
INSERT INV MLOG$_MTL_ITEM_CATEGORIES 3
INSERT XLA XLA_AE_LINES,AP 6
DELETE XLA XLA_AE_HEADERS_GT 1
UPDATE SYS HIST_HEAD$ 76
INSERT SYS WRI$_OPTSTAT_IND_HISTORY 10
UNSUPPORTED ENI DR$ENI_DEN_HRCHY_PAR_IM1$R 3
UPDATE INV MTL_CST_ACTUAL_COST_DETAILS 3
INSERT XLA XLA_TRANSACTION_ENTITIES,CST 3
INSERT ICX ICX_TRANSACTIONS 1
UPDATE GL GL_INTERFACE 4
UPDATE PO PO_REQ_DISTRIBUTIONS_ALL 66
UNSUPPORTED PO PO_REQUISITION_HEADERS_ALL 1
UNSUPPORTED PO PO_REQUISITION_LINES_ALL 66
DELETE PA PA_COMMITMENT_TXNS 1461
UNSUPPORTED PA PA_MAPPABLE_TXNS_TMP 3
INSERT PA PA_PROJECT_ACCUM_COMMITMENTS 197
INSERT BOM CST_ITEM_COSTS 1
INSERT JA JAI_RCV_TRANSACTIONS 3
UPDATE PO RCV_SUPPLY 3
UPDATE INV MTL_SUPPLY 11
DELETE INV MTL_SUPPLY 11
UPDATE PO PO_DISTRIBUTIONS_ALL 3
UPDATE PO PO_LINE_LOCATIONS_ALL 9
INSERT INV MLOG$_MTL_ONHAND_QUANTITIE 3
DELETE PO RCV_TRANSACTIONS_INTERFACE 3
INSERT MRP MRP_MESSAGES_TMP 22
INSERT AP AP_INVOICE_DISTRIBUTIONS_ALL 3
UPDATE AP AP_INVOICE_DISTRIBUTIONS_ALL 30
INSERT PA PA_EXPENDITURES_ALL 1
INSERT ZX ZX_TRX_HEADERS_GT 8
UNSUPPORTED ZX ZX_LINES_DET_FACTORS 566
DELETE AP AP_LINE_TEMP_GT 9
UPDATE AP AP_PAYMENT_SCHEDULES_ALL 5
UPDATE ICX ICX_SESSIONS 12
UNSUPPORTED AP AP_INVOICES_ALL 3
INSERT JA JAI_RCV_JOURNAL_ENTRIES 4
UNSUPPORTED INV MTL_TRANSACTIONS_INTERFACE 24
UPDATE MRP MRP_RECOMMENDATIONS 14
INSERT ENI MLOG$_ENI_DENORM_HIERARCHI 16
INSERT SYS WRI$_OPTSTAT_HISTGRM_HISTORY 8
INSERT APPLSYS WF_CONTROL 1
UPDATE BNE BNE_DOC_USER_PARAMS 1
DELETE APPLSYS WF_CONTROL 1
UPDATE GL GL_JE_BATCHES 2
INSERT GL GL_POSTING_INTERIM 1
UNSUPPORTED JA JAI_PO_OSP_LINES 1
INSERT AP AP_INVOICES_ALL 2
INSERT PA PA_COMMITMENT_TXNS_TMP 160
UPDATE PA PA_COMMITMENT_TXNS 1391
DELETE PA PA_MAPPABLE_TXNS_TMP 3
INTERNAL 4906910
UPDATE APPLSYS FND_CONCURRENT_REQUESTS 153
UPDATE JA JAI_RCV_LINES 3
INSERT INV MLOG$_MTL_SUPPLY 30
INSERT CSI CSI_I_PARTIES_H 1
UNSUPPORTED JA JAI_RCV_TRANSACTIONS 7
UPDATE JA JAI_RCV_TRANSACTIONS 24
INSERT MRP MRP_RECOMMENDATIONS 8
UNSUPPORTED AP AP_PAYMENT_SCHEDULES_ALL 4
UPDATE PA PA_TRANSACTION_INTERFACE_ALL 4
INSERT XLA XLA_ACCT_PROG_EVENTS_GT 4
INSERT XLA XLA_AE_LINES_GT 12
UNSUPPORTED XLA XLA_AE_LINES_GT 30
INSERT XLA XLA_AE_HEADERS,AP 3
INSERT XLA XLA_VALIDATION_LINES_GT 3
DELETE XLA XLA_EVENTS_GT 1
UNSUPPORTED XLA XLA_BAL_CONCURRENCY_CONTROL 2
DELETE XLA XLA_BAL_CONCURRENCY_CONTROL 2
INSERT APPLSYS FND_CONC_REQUEST_ARGUMENTS 2
UPDATE QA QA_RESULTS 2
INSERT MRP MRP_SCHEDULE_CONSUMPTIONS 21
INSERT ENI ENI_DENORM_HRCHY_PARENTS 4
UNSUPPORTED ENI ENI_DENORM_HRCHY_PARENTS 4
UPDATE APPLSYS FND_USER 6
INSERT XLA XLA_TRANSFER_LOGS 2
UPDATE GL GL_JE_LINES 4
UPDATE APPLSYS FND_NODES 3
UNSUPPORTED PO PO_REQ_DISTRIBUTIONS_ALL 66
DELETE ZX ZX_ITM_DISTRIBUTIONS_GT 66
INSERT AP AP_PAYMENT_SCHEDULES_ALL 2
INSERT IBY IBY_DOCS_PAYABLE_GT 2
INSERT AP AP_DOC_SEQUENCE_AUDIT 1
INSERT PA PA_COMMITMENT_TXNS 1380
INSERT PA PA_RESOURCE_ACCUM_DETAILS 3
INSERT ENI DR$ENI_DEN_HRCHY_PAR_IM1$I 19
UPDATE PA PA_PROJECT_ACCUM_ACTUALS 12
INSERT EGO EGO_ITEM_TEXT_TL 1
INSERT INV MTL_ITEM_REVISIONS_TL 1
UNSUPPORTED 1720
INSERT APPLSYS FND_CONC_PP_ACTIONS 47
UNSUPPORTED APPLSYS FND_CONCURRENT_PROCESSES 97
DELETE APPLSYS MO_GLOB_ORG_ACCESS_TMP 11
INSERT MRP MRP_RELIEF_INTERFACE 16
UPDATE PO PO_REQUISITION_LINES_ALL 1
INSERT INV MTL_MATERIAL_TRANSACTIONS 5
INSERT CSI CSI_I_PARTIES 1
UNSUPPORTED SYS DBMS_LOCK_ALLOCATED 15
UPDATE PO PO_SESSION_GT 4
INSERT MRP MRP_SCHEDULE_DATES 4
INSERT MRP MLOG$_MRP_SCHEDULE_DATES 15
DELETE MRP MRP_SCHEDULE_DATES 4
DELETE BOM BOM_RES_INSTANCE_CHANGES 2
INSERT PA PA_COST_DISTRIBUTION_LINES_ALL 2
INSERT ZX ZX_TRANSACTION_LINES_GT 138
UNSUPPORTED CSI CSI_ITEM_INSTANCES 2
UNSUPPORTED XLA XLA_EVENTS,AP 6
INSERT XLA XLA_AE_SEGMENT_VALUES 13
UNSUPPORTED SYS TAB$ 15
INSERT SYS WRI$_OPTSTAT_HISTHEAD_HISTORY 81562
DDL SYS SYS_TEMP_0FD9D6610_EC264F91 1
UNSUPPORTED SYS IND$ 10
DELETE APPLSYS FND_CONC_PP_ACTIONS 7
INSERT BNE BNE_DOC_ACTIONS 1
INSERT GL GL_JE_BATCHES 1
INSERT PA PA_TXN_ACCUM_DETAILS 1386
INSERT PA PA_TXN_ACCUM 2
INSERT PA PA_RESOURCE_LIST_PARENTS_TMP 2
UPDATE CTXSYS DR$INDEX 1
INSERT INV MTL_PENDING_ITEM_STATUS 1
DELETE SYS WRI$_OPTSTAT_IND_HISTORY 2130287
UNSUPPORTED APPLSYS FND_CONCURRENT_REQUESTS 49
INSERT PO RCV_TRANSACTIONS_INTERFACE 3
UPDATE APPLSYS FND_CONCURRENT_PROCESSES 40
UPDATE PO RCV_TRANSACTIONS 3
UNSUPPORTED PO PO_HEADERS_ALL 3
INSERT INV MTL_CST_TXN_COST_DETAILS 5
INSERT CSI CSI_ITEM_INSTANCES_H 3
DELETE INV MTL_MATERIAL_TRANSACTIONS_TEMP 5
UPDATE SYS DBMS_LOCK_ALLOCATED 15
DELETE MRP MRP_RECOMMENDATIONS 8
UPDATE AP AP_INVOICE_LINES_ALL 11
INSERT BOM MLOG$_BOM_RES_INSTANCE_CHA 4
INSERT BOM BOM_RESOURCE_CHANGES 2
UPDATE PA PA_TRANSACTION_XFACE_CTRL_ALL 1
INSERT AP AP_PREPAY_HISTORY_ALL 1
INSERT AP AP_PREPAY_APP_DISTS 1
INSERT XLA XLA_EVT_CLASS_ORDERS_GT 4
UPDATE XLA XLA_EVENTS,AP 6
INSERT ENI ENI_DENORM_HIERARCHIES 6
INSERT SYS WRI$_OPTSTAT_TAB_HISTORY 5
UNSUPPORTED XLA XLA_AE_HEADERS_GT 3
UPDATE BNE BNE_DOC_ACTIONS 1
UPDATE GL GL_JE_HEADERS 2
DELETE XLA XLA_TRANSFER_LOGS 1
UNSUPPORTED ZX ZX_TRANSACTION_LINES_GT 264
INSERT PA PA_PJM_PO_COMMITMENTS_TMP 378
UNSUPPORTED INV MTL_SYSTEM_ITEMS_B 2
INSERT INV MTL_ITEM_REVISIONS_B 1
266 rows selected.
------------------------------log miner output for archs of 03.07.13 on PROD------------------------
--following log files
Jul 3 10:56 archive_PROD_1_1469_807549584.arc
Jul 3 10:58 archive_PROD_1_1470_807549584.arc
Jul 3 11:01 archive_PROD_1_1471_807549584.arc
Jul 3 11:03 archive_PROD_1_1472_807549584.arc
Jul 3 11:04 archive_PROD_1_1473_807549584.arc
Jul 3 11:05 archive_PROD_1_1474_807549584.arc
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1469_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1470_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1471_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1472_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1473_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1474_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
SQL> select operation,seg_owner,seg_name,count(*) from v$logmnr_contents group by seg_owner,seg_name,operation;
OPERATION SEG_OWNER SEG_NAME COUNT(*)
DELETE SYS CCOL$ 6
DELETE SYS SEG$ 3
INSERT APPLSYS FND_LOGINS 108
UPDATE APPLSYS FND_CONC_RELEASE_CLASSES 33
UPDATE PO RCV_TRANSACTIONS_INTERFACE 4
INSERT APPLSYS FND_LOG_TRANSACTION_CONTEXT 1
INSERT PO RCV_TRANSACTIONS 2
UNSUPPORTED INV MTL_SUPPLY 4
INSERT PO RCV_RECEIVING_SUB_LEDGER 4
INSERT GL GL_JE_HEADERS 12
DELETE GL GL_INTERFACE 49
DELETE GL GL_INTERFACE_CONTROL 8
INSERT JA JAI_RTP_POPULATE_T 1
INSERT JA JAI_RGM_TRM_SCHEDULES_T 4
UPDATE JA JAI_RCV_CENVAT_CLAIMS 2
INSERT APPLSYS FND_APPL_SESSIONS 9
UPDATE APPLSYS WF_NOTIFICATIONS 4
UPDATE PO PO_HEADERS_INTERFACE 10
INSERT PO PO_DISTRIBUTIONS_INTERFACE 6
INSERT JA JAI_PO_LINE_LOCATIONS 18
UNSUPPORTED PO PO_LINE_LOCATIONS_ALL 2
UNSUPPORTED PO PO_LINES_ALL 8
DELETE ZX ZX_TRX_HEADERS_GT 14
INSERT PA PA_STRUCTURES_TASKS_TMP 440
INSERT SYS SEQ$ 2
INSERT BNE BNE_DOC_CREATION_PARAMS 9
UPDATE ENI DR$ENI_DEN_HRCHY_PAR_IM1$R 6
INSERT SYS MON_MODS$ 9
UPDATE PA PA_PROJECTS_ALL 4
UPDATE WIP WIP_MOVE_TXN_INTERFACE 1
INSERT CE CE_SECURITY_PROFILES_GT 8
UPDATE WIP WIP_PERIOD_BALANCES 1
INSERT INV MTL_MWB_GTMP 14
UPDATE SYS WRI$_SCH_CONTROL 1
DELETE SYS OBJ$ 51
DDL SYS WRH$_ROWCACHE_SUMMARY 1
DDL SYS WRH$_ACTIVE_SESSION_HISTORY 1
DDL SYS WRH$_SYS_TIME_MODEL 1
INSERT IBY IBY_DOCS_PAYABLE_ALL 6
UPDATE IBY IBY_DOCS_PAYABLE_ALL 36
INSERT GL GL_CODE_COMBINATIONS 3
INSERT XLA XLA_AE_HEADERS_GT 12
UPDATE -
Too many archived logs when trying a backup
Hello all,
I'm having a bit of trouble running a backup script on an Oracle instance (10g Release 1, on Solaris).
As a normal DBA practice, I guess backups should be scheduled and run from the very beginning of using a DB. Sometimes, for various reasons, this does not happen. In that case, before running the first (full) backup of the DB, there might be tens or hundreds of archived logs waiting to be backed up, and the flash recovery area might just not be able to handle all of them (at least that's how I see it; I might be wrong, as I'm still fighting my way through Oracle's backup and recovery issues). In that case, a backup script containing the following RMAN sequence:
run{
allocate channel ch1 type disk;
backup
incremental
level = 0
database;
release channel ch1;
}
fails with the error message ORA-19804 (cannot reclaim disk space from the DB_RECOVERY_FILE_DEST_SIZE limit).
After this, the archived logs that were backed up are marked as obsolete, and I can delete them from RMAN with "delete obsolete". The script I'm using for backup runs fine afterwards. Before attempting a backup, however, no logs are reported as obsolete.
The retention policy is the default "redundancy 1" and the archivelog deletion policy is none.
How could I prevent the backup script from crashing? If I'm changing the archivelog deletion policy, will I be able to restore the DB properly from my backup set? (as earlier logs will be deleted before making a backup)
Thank you for any suggestions, your help is very much appreciated,
Adrian
I have the impression that you are not using scheduled backups and are letting the client decide when to take a backup. This is not good: in that case you won't be able to get rid of this error, because you never know when the next (or even the first) backup is going to occur, and without it you can't even think of deleting your logs. If you are not using tape drives, then your archive logs and backups will both sit in the recovery area, and you should have enough space available to hold both of them. I would schedule your backups and use the DELETE INPUT clause of the RMAN backup to delete the archive logs after backing them up, and also delete the obsolete backups according to your recovery window. This is the proper way to manage recovery area space. You really need to tune the recovery area by testing the amount of redo generation, the backup size, the retention policy, etc., and then come up with a recovery area size suitable for your environment, so it can hold all of the required files for the required time (recovery window).
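Put together, a scheduled script along those lines might look like this (a sketch only; the channel name and reliance on the configured retention policy are illustrative):

```sql
run {
  allocate channel ch1 type disk;
  -- back up the database and its archive logs, deleting logs once backed up
  backup incremental level 0 database plus archivelog delete input;
  -- remove backups no longer needed under the retention policy
  delete noprompt obsolete;
  release channel ch1;
}
```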
Daljit Singh -
Hi,
we have an E-Business Suite 11.5.10.2 / Solaris 5.10 production server. Until three weeks ago (for the past 1.5 years), there were at most 10 archive logs generated per day (each around 250 MB); now this has changed to archives of 200 KB, 152 KB or 1 MB every single minute.
I am unable to understand this behaviour. The number of users still remain the same and the usage is as usual.
Is there a way I can check what has gone wrong? I could not see any errors also in the alert.log
Please suggest what can be done.
Thanks
Rajesh
rajeshappsdba wrote:
Hi,
we have EBusiness suite 11.5.10.2 Solaris 5.10 Production server. Until last 3 week (for the past 1.5 years), there were max 10 archive log generation (archive size like 250mb) now which has been increased to 200 kb,152kb,1mb archives for every 1 MIN.
I am unable to understand this behaviour. The number of users still remain the same and the usage is as usual.
Is there a way I can check what has gone wrong? I could not see any errors also in the alert.log
Please suggest what can be done.
Thanks
Rajesh
Redo generation is the result of the activity on the system. You've therefore either started doing much more work - which the business users might well be able to tell you about - or you've started doing your existing workload extremely inefficiently, which your business users will probably have noticed. In this case I would start not with LogMiner, but with the business and the application - E-Business Suite, for example, keeps pretty good logs of what activity has happened on the system. I'd also be interested in what was changed 3 weeks ago, because something was!
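If you also want a quick database-side check to go with that, one option (a sketch) is to rank current sessions by the redo they have generated since logon:

```sql
-- Sessions ordered by redo generated so far (sketch)
SELECT s.sid, s.username, s.program, t.value AS redo_bytes
FROM   v$sesstat  t
JOIN   v$statname n ON n.statistic# = t.statistic#
JOIN   v$session  s ON s.sid = t.sid
WHERE  n.name = 'redo size'
ORDER  BY t.value DESC;
```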
Niall -
REDO scanning goes in infinite loop and Too much ARCHIVE LOG
After we restart the database, the capture process's REDO scanning goes into an infinite loop and too much archive log is generated.
No idea what's going on... otherwise the basic Streams functionality works fine.
What's your DB version?
-
I have a problem with my Mac: I have too many archives and programs, and I want to delete all files and start my Mac over from the beginning without doing a format
I do not recommend reformatting your hard drive and reloading your software. This is a Windows thing. On an older Mac it may be difficult to find all your software again.
It is best to have more than 2 GB of free space. Many posters to these forums state that you need much more free space: 5 to 10 GB, or 10 percent of your HD size.
(0)
Be careful when deleting files. A lot of people have trashed their system by deleting things. Place things in the trash. Reboot & run your applications. Empty the trash.
Go after large files that you have created & know what they are. Do not delete small files that are in a folder if you do not know what the folder is for. Anything less than a megabyte is a small file these days.
(1)
Run
OmniDiskSweeper
"The simple, fast way to save disk space"
OmniDiskSweeper is now free!
http://www.omnigroup.com/applications/omnidisksweeper/download/
This will give you a list of files and folders sorted by size. Go after things you know that are big.
(2)
These pages have some hints on freeing up space:
http://thexlab.com/faqs/freeingspace.html
http://www.macmaps.com/diskfull.html
(3)
Buy an external firewire harddrive.
For a PPC computer, I recommend a firewire drive.
The everything interface:
FireWire 800/400 + USB 2.0 + eSATA ('Quad Interface')
The save-a-little-money interface:
FireWire 400 + USB 2.0
This web page lists both types of external hard drive. You may need to scroll to the right to see both.
http://eshop.macsales.com/shop/firewire/1394/USB/EliteAL/eSATA_FW800_FW400_USB
(4)
Buy a flash card. -
Will ANALYZE create too many redo logs?
DB : 10gR2
Hello Folks,
Does ANALYZE create too many redo log files? Is there any other valid option besides NOLOGGING? I guess it's also going to create too many log switches, as it's part of DDL.

HUH?
REDO log switches result from DML.
If you are obsessed with log switches, then you need to use LogMiner to observe the relative percentage of changed data that comes from DBMS_STATS compared to normal DML activity.
I seriously doubt that DBMS_STATS has any significant impact on redo log switches. -
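Rather than arguing the point, it can be measured; a sketch against the standard v$sesstat, v$statname and v$session views - run DBMS_STATS in one session and compare its 'redo size' with the sessions doing normal DML:

```sql
-- Redo bytes generated so far, per session.
select s.sid, s.username, st.value as redo_bytes
from   v$sesstat  st
join   v$statname sn on sn.statistic# = st.statistic#
join   v$session  s  on s.sid         = st.sid
where  sn.name = 'redo size'
order  by st.value desc;
```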
Apps team uploading 114505 KB of text file, too many archives getting generated
The upload is going on, and too many archive logs are getting generated in the archivelog location.
I have increased the space from 200 GB to 900 GB.
Please let me know how to find out how many archives get generated for a 14505 KB data upload.
Can we estimate, before the data upload, how many archives will be generated?
Can you please explain clearly?
Thanks.

Hi,
There is no specific method to calculate the number of archived log files that a particular load will generate.
But we may do a rough calculation by looking at the rate of archived log generation.
Check the following things:
1) How frequently the logs are getting generated, and the size of the files.
2) How much data has been loaded so far.
Based on the above input, you may get some idea of the number of ARCHIVED LOG files to be generated during the rest of the data load.
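A sketch of the rate check in step 1, again using v$archived_log:

```sql
-- Archive logs generated per hour while the load runs; multiplying
-- the hourly rate by the remaining load time gives a rough estimate.
select to_char(first_time, 'YYYY-MM-DD HH24') as hour,
       count(*)                               as logs
from   v$archived_log
group  by to_char(first_time, 'YYYY-MM-DD HH24')
order  by 1;
```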
Also, you may do the following to avoid having to keep adding space to the archive destination:
1) Write a UNIX shell script to BACK UP the ARCHIVED LOG FILES and DELETE them once backed up; it should run every 10 minutes.
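The RMAN command such a shell script would wrap might look like this (the format path is a placeholder, not from the original post):

```sql
run {
  -- back up every archived log, then remove each one from disk
  -- as soon as it has been backed up
  backup format '/backup/arch_%d_%t_%s_%p' archivelog all delete input;
}
```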
Thanks. -
Too many archived documents are suddenly extracted into BW
Dear All,
I am not person of Finance but I will try to provide necessary details here.
We have archiving in place in ECC. For a specific document, if I go to FB03, it shows the information, but with a message that 'Document is already archived'. The document is related to year 2011.
Now, suddenly, too many documents were extracted last week, spanning from year 2009 to 2011. When I check 'Document changes', it shows that the last change was made in 2011.
I want to know what possible change could cause these archived documents to be extracted into BW again.
or any pointers are also welcome.
Thanks,
Purvang

If you bought it from Apple, you are covered by one year of AppleCare, which includes three months of free phone support, for any problem. You should take full advantage of that. It sounds like you may have hardware issues (the audio jack), but a lot of your items sound like a matter of changing preferences or display settings - Apple can help with this, as can a local friendly Mac consultant.
For example, you mention that you get a "screen-wide mass of writing". Do you know what application you're opening the file in? Many Mac apps have full-screen mode, which sounds like what you describe, and you can get out of it by moving your cursor to the top right of the screen until you see the menu bar appear - there's a blue icon with two arrows, and clicking this will get you out of full screen mode.
QuickTime does not play all formats of video. Try an alternative video player such as VLC. (http://videolan.org/VLC)
Hope that helps somewhat.
Matt -
Find how many archive logs are generated and backup through RMAN
friends,
How can I find how many archive logs are generated between 2 consecutive backups through RMAN?
Your help is really appreciated.
Thanks.

Sorry, I misunderstood the question.
I think you are not asking how many logs have been generated since the last backup; my initial reply was the answer to that.
On 10gR2, the answer to your question is the query below. It groups your archivelog backup sessions and counts the logs in each session, which gives the number of archivelog files generated between two backup sessions:
select session_key, count(sequence#)
from   v$backup_archivelog_details
group  by session_key
order  by 1 desc;
Coskan Gundogar
http://coskan.wordpress.com -
Too Many Table logs in DBTABLOG, RSTBPDEL is taking too much time
Hi Experts,
In one of our CRM systems, the DBTABLOG table is logging one table that currently has 1 billion entries. The business doesn't want to switch off the logging at this moment, but the table is growing rapidly, by 42 GB per month. The RSTBPDEL program has been running for weeks to delete entries, with no control over the growth.
Can you please suggest any way to delete them quickly at first, so that my housekeeping job can then run daily and finish soon?
Regards,
Mohan.

Hello Mohan,
The DBTABLOG table does get large; the best option is to switch off logging. If that's not possible, increase the frequency of your delete job. Also explore one more alternative: have a look at the archiving object BC_DBLOGS. You could archive old records (in accordance with your customer's data retention policies) to reduce the size of the table.
Also, have a look at the following notes, they will advise you on how to improve the performance of your delete job:
Note 531923 - Audit Trail: Indexes on table DBTABLOG
Note 579980 - Table logs: Performance during access to DBTABLOG
Regards,
Siddhesh -
Too many archive files generated
Hi there,
I'm doing a load test that inserts 12 MB of data into a test table. Nobody is on the database except me. My online redo log size is 200 MB. Why did the database do two log switches when only 12 MB of data was inserted? My database version is 10.2.0.2.
Thanks.

A typical redo record has a minimum size of around 200 bytes. Before you worry about the data it contains, remember it has to hold details about the block being changed, the undo block being used to hold the undo for that change, and various pointers and other bits of structural information. You then have two redo records per row inserted - one for the table change and one for the index change. 2 * 200 * 1,000,000 gives you about 400 MB, which fits your observation of two log file switches with 200 MB log files.
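The estimate can be checked against the loading session's own counters; a sketch using the standard v$mystat and v$statname views, run in that session before and after the insert - the delta is the actual redo generated:

```sql
-- 'redo size' for the current session, in bytes.
select sn.name, ms.value
from   v$mystat   ms
join   v$statname sn on sn.statistic# = ms.statistic#
where  sn.name = 'redo size';
```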
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk -
RMAN BACKUP hangs up on archive logs
Hi,
In 9i on Linux, my RMAN backup script is:
RMAN> run {
2> allocate channel t1 type disk;
3> backup incremental level=0 format '/mnt/rman/MYDB/full_%d_%t_%s_%p' database;
4> sql 'alter system switch logfile';
5> backup format '/mnt/rman/MYDB/al_%d_%t_%s_%p'
6> archivelog all delete input;
7> backup format '/mnt/rman/MYDB/ctl_%d_%t_%s_%p' current controlfile;
8> }
It works well until:
backup format '/mnt/rman/MYDB/al_%d_%t_%s_%p' archivelog all delete input;
Here it hangs (maybe because there are many, many archive log files). What do you propose? How can we ask RMAN to back up only the archive logs from some recent date? How can we delete most of the ancient archive logs? Many times the RMAN backup failed, so archive logs were not deleted; now it is impossible to finish the RMAN backup. Many thanks for your help.

Hi,
I launched the following last night, but it is still waiting:
RMAN> crosscheck archivelog all;
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=110 devtype=DISK
What can I do? Is there any other way to tell RMAN that those archived logs are not available?
Many thanks.
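Once the crosscheck completes, a script along these lines would limit the backup to recent logs and prune the old ones; the time window is a placeholder, and the DELETE ... BACKED UP clause should be verified against your RMAN release before use:

```sql
run {
  -- mark archived logs that are no longer on disk as expired
  crosscheck archivelog all;
  delete noprompt expired archivelog all;
  -- back up only the last day's worth of logs
  backup format '/mnt/rman/MYDB/al_%d_%t_%s_%p'
    archivelog from time 'sysdate-1';
  -- remove logs that already have at least one backup on disk
  delete noprompt archivelog all backed up 1 times to device type disk;
}
```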