Find how many archive logs are generated and backed up through RMAN
Friends,
How can I find how many archive logs are generated between two consecutive RMAN backups?
Your help is really appreciated.
Thanks
Sorry, I misunderstood the question.
I think you are not asking how many logs have been generated since the last backup; my initial reply answered that.
The answer to your question, on 10g R2, is the query below. It groups your archivelog backup sessions and counts the logs in each session, which is the number of archivelog files generated between two backup sessions:
select session_key,count(sequence#) from V$BACKUP_ARCHIVELOG_DETAILS group by session_key order by 1 desc
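If V$BACKUP_ARCHIVELOG_DETAILS is not available on your release, a rough alternative is to count the logs archived since the last archivelog backup completed. This is only a sketch of the idea, using the standard V$ARCHIVED_LOG and V$BACKUP_SET dictionary views:

```sql
-- Count archived logs generated since the last archivelog backup set
-- ('L' in V$BACKUP_SET.BACKUP_TYPE denotes an archived-log backup).
select count(*)
from   v$archived_log
where  first_time > (select max(completion_time)
                     from   v$backup_set
                     where  backup_type = 'L');
```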
Coskan Gundogar
http://coskan.wordpress.com
Similar Messages
-
Too many archive logs getting generated on 11.1.0.7
I can see heavy archive log generation on the PROD instance although there is not much activity on PROD.
This is a fresh instance which went live a month ago, and archive logging was enabled only weeks ago, but
I can see about 20 to 23 GB of archive logs generated daily although the database is only around 90 GB in size.
I raised an SR; they told me the database is unable to purge statistics from the SYSAUX tablespace.
They asked me to run some queries and to run this:
exec dbms_stats.purge_stats(sysdate - 50);
which ran for long hours and then bailed out because of insufficient space,
although the retention policy is 31 days:
SQL> select DBMS_STATS.GET_STATS_HISTORY_RETENTION from dual;
GET_STATS_HISTORY_RETENTION
31
History is available for more than 90 days:
SQL> select dbms_stats.get_stats_history_availability from dual;
GET_STATS_HISTORY_AVAILABILITY
01-APR-13 11.00.07.250483000 AM +05:30
They asked me to apply patch 12683802, which I applied on the DEV instance,
but I can still see many archive logs being generated although there is no activity on DEV.
Now, when I run this script little by little:
exec dbms_stats.purge_stats(sysdate - 50);
it purges, but it takes ages and SYSAUX keeps filling up; the current SYSAUX on DEV has 3 datafiles of around 4000 MB each.
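One common workaround, purging the history one day at a time starting from the oldest day so each purge transaction stays small, can be sketched like this (a sketch only; it assumes the documented behavior of DBMS_STATS.PURGE_STATS and GET_STATS_HISTORY_AVAILABILITY, and that each purge actually advances the availability timestamp):

```sql
-- Sketch: purge optimizer statistics history one day at a time, oldest
-- first, so each DBMS_STATS.PURGE_STATS call deletes a small slice and
-- generates a bounded amount of redo/undo per transaction.
DECLARE
  l_oldest TIMESTAMP WITH TIME ZONE;
BEGIN
  LOOP
    l_oldest := DBMS_STATS.GET_STATS_HISTORY_AVAILABILITY;
    -- stop once only the 31-day retention window remains
    EXIT WHEN l_oldest >= SYSTIMESTAMP - INTERVAL '31' DAY;
    DBMS_STATS.PURGE_STATS(l_oldest + INTERVAL '1' DAY);
  END LOOP;
END;
/
```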
-- Before applying the patch
SQL> select trunc(first_time) on_date,
            thread# thread,
            min(sequence#) min_sequence,
            max(sequence#) max_sequence,
            max(sequence#) - min(sequence#) nos_archives,
            (max(sequence#) - min(sequence#)) * log_avg_mb req_space_mb
     from v$log_history,
          (select avg(bytes/1024/1024) log_avg_mb
             from v$log)
     group by trunc(first_time), thread#, log_avg_mb
     order by on_date
/
ON_DATE THREAD MIN_SEQUENCE MAX_SEQUENCE NOS_ARCHIVES REQ_SPACE_MB
24-JUN-13 1 1 3 2 2000
25-JUN-13 1 4 17 13 13000
26-JUN-13 1 18 30 12 12000
27-JUN-13 1 31 43 12 12000
28-JUN-13 1 44 51 7 7000
29-JUN-13 1 52 64 12 12000
30-JUN-13 1 65 77 12 12000
01-JUL-13 1 78 88 10 10000
-- After applying the patch
ON_DATE   THREAD MIN_SEQUENCE MAX_SEQUENCE NOS_ARCHIVES REQ_SPACE_MB
21-JUN-13      1            1            5            4         4000
22-JUN-13      1            6           20           14        14000
23-JUN-13      1           21           35           14        14000
24-JUN-13      1           36           85           49        49000
25-JUN-13      1           86          111           25        25000
26-JUN-13      1          112          127           15        15000
27-JUN-13      1          128          134            6         6000
28-JUN-13      1          135          143            8         8000
29-JUN-13      1          144          151            7         7000
30-JUN-13      1          152          158            6         6000
01-JUL-13      1          159          163            4         4000
The above before and after results are taken from TEST and DEV, which are cloned from the PROD instance; only DEV has the patch applied.
Here are the environment details:
EBS: 21.1.3
Database: 11.1.0.7
OS: RHEL 5.6
I am still not satisfied and want to know if any one of you has a solution for this.
Please help.
Zavi

Hi Amogh,
As support also asked me to run LogMiner, I have run it; here is the output.
I followed note ID 1504755.1.
------------------------------ LogMiner output for the archive logs of 02.07.13 on PROD ------------------------
-- following logs
Jul 2 10:59 archive_PROD_1_1446_807549584.arc
Jul 2 11:05 archive_PROD_1_1447_807549584.arc
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1446_807549584.arc', -
OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1447_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
SQL> select operation,seg_owner,seg_name,count(*) from v$logmnr_contents group by seg_owner,seg_name,operation;
OPERATION     SEG_OWNER  SEG_NAME                         COUNT(*)
INSERT        APPLSYS    FND_LOGINS                       47
UPDATE        PO         RCV_TRANSACTIONS_INTERFACE       2
INSERT        PO         RCV_TRANSACTIONS                 3
UNSUPPORTED   INV        MTL_SUPPLY                       6
DELETE        PO         RCV_SUPPLY                       3
UPDATE        CSI        CSI_ITEM_INSTANCES               3
UPDATE        APPLSYS    FND_CONC_RELEASE_CLASSES         17
INSERT        JA         JAI_RTP_POPULATE_T               3
INSERT        GL         GL_CODE_COMBINATIONS             1
UNSUPPORTED   JA         JAI_AP_TDS_INV_TAXES             7
UNSUPPORTED   AP         AP_INVOICE_LINES_ALL             8
DELETE        ZX         ZX_TRX_HEADERS_GT                4
INSERT        INV        MTL_ITEM_CATEGORIES              3
INSERT        XLA        XLA_AE_HEADERS_GT                3
UPDATE        XLA        XLA_AE_HEADERS_GT                3
UPDATE        ENI        DR$ENI_DEN_HRCHY_PAR_IM1$R       4
UPDATE        APPLSYS    FND_USER_DESKTOP_OBJECTS         10
INSERT        APPLSYS    FND_APPL_SESSIONS                3
UPDATE        XLA        XLA_TRANSFER_LOGS                2
INSERT        GL         GL_JE_HEADERS                    2
DELETE        GL         GL_INTERFACE_CONTROL             1
DELETE        GL         GL_INTERFACE                     3
INSERT        CE         CE_SECURITY_PROFILES_GT          1
INSERT        PA         PA_PROJECTS_FOR_ACCUM            8
INSERT        PA         PA_PJM_REQ_COMMITMENTS_TMP       1162
DELETE        PA         PA_PROJECT_ACCUM_COMMITMENTS     162
UPDATE        PA         PA_TXN_ACCUM                     1749
UPDATE        PA         PA_RESOURCE_LIST_ASSIGNMENTS     13
INSERT        ENI        ENI_OLTP_ITEM_STAR               1
START                                                     561
COMMIT                                                    934
INSERT        ICX        ICX_SESSION_ATTRIBUTES           45
INSERT        INV        MTL_SUPPLY                       8
UPDATE        INV        MTL_MATERIAL_TRANSACTIONS_TEMP   11
INSERT        CSI        CSI_TRANSACTIONS                 3
INSERT        CSI        CSI_I_VERSION_LABELS             1
INSERT        CSI        CSI_I_VERSION_LABELS_H           1
INSERT        JA         JAI_RTP_TRANS_T                  3
INSERT        JA         JAI_AP_INVOICE_LINES             1
UNSUPPORTED   AP         AP_INVOICE_DISTRIBUTIONS_ALL     9
DELETE        BOM        BOM_RESOURCE_CHANGES             2
ROLLBACK                                                  3
INSERT        XLA        XLA_TRANSACTION_ENTITIES,AP      2
INSERT        ENI        MLOG$_ENI_OLTP_ITEM_STAR         7
UNSUPPORTED   XLA        XLA_TRANSACTION_ENTITIES,AP      4
INSERT        XLA        XLA_DISTRIBUTION_LINKS,AP        6
UPDATE        QA         QA_CHARS                         1
UPDATE        MRP        MRP_SCHEDULE_DATES               7
UPDATE        ENI        ENI_DENORM_HIERARCHIES           4
UNSUPPORTED   SYS        SEG$                             1
INSERT        INV        MTL_TRANSACTION_ACCOUNTS         4
UPDATE        APPLSYS    FND_USER_PREFERENCES             5
DELETE        SYS        WRI$_OPTSTAT_HISTHEAD_HISTORY    235644
INSERT        BNE        BNE_DOC_USER_PARAMS              1
INSERT        GL         GL_INTERFACE                     6
INSERT        GL         GL_JE_LINES                      4
INSERT        GL         GL_JE_SEGMENT_VALUES             2
INSERT        GL         GL_IMPORT_REFERENCES             4
UPDATE        PA         PA_MAPPABLE_TXNS_TMP             3
UPDATE        PA         PA_PROJECT_ACCUM_COMMITMENTS     95
INSERT        INV        MLOG$_MTL_SYSTEM_ITEMS_B         1
INSERT        INV        MTL_SYSTEM_ITEMS_TL              1
UPDATE        APPLSYS    FND_CONFLICTS_DOMAIN             6040
INSERT        APPLSYS    MO_GLOB_ORG_ACCESS_TMP           97
UPDATE        APPLSYS    FND_CONCURRENT_QUEUES            117
UNSUPPORTED   JA         JAI_RCV_LINES                    3
INSERT        INV        MLOG$_MTL_MATERIAL_TRANSAC       21
INSERT        CSI        CSI_ITEM_INSTANCES               1
INSERT        JA         JAI_AP_TDS_INV_TAXES             2
INSERT        BOM        BOM_RES_INSTANCE_CHANGES         2
INSERT        PA         PA_TXN_INTERFACE_AUDIT_ALL       4
INSERT        PA         PA_EXPENDITURE_COMMENTS          2
DELETE        PA         PA_TRANSACTION_XFACE_CTRL_ALL    1
INSERT        AP         AP_LINE_TEMP_GT                  3
UPDATE        ENI        ENI_OLTP_ITEM_STAR               3
INSERT        XLA        XLA_EVENTS_GT                    3
UPDATE        XLA        XLA_EVENTS_GT                    3
UPDATE        XLA        XLA_AE_HEADERS,AP                8
DELETE        XLA        XLA_VALIDATION_LINES_GT          2
INSERT        INV        MTL_TXN_COST_DET_INTERFACE       2
UPDATE        INV        MTL_CST_TXN_COST_DETAILS         2
DELETE        INV        MTL_TXN_COST_DET_INTERFACE       2
UNSUPPORTED   SYS        HISTGRM$                         16
UPDATE        INV        MTL_MATERIAL_TRANSACTIONS        6
INSERT        XLA        XLA_EVENTS,CST                   3
DELETE        BNE        BNE_DOC_ACTIONS                  1
INSERT        GL         GL_INTERFACE_CONTROL             2
INSERT        XLA        XLA_TB_WORK_UNITS                1
UPDATE        ZX         ZX_TRX_HEADERS_GT                67
DDL           SYS        SYS_TEMP_0FD9D6611_EC264F91      1
DELETE        PA         PA_TXN_ACCUM_DETAILS             1327
UNSUPPORTED   PA         PA_TXN_ACCUM                     523
INSERT        PA         PA_MAPPABLE_TXNS_TMP             2
DELETE        PA         PA_RESOURCE_LIST_PARENTS_TMP     2
DELETE        PA         PA_PROJECTS_FOR_ACCUM            17
UPDATE        SYS        SEQ$                             224
DELETE        APPLSYS    WF_DEFERRED                      5
UPDATE        APPLSYS    FND_CONC_PROG_ONSITE_INFO        51
INSERT        APPLSYS    FND_CONCURRENT_REQUESTS          46
INSERT        PO         PO_SESSION_GT                    7
INSERT        INV        MTL_MATERIAL_TRANSACTIONS_TEMP   5
INSERT        INV        MTL_ONHAND_QUANTITIES_DETAIL     3
UPDATE        SYS        SYS_FBA_BARRIERSCN               2
UPDATE        MRP        MRP_MESSAGES_TMP                 29
DELETE        MRP        MRP_MESSAGES_TMP                 29
UPDATE        AP         AP_INVOICES_ALL                  15
UPDATE        JA         JAI_AP_TDS_INV_TAXES             43
INSERT        BOM        MLOG$_BOM_RESOURCE_CHANGES       4
UNSUPPORTED   PA         PA_TRANSACTION_INTERFACE_ALL     2
INSERT        PA         PA_EXPENDITURE_GROUPS_ALL        1
INSERT        PA         PA_EXPENDITURE_ITEMS_ALL         2
INSERT        AP         AP_INVOICE_LINES_ALL             3
INSERT        APPLSYS    FND_LOG_MESSAGES                 29
INSERT        ZX         ZX_ITM_DISTRIBUTIONS_GT          71
UNSUPPORTED   ZX         ZX_TRX_HEADERS_GT                5
INSERT        XLA        XLA_EVENTS,AP                    3
UPDATE        AP         AP_PREPAY_HISTORY_ALL            3
UPDATE        AP         AP_PREPAY_APP_DISTS              1
INSERT        INV        MLOG$_MTL_ITEM_CATEGORIES        3
INSERT        XLA        XLA_AE_LINES,AP                  6
DELETE        XLA        XLA_AE_HEADERS_GT                1
UPDATE        SYS        HIST_HEAD$                       76
INSERT        SYS        WRI$_OPTSTAT_IND_HISTORY         10
UNSUPPORTED   ENI        DR$ENI_DEN_HRCHY_PAR_IM1$R       3
UPDATE        INV        MTL_CST_ACTUAL_COST_DETAILS      3
INSERT        XLA        XLA_TRANSACTION_ENTITIES,CST     3
INSERT        ICX        ICX_TRANSACTIONS                 1
UPDATE        GL         GL_INTERFACE                     4
UPDATE        PO         PO_REQ_DISTRIBUTIONS_ALL         66
UNSUPPORTED   PO         PO_REQUISITION_HEADERS_ALL       1
UNSUPPORTED   PO         PO_REQUISITION_LINES_ALL         66
DELETE        PA         PA_COMMITMENT_TXNS               1461
UNSUPPORTED   PA         PA_MAPPABLE_TXNS_TMP             3
INSERT        PA         PA_PROJECT_ACCUM_COMMITMENTS     197
INSERT        BOM        CST_ITEM_COSTS                   1
INSERT        JA         JAI_RCV_TRANSACTIONS             3
UPDATE        PO         RCV_SUPPLY                       3
UPDATE        INV        MTL_SUPPLY                       11
DELETE        INV        MTL_SUPPLY                       11
UPDATE        PO         PO_DISTRIBUTIONS_ALL             3
UPDATE        PO         PO_LINE_LOCATIONS_ALL            9
INSERT        INV        MLOG$_MTL_ONHAND_QUANTITIE       3
DELETE        PO         RCV_TRANSACTIONS_INTERFACE       3
INSERT        MRP        MRP_MESSAGES_TMP                 22
INSERT        AP         AP_INVOICE_DISTRIBUTIONS_ALL     3
UPDATE        AP         AP_INVOICE_DISTRIBUTIONS_ALL     30
INSERT        PA         PA_EXPENDITURES_ALL              1
INSERT        ZX         ZX_TRX_HEADERS_GT                8
UNSUPPORTED   ZX         ZX_LINES_DET_FACTORS             566
DELETE        AP         AP_LINE_TEMP_GT                  9
UPDATE        AP         AP_PAYMENT_SCHEDULES_ALL         5
UPDATE        ICX        ICX_SESSIONS                     12
UNSUPPORTED   AP         AP_INVOICES_ALL                  3
INSERT        JA         JAI_RCV_JOURNAL_ENTRIES          4
UNSUPPORTED   INV        MTL_TRANSACTIONS_INTERFACE       24
UPDATE        MRP        MRP_RECOMMENDATIONS              14
INSERT        ENI        MLOG$_ENI_DENORM_HIERARCHI       16
INSERT        SYS        WRI$_OPTSTAT_HISTGRM_HISTORY     8
INSERT        APPLSYS    WF_CONTROL                       1
UPDATE        BNE        BNE_DOC_USER_PARAMS              1
DELETE        APPLSYS    WF_CONTROL                       1
UPDATE        GL         GL_JE_BATCHES                    2
INSERT        GL         GL_POSTING_INTERIM               1
UNSUPPORTED   JA         JAI_PO_OSP_LINES                 1
INSERT        AP         AP_INVOICES_ALL                  2
INSERT        PA         PA_COMMITMENT_TXNS_TMP           160
UPDATE        PA         PA_COMMITMENT_TXNS               1391
DELETE        PA         PA_MAPPABLE_TXNS_TMP             3
INTERNAL                                                  4906910
UPDATE        APPLSYS    FND_CONCURRENT_REQUESTS          153
UPDATE        JA         JAI_RCV_LINES                    3
INSERT        INV        MLOG$_MTL_SUPPLY                 30
INSERT        CSI        CSI_I_PARTIES_H                  1
UNSUPPORTED   JA         JAI_RCV_TRANSACTIONS             7
UPDATE        JA         JAI_RCV_TRANSACTIONS             24
INSERT        MRP        MRP_RECOMMENDATIONS              8
UNSUPPORTED   AP         AP_PAYMENT_SCHEDULES_ALL         4
UPDATE        PA         PA_TRANSACTION_INTERFACE_ALL     4
INSERT        XLA        XLA_ACCT_PROG_EVENTS_GT          4
INSERT        XLA        XLA_AE_LINES_GT                  12
UNSUPPORTED   XLA        XLA_AE_LINES_GT                  30
INSERT        XLA        XLA_AE_HEADERS,AP                3
INSERT        XLA        XLA_VALIDATION_LINES_GT          3
DELETE        XLA        XLA_EVENTS_GT                    1
UNSUPPORTED   XLA        XLA_BAL_CONCURRENCY_CONTROL      2
DELETE        XLA        XLA_BAL_CONCURRENCY_CONTROL      2
INSERT        APPLSYS    FND_CONC_REQUEST_ARGUMENTS       2
UPDATE        QA         QA_RESULTS                       2
INSERT        MRP        MRP_SCHEDULE_CONSUMPTIONS        21
INSERT        ENI        ENI_DENORM_HRCHY_PARENTS         4
UNSUPPORTED   ENI        ENI_DENORM_HRCHY_PARENTS         4
UPDATE        APPLSYS    FND_USER                         6
INSERT        XLA        XLA_TRANSFER_LOGS                2
UPDATE        GL         GL_JE_LINES                      4
UPDATE        APPLSYS    FND_NODES                        3
UNSUPPORTED   PO         PO_REQ_DISTRIBUTIONS_ALL         66
DELETE        ZX         ZX_ITM_DISTRIBUTIONS_GT          66
INSERT        AP         AP_PAYMENT_SCHEDULES_ALL         2
INSERT        IBY        IBY_DOCS_PAYABLE_GT              2
INSERT        AP         AP_DOC_SEQUENCE_AUDIT            1
INSERT        PA         PA_COMMITMENT_TXNS               1380
INSERT        PA         PA_RESOURCE_ACCUM_DETAILS        3
INSERT        ENI        DR$ENI_DEN_HRCHY_PAR_IM1$I       19
UPDATE        PA         PA_PROJECT_ACCUM_ACTUALS         12
INSERT        EGO        EGO_ITEM_TEXT_TL                 1
INSERT        INV        MTL_ITEM_REVISIONS_TL            1
UNSUPPORTED                                               1720
INSERT        APPLSYS    FND_CONC_PP_ACTIONS              47
UNSUPPORTED   APPLSYS    FND_CONCURRENT_PROCESSES         97
DELETE        APPLSYS    MO_GLOB_ORG_ACCESS_TMP           11
INSERT        MRP        MRP_RELIEF_INTERFACE             16
UPDATE        PO         PO_REQUISITION_LINES_ALL         1
INSERT        INV        MTL_MATERIAL_TRANSACTIONS        5
INSERT        CSI        CSI_I_PARTIES                    1
UNSUPPORTED   SYS        DBMS_LOCK_ALLOCATED              15
UPDATE        PO         PO_SESSION_GT                    4
INSERT        MRP        MRP_SCHEDULE_DATES               4
INSERT        MRP        MLOG$_MRP_SCHEDULE_DATES         15
DELETE        MRP        MRP_SCHEDULE_DATES               4
DELETE        BOM        BOM_RES_INSTANCE_CHANGES         2
INSERT        PA         PA_COST_DISTRIBUTION_LINES_ALL   2
INSERT        ZX         ZX_TRANSACTION_LINES_GT          138
UNSUPPORTED   CSI        CSI_ITEM_INSTANCES               2
UNSUPPORTED   XLA        XLA_EVENTS,AP                    6
INSERT        XLA        XLA_AE_SEGMENT_VALUES            13
UNSUPPORTED   SYS        TAB$                             15
INSERT        SYS        WRI$_OPTSTAT_HISTHEAD_HISTORY    81562
DDL           SYS        SYS_TEMP_0FD9D6610_EC264F91      1
UNSUPPORTED   SYS        IND$                             10
DELETE        APPLSYS    FND_CONC_PP_ACTIONS              7
INSERT        BNE        BNE_DOC_ACTIONS                  1
INSERT        GL         GL_JE_BATCHES                    1
INSERT        PA         PA_TXN_ACCUM_DETAILS             1386
INSERT        PA         PA_TXN_ACCUM                     2
INSERT        PA         PA_RESOURCE_LIST_PARENTS_TMP     2
UPDATE        CTXSYS     DR$INDEX                         1
INSERT        INV        MTL_PENDING_ITEM_STATUS          1
DELETE        SYS        WRI$_OPTSTAT_IND_HISTORY         2130287
UNSUPPORTED   APPLSYS    FND_CONCURRENT_REQUESTS          49
INSERT        PO         RCV_TRANSACTIONS_INTERFACE       3
UPDATE        APPLSYS    FND_CONCURRENT_PROCESSES         40
UPDATE        PO         RCV_TRANSACTIONS                 3
UNSUPPORTED   PO         PO_HEADERS_ALL                   3
INSERT        INV        MTL_CST_TXN_COST_DETAILS         5
INSERT        CSI        CSI_ITEM_INSTANCES_H             3
DELETE        INV        MTL_MATERIAL_TRANSACTIONS_TEMP   5
UPDATE        SYS        DBMS_LOCK_ALLOCATED              15
DELETE        MRP        MRP_RECOMMENDATIONS              8
UPDATE        AP         AP_INVOICE_LINES_ALL             11
INSERT        BOM        MLOG$_BOM_RES_INSTANCE_CHA       4
INSERT        BOM        BOM_RESOURCE_CHANGES             2
UPDATE        PA         PA_TRANSACTION_XFACE_CTRL_ALL    1
INSERT        AP         AP_PREPAY_HISTORY_ALL            1
INSERT        AP         AP_PREPAY_APP_DISTS              1
INSERT        XLA        XLA_EVT_CLASS_ORDERS_GT          4
UPDATE        XLA        XLA_EVENTS,AP                    6
INSERT        ENI        ENI_DENORM_HIERARCHIES           6
INSERT        SYS        WRI$_OPTSTAT_TAB_HISTORY         5
UNSUPPORTED   XLA        XLA_AE_HEADERS_GT                3
UPDATE        BNE        BNE_DOC_ACTIONS                  1
UPDATE        GL         GL_JE_HEADERS                    2
DELETE        XLA        XLA_TRANSFER_LOGS                1
UNSUPPORTED   ZX         ZX_TRANSACTION_LINES_GT          264
INSERT        PA         PA_PJM_PO_COMMITMENTS_TMP        378
UNSUPPORTED   INV        MTL_SYSTEM_ITEMS_B               2
INSERT        INV        MTL_ITEM_REVISIONS_B             1
266 rows selected.
------------------------------ LogMiner output for the archive logs of 03.07.13 on PROD ------------------------
--following log files
Jul 3 10:56 archive_PROD_1_1469_807549584.arc
Jul 3 10:58 archive_PROD_1_1470_807549584.arc
Jul 3 11:01 archive_PROD_1_1471_807549584.arc
Jul 3 11:03 archive_PROD_1_1472_807549584.arc
Jul 3 11:04 archive_PROD_1_1473_807549584.arc
Jul 3 11:05 archive_PROD_1_1474_807549584.arc
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1469_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1470_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1471_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1472_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1473_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/archive_prod/prod/archive_PROD_1_1474_807549584.arc', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
SQL> select operation,seg_owner,seg_name,count(*) from v$logmnr_contents group by seg_owner,seg_name,operation;
OPERATION     SEG_OWNER  SEG_NAME                         COUNT(*)
DELETE        SYS        CCOL$                            6
DELETE        SYS        SEG$                             3
INSERT        APPLSYS    FND_LOGINS                       108
UPDATE        APPLSYS    FND_CONC_RELEASE_CLASSES         33
UPDATE        PO         RCV_TRANSACTIONS_INTERFACE       4
INSERT        APPLSYS    FND_LOG_TRANSACTION_CONTEXT      1
INSERT        PO         RCV_TRANSACTIONS                 2
UNSUPPORTED   INV        MTL_SUPPLY                       4
INSERT        PO         RCV_RECEIVING_SUB_LEDGER         4
INSERT        GL         GL_JE_HEADERS                    12
DELETE        GL         GL_INTERFACE                     49
DELETE        GL         GL_INTERFACE_CONTROL             8
INSERT        JA         JAI_RTP_POPULATE_T               1
INSERT        JA         JAI_RGM_TRM_SCHEDULES_T          4
UPDATE        JA         JAI_RCV_CENVAT_CLAIMS            2
INSERT        APPLSYS    FND_APPL_SESSIONS                9
UPDATE        APPLSYS    WF_NOTIFICATIONS                 4
UPDATE        PO         PO_HEADERS_INTERFACE             10
INSERT        PO         PO_DISTRIBUTIONS_INTERFACE       6
INSERT        JA         JAI_PO_LINE_LOCATIONS            18
UNSUPPORTED   PO         PO_LINE_LOCATIONS_ALL            2
UNSUPPORTED   PO         PO_LINES_ALL                     8
DELETE        ZX         ZX_TRX_HEADERS_GT                14
INSERT        PA         PA_STRUCTURES_TASKS_TMP          440
INSERT        SYS        SEQ$                             2
INSERT        BNE        BNE_DOC_CREATION_PARAMS          9
UPDATE        ENI        DR$ENI_DEN_HRCHY_PAR_IM1$R       6
INSERT        SYS        MON_MODS$                        9
UPDATE        PA         PA_PROJECTS_ALL                  4
UPDATE        WIP        WIP_MOVE_TXN_INTERFACE           1
INSERT        CE         CE_SECURITY_PROFILES_GT          8
UPDATE        WIP        WIP_PERIOD_BALANCES              1
INSERT        INV        MTL_MWB_GTMP                     14
UPDATE        SYS        WRI$_SCH_CONTROL                 1
DELETE        SYS        OBJ$                             51
DDL           SYS        WRH$_ROWCACHE_SUMMARY            1
DDL           SYS        WRH$_ACTIVE_SESSION_HISTORY      1
DDL           SYS        WRH$_SYS_TIME_MODEL              1
INSERT        IBY        IBY_DOCS_PAYABLE_ALL             6
UPDATE        IBY        IBY_DOCS_PAYABLE_ALL             36
INSERT        GL         GL_CODE_COMBINATIONS             3
INSERT        XLA        XLA_AE_HEADERS_GT                12
-
Too many archived logs when trying a backup
Hello all,
I'm having a bit of trouble running a backup script on an Oracle instance (10g1, on Solaris).
As a normal DBA practice, I guess the backup should be scheduled and run from the very beginning of using a DB. Sometimes, from various reasons, this does not happen. In this case, before running the first (full) backup of the DB, there might be tens or hundreds of archived logs waiting to be backed up, and the flash recovery area might just not be able to handle all of them (at least that's how I see it, I might be wrong, I'm still fighting my way through Oracle's Backup and Recovery issues). In that case, a backup script containing the following RMAN sequence:
run {
  allocate channel ch1 type disk;
  backup
    incremental level = 0
    database;
  release channel ch1;
}
fails with the error message ORA-19804 (cannot reclaim disk space from the DB_RECOVERY_FILE_DEST_SIZE limit).
After this, the archived logs that were backed up are marked as obsolete, and I can delete them from RMAN with "delete obsolete". The script I'm using for backup runs fine afterwards. Before attempting a backup, however, no logs are reported as obsolete.
The retention policy is the default "redundancy 1" and the archivelog deletion policy is none.
How could I prevent the backup script from crashing? If I'm changing the archivelog deletion policy, will I be able to restore the DB properly from my backup set? (as earlier logs will be deleted before making a backup)
Thank you for any suggestions, your help is very much appreciated,
Adrian

I have the impression that you are not using scheduled backups and are letting the client decide when to take a backup. This is not good: in this case you won't be able to get rid of this error, because you never know when the next (or even the first) backup will occur, and without a backup you can't even think of deleting your logs. If you are not using tape drives, then your archived logs and backups will both sit in the recovery area, and you need enough space available to hold both of them. I would schedule your backups and use the DELETE INPUT clause of RMAN's BACKUP command to delete the archived logs after backing them up, and also delete the obsolete backups according to your recovery window. This is the proper way to manage recovery area space. You really need to tune the recovery area space by testing the amount of redo generation, the backup size, the retention policy and so on, and then come up with a recovery area size suitable for your environment to hold all of the required files for the required time (recovery window).
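The scheduled-backup approach described above can be sketched as an RMAN script. This is only an illustration (the channel name and options are assumptions, not the poster's actual configuration):

```
run {
  allocate channel ch1 type disk;
  # back up the database and all archived logs, deleting each
  # archived log from disk once it is safely inside a backup set
  backup incremental level 0 database
    plus archivelog delete input;
  # remove backups no longer needed under the retention policy
  delete noprompt obsolete;
  release channel ch1;
}
```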
Daljit Singh -
How to find how many standard reports there are under each module
Hi experts,
I have to prepare a POC for a new project. I have a question: how can I find the number of standard reports present under each module, like SD, MM, etc.? Please help me, anybody.
Thanks and Regards
pedamarla

You can find that using help.sap.com
-
Create procedure is generating too many archive logs
Hi
The following procedure was run on one of our databases and it hung since there were too many archive logs being generated.
What would be the answer? The db must remain in archivelog mode.
I understand the NOLOGGING concept, but as far as I know it applies to creating tables, views, indexes and tablespaces; this script creates a procedure.
CREATE OR REPLACE PROCEDURE APPS.Dfc_Payroll_Dw_Prc(Errbuf OUT VARCHAR2, Retcode OUT NUMBER
,P_GRE NUMBER
,P_SDATE VARCHAR2
,P_EDATE VARCHAR2
,P_ssn VARCHAR2
) IS
CURSOR MainCsr IS
SELECT DISTINCT
PPF.NATIONAL_IDENTIFIER SSN
,ppf.full_name FULL_NAME
,ppa.effective_date Pay_date
,ppa.DATE_EARNED period_end
,pet.ELEMENT_NAME
,SUM(TO_NUMBER(prv.result_value)) VALOR
,PET.ELEMENT_INFORMATION_CATEGORY
,PET.CLASSIFICATION_ID
,PET.ELEMENT_INFORMATION1
,pet.ELEMENT_TYPE_ID
,paa.tax_unit_id
,PAf.ASSIGNMENT_ID ASSG_ID
,paf.ORGANIZATION_ID
FROM
pay_element_classifications pec
, pay_element_types_f pet
, pay_input_values_f piv
, pay_run_result_values prv
, pay_run_results prr
, pay_assignment_actions paa
, pay_payroll_actions ppa
, APPS.pay_all_payrolls_f pap
,Per_Assignments_f paf
,per_people_f ppf
WHERE
ppa.effective_date BETWEEN TO_DATE(p_sdate) AND TO_DATE(p_edate)
AND ppa.payroll_id = pap.payroll_id
AND paa.tax_unit_id = NVL(p_GRE, paa.tax_unit_id)
AND ppa.payroll_action_id = paa.payroll_action_id
AND paa.action_status = 'C'
AND ppa.action_type IN ('Q', 'R', 'V', 'B', 'I')
AND ppa.action_status = 'C'
--AND PEC.CLASSIFICATION_NAME IN ('Earnings','Alien/Expat Earnings','Supplemental Earnings','Imputed Earnings','Non-payroll Payments')
AND paa.assignment_action_id = prr.assignment_action_id
AND prr.run_result_id = prv.run_result_id
AND prv.input_value_id = piv.input_value_id
AND piv.name = 'Pay Value'
AND piv.element_type_id = pet.element_type_id
AND pet.element_type_id = prr.element_type_id
AND pet.classification_id = pec.classification_id
AND pec.non_payments_flag = 'N'
AND prv.result_value <> '0'
--AND( PET.ELEMENT_INFORMATION_CATEGORY LIKE '%EARNINGS'
-- OR PET.element_type_id IN (1425, 1428, 1438, 1441, 1444, 1443) )
AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN PET.EFFECTIVE_START_DATE AND PET.EFFECTIVE_END_DATE
AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN PIV.EFFECTIVE_START_DATE AND PIV.EFFECTIVE_END_DATE --dcc
AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN Pap.EFFECTIVE_START_DATE AND Pap.EFFECTIVE_END_DATE --dcc
AND paf.ASSIGNMENT_ID = paa.ASSIGNMENT_ID
AND ppf.NATIONAL_IDENTIFIER = NVL(p_ssn, ppf.NATIONAL_IDENTIFIER)
------------------------------------------------------------------TO get emp.
AND ppf.person_id = paf.person_id
AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN ppf.EFFECTIVE_START_DATE AND ppf.EFFECTIVE_END_DATE
------------------------------------------------------------------TO get emp. ASSIGNMENT
--AND paf.assignment_status_type_id NOT IN (7,3)
AND NVL(PPA.DATE_EARNED, PPA.EFFECTIVE_DATE) BETWEEN paf.effective_start_date AND paf.effective_end_date
GROUP BY PPF.NATIONAL_IDENTIFIER
,ppf.full_name
,ppa.effective_date
,ppa.DATE_EARNED
,pet.ELEMENT_NAME
,PET.ELEMENT_INFORMATION_CATEGORY
,PET.CLASSIFICATION_ID
,PET.ELEMENT_INFORMATION1
,pet.ELEMENT_TYPE_ID
,paa.tax_unit_id
,PAF.ASSIGNMENT_ID
,paf.ORGANIZATION_ID
BEGIN
DELETE cust.DFC_PAYROLL_DW
WHERE PAY_DATE BETWEEN TO_DATE(p_sdate) AND TO_DATE(p_edate)
AND tax_unit_id = NVL(p_GRE, tax_unit_id)
AND ssn = NVL(p_ssn, ssn);
COMMIT;
FOR V_REC IN MainCsr LOOP
INSERT INTO cust.DFC_PAYROLL_DW(SSN, FULL_NAME, PAY_DATE, PERIOD_END, ELEMENT_NAME, ELEMENT_INFORMATION_CATEGORY, CLASSIFICATION_ID, ELEMENT_INFORMATION1, VALOR, TAX_UNIT_ID, ASSG_ID,ELEMENT_TYPE_ID,ORGANIZATION_ID)
VALUES(V_REC.SSN,V_REC.FULL_NAME,v_rec.PAY_DATE,V_REC.PERIOD_END,V_REC.ELEMENT_NAME,V_REC.ELEMENT_INFORMATION_CATEGORY, V_REC.CLASSIFICATION_ID, V_REC.ELEMENT_INFORMATION1, V_REC.VALOR,V_REC.TAX_UNIT_ID,V_REC.ASSG_ID, v_rec.ELEMENT_TYPE_ID, v_rec.ORGANIZATION_ID);
COMMIT;
END LOOP;
END ;
So, how could I assist our developer with this, so that she can run it again without it generating a ton of logs?
Thanks
Oracle 9.2.0.5
AIX 5.2

The amount of redo generated is a direct function of how much data is changing. If you insert 'x' number of rows, you are going to generate 'y' MB of redo. If your procedure is destined to insert 1000 rows, then it is destined to create a certain amount of redo. Period.
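For what it's worth, the per-row cursor loop in the procedure above could be collapsed into a single set-based statement with one commit. This is a hypothetical sketch only; the placeholder comment stands for the same SELECT used by the MainCsr cursor, and the redo volume would be roughly the same, but the per-row commit overhead disappears:

```sql
-- Sketch: one set-based INSERT ... SELECT with a single commit,
-- instead of a cursor loop that commits after every row.
BEGIN
  INSERT INTO cust.DFC_PAYROLL_DW
         (SSN, FULL_NAME, PAY_DATE, PERIOD_END, ELEMENT_NAME,
          ELEMENT_INFORMATION_CATEGORY, CLASSIFICATION_ID,
          ELEMENT_INFORMATION1, VALOR, TAX_UNIT_ID, ASSG_ID,
          ELEMENT_TYPE_ID, ORGANIZATION_ID)
  SELECT ssn, full_name, pay_date, period_end, element_name,
         element_information_category, classification_id,
         element_information1, valor, tax_unit_id, assg_id,
         element_type_id, organization_id
  FROM   (/* the same SELECT used by the MainCsr cursor above */);
  COMMIT;
END;
/
```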
I would question the *performance* of the procedure shown: using a cursor loop with a commit after every row is going to be a drag on performance, but that doesn't change the fact that 'x' inserts will always generate 'y' redo. -
How to find out which archived logs needed to recover a hot backup?
I'm using Oracle 11gR2 (11.2.0.1.0).
I have backed up a database when it is online using the following backup script through RMAN
connect target /
run {
allocate channel d1 type disk;
backup
incremental level=0 cumulative
filesperset 4
format '/san/u01/app/backup/DB_%d_%T_%u_%c.rman'
database
}

The backup set contains the backup of the datafiles and the control file. I have copied all the backup pieces to another server, where I will restore/recover the database, but I don't know which archived logs are needed in order to restore/recover the database to a consistent state.
I have not deleted any archived log.
How can I find out which archived logs are needed to recover the hot backup to a consistent state? Can this be done by querying V$BACKUP_DATAFILE and V$ARCHIVED_LOG? If yes, which columns should I query?
Thanks for any help.

A few ways:
1a. Get the timestamps when the BACKUP ... DATABASE began and ended.
1b. Review the alert.log of the database that was backed up.
1c. From the alert.log, identify the first Archivelog that was generated after the beginning of the BACKUP ... DATABASE and the first Archivelog that was generated after the end of the BACKUP ... DATABASE.
1d. These (from 1c) are the minimal Archivelogs that you need to RECOVER with. You can choose to apply additional Archivelogs that were generated at the source database to continue to "roll forward".
2a. Do a RESTORE DATABASE alone.
2b. Query V$DATAFILE on the restored database for the lowest CHECKPOINT_CHANGE# and CHECKPOINT_TIME. Also query for the highest CHECKPOINT_CHANGE# and CHECKPOINT_TIME.
2c. Go back to the source database and query V$ARCHIVED_LOG (FIRST_CHANGE#) to identify the first Archivelog that has a higher SCN (FIRST_CHANGE#) than the lowest CHECKPOINT_CHANGE# from 2b above. Also query for the first Archivelog that has a higher SCN (FIRST_CHANGE#) than the highest CHECKPOINT_CHANGE# from 2b above.
2d. These (from 2c) are the minimal Archivelogs that you need to RECOVER with.
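Steps 2b and 2c might look like this in SQL. This is a sketch; &low_scn and &high_scn are SQL*Plus substitution variables you fill in from the first query's output:

```sql
-- Step 2b, on the restored database: the SCN range recovery must span.
select min(checkpoint_change#) low_scn,
       max(checkpoint_change#) high_scn
from   v$datafile;

-- Step 2c, on the source database: the archived logs whose SCN ranges
-- cover that span (from the first log past low_scn through the first
-- log past high_scn).
select sequence#, first_change#, next_change#
from   v$archived_log
where  next_change#  > &low_scn
and    first_change# <= &high_scn
order  by sequence#;
```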
(Why do you need to query V$ARCHIVED_LOG at the source? If you RESTORE a controlfile backup that was generated after the first Archivelog switch following the end of the BACKUP ... DATABASE, you would be able to query V$ARCHIVED_LOG at the restored database as well. That is why it is important to force an archivelog (log switch) after a BACKUP ... DATABASE and then back up the controlfile after this -- i.e. last. That way, the controlfile you have restored to the new server has all the information needed.)
3. RESTORE DATABASE PREVIEW in RMAN if you have the archivelogs and subsequent controlfile in the backup itself !
Hemant K Chitale -
How to find how many Orders are reserved
I need to find how many order lines are reserved. If you can send me the SQL that will be great.
Thanks

Reservation lines are stored in the table MTL_DEMAND.
-
How to find how many users are logged in to the R/3 system and when they logged in
HI
I need to find out who else is logged in to the R/3 system.
How can I find how many users are logged in to the R/3 system, and when they logged in?

Hi,
You can also use transaction code AL08 to see the list of users logged on with the transactions they are working on.
Regards,
Venkat -
Is there a way to find how many users are logging on to my site?
Is there a way to find how many users are logging on to my site at a specific time?
Thanks in advance..

Is it possible to use an EJB 3.1 Singleton bean for this too (instead of the application context)?
Or will this create a bottleneck because of the standard write lock? It wouldn't be thread-safe to provide a read lock on a user_counter increment method, would it?
How can I turn off archive logs being generated by the system? (urgent)
Dear all,
How can I turn off archive logs being generated by the system?
Best Regards,
Amy

Sorry, not to you @kamran, this is to the OP. I accidentally pressed the reply button on your post.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Total System Global Area 171966464 bytes
Fixed Size 787988 bytes
Variable Size 145750508 bytes
Database Buffers 25165824 bytes
Redo Buffers 262144 bytes
Database mounted.
SQL> alter database noarchivelog;

Database altered.

Khurram
Is there any way to find a log of the e-mails sent from iCloud, to analyse how many emails have been sent and deleted within a day from Sent items?
Thanks Winston,
I can count the Sent items, but let me try to explain why I asked this question.
I had trouble sending e-mails from iCloud on 22/2, and when I looked in my Sent items there were no emails.
But the messages I emailed were received, and only a few recipients didn't get the email. So I wanted a log of the outgoing messages, with the subject, the recipient's email ID, and whether or not there was an attachment.
How and where can I get this information, apart from Sent items, draft messages, and the undelivered outflow?
Regards
Sarfaraz -
How to determine which archive logs are needed in flashback.
Hi,
Let's assume I have archive logs 1,2,3,4, then a "backup database plus archivelogs" in RMAN, and then archive logs 5+6. If I want to flashback my database to a point immediately after the backup, how do I determine which archive logs are needed?
I would assume I'd only need archive logs 5 and/or 6 since I did a full backup plus archivelogs and the database would have been checkpointed at that time. I'd also assume archive logs 1,2,3,4 would be obsolete as they would have been flushed to the datafiles in the checkpoint.
Are my assumptions correct? If not what queries can I run to determine what files are needed for a flashback using the latest checkpointed datafiles?
Thanks.

Thanks for the explanation; let me be more specific about my problem.
I am trying to do a flashback on a failed primary database; the only reason I want to do a flashback is that Data Guard uses the flashback command to try to synchronize the failed database. Specifically, Data Guard is trying to run:
FLASHBACK DATABASE TO SCN 865984
But it fails, if I run it manually then I get:
SQL> FLASHBACK DATABASE TO SCN 865984;
FLASHBACK DATABASE TO SCN 865984
ERROR at line 1:
ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
ORA-38761: redo log sequence 5 in thread 1, incarnation 3 could not be accessed
Looking at the last checkpoint I see:
CHECKPOINT_CHANGE#
865857
Also looking at the archive logs:
RECID STAMP THREAD# SEQUENCE# FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# RESETLOGS_CHANGE# RESETLOGS
25 766838550 1 1 863888 10-NOV-11 863892 863888 10-NOV-11
26 766838867 1 2 863892 10-NOV-11 864133 863888 10-NOV-11
27 766839225 1 3 864133 10-NOV-11 864289 863888 10-NOV-11
28 766839340 1 4 864289 10-NOV-11 864336 863888 10-NOV-11
29 766840698 1 5 864336 10-NOV-11 865640 863888 10-NOV-11
30 766841128 1 6 865640 10-NOV-11 865833 863888 10-NOV-11
31 766841168 1 7 865833 10-NOV-11 865857 863888 10-NOV-11
How can I determine which archive logs are needed by a flashback command? I deleted all archive logs with an SCN less than the checkpoint#. I can restore them from backup, but I am trying to figure out how to query what is required for a flashback. Maybe this ties in with the point that flashback has nothing to do with the backups of datafiles or the checkpoints? -
Find how many users are connected in the Oracle Server
Hi,
I am using Oracle 10g. My question is: is it possible to find how many users are connected to the Oracle server? We have one server and many client machines which connect to Oracle.
And one more question: meanwhile, I want to take a backup of a database that a client is connected to. Will this cause any problem for the client machine accessing the server? And how do I take the backup from the server machine; are there any commands for this?
Thank you...!

Hi there.
If you run:
select count(*) from v$session where username is not null;
you'll get the number of users connected to the Oracle server.
And yes, you can run a backup while users are connected to the database you are backing up.
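To break that count down per user and client machine, a variation on the same V$SESSION query (a sketch) is:

```sql
-- Sessions per database user and client machine, busiest first.
select username, machine, count(*) sessions
from   v$session
where  username is not null
group  by username, machine
order  by sessions desc;
```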
cheers -
How many span ports are supported on Sup2T and Catalyst 6880?
Hi,
I did not find any information concerning this.
It would be great if anybody could send me a link to information on how many SPAN ports are supported on the new Cat68 series.
Regards
Thorsten Steffen

For Sup2T:
Local SPAN, RSPAN, and ERSPAN Session Limits:

Total sessions: 80

Local and source sessions (Local SPAN, RSPAN source, ERSPAN source):
  Ingress, egress, or both: 2
  Local SPAN egress-only:   14

Destination sessions:
  RSPAN:  64
  ERSPAN: 23
http://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst6500/ios/15-1SY/config_guide/sup2T/15_1_sy_swcg_2T/span_rspan_erspan.html
Regards,
Naveen
****Rate if it is helpful**** -
I have a presentation to make to a customer and need to know how many educational apps there are for the iPad/iPod, more specifically special education apps.
Thanks

Keep in mind as well that not all apps are "created equal". Some are really good, and others are not very good. You might also want to look at some education sites for recommendations. An example of a general iPads-in-education forum is
http://ipadeducators.ning.com/
but I imagine some exist for special education needs as well. Our healthcare organization is using some apps for working with autistic children and finds them very helpful.