Large number of objects - LogMiner scalability?
We have been consolidating several departmental databases into one large RAC database. Moreover, in our test databases we are cloning test cells (for example, an application schema gets cloned hundreds of times so that our users can test independently of each other).
Our acceptance test database now has about 500,000 objects in it, and we have production databases with over 2 million objects.
We are using Streams. At the moment we use a local capture, but our architecture calls for using downstream capture soon. We are concerned about the resources required for the LogMiner data dictionary build.
We are currently not calling DBMS_LOGMNR_D.BUILD directly, but rather indirectly through DBMS_STREAMS_ADM.add_table_rule. We only want to replicate about 30 tables.
We were surprised to find that LogMiner always builds a complete data dictionary covering every object in the database (tables, partitions, columns, users, and so on).
Apparently there is no way to create a partial data dictionary, even by calling DBMS_LOGMNR_D.BUILD directly.
Recently it took more than 2 hours just to build the LogMiner data dictionary on a busy system, and we ended up with an ORA-01280 error, so we had to start all over again.
We recently increased our redo log size and I haven't had a chance to test since the change. Our redo logs were only 4 MB; we increased them to 64 MB to reduce checkpoint activity. This will probably help.
Has anybody else encountered slow LogMiner dictionary builds?
Any advice?
Thank you in advance.
Jocelyn
Hello Jocelyn,
In a Streams environment, the LogMiner dictionary build is done with the DBMS_CAPTURE_ADM.BUILD procedure; you should not use DBMS_LOGMNR_D.BUILD for this.
In a Streams environment, DBMS_STREAMS_ADM.ADD_TABLE_RULE dumps the dictionary only the first time you call it, because that is when the capture process is created and a dictionary dump is taken along with it. The LogMiner dictionary holds information about all objects in the database (tables, partitions, columns, users, and so on), so the time the dump takes depends on the number of objects: when the number of objects is very high, the data dictionary itself is large.
A redo log size of 64 MB is still too small for a production system; you should consider a redo log size of at least 200 MB.
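Redo logs cannot be resized in place; one common approach is to add new, larger groups and drop the old ones once they become inactive. A rough sketch (group numbers and file paths are illustrative only; adjust for your environment):

```sql
-- Add new, larger redo log groups (paths and group numbers assumed).
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/redo04.log') SIZE 200M;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/redo05.log') SIZE 200M;
ALTER DATABASE ADD LOGFILE GROUP 6 ('/u01/oradata/redo06.log') SIZE 200M;

-- Switch and checkpoint until the old groups show INACTIVE in V$LOG,
-- then drop them one by one.
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;
ALTER DATABASE DROP LOGFILE GROUP 1;  -- repeat for each old group once INACTIVE
```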
You can run a complete LogMiner dictionary build using DBMS_CAPTURE_ADM.BUILD and then create a capture process using the FIRST_SCN returned by the BUILD procedure.
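A minimal sketch of that sequence (the queue and capture names are illustrative assumptions, not part of the original thread):

```sql
SET SERVEROUTPUT ON
DECLARE
  l_first_scn NUMBER;
BEGIN
  -- Build the LogMiner dictionary; BUILD returns the SCN it is valid from.
  DBMS_CAPTURE_ADM.BUILD(first_scn => l_first_scn);
  DBMS_OUTPUT.PUT_LINE('FIRST_SCN = ' || l_first_scn);

  -- Create the capture process anchored at that SCN
  -- (queue and capture names are assumed here).
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name   => 'strmadmin.streams_queue',
    capture_name => 'my_capture',
    first_scn    => l_first_scn);
END;
/
```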
Let me know if you have more questions.
Thanks,
Rijesh
Similar Messages
-
Large number of event Log entries: connection open...
Hi,
I am seeing a large number of entries in the event log of the type:
21:49:17, 11 Mar.
IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [81.154.101.160:51163] CLOSED/TIME_WAIT ppp0 NAPT)
21:49:15, 11 Mar.
IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [81.154.101.160:41820] ppp0 NAPT)
Are these anything I should be concerned about? I have tried a couple of forum and Google searches, but I don't quite know where to start beyond pasting the first bit of the message. I haven't found anything obvious from those searches.
DHCP table lists 192.168.1.78 as the desktop PC on which I'm writing this.
Please could you point me in the direction of any resources that will help me to work out if I should be worried about this?
A slightly longer extract is shown below:
21:49:17, 11 Mar.
IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [81.154.101.160:51163] CLOSED/TIME_WAIT ppp0 NAPT)
21:49:15, 11 Mar.
IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [81.154.101.160:41820] ppp0 NAPT)
21:49:15, 11 Mar.
IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [81.154.101.160:51163] CLOSED/SYN_SENT ppp0 NAPT)
21:49:11, 11 Mar.
IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [213.205.231.156:51027] TIME_WAIT/CLOSED ppp0 NAPT)
21:49:03, 11 Mar.
IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [178.190.63.75:55535] CLOSED/SYN_SENT ppp0 NAPT)
21:49:00, 11 Mar.
IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [2.96.4.85:23939] TIME_WAIT/CLOSED ppp0 NAPT)
21:48:59, 11 Mar.
IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [78.144.143.222:21617] CLOSED/TIME_WAIT ppp0 NAPT)
21:48:58, 11 Mar.
IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [41.218.222.34:28188] ppp0 NAPT)
21:48:57, 11 Mar.
IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [41.218.222.34:28288] CLOSED/SYN_SENT ppp0 NAPT)
21:48:57, 11 Mar.
IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [86.132.123.255:18048] ppp0 NAPT)
21:48:57, 11 Mar.
IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [86.132.123.255:54199] CLOSED/SYN_SENT ppp0 NAPT)
21:48:55, 11 Mar.
IN: ACCEPT [57] Connection closed (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [86.144.91.49:60704] ppp0 NAPT)
21:48:55, 11 Mar.
IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [80.3.100.12:50875] TIME_WAIT/CLOSED ppp0 NAPT)
21:48:45, 11 Mar.
IN: ACCEPT [57] Connection closed (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [78.150.251.216:57656] ppp0 NAPT)
21:48:39, 11 Mar.
IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [78.150.251.216:56975] CLOSED/SYN_SENT ppp0 NAPT)
21:48:29, 11 Mar.
IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [79.99.145.46:8368] CLOSED/SYN_SENT ppp0 NAPT)
21:48:27, 11 Mar.
IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [90.192.249.173:45250] ppp0 NAPT)
21:48:16, 11 Mar.
IN: ACCEPT [57] Connection closed (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [212.17.96.246:62447] ppp0 NAPT)
21:48:10, 11 Mar.
IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [82.16.198.117:49942] TIME_WAIT/CLOSED ppp0 NAPT)
21:48:08, 11 Mar.
IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [213.205.231.156:51027] CLOSED/SYN_SENT ppp0 NAPT)
21:48:04, 11 Mar.
IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [89.153.251.9:53729] TIME_WAIT/CLOSED ppp0 NAPT)
21:47:54, 11 Mar.
IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [80.3.100.12:37150] ppp0 NAPT)
Hi,
Thank you for the response. I think, but can't remember for sure, that UPnP was already switched off when I captured that log. Anyway, even if it wasn't, it is now. So I will see what gets captured in my logs.
I've just had to restart my Home Hub because of other connection issues and I notice that the first few entries are also odd:
19:35:16, 12 Mar.
OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49250->173.194.78.125:5222 on ppp0)
19:34:45, 12 Mar.
OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
19:34:31, 12 Mar.
OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49250->173.194.78.125:5222 on ppp0)
19:34:31, 12 Mar.
OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
19:34:04, 12 Mar.
OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49462->199.59.149.232:443 on ppp0)
19:33:46, 12 Mar.
OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49250->173.194.78.125:5222 on ppp0)
19:33:46, 12 Mar.
IN: BLOCK [12] Spoofing protection (IGMP 86.164.178.188->224.0.0.22 on ppp0)
19:33:45, 12 Mar.
OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
19:33:39, 12 Mar.
OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49462->199.59.149.232:443 on ppp0)
19:33:33, 12 Mar.
OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49463->199.59.149.232:443 on ppp0)
19:33:29, 12 Mar.
IN: BLOCK [15] Default policy (UDP 111.252.36.217:26328->86.164.178.188:12708 on ppp0)
19:33:16, 12 Mar.
IN: BLOCK [15] Default policy (TCP 193.113.4.153:80->86.164.178.188:49572 on ppp0)
19:33:14, 12 Mar.
OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
19:33:14, 12 Mar.
IN: BLOCK [15] Default policy (TCP 66.193.112.93:443->86.164.178.188:44266 on ppp0)
19:33:14, 12 Mar.
( 164.240000) CWMP: session completed successfully
19:33:13, 12 Mar.
( 163.700000) CWMP: HTTP authentication success from https://pbthdm.bt.mo
19:33:05, 12 Mar.
BLOCKED 106 more packets (because of Default policy)
19:33:05, 12 Mar.
OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49462->199.59.149.232:443 on ppp0)
19:33:05, 12 Mar.
IN: BLOCK [15] Default policy (TCP 213.1.72.209:80->86.164.178.188:49547 on ppp0)
19:33:05, 12 Mar.
BLOCKED 94 more packets (because of Default policy)
19:33:05, 12 Mar.
OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
19:33:05, 12 Mar.
IN: BLOCK [15] Default policy (TCP 199.59.148.87:443->86.164.178.188:49531 on ppp0)
19:33:05, 12 Mar.
OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49250->173.194.78.125:5222 on ppp0)
19:33:04, 12 Mar.
OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
19:33:04, 12 Mar.
( 155.110000) CWMP: Server URL: https://pbthdm.bt.mo; Connecting as user: ACS username
19:33:04, 12 Mar.
( 155.090000) CWMP: Session start now. Event code(s): '1 BOOT,4 VALUE CHANGE'
19:32:59, 12 Mar.
OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
19:32:54, 12 Mar.
OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49462->199.59.149.232:443 on ppp0)
19:32:53, 12 Mar.
OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
19:32:52, 12 Mar.
OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49463->199.59.149.232:443 on ppp0)
19:32:51, 12 Mar.
OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
19:32:48, 12 Mar.
OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
19:32:47, 12 Mar.
OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
19:32:46, 12 Mar.
OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
19:32:46, 12 Mar.
BLOCKED 4 more packets (because of First packet is Invalid)
19:32:45, 12 Mar.
OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49461->199.59.149.232:443 on ppp0)
19:32:44, 12 Mar.
OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
19:32:44, 12 Mar.
BLOCKED 1 more packets (because of First packet is Invalid)
19:32:43, 12 Mar.
OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49398->193.113.4.153:80 on ppp0)
19:32:42, 12 Mar.
OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
19:32:42, 12 Mar.
BLOCKED 3 more packets (because of First packet is Invalid)
19:32:42, 12 Mar.
OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49277->119.254.30.32:443 on ppp0)
19:32:41, 12 Mar.
OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
19:32:41, 12 Mar.
BLOCKED 1 more packets (because of First packet is Invalid)
19:32:41, 12 Mar.
OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
19:32:38, 12 Mar.
OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49280->119.254.30.32:443 on ppp0)
19:32:36, 12 Mar.
OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
19:32:34, 12 Mar.
OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49463->199.59.149.232:443 on ppp0)
19:32:30, 12 Mar.
IN: BLOCK [15] Default policy (TCP 66.193.112.93:443->86.164.178.188:47022 on ppp0)
19:32:30, 12 Mar.
( 120.790000) CWMP: session closed due to error: WGET TLS error
19:32:30, 12 Mar.
( 120.140000) NTP synchronization success!
19:32:30, 12 Mar.
BLOCKED 1 more packets (because of Default policy)
19:32:29, 12 Mar.
OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49458->217.41.223.234:80 on ppp0)
19:32:28, 12 Mar.
OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49280->119.254.30.32:443 on ppp0)
19:32:26, 12 Mar.
( 116.030000) NTP synchronization start
19:32:25, 12 Mar.
OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49442->74.125.141.91:443 on ppp0)
19:32:25, 12 Mar.
OUT: BLOCK [15] Default policy (TCP 192.168.1.78:49310->204.154.94.81:443 on ppp0)
19:32:25, 12 Mar.
IN: BLOCK [15] Default policy (TCP 88.221.94.116:80->86.164.178.188:49863 on ppp0) -
Database generating a large number of archive logs
Oracle 11g
window server 2008 R2
My database had been working fine, but since last week I have noticed that it is generating a large number of archive logs.
The database size is 30 GB.
Only one tablespace is 16 GB; the other tablespaces are no more than 2 GB each.
I cannot figure out why it is generating so many archive logs. Can anyone help me figure it out?
The only changes I made in the previous week were:
Drop index
create index
Create a new table from an existing table
Nothing else.
Hi
As you say, the workload has increased. See when the number of log switches goes high and take an AWR or Statspack report for that period, then check the DML operations. Use the query below to check the log switches:
spool c:\log_hist.txt
SET PAGESIZE 90
SET LINESIZE 150
set heading on
column "00:00" format 9999
column "01:00" format 9999
column "02:00" format 9999
column "03:00" format 9999
column "04:00" format 9999
column "05:00" format 9999
column "06:00" format 9999
column "07:00" format 9999
column "08:00" format 9999
column "09:00" format 9999
column "10:00" format 9999
column "11:00" format 9999
column "12:00" format 9999
column "13:00" format 9999
column "14:00" format 9999
column "15:00" format 9999
column "16:00" format 9999
column "17:00" format 9999
column "18:00" format 9999
column "19:00" format 9999
column "20:00" format 9999
column "21:00" format 9999
column "22:00" format 9999
column "23:00" format 9999
SELECT * FROM (
SELECT * FROM (
SELECT TO_CHAR(FIRST_TIME, 'DD/MM') AS "DAY"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '00', 1, 0), '99')) "00:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '01', 1, 0), '99')) "01:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '02', 1, 0), '99')) "02:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '03', 1, 0), '99')) "03:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '04', 1, 0), '99')) "04:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '05', 1, 0), '99')) "05:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '06', 1, 0), '99')) "06:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '07', 1, 0), '99')) "07:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '08', 1, 0), '99')) "08:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '09', 1, 0), '99')) "09:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '10', 1, 0), '99')) "10:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '11', 1, 0), '99')) "11:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '12', 1, 0), '99')) "12:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '13', 1, 0), '99')) "13:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '14', 1, 0), '99')) "14:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '15', 1, 0), '99')) "15:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '16', 1, 0), '99')) "16:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '17', 1, 0), '99')) "17:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '18', 1, 0), '99')) "18:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '19', 1, 0), '99')) "19:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '20', 1, 0), '99')) "20:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '21', 1, 0), '99')) "21:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '22', 1, 0), '99')) "22:00"
, SUM(TO_NUMBER(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '23', 1, 0), '99')) "23:00"
FROM V$LOG_HISTORY
WHERE extract(year FROM FIRST_TIME) = extract(year FROM sysdate)
GROUP BY TO_CHAR(FIRST_TIME, 'DD/MM')
) ORDER BY TO_DATE(extract(year FROM sysdate) || DAY, 'YYYY DD/MM') DESC
) WHERE ROWNUM <8;
spool off
One common mistake is leaving debugging enabled. Check whether any debugging is enabled in the application code (for example, inserting a row for every record processed, for logging or support purposes).
Regards
Anand. -
I have a large number of PDF documents stored on a Windows PC that I would like to access on my iPad mini, even when not connected to a network. Does anyone know a solution, e.g. maybe a card reader and an app?
Those devices have mixed results depending on the application. If the application does not support the device, you are out of luck. There is no file manager in iOS to let you move files between folders, because there are no folders you can access; an app has to specifically support a storage location to access files.
It's not that bad. The app has to be able to accept data from another app. The latest memory dongles come with an iOS app to access the external storage; you then transfer the PDF from the dongle's app to your app. If your app can accept data from Dropbox, it will be able to accept data from the memory dongle's app.
Other solutions:
== "GoodReaderUSB is a practical and useful application whose main purpose is to help users transfer files and folders from their mobile device to PC effortlessly. It helps them to move and backup important data from their Apple device via a USB cable."
http://www.softpedia.com/get/IPOD-TOOLS/Multimedia-IPOD-tools/GoodReaderUSB.shtml
== "iExplorer's disk mounting features allow you to use your iPhone, iPod or iPad like a USB flash drive."
http://www.macroplant.com/iexplorer/ -
Using LogMiner to find the reason for extremely large archive logs
Hello everyone,
I have an Oracle 10g RAC database that sometimes generates an extremely large number of archive logs. The database is in ARCHIVELOG mode.
The usual volume of archive logs per day after compression is about 5 GB; sometimes it spikes to 15 GB and I cannot understand why.
I am looking at gathering statistics on the inflated redo volume via LogMiner.
Looking at the structure of V$LOGMNR_CONTENTS, there are columns with promising names such as REDO_LENGTH, REDO_OFFSET, UNDO_LENGTH, and UNDO_OFFSET.
However, all of these columns are deprecated: http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14237/dynviews_1154.htm
Is there a way of identifying the operations that generate large amounts of redo?
The LogMiner documentation has some example sessions, but none show how to generate statistics on the connection between redo volume and SQL statements.
I see nothing that can help me in the following views:
V$LOGMNR_DICTIONARY, V$LOGMNR_DICTIONARY_LOAD, V$LOGMNR_LATCH,V$LOGMNR_LOGS, V$LOGMNR_PARAMETERS, V$LOGMNR_PROCESS, V$LOGMNR_SESSION
These views plus the following columns sound somewhat promising:
V$LOGMNR_CONTENTS -> RBABLK, RBABYTE, UBAFIL, UBABLK, UBAREC, UBASQN, ABS_FILE#, REL_FILE#, DATA_BLK#, DATA_OBJ#, DATA_OBJD#
V$LOGMNR_STATS -> NAME , VALUE
However, I found nothing in the documentation on how to use them (especially not in the Database Reference or Database Utilities guides, the main documents I looked into). What should I read? Any strategies or ideas?
Kind regards:
al_shopov
To find sessions generating lots of redo, you can use either of the following methods. Both methods examine the amount of undo generated: when a transaction generates undo, it automatically generates redo as well.
The methods are:
1) Query V$SESS_IO. This view contains the column BLOCK_CHANGES, which indicates how many blocks have been changed by the session. High values indicate a session generating lots of redo.
The query you can use is:
SELECT s.sid, s.serial#, s.username, s.program,
i.block_changes
FROM v$session s, v$sess_io i
WHERE s.sid = i.sid
ORDER BY 5 desc, 1, 2, 3, 4;
Run the query multiple times and examine the delta between each occurrence
of BLOCK_CHANGES. Large deltas indicate high redo generation by the session.
2) Query V$TRANSACTION. This view contains information about the amount of undo blocks and undo records used by the transaction (the USED_UBLK and USED_UREC columns).
The query you can use is:
SELECT s.sid, s.serial#, s.username, s.program,
t.used_ublk, t.used_urec
FROM v$session s, v$transaction t
WHERE s.taddr = t.addr
ORDER BY 5 desc, 6 desc, 1, 2, 3, 4;
Run the query multiple times and examine the delta between each occurrence
of USED_UBLK and USED_UREC. Large deltas indicate high redo generation by
the session.
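As a complement to the session-level views above, you can also aggregate V$LOGMNR_CONTENTS itself after mining one of the inflated logs, to see which objects and operations dominate the redo. A rough sketch (it assumes a LogMiner session has already been started against the log files in question):

```sql
-- Count mined redo records per object and operation; the busiest
-- combinations point at the source of the redo volume.
SELECT seg_owner, seg_name, operation, COUNT(*) AS redo_records
FROM   v$logmnr_contents
GROUP  BY seg_owner, seg_name, operation
ORDER  BY redo_records DESC;
```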
hth
Kezie -
How can I delete all my junk mail, or a large number of emails, in one go?
Please can someone tell me if I can delete all the mail in my junk folder, or a large number of emails (100+), in one go? I'm so fed up with spending 5-10 minutes selecting each email and deleting it one by one!
Please help, or Apple, update ASAP. Even old-school phones have a delete-all button!
Thanks :)
Lulu6094 wrote:
Even old school phones have a delete all button!
But the iPad mail app does not have one. You must select them one at a time.
http://www.apple.com/feedback/ipad.html -
Not getting SCN details in LogMiner
Oracle 11g
Windows 7
Hi DBA's,
I am not getting the SCN details in LogMiner. Below are the steps I followed:
SQL> show parameter utl_file_dir
NAME TYPE VALUE
utl_file_dir string
SQL> select name,issys_modifiable from v$parameter where name ='utl_file_dir';
NAME ISSYS_MOD
utl_file_dir FALSE
SQL> alter system set utl_file_dir='G:\oracle11g' scope=spfile;
System altered.
SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 1071333376 bytes
Fixed Size 1334380 bytes
Variable Size 436208532 bytes
Database Buffers 629145600 bytes
Redo Buffers 4644864 bytes
Database mounted.
Database opened.
SQL> show parameter utl_file_dir
NAME TYPE VALUE
utl_file_dir string G:\oracle11g\logminer_dir
SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
SUPPLEME
NO
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
Database altered.
SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
SUPPLEME
YES
SQL> /* Minimum supplemental logging is now enabled. */
SQL>
SQL> alter system switch logfile;
System altered.
SQL> select g.group# , g.status , m.member
2 from v$log g, v$logfile m
3 where g.group# = m.group#
4 and g.status = 'CURRENT';
GROUP# STATUS
MEMBER
1 CURRENT
G:\ORACLE11G\ORADATA\MY11G\REDO01.LOG
SQL> /* start fresh with a new log file which is the group 1.*/
SQL> create table scott.test_logmnr
2 (id number,
3 name varchar2(10)
4 );
Table created.
SQL> BEGIN
2 DBMS_LOGMNR_D.build (
3 dictionary_filename => 'logminer_dic.ora',
4 dictionary_location => 'G:\oracle11g');
5 END;
6 /
PL/SQL procedure successfully completed.
SQL> /*
SQL> This has recorded the dictionary information into the file
SQL> "G:\oracle11g\logminer_dic.ora".
SQL> */
SQL> conn scott/
Connected.
SQL> insert into test_logmnr values (1,'TEST1');
1 row created.
SQL> insert into test_logmnr values (2,'TEST2');
1 row created.
SQL> commit;
Commit complete.
SQL> select * from test_logmnr;
ID NAME
1 TEST1
2 TEST2
SQL> update test_logmnr set name = 'TEST';
2 rows updated.
SQL> select * from test_logmnr;
ID NAME
1 TEST
2 TEST
SQL> commit;
Commit complete.
SQL> delete from test_logmnr;
2 rows deleted.
SQL> commit;
Commit complete.
SQL> select * from test_logmnr;
no rows selected
SQL> conn / as sysdba
Connected.
SQL> select g.group# , g.status , m.member
2 from v$log g, v$logfile m
3 where g.group# = m.group#
4 and g.status = 'CURRENT';
GROUP# STATUS MEMBER
1 CURRENT G:\ORACLE11G\ORADATA\MY11G\REDO01.LOG
SQL> begin
2 dbms_logmnr.add_logfile
3 (
4 logfilename => 'G:\oracle11g\oradata\my11g\REDO01.LOG',
5 options => dbms_logmnr.new
6 );
7 end;
8 /
PL/SQL procedure successfully completed.
SQL> select filename from v$logmnr_logs;
FILENAME
G:\oracle11g\oradata\my11g\REDO01.LOG
SQL> BEGIN
2 -- Start using all logs
3 DBMS_LOGMNR.start_logmnr (
4 dictfilename => 'G:\oracle11g\logminer_dic.ora');
5
6 END;
7 /
PL/SQL procedure successfully completed.
SQL> DROP TABLE myLogAnalysis;
Table dropped.
SQL> create table myLogAnalysis
2 as
3 select * from v$logmnr_contents;
Table created.
SQL> begin
2 DBMS_LOGMNR.END_LOGMNR();
3 end;
4 /
PL/SQL procedure successfully completed.
SQL> set lines 1000
SQL> set pages 500
SQL> column scn format a6
SQL> column username format a8
SQL> column seg_name format a11
SQL> column sql_redo format a33
SQL> column sql_undo format a33
SQL> select scn , seg_name , sql_redo , sql_undo
2 from myLogAnalysis
3 where username = 'SCOTT'
4 AND (seg_owner is null OR seg_owner = 'SCOTT');
SCN SEG_NAME
SQL_REDO
SQL_UNDO
set transaction read write;
commit;
set transaction read write;
########## TEST_LOGMNR insert into "SCOTT"."TEST_LOGMNR" delete from "SCOTT"."TEST_LOGMNR"
("ID","NAME") values ('1','TEST1' where "ID" = '1' and "NAME" = 'T
EST1' and ROWID = 'AAARjeAAEAAAAD
PAAA';
########## TEST_LOGMNR insert into "SCOTT"."TEST_LOGMNR" delete from "SCOTT"."TEST_LOGMNR"
("ID","NAME") values ('2','TEST2' where "ID" = '2' and "NAME" = 'T
EST2' and ROWID = 'AAARjeAAEAAAAD
PAAB';
commit;
set transaction read write;
########## TEST_LOGMNR update "SCOTT"."TEST_LOGMNR" set update "SCOTT"."TEST_LOGMNR" set
"NAME" = 'TEST' where "NAME" = 'T "NAME" = 'TEST1' where "NAME" = '
EST1' and ROWID = 'AAARjeAAEAAAAD TEST' and ROWID = 'AAARjeAAEAAAAD
PAAA';
PAAA';
########## TEST_LOGMNR update "SCOTT"."TEST_LOGMNR" set update "SCOTT"."TEST_LOGMNR" set
"NAME" = 'TEST' where "NAME" = 'T "NAME" = 'TEST2' where "NAME" = '
EST2' and ROWID = 'AAARjeAAEAAAAD TEST' and ROWID = 'AAARjeAAEAAAAD
Kindly type
Desc v$logmnr_contents
Please notice that SCN is a *number* column, not VARCHAR2.
By using FORMAT A6 you are forcing Oracle to display a number that is too big as a character string; hence the ##.
Sybrand Bakker
Senior Oracle DBA -
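In other words, the fix is to give the SCN column a numeric format wide enough for the value, for example (a sketch, reusing the myLogAnalysis table from the session above):

```sql
-- SCN is a NUMBER; use a numeric format instead of a character one.
column scn format 999999999999
select scn, seg_name, sql_redo, sql_undo
from   myLogAnalysis
where  username = 'SCOTT'
and    (seg_owner is null or seg_owner = 'SCOTT');
```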
LogMiner: end-of-file on communication channel
Hi,
I'm trying to use LogMiner, but when I perform a select from the V$LOGMNR_CONTENTS view such as
select operation from v$logmnr_contents where username = 'FRED'
I get ORA-03113: end-of-file on communication channel.
The trace files give no information except the very unhelpful 'internal error'.
Has anyone had this problem? Is it possible to read the archive logs without LogMiner? I really need to read the logs because someone updated the wrong data in the database and I need to recover it.
Thanks in advance,
steve.
Hi Joel,
Here is SGA information:
select * from v$sgastat where name = 'free memory';
POOL NAME BYTES
shared pool free memory 75509528
large pool free memory 16777216
java pool free memory 83886080
Thank you for your time,
Katya -
Best way to delete a large number of records without interfering with scheduled tlog backups
I've inherited a system with multiple databases, with DB and tlog backups running on schedules. There is a list of tables that need a lot of records purged from them. What would be a good approach to deleting the old records?
I've been digging through old posts and reading best practices, but I'm still not sure of the best way to attack it.
Approach #1
A one-time delete that does everything: delete all the old records in batches of, say, 50,000 at a time.
After each pass through all the tables for that DB, execute a tlog backup.
Approach #2
Create a job that does a similar process as above, except don't loop; only run the batch once. Schedule the job to start on, say, the half hour, assuming the tlog backups run every hour.
Note:
Some of these (well, most) are going to have relations on them.
Hi shiftbit,
According to your description, I have changed the type of this question to a discussion; that way more experts will focus on this issue and assist you. When deleting a large number of records from tables, you can use bulk (batched) deletions so that the transaction log does not grow and run out of disk space. If you can take the table offline for maintenance, a complete reorganization is always best, because it does the delete and places the table back into a pristine state.
For more information about deleting a large number of records without affecting the transaction log:
http://www.virtualobjectives.com.au/sqlserver/deleting_records_from_a_large_table.htm
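A minimal sketch of the batched-delete pattern in T-SQL (table name, column, batch size, and retention cutoff are all illustrative assumptions; adjust them to your schema and to the cadence of your tlog backups):

```sql
-- Delete old rows in batches so each transaction stays small and
-- scheduled tlog backups can reclaim log space between batches.
DECLARE @batch INT = 50000;
WHILE 1 = 1
BEGIN
    DELETE TOP (@batch) FROM dbo.MyBigTable
    WHERE  CreatedDate < DATEADD(YEAR, -2, GETDATE());

    IF @@ROWCOUNT < @batch BREAK;  -- last (partial) batch done

    -- Optional pause so log backups and other work can proceed.
    WAITFOR DELAY '00:00:05';
END;
```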
Hope it can help.
Regards,
Sofiya Li
Sofiya Li
TechNet Community Support -
Best practice for handling data for a large number of indicators
I'm looking for suggestions or recommendations for how to best handle a UI with a "large" number of indicators. By large I mean enough to make the block diagram quite large and ugly after the data processing for each indicator is added. The data must be "unpacked" and then decoded, e.g., booleans, offset-binary bit fields, etc. The indicators are updated once per second. I am leaning towards a method that worked well for me previously: binding network shared variables to each indicator, then using several sub-VIs to process each particular piece of data and write to the appropriate variables.
I was curious what others have done in similar circumstances.
Bill
“A child of five could understand this. Send someone to fetch a child of five.”
― Groucho Marx
Solved!
Go to Solution.
I can certainly feel your pain.
Note that's really what is going on in that PNG. You can see the Action Engine responsible for updating the display to the far right.
In my own defence: the FP concept was presented to the client's customer before they had a person familiar with LabVIEW identified, so I worked it this way through no choice of my own. I knew it would get ugly before I walked in the door and chose to meet the challenge head-on anyway. Defer Panel Updates was my very good friend. The sensors these objects represent were constrained to pass info via a single ZigBee network, so I had the benefit of fairly low data rates as well, but even changing view (yes, there is a display mode that swaps what information is displayed for each sensor) updated fast enough that the user still got a responsive GUI.
(The GUI did scale poorly though! That is a lot of wires! I was grateful to Jack for the idea to make Align and Distribute work on wires.)
Jeff -
LogMiner doesn't show all transactions on a table
I'm playing a little with LogMiner on an Oracle 11gR2 32-bit CentOS Linux install, but it looks like it's not showing me all DML on my test table. Am I doing something wrong?
Hi, here's my test case:
- Session #1, create table and insert first row:
SQL> create table stolf.test_table (
col1 number,
col2 varchar(10),
col3 varchar(10),
col4 varchar(10));
2 3 4 5
Table created.
SQL> insert into stolf.test_table (col1, col2, col3, col4) values ( 0, 20100305, 0, 0);
1 row created.
SQL> commit;
SQL> select t.ora_rowscn, t.* from stolf.test_table t;
ORA_ROWSCN COL1 COL2 COL3 COL4
1363624 0 20100305 0 0
- Execute shell script to insert a thousand lines into table:
for i in `seq 1 1000`; do
sqlplus -S stolf/<passwd><<-EOF
insert into stolf.test_table (col1, col2, col3, col4) values ( ${i}, 20100429, ${i}, ${i} );
commit;
EOF
done
- Session #1, switch logfiles:
SQL> alter system switch logfile;
System altered.
SQL> alter system switch logfile;
System altered.
SQL> alter system switch logfile;
System altered.
- Session #2, start LogMiner with CONTINUOUS_MINE on, startscn = the first row's ora_rowscn, endscn = right now. The select on v$logmnr_contents should return at least a thousand rows, but it returns only three:
BEGIN
SYS.DBMS_LOGMNR.START_LOGMNR(STARTSCN=>1363624, ENDSCN=>timestamp_to_scn(sysdate), OPTIONS => sys.DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + sys.DBMS_LOGMNR.COMMITTED_DATA_ONLY + SYS.DBMS_LOGMNR.CONTINUOUS_MINE);
END;
SQL> select SCN, SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS where SQL_REDO IS NOT NULL AND seg_owner = 'STOLF';
SCN
SQL_REDO
SQL_UNDO
1365941
insert into "STOLF"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('378','20100429','378','378');
delete from "STOLF"."TEST_TABLE" where "COL1" = '378' and "COL2" = '20100429' and "COL3" = '378' and "COL4" = '378' and ROWID = 'AAASOHAAEAAAATfAAB';
1367335
insert into "STOLF"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('608','20100429','608','608');
delete from "STOLF"."TEST_TABLE" where "COL1" = '608' and "COL2" = '20100429' and "COL3" = '608' and "COL4" = '608' and ROWID = 'AAASOHAAEAAAATfAAm';
1368832
insert into "STOLF"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('849','20100429','849','849');
delete from "STOLF"."TEST_TABLE" where "COL1" = '849' and "COL2" = '20100429' and "COL3" = '849' and "COL4" = '849' and ROWID = 'AAASOHAAEAAAATbAAA';
Enable supplemental logging.
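For reference, minimal database-level supplemental logging can be enabled like this (a sketch assuming a SYSDBA session; not part of the original reply):

```sql
-- Enable minimal database-level supplemental logging so LogMiner can
-- reconstruct complete SQL_REDO/SQL_UNDO for row changes:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Confirm it is on (should report YES):
SELECT supplemental_log_data_min FROM v$database;
```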
Please see below,
SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.
Total System Global Area 422670336 bytes
Fixed Size 1300352 bytes
Variable Size 306186368 bytes
Database Buffers 109051904 bytes
Redo Buffers 6131712 bytes
Database mounted.
SQL> alter database archivelog;
Database altered.
SQL> alter database open;
Database altered.
SQL> alter system checkpoint;
System altered.
SQL> drop table test_Table purge;
Table dropped.
SQL> create table test_table(
2 col1 number,
col2 varchar(10),
col3 varchar(10),
col4 varchar(10)); 3 4 5
Table created.
SQL> insert into test_table (col1, col2, col3, col4) values ( 0, 20100305, 0, 0);
1 row created.
SQL> commit;
Commit complete.
SQL> select t.ora_rowscn, t.* from test_table t;
ORA_ROWSCN COL1 COL2 COL3 COL4
1132572 0 20100305 0 0
SQL> for i in 1..1000 loop
SP2-0734: unknown command beginning "for i in 1..." - rest of line ignored.
SQL> begin
2 for i in 1..1000 loop
3 insert into test_table values(i,20100429,i,i);
4 end loop; commit;
5 end;
6 /
PL/SQL procedure successfully completed.
SQL> alter system switch logfile;
System altered.
SQL> /
SQL> select * from V$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
PL/SQL Release 11.1.0.6.0 - Production
CORE 11.1.0.6.0 Production
TNS for Linux: Version 11.1.0.6.0 - Production
NLSRTL Version 11.1.0.6.0 - Production
In the second session,
SQL> l
  1  select SCN, SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS where SQL_REDO IS NOT NULL
  2* and seg_owner='SYS' and table_name='TEST_TABLE'
SCN
SQL_REDO
SQL_UNDO
--------------------------------------------------------------------------------
1132607
insert into "SYS"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('2','20100429','2','2');
delete from "SYS"."TEST_TABLE" where "COL1" = '2' and "COL2" = '20100429' and "COL3" = '2' and "COL4" = '2' and ROWID = 'AAASPKAABAAAVpSAAC';
1132607
insert into "SYS"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('3','20100429','3','3');
delete from "SYS"."TEST_TABLE" where "COL1" = '3' and "COL2" = '20100429' and "COL3" = '3' and "COL4" = '3' and ROWID = 'AAASPKAABAAAVpSAAD';
1132607
insert into "SYS"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('4','20100429','4','4');
<<trimming the output>>
1132607
insert into "SYS"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('997','20100429','997','997');
delete from "SYS"."TEST_TABLE" where "COL1" = '997' and "COL2" = '20100429' and "COL3" = '997' and "COL4" = '997' and ROWID = 'AAASPKAABAAAVpVACU';
1132607
insert into "SYS"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('998','20100429','998','998');
delete from "SYS"."TEST_TABLE" where "COL1" = '998' and "COL2" = '20100429' and "COL3" = '998' and "COL4" = '998' and ROWID = 'AAASPKAABAAAVpVACV';
1132607
insert into "SYS"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('999','20100429','999','999');
delete from "SYS"."TEST_TABLE" where "COL1" = '999' and "COL2" = '20100429' and "COL3" = '999' and "COL4" = '999' and ROWID = 'AAASPKAABAAAVpVACW';
1132607
insert into "SYS"."TEST_TABLE"("COL1","COL2","COL3","COL4") values ('1000','20100429','1000','1000');
delete from "SYS"."TEST_TABLE" where "COL1" = '1000' and "COL2" = '20100429' and "COL3" = '1000' and "COL4" = '1000' and ROWID = 'AAASPKAABAAAVpVACX';
1000 rows selected.
SQL>
HTH
Aman.... -
Oracle Error 01034 After attempting to delete a large number of rows
I sent a command to delete a large number of rows from a table in an Oracle database (Oracle 10g / Solaris). The database files are located on the /dbo partition. Before the command, disk space utilization was at 84%; now it is at 100%.
SQL Command I ran:
delete from oss_cell_main where time < '30 jul 2009'
If I try to connect to the database now I get the following error:
ORA-01034: ORACLE not available
df -h returns the following:
Filesystem size used avail capacity Mounted on
/dev/md/dsk/d6 4.9G 5.0M 4.9G 1% /db_arch
/dev/md/dsk/d7 20G 11G 8.1G 59% /db_dump
/dev/md/dsk/d8 42G 42G 0K 100% /dbo
I tried to get the space back by deleting all the data in the table oss_cell_main :
drop table oss_cell_main purge
But no change in df output.
I have tried solving it myself but could not find sufficiently directed information. Even pointing me to the right documentation would be highly appreciated. I have already looked at the following:
du -h :
8K ./lost+found
1008M ./system/69333
1008M ./system
10G ./rollback/69333
10G ./rollback
27G ./data/69333
27G ./data
1K ./inx/69333
2K ./inx
3.8G ./tmp/69333
3.8G ./tmp
150M ./redo/69333
150M ./redo
42G .
I think its the rollback folder that has increased in size immensely.
SQL> show parameter undo
NAME TYPE VALUE
undo_management string AUTO
undo_retention integer 10800
undo_tablespace string UNDOTBS1
select * from dba_tablespaces where tablespace_name = 'UNDOTBS1'
TABLESPACE_NAME BLOCK_SIZE INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS
MAX_EXTENTS PCT_INCREASE MIN_EXTLEN STATUS CONTENTS LOGGING FOR EXTENT_MAN
ALLOCATIO PLU SEGMEN DEF_TAB_ RETENTION BIG
UNDOTBS1 8192 65536 1
2147483645 65536 ONLINE UNDO LOGGING NO LOCAL
SYSTEM NO MANUAL DISABLED NOGUARANTEE NO
Note: I can reconnect to the database for short periods of time by restarting the database. After some restarts it does connect, but only for a few minutes, not long enough to run exp.
Check the alert log for errors.
Select file_name, bytes from dba_data_files order by bytes;
Try to shrink some datafiles to get space back. -
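A note on why df didn't change: dropping the table returns the space to the tablespace, not to the OS; the datafiles keep their size until explicitly shrunk. A hedged sketch of the suggested approach (the file path and target size below are hypothetical):

```sql
-- List datafiles, largest first (from the suggestion above):
SELECT file_name, bytes FROM dba_data_files ORDER BY bytes DESC;

-- Shrink one (hypothetical path and size; this fails with ORA-03297
-- if extents still sit beyond the requested size):
ALTER DATABASE DATAFILE '/dbo/rollback/69333/undotbs01.dbf' RESIZE 2G;
```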
DBA Reports large number of inactive sessions with 11.1.1.1
All,
We have installed System 11.1.1.1 on some 32-bit Windows test machines running Windows Server 2003. Everything seems to be working fine, but recently the DBA has been reporting a large number of inactive sessions, throwing alarms that we are reaching the max allowed processes on the Oracle database server. We are running Oracle 10.2.0.4 on AIX.
We also have some System 9.3.1 development servers that point at separate schemas in this environment, and we don't see the same high number of inactive connections.
Most of the inactive connections are coming from Shared Services and Workspace. Anyone else see this or have any ideas?
Thanks for any responses.
Keith
Just a quick update. Originally I said this was only with 11.1.1.1, but we see the same high number of inactive sessions in 9.3. Anyone else seeing a large number of inactive sessions? They show up in Oracle as JDBC_Connect_Client. Do Shared Services, Planning, Workspace, etc. utilize persistent connections, or do they just abandon sessions when the Windows service associated with an application is shut down? Any information or thoughts are appreciated.
Edited by: Keith A on Oct 6, 2009 9:06 AM
Hi,
Not the answer you are looking for but have you logged it with Oracle as you might not get many answers to this question on here.
Cheers
John
http://john-goodwin.blogspot.com/ -
Large number of http posts navigating between forms
Hi,
I'm not really a Forms person (well, not since v3/4 running character mode on a mainframe!), so please be patient if I'm not providing the most useful information.
An Oracle Forms 10 system that I have fallen into supporting has, to me, very poor performance in doing simple things like navigating between forms/tabs.
Looking at the Java console (running Sun JRE 1.6.0_17) with network tracing turned on, I can see a much larger number of POST requests than I would expect (I looked here first, as initially we had an issue with every request going via a proxy server, and I wondered if we had lost the bypass-proxy setting). Only a normal number of GETs, though.
Moving from a master record to one particular detail form generates over 300 POST requests; I've confirmed this by looking at the Apache logs on the server. This is the worst one I have found, but in general the application appears to be extremely 'chatty'.
The only other system I work with which uses Forms doesn't generate anything like these numbers of requests, which makes me think this isn't normal (as does the fact that this particular form is very slow to open).
This is a third party application, so i don't have access to the source unfortunately.
Is there anything we should look at in our setup, or is this likely to be an application coding issue? This app is a recent conversion from a Forms 6 client/server application (which itself ran OK; at least this bit of the application did, with no delays in navigation between screens).
I'm happy to go back to the supplier, but it might help if I can point them into some specific directions, plus i'd like to know what's going on too!
Regards,
Carl
Sounds odd. 300 requests is by far too much. As it was a C/S application: did they do anything else except the recompile on 10g? Moving from C/S to 10g webforms seems easy, as you just need to recompile, but in fact it isn't. There are many things which didn't matter in a C/S environment but have disastrous effects once the form is deployed over the web; the SYNCHRONIZE built-in, for example. In C/S, calls to SYNCHRONIZE weren't that bad, but with web-deployed forms each call to SYNCHRONIZE is a roundtrip. The usage of timers is also best kept to a low level in webforms, for example.
A good starting point for the whole do's and dont's when moving forms to the web is the forms upgrade center:
http://www.oracle.com/technetwork/developer-tools/forms/index-095046.html
If you don't have the source code available, that's unfortunate; but if you want to know what's happening behind the scenes, there is the possibility to trace a forms session:
http://download.oracle.com/docs/cd/B14099_19/web.1012/b14032/tracing002.htm#i1035515
maybe this sheds some light upon what's going on.
cheers -
Large number of deadlocks seen after upgrade to 2.4.14
We upgraded BDB to version 2.4.14 and are using the latest 4.7 release. Without any code change on our part, we are seeing a large number of deadlocks with the new version.
Do let me know if more information is needed.
BDB: 106 lockers
BDB: 106 lockers
BDB: 106 lockers
BDB: 106 lockers
BDB: Aborting locker 8000a651
BDB: MASTER: /m-db/m-db432816-3302-4c30-9dd0-e42a295c970c/master rep_send_message: msgv = 5 logv 14 gen = 149 eid -1, type log, LSN [825][1499447]
BDB: 107 lockers
BDB: Aborting locker 8000a652
BDB: 107 lockers
BDB: MASTER: /m-db/m-db432816-3302-4c30-9dd0-e42a295c970c/master rep_send_message: msgv = 5 logv 14 gen = 149 eid -1, type log, LSN [825][1500259] perm
BDB: MASTER: will await acknowledgement: need 1
BDB: 106 lockers
BDB: 106 lockers
BDB: Aborting locker 8000a65a
BDB: Aborting locker 8000a658
BDB: MASTER: got ack [825][1500259](149) from site rit-004:10502
BDB: 105 lockers
BDB: 103 lockers
BDB: Container - 5e69b5cf184b41ef8f0719e1b0f944a1.bdbxml - Updating document: 5ca1ab1e0a0571bf048c6e298618c7048c6e2ec315a3
BDB: 104 lockers
BDB: Container - 5e69b5cf184b41ef8f0719e1b0f944a1.bdbxml - Updating document: 5ca1ab1e0a0571bf048c6e298618c7048c6e2ec35d5d
Also, an interesting observation: the replica process, which is not doing anything except keeping up with the master, is eating 3-4 times more CPU than the master when I am creating and updating records in the XML db.
On a 4-CPU setup, the master process takes about half a CPU, whereas the replica is chewing up to 2 CPUs.
What is the replica doing that is this CPU-intensive?!
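Whatever the root cause, the standard BDB coping pattern is to treat the "Aborting locker" messages as deadlock victims and retry the losing transaction. A generic retry-with-backoff sketch (Python, illustrative only; DeadlockError stands in for whatever exception your BDB binding raises on a deadlock abort):

```python
import random
import time

class DeadlockError(Exception):
    """Stands in for the deadlock exception a BDB binding would raise."""

def run_txn(operation, max_retries=5):
    """Run `operation` (a callable wrapping one transaction), retrying
    with randomized exponential backoff when it is chosen as the
    deadlock victim. Illustrative sketch only."""
    for attempt in range(max_retries):
        try:
            return operation()
        except DeadlockError:
            # Back off briefly so the competing transaction can finish.
            time.sleep(random.uniform(0, 0.01 * (2 ** attempt)))
    raise RuntimeError("transaction kept deadlocking")

# Example: an operation that deadlocks twice, then succeeds.
calls = {"n": 0}
def op():
    calls["n"] += 1
    if calls["n"] < 3:
        raise DeadlockError()
    return "ok"
```

The same shape applies whatever the binding: abort the victim, release its locks, and re-run the whole transaction rather than resuming it mid-way.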