All possible causes of a database hang
Hi,
Can someone please tell me all the possible causes of a database hang?
Thanks.
Creems
Already discussed threads.
http://forums.oracle.com/forums/search.jspa?threadID=&q=database+hang&objID=f61&dateRange=lastyear&userID=&numResults=15
Regards,
Sabdar Syed.
Similar Messages
-
Compiled trigger causes database hang
Hi there,
I compiled some triggers in our database and they caused our database to hang. What can I do? Any hint? Thx

That is ugly - calling a remote proc from inside a trigger. And it's the wrong thing to do in most cases.
A trigger is there to protect the integrity of the data and transaction. A remote proc call can fail for numerous reasons. Network issues. Remote db is in restricted mode/down. Etc. These will cause the trigger to fail. The trigger failure will cause the business transaction to fail.
And I'm pretty sure that no business transaction, validation, or processing logic dictates that it must fail because of something like a network problem if it can be prevented.
If this is a replication issue, you should consider using Oracle's built-in replication instead.
Anything else - I would rather have the trigger inserting instructions (into a table) for a background (DBMS_JOB) process to pick up and execute. -
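The queue-table approach described above can be sketched outside Oracle. This is a hypothetical illustration using Python's sqlite3 (the table names and the 'replicate' action are invented; in Oracle the worker would be a DBMS_JOB/DBMS_SCHEDULER job, not a Python function): the trigger only records what must happen, so a network failure can never roll back the business transaction.

```python
import sqlite3

# In-memory database standing in for the real one.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL);
    CREATE TABLE outbox (id INTEGER PRIMARY KEY AUTOINCREMENT,
                         action TEXT, order_id INTEGER,
                         processed INTEGER DEFAULT 0);
    -- The trigger only records WHAT must happen; it makes no remote call.
    CREATE TRIGGER orders_ai AFTER INSERT ON orders BEGIN
        INSERT INTO outbox (action, order_id) VALUES ('replicate', NEW.id);
    END;
""")
conn.execute("INSERT INTO orders (amount) VALUES (99.5)")
conn.commit()

def drain_outbox(conn):
    """The DBMS_JOB analogue: pick up queued instructions and mark them done."""
    rows = conn.execute(
        "SELECT id, action, order_id FROM outbox WHERE processed = 0").fetchall()
    for row_id, action, order_id in rows:
        # ... the remote call would go here; on failure the row stays queued ...
        conn.execute("UPDATE outbox SET processed = 1 WHERE id = ?", (row_id,))
    conn.commit()
    return len(rows)

print(drain_outbox(conn))  # 1 instruction picked up and processed
```

If the remote call fails, the instruction row simply stays queued for the next run, which is exactly the decoupling the trigger-based remote call lacks.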
Log sequence error - possible causes?
We're using C++, DBXML 2.1.7, with underlying Berkeley DB 4.3.28 - core 5 Linux (2.6.16.28). We use transacted writes, with no nesting of transactions. We've been running with this version of DBXML for some time and this is the first time we've seen any sort of data corruption with the database.
In this case, the database server was shutdown, and the system restarted - on restart, the database server core dumped. Repeated attempts to restart the database gave the same failure. We enabled error output for the Berkeley DB and we get the following errors:
Finding last valid log LSN: file: 1 offset 8234100
Recovery starting from [1][7965842]
Log sequence error: page LSN 1 1664073; previous LSN 1 5236280
Recovery function for LSN 1 8228918 failed on forward pass
PANIC: Invalid argument
PANIC: fatal region error detected; run recovery (repeated several times)
followed by a segfault in libdb_cxx-4.3.so.
So I have 2 questions, the first (and most important) being - how can the log file get corrupted? Is this an OS/file system problem? Or could we have a problem in our database server? It's relatively simple - there is a single thread for reads/writes, and a separate "checkpoint" thread that periodically calls the txn_checkpoint function. Something was just changed on the system that has to do with mirroring, specifically on the partition that holds our database, but I don't know the details (I can get the info, though).
The second question - why is Berkeley choking on the error path, instead of causing a database panic? Granted, in this situation it would appear that we're hosed either way, but a panic is at least a little more user-friendly than a core dump. Looking at the core file, it appears that we've entered the error handling portion of dbenv_open, and the mp_handle of the environment object is NULL - we fail in the call to __dbenv_refresh because of that. (If you're interested, we rebuilt Berkeley with debug symbols - I can give you a stack trace with details for the segfault.)
Oh, the startup flags for the database server are: DB_CREATE|DB_INIT_LOCK|DB_INIT_LOG|DB_INIT_MPOOL|DB_INIT_TXN|DB_RECOVER|DB_THREAD
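For reference, the single-writer-plus-periodic-checkpointer layout described above can be sketched as follows. This is purely an illustration of the threading pattern in Python with sqlite3 (the interval and names are invented, and sqlite's `wal_checkpoint` merely stands in for Berkeley DB's `txn_checkpoint`):

```python
import os
import sqlite3
import tempfile
import threading
import time

class Checkpointer:
    """Daemon thread that periodically checkpoints, like the txn_checkpoint thread."""
    def __init__(self, path, interval=0.05):
        self.path, self.interval = path, interval
        self.checkpoints = 0
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

    def _run(self):
        conn = sqlite3.connect(self.path)  # each thread needs its own handle
        while not self._stop.wait(self.interval):
            conn.execute("PRAGMA wal_checkpoint(PASSIVE)")  # txn_checkpoint stand-in
            self.checkpoints += 1
        conn.close()

# One writer thread (here, the main thread) plus the checkpointer daemon.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path)
writer.execute("PRAGMA journal_mode=WAL")
writer.execute("CREATE TABLE kv (k TEXT, v TEXT)")
ck = Checkpointer(path)
ck.start()
for i in range(10):
    writer.execute("INSERT INTO kv VALUES (?, ?)", (str(i), "x"))
    writer.commit()
    time.sleep(0.03)
ck.stop()
print(ck.checkpoints >= 1)  # True
```

The point of the pattern is that checkpointing never blocks the writer; a PASSIVE checkpoint (like txn_checkpoint with no force flag) only flushes what it safely can.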
Thanks!
Wendy

Thanks Michael -
Here's the stack trace from one of the core files, generated with the debug version of the library:
#0 0xb7d97c3a in __dbenv_refresh (dbenv=0x80b5430, orig_flags=1024,
rep_check=0) at ../dist/../env/env_open.c:722
#1 0xb7d9993d in __dbenv_open (dbenv=0x80b5430,
db_home=0x80b5334 "/pivot3/repository/xml", flags=188513, mode=432)
at ../dist/../env/env_open.c:415
#2 0xb7d1fe78 in DbEnv::open (this=0xbfa94488,
db_home=0x80b5334 "/pivot3/repository/xml", flags=188513, mode=0)
at ../dist/../cxx/cxx_env.cpp:442
#3 0x0804e0e8 in main (argc=134537448, argv=0xbfa94488) at dbserver.cxx:179
As to the "why's": the database files are never moved (in normal operation - I did copy them to a similar system for debug, but the failure is the same on both systems), we always run recovery when we restart the server, we never physically access the environment outside of the server, and there is only one database server on the system, so no "cross contamination" from another server (basically, we're using DBXML as an embedded database to store configuration information - we have it running on multiple, identical systems, and this is the first time in over 2 years that we've seen any sort of database corruption).
So based on what you've said, the only other real possibility is if something happened to the partition holding the log file (all of the database files are on the same partition). Would there be any smoking guns we could look for on the physical system that might indicate what happened? The system that the failure originally occurred on is still in the 'last booted' state, in case there was anything that we could look for. The partition mirroring changes were just made this past week, which is why we thought it might be something at a lower level.
I just ran db_printlog and the entry for 1664073 looks like this:
[1][1664073]__bam_repl: rec: 58 txnid 80000047 prevlsn [0][0]
fileid: 21
pgno: 3
lsn: [1][8228918]
indx: 3
isdeleted: 0
orig: 0xc
repl: 0x11
prefix: 8
suffix: 23
The other record (5236280) does not exist in the log file, based on the output from db_printlog.
If there's anything else from the log print that you need, just let me know.
Thanks! -
Hi All,
I am getting an error message "All available connections to database server are in use by other client machines. Please try again later and check the log file for error 5054" when trying to log into HRC (this user has the reporting capabilities). I checked the log files and this is what I found:
The log file stated that there were ongoing connections of HRC with the CCX (I am sure there aren't any active logins to HRC).
|| When you tried to log in, the following error was displayed because the maximum number of connections was reached for the server. We can see that a total of 5 connections have been configured. ||
1: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:Current number of connections (5) from historical Clients/Scheduler to 'CRA_DATABASE' database exceeded the maximum number of possible connections (5).Check with your administrator about changing this limit on server (wfengine.properties), however this might impact server performance.
|| Below we can see all 5 connections being used up. ||
2: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:[DB Connections From Clients (count=5)]|[(#1) 'username'='uccxhrc','hostname'='3SK5FS1.ucsfmedicalcenter.org']|[(#2) 'username'='uccxhrc','hostname'='PFS-HHXDGX1.ucsfmedicalcenter.org']|[(#3) 'username'='uccxhrc','hostname'='PFS-HHXDGX1.ucsfmedicalcenter.org']|[(#4) 'username'='uccxhrc','hostname'='PFS-HHXDGX1.ucsfmedicalcenter.org']|[(#5) 'username'='uccxhrc','hostname'='47BMMM1.ucsfmedicalcenter.org']
|| Once the maximum number of connections was reached, it threw an error. ||
3: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:Number of max connection to 'CRA_DATABASE' database was reached! Connection could not be established.
4: 6/20/2014 9:13:49 AM %CHC-LOG_SUBFAC-3-UNK:Database connection to 'CRA_DATABASE' failed due to (All available connections to database server are in use by other client machines. Please try again later and check the log file for error 5054.)
Current exact UCCX Version 9.0.2.11001-24
Current CUCM Version 8.6.2.23900-10
Business impact Not Critical
Exact error message All available connections to database server are in use by other client machines. Please try again later and check the log file for error 5054
What is the OS version of the PC you are running, and is it a physical machine or a virtual machine that is running the HRC client?
OS Version: Windows 7 Home Premium 64-bit, and it's a physical machine.
The Max DB Connections for Report Client Sessions is set to 5 for each server (there are two servers). The number of HR Sessions is set to 10.
I wanted to know if there is a way to find the HRC sessions active now and terminate one or more (or all) of those sessions from the server end?

We have had this "PRX5" problem with Exchange 2013 since the RTM version. We recently applied CU3, and it did not correct the problem. We have seen this problem on every Exchange 2013 we manage. They are all installations where all roles are installed on the same Windows server, and in our case, they are all Windows virtual machines using Windows 2012 Hyper-V.
We have tried all the "this fixed it for me" solutions regarding DNS, network cards, host file entries and so forth. None of those "solutions" made any difference whatsoever. The occurrence of the temporary error PRX5 seems totally random.
About 2 out of 20 incoming mail tests by Microsoft Connectivity Analyzer fail with this PRX5 error.
Most people don't ever notice the issue because remote mail servers retry the connection later. However, telephone voice mail systems that forward voice message files to email, or other such applications such as your scanner, often don't retry and
simply fail. Our phone system actually disables all further attempts to send voice mail to a particular user if the PRX5 error is returned when the email is sent by the phone system.
Is Microsoft totally oblivious to this problem?
PRX5 is a serious issue that needs an Exchange team resolution, or at least an acknowledgement that the problem actually does exist and has negative consequences for proper mail flow.
JSB -
How to find which workbooks are using a database function (user-defined)
Hi All,
Is it possible to find out which workbooks are using a database function (user-defined)?
Thanks,

Hi,
If I had to do this detective work, I would probably do the following:
1. Activate the function eul5_post_save_document for a period of time. When activated, this function is triggered at the time a workbook is saved. If you look at its columns, it saves the worksheet's SQL.
2. Next, I would parse the EUL5_WORKSHEET_SQL.SQL_SEGMENT column, which is a varchar2(4000) column. There are many effective Oracle functions which could aid you in this effort (e.g. INSTR or perhaps a regular expression function).
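The segment re-joining matters because SQL_SEGMENT is chunked at 4000 characters, so a function name can straddle a chunk boundary. A hypothetical sketch of the search in Python (the sample rows merely stand in for EUL5_WORKSHEET_SQL; column order and names are assumptions):

```python
import re

# Hypothetical rows: (workbook, segment_order, sql_segment).
rows = [
    ("Sales WB", 1, "SELECT region, MY_UDF(reve"),   # match spans two segments
    ("Sales WB", 2, "nue) FROM sales"),
    ("HR WB",    1, "SELECT ename FROM emp"),
]

def workbooks_using(rows, func_name):
    """Concatenate each workbook's SQL segments in order, then search."""
    sql_by_wb = {}
    for wb, order, seg in sorted(rows, key=lambda r: (r[0], r[1])):
        sql_by_wb[wb] = sql_by_wb.get(wb, "") + seg
    # Match the function name followed by an opening parenthesis.
    pattern = re.compile(r"\b%s\s*\(" % re.escape(func_name), re.IGNORECASE)
    return sorted(wb for wb, sql in sql_by_wb.items() if pattern.search(sql))

print(workbooks_using(rows, "MY_UDF"))  # ['Sales WB']
```

Searching each 4000-byte segment independently (e.g. with INSTR on SQL_SEGMENT alone) would miss the "Sales WB" match above, which is why the segments are concatenated first.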
I hope this helps.
Patrick -
I don't know what's wrong with my Mac Mozilla Firefox, version 3.6.8, but today, it started alerting me about an error message on the "Error Console". In every website I visit, it tells me: "The 'charCode' property of a keyup event should not be used. The value is meaningless." Is this possibly caused by a virus?
I saw a pop-up which did not allow me to click it when I scattered the windows on my Mac. I was using Private Browsing, with pop-ups disabled, but one pop-up managed to get past my settings and open in another window. It would not allow me to select it, so all I did was close Firefox and start a new session. So far, everything has been normal; I also deleted the cookies it installed.
But, I still keep seeing that "Error Console" notice under my "Tools" on the Menu Bar, and when I clicked on it, it listed errors (such as what I listed above).
Would someone explain this to me?
Thanks for your help!

The messages you see in the Error Console are mostly to assist the web site's author in resolving compatibility problems. Some of them can assist you in determining why a web site doesn't work as intended. The one you mentioned doesn't sound that suspicious, except that it occurs on many sites. Perhaps one of your add-ons is trying to monitor what you type?
To diagnose whether this is caused by an add-on or one of your settings, you could try the following:
First, make a backup of your computer for safekeeping. To back up Firefox, see [https://support.mozilla.com/en-US/kb/Backing+up+your+information Backing up your information].
Next, try starting Firefox in
[http://support.mozilla.com/kb/Safe+Mode Safe Mode]. Be careful not to "reset" anything permanently if you didn't back up.
Does that resolve the errors? If so, then an add-on usually is the culprit. If not, try creating a new (blank) profile: [http://support.mozilla.com/kb/Managing+profiles Managing profiles].
If the new profile works correctly, you can choose between further research on your old profile or moving key settings like bookmarks from your old profile to the new one. [https://support.mozilla.com/en-US/kb/Recovering+important+data+from+an+old+profile Recovering important data from an old profile].
Hope this helps. -
Hi,
We have a cluster with 2 nodes. Everything works fine in Node1. When I try to failover TEST1 database to Node-2 it fails with this message.
Generic service 'Analysis Services (TEST1)' could not be brought online (with error '1060') during an attempt to open the service. Possible causes include: the service is either not installed or the specified service name is invalid.
Any help is much appreciated.
Thanks

Hello,
The error message is pretty straightforward: it's saying either the service isn't installed, or it's not installed under the same service name on that node. Did you install Analysis Services on the second node (from the error it seems like it isn't installed)?
Sean Gallardy | Blog |
Twitter -
I used Firefox a long time for email and other secure sites. Out of the blue my logins started being rejected. In frustration, I went back to Internet Explorer and found I could log in using the same UID and PWD.
I can't imagine what setting could be causing this to happen. Have you ever heard of such a thing? And is there a solution?

In Firefox 3.6.4 and later the default connection settings have been changed to "Use the system proxy settings".
See "Firefox connection settings" in [[Server not found]]
You can find the connection setting here: Tools > Options > Advanced : Network : Connection
If you do not need to use a proxy to connect to internet then select No Proxy
Another possible cause is security software (a firewall) that blocks or restricts Firefox without informing you about that.
Remove all rules for Firefox from the permissions list in the firewall and let your firewall ask again for permission to get full unrestricted access to internet for Firefox.
See [[Server not found]] and [[Firewalls]] and http://kb.mozillazine.org/Firewalls -
Simply delete all entries in a database.
Hello,
how do I simply delete all entries in a database (which must be thread safe, and most probably is)? For instance, it is needed because I'm developing a versioned open source XML/JSON database system, where I'm using a BerkeleyDB environment/database as a transaction log per revision (resource/log/version_number/...) for dirty blocks/pages, and I now want to introduce checkpointing (with currently only a single write-transaction per resource). That is, one thread might read a transaction log (a BerkeleyDB environment/database) while another thread, the checkpointer (most probably a daemon thread), commits - that is, writes the log periodically, or during lower workload, into the real resource. After the data is committed, the transaction log must be emptied, but a reading transaction may still be reading from the log, falling back to the real resource if a page is not in the log. That is, I can't remove the database; I probably simply have to delete all entries, plus keep a simple .commit-file flag which indicates whether the data has been written back to the real resource or the checkpointer must still write it back sometime in the future (if the .commit-file still exists). Do I have to iterate through the database with a cursor (and .getNext())? Or does a dedicated method exist?
kind regards
Johannes

Hi Johannes,
As I think you've already discovered, there is no built-in method for deleting all records of a database that is open. The only similar built-in methods are those for removing or truncating an entire database (removeDatabase and truncateDatabase), and the database must be closed.
If you can't find a way to use removeDatabase or truncateDatabase, then you'll have to iterate through the records and delete them individually. If this is done for a large number of records in a single transaction, it will be expensive on a number of fronts, including memory usage for the locks: each record is individually locked.
If you don't need to delete all records in a single transaction (I couldn't completely understand your use case), then you can iterate with a cursor using READ_UNCOMMITTED and delete the records in individual transactions using Database.delete. This avoids using lots of memory for the locks, since only one record is locked at a time.
In either case the cost can be reduced by using DatabaseEntry.setPartial(0, 0, true) for the DatabaseEntry that is passed as the data parameter. You only need the key to delete the record, not the data, and avoiding a fetch of the data is a big cost savings (if the record data is not in cache). This optimization is only in JE 5.0 and above; in JE 4.1 and earlier it has no advantage, because the data is always fetched internally as part of the deletion operation.
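The cursor-iteration approach can be sketched in miniature. This analogue uses Python's sqlite3 rather than JE (so READ_UNCOMMITTED and setPartial have no exact counterpart here), but it shows the same two ideas: scan keys only, and delete one record per small transaction:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (k INTEGER PRIMARY KEY, payload BLOB)")
conn.executemany("INSERT INTO log VALUES (?, ?)",
                 [(i, b"x" * 100) for i in range(5)])
conn.commit()

# Key-only scan: the analogue of DatabaseEntry.setPartial(0, 0, true) --
# never fetch the payload just to delete the row.
keys = [k for (k,) in conn.execute("SELECT k FROM log")]
for k in keys:
    conn.execute("DELETE FROM log WHERE k = ?", (k,))
    conn.commit()  # one record per transaction, as with Database.delete

print(conn.execute("SELECT COUNT(*) FROM log").fetchone()[0])  # 0
```

Deleting per-record keeps the lock footprint at one record at a time, at the price of many small commits; one big transaction is the opposite trade-off.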
--mark -
My Macbook Pro is getting hot pretty quickly. What are the possible causes? I have had the machine about 3 years.
Open Activity Monitor in the Utilities folder. Select All Processes from the Processes dropdown menu. Click twice on the CPU% column header to display in descending order. If you find a process using a large amount of CPU time (>= 70%), then select the process and click on the Quit icon in the toolbar. Click on the Force Quit button to kill the process. See if that helps. Be sure to note the name of the runaway process so you can track down the cause of the problem.
Install iStat Menus 4.05 to provide actual temperature readings and fan speeds. -
How to search all columns of all tables in a database
I need to search all columns of all tables in a database. I already wrote the code below, but I got the error message below when running this script:
DECLARE
  cnt    NUMBER;
  v_data VARCHAR2(20);
BEGIN
  v_data := '5C4CA98EAC4C';
  FOR t1 IN (SELECT table_name, column_name
               FROM all_tab_cols
              WHERE owner = 'admin'
                AND data_type = 'VARCHAR2') LOOP
    EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM ' || t1.table_name ||
                      ' WHERE ' || t1.column_name || ' = :1'
      INTO cnt USING v_data;
    IF cnt > 0 THEN
      dbms_output.put_line(t1.table_name || ' ' || t1.column_name || ' ' || cnt);
    END IF;
  END LOOP;
END;
Error report:
ORA-00933: SQL command not properly ended
ORA-06512: at line 7
00933. 00000 - "SQL command not properly ended"
*Cause:
*Action:
Any help please.

SQL solutions by Michaels:
michaels> var val varchar2(5)
michaels> exec :val := 'as'
PL/SQL procedure successfully completed.
michaels> select distinct substr (:val, 1, 11) "Searchword",
substr (table_name, 1, 14) "Table",
substr (t.column_value.getstringval (), 1, 50) "Column/Value"
from cols,
table
(xmlsequence
(dbms_xmlgen.getxmltype ('select ' || column_name
|| ' from ' || table_name
|| ' where upper('
|| column_name
|| ') like upper(''%' || :val
|| '%'')'
).extract ('ROWSET/ROW/*')
) t
-- where table_name in ('EMPLOYEES', 'JOB_HISTORY', 'DEPARTMENTS')
order by "Table"

or
11g upwards
SQL> select table_name,
column_name,
:search_string search_string,
result
from (select column_name,
table_name,
'ora:view("' || table_name || '")/ROW/' || column_name || '[ora:contains(text(),"%' || :search_string || '%") > 0]' str
from cols
where table_name in ('EMP', 'DEPT')),
xmltable (str columns result varchar2(10) path '.')
TABLE_NAME COLUMN_NAME SEARCH_STRING RESULT
DEPT DNAME es RESEARCH
EMP ENAME es JAMES
EMP JOB es SALESMAN
EMP JOB es SALESMAN
4 rows selected. -
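The same "search every text column of every table" idea in a generic, runnable form - a sketch using Python's sqlite3 catalog in place of ALL_TAB_COLS (table names and data are illustrative). Note how identifiers are quoted into the statement text while only the search value is bound: identifiers can never be bind variables, and unquoted identifiers with special characters are a classic source of ORA-00933-style parse errors in the dynamic-SQL approach above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp  (ename TEXT, job TEXT);
    CREATE TABLE dept (dname TEXT);
    INSERT INTO emp  VALUES ('JAMES', 'SALESMAN');
    INSERT INTO dept VALUES ('RESEARCH');
""")

def search_all_columns(conn, needle):
    """Return (table, column, match_count) for every column containing needle."""
    hits = []
    tables = [t for (t,) in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for table in tables:
        for row in conn.execute(f'PRAGMA table_info("{table}")'):
            column = row[1]
            # Quote identifiers in the SQL text; bind only the search value.
            sql = f'SELECT COUNT(*) FROM "{table}" WHERE "{column}" LIKE ?'
            (cnt,) = conn.execute(sql, (f"%{needle}%",)).fetchone()
            if cnt:
                hits.append((table, column, cnt))
    return hits

print(search_all_columns(conn, "AMES"))  # [('emp', 'ename', 1)]
```

In Oracle the equivalent quoting would be `'SELECT COUNT(*) FROM "' || table_name || '"'`, with the value still passed via USING.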
Is it possible to connect to a database using a session bean
Dear all,
Is it possible to connect to a database using a session bean, without using entity beans like CMP or BMP?
If your answer is yes, then please tell me where to put the SELECT statement and which transaction attribute to use (of the 6 types).
If you have sample code, that would be good for me.
Hope I will get an answer.

Sure it is.
Try something like this (and maybe get a book on JDBC):
String name;
Connection connection = null;
PreparedStatement statement = null;
ResultSet rs = null;
try {
    InitialContext ic = new InitialContext();
    DataSource ds = (DataSource) ic.lookup(Constants.MY_DATASOURCE);
    connection = ds.getConnection();
    String sql = "SELECT * FROM TABLE";
    statement = connection.prepareStatement(sql);
    rs = statement.executeQuery();
    while (rs.next()) {
        name = rs.getString("NAME");
    }
} catch (NamingException e) {
    // Can't get JDBC datasource
    // ... do something with this exception
} catch (SQLException e) {
    // SQL exception from getter
    // ... do something with this one too
} finally {
    // Close resources outside the loop, in reverse order of creation
    try { if (rs != null) rs.close(); } catch (SQLException ignored) { }
    try { if (statement != null) statement.close(); } catch (SQLException ignored) { }
    try { if (connection != null) connection.close(); } catch (SQLException ignored) { }
}
Hello,
I've been running the DMV 'sys.event_log', and have noticed that I am getting a lot of errors about connection issues to some of my SQL Azure databases saying "Database could not be opened. May be caused because database does not exist or lack of authentication
to open the database."
The event type column says: 'connection_failed' and the event_subtype_desc column says: 'failed_to_open_db' both are associated with the above error message.
I know that these databases are on-line as I have numerous people connected to them, all of whom are not experiencing any issues. My question is, is there a query that you can run on SQL Azure to try and find out a bit more information about the connection
attempts?
If this was a hosted SQL solution it would be much easier.
Marcus

Hello,
As for Windows Azure SQL Database, we can't access the error log file as with an on-premise SQL Server. Currently, troubleshooting connection errors is only supported through the following DMVs. The SQL Database connection events are collected and aggregated in two catalog views that reside in the logical master database: sys.database_connection_stats and sys.event_log. We can use the sys.event_log view to display the details when an error occurs.
Just as the connection failure describes, it may occur when the user does not have login permission when connecting to the SQL Database. If so, please verify that the user has logon permission.
Regards,
Fanny Liu
TechNet Community Support -
ROW CACHE ENQUEUE LOCK / library cache load lock leads to database hang
We faced a database hang on a 3-node 11i ERP 9i RAC database.
We saw the library cache load lock timed-out events reported in the alert log.
Then a few ORA-600 errors, and later the ROW CACHE ENQUEUE LOCK timed-out event. Eventually the database was hung and we had to bounce the services.
We created support SR 7845542.992 for RCA.
Support says to increase the shared pool size to avoid shared pool fragmentation and reloads, and additionally to upgrade to a 10g database.
I am not convinced that adding additional pool size, or upgrading to 10g, would solve this; furthermore, even 10g has such issues reported.
I saw a couple of bugs mentioning that such an issue can happen due to a deadlock of sessions holding latches.
Kindly let me know your view on the issue.
If required I can attach the statspack for more information.

Many thanks, I was keen to have your update.
There are 8 CPUs on each node. Reloads were very high during this time period, but normally there are not high reloads.
Statspack details for 3 nodes
STATSPACK report for
DB Name DB Id Instance Inst Num Release Cluster Host
PROD 21184234 PROD1 1 9.2.0.8.0 YES npi-or-db-p-
11.npi.corp
Snap Id Snap Time Sessions Curs/Sess Comment
Begin Snap: 149817 30-Oct-09 13:00:09 574 #########
End Snap: 149837 30-Oct-09 14:00:17 602 #########
Elapsed: 60.13 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 8,192M Std Block Size: 8K
Shared Pool Size: 1,024M Log Buffer: 10,240K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 122,414.93 11,449.13
Logical reads: 69,550.76 6,504.89
Block changes: 928.41 86.83
Physical reads: 196.24 18.35
Physical writes: 28.65 2.68
User calls: 343.97 32.17
Parses: 558.61 52.25
Hard parses: 43.48 4.07
Sorts: 467.24 43.70
Logons: 0.63 0.06
Executes: 2,046.99 191.45
Transactions: 10.69
% Blocks changed per Read: 1.33 Recursive Call %: 97.59
Rollback per transaction %: 5.07 Rows per Sort: 15.85
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.72 In-memory Sort %: 100.00
Library Hit %: 96.79 Soft Parse %: 92.22
Execute to Parse %: 72.71 Latch Hit %: 99.77
Parse CPU to Parse Elapsd %: 60.10 % Non-Parse CPU: 78.07
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
db file sequential read 249,234 0 1,537 6 6.5
db file scattered read 61,776 0 769 12 1.6
row cache lock 780,098 10 566 1 20.2
library cache lock 697,849 157 432 1 18.1
latch free 127,926 4,715 387 3 3.3
global cache cr request 370,770 3,091 309 1 9.6
PL/SQL lock timer 59 58 112 1903 0.0
wait for scn from all nodes 303,572 18 103 0 7.9
library cache pin 26,231 2 100 4 0.7
global cache null to x 17,717 716 92 5 0.5
buffer busy waits 5,388 18 74 14 0.1
db file parallel read 5,245 0 69 13 0.1
log file sync 20,407 29 66 3 0.5
enqueue 52,200 70 60 1 1.4
buffer busy global CR 4,845 33 55 11 0.1
CGS wait for IPC msg 412,512 407,106 50 0 10.7
ksxr poll remote instances 1,279,565 483,046 48 0 33.2
log file parallel write 160,040 0 42 0 4.1
library cache load lock 1,491 2 29 20 0.0
global cache open x 19,507 344 28 1 0.5
buffer busy global cache 957 0 22 23 0.0
global cache s to x 16,516 180 20 1 0.4
db file parallel write 11,120 0 12 1 0.3
log file sequential read 618 0 11 18 0.0
DFS lock handle 23,768 0 10 0 0.6
control file sequential read 8,563 0 4 0 0.2
KJC: Wait for msg sends to c 1,549 57 4 3 0.0
lock escalate retry 76 76 4 52 0.0
SQL*Net break/reset to clien 12,546 0 3 0 0.3
SQL*Net more data to client 85,773 0 3 0 2.2
control file parallel write 1,265 0 2 1 0.0
global cache null to s 648 23 1 2 0.0
global cache busy 200 0 1 5 0.0
global cache open s 1,493 28 1 1 0.0
log file switch completion 12 0 1 61 0.0
PX Deq Credit: send blkd 161 70 1 4 0.0
kksfbc child completion 119 118 1 5 0.0
PX Deq: reap credit 5,948 5,456 0 0 0.2
PX Deq: Execute Reply 83 29 0 3 0.0
process startup 8 0 0 25 0.0
LGWR wait for redo copy 992 12 0 0 0.0
IPC send completion sync 450 450 0 0 0.0
PX Deq: Parse Reply 100 28 0 1 0.0
undo segment extension 10,380 10,372 0 0 0.3
PX Deq: Join ACK 146 65 0 1 0.0
buffer deadlock 222 221 0 0 0.0
async disk IO 1,179 0 0 0 0.0
wait list latch free 2 0 0 16 0.0
PX Deq: Msg Fragment 112 28 0 0 0.0
Library Cache Activity for DB: PROD Instance: PROD1 Snaps: 149817 -149837
->"Pct Misses" should be very low
Get Pct Pin Pct Invali-
Namespace Requests Miss Requests Miss Reloads dations
BODY 116,007 1.1 133,347 19.9 24,338 0
CLUSTER 4,224 0.6 5,131 1.0 0 0
INDEX 15,048 24.1 13,798 26.4 2 0
JAVA DATA 82 0.0 692 39.6 136 0
JAVA RESOURCE 66 39.4 206 25.2 12 0
PIPE 1,140 0.5 1,160 0.5 0 0
SQL AREA 1,197,908 12.6 13,517,660 1.5 111,833 73
TABLE/PROCEDURE 3,847,439 0.8 4,230,265 7.9 142,200 0
TRIGGER 8,444 2.4 8,657 18.5 1,274 0
GES Lock GES Pin GES Pin GES Inval GES Invali-
Namespace Requests Requests Releases Requests dations
BODY 1 1,234 1,258 985 0
CLUSTER 3,222 25 25 25 0
INDEX 13,792 3,641 3,631 3,629 0
JAVA DATA 0 0 0 0 0
JAVA RESOURCE 0 26 25 0 0
PIPE 0 0 0 0 0
SQL AREA 0 0 0 0 0
TABLE/PROCEDURE 857,137 13,130 13,264 10,762 0
TRIGGER 0 200 202 200 0
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
STATSPACK report for
DB Name DB Id Instance Inst Num Release Cluster Host
PROD 21184234 PROD2 2 9.2.0.8.0 YES npi-or-db-p-
12.npi.corp
Snap Id Snap Time Sessions Curs/Sess Comment
Begin Snap: 149847 30-Oct-09 14:00:05 493 #########
End Snap: 149857 30-Oct-09 15:00:02 432 #########
Elapsed: 59.95 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 8,192M Std Block Size: 8K
Shared Pool Size: 1,024M Log Buffer: 10,240K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 71,853.44 32,058.65
Logical reads: 273,904.84 122,207.36
Block changes: 889.13 396.70
Physical reads: 40.40 18.03
Physical writes: 20.97 9.35
User calls: 153.74 68.60
Parses: 66.19 29.53
Hard parses: 2.66 1.19
Sorts: 25.70 11.47
Logons: 0.16 0.07
Executes: 726.41 324.10
Transactions: 2.24
% Blocks changed per Read: 0.32 Recursive Call %: 92.41
Rollback per transaction %: 4.84 Rows per Sort: 193.55
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 99.99
Buffer Hit %: 99.99 In-memory Sort %: 100.00
Library Hit %: 99.35 Soft Parse %: 95.97
Execute to Parse %: 90.89 Latch Hit %: 99.99
Parse CPU to Parse Elapsd %: 36.55 % Non-Parse CPU: 98.28
Wait Events for DB: PROD Instance: PROD2 Snaps: 149847 -149857
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
enqueue 65,823 33,667 90,459 1374 8.2
row cache lock 38,996 560 1,795 46 4.8
PX Deq Credit: send blkd 522 499 1,223 2344 0.1
PX Deq: Parse Reply 466 416 987 2117 0.1
db file sequential read 50,130 0 421 8 6.2
library cache lock 78,842 172 210 3 9.8
db file scattered read 6,904 0 152 22 0.9
global cache cr request 84,801 575 113 1 10.5
latch free 8,096 736 65 8 1.0
log file sync 5,676 27 41 7 0.7
wait for scn from all nodes 18,891 10 24 1 2.3
CGS wait for IPC msg 394,678 392,142 21 0 49.0
library cache pin 1,339 0 17 13 0.2
global cache null to x 2,145 48 16 8 0.3
global cache s to x 3,242 32 16 5 0.4
buffer busy waits 366 10 15 40 0.0
ksxr poll remote instances 70,990 31,295 14 0 8.8
db file parallel read 359 0 11 31 0.0
global cache open x 2,708 55 10 4 0.3
async disk IO 3,474 0 8 2 0.4
global cache open s 3,470 10 6 2 0.4
log file parallel write 13,076 0 5 0 1.6
global cache busy 58 40 5 90 0.0
PL/SQL lock timer 1 1 5 4877 0.0
DFS lock handle 3,362 0 5 1 0.4
log file sequential read 412 0 4 10 0.1
db file parallel write 2,774 0 3 1 0.3
library cache load lock 59 0 3 58 0.0
buffer busy global CR 722 0 3 4 0.1
control file sequential read 6,398 0 3 0 0.8
SQL*Net break/reset to clien 16,078 0 2 0 2.0
name-service call wait 26 0 2 67 0.0
control file parallel write 1,248 0 2 1 0.2
process startup 24 0 1 49 0.0
KJC: Wait for msg sends to c 3,491 4 1 0 0.4
SQL*Net more data to client 23,724 0 1 0 2.9
buffer busy global cache 23 0 0 19 0.0
global cache null to s 114 0 0 4 0.0
PX Deq: reap credit 5,646 5,509 0 0 0.7
log file switch completion 4 0 0 58 0.0
lock escalate retry 54 54 0 1 0.0
IPC send completion sync 119 118 0 0 0.0
direct path read 2,820 0 0 0 0.3
direct path read (lob) 3,632 0 0 0 0.5
PX Deq: Join ACK 88 37 0 0 0.0
direct path write 2,470 0 0 0 0.3
kksfbc child completion 6 6 0 6 0.0
buffer deadlock 3 3 0 11 0.0
global cache quiesce wait 4 4 0 8 0.0
Library Cache Activity for DB: PROD Instance: PROD2 Snaps: 149847 -149857
->"Pct Misses" should be very low
Get Pct Pin Pct Invali-
Namespace Requests Miss Requests Miss Reloads dations
BODY 27,353 0.5 28,091 6.5 1,643 0
CLUSTER 203 1.0 269 1.5 0 0
INDEX 526 9.9 271 19.9 0 0
JAVA DATA 18 0.0 120 6.7 4 0
JAVA RESOURCE 20 45.0 56 26.8 3 0
JAVA SOURCE 1 100.0 1 100.0 0 0
PIPE 999 0.4 1,043 0.4 0 0
SQL AREA 131,793 7.6 3,406,577 0.4 7,012 0
TABLE/PROCEDURE 926,987 0.2 1,907,993 1.0 8,845 0
TRIGGER 1,519 0.1 1,532 4.9 69 0
GES Lock GES Pin GES Pin GES Inval GES Invali-
Namespace Requests Requests Releases Requests dations
BODY 1 129 277 117 0
CLUSTER 168 2 2 2 0
INDEX 271 52 56 52 0
JAVA DATA 0 0 0 0 0
JAVA RESOURCE 0 9 6 0 0
JAVA SOURCE 0 1 1 1 0
PIPE 0 0 0 0 0
SQL AREA 0 0 0 0 0
TABLE/PROCEDURE 89,523 764 868 460 0
TRIGGER 0 2 14 2 0
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
DB Name DB Id Instance Inst Num Release Cluster Host
PROD 21184234 PROD3 3 9.2.0.8.0 YES npi-or-db-p-13.npi.corp
Snap Id Snap Time Sessions Curs/Sess Comment
Begin Snap: 149808 30-Oct-09 14:00:00 31 #########
End Snap: 149809 30-Oct-09 15:00:02 34 11,831.4
Elapsed: 60.03 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 8,192M Std Block Size: 8K
Shared Pool Size: 1,024M Log Buffer: 10,240K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 1,518.14 36,700.35
Logical reads: 1,333.43 32,235.02
Block changes: 5.09 123.01
Physical reads: 54.31 1,312.88
Physical writes: 3.91 94.44
User calls: 1.46 35.40
Parses: 2.24 54.21
Hard parses: 0.04 0.93
Sorts: 0.84 20.28
Logons: 0.06 1.45
Executes: 3.11 75.23
Transactions: 0.04
% Blocks changed per Read: 0.38 Recursive Call %: 94.31
Rollback per transaction %: 45.64 Rows per Sort: 215.97
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.99 Redo NoWait %: 100.00
Buffer Hit %: 96.21 In-memory Sort %: 100.00
Library Hit %: 99.07 Soft Parse %: 98.29
Execute to Parse %: 27.94 Latch Hit %: 99.98
Parse CPU to Parse Elapsd %: 69.88 % Non-Parse CPU: 97.92
Wait Events for DB: PROD Instance: PROD3 Snaps: 149808 -149809
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
enqueue 19,510 7,472 15,509 795 130.9
PX Deq: Parse Reply 1,152 1,071 2,577 2237 7.7
row cache lock 2,202 518 1,579 717 14.8
db file scattered read 31,556 0 354 11 211.8
db file sequential read 17,272 0 67 4 115.9
db file parallel read 1,722 0 34 20 11.6
global cache cr request 53,754 91 32 1 360.8
wait for scn from all nodes 1,897 13 10 5 12.7
CGS wait for IPC msg 403,358 401,478 10 0 2,707.1
DFS lock handle 4,753 0 8 2 31.9
direct path read 1,248 0 6 5 8.4
PX Deq: Execute Reply 110 38 6 51 0.7
global cache open s 160 10 5 31 1.1
control file sequential read 6,442 0 3 0 43.2
name-service call wait 26 0 2 78 0.2
latch free 129 109 2 13 0.9
KJC: Wait for msg sends to c 153 24 1 9 1.0
control file parallel write 1,245 0 1 1 8.4
buffer busy waits 199 0 1 6 1.3
process startup 20 0 1 44 0.1
global cache null to x 74 2 1 9 0.5
global cache null to s 19 0 1 29 0.1
global cache open x 268 1 1 2 1.8
library cache lock 1,150 0 0 0 7.7
PX Deq: Join ACK 129 48 0 3 0.9
log file parallel write 1,157 0 0 0 7.8
async disk IO 219 0 0 1 1.5
direct path write 1,024 0 0 0 6.9
ksxr poll remote instances 6,740 4,595 0 0 45.2
PX Deq: reap credit 6,580 6,511 0 0 44.2
buffer busy global CR 73 0 0 2 0.5
log file sequential read 11 0 0 10 0.1
log file sync 100 0 0 1 0.7
global cache s to x 282 2 0 0 1.9
db file parallel write 95 0 0 1 0.6
library cache pin 142 0 0 0 1.0
SQL*Net break/reset to clien 28 0 0 1 0.2
IPC send completion sync 81 81 0 0 0.5
PX Deq: Signal ACK 32 14 0 1 0.2
PX Deq Credit: send blkd 3 1 0 7 0.0
SQL*Net more data to client 841 0 0 0 5.6
PX Deq: Msg Fragment 37 17 0 0 0.2
log file single write 4 0 0 1 0.0
db file single write 1 0 0 1 0.0
SQL*Net message from client 4,213 0 13,673 3246 28.3
gcs remote message 214,784 75,745 7,016 33 1,441.5
wakeup time manager 233 233 6,812 29237 1.6
PX Idle Wait 2,338 2,294 5,686 2432 15.7
PX Deq: Execution Msg 2,151 1,979 4,796 2229 14.4
Library Cache Activity for DB: PROD Instance: PROD3 Snaps: 149808 -149809
->"Pct Misses" should be very low
Get Pct Pin Pct Invali-
Namespace Requests Miss Requests Miss Reloads dations
BODY 1,290 0.0 1,290 0.0 0 0
CLUSTER 18 0.0 8 0.0 0 0
SQL AREA 4,893 2.0 36,371 0.5 2 0
TABLE/PROCEDURE 1,555 3.9 3,834 4.9 71 0
TRIGGER 286 0.0 286 0.0 0 0
GES Lock GES Pin GES Pin GES Inval GES Invali-
Namespace Requests Requests Releases Requests dations
BODY 1 0 0 0 0
CLUSTER 4 0 0 0 0
SQL AREA 0 0 0 0 0
TABLE/PROCEDURE 863 224 42 42 0
TRIGGER 0 0 0 0 0
-------------------------------------------------------------
-
Creating a tree of all possible combinations of {a,b,c,d}
I am an MSc conversion student who is struggling with the initial stages of coding his Association Rule Mining dissertation project.
I have a TreeNode class, which is in the process of being written.
The class is designed to create a tree that stores all possible combinations of a Vector alphabet of strings, {a,b,c,d}.
Please could somebody explain what my TreeNode class is doing so far?
I have had a lot of help with this and I can't understand why there are so many vectors and what they all do.
A few pointers (no sarcastic comments please) would be much appreciated.
The code so far:
package treenode;
import java.io.*;
import java.util.*;
public class TreeNode
{
    public static final String INFO_TEXT = "A program to store a vector of string items a,b,c,d in all their possible combinations into a TreeNode class";
    private Vector alphabet;
    private Vector data; // this IS TreeRootData
    private Vector children;

    public TreeNode(Vector inputAlphabet, Vector inputData)
    {
        // the alphabet and data have already been parsed in
        this.alphabet = inputAlphabet;
        this.data = inputData;
        System.out.println("Tree node constructed with data " + displayVectorOfStrings(this.data));
        children = new Vector(); // we now construct the Vector of children
        //createChildren();
    }

    public void createChildren()
    {
        // first check whether the current data length is the same as the
        // length of the alphabet; if it is, then stop here
        if (data.size() == alphabet.size()) return;
        // we loop through the alphabet we have: for each candidate in the
        // alphabet, attempt to create a new child TreeNode with a data Vector
        // that is the same as the data in this TreeNode plus the candidate we
        // are looking at, i.e. add available LHS
        // NB: do NOT add if the candidate is already in the data. {a,b,a} is wrong!
        // NB: we will check whether data is already in the tree at a later stage
        // NB: the new child TreeNode MUST be added to the children Vector,
        // or it will be lost
        for (int i = 0; i < alphabet.size(); i++)
        {
            String candidate = (String) alphabet.elementAt(i);
            if (data.contains(candidate)) continue;
            Vector childData = new Vector(data);
            childData.addElement(candidate);
            TreeNode child = new TreeNode(alphabet, childData);
            children.addElement(child);
            child.createChildren(); // recurse to build the subtree below this child
        }
    }

    public static String displayVectorOfStrings(Vector vector) // printing the Vector of Strings
    {
        String vectorText = "{";
        for (int i = 0; i < vector.size(); i++)
        {
            vectorText += (String) vector.elementAt(i) + ",";
        }
        // now we chop off the trailing comma, provided that the Vector is not
        // empty, and append the closing bracket
        if (vector.size() > 0) vectorText = vectorText.substring(0, vectorText.length() - 1);
        vectorText += "}";
        return vectorText;
    }

    public static void main(String[] args) // main method with the arguments passed in
    {
        // This main method can be thought of as OUTSIDE the TreeNode class. Although
        // it is in the file called TreeNode.java, it is only here for convenience.
        // This is the top level of the program, where everything starts. It is here
        // that the alphabet is hardcoded and the tree root is constructed.
        displayHeading();
        Vector alphabet = new Vector(); // constructing an initial alphabet,
        alphabet.addElement("a");       // which will eventually come from the database
        alphabet.addElement("b");
        alphabet.addElement("c");
        alphabet.addElement("d");
        System.out.println("Alphabet set to be " + TreeNode.displayVectorOfStrings(alphabet));
        // the root's initial data is an empty Vector...
        Vector initialData = new Vector();
        // ...so the root can now be constructed
        TreeNode treeRoot = new TreeNode(alphabet, initialData);
    }

    public static void displayHeading()
    {
        System.out.println();
        System.out.println(INFO_TEXT);
    }
}
I don't remember from which site I took this material.
Read this to see how to find all possible combinations of a set {a,b,c,d}:
Consider three elements A, B, and C. It turns out that for n distinct elements there are n! permutations possible. For these three elements all of the possible permutations are listed below:
ABC, ACB, BAC, BCA, CAB, CBA
Since there are 3 elements there are 3! = 6 permutations possible.
Suppose instead of A, B, and C you have a deck of 52 playing cards. There are 52! permutations possible (i.e., the number of different ways the cards could be shuffled). Since 52! ≈ 8x10^67, you will need to play a lot of cards before you have seen every permutation of the deck! Since shuffling the deck is a random process (or it should be), each of these 52! permutations is equally likely to occur.
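These counts are easy to verify programmatically. The sketch below is an illustrative addition, not part of the original thread (the class name PermutationCount is made up); it computes n! with BigInteger so the 52-card figure does not overflow:

```java
import java.math.BigInteger;

public class PermutationCount {
    // n! = n * (n-1) * ... * 1: the number of permutations of n distinct elements
    static BigInteger factorial(int n) {
        BigInteger result = BigInteger.ONE;
        for (int i = 2; i <= n; i++) {
            result = result.multiply(BigInteger.valueOf(i));
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(factorial(3));  // 6 permutations of {A, B, C}
        System.out.println(factorial(52)); // a 68-digit number, roughly 8x10^67
    }
}
```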
Developing a non-recursive algorithm to generate all permutations of n elements is a non-trivial task. However, developing a recursive algorithm to do this requires only modest effort and this is what we will do next.
1. Let E = {e1, e2, ..., en } denote the set of n elements whose permutations we are to generate.
2. Let Ei be the set obtained by removing element i from E.
3. Let perm(X) denote the permutations of the elements in the set X.
4. Let ei.perm(X) denote the permutation list obtained by prefixing each permutation in perm(X) with element ei.
Consider the following example, which is provided to clarify the definitions given above:
If E = {a, b, c}, then E1 = {b, c} since the element in position 1 has been removed from E to create E1. Perm(E1) = (bc, cb) since these are the only two possible permutations of the two elements which appear in E1. Finally, e1.perm(E1) = (abc, acb) which should be apparent from the value of e1 and perm(E1) since the element a is simply used as a prefix to each permutation in perm(E1).
Given these definitions, we set the recursion base case when n = 1. Since only one permutation is possible for one element, perm(E) = (e) where e is the lone element in E. When n > 1, perm(E) is the list e1.perm(E1) followed by e2.perm(E2) followed by e3.perm(E3) ... followed by en.perm(En). This recursive definition of perm(E) defines perm(E) in terms of n perm(X)s, each of which involves an X with n - 1 elements. Thus both the base component and the recursive component (of a recursive algorithm) have been established and we thus have a complete recursive technique to generate the permutations.
The following Java method is an implementation of this recursive definition of perm(E). This method will output all permutations whose prefix is list[0:k-1] and whose suffixes are the permutations of list[k:m]. By invoking the method with perm(list, 0, n-1), all n! permutations of list[0:n-1] will be produced. [This is because this invocation will set k = 0 and m = n - 1, so the prefix of the generated permutations is null and their suffixes are the permutations of list[0:n-1].] When k = m, there is only one suffix, which is list[m], and list[0:m] defines a permutation that is to be produced. When k < m, the else clause is executed.
In the algorithm, let E denote the elements in list[k:m] and let Ei be the set obtained by removing ei = list[i] from E. The first swapping sequence in the for-loop has the effect of setting list[k] = ei and list[k+1:m] = Ei. Therefore, the next statement, which is the call to perm, computes ei.perm(Ei). The final swapping sequence restores list[k:m] to its state prior to the first swapping sequence (the state it was in before the recursive call occurred).
public static void perm(Object[] list, int k, int m)
{   // generate all permutations of list[k:m]
    int i;
    Object temp;
    if (k == m)
    {   // list has one permutation, so output it
        for (i = 0; i <= m; i++)
            System.out.print(list[i]);
        System.out.println();
    }
    else
    {   // list has more than one permutation, so generate them recursively
        for (i = k; i <= m; i++)
        {   temp = list[k]; // these next three lines are a simple swap
            list[k] = list[i];
            list[i] = temp;
            perm(list, k + 1, m); // the recursive call
            temp = list[k]; // reset the order in the list
            list[k] = list[i];
            list[i] = temp;
        } // end for-loop
    }
} // end method perm
See the program pasted below, which prints all possible permutations of the set {a,b,c,d}:
public class Perm
{
    public Perm() { }

    public static void main(String[] args)
    {
        String s = "abcd";
        char[] chars = s.toCharArray();
        perm(chars, 0);
    }

    public static void perm(char[] list, final int k)
    {
        int i;
        char temp;
        if (k == list.length - 1) // list has one permutation, so output it
        {
            for (i = 0; i <= list.length - 1; i++)
                System.out.print(list[i]);
            System.out.println();
        }
        else // list has more than one permutation, so generate them recursively
        {
            for (i = k; i <= list.length - 1; i++)
            {
                temp = list[k]; // these next three lines are a simple swap
                list[k] = list[i];
                list[i] = temp;
                perm(list, k + 1); // the recursive call
                temp = list[k]; // reset the order in the list
                list[k] = list[i];
                list[i] = temp;
            } // end for-loop
        }
    }
}
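One way to sanity-check the generator (a sketch of my own, using a made-up PermCheck class, not part of the original post) is to collect the output into a Set rather than printing it, and confirm that 4 distinct elements yield exactly 4! = 24 distinct permutations:

```java
import java.util.HashSet;
import java.util.Set;

public class PermCheck {
    static Set<String> results = new HashSet<String>();

    // same swap-and-recurse structure as perm above,
    // but each completed permutation is collected instead of printed
    static void collect(char[] list, int k) {
        if (k == list.length - 1) {
            results.add(new String(list));
        } else {
            for (int i = k; i <= list.length - 1; i++) {
                char temp = list[k];  // swap list[k] and list[i]
                list[k] = list[i];
                list[i] = temp;
                collect(list, k + 1); // the recursive call
                temp = list[k];       // restore the original order
                list[k] = list[i];
                list[i] = temp;
            }
        }
    }

    public static void main(String[] args) {
        collect("abcd".toCharArray(), 0);
        System.out.println(results.size()); // prints 24, i.e. 4!
    }
}
```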