Synchronous writes
I'm looking for help to find out the correct way to determine when a process is doing synchronous writes to files.
As far as I understand, synchronous writes happen when you open a file with O_SYNC or O_DSYNC and perform a write call.
I have created a small C test with O_SYNC:
write_sync.c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>
int main(void)
{
    srand(time(NULL));
    char *filename = "test.txt";
    char buf[12]; /* big enough for any int plus the NUL terminator */
    snprintf(buf, sizeof(buf), "%d", rand());
    int fd = open(filename, O_WRONLY | O_CREAT | O_SYNC, S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH);
    if (fd < 0) { perror("open"); return 1; }
    write(fd, buf, strlen(buf));
    close(fd);
    return 0;
}
DTrace shows this on io:::start:
dtrace -n 'io:::start / execname == "write_sync"/ {printf("%s %d %d %s",execname,args[0]->b_flags,args[0]->b_bufsize,args[2]->fi_pathname);}'
dtrace: description 'io:::start ' matched 6 probes
CPU ID FUNCTION:NAME
11 3381 bdev_strategy:start write_sync 524561 1024 /export/home/user/test.txt
11 3381 bdev_strategy:start write_sync 257 0 <none>
11 3381 bdev_strategy:start write_sync 257 0 <none>
11 3381 bdev_strategy:start write_sync 16777489 0 /export/home/user/test.txt
11 3381 bdev_strategy:start write_sync 16777489 0 /export/home/user/test.txt
11 3381 bdev_strategy:start write_sync 257 0 <none>
11 3381 bdev_strategy:start write_sync 257 0 <none>
11 3381 bdev_strategy:start write_sync 257 0 <none>
11 3381 bdev_strategy:start write_sync 16777473 0 <none>
11 3381 bdev_strategy:start write_sync 16777473 0 <none>
As I see it, the buffer size shows only in the first line (1024), and that could be the write I am looking for. So b_flags for this write is 524561, which according to /usr/include/sys/buf.h decomposes to:
0x0001 B_BUSY -> not on av_forw/back list
0x0010 B_PAGEIO -> do I/O to pages on bp->p_pages
0x0100 B_WRITE -> non-read pseudo-flag
0x080000 B_NOCACHE -> don't cache block when released
So if this is true, then you can find all sync writes with:
dtrace -n 'io:::start / args[0]->b_flags == 524561 / {@Z[execname,args[2]->fi_pathname]=count();}'
This is as far as I got, and I'm not really sure if this is the correct way.
P.S. sorry for my english
Edited by: 811258 on Jan 27, 2012 2:23 AM
Similar Messages
-
Does Concurrent Data Store use synchronous writes?
When specifying the DB_INIT_CDB flag in opening a dbenv (to use a Concurrent Data Store), you are unable to specify any other flags except DB_INIT_MPOOL. Does this mean that logging and transactions are not enabled, and in turn that db does not use synchronous disk writes? It would be great if someone could confirm...
Hi,
Indeed, when setting up CDS (Concurrent Data Store) the only other subsystem you may initialize is the shared memory buffer pool subsystem (DB_INIT_MPOOL). CDS suits applications where there is no need for full recoverability or transaction semantics, and where you need support for deadlock-free, multiple-reader/single-writer access to the database.
You will not initialize the transaction subsystem (DB_INIT_TXN) nor the logging subsystem (DB_INIT_LOG). Note that you cannot specify recovery configuration flags when opening the environment with DB_INIT_CDB (DB_INIT_RECOVER or DB_RECOVER_FATAL).
I assume that by synchronous/asynchronous writes you're referring to the possibility of using DB_TXN_NOSYNC and DB_TXN_WRITE_NOSYNC for transactions in TDS applications to influence the default behavior (DB_TXN_SYNC) when committing a transaction (which is to synchronously flush the log when the transaction commits). Since in a CDS set up there is no log buffer, no logs or transactions these flags do not apply.
The only aspect pertaining to writes in CDS applications that needs discussion is flushing the cache. Flushing the cache (or database cache) - DB->sync(), DB->close(), DB_ENV->memp_sync() - ensures that any changes made to the database are written to disk (stable storage). So, you could say that since there are no durability guarantees, including recoverability after failure, disk writes in CDS applications are not synchronous (they do not reach stable storage until you explicitly flush the environment/database cache).
More information on CDS applications is here:
[http://www.oracle.com/technology/documentation/berkeley-db/db/programmer_reference/cam.html]
Regards,
Andrei -
Commit Handling for Synchronous Write Scenarios
Hello gurus,
I have to use synchronous interfaces between SAP and a custom UI application. SAP will be both client and server in different scenarios. I would like to know how I can achieve consistency for write operations with commit handling mechanisms, both for PI 7.0 and PI 7.1. Proxies will be used on the SAP side.
All helpful hints/links/docs will be very much appreciated and rewarded.
Thank you for your input.
Gökhan
Hi Gökhan,
I believe you refer to the server proxy coding where the write operations are performed.
If you are using an ABAP server proxy, then the coding should be similar to the way you handle the SAP LUW in any ABAP program. In the method of your ABAP proxy class, you can group several BAPI calls or changes to several tables into one LUW and just do the commit at the very end.
You need to create update FMs to update your tables and call them in update task mode.
For example.
LOOP AT <your incoming message table>.
  CALL <BAPI> IN UPDATE TASK.
  CALL <custom FM to update tables> IN UPDATE TASK.
ENDLOOP.
CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'.
Regards,
Lim... -
Synchronous writes using FileAdapter
Hi all,
I'm new to this SOA and BPEL stuff so I'm probably missing something obvious.
I'm writing out a file using a File adapter. This works fine.
The problem is that the invoke calls the FileAdapter as an async process. After the file is written, a second (non-SOA/BPEL) background process runs to do some other processing on the file. However, with it being asynchronous, we run into a situation where the second process can begin processing a file before it has been completely written out.
How do we go about preventing this situation or at the very least is there a way to make the file adapter invoke synchronous?
Thanks!
There is a way to activate the processing of a file using a trigger file;
you should create the trigger file only after completion of the creation of the main file
consumption of the main file is activated by the trigger file
hope it helps -
Oracle gives Log read is SYNCHRONOUS though disk_asynch_io is enabled!
Hi,
We have an Oracle 11.2.0.2.0 64-bit database on a SuSE 10 server (with NFS mounts), and in some trace file I found the following message:
Log read is SYNCHRONOUS though disk_asynch_io is enabled!
I'm puzzled because async I/O is enabled:
filesystemio_options is set to SETALL and disk_asynch_io is set to TRUE. When I query:
select file_no,filetype_name,asynch_io from v$iostat_file;
I get the following:
FILE_NO FILETYPE_NAME ASYNCH_IO
0 Other ASYNC_OFF
0 Control File ASYNC_OFF
0 Log File ASYNC_OFF
0 Archive Log ASYNC_OFF
0 Data File Backup ASYNC_OFF
0 Data File Incremental Backup ASYNC_OFF
0 Archive Log Backup ASYNC_OFF
0 Data File Copy ASYNC_OFF
0 Flashback Log ASYNC_OFF
0 Data Pump Dump File ASYNC_OFF
all datafiles: async_on!
The NFS mount options for all filesystems are correct: rw,v3,rsize=32768,wsize=32768,hard,nolock,timeo=600,proto=tcp,sec=sys
Is I/O for the log file always synchronous? The reason for my question is that we see from time to time the following AWR information:
Top 5 Timed Foreground Events
Event Waits Time(s) Avg wait (ms) % DB time Wait Class
log file sync 62,576 32,908 526 67.98 Commit
db file sequential read 539,144 11,033 20 22.79 User I/O
DB CPU 3,800 7.85
enq: TX - row lock contention 375 693 1848 1.43 Application
Log archive I/O 667 59 89 0.12 System I/O
During this high log file sync users complain about bad performance.
Any ideas?
regards,
Ivan
>
I think this is correct. The concept behind commit processing, where your session must hang following commit until the logfile write is complete, would seem to require synchronous writes. If the write were asynchronous, in effect Oracle could return a "commit complete" message before the redo was written to disc, so it would then be theoretically possible to lose a committed transaction.
>
Not necessarily. Though a commit's logical write must be "synchronous" or "assured", it does not necessarily have to use synchronous I/O calls for that.
For example, Log Writer can issue 3 asynchronous writes to 3 logfiles simultaneously and then wait (or do other work, like populating the next buffer) until all 3 finish; if all 3 complete successfully, it returns control to the user process.
From a user process perspective the commit still looks solid and "synchronous". But here Log Writer benefits from a simultaneous parallel write to 3 disks, and can meanwhile do other work with buffers. In other words, it is up to 3 times faster than doing the same thing in 3 sequential synchronous I/O calls.
To prove my point I ran strace on the lgwr process and ran some quite big transactions.
[root@oracledev ~]# ps -ef|grep lgwr
# root 525 32728 0 13:41 pts/2 00:00:00 grep lgwr
oraerd 7436 1 0 Jan27 ? 00:08:26 ora_lgwr_ERD
# strace -p 7436
io_submit(20971520, 2, {{...}, {...}}) = 2
io_getevents(20971520, 2, 1024, {{...}, {...}}, {...}) = 2
times(NULL) = 749695972
times(NULL) = 749695972
times(NULL) = 749695972
semctl(294917, 96, SETVAL, 0xfff0a600) = 0
times(NULL) = 749695972
times(NULL) = 749695972
times(NULL) = 749695972
io_submit(20971520, 2, {{...}, {...}}) = 2
io_getevents(20971520, 2, 1024, {{...}, {...}}, {...}) = 2
times(NULL) = 749695973
times(NULL) = 749695973
times(NULL) = 749695973
semctl(294917, 88, SETVAL, 0xfff0a600) = 0
times(NULL) = 749695973
times(NULL) = 749695973
semtimedop(294917, 0xfff0c8e0, 1, {...}) = 0
...Note the io_submit() and io_getevents() calls - that is async I/O.
I did not see other I/O calls like write().
Oracle 10.2.0.5 on Linux on IBM POWER
uname -a
Linux oracledev 2.6.18-308.1.1.el5 #1 SMP Fri Feb 17 16:51:00 EST 2012 ppc64 ppc64 ppc64 GNU/Linux
Edited by: Mark Malakanov (user11181920) on Mar 5, 2013 1:41 PM -
Serial write takes unexpectedly long when more than 7 bytes are written
Hi,
My vi is attached.
As you see, it's very simple.
- output buffer fifo is set to 128 bytes, which is generously higher than my needs.
- my baudrate is 2.5 mpbs.
- I write a string of 9 bytes such as 012345678, and the execution time of the vi is around 40 us.
I thought this was because of the blocking nature of the synchronous write, and I decided to switch to asynchronous write, since I need to go above 20 kHz.
- when I switch to asynchronous write, it even gets worse, and I get ~58 us. It seems like asynchronous doesn't work at all.
so far, I explained my problem. I also did some simple experiments to debug the problem.
- when my string is shorter than 8 bytes, everything is beautiful; an asynchronous write takes nearly 15 us.
When I enter a string of 8 bytes or longer, it jumps up to 58 us again.
What am I doing wrong? I'm stuck here.
Gorkem Secer.
Attachments:
serialWrite_niForum_pic.png 19 KB
The driver might, for a lot of reasons, not want to or even be unable to fill up the 8-byte hardware FIFO buffer entirely. This could be, for instance, because it has to work around bugs in certain hardware. It might not be necessary for the specific hardware revision in your system, but that driver has to work for many different hardware systems.
The magnitude of timing control you try to achieve is simply beyond a software system if you require reliable and hard timings. It may be possible to achieve on a simpler but still powerful embedded system with custom made software drivers and RT OS but not on a more general purpose RT OS even if the hardware is pretty powerful. But such more custom made solutions would be more than a few magnitudes more expensive to develop.
You can keep barking up this tree but it is unlikely that NI can do much about it without redesigning parts of the RT system, which is pretty much out of question as they simply license it from Ardence/IntervalZero and only adapt it where it is strictly necessary to work with their hardware. Most likely their license doesn't even allow them to customize it at will in any other way than is strictly necessary to get it to work on their own hardware.
Your options are as far as I can see, to either rethink the timing requirements or adapt the software in such a way that the bigger delay won't be a problem or to go with a hardware solution based on an FPGA board or similar.
As to the difference between asynchronous write and synchronous: that is mostly about which VISA API is called underneath. The LabVIEW function remains blocking for as long as is necessary to pass the data to the OS driver. In synchronous mode the LabVIEW VI calls the synchronous VISA API once, and that simply waits until VISA returns from the function. In the asynchronous case, LabVIEW calls the asynchronous VISA function and then keeps looping inside its own cooperative multithreading layer until VISA indicates that the asynchronous function has completed. This is mostly for historical reasons, from when LabVIEW didn't have OS-supported multithreading and all the multithreading happened in the cooperative LabVIEW code scheduler. Nowadays asynchronous VISA mode has almost no benefits anymore but will generally cause significantly more CPU load.
Rolf Kalbermatter
CIT Engineering Netherlands
a division of Test & Measurement Solutions -
Colocation of client and key owner for writes
Hi,
In a distributed, replicated cache (1 backup copy) does the write to the backup owner for the key happen in parallel with the write to the primary owner? Or does the primary owner issue a synchronous update to the backup owner for the key? Other?
We are considering routing HTTP requests to the primary owner of a key to keep the primary update local and therefore only incur latency for the synchronous write to the backup owner. Would we see any benefit from doing this?
We do realize that primary owners can move with removal and/or addition of nodes and only see this as an optimization, not a requirement.
We are using Coherence version 3.3.
Thanks.
Bob,
> I don't think you provided all relevant information
> about this. What key and data are we speaking about?
> What would do the directing of the http request?
The key and data are arbitrary, but it could be a set of keys assigned to the same partition (using Partition Affinity) that collectively represent a session. The request would be directed by a network device (e.g. F5).
What you outline here depends on and assumes a couple of things:
1. The load-balancer is aware of key ownership in a partitioned cache service and also does failover based on this. I am not sure if any hardware load-balancer is capable of doing that. Also, in that case the load-balancer would need an additional Coherence licence.
2. The load-balancer is able to translate or extract the key data from the HTTP request / its observations of the previous requests in the HTTP session. Again, I find it unlikely, but a little bit more doable.
3. As you say that the load-balancer directs the request to the specific node owning the key, that would mean that your Webapp nodes and only they would be the storage-enabled nodes for the data you refer on.
This (among others) means
- that you won't be able to bring down the JVM of the web containers without losing access to the cached data.
- your Coherence nodes have to be running inside the web container JVMs, which possibly means additional licence costs (for other software in the web container) if the web container is not free
Even if there are such load-balancers (are there any?), are you sure you really want to do the things outlined above?
I think it is much more sensible to have a web container tier which has nothing to do with Coherence, and this way the load-balancer does not have to be Coherence-aware, and it allows you to be more flexible in the system architecture and the Coherence features you use, and not be constrained by what the load-balancer's Coherence features (if such a load-balancer exists at all) direct you to.
> In a partitioned cache service (distributed or near
> cache) the write is always communicated to the owner
> of the primary copy of the partition containing the
> relevant key, and that node communicates the change
> to all backups. This is done synchronously from the
> client's viewpoint, meaning that this is all carried
> out before the put method call returns on the
> client.
From your reply, I believe we would be reducing latency on our updates by routing the requests to the node with the partition which owns the set of keys.
It theoretically would, however I am not really sure that it is possible to do that from a load-balancer, or whether it is worth doing at all.
Are we really speaking about HTTP requests?
BR,
Robert -
Multithreaded write support in Master
1. We have a design wherein we write synchronously at the master level. Since a synchronous write is a
bottleneck in a multithreaded environment, we have removed the synchronous write,
which led to too many DB_LOCK_NOTGRANTED exceptions at the master.
Note: write transaction timeout = 10secs ( which is too high)
Below PseudoCode will help in understanding above point.
HttpController
requestHandler()
synchronous
adapter.write(entity);
//the adapter writes data to master inside a transaction with timeout value
10secs.
The questions are
a) Does BDB support multithreaded write at master?
b) If yes, is it something configurable?
2. In our new design, we have created asynchronous threads (3 threads with a queue capacity of
20k) at the master level. In production, the 8-13 replicas will each have asynchronous threads (3 threads
with a queue capacity of 200k). What is the optimum number of threads that we can have at the
master level, and what queue capacity?
For eg.
HttpController
requestHandler()
aynchronousthreadexecutor.execute(entity, adapter);
//asynchronous thread with 3 threads, 20k queueCapacity
}
Corrigendum: Added information about the BDB version and a few more helpful details for this context.
Hi There,
We have the following questions w.r.t Multi threaded write at master.
1. We have a design wherein we write synchronously at the master level. Since a synchronous write is a bottleneck
in a multithreaded environment, we have removed the synchronous write,
which led to too many DB_LOCK_NOTGRANTED exceptions at the master.
Note: write transaction timeout = 10secs ( which is too high)
Below PseudoCode will help in understanding above point.
HttpController
requestHandler()
synchronous
adapter.write(entity);
//the adapter writes data to master inside a transaction with timeout value 10secs.
The questions are
a) Does BDB support multithreaded write at master?
b) If yes, is it something configurable?
2. In our new design, we have created asynchronous threads (3, with a queue capacity of 20k) at the
master level. Each replica will have asynchronous threads (3, with a queue capacity of 200k). In production the replica count could
go up to 13.
a) What is the optimum number of threads that we can have at the master level
b) What would be ideal value qeueCapacity for the Executor (we use Spring Thread Executor)?
For eg.
HttpController
requestHandler()
aynchronousthreadexecutor.execute(entity, adapter);
//asynchronous thread with 3 threads, 20k queueCapacity
Thanks. -
I have tried all the workarounds I can find on the NI website, such as
1) Setting the NI-MinimumBufferSize to 10,000, or
2) Using the TDMS Advanced Synchronous Write along with other Advanced TDMS functions instead of standard TDMS write
However, the RT free memory still diminishes gradually while a TDMS file is being written. It is also noted that the drive space available on /C/ is reduced as well. Due to the reduced free RT memory, an FPGA Memory Full error occurs after recording 4~5 large TDMS files (100 to 200 MB). Can anyone help with this issue?
More details on the system and the program I am working on:
1) LabVIEW 2014 Professional Development
2) cRIO 9068 + 2 NI 9234 modules for triggered Sound & Vibration measurement
3) 8 Channels at 51200 S/s
4) FPGA buffer size 8191
5) RT buffer depth 5*8*51200
6) LabVIEW code is simply based on the LabVIEW sample project: LabVIEW FPGA Waveform Acquisition and Logging on CompactRIO
Regards,
JJDFRD
Hi Kyle,
Thank you for the work you've done on this.
I'm not 100% sure where we should be putting the TDMS Flush function. I've attached a modified Waveform Acquisition and Logging on CompactRIO sample project based on a cRIO 9068 and 9234, as JJDFRD is using, and I still see the physical memory declining with each TDMS write until it gets down to about 4 MB (reserved for the system). Funnily enough, it seems to work fine and continues to log and write even after it reaches this bottom threshold. If I change the Free Space constant to delete files when the disk space reaches say 30% rather than the default value of 70%, I can see the physical memory bounces back after a TDMS write and older TDMS files are deleted.
If you could please confirm where we should put the TDMS Flush function, this would be very helpful, and JJDFRD can try it out to see if it fixes his problem.
Kind Regards,
Stuart Little
Applications Engineering -
NFS write performance 6 times slower than read
Hi all,
I built myself a new home server and want to use NFS to export stuff to the clients. The problem is that I get a big difference between writing to and reading from the share. Everything is connected by GBit network, and raw network speed is fine.
Reading on the clients yields about 31 MByte/s, which is almost the native speed of the disks (which are LUKS-encrypted). But writing to the share gives only about 5.1 MByte/s in the best case. Writing to the disks internally gives about 30 MByte/s too. Also, writing with unencrypted rsync from the client to the server gives about 25-30 MByte/s, so it is definitely not a network or disk problem. So I wonder if there is anything I could do to improve the write performance of my NFS shares. Here is the config which gives the best results so far:
Server-Side:
/etc/exports
/mnt/data 192.168.0.0/24(rw,async,no_subtree_check,crossmnt,fsid=0)
/mnt/udata 192.168.0.0/24(rw,async,no_subtree_check,crossmnt,fsid=1)
/etc/conf.d/nfs-server.conf
NFSD_OPTS=""
NFSD_COUNT="32"
PROCNFSD_MOUNTPOINT=""
PROCNFSD_MOUNTOPTS=""
MOUNTD_OPTS="--no-nfs-version 1 --no-nfs-version 2"
NEED_SVCGSSD=""
SVCGSSD_OPTS=""
Client-Side:
/etc/fstab
192.168.0.1:/mnt/data /mnt/NFS nfs rsize=32768,wsize=32768,intr,noatime 0 0
Additional Infos:
NFS to the unencrypted /mnt/udata gives about 20MByte/s reading and 10MByte/s writing.
Internal Speed of the discs is about 37-38MByte/s reading/writing for the encrypted one, and 44-45MByte/s for the unencrypted (notebook-hdd)
I noticed that the load average on the server goes over 10 while the CPU stays at 10-20%.
So if anyone has any idea what might go wrong here please let me know. If you need more information I will gladly provide it.
TIA
seiichiro0185
Last edited by seiichiro0185 (2010-02-06 13:05:23)
Your rsize and wsize look way too big. I just use the defaults and it runs fine.
I don't know what your server is but I plucked this from BSD Magazine.
There is one point worth mentioning here, modern Linux usually uses wsize and rsize 8192 by default and that can cause problems with BSD servers as many support only wsize and rsize 1024. I suggest you add the option -o wsize=1024,rsize=1024 when you mount the share on your Linux machines.
You also might want to check here for some optimisations http://www.linuxselfhelp.com/howtos/NFS … WTO-4.html
A trick to increase NFS write performance is to disable synchronous writes on the server. The NFS specification states that NFS write requests shall not be considered finished before the data written is on a non-volatile medium (normally the disk). This restricts the write performance somewhat, asynchronous writes will speed NFS writes up. The Linux nfsd has never done synchronous writes since the Linux file system implementation does not lend itself to this, but on non-Linux servers you can increase the performance this way with this in your exports file:
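For completeness, the knob the HOWTO is referring to is the async export option; the format matches the /etc/exports lines shown earlier in this thread (path and subnet are examples):

```
/mnt/data 192.168.0.0/24(rw,async,no_subtree_check)
```

Note that async trades durability for speed: writes acknowledged to the client may still be lost if the server crashes before the data reaches disk.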
Last edited by sand_man (2010-03-03 00:23:23) -
Solaris Enable DirectIO - JMS Store Write Policy
In the documentation for the JMS backing store it says that Direct-Write is
faster except in highly parallel situations.
We have a Veritas filesystem and have enabled Direct I/O, so there is no buffer
between a write call and the disk. We use EMC as our disk subsystem.
Question is?
Which synchronous write policy would be the fastest, or can we disable it?
Every write will flush to disk automatically, and we have found this faster
when using products like Berkeley DB as a backing store.
Currently we use Cache-Flush but that was before we enabled direct-io
Thanks for your help
Tom Barnes <> wrote in news:[email protected]:
> Hi Larry,
>
> - As with any database or JMS vendor, synchronous disk writes are
> required for transactionality - no matter what the underlying disk
> subsystem. Some disk subsystems will reliably cache synchronous
> writes in replicated and/or battery backed caches, so that they do not
> always have to result in physical disk write. File stores provide
> two synchronous write options: Cache-Flush and Direct-Write.
>
> - You can set disk write policy to "Disabled", but this disables
> synchronous writes and risks data loss on power failure or if the O/S
> itself crashes. In essence, when you set "Disabled" it enables the
> operating system to cache disk-writes and perform them lazily at a
> later time. This is an unsafe mode that's often the default behavior
> in open source and is also the default behavior in a couple commercial
> JMS vendors.
>
> - For versions 8.1 and earlier, the difference in perf between
> Cache-Flush and Direct-Write depends on the load (as you already
> mentioned) - where direct-write tends to win when concurrency is low.
> Also for versions 8.1SP4 and later, but prior to version 9.0, it can
> sometimes be faster to use a JDBC store than a file store.
>
> - For versions 9.0 and later, it's faster to use "Direct-Write". 9.0
> persistence performance is generally much much higher than 8.1, so if
> performance is an issue I highly recommend upgrading.
>
> Tom
>
Tom, using a Veritas filesystem with Direct I/O enabled, Unix does not cache
the data in the buffer cache; it is forced to write to disk for every
write call. Normally when you call fwrite or write, the OS buffers the
data, and it is up to the OS when it flushes to disk.
This is not the case when Direct-IO is enabled on a Veritas filesystem.
When you mention async I/O, do you mean that WebLogic writes the data asynchronously
to disk in the application code? If it does that, then I would agree this
would not be safe even with Direct I/O, because you don't sync the write
with the send to the queue.
We are going to move to 9.x but not in our next release.
An example using UFS Direct I/O
http://www.solarisinternals.com/wiki/index.php/Direct_I/O -
Weblogic 8.14: Stuck Threads during JMS - IO native write operations
G'day!
Hope somebody can advise a solution on the following...
Recently message loads have started to increase on our WebLogic 8.14 cluster (3 servers) and we have started to see an almost daily occurrence of stuck thread issues. From thread dumps we could see these were caused by jrockit/net/SocketNativeIO.write calls. We concluded we had reached the performance limit of the file-based JMS stores, so we moved these to database-based JMS stores. This certainly seemed to improve matters - but we have recently seen the issue return. Which is puzzling, as technically we thought all messages would be persisted to the database, and any file I/O on the application server would be eliminated. Our database resides on a separate dedicated Oracle server. We tracked a recent cause to a large message (200K) which caused the stuck thread - resulting in a build-up of pending messages. After bouncing the server the message processed OK. Any thoughts on how to overcome these issues would be greatly appreciated. Here's some detail on the stuck thread from the thread dump: (+Why does weblogic use IO when the JMS store is on a database?+). All messages are persistent.
"ExecuteThread: '6' for queue: 'weblogic.kernel.Default'" prio=5 id=0xc00 pid=27544 active, daemon
at jrockit/net/SocketNativeIO.write(Native Method)@0xf3aeae10
at jrockit/net/SocketNativeIO.write(Unknown Source)@0xf3aeae9c
at jrockit/io/NativeIO.write(Unknown Source)@0xf3ae62c6
at java/net/AbstractSocketImpl$2.write(Unknown Source)@0xf3aeada4
at jrockit/io/NativeIOOutputStream.write(Unknown Source)@0xf3ae61ec
at jrockit/io/NativeIOOutputStream.write(Unknown Source)@0xf3ae618d
at java/io/DataOutputStream.write(Unknown Source)@0xf4e3ec25
^-- Holding lock: java/io/DataOutputStream@0x4309fc80[thin lock]
at com/ibm/mq/MQInternalCommunications.send(MQInternalCommunications.java:1022)@0xed34364a
^-- Holding lock: com/ibm/mq/MQInternalCommunications@0x4309f7e0[thin lock]
at com/ibm/mq/MQSESSIONClient.lowLevelComms(MQSESSIONClient.java:2832)@0xed342fbd
^-- Holding lock: java/lang/Integer@0x4309f8a0[thin lock]
at com/ibm/mq/MQSESSIONClient.MQPUT(MQSESSIONClient.java:3844)@0xed34271b
at com/ibm/mq/MQQueue.putMsg2(MQQueue.java:1486)@0xed342262
^-- Holding lock: com/ibm/mq/MQSPIQueue@0x4312ce80[thin lock]
at com/ibm/mq/jms/MQMessageProducer.sendInternal(MQMessageProducer.java:1569)@0xed33d1ab
at com/ibm/mq/jms/MQMessageProducer.send(MQMessageProducer.java:1012)@0xed860a49
at com/ibm/mq/jms/MQMessageProducer.send(MQMessageProducer.java:1046)@0xed8605f4
at weblogic/jms/adapter/JMSBaseConnection.sendInternal(JMSBaseConnection.java:667)@0xef7df0bf
^-- Holding lock: weblogic/jms/adapter/JMSBaseConnection@0x43093748[thin lock]
at weblogic/jms/adapter/JMSBaseConnection.access$200(JMSBaseConnection.java:80)@0xef7dee28
at weblogic/jms/adapter/JMSBaseConnection$6.run(JMSBaseConnection.java:647)@0xef7dee1d
at weblogic/security/acl/internal/AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)@0xf2fcbb88
at weblogic/security/service/SecurityManager.runAs(SecurityManager.java:147)@0xf2fcbb0a
at weblogic/jms/adapter/JMSBaseConnection.send(JMSBaseConnection.java:644)@0xef7decf3
at weblogic/jms/adapter/JMSConnectionHandle.send(JMSConnectionHandle.java:144)@0xef7dec7c
at jrockit/reflect/NativeMethodInvoker.invoke0(Native Method)@0xf4f10d10
at jrockit/reflect/NativeMethodInvoker.invoke(Unknown Source)@0xf4f10e98
at jrockit/reflect/VirtualNativeMethodInvoker.invoke(Unknown Source)@0xf4e86d70
at java/lang/reflect/Method.invoke(Unknown Source)@0xf4f104c3
at weblogic/connector/common/internal/ConnectionWrapper.invoke(ConnectionWrapper.java:149)@0xed96a498
at $Proxy14.send(Unknown Source)@0xef7dec04
at weblogic/jms/bridge/internal/MessagingBridge.onMessageInternal(MessagingBridge.java:1258)@0xed857b25
at weblogic/jms/bridge/internal/MessagingBridge.onMessage(MessagingBridge.java:1181)@0xed857864
at weblogic/jms/adapter/JMSBaseConnection$27.run(JMSBaseConnection.java:1943)@0xef7de623
at weblogic/security/acl/internal/AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)@0xf2fcbb88
at weblogic/security/service/SecurityManager.runAs(SecurityManager.java:147)@0xf2fcbb0a
at weblogic/jms/adapter/JMSBaseConnection.onMessage(JMSBaseConnection.java:1939)@0xef7de586
at weblogic/jms/client/JMSSession.onMessage(JMSSession.java:2678)@0xeee7e30c
at weblogic/jms/client/JMSSession.execute(JMSSession.java:2598)@0xeee7d25d
at weblogic/kernel/ExecuteThread.execute(ExecuteThread.java:219)@0xf4e14070
at weblogic/kernel/ExecuteThread.run(ExecuteThread.java:178)@0xf4e1fddb
at java/lang/Thread.startThreadFromVM(Unknown Source)@0xf4f3c9f3
Thanks
C.
I'm not sure what operating system you're running, but I only have experience of WLS8.1 on Windows.
We did extensive load testing of JMS before we went live, and found JDBC stores to be rubbish in comparison to the file store. When a message is persisted it has to be written to the database, and then when it is consumed it must be deleted from the table. This simply killed our system when we load tested it.
One thing we did do, though, was use a second JMS server and a second file store for one particularly heavy queue where we did have I/O issues. This was with a JMS distributed destination. We also use a synchronous write policy of Cache-Flush on the file store, and we have configured a paging store to page messages so the app server does not run out of memory.
200K isn't by any means a huge message, though, and shouldn't be any problem at all; we have 1MB messages quite regularly.
I notice from the stack trace that you have MQ Series as well. I'm not experienced with MQ, so I can't give any information there, but here is what I would try.
Using a JMS server with a JMS file store, develop a small Java client to write as many messages as possible to the queue and effectively stress the JMS subsystem, just to establish whether the file store is the problem. This should be done without a connection to MQ, and with large messages. If you don't get any issues and WebLogic JMS with the file store looks OK, then attach a consumer to the queue, which should also be a simple Java client. Then you have production and consumption going on, which will result in writes and reads/deletes against the JMS file store.
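The shape of that stress test can be sketched in plain Java. A real client would look up a ConnectionFactory and Queue via JNDI and use the javax.jms API against a running WebLogic broker; since that can't be shown self-contained here, this sketch (all class and constant names are illustrative, and an in-memory BlockingQueue stands in for the destination) only demonstrates the producer/consumer load pattern and the throughput measurement.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hedged sketch of the stress test described above. A real client would obtain
// a ConnectionFactory and Queue via JNDI and call sender.send(message); here an
// in-memory BlockingQueue stands in for the destination so the harness runs
// anywhere. PAYLOAD_BYTES mirrors the ~200K messages mentioned in the thread.
public class JmsStressSketch {
    static final int PAYLOAD_BYTES = 200 * 1024;

    // Pushes `count` messages through a producer thread and a consumer thread,
    // prints the elapsed time, and returns how many messages were consumed.
    static int moveMessages(int count) throws InterruptedException {
        BlockingQueue<byte[]> destination = new LinkedBlockingQueue<>();
        byte[] payload = new byte[PAYLOAD_BYTES];
        int[] received = {0};

        Thread producer = new Thread(() -> {
            for (int i = 0; i < count; i++) {
                destination.add(payload);   // stand-in for sender.send(msg)
            }
        });
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < count; i++) {
                    destination.take();     // stand-in for receiver.receive()
                    received[0]++;
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        long start = System.nanoTime();
        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("moved " + received[0] + " messages in " + elapsedMs + " ms");
        return received[0];
    }

    public static void main(String[] args) throws InterruptedException {
        moveMessages(10_000);
    }
}
```

Against the real file store the interesting number is messages per second with persistent delivery on, measured first without and then with the MQ bridge attached.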
Once you've stress tested JMS and you are satisfied that it's OK (or not), then add the MQ Series connection and see how that affects things.
You may well have already done all this, so please feel free to ignore me! I did a load of this kind of testing before we went live, trying to bash JMS to bits, but I didn't manage it! The only change we made was to add a second file store and a second JMS server.
I just looked back in my support cases and found we'd had a similar issue; disk fragmentation seemed to be one of the factors there too. The JMS file store was heavily fragmented. That's Windows for you, though!
Hope that helps.
Pete
ORA 04030 Out of process memory error
Dear experts,
I know there are multiple discussions around this error, and I have been reading through most of them over the past week or so, but it looks like we are running out of options or are missing something altogether. OK: we are getting ORA-04030 (out of process memory while trying to allocate ....) while one of our batch processes runs at night. It simply tries to insert/update a table. Our installation is 11.2.0.1.0 with no RAC configuration, on 64-bit AIX with 6 cores, 12 CPUs and 16 GB of memory.
We have checked that WORKAREA_SIZE_POLICY is set to AUTO, so Oracle decides at run time how much memory to allocate to the PGA based on demand. And based on the AWR report, it doesn't look like we are anywhere near a PGA deficit! I am including the AWR report here for your reference.
Also attached below are the configurations and the ulimit values.
IKBTRN1> show parameter workarea;
NAME TYPE VALUE
workarea_size_policy string AUTO
oraipeikbtrn1:/home/oracle-> ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 4194304
memory(kbytes) unlimited
coredump(blocks) unlimited
nofiles(descriptors) unlimited
threads(per process) unlimited
processes(per user) unlimited
Now, nothing seems to have contributed to the out-of-process-memory issue from the Oracle standpoint. I would be happy to be proved wrong here, if I am wrong.
So, what's going wrong? A possible memory leak which we cannot zero in on, an OS memory limit, or something else?
Seeking the experts' advice on this, and I sincerely appreciate your time in looking at it.
Thanks.
P.S - I am pasting the whole AWR report since there is no 'upload file' option here that I can see.
WORKLOAD REPOSITORY report for
DB Name DB Id Instance Inst num Startup Time Release RAC
IKBTRN1 54659199 IKBTRN1 1 06-Jun-11 02:06 11.2.0.1.0 NO
Host Name Platform CPUs Cores Sockets Memory (GB)
oraipeikbtrn1.******.com AIX-Based Systems (64-bit) 12 6 16.00
Snap Id Snap Time Sessions Cursors/Session
Begin Snap: 5952 26-Aug-11 03:00:48 34 2.0
End Snap: 5953 26-Aug-11 04:00:52 32 1.9
Elapsed: 60.07 (mins)
DB Time: 1.93 (mins)
Report Summary
Cache Sizes
Begin End
Buffer Cache: 1,056M 704M Std Block Size: 8K
Shared Pool Size: 3,456M 3,456M Log Buffer: 7,184K
Load Profile
Load Profile
Per Second Per Transaction Per Exec Per Call
DB Time(s): 0.0 2.0 0.02 0.02
DB CPU(s): 0.0 0.5 0.00 0.00
Redo size: 556.1 34,554.8
Logical reads: 151.4 9,407.6
Block changes: 1.9 119.8
Physical reads: 14.2 882.6
Physical writes: 9.5 590.4
User calls: 1.8 112.8
Parses: 1.5 93.7
Hard parses: 0.1 8.9
W/A MB processed: -0.1 -6.9
Logons: 0.0 1.6
Executes: 1.9 115.4
Rollbacks: 0.0 0.0
Transactions: 0.0
Instance Efficiency Percentages (Target 100%)
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 96.63 In-memory Sort %: 99.97
Library Hit %: 95.68 Soft Parse %: 90.49
Execute to Parse %: 18.74 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 57.23 % Non-Parse CPU: 86.28
Shared Pool Statistics
Begin End
Memory Usage %: 85.72 85.76
% SQL with executions>1: 93.91 96.66
% Memory for SQL w/exec>1: 89.07 87.04
Top 5 Timed Foreground Events
Event Waits Time(s) Avg wait (ms) % DB time Wait Class
DB CPU 29 24.66
db file scattered read 3,456 17 5 14.92 User I/O
db file sequential read 4,304 17 4 14.77 User I/O
direct path read temp 764 17 22 14.31 User I/O
direct path write temp 259 5 21 4.70 User I/O
Host CPU (CPUs: 12 Cores: 6 Sockets: )
Load Average Begin Load Average End %User %System %WIO %Idle
1.39 1.37 0.2 0.2 0.2 99.6
Instance CPU
%Total CPU %Busy CPU %DB time waiting for CPU (Resource Manager)
0.1 20.5 0.0
Memory Statistics
Begin End
Host Mem (MB): 16,384.0 16,384.0
SGA use (MB): 4,704.0 4,352.0
PGA use (MB): 196.1 188.4
% Host Mem used for SGA+PGA: 29.91 27.71
Main Report
• Report Summary
• Wait Events Statistics
• SQL Statistics
• Instance Activity Statistics
• IO Stats
• Buffer Pool Statistics
• Advisory Statistics
• Wait Statistics
• Undo Statistics
• Latch Statistics
• Segment Statistics
• Dictionary Cache Statistics
• Library Cache Statistics
• Memory Statistics
• Streams Statistics
• Resource Limit Statistics
• Shared Server Statistics
• init.ora Parameters
Back to Top
Wait Events Statistics
• Time Model Statistics
• Operating System Statistics
• Operating System Statistics - Detail
• Foreground Wait Class
• Foreground Wait Events
• Background Wait Events
• Wait Event Histogram
• Wait Event Histogram Detail (64 msec to 2 sec)
• Wait Event Histogram Detail (4 sec to 2 min)
• Wait Event Histogram Detail (4 min to 1 hr)
• Service Statistics
• Service Wait Class Stats
Back to Top
Time Model Statistics
• Total time in database user-calls (DB Time): 115.9s
• Statistics including the word "background" measure background process time, and so do not contribute to the DB time statistic
• Ordered by % or DB time desc, Statistic name
Statistic Name Time (s) % of DB Time
sql execute elapsed time 101.69 87.75
DB CPU 28.58 24.66
parse time elapsed 10.14 8.75
hard parse elapsed time 9.92 8.56
failed parse elapsed time 4.92 4.25
hard parse (sharing criteria) elapsed time 4.27 3.68
connection management call elapsed time 0.42 0.36
PL/SQL compilation elapsed time 0.34 0.30
PL/SQL execution elapsed time 0.18 0.15
sequence load elapsed time 0.00 0.00
repeated bind elapsed time 0.00 0.00
DB time 115.88
background elapsed time 86.01
background cpu time 5.06
Back to Wait Events Statistics
Back to Top
Operating System Statistics
• *TIME statistic values are diffed. All others display actual values. End Value is displayed if different
• ordered by statistic type (CPU Use, Virtual Memory, Hardware Config), Name
Statistic Value End Value
NUM_LCPUS 0
NUM_VCPUS 0
AVG_BUSY_TIME 1,260
AVG_IDLE_TIME 360,705
AVG_IOWAIT_TIME 534
AVG_SYS_TIME 483
AVG_USER_TIME 679
BUSY_TIME 16,405
IDLE_TIME 4,329,811
IOWAIT_TIME 7,284
SYS_TIME 7,092
USER_TIME 9,313
LOAD 1 1
OS_CPU_WAIT_TIME 503,900
PHYSICAL_MEMORY_BYTES 17,179,869,184
NUM_CPUS 12
NUM_CPU_CORES 6
GLOBAL_RECEIVE_SIZE_MAX 1,310,720
GLOBAL_SEND_SIZE_MAX 1,310,720
TCP_RECEIVE_SIZE_DEFAULT 16,384
TCP_RECEIVE_SIZE_MAX 9,223,372,036,854,775,807
TCP_RECEIVE_SIZE_MIN 4,096
TCP_SEND_SIZE_DEFAULT 16,384
TCP_SEND_SIZE_MAX 9,223,372,036,854,775,807
TCP_SEND_SIZE_MIN 4,096
Back to Wait Events Statistics
Back to Top
Operating System Statistics - Detail
Snap Time Load %busy %user %sys %idle %iowait
26-Aug 03:00:48 1.39
26-Aug 04:00:52 1.37 0.38 0.21 0.16 99.62 0.17
Back to Wait Events Statistics
Back to Top
Foreground Wait Class
• s - second, ms - millisecond - 1000th of a second
• ordered by wait time desc, waits desc
• %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
• Captured Time accounts for 78.2% of Total DB time 115.88 (s)
• Total FG Wait Time: 62.08 (s) DB CPU time: 28.58 (s)
Wait Class Waits %Time -outs Total Wait Time (s) Avg wait (ms) %DB time
User I/O 8,949 0 56 6 48.74
DB CPU 29 24.66
System I/O 1,916 0 3 1 2.18
Other 506 88 1 2 0.92
Configuration 2 50 1 500 0.86
Commit 37 0 1 18 0.56
Application 20 0 0 17 0.29
Network 4,792 0 0 0 0.01
Concurrency 1 0 0 0 0.00
Back to Wait Events Statistics
Back to Top
Foreground Wait Events
• s - second, ms - millisecond - 1000th of a second
• Only events with Total Wait Time (s) >= .001 are shown
• ordered by wait time desc, waits desc (idle events last)
• %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Event Waits %Time -outs Total Wait Time (s) Avg wait (ms) Waits /txn % DB time
db file scattered read 3,456 0 17 5 59.59 14.92
db file sequential read 4,304 0 17 4 74.21 14.77
direct path read temp 764 0 17 22 13.17 14.31
direct path write temp 259 0 5 21 4.47 4.70
control file sequential read 1,916 0 3 1 33.03 2.18
ADR block file read 38 0 1 28 0.66 0.92
log buffer space 2 50 1 500 0.03 0.86
log file sync 37 0 1 18 0.64 0.56
enq: RO - fast object reuse 14 0 0 24 0.24 0.29
local write wait 44 0 0 1 0.76 0.03
SQL*Net message to client 4,772 0 0 0 82.28 0.01
Disk file operations I/O 110 0 0 0 1.90 0.00
ADR block file write 7 0 0 0 0.12 0.00
SQL*Net message from client 4,773 0 15,396 3226 82.29
Streams AQ: waiting for messages in the queue 720 100 3,600 5000 12.41
Back to Wait Events Statistics
Back to Top
Background Wait Events
• ordered by wait time desc, waits desc (idle events last)
• Only events with Total Wait Time (s) >= .001 are shown
• %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Event Waits %Time -outs Total Wait Time (s) Avg wait (ms) Waits /txn % bg time
control file sequential read 4,950 0 35 7 85.34 40.74
control file parallel write 1,262 0 31 25 21.76 36.46
log file parallel write 383 0 4 10 6.60 4.37
db file parallel write 627 0 2 3 10.81 2.36
change tracking file synchronous read 56 0 2 34 0.97 2.21
os thread startup 17 0 1 88 0.29 1.74
ADR block file read 135 0 1 7 2.33 1.04
change tracking file synchronous write 56 0 1 15 0.97 0.98
SGA: allocation forcing component growth 8 100 1 100 0.14 0.93
db file sequential read 112 0 1 6 1.93 0.75
process diagnostic dump 94 0 0 1 1.62 0.09
ADR block file write 92 0 0 1 1.59 0.07
LGWR wait for redo copy 11 0 0 1 0.19 0.01
log file sync 2 0 0 3 0.03 0.01
ADR file lock 92 22 0 0 1.59 0.01
Parameter File I/O 24 0 0 0 0.41 0.01
direct path write 6 0 0 1 0.10 0.00
Disk file operations I/O 54 0 0 0 0.93 0.00
rdbms ipc message 17,637 97 61,836 3506 304.09
Streams AQ: waiting for time management or cleanup tasks 5 60 11,053 2210602 0.09
DIAG idle wait 7,203 100 7,203 1000 124.19
PX Idle Wait 1,802 100 3,604 2000 31.07
pmon timer 1,212 99 3,603 2973 20.90
Space Manager: slave idle wait 726 99 3,603 4963 12.52
smon timer 12 100 3,600 300004 0.21
Streams AQ: qmn slave idle wait 128 0 3,583 27993 2.21
Streams AQ: qmn coordinator idle wait 256 50 3,583 13996 4.41
SQL*Net message from client 293 0 2 5 5.05
Back to Wait Events Statistics
Back to Top
Wait Event Histogram
• Units for Total Waits column: K is 1000, M is 1000000, G is 1000000000
• % of Waits: value of .0 indicates value was <.05%; value of null is truly 0
• % of Waits: column heading of <=1s is truly <1024ms, >1s is truly >=1024ms
• Ordered by Event (idle events last)
% of Waits
Event Total Waits <1ms <2ms <4ms <8ms <16ms <32ms <=1s >1s
ADR block file read 173 80.3 5.2 2.3 5.8 1.7 4.6
ADR block file write 99 96.0 3.0 1.0
ADR file lock 102 100.0
Disk file operations I/O 165 100.0
LGWR wait for redo copy 11 90.9 9.1
Parameter File I/O 24 100.0
SGA: allocation forcing component growth 8 100.0
SQL*Net break/reset to client 6 100.0
SQL*Net message to client 4992 100.0
SQL*Net more data from client 20 100.0
asynch descriptor resize 541 100.0
change tracking file synchronous read 56 83.9 1.8 14.3
change tracking file synchronous write 56 80.4 7.1 1.8 10.7
control file parallel write 1262 80.3 1.7 .6 .6 .8 1.3 14.7
control file sequential read 6866 94.1 .9 .7 .7 .3 .4 2.9
db file parallel write 628 94.3 2.1 1.0 .8 .3 .3 1.3
db file scattered read 3457 72.6 7.2 5.4 6.9 5.7 .5 1.6
db file sequential read 4525 78.7 2.7 1.8 9.6 5.3 .4 1.5
direct path read temp 764 40.2 18.6 9.4 6.2 11.0 5.8 8.9
direct path sync 1 100.0
direct path write 6 83.3 16.7
direct path write temp 259 .4 1.2 88.8 .4 9.3
enq: RO - fast object reuse 14 42.9 42.9 7.1 7.1
latch free 1 100.0
latch: cache buffers lru chain 2 100.0
latch: checkpoint queue latch 2 100.0
latch: messages 2 100.0
latch: object queue header operation 2 100.0
latch: redo allocation 1 100.0
latch: row cache objects 1 100.0
local write wait 44 100.0
log buffer space 2 50.0 50.0
log file parallel write 383 92.4 .8 1.0 5.7
log file sync 39 82.1 2.6 2.6 12.8
os thread startup 17 100.0
process diagnostic dump 94 34.0 63.8 2.1
reliable message 7 100.0
utl_file I/O 12 100.0
DIAG idle wait 7204 100.0
PX Idle Wait 1802 100.0
SQL*Net message from client 5067 87.1 6.6 1.0 .5 .5 .1 .5 3.7
Space Manager: slave idle wait 726 .6 99.4
Streams AQ: qmn coordinator idle wait 256 49.2 .8 50.0
Streams AQ: qmn slave idle wait 128 100.0
Streams AQ: waiting for messages in the queue 721 100.0
Streams AQ: waiting for time management or cleanup tasks 5 40.0 20.0 40.0
class slave wait 17 100.0
pmon timer 1212 .9 99.1
rdbms ipc message 17.6K 1.8 .4 .2 .2 .1 .1 21.0 76.2
smon timer 12 100.0
Back to Wait Events Statistics
Back to Top
I couldn't add the rest of the report here since it is telling me I have exceeded 30,000 characters. If you want to see the full report, please email me at [email protected]
Unless your database is strictly a DSS-type database, your AWR report exposes a load of issues. In fact, almost none of the time during the AWR window was spent in the database: look at the DB time (with all those cores) compared with the elapsed time of the AWR snapshot.
As you are on 11g, why not make use of MEMORY_TARGET (a single parameter to manage both SGA and PGA)? If you are already using it, ignore this, as I can't see it anywhere in your report. If not, get rid of SGA_TARGET and PGA_AGGREGATE_TARGET and replace them with a single MEMORY_TARGET parameter. You may still want to set minimum thresholds for the individual SGA pools so that they won't shrink below those points.
Having said that, setting MEMORY_TARGET is no guarantee of avoiding ORA-4030. A single piece of bad PL/SQL code can exploit the untunable part of your process memory and even blow through physical memory. If you are using FORALL and BULK COLLECT, see if you can cut the work into a few chunks rather than running it as a single pass.
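The chunking idea can be sketched as follows. In PL/SQL the usual pattern is a loop over `FETCH cur BULK COLLECT INTO l_rows LIMIT n`, issuing one FORALL per fetched batch; this hypothetical Java stand-in (processInChunks and flushChunk are illustrative names, with flushChunk representing the real FORALL DML) shows the same idea: the limit caps how many rows are bound, and therefore held in process memory, at once.

```java
import java.util.List;
import java.util.function.Consumer;

// Hedged sketch of the chunking advice above. Rather than binding one huge
// collection in a single FORALL (which grows the untunable part of the PGA),
// the rows are flushed in fixed-size batches. flushChunk stands in for the
// real FORALL insert/update.
public class ChunkedBulkLoad {
    // Splits `rows` into batches of at most `limit` rows, hands each batch to
    // flushChunk, and returns how many batches were flushed.
    static <T> int processInChunks(List<T> rows, int limit, Consumer<List<T>> flushChunk) {
        int batches = 0;
        for (int from = 0; from < rows.size(); from += limit) {
            int to = Math.min(from + limit, rows.size());
            flushChunk.accept(rows.subList(from, to)); // one FORALL-sized batch
            batches++;
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> rows = new java.util.ArrayList<>();
        for (int i = 0; i < 10_500; i++) rows.add(i);
        int batches = processInChunks(rows, 1_000, chunk -> { /* FORALL ... INSERT here */ });
        System.out.println("flushed in " + batches + " batches"); // 10 full batches + 1 of 500
    }
}
```

The trade-off is more round trips in exchange for a bounded, predictable per-process memory footprint; a limit in the low thousands is a common starting point.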
What does your V$PGASTAT say?
MSI GeForce4 Ti-4200 128M 8X software
Here is the deal. The card install went well. The MSI driver installs, but the MSI Info and Clock tabs are inoperative. Upon selection I get an error message, app .... .dll error,
and when I install MSI Live Update it shows a VBIOS.dll error and will not run at all. The 3D Desktop application is also inoperative; it crashes the browser instantly. Is this software compatible with Windows XP Home SP1? If I have to go in and delete .dll registry entries, then I have defeated the purpose of the software/card package. That is the reason I chose MSI over Asus.
The card benchmarks well and works very well with the Nvidia Detonator drivers, which leads me to believe that this is a software problem. ?(
Windows XP SP1, DX9, Compaq motherboard, Athlon 1.2GHz, Volcano 7+ cooling, 400W PSU, 448MB of 133MHz RAM.
All help will be appreciated. 8)
SiSoftware Sandra
Computer System
Name : COMPUTER
User Name : default
Logon Domain : COMPUTER
Processor(s)
Model : AMD Athlon(tm) Processor
Speed : 1.20GHz
Model Number : 1334 (estimated)
Performance Rating : PR1759 (estimated)
L2 On-board Cache : 256kB ECC synchronous write-back
Mainboard and BIOS
Bus(es) : AGP PCI USB
MP Support : No
System BIOS : Compaq 786K1
Mainboard : Compaq 06E4h
System Chipset : VIA Technologies Inc VT8363/5 KT133/KM133 System Controller
Front Side Bus Speed : 2x 100MHz (200MHz data rate)
Installed Memory : 448MB SDRAM
Memory Bus Speed : 1x 100MHz (100MHz data rate)
Video System
Monitor/Panel : Philips 107S (107S2)
Adapter : NVIDIA GeForce4 Ti 4200 with AGP8X
Physical Storage Devices
Removable Drive : Floppy disk drive
Disk Drive : QUANTUM FIREBALLlct15 30
CD-ROM/DVD : CREATIVE CD5233E-N
CD-ROM/DVD : PHILIPS CDRW2412A
Logical Storage Devices
1.44MB 3.5" (A:) : N/A
Hard Disk (C:) : 25.2GB (8.9GB, 35% Free) (NTFS)
Hard Disk (D:) : 2.7GB (584MB, 21% Free) (NTFS)
CD-ROM/DVD (E:) : N/A
CD-ROM/DVD (F:) : N/A
Peripherals
Serial/Parallel Port(s) : 1 COM / 1 LPT
USB Controller/Hub : NEC PCI to USB Open Host Controller
USB Controller/Hub : NEC PCI to USB Open Host Controller
USB Controller/Hub : NEC PCI to USB Enhanced Host Controller (B1)
USB Controller/Hub : VIA Rev 5 or later USB Universal Host Controller
USB Controller/Hub : VIA Rev 5 or later USB Universal Host Controller
USB Controller/Hub : USB Root Hub
USB Controller/Hub : USB Root Hub
USB Controller/Hub : USB Root Hub
USB Controller/Hub : USB Root Hub
USB Controller/Hub : USB Root Hub
USB Controller/Hub : USB Printing Support
USB Controller/Hub : Generic USB Hub
USB Controller/Hub : Logitech USB Camera (Pro 3000)
USB Controller/Hub : USB Composite Device
Keyboard : Standard 101/102-Key or Microsoft Natural PS/2 Keyboard
Keyboard : HID Keyboard Device
Mouse : PS/2 Compatible Mouse
Human Interface : HID-compliant consumer control device
Human Interface : HID-compliant device
Human Interface : HID-compliant consumer control device
Human Interface : HID-compliant device
Human Interface : HID-compliant device
Human Interface : HID-compliant device
Human Interface : HID-compliant device
Human Interface : HID-compliant device
Human Interface : USB Human Interface Device
Human Interface : USB Human Interface Device
MultiMedia Device(s)
Device : SoundMAX Integrated Digital Audio
Printers and Faxes
Model : Lexmark Z22-Z32 Series
Model : Lexmark Z22-Z32 Color Jetprinter
Operating System(s)
Windows System : Microsoft Windows XP Home Ver 5.01.2600 Service Pack 1
Network Adapter(s)
Networking Installed : Yes
Adapter : SMC EZ Card 10/100 PCI (SMC1211 Series)
[865PE/G Neo2 Series] Memory timings and power settings anyone?
I'm not very good with computers, so I've just played around with the timings on my memory, but I have not found a stable setting as of yet.
The memory is brand new, and I've tested it for errors on my brother's computer; there's nothing wrong there.
Intensive hard-drive usage, DVD playing, and heavy downloading while listening to music are sure ways to crash my system. "..........Not less or equal" is the most common error message, but there are many, many more.
Oh, maybe I should point out that overheating is not an issue, because I have good fans and run my computer without the side panels.
I should also say that I have the first generation of P4 with HT and a 533MHz FSB.
I'm sure there's something I forgot to mention, but below I'll paste the system summary that Sandra gave me.
I would be forever grateful if someone could help me out with the memory timings and power settings for my board. Setting it to "auto by SPD" gives me a very unstable system, so no luck there.
I'm running the memory at 2.70V, and sometimes at 2.65V, since these two settings seem to produce the fewest crashes.
The memory installed is 4×512MB Kingston KVR333X64C25 PC2700.
The PSU is a Tagan TG480-U01 480W.
Many, many thanks in advance, guys.
SiSoftware Sandra
Processor
Model : Intel(R) Pentium(R) 4 CPU 3.06GHz
Speed : 3.10GHz
Performance Rating : PR4118 (estimated)
Cores per Processor : 1 Unit(s)
Threads per Core : 2 Unit(s)
Internal Data Cache : 8kB Synchronous, Write-Thru, 4-way set, 64 byte line size
L2 On-board Cache : 512kB ECC Synchronous, ATC, 8-way set, 64 byte line size, 2 lines per sector
Mainboard
Bus(es) : ISA AGP PCI IMB USB FireWire/1394 i2c/SMBus
MP Support : 1 Processor(s)
MP APIC : Yes
System BIOS : American Megatrends Inc. V2.5
System : MICRO-STAR INC. MS-6728
Mainboard : MICRO-STAR INC. MS-6728
Total Memory : 2GB DDR-SDRAM
Chipset 1
Model : Micro-Star International Co Ltd (MSI) 82865G/PE/P, 82848P DRAM Controller / Host-Hub Interface
Front Side Bus Speed : 4x 135MHz (540MHz data rate)
Total Memory : 2GB DDR-SDRAM
Memory Bus Speed : 2x 168MHz (336MHz data rate)
Video System
Monitor/Panel : Iiyama A201HT VisionMaster Pro 510
Adapter : RADEON X800 Series
Adapter : RADEON X800 Series Secondary
Imaging Device : CanoScan FB630U/FB636U #2
Imaging Device : Video Blaster WebCam 3/WebCam Plus (WDM) #2
Physical Storage Devices
Removable Drive : Diskettenhet
Hard Disk : ST3160023AS (149GB)
Hard Disk : WDC WD1000BB-00CAA0 (93GB)
Hard Disk : WDC WD1200JB-00CRA1 (112GB)
Hard Disk : FUJITSU MPG3409AT E SCSI Disk Device (38GB)
CD-ROM/DVD : PHILIPS DVDR1640P (CD 63X Rd, 63X Wr) (DVD 8X Rd, 8X Wr)
CD-ROM/DVD : PLEXTOR CD-R PX-W4824A (CD 40X Rd, 48X Wr)
CD-ROM/DVD : AXV CD/DVD-ROM SCSI CdRom Device (CD 32X Rd) (DVD 4X Rd)
CD-ROM/DVD : AXV CD/DVD-ROM SCSI CdRom Device (CD 32X Rd) (DVD 4X Rd)
CD-ROM/DVD : AXV CD/DVD-ROM SCSI CdRom Device (CD 32X Rd) (DVD 4X Rd)
CD-ROM/DVD : AXV CD/DVD-ROM SCSI CdRom Device (CD 32X Rd) (DVD 4X Rd)
CD-ROM/DVD : AXV CD/DVD-ROM SCSI CdRom Device (CD 32X Rd) (DVD 4X Rd)
Logical Storage Devices
1.44MB 3.5" (A:) : N/A
Hard Disk (C:) : 149GB (101GB, 67% Free Space) (NTFS)
Xp (D:) : 112GB (2.1GB, 2% Free Space) (NTFS)
Big (E:) : 93GB (9GB, 10% Free Space) (NTFS)
Jimjimjim (F:) : 38GB (2.1GB, 5% Free Space) (NTFS)
Bologna_2 (G:) : 3.9GB (UDF)
CD-ROM/DVD (H:) : N/A
Bfv_3 (I:) : 486MB (CDFS)
Doom 3 roe (J:) : 650MB (CDFS)
Sims2ep1_1 (K:) : 650MB (CDFS)
Bf2 dvd (L:) : 1.9GB (UDF)
CD-ROM/DVD (M:) : N/A
Peripherals
Serial/Parallel Port(s) : 1 COM / 0 LPT
USB Controller/Hub : Intel(R) 82801EB USB Universal Host Controller - 24D2
USB Controller/Hub : Intel(R) 82801EB USB Universal Host Controller - 24D4
USB Controller/Hub : Intel(R) 82801EB USB Universal Host Controller - 24D7
USB Controller/Hub : Intel(R) 82801EB USB2 Enhanced Host Controller - 24DD
USB Controller/Hub : Intel(R) 82801EB USB Universal Host Controller - 24DE
USB Controller/Hub : USB-rotnav (hub)
USB Controller/Hub : USB-rotnav (hub)
USB Controller/Hub : USB-rotnav (hub)
USB Controller/Hub : USB-rotnav (hub)
USB Controller/Hub : USB-rotnav (hub)
USB Controller/Hub : USB-enhet (sammansatt)
USB Controller/Hub : Stöd för USB-skrivarport
FireWire/1394 Controller/Hub : OHCI-kompatibel IEEE 1394-värdstyrenhet
Keyboard : Logitech HID-Compliant Keyboard
Keyboard : HID-tangentbordsenhet
Mouse : HID-compliant MX310 Optical Mouse
Mouse : HID-kompatibel mus
Mouse : HID-kompatibel mus
Human Interface : Logitech WingMan Force 3D USB (HID)
Human Interface : HID-kompatible konsumentkontrollenhet
Human Interface : HID-kompatibel enhet
Human Interface : Logitech Virtual Hid Device
Human Interface : Logitech Virtual Hid Device
Human Interface : Logitech USB MX310 Optical Mouse
Human Interface : Logitech WingMan Force 3D USB
Human Interface : USB HID (Human Interface Device)
Human Interface : Internet Keys USB
MultiMedia Device(s)
Device : Audigy X YouP-PAX A4 v1.00 Audio Driver(WDM)
Device : Creative Game Port
Device : Pinnacle PCTV Stereo PAL Capture Device
Printers and Faxes
Model : Microsoft Office Document Image Writer
Model : Canon PIXMA iP4000
Model : Adobe PDF
Power Management
AC Line Status : On-Line
Operating System(s)
Windows System : Microsoft Windows XP/2002 Professional (Win32 x86) 5.01.2600 (Service Pack 2)
Network Services
Adapter : Intel(R) PRO/1000 MT Desktop Adapter #2
Thanks for your answers.
Danny: 1. Yes, I've put my memory in my brother's computer and ran memtest there, and it came out OK, but on mine it crashes almost instantly.
2. Yes, I have two fans blowing into the case and one out, so airflow is good; the CPU is also cool and seems to be happy ;-)
Geps: I can't say I understand all these numbers, but here are some specs as an example: Tagan TG480-U01 480W ATX.
I can't imagine that this PSU isn't enough, and if it isn't... I've been had by the dealer.