Kodo 2.3 speed
Kodo 2.3's speed of updates seems to improve a lot over 2.2.x, probably due to
tuned writes and statement caching(?)
Read speed seems to be a bit slower. I am wondering whether it is due to the
support for large queries? I would like to be able to configure Kodo in this
regard. In many cases I do not need large queries and would prefer to fetch all
the data at once and close the JDBC result set, and maybe the connection too.
Alex-
In the next beta of 2.3 you will be able to set the
com.solarmetric.kodo.DefaultFetchThreshold to -1, which will disable
large result handling and just instantiate the entire result list in one
go.
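For reference, in a kodo.properties file that setting would presumably look like this (a sketch only - the exact syntax may differ in the released beta):

```properties
# Disable large result handling: instantiate the entire result list
# in one go and release the JDBC result set immediately.
com.solarmetric.kodo.DefaultFetchThreshold=-1
```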
Any further details you can provide about the slowdowns would be greatly
appreciated.
In article <ahamr0$h3e$[email protected]>, Alex Roytman wrote:
Marc Prud'hommeaux [email protected]
SolarMetric Inc. http://www.solarmetric.com
Kodo Java Data Objects Full featured JDO: eliminate the SQL from your code
Similar Messages
-
Hello,
I was going to compare MySQL and PostgreSQL performance using Kodo. It was
really surprising to find that reading records was about 25 times slower
using Postgres. After some research, I found that Kodo's queries do not put
single quotes around numeric literals and this, for some reason, makes Postgres
ignore column indices. To me, this sounds like a bug in Postgres.
Anyway, I was able to circumvent the problem by adding the quotes around
numerals in a dictionary extending PostgresDictionary, but I suppose
you would like to know about the issue.
And another thing: The Kodo documentation states that the Postgres
dictionary is called
"com.solarmetric.kodo.impl.jdbc.schema.dict.PostgreSQLDictionary", when it
is in fact "...PostgresDictionary".
Dag Høidahl

Hi-
If a table's indexed column isn't an int4, Postgres's query parser will see the
int4 literals in the query and won't realize that it can use the index. This is
a known issue with Postgres, but the quotes workaround seems to be the best way
to fix it.
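To illustrate the workaround (table and column names here are hypothetical): against an int2- or int8-indexed column, an old Postgres planner treats a bare numeric literal as int4 and skips the index, while a quoted literal is coerced to the column's type:

```sql
-- Bare literal parsed as int4: planner may ignore a non-int4 index
SELECT * FROM person WHERE id = 12345;

-- Quoted literal coerced to the indexed column's type: index is used
SELECT * FROM person WHERE id = '12345';
```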
-Mike
Dag Hoidahl wrote:
Mike Bridge -
My experience migrating Kodo 3.4 to 4.1
Hello Stefan,
I struggled with Kodo 4.0 and gave up on it. Kodo 4.1 seems to be a much better
release; I migrated my app in a day. First I managed to run it against the 3.4
metadata with some property file changes (the migration docs are not very good
and miss a few things, but if you use Kodo's automated migration tools they may
do for you what I was doing manually). If you use lots of custom field mappings
(especially non-trivial mappings), allocate much more time for the conversion -
the whole thing has changed. I have not had a chance to convert my mappings and
had to band-aid them with externalizers and other things for now. One thing you
lose in kodo3.4 mode is the ability to query by interface, since it must now be
declared explicitly.
A couple of tips:
- kodo.Log: kodo(DefaultLevel=INFO, SQL=TRACE...) - "kodo" is no longer a valid
logger name
- kodo.jdbc.DBDictionary: oracle(BatchLimit=30) - BatchLimit is no longer a
valid dictionary option; use
kodo.jdbc.SQLFactory: (BatchLimit=50) instead
- kodo.PersistentClasses=... no longer works; use
kodo.MetaDataFactory: kodo3(Types=...) in kodo3 mode or
kodo.MetaDataFactory: (Types=...) in jdo2 mode
- Any SQL with a DATE column is no longer batched, leading to a 2-3x
performance drop. The decision was made because of bugs in the Oracle 9 drivers
in batching mode. If you have the latest drivers (and database), in my
experience you will not have any problems. To re-enable batching you can
register your own instance of AdvancedSQL (now a factored-out part of the
database dictionary):
kodo.jdbc.SQLFactory: (BatchLimit=50,
AdvancedSQL=com.peacetech.jdo.kodo.kodo4.patch.OracleAdvancedSQL)
where OracleAdvancedSQL could look like:

import java.sql.Types;
import kodo.jdbc.schema.Column; // adjust the package to your Kodo version

public class OracleAdvancedSQL extends kodo.jdbc.sql.OracleAdvancedSQL {
  @Override public boolean canBatch(Column col) {
    switch (col.getType()) {
      case Types.DATE:
        return true; // re-enable batching for DATE columns
      default:
        return super.canBatch(col);
    }
  }
}
- I have not tested read performance much since I was concentrating on writes.
But write performance, even with batching enabled, does not seem to be up to
the 3.4 level: I ran the 3.4 and 4.1 versions side by side against a dedicated
Oracle 10 server and observed a consistent 30-40% decrease in performance while
persisting a large graph of fairly small objects.
The SQL generated by both versions was very close if not identical (I only did
a spot check), but in the case of INSERTs you would not expect it to differ
anyway.
I tried profiling the 4.1 version and found some significant hot spots, but
could not decipher them in any reasonable time because of the huge stack depths
and the lack of source code. I might try again if I have time, because
performance is critical for us.
- I have not tried any of the new/advanced features yet, including the new
mappings, detachment, the data cache, or the quality of eager fetching, so I
cannot give you any good feedback on those. At least I can say that this
release is worth trying - after migration my application worked as expected,
except for the lower performance.
I also have to say I do not know how well automated migration of Kodo 3.4
metadata to JDO2 metadata works (if it exists), because I use my model-driven
code generator: I just developed a JDO2 plugin for it and regenerated my
metadata from the model (I did not have to regenerate my Java classes, of
course).
Alex
Then I created native JDO2 mappings and everything -

Denis,
Could you email it to me please shurik at peacetech dot com
Thanks
Alex
"Denis Sukhoroslov" <[email protected]> wrote in message
news:[email protected]...
Alex,
The issue was in version 3.4.1. BEA has provided a patch, no new version.
Denis.
"Alex Roytman" <[email protected]> wrote in message
news:[email protected]...
Denis,
In which version did you observe it and which version fixed it?
Thank you
Alex
"Denis Sukhoroslov" <[email protected]> wrote in message
news:[email protected]...
I don't know, I haven't tried 4.1 yet. It is possible that this issue
didn't exist in Kodo 4.x at all.
"Christiaan" <[email protected]> wrote in message
news:[email protected]...
Nice! Is it also solved for 4.1?
regards,
Christiaan
"Denis Sukhoroslov" <[email protected]> wrote in message
news:[email protected]...
Finally, BEA has solved the issue I mentioned. Reading cached PCs
which have embedded objects has become much faster (about 10 times in my
tests).
Thank you very much to all who were involved in this job.
Denis.
"Denis Sukhoroslov" <[email protected]> wrote in message
news:[email protected]...
Hi Alex,
I know about the default-fetch-group; of course I marked these embedded
fields properly. You're right, it is not a cache miss but an
unnecessary fetch from the DB. It's strange that nobody has found this
before. I managed to create a standalone test case and send it to BEA
support. They agree that it is a bug, but still can't fix the issue.
The test is quite small, so if anyone is interested I can post it here.
Denis.
"Alex Roytman" <[email protected]> wrote in message
news:[email protected]...
Hi Denis,
That's very strange. All custom fields such as enums etc. are
essentially mapped onto regular JDO-mandated types. I use them all the
time and have not observed this behavior, though I might have missed it,
of course. I have a suspicion that what you are seeing is not cache
misses but rather fetches outside of the default fetch group. Keep in
mind that Kodo does not fetch any custom field as part of the default
fetch group unless you explicitly specify it in your package.jdo.
So, try marking all your custom-mapped fields with
default-fetch-group="true" and I suspect all your extra database
selects will disappear.
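A sketch of what that looks like in package.jdo (class and field names are made up for illustration):

```xml
<class name="Order">
  <!-- force the custom-mapped field into the default fetch group -->
  <field name="status" default-fetch-group="true"/>
</class>
```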
Read speed, indeed, is always the critical part. I just have not had a
chance to play with 4.1 reads enough to say whether it is faster or
slower. There are more ways to optimize reads (various flavors of
eager fetches, custom optimized mapping of collections including
embedding...) but very few optimizations for updates.
Alex
"Denis Sukhoroslov" <[email protected]> wrote in message
news:[email protected]...
Hi Alex.
My question is off this topic, but it looks like you may have an
answer. BEA support has done nothing for the last few months.
We still use kodo 3.4.1; the DB is Sybase 12.5.x. In our app we're very
concerned about performance as well. But we have many more reads
than writes, so we're trying to cache as much as possible. Kodo's
distributed cache works quite well. At least, it gives better
performance than Gemfire and Tangosol on the same use cases. But
we found a bug in its caching mechanism: when you have a persistent
class and this class has an embedded attribute of some
non-primitive type (like an enum, or just a simple complex type with
one or two attributes in it), kodo bypasses the cache and performs a
select against the DB each time. Have you seen this? Is it possible
to solve via a custom mapping, what do you think?
Thanks. Denis.
"Alex Roytman" <[email protected]> wrote in message
news:[email protected]...
-
KODO Disgrace - Newgroup Quality
We are planning to use KODO as the underlying persistence mechanism for an already implemented system. We updated our persistence wrapper to the JPA-compliant KODO since we wanted to be JPA compliant. Everything was fine until we started to use the wrapper in the real system. We had some problems during the migration.
It is a disgrace that there has been no reply to our posts for more than a month. Maybe this is not the place to write such questions, maybe there is a bug, maybe we couldn't clearly explain the problem, etc. etc. - but our posts do not have even a single negative/positive reply. We are almost on the edge of giving up on the decision to use KODO. If the problem is the money, we planned to purchase it at the end of a successful migration.
Congratulations dear KODO team, and thank you.
Siyamed

We are reading these forums.
Unfortunately, our response times have not been good because we had not set up a responsibility for creating responses. But we do care very much about the concerns raised here, and I am as disturbed as you to find that our Support is not hitting the mark. Rest assured, I’m taking action on this.
Marc Logemann does hit the nail on the head when he identifies these problems as stemming mainly from the transition. Kodo developers used to monitor these user groups, but we’ve assigned the Kodo developers to new product development. In their place, we are training a new Kodo Support Team, but it does take time to put infrastructure in place—which is why our responses… even this one… may seem sluggish in this transition phase. Once people are in place with the expertise needed, I think you’ll find our responses to be both swift and knowledgeable.
However, that isn’t an excuse. The truth is, it was never our intention to ignore developers, even during this transition period. We do still have support structures in place for Kodo. Our support team is not quite up to the level of the original Kodo engineers, it is true, but we expected them to meet the need during this transition.
I think the major disconnect here is that Kodo users have been used to receiving support in a different way than we anticipated, and in a different way than we are used to providing it. However, we should have done a better job of familiarizing long-time Kodo users like you with the processes BEA has in place for customers to raise and escalate concerns and get issue resolution. We should have been more prepared for the culture you already have in place.
Luckily, that is easily remedied. We are working to get everyone educated about our Support channels. We are also in the process of building a strong infrastructure to provide Kodo users with BEA’s normal level of comprehensive support. And we expect to be able to regularly review forums such as this one.
As for our support of Kodo as a stand-alone product: it is absolutely not in question. BEA’s resource commitment to Kodo is very strong. We do not intend to change the things that make Kodo most successful. I’ve copied and pasted a few bullet points below to indicate how we are working internally to bring Kodo support up to date.
Here are some of the commitments BEA is making to Kodo:
a. More BEA Support engineers assigned to Kodo (and currently coming up to speed) than the size of the entire original SolarMetric engineering team prior to acquisition.
b. Kodo Support engineers training in all regions worldwide. This enables us to provide business hour support including phone support in EMEA and APAC as well as in the Americas.
c. 24x7 Production Support, another first for Kodo customers. Previous SolarMetric support was email-only and on a 12x5 basis (10 AM – 10 PM EST).
d. We have been focusing the expertise of the Kodo Engineers on product development: This year, the BEA Kodo team delivered a major release for EJB3 (Kodo 4.0) and released that technology to the open source community via Open JPA. We are also nearing the release for JDO2 (Kodo 4.1).
e. Adding staff and additional training in preparation for future releases of Kodo.
I hope this answers some of the questions raised here. I encourage you to contact me if you have concerns about the support you are receiving from BEA.
Thanks for offering us feedback and for your support of Kodo.
Terry Clearkin
WW SVP, BEA Support -
I have 2 ea. 2gb DDR3 PC-8500 1066Mhz memory modules installed in my T400. If I check the modules under Everest Ultimate 460 they read DDR2 running at 667 Mhz. If I use CPU-Z they also read DDR2 but don't show a speed. Has anyone else been able to check the speed of their modules and what program did you use? I don't see anything in the bios to cause the memory to run slower than rated. Do you think they are really running at 1066 MHz or do I have some type of compatibility problem. I get the same reading if I pull one of the modules out and check them one at a time.
Short answer: yes, the DDR3 memory bus in your T400/T500 should show at 533 MHz (+/- 1 MHz).
CPU-Z 1.51 will properly show it as DDR3; previous versions may have incorrectly shown it as DDR2.
Your answers are explained in detail here:
http://en.wikipedia.org/wiki/DDR3_SDRAM
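The arithmetic behind those numbers can be sketched as follows (illustrative helper, not from any tool): DDR transfers data on both clock edges, so a 533 MHz I/O bus clock yields the 1066 MT/s that "DDR3-1066" refers to.

```java
// Sketch: relation between the I/O bus clock that tools like CPU-Z
// report and the DDR3 module rating. Double data rate means two
// transfers per clock cycle.
public class DdrSpeed {
    static int transfersPerSecond(int ioClockMhz) {
        return ioClockMhz * 2; // one transfer on each clock edge
    }

    public static void main(String[] args) {
        // A "DDR3-1066" module reported at 533 MHz is running at rated speed.
        System.out.println("DDR3-" + transfersPerSecond(533)); // prints DDR3-1066
    }
}
```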
T500 - P8600 2.4Ghz Core 2 Duo, Modded - 4GB Patriot DDR3 and 320GB Caviar Black 7200rpm drive with Ati HD3650 and Catalyst 9.6 modded drivers - Vista Business 32bit stripped down to bare bones with VM's from Ubuntu to Win7RC1 64-bit. -
This past Tuesday I installed the new N router. Also, we upgraded to FiOS Quantum 75/35 (previously 15/5). That was activated the following day. I can't complain about the internet speed to our pc since it's wired. Speed tests showed we were getting what was advertised. :-)
For the wifi, I can't figure out for the life of me what's causing our devices (e.g., our Nexus 7 tablets) to have link speeds of 65Mbps one minute and then drop down to 5 or even 1. The same situation was happening with my HTC Incredible II phone, but I think that connection was topping out at 54Mbps. The bottom would just drop out of the signal for some reason. Other times we couldn't even connect back in. The devices would say they're not in range even when they were right in front of the router. Not sure if I fubarred the router install or what? I just swapped out the current FiOS router for this new one. Pretty straightforward. At first, I logged into the router software with the defaults, but then I changed the username, pswd, SSID, etc. to match what I used previously. I thought it was working fine the night of the install but not so much after FiOS got bumped up in speed the following day. Maybe just a co-inky-dink?
I know you never get the same speeds compared to a wired pc but this seems worse (wifi-wise) than before the new router came along. I've read that the 'N' routers can be finicky to set-up to run properly. Maybe a setting or two is off?
I don't know if the fact that my son's netbook's wifi card is only b/g compatible would slow our network down? At this point I'm grasping at straws! I talked with tech support last night for 40 mins to no avail. I got the usual story about wireless devices running slower than wired ones, that these smaller devices aren't capable of maintaining higher speeds (which I'm not sure I agree with 100%), that the more active devices you have the slower your network, etc, etc. I get all that. But something is going on... or actually, it's not. LOL!
Why I don't agree with the above statement I mentioned is because I have my tablet at work now and the wifi speed is 54Mbps and it's constant. So it seems to have no problems with this speed, unlike what I was told over the phone.
Let me know if I can provide any further technical details that'll make it easier to help diagnose our wifi issues.
Thanks for listening to me whine ,
-bill

Thanks for the help, Hubrisnxs.
Here's an update since I got home:
Nexus7 (N7) took a min or so to connect to wifi when I got home. When it did, speed said 65Mbps. So I ran speedtest.net and got 27/20. Shortly after that I tried again but it was hesitating. I checked the speed and it was down to 5Mbps. Brought it upstairs to router and pc, and eventually it was back to 65. Ran speed test a few more times with similar results. I did notice that the connection would intermittently come and go. Not sure why. Looking at the wifi section on my N7, it says our network is out of range and the N7 is 2' from the router. Every now and then my network will go to the top of the list and it'll say it's obtaining ip address, then secured with wpa/wpa2 psk and it's locked in. So I hit the connect button and it's out of range again. Of course, while I was typing it decided to connect after 10-15 mins of trying/nothing. Says, signal strength excellent, 52 Mbps. Spoke too soon. It's off again. Crap! Back on again but speed was 19, now 5.
I did check router settings and this is what I have:
Performance mode ('n')
WPA2 AES
Channel 1
nonbroadcasting SSID
Maybe I'll try channel 11 next.....
Just for the heck of it, here are the results from the Verizon speed test on our pc (wired of course):
Checking for Middleboxes . . . . . . . . . . . . . . . . . . Done
SendBufferSize set to [261360]
running 10s outbound test (client to server) . . . . . 34.66Mb/s
running 10s inbound test (server to client) . . . . . . 84.24Mb/s
------ Client System Details ------
OS data: Name = Windows XP, Architecture = x86, Version = 5.1
Java data: Vendor = Sun Microsystems Inc., Version = 1.6.0_37
------ Web100 Detailed Analysis ------
Client Receive Window detected at 1045440 bytes.
622 Mbps OC-12 link found.
Link set to Half Duplex mode
Information: throughput is limited by other network traffic.
Good network cable(s) found
Normal duplex operation found.
Web100 reports the Round trip time = 41.51 msec; the Packet size = 1452 Bytes; and
There were 72 packets retransmitted, 2254 duplicate acks received, and 2281 SACK blocks received
The connection was idle 0 seconds (0%) of the time
This connection is sender limited 91.83% of the time.
This connection is network limited 8.17% of the time.
Web100 reports TCP negotiated the optional Performance Settings to:
RFC 2018 Selective Acknowledgment: ON
RFC 896 Nagle Algorithm: ON
RFC 3168 Explicit Congestion Notification: OFF
RFC 1323 Time Stamping: OFF
RFC 1323 Window Scaling: ON
Information: Network Middlebox is modifying MSS variable
Server IP addresses are preserved End-to-End
Information: Network Address Translation (NAT) box is modifying the Client's IP address
Server says [] but Client says []
-bill -
New VMS/IPC boxes - speed and pricing
For the most part, I've been a very happy FiOS customer. The internet side has gone out exactly once in the year I've had the service, and it was restored within an hour. TV has dropped out twice, both times very late at night (I assume for head-end maintenance). Reliability is better than every other ISP I've ever had, combined.
That said..
I wanted to snag another STB for the house - found out the local stores had all been closed (I can understand why from the financial aspect, but it was still a shock pulling up to my old store to pick up another STB.. and finding a "for lease" sign, when I'd been by there 2 months prior), so I looked at options online. The new VMS + IPC boxes looked promising, especially since they added the ability for the entire house to pause, rewind, schedule recordings, etc, from any TV - instead of only the DVR being able to do that (though the old boxes could stream recordings from the DVR).
My chief complaint so far - while the VMS box itself is responsive, the IPC boxes are painfully slow to respond to anything. Picture quality is fantastic, but using them feels, to put it in nerd terms, like using a Pentium 90 with Windows XP and 128MB RAM. They take 5+ seconds to respond to anything more than a channel change (and even that takes up to 3 seconds), and frequently "lose" remote key presses if you try to, say, hit guide, hit page down, then try to arrow down to the channel that you know is on the next page. It literally feels like a throwback to the earliest digital set top boxes from Comcast/Time Warner - the first ones with built-in TV listings in the mid 1990s. The speed is right on par with them; the only way they're faster is that they don't shut down for 30 minutes a day to update the TV listings.
I've gone through the built-in diagnostics on the boxes to verify signal levels - they're decent (not perfect, but this is a 20 year old house with mostly original coax), but we have no pixelation or picture drop-out, so I would think they could carry the basic stuff between the IPC boxes and VMS pretty easily. The router certainly has no problem handling the 75/35 on the same coax (speedtest shows 85-90 down, 35-40 up, with a ~5ms ping, no matter which cable outlet it's plugged into)..
My other complaint - why did Verizon move to "rooms" for pricing? I thought I was getting a DVR plus 3 STBs (since I ordered 3 STBs and asked for a DVR), instead of a VMS + 2 STBs - and I'm paying much more than I did for the old DVR + 2 STBs.
With the griping out of the way - is there a reasonable chance that firmware updates will help with the speed of the IPC boxes? Or is there a chance it's a cabling issue? (I have no problem pulling new coax or cat5e, and I can provide signal levels if that'll help.) If not, is there any chance of getting the old QIP7232 DVR + QIP7100 box back (and adding another 7100), and switching back to what I was paying before? I know they're a bit antiquated (especially the 7100s), but they were much more responsive, and didn't make me want to throw the STBs through the window. It just seems really ridiculous that I'm paying such a large premium (plus the "upgrade fee") for equipment that, in terms of the user experience, is much worse.
Again.. been a happy customer for a long time, but the new terminals are super painful to use.
Thanks.

If this is the case, I think I will wait till the bugs are worked out.
Question: what were you paying before and after for the boxes? That is, what was the old DVR + 2 STBs versus the new DVR plus 2 STBs?
Trying to get an idea of what you meant by "way more." Most people said same pricing and a small $20 or so upgrade fee -
How to find the max data transfer rate (disk speed) supported by the mobo?
I plan on replacing my current HDD with a new and bigger HDD.
For this I need to know the max data transfer rate (disk speed) that my mobo will support. However, dmidecode is not telling me that. Am I missing something?
Here's dmidecode:
# dmidecode 2.11
SMBIOS 2.5 present.
80 structures occupying 2858 bytes.
Table at 0x000F0450.
Handle 0xDA00, DMI type 218, 101 bytes
OEM-specific Type
Header and Data:
DA 65 00 DA B2 00 17 4B 0E 38 00 00 80 00 80 01
00 02 80 02 80 01 00 00 A0 00 A0 01 00 58 00 58
00 01 00 59 00 59 00 01 00 75 01 75 01 01 00 76
01 76 01 01 00 05 80 05 80 01 00 D1 01 19 00 01
00 15 02 19 00 02 00 1B 00 19 00 03 00 19 00 19
00 00 00 4A 02 4A 02 01 00 0C 80 0C 80 01 00 FF
FF 00 00 00 00
Handle 0xDA01, DMI type 218, 35 bytes
OEM-specific Type
Header and Data:
DA 23 01 DA B2 00 17 4B 0E 38 00 10 F5 10 F5 00
00 11 F5 11 F5 00 00 12 F5 12 F5 00 00 FF FF 00
00 00 00
Handle 0x0000, DMI type 0, 24 bytes
BIOS Information
Vendor: Dell Inc.
Version: A17
Release Date: 04/06/2010
Address: 0xF0000
Runtime Size: 64 kB
ROM Size: 4096 kB
Characteristics:
PCI is supported
PNP is supported
APM is supported
BIOS is upgradeable
BIOS shadowing is allowed
ESCD support is available
Boot from CD is supported
Selectable boot is supported
EDD is supported
Japanese floppy for Toshiba 1.2 MB is supported (int 13h)
3.5"/720 kB floppy services are supported (int 13h)
Print screen service is supported (int 5h)
8042 keyboard services are supported (int 9h)
Serial services are supported (int 14h)
Printer services are supported (int 17h)
ACPI is supported
USB legacy is supported
BIOS boot specification is supported
Function key-initiated network boot is supported
Targeted content distribution is supported
BIOS Revision: 17.0
Handle 0x0100, DMI type 1, 27 bytes
System Information
Manufacturer: Dell Inc.
Product Name: OptiPlex 755
Version: Not Specified
UUID: 44454C4C-5900-1050-8033-C4C04F434731
Wake-up Type: Power Switch
SKU Number: Not Specified
Family: Not Specified
Handle 0x0200, DMI type 2, 8 bytes
Base Board Information
Manufacturer: Dell Inc.
Product Name: 0PU052
Version:
Handle 0x0300, DMI type 3, 13 bytes
Chassis Information
Manufacturer: Dell Inc.
Type: Space-saving
Lock: Not Present
Version: Not Specified
Asset Tag:
Boot-up State: Safe
Power Supply State: Safe
Thermal State: Safe
Security Status: None
Handle 0x0400, DMI type 4, 40 bytes
Processor Information
Socket Designation: CPU
Type: Central Processor
Family: Xeon
Manufacturer: Intel
ID: 76 06 01 00 FF FB EB BF
Signature: Type 0, Family 6, Model 23, Stepping 6
Flags:
FPU (Floating-point unit on-chip)
VME (Virtual mode extension)
DE (Debugging extension)
PSE (Page size extension)
TSC (Time stamp counter)
MSR (Model specific registers)
PAE (Physical address extension)
MCE (Machine check exception)
CX8 (CMPXCHG8 instruction supported)
APIC (On-chip APIC hardware supported)
SEP (Fast system call)
MTRR (Memory type range registers)
PGE (Page global enable)
MCA (Machine check architecture)
CMOV (Conditional move instruction supported)
PAT (Page attribute table)
PSE-36 (36-bit page size extension)
CLFSH (CLFLUSH instruction supported)
DS (Debug store)
ACPI (ACPI supported)
MMX (MMX technology supported)
FXSR (FXSAVE and FXSTOR instructions supported)
SSE (Streaming SIMD extensions)
SSE2 (Streaming SIMD extensions 2)
SS (Self-snoop)
HTT (Multi-threading)
TM (Thermal monitor supported)
PBE (Pending break enabled)
Version: Not Specified
Voltage: 0.0 V
External Clock: 1333 MHz
Max Speed: 5200 MHz
Current Speed: 2666 MHz
Status: Populated, Enabled
Upgrade: Socket LGA775
L1 Cache Handle: 0x0700
L2 Cache Handle: 0x0701
L3 Cache Handle: Not Provided
Serial Number: Not Specified
Asset Tag: Not Specified
Part Number: Not Specified
Core Count: 2
Core Enabled: 2
Thread Count: 2
Characteristics:
64-bit capable
Handle 0x0700, DMI type 7, 19 bytes
Cache Information
Socket Designation: Not Specified
Configuration: Enabled, Not Socketed, Level 1
Operational Mode: Write Back
Location: Internal
Installed Size: 32 kB
Maximum Size: 32 kB
Supported SRAM Types:
Other
Installed SRAM Type: Other
Speed: Unknown
Error Correction Type: None
System Type: Data
Associativity: 8-way Set-associative
Handle 0x0701, DMI type 7, 19 bytes
Cache Information
Socket Designation: Not Specified
Configuration: Enabled, Not Socketed, Level 2
Operational Mode: Varies With Memory Address
Location: Internal
Installed Size: 6144 kB
Maximum Size: 6144 kB
Supported SRAM Types:
Other
Installed SRAM Type: Other
Speed: Unknown
Error Correction Type: Single-bit ECC
System Type: Unified
Associativity: <OUT OF SPEC>
Handle 0x0800, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: PARALLEL
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: DB-25 female
Port Type: Parallel Port PS/2
Handle 0x0801, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: SERIAL1
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: DB-9 male
Port Type: Serial Port 16550A Compatible
Handle 0x0802, DMI type 126, 9 bytes
Inactive
Handle 0x0803, DMI type 126, 9 bytes
Inactive
Handle 0x0804, DMI type 126, 9 bytes
Inactive
Handle 0x0805, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: USB1
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: Access Bus (USB)
Port Type: USB
Handle 0x0806, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: USB2
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: Access Bus (USB)
Port Type: USB
Handle 0x0807, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: USB3
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: Access Bus (USB)
Port Type: USB
Handle 0x0808, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: USB4
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: Access Bus (USB)
Port Type: USB
Handle 0x0809, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: USB5
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: Access Bus (USB)
Port Type: USB
Handle 0x080A, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: USB6
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: Access Bus (USB)
Port Type: USB
Handle 0x080B, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: USB7
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: Access Bus (USB)
Port Type: USB
Handle 0x080C, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: USB8
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: Access Bus (USB)
Port Type: USB
Handle 0x080D, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: ENET
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: RJ-45
Port Type: Network Port
Handle 0x080E, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: MIC
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: Mini Jack (headphones)
Port Type: Audio Port
Handle 0x080F, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: LINE-OUT
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: Mini Jack (headphones)
Port Type: Audio Port
Handle 0x0810, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: LINE-IN
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: Mini Jack (headphones)
Port Type: Audio Port
Handle 0x0811, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: HP-OUT
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: Mini Jack (headphones)
Port Type: Audio Port
Handle 0x0812, DMI type 8, 9 bytes
Port Connector Information
Internal Reference Designator: MONITOR
Internal Connector Type: None
External Reference Designator: Not Specified
External Connector Type: DB-15 female
Port Type: Video Port
Handle 0x090A, DMI type 9, 13 bytes
System Slot Information
Designation: SLOT1
Type: x1 Proprietary
Current Usage: In Use
Length: Long
Characteristics:
PME signal is supported
Handle 0x0901, DMI type 126, 13 bytes
Inactive
Handle 0x0902, DMI type 9, 13 bytes
System Slot Information
Designation: SLOT2
Type: 32-bit PCI
Current Usage: Available
Length: Long
ID: 2
Characteristics:
5.0 V is provided
3.3 V is provided
PME signal is supported
Handle 0x0903, DMI type 126, 13 bytes
Inactive
Handle 0x0904, DMI type 126, 13 bytes
Inactive
Handle 0x0905, DMI type 126, 13 bytes
Inactive
Handle 0x0906, DMI type 126, 13 bytes
Inactive
Handle 0x0907, DMI type 126, 13 bytes
Inactive
Handle 0x0908, DMI type 126, 13 bytes
Inactive
Handle 0x0A00, DMI type 10, 6 bytes
On Board Device Information
Type: Video
Status: Disabled
Description: Intel Graphics Media Accelerator 950
Handle 0x0A02, DMI type 10, 6 bytes
On Board Device Information
Type: Ethernet
Status: Enabled
Description: Intel Gigabit Ethernet Controller
Handle 0x0A03, DMI type 10, 6 bytes
On Board Device Information
Type: Sound
Status: Enabled
Description: Intel(R) High Definition Audio Controller
Handle 0x0B00, DMI type 11, 5 bytes
OEM Strings
String 1: www.dell.com
Handle 0x0D00, DMI type 13, 22 bytes
BIOS Language Information
Language Description Format: Long
Installable Languages: 1
en|US|iso8859-1
Currently Installed Language: en|US|iso8859-1
Handle 0x0F00, DMI type 15, 29 bytes
System Event Log
Area Length: 2049 bytes
Header Start Offset: 0x0000
Header Length: 16 bytes
Data Start Offset: 0x0010
Access Method: Memory-mapped physical 32-bit address
Access Address: 0xFFF01000
Status: Valid, Not Full
Change Token: 0x00000018
Header Format: Type 1
Supported Log Type Descriptors: 3
Descriptor 1: POST error
Data Format 1: POST results bitmap
Descriptor 2: System limit exceeded
Data Format 2: System management
Descriptor 3: Log area reset/cleared
Data Format 3: None
Handle 0x1000, DMI type 16, 15 bytes
Physical Memory Array
Location: System Board Or Motherboard
Use: System Memory
Error Correction Type: None
Maximum Capacity: 8 GB
Error Information Handle: Not Provided
Number Of Devices: 4
Handle 0x1100, DMI type 17, 27 bytes
Memory Device
Array Handle: 0x1000
Error Information Handle: Not Provided
Total Width: 64 bits
Data Width: 64 bits
Size: 1024 MB
Form Factor: DIMM
Set: None
Locator: DIMM_1
Bank Locator: Not Specified
Type: DDR2
Type Detail: Synchronous
Speed: 667 MHz
Manufacturer: AD00000000000000
Handle 0x1101, DMI type 17, 27 bytes
Memory Device
Array Handle: 0x1000
Error Information Handle: Not Provided
Total Width: 64 bits
Data Width: 64 bits
Size: 1024 MB
Form Factor: DIMM
Set: None
Locator: DIMM_3
Bank Locator: Not Specified
Type: DDR2
Type Detail: Synchronous
Speed: 667 MHz
Handle 0x1102, DMI type 17, 27 bytes
Memory Device
Array Handle: 0x1000
Error Information Handle: Not Provided
Total Width: 64 bits
Data Width: 64 bits
Size: 1024 MB
Form Factor: DIMM
Set: None
Locator: DIMM_2
Bank Locator: Not Specified
Type: DDR2
Type Detail: Synchronous
Speed: 667 MHz
Handle 0x1103, DMI type 17, 27 bytes
Memory Device
Array Handle: 0x1000
Error Information Handle: Not Provided
Total Width: 64 bits
Data Width: 64 bits
Size: 1024 MB
Form Factor: DIMM
Set: None
Locator: DIMM_4
Bank Locator: Not Specified
Type: DDR2
Type Detail: Synchronous
Speed: 667 MHz
Handle 0x1300, DMI type 19, 15 bytes
Memory Array Mapped Address
Starting Address: 0x00000000000
Ending Address: 0x000FDFFFFFF
Range Size: 4064 MB
Physical Array Handle: 0x1000
Partition Width: 1
Handle 0x1400, DMI type 20, 19 bytes
Memory Device Mapped Address
Starting Address: 0x00000000000
Ending Address: 0x0007FFFFFFF
Range Size: 2 GB
Physical Device Handle: 0x1100
Memory Array Mapped Address Handle: 0x1300
Partition Row Position: 1
Interleave Position: 1
Interleaved Data Depth: 1
Handle 0x1401, DMI type 20, 19 bytes
Memory Device Mapped Address
Starting Address: 0x00080000000
Ending Address: 0x000FDFFFFFF
Range Size: 2016 MB
Physical Device Handle: 0x1101
Memory Array Mapped Address Handle: 0x1300
Partition Row Position: 1
Interleave Position: 1
Interleaved Data Depth: 1
Handle 0x1402, DMI type 20, 19 bytes
Memory Device Mapped Address
Starting Address: 0x00000000000
Ending Address: 0x0007FFFFFFF
Range Size: 2 GB
Physical Device Handle: 0x1102
Memory Array Mapped Address Handle: 0x1300
Partition Row Position: 1
Interleave Position: 2
Interleaved Data Depth: 1
Handle 0x1403, DMI type 20, 19 bytes
Memory Device Mapped Address
Starting Address: 0x00080000000
Ending Address: 0x000FDFFFFFF
Range Size: 2016 MB
Physical Device Handle: 0x1103
Memory Array Mapped Address Handle: 0x1300
Partition Row Position: 1
Interleave Position: 2
Interleaved Data Depth: 1
Handle 0x1410, DMI type 126, 19 bytes
Inactive
Handle 0x1800, DMI type 24, 5 bytes
Hardware Security
Power-On Password Status: Enabled
Keyboard Password Status: Not Implemented
Administrator Password Status: Enabled
Front Panel Reset Status: Not Implemented
Handle 0x1900, DMI type 25, 9 bytes
System Power Controls
Next Scheduled Power-on: *-* 00:00:00
Handle 0x1B10, DMI type 27, 12 bytes
Cooling Device
Type: Fan
Status: OK
OEM-specific Information: 0x0000DD00
Handle 0x1B11, DMI type 27, 12 bytes
Cooling Device
Type: Fan
Status: OK
OEM-specific Information: 0x0000DD01
Handle 0x1B12, DMI type 126, 12 bytes
Inactive
Handle 0x1B13, DMI type 126, 12 bytes
Inactive
Handle 0x1B14, DMI type 126, 12 bytes
Inactive
Handle 0x2000, DMI type 32, 11 bytes
System Boot Information
Status: No errors detected
Handle 0x8100, DMI type 129, 8 bytes
OEM-specific Type
Header and Data:
81 08 00 81 01 01 02 01
Strings:
Intel_ASF
Intel_ASF_001
Handle 0x8200, DMI type 130, 20 bytes
OEM-specific Type
Header and Data:
82 14 00 82 24 41 4D 54 01 01 00 00 01 A5 0B 02
00 00 00 00
Handle 0x8300, DMI type 131, 64 bytes
OEM-specific Type
Header and Data:
83 40 00 83 14 00 00 00 00 00 C0 29 05 00 00 00
F8 00 4E 24 00 00 00 00 0D 00 00 00 02 00 03 00
19 04 14 00 01 00 01 02 C8 00 BD 10 00 00 00 00
00 00 00 00 FF 00 00 00 00 00 00 00 00 00 00 00
Handle 0x8800, DMI type 136, 6 bytes
OEM-specific Type
Header and Data:
88 06 00 88 5A 5A
Handle 0xD000, DMI type 208, 10 bytes
OEM-specific Type
Header and Data:
D0 0A 00 D0 01 03 FE 00 11 02
Handle 0xD100, DMI type 209, 12 bytes
OEM-specific Type
Header and Data:
D1 0C 00 D1 78 03 07 03 04 0F 80 05
Handle 0xD200, DMI type 210, 12 bytes
OEM-specific Type
Header and Data:
D2 0C 00 D2 F8 03 04 03 06 80 04 05
Handle 0xD201, DMI type 126, 12 bytes
Inactive
Handle 0xD400, DMI type 212, 242 bytes
OEM-specific Type
Header and Data:
D4 F2 00 D4 70 00 71 00 00 10 2D 2E 42 00 11 FE
01 43 00 11 FE 00 0F 00 25 FC 00 10 00 25 FC 01
11 00 25 FC 02 12 00 25 FC 03 00 00 25 F3 00 00
00 25 F3 04 00 00 25 F3 08 00 00 25 F3 0C 07 00
23 8F 00 08 00 23 F3 00 09 00 23 F3 04 0A 00 23
F3 08 0B 00 23 8F 10 0C 00 23 8F 20 0E 00 23 8F
30 0D 00 23 8C 40 A6 00 23 8C 41 A7 00 23 8C 42
05 01 22 FD 02 06 01 22 FD 00 8C 00 22 FE 00 8D
00 22 FE 01 9B 00 25 3F 40 9C 00 25 3F 00 09 01
25 3F 80 A1 00 26 F3 00 A2 00 26 F3 08 A3 00 26
F3 04 9F 00 26 FD 02 A0 00 26 FD 00 9D 00 11 FB
04 9E 00 11 FB 00 54 01 23 7F 00 55 01 23 7F 80
5C 00 78 BF 40 5D 00 78 BF 00 04 80 78 F5 0A 01
A0 78 F5 00 93 00 7B 7F 80 94 00 7B 7F 00 8A 00
37 DF 20 8B 00 37 DF 00 03 C0 67 00 05 FF FF 00
00 00
Handle 0xD401, DMI type 212, 172 bytes
OEM-specific Type
Header and Data:
D4 AC 01 D4 70 00 71 00 03 40 59 6D 2D 00 59 FC
02 2E 00 59 FC 00 6E 00 59 FC 01 E0 01 59 FC 03
28 00 59 3F 00 29 00 59 3F 40 2A 00 59 3F 80 2B
00 5A 00 00 2C 00 5B 00 00 55 00 59 F3 00 6D 00
59 F3 04 8E 00 59 F3 08 8F 00 59 F3 00 00 00 55
FB 04 00 00 55 FB 00 23 00 55 7F 00 22 00 55 7F
80 F5 00 58 BF 40 F6 00 58 BF 00 EB 00 55 FE 00
EA 00 55 FE 01 40 01 54 EF 00 41 01 54 EF 10 ED
00 54 F7 00 F0 00 54 F7 08 4A 01 53 DF 00 4B 01
53 DF 20 4C 01 53 7F 00 4D 01 53 7F 80 68 01 56
BF 00 69 01 56 BF 40 FF FF 00 00 00
Handle 0xD402, DMI type 212, 152 bytes
OEM-specific Type
Header and Data:
D4 98 02 D4 70 00 71 00 00 10 2D 2E 2D 01 21 FE
01 2E 01 21 FE 00 97 00 22 FB 00 98 00 22 FB 04
90 00 11 CF 00 91 00 11 CF 20 92 00 11 CF 10 E2
00 27 7F 00 E3 00 27 7F 80 E4 00 27 BF 00 E5 00
27 BF 40 D1 00 22 7F 80 D2 00 22 7F 00 45 01 22
BF 40 44 01 22 BF 00 36 01 21 F1 06 37 01 21 F1
02 38 01 21 F1 00 39 01 21 F1 04 2B 01 11 7F 80
2C 01 11 7F 00 4E 01 65 CF 00 4F 01 65 CF 10 D4
01 65 F3 00 D5 01 65 F3 04 D2 01 65 FC 00 D3 01
65 FC 01 FF FF 00 00 00
Handle 0xD403, DMI type 212, 157 bytes
OEM-specific Type
Header and Data:
D4 9D 03 D4 70 00 71 00 03 40 59 6D 17 01 52 FE
00 18 01 52 FE 01 19 01 52 FB 00 1A 01 52 FB 04
1B 01 52 FD 00 1C 01 52 FD 02 1D 01 52 F7 00 1E
01 52 F7 08 1F 01 52 EF 00 20 01 52 EF 10 21 01
52 BF 00 22 01 52 BF 40 87 00 59 DF 20 88 00 59
DF 00 E8 01 66 FD 00 E9 01 66 FD 02 02 02 53 BF
00 03 02 53 BF 40 04 02 53 EF 00 05 02 53 EF 10
06 02 66 DF 00 07 02 66 DF 20 08 02 66 EF 00 09
02 66 EF 10 17 02 66 F7 00 18 02 66 F7 08 44 02
52 BF 40 45 02 52 BF 00 FF FF 00 00 00
Handle 0xD800, DMI type 126, 9 bytes
Inactive
Handle 0xDD00, DMI type 221, 19 bytes
OEM-specific Type
Header and Data:
DD 13 00 DD 00 01 00 00 00 10 F5 00 00 00 00 00
00 00 00
Handle 0xDD01, DMI type 221, 19 bytes
OEM-specific Type
Header and Data:
DD 13 01 DD 00 01 00 00 00 11 F5 00 00 00 00 00
00 00 00
Handle 0xDD02, DMI type 221, 19 bytes
OEM-specific Type
Header and Data:
DD 13 02 DD 00 01 00 00 00 12 F5 00 00 00 00 00
00 00 00
Handle 0xDE00, DMI type 222, 16 bytes
OEM-specific Type
Header and Data:
DE 10 00 DE C1 0B 00 00 10 05 19 21 01 00 00 01
Handle 0x7F00, DMI type 127, 4 bytes
End Of Table
Hdparm also does not tell me the maximum data transfer rate (disk speed) of my current drive, although this link: www.wdc.com/en/library/sata/2879-001146.pdf says that it is 3.0 Gb/s.
and here's hdparm -I /dev/sda
/dev/sda:
ATA device, with non-removable media
Model Number: WDC WD800JD-75JNC0
Firmware Revision: 06.01C06
Standards:
Supported: 6 5 4
Likely used: 8
Configuration:
Logical max current
cylinders 16383 16383
heads 16 16
sectors/track 63 63
CHS current addressable sectors: 16514064
LBA user addressable sectors: 156250000
Logical/Physical Sector size: 512 bytes
device size with M = 1024*1024: 76293 MBytes
device size with M = 1000*1000: 80000 MBytes (80 GB)
cache/buffer size = 8192 KBytes
Capabilities:
LBA, IORDY(can be disabled)
Standby timer values: spec'd by Standard, with device specific minimum
R/W multiple sector transfer: Max = 16 Current = 8
Recommended acoustic management value: 128, current value: 254
DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 *udma5
Cycle time: min=120ns recommended=120ns
PIO: pio0 pio1 pio2 pio3 pio4
Cycle time: no flow control=120ns IORDY flow control=120ns
Commands/features:
Enabled Supported:
* SMART feature set
Security Mode feature set
* Power Management feature set
* Write cache
* Look-ahead
* Host Protected Area feature set
* WRITE_BUFFER command
* READ_BUFFER command
* DOWNLOAD_MICROCODE
SET_MAX security extension
Automatic Acoustic Management feature set
* Device Configuration Overlay feature set
* Mandatory FLUSH_CACHE
* SMART error logging
* SMART self-test
* Gen1 signaling speed (1.5Gb/s)
* Host-initiated interface power management
* SMART Command Transport (SCT) feature set
* SCT Long Sector Access (AC1)
* SCT LBA Segment Access (AC2)
* SCT Error Recovery Control (AC3)
* SCT Features Control (AC4)
* SCT Data Tables (AC5)
Security:
Master password revision code = 65534
supported
not enabled
not locked
frozen
not expired: security count
not supported: enhanced erase
Checksum: correct
Last edited by Inxsible (2011-03-27 04:40:49)
I just checked my BIOS, and the SATA mode is currently set to IDE, although the BIOS also mentions that the default should be AHCI. Currently I have a dual boot of Windows 7 (I need it for tax software) and Arch.
So I guess, when I get the new HDD, I will first set it to AHCI and then install the OSes on it, and see if NCQ helps any; if not, I will turn it back and re-install (if I have to). I am planning to run Windows only in VirtualBox on the new drive.
Anyhoo, while I was in the BIOS I found two things I had questions about:
1) Under Onboard Devices --> Integrated NIC, my setting is currently "On w/PXE", and the BIOS says the default should be just "On". Would it be OK to change it back to "On"? It's a single machine and it doesn't boot an OS from any server. I just don't want to have to re-install anything now, since I will be doing that on the new HDD anyway.
2) How would I know whether my BIOS supports running a 64-bit OS in VirtualBox? I checked some settings under Virtualization, but they weren't very clear.
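One way to check for the hardware virtualization support VirtualBox needs for 64-bit guests is to look for the vmx (Intel VT-x) or svm (AMD-V) flag in /proc/cpuinfo on Linux. A minimal sketch in Java — note the sample flags line below is purely illustrative, not taken from this machine:

```java
import java.util.Arrays;
import java.util.List;

// Checks a /proc/cpuinfo "flags" line for hardware virtualization
// support (vmx = Intel VT-x, svm = AMD-V).
public class VtCheck {
    static boolean supportsHwVirt(String flagsLine) {
        List<String> flags = Arrays.asList(flagsLine.trim().split("\\s+"));
        return flags.contains("vmx") || flags.contains("svm");
    }

    public static void main(String[] args) {
        // Illustrative sample only; on a real system read the line
        // starting with "flags" from /proc/cpuinfo instead.
        String sample = "fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov vmx ssse3";
        System.out.println(supportsHwVirt(sample)); // prints "true"
    }
}
```

Even with the flag present, VT-x/AMD-V may still be disabled in the BIOS Virtualization sub-menu; the flag only shows what the CPU itself supports.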
I will edit this post and let you know exactly what settings were present under the Virtualization sub-section. -
Is there a way to speed up video clips in imovie on ipad
Is there a way to speed up video clips in iMovie on iPad? I'm currently editing some videos and would like to speed up a few clips to make the video shorter. Is this possible on iPad, or only on a Mac?
Actually, no: there are no 'add-ons' possible or available for iMovie/MacOS/iOS…
-
T61 - Half the core speed in maximum performance mode.
Hello,
My T61 was bought in March 2008. I recently installed CPU-Z and was a bit worried, since in maximum performance mode the CPU was not running at full speed (on AC power).
The core speed shows approximately 1200 MHz and the FSB around 800 MHz. It is a Penryn T9300 processor, which I believe should run at 2500 MHz. Is there a setting somewhere in the power options that needs to be configured to attain the maximum core speed? The OS is XP SP3, clean install.
Thanks for any help,
Regards,
Thanimai
You have to actually put quite some load on the machine to get it to work at its full throttle... While I'm typing this, my ThinkPad is showing a core speed of 1199 MHz (it's a P4M 2.0), and trust me, it would take quite a bit to get it up to the max. Your CPU is far newer and can take a lot more without the clock shooting up...
Hope this helps.
Cheers,
George
In daily use: R60F, R500F, T61, T410
Collecting dust: T60
Enjoying retirement: A31p, T42p,
Non-ThinkPads: Panasonic CF-31 & CF-52, HP 8760W
Starting Thursday, 08/14/2014 I'll be away from the forums until further notice. Please do NOT send private messages since I won't be able to read them. Thank you. -
Who has the Unlimited Data Plan and has Been Speed Capped when You went over 5Gig?
Just checking to see how many others have been speed-capped to 200 kbps on the unlimited data plan when they went over 5Gig in a billing cycle? (You can get a one-time reprieve if you call them at 1-888-483-7200.)
Did they ask you to change your plan to 10Gig and Update your Equipment and sign a new two year agreement?
Did you know that Sprint is offering unlimited Data on their 4G network and that Verizon is trying to lock down customers that are not on 2 year contracts? (Gives them time to catch up with the competition) Marcus in Fraud and Investigation told me (when I told him that Sprint was offering Unlimited Data) "...That is a better option..."
I am now looking into other options, along with a lot of the rest of you.
I was instructed to call that Fraud Department number and was treated like I had broken some kind of law, instead of just going over the 5Gig fine-print limit on my grandfathered unlimited account. They would not even consider giving me a one-time do-over for being a MiFi newbie. I am hearing class-action-suit rumblings out there!!
-
Error while configuring Kodo to look up a datasource
We had a working application using EEPersistenceManagerFactory.
I changed kodo.properties to look up a non-XA JDBC datasource.
After that the application works fine (it creates, updates, deletes,
and finds records in the DB), but SystemOut.log shows the following
error for every operation.
We are using Kodo 2.5.0, WebSphere 5.0, and Oracle 8.
How can we avoid this error?
We tried to find a property on the WebSphere datasource that could be
altered to avoid it, but no luck.
Thanks
Paresh
[10/7/03 15:30:45:467 IST] 3d8b2d1a MCWrapper E J2CA0081E: Method
destroy failed while trying to execute method destroy on ManagedConnection
com.ibm.ws.rsadapter.spi.WSRdbManagedConnectionImpl@437f6d23 from resource
<null>. Caught exception: com.ibm.ws.exception.WsException: DSRA0080E: An
exception was received by the Data Store Adapter. See original exception
message: Cannot call 'cleanup' on a ManagedConnection while it is still in
a transaction..
at
com.ibm.ws.rsadapter.exceptions.DataStoreAdapterException.<init>(DataStoreAdapterException.java:222)
at
com.ibm.ws.rsadapter.exceptions.DataStoreAdapterException.<init>(DataStoreAdapterException.java:172)
at
com.ibm.ws.rsadapter.AdapterUtil.createDataStoreAdapterException(AdapterUtil.java:182)
at
com.ibm.ws.rsadapter.spi.WSRdbManagedConnectionImpl.cleanupTransactions(WSRdbManagedConnectionImpl.java:1826)
at
com.ibm.ws.rsadapter.spi.WSRdbManagedConnectionImpl.destroy(WSRdbManagedConnectionImpl.java:1389)
at com.ibm.ejs.j2c.MCWrapper.destroy(MCWrapper.java:1032)
at
com.ibm.ejs.j2c.poolmanager.FreePool.returnToFreePool(FreePool.java:259)
at com.ibm.ejs.j2c.poolmanager.PoolManager.release(PoolManager.java:777)
at com.ibm.ejs.j2c.MCWrapper.releaseToPoolManager(MCWrapper.java:1304)
at
com.ibm.ejs.j2c.ConnectionEventListener.connectionClosed(ConnectionEventListener.java:195)
at
com.ibm.ws.rsadapter.spi.WSRdbManagedConnectionImpl.processConnectionClosedEvent(WSRdbManagedConnectionImpl.java:843)
at
com.ibm.ws.rsadapter.jdbc.WSJdbcConnection.closeWrapper(WSJdbcConnection.java:569)
at com.ibm.ws.rsadapter.jdbc.WSJdbcObject.close(WSJdbcObject.java:132)
at
com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.close(SQLExecutionManagerImpl.java:814)
at
com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.release(JDBCStoreManager.java(Inlined
Compiled Code))
at
com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.load(JDBCStoreManager.java(Compiled
Code))
at
com.solarmetric.kodo.runtime.StateManagerImpl.loadFields(StateManagerImpl.java(Compiled
Code))
at
com.solarmetric.kodo.runtime.StateManagerImpl.preSerialize(StateManagerImpl.java:784)
at com.paresh.core.vo.Release.jdoPreSerialize(Release.java)
at com.paresh.core.vo.Release.writeObject(Release.java)
at java.lang.reflect.Method.invoke(Native Method)
at
com.ibm.rmi.io.IIOPOutputStream.invokeObjectWriter(IIOPOutputStream.java:703)
at com.ibm.rmi.io.IIOPOutputStream.outputObject(IIOPOutputStream.java:671)
at
com.ibm.rmi.io.IIOPOutputStream.simpleWriteObject(IIOPOutputStream.java:146)
at
com.ibm.rmi.io.ValueHandlerImpl.writeValueInternal(ValueHandlerImpl.java:217)
at com.ibm.rmi.io.ValueHandlerImpl.writeValue(ValueHandlerImpl.java:144)
at com.ibm.rmi.iiop.CDROutputStream.write_value(CDROutputStream.java:1590)
at com.ibm.rmi.iiop.CDROutputStream.write_value(CDROutputStream.java:1107)
at
com.paresh.core.interfaces._EJSRemoteStatelessValidation_da16513c_Tie.findCorrectionAction(_EJSRemoteStatelessValidation_da16513c_Tie.java:309)
at
com.paresh.core.interfaces._EJSRemoteStatelessValidation_da16513c_Tie._invoke(_EJSRemoteStatelessValidation_da16513c_Tie.java:104)
at
com.ibm.CORBA.iiop.ServerDelegate.dispatchInvokeHandler(ServerDelegate.java:582)
at com.ibm.CORBA.iiop.ServerDelegate.dispatch(ServerDelegate.java:437)
at com.ibm.rmi.iiop.ORB.process(ORB.java:320)
at com.ibm.CORBA.iiop.ORB.process(ORB.java:1544)
at com.ibm.rmi.iiop.Connection.doWork(Connection.java:2063)
at com.ibm.rmi.iiop.WorkUnitImpl.doWork(WorkUnitImpl.java:63)
at com.ibm.ejs.oa.pool.PooledThread.run(ThreadPool.java:95)
at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:592)
kodo.properties
com.solarmetric.kodo.LicenseKey=
#com.solarmetric.kodo.ee.ManagedRuntimeProperties=TransactionManagerName=java:/TransactionManager
com.solarmetric.kodo.ee.ManagedRuntimeProperties=TransactionManagerName=TransactionFactory
TransactionManagerMethod=com.ibm.ejs.jts.jta.TransactionManagerFactory.getTransactionManager
#com.solarmetric.kodo.ee.ManagedRuntimeClass=com.solarmetric.kodo.ee.InvocationManagedRuntime
com.solarmetric.kodo.ee.ManagedRuntimeClass=com.solarmetric.kodo.ee.AutomaticManagedRuntime
#javax.jdo.PersistenceManagerFactoryClass=com.solarmetric.kodo.impl.jdbc.JDBCPersistenceManagerFactory
javax.jdo.PersistenceManagerFactoryClass=com.solarmetric.kodo.impl.jdbc.ee.EEPersistenceManagerFactory
javax.jdo.option.ConnectionFactoryName=ds/kodo/DataSource1
javax.jdo.option.Optimistic=true
javax.jdo.option.RetainValues=true
javax.jdo.option.NontransactionalRead=true
#com.solarmetric.kodo.DataCacheClass=com.solarmetric.kodo.runtime.datacache.plugins.CacheImpl
# Changing these to a non-zero value will dramatically increase
# performance, but will cause in-memory databases such as Hypersonic
# SQL to never exit when your main() method exits, as the pooled
# connections in the in-memory database will cause a daemon thread to
# remain running.
javax.jdo.option.MinPool=5
javax.jdo.option.MaxPool=10

We do have a makeTransientAll() before the object returns from the Session
Bean.
We also tried the JCA path. After installing the JCA RAR and doing a
lookup for the PersistenceManagerFactory, the same code does not throw
any exception. The exception is thrown only if the datasource is used.
Thanks
Paresh
Marc Prud'hommeaux wrote:
Paresh-
It looks like you are returning a collection of instances from an EJB,
which causes them to be serialized. The serialization happens outside
the context of a transaction, so Kodo needs to obtain a connection,
and WebSphere seems not to like that.
You have a few options:
1. Call makeTransientAll() on all the instances before you return them
from your bean methods
2. Manually instantiate all the fields yourself before sending them
back. You could use a bogus ObjectOutputStream to do this.
3. In 3.0, you can use the new detach() API to detach the instances
before sending them back to the client.
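Option 2 above can be sketched as follows. This is a minimal illustration, not Kodo API code: the `Result` class and its `writeObject` hook are hypothetical stand-ins for a persistent class whose `jdoPreSerialize()` would fire during serialization. Writing the object to a discarding stream (`OutputStream.nullOutputStream()`, Java 11+) forces all serializable fields to be instantiated while the transaction, and hence the connection, is still available:

```java
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.OutputStream;
import java.io.Serializable;

public class PreSerializeSketch {
    // Hypothetical stand-in for a persistent instance; a real JDO class
    // would have its lazy fields loaded when serialization begins.
    static class Result implements Serializable {
        private static final long serialVersionUID = 1L;
        static boolean writeObjectCalled = false;
        private String name = "example";

        private void writeObject(ObjectOutputStream out) throws IOException {
            writeObjectCalled = true; // a real class would trigger field loading here
            out.defaultWriteObject();
        }
    }

    // Serialize the instance to a stream that discards every byte,
    // purely to force field instantiation before returning from the bean.
    static void touchFields(Object o) throws IOException {
        try (ObjectOutputStream oos = new ObjectOutputStream(OutputStream.nullOutputStream())) {
            oos.writeObject(o);
        }
    }

    public static void main(String[] args) throws IOException {
        Result r = new Result();
        touchFields(r);
        System.out.println(Result.writeObjectCalled); // prints "true"
    }
}
```

Called inside the bean method (i.e. inside the container-managed transaction), this makes the later IIOP serialization a no-op as far as database access is concerned.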
In article <[email protected]>, Paresh wrote:
We had a working application using EEPersistenceManagerFactory
I changed the kodo.properties to lookup a non XA JDBC datasource.
After that the application is working fine (it creates
,updates,deletes,finds record in the DB)
but SystemOut.log has the following error for every operation
We are using Kodo 2.5.0, Websphere 5.0 and Oracle 8
How can we avoid getting this error ?.
We tried to find any property on the Websphere datasource which can be
altered to avoid this error but no luck.
Thanks
Paresh
Marc Prud'hommeaux [email protected]
SolarMetric Inc. http://www.solarmetric.com -
Java3D speed collapse caused by other Java apps running at the same time
Hi
I am programming a flightsimulator for some months.
The current state is available online (all free, no copyrights)
at http://www.snowraver.org/efcn/efcnsim/index.htm
The sample (with source) that shows the behaviour prompting
my post is here:
http://www.snowraver.org/efcn/efcnsim/page2.htm
My Problem:
When I start the sim while two other Java programs
(one a server running on localhost, one a client)
are running, the flight sim is very slow:
one frame update takes 3 to 5 seconds.
(3 java.exe's in the task list, plus 1 for the IDE.)
When I start the flight sim ALONE, I get 30 to 40 frames per second.
(2 java.exe's in the task list: the flight sim and the IDE; no problem here.)
That means the flight sim is about 100 times slower when
started while the other two apps are running.
BUT the other two applications do almost ***NOTHING***; their
CPU load is 1 or 2 percent.
Of course they have threads running, but all are waiting
for a signal; no thread really consumes CPU power.
Interestingly, when I FIRST start the flight sim and AFTER THAT
start the two other applications, the flight sim
holds 30 frames per second without problems, even
though the other applications consume some CPU power
until they have completely started up.
Configurations:
JSDK 1.4.2_1 , 0_2..
Java3D 1.3.1 OPENGL (The DirectX version crashes with D3D device lost)
Win2000,XP CPU 800MHz upto 3 GHz
From my point of view, the Java3D thread scheduler makes
some odd decisions when it starts up, which lead
to the order-dependent behaviour described above.
My question is whether anyone has ideas how I could
avoid this speed collapse.
I guess the problem is caused in native code.
I could also imagine that it has something to do with
the order in which one creates, attaches and starts
the Canvas3D (which could produce race conditions).
The flight sim runs in full retained mode. Of course
the CPU work in the behaviours is rather big, because
the ROAM triangulation update (..) is done there
and the triangles are recalculated and passed
(all BY_REFERENCE).
Or could it have something to do with memory
consumption (when everything runs, almost all of
the 512MB RAM is taken by the three java.exe's)?
Any hints or ideas? :)
No, Sun does handle it [lol]
I have just tested it on my computer at work
(3GHz HP Compaq, 1GB RAM and an Intel 82865G graphics
card with 64MB memory, Windows XP)
and it worked without problems any way I tried.
(Except for exclusive fullscreen mode, but I guess the administrators
have deactivated it somehow so we don't play games at work. :)
I couldn't test it under Linux so far, but I think this will be less
problematic than Windows [usually].
However, my current assumption is:
I had totally forgotten the [limited] video card memory.
I suppose Java3D tries to put all triangle data and all
textures into the video card's memory, so most data processing
can then be handed off to the graphics card's GPU using
OpenGL commands.
Now the flight sim produces a varying amount of (by-reference) triangle data (a few thousand triangles)
and has some texture maps for the terrain, the sea and other things,
plus indexed triangle data for the planes and ships.
The notebook system which slows down has an ATI Mobile Radeon card
with only 32MB RAM onboard, whereas the others have 64MB.
An additional pointer to that theory is that I can trigger the slowdown by resizing
the flight sim window while it is running.
On the notebook, it holds 30fps until the window exceeds a size of 962*862 pixels.
At this size the speed collapses to one frame update every 4 seconds.
If I make the window a few pixels smaller, the 30fps is immediately
back.
Therefore I guess that some data passed to the graphics card's memory depends
linearly on the Canvas3D's window dimensions, and at some limit
the graphics card's memory is too small, so Java3D changes its strategy
and performs most calculations in the computer's main memory,
which of course is a lot slower.
I'm not very sure about that; I'm just speculating.
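The linear-in-window-size theory can be backed up with some rough arithmetic: the framebuffers alone scale with the pixel count. A sketch, assuming RGBA color buffers and a 32-bit depth buffer (the buffer count is an assumption; the actual layout depends on the driver and Java3D's configuration):

```java
public class FramebufferEstimate {
    // Rough per-buffer size of an RGBA framebuffer: width * height * 4 bytes.
    static long bufferBytes(int width, int height) {
        return (long) width * height * 4;
    }

    public static void main(String[] args) {
        // The window size at which the notebook's 32MB card collapses,
        // taken from the post above.
        int w = 962, h = 862;
        // Double-buffered color plus a depth buffer: roughly 3 buffers.
        long totalBytes = 3 * bufferBytes(w, h);
        System.out.println(totalBytes / (1024 * 1024) + " MB");
    }
}
```

Roughly 9MB of framebuffers at that window size, before any textures or by-reference geometry are counted, so a 32MB card running out of room at around 962*862 is at least plausible.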
I'm not very sure about that, I'm just speculating.
The next thing I will try is to disable DirectDraw for the other two applications;
possibly Swing also uses graphics card memory when DirectDraw is enabled.
The solution seems clear anyway: the flight sim must examine the system
and set some parameters depending on the machine's capabilities.
Onboard video RAM is one of them. If rendering is too slow, I start to decrease
the window size and expect a sudden increase of speed as soon as
the rendering can be done by the graphics card's GPU. If this never happens,
I assume no OpenGL accelerator is present on that system. This can be seen as a method
for finding out the amount of video card memory on a system by trial and error ..?:)
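That probing idea can be sketched as a small loop: shrink the window and re-measure the frame rate until it recovers. This is only an illustration of the trial-and-error strategy described above; the fps probe is injected as a function so the logic runs without a renderer, and the step size and threshold are made-up values.

```java
import java.util.function.IntUnaryOperator;

public class AdaptiveWindowSize {
    /**
     * Shrinks the window width until the measured frame rate recovers,
     * mirroring the trial-and-error probing described in the post.
     * The fps probe is injected so the logic can be tested standalone.
     */
    static int findWorkableWidth(int startWidth, int minWidth,
                                 int targetFps, IntUnaryOperator fpsAt) {
        int width = startWidth;
        while (width > minWidth && fpsAt.applyAsInt(width) < targetFps) {
            width -= 32;  // step down a little and re-probe
        }
        return width;
    }

    public static void main(String[] args) {
        // Simulated card: fps collapses above 962 pixels wide, as observed
        // on the notebook in the post.
        IntUnaryOperator simulated = w -> (w > 962) ? 1 : 30;
        int width = findWorkableWidth(1280, 320, 25, simulated);
        System.out.println(width);
    }
}
```

In a real Canvas3D setup the probe would render a few frames at each size and time them; the sudden jump from ~1fps to ~30fps marks the point where everything fits in video memory again.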
Thanks for your tips, Alain.
I especially have to check out the data sharing class in 1.5. -
I have a mid 2009 13 inch unibody 2.53GHz MacBook Pro. I'm finding that it doesn't run as quickly as it used to.
A genius in the Apple store suggested that I replace my optical drive with an SSD, however only use the SSD for OSX, applications, system and library. Keep all documents, pictures, music etc on the current hard drive.
I would be grateful if someone could help me with:
1) installing OSX on the SSD without copying across data from the current hard drive
2) transferring applications, system and library folders across to the SSD so that they still function
3) changing my settings so that OSX reads the home folder from the current hard drive, as well as all the applications' data (documents, music etc...)
However, I would like to run iMovie, with all events etc solely from the SSD to speed up the process of editing movies.
If anyone could help with this, it would be much appreciated.
If you got the data transfer cable with your SSD, the procedure should be pretty simple, and there should be step-by-step instructions in the box. You're simply going to remove the bottom case of your computer (using a Phillips #00 screwdriver), take out the two screws in the bracket holding the hard drive in place (same screwdriver), remove the drive and (using a Torx 6 screwdriver) remove the four screws that hold the hard drive in place. Then put in the SSD and reassemble the machine.
Then you'll plug in the old hard drive using the SATA to USB cable and use the option key to boot from the old drive. I don't know what data transfer software Crucial provides, but I would recommend formatting the SSD using Disk Utility from your old drive ("Mac OS Extended (Journaled)" with a single GUID partition) and then using Carbon Copy Cloner to clone your old drive to your new SSD (see this user tip for cloning: https://discussions.apple.com/docs/DOC-4122). You needn't worry about getting an enclosure since you have the data transfer cable and you don't want to use your old hard drive.
There are a number of videos on YouTube that take you step-by-step through this procedure, many specific to Crucial SSDs and their data transfer kit; do a little searching there if you're unsure of how to proceed.
Clinton -
Many times my computer takes too long to connect to a new website. I have wireless internet (Time Capsule) and I am running a pretty powerful real-time financial work program at the same time. What is the best solution? Upgrading the speed from the cable network? Is it a hard drive issue? Do I only need to "clean out" the computer? Or all of the above? Not too computer savvy. It is a MacBook Pro OS X 10.6.8 (late 2010).
Almost certainly none of the above! Try each of the following in this order:
Select 'Reset Safari' from the Safari menu.
Close down Safari; move <home>/Library/Caches/com.apple.Safari/Cache.db to the trash; restart Safari.
Change the DNS servers in your network settings to use the OpenDNS servers: 208.67.222.222 and 208.67.220.220
Turn off DNS pre-fetching by entering the following command in Terminal and restarting Safari:
defaults write com.apple.safari WebKitDNSPrefetchingEnabled -boolean false