Windows 7 in a RAID Environment
Hi Guys,
I know Win7 doesn't install on RAID configs, which is fine. I have it installed on an SSD of its own. But I can't seem to get it to boot up when my other internal drives (striped together) are connected. Anyone know why? I couldn't get Win7 to install initially either, until I removed all the RAIDed internal drives.
System currently set up as follows:
MacPro(5,1)
2 x PCIE SSD (striped) for OSX and Apps
4 x 2TB HDDs (striped) for Data
1 x SSD for Win7
Currently running Mavericks (10.9.4).
I don't need access to the other drives when I'm in Win7.
Is there a proper way to go about getting Win7 installed in this environment? Any help is appreciated!
If I understand correctly you have an existing cabled ethernet network to which you have cabled the Airport Express, and the problem is that a wireless client computer connected to the Airport Express cannot "see" computers on the cabled network (and vice versa).
This problem is easily solved by disabling the router built into the Airport Express. To disable the Airport Express router - run the Airport Admin Utility, and uncheck the setting to "Distribute IP addresses" found under the Network tab. Update settings to the Airport Express. Note that any wireless clients of the Airport Express MUST then have an IP address on the same IP subnet as the computers on your cabled network, so you must either have (a) a DHCP server on that network to provide the IP addresses automatically or (b) you must also manually assign static IP addresses to your wireless computers as well.
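As a quick sanity check for option (b), a few lines of Python's standard ipaddress module can verify that a manually assigned address really falls on the wired subnet (the 192.168.1.0/24 network and the sample addresses below are placeholders; substitute your own network's values):

```python
import ipaddress

# Hypothetical wired subnet - replace with your own network's values.
WIRED_NETWORK = ipaddress.ip_network("192.168.1.0/24")

def on_wired_subnet(client_ip: str) -> bool:
    """Return True if the client's address falls inside the wired subnet."""
    return ipaddress.ip_address(client_ip) in WIRED_NETWORK

print(on_wired_subnet("192.168.1.50"))  # True - this client can see wired hosts
print(on_wired_subnet("10.0.1.7"))      # False - wrong subnet, needs reconfiguring
```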
Similar Messages
-
BPM Workspace doesn't find my process in Windows 7.
Hi All,
I have installed BPM Suite 11.1.1.5 in both Windows 7 and Linux environments.
In the Linux environment I am able to see the process in BPM Workspace. However, in Windows 7, BPM Workspace doesn't list my process.
Is BPM Suite 11.1.1.5 not recommended for Windows 7 installation?
Thanks
Rajesh.
Hi Desmukh,
JDev worked fine. I am talking about BPM Workspace.
I have installed BPM Suite 11.1.1.5 and deployed a BPM process. Next I logged onto BPM Workspace as the admin user (weblogic) and assigned a test user to the process role.
Now when I log in to BPM Workspace as the test user, I am unable to see the process listed in BPM Workspace.
Thanks
Rajesh. -
Has anyone tried a Western Digital SATA RAID hard drive on a non-RAID controller?
Because of a compatibility problem, I want to try a different brand of SATA II drive on my non-RAID motherboard. The WD RAID drives have great specs, but I don't know if they will work. WD says they are ONLY for RAID and not to put them on a non-RAID board. My theory is that they will be more reliable over the long term because they are heavier-duty drives.
Has anyone tried this, and if so, what were the results?
I think there was some confusion about what I am asking. It is definitely not a duplication of my other post, and some of the information in that other post is incorrect, because I JUST found out that my motherboard controller IS a RAID controller even though I am not using it as one. I will correct that when I am finished with this post.
The other post was about a specific model of Maxtor drive that doesn't work on my motherboard. This post is about Western Digital. The only reason I even mentioned the word RAID in my previous post is that there is a known compatibility problem between RAID controllers and that drive. I wanted to emphasize that my problem was not that known compatibility problem, but because it is a RAID controller, it might be.
When you use the motherboard's controller as a non-RAID controller, you do format the drives individually, not as a RAID. I know that because I did it that way, not realizing I even had the option of using RAID on this board. That works fine on one model of Maxtor hard drive and does not work at all on the other model of Maxtor hard drive.
What I want to know now is: will the Western Digital RAID drives work on a non-RAID controller? The controllers on my add-in card are not RAID. Also, will that drive work on my motherboard's controller even though it is not being used as RAID?
Here is a link to the WD site's description of one such drive. There are several other WD RAID drives; this is just one example... http://www.wdc.com/en/products/Products.asp?DriveID=238&Language=en It does not state whether or not they will work well in a non-RAID environment. -
Activating TRIM feature on RAIDed drives?
Hi Apple community.
I have 4 Samsung SSD 840 series drives in my bay. One I use as the system drive. The other 3 are RAIDed via software RAID. I recently downloaded both Chameleon SSD Optimizer and Trim Enabler (still not sure which I will use yet) to ENABLE the TRIM feature on my Mac Pro.
My question is: should I enable TRIM on all drives or just the system drive?
TIA
-terry
You leave out a critical item: what the drives are used for and why.
Most use 3-4 SSDs for scratch editing; even CS6 still uses scratch.
Many would put them on two PCIe controllers like the Sonnet Tempo Pro SSD, which is $300 and gets you near 2 GB/sec fast I/O, speeding up editing of large files.
And consider the size of the devices. 240 GB drives are better - more queue depth.
RAID members should of course all be the same size, firmware, etc.
Intel motherboards and controllers do not support TRIM in a RAID environment; though Intel has been looking into it, with beta drivers around for ages, it has not happened. Anyone doing so, I would say, is taking a risk.
But with SSDs you always need to be prepared to restore.
Data drives seem to do better than system drives, too.
And TRIM might have one more value: having it on your maintenance boot drive so you can run Disk Utility and let it invoke the TRIM command.
TRIM is, or was supposed to be, part of the SATA 3.x specification, rolled into the SATA Native Command Queuing instruction set.
SSDs have probably been one of the most complex technologies I've followed, with development and evolution taking longer than I ever expected, and new features often breaking things and causing more work and testing at each stage. Personally: it was summer '08 when we began to see the Intel X25 and drives from OWC and others and started using them in our Macs, and it was not until the very end of 2010, with the SandForce 2.0 firmware, that I jumped in. And now SandForce is often pointed to as unreliable (though Intel customizes the firmware and, as an OEM, has to do more testing and R&D than anyone).
Two years ago, around May 2011, SATA3 SSD firmware was in a heap of trouble and a messy soup. -
Hard disks, partitions and Mac Pro
Hi all,
I'm new to the Mac Pro and am migrating from a Windows environment. I have used an iBook for the past 6 months, and this is the reason for totally migrating to a Mac desktop.
I have spent half a day browsing the forum here - very confusing, I must say, but then again I am an absolutely non-technical person.
I have a few questions on storage which I hope might get answered here;
My Hardware:
I am about to order a 2.66 GHz Mac Pro with 4 GB RAM, an X1900, and a 160 GB hard disk, which will be replaced as soon as the Mac Pro enters my dwelling.
My Work Environment:
Apart from the basic office things, Internet, Mail and such I will use the mac Pro 90% for Imaging purposes. Most used applications:
- CS2
- Aperture
- Adobe Lightroom
as well as a number of add-ons like HDR programs, Noise programs, RAW converters etc.
I am a photographer working with medium-format and standard-format images, which in some cases, like when using HDR, can reach sizes of well over 150-200 MB per image!
Usually the size of a 16-bit medium-format TIFF is around 70 MB.
Now for the question, how to configure the best possible HD setup.
I will use WinXP, which I would like to run on a separate partition/disk.
I need a scratch disk for CS2.
I need roughly 500 GB of intermediate storage for images before they periodically get backed up to an external hard disk.
So I was thinking the following, and this only based upon how I used my Win Environment:
150GB Raptor as start-up disk and location for programs and OSX as well as basic documents and files
76GB Raptor as Scratchdisk
500GB as WinXp disk (30%) and rest as storage for Image cataloging system (mainly normal sized Jpegs)
750GB as Image tank
I have the Raptors, but don't have to use them if a better solution comes out, since I will be using my Win PC until everything works satisfactorily on the Mac...
some remarks:
I have read about RAID; both 0 and 1 seem interesting, but I will lose out on disk space possibilities and will be forced to work with external drives again, which I don't like... but if push comes to shove I will do that.
Sorry for the information overload; maybe all points have been answered in other threads before, but whilst browsing through hundreds of threads I couldn't see the forest for the trees anymore.
One last comment, I make my living with photography so a good setup is of vital importance ....
Kindest regards, and many thanks in advance.
Hi,
Now for the question, how to configure the best
possible HD setup.
IMO: use a 3-drive RAID 0 for your boot, application, and data partition - my style would be all in one big partition, and just segregate stuff with folders. I would then add 2 USB or FireWire drives: one for backing up your system, and one for Windows XP if you just have to have Windows running on your MacPro. They don't run at the same time, you know - unlike on the Amiga, where MacOS, Windows, and AmigaOS all multitasked together in shared memory space simultaneously. As such, my opinion is that it's better to keep your WinTel box on a KVM switch and LAN the two together.
I will use WinXP, which I would like to run on a separate partition/disk.
I need a scratch disk for CS2.
With a 3-drive RAID your disk I/O is between 3 and 20 times the speed of a single-drive setup, so a separate cache drive isn't really advantageous. Click on the link below and scroll down to the Drive Test area:
http://db.xbench.com/merge.xhtml?doc1=199338&doc2=195252
Both are my profiles: one with a single drive, and one with a 3-drive RAID. No other differences.
I need roughly 500 GB of intermediate storage for images before they periodically get backed up to an external hard disk.
So I was thinking the following, and this only based
upon how I used my Win Environment:
150GB Raptor as start-up disk and location for
programs and OSX as well as basic documents and
files
76GB Raptor as Scratchdisk
500GB as WinXp disk (30%) and rest as storage for
Image cataloging system (mainly normal sized Jpegs)
750GB as Image tank
I have the Raptors, but don't have to use them if a better solution comes out, since I will be using my Win PC until everything works satisfactorily on the Mac...
IMO the Raptors are overpriced. If it were me, I would sell them. I think the two together will pay for all three 320 or 300 GB RAID drives. I did some research regarding which drives are good in a Mac RAID environment, and I ended up going for the MaxLine Maxtor drives (even though I usually hate Maxtor). So far I'm very happy with them, and the fact that they are among the most inexpensive drives you can get didn't hurt either.
I think that if a person REALLY needs the speed offered by
Raptor drives and is willing to pay the difference for it
then they should just get a dedicated RAID card which offers
even more of a speed increase!!!
some remarks:
I have read about RAID; both 0 and 1 seem interesting, but I will lose out on disk space possibilities and will be forced to work with external drives again, which I don't like... but if push comes to shove I will do that.
You don't lose any space if you use just RAID 0. If there are three 300 GB HDs that format individually to 260 GB each, then in a RAID 0 configuration you would have 780 GB. -
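The RAID 0 capacity arithmetic in that answer can be sketched in a few lines (a toy calculation, not tied to any particular RAID implementation):

```python
def raid0_capacity(formatted_sizes_gb):
    """Usable RAID 0 size: the smallest member times the member count.

    RAID 0 stripes data evenly across all drives, so each member can
    contribute only as much as the smallest drive in the set.
    """
    return min(formatted_sizes_gb) * len(formatted_sizes_gb)

# Three 300 GB drives that each format to about 260 GB:
print(raid0_capacity([260, 260, 260]))  # 780
```

Mixing sizes wastes space - raid0_capacity([260, 300, 280]) is still 780 - which is why the advice elsewhere in this thread to keep all RAID members the same size and firmware holds.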
Error on Device Access API class/interface import
Hi,
I have followed the https://apex.oracle.com/pls/apex/f?p=44785:141:128148408213710::::P141_PAGE_ID,P141_SECTION_ID:144,1032#prettyPhoto/1/ video tutorial to set up the Java ME Embedded development environment on my Windows system, and I have chosen the Raspberry Pi as the embedded platform for ME applications.
I thought of experimenting with the Pi's GPIO header to control an LED through a switch. But the import statement for com.oracle.deviceaccess.PeripheralConfig gives an error in the NetBeans IDE, and there is no Java ME library containing this interface in the ME SDK installation directory that I can include in the project classpath to get rid of the error.
Where can I download the JAR for Device Access API?
Please suggest…
Thank you.
Thank you for your reply.
I could successfully execute the 'blinking LED' application on the Raspberry Pi. I did this using the DeviceManager class and the GPIOPin interface present in device-io_1.0.jar, which came with the ME SDK installation (C:\Java_ME_platform_SDK_8.0\lib).
But https://apex.oracle.com/pls/apex/f?p=44785:141:10585690084130::::P141_PAGE_ID,P141_SECTION_ID:144,1033#prettyPhoto/2/ demonstrates the same application using classes and interfaces present in the com.oracle.deviceaccess package (e.g., com.oracle.deviceaccess.PeripheralManager, com.oracle.deviceaccess.gpio.GPIOPin), and the import of the same is not working in my IDE (compile-time error).
I have used below software installers in a Windows7 system for development environment set up:
Java SE SDK: jdk-8u11-windows-x64.exe
Java ME SDK: oracle-jmesdk-8-0-rr-win32-bin.exe
NetBeans all-in-one bundle: netbeans-8.0-windows.exe
NetBeans plugins for Java ME: oracle-jmesdk-8-0-rr-nb-plugins.zip
I have used only the above installers. Have I missed anything during the development environment setup?
Please suggest further…
Thanks -
Using a shared networked drive
We have been trying to use a NAS drive to store project files in a central location that can be accessed by all systems in a shared RAID environment. (This way all editors can read and write to the same volume, as opposed to the problems that happen with projects living on read-only drives, forcing you to create alternate versions.) We are having 2 issues.
1) The modification date on the project file does not change. It does not change even when the file is accessed and saved in the application, so the only way to track the modification date is to manually enter the info in the file name
(ex. Project_111408)
2) At times during the day, Final Cut Pro will not be able to save to the NAS. The error "File Error: File Unknown" appears. The only workaround is to save the file locally to the desktop and manually copy it to the NAS at the Finder level. If one closes the application and restarts it, then it will work again... perhaps for a while, until the next time.
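A small script can confirm symptom 1 from outside the application: write to a file on the NAS volume and check whether its modification time advances (a Python sketch; the path you pass in would point at a test file on the NAS):

```python
import os
import time

def mtime_advances(path: str) -> bool:
    """Append to a file and report whether its modification time moved forward.

    On a healthy volume this returns True; on the NAS described above,
    the modification date reportedly fails to update.
    """
    before = os.path.getmtime(path)
    time.sleep(1)                  # step past filesystem timestamp granularity
    with open(path, "a") as f:
        f.write("x")               # simulate a save
    return os.path.getmtime(path) > before
```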
So there are apparently some issues with writing to files on OS 10.4.11 that prevent it from updating the modification date and, in the case of Final Cut Pro, leave it unable to locate the NAS. Any ideas?
There is another possibility here that I've been experimenting with. I've been using Dropbox to store and back up my project files. This is an automatic syncing service that can be used on multiple machines. I use shared folders for multiple machines. When a file is added or modified in your Dropbox, it is automatically synced to the Dropbox server, and also to any other machine that has access to it.
What prompted me to experiment with this was a question from a co-worker who asked if it was possible to automatically copy a modified file to a separate location for backup purposes. So far, this method has been working quite well. I've been working on some projects at home for the past couple of days, and was able to access all of my project files. In fact, yesterday I had a co-worker at work open a project file that I had just finished. After reconnecting the media to their Xsan, he was able to export my project for me.
The only problem I can foresee is that if a project file gets corrupted for some reason, that corruption is also automatically synced. Which is why it's also good practice to keep autosaves stored in another location. Hope this helps. -
Raid0 on k8n neo2 platinum... what am I missing?
Hi all
Just got 2 Seagate 80 GB SATA drives to play with RAID.
I'd like to set up a Raid0 array.
After enabling Sata and Raid on the mobo, I start Nvidia Raid Utility to create the array. No problem with that.
I start Windows from the CD-ROM, press F6, and install both the "NVIDIA RAID CLASS DRIVER" and the "NVIDIA nForce Storage Controller", as the manual says. I press Enter to continue... but after that the system hangs on the "setup is starting windows" screen, just before the partition & format step.
If I install only the NVIDIA RAID class driver, the setup continues but formats only one 80 GB drive and gives a Read Error after rebooting.
If I install only the NVIDIA storage controller, the setup does not recognize my array.
What can I try?
Hope you understand my problem, my english is poor...
Thanks all!
Bye, Francesco
Quote from: MurdoK on 16-June-06, 17:29:42
Hello !!
Which SATA connectors are you using? Are the drives really identical? Check the firmware versions on the labels on the HDDs.
Are you installing the latest drivers? Did you order the 2 HDDs as the first boot drives in the BIOS Boot Order Sequence?
Please add more hardware info, and the amperage on the power supply's 12V rail. It's on a label on the PSU.
Greetz
Hi all
thanks for your replies.
I'm using SATA3&4 (the ones near the cpu) but the same happens with SATA1&2.
The drives are 99% identical, I'll re-check this thing tonight when I go home, since now I'm at work.
My boot sequence is: floppy - cdrom - hard disk. I've read that I should check something like "Hard Disk Boot Priority List" and will look at this... but what's that? I don't know :D
For the PSU, it's a Tagan 380W and has 22A on the +12 rail.
I've also read that a WinXP SP2 boot CD won't install in a RAID environment and that it's much better to use an SP1 boot CD... is that possible?!?
Thanks, bye!
Francesco -
Massive accumulation of byte[]s
Here's hoping someone has some insight on the following problem:
We're running a series of environments that are smallish -- they'll be a few gigabytes in size when fully populated -- but we're having problems getting past the (say) 200 MB barrier. We can currently run our system for about half an hour, adding documents at the rate of about ten to fifteen a second (no queries) per thread, using ten threads. The keys for this database come in two forms: one is just a long that is almost always monotonically increasing (not guaranteed); the other is that same long plus a String. There are 10 of the long/String keys for every long key. The value for the long key is a serialized object with about 10 instance fields; for the latter it's a series of String values.
Here's what we see when the machine starts dying:
* It's associated with garbage collection, specifically with full gc. Typically, it happens when a full GC is triggered. -verbose:gc gives us a time of around 1.0 - 1.5s for the collection but it's obvious, looking at a profiler, that the actual time taken is closer to 30s. I've tried tuning the young/tenured and eden/survivor ratios, but nothing seems to work (details available if you want them.) This behavior doesn't -always- happen with a full gc, but a full gc is going on every time we see the problem.
* The runnable process count goes through the roof. It's usually 1-2; it spikes to around 165-200. It stays in this condition for anywhere between 20 seconds and a minute and, when it's done, immediately drops back to 1-2 within three seconds.
* The number of context switches and interrupts drops and stays low. CS drops by an order of magnitude; interrupts by 10-20%.
* System time hits ~95% and stays there.
* The machine is unresponsive, even to ls, less, etc.
* There's an ever-increasing number of byte[]s all through the run. We do eventually get out of memory exceptions, but the GC problems and the spike in runnables also occur well before we run out of memory.
* If we remove the code that adds the long/String keys (i.e., if we reduce the amount of data written to 1/11th its normal amount) the problem still happens, but it takes much longer. Also, the number of byte[]s written is dramatically lower -- but still ever-increasing.
I've already tried waiting until all cursors and transactions are closed/committed, then calling sync() and evictMemory(), as per these threads:
OutOfMemoryError: Java heap space [Lucene + BDB JE transactions]
OutOfMemoryException when i do data insertion continuously
We do not have any large transactions.
Environment:
* java -version: Java HotSpot(TM) 64-Bit Server VM (build 1.5.0_09-b01, mixed mode)
* uname -a: Linux bubba 2.6.15-1-amd64-k8-smp #2 SMP Tue Mar 7 21:00:29 UTC 2006 x86_64 GNU/Linux
* JE 3.2.13
* Machines are dual CPU, dual-core Opterons @2.0GHz, 8Gb RAM, 10k disks on SATA-II, no RAID
* Environment is transactional, allowCreate, txnWriteNoSync = true, 10Mb cache size (too low, but increasing it had no effect)
* Databases are transactional, allowCreate
* All writes are performed with transactions and all transactions are explicitly committed or aborted
Help is much appreciated. This is a blocking issue for us (because the machine is unresponsive, it crashes everything in our app and affects other machines as a result.) I've been beating my head against it -- to no avail.
Thanks in advance for whatever help you can offer. I'm stumped (again).
-j
Mark, Linda and all,
I spoke too soon when I said I found and solved the problem with the number of byte arrays and the massive number of runnable threads. It turns out that there were two problems -- I had a memory leak in my own code that didn't release the bytes, and there's another problem that seems to involve JE and is related to the number of threads in the runnable queue. ALL of the symptoms I described above -- the number of threads in the runnable queue, the number of context switches and interrupts, the system time stats, etc. -- with the exception of the number of byte arrays, are accurate descriptions of the second problem. In other words, they're not related to the problem with my memory leak, and I'm hoping you can lend a hand.
I have a test I can run outside our application that reproduces the problem with the latest 3.2 build, though it occurs at random and is worse on some machines than others. The test allocates enough environments, each with a 10 MB cache, to fill a certain percentage of the heap; that means hundreds of environments on our largest machine. Each environment has a single database, and the test tries to add 50 million 2-5 KB records in parallel. Obviously you wouldn't do this in a "real" application, but we see the same problem this test produces in our "real" code, which uses only ~25 environments and ten threads adding data concurrently.
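The sizing of that test is simple arithmetic - how many fixed-size caches it takes to fill a given fraction of the heap (a sketch; the 4 GB heap figure below is only an illustration, not taken from the post):

```python
def env_count(heap_mb, cache_mb, fill_fraction):
    # Number of environments whose caches together fill the target
    # fraction of the heap.
    return int(heap_mb * fill_fraction // cache_mb)

# e.g. a 4 GB heap filled to 95% with 10 MB caches:
print(env_count(4096, 10, 0.95))  # 389
```

At 95% fill the count climbs into the hundreds, which matches the "hundreds of environments" figure in the post; the poster notes the pathology appears around that 95% threshold but not at 85%.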
To answer some of your comments above:
* All my tests are using the latest 3.2 build. Each test is started cleanly, so no data from previous runs exists beforehand (avoiding the problem with the log file formats.)
* I can reproduce the problem on Xeon as well as Opteron machines. I don't have access to a single-core machine, so all the tests have been run on machines that are dual-core, dual-CPU or both. One of our Xeon machines really, really likes producing this problem. It has no problems running anything else, however. I'm in the process of validating the hardware now.
* I tried using the je.evictor.lruOnly=false setting and had the same results
* The problem becomes worse when more of the heap is used. If I allocate enough Environments to use 85% of the heap as cache, I see lots and lots of blocked threads, but none of the high runnable-count behavior. If I bump that up to 95% I start seeing problems
* I can't get a stack dump of any sort -- the machine is so locked up that the kill -3 executes AFTER the machine has relaxed. Using nice, writing to ramdisks, writing to other machines via netcat -- none of them helped us get a stack dump. Running vmstat before the machine locks works, but it seems like there's no way to fire up a new process.
Obviously we're not using JE in the way suggested by the docs -- in particular, we're running with 24 environments and that doesn't use resources efficiently. We're willing to live with the inefficiency. The real problem here is that the machine occasionally locks so hard that I can't even use 'ls' or 'kill'. I'd be hard-pressed to purposely write something in Java that does that.
I have full logs with -verbose:gc and -XX:+CITime set on the vm; the logs include stats dumps from the databases and environments, and the output of vmstat at five second intervals. I can also provide the test that reproduces the problem.
As I've said, any help is much appreciated. If we're doing something wrong, I would like to know about it. Thanks for your help!
-j -
Anyone running 2009 MP with SSD without TRIM?
Does anyone have experience of running a SSD on a 2009 MP, without TRIM, without issues....if so for how long and which model?
Thx
OS X should by this time, but does not, implement TRIM across the board, even though SSDs have been around for 4 yrs and TRIM has been supported in Windows since the Windows 7 beta over 3 yrs ago. Is it only a matter of throwing a switch to support non-Apple drives? That seems to be what TRIM Enabler suggests. Not if Apple is using proprietary firmware.
The SATA 3.1 specification's Queued Trim Command allows SATA SSDs to execute TRIM without impacting normal operation...
http://www.sata-io.org/technology/6Gbdetails.asp
Wikipedia SSD: "includes support for the TRIM command to reduce garbage collection for data which the operating system has already determined is no longer valid. Without support for TRIM, the SSD would be unaware of this data being invalid and would unnecessarily continue to rewrite it during garbage collection causing further wear on the SSD."
https://en.wikipedia.org/wiki/Solid-state_drive
https://en.wikipedia.org/wiki/TRIM
http://en.wikipedia.org/wiki/Write_amplification
I've seen the Crucial m4 both loved and unloved. Systems vary like people, it seems.
Samsung is top tier - Apple uses them, and they are highly rated - so I would have no trouble recommending them:
http://www.amazon.com/SAMSUNG-2-5-Inch-Internal-MZ-7PC128B-WW/dp/B0077CR60Q/
http://www.amazon.com/Kingston-SSDNow-120GB-SVP200S3-120G/dp/B006YLTY2O/
http://www.amazon.com/Kingston-SSDNow-240GB-SVP200S3-240G/dp/B006YLTLUY/
I have half a dozen Corsair, none with firmware bugs all are older SATA2 models.
The newest looks or sounds good and 180GB is a good value size
http://www.amazon.com/Corsair-Force-240GB-2-5-Inch-CSSD-F240GBGS-BK/dp/B008FSYUO6/
MacRumors post: one of the best tutorials on SSDs and TRIM along with BGC.
Garbage Collection (GC) is the process of relocating existing data, deleting stale data, and creating empty blocks for new data
All SSDs will have some form of GC – it is not an optional feature
NAND flash cannot directly overwrite a page with data; it has to be first erased
One full block of pages has to be erased, not just one page
GC starts after each page has been written one time
Valid data is consolidated and written into new blocks
Invalid (replaced) data is ignored and gets erased
Wear leveling mainly occurs during GC
The OS tracks what files are present and what logical blocks are holding the files
SSDs do not understand the file structure of an OS; they only track valid data locations reported by the OS
When the OS deletes a file, it marks the file’s space in its logical table as free - It does not tell the drive anything
When the OS writes a new file to the drive, it will eventually write to the previously used spaces in the table
An SSD only knows data is no longer needed when the OS tells it to write to an address that already contains data
How Trim Works -
The OS sends a TRIM command at the point of file deletion
The SSD marks the indicated locations as invalid data
TRIM Features:
► Prevents GC on invalid data
► Increases the free space known to the SSD controller
TRIM Benefits:
► Higher throughput – Faster host write speeds because less time writing for GC
► Improved endurance – Reduced writes to the flash
► Lower write amplification – Less data rewritten and more free space is available
• TRIM does not generally work behind a RAID environment
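The effect TRIM has on garbage collection can be illustrated with a toy model (the page counts are made up; real controllers work at block granularity with wear leveling, as the list above notes):

```python
def gc_page_copies(valid_pages, os_deleted_pages, trim_enabled):
    """Pages the controller must copy forward during garbage collection.

    Without TRIM, the drive still treats OS-deleted pages as valid and
    rewrites them; with TRIM, they were marked invalid and are skipped.
    """
    if trim_enabled:
        valid_pages = valid_pages - os_deleted_pages
    return len(valid_pages)

written = set(range(100))   # 100 pages the drive has seen written
stale = set(range(60))      # 60 of them since deleted by the OS
print(gc_page_copies(written, stale, trim_enabled=False))  # 100
print(gc_page_copies(written, stale, trim_enabled=True))   # 40
```

Fewer copied pages means less rewriting and less wear - the higher throughput and improved endurance listed under "TRIM Benefits" above.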
Anandtech 2009 SSD tutorial - http://www.anandtech.com/show/2738/5
Keeping 15-20% free will help (or figure out how much you write to the drive in a month); it seems that getting to 90% full can result in problems, whether with DuraWrite or any other controller and firmware. -
I am planning on adding a 500GB internal drive to my Power Mac G5 / 2GHz Dual machine (Mac OS 10.4.8). It would be used either as a boot or a storage drive.
I have a Maxtor MaxLine 300GB (7V300F0) in the second bay but I want to replace the Maxtor 150 GB (boot drive) that originally came with the Power Mac.
I am looking at the following drives:
- Maxtor MaxLine 500 GB (7H500F0)
- Western Digital Caviar SE16 (WD5000KS)
- Western Digital RE2 (WD5000YS)
I am leaning towards the Caviar but it only has a 3 year warranty and no published MTBF information.
The RE2 has a 5 year warranty, but it has TLER and NCQ.
What are TLER and NCQ, and do I really need them? I also read somewhere something about needing to turn off TLER in a non-RAID environment. How is that done?
Thanks.
PS -- I have included below the links to the PDF spec sheets for the above drives as reference.
Maxtor MaxLine 500 GB
http://www.maxtor.com/files/maxtor/en_us/documentation/data_sheets/maxline_pro_500_data_sheeten.pdf
Western Digital Caviar SE16
http://www.wdc.com/en/library/sata/2879-001144.pdf
Western Digital RE2
http://www.wdc.com/en/library/sata/2879-701176.pdf
Power Mac G5 / 2GHz Mac OS X (10.4.8)
Don't get a Western Digital; I have an 80 GB drive from them and it has been sent in for replacement about 6 or 7 times. Get a Maxtor (mine lasted about 4 years) or a Seagate.
-
Lacie Horrors - No Power!
Not quite sure if this is the appropriate board for this, but I'll try!
We have had our 1.2TB hard drive for 5 months. It contains CRITICAL data. Yes, we have backed up to some degree, however, as we all know, backing up video takes up so much space. Loss of this data would be horrendous for our company.
PROBLEM:
5 days ago, our LaCie (1.2TB) external went from a 3 month, healthy external to dead. Suddenly it lost power, so of course it is not mounting on the desktop. Therefore, no DiskWarrior, etc. to be done because it simply has no power.
EVENTS:
Called LaCie immediately and explained the problem. Patrick (LaCie rep) said that this seemed to be a "cut and dry" problem and that we just needed to try a new power supply, so sent one. It just arrived. We plugged it in and, sure enough, STILL NO POWER! The new green light on the power supply box is solid green.
Just got off the phone with LaCie again. Told him the situation and he had no answers. He said that we needed to find a company that "maintains RAID and keeps the data intact so that we could back-up all of our data." He suggested Best Buy Geek Squad and Drive Savers (which he says has a "90% success rate in data recovery.")
I asked him how we could recover the data if there was no power supply in the first place. He said that by taking the LaCie apart and creating a raid environment on another drive, we would be able to have power and be able to back up.
So, I am about to freak out. Medication is standing by.
ADDITIONAL INFO:
-Working on G5, OSX 10.3.9, FCP 4.5, Dual 2GHz, 512 MB DDR SDRAM
-I had partitioned the LaCie into 3 parts
-It was working great, and the power outage had nothing to do with surge (everything is properly surge/battery protected.)
-I tried plugging the new power supply directly into the wall...still solid green light, but no power to the LaCie.
-The blue light on the front has not come on for days.
-In System Profile, the firewire port is empty
I have read and re-read many threads on the power supply problems with LaCie drives where the new power pack did the trick. That is not the case here.
Any advice, suggestions, rants or positive energy would be MUCH APPRECIATED!
Thank you!
Mark McKinney

If you feel adventurous and know what you're doing, you could get another drive of the exact same model, open both (and lose the warranty while you're at it) and try to swap parts as needed. That's under the assumption that the data on the actual disks is fine and complete, and "only" some other part is defective. Under this assumption, chances of recovery are good.
I guess that's pretty much what the LaCie guy meant.
I'm afraid I can't think of anything better right now. Data recovery specialists will likely cost more than an additional drive would have (for backups). The following you don't want to hear, but it needs to be said: For most people it takes one loss (or near loss) of data to take backup seriously. The question is always: is the loss of data more horrendous than the price for proper backup? Or in your case: the price for the data recovery specialists.
Sorry for the rant, and good luck!
Quad G5 Mac OS X (10.4.7) 4 GB ECC RAM, Raptor 150 GB, Seagate 750 GB, GeForce 7800 GT -
Serial ATA on K8T Neo2-F V2.0
I have just built my Neo2-F V2.0 system with 2 internal IDE disks (recognized by XP), as well as a SATA data disk on the SATA1 port, which is not recognized yet.
I was wondering how to configure this. I press the <tab> key during boot, but it tells me something like "the configuration is not valid to build a RAID environment" (I only have one SATA disk... maybe that's the reason?)
I have also downloaded the "VIA VT8237(R) SATA RAID Driver (For floppy driver)" from the MSI site, which as far as I understand is used to build a new version of the floppy that came with the motherboard... but that package is more than 10 MB!
So I do not see:
- whether I really need the floppy
- how to set up my data disk on SATA.
Any advice is welcome!
Many Thanks.

Quote from: Hans on 26-July-06, 00:36:27
Yes? Okay, explain to me how to set up RAID 1 or RAID 0 with only one SATA drive on a VIA controller...
Don't complicate things. You cannot configure RAID with one drive, period.
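A minimal Python sketch (purely illustrative; the function name and parameters are mine, not any vendor's tooling) of how RAID 0 striping maps logical blocks across member disks shows why a single drive cannot form a stripe set:

```python
# Sketch of RAID 0 (striping): logical blocks are distributed round-robin
# across member disks, which is why a stripe set needs at least two drives.

def stripe_map(lba, stripe_blocks, n_disks):
    """Map a logical block address to (disk index, block offset on that disk)."""
    if n_disks < 2:
        raise ValueError("RAID 0 needs at least two member disks")
    stripe = lba // stripe_blocks          # which stripe the block falls in
    disk = stripe % n_disks                # stripes rotate across the disks
    offset = (stripe // n_disks) * stripe_blocks + lba % stripe_blocks
    return disk, offset

# With a 128-block stripe size and two disks, consecutive stripes alternate:
print(stripe_map(0, 128, 2))    # first stripe lands on disk 0
print(stripe_map(128, 128, 2))  # next stripe lands on disk 1
```

The alternation is also why losing any one member destroys the whole set: every other stripe lives on the other disk.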
I'm not trying to confuse cdelecole, I just want the info to be correct. On Promise and NVIDIA controllers it's no problem to do it; for VIA you would need a RAID mod.
There is no point in running it this way anyway, because there will be no performance difference.
"But on my previous MSI motherboard (a K7N2 Delta ILSR), I had to go into the "RAID" menu and configure a kind of "RAID".
It was called a stripe, or strike, or... I can't remember."
Yes, your old mobo is a typical example: it used the Promise FastTrak 378 controller to run a single SATA HDD as RAID 0. -
655 MAX SATA Questions/Problems
I am currently trying to set up my PC to use the SATA function of the motherboard. I have an 80 GB Samsung SATA HD and want to install it as a single drive in a non-RAID configuration. I can configure it in a RAID environment, but I would prefer not to use it that way. I have been scanning this forum and ran across some comments about changing settings in the BIOS, but when I look, I don't appear to have those options. I just upgraded my BIOS to the latest version (1.2) and I still don't see the options that I am speaking of. Here is one of the posts:
In BIOS, under Integrated Peripherals:
On-Chip ATA Operating Mode - Native Mode
ATA Configuration - PATA only
SATA Keep Enabled - yes
PATA Keep Enabled - yes
PATA Channel Selection - both
In my BIOS settings I don't have the Chip ATA Operating Mode option. I have either SATA Controller Enabled or Disabled.
I don't seem to understand why mine would be different. Any ideas?
System Config:
MSI 655MAX motherboard
Intel P4 Celeron 2.8 GHz
1 GB RAM
Samsung 80 GB HD @ SER1
Fedora Core Linux (currently installed on Samsung drive but in RAID config)
I can provide more info if needed.
Thanks in advance.

The settings you have quoted are for an Intel chipset board. I believe your board has an onboard Promise RAID controller for the SATA drives, which means you have to enable the Promise controller in the BIOS and install the SATA drivers for it. I am not familiar with that board, so I don't know if you can install them as IDE drivers or if they have to be installed as RAID drivers.
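Since Fedora is already installed on the Samsung drive, one quick sanity check is whether the Linux kernel enumerates the disk at all. This helper is hypothetical (my own sketch, not from the thread); on a box of that era you would more likely just read dmesg or run fdisk -l:

```python
# Hypothetical helper: list Linux block devices and their sizes by reading
# sysfs. Each /sys/block/<dev>/size file holds a count of 512-byte sectors.
import os

def list_block_devices(sys_block="/sys/block"):
    devices = {}
    for dev in sorted(os.listdir(sys_block)):
        with open(os.path.join(sys_block, dev, "size")) as f:
            sectors = int(f.read().strip())
        devices[dev] = sectors * 512  # size in bytes
    return devices

# On a Linux machine, list_block_devices() returns something like
# {"sda": 80026361856}; if the Samsung drive is missing from the list,
# the controller or driver is the problem, not the disk itself.
```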
-
How do I increase my game FPS?
World of Warcraft's in-game FPS is very low, and in raid environments, where many players' spells and effects are going off at once, I even get frame lock-ups.
Searching around on Google, I found limitless tips and tricks for Windows users, but nothing for Mac OS X users.
Any tips or info on where to look would be appreciated.
Some things I found that could cross over from PC to Mac (but I don't know how to do):
1. Somehow telling your system to use all cores at a high setting (rather than one core working while the other idles)
2. Disabling power-efficiency settings, allowing your computer to max out its performance
3. Marking the program (e.g. World of Warcraft) as high priority for CPU processing

For 3D games you can always set the graphics to medium, but the game will still lag at some point; it all depends on your hardware. You could upgrade your Mac to something better, like:
16 GB of DDR5 RAM
an NVIDIA card with at least 2 to 4 GB of graphics RAM
a quad-core i7 at 3.6 to 3.8 GHz
(I know this is expensive, but then again, this WILL MAKE YOUR MAC AND ITS GAMING EXPERIENCE AMAZING AND FAST AS ****!)
You should also update OS X, because those updates include graphics drivers, though Apple never calls out the graphics updates bundled into OS updates. Trust me: I have 10 GB of RAM and an Intel HD 3000 with 1 GB of video memory, and on 10.8.1 Skyrim lagged a lot. As soon as I updated to 10.8.4, Skyrim never lagged; the driver updates improved FPS in every game, from crappy ones like Minecraft to awesome ones like Far Cry 3 and Dead Island Riptide!
So I really recommend a hardware upgrade together with OS X updates.
Thank you, and I hope this helped!
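Tip 3 in the question (process priority) does have a Unix-level analogue on OS X: the nice value. Here is a minimal Python sketch, with the caveat that raising priority (a negative nice value) requires root, so this demo only lowers its own priority, which any user may do:

```python
# Sketch: adjust process priority via the POSIX nice value.
# Lower nice = higher CPU priority; lowering the value needs root,
# so here we only demote ourselves.
import os

pid = os.getpid()
before = os.getpriority(os.PRIO_PROCESS, pid)
os.setpriority(os.PRIO_PROCESS, pid, before + 5)  # be "nicer": lower priority
after = os.getpriority(os.PRIO_PROCESS, pid)
print(f"nice value: {before} -> {after}")
```

To boost a running game instead, you would find its PID (e.g. in Activity Monitor) and run something like `sudo renice -n -5 -p <pid>` in Terminal.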