Cheap RAC Hardware Config
I have read the paper on building a cheap RAC on Linux using FireWire technology, and I want to build one strictly for learning purposes. Could someone post the complete, detailed hardware specs I would need to buy to make this possible? I would appreciate it if someone who has already built one could post their detailed hardware configuration.
There are some links on this page that show the actual hardware used in our setup at Oracle World:
http://otn.oracle.com/tech/linux/install/ow2003sf/OTN_firewire_demo_req.html
The particular model of hard drive used has been discontinued, but as long as you have a FireWire hard drive with an Oxford 911 chipset you should be OK.
Similar Messages
-
Error -10401 occurred at AI Hardware Config when using AI Acquire Waveform VI
I can successfully run applications that use the AI Sample Channel VI; however, when I use the AI Acquire Waveform VI the following errors are generated:
Error -10401 occurred at AI Hardware Config, followed by
Error -10401 occurred at AI SingleScan.
All applications have been developed using LabVIEW 6.0.2 on an NT machine. The applications are then run (or at least an attempt is made to run them) using the LabVIEW Run-Time Engine on a different NT machine.
The driver software I am using is the NI-DAQ Native Data Acquisition Driver Software, version 6.1.
Hi,
I've found a Knowledge Base article on the NI website describing some situations where this problem occurs:
http://digital.ni.com/public.nsf/websearch/46F78EDD19BA0CC386256CEE007201FC?OpenDocument
That error code is generally seen when something has changed in the DAQ card's configuration or the drivers are not installed properly. It's strange that it shows up only on certain functions in your application.
Also try having a look through the DAQ troubleshooting pages on the website:
http://www.ni.com/support/daq/
Regards,
Mark Lee
National Instruments -
I obtain error -10003 from AI hardware config when I use AC coupling
I am using a function generator to make a sine wave and I want to acquire the data. However, I get error -10003 at AI Hardware Config when I use AC coupling, and I am not sure why.
Attachments:
Motor_Current_Measurement2.vi 133 KB
What model DAQ board are you using? There are only a couple from NI that support AC coupling.
-
How do I use AI Hardware Config.vi in Acquire&Proc N Scans - Trig.vi to amplify gain?
I am using the Acquire&Proc N Scans - Trig.vi program. I tried adding AI Hardware Config.vi to the block diagram to increase the ECG waveform gain, but it seems useless. How can I amplify the gain of the waveform? Thanks for the help.
Attachments:
Scan_Trigger.vi 130 KB
If you aren't using virtual channels...
Using the original Acquire&Proc example program, you can set the gain by putting values into the Input Limits; LabVIEW will automatically scale the results. So if your signal is 0-10V, put 0 as the minimum limit and 10 as the maximum limit. Typically, you'll use the range of whatever signal you're measuring, but if a higher gain is needed, use a smaller range (like 0-1V).
See also:
http://sine.ni.com/apps/we/niepd_web_display.DISPLAY_EPD4?p_guid=B45EACE3E8A156A4E034080020E74861&p_node=DZ52300&p_submitted=N&p_rank=&p_answer=&p_source=External
2006 Ultimate LabVIEW G-eek. -
Recommended Hardware Config for huge OLAP Cube build
Hi David ,
Can you please provide the recommended hardware config for a cube with billions of rows in the fact table? We ran a cube with 0.1 billion rows and it took around 7 hours to process. What are the key areas for gaining performance? And what CPU, RAM (server), and RAM (Oracle DB) would give a much greater performance benefit in such configurations?
Also, we have a 32-bit Windows 2003 server. Can we get better results if we switch to 64-bit?
Thanks in advance,
DxP.
Hi!
Well, I would definitely recommend that you engage a consultant, because I feel you have some lack of methodology and experience, combined with time constraints on your project.
Regarding hardware, unfortunately I cannot give you any precise figures because I have no supporting information. What you should bear in mind is that your system must be balanced. You (better, with a consultant) need to find the right balance between all the factors you consider important while building a pile of hardware:
- cost
- architecture
- speed
- availability
- load scenarios
etc...
Regarding the architecture point, bear in mind that today finding the right balance between processing power and storage throughput is a challenge. Pay attention to this.
Regards,
Kirill -
Regarding Hardware Config. of XI
Hi all,
Can someone give me a brief idea of the hardware configuration for XI, if there are four interfaces and the message size is <= 1 MB?
Hi all,
Thanks for reply..
Actually, I need a brief idea of the hardware config (RAM, hard disk, memory, additional installation for XI, etc.).
There will be four interfaces (legacy, RDBMS, web app, SAP, with XI in between) and the message size will not be greater than 1 MB. The communication will be synchronous in the case of online interfaces.
I have seen the installation guides, but I need a practical scenario.
It would be of great help if someone could give estimates in figures based on their own project experience.
Regards,
Shikha -
Hi Guys,
We have an archive project in planning. The situation is:
1. around 100 TB of archive data to be stored in BDB;
2. around 50 concurrent users querying the database.
I am not sure about the hardware config, like CPU count, memory size, etc.
Do you have any suggestions or references about it?
Thanks a lot. -
Hi,
What VI in DAQmx could be substituted for the Hardware Config VI in Traditional DAQ? I can't find it in DAQmx.
I have a snapshot of my VI below and attached.
Thanks in advance.
Attachments:
STIM9.28P_1.vi 286 KB
You can get a description of what it does by turning on context help. It looks like it performs about the same function as DAQmx Create Channel when it's inside a For Loop and passed an array of channels.
-
NB: This has also been posted to the High-Speed Digitizers forum.
Changing the "gain" (part of the "alternate input limits" cluster) appears to affect the group settings, but when I pass a signal outside the supposed new voltage range, the signal doesn't clip as I would expect. This doesn't happen when I use MAX (where clipping is clearly visible), but unfortunately that is not an acceptable long-term solution. Any thoughts? I am only using allowable gain settings for the 6110E.
Additionally, I am saving my data in HSDL format, and to get the actual true voltage from the binary numbers (after uncompressing and converting back to standard binary format) I need to multiply the final number by the gain. Is that correct? I thought the "scale multiplier" (part of the "group settings" cluster) compensated for "gain", but it doesn't seem to do so. Is this part of the same problem?
Thanks Spencer. I do call Hardware Config.vi in several different places, so maybe it's the multiple calls that are stuffing things up. I will investigate further.
As for the multiplier, which part sounds correct? The fact that I have to multiply by the gain, or that it should take the gain into account automatically so I should not have to multiply (which is what the default HSDL code does)? What do you do to convert from binary to real?
Thanks for your help,
Jo -
LR + PS Optimizing Hardware Config on my Win7 PC
Dear All,
I'm looking to speed a few things up in terms of my hardware/software performance for photo editing (I'm a pro photographer).
Here is my current hardware profile (assembled by an individual contractor - not a brand):
Win 7 Ultimate 64bit
12 GB RAM (Standard issue kingston DDR3. Not that fancy Gaming Ram).
Intel Core i7 3770K CPU @ 3.50 GHz (3rd gen)
AMD 7750 1GB VGA (Display Memory: 2775 MB Dedicated Memory: 999 MB Shared Memory: 1776 MB)
USB 3.0 ports
Page File: 3737MB used, 14518MB available (Details of Custom pagefile config is next to each drive that is set to have a pagefile)
-Lightroom 4.1 (Multiple Catalogues - One for each shoot (+preview folder) located in the Main folder with the corresponding photos). 60% of all photos are processed and edited with this program.
-Autowrite XMP turned off
-Photoshop CS5 & CS6: 30% of all photos are processed with CS5, 10% with CS6 (I can't get some filters (Color Efex 2) to work with CS6 and haven't bothered sorting that out yet).
-I defrag regularly with MyDefrag
-Multitasking involves running LR, PS and Chrome/Gmail simultaneously.
-Each folder per shoot can range from 4GB - 50GB of Raw/DNG files.
Hard Disks:
(A) 1TB SATA/300 drive: Windows/OS, LR & PS CS5 & CS6 drive, partitioned 300GB for the OS and 630GB for the other partition (B), which holds some music (400GB free). (Overkill, I know, but this was the only drive I had at one point...) (Pagefile 2000MB)
(C) 2TB Sata/600 Drive: Primary DATA drive, where Folders and their respective LR catalogues are stored and accessed. (Freespace ranges from 850GB-1.1TB) (Pagefile 2000mb)
(D) 2TB Sata/600 Drive: Backup of (E). These should be mirrored but it didn't happen that way for some reason...! (Mostly as above)
(F) 500gb Sata/300 Drive: This was an older drive that ended up as a PS scratch disk (The only one). It is mostly used for movies and LR catalogue backups. The LR Camera Raw Cache is also here (It's set to 100GB). (Freespace: 300GB) (Pagefile 2000mb)
Here are the intensive processes I wouldn't mind speeding up:
1) Importing and converting 1000's of 5D Mark 3 CR2 (RAW) files into DNG
2) Rendering 1:1 previews
3) Exporting apprx 500-800 full-res or Resized Jpegs from DNGs at a time
4) Exporting apprx 100-200 ZIP compressed tifs at a time
5) Saving TIFs with ZIP compression in Photoshop CS5 & CS6. (If I'd like anything to be faster, it would be this. It can take anywhere from 20 seconds to 1 1/2 minutes to compress (ZIP) and save a flattened TIF with only minor changes, e.g. cross-process color changes. Resulting TIF files range from 60MB to 120-180MB.) I don't have too many large (500GB+) layered TIF files on this computer so we can discount that. I also don't work with video or panoramas (for the foreseeable future anyway!)
6) Batch processing SilverEFX Pro actions onto DNGs which are then saved as Tifs (as above in no 5)
*OS and Application start times are quite fast and are not an issue*
-The Usual process is to Export and Save images to the same harddrive as the Source file eg. a DNG on Drive (C) will be exported as a JPG to Drive (C)
I find no difference in terms of speed when Exporting from (C) to (D). Especially when you factor in the time it takes to copy the exported files back to (C) so that they're located in the same folder.
The Questions:
Q1: Does this setup make sense esp with regards to the Win Pagefile? Or is there something that isn't quite recommended (bottlenecking)?
Q2: If so how can I improve it? How might an SSD work best to improve performance esp with regards to point 5) - saving Tifs in PS? Or in LR perhaps as an Export or Save Disc? This may prove cumbersome as it requires the files to be moved back to disc (C) but if the performance gain is significant then whoopee!
Unfortunately, SSDs are fairly pricey in the country I live in. A 120GB drive will cost about $160 (affordable), and my current OS drive (A) already has 150GB of usage (although this can be trimmed down). Mostly I'd rather not have to reinstall Windows and all my PS filters, fonts, minor programs, etc., but if the performance gain is worth it I could consider it.
The next level of SSD is at a currently unaffordable US$380.
I'd be grateful for any thoughts, this machine has come a long way since it was first assembled...!
Thank you in advance,
KC
Hello Epoch,
I think there are not many things you can do to speed up your workflow. A few suggestions:
1) Importing and converting 1000's of 5D Mark 3 CR2 (RAW) files into DNG
-> Use an internal USB 3.0 card reader (it reads more than 90 MB/s from my SD card).
2) Rendering 1:1 previews
-> Perhaps skip the generation of these files on import and first select the images to delete. After deletion you can generate the previews.
3) Exporting apprx 500-800 full-res or Resized Jpegs from DNGs at a time
-> Exporting images is recorded inside the catalog, and the catalog is updated for every image you export. Therefore, put the catalog on your fastest drive!
4) Exporting apprx 100-200 ZIP compressed tifs at a time
-> From your setup I see that you are using a 3770K @ 3.5 GHz. Overclocking? My 2500K has been running @ 4.3 GHz for more than a year and a half.
I tried to speed up LR myself by tweaking Windows 7 - it did not really work for me.
You can try LR 4.2. There are some reports that it is faster than 4.1. -
I need to know what hardware to order in setting up an 11g RAC database without a single-point-of-failure. My question centers on the networks. If each server has one NIC for the public interface and one NIC for the private interconnect, aren't the routers these NICs are attached to a single point of failure? If each server has two NICs bonded together for the public interface and two NICs bonded together for the private interconnect, does each NIC attach to a different router?
user11292683 wrote:
If each server has two NICs bonded together for the public interface and two NICs bonded together for the private interconnect, does each NIC attach to a different router?
That depends on the redundancy of the switch or router. You get boxes that are fully redundant: 2 chassis, 2 power supplies, and 2 CPUs, with failover from one to the other. In such a case you only need a single router or switch. But these redundant units can be pretty pricey, and it may well be worthwhile to buy two smaller switches (e.g. 12 ports each) and use those instead.
And of course you need 2 of each: one pair for the interconnect and one for the public network. I would not run both via the same switch (i.e. a total of 4 switches).
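The bonded-NIC setup described above can be sketched as RHEL-style config files. This is a minimal active-backup example; the `bond1` name, the `eth2`/`eth3` devices, and the addresses are all hypothetical, and each slave NIC would be cabled to a different switch so neither switch is a single point of failure:

```shell
# Hypothetical active-backup bond for the private interconnect (RHEL-style).
cat > /etc/sysconfig/network-scripts/ifcfg-bond1 <<'EOF'
DEVICE=bond1
IPADDR=192.168.10.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=active-backup miimon=100"
EOF

# Enslave both physical NICs to the bond.
for nic in eth2 eth3; do
cat > /etc/sysconfig/network-scripts/ifcfg-$nic <<EOF
DEVICE=$nic
MASTER=bond1
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
EOF
done
# A second bond (e.g. bond0 over eth0/eth1) would serve the public network.
```

With this layout a failure of either NIC or either switch leaves the interconnect up, at the cost of two switches per network as described above.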
RAC: "srvctl config scan" output
Hi,
My company is using a RAC on Oracle 11gR2.
I am trying to understand the beast, and there is something I am stuck on.
We have this:
[root@devrac1 admin]$ srvctl config scan
SCAN name: <host>.<domain>, Network: 1/10.12.2.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /10.12.2.206/10.12.2.206
SCAN VIP name: scan2, IP: /10.12.2.207/10.12.2.207
SCAN VIP name: scan3, IP: /10.12.2.208/10.12.2.208
This output looks like the output described in Oracle Support note ID 975457.1.
I read this whitepaper : http://www.oracle.com/technetwork/products/clustering/overview/scan-129069.pdf
And the output there is more like this:
[root@<host> admin]$ srvctl config scan
SCAN name: <host>, Network: 1/10.12.2.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /<host>.<domain>/10.12.2.206
SCAN VIP name: scan2, IP: /<host>.<domain>/10.12.2.207
SCAN VIP name: scan3, IP: /<host>.<domain>/10.12.2.208
Why is it different? What does it mean?
Thanks for your help.
Hi, thanks for the quick answer!
I think my post was a little misleading (I was too busy obfuscating).
In the first document it's the same address twice:
SCAN VIP name: scan1, IP: /10.12.2.206/10.12.2.206
In the second document:
SCAN VIP name: scan1, IP: /<DNS name for the SCAN address>/10.12.2.206
I don't understand the meaning of this difference.
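One plausible way to investigate the difference is to resolve the SCAN name yourself and compare it with the srvctl output. This is only a diagnostic sketch; `rac-scan.example.com` is a made-up name standing in for your `<host>.<domain>`:

```shell
# When the SCAN is registered in DNS as a single name with three A records,
# srvctl can print that DNS name next to each VIP. When the VIPs were set up
# by bare IP (e.g. one /etc/hosts entry per address), there is no name to
# show and the IP appears twice, as in the first output above.
nslookup rac-scan.example.com   # hypothetical SCAN name
# A round-robin DNS setup should return all three SCAN VIPs, e.g.
# 10.12.2.206, 10.12.2.207 and 10.12.2.208, in rotating order.
```

Comparing what DNS returns with what `srvctl config scan` prints should show which naming scheme your cluster was configured with.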
Oracle RAC Hardware suggestions..
Hello,
I want to practice a RAC setup at home, on Linux, with RAC on ASM.
I would appreciate it if anyone who has done that could suggest what hardware I need to buy.
Thanks,
Refer to this article for hardware:
http://www.oracle.com/technology/pub/articles/hunter_rac10gr2.html
Or you can check this link for RAC on VMWare:
http://www.oracle-base.com/articles/10g/OracleDB10gR2RACInstallationOnCentos4UsingVMware.php -
Changed my hardware config, need help changing my overclock.
About a year ago I completed my new OC system using a considerable amount of input here. It has run well overall. However, my original intent was to use the PC solely for productivity apps and internet access. Recently, my XBox 360 died (you're shocked, I know) and I picked up Rome Total War to pass the time. It didn't take me very long to realize the video card I was using wasn't adequate. Here's my original system config (100% stable):
MSI K9N SLI Platinum nForce 570 BIOS 1.8
AMD Athlon 64 X2 5200+ 3.12 GHz (240*13) @ 1.45v
SuperTalent 1GB DDR2 800 @ 445.8 MHz (CPU/7) @ 1.90v, 5-5-5-15-20-2T timings
SuperTalent 1GB DDR2 800 @ 445.8 MHz (CPU/7) @ 1.90v, 5-5-5-15-20-2T timings
MSI NX8500GT-TD256E 576 MHz core, 512 MHz GDDR2
XClio Stablepower 500W +3.3V@30A, +5V@30A, +12V1@18A, +12V2@18A
I purchased an MSI NX8800GTS 320M OC to replace the NX8500GT. The NX8800GTS uses a dedicated 6-pin PCIe power connector, while the NX8500GT does not. In addition, the 8800GTS generates a lot of heat. It was increasing the ambient case temperature, which in turn was increasing the core temperature to the point that the system was becoming unstable after long periods of playing RTW. I decided to lower my overclock in an effort to reduce voltages and hopefully reduce system temps across the board (I can't add any more cooling; I already have 3 fans plus a Zalman CPU cooler). Here's my current config:
MSI K9N SLI Platinum nForce 570 BIOS 1.8
AMD Athlon 64 X2 5200+ 2.99 GHz (230*13) @ 1.35v
SuperTalent 1GB DDR2 800 @ 427.1 MHz (CPU/7) @ 1.80v, 5-5-5-15-20-2T timings
SuperTalent 1GB DDR2 800 @ 427.1 MHz (CPU/7) @ 1.80v, 5-5-5-15-20-2T timings
MSI NX8800GTS 320M OC 630 MHz core, 930 MHz GDDR2
XClio Stablepower 500W +3.3V@30A, +5V@30A, +12V1@18A, +12V2@18A
Here's where my question comes from:
I ran Orthos small FFT test, priority 9 with the above settings for 12 hours -- no errors.
I ran memtest86+ test #5, 24 passes -- no errors
However, I cannot get the Orthos large FFT test or the blended CPU + RAM test to run for more than a few hours without errors. I'm pretty sure the CPU is stable because of the small FFT test, so I think the problem is my memory.
At 1.90v, the large FFT test errors out after about 30 minutes. The blended FFT test errors out after about 4 hours.
At 1.95v, the large FFT test errors out after about 3 hours. The blended FFT test errors out after about 6 hours.
At 2.00v and above, the PC starts behaving erratically. It will freeze up sometimes, requiring me to pull the power and restart. And sometimes it will just power down.
Should I just trust the first two tests and ignore these results? I guess I'm not sure what to do at this point. I accomplished my goal of reducing temps (by almost 8 °C), but I want to make sure the thing is stable.
Also, I asked this question a while back and got conflicting answers: is it better to have a high memory frequency or tight timings? Because my cheap RAM really can't do both.
Here is what I meant by fine tuning, or finding that sweet spot.
There are 36 different CPU speeds, and I figured 152 possible memory speeds based on figures I have seen here.
200 to 250 MHz FSB (10 MHz steps, I am guessing)
10 to 15 multiplier
7-13 divider
308 MHz to 1071 MHz memory speeds
Many more are possible, not just the 4 that CPU-Z shows for your current settings.
Here are a few you might try with the 4-4-4-10 timings.
FSB  Multi  CPU MHz  Divider  DRAM Freq. (MHz)
220 13 2860 8 715
230 14 3220 9 716
240 15 3600 10 720
240 12 2880 8 720
250 13 3250 9 722
220 15 3300 9 733
210 14 2940 8 735
240 14 3360 9 747
230 13 2990 8 748
250 15 3750 10 750
250 12 3000 8 750
200 15 3000 8 750
230 15 3450 9 767
220 14 3080 8 770
250 14 3500 9 778
240 13 3120 8 780
210 15 3150 8 788
240 15 3600 9 800
200 14 2800 7 800
230 14 3220 8 805
250 13 3250 8 813
220 13 2860 7 817
240 12 2880 7 823
220 15 3300 8 825
250 15 3750 9 833
240 14 3360 8 840
210 14 2940 7 840
230 13 2990 7 854
Some are at a slower CPU clock speed than you are now but might benchmark better at 4-4-4-10. Some have higher CPU speeds and might fail, but I listed them in case you wanted to give it a shot. After all, you might have hit the memory wall at 890 MHz and not the CPU wall at 3.12 GHz.
Set the memory voltage to the max allowed by the manufacturer and have fun. You can lower it later in steps to fine-tune. -
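The table's arithmetic can be checked with a few lines of shell: the CPU clock is FSB × multiplier, and the DDR2 effective rate is twice the memory clock (CPU clock ÷ divider). Shell arithmetic is integer division, so some rows round off by 1 MHz:

```shell
# One row of the table above: FSB 240, multiplier 13, divider 8.
fsb=240
multi=13
divider=8
cpu_mhz=$((fsb * multi))              # 240 * 13 = 3120
dram_mhz=$((2 * cpu_mhz / divider))   # 2 * 3120 / 8 = 780 (DDR effective rate)
echo "CPU ${cpu_mhz} MHz, DRAM ${dram_mhz} MHz"
```

The same formula reproduces the poster's current setup: 3120 MHz / 7 gives the 445.8 MHz memory clock, i.e. roughly the 890 MHz effective "memory wall" mentioned above.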
Env:
10.2.0.4 with ASM
RedHat Linux
We are planning to replace the hardware (no DB upgrade, but with a Linux OS upgrade from RH4 to RH5) for our RAC database. It appears I have to add new nodes and remove the existing ones to accomplish this.
So if I have four existing RAC nodes, a, b, c, d, and the replacement new nodes are aa, bb, cc, dd, then:
then
-add aa, bb, cc, dd
-shut down a, b, c, d
-make sure things are fine, which may mean running for a week (rollback is easy: shut down the new, bring up the old)
-drop a, b, c, d
The RAC is now made up of aa, bb, cc, dd.
And if we were to retain the same host names: since a RAC (public) host name cannot be changed, I would have to go through another iteration of delete-node and add-node. Something like:
drop aa
rename host aa as a
add a
All changes will be done in a maintenance window. Any suggestions or comments?
Thanks
-Madhu
I have not considered Data Guard because it requires double the storage (our DB size is 5 TB), plus you have to consider licensing costs. The rollback should be easy with my approach too: just shut down the new cluster and fall back to the old. I don't plan on removing the old cluster nodes until after about a week of successful running on the new cluster.
Re the raw device deprecation, the only things that should impact us are the OCR and voting disk, which we can move to NetApp NFS if needed. I will let my sysadmin worry about it.
I have checked with Oracle Support, and the analyst agrees with my approach (not sure how qualified he is, though :-( ). Surprisingly, judging by the information on Metalink or the internet, very few seem to have upgraded hardware on a RAC.
Thanks for your input.
-Madhu
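The add-then-drop flow above can be sketched roughly as follows. This is only a hedged outline: the node name `aa`, instance names `PROD1`/`PROD5`, and database name `PROD` are hypothetical, and the exact add-node/delete-node steps must be checked against the Clusterware administration guide for the specific release before running anything:

```shell
# 1. Extend the Clusterware and DB homes to a new node
#    (run from an existing node).
$ORA_CRS_HOME/oui/bin/addNode.sh "CLUSTER_NEW_NODES={aa}" \
    "CLUSTER_NEW_VIRTUAL_HOSTNAMES={aa-vip}"

# 2. Add an instance on the new node, then stop the one it replaces.
srvctl add instance -d PROD -i PROD5 -n aa
srvctl stop instance -d PROD -i PROD1

# 3. After the burn-in week, drop the old instance and delete the old
#    node following the documented delete-node procedure for the release.
srvctl remove instance -d PROD -i PROD1
```

The rollback window described above corresponds to the gap between steps 2 and 3: until the old instances are removed, stopping the new nodes and restarting the old ones restores the original cluster.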