RAID 50 or SAN

I have an Xserve RAID that I would like to make into a single volume across all 14 drives. Before I create a software RAID 0 over the two RAID 5 volumes, I'm wondering about my upgrade path.
Can I expand either of the RAID 5 volumes once they are part of the RAID 0 array?
Am I better off buying Xsan? Would that make the two volumes appear as one, leaving me in a better position to add storage to that one big volume later?
Thanks

There are file systems that allow on-line growth, though HFS+ is not among those.
You're clearly working on a budget (and aren't we all?), so you'll want to look at how you can incrementally grow most effectively; petabyte-scale storage is pretty easy to configure these days, but overbuilding can be expensive. And if you're going to grow from a four terabyte array up toward maybe a petabyte or two over time, then you're probably going to be rolling in a few small arrays over time (as rack space and I/O connections permit), and eventually rolling the smaller arrays out, and rolling additional servers and direct-attached or maybe SAN-based arrays into the configuration. (Growth is easy for a while, then you hit cable length limits and rack space and available PCIe slots and...)
The other aspect is reliability and recovery; RAID 0 across RAID 5 arrays widens your exposure to outages within your configuration, and a RAID 5 rebuild absolutely hammers your throughput. Also consider whether you have a backup archive available, as RAID does nothing to protect you from accidental or malicious errors, nor from volume corruption.
If I could manage it, I'd drop the requirement for one big array and for scale-up, and head toward scale-out; that makes this problem go away, and it avoids trouble with backup windows and data volume as the individual volumes (real disks, or controller-presented virtual disks) get larger. It also lets you reasonably roll storage in and out over time, and grow to use that segmentation to slice up your load across servers; sharding your processing.
If y'all have control over the implementation of the access, then Cassandra (http://cassandra.apache.org) atop Hadoop (http://hadoop.apache.org), CouchDB, or an equivalent might be of interest; that's a couple of steps above the RAID-level hardware stuff, and largely disconnects you from having to deal with disks and with ever-larger RAID volumes.
RAID-0 is a tad odd here; I'd tend to look at what the I/O load is. RAID-0 is centrally for performance and isn't used when reliability is a factor, and an Xserve serving up files tends to be limited by the speed of the network connections: by the (possibly aggregated) Gigabit Ethernet links, or (more commonly for remote-access sites) by the speed of the remote network links, both of which tend to be way below what a disk array can deliver. Any self-respecting hard disk drive should be vastly faster than what can be stuffed down a typical cable internet, DSL, or T1 line; the common bottleneck will be in the comms, not in the storage array.
Now if you're slamming I/O at your storage from a maxed-out PCIe Xserve server box, then all bets are off. RAID-0 or RAID-10 or SSD might well be very appropriate for that target.
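To put rough numbers on the comms-versus-storage point above, here's a back-of-envelope sketch. All throughput figures are ballpark assumptions for illustration, not measurements of any particular setup.

```python
# Compare common network link speeds against an assumed sustained
# throughput for a striped RAID 5 array, and report which side of the
# wire is the likely bottleneck. All numbers are rough assumptions.
links_mb_per_s = {
    "T1 (1.544 Mbit/s)": 1.544 / 8,
    "Cable/DSL (~8 Mbit/s)": 8 / 8,
    "Gigabit Ethernet": 1000 / 8,
    "2x GigE (aggregated)": 2000 / 8,
}
array_stream = 200.0  # MB/s, an assumed figure for a striped RAID 5 array

for name, mb_s in links_mb_per_s.items():
    side = "network" if mb_s < array_stream else "storage"
    print(f"{name}: {mb_s:6.1f} MB/s -> likely bottleneck: {side}")
```

As the output suggests, a single GigE link (about 125 MB/s) saturates well before the array does; it takes aggregated links, or local I/O from the server itself, before the storage becomes the limit.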

Similar Messages

  • To RAID or not to RAID: 4Tb Mac Pro Server

    Hi,
    Hope someone can steer in the right direction.
    I've got a Mac Pro Server. It has 4 x 1Tb HDs and a 6TB Drobo S for backup. It has a RAID card.
    My query is, as I need at least 2Tb space instantly to store Final Cut Pro files, etc. Won't be using the server to actually edit anything, or any other tasks.
    a) Do I set up RAID and get only approx. 2TB of usable space - therefore filling it up straightaway
    b) Leave it as 4 separate 1Tb hard drives (to get the most space) and let the Drobo do its BeyondRAID thing? (At the moment the data is all on a Mac and on single Firewire external drives.)
    c) If I go RAID (it's brand new so I'm at the setup stage). Which is the best option for me?
    d) If I do RAID, should I partition 100gb on drive 1, for OSX and programs, etc and then RAID the rest? I guess that means I have to reinstall the whole system? Or can I 'live' partition on a Mac these days?
    The Mac server is a fairly temporary solution as we're getting a proper SAN this year, but at present the data lives on a Mac tower (with a MyStudio backup drive) and also on a few Firewire drives.
    HELP!
    Many thanks.

    1. With only 4 drives, RAID 5 should be fine. That gives 3TB of usable space (1TB goes to redundancy).
    2. A backup is more important than RAID.
    3. The RAID setup for the SAN can be tricky. Depending on the size, you may not want RAID 5, but RAID 50 or RAID 6. Consider: rebuilding a failed 16TB RAID 5 array requires reading all of the remaining bits CORRECTLY. What is the unrecoverable bit error rate of modern disks? Roughly 1 bit per 16TB-20TB of data read. So a large RAID 5 SAN may fail to rebuild. Got a backup? This is why they created RAID 6.
    4. I would set up an extra partition, maybe 100GB, on the RAID array. Why? I find it handy: if you need to check the disks, you can reboot from that 100GB partition and check the big partition. Also, you can keep two different versions of the OS on the RAID array. You will find this useful when an update comes out that breaks some things; you can run off the older OS and not have to live with the broken functionality.
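    The rebuild-failure argument in point 3 can be sketched numerically. This assumes an unrecoverable read error (URE) rate of 1 per 1e14 bits read (roughly one bad bit per ~12.5TB), a commonly quoted spec for consumer-class disks of the era; real drives vary, and enterprise drives are typically an order of magnitude better.

```python
# Probability that a RAID 5 rebuild completes without hitting a single
# unrecoverable read error, given an assumed URE rate of 1e-14 per bit.
ure_rate = 1e-14        # assumed probability of a read error per bit
tb_bits = 1e12 * 8      # bits per (decimal) terabyte

def rebuild_success(read_tb):
    """P(reading read_tb terabytes with zero UREs)."""
    return (1 - ure_rate) ** (read_tb * tb_bits)

# Rebuilding one failed 2TB disk in an 8x2TB RAID 5 set means re-reading
# the ~14TB on the surviving drives; compare with a small 1TB rebuild.
print(f"P(clean 14 TB rebuild) = {rebuild_success(14):.2%}")
print(f"P(clean  1 TB rebuild) = {rebuild_success(1):.2%}")
```

Under these assumptions, the large rebuild succeeds only about a third of the time, which is exactly why RAID 6 (which tolerates a second error during rebuild) exists for big arrays.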

  • Creating less expensive small xSAN for 2 editors - suggestions?

    I've been setting up a second editing bay for our company which is supposed to have identical capabilities to my own. It's a Quad with a Kona LHe, just like I'm using. We mostly edit TV commercials in DV50, uncompressed 10-bit SD, and DVCProHD 720p, so our bandwidth requirements generally rule out editing over Ethernet.
    Since both editors need to have equal capability, I have chosen to create a small xSAN to allow both editors to access the same media drives simultaneously. With a SAN, large server volumes (located on a fibre channel network) show up on the editor computers just as a simple local hard drive would at very fast speed. You can point FCP scratches to the same places on each machine, then use any machine to edit any project instantly. My budget is large by my standards, but not very large on the scale of typical xSAN implementations so I've been collecting "deals" on the gear I need before I put it all together.
    Since I'm on a budget, it has been fun finding much of the gear on ebay, etc. FYI, here is a quick list of the stuff I've acquired:
    xServe RAID 5.6TB refurbished from Techrestore.com: $5900
    xServe 2Ghz Metadata controller w/ Tiger Server 1.5GB ECC RAM, PCIX fibre card new, but last year's model from Smalldog: $2600
    Brocade Silkworm 3200 8 channel 2Gb entry fibre switch new on ebay: $950
    2 PCI Express fibre cards new for 2 editor G5 quads: $1100
    3 xSAN licenses new on eBay: $1300
    Mini GB ethernet network stuff and switch (for metadata only - separate from our LAN network): $150
    Misc. fibre cables and transceivers: $700
    One of my editing G5's will act as the fallback Metadata controller in case the xServe goes down at any time.
    I also am planning on at least 8 hours of time from a local xSAN authorized tech to help me set everything up and teach me what I need to maintain the system. This should be about $700 or so.
    I have found several articles posted at xsanity.com very helpful in planning this.
    If any of you have any experience with xSAN, you might agree this is a very low cost of entry into this very exciting new workflow. $13.5K for a fully functioning xSAN of this size is not bad at all. Many would spend that on the xServe RAID alone, sans SAN;-) And I can expand very easily since my switch will still have 3 unused ports. Note: the 5.6TB xServe RAID will only be about 4TB after accounting for the RAID 5 and dedicated (and mirrored) metadata volumes. Only 4TB. Pity!
    Now that I have the main hardware components ready, it's time to install and set up the system. I'll be posting my progress in the next few weeks as this happens, but first I would like to hear any impressions on this. Suggestions or warnings are appreciated from those with experience with Xsan. The Xsan forum here at Apple is used mostly by IT professionals, and I'm mostly interested in hearing comments from editors and those who use the system in small settings like my own.
    One question for the Gurus: I don't believe FCP projects can be opened and used by two people at the same time, but if there is a way to do this without corrupting the project file, I would love to know.
    I'm also seeking to hire an assistant to occupy the new editing bay. Broad multimedia skills are needed, but I can train to a degree. We're an advertising agency just north of Salt Lake City, Utah. Please let me know if any of you are interested.

    Thanks for the suggestions. Brian, I'll be sure to get you some points once I close the topic.
    I didn't realize the Project files are best copied to the hard drive. Is this for a permissions related reason or just to avoid short spikes in bandwidth during saves?
    I agree that metadata is best on a dedicated controller in full-scale Xsans; however, with just 2 systems editing mostly DVCPro50-resolution projects, I can't imagine burning up more than 100MB/sec at any given time. OK, maybe, but this is unlikely for the next year or so. I've read that a single controller can achieve 80MB/sec easily, so 2 should be around 150MB/sec under heavy load. I'll have the metadata mirrored on 2 drives on one side of the XSR, with the remaining 5 drives on that controller in a RAID 5. The other side of the XSR will be a full 7-drive RAID 5. These 2 data LUNs will be striped together in Xsan to achieve a full bandwidth of about 150MB/sec. I was told that the XSR controller can handle multiple RAIDs at the same time, so I can send metadata to one mirrored array and designate the other as a RAID 5 LUN. Considering the small size of the data going to the metadata volumes and the relative simplicity of RAID 1 mirroring, I believe the controller shouldn't be adversely affected by this. Is this incorrect in your experience?
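    To sanity-check the stream counts behind that 150MB/sec figure: the per-stream codec rates below are approximate rule-of-thumb numbers, and the SAN bandwidth is the poster's own estimate, not a measurement.

```python
# Estimate how many simultaneous editing streams an assumed SAN
# bandwidth can sustain for each codec. Per-stream rates are
# approximate figures, not exact codec specifications.
streams_mb_per_s = {
    "DV50 / DVCPRO50": 7.0,        # ~50 Mbit/s video -> ~7 MB/s per stream
    "DVCPRO HD 720p": 14.0,        # ~100 Mbit/s -> ~14 MB/s per stream
    "Uncompressed 10-bit SD": 28.0, # roughly, per stream
}
san_bandwidth = 150.0  # MB/s, the striped two-LUN estimate from the post

for codec, rate in streams_mb_per_s.items():
    max_streams = int(san_bandwidth // rate)
    print(f"{codec}: ~{rate:.0f} MB/s/stream -> ~{max_streams} simultaneous streams")
```

Even at uncompressed 10-bit SD, two editors pulling one stream each sit well under the estimated ceiling, which supports the "unlikely to burn 100MB/sec" reasoning above.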
    I do plan on turning off the cache of the XSR since the system will be used for editing, yet it would be nice to have cache for the metadata so that's a point to consider.
    The metadata should be segregated on its own XSR controller.
    Are you saying that the metadata sharing the same controller as the video data is going to slow the whole system down even though the metadata is located on separate, dedicated drives in that controller? I thought metadata was tiny and required very little bandwidth on the bus of a controller. If this is the case, the only bottleneck would be the RAID chip in the XSR. Again, these metadata files are very small and RAID 1 is very simple, so I don't see how it could slow things down enough to justify another $4K for a new XSR. If you still disagree, please let me know.
    As per your suggestion, and considering your stellar reputation in this forum, I'm shopping right now for a mostly empty xServe RAID to use this for just the Metadata volumes mirrored. It just seems like a huge waste to get an XSR just to use 2 drive bays mirrored. The plus side of this is I could begin filling the other controller in the future as my storage needs expand.
    It would be really cool to use the 2 drive bays in the Xserve metadata controller for the metadata volumes, but I can see how that would cause problems if the Xserve goes down, making the metadata invisible to the fallback MDC. 100% uptime isn't that big of a deal for me, however. As long as the Xsan comes back online safely after the Xserve reboots without trouble, I'm OK with such a setup. Have you ever seen this done? It seems a bit of a hack to use anything but Fibre Channel for the metadata. I'd hate to introduce too much complexity just to save some bucks, but it is an intriguing idea and would cost a fraction of a new XSR. It would also be fast, since writing the metadata would be local, with very little latency.
    For this reason, I'm also very interested to find any other simple and less expensive Fibre based storage solutions that could host my metadata as an alternative to full blown XSR for this. There are all kinds of fibre drives out there, but I don't want to waste a valuable fibre switch port just for one drive. All I need is 2 hardware mirrored bays accessible over fibre, preferably sharing the same channel on my switch. Does anyone know where I might find something like this?

  • Oracle Unbreakable Linux Version 5.3 64 bit

    Any assistance would be greatly appreciated.
    When installing OEL 5.3 x86_64, I had no issues with the NICs.
    The system detected both eth0 and eth1.
    During the installation, it provided the screen to configure eth0 and eth1.
    When installing OEL 5.2 x86_64, I experienced different behavior:
    1. After configuring the disk partitions, it did not bring up the screen for network configuration;
    it continued to time zone configuration. The remainder of the installation process completed without
    issue. I checked the installation guide: this is normal behavior if the system does not detect NICs.
    Post-installation, the system did not see eth0 or eth1. I tried to manually populate
    the following network files in order to bring up the NICs:
    a) /etc/modprobe.conf
    Added: alias eth0 tg3
    alias eth1 tg3
    I used the tg3 driver since this is the driver used on other Linux servers on HP blade servers.
    (The required driver may be different here.)
    b) /etc/sysconfig/hwconf
    Added: class: NETWORK
    bus: PCI
    driver: tg3
    detached: 0
    device: eth0
    Rebooted with no success.
    Additional info: We are booting from SAN.
    OS LUN size: 60 GB, RAID 5.
    SAN configuration: We use an SVC 2145 appliance. The actual storage array is an IBM DS8100.
    Hardware: HP BL490 (blade server)
    Will reinstall OEL 5.3 shortly and gather network files in order to see what driver information we need.

    Just as you concluded, the NICs are simply not found by OEL (and therefore RHEL) 5.2, and support for them was quite probably added in version 5.3.
    First, look into your system using lspci -v; you should be able to see the devices.
    Next, see all the NICs the kernel sees: ifconfig -a. Then insert the kernel module: modprobe tg3. Then check whether the NICs are seen by the OS now: ifconfig -a. It's possible the NICs are not supported by an earlier version of the tg3 driver.

  • Database Performance (Tablespaces and Datafiles)

    Hi guys!
    What's best for database performance: a tablespace with multiple datafiles distributed across different filesystems, or tablespaces with multiple datafiles on only one filesystem?
    Thanks,
    Augusto

    It depends on the contents of the tablespaces, tablespace-level LOGGING/NOLOGGING, the environment (OLTP or OLAP), the LUN presentation to the server (with or without RAID), and the SAN's reads and writes per second.
    In general, a tablespace with multiple datafiles distributed across different filesystems/LUNs is the practice for anything beyond dev/system-test databases.
    Moreover, using ASM is better than standard filesystems.
    Regards,
    Kamalesh

  • Disk space configuration

    Hi,
    Platform - Oracle 11gr2 OS - RHEL 6. IBM Blade Server.
    The blade server comes with default storage of 140G having RAID 1.
    We installed the OS Binaries. Around 80 G left.
    1> Is it a good idea to use up that space for the Oracle binaries and create the DB files on the SAN only?
    2> Any best-practice links on how to distribute the mount points (logical volumes) and LUNs?
    3> Oracle recommends RAID 1+0, but RAID has its own overhead. What's the best practice for RAID with Oracle?
    4> We don't have much space on the SAN if we configure RAID 1+0. Can we go with no RAID on the SAN box? Will it affect performance?

    Keep in mind that some log files may wind up on the same device as your binaries. Do an emctl status dbconsole to see where some are. Also watch out for dump directories (see the ADRCI docs, and be especially cognizant of OS audit files).
    Also consider the size of the binaries install - does each patch set these days have an entire binary distribution? (Yes, partly because there was difficulty replacing pieces in use that were assumed to be shut down).
    Google SAME (Stripe and Mirror Everything) - the general idea is, the more spindles, the better performance, so simply using all the spindles with RAID 10 will be as good as any LUN's you can come up with without going into Obsessive Configuration Disorder. On the other hand, redo and archiving have very sequential characteristics, while data and undo have very random characteristics, and there is a complex system of buffering between Oracle and the actual storage. You also want to be obsessively protective of redo. So you need to understand the reliability and performance characteristics of your database, the latter of which can only be determined empirically, and the former requires management input and direction. Unfortunately, too often management doesn't do that part well and just gives you insufficient cheap disk.
    Some configurations work just fine with RAID-5, until they don't (that is, a buffering limit is reached, or operating under degraded mode with failing disk).
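    One way to see the spindles-versus-layout trade-off concretely is through rule-of-thumb RAID write penalties: RAID 10 costs 2 physical I/Os per logical write, while RAID 5 costs 4 (read data, read parity, write data, write parity). The per-spindle IOPS figure below is an assumption for illustration, not a spec of any particular disk.

```python
# Rough random-write IOPS for a LUN, using rule-of-thumb RAID write
# penalties: RAID 10 = 2 physical writes per logical write,
# RAID 5 = 4 (read-modify-write of data and parity).
def usable_write_iops(spindles, iops_per_spindle, write_penalty):
    """Approximate sustained random write IOPS for the whole array."""
    return spindles * iops_per_spindle // write_penalty

IOPS_PER_SPINDLE = 180  # assumed figure for a fast FC/SAS disk

for layout, penalty in [("RAID 10", 2), ("RAID 5", 4)]:
    iops = usable_write_iops(8, IOPS_PER_SPINDLE, penalty)
    print(f"8 spindles, {layout}: ~{iops} random write IOPS")
```

This is why redo, which is write-heavy and precious, tends to land on mirrored or RAID 10 storage even when bulk data sits on parity RAID.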
    Listen to Ed.

  • New Mac Pro Tower Configuration

    Hi
    I am currently looking at purchasing a new Mac Pro desktop tower to run CS 5.5.  I am planning to use it for longer video projects with Premiere and AE.  I would also be using Audition to some extent.
    I don't think I can afford the twelve-core version of the Mac Pro, but I am wondering what the best way is to configure the RAM/processor/video card.
    Also, are there any compatibility issues with OS X Lion?
    Any help will do.
    Thanks

    While the case for building a PC is strong and valid, and while I would build a PC if I needed a new system right now, I also understand the needs of some institutions to build a Mac. Here is my advice in that regard:
    1. You don't need dual processors. I have a single 6-core 3.33GHz chip in my 2009 Mac Pro (which I upgraded myself) and it handles heavy work in Premiere and After Effects CS5.
    2. Don't buy any RAM from Apple. With the price of hard drives today, it isn't such a big deal to have Apple install extra drives for you, but if you're even slightly technically inclined, you can install more RAM and hard drives for less if you shop and buy from other vendors. There is a great Mac-specialty website where I buy my RAM from called OWC (Other World Computing link) that has excellent deals and service. I think 24GB is minimum ($322), and 32GB is better... only $425 right now.
    3. If you have the budget, a proper third-party RAID card is a much better deal and performer than Apple's Mac Pro RAID Card, so don't waste your money there unless you really want to limit yourself to an internal RAID 5 of no more than four disks. I got by with a 3-drive RAID (AID) 0 stripe of disks for a year before needing to expand to a proper RAID. With three Apple drives striped together, you'll get a sustained data throughput of 330MB/sec, both read and write, which is good enough to edit some HD video. I edited a feature-length film shot on P2 at 1080 60i/24p using only three internal Apple 1TB disks, with backups to external drives holding a safety net for my data.
    4. You don't need to buy an NVIDIA /CUDA card (Quadro 4000). I have the 5870 from Apple, and I can edit DSLR 5DMkII footage in smooth real-time, but that is helped by the 32GB of RAM and fast RAID. I have a CUDA card, an older GTX285, and while it was good for Premiere, I felt it was slower than the 5870 in After Effects, which is where I spend more time rendering anyway.
    I'm not sure how much weight you'll be placing on student projects being safe from loss in the event of a drive failure, but you could either have them back their projects up themselves on their own external drives, or build a robust RAID. I have an Areca RAID card that runs eight 2TB disks in RAID 6, which yields 12TB of very fast storage that is also still safe if any two of the eight drives fail before being replaced. Depending on the number of students using the system, it could cost less to build a similar RAID than to have each student buy their own drives. The question is, does the school eat that cost of security for the students, or let them shoulder it themselves?
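    For reference, the capacity arithmetic behind that 8-drive RAID 6 array is simple: two drives' worth of space goes to parity, and the array survives any two concurrent drive failures. A minimal sketch:

```python
# RAID 6 usable capacity: total drives minus two parity drives' worth.
def raid6_usable_tb(drives, drive_tb):
    """Usable capacity in TB for a RAID 6 set of identical drives."""
    assert drives >= 4, "RAID 6 needs at least 4 drives"
    return (drives - 2) * drive_tb

# Eight 2TB disks, as in the Areca setup described above.
print(raid6_usable_tb(8, 2))  # -> 12 (TB usable)
```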
    I was a film school student myself, and I know what it's like to sign up for a four-hour block on a limited number of edit suites. You want the system to run faster than the students, so they can be creative without wasting their time slot waiting for the computer.
    With your education discount from Apple, you should be able to get a 6-core 2010 Mac Pro, RAM, disks (and RAID if you go that route) from 3rd parties for about $6000... much less if you skip the RAID system. If I can help in any way, let me know. I love my system, even if I could build it as a PC today at half the cost. Adobe works the same (or better) on a PC as it does on a Mac once you're in the software, but if you have to go Apple and can't wait for what may come out in the future, consider a system like mine.
    2009 Mac Pro (updated to 2010 firmware... you'd buy a 2010 and not worry about that.)
    3.33GHz 6-core CPU
    ATI Radeon HD 5870 (still runs software acceleration very nicely)
    32GB RAM @1333MHz
    (4) internal hard disks (1=OS X, Adobe CS5, 2+3=RAID0 (stripe) for scratch files, 4=other backups and storage)
    Areca 1880ix-12 RAID card
    Sans Digital 8-bay tower
    (8) 2TB WD RE-4 hard disks in RAID6 = 12TB parity array (sustained 816MB/sec write, 714MB/sec read speeds)
    Various backups in external disks.
    Good luck!

  • Solutions for 2 editors working on same projects

    I work as a lone video editor in a magazine publication/web design company, producing videos for our various websites. I will soon be joined by a second editor.
    I would like a solution where we could easily work on each others projects from our own machines when needed.
    I have looked into Edit Share, ideal but expensive, wondered what other solutions there might be out there?
    Could you just use a large network drive as the capture scratch for both, would there be other issues to consider?
    Many thanks
    John

    Final Cut Server has nothing to do with sharing files between computers or shared storage. It is a cataloging and organizational program. It helps keep track of things, but it doesn't supply bandwidth or data management across a wire. The data management in FCS is organizational only.
    If you have the money for a SAN, don't look at Edit Share. I had a one month demo and I sent it back after a week. Couldn't handle capture and print to video at the same time. Couldn't even capture from two computers at the same time. You have to buy their capture module (basically a buffer) which costs extra. The overall price back then was about $56K for 6 TB.
    Check out Rorke Data Systems and talk to Jim Boas. He can put together a nice package for you at the best rate around. My company is about to buy a 16TB RAID 6 SAN from them, complete with management software, and iSCSI bridge. Can get towers AND laptops connected. Used it, works perfectly. They have a managed SATA version now as well, not too pricey. The package we are getting is under $45k. But storage is cheap these days, and we are paying for connectivity to 12 edit stations. If you only have 2, that's $10k off the top. If you only need say, 6TB, that's another $10k off the top. Have tried the NAS option. Listen to Shane. It doesn't work well. Sure, some people may have tested it and say it works, but you have to wonder what they did to test it, and did they really put it through its paces. You can't test shared storage on an edit suite by importing footage and screwing around in the timeline. That's not editing. That's screwing around. If you're serious, you'll need a managed, shared storage network, independent of your LAN. DO NOT USE A TCP/IP NETWORK!!!

  • Extreme Performance for an Extreme Task

    Hi Everybody, this post was originally in the Mac Pro discussions, but it seemed to fit in more in the 'Xserve' discussions - as it has been established that the Xserve is better suited for this job than the Mac Pro.
    ==
    Hello,
    Some of you will read this question in disbelief or think I have gone crazy (laughs), but it's all entirely true, and it ties in with several large businesses plus university studies.
    Imagine, if you would, a space station, or to be more precise, a space hotel, where every room is fitted with computer systems enabling users/residents to do anything they want (listen to music, view photos, watch HD video, etc.), and the systems power a communications system across the station, plus monitor critically important equipment and control the spacecraft/station.
    I have established that on-board Xserves with Xserve RAIDs would be the best option possible, but what about the processor? The amount of RAM? etc. Bear in mind these systems will be given lots of work to do, all at the same time, 24/7.
    I will just add that this is a university project backed by a large enterprise/business which intends to build a space hotel in the very near future.
    Also: is it possible to develop a 'supercomputer' with the Xserve, one which is extremely powerful and can be given a heavy workload? That workload might include HD video editing, HD video playback, AV communications, photo and music sharing and playback, database management, plus on-board control systems including realtime mapping and navigation.
    This is, of course, a large task, and we really do want to use Apple machines and Apple's Mac OS X as we all see it as the operating system of the future.
    Many Thanks,
    James
    (Project Researcher)

    Hi James,
    yes, it really sounds like a lot of fun and OS X seems to be a cool solution for many of the things you suggest. It is really relatively easy to program for OS X, especially if you want to show off nice little things fairly quickly (e.g. with Dashcode).
    If we can dream here, I would go with the following setup: probably Xserve + Xserve RAID on a SAN as the server for entertainment purposes. This server would connect to TVs in each room via Ethernet, connected to native 720p 50-inch plasma displays. The TVs are highly customized (see http://www.appletvhacks.net/) to be able to upload user photos and videos to the main system. These videos and photos would be accessible privately by the guest who uploaded the media, and (if indicated by the guest) also publicly to other guests (e.g. in the form of an HD video podcast edited nightly by the station guest manager: "Space Station Highlights Daily Podcast. Today's episode: The earth viewed from space. Tomorrow's episode: Pro tips for taking sunrise photographs"). Every guest also gets a Mac Pro with Final Cut Studio 2 to do their own video editing and to encode the newest Blu-ray discs to H.264 for viewing on the TV.
    For the scientists, an Xgrid of Xserves (64 nodes at least!) to evaluate their experiments immediately (for example climate modeling and DNA sequencing). For each scientist, at least two 30-inch Cinema Displays. Who wants to wait to evaluate the data back on earth when you can do it immediately on the station? Additional benefit: heat generated by the servers can be used to heat the station when it is on the dark side of the earth. Disadvantage: computer failure due to overheating when the station gets direct sunlight.
    But to be more realistic: I think a plasma screen to see the latest movies is not why people would go to a space hotel (especially if you have the breathtaking view of planet earth from your window) and if you ever want to get the station off the ground without the requirement to have a small nuclear power reactor on board, you don't want to shoot a traditional "Supercomputer" into space.
    I think your approach should be a smart one, not a brute force one. Think about the exact requirements and then look for highly specialized solutions. Check how much power is available. I guess not much, everything is probably powered by battery and solar panels, maybe some radioisotope thermoelectric generators. Check how much can be transported realistically into space (volume and weight).
    Use lightweight, low power devices wherever you can (flash for storage instead of hard disks, monochrome, non backlight displays for system control computers).
    Think about small, energy efficient computer chips like the ones used in Apple laptops. They definitely have the computing power to control the space craft. Just look at the puny computers used in the space shuttle.
    Think about networking possibilities. Keep all your supercomputers earthbound and control them remotely via a network (satellite internet).
    For entertainment, too, think small, lightweight, and energy efficient. Something like a customized iPod connected to a slightly larger OLED display with 640x480 resolution.
    This is a much bigger challenge than just spending tons of money on energy hungry, powerful Xserves, Xserve RAIDS and 30 inch cinema displays that will never make it into space anyways.
    Have fun! Lars
    12" PowerBook G4 - 1.25 GB RAM - Mac Pro - 2 GB RAM -   Mac OS X (10.4.9)   - 40 GB iPod 4G

  • Cutting in DVCPRO HD

    I've been looking into DVCPRO HD as a compression to cut our HDCAM footage in and just need some feedback.
    We're working on FCP Studio 2 and After Effects CS3 on a Raid 5 SAN and we're capturing with KONA3.
    Does FCP allow us to capture HDCAM footage with DVCPRO HD compression upon ingest? Shane Ross, I know you work with DVCPRO HD; what's your experience with it; or anyone else for that matter? Have you had any trouble with the format? Do you use a different compression when working with it? Does DVCPRO HD material work well when bringing it from FCP to AE and back and other such programs? How does it compare to working with native HDCAM material?
    Basically, I'm just trying to find out if it's a viable compression to work with and find any possible pitfalls. I've done a lot of reading on it but I need to know an editor's and graphic artist's experience with it.
    Thanks.

    *Shane Ross, I know you work with DVCPRO HD; what's your experience with it; or anyone else for that matter? Have you had any trouble with the format? Do you use a different compression when working with it? Does DVCPRO HD material work well when bringing it from FCP to AE and back and other such programs?*
    We shot for three months on an HDV cam (Z1U), and its long-GOP format was too difficult to deal with in post and in round trips to AE.
    We switched to a HVX200 and shoot almost exclusively 720/24pNative.
    We use DVCProHD footage in all of Adobe's Production suite programs that apply, and with Motion, LT, FCP, and a variety of 3rd party utilities.
    One of the simplest codecs to work with, and easy to edit. Titling and FX are far superior than HDV. Don't know about HDCAM.
    However, if you are capturing via Kona3, stay with ProRes.

  • Mixing G5 w/ Intel

    My company is about to invest in some new equipment and I was wondering something. Right now we have one G5 2.5G and we are going to get another Mac so we have at least a G5 in both edit suites. (Right now edit 2 has a G4.) Would it be a good idea to get a Mac Pro to run alongside the G5, as long as they are both running the crossgrade and working from the same RAID 6 SAN, or should we get another G5 instead? I just want to know if there are any compatibility issues between the G5 and the Mac Pro as far as file sharing, network connectivity... anything I might not have thought of.
    Thanks. If this is vague, let me know what else I can include in the post.
    Matt
    EDIT: never mind, I found the answer here.

    Shane had posted on this earlier... sorry for reposting...

  • Oracle clusters

    Gurus
    Hoping you all can help me with this. I have a fair idea about it, but I want expert advice.
    We want to set up a database server cluster with two nodes to start, dual-processor each, connected to a RAID 5 SAN.
    We want both servers to be active (not active/passive), and we want the ability to add more storage and more servers to the cluster.
    Why can't we do this with Linux AS 3.0 and Oracle 9i Enterprise Edition?
    What are our options for doing this with any OS running on a non-Unix/AIX machine? We want to be able to set this up using PCs.
    Please let me know which license and OS combination will work. One hosting company says that with Linux AS 3.0 and Oracle 9i Enterprise we would only be able to run the two-server cluster as active/passive.
    Please give me some advice on this issue.
    Thanks
    Regards

    You can run Red Hat Linux AS 3.0 with Oracle 9i Release 2 Enterprise Edition in active/active mode, that is, a two-node Real Application Clusters (RAC) setup. If you feel that Enterprise Edition is too expensive, you can go for Oracle 10g Standard Edition with Red Hat 3.0 on dual-processor nodes (licensed for a maximum of 4 CPUs).
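    As a quick sanity check on the Standard Edition route, the cluster-wide 4-CPU cap mentioned above reduces to simple arithmetic (a sketch only; verify against your actual license terms):

    ```python
    # Sketch: Oracle Standard Edition caps a RAC cluster at 4 CPUs
    # total across all nodes (per the licensing limit cited above).
    # Two dual-processor nodes fit exactly; a third node would not.

    SE_MAX_CPUS = 4  # cluster-wide CPU limit for Standard Edition

    def fits_standard_edition(nodes: int, cpus_per_node: int) -> bool:
        """True if the whole cluster stays within the SE CPU cap."""
        return nodes * cpus_per_node <= SE_MAX_CPUS

    print(fits_standard_edition(2, 2))  # True: 2 x 2 = 4 CPUs
    print(fits_standard_edition(3, 2))  # False: 6 > 4, needs Enterprise
    ```

    So the proposed two-node dual-processor cluster fits Standard Edition today, but the "add more servers later" requirement would push you to Enterprise Edition.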

  • What switch do i need?

    Hi All,
    I am not familiar with Cisco's range of content-switching products, so I thought I would ask here :)
    What we are going to have:
    6 http/https/imap+ssl servers
    3 incoming smtp servers
    1 big ass RAID or SAN
    All beefy machines. We are probably looking at providing 35,000 web-based/IMAP email accounts, where say 15,000 people could be logged in at any one time.
    Would a CSS 11500 be overkill for this?
    cheers
    dave

    Cisco's content switching products include:
    Cisco CSS 11500 Series Content Services Switches
    Cisco CSS 11000 Series Content Services Switches
    Cisco LocalDirector 400 Series
    Cisco Catalyst 6500 Series Content Switching Module (CSM)
    The CSS 11500 is recommended for small/medium networks, and the CSM on the Catalyst 6500 for large networks.
    I'm not sure about the number of connections they can support, though.
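    Neither post puts hard numbers on the load, but a rough sketch helps frame the "overkill" question. The two-connections-per-user figure below is my assumption (one HTTP(S) plus one IMAP session per logged-in user), not a measurement:

    ```python
    # Rough connection-load sketch for the setup described above:
    # 15,000 concurrent users across 6 HTTP/IMAP front-end servers,
    # assuming 2 TCP connections per logged-in user.

    concurrent_users = 15_000
    frontends = 6
    conns_per_user = 2   # assumption; tune from real traffic

    total_conns = concurrent_users * conns_per_user
    per_server = total_conns / frontends

    print(total_conns)   # 30000 flows the content switch must track
    print(per_server)    # 5000.0 connections per front-end server
    ```

    On the order of 30,000 concurrent flows is well within what a mid-range content switch is built for, so the sizing question is less "is it overkill" and more whether you want its SSL offload and health-checking features.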

  • Can InterConnect support D3L and XML at the same time?

    Hi,
    In the Adapter.ini file, ota.type can only be set to one of D3L or XML. Does this mean an InterConnect adapter can only support either D3L or XML parsing at a time?
    If I have different types of incoming files, I need to install at least two adapters to make it work, right?
    Thanks,
    Jie

    ASM diskgroup redundancy and SAN redundancy are not the same and work completely differently. You cannot use ASM external and high redundancy at the same time.
    A SAN does not know anything about Oracle database files, and its redundancy is usually accomplished by RAID. ASM is not RAID and does not duplicate complete disks. ASM data redundancy is based on disk failure groups and file extents. ASM knows about Oracle database files and automatically adjusts the striping of data, which a SAN cannot. ASM also has the advantage that the storage size can be dynamically adjusted while the storage is online, without having to rebuild your storage devices. Rebuilding or extending a RAID-based SAN solution usually means reinitializing the storage.
    Beware of SAN snapshot and replication features, as they may not provide a complete backup or redundancy if you lose the entire storage.
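    To make the failure-group idea concrete, here is a toy Python illustration (not Oracle's actual allocator) of how mirroring extents across failure groups survives the loss of a whole group:

    ```python
    # Toy illustration of ASM-style normal redundancy: each extent gets
    # a primary and a mirror copy in two *different* failure groups, so
    # losing every disk in one group never loses data.
    from itertools import cycle

    failure_groups = ["fg1", "fg2", "fg3"]

    def place_extents(n_extents):
        """Round-robin primaries; each mirror goes to the next group over."""
        placements = []
        groups = cycle(range(len(failure_groups)))
        for _ in range(n_extents):
            p = next(groups)
            m = (p + 1) % len(failure_groups)
            placements.append((failure_groups[p], failure_groups[m]))
        return placements

    def survives_group_loss(placements, lost_group):
        """Every extent must keep at least one copy outside the lost group."""
        return all(not (p == lost_group and m == lost_group)
                   for p, m in placements)

    ext = place_extents(8)
    print(all(survives_group_loss(ext, g) for g in failure_groups))  # True
    ```

    A RAID set on a SAN mirrors or parity-protects whole disks with no such file awareness, which is the contrast the answer above is drawing.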

  • Data File Management on SAN & RAID

    Hi everyone,
    this is more a question looking for some generic feedback than a particular problem. I'm advising on a system which runs 10g with archiving, Flashback and a standby database on a really fast machine. The whole database, which is currently some 5GB, runs pretty much entirely out of a 40GB SGA :)
    When I started working with Oracle we managed data files in great detail and paid much attention to disk placement; that was back at 8i. Today we have NAS, SAN and RAID systems, which make I/O tracking a lot harder. This particular system runs on an HP storage system with virtual RAIDs, and everything appears under just one mount point to the OS and to the DB.
    I'm aware of the standard rules, e.g. logs on RAID 10, etc., but I'm just wondering how those of you out there setting up production systems usually deal with the placement of DB files, log files, archive files and flashback files on today's low-end or enterprise-class storage systems.
    If you need a particular problem to answer: the issue here is that IT says it's not the storage system, but out of 13 hours of database time I have over 3 hours of 'log file sync' and write wait events.
    Thanks.

    Well the first thing I'd do with a 5GB database is not run a 40GB SGA.
    But to your question... I like to place my files so that, hypothetically, I can lose a whole disk shelf, not just a disk, and so that if I lose storage I can lose the database but not its associated backups, archived redo logs and redo logs; or, if I lose the backups, I still have a running database.
    And one LUN is almost never the right answer to any question when performance is an issue.
    Also familiarize yourself with Clos networks and create multiple paths from everywhere to everywhere. This means that blades and 1U servers are almost never the right answer to a database question.
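    That shelf-separation rule can be expressed as a simple check. The file-to-shelf mapping below is hypothetical, purely to illustrate the invariant:

    ```python
    # Hypothetical sketch of the placement rule above: no single shelf
    # may hold both the live datafiles and the backups/archived logs
    # that would be needed to recover them.

    def placement_ok(file_shelves):
        """file_shelves maps a file class to the set of shelves it lives on."""
        data = file_shelves["datafiles"]
        recovery = file_shelves["backups"] | file_shelves["archivelogs"]
        return data.isdisjoint(recovery)

    good = {"datafiles": {"shelf1"}, "backups": {"shelf2"},
            "archivelogs": {"shelf2"}}
    bad  = {"datafiles": {"shelf1"}, "backups": {"shelf1"},
            "archivelogs": {"shelf2"}}

    print(placement_ok(good))  # True: losing either shelf leaves a recovery path
    print(placement_ok(bad))   # False: shelf1 takes data and backups with it
    ```

    The same disjointness test extends naturally to redo logs and flashback files once you know which physical shelves each virtual LUN actually maps to.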
