eSATA 6G, USB 3.0 on PCIe, real world

I am looking to buy a CalDigit FASTA-6GU3 2-Port USB 3.0 & eSATA 6Gb/s Host Adapter to put in a PCIe slot of my 2008 Mac Pro, to run an external Blu-ray writer and a hard drive enclosure off the eSATA ports.
I'm looking for info on what real-world Gbps/MBps the Mac Pro's PCIe slot can actually handle. It's great that the card is rated 6Gbps eSATA and 5Gbps USB 3.0, but the bottleneck is what the bus can move (also limited by what the hard drive and writer can actually read/write).
I read in the reviews that the Mac Pro bus can only manage 500Mbps tops (slightly above FireWire's 400Mbps).
I also read that the actual card performance was limited to 250MBps due to its PLX bridge chip design.
As USB 2.0 is rated at 480Mbps, I have to wonder what all these numbers mean.
Can anyone put this into layman's perspective -- why put a 6Gbps eSATA and USB 3.0 card in a slot that can only move 250MBps?

Bytes will come back to bite you if you think they are bits.
Kilobytes had the same issue; so do gigabits versus gigabytes.
Figure roughly 10 bits to a byte on these serial buses (8b/10b line coding); it's 8 bits only when there is no coding overhead.
Wikipedia is your friend here, more so than Google, and there are online dictionaries of tech terms.
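
To make the units concrete, here's a quick back-of-the-envelope sketch (my own illustration, assuming the 8b/10b line coding that SATA and 5Gbps USB 3.0 actually use, and the ~250MB/sec-per-direction figure commonly quoted for a PCIe 1.0 x1 slot):

# Rough bus math: SATA and USB 3.0 (5Gbps) use 8b/10b line coding,
# so roughly 10 raw bits travel on the wire for every byte of data.
def coded_MBps(line_rate_gbps):
    return line_rate_gbps * 1000 / 10   # 1 Gbit/s of line rate ~ 100 MB/s of data

print(f"eSATA 6G ceiling : {coded_MBps(6.0):.0f} MB/s")  # ~600 MB/s
print(f"USB 3.0 ceiling  : {coded_MBps(5.0):.0f} MB/s")  # ~500 MB/s
print(f"USB 2.0 ceiling  : {480 / 8:.0f} MB/s")          # no 8b/10b; ~30-35 MB/s real
print("PCIe 1.0 x1 slot : ~250 MB/s per direction")
# So a 6Gbps card in an x1 slot is capped by the slot at ~250MB/sec --
# still several times FireWire, and more than one Blu-ray writer or a
# single hard drive can actually use.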
Some people come here talking about their "Mac Pro" when they don't know what a 65lb tower Mac is; my guess is a dozen of the people asking are really notebook users.
The Sonnet E4P (~$250, with 4 x eSATA ports) is nice for adding direct connections to SATA devices - more than enough bandwidth.
Your Mac has those four internal SATA2 drive bays. They all sit on a single shared controller bus, so while in theory you can get 275MB/sec max from each, the bus does NOT deliver the full 1.1GB/sec (4 x 275MB/sec), let alone 1.2GB/sec, but more like 750MB/sec - still more than enough for most disk drives. Only once you get to SSDs does the ceiling become apparent.
Here is roughly what your Mac's Serial ATA drives and bus offer in bandwidth, sort of:
180MB/sec is about the max for a 4TB enterprise drive or WD's latest 10K VelociRaptors.
A SATA2 SSD also tops out around 275MB/sec, and SATA3 models hit 450-500/550MB/sec max. Writes are often slower with SSDs while reads do very well, but it is the very high I/Os per second where they really shine, along with seek response times about 1/1000th of a hard drive's - instead of 4-12 milliseconds you are down into microseconds.
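
To see where that shared ~750MB/sec bus actually bites, here's a quick sketch using the rough per-drive maximums just quoted (an illustration, not a benchmark):

# When does the Mac Pro's shared SATA backplane (~750 MB/s) saturate?
BUS_LIMIT_MBPS = 750
for drive, per_drive in [("7200rpm HDD", 180), ("SATA2 SSD", 275)]:
    for n in range(1, 5):
        demand = n * per_drive
        got = min(demand, BUS_LIMIT_MBPS)
        flag = "  <- bus-limited" if demand > BUS_LIMIT_MBPS else ""
        print(f"{n} x {drive:12s}: want {demand:4d} MB/s, get ~{got:3d} MB/s{flag}")
# Four HDDs (720 MB/s) squeeze under the 750 MB/s ceiling; three or four
# SATA2 SSDs do not - which is why the limit only showed up with SSDs.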
Geeks were happy with base-2 numbering - binary 1s and 0s, and base-16 - whereas Apple and others wanted to change one gigabyte from the binary 1024MB into 1000MB. The dumbing down of math and specs, and the i-ification of consumer devices: iDevices, iOS, and consumers who never liked that 1024-based binary and hex numbering.
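
For illustration, here's how far the two conventions diverge, power by power:

# Decimal (base-10) units vs binary (base-2) units for the same prefix.
units = [("KB", "KiB"), ("MB", "MiB"), ("GB", "GiB"), ("TB", "TiB")]
for power, (dec, bin_) in enumerate(units, start=1):
    d, b = 1000 ** power, 1024 ** power
    print(f"1 {dec} = {d:>16,} bytes   1 {bin_} = {b:>16,} bytes   ({b / d - 1:.1%} apart)")
# A "1 TB" drive holds 10^12 bytes, which the binary convention reports
# as only ~931 "gigabytes" -- same bytes, two numbering systems.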

Similar Messages

  • Need an eSATA or USB 3.0 PCI-E card that supports Server 2012. ANY card.

    I can't find any card that supports Server 2012 yet, but I really need to transfer and move 2-3TB of data on a weekly basis. USB 2.0 + an external drive is too slow. Any other solutions I'm overlooking?
    I'd like to get an eSATA 6Gb/s card and a few eSATA externals. USB 3.0 is an option if I can't find eSATA, although it's slower than I'd like.
    I'm open to any ideas.

    I ended up getting two of these:
    http://www.newegg.com/Product/Product.aspx?Item=N82E16815201051
    Below is the review I wrote on newegg.com about the product.
    Pros: Contacted Koutech (and 5 other companies) asking what eSATA and USB 3.0 cards they had that would work in Windows Server 2012. A support tech responded that same day. Every other company still hasn't responded 2 months later. The rep said he hadn't tested any of their cards in Server 2012, but that he would start testing and give me a product list in 2 days.
    He did exactly that. In less than 2 days I had a list of 18 different PCI-E cards that worked in Server 2012.
    I ordered two of these cards (PEU437) for my Dell servers (T620 & T710).
    I installed the card in the T620 server, and the card did require the 15-pin SATA power cable in order to work properly (the instructions make it sound optional). My server didn't have any spare 15-pin SATA power connectors, so I had to buy a 15-pin SATA extension cable and daisy-chain off the SATA CD-ROM drive, but it works just fine.
    I'm transferring at speeds over 170MB/s (the fastest I've seen on USB 3.0 thus far). My M18x and MacBook Pro USB 3.0 ports seem to cap around 100MB/s. I attribute the extra speed to the ridiculous RAID setup and the H710 controller card, as I've done file replication at well over 1GB/s.
    All in all, the product is working exactly as described, and tech support from Koutech was amazing.
    I will definitely purchase future PCI-E cards from them.
    And of course Newegg's pricing and delivery speed were top notch.
    Cons: The directions were slightly misleading. I interpreted them as saying the 15-pin SATA power connection was "optional" for USB drives that have their own power source.
    Windows Server 2012 did see the PCI-E card without the 15-pin SATA power connected, but it wouldn't acknowledge any external USB HDD, even one with its own power source.
    In most cases this isn't an issue - desktop workstations usually have an abundance of 15-pin SATA power connectors - but my servers didn't have any. I had to daisy-chain off of my SATA DVD-ROM drive. The cable wasn't long enough, so a 12" extension cable was needed, which I got off Newegg for under 9 dollars each.
    A minor issue, and acceptable. The directions should be clearer, but the power requirement itself is a technology limitation, not a design flaw.
    Other Thoughts: I did a bare-metal server backup of slightly over 1TB of data in 1 hour and 42 minutes. A vast improvement over USB 2.0.
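
    (For what it's worth, that backup time and the transfer speeds quoted above agree with each other -- a quick sanity check, assuming a decimal terabyte:)

    # ~1 TB in 1 hour 42 minutes -- what throughput is that?
    data_bytes = 1.0e12                     # "slightly over 1TB", decimal TB assumed
    seconds = 1 * 3600 + 42 * 60            # 6,120 seconds
    print(f"{data_bytes / seconds / 1e6:.0f} MB/s")  # ~163 MB/s
    # Right in line with the ~170MB/s transfers reported above, and roughly
    # five times what USB 2.0 sustains in practice (~30-35 MB/s).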

  • USB 2.0 PCI card VT6202 chipset?

    Hi there,
    I own a QS 733 and want to get a bit more life out of the machine, so I thought I would install a USB 2.0 card. Trawling through eBay, the inexpensive cards all seem to be based on the VIA VT6202 chipset. The VIA website claims to have Mac drivers for this chipset: http://www.viaarena.com/default.aspx?PageID=420&OSID=23&CatID=2470&SubCatID=122 but unfortunately I get a timeout when I try to download the linked file.
    Does anyone have experience with these cards on a Mac?
    Many thanks in advance for your help!
    All the best,
    Dan

    Hi-
    The linked PCI card with the NEC chipset will be better than a VIA or ALi chipset card. It should work, and you don't need to worry about drivers - everything you need is already in Tiger.
    The issues to watch for are with sleep: either the machine not sleeping or, the most common problem, not waking from sleep. VIA and ALi chipsets can cause problems with no device attached at all. NEC chipsets let the card reside nicely in the computer (no conflicts), but a connected device may still cause problems.
    Memory-writing USB hardware - or, as I like to say, mass storage devices - can be used on the USB 2.0 card, but to avoid sleep problems, especially with USB-powered devices (like the iPod and some card readers), the device must be ejected and disconnected before sleeping the computer.
    Other devices - printers, scanners, cameras, some card readers, and such - are a case-by-case basis. If the device has its own power supply, oftentimes it won't cause any problems. If a device does cause a sleep problem while connected to the PCI card, oftentimes a powered USB 2.0 hub between the device and the USB card will take care of it.
    Real-world results: I have a Logicool QCam Ultra, a powered mouse-pad, a Wacom tablet, and an HP all-in-one printer connected to my USB 2.0 card. All devices work without causing sleep problems. I also have a 30GB iPod. If I leave the iPod connected when I command sleep, or if the computer goes to sleep while the iPod is connected, I cannot wake the machine; it requires a hard restart (holding the power button for 5 seconds) to bring it back. As long as I eject and disconnect the iPod, all is fine with sleep. To aid in the connect/disconnect, I leave the iPod cable connected to the PCI card, where it is easily accessed.
    Many people will ask about external USB hard drives - I say FireWire is better, period. USB HDDs and USB 2.0 PCI cards are the most common culprits in USB 2.0 incompatibilities. If you want headaches, use USB HDDs with USB 2.0 PCI cards.
    Testing each USB 2.0 device individually will soon confirm whether special handling is required. I realize that most of us are "plug it and leave it" conditioned, but with some modification of habits, USB 2.0 can work, and work well, on machines without native USB 2.0.
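
    That eject-before-sleep habit can even be scripted. A minimal sketch (the volume name is a placeholder for whatever your device mounts as):

    # Eject a mounted USB volume before sleeping the Mac, so a USB-powered
    # device on the PCI card can't hang the wake. Volume name is hypothetical.
    import subprocess

    def eject(volume="/Volumes/MY_IPOD"):
        # diskutil ships with OS X; "eject" unmounts and spins down the device
        subprocess.run(["diskutil", "eject", volume], check=True)

    eject()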

  • Problem with X200, NEC-based USB 3.0 ExpressCard and external USB 3.0 HD

    Hello, I ordered a USB 3.0 ExpressCard based on the NEC chipset. I am using a Lenovo X200 under XP SP3 and I have an external Buffalo MiniStation USB 3.0 HD. No problem installing the ExpressCard driver, and the card is recognized, but the HD won't mount. I hear the USB connection beep and within the next 10 seconds the USB disconnection beep. This is not a low-power issue; I connected a power supply to the ExpressCard and have the same issue. In addition, this config - ExpressCard and Buffalo MiniStation - works fine on a T60 under Windows 7. Any idea? Many thanks! Tom

    craig stewart wrote:
    I would like to add I/O to my Mac Pro. Details of it are below.
    I have looked at several options and would be happy to go with either eSATA or USB 3.0. I have heard some bad reviews about the USB 3.0 PCI offerings out there, though.
    On the one hand, I have an unused eSATA enclosure so I would not have to buy one; on the other hand, a few USB 3.0 ports on the machine would allow greater expandability with a USB 3.0 hub hung off the back. (Can you get eSATA hubs? Never thought of that until just now.)
    The nearest thing to an eSATA hub is a 'SATA port multiplier', which allows connecting up to 4 SATA drives to a single SATA port.
    Note: Not all SATA or eSATA cards support port-multiplier functionality. The CalDigit card says it does support port-multiplier enclosures; what this means is that a single eSATA cable can be connected to an external box containing multiple SATA drives, and the card will see and use them all.

  • How much real world difference would there be between the 1600MHz memory of a 4,1 Mac Pro and the 800MHz memory of a 3,1 Mac Pro? My main app is multitrack audio with Pro Tools. Thanks so much.

    How much real world performance difference would there be between the 1600MHz memory of a 4,1 Mac Pro and the 800MHz memory of a 3,1 Mac Pro? My main app is multitrack audio with Pro Tools. Thanks so much. The CPU speed of either one would be between 2.8GHz and 3.0GHz.

    What are the differences? Firmware and build: there were tweaks to the PCIe bus itself, and as a result 3rd-party cards and booting behave better.
    The 5,1 firmware adds support for more 56xx and W35/36xx processors, along with memory-timing changes.
    The 4,1 was "64-bit boot mode optional" while the 5,1 boots 64-bit by default. I don't know what else changed, but I assume something did, even if it is not reflected elsewhere or in a version number.
    I don't know what the prices are, but why buy a 2009 today when the 2010 is $1800?
    The 2008 of course was the test bed for 64-bit EFI, and it sure seems even Lion and then ML are not as well engineered for it - outside of Linc, who would be the least likely to have a problem.
    I would assume the 2010 has better support for 8GB and even 16GB DIMMs, as well as for 1333MHz memory.
    The Nehalem family had only come out in fall 2008, and a lot of work went into making improvements well past 2009.
    If you remember, there were serious heat problems with those on 10.5.7 up through 10.6.2 - even with just iTunes or audio, with hyperthreading, cores hitting and staying in the 80°C range. That I assume was both poor code (sleep does not mean poke and ask constantly) and something SMC and kernel improvements had to work around. Microcode can be patched in firmware, in the kernel, by drivers, and by code, but it is best when the chips and core elements don't need it.
    If someone is stretched and can get a 2009 for $1200, it might be a fine fit. That year offered the OEM GT120, which isn't really as nice or as well matched to today's OS and the apps that rely on a GPU. And for odd reasons two such 120s don't work well in Lion+, but that is probably minor. Having the 5770 is just "nicer" though.
    There are some articles about trouble booting with PCIe SATA/SAS/SSD cards, with less trouble on the 2010. Also, graphics card and audio support was, I think, one of those "minor" 5770-related issues. But it shows some small changes were made there too.
    I wish someone would pre-announce DDR4 + SATA3 along with PCIe 3.x (for bandwidth and more power per rail) and, say, Ivy Bridge-E socket processors for this summer's 3-year anniversary, replacing the 2010-design motherboard. But that is what is simmering on Intel's and others' drawing boards.

  • Making Effective Use of the Hybrid Cloud: Real-World Examples

    May 2015
    The Buzz from Microsoft Ignite 2015
    NetApp was in full force at the recent Microsoft Ignite show in Chicago, and it was clear that NetApp's approach to hybrid cloud and Data Fabric resonated with the crowd. NetApp solutions such as NetApp Private Storage for Cloud are solving real customer problems.
    Hot topics at the NetApp booth included:
    OnCommand® Shift. A revolutionary technology that allows you to move virtual machines back and forth between VMware and Hyper-V environments in minutes.
    Azure Site Recovery to NetApp Private Storage. Replicate on-premises SAN-based applications to NPS for disaster recovery in the Azure cloud.
    Check out the following blogs for more perspectives:
    Microsoft Ignite Sparks More Innovation from NetApp
    ASR Now Supports NetApp Private Storage for Microsoft Azure
    Four Ways Disaster Recovery is Simplified with Storage Management Standards
    Introducing OnCommand Shift
    SHIFT VMs between Hypervisors
    Infront Consulting + NetApp = Success
    Richard Treadway
    Senior Director of Cloud Marketing, NetApp
    Tom Shields
    Senior Manager, Cloud Service Provider Solution Marketing, NetApp
    Enterprises are increasingly turning to cloud to drive agility and closely align IT resources to business needs. New or short-term projects and unexpected spikes in demand can be satisfied quickly and elastically with cloud resources, spurring more creativity and productivity while reducing the waste associated with over- or under-provisioning.
    Figure 1) Cloud lets you closely align resources to demand.
    Source: NetApp, 2015
    While the benefits are attractive for many workloads, customer input suggests that even more can be achieved by moving beyond cloud silos and better managing data across cloud and on-premises infrastructure, with the ability to move data between clouds as needs and prices change. Hybrid cloud models are emerging where data can flow fluidly to the right location at the right time to optimize business outcomes while providing enhanced control and stewardship.
    These models fall into two general categories based on data location. In the first, data moves as needed between on-premises data centers and the cloud. In the second, data is located strategically near, but not in, the cloud.
    Let's look at what some customers are doing with hybrid cloud in the real world, their goals, and the outcomes.
    Data in the Cloud
    At NetApp, we see a variety of hybrid cloud deployments sharing data between on-premises data centers and the cloud, providing greater control and flexibility. These deployments utilize both cloud service providers (CSPs) and hyperscale public clouds such as Amazon Web Services (AWS).
    Use Case 1: BlackLine Partners with Verizon for Software-as-a-Service Colocation and Integrated Disaster Recovery in the Cloud
    For financial services company BlackLine, availability, security, and compliance with financial standards are paramount. But with the company growing at 50% per year, and periodic throughput and capacity bursts of up to 20 times baseline, the company knew it couldn't sustain its business model with on-premises IT alone.
    Stringent requirements often lead to innovation. BlackLine deployed its private cloud infrastructure at a Verizon colocation facility. The Verizon location gives it a data center that is purpose-built for security and compliance. It enables the company to retain full control over sensitive data while delivering the network speed and reliability it needs. The colocation facility gives BlackLine access to Verizon cloud services with maximum bandwidth and minimum latency. The company currently uses Verizon Cloud for disaster recovery and backup. Verizon cloud services are built on NetApp® technology, so they work seamlessly with BlackLine's existing NetApp storage.
    To learn more about BlackLine's hybrid cloud deployment, read the executive summary and technical case study, or watch this customer video.
    Use Case 2: Private, Nonprofit University Eliminates Tape with Cloud Integrated Storage
    A private university was just beginning its cloud initiative and wanted to eliminate tape—and offsite tape storage. The university had been using Data Domain as a backup target in its environment, but capacity and expense had become a significant issue, and it didn't provide a backup-to-cloud option.
    The director of Backup turned to a NetApp SteelStore cloud-integrated storage appliance to address the university's needs. A proof of concept showed that SteelStore™ was perfect. The on-site appliance has built-in disk capacity to store the most recent backups so that the majority of restores still happen locally. Data is also replicated to AWS, providing cheap and deep storage for long-term retention. SteelStore features deduplication, compression, and encryption, so it efficiently uses both storage capacity (both in the appliance and in the cloud) and network bandwidth. Encryption keys are managed on-premises, ensuring that data in the cloud is secure.
    The university is already adding a second SteelStore appliance to support another location, and—recognizing which way the wind is blowing—the director of Backup has become the director of Backup and Cloud.
    Use Case 3: Consumer Finance Company Chooses Cloud ONTAP to Move Data Back On-Premises
    A leading provider of online payment services needed a way to move data generated by customer applications running in AWS to its on-premises data warehouse. NetApp Cloud ONTAP® running in AWS proved to be the least expensive way to accomplish this.
    Cloud ONTAP provides the full suite of NetApp enterprise data management tools for use with Amazon Elastic Block Storage, including storage efficiency, replication, and integrated data protection. Cloud ONTAP makes it simple to efficiently replicate the data from AWS to NetApp FAS storage in the company's own data centers. The company can now use existing extract, transform and load (ETL) tools for its data warehouse and run analytics on data generated in AWS.
    Regular replication not only facilitates analytics, it also ensures that a copy of important data is stored on-premises, protecting data from possible cloud outages. Read the success story to learn more.
    Data Near the Cloud
    For many organizations, deploying data near the hyperscale public cloud is a great choice because they can retain physical control of their data while taking advantage of elastic cloud compute resources on an as-needed basis. This hybrid cloud architecture can deliver better IOPS performance than native public cloud storage services, enterprise-class data management, and flexible access to multiple public cloud providers without moving data. Read the recent white paper from the Enterprise Strategy Group, “NetApp Multi-cloud Private Storage: Take Charge of Your Cloud Data,” to learn more about this approach.
    Use Case 1: Municipality Opts for Hybrid Cloud with NetApp Private Storage for AWS
    The IT budgets of many local governments are stretched tight, making it difficult to keep up with the growing expectations of citizens. One small municipality found itself in this exact situation, with aging infrastructure and a data center that not only was nearing capacity, but was also located in a flood plain.
    Rather than continue to invest in its own data center infrastructure, the municipality chose a hybrid cloud using NetApp Private Storage (NPS) for AWS. Because it stores personally identifiable information and data that's subject to strict privacy laws, the municipality needed to retain control of its data. NPS does just that, while opening the door to better citizen services, improving availability and data protection, and saving $250,000 in taxpayer dollars. Read the success story to find out more.
    Use Case 2: IT Consulting Firm Expands Business Model with NetApp Private Storage for Azure
    A Japanese IT consulting firm specializing in SAP recognized the hybrid cloud as a way to expand its service offerings and grow revenue. By choosing NetApp Private Storage for Microsoft Azure, the firm can now offer a cloud service with greater flexibility and control over data versus services that store data in the cloud.
    The new service is being rolled out first to support the development work of the firm's internal systems integration engineering teams, and will later provide SAP development and testing, and disaster recovery services for mid-market customers in financial services, retail, and pharmaceutical industries.
    Use Case 3: Financial Services Leader Partners with NetApp for Major Cloud Initiative
    In the heavily regulated financial services industry, the journey to cloud must be orchestrated to address security, data privacy, and compliance. A leading Australian company recognized that cloud would enable new business opportunities and convert capital expenditures to monthly operating costs. However, with nine million customers, the company must know exactly where its data is stored. Using native cloud storage is not an option for certain data, and regulations require that the company maintain a tertiary copy of data and retain the ability to restore data under any circumstances. The company also needed to vacate one of its disaster-recovery data centers by the end of 2014.
    To address these requirements, the company opted for NetApp Private Storage for Cloud. The firm placed NetApp storage systems in two separate locations: an Equinix cloud access facility and a Global Switch colocation facility both located in Sydney. This satisfies the requirement for three copies of critical data and allows them to take advantage of AWS EC2 compute instances as needed, with the option to use Microsoft Azure or IBM SoftLayer as an alternative to AWS without migrating data. For performance, the company extended its corporate network to the two facilities.
    The firm vacated the data center on schedule, a multimillion-dollar cost avoidance. Cloud services are being rolled out in three phases. In the first phase, NPS will provide disaster recovery for the company's 12,000 virtual desktops. In phase two, NPS will provide disaster recovery for enterprise-wide applications. In the final phase, the company will move all enterprise applications to NPS and AWS. NPS gives the company a proven methodology for moving production workloads to the cloud, enabling it to offer new services faster. Because the on-premises storage is the same as the cloud storage, making application architecture changes will also be faster and easier than it would be with other options. Read the success story to learn more.
    NetApp on NetApp: nCloud
    When NetApp IT needed to provide cloud services to its internal customers, the team naturally turned to NetApp hybrid cloud solutions, with a Data Fabric joining the pieces. The result is nCloud, a self-service portal that gives NetApp employees fast access to hybrid cloud resources. nCloud is architected using NetApp Private Storage for AWS, FlexPod®, clustered Data ONTAP and other NetApp technologies. NetApp IT has documented details of its efforts to help other companies on the path to hybrid cloud. Check out the following links to learn more:
    Hybrid Cloud: Changing How We Deliver IT Services [blog and video]
    NetApp IT Approach to NetApp Private Storage and Amazon Web Services in Enterprise IT Environment [white paper]
    NetApp Reaches New Heights with Cloud [infographic]
    Cloud Decision Framework [slideshare]
    Hybrid Cloud Decision Framework [infographic]
    See other NetApp on NetApp resources.
    Data Fabric: NetApp Services for Hybrid Cloud
    As the examples in this article demonstrate, NetApp is developing solutions to help organizations of all sizes move beyond cloud silos and unlock the power of hybrid cloud. A Data Fabric enabled by NetApp helps you more easily move and manage data in and near the cloud; it's the common thread that makes the use cases in this article possible. Read Realize the Full Potential of Cloud with the Data Fabric to learn more about the Data Fabric and the NetApp technologies that make it possible.
    Richard Treadway is responsible for NetApp Hybrid Cloud solutions including SteelStore, Cloud ONTAP, NetApp Private Storage, StorageGRID Webscale, and OnCommand Insight. He has held executive roles in marketing and engineering at KnowNow, AvantGo, and BEA Systems, where he led efforts in developing the BEA WebLogic Portal.
    Tom Shields leads the Cloud Service Provider Solution Marketing group at NetApp, working with alliance partners and open source communities to design integrated solution stacks for CSPs. Tom designed and launched the marketing elements of the storage industry's first Cloud Service Provider Partner Program—growing it to 275 partners with a portfolio of more than 400 NetApp-based services.

    Dave:
    "David Scarani" <[email protected]> wrote in message news:3ecfc046$[email protected]..
    > I was looking for some real world "Best Practices" for deploying J2EE applications
    > into a production WebLogic environment.
    > We are new at deploying applications to J2EE application servers and are currently
    > debating 2 methods.
    > 1) Store all configuration (application as well as domain configuration) in properties
    > files and use Ant to rebuild the domain every time the application is deployed.
    > 2) Have a production domain built one time, configured as required and always
    > up and available, then use Ant to deploy only the J2EE application into the existing,
    > running production domain.
    > I would be interested in hearing how people are doing this in their production
    > environments and any pros and cons of one way over the other.
    > Thanks.
    > Dave Scarani
    I am just a WLS engineer, not a customer, so my opinions have in some regards little relative weight. However, I think you'll get more mileage out of creating your config.xml once, checking it into source control, and versioning it. I would imagine that application changes are more frequent than server/domain configuration changes, so it seems a little heavyweight to regenerate the entire configuration every time an application is deployed/redeployed. Either way, you should check out the wlconfig Ant task.
    Cheers
    mbg

  • iPod 30GB will not work with 3rd-party USB 2.0 PCI card

    I've had an IOGear PCI USB 2.0 card (GIC220U-BIO) in my Sawtooth for some time now, and have been using it flawlessly with my Canon i860 printer and Canon LiDE 30 scanner.
    I recently received a video iPod (30GB) and wanted to take advantage of the high-speed USB bus offered on that card to sync my iPod. The problem is, when I plug the iPod into a USB port on that card, I get the transparent gray box telling me that I need to restart my Mac by pushing the reset switch or holding the power button for several seconds (in other words, it freezes).
    The iPod works fine with the built-in USB ports (albeit slowly), as well as the high-speed port on my iBook G4. Why am I getting this error? There are no driver or firmware updates available for this PCI card.
    Is there a solution?

    Yes, use a different USB 2.0 PCI card, like one from Sonnet. I use a Sonnet in a G5 and it works great. There are (at least) two USB 2.0 chipsets out there, and I find the TI ones great and the others not so great.
    The fact that it works on the built-in USB 1.1 ports and on the G4 iBook's USB 2.0 at least tells you it is not the iPod, and a USB 2.0 card is $20 or $30 at the most.

  • Real World Adobe Photoshop CS3 (Real World)

    Real World Adobe Illustrator CS3 (Real World) - Mordy Golding;
    Real World Adobe Photoshop CS3 (Real World) - David Blatner;
    are these books at a higher level than the "Classroom in a Book" series?

    > but the part about DNG has convinced me to dive deeper in it and give it a go
    When working in a Bridge/Camera Raw/Photoshop workflow, I tend to ingest the actual native raw files, do initial selects, gross edits, and basic metadata work via templates, and THEN do the conversion to DNG. I'll use the DNGs as my working files and the original raws as an archive. I tend to do this more with studio shoots. I tend to use Lightroom when I'm on the road.
    When working in Lightroom first, I tend to ingest and convert to DNG upon ingestion (when on the road working on a laptop) while keeping a backup copy - usually working on a pair of external FW drives, one for the working DNG files and one for a backup of the original raws. Then, when I get back to the studio, I make sure I write to XMP, export the new shoot as a catalog, and import it into my studio copy of Lightroom. Then I'll also cache the newly imported images in Bridge, so I can get at an image either in Bridge or Lightroom.
    It's a bit of a chore now since I work in Camera Raw a lot (well, DOH, I had to, to do the book!), but I also keep all my digital files in a Lightroom catalog, which is now up to about 74K...
    Then, depending on what I'll need to do, I'll either work out of LR or Bridge/Camera Raw...
    If I'm doing a high-end final print, I generally process out of Camera Raw as a Smart Object and stack multiple layers of CR processed images...if I'm working on a batch of images I'll work out of Lightroom since the workflow seems to suit me better.
    In either event, I've found DNG to be better than native raws with sidecar files.

  • RAID test on 8-core with real world tasks gives 9% gain?

    Here are my results from testing the software RAID setup on my new (July 2009) Mac Pro. As you will see, although my 8-core (octo) tested twice as fast as my new (March 2009) MacBook 2.4 GHz, the software RAID setup only gave me a 9% increase at best.
    Specs:
    Mac Pro 2x 2.26 GHz Quad-core Intel Xeon, 8 GB 1066 MHz DDR3, 4x 1TB 7200 Apple Drives.
    MacBook 2.4 GHz Intel Core 2 Duo, 4 GB 1067 MHz DDR3
    Both running OS X 10.5.7
    Canon Vixia HG20 HD video camera shooting in 1440 x 1080 resolution at “XP+” AVCHD format, 16:9 (wonderful camera)
    The tests. (These are close to my real-world “workflow” jobs that I would have to wait on when using my G5.)
    Test A: import 5:00 of video into iMovie at 960x540 with thumbnails
    Test B: render and export with Sepia applied to MPEG-4 at 960x540 (a 140 MB file) in iMovie
    Test C: in QuickTime resize this MPEG-4 file to iPod size .m4v at 640x360 resolution
    Results:
    Control: MacBook as shipped
    Test A: 4:16 (four minutes, sixteen seconds)
    Test B: 13:28
    Test C: 4:21
    Control: Mac Pro as shipped (no RAID)
    Test A: 1:50
    Test B: 7:14
    Test C: 2:22
    Mac Pro config 1
    RAID 0 (no RAID on the boot drive, three 1TB drives striped)
    Test A: 1:44
    Test B: 7:02
    Test C: 2:23
    Mac Pro config 2
    RAID 10 (drives 1 and 2 mirrored, drives 3 and 4 mirrored, then both mirrors striped)
    Test A: 1:40
    Test B: 7:09
    Test C: 2:23
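    Working the percentages from the times above (quick arithmetic; the 9% headline comes from Test A under RAID 10):

    # Speedup of each RAID config vs. the stock Mac Pro, from the times above.
    stock  = {"A": 110, "B": 434, "C": 142}   # 1:50, 7:14, 2:22 (seconds)
    raid0  = {"A": 104, "B": 422, "C": 143}   # config 1: 1:44, 7:02, 2:23
    raid10 = {"A": 100, "B": 429, "C": 143}   # config 2: 1:40, 7:09, 2:23

    for label, cfg in [("RAID 0 ", raid0), ("RAID 10", raid10)]:
        for t in "ABC":
            gain = (stock[t] - cfg[t]) / stock[t]
            print(f"{label} test {t}: {gain:+.1%}")
    # Best case: Test A on RAID 10, (110 - 100) / 110 = +9.1%. Tests B and C
    # barely move -- consistent with the ~8 MB/sec disk activity noted below:
    # these jobs are CPU-bound, so faster storage hardly matters.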
    My question: Why am I not seeing an increase in speed on these tasks? Any ideas?
    David
    Notes:
    I took this to the Apple Store and they were expecting a 30 to 50 percent increase with the software RAID. They don't know why I didn't see it in my tests.
    I am using iMovie and QuickTime because I just got the Adobe CS4 and ran out of cash. And it is fine for my live music videos. Soon I will get Final Cut Studio.
    I set up the RAID with Disk Utility without trouble. (It crashed once but reopened and set up just fine.) If I check back, it shows the RAID set up and working.
    Activity Monitor reported “disk activity” peaks at about 8 MB/sec on both QuickTime and iMovie tasks. The CPU number (percent?) on QT was 470 (5 cores involved?) and iMovie was 294 (3 cores involved?).
    Console reported the same error for iMovie and QT:
    7/27/09 11:05:35 AM iMovie[1715] Error loading /Library/Audio/Plug-Ins/HAL/DVCPROHDAudio.plugin/Contents/MacOS/DVCPROHDAudio: dlopen(/Library/Audio/Plug-Ins/HAL/DVCPROHDAudio.plugin/Contents/MacOS/DVCPROHDAudio, 262): Symbol not found: _keymgr_get_per_threaddata
    Referenced from: /Library/Audio/Plug-Ins/HAL/DVCPROHDAudio.plugin/Contents/MacOS/DVCPROHDAudio
    Expected in: /usr/lib/libSystem.B.dylib

    The memory controllers - one per CPU - mean that you need at least 2 x 2GB on each bank. If that is how Apple set it up, that is minimal, and the only thing I would do now with RAM is add another 2 x 2GB. That's all. That gets you into triple-channel bandwidth.
    It could be the make and model of your hard drives. If they are Seagate, then more info would help. And not all drives are equal when it comes to RAID.
    Are you new to RAID, or is it something you've been doing? Seems you had enough drives to build 0+1 and do some testing. I'm not pleased, though, even if it works now, that it didn't take the first time.
    Drives - and RAIDs - improve over the first week or two, which, before committing good data to them, is the best time to torture them: run them ragged, use SpeedTools to break them in, loosen up the heads, scan for media errors, and run ZoneBench (and with 1TB drives, partition each drive into 1/4ths).
    If drive A is not identical to drive B, they may behave even worse in an array. No two drives are purely identical, some vary more than others, and some are best used in hardware RAID controller environments.
    Memory: buying in groups of three is okay. But then adding 4 x 4GB? That puts bank A at 4 x 2GB and bank B at twice as much memory. On a Mac Pro, with 4 DIMMs on a bank you get 70% of the bandwidth; it drops down from triple-channel to dual-channel mode.
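    (The channel math behind that 70% figure, roughly - a sketch assuming DDR3-1066, which moves about 8.5GB/sec per channel:)

    # Triple- vs dual-channel memory bandwidth on a Nehalem Mac Pro (DDR3-1066).
    per_channel = 1066e6 * 8 / 1e9   # 1066 MT/s x 8 bytes ~ 8.5 GB/s per channel
    triple = 3 * per_channel         # 3 DIMMs per bank -> ~25.6 GB/s
    dual   = 2 * per_channel         # a 4th DIMM drops to dual -> ~17.1 GB/s
    print(f"triple-channel: {triple:.1f} GB/s")
    print(f"dual-channel  : {dual:.1f} GB/s ({dual / triple:.0%} of triple)")
    # ~67% -- in the ballpark of the "70% bandwidth" quoted above.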
    I studied how to build a PC for over six months, but then learned more in the month (or two) after I bought all the parts, found what didn't work, learned my own shortcomings, and ended up building TWO - one for testing, the other as a backup system. And three motherboards (the best-'rated' one also had the most trouble with BIOS and fans, the cheap one was great, and the Intel board that reviewers didn't seem to 'grok' has actually been the best and easiest to use and update the BIOS on). Hands-on wins 3:1 versus trying to learn by reading, for me; hands-on is what I need in order to learn. Like taking a car or sailboat out for a drive or a spin to see how it fares in rough weather.
    I buy an Apple system bare-bones, stock, or less, then do all the upgrades on my own, when I can afford to, gradually over months or a year.
    Each CPU needs to be fed. So they each need at least 3 x 1GB of RAM. And they need raw data fed to RAM and CPU from the disk drives. And each program in your mix will behave differently - which is why you see Barefeats test with Pro Apps, CINEBENCH, and other apps and tools.
    What did you read or do in the past that led you to think you need a RAID setup, and what did it say about how RAID would affect performance?
    Photoshop Guides to Performance:
    http://homepage.mac.com/boots911/.Public/PhotoshopAccelerationBasics2.4W.pdf
    http://kb2.adobe.com/cps/401/kb401089.html
    http://www.macgurus.com/guides/storageaccelguide.php
    4-core vs 8-core
    http://www.barefeats.com/nehal08.html
    http://www.barefeats.com/nehal03.html

  • EJB 3.0 in a real-world open source project. Great as a coding reference!

    If you are interested in seeing EJB 3.0 implemented in a real-world project (not just examples), or if you are interested in learning how to use it, I suggest you take a look at the open source project Overactive Logistics.
    It has been written entirely using EJB 3.0 (session and entity beans). I found it very helpful in solving several technical situations I was facing.
    You can get more information at:
    http://overactive.sourceforge.net

    Thanks for the pointer, I will check it out.
    hth,
    Sean

  • What is the correct USB 3.0 PCIE card for Z800?

    I'm looking to buy the correct USB 3.0 PCIe card for a Z800, but I can't find any through HP. Anyone have any info? There are 12- and 60-dollar HP cards on eBay, but I don't know if those are the correct ones.

    Hi,
    Based on the following specs, the machine has plenty of options:
       http://h18000.www1.hp.com/products/quickspecs/13278_na/13278_na.PDF
    You can buy a 1-port, 2-port, 3-port, or even 4-port card, such as:
        http://www.startech.com/Cards-Adapters/USB-3.0/Cards/2-Port-PCI-Express-SuperSpeed-USB-3-Card-Adapte...
    You need to check the slot to get the right bracket for the card.
    Regards.
    PS: eBay is a different story; you can buy a card there for a few dollars.
    BH

  • Character Styles in the Real World

    Rick:
    Thanks for your efforts, and let me add my Amen to both
    subjects (on file locations and on Character styles).
    My real-world use of Character styles is a combination usage
    of Paragraph and Character styles for Notes: I have a Paragraph
    style called Note, which simply adds margins of .15in Left, 10pt
    Top, and 8pt Bottom. Within this paragraph style, multiple labels
    announce the type of Note with the use of Character styles
    NoteLabel (Navy), RecommendLabel (Teal), CAUTIONLabel (Purple), and
    WARNINGLabel (Red).
    This way, you can change the color of one or more labels
    without worrying about the paragraph settings (or vice versa).
    Also, when placing a Note inside a table cell (which might
    have limited horizontal space, especially with three or four
    columns), we still use the "Label" character styles but
    without the Notes paragraph style. This still sets off the
    text visually, without adding unnecessary extra vertical space.
    Thanks again, Rick!
    Leon

    I can tell you about two sites.
    1. A system which allocates and dispatches crews, trucks, backpack hoses, spare socks, etc to bushfires (wildfires to you). It operates between two Government departments here in Australia. Each of those despatchable items is a remote object and there have been up to 50,000 active in the system at a time during the hot summer months. This is a large and life-critical system.
    2. A monitoring system for cable TV channels. A piece of hardware produces a data stream representing things like channel utilization, error rates, delay, etc and this is multiplexed via RMI to a large number of operator consoles. Again this is a major and business-critical system.
    And of course every J2EE system in existence uses RMI internally, albeit almost entirely RMI/IIOP.

  • Mac Pro doesn't recognize USB 3.0 PCI express card

    Mac Pro doesn't recognize USB 3.0 PCI express card.  Does it need specific drivers? 

    I did a little searching on that card. The more I looked at it, the more I felt that, yes, it needs a driver, and no, one doesn't exist for the Mac. So it looks like that card is only supported on PCs. But to be sure, if I were in your place at this point, I would contact MSI directly.

  • PI Implementation Examples - Real World Scenarios

    Hello friends,
    In the near future, I'm going to give a presentation to our customers on SAP PI. To convince them to use this product, I need examples from the real world that have already been implemented successfully.
    I have made a basic search but still don't have enough material on the topic; I don't know where to look, actually. Could you post any examples you have at hand? Thanks a lot.
    Regards,
    Gökhan

    Hi,
    Please find the links here:
    SAP NetWeaver Exchange Infrastructure Business to Business and Industry Standards Support (2004)
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/90052f25-bc11-2a10-ad97-8f73c999068e
    SAP Exchange Infrastructure 3.0: Simple Use Cases
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/20800429-d311-2a10-0da2-d1ee9f5ffd4f
    Exchange Infrastructure - Integrating Heterogeneous Systems with Ease
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/1ebea490-0201-0010-faad-a32dd753d009
    SAP Network Blog: Re-Usable frame work in XI
    /people/sravya.talanki2/blog/2006/01/10/re-usable-frame-work-in-xi
    SAP NetWeaver in the Real World, Part 1 - Overview
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/20456b29-bb11-2a10-b481-d283a0fce2d7
    SAP NetWeaver in the Real World, Part 3 - SAP Exchange Infrastructure
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3172d290-0201-0010-2b80-c59c8292dcc9
    SAP NetWeaver in the Real World, Part 3 - SAP Exchange Infrastructure
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/9ae9d490-0201-0010-108b-d20a71998852
    SAP NetWeaver in the Real World, Part 4 - SAP Business Intelligence
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/1f42d490-0201-0010-6d98-b18a00b57551
    Real World Composites: Working With Enterprise Services
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/d0988960-b1c1-2a10-18bd-dafad1412a10
    Thanks
    Swarup

  • New Enterprise Manager Book on Advanced EM Techniques for the Real World

    Dear Friends,
    I am pleased to say my first EM book can be ordered now.
    Oracle Enterprise Manager Grid Control: Advanced Techniques for the Real World
    [http://www.rampant-books.com/book_1001_advanced_techniques_oem_grid_control.htm]
    Please let your colleagues and friends and clients know – it is the first book in the world to include EM 11g Grid Control. It is a great way for people to understand the capabilities of EM.
    Oracle’s Enterprise Manager Grid Control is recognized as the IT Industry’s leading Oracle database administration and management tool. It is unrivalled in its ability to monitor, manage, maintain and report on entire enterprise grids that comprise hundreds (if not thousands) of Oracle databases and servers following an approach that is consistent and repeatable.
    However, Enterprise Manager Grid Control may seem daunting even to the most advanced Oracle Administrator. The problem is you know about the power of Enterprise Manager but how do you unleash that power amongst what initially appears to be a maze of GUI-based screens that feature a myriad of links to reports and management tasks that in turn lead you to even more reports and management tasks?
    This book shows you how to unleash that power.
    Based on the author's considerable practical Oracle database and Enterprise Manager Grid Control experience, you will learn through illustrated examples how to create and schedule RMAN backups, generate Data Guard standbys, clone databases and Oracle Homes, and patch databases across hundreds and thousands of databases. You will learn how you can unlock the power of the Enterprise Manager Grid Control Packs, Plug-ins and Connectors to simplify your database administration across your company's database network, as well as the management and monitoring of important Service Level Agreements (SLAs), and the nuances of all-important real-time change control using Enterprise Manager.
    There are other books on the market that describe how to install and configure Enterprise Manager but until now they haven’t explained using a simple and illustrated approach how to get the most out of your Enterprise Manager. This book does just that.
    Covers the NEW Enterprise Manager Grid Control 11g.
    Regards,
    Porus.

