Advice needed - optical bay caddy and HDD for T530

Since I rarely use my optical drive and frequently back up files to an external hard drive, I'd like to put an HDD in the optical bay and just swap it out for the DVD drive whenever that's needed.
My T530 (i5-3210M, Win7 Home Premium) has a 256GB Crucial mSATA SSD as the boot drive for Windows and programs; the original 500GB/7200rpm drive in the drive bay holds data: Outlook and Quicken files, music, photos and documents.
For the optical bay HDD, 500GB would be plenty, but the 750GB is only about $10 more, so why not? (I'd probably put the 750 in the drive bay and make it the data drive, with the 500 in the optical bay as a backup drive... and this is about convenience more than need.)
Best I can tell, the optical bay supports SATA 6Gb/s, but most of the 2.5" 750GB/7200rpm drives are SATA 3Gb/s. (On the other hand, the Hitachi MK5061GSY 500GB/7200rpm that came in my T530 appears to be 'only' SATA II, and most forum posters suggest there's no noticeable difference - a 2.5" 7200rpm drive sustains well under the ~300MB/s that SATA 3Gb/s allows, so the interface is rarely the bottleneck.)
Of the 750GB drives, it seems the Seagate Momentus ($82) and Western Digital Scorpio Black ($78) would be the top two choices. I think they're both 9.5mm and would fit. Some reviews indicate the Western Digital is quieter.
The caddy is more problematic, as there are so many choices, and many appear to be poorly-fitting Chinese knockoffs. I'm not inclined to pay $70 for the Lenovo one when the others are in the $10-15 range, but I do want one that fits well, works well and lasts. (And is at least SATA 3Gb/s.) I can't tell much from the prices or pictures, and there's contradictory information in the 'product reviews.'
So I need a 12.7mm optical bay (Ultrabay) caddy for a 9.5mm HDD that fits well without gaps, matches the T530 case, and supports at least SATA 3GB/sec.
Will I need the rubber 'bumpers', or are those just for SSDs? If I need them, do they come with the caddy or the drive? If not, what do I get and where?
Does either the Seagate or Western Digital offer an advantage in function or reliability?
Please correct anything that I've misstated or misunderstood. Thank you for your advice and recommendations.

How the four screws work. 
HDDs and SSDs usually have four threaded holes on the bottom and also two threaded holes on each long side. The video you may have seen, with the screws going through holes in the bottom of the caddy into the threaded bottom holes of the HDD, shows the $45 NewmodeUS caddy. That's how they affix the drive to the caddy.
The Chinese caddy I linked uses the screws in a different way, which is actually pictured in the top-right illustration on the caddy.
While the drive is out of the caddy, you just thread the screws (through nothing) into the side holes on the drive. The drive now has two little protruding screw heads on each side. Their only purpose is to create those four slight protrusions - two per side.
Then you insert the drive into the caddy by sliding it in from one end. The thing on the bottom of the caddy picture is like a little door that folds up and down. You lift the door and slide the drive in from the other end. The protruding screw heads slide under little rails molded inside the sides of the caddy. When you have slid the drive all the way forward into the electrical connectors, you snap the door down behind it. The door keeps the drive from moving forward or back; the screw heads wedged under the rails keep it from moving up or down. You can do the entire operation in five seconds.
Ny-compu-tek seems to have the same caddy listed under various titles and prices, including one that specifically says T530. I think that's just a marketing gimmick, as the T530 takes the same caddy as the T520. Anyway, the one I linked is the least expensive from that source and fits perfectly.
It was also shipped out in one day with a tracking number. 

Similar Messages

  • Hard Drive Bay (Caddy) and drive for T430

Very confused about what the recommended HDD caddy for the T430 is (to swap the DVD drive for a 2nd HDD).
Also, is there a non-Lenovo HDD that works with this? A 1TB drive preferred.
    Thanks.

Ultrabay caddies come in two thicknesses: 12.7mm for the T430 and T530 (0A65623), and 9.5mm for the T430s (43N3412).
You can buy these from Lenovo, or you can get "clone" caddies quite inexpensively on eBay; e.g., a "9.5mm Bezel SATA 2nd HDD Hard Drive caddy bay for T400s T500 T410s X200" is $10 for a 9.5mm unit. This is what I have in my T430s. A 9.5mm caddy will fit into your T430 but will leave a small gap at the top. You can search to see what 12.7mm caddies are available.
I haven't tested the clone caddies with HDDs larger than 500GB, but there's no reason they won't work. Note that the HDD must be no thicker than the caddy in order to fit physically. Since you have a T430 with a 12.7mm Ultrabay opening, this shouldn't be an issue for you either.
    Cheers... Dorian Hausman
    X1C2, TPT2, T430s, SL500, X61s, T60p, A21p, 770, 760ED... 5160, 5150... S360/30

  • Need info about CPU and HDD for Tecra 8200

Hi guys, I'm new and have just bought the above laptop. However, I want a faster CPU and a bigger hard drive (10GB at the moment; after 40GB if possible), and my CPU is an Intel Pentium III 750MHz. Could someone please point me in the right direction of where I can get a 40GB hard drive for the right sort of money, and what is the fastest/suggested CPU I can go for?
Thanks for your help.

    Hi
The fact is that the Tecra 8200 was delivered with different CPUs:
PIII-M (Mobile) 750MHz;
PIII-M (Mobile) 850MHz;
PIII-M (Mobile) 900MHz;
PIII-M (Mobile) 1.0GHz
In this case the fastest CPU you can use is a PIII-M 1.0GHz.
However, changing the CPU is not easy, and you should not attempt the change if you have no experience.
Important: you will lose the warranty if you open the notebook.
Furthermore, in my opinion you will also need a higher-performance cooling module, because the new CPU will produce more heat.
I have also found information that this unit was delivered with 10GB, 20GB and 30GB HDDs. I think you will have no problems using a 40GB HDD. You can order a compatible one from a Toshiba service partner.

  • Where can i buy a hard drive caddy and connector for a second hard drive for my dv7-7190eo?


    Gurra wrote:
    Where can i buy a hard drive caddy and connector for a second hard drive for my dv7-7190eo?
    Hi,
The right part number for the hard drive kit is 681976-001. The support website for the laptop is on HP Sweden, so run the part number through google.co.uk (or your Swedish Google). The cheapest I have found is on US Amazon, for $38 or £42.
    Dv6-7000 /Full HD/Core i5-3360M/GF 650M/Corsair 8GB/Intel 7260AC/Samsung Pro 256GB
    Testing - HP 15-p000
    HP Touchpad provided by HP
Currently on Debian Wheezy
    *Please, help other users with the same issue by marking your solved topics as "Accept as Solution"*

  • Need to learn forms and reports for 10g

    Hi,
I need to learn Forms and Reports for 10g urgently, for Windows. Where can I download the software? I am a newbie in this field. Thank you.

    Big red button - top right
(Didn't someone ask just the same thing last week?)

  • 2nd HDD drive on optical bay caddy - Pavilion dv6700

    Hi everybody,
    I hope that someone here can help with that, because I'm running out of ideas...
I recently upgraded the main drive of my dv6700 notebook (dv6820es) with an SSD. I did a fresh installation of Win 7 32-bit and there was no problem at all.
But this drive is not that big, so I decided to plug the old SATA HDD in through the optical drive bay in order to gain storage capacity. For that I ordered a SATA-to-PATA caddy that fits perfectly.
    The big problem now is to make this combination work:
BIOS behavior: it's hard to tell if the BIOS detects the drive properly, since this model's BIOS is quite poor in information/configuration options. Anyway, when I press F9 with the 2nd HDD plugged in, a new boot option appears along with the SSD, listed as "7. Notebook hard drive". The HDD is spinning too. Because of all that I assume the BIOS detects at least "something" (but again, I can't say whether it's the HDD as such or just some unknown device). By the way, the BIOS is updated to the latest available version: F.58 A.
Windows behavior: from what I can see, Windows acts as if the drive weren't there. Nothing in Device Manager and nothing in Disk Management; just the SSD is detected. With one exception: in one of the many restarts I did, the drive showed up. The usual text bubble said "installing driver..." and apparently it installed fine. After the installation there was a new "IDE drive" under "Disk drives" in Device Manager. Fine, it seemed to work! But sadly I saw my hopes dashed in the next step, when trying to initialize the disk through Disk Management. First it took a long time (4-5 minutes) with the program just "thinking", only to finally throw one of those infamous "cyclic redundancy" errors. I restarted, but the drive was gone again. And it never showed up again; just that one time.
The sad thing is that I can't tell what made the difference. During all this I updated some things, like the Intel chipset drivers... and the Intel Matrix Storage driver (not really necessary, I guess, but just in case). But nothing in particular changed the time it almost worked.
My ATA drivers are the newest AHCI drivers available for my chipset. Is there some driver I'm missing? Is there some reason the 2nd HDD (PATA) can't work while the SSD runs in AHCI mode?
Well, I think that's all (sorry for the long post). Perhaps someone has endured/solved a similar issue.
    Please share your knowledge
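(For anyone debugging the same thing: a quick way to see whether Windows enumerates the disk at all, independent of the Device Manager UI, is a minimal sketch like the following. It assumes Python 3.7+ and Windows' built-in WMIC tool - an editorial illustration, not something the poster used.)

```python
# Minimal sketch: ask Windows which physical disks it enumerates,
# independent of the Device Manager UI. Assumes Python 3.7+ and the
# WMIC tool that ships with Windows 7.
import subprocess

result = subprocess.run(
    ["wmic", "diskdrive", "get", "Model,InterfaceType,Size,Status"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # the caddy drive should appear here if Windows sees it
```

If the caddy drive never appears in that output, the failure is below Windows (BIOS, caddy translation, or cabling) rather than a missing driver.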

    PeterPaul wrote:
    Hello Erico,
    sorry for the confusing information.
The HDD is a SATA drive and the optical drive is a PATA drive, hence I used a SATA-to-PATA caddy. Something like this:
    http://www.newmodeus.com/shop/index.php?main_page=product_info&cPath=2_5&products_id=226
(Though the one I ordered is not from this manufacturer.)
Considering that, although the HDD is a SATA drive, it should be detected as a PATA one. Is there some way to make both the SATA AHCI and PATA interfaces work together?
Not in this case. It takes an enthusiast-level motherboard and an advanced BIOS.
    [Edit: the chipset is, by the way, from the Mobile Intel 965 Express Family.]
Perhaps the one in the link has a better translation-cable setup than the one you purchased. Many people appear to be satisfied with it.
The hard drives' physical connections have to be of the same type, for the reason I expressed in my previous post, unless there is a working translation cable from SATA to PATA.
If your notebook had a dual drive bay and you installed the drive in the bay with a caddy and SATA connector cable, it would be simple.
    ****Please click on Accept As Solution if a suggestion solves your problem. It helps others facing the same problem to find a solution easily****
    2015 Microsoft MVP - Windows Experience Consumer

  • Unable to install Yosemite using SSD in Optical Bay Caddy

    Thanks for any help in advance. I recently purchased this item from Amazon:
    HDD/SSD SATA III caddy for Apple MacBook (Pro) replaces SuperDrive + slot-in USB enclosure for SuperDrive - 9.5 mm (SATA - SATA) - TheNatural2020: Amazon.co.uk: Computers & Accessories
I'm having big problems using it to install an SSD in my MacBook Pro Late 2011. When I put an SSD in the caddy (Samsung EVO 120GB) I am unable to install OS X Yosemite onto it. The installation fails instantly, stating 'File system verify and repair failed'. I can format the drive using Disk Utility and that seems to work fine, but go back to installing Yosemite from USB and it fails again.
The SSD itself is fine: when I install it in the regular HDD bay I can install Yosemite just fine (that's how I'm typing right now). If I transfer the SSD back to the caddy, it won't boot from it again (showing a flashing folder with a question mark).
If I keep the SSD in the hard drive bay and put another HD in the caddy, the other HD is visible in OS X but almost unusably slow.
This leads me to think there's a problem with the caddy, so I ordered a second one from Amazon, but it gives the same problem! Now I'm stuck - neither caddy works properly with an SSD or HD, somehow corrupting the data on it. Any suggestions would be very helpful!
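(A minimal Terminal sketch of the same checks, for reference - it assumes OS X's built-in diskutil, and "/Volumes/SSD" is a placeholder for the actual volume name shown by `diskutil list`; an editorial illustration, not the poster's procedure.)

```python
# Minimal sketch: see how OS X enumerates the drives, then verify the
# caddy volume's file system. Uses the built-in diskutil; "/Volumes/SSD"
# is a placeholder for the real volume name.
import subprocess

print(subprocess.run(["diskutil", "list"],
                     capture_output=True, text=True).stdout)
print(subprocess.run(["diskutil", "verifyVolume", "/Volumes/SSD"],
                     capture_output=True, text=True).stdout)
```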

    SimonStokes wrote:
    is it possible that it's the cable between the caddy and the motherboard that's the problem?
    Yes.  I have seen that SSDs in the 'normal' boot drive location have problems when the SATA cable is not shielded.  Swapping back the original HDD makes the MBP functional.
    I am guessing that you may be experiencing the same phenomenon.  Wrap the cable in tin foil and see if that makes a difference.
    Ciao.

  • Advice needed: is BDB a good fit for what I aim at?

    Hello everyone,
I'm not a BDB user (yet), but I really think that the BDB library IS the perfect fit for my needs.
I'm designing an application with a "tricky" part that requires a very fast data storage/retrieval solution, mainly for writes (but for reads too).
Here's a quick summary of this tricky part, which should use at least 2 databases:
- the first db will hold references to contents, with a few writes per hour (the references being "pushed" to it from a separate admin back end), but an expected high number of reads
- the second db will log requests and other events on the references contained in the first db: it is planned that, on average, one read from DB1 will produce five times as many writes into DB2.
To illustrate:
DB1 => ~25 writes / ~100,000 reads per hour
DB2 => ~500,000 writes / *(60?) reads per hour
(*I'll explain the reads on DB2 later in this post)
Reads and writes on both DBs are not linear; for 500,000 writes per hour, you could have the first 250,000 done within 20 minutes, for instance. There will be peaks of activity, and low-activity phases as well.
That being said, do the BDB experts here think that BDB is a good fit for such a need? Whether so or not, could you please let me know what makes you think so? Many thanks in advance.
Now, about the "*(60?) reads per hour" for DB2: actually, data from DB2 should be accessed in real time for reporting. As of now, here is what I think I should do to ensure and preserve a high write throughput and not miss any write in DB2: once per minute, another "DB2" is created that records new events from then on. The "previous" DB2 is then dumped/exported into another database, which is what gets queried for real-time reporting (not exactly real time, but up to five minutes is an acceptable delay).
So, in my first approach, DB2 is "stopped" then dumped each minute to another DB (not necessarily BDB, by the way - the data could probably be restructured into another kind of NoSQL storage to facilitate querying and retrieval from the admin back end), which would make 60 reads per hour (but "entire" reads, of the full db).
The questions are:
- do you think that rotating DB2 this often would improve or hurt performance?
- is BDB good and fast at doing massive dumps/exports? (OK: 500,000 entries per hour would make ~8,300 entries per minute on average, so let's say a dump's max size is 24,000 rows of data)
- or would it be better to read directly from the current DB2 while it is (intensively) storing new rows, which would avoid the need to dump each minute and provide more real-time capability? (it would then just need a daily dump, to archive the "old" data)
Anyone who has had to face such questions already is welcome, as is any BDB user who thinks they can help on this topic!
Many thanks in advance for your advice and knowledge.
Cheers,
Jimshell

    Hi Ashok
    Many thanks for your fast reply again :)
    Ashok_Ora wrote:
Great -- thanks for the clarification.
Thank YOU, my first post was indeed a bit confusing, at least about the reads on DB2.
Ashok_Ora wrote:
Based on this information, it appears that you're generating about 12 GB/day into DB2, which is about a terabyte of data every 3 months. Here are some things to consider for ad-hoc querying of about 1 TB of data (which is not a small amount of data).
That's right, this is quite a huge amount of data, and it will keep growing and growing... Although the main goal of the app is to achieve (almost) real-time reporting, it will also (potentially) need to compute data over different time ranges, including yearly ranges for instance - but in that case the real-time capabilities wouldn't be relevant, I guess: if you look at data over a year's span, you probably don't need it to be accurate down to the day (well, I guess), so this part of the app would probably only use the "very old" data (not the current day's data), whatever it is stored in...
Ashok_Ora wrote:
Query performance is dramatically improved by using indexes. On the other hand, indexing data during the insert operation is going to add some overhead to the insert - this will vary depending on how many fields you want to index (how many secondary indices you want to create). BDB automatically indexes the primary key. Generally, any approach that you consider for satisfying the reporting requirement will benefit from indexing the data.
Thanks for pointing that out! I did envisage using indexes, but my concern was (and you guessed it) the expected overhead they bring. At this stage (but I may be wrong; this is just a study in progress that will also need proper tests and benchmarking), I plan to favour write speed over everything else, to ensure that all the incoming data is indeed stored, even if it is quite tough to handle in its primary stored form.
I prefer to envisage (but again, it's not certain this is the right way to do it) very fast inserts, then possibly re-processing (sort of) the data later, and (maybe? certainly?) elsewhere, in order to have it more "query friendly" and efficient for the moderately complex queries behind legible reports/charts.
Ashok_Ora wrote:
Here are some alternatives to consider, for the reporting application:
- Move the data to another system like MongoDB or CouchDB as you suggest and run the queries there. The obvious cost is the movement of data and maintaining two different repositories. You can implement the data movement in the way I suggested earlier (close "old" and open "new" periodically).
This is pretty much "in line" with what I had in mind when posting my question here :).
I found out in several benchmarks (there are not a lot, but I did find some ^^) that BDB, amongst others, is optimized for bulk queries - say, retrieving a whole lot of data at once is faster than retrieving the same row n times. Is that right? I guess this is tightly related to the configuration and the server's performance...
The process would then feed data into a new "DB2" instance every 60 seconds, and "dump"/merge the previous one into another DB (BDB or else), which would grow until some defined limit.
Would the "old DB2" -> "main, current archive" step be a heavy/tricky process, in your view? Especially as the "archive" DB keeps growing and growing - what would be a decent limit to take into account? I guess that 1TB for 3 months of data would be a bit big, wouldn't it?
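(A minimal sketch of that rotation scheme, assuming the bsddb3 Python binding for BDB; the file naming and the rotate-on-write check are illustrative choices, not a BDB API.)

```python
# Minimal sketch of the 60-second "DB2" rotation, using the bsddb3 binding.
# The rotation scheme is the poster's design; file names are illustrative.
import time
from bsddb3 import db

def open_segment(seq):
    d = db.DB()
    d.open("db2_%06d.bdb" % seq, dbtype=db.DB_BTREE, flags=db.DB_CREATE)
    return d

seq, current = 0, open_segment(0)
last_rotation = time.time()

def log_event(key, value):
    global seq, current, last_rotation
    if time.time() - last_rotation >= 60:  # rotate roughly once per minute
        current.close()                    # the "previous DB2" is now frozen...
        # ...and can be dumped/merged into the archive store by another process
        seq += 1
        current = open_segment(seq)
        last_rotation = time.time()
    current.put(key, value)                # writes always hit the live segment

log_event(b"ts:0001", b'{"ref": 42, "event": "read"}')
```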
Ashok_Ora wrote:
- Use BDB's SQL API to insert and read data in DB1 and DB2. You should be able to run ad-hoc queries using SQL. After doing some experiments, you might decide to add a few indices to the system. This approach eliminates the need to move the data and maintain separate repositories. It's simpler.
I read a bit about it, and this is indeed a very interesting capability - especially as I know how to write decent SQL statements.
That would mean that DB2 could grow beyond just a 60-second span - but would that growth hurt the write throughput? I guess so... This will require proper tests, definitely.
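(Since BDB's SQL API is SQLite-compatible, the statements would look like this - sketched with Python's stock sqlite3 module purely for illustration; the events schema is hypothetical.)

```python
# Illustration of the kind of SQL the BDB SQL API accepts (it is
# SQLite-compatible). Sketched with Python's sqlite3; names are hypothetical.
import sqlite3

conn = sqlite3.connect("db2.db")
conn.execute("CREATE TABLE IF NOT EXISTS events (ts INTEGER, ref INTEGER, payload TEXT)")
conn.execute("CREATE INDEX IF NOT EXISTS idx_events_ref ON events (ref)")  # secondary index
conn.execute("INSERT INTO events VALUES (?, ?, ?)", (1, 42, '{"event": "read"}'))
conn.commit()

# Ad-hoc reporting query: events per reference over a time window
for row in conn.execute(
        "SELECT ref, COUNT(*) FROM events WHERE ts BETWEEN ? AND ? GROUP BY ref",
        (0, 3600)):
    print(row)
```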
Now, I plan for the "real" data (the meaningful part of it), except timestamps, to be stored in quite a "NoSQL" way (the term is "à la mode"...), say as JSON objects (or something close to it).
This is why I envisaged MongoDB, for instance, as the DB layer for the reporting part, as it can query directly into JSON, with a specific way to handle "indexes" too. But I'm no MongoDB expert in any way, so I'm not at all sure, again, that it is a good fit (just as I'm not sure right now what the proper, most efficient approach is at this stage).
Ashok_Ora wrote:
- Use the Oracle external table mechanism (overview and how-to: http://docs.oracle.com/cd/B28359_01/server.111/b28319/et_concepts.htm) to query the data from Oracle database. Again, you don't need to move the data. You won't be able to create indices on the external tables. If you do want to move data from the BDB repository into Oracle DB, you can run "insert into <oracle_table> select * from <external_table_in_DB2>;". As you know, Oracle database is an excellent database for all sorts of applications, including complex reporting applications.
This is VERY interesting. VERY.
And Oracle DB is, you're right, a very powerful and flexible database for every kind of process.
I'll look into the docs carefully; many thanks for pointing that out (again!) :)
I have not yet decided whether the final application will be free or open source, but this will eventually be a real question. Right now, I don't want to think about it, and just want to find the best technical solution(s) to achieve the best possible results.
And BDB and Oracle DB are very serious competitors, definitely ;)
Ashok_Ora wrote:
Hope this was helpful. Let me know your thoughts.
It definitely is very helpful! It makes things clearer and lets me get deeper into BDB (and Oracle as well, with your latest reply), and that's much appreciated. :)
As I said, my primary goal is to ensure the highest write throughput - I cannot miss any incoming data, as there is no (easy/efficient) way to re-request what would be lost and be sure it hadn't changed (the simple act of re-requesting would itself introduce data flaws, actually).
So everything else (including reporting, stats, etc.) IS secondary, as long as what comes in is always, reliably stored (almost) as soon as it comes in.
This is why, in this context, "real" real time is not really crucial, and it can be "1-minute-delayed" real time (it could even be "5-minute-delayed", actually, but let's be a bit demanding ^^).
Ashok_Ora wrote:
Just out of curiosity, can you tell us some additional details about your application?
Of course - I owe you a bit more detail, as you have helped me a lot in my research/study :)
The application is sort of a tracking service. It is primarily meant to serve the very specific needs of a client of mine: they have several applications that all use the same "contents". Those contents can be anything - text, HTML, images, whatever - and they need to know, almost in real time, which application (used by which external client/device) is requesting resources, which ones, from where, in which locale/area and language, etc.
Really a kind of "Google Analytics" thing (which I pointed out at the very beginning, but they need something more specific and, above all, they need to keep all the data with them, so GA is not a solution here).
So, as you can guess, this is pretty much... big. On paper, at least. Not sure if this will ever be implemented one day, to be honest with you, but I really want to do the technical study seriously and present the best options, so that they know where they plan to go.
As for me, I would definitely love it if this became reality; this is very interesting and exciting stuff. Especially as it requires seeing things as they are and not falling for the "NoSQL fashion" for the sake of being "cool". I don't want a cool application, I want an efficient one that fits the needs ;) What is very interesting here is that BDB is not new at all, yet it's one of the most serious identified players so far!
Ashok_Ora wrote:
Thanks and warm regards.
ashok
Many thanks again, Ashok!
I'll leave this question open, in order to keep posting as I progress (and, above all, to be able to keep getting your thoughts and rewarding comments and advice :) )
Cheers,
Jimshell

  • Need help to open and look for file by name

    Hi,
I need help opening a folder and looking for a file (.txt) in that directory by its name... The user will type a partial file name, and I need to find that file in the folder and delete it....
How can I look for the file by its name?
    Thx =)

Hi,
Sorry, let me explain again... I set the file names in the following pattern: Name_Serial_date_chanel.sxc.
The user will type the serial of the file he wants to delete...
I already figured out what I needed, guys... thanks for the help ^^
I used List Directory from the Advanced File I/O palette to list everything... the Name part is the same for all files... then I searched for Name_ concatenated with the typed Serial plus a wildcard. The pattern Serial* lists every serial matching the typed one; in my case only one will exist, because the serial is a counter. Then I pass the path to Delete, and it's done!
Thx ^^
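(For comparison, the same list-then-delete pattern in text code - a minimal Python sketch mirroring the LabVIEW approach; the folder path and file pattern are placeholders.)

```python
# Minimal text-code equivalent of the LabVIEW List Directory + Delete approach.
# Folder path and naming pattern are placeholders for the poster's real values.
import glob
import os

folder = "C:/data"                    # placeholder directory
serial = input("Serial to delete: ")  # the user types the serial
# Name is fixed, so match Name_<serial>* just like the LabVIEW pattern
for path in glob.glob(os.path.join(folder, "Name_%s*.sxc" % serial)):
    os.remove(path)
    print("Deleted", path)
```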

  • Do i need a separate appleID and password for itunes store, icloud and app store apart from my admin login and password?

After having my MBP upgraded to the new Mountain Lion by Apple tech service, I noticed that applications like iPhoto, GarageBand and iMovie are gone.
I researched online and was told that I had to access 'Purchased' via the App Store, where it asks me to log in with an Apple ID and password.
I used my usual admin login and password and it didn't work; it's the same with iTunes and iCloud.
Do I need to register a separate Apple ID and password for the iTunes Store, iCloud and App Store, apart from my admin login and password?

    Just use the same Apple ID and password that you used to access this forum - that's your Apple ID.
The user name and password on your computer have nothing to do with your Apple ID.
    Good luck,
    Clinton

  • Do we need both management Pack and ADP  for monitoring SOA suite 11g

    Hi,
Do we need both the Management Pack and ADP (OCAMM - Application Dependency and Performance) for monitoring SOA Suite 11g?
I was creating a monitoring template for SOA Composite and SOA Infrastructure and wanted to know if I need to install additional packs (management packs, ADP and middleware plugins) to get an effective template.

    A management pack generally refers to the set of pages in EM Grid/Cloud Control which provide functionality for a given set of target types. Since EM Grid Control 11g, the ADP functionality has been part of the Grid Control release and the pages licensed via the Management Pack Plus for SOA or the SOA Management Pack EE.
    To populate the ADP pages with data, additional steps must be performed in order to deploy an ADP manager and ADP agents. Doing this is optional, depending on whether or not you require the additional capability that those pages provide.
    ADP data, however, are collected and stored separately from the core GC/CC metrics and are not related to the monitoring template functionality.

  • What database should i use for a flash application which i need to record score and name for a scoreboard.

Hi. I wrote down a summary of my project so that you can understand and answer me more easily:
I have to build an application (a little Flash game) in which I record the number of clicks. I want the player to choose a nick at the beginning, and when he finishes one or all four target choices, I want the application to write his nick and final score to a database for display in a scoreboard (hall of fame). All entries should be sorted by score. If the same nick appears again, its final score will be modified in the scoreboard only if it is higher than the previous one.
I was thinking of creating a variable for the name chosen at the beginning and a variable for the score being recorded. When the player finishes the game, I want the application to write his data to the database and then display the scoreboard (let's say the top 10 players in the database, from highest to lowest score).
I have never done this before, so I am asking: what database should I use? Do I need a programmer to create one for me, or can Flash generate the database?
Is the variable approach the right way, or should I approach the problem by other means? And if a programmer creates a database in MySQL, is that good, or should he convert it to XML - or does Flash generate the XML?
Thank you.
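(A minimal sketch of the server-side piece this needs, using SQLite for illustration - a client-side Flash SWF can't host a shared database itself, so a server script would run something like this; all names are hypothetical.)

```python
# Minimal sketch of the scoreboard logic with SQLite: one row per nick,
# update only when the new score is higher, then read the top 10.
# Names (scores.db, nick, score) are illustrative.
import sqlite3

conn = sqlite3.connect("scores.db")
conn.execute("CREATE TABLE IF NOT EXISTS scores (nick TEXT PRIMARY KEY, score INTEGER)")

def submit(nick, score):
    row = conn.execute("SELECT score FROM scores WHERE nick = ?", (nick,)).fetchone()
    if row is None:
        conn.execute("INSERT INTO scores VALUES (?, ?)", (nick, score))
    elif score > row[0]:
        conn.execute("UPDATE scores SET score = ? WHERE nick = ?", (score, nick))
    conn.commit()

submit("player1", 120)
submit("player1", 90)   # ignored: lower than the stored 120
print(conn.execute("SELECT nick, score FROM scores "
                   "ORDER BY score DESC LIMIT 10").fetchall())
```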

I did aerial photography one time. What you have listed would not be on my short list of lenses to take at all.
Maybe you could supply a little more info on the 50-100mm? I am going to guess you have the first version of the Canon EF-S 75-300mm? It is a fairly slow lens, but if these two are the only possibilities, I'd choose it.
It and the XTi will work; as a matter of fact, I took two XTi's on my flying photo experience. But I had the fantastic EF 70-200mm. Another thing: there is NOT a lot of room on these aircraft.
    EOS 1Ds Mk III, EOS 1D Mk IV EF 50mm f1.2 L, EF 24-70mm f2.8 L,
    EF 70-200mm f2.8 L IS II, Sigma 120-300mm f2.8 EX APO
    Photoshop CS6, ACR 8.7, Lightroom 5.7

  • Need internal register info and map for PCI-6036E

I'm going to be using NI PCI-6036E data acquisition cards with a hard real-time extension package for Windows. As a result, we will have to write a driver for the 6036E cards to access them in real time. Consequently, I need a map of, and information on, the internal registers (e.g. a programming model), plus the PCI vendor ID and device ID.

    I would definitely recommend that you download our Measurement Hardware Driver Development Kit (DDK). This is a free download from our website, and can be found at www.ni.com or at the following direct link.
    NI Measurement Hardware DDK (Driver Development Kit)
http://sine.ni.com/apps/we/nioc.vp?cid=11737&lang=US
    This kit provides development tools and a register-level programming interface for NI data acquisition hardware. This works with E Series devices, including the 6036E.
For questions specific to the DDK, please use the discussion forum category "Driver Development Kit (DDK)."
    Best Regards,
    Justin Britten
    Applications Engineer
    National Instruments

  • I need more midi in and outs for logic 8!

I did a Google search but couldn't find anything.
I am looking to expand the MIDI ins and outs when using Logic 8. I have a MacBook Pro hooked up to an RME Fireface 800 and I need more MIDI ins and outs.
I've heard about these Unitor devices, but will they work with Logic 8? I hear there are different versions of them as well. I will need one that can sync to SMPTE.
Can you buy these new in stores? What kind of devices have you used to expand your MIDI ins/outs in Logic 8?

Sadly, it seems that the MIDI Express XT was [discontinued|http://www.musiciansfriend.com/product/MOTU-Midi-Express-XT-USB?sku=706423&src=3SOSWXXA]
Here's the MIDI Express 128 8x8 from [Amazon|http://www.amazon.com/MOTU-MIDI-Express-128-Interface/dp/B0002J1PNK]
I would just go to Sam Ash or Guitar Center and do some 'MIDI shopping'.
What you can also do, if you'd just like to extend your existing MIDI, is look for older (pre-USB) MIDI interfaces on eBay; they are really cheap. You won't need USB, since you are hooking through the MIDI out/thru (from your USB MIDI interface) to the 'IN' of the 'expansion' MIDI device.

  • Need to generate *.ecs and *.xsd for EDI 841 data

    Hi,
I have EDI 841 data and want to generate *.ecs and *.xsd files for it using the B2B Document Editor. Please help me generate the *.ecs and *.xsd for EDI 841. Thanks in advance.

    Hello,
I have generated the ecs and xsd for the EDI 841 Specification/Technical Information; send me a test email and I will send them to you.
Alternatively, you can generate the same from the Document Editor as well.
Rgds, Ramesh
