Compressed vs uncompressed RAW: longer to read / decode?

I'm trying to manage the files on my CF cards (Nikon D3 / D3X) and on my Macs as efficiently as possible.
But mostly I'm chasing speed, since I'm sometimes on very tight deadlines.
I'm shooting only RAW, and some time ago I chose to shoot "Compressed / Lossless RAW". It saves space on the CF cards, but does it take longer to read from Aperture? Does Ap3 need to decompress the files, which would take more time and consume more energy? Or is it "lighter", in terms of calculation, to simply read an uncompressed RAW?
I haven't taken the time to compare yet; does anyone out there know anything about this?
Thanks !
Bernard-Pierre

Sorry, I forget the specifics, but I did find uncompressed was preferable (Nikon D2x). I found that the key was fast CF cards and fast connectivity for uploading to a folder on the MacBook Pro's SSD. Decreased CF costs have, IMO, made a compression step to save card capacity irrelevant.
I have a 2011 17" MBP, so I use the EC/34 slot, which is fast and convenient. EC/34 rocks for input when paired with fast UDMA camera cards. My recent coarse tests of the 2011 17" MBP's EC/34 slot, using a $40 SanDisk Extreme Pro ExpressCard adapter for CF cards from Amazon:
    -SanDisk Extreme III CF card, SanDisk EC/34 adapter = ~10 MB/sec
    -SanDisk Extreme IV CF card, SanDisk EC/34 adapter = ~37 MB/sec
    -SanDisk Extreme IV CF card, SanDisk EC/34 adapter = ~36 MB/sec
    -SanDisk Extreme Pro CF card (UDMA 6), SanDisk EC/34 adapter = ~80 MB/sec
I prefer SanDisk Extreme cards because, unlike most other cards, they are rated for the temperature extremes that I sometimes shoot in.
With fast CF cards, upload speeds via the EC/34 slot are sweet; fast enough to literally change workflows. For comparison, a Sony USB card reader's fastest upload was ~12 MB/sec. My emphasis to date has been on uploading images, but the cheap prices of slower CF cards are making me look at using the EC/34 slot for backup in the field as well.
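For anyone who wants to reproduce these coarse numbers, here is a minimal sketch of a sequential-read throughput test (the commented-out path is a made-up example; point it at a large file on the mounted card, and note the OS file cache can inflate repeat runs):

```python
import time

def read_throughput_mb_s(path, chunk_size=4 * 1024 * 1024):
    """Read a file sequentially in 4 MB chunks and return throughput in MB/sec."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return (total / (1024 * 1024)) / elapsed

# Hypothetical example (first run after mounting, to avoid cache effects):
# print(f"{read_throughput_mb_s('/Volumes/NIKON_D3/DCIM/100ND3/_DSC0001.NEF'):.1f} MB/sec")
```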
EC/34 SD adapters are also available, and presumably Thunderbolt adapters allowing the same fast I/O will become available for all 2011 Macs, but that has not happened yet.
Unlike the D2x, the newer D3 may actually perform better using UDMA 6 cards.
HTH
-Allen

Similar Messages

  • Add option for lossless uncompressed RAW files for A7r

    I'd like to see the option for lossless, uncompressed RAW files for those of us who want it vs. the current situation of only having the option of lossy compressed RAW files (which in some instances, can exhibit posterization). I think this is a logical option for a camera of the A7r's calibre. If this option can be incorporated via firmware update, all the better. If it's a hardware issue, hopefully this is addressed in the next iteration of the camera.

    Honestly, this is pretty deceitful marketing in both the Sony A7 and A7R literature. Both promote 14-bit RAW encoding as a feature of the two Sony cameras; however, the files are really an 11-bit + 7-bit lossy compressed RAW format, delivering nowhere close to the tonality of true 14-bit RAW. In none of the Sony literature that I could find is it mentioned that the format is lossy. Is this accurate? Not even close: "14-bit RAW output for rich tonal gradation. 14-bit RAW image data of extremely high quality is outputted by the a7. This data fully preserves the rich detail generated by the image sensor during the 14-bit A/D conversion process. When developed with Sony's Image Data Converter RAW development software, these images deliver the superb photographic expression and rich gradation that only 14-bit data can offer." I was going to purchase two A7R bodies and move my Canon gear over to Sony... until all this came out. I'm hoping, as is the OP, that a firmware upgrade comes out.

  • GeoRaster performance: Compressed vs Uncompressed

    I tried reading compressed and uncompressed GeoRasters. The difference in performance confused me. I expected better performance from the compressed raster, because Oracle needs to read several times less data from the hard drive (1:5 in my case). However, reading the uncompressed data is approximately twice as fast. I understand Oracle needs to use more CPU to uncompress the data, but I thought the time saved reading data would outweigh the time spent uncompressing the raster.
    Did anybody compare the performance?
    Thanks,
    Dmitry.

    Dmitry,
    You can try for yourself. QGIS is a free-open-source-software.
    QGIS uses GDAL to access raster and vector data and there is a plugin called "Oracle Spatial GeoRaster", or just oracle-raster, to deal with GeoRaster. To access Geometries you don't need to activate the plugin, just select Oracle as your database "type" in the Add Vector Layer dialog box.
    Displaying GeoRaster works pretty fast, as long as you have created pyramids. Yes, there is a little delay when the GeoRaster is compressed, but that is because GDAL requests the data to be uncompressed, and QGIS has no clue about it.
    Wouldn't it be nice to have a viewer that used the JPEG as it is?
    Regards,
    Ivan
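    The IO-versus-CPU trade-off Dmitry observed can be sketched in miniature with zlib (a toy model only, not GeoRaster itself): compression wins only when the bytes saved on reading outweigh the per-read decompression cost.

```python
import time
import zlib

# Toy data: repetitive enough to compress well (real raster ratios vary).
raw = b"geodata-" * 2_000_000          # ~16 MB
packed = zlib.compress(raw, level=6)

t0 = time.perf_counter()
unpacked = zlib.decompress(packed)     # CPU cost paid on every read
cpu_cost = time.perf_counter() - t0

ratio = len(raw) / len(packed)
print(f"ratio {ratio:.1f}:1, decompress {cpu_cost * 1000:.0f} ms")
# If the disk can deliver the extra (raw - packed) bytes faster than this
# decompress time, then uncompressed storage reads faster overall, which
# matches the observation above.
```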

  • Compress and uncompress data size

    Hi,
    I have checked total used size from dba_segments, but I need to check compress and uncompress data size separately.
    DB is 10.2.0.4.

    I have checked total used size from dba_segments, but I need to check compress and uncompress data size separately.
    DB is 10.2.0.4.
    Unless you have actually performed BULK inserts of data, NONE of your data is compressed.
    You haven't posted ANYTHING that suggests that ANY of your data might be compressed. For 10g, compression will only be performed on NEW data, and ONLY when that data is inserted using BULK INSERTS. See this white paper:
    http://www.oracle.com/technetwork/database/options/partitioning/twp-data-compression-10gr2-0505-128172.pdf
    "However, data which is modified without using bulk insertion or bulk loading techniques will not be compressed."
    1. Who compressed the data?
    2. How was it compressed?
    3. Have you actually performed any BULK INSERTS of data?
    SB already gave you the answer: if data is currently 'uncompressed' it will NOT have a 'compressed size', and if data is currently 'compressed' it will NOT have an 'uncompressed size'.
    Now our management wants to know what the compressed data size is and what the uncompressed data size is?
    1. Did that 'management' approve the use of compression?
    2. Did 'management' review the tests that were performed BEFORE compression was done? Those tests would have reported the expected compression and any expected DML performance changes that compression might cause.
    The time for testing the possible benefits of compression is BEFORE you actually implement it. Shame on management if they did not do that testing already.
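    On 10.2 you can at least see which tables are flagged for compression from the dictionary (a sketch; this shows the table-level attribute only, not how many existing blocks are actually compressed, and the schema name is a made-up example):

```sql
SELECT owner, table_name, compression
FROM   dba_tables
WHERE  owner = 'SCOTT'              -- hypothetical schema
AND    compression = 'ENABLED';
```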

  • Datatypes: RAW, LONG, LONG RAW

    Hi
    What is the difference between the datatypes RAW, LONG RAW, and LONG?
    Could you give some examples too.
    Thanks
    R.

    I know that someone will come along and correct this. LONG is structured and can be used to save text entered by a person in a text field. LONG RAW is unstructured and generally holds binary data such as images, documents, video, and sound. The max length is in the area of 2 GB.

  • Initramfs - compressed vs. uncompressed

    I just recently came to think about this. It's common practice to compress the initramfs, but an uncompressed initramfs is also an option.
    But what about the pros and cons? How time-consuming is the decompression actually, and what does the extra size of an uncompressed image mean?
    Personally I've got an SSD, so that helps with the extra size, and as for the decompression, I am using a rather limited Atom CPU, so in theory it seems to work out for me at least.
    So, what do you think is, theoretically, the best way to go? In reality of course the difference is negligible, maybe not even measurable, but that's not really the point.
    I'll look forward to your answers.
    Best regards.
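    For reference, on Arch the choice is a one-line setting in /etc/mkinitcpio.conf; COMPRESSION="cat" produces an uncompressed image (a sketch of the stock options, so check the options your mkinitcpio version supports):

```shell
# /etc/mkinitcpio.conf
#COMPRESSION="gzip"   # compressed image (a common default)
COMPRESSION="cat"     # no compression at all
# then rebuild the image:
# mkinitcpio -p linux
```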

    blackout23 wrote:
    zacariaz wrote:
    blackout23 wrote:
    I have done 5 runs of uncompressed and gzip initramfs.
    CPU is Core i7 2600K 4,5 Ghz
    archbox% sudo systemd-analyze
    Startup finished in 1328ms (kernel) + 311ms (userspace) = 1640ms
    archbox% sudo systemd-analyze
    Startup finished in 1338ms (kernel) + 277ms (userspace) = 1616ms
    archbox% sudo systemd-analyze
    Startup finished in 1305ms (kernel) + 274ms (userspace) = 1580ms
    archbox% sudo systemd-analyze
    Startup finished in 1305ms (kernel) + 304ms (userspace) = 1610ms
    archbox% sudo systemd-analyze
    Startup finished in 1302ms (kernel) + 287ms (userspace) = 1590ms
    Gzip:
    archbox% sudo systemd-analyze
    Startup finished in 1375ms (kernel) + 347ms (userspace) = 1723ms
    archbox% sudo systemd-analyze
    Startup finished in 1375ms (kernel) + 331ms (userspace) = 1706ms
    archbox% sudo systemd-analyze
    Startup finished in 1368ms (kernel) + 351ms (userspace) = 1720ms
    archbox% sudo systemd-analyze
    Startup finished in 1385ms (kernel) + 340ms (userspace) = 1725ms
    archbox% sudo systemd-analyze
    Startup finished in 1402ms (kernel) + 351ms (userspace) = 1753ms
    Even on a faster CPU you can measure a difference between compressed and uncompressed.
    If you add the timestamp hook to your HOOKS array in /etc/mkinitcpio.conf and rebuild your initramfs, systemd-analyze will also be able to show you how much time was spent in the initramfs alone. Without it, the kernel figure is basically both combined.
    I dare say the difference is smaller than on a pitiful Atom CPU; still, it's interesting that there is a difference at all.
    I am, however, done timing the initramfs. I know now which is best for me.
    One thing that catches my eye is the userspace time. The kernel time is not too different between our two setups, but the userspace time apparently is, very much so. Is that simply because the CPU plays a greater role, or do you utilize some fancy optimization technique I haven't heard of?
    Try systemd-analyze blame and see what takes the most time. If it's NetworkManager try setting ipv6 to "Ignore" in your settings and assign a static ip.
    Other than that I don't run any extra services and mount only my SSD no external drives.
    Good point, I always forget about ipv6. Isn't there a kernel param to disable it, btw?
    Edit:
    WICD is responsible for 922ms
    Last edited by zacariaz (2012-09-03 20:07:04)

  • Migration from RAW LONG to BFILE

    As part of our next release, we are looking at modifying the way we currently store image data. The current table has a LONG RAW column, and I want to move to externally stored BLOBs. The new table will be in a different schema.
    I believe I can convert the LONG RAW column to BLOB, but that doesn't solve the external-storage problem.
    At a high level, it seems like I will need to do the following:
    - Go through the current rows and create an external image file
    - Reload all of the images into the new table, with a BFILE column pointing to the new image location
    Does anyone have any recommendations as to the steps I may need to perform?
    _mike
    Edited by: nibeck on Apr 22, 2009 2:49 PM

    take a look at this doc...
    http://www.stanford.edu/dept/itss/docs/oracle/10g/win.101/b10118/o4o00288.htm
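    For the LONG RAW-to-BLOB half of the job, two standard routes exist (sketches; the table and column names are made up): an in-place ALTER, or a copy using TO_LOB. The external-file/BFILE step still has to be scripted separately (e.g. writing each BLOB out and re-pointing a BFILE column at the result).

```sql
-- In-place conversion (the column type changes from LONG RAW to BLOB):
ALTER TABLE images MODIFY (img_data BLOB);

-- Or copy into the new schema's table, converting on the way:
CREATE TABLE new_schema.images_new AS
SELECT image_id, TO_LOB(img_data) AS img_data
FROM   old_schema.images;
```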

  • Camera raw does not read D7100 nef files [was:Ernied]

    Camera Raw does not read D7100 NEF files. Did the update... still no success.

    I suspect that the DNG Converter will work for you if you use it properly. It isn't your fault, but Adobe has created kind of a silly interface in the DNG Converter. Notice that it is asking you for the folder. When you are browsing for a location within the DNG Converter, choose the folder, but don't open the folder. If you do, then the DNG Converter will tell you there are no images to convert. If you choose the folder, then the DNG Converter will convert all the files within that folder.

  • After upgrading to iOS 7 I can no longer read or download iCreate magazine? Eternal "updating database", then it quits! Aaaugh

    After upgrading to iOS 7 I can no longer read or download iCreate magazine? Eternal "updating database", then it quits! Aaaugh...
    Other mags work just fine though?


  • ORA-12815 while reorg/compression of tables without LONG and LOB with 11g

    Hello fellows,
    I am in the luxury situation that I got a copy of our production R/3 environment that was left over from a project and is no longer required by any of our developers.
    As we are still on Oracle 9.2.0.7, I upgraded this copy to 11.2 in a two-step process (from 9i to 10g to 11g).
    I got myself the SAP dbatools 7.20(3) and Note 1431296 - LOB conversion and table compression with BRSPACE 7.20.
    I started with some small tablespaces, but after a while I thought I'd like to try to reorg/compress the worst of all tablespaces... PSAPPOOLD, with ~15,000 tables.
    I first converted the tables with LONG fields that can be compressed online, then the ones that cannot be compressed, then I reorged the tables that contain old LOB fields online. With these different executions of the brspace commands that are also mentioned in the above note, I managed to move ~3,000 tables without any issues.
    But now I have started on the biggest bunch of tables: the online compression of tables without LONG and LOB fields.
    This is the command I used:
    brspace -u / -p reorgEXCL.tab -f tbreorg -a reorg -o sapr3 -s PSAPPOOLD -t allsel -n psapreorg -i psapreorgi -c ctab -SCT
    ...after a few checks that are performed by brspace, I end up in the screen "Options for reorganization of tables" (which is still nothing I wouldn't have expected):
    1 * Reorganization action (action) ............ [reorg]
    2 - Reorganization mode (mode) ................ [online]
    3 - Create DDL statements (ddl) ............... [yes]
    4 ~ New destination tablespace (newts) ........ [PSAPREORG]
    5 ~ Separate index tablespace (indts) ......... [PSAPREORGI]
    6 - Parallel threads (parallel) ............... [1]
    7 ~ Table/index parallel degree (degree) ...... []
    8 ~ Category of initial extent size (initial) . []
    9 ~ Sort by fields of index (sortind) ......... []
    10 # Index for IOT conversion (iotind) ......... [FIRST]
    11 - Compression action (compress) ............. [none]
    12 # LOB compression degree (lobcompr) ......... [medium]
    13 # Index compression method (indcompr) ....... [ora_proc]
    But independent of what I enter for points 6 and 7, I always end up with the errors below during the reorg/compression of the outstanding tables.
    Just one sample, but the issue is always the same:
    BR0301E SQL error -12815 in thread 2 at location tab_onl_reorg-26, SQL statement:
    'CREATE UNIQUE INDEX "SAPR3"."RTXTF_____0#$" ON "SAPR3"."RTXTF#$" ("MANDT", "APPLCLASS", "TEXT_NAME", "TEXT_TYPE", "FROM_LINE",
    "FROM_POS")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 1662976 NEXT 655360 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "PSAPREORGI" PARALLEL ( INSTANCES 0) '
    ORA-12815: value for INSTANCES must be greater than 0
    Just in case, here is the object DDL:
    CREATE UNIQUE INDEX "SAPR3"."RTXTF_____0"
        ON "SAPR3"."RTXTF"  ("MANDT", "APPLCLASS", "TEXT_NAME",
        "TEXT_TYPE", "FROM_LINE", "FROM_POS")
        TABLESPACE "PSAPPOOLI" PCTFREE 10 INITRANS 2 MAXTRANS 255
        STORAGE ( INITIAL 1624K NEXT 640K MINEXTENTS 1 MAXEXTENTS
        2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1)
        LOGGING
    Perhaps someone has already gained some experience with compression via brspace and can give me a hint.
    Many thanks
    Florian

    Hello Florian,
    > Perhaps someone has already gained some experience with compression via brspace and can give me a hint.
    I have not performed any compression operations on Oracle 11g R2 with brspace as of yet... but this error seems to be very obvious.
    It seems like SAP is still not using the procedure DBMS_REDEFINITION.COPY_TABLE_DEPENDENT to create the indexes (and NOT NULL constraints) on Oracle 11g R2. No idea why; I can only think of one reason (creating a DDL file before the reorganisation so that the DDL parameters can be changed for the reorganisation in some way).
    So in your case it seems like SAP is generating a wrong SQL statement for creating the index on the interim table.
    You can try to create the DDL file first and correct the parameters, and after that you can try to run the reorganisation again.
    Please check SAP note #646681 (remark 5) for more information about the procedure of creating the DDL first and then doing the reorg with the edited parameters.
    Regards
    Stefan

  • Sorting compressed from uncompressed PSD/PSB files in Bridge/smart collections?

    Is there any info (metadata or something) one could use to sort uncompressed from compressed PSD/PSB files in Bridge?
    Something that could be used in filters for smart collections would be nice.
    Ty in advance.

    Thanks for the quick reply, CP
    FYI, the file is a photographic panorama based on several .CR2 photos taken in July 2012 and processed in CS5's Camera Raw/Photoshop. Beyond the metadata, there are no fonts or text in the file.
    1.  I changed Maximize...File Compatibility from "Never" to "Always", and the flattened file saved and loaded OK, though I can't figure out why, given Adobe's description of this option: "If you work with PSD and PSB files in older versions of Photoshop or applications that don't support layers, you can add a flattened version of the image to the saved file...". The file saved in this manner is ~8% larger, and this is not my preference as a routine, since I've TBs of photo files. By the way, out of thousands of .psd/.psb files, the one being discussed is the first to give me the error.
    2.  I've tried saving/reloading the file on another computer with Win XP SP3/CS5... again with Maximize Compatibility OFF. Same problem. Turn ON Maximize Compatibility and it works. As with the Win 8 machine, Bridge could neither create a thumbnail nor read the metadata.
    3.  I've purged the Bridge cache, made sure there were no old PS page files, reset PS preferences, and disabled my 2 plugins, with no change in the problem. Windows and the Adobe progs are all updated. Saving to a different HDD has no effect.
    The problem occurs on 2 computers (Win XP/Win 8) and two versions of Photoshop (CS5/CS6), and "Maximize" On/Off toggles the problem. My guess is that it is a bug in the PSD processing routines.
    If I can't find a fix, I will keep Maximize ON and deal with the larger files that result... the cost of insurance, I suppose.
    Thanks again

  • Oracle RAW / Long / LOBs - Buffer cache?

    Hello guys,
    I think I read something some time ago about LOBs not being cached in the buffer cache? Is this right?
    I also think I can remember that RAW or LONG are stored in the SGA buffer cache?
    I cannot find any official documentation about that topic... maybe you can help me...
    Thanks and Regards
    Stefan

    Hi,
    It depends on whether you are talking about temporary LOBs or LOBs that are persisted as a column. If it's a column, then you have the CACHE / CACHE READS / NOCACHE option, which is part of the LOB storage clause. If it's NOCACHE it's not put into the buffer cache, and if it's CACHE it is (and I'm sure you can figure out from this what CACHE READS means).
    If it's a temporary LOB, then it exists in the temporary tablespace, which, if it is a true temp tablespace (which it should be), will be accessed using direct IO (so not buffered).
    And the official documentation is easily found; Oracle wrote a whole manual dedicated to LOBs...
    http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14249/toc.htm
    HTH
    Chris
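    The storage clause Chris describes looks like this (a sketch; the table and column names are invented):

```sql
CREATE TABLE docs (
  id    NUMBER PRIMARY KEY,
  body  BLOB
)
LOB (body) STORE AS (
  CACHE READS        -- buffered on reads, direct-path on writes;
                     -- the alternatives are CACHE and NOCACHE
);
```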

  • Unable to load Compressed Tiffs (uncompressed works fine)

    Receiving Data Cartridge error when trying to load compressed Tiff's. Uncompressed Tiff's and PDF work fine. Any suggestions? Thanks.
    DOC.IMPORT(CTX) method on ORDSYS.ORDIMAGE results in:
    ERROR at line 1:
    ORA-29400: data cartridge error
    IMG-00704: unable to read image data
    ORA-06512: at "ORDSYS.ORDIMG_PKG", line 419
    ORA-06512: at "ORDSYS.ORDIMAGE", line 65
    ORA-06512: at "ORDSYS.ORDIMG_PKG", line 506
    ORA-06512: at "ORDSYS.ORDIMAGE", line 209
    ORA-06512: at "DSDBA.LOAD_ORDIMAGES", line 52
    ORA-06512: at line 1
    Oracle 8.1.7

    We support TIFF G3 and G4. It is possible that your image is not a valid TIFF file. We need to take a look at the actual TIFF file you are having problems with. I will get in touch with you later to ask you to send it to us.

  • Compression taking way too long...?

    Hello... OK, I am exporting a 45-minute project out of FCP using Compressor, and for some reason it's giving me crazy times like 6 hours to complete. I've done other projects this long, if not longer, and have not had this problem. Any ideas? Also, last night I was trying to export a short 30-second clip that had motion in it; it told me an hour to complete, then it froze. Then I restarted the Apple, went back to FCP, hit export again, and it did it in less than 20 minutes. Any ideas as to what is going on with my machine?
    Thanks

    I just tried to compress a 27-minute video - Compressor wanted to take 35 hours to compress it. After searching a few discussions I found the answer:
    Export the sequence as a QuickTime movie, current settings, not self-contained. Quit Final Cut and take the QuickTime movie into Compressor. 35 hours turned into 18 minutes. No quality loss.

  • Problem: DVD-RAM can no longer be read on MBP C2D

    The "MATSHITA DVD-R UJ-857D" in my MacBook Pro C2D is said to have read (but not write) support for DVD-RAM media.
    During the first few weeks of its lifetime, reading DVD-RAMs definitely worked. But now it no longer does, no matter what brand of media I try.
    All other types of optical media continue to work as advertised (read and write).
    I wonder whether this is a physical defect (but then why doesn't it affect other types of media?) or an issue caused by recent software updates (but when I boot from an external DVD drive using the original Mac OS DVD with no updates at all, why do I still not get read support on the internal drive?).
    AFAIK there has not been any recent firmware update for this model. So what can be the reason? Should I let Apple swap the drive? Has anyone else observed the same issue?
    Guido
    MBP C2D   Mac OS X (10.4.8)   MATSHITA DVD-R UJ-857D

    Hey Grant,
    thought you'd like to know my "cleaning the lens with a cloth" worked fine - uh... but only for about three days, then it was back to the same old, same old.
    I'd originally slid a business card covered with a lens cleaning cloth (from my camera bag) to clean the lens, and although the cloth emerged mostly pristine, I figured it must've done the trick as the drive started working again.
    So this time, I decided to go in deeper. I actually took the whole drive apart - case off, drive out, drive dismantled - and I had a long look around to see if any parts looked dirty or damaged...
    I found just one thing - the small silver rod on which the read/write head travels (the servomechanism that moves up and down the disc radius as it reads/writes) had some 'oily gunk' at the point where the travelling-head's 'sleeve' met it -  when I tried to move the head manually up and down the rod, it sort of staggered along. So I cleaned off the gunk with a cotton bud and then slid the drive-head up and down a few times 'till it seemed to run freely.
    Put everything back - and fired up my Mac Pro. And the drive has been 100% fine since then - two whole weeks!
    Maybe this time I've cracked it? (for my drive at least)
    Roger

Maybe you are looking for

  • I get an error message when trying to connect my iphone to a lenovo thinkpad

    I cannot get my iPhone to work on the Lenovo ThinkPad Edge 520; when I connect the phone it brings up an error saying there was a problem loading the device. In the Control Panel I get a yellow exclamation mark next to the mobile USB device. I tried t

  • How to clear the Payment Document when posting cashed checks.

    AIM: To clear the payment document when the check is cashed. I am creating a payment document (doc type ZP) using FB01. Then I am creating a check against this payment using FCH5. Once the check is created, I am posting the cashed check using an FCKR upload. FC

  • Mailbox full- emptying doesn't work?

    My friends and family have alerted me that when they try to send me an email, they receive an alert email in response saying that my mailbox is full. Also, when I open my Mail program and the little loading circle flashes next to the Inbox icon, an e

  • Bracketed expression calculator logical error

    Hi, I've been working on a calculator which can handle expressions such as 5 + 6 * 2, which results in 17 (so far it only works well for this expression); (5 + 4) * (4 - 2), which results in 18; and (4 + (2 * (2 - 1))) * 1.2, which results in 4.8. Given an input of an

  • File exists/Modified file

    I was hoping someone could point me in the right direction. I need to first test if the file exists, and if it exists, then test to see when it was last modified. If the file was modified within a certain time frame, then that calls the function. if