Working in Linear color space confusion

When comping in Nuke we usually use Rec. 709 footage and comp in linear color space. We can then preview in sRGB, Rec. 709 or whatever other color space to check what it will look like.
In After Effects, how do we achieve the same?
You would think that:
1/ Interpret the footage as Rec. 709. No problem here.
2/ Set the project to a linear color space. But this is where the weird stuff happens, since you have to select a color space other than linear and THEN tick linear compositing. Isn't that the same as saying you want to composite in two different color spaces at the same time? That is confusing, so what is the right way of working here to composite in linear color space and get After Effects to act the way Nuke does?
3/ Then Simulate Output -> Rec. 709
Thank you in advance.

Actually, it seems AE doesn't really do it correctly without specifying a color space and ticking "linearize working space", as you can see here (these two images are just a 3D render where I've added the diffuse, reflection, refraction and indirect passes together): http://i1052.photobucket.com/albums/s443/lostparanoia/AE_LWF_error.jpg
Also, the somewhat-correct image is not 100% correct either, because of (I assume) the color space adjustments that After Effects applies.
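For what it's worth, the mismatch in the linked image is what you'd expect when render passes are summed on gamma-encoded values instead of linear light: the encoding is non-linear, so encode(a) + encode(b) ≠ encode(a + b). A minimal sketch (pure 2.2 gamma as a stand-in for a display curve, hypothetical pixel values, not AE code):

```python
# Illustration: why render passes must be summed on LINEAR values.
# A pure 2.2 gamma stands in for the display transfer curve.

def encode(v, gamma=2.2):
    """Linear light -> gamma-encoded display value."""
    return v ** (1.0 / gamma)

diffuse, reflection = 0.18, 0.05   # linear-light pass values for one pixel

correct = encode(diffuse + reflection)          # sum in linear, then encode
naive   = encode(diffuse) + encode(reflection)  # sum the encoded values

print(round(correct, 3))  # ≈ 0.51
print(round(naive, 3))    # ≈ 0.71 -> visibly brighter, the "wrong" image
```

A non-linearized working space effectively does the "naive" sum, which is why the two versions of the render differ.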
Another issue when working in linearized sRGB IEC61966-2.1 space is that I can no longer get the correct colors for my solids.
Let's say I need to color pick some company's brand color to use as a background in my comp. When I color pick it, it comes in at completely the wrong gamma, and I can't figure out any way to correct it so that it becomes the right colour. I've tried applying a Color Profile Converter, and I've tried an inverse 2.2 gamma (0.455). Nothing seems to work. It's just a completely different colour.
So my question is, is there still no way of using a proper linear compositing workflow in AE? And how do you work with solids and the colour picker in sRGB (linear)?
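One likely reason an inverse-2.2 gamma never lands on the right colour is that sRGB is not a pure 2.2 power curve; it is piecewise, with a linear toe and a 2.4-exponent segment. A small sketch (hypothetical channel value, not AE code) of the exact transfer function:

```python
# Converting a display-referred sRGB channel value to linear light and back.
# Note the piecewise sRGB curve is close to, but NOT equal to, a 2.2 gamma.

def srgb_to_linear(s):
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

def linear_to_srgb(lin):
    return 12.92 * lin if lin <= 0.0031308 else 1.055 * lin ** (1 / 2.4) - 0.055

brand = 0.5  # hypothetical channel value (128/255) of a picked brand colour

lin = srgb_to_linear(brand)
print(round(lin, 4))                  # ≈ 0.2140  (value to use in a linearized comp)
print(round(linear_to_srgb(lin), 4))  # 0.5       (round-trips exactly)
print(round(brand ** 2.2, 4))         # ≈ 0.2176  (pure 2.2 gamma: close, not equal)
```

So in a linearized working space, a solid set to the picked display value would need the exact piecewise conversion, not a 0.455 gamma adjustment, to display as the intended brand colour.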

Similar Messages

  • Linear Color Space Issue

I've run into an interesting issue using 16-bit linear EXRs in AE.
All of my 3D footage is rendered in 16-bit linear space, and I am rendering to sRGB IEC61966-2.1 in AE.
I have various color corrections in the comp. When I export a frame from the comp, and then reimport that frame into the same comp, it is brighter than the comp it was exported from.
I've exported to both PNG and OpenEXR with the same result.
In PShop, I've confirmed that the exported PNG and EXR are identical in color (after reducing the EXR to 8-bit). I also know that the color corrections have been applied in the saved frames, because I screen captured the AE screen and pasted it on top of the saved frames in PShop - no color shift.
My guess is that the re-imported frames are not in the correct colorspace, but I've tried a few settings using 'interpret footage' with no luck.
    Any ideas?

    Never mind.
    I had an exposure effect on the layer...

  • Is there a way to assign Color Spaces in AME (Adobe Media Encoder) CC?

    I am trying to output h.264 video for a web project and cannot seem to get sRGB color match when rendering out from AE.
    I see it in AE's native renderer, but not in AME.
    Thanks.

    AME (and Premiere Pro) does not support color management in the way that After Effects does. Via Dynamic Link, which is how AME reads After Effects comps, the color-space-adjusted pixels are not corrected for screen display.
To get the results you want, add an adjustment layer to the top of the layer stack in the comp and apply the Color Profile Converter effect. Set the Output Profile to Rec.709 (sRGB is practically identical and will also work, but Dynamic Link uses Rec.709 internally, so it is a better match). This forces After Effects to transform the adjusted pixels into a non-linearized color space that looks correct.
Note that while the CPC effect is active and View > Display Color Management is enabled (it is enabled by default), this extra layer of color transforms will make the comp appear incorrect in After Effects, while at the same time the comp will now look correct in AME or Premiere Pro. Disable Display Color Management to make the appearance of the comp in After Effects match what you see in AME or Premiere Pro. While working on the comp, however, you probably want to work with Display Color Management enabled and the adjustment layer disabled.
    Under the hood, when color management is enabled in After Effects, the pixels it writes into the cache include the appropriate color transforms for the settings you have chosen. When the comp is displayed in the Composition panel in After Effects, an additional transform is added to the screen buffer pixels (not the pixels in the cache) to make it look correct on your computer screen, or not if you have disabled Display Color Management. When the pixels are read through Dynamic Link, no display color management happens, nor does AME or Premiere Pro apply any, so you get the same appearance as having Display Color Management disabled in After Effects.
    Make sense?

A Mercury linear color dilemma... bad transitions vs. bad animation

    Am I missing something here? I'm writing this as sort of a PSA... if you aren't aware of all sides of this issue, I hope you read the second half because you may not realize it's affecting you, too.
    When I enable CUDA or Maximum Render Quality, I basically can't use any one-sided transitions. They look wrong and "pop" at the ends.
    Apparently, Adobe forgot to rewrite any of their transitions for a linear color space, so Mercury+CUDA basically "breaks" all the transitions that ship with CS5, causing them to render differently than they do in their originally intended gamma of 1.8. I've heard they've added a linear-compatible transition into CS5.5 called "Film dissolve", but there aren't any in CS5. This seems like a pretty huge oversight... I'm surprised they still haven't fixed it in a patch to the original CS5.
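For anyone wondering what that "pop" looks like numerically, here is a sketch of the midpoint of a 50% cross-dissolve computed both ways (a pure 2.2 gamma stands in for the transfer curve; the pixel values are hypothetical):

```python
# Why a dissolve renders differently in linear vs gamma space.

def encode(v, g=2.2): return v ** (1 / g)   # linear -> display value
def decode(v, g=2.2): return v ** g         # display value -> linear

a_disp, b_disp, t = 0.2, 0.9, 0.5   # display-referred pixels, mid-dissolve

# Legacy behaviour: mix the encoded values directly.
gamma_mix = (1 - t) * a_disp + t * b_disp

# Linear behaviour: decode, mix in linear light, re-encode.
linear_mix = encode((1 - t) * decode(a_disp) + t * decode(b_disp))

print(round(gamma_mix, 3))   # 0.55
print(round(linear_mix, 3))  # ≈ 0.67 -> brighter midpoint than the legacy mix
```

A transition authored for the legacy mix, rendered with the linear mix (or vice versa), hits different values through its whole duration and then snaps to the untouched clip at its end point, which reads as a pop.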
    So why not just turn off Maximum Render Quality and CUDA during output?
    Well, there's another big problem.
    If you do any kind of stills animation a la "the Ken Burns Effect" in Premiere, you'll quickly find Premiere leaves a "pixel shimmer" or "ripple" effect across the animation because it is using substandard methods to antialias motion. Looks like a cheap NLE from 2002.
So, you're left to choose... do you want bad one-sided dissolves or bad animation? Personally, I believe a good NLE should not leave identifiable footprints in the edit. If you happen to have *both* in a Premiere CS5 timeline and it goes out on TV, I'll be able to "out" your NLE as Premiere, because no matter which output setting you choose, one or the other problem will show up.
    SOLUTION 1: Use AE. The simplest solution is probably to avoid Maximum Render quality and avoid keyframing any Motion parameters. Do it all in After Effects.
    SOLUTION 2: Use CS4. It may be possible to edit your sequence with all the benefits of MPE and CUDA, and then send your timeline back to Premiere CS4 and have Premiere/Encoder CS4 render it at max quality. I haven't tested this yet, but I'm intrigued by this possibility. I'll try it soon and report back.
    Hopefully I am missing something here. I'm an Avid/FCP/AE guy who is still relatively new to Premiere.
    P.S. A second issue is lower thirds and alpha graphics. I bet you made all of yours in After Effects with a 1.8 gamma, didn't you? As far as I know, Premiere does not ship with any video effects that easily control the gamma curve of the alpha channel, so there's no quick fix for all of your currently rendered transparent graphics.

I know that if I use MPE in GPU mode, transitions between two PIPs get a funny black border around the PIP. CUDA can do some funny things when transitions and filters that do not have CUDA support are used with the GPU enabled. I can only hope Adobe is getting ready to switch to OpenCL or OpenGL, because there are a few glitches with Nvidia's CUDA technology. For quality I almost think CS4 was better, but you don't get the same amount of realtime. I hope PP CS 6.0 is true broadcast quality instead of just realtime previews. Edius can do it without the use of GPU acceleration. I think CUDA is a step in the right direction, but I also think it needs a bit more time to mature.

  • CoordinateMapper is not working properly when converting Depth into Camera then into Color Space

    Hello everyone,
    I am trying to map depth point to color space in the defined portion of the depth frame data in my Kinect v2. I am using CoordinateMapper for conversion between various spaces.
    Below is the snippet of the code-
    private void KinectMultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e)
    {
        var reference = e.FrameReference.AcquireFrame();
        ColorFrame colorFrame = reference.ColorFrameReference.AcquireFrame();
        DepthFrame depthFrame = reference.DepthFrameReference.AcquireFrame();
        if (colorFrame != null && depthFrame != null) // Do not proceed if either frame has expired
        {
            colorFrame.CopyConvertedFrameDataToArray(ColorFramePixels, ColorImageFormat.Bgra);
            Image<Bgra, byte> colorFrameImage = GetBgraImageFromBytes(ColorFrameWidth, ColorFrameHeight, ColorFramePixels);
            depthFrame.CopyFrameDataToArray(DepthFrameData);
            CoordinateMapper.MapColorFrameToDepthSpace(DepthFrameData, DepthSpacePoints);
            for (int index = 0; index < DepthFrameData.Length; index++)
            {
                ushort depth = DepthFrameData[index]; // Get the depth for the current pixel
                if (depth > depthMinReliableDistance && depth < depthMaxReliableDistance) // Ignore points outside the range
                {
                    CameraSpacePoint point = CoordinateMapper.MapDepthPointToCameraSpace(DepthSpacePoints[index], depth);
                    // Some processing on the CameraSpacePoint was cropped while posting the code here
                    ColorSpacePoint colorSpacePoint = CoordinateMapper.MapCameraPointToColorSpace(point);
                    DrawMarkInImage(colorFrameImage, colorSpacePoint);
                }
            }
            ColorImgBox.Image = colorFrameImage; // Display the image
        }
        if (colorFrame != null) colorFrame.Dispose();
        if (depthFrame != null) depthFrame.Dispose();
    }

    // Convert the bytes into an image using EmguCV
    Image<Bgra, byte> GetBgraImageFromBytes(int width, int height, byte[] pixels)
    {
        Image<Bgra, byte> image = new Image<Bgra, byte>(width, height);
        image.Bytes = pixels;
        return image;
    }

    // Draw a circle (just for debugging purposes)
    void DrawMarkInImage(Image<Bgra, byte> colorFrameImage, ColorSpacePoint colorSpacePoint)
    {
        Point circleCenter = new Point((int)colorSpacePoint.X, (int)colorSpacePoint.Y);
        colorFrameImage.Draw(new CircleF(circleCenter, 2), new Bgra(0, 255, 255, 255), 1);
    }
    However, when I try converting the depth directly to color space, it works well. But since I am doing processing on CameraSpacePoints, I need to make the above work.
    Could somebody please tell me where the error is and how to resolve it?

    Hi Carmine,
    I apologize for this confusion. I am trying to process only a defined portion of the depth frame, which resides between MinReliableDistance and MaxReliableDistance. For testing purposes, I kept my Kinect v2 on a table facing the floor, at a height of 1 meter from the floor. Some objects are kept on the floor. Based on the depth information, I just want to mark their top faces.
    I tried following two methods-
    Method 1: First compare the depth and ignore the undesired depth points. Then find the CameraSpacePoint of this depth point. Do the processing, whatever is needed. Now convert this CameraSpacePoint into a ColorSpacePoint and draw a mark at this point in the color image. Below is the code snippet-
    CoordinateMapper.MapColorFrameToDepthSpace(DepthFrameData, DepthSpacePoints);
    for (int depthIndex = 0; depthIndex < DepthFrameData.Length; ++depthIndex)
    {
        ushort depth = DepthFrameData[depthIndex]; // Get the depth for the current pixel
        if (depth > depthMinReliableDistance && depth < depthMaxReliableDistance)
        {
            CameraSpacePoint cameraSpacePoint = CoordinateMapper.MapDepthPointToCameraSpace(DepthSpacePoints[depthIndex], depth);
            // Some of the code was removed while posting it here
            ColorSpacePoint colorSpacePoint = CoordinateMapper.MapCameraPointToColorSpace(cameraSpacePoint);
            if ((colorSpacePoint.X >= 0) && (colorSpacePoint.X < ColorFrameWidth) && (colorSpacePoint.Y >= 0) && (colorSpacePoint.Y < ColorFrameHeight))
                DrawMarkInImage(ColorFrameImage, colorSpacePoint);
        }
    }
    DisplayImage(ColorFrameImage);
    MinReliableDistance is 676mm whereas MaxReliableDistance is 850mm. (Just for this setup)
    The above code doesn't work. It is unable to find the objects standing on the floor. Please see the attached image and notice that the yellow marks are not in the correct locations.
    Method 2: Just to check the setup, I tried this method, but I can't go further with it (since I need the points in CameraSpace). In this method, I first check for a valid depth point. Then the corresponding color pixel is picked up from ColorFramePixels using the ColorSpacePoint. Below is the snippet-
    CoordinateMapper.MapDepthFrameToColorSpace(DepthFrameData, ColorSpacePoints);
    Array.Clear(DisplayPixels, 0, DisplayPixels.Length); // Clear pixels to black
    for (int depthIndex = 0; depthIndex < DepthFrameData.Length; ++depthIndex)
    {
        ushort depth = DepthFrameData[depthIndex]; // Get the depth for the current pixel
        if (depth > depthMinReliableDistance && depth < depthMaxReliableDistance)
        {
            ColorSpacePoint point = ColorSpacePoints[depthIndex];
            int colorX = (int)Math.Floor(point.X + 0.5);
            int colorY = (int)Math.Floor(point.Y + 0.5);
            if ((colorX >= 0) && (colorX < ColorFrameWidth) && (colorY >= 0) && (colorY < ColorFrameHeight))
            {
                int colorImageIndex = ((ColorFrameWidth * colorY) + colorX) * BytesPerPixel;
                int depthPixel = depthIndex * BytesPerPixel;
                DisplayPixels[depthPixel] = ColorFramePixels[colorImageIndex];
                DisplayPixels[depthPixel + 1] = ColorFramePixels[colorImageIndex + 1];
                DisplayPixels[depthPixel + 2] = ColorFramePixels[colorImageIndex + 2];
                DisplayPixels[depthPixel + 3] = 255;
            }
        }
    }
    Image<Bgra, byte> depthFrameImage = GetBgraImageFromBytes(DepthFrameWidth, DepthFrameHeight, DisplayPixels);
    DisplayImage(depthFrameImage);
    Please see the attached output image. You can notice that the internal pixels are picked from the color frame and the outside pixels are black, as expected.
    Output image of method 1. Notice the yellow marks are not in the proper place; they should lie on the top face of the objects.
    Output image of method 2. Notice the unwanted area and the top face of the standing object are black (as expected)-
    Hope this clears your doubt.

RGB Working Color Space ....

    I'm very confused with this stuff ...
    I read that when we set up Photoshop we should set the RGB working color space to Adobe RGB or ProPhoto RGB .... fine, I understand this up to a point.
    However, what happens if the monitor cannot display the color space?
    For example, when I display the Color Picker on my external NEC monitor everything always looks good, but on my internal laptop monitor there is banding in the Color Picker.
    If I set the working color space to sRGB then the banding disappears.
    So just how should I set all this up ?

    Michael,
    rely on a calibrated precision monitor (not the laptop),
    define all images by AdobeRGB and improve them by Levels,
    Curves and Sharpening.
    Check occasionally by Proof Colors / Gamut Warning
    (using the monitor profile) whether the colors are out
    of gamut for the monitor.
    The monitor is probably not the final output device.
    If the output device should be e.g. offset ISOCoated,
    then check by Proof Colors / Gamut Warning whether the
    colors are out of gamut for ISOCoated. If this should
    be the case, then modify the RGB source until only small
    parts of the image are out of gamut (yellow blossoms).
    Otherwise larger parts might be affected by posterization
    (blue sky).
    Theoretically one can use 'desaturate by 20%', which should
    show larger space colors mapped to the smaller space.
    I don't use it; I prefer the gamut warning - recently
    for a couple of landscape photos with very blue skies,
    yellow blossoms and orange sunsets.
    Best regards --Gernot Hoffmann

  • Photoshop and working color spaces

    In how many color spaces can Photoshop work correctly?
    Can it only work with gamma 1.8 and 2.2 spaces, or also with linear or L*-gamma color spaces?
    Is it enough that the working space has RGB = 0 0 0 mapped to L* = 0 and RGB = 255 255 255 mapped to L* = 100?
    Thank you
    Marco

    I don't know where in the past I read that in Photoshop the curves, levels and so on work correctly only if the image's encoding color space has pure black (L*=0) mapped to 0 0 0, pure white (L*=100) mapped to 255 255 255, the gray axis effectively achromatic (a,b=0), and a gamma encoding of 1.8 or 2.2.
    These are the reasons why (I read) it is not advisable to choose the monitor color profile as the working RGB color space.
    Thank you for your considerations
    Marco

  • Asking the Bridge Team:  Bridge "working color space" setting when one does not have the Suite?

    Common sense tells me there is really no such thing as a "working color space" in Bridge, because Bridge is not an image editor, just a browser.
    Therefore, this may turn out to be a purely academic question; but that doesn't keep my curiosity from forcing me to ask it anyway. ;)
    Is there a way to set the Bridge "color settings" when one does not have the suite?
    The only Adobe program I keep up to date is Photoshop, so I've never had the suite. My version of Photoshop is 11 (CS4) and I run updated
    (not upgraded) versions of Adobe Acrobat 7.x, Illustrator 10.x and InDesign 2.x. Consequently, the Synchronize color settings command is not available to me.
    It seems to me that Bridge is behaving like a proper color-managed browser (e.g. Firefox with color management enabled), in that it displays tagged image files correctly and assumes sRGB for untagged image files. This normally works fine.
    But what if I wanted Bridge to assume my Photoshop working color space for untagged images, so that it behaves the same as Photoshop? I'm just curious, as I deal with a minuscule, practically negligible amount of untagged files.
    My reason for bringing it up now is that I don't recall this being explicitly mentioned in forum replies when users inquire about color settings in Bridge. A recent post regarding Version Cue in the Photoshop Macintosh forum got me thinking about this. Just wanting to make sure that I'm right in my assumption that there is really no such thing as a "working color space" in Bridge, because Bridge is not an image editor, just a browser.
    Thanks in advance.

    Hi Ramón,
    Thanks for sharing the outcome of your tests. However, I may have found a bug/exception to Bridge's colour management policy!
    It appears that CMYK EPS photoshop files are not colour managed in Adobe Bridge, even if they contain an embedded ICC profile.
    I've tried every combination in the EPS 'Save As' dialogue box, so it doesn't seem to be an issue with file encoding. Also, Bridge doesn't rely on the low-res preview that is held within the EPS itself.
    My guess is that Bridge is previewing the CMYK EPS with a Bridge-generated RGB image, but it's being displayed as monitor RGB (assigned) rather than colour managed (converted to monitor RGB). For most users the difference will be barely perceptible, but the problem became very noticeable when using Bridge to preview Newsprint CMYK images on a wide-gamut monitor (images that should have appeared muted really leapt off the screen!).
    How do I report this to the Colour Police at Adobe?!?

  • Color space problem/confusion

    I posted the following message to another thread, but at the recommendation of a member I am starting a new thread here. For a couple of answers see the thread below.
    http://forums.adobe.com/message/3298911#3298911
    I will provide much more information hoping an Adobe support person will chime in. This is extremely odd.
    System: HP, AMD, Windows 7 64-Bit, Nvidia 9100, all updates to Windows, latest Nvidia 9100 driver
    Display: Samsung 226CW, Windows settings 32-bit color, correct resolution,
    Calibration: Done with ColorMunki, D65 target, done after monitor has been on for more than 30 minutes
    Personal: (I am adding this information with some hesitation; please excuse it if it sounds like I'm bragging, I am not.) I have multiple posts on my blog, have made many presentations on color-managed workflow, and am very comfortable with the settings in Photoshop and Lightroom. Please take this only as baseline information. In fact, I am begging for information!
    Problem:
    Any, I mean ANY, original JPEG image in sRGB space coming out of the camera with no adjustments, any PSD file in sRGB space, any TIFF file in sRGB space looks significantly paler in Lightroom and in Photoshop CS5 than it does in other Windows-based image viewers like FastStone or XnView. This should not require those applications to be color space aware, and the situation is the same whether their color management is turned on or off. I have done the following:
    1. Totally uninstalled Lightroom 3 and reinstalled it
    2. Recreated a brand new Lightroom catalog/library and reimported all the images, converting all the RAW files to DNG (just in case!)
    3. Recalibrated the display
    When I view a file, any file, and I will use for the sake of simplicity a JPEG file in sRGB color space, in Lightroom it looks pale. Since the file is in sRGB color space (I have verified this), the rendering in Lightroom should be the same as the rendering in anything else. But it is not. I took my monitor and connected it to this system, with the same odd behavior of rendering in Lightroom being much paler than outside. It appears as if I am viewing an image in Adobe RGB in a Windows viewer that is not color managed.
    I further tried the following:
    1. I copied various versions of one file, all in sRGB color space (one PSD and two JPEG files) from the folders of the above system to my system: Intel, Windows 7 64-bit, display calibrated and profiled with ColorMunki to the same standards as the problem system above.
    2. Imported them to Lightroom on my system
    3. The rendering in Lightroom is identical to the rendering outside Lightroom for all the files, and all are the same as the rendering in FastStone on the problem system. Outside rendering was done using FastStone, as on the problem system.
    My deduction is that something on the problem system outlined in the opening of the message is interfering with the Adobe rendering engine, and I have no idea what it could be. I WILL GREATLY APPRECIATE it if an Adobe engineer could chime in and steer me in the right direction. I am willing to try other things, but I have run out of ideas, despite the fact that I have reduced much of the problem to the lowest common denominator of sRGB and JPEG against a PSD in sRGB.
    Waiting anxiously for your help.
    Cemal

    Also, I know enough to calibrate a monitor when it is connected to a new computer. That said, even without calibration the behavior should have changed to display all the images in question the same but perhaps with somewhat off colors. Am I right? I am not arguing the point, I am rhetorically raising the question. If the 226CW is wide gamut and 244T is not, when I connect 244T on the same computer the wide gamut issue should be eliminated, should it not? I am not talking at this point about the "correct" color, but the same color in or out of Lightroom.
    Unfortunately when you connect another monitor to a computer and don't calibrate or manually change it, Windows will not change the monitor profile. Macs will autodetect and change the profile but this innovation has not reached windows yet. The behavior you observe is caused by managed apps using the monitor profile and unmanaged apps not. If the monitor profile is not changed, the behavior doesn't change.
    BTW, for "cheap" software to be color space aware, it does not need a quantum leap in technology, I believe. It simply needs to know how to read the ICC profile and the LUT, is that correct?
    It's extremely simple to program color management into apps. Standard API libraries have been available in Windows for over a decade. The reason why this hasn't happened is related to the fact that Microsoft hasn't made IE color managed and the software makers do not want to confuse folks when images look different in their program vs IE. Considering that this still is the biggest issue people wrongly complain about in every color managed application (just check Photoshop fora) that is maybe not that strange.
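To illustrate how little is involved conceptually, here is a toy sketch (a hypothetical gamma-1.8 monitor, not a real ICC parser) of what a colour-managed app does with the monitor profile's tone curve versus what an unmanaged app does:

```python
# A colour-managed app converts source sRGB values through the monitor's
# measured response; an unmanaged app sends the values to the display untouched.

def srgb_decode(s):
    """sRGB-encoded value -> linear light (exact piecewise curve)."""
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

def monitor_encode(lin, monitor_gamma=1.8):
    """Linear light -> input value for this (assumed gamma-1.8) monitor."""
    return lin ** (1 / monitor_gamma)

def managed(s):
    return monitor_encode(srgb_decode(s))  # profile-aware conversion

def unmanaged(s):
    return s                               # raw pass-through

src = 0.5
print(round(managed(src), 3), round(unmanaged(src), 3))
# managed ≈ 0.425 vs unmanaged 0.5: on a non-sRGB display the two apps show
# visibly different colour for the same file, which is the mismatch described above.
```

Real colour management goes through the profile's matrix or LUT rather than a single gamma value, but the principle (and the modest amount of code) is the same.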

  • Trying to Export - Quicktime to DPX - in Linear (R709) Color space NOT LOG

    Hello,
    I am trying to export an image sequence through Compressor from a Linear (R709) color space. When I try the DPX image sequence output, it changes the files to LOG.
    Is there a way to pass the material through without any color space changes?
    Thank-you,
    Carl

    FYI, in the Inspector, under Filters, the COLOR tab > Output Color Space is "Default for Encoder" (CANNOT BE CHANGED; it is greyed out).

Color space…creating a book in My publisher….when I look at the share book pre print the colors are all dulled out. I work in pro photo rgb in LR and PS ….My Pub is sRGB….where is the problem?

    Color space…creating a book in My publisher….when I look at the share book pre print the colors are all dulled out. I work in pro photo rgb in LR and PS ….My Pub is sRGB….where is the problem?

    I finally got to my references. This had to do with "soft proofing" on screen in Photoshop.
    So this may not help you at all. See: Re: Strange sRGB soft-proofing behavior. So go ahead and leave that setting at Basic.
    However, there is a Color Management forum that you can also go to and see if anyone has answers for your particular problem.
    Here is the link: Color management
    I hope they can help you out.
    Gene

  • PS CS6 Smartobject - Changing color space not working

    If I initially open a smart object from ACR into PS CS6 using one color space, e.g. sRGB, should it then be possible to click back into ACR and change the color space to Adobe RGB?
    This is important to me, since I work most of the time in sRGB, batch editing lots of files (for timelapse video, hence sRGB), but sometimes it may be necessary to go back and use Adobe RGB to make the best possible print of one of the stills.
    This is not working for me. I thought I would be able to do absolutely non-destructive editing when working with smart objects from ACR, but this seems not to be the case.
    Ole

    I understand now that a copy of the original RAW file is made when working with SO.
    Still, when checking what color space I am working in (in PS CS6) after changing from sRGB to aRGB, PS still tells me I am working in sRGB!!
    So what I am asking about, going back from a SO in PS into ACR to change the color space, does not work.
    Is there a workaround for this?
    Or is it something I still do not understand here?

  • ICC working color space for the System?

    (I posted this in another area but I did not get any replies, so I'm trying here)
    Hey, any color management pros out there...
    What is the ICC color space of the OS itself? Is it the calibrated monitor profile I'm using?
    For example, let's say I'm working in an app that doesn't use any color management. By default, is the color space the app uses the ICC profile of the calibrated monitor?
    I'm specifically wondering in regards to rendering in 3d applications, such as, Lightwave, Strata3d, etc. These apps. don't save ICC profiles with the final rendered files. Are these files essentially in the Monitor space then? This is my guess, and has proven to be so in my tests, but I want to hear what other people think/know...
    thanks!
    Jeff

    What is the ICC color space of the OS itself? Is it the calibrated monitor profile I'm using?
    As Ned said, the OS doesn't have an ICC profile. Profiles describe the properties of image input or output devices. They tell which colour an output device will display when sent a certain RGB value, or which RGB value an input device will return when it sees a certain colour.
    Colour space and profile are basically synonymous, a colour space is the range of colours a device can produce or see, as described by its ICC profile.
    For example, let's say I'm working in an app that doesn't use any color management. By default, is the color space the app is using the ICC profile of the calibrated monitor?
    No, if the app doesn't use colour management it is oblivious to colour profiles. It will simply throw the RGB values at the output device as they are.
    I'm specifically wondering in regard to rendering in 3D applications, such as Lightwave, Strata3d, etc.
    Don't know these two, so can't comment.
    These apps don't save ICC profiles with the final rendered files. Are these files essentially in the monitor space then?
    Strictly speaking, no. You will get whatever the monitor or printer makes of the RGB values. However, most apps produce/expect RGB values that look reasonably correct on sRGB monitors, for historical reasons. Hence, even on systems without colour management (or when using apps that aren't CM aware) you can calibrate your monitor to mimic sRGB behaviour (using the on-screen menu) and get reasonable results. Most good CRT monitors come pretty close to sRGB out of the box.
    Cheers
    Steffen.

  • Aperture color space / working space

    Something I have been wondering for some time now, and didn't find a real answer so far: what is Aperture's internal color space? Is it ProPhoto RGB? Does Aperture know about the camera's color space (i.e. how does it map my RAW data to its internal color space)?
    What I noticed is that images are converted to AdobeRGB before being sent to Photoshop for editing; unless this is Aperture's working space (which I hope it is not), it would make sense to change that to the working space. One could work around it by first exporting a TIFF in the appropriate color space, editing that, and then reimporting it into Aperture, but this sounds rather impractical. And again, one should know what exactly the internal working space is, to avoid conversion losses.
    I'm grateful for any suggestions,
    Bernhard

    I don't know that there is a background color space. As I said, if someone is working with a point-and-shoot digital camera vs. a Canon 1Ds Mark II, the range of color the chip can capture is likely to be very different. In that case Aperture may work differently for each camera. I don't really know.
    I think your question touches on the ongoing debate (it's been years now on photo forums) as to whether or not you can really profile a digital camera. Some say yes, some say no. The ones who say yes are the ones selling profiling software and charts. Capture One allows for custom input profiles. The ones saying no are promoting software that doesn't allow for custom input profiles (ACR, Aperture, any camera manufacturer's software). I think the theory on the "no" side is that with a camera the range of color is limitless, and therefore there is no way to really profile it. This is as opposed to something like a scanner, which has a limited amount of color it can see and needs to reproduce.
    I know that ACR has two types of general guides within it (one for daylight, one for tungsten) for each camera and then interpolates between the two. I say "guides" because I don't know if they are really termed to be profiles or if another name is more appropriate.

  • Referring to Working Color Spaces in javascript

    I've found the Adobe reference to be lacking in this regard, as I've been unable to refer to the gray working color space.
    The reference says that using a string with "Working Gray" or "Working RGB" should refer to the respective working color space. In my experience this methodology does not work.
    I can successfully change profiles by doing something like this:
    docRef.convertProfile('KMattePr_50_3_14-3-12.icc', Intent.RELATIVECOLORIMETRIC, true, false)
    but I cannot get it to work this way:
    docRef.convertProfile('Working Gray', Intent.RELATIVECOLORIMETRIC, true, false)
    Given that I'm trying to refer to the "Gray Gamma 1.8" ICC profile, it seems I have no option to access it other than making it my working space and referring to it that way.
    Any guidance?

    You may want to post Photoshop Scripting questions on
    Photoshop Scripting
    Have you started this thread?
    ps-scripts.com • View topic - convertProfile() and referring to Working Color Spaces
