Images processed in Lightroom on a calibrated monitor, then viewed on non-calibrated monitors

I process my images to look good on my calibrated monitor, then export them to my website for viewing. When I show people my images on their monitors, which generally are not calibrated, the images of course look washed out and way too light. How do you guys compensate for a situation like that? Of course you want your images to look the way you intend on everyone's computer. Is it best to darken and over-saturate in Lightroom to compensate for the difference?
Curt

The histogram represents the numerical values of lightness and color that are stored in the file, in the XMP data, or in the LR catalog.
Different (uncalibrated) monitors interpret these numerical values differently, so an image from one and the same file will look different on each of them.
But the (uncalibrated) monitors do not shift the black or white point; they just display the tonal values differently. So, for instance, an image file with a correct black and white point could appear to have blown-out highlights on a monitor that is too bright, or blocked-up darks on a monitor that is too dark.
But the monitor doesn't change your image file and doesn't change the histogram.
The purpose of calibrating the monitor is to give you a standard when you edit your images. If your monitor is not calibrated and, for instance, has a white point of 5000 K instead of the standard 6500 K, then your images would appear too red (too warm in photographic terms) on your monitor. If you then corrected your image (its numerical values) so that it appears "normal" on your screen, it would have a blue cast (too cold) on a calibrated monitor.
So basically we calibrate our monitors to create a standard (a) between different devices (monitors, printers, etc.) and (b) between your own images edited at different points in time. Without calibration your monitor would not only display "wrong" colors but also display the same numerical color value differently at different times (monitors "drift" and thus have to be re-calibrated at regular intervals).
But naturally, color management does not work on uncalibrated monitors. So even if you have a color-managed workflow, you never know how your images posted on the web will look on other people's monitors. And there is no help for that as long as uncalibrated monitors exist.
WW
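To make this concrete, here is a minimal sketch (illustrative only; the pixel value and monitor gammas are made up) of the point above: the exported file stores one number per channel, and each monitor applies its own transfer curve at display time, so the on-screen brightness differs while the stored value, and therefore the histogram, never changes.

    // Illustrative sketch only: the same stored value rendered through two
    // hypothetical monitor transfer curves.
    public class MonitorDemo {
        // relative brightness a monitor produces for an 8-bit stored value
        static double displayed(int storedValue, double monitorGamma) {
            return Math.pow(storedValue / 255.0, monitorGamma);
        }

        public static void main(String[] args) {
            int stored = 180; // value written into the exported file
            System.out.println(displayed(stored, 2.2)); // reference-like monitor: ~0.46
            System.out.println(displayed(stored, 1.8)); // lower-gamma monitor: ~0.53 (lighter, washed out)
            // 'stored' is identical in both cases; only what is shown on screen differs.
        }
    }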

Similar Messages

  • I have a dual monitor setup.  When I paste develop settings from one image to another, Lightroom switches the active window.

    Strange problem.  Lightroom 5.5.  Windows 8.1. Dual monitor setup.
    I'm processing a large batch of photos.  I have Monitor #1 set as my main develop window, with Monitor #2 set only to display a larger version of the image I'm working with. If I copy the develop settings off of one image and try to paste them onto another image, Lightroom will switch from Monitor #1 to Monitor #2 as the main develop window, and will not display anything on Monitor #1.
    If I then try to paste develop settings onto another image on Monitor #2, LR will then display a larger version of the image I'm working with on Monitor #1.
    To work the way I want to, I have to continually drag my develop window back to Monitor #1 every time I try to paste develop settings onto a new image. How can I make this stop?

    I just tried what I think you’re saying you’re doing and I don’t have the problem with LR 5.5 on Windows 7 SP 2.
    Does the same thing happen whether you use the key shortcuts vs the Settings / Copy/Paste options on the menu?
    I ask this because if you're using the keyboard shortcuts, maybe you have some display-manager program that has bound Ctrl-Shift-V (Paste Settings) to a move-to-other-monitor function, which would do at least part of what you describe.
    You can also test this theory by doing a Ctrl-Shift-V with an application other than LR as the active window and see if it moves.  I suppose Ctrl-Shift-C might also play into it so try both of these hot-key combinations with a program that isn’t LR.
    If none of this sheds any light on the situation, then give a more detailed description of the low-level functions you’re doing to accomplish the copy-paste of settings, so someone else could replicate the steps exactly.
    Specific detailed steps, like:
    left-click on source photo’s thumbnail in the filmstrip
    press Ctrl-Shift-C to copy the settings
    click Check-None
    enable the white-balance and calibration checkboxes
    click Ok
    left-click on destination photo's thumbnail in the filmstrip
    click on Settings / Paste Settings in LR’s menus to paste the settings to the selected photo

  • Image Quality of Lightroom vs Nikon Image Processing Software

    Is anyone out there of the opinion that the quality of images displayed in Lightroom is not as good as when using Nikon's software (for NEF/JPEG files)? I am working with some photographers who think so. I love using Lightroom and am trying to determine 1) if there is any legitimacy to their opinions, 2) if any others are experiencing the same thing, and 3) if true, why?

    It's not the monitor calibration that makes the difference; it's the way the RAW engine cooks the image. The calibration mentioned above refers to using the Camera Calibration panel to tweak the colors to match. I posted a thread a week or so ago about how I managed to match (to my taste) the look produced by Capture NX within Lightroom. Hope this is useful in some way.
    http://www.adobeforums.com/cgi-bin/webx/.3bc80a90/0

  • My student has been using Lightroom 4 with Photoshop CS5 on her Mac for well over a year. Because of the difference between the Camera raw version available to her in Photoshop CS5 and the raw process in Lightroom 4, she has been exporting her raw images

    Any ideas?

    My student has been using Lightroom 4 with Photoshop CS5 on her Mac for well over a year. Because of the difference between the Camera Raw version available to her in Photoshop CS5 and the raw processing in Lightroom 4, she has been exporting her raw images from Lightroom and choosing the "Render using Lightroom" option when it popped up in the dialog box. She has been doing this for over a year and a half without any problem. Today when she hit Command-E, no dialog box appeared and the image just opened in Photoshop. I'm concerned that all of her Lightroom edits are not being rendered to Photoshop. Does anyone have an idea as to how we can get the "Render using Lightroom" dialog back?

  • I am using a MacBook Pro.  I simply cannot find a way to attach images adjusted in Lightroom as attachments and/or without massive degradation in quality.  I follow the LR attach email process as specified by LR, the photos appear in the email seemingly e

    I am using a MacBook Pro. I simply cannot find a way to attach images adjusted in Lightroom as attachments and/or without massive degradation in quality. I follow the LR attach-email process as specified by LR; the photos appear in the email, seemingly embedded, and the recipients of the email cannot save the attachment.

    You are welcome. Just finished a chat session with an Apple support rep and confirmed the matte option is no longer available. Seems a lot has changed since I bought my 17" 19 months back :). They did say that there were aftermarket screen films available from places like Amazon.
    I have never used anything like that, though. My wife has a 2008 MBP 15" with gloss, and I can say it is a nice screen finish; you just have to be careful of lighting from behind you. All my iMacs were glossy and I did learn to compensate for the added brilliance the screen brought to the photos. The new soft-proofing feature of LR5 seems to better estimate the brightness level of the printed work, compared to past versions of the software.
    In any case, in my opinion you really can't go wrong with the Apple product. I bought my first iMac in mid-1999 and have never looked back. I donated that machine to a pre-school in 2008; it was running OS X version 2 or 3, I think. I did run Photoshop 7.0 on an IBM laptop for a time (Windows XP). I think I had one of the very first versions of Adobe Camera Raw on that machine. I digress, sorry.
    The chat representative did confirm that the 17" is out of production, and I'm guessing Apple found the market for the big laptop just wasn't there. They did mention that 17" MBPs show up as "certified refurbished" units from time to time. You might explore that option with a local Apple store in the UK, assuming Apple has storefront operations off this continent, of course.
    Please feel free to contact me with further questions if you wish.
    Take care, Gordy

  • NOT happy with image quality of Lightroom 1.1

    Sure, LR now launches faster and the interface looks a bit nicer. And the more capable sharpening controls and the Clarity slider, which mimics contrast enhancement with USM, are nice additions, but has anyone else noticed what happened to the image quality?
    First, while formerly LR and ACR struck a great balance between detail and noise suppression (erring on the side of maintaining detail even at the expense of slightly higher noise levels), it appears the goal for the redesign has been to minimize the appearance of noise at all costs. It just so happens that yesterday afternoon I'd shot some available-light candids (up to ISO 800) of the staff at a local health care facility and was intent on using them as a trial run of Lightroom 1.1. Well, the difference in image quality jumped right out at me: there was no granular noise at all remaining, even in the ISO 800 shots, but neither was there any fine detail. I use a Canon 5D, and while I'm accustomed to slightly higher levels of chroma noise, images up to ISO 1600 in even the worst lighting are always full of fine detail. Fine structures like strands of hair and eyelashes have now lost their delicacy, and have instead become coarse, unnaturally painterly analogs. Looking into shadow areas, I can see the results of what seems to be luminance-noise smearing at work, obliterating noise and detail along with it. I never used Raw Shooter because I'm a Mac user (2x2 GHz G5 w/2 GB RAM and 250 GB HD), but if this is the result of incorporating Pixmantec's technology, the result is not a positive one from my standpoint. The images I shot yesterday are to be cropped to 4:5 proportions, then printed 20" x 25", at which size the processing artifacts and lack of fine detail in these LR 1.1 conversions become even more apparent. I've even tried turning off all image-processing options: Clarity, Sharpening and NR (none of which I ever use in RAW conversion, anyway)... It simply seems this noise smearing is part of the baseline RAW processing, and it really, really bites. Am I missing something? Is there some way to actually turn off this processing, which looks uncomfortably like the "watercolor" noise reduction that Kodak and Panasonic use for their compact digicams? Yuck!
    Secondly, is there a way to get back the suppression of hot and stuck pixels that LR used to perform? Now my high-ISO files are riddled with them, the same as they would be when converted with Aperture or Canon's DPP. Default suppression of hot and stuck pixels was a major advantage of LR/ACR, and contributed in no small part to my adoption of LR as my standard tool for RAW conversion, given the amount of high-ISO, low-light photography I do. What's even worse is that the random-color speckles are now smudged into the image along with all the other noise data that's being smoothed out, resulting in images that look more like impressionist paintings than photographs.
    I thought about reinstalling LR 1.0 and just continuing to use that, but if LR 1.1 is an indication of the direction Adobe is going to take in the development of the software, I really don't see the point of continuing to use it, particularly when I had a few existing problems with LR 1.0 that were never resolved, such as crashing during the import of photos from a memory card and progressively slower preview rendering as the size of my library increased. So I'm probably going to go back to using Aperture, which is itself not free of IQ foibles, but certainly looks much more attractive now in comparison to LR 1.1.
    Anybody notice the same things with IQ? Anybody got any suggestions of how to get more natural-looking conversions before I remove LR and go back to Aperture?

    Jeff,
    I mean no disrespect, but I would like to see samples of 1.1 compared to 1.0 of the same image (ISO 400 and/or 800), because I do not want to convert my library to a catalog until I know whether or not I like the image quality. Why is it so hard to get one good sample? That is all I am asking. I would just rather not jump through hoops to go back to 1.0 if I do not like 1.1... That is all.
    And yes, after well over 400 printed articles I can tell what an image will look like in print when I view it 1:1. I can tell whether the eyelashes or pores on someone's face, the detail in a rug, or wood grain will be detailed on the offset-printed page by looking at the image at 1:1. If I see smudging, that means to me that the most detail possible is NOT going to translate to the page. If, however, I CAN see detail in those types of areas clearly (i.e., no smudging), then I know that I will see those fine details on the page. If these fine details were not important, then we would all still be shooting with 3 and 4 MP cameras. Those fine details that are only visible to our eyes at a 1:1 preview on screen are important on the printed page.
    Oh, and I am not chest-thumping. You can check my history here; I do not have a history of that type of activity. I am simply asking to see samples before I update.
    I am a very discriminating pro, not some over-testing, too-much-time-on-my-hands, complaining, overpaid amateur who only has time to complain that their test chart is out of focus, or that they can measure too much noise at ISO whatever, instead of actually making photos. I actually make my living taking photos, and my clients have come to expect a certain level of quality from me. They comment all the time on how much higher quality my images are than those of some of the other photogs they use. And I am still shooting a D60, whereas these others are shooting 5Ds and D2Xs.
    Jeff, I am not against you or Adobe. As a matter of fact, I LOVE LR. It has changed my workflow in a very positive direction. I think it is wonderful. I just want one sample... I am asking nicely: please, with sugar on top :)
    If you can't give me a sample, then please at least reassure me that it will be easy to go back to 1.0 for the time being. Is it as easy as uninstalling 1.1, reinstalling 1.0 and recovering my DB from a current backup? If so, then fine, I will go this route. If not, then I am hoping for a sample.
    Thank you very kindly, Jeff, for engaging in this lively conversation. I do appreciate your comments and participation on this forum. And please note that none of this is said with attitude or malice. I know that sometimes a writer's intent or emotional state is easy to misinterpret in a forum like this. So please know that I am calm and not angry, just curious about image quality.
    Ok. I will shut up now. Thanks again

  • RAW Image Sharpening in Lightroom

    Hi,
    I have been shooting in camera RAW for some time but just learned that the sharpening in the Develop module is for RAW image sharpening and not for output sharpening. My problem is that I have lots of images that I have already processed with Lightroom's Develop module.
    To get the best images, do I need to go back and reset the Develop module settings on an image, apply the RAW sharpening, then re-apply the Develop module settings? Or is Lightroom smart enough to allow me to apply the RAW sharpening after I have "developed" the photo and still get the same results?
    Thanks in advance to anyone that knows the answer to this one!
    Steve Wetzel
    http://wetzelphoto.wordpress.com

    On Tue, Sep 1, 2009 at 9:11 PM, Skippie2u<[email protected]> said:
    >> If so, then I'd say you should apply the
    >> sharpening. Unless you've further modified the file (using Photoshop
    >> or whatever), then adding the sharpness in the Develop module should
    >> just happen, as another development step.
    You're right, it should, but that does not mean it does. What I am asking is: to get the best results, should I apply the RAW image sharpening FIRST? Will that give better results than applying it later or last?
    >> If you were happy with the way they looked before learning about the
    >> sharpening, then I'd say just continue to enjoy your work.
    So you believe the RAW Image sharpening is optional?
    I was trying to point out that if you were happy with the image
    before, you could just leave them as they are. If you feel that
    applying the sharpening makes the image better, then using it would be
    a help. And since you say:
    >I created a virtual copy, reset the settings on it, applied RAW Image Sharpening, then applied my development settings. The image looked way better doing that than applying no RAW Image Sharpening.
    then I'd say you probably want to apply the sharpening, since it
    sounds like you would be happier with this much better image.
    My point was that some people learn a fact, or a different method, and
    automatically decrease their happiness and satisfaction with their
    prior efforts. In your case, sounds like you've tested the new method,
    and like the results better. To me, that would indicate that I should
    start using this new method in the future. How much you like the
    improvement will determine whether you spend the time to go back
    and re-process your (already completed) images.

  • Image Processing in C#

    What I want to do is the following.
    1. Load an image from a JPG/BMP file and display it on a PictureBox or any other image control, keeping the image's aspect ratio
    2. Perform image processing, for example zoom in/zoom out and change brightness/contrast
    Does any .NET class support this, or should I implement it with my own code?
    If I need to write my own code, is PictureBox the optimal control?

    Hmm.  If you don't know where to start then maybe the do-it-yourself approach is not going to work for you.
    Here's code for a simple control you can use to display an image with a transform matrix and a color matrix.
    using System;
    using System.Drawing;
    using System.Drawing.Drawing2D;
    using System.Drawing.Imaging;
    using System.Windows.Forms;

    class ImageControl : Control
    {
        // Image to display; repaint whenever it changes.
        public Image Image
        {
            get { return image; }
            set { image = value; Invalidate(); }
        }
        private Image image;

        // Transformation matrix used for zooming / panning.
        public Matrix Transform
        {
            get { return transform; }
            set { transform = value; Invalidate(); }
        }
        private Matrix transform = new Matrix();

        // Color matrix used for brightness / contrast adjustments.
        public ColorMatrix ColorMatrix
        {
            get { return colorMatrix; }
            set { colorMatrix = value; Invalidate(); }
        }
        private ColorMatrix colorMatrix = new ColorMatrix();

        protected override void OnResize(EventArgs e)
        {
            base.OnResize(e);
            Invalidate();
        }

        public ImageControl()
        {
            this.DoubleBuffered = true;
        }

        protected override void OnPaint(PaintEventArgs e)
        {
            if (image == null) return;

            e.Graphics.ResetTransform();
            e.Graphics.Transform = transform;

            ImageAttributes imageAttributes = new ImageAttributes();
            imageAttributes.SetColorMatrix(
                colorMatrix,
                ColorMatrixFlag.Default,
                ColorAdjustType.Bitmap);

            e.Graphics.DrawImage(
                image,
                new Rectangle(0, 0, image.Width, image.Height),
                0, 0, image.Width, image.Height,
                GraphicsUnit.Pixel,
                imageAttributes);
        }
    }
    You can set a transformation matrix to do the zooming, panning, etc.  And set a color matrix to do simple contrast brightness adjustments.
    // Some example matrices
    float[][] colorMatrixElements = {
        new float[] {2, 0, 0, 0, 0},        // red scaling factor of 2
        new float[] {0, 1, 0, 0, 0},        // green scaling factor of 1
        new float[] {0, 0, 1, 0, 0},        // blue scaling factor of 1
        new float[] {0, 0, 0, 1, 0},        // alpha scaling factor of 1
        new float[] {.2f, .2f, .2f, 0, 1}}; // translations of 0.2 for R, G and B
    ColorMatrix colorMatrix = new ColorMatrix(colorMatrixElements);

    Matrix transform = new Matrix(
        1.5f, 0.0f,    // horizontal scale of 1.5
        0.0f, 0.5f,    // vertical scale of 0.5
        20.0f, 40.0f); // horizontal offset of 20, vertical offset of 40
    Contrast and brightness are performed together in a single matrix.  Contrast scales the output range and brightness is the offset.
    Zooming is just scale and offset.

  • Image processing with imaq vision with 2 webcams on the same computer

    Hi,
    I'm currently trying to set up two USB webcams (Logitech QuickCam for Notebooks Pro). I want to be able to have them both run simultaneously and do some image processing with the images that I get from both cameras with LabVIEW and IMAQ Vision.
    As of right now, I'm having trouble getting both cameras to run at the same time. Any help would be gladly appreciated. Thanks.

    The USB IMAQ driver will not support running two USB cameras at a time (I believe it is a limitation of the DirectShow interface). You could open one camera, acquire an image, close the reference to that camera, and then do the same for the second camera.
    If you need simultaneous acquisition, look at possibly moving to 1394 cameras or analog cameras.

  • Image processing over wireless

    Hi,
    One of our subsidiaries has a setup where a continuous video stream is generated by certain application servers performing real-time image processing.
    The setup goes this way: 3 PCs run this image processing and are connected to a 3750G switch.
    This switch is uplinked to an access point, which then carries all of this over wireless to the main image-processing server.
    About two weeks ago, when they tested this, all 3 PCs had issues running together when operators viewed/worked on the images on them.
    When one or two of them are turned off, the rest work fine with no disturbance (intermittent stopping and starting of images).
    Please help with suggestions on what could be the cause. I suspect bandwidth, but since this doesn't cross any WAN links, I doubt that bandwidth is actually the problem.
    Thanks in advance.

    It depends. QoS will help if you're having congestion on the wired side. WMM QoS will help over the air. I would look at the switch port and see if you see drops on both the MAP and RAP side. You might just be over-utilizing the backhaul. Remember that it is a half-duplex link, so that can be an issue also. I had an install with regular APs and 3 GB video uploads within 30 minutes, and the only way to achieve that was to allow only 4-5 clients per AP. The testing you have done seems to show that the max you can do over that link is 2. How much other traffic is using the wireless? Maybe try to isolate that traffic on a separate RAP/MAP pair using AP groups.
    Sent from Cisco Technical Support iPhone App

  • Image processing from .txt file onto an intensity graph

    I am doing a mini project in my class and I was wondering if anyone could help me. It's about image processing but I am a bit stuck.
    Here's the idea:
    "An image is really nothing more than a 2D array of data. The value of every element in the array corresponds to the brightness of the image at that point.
    In this project you will create a VI which loads a 2D array of data and then displays it on the screen using the Intensity Graph. Three example files (boats.txt, gordon.txt and parrot.txt) are available on the module webpage that you can use. However, you can also use any other black and white image you like, but will need to convert it to a “text image” first. To do this you can use some software called “ImageJ” which is available on the computers and is free to download.
    You can vary the brightness of an image by adding the same value to every element in the array.
    The contrast of an image is adjusted by multiplying every element in the array by the same value. Using numerical controls and simple array mathematics, you should adjust the brightness and contrast of your displayed image.
    Some other ideas that you could try with image manipulation are:
    o Invert an image (change black to white and white to black)"
    First, I am having problems putting my picture onto the graph. The data is in the right file, but the colours are not correct and the image has been rotated 90 degrees. I will upload my VI so far when I get back onto my computer.
    Would really appreciate the help! Thank you for reading.
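    The brightness and contrast adjustments described in the assignment above are plain array math; below is a minimal sketch of just that math (written in Java purely for illustration, since the project itself is a LabVIEW VI feeding an Intensity Graph):

        // Sketch of the array arithmetic only: brightness adds a constant to
        // every element, contrast multiplies every element, and inversion
        // subtracts each element from the maximum value.
        class ImageMath {
            static double[][] adjust(double[][] img, double contrast, double brightness) {
                double[][] out = new double[img.length][img[0].length];
                for (int r = 0; r < img.length; r++)
                    for (int c = 0; c < img[r].length; c++)
                        out[r][c] = img[r][c] * contrast + brightness;
                return out;
            }

            static double[][] invert(double[][] img, double max) {
                double[][] out = new double[img.length][img[0].length];
                for (int r = 0; r < img.length; r++)
                    for (int c = 0; c < img[r].length; c++)
                        out[r][c] = max - img[r][c];
                return out;
            }
        }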

    Hi charlthedancer,
    Here is an example to get you started.
    Kind regards,
    GajanS
    Attachments:
    Test.vi 14 KB

  • Image Processing Performance Issue | JAI

    I am processing TIFF images to generate several JPG files from each one after applying image processing.
    Following are the transformations applied:
    1. Read the TIFF image from disk. The TIFF is available in the form of a PlanarImage object.
    2. Scaling
         /* Following is the code snippet */
         PlanarImage origImg;
         ParameterBlock pb = new ParameterBlock();
         pb.addSource(origImg);
         pb.add(scaleX);
         pb.add(scaleY);
         pb.add(0.0f);
         pb.add(0.0f);
         pb.add(Interpolation.getInstance(Interpolation.INTERP_BILINEAR));
         PlanarImage scaledImage = JAI.create("scale", pb);
    3. Conversion of the planar image to a buffered image. This operation is done because we need a buffered image.
         /* Following is the code snippet used */
         bufferedImage = planarImage.getAsBufferedImage();
    4. Cropping
         /* Following is the code snippet used */
         bufferedImage = bufferedImage.getSubimage(artcleX, artcleY, 302, 70);
    The performance bottleneck in the above algorithm is step 3, where we convert the planar image to a buffered image before carrying out the cropping.
    The operation typically takes about 1120 ms to complete and, considering the data set I am dealing with, this is a very expensive operation. Is there an alternative to the above approach?
    I presume that if I could carry out the cropping of step 4 on a planar image object instead of a buffered image, I would save considerable processing time, since step 3 (which seems to be the bottleneck) would no longer be required. I have also noticed that the processing time of step 3 is proportional to the size of the planar image object.
    Any pointers around this would be appreciated.
    Thanks,
    Anurag
    Edited by: anurag.kapur on Oct 4, 2007 10:17 PM

    It depends on whether you want to display the data or not.
    PlanarImage (the superclass of all RenderedOps) has a method that returns a Graphics object you can use to draw on the image. This allows you to do things like write on an image.
    PlanarImage also has a getAsBufferedImage() that will return a copy of the data in a format that can be used to write to Graphics objects. This is used for simply drawing processed images to a display.
    There is also a widget called ImageCanvas (and ScrollingImagePanel) shipped with JAI (although it is not a committed part of the API). These derive from awt.Canvas/Panel and know how to render RenderedImage instances. This may use less copying/memory than getting the data as a BufferedImage and drawing it via a Graphics object. I can't say for sure, though, as I have never used them.
    Another way may be to extend JComponent (or another class) and customize it to use calls to PlanarImage/RenderedOp instances directly. This can help with large tiled images when you only want to display a small portion.
    matfud
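    For the cropping bottleneck raised in this question, one possible approach, sketched below and untested, is to use the standard JAI "crop" operator directly on the PlanarImage so the full-size getAsBufferedImage() copy of step 3 can be skipped:

        // Sketch only: crop on the PlanarImage itself via the JAI "crop"
        // operator instead of converting the whole scaled image first.
        import java.awt.image.renderable.ParameterBlock;
        import javax.media.jai.JAI;
        import javax.media.jai.PlanarImage;

        class PlanarCropper {
            static PlanarImage crop(PlanarImage src, float x, float y, float w, float h) {
                ParameterBlock pb = new ParameterBlock();
                pb.addSource(src);
                pb.add(x);   // crop origin x (artcleX in the question)
                pb.add(y);   // crop origin y (artcleY in the question)
                pb.add(w);   // width  (302 in the question)
                pb.add(h);   // height (70 in the question)
                return JAI.create("crop", pb);
                // If a BufferedImage is still needed for JPEG encoding, calling
                // getAsBufferedImage() on this small cropped image copies far
                // less data than converting the full scaled image.
            }
        }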

  • Image processing manipulation problem

    Dear all,
    Good morning! Hope you are all fine. I have an issue regarding image processing manipulation.
    I want to give you all the details of what I have done so far in Java.
    I have used a third-party Java tool called the ImageJ editor for image manipulation. I got its source and made the necessary customizations to store/fetch the image from my database table. This editor is working fine: when I click the Open option on the menu bar it fetches the image from the database and puts it on a container, where one can edit/modify the image and save it back to the table.
    The above lets me display and store the edited image in the database. Now I just want to integrate the editor into the browser.
    How do I call the class file when I click the link on the specified page, and how do I pass a parameter from the JSP page to the Java class file?
    Please help me sort out the problem.
    "Thanks and regards"
    Jonty

    Increasing the contrast springs to mind. When used to extremes it is sometimes called 'cartoonising'.
    Suppose your image was in 16 colors; you could reasonably safely assume at least two are red, two are blue, two are yellow, etc.
    As regards an algorithm, you are looking to 'round off' the color values to (my suggestion only) the nearest 64 or even the nearest 127...
    Bamkin, just be careful of that ORANGE color. Orange is a tricky color to define in either RGB or HSB, and is very similar to red and yellow. A little change in either direction makes it difficult to distinguish from one or the other.
    My thought would be to try to force your image down to only six colors. Then those colors (it wouldn't really matter if they were accurate, as long as they are correctly distinguished) would be easy to identify.
    - Adam
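    As a concrete illustration of the 'rounding off' idea above, here is a minimal sketch (the step size is arbitrary) that snaps each RGB channel of a packed pixel to the nearest multiple of 64:

        // Illustrative only: quantize each 8-bit channel to the nearest
        // multiple of 'step', clamping to 255, so the image collapses to a
        // small palette that is easier to classify.
        class ColorRounder {
            static int quantize(int rgb, int step) {
                int r = snap((rgb >> 16) & 0xFF, step);
                int g = snap((rgb >> 8) & 0xFF, step);
                int b = snap(rgb & 0xFF, step);
                return (r << 16) | (g << 8) | b;
            }

            private static int snap(int value, int step) {
                return Math.min(255, Math.round(value / (float) step) * step);
            }
        }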

  • Image processing with BLOBS: how to write BufferedImage to a BLOB

    Hi everybody - thanks in advance for any input on that topic.
    I'm doing image processing using AWT and 2D. Images are stored in an RDBMS as a BLOB type, which I retrieve using JDBC and convert to a BufferedImage using a JDBCImageDecoder.
    Now I have my BufferedImages and I can process them using the appropriate filters (ConvolveOp, e.g.).
    Writing the BufferedImages to disk or displaying them on screen is easy, but I can't manage to write them back to a BLOB object. Any hint?
    (Of course, I'm speaking of oracle.sql.BLOB objects, not java.sql.Blob).
    Thanks and have a nice day
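    One common route, sketched below on the assumption that a plain JDBC update is acceptable (very large images or older Oracle drivers may still require the oracle.sql.BLOB streaming API), is to encode the BufferedImage to bytes with ImageIO and let the driver fill the BLOB column; the table and column names reuse those that appear later in this thread:

        // Sketch only: encode the processed BufferedImage and update the BLOB
        // column through standard JDBC.
        import java.awt.image.BufferedImage;
        import java.io.ByteArrayOutputStream;
        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import javax.imageio.ImageIO;

        class BlobWriter {
            static void store(Connection conn, BufferedImage img, long pk) throws Exception {
                ByteArrayOutputStream baos = new ByteArrayOutputStream();
                ImageIO.write(img, "png", baos); // or "jpg"
                PreparedStatement ps = conn.prepareStatement(
                        "UPDATE photos SET img_blob = ? WHERE pk = ?");
                try {
                    ps.setBytes(1, baos.toByteArray()); // driver writes the bytes into the BLOB
                    ps.setLong(2, pk);
                    ps.executeUpdate();
                } finally {
                    ps.close();
                }
            }
        }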

    Billy,
    Thank you for your answer. I have two questions.
    First, what does "Bob's your uncle" mean? I'm a Frenchman, not used to English idioms or jargon. I would enjoy knowing!
    Second, I have created a PL/SQL procedure to update my table, and I face a problem.
    I want to initialize b_lob with the img_blob value, but I get an error: "ORA-22922: nonexistent LOB value". Where does my error come from? I am fairly new to this BLOB stuff.
    Below is my procedure.
    Thanks for your kind help.
    Christian.
    create or replace
    procedure insert_img as
    f_lob bfile;
    b_lob blob;
    loops number default 0 ;
    lines number default 0;
    stmt varchar2(4000);
    cursor c1 is select img_blob, file_name, pk from photos FOR UPDATE ;
    begin
    NULL;
    dbms_output.enable(900000);
    stmt := 'SELECT COUNT(*) FROM PHOTOS';
    EXECUTE IMMEDIATE stmt INTO LINES ;
    for ligne in c1 loop
    exit when loops >= lines ;
    loops := loops+1;
    update photos set img_blob= empty_blob() where CURRENT OF C1;
    -- return img_blob into b_lob;
    b_lob := ligne.img_blob ;
    f_lob := bfilename( 'MY_FILES', ligne.file_name );
    IF (DBMS_LOB.FILEEXISTS(f_lob) != 0)
    THEN
          DBMS_OUTPUT.PUT_LINE('BFILE exist: '|| ligne.file_name || ', ligne :'|| ligne.pk);
          dbms_lob.fileopen(f_lob, dbms_lob.file_readonly);
          dbms_lob.loadfromfile( b_lob, f_lob, dbms_lob.getlength(f_lob) );
          dbms_lob.fileclose(f_lob);
          dbms_output.put_line('ligne.pk :' || ligne.pk || ', lines : ' || lines || ', loops ' || loops);
      ELSE
        DBMS_OUTPUT.PUT_LINE('BFILE does not exist: '|| ligne.file_name || ', ligne :'|| ligne.pk);
      END IF;
    end loop;
    commit;
    end insert_img;

  • Image Processing Cells Manipulation

    Hi Guys,
    I am working on cell manipulation. Due to the nature of image processing, which scans in a top-down, left-to-right fashion, I am unable to fix an index number on a specific cell, and I require this in order to manipulate the cells on command. The image processing will be running throughout the programme. Attached is a picture of the IMAQ Count Objects setup I am using.
    On a side note, about a biology-related experiment: has anyone encountered the same issue before? Why does a cell so easily get stuck onto the surface? This prevents the cell from being trapped once it is stuck on the surface of the cover slip.
    Thank you in advance,
    Scott
    Attachments:
    Image Processing.png 227 KB

    Hello,
    Yes, I understand your problem, but it seems to me that the scan direction has no effect on this... What if the scan direction were bottom-up, left-to-right, and the cell flowed in from the bottom-left corner? You would have the same problem...
    Or do new cells flow into the FOV of the camera only from the top-left corner? Could you count the number of objects and linearly increment the indexes of your cells? For example, the first cell that comes into the FOV of the camera has index 0, and when the next cell is introduced, the first cell gets an incremented index of 1. And so on...
    But if the new cells come into the FOV of the camera randomly from left, right, bottom, top, etc., it would be more difficult. What you could do is calculate some parameters (check particle measurements in NI Vision Concepts) for the cell you want to manipulate (at the time when you are sure it is the correct cell) and then compare these parameters with the cells in every subsequent frame. You can build a feature vector of these parameters and use classification tools. Once you classify the cell, you would have no problem manipulating it.
    If your cell changes shape dynamically, then I do not see a way to do this.
    Best regards,
    K
    https://decibel.ni.com/content/blogs/kl3m3n
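    To make the matching idea above concrete, here is a minimal sketch (plain Java for illustration only; NI Vision's classification tools would normally do this) of re-identifying a tracked cell by nearest-neighbour distance between feature vectors:

        // Illustrative only: pick the candidate whose feature vector (area,
        // perimeter, etc.) is closest to the tracked cell's feature vector.
        class CellMatcher {
            static int closest(double[] target, double[][] candidates) {
                int best = -1;
                double bestDist = Double.MAX_VALUE;
                for (int i = 0; i < candidates.length; i++) {
                    double d = 0;
                    for (int j = 0; j < target.length; j++) {
                        double diff = target[j] - candidates[i][j];
                        d += diff * diff;
                    }
                    if (d < bestDist) { bestDist = d; best = i; }
                }
                return best; // index of the most similar cell in the new frame
            }
        }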
    "Kudos: Users may give one another Kudos on the forums for posts that they found particularly helpful or insightful."
