Composite images

Hi, I'm pretty new to Macs and I'm loving iPhoto, but I was wondering if there is any way to create composite images, i.e. taking bits of two photos and combining them into one image (I think that's called a composite?!).
If not, which program with such a feature would you recommend? I was looking at Adobe Photoshop Elements 4, but I don't know much about what's available.
Many thanks!

GavK:
Did your Mac come with OmniGraffle installed? If so, give it a try: it's great for assembling composites, though it has no image-editing capabilities. It was included on some Macs. Otherwise, if you have to purchase new software, I heartily recommend Photoshop Elements 4; you can do some amazing advanced editing with it.
Do you Twango?
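
For anyone curious what a composite actually involves, here is a minimal sketch in Python using the Pillow library (file names and crop coordinates are made up for illustration); it just cuts a piece out of one photo and pastes it onto another, which is roughly what the GUI tools above do with selections and layers:

    from PIL import Image

    # Rough illustration with made-up file names and coordinates:
    # crop a piece from one photo and paste it onto another.
    base = Image.open("photo_a.jpg").convert("RGBA")
    piece = Image.open("photo_b.jpg").convert("RGBA").crop((100, 100, 400, 400))
    base.paste(piece, (50, 50), piece)   # third argument uses the piece's alpha as a mask
    base.save("composite.png")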

Similar Messages

  • How to save a PSD file with a composite image?

    Windows 7 64-bit | Photoshop CS4 | iMatch DAM
    Somewhere along the line I must have changed a setting, because many of my images are now not viewable in my DAM software (iMatch) or in other photo viewers, like Picasa for instance. Instead, a small rectangular black box appears that says "This layered Photoshop file was not saved with a composite image". Can a contributor tell me where the setting is to save with a composite? Assistance is very much appreciated. IM

    Thank you, Chris, for the correct answer. Best, Bob

  • No composite image, no extension

    I'm on a Mac, and I have been using CS2 for a long time. All of a sudden something has changed. This is what it's doing:
    When I try to mail someone a file by attaching it, there's no icon for the file. If I try to email the file to someone (as a PSD), it sends a message in four languages saying: "This layered Photoshop file was not saved with a composite image" - no extension, no box.
    I tried throwing out preferences, running Disk Utility to repair permissions, and then finally reinstalled Photoshop. None of it has changed anything.
    Also, all of the unopened files in my folders have this same little message next to them - none of them have the little picture in the icon (in Preview). Does anyone have any idea what is wrong? PLEASE help! The only thing that has changed in the last couple of days is that I have installed wireless - and my partner is using Photoshop at the same time I am.
    Thanks for any and all suggestions you may have -
    www.beautifulcalligraphy.com

    Are you seeing that message in Mail, or after you save the file and open it in Photoshop?
    Again, any application that can't read layers will see that message - and Apple's code doesn't read layers.
    When you originally saved the file, you chose not to maximize compatibility, and that message was written into the file to tell you about it. The file hasn't changed. Only your method for viewing the file has changed (to something that can't read layers).
    If you open the file in Photoshop and see that message, then something went really wrong with the file - because something removed the layer information. If the layer information had been corrupted, you would have gotten a message about that from Photoshop.

  • Multi-camera IMAQdx systems: shortcuts for stitched composite image

    Imagine a system using, for example, multiple GigE cameras through the IMAQdx interface, where we wish to form a stitched composite image from the multiple camera views. The stitching principle is naive, straightforward concatenation, one image next to another.
    The problem is that while it is trivial to build such a composite image, it is difficult to do it very efficiently. The image sizes are large, tens of megapixels, so every copy matters. Alternative hardware configurations would open up a lot of options, but say we're stuck using GigE cameras and (at least initially) the IMAQdx interface. What tricks, or even hacks, can you guys imagine for facing this challenge?
    I've seen some talk about the IMAQdx grab buffers, and it appears to me that one cannot manually assign those buffers or access them directly. The absolute optimal scenario would of course be to hack your way around to stream the image data directly next to each other in memory, sort of as shown below in scenario1.png:
    The above, however, doesn't seem to be too easily achieved. The second scenario, then, would be to acquire into individual buffers and perform one copy into the composite image. See the illustration below:
    Interfaces usually allow this with relative ease. I haven't tested it yet, but based on the documentation, using ring-buffer acquisition and "IMAQdx Extract Image.vi" this should be possible. Can anyone confirm this? The copying could be performed by external code as well. The last scenario, without a ring buffer, using "IMAQdx Get Image2.vi", might look like this:
    The second copy is a waste, so this scenario should be out of the question.
    I hope this made some sense. What do you wizards say about it?

    Hi,
    Sorry, the constraints are not really well documented, as they depend on the platform, camera type, camera capabilities, and how the driver handles things. All of these are subject to change, so we decided instead to try to make the errors very self-descriptive about how to satisfy the requirements.
    You are correct that these fundamentally come down to making sure that the specified image buffer can be transferred into directly by the driver. The largest requirement is that the image data type is the same and doesn't need any decoding/conversion step. The other requirements are more flexible and change depending on many factors:
    - No borders, since a border adds a discontinuity between lines. This requirement doesn't apply to GigE Vision (since the CPU moves the data into the buffer) or to USB3 Vision cameras that have a special "LinePitch" feature that allows them to pad the image lines. The USB drivers of more modern OSes (like Win8+) have more advanced DMA capabilities, so it is possible/likely that this can also be ignored in the future.
    - The line width must be a multiple of 64 bytes (the native image line alignment on Windows) - the same consideration as borders.
    So, if you end up using GigE Vision cameras, this should just work. If you want to use USB3 Vision, you have a few more constraints to work with.
    Eric
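
    Outside of LabVIEW, the "one copy per frame into a preallocated composite" idea from the second scenario can be sketched roughly like this in Python with NumPy (the frame size, camera count, and on_frame callback are assumptions for illustration, not IMAQdx calls):

        import numpy as np

        NUM_CAMS = 3
        H, W = 2048, 2448                      # per-camera frame size (assumed)
        # One contiguous side-by-side composite buffer, allocated once up front.
        composite = np.empty((H, W * NUM_CAMS), dtype=np.uint8)

        def on_frame(cam_index, frame):
            # frame: an (H, W) uint8 array handed over by the acquisition layer (assumed).
            # The column-slice assignment is the single copy per frame from scenario 2;
            # each destination line is strided, which is why line alignment still matters.
            composite[:, cam_index * W:(cam_index + 1) * W] = frame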

  • Preview "this layered Photoshop file was not saved with a composite image" error message?

    Hi, I get this message in Preview when I try to open a .psd file by double-clicking it:
    "This layered Photoshop file was not saved with a composite image"
    Is there any program that came with my new iMac that will open this file?
    OS X 10.9.5
    Thanks!
    P.S. When I clicked all through Help on my Mac, it would not tell me what I needed to know.
    So, is this forum the best way to find actual useful info?


  • PSD file converted to composite image despite import settings

    I am importing a PSD file (with the import settings set NOT to "Use flat composite image") and yet all I can see is a single layer containing a bitmap. The author can see a full set of layers in Photoshop.
    I know that some paths/vectors etc. can get converted to bitmaps on import, but is there any reason why the whole file should get converted to a single bitmap? Is there a setting in Photoshop that could cause this?
    A 17 MB PSD file becomes a 570 KB Fireworks PNG file.
    Thanks for any insights.
    Geraint
    [edited 570GB to 570KB - whoops]

    I am using Fireworks CS3.
    This morning (as an experiment) I downloaded a trial copy of Photoshop (not at all intuitive to use, unlike Fireworks!!) and did some experimentation.
    I eventually found that the PSD file I was looking at was RGB/16-bit; when I switched this to RGB/8-bit and saved the file, it opened up perfectly in Fireworks.
    Is this a known limitation of Fireworks?
    Geraint

  • Problem using linear workflow to composite images with nonlinear transparency

    First off: I love what a fully linear workflow does for the proper behavior of gradients, transparency and the like.
    HOWEVER, in both Premiere and AE I have found no simple way (within the linear workflow) to properly/accurately composite images that were not linear to start with, or that have nonlinear alpha channels. For example, I have an 8-bit RGB Photoshop file with 2 layers: one with transparency and the other a white background. If I export the transparent layer as a PNG (or, for that matter, save it as a standalone PSD), bring it into PP or AE and put a white solid behind it, it doesn't look the same as in the original Photoshop file unless I turn off "composite in linear color".
    My best workaround so far is to bring the transparent image into AE (working in a linear color space) and manually adjust the gamma of its alpha channel, using a combination of track mattes, Levels, and Shift Channels effects, to counteract the improper interpretation of the alpha channel's gamma, since there is no effect or setting to change the linear/nonlinear interpretation of the alpha channel's gamma.
    Is anyone at Adobe working on addressing this issue? Why isn't there an effect (or a setting in "interpret footage") to let us specify whether the source material has linear or nonlinear gamma? EDIT: I have just realized that there is a color management option in the AE footage interpretation dialog (but not in PP), but when I attempt to use it with a PNG it is greyed out/disabled.
    Thanks,
    Henry

    Well, you could just not use PNG and use TIFF instead? The behavior is built into the file format, and using a limited 8-bit format in a linear workflow doesn't make that much sense in the first place...
    Mylenium
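
    For what it's worth, the mismatch Henry describes can be reproduced numerically. A small Python sketch (assumed values, standard sRGB transfer curves) shows that compositing the encoded values directly and compositing in linear light give different results for the same 50% alpha blend:

        import numpy as np

        def srgb_to_linear(c):
            c = np.asarray(c, dtype=np.float64)
            return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

        def linear_to_srgb(c):
            c = np.asarray(c, dtype=np.float64)
            return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

        fg, bg, alpha = 0.2, 1.0, 0.5   # sRGB-encoded foreground over white at 50% alpha

        # Composite on the encoded values (what the flattened Photoshop file shows).
        encoded = alpha * fg + (1 - alpha) * bg

        # Composite in linear light, then re-encode ("composite in linear color").
        linear = linear_to_srgb(alpha * srgb_to_linear(fg) + (1 - alpha) * srgb_to_linear(bg))

        print(encoded, linear)          # the two numbers differ, hence the visible mismatch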

  • Resizing of composite image

    I have a splash panel that consists of one background image. On top of it I have placed a company logo and version details. I want to resize this composite image when the panel gets resized.
    Can anyone suggest a better way of doing this?

    "this method"
    What method???
    Post a small demo program that is generally compilable, runnable, and reproduces your problem. See: http://homepage1.nifty.com/algafield/sscce.html and this wiki.

  • Can You Create Composited Images w/ Lightroom???

    I'm trying to figure out if Lightroom will satisfy all my needs. As far as image editing is concerned, Lightroom should be sufficient for me. However, I do occasionally have a need to composite several images into one. Can Lightroom do this?
    If not, can Photoshop Elements 5.0 do this, or do I need the full Photoshop?

    Lightroom does not currently have the built-in ability to stitch images together into a composite, panorama, or HDR.
    There are plug-ins for Lightroom that will do this for you, though. LR/Enfuse, for example, will do exposure layering and blending.
    There are a ton of presets out there, and a growing number of plug-ins; Google J.Friedly and T. Armes, who both have some great shareware ones.
    Christopher

  • How to combine 20 identical shots into a single composite image

    I want to shoot a moving water scene with a tripod-mounted Canon DSLR, take +/- 20 images from the exact same location, and then combine them into a single composite shot in Photoshop CS6. How do I go about this?

    It sounds like you want to do image stacking. See Photoshop Help | Image Stacks (Photoshop Extended).
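
    If it helps, the same idea can be sketched outside Photoshop in a few lines of Python with NumPy and Pillow (file names are placeholders); a Mean stack of aligned tripod frames is what smooths the moving water:

        import glob
        import numpy as np
        from PIL import Image

        paths = sorted(glob.glob("water_*.jpg"))   # ~20 aligned frames (placeholder names)
        frames = [np.asarray(Image.open(p), dtype=np.float64) for p in paths]
        mean = np.mean(np.stack(frames), axis=0)   # Mean stack mode averages out the motion
        Image.fromarray(mean.astype(np.uint8)).save("stacked_composite.jpg")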

  • Compositing images?

    I'm trying to place an image behind another image. Can you layer images in Pages? When I put the image that I want in the background, it completely disappears. In order to "see" it, the image must be in the foreground. But that places it over the top of the image that I want in the foreground. Is compositing/layering not possible in Pages?

    Can you layer images in Pages?
    Pages does not have layers, whereas Photoshop 3 and up, PageMaker, and other pro applications for PostScript imaging do have layers.
    Pages has stacking of objects in one and the same base layer, with the objects obscuring one another where they overlap.
    What Pages has as an Apple application, which PostScript-based applications did not have until PDF 1.4, is transparency in the stack and blend spaces between the colours in the stack.
    This means that you can change the transparency of the top object so the object it overlaps can be seen. But because transparency is not supported in PDF 1.3 or in PostScript, the transparency level of the imaging model is not as portable as plain stacking with obscuring of overlapped objects. I have a little QuickTime video, distributed with the beta of Mac OS 7.5, that shows transparency in Ready,Set,Go! - an application that is still around, if memory serves.
    /hh

  • Use OpenFrameWorks/Quartz Composition image data?

    So I have been tinkering with the Skeleton sample project on a Mac, and I have a couple of questions. First off, how can we pass image data from, say, a Quartz Composition or an openFrameworks project (C++, Xcode), or any other source, to the render function of the plugin? For example, from openFrameworks I can get the pixels as an array of unsigned chars. Can I supply that to the render function? If so, how? Also, I have seen people using Quartz Compositions to make plugins for AE. Can someone help me figure out how to pass the image data from the composition to the render function in a plugin? Please bear with me, as I am a newbie, and I really hope someone can help me.
    Thank you!

    as for bit depths:
    there are 3 different iteration suites, one for each bit depth.
    i couldn't find them in the documentation but if you go to the definition of any of them in the code, you'll find the others.
    you must use the appropriate suite for the kind of buffer you're using or the function will fail.
    how do you tell which one to use?
    if you created the pixel world yourself, then you know its kind.
    to find out which depth AE hands you, use PF_GetPixelFormat.
    take a look at the "smarty pants" sample project. it shows how to handle that.
    actually the shifter sample implements that as well, so forget what i just said. (yes you were right)
    as for your x y question, i'm not sure what you're asking.
    i managed to understand that question in 2 ways.
    so here are 2 answers.  :-)
    1. if you're asking how to tell the coordinates of the pixel you're currently being handed by the iteration suite,
    then the x and y coordinates are given to you by the iteration function.
    to be more precise, it gives you the following data:
    refcon - a pointer to whatever piece of memory you sent to the iteration function. (use it to transfer data into your iteration function)
    x - the x coordinate of the current pixel in the output world.
    y - that one's kind of obvious...
    in - the current pixel's content from the input world (if the in and out worlds are of the same size, then the x and y coordinates for both worlds are the same)
    out - the structure into which you put your result RGBA (you may fill that world with whatever you like before the iteration, thus getting two inputs to use for your calculation)
    2. if you're asking how to tell which pixels are being sampled using the sample suite (as used in the shifter sample),
    then these are shorts, bit shifted.
    these values are compatible with what you get reading data from a point param.
    these values are NOT comfortable to use for anything else.
    why? relics of what was most efficient CPU-wise 12 years ago.
    you can sample a pixel buffer in 3 ways:
    "nearest neighbor" for fast performance,
    "subpixel" for accurate results,
    and "area sample" for... when you need... an area. (duh)
    for more info, investigate the "PF_SampPB" structure.
    in any case,
    you don't have to get the input using the sample suite.
    you can always use the input pixel as it is, or get the pixel from the quartz thingie you want to use.
    i tend to think that the first answer was what you were looking for, but what the heck.
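
    As a small aside on the "shorts, bit shifted" values above: assuming the usual 16.16 fixed-point layout of point parameters in the AE SDK, converting between the packed integers and ordinary floats looks roughly like this (illustrative Python, not SDK code):

        # Hedged illustration, assuming 16.16 fixed point (16 fractional bits):
        def fixed_to_float(v):
            return v / 65536.0

        def float_to_fixed(f):
            return int(round(f * 65536))

        print(fixed_to_float(float_to_fixed(12.5)))   # 12.5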

  • Can I apply Burn to a composite image but without flattening?

    I want the effect of applying Burn to a flattened image, but without actually flattening. My image consists of several layers, and if I apply Burn to the image layer alone it doesn't work as well as when it is applied to the flattened image. But I don't want to flatten the image.
    Is it possible?
    Background
    I have uploaded a test image here. It is a crop from a Kodachrome scanned with a Nikon Coolscan V ED, which causes flare wherever highlights abut shadows. I can remove the flare fairly effectively on a flattened image with several Burn passes, using a brush size of 75, Range: Midtones, Exposure: 20%. But if I try it on the unflattened image layer, the results are not as good.

    Guy Burns wrote:
    I have uploaded a test image here…
    That image file consists of a single pixel layer, the Background. The other four layers are all adjustment layers: three Curves and one HSL layer.
    No, one cannot apply Burn to an adjustment layer, only to a bitmap layer, in this case your Background.
    I cannot imagine what you're trying to describe here. ???
    I would do any burning on the Background before adding any adjustment layer(s) in the first place, but I repeat that I'm not following you at all now that I've seen your file.

  • Composite image, color mask, opacity question?

    Hello,
    Illustrator newcomer here who would love some help with a problem.
    I have a photo taken against a green-screen background, and I would love for the green to become transparent so that a background artwork layer can come through behind the photo. It seems like a simple task, but I can't figure it out. Can anyone point me to a past discussion where help could be found?
    Thanks

    [scott w] wrote:
    After Effects is not needed unless you are working on video. Silly. Might as well post that it could be done on a $6000 Silicon Graphics machine... sure it CAN be done, but going that route is sort of ridiculous.
    Well, we should ask the OP what they are using the image for; after all, Illustrator is often used for video work. You might not use it that way, but many users do.
    I do not see anything ridiculous about the suggestion that was made. I think you have tunnel vision and only see Illustrator as a print-media tool, when Illustrator was integrated with Adobe Premiere back when Premiere and After Effects were two of the only video programs around.
    I think you do not realize how often Illustrator is used for projects in After Effects.
    I think you should rethink what you wrote, think beyond your own experience, and give other users more benefit of the doubt than you are willing to.
    For instance, this is the New Document window:
    It is a video document that is being created, and it is a preset built into AI.
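
    For reference, the operation the original poster is after (making the green pixels transparent so a background shows through) can be sketched in Python with NumPy and Pillow; the file names and threshold values below are guesses, and a real key would also need softening around the edges:

        import numpy as np
        from PIL import Image

        img = np.asarray(Image.open("greenscreen.jpg").convert("RGB"), dtype=np.int16)
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        # Crude "mostly green" test; the thresholds are arbitrary assumptions.
        green = (g > 100) & (g > r + 40) & (g > b + 40)
        alpha = np.where(green, 0, 255).astype(np.uint8)
        rgba = np.dstack([img.astype(np.uint8), alpha])
        Image.fromarray(rgba, "RGBA").save("keyed.png")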

  • Make Composite image from layer data instead from merged data section...

    I am working on parsing the PSD file and saving the data as per the Photoshop 6 documentation. I need to make an image from the layer data alone, with different combinations of layers selected, visible, etc., and not from the merged data section in the file. Can anyone please help me with this issue?
    Thank you.

    Ask in the SDK forum.
    Mylenium
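
    If a hand-rolled parser isn't a hard requirement, the third-party psd-tools Python package can already rebuild an image from the layer records rather than the merged data section. A rough sketch follows; the composite()/layer_filter API is an assumption to verify against the psd-tools documentation:

        from psd_tools import PSDImage   # third-party package, assumed installed

        psd = PSDImage.open("layered.psd")
        # Recompose from the layer data, keeping only the currently visible layers;
        # the layer_filter hook is where "different combinations of layers" would go.
        image = psd.composite(layer_filter=lambda layer: layer.is_visible())
        image.save("from_layers.png")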
