Using ProShow Gold in conjunction with PSE4

I would be interested if anybody has any comments on using ProShow Gold in conjunction with PSE4 (under Windows XP).
I am currently trying out the evaluation version of ProShow Gold 2.6.
My reasons for trying out ProShow as an alternative to the built in PSE4 slideshow maker are:
(1) More and better transitions available.
(2) More control over audio - timing, fading and voice-overs for individual slides.
(3) DVD burning available without going to Nero or other product.
(4) Getting away from the problem of backing up slideshows under PSE4.
ProShow Gold does have a folder pane for picking photos, but if the photos required for a particular show are spread over a large number of folders (which is almost certainly the case for most PSE4 users), it is not very convenient.
I therefore use Organizer, with all its tagging and searching and collection facilities as a "front end" for ProShow from which to drag selected photos. The only problem is that it is rather hard to find room on a single monitor for both programs simultaneously.
ProShow also allows for an "external image editor" to be specified. But after nominating PSE4, I find that clicking on ProShow's "Edit" button only invokes Organizer, and does not open the image in Editor as I would have expected.
EDIT - please disregard last paragraph - I did not have things set up properly.

I use Proshow in preference to Elements. I find the quality of the end product is far better.
I also use the Elements Organiser to make a collection of the images for the show and then drag them onto ProShow. I've found it works best by dragging the images into ProShow's Light Box, which can be detached from the main window. You can shrink the ProShow main window down very small, and the Organiser too. As long as you have all the images selected in the Organiser and can see at least one of them, you can drag that one over to ProShow.
You need to be careful of the order in which they appear in your Collection.
I've found a 'feature' in ProShow that may just be unique to my setup: the preview, full screen or otherwise, is not sharp. I have to go into the Make Executable dialog and set up my screen dimensions. Once this is done the output is nice and clear.
Colin

Similar Messages

  • I need your help with a decision to use iPhoto.  I have been a PC user since the mid 1980's and more recently have used ACDSee to manage my photo images and Photoshop to edit them.  I have used ProShow Gold to create slideshows.  I am comfortable with my

I need your help with a decision to use iPhoto.  I have been a PC user since the mid-1980s and more recently have used ACDSee to manage my photo images and Photoshop to edit them.  I have used ProShow Gold to create slideshows.  I am comfortable with my own folder and file naming conventions. I currently have over 23,000 images, of which around 60% are scans going back 75 years.  Since I keep a copy of the originals, the storage requirement for over 46,000 images is huge: 180 GB plus.
    I now have a Macbook Pro and will add an iMac when the new models arrive.  For my photos, I want to stay with Photoshop which also gives me the Bridge.  The only obvious reason to use iPhoto is to take advantage of Faces and the link to iMovie to make slideshows.  What am I missing and is using iPhoto worth the effort?
If I choose to use iPhoto, I am not certain whether I need to load the originals and the edited versions. I suspect that just the latter is sufficient.  If I set Photoshop as my external editor, I presume that iPhoto will keep track of all changes moving forward.  However, over 23,000 images in iPhoto makes me twitchy, and they appear hidden within iPhoto.  In the past, I have experienced syncing problems with, and database errors in, large databases.  If I break up the images into a number of projects, I lose the value of Faces reaching back over time.
    Some guidance and insight would be appreciated.  I have a number of Faces questions which I will save for later. 

Bridge and Photoshop are a common file-based management system. (Not sure why you'd have used ACDSee as well as Bridge.) In any event, it's on the way out. You won't be using it in 5 years' time.
Until now, the lack of processing power on your computer left no choice but to organise this way. But file-based organisation is about as sensible as organising a shoe warehouse based on the colour of the boxes. It's also ultimately data-destructive.
    Modern systems are Database driven. Files are managed, Images imported, virtual versions, lossless processing and unlimited editing are the way forward.
    For a Photographer Photoshop is overkill. It's an enormously powerful app, a staple of the Graphic Designers' trade. A Photographer uses maybe 15% to 20% of its capability.
Apps like iPhoto, Lightroom and Aperture are the way forward - for photographers. There's the 20% of Photoshop that shooters actually use, coupled with management and lossless processing. Pop over to the Aperture or Lightroom forums (on the Adobe site) and one comment shows up over and over again: "Since I started using Aperture/Lightroom I hardly ever use Photoshop any more..." And if there is a job that these apps can't do, then the (much) cheaper Elements will do it.
    The change is not easy though, especially if you have a long-standing and well thought out filing system of your own. The first thing I would strongly advise is that you experiment before making any decisions. So I would create a Library, import 300 or 400 shots and play. You might as well do this in iPhoto to begin with - though if you’re a serious hobbyist or a Pro then you'll find yourself looking further afield pretty soon. iPhoto is good for the family snapper, taking shots at birthdays and sharing them with friends and family.
    Next: If you're going to successfully use these apps you need to make a leap: Your files are not your Photos.
    The illustration I use is as follows: In my iTunes Library I have a file called 'Let_it_Be_The_Beatles.mp3'. So what is that, exactly? It's not the song. The Beatles never wrote an mp3. They wrote a tune and lyrics. They recorded it and a copy of that recording is stored in the mp3 file. So the file is just a container for the recording. That container is designed in a specific way attuned to the characteristics and requirements of the data. Hence, mp3.
    Similarly, that Jpeg is not your photo, it's a container designed to hold that kind of data. iPhoto is all about the data and not about the container. So, regardless of where you choose to store the file, iPhoto will manage the photo, edit the photo, add metadata to the Photo but never touch the file. If you choose to export - unless you specifically choose to export the original - iPhoto will export the Photo into a new container - a new file containing the photo.
    When you process an image in iPhoto the file is never touched, instead your decisions are recorded in the database. When you view the image then the Master is presented with these decisions applied to it. That's why it's lossless. You can also have multiple versions and waste no disk space because they are all just listings in the database.
    These apps replace the Finder (File Browser) for managing your Photos. They become the Go-To app for anything to do with your photos. They replace Bridge too as they become a front-end for Photoshop.
    So, want to use a photo for something - Export it. Choose the format, size and quality you want and there it is. If you're emailing, uploading to websites then these apps have a "good enough for most things" version called the Preview - this will be missing some metadata.
    So it's a big change from a file-based to Photo-based management, from editing files to processing Photos and it's worth thinking it through before you decide.

  • How to use new iPad3 in conjunction with Lightroom on desktop iMac?

    Workflow:  Will this work?  Camera RAW to iPad3 to Lightroom???
    I would like to use my new iPad3 in conjunction with my already-established Lightroom workflow on my desktop Mac.  What I have in mind is to shoot in RAW (Sony SLT A-65 - 24.3 megapixels) and then, in the field, transfer my photos from the camera to the iPad3 (64GB version).  On the iPad I might do some rudimentary editing - deleting duds, etc. - but I will still do all my significant editing in Lightroom on the Mac.  Thus, I need to import the photos, from the iPad, in either RAW or DNG into LightRoom once I get back to my office.  My questions are:
    Is it possible to import RAW photos into the iPad3 and either keep them in RAW or convert them to DNG?
    Is it possible to export RAW or DNG photos from the iPad3 into Lightroom?
    What app(s) do I need on the iPad3 to make this happen?
    FYI - I am using Lightroom3, but will upgrade to Lightroom4 if that makes any difference in whether this will work or not.
    Thanks so much, in advance, for your suggestions and your advice!

You can load your RAW images onto any iPad using the Camera Connection Kit, which comes with two dongles: one for inserting the SD card from your camera to load the images, and a USB dongle to which you can connect your camera's USB cord to load the images onto the iPad. There are a few CF card readers that will work with the iPad if your camera uses those; a Google search will ferret them out for you. Additionally, you can load images to an iPad, iPhone or iPod touch (as well as laptops and desktop computers) over WiFi as you capture them, using an EyeFi SD card.
    http://www.eye.fi/
A good app for working with EyeFi on the iPad is Shuttersnitch (though it does not coordinate with Lightroom)
    http://www.shuttersnitch.com/
Unfortunately there isn't any RAW-processing image-editing software for the iPad whose edits Lightroom can make use of. There is, though, an app called Photosmith that will allow you to rate, label, add keywords etc.; you can then upload that info to Lightroom on your laptop or desktop computer and have it synced to the images. You do have to transfer the image files themselves from the iPad to your workstation separately.
    http://www.photosmithapp.com/

  • Using Tangosol Coherence in conjunction with Kodo JDO for distributing caching

    JDO currently has a perception problem in terms of performance. Transparent
    persistence is perceived to have a significant performance overhead compared
    to hand-coded JDBC. That was certainly true a while ago, when the first JDO
    implementations were evaluated. They typically performed about half as well
    and with higher resource requirements. No doubt JDO vendors have closed that
    gap by caching PreparedStatements, queries, data, and by using other
    optimizations.
    Aside from the ease of programming through transparent persistence, I
    believe that using JDO in conjunction with distributed caching techniques in
    a J2EE managed environment has the opportunity to transparently give
    scalability, performance, and availability improvements that would otherwise
    be much more difficult to realize through other persistence techniques.
    In particular, it looks like Tangosol is doing a lot of good work in the
    area of distributed caching for J2EE. For example, executing parallelized
    searches in a cluster is a capability that is pretty unique and potentially
very valuable to many applications. There would appear to me to be a lot of
    synergy between Kodo JDO and Tangosol Coherence. Using Coherence as an
    implementation of Kodo JDO's distributed cache would be a natural desire for
    enterprise applications that have J2EE clustering requirements for high
    scalability, performance, and availability.
    I'm wondering if Solarmetric has any ideas or plans for closer integration
    (e.g., pluggability) of Tangosol Coherence into Kodo JDO. This is just my
    personal opinion, but I think a partnership between your two organizations
    to do this integration would be mutually advantageous, and it would
    potentially be very attractive to your customers.
    Ben

    Marc,
    Thanks for pointing that out. That is truly excellent!
    Ben
    "Marc Prud'hommeaux" <[email protected]> wrote in message
    news:[email protected]...
    Ben-
    We do currently have a plug-in for backing our data cache with a
    Tangosol cache.
    See: http://docs.solarmetric.com/manual.html#datastore_cache_config
    Marc Prud'hommeaux [email protected]
    SolarMetric Inc. http://www.solarmetric.com

  • How to use SOA Suite in conjunction with SOA Analysis and Design Tools

    Hi everybody,
    I am a novice in this field and I need some help regarding integrating analysis and design tools with SOA Suite.
We used to analyze and design with Oracle Designer and use its powerful form generator to develop a system. It covered almost all of the software lifecycle and kept traceability between analysis, design and implementation.
I have studied the SOA concepts and read some papers about SOA Suite. I have also installed the SOA demo based on SOA Suite and found it absolutely amazing, but my problem is that it seems Oracle does not have any tools for SOA analysis and design. Am I right? If so, how can we analyze and design a system based on SOA concepts and implement it using SOA Suite in such a way that keeps traceability? What tools are used for this purpose?
It seems that IBM has some tools like Rational Software Architect and Rational Suite which enable people to analyze and design based on SOA concepts and then generate some pieces of code (like Oracle Designer in the old days), but is it possible to design in these tools and then generate code for SOA Suite? (For example, generating a BPEL file from a design model.)
As I said before, I am a novice in this field and I would be grateful if other users could share their experiences regarding this matter.
    Any help would be highly appreciated.
    Thanks in advance,
    Navid

    Learn About All Things SOA:: SOA India 2007:: IISc, Bangalore (Nov 21-23)
    Aligning IT systems to business needs and improving service levels within the constraints of tight budgets has for long been the topmost challenge for CIOs and IT decision makers. Service-oriented Architecture (SOA) provides a proven strategy to clearly address both of these objectives. Creating more agile information systems and making better use of existing infrastructure are two leading factors that are boosting SOA adoption across large, medium, and small Indian industries from the BFSI, Retail, Telecom, Manufacturing, Pharma, Energy, Government and Services verticals in India. If you are an IT decision maker belonging to any of these verticals, SOA India 2007 (IISc, Bangalore, Nov 21-23 2007) presents a unique opportunity to gather cutting-edge business and technical insights on SOA and other related areas such as BPM, BPEL, Enterprise 2.0, SaaS, MDM, Open Source, and more.
    At SOA India 2007, acclaimed SOA analysts, visionaries, and industry speakers from across the world will show you how to keep pace with change and elevate your IT infrastructure to meet competition and scale effectively. The organisers are giving away 100 FREE tickets worth INR 5000 each to the first 100 qualified delegates belonging to the CxO/IT Decision Maker/Senior IT Management profile, so hurry to grab this opportunity to learn about all things SOA. You can send your complete details, including your designation, e-mail ID, and postal address directly to Anirban Karmakar at [email protected] to enrol in this promotion that is open until 12 October 2007.
    SOA India 2007 will also feature two half-day workshops on SOA Governance (by Keith Harrison-Broninski) and SOA Architecture Deep Dive (by Jason Bloomberg). If you are an IT manager, software architect, project leader, network & infrastructure specialist, or a software developer, looking for the latest information, trends, best practices, products and solutions available for building and deploying successful SOA implementations, SOA India 2007’s technical track offers you immense opportunities.
    Speakers at SOA India include:
    •     Jason Bloomberg, Senior Analyst & Managing Partner, ZapThink LLC
    •     Keith Harrison-Broninski, Independent consultant, writer, researcher, HumanEdJ
    •     John Crupi, CTO, JackBe Corporation
    •     Sandy Kemsley, Independent BPM Analyst, column2.com
    •     Prasanna Krishna, SOA Lab Director, THBS
    •     Miko Matsumara, VP & Deputy CTO, SoftwareAG
•     Atul Patel, Head MDM Business, SAP Asia Pacific & Japan
    •     Anil Sharma, Staff Engineer, BEA Systems
    •     Coach Wei, Chairman & CTO, Nexaweb
    •     Chaitanya Sharma, Director EDM, Fair Isaac Corporation
    A partial list of the sessions at SOA India 2007 include:
    •     EAI to SOA: Radical Change or Logical Evolution?
    •     BPEL: Strengths, Limitations & Future!
    •     MDM: Jumpstart Your SOA Journey
    •     Governance, Quality, and Management: The Three Pillars of SOA Implementations
    •     Building the Business Case for SOA
    •     Avoiding SOA Pitfalls
    •     SOA Governance and Human Interaction Management
    •     Business Intelligence, BPM, and SOA Handshake
    •     Enterprise 2.0: Social Impact of Web 2.0 Inside Organizations
    •     Web 2.0 and SOA – Friends or Foe?
    •     Achieving Decision Yield across the SOA-based Enterprise
    •     Governance from day one
    •     Demystifying Enterprise Mashups
    •     Perfecting the Approach to Enterprise SOA
    •     How to Build Cost Effective SOA. “Made in India” Really Works!
    For more information, log on to http://www.soaindia2007.com/.

  • How to use MS Project in conjunction with a time-tracker tool

    In my new job, the company is using a time-tracking tool in a very strict way.
    Allow me to explain...
    Let's say the project has two phases: Ph1 and Ph2.
    Ph1 starts on day 1 and effort is 5 days.
    Ph2 starts after Ph1 and effort is 10 days.
There are 3 resources available, R1, R2 and R3, and after a careful estimate of skills and availability it has been decided the following:
    - Ph1 is assigned to R1 from day 1 to day 5, resulting in a duration for Ph1 of 5 days.
    - Ph2 is assigned to...
    ...R2 from day 6 to day 10
    ...R3 from day 8 to day 12
    resulting in a duration of 7 days.
The above is mapped into the time-tracking tool to allow each resource to track time against Ph1 or Ph2 only in the planned timeframe. That is: R3 will be able to track hours against Ph2 only from day 8 and only till day 12. This is to avoid resources tracking more time than was originally allocated: if that is necessary, it must be explained why, a new plan must be calculated and more budget must be allocated. When the OK is given, the tasks in the time-tracking tool will be changed accordingly.
    If the above situation is clear, my question is: how would you model this in MS Project?
    If the WBS in MSP is simply
    1. Project X
    1.1 Ph1
1.2 Ph2
    when allocating R2 and R3 to Ph2, how can I specify that R2 will work only from day 6 to 10 and R3 from day 8 to 12?
    At the moment the only clean solution I found is to go into the task usage view and adjust, for each day, the working hours of each resource assigned to the activity.
    A second less clean solution is to have a WBS like follows
    1. Project X
    1.1 Ph1
1.2 Ph2
    1.2.1 Ph2.R2
    1.2.2 Ph2.R3
    that is split Ph2 into two sub-activities, one for each resource.
Thanks for any help.

    Thanks a lot to both of you for your suggestions.
Unfortunately the time-tracking tool cannot be changed: it is a new SAP-based tool deployed worldwide.
The problem is that each project involves different teams from different countries for a fixed "budget" (or a fixed number of man-days). Therefore there is a global PM in London who will define in the time-tracking tool
    a WBS like this
    1. Project X
    1.1 Analysis
    1.1.1 Analyse impact for team in Rome
    1.1.2 Analyse impact for team in Amsterdam
1.1.3 Analyse impact for team in Bangkok
    1.2 Development
    1.2.1 Develop A
    1.2.2 Develop B
    1.2.3 Develop C
    1.3 Customer Acceptance Test
1.3.1 Prepare test environment by team in Amsterdam
1.3.2 Test by final user
1.3.3 Support user testing by team in Rome
then he would assign the development tasks to the resources that are supposed to work on them, and for only the estimated time. The reason is easy: he wants to make sure that a resource tracks time only against the task she is supposed to work on, and for only the estimated number of days, to avoid exceeding the budget. If extra effort is required for a resource on a task, a long battle will start before more time is allocated to that task.
    This we cannot change.
The problem now is on our side, the teams in Rome/Amsterdam/Bangkok: we must keep our own plan in MSP. Our team in Rome is divided into 4 sub-teams, one for each module of the core product, each with a team leader assigning tasks to resources with Asana. Until last week, and for months, there was no PM in Rome leading the four teams, and of course there was no global plan for the tasks in Rome.
My first attempt was to try a Gantt that would stop at phase level, without going down to task level, which is under the responsibility of the 4 team leaders, the reason being that we have something like 25 projects, some with dependencies on each other. I simply wanted to avoid having one huge MSP file with all projects down to task level: from experience in my previous project (an ERP upgrade for Oracle EBS at FAO), this approach did not work very well.
But this brings up the problem I described: if I stop at "1.2 Development" in the WBS, then when I assign R1 and R3, to make sure their effort is planned according to what has been set up in the time-tracking tool, I need to edit work in the task usage view resource by resource, activity by activity.
    The other option is to insert under 1.2 development another level with all tasks from Asana, each task being assigned to one resource:
    1.2 Development
    1.2.1 Develop A
    1.2.1.1 Develop class x (by R1)
    1.2.1.2 develop class y (by R3)
    1.2.2 Develop C
    1.2.2.1 Develop new pricing interface (by R4)
    1.2.2.2 Modify all logos (by R7)
A third option would be to have one detailed plan for each project, down to the Asana tasks, and then one higher-level plan summarising all projects, down to phases only.

  • Using external drive in conjunction with music on my internal hard drive

OK, I, like many others, have way too much music haha. I have a 250 GB external drive with all my music on there. This is working great, but I'm tired of plugging the hard drive in using FireWire and then having to eject it when I'm ready to leave. What I want to do is keep a small amount of music on my computer to listen to. I'd like to still be able to plug the hard drive in and listen to the remainder of my music through iTunes, but I'm unsure of how to do this.
My library will be the music that is on my internal drive... so is there a way to set up an alternate library for my external drive? Kinda like a playlist or new folder... any help is much appreciated. Hope this all makes sense... if not I'll try and re-explain.

Also, sorry if this is a reposted question... I tried to search around the discussions and couldn't find the answer to my specific situation... actually didn't even see anyone trying to do what I'm talking about... again, thanks for any help

  • Unable to use dtrace USDT in conjunction with -xipo=2?

    We're currently using Studio 12/CC 5.9 and our C++ application includes an internal static library into which we embed several of our own dtrace userland probes (USDT).
    We're looking at updating to Studio 12.3/CC 5.12 but have run into problems with the dtrace probes in release/optimised builds only.
    The basic build process for the library is as follows:
    1) Use dtrace -h to generate a probe header file from a foo.d definition file
    2) Compile (C++) source file foo.cpp which references the macros from (1) to create foo.obj
    3) Use dtrace -G to generate a dtrace probe object foo_probe.obj and update foo.obj
    4) Use CC -xar to create a static library libfoo.a containing both objects from (3)
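The four steps can be sketched as build commands. This is only a sketch: the file names follow the post, but the exact flag spellings shown here are illustrative rather than a verified Solaris Studio invocation, so check them against your actual makefile.

```shell
# 1) Generate the probe macro header from the D definition file
dtrace -h -s foo.d -o foo_probe.h

# 2) Compile the C++ source that references the generated macros
CC -c foo.cpp -o foo.obj

# 3) Generate the probe object; this also rewrites foo.obj in place
dtrace -G -s foo.d -o foo_probe.obj foo.obj

# 4) Archive both objects into the static library
CC -xar -o libfoo.a foo.obj foo_probe.obj
```

Step (3) is the one that matters for the problem described below: dtrace -G patches the call sites in foo.obj, so anything that later recompiles or regenerates that object can undo the patching.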
    This works fine in debug builds with both CC 5.9 and CC 5.12, however release builds with CC 5.12 fail when we try to link a binary against the static library with "Undefined symbol" errors referencing the dtrace probe functions in foo.obj.
I noticed that with CC 5.12, step (4) above seems to 'undo' the changes made to the foo.obj file at step (3). I also noticed that step (4) is much slower with 5.12, suggesting that the compiler is doing some extra work, which then led me down the path of removing -xipo=2 from my command line, at which point I get a library which I can link OK without missing symbols.
    Questions:
    1) Have there been changes to CC -xar which would explain this (eg previously -xipo=2 was ignored for archive creation)?
    2) If it does 'make sense' that I'm having problems here, is there any better solution than simply removing -xipo=2 from the library creation step? Obviously I'd like to get the performance benefits if possible, but I suppose that it may simply not be compatible with the way dtrace USDT providers work...?
    Thanks,
    Matt.

    Hi,
    The way that -xipo works is that at link time it recompiles all the object files doing inlining and optimisation between the files at that point. This will have the effect of undoing the dtrace -G processing, leaving just the raw dtrace function calls.
You are using archive libraries, so one workaround is to use the flag -xipo_archive=... set to either 'none', meaning don't process the archive libraries, or 'readonly', which will inline code from the archive libraries but won't inline code into them.
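Concretely, the workaround applies at the link step, for example (the flag value 'readonly' is from the reply above; the object and library names are illustrative, not from the original post):

```shell
# Keep cross-file optimisation, but stop the compiler from recompiling
# archive members at link time, which would undo the dtrace -G patching
CC -xipo=2 -xipo_archive=readonly -o myapp main.obj -L. -lfoo
```

With 'readonly', code from libfoo.a can still be inlined into main.obj, but the archive's own objects, including the dtrace-processed foo.obj, are left untouched.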
    Regards,
    Darryl.

  • Airport in conjunction with Linksys

I'm using an AirPort in conjunction with a Linksys that is not wireless. The Linksys is connected to 3 PCs via Ethernet, and the 4th port on the Linksys is plugged into the AirPort. This provides internet that works without problems on our Apple laptop. However, I was wondering if it is possible to share files between the laptop and the PCs with this configuration. If it is possible, I haven't been able to figure out how to do it. Any information on how to use this configuration and share files or printers would be greatly appreciated. Also, any suggestions for another way to use this configuration by changing which is plugged into what, without the purchase of any new items, i.e. connect the AirPort to the broadband connection and the Linksys to the AirPort (the reverse of what it is currently)?

    Hello sundaydrive. Welcome to the Apple Discussions!
    You would just need to reconfigure the AirPort Express Base Station (AX) as a bridge to allow wireless devices connected to it to "see" the wired devices connected to the Linksys. In a bridge configuration, the AX will "pass thru" the NAT/DHCP services that are provided by the Linksys router.
To set up the AX as a bridge, either connect to the AX's wireless network or temporarily connect your computer directly (using an Ethernet cable) to the Ethernet port of the AX, and then, using the AirPort Admin Utility (located in the /Applications/Utilities folder), make these settings:
    Network tab
    o Distribute IP addresses (unchecked)
    o Apply the new setting.

  • I am using a MacBook Pro in conjunction with a Time Capsule.  I back up all my Aperture libraries on it, but how can I view the images that are stored on the Capsule, please?


If you want to see what is in a file that is backed up by Time Machine then you have to restore the entire file. Having said that, I have had good experience with Time Machine and individual files: if the file is there, then all of its components are almost certainly there. A way to get around this in Aperture would be to use referenced images. The images then still exist as individual files and can be backed up and restored individually. You would have to do so on a file-by-file basis though, and your album information would still only be saved within the Aperture library.

  • I have a new time capsule, want to use it in conjunction with iMac G5 and MacBook; laptop runs Snow Leopard  but G5 can't install Snow Leopard, is stuck at OS 10.4.11.  Am I doomed?  Can anyone advise me?  Thanks..

    I have a new Time Capsule, want to  use it in conjunction with an iMac G5 and a MacBook.  MacBook runs Snow Leopard, but G5 (lacking Intel processor) can't install Snow Leopard, is stuck at OS 10.4.11.  Am I doomed?  Will appreciate any advice.  Thanks.

You should still be able to get a copy of Mac OS X 10.5 (Leopard), which should run on your G5.

  • Can I use my Mac Mini in conjunction with my iMac?

Can I use my Mac Mini in conjunction with my iMac? I cannot upgrade the OS of the iMac due to software that I need, which Lion does not support.
If I could bounce back and forth, or use the software from the Mac Mini with the iMac, this would solve my dilemma.

Before I buy a networked drive, can I use the AirPort to connect to an external USB drive and set both Macs to use the drive as a Time Machine destination?
    Can you hook this up? Sure.
    Is is supported by Apple?  No.
    Some users report that this seems to work when they try it.
    Others....like me....have corruption problems anywhere from a few days to a few months after I set up a test....which I do whenever a new version of firmware or a new operating system becomes available.
    We cannot predict your results, so if you decide to try this, and your backups are valuable, you might want to have a secondary backup plan in place as well....just in case.
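    For reference, the usual way people coax Time Machine into even offering an unsupported network volume is a hidden preference. A minimal sketch, assuming the AirPort-attached share is already mounted; as noted above, this configuration is not supported by Apple, so treat it accordingly:

```shell
# Make Time Machine's preference pane list network volumes that Apple
# does not officially support as backup destinations.
defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1

# Then choose the mounted AirPort/USB share in
# System Preferences > Time Machine > Select Disk...
```

    Setting the preference back to 0 hides unsupported volumes again.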

  • Is it possible to use run-time configuration in conjunction with Session Manager?

    I'm trying to avoid using MAX, basically because it's a pain for users to configure MAX when the application already knows what the configuration is. (I actually get the configuration from a database, but that's another story.) I've experimented with run-time configuration (see page 4-31 of the 'IVI Driver Toolset' manual) and it works fine. However, when I try to use this in conjunction with Session Manager, Session Manager complains that my logical name is not a driver session name. I suspect that Session Manager is looking up logical names from a different source, so run-time configuration won't work. Any suggestions? I've attached the files I'm working with.
    Attachments:
    SessionMgrTrial.zip ‏41 KB

    Hello Mark:
    The Session Manager GetLogicalNames method does not distinguish between logical names and virtual instrument names: a caller cannot ask for just logical names or just virtual instrument names, so right now you always get both. The new function that does allow this is called InstrSessionMgr.GetNames. Note that the online help file describes the flags parameter as a Boolean when it is actually an enum.
    Hope this helps. If you have any further queries, please let us know.
    -NI support
    National Instruments - Software IS the instrument!
    Attachments:
    help.gif ‏22 KB

  • GUIBB Form used in conjunction with IF_FPM_TRANSACTION

    Hello,
    I looked at the sample GUIBB Forms that have provision to save the data in the form. It seems to me that it is not possible to use the interface IF_FPM_TRANSACTION in conjunction with GUIBBs. Is this true? If not, how do we incorporate the interface IF_FPM_TRANSACTION into, say, the feeder class of a GUIBB Form, i.e. IF_FPM_GUIBB_FORM? Thank you.
    Regards
    Kir Chern
  • White lines when using the Quick Selection Tool in conjunction with Content-Aware Fill.

    When using the Quick Selection Tool in conjunction with Content-Aware Fill, I get a white line where the images join.
    How do I stop this from happening?
    Basically, I need the selection tool to 'over-select' into the next photo so it's a seamless fill.
    This isn't a complex selection; it's a photomerge with blank, white areas that I am filling in using Content-Aware Fill.

    Thanks for your help.
