Some thoughts on Previews

To provide a little background on why I am thinking about how previews are used in Lr, it might help to understand my workflow. I import my raw files, cull the ones I do not want to keep, add keywords and metadata, and tweak in Develop. I then export to JPG. I do this because:
1. I can quickly browse through images in any program I choose. Others in my work environment can do the same as the files are saved on a network.
2. I can quickly email or FTP the JPGs to others.
3. I can edit the JPGs in any other program, because all programs can efficiently use JPG.
4. For 99% of my usage, a high quality JPG works just fine.
Now, it just so happens that Lr is maintaining previews of my images in a proprietary JPG file so that:
1. It can display images quickly in the library.
2. It can show the previews even if your images are offline.
Lr has a neat shortcut, where you can print a preview, instead of rendering the raw file, which speeds up the printing process, albeit with a loss in quality. However, if you render 1:1 previews, I have a hard time telling the difference. In other words, one can have the best of both worlds, high speed and high quality.
So, I want to use JPGs for most of my work and Lr can save 1:1 previews. However, I cannot access the previews directly because Lr does not save them as JPGs, even though the preview is in a JPG format (as I understand it). Also, it seems wasteful to have both the previews and exported JPGs.
I would like to be able to bypass my current step of exporting to JPG, and then reimporting the JPG into my library, and just use a 1:1 preview JPG. This would require the following:
1. Lr save the previews as JPG files.
2. Provide a "Show preview in explorer".
3. Provide a "Copy selected previews". This is needed to aggregate a selection of previews, scattered throughout the Lr catalog preview folder structure, to a common folder for other applications to use.
4. In the catalog settings, provide the same options for JPG quality (0-100) as in the export dialog. I would like to be able to set this separately for the 1:1 rendering (use a high quality) and the standard previews that load first, which could be a much lower quality.
Now, I fully understand that the whole idea of Lr is not to bother with the JPG (or PSD, TIFF) and just work directly in Lr, happily saving all the changes as metadata. This is fine in theory, but as has been discussed here a number of times, breaks down in the real world because Lr does not do everything, and even if it did, someone else will always be providing some capability that is better than Lr. Consequently, at present, at least for me, it makes sense to maintain a "developed" version of the raw files as a JPG.
I realize this is a complex issue and that I may not have thought through all the consequences, so I am interested in any other thoughts on this idea.
Regards
Rory

>Rory, do you have any tricks for how best to arrange the catalog given that you have two of everything? Do you use "Find" to select just JPEGs, etc.?
Hi Dave
I am currently going through my library and attaching the keyword "PRIME" to the most representative image (usually the JPG) for every image.
When (if) Lr becomes capable of stacking across folders and autostacking works for a 0 difference in capture time, I plan to stack my multiple versions with the PRIME on top.
In the meantime I can filter on PRIME and just see the "final" version of each image and count unique images and so forth.
Cheers
Rory

Similar Messages

  • Some Thoughts On An OWB Performance/Testing Framework

    Hi all,
    I've been giving some thought recently to how we could build a performance tuning and testing framework around Oracle Warehouse Builder. Specifically, I'm looking at ways in which we can use some of the performance tuning techniques described in Cary Millsap/Jeff Holt's book "Optimizing Oracle Performance" to profile and performance tune mappings and process flows, and to use some of the ideas put forward in Kent Graziano's Agile Methods in Data Warehousing paper http://www.rmoug.org/td2005pres/graziano.zip and Steven Feuerstein's utPLSQL project http://utplsql.sourceforge.net/ to provide an agile/test-driven way of developing mappings, process flows and modules. The aim of this is to ensure that the mappings we put together are as efficient as possible, work individually and together as expected, and are quick to develop and test.
    At the moment, most people's experience of performance tuning OWB mappings is firstly to see if it runs set-based rather than row-based, then perhaps to extract the main SQL statement and run an explain plan on it, then check to make sure indexes etc. are being used OK. This involves a lot of manual work, doesn't factor in the data available from the wait interface, doesn't store the execution plans anywhere, and doesn't really scale out to encompass entire batches of mappings (process flows).
    For some background reading on Cary Millsap/Jeff Holt's approach to profiling and performance tuning, take a look at http://www.rittman.net/archives/000961.html and http://www.rittman.net/work_stuff/extended_sql_trace_and_tkprof.htm. Basically, this approach traces the SQL that is generated by a batch file (read: mapping) and generates a file that can be later used to replay the SQL commands used, the explain plans that relate to the SQL, details on what wait events occurred during execution, and provides at the end a profile listing that tells you where the majority of your time went during the batch. It's currently the "preferred" way of tuning applications as it focuses all the tuning effort on precisely the issues that are slowing your mappings down, rather than database-wide issues that might not be relevant to your mapping.
    For some background information on agile methods, take a look at Kent Graziano's paper, this one on test-driven development http://c2.com/cgi/wiki?TestDrivenDevelopment, this one http://martinfowler.com/articles/evodb.html on agile database development, and the sourceforge project for utPLSQL http://utplsql.sourceforge.net/. What this is all about is having a development methodology that builds in quality but is flexible and responsive to changes in customer requirements. The benefit of using utPLSQL (or any unit testing framework) is that you can automatically check your altered mappings to see that they still return logically correct data, meaning that you can make changes to your data model and mappings whilst still being sure that everything compiles and runs correctly.
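    To make the test-driven idea a little more concrete, a utPLSQL-style unit test for a mapping might look something like the sketch below. This is purely illustrative: the mapping, the CUSTOMER_STAGING/CUSTOMER_DIM tables and the assertion are all made up, and it assumes the classic utPLSQL v1/v2 conventions (a ut_% test package with ut_setup/ut_teardown and utAssert calls).
    -- Hypothetical test package for a mapping that loads CUSTOMER_DIM.
    -- utPLSQL (v1/v2) looks for ut_setup / ut_teardown plus ut_% test procedures.
    CREATE OR REPLACE PACKAGE ut_load_customer_dim AS
      PROCEDURE ut_setup;
      PROCEDURE ut_teardown;
      PROCEDURE ut_row_counts_match;
    END ut_load_customer_dim;
    /
    CREATE OR REPLACE PACKAGE BODY ut_load_customer_dim AS
      PROCEDURE ut_setup IS
      BEGIN
        NULL;  -- e.g. load a known set of rows into the staging table and run the mapping
      END;
      PROCEDURE ut_teardown IS
      BEGIN
        NULL;  -- clean up the test data
      END;
      PROCEDURE ut_row_counts_match IS
        l_src NUMBER;
        l_tgt NUMBER;
      BEGIN
        SELECT COUNT(*) INTO l_src FROM customer_staging;
        SELECT COUNT(*) INTO l_tgt FROM customer_dim;
        utAssert.eq( 'Every staging row should reach the dimension', l_tgt, l_src );
      END;
    END ut_load_customer_dim;
    /
    -- Run it with: EXEC utplsql.test('load_customer_dim')
    The point is not this particular assertion, but that a test like this can be re-run automatically after every change to the data model or the mapping.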
    Observations On The Current State of OWB Performance Tuning & Testing
    At present, when you build OWB mappings, there is no way (within the OWB GUI) to determine how "efficient" the mapping is. Often, when building the mapping against development data, the mapping executes quickly and yet when run against the full dataset, problems then occur. The mapping is built "in isolation" from its effect on the database and there is no handy tool for determining how efficient the SQL is.
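    One low-tech way of getting at this today, outside the OWB GUI, is to pull the main statement out of the generated package and run it through EXPLAIN PLAN / DBMS_XPLAN in SQL*Plus. A rough sketch follows; the INSERT shown is a made-up stand-in for whatever OWB actually generates:
    -- Hypothetical statement extracted from the generated mapping package
    EXPLAIN PLAN SET STATEMENT_ID = 'MAP_CUSTOMER_DIM' FOR
      INSERT INTO customer_dim (customer_key, customer_name)
      SELECT customer_seq.NEXTVAL, src.name
      FROM   customer_staging src;
    -- Display the plan (DBMS_XPLAN is available from 9i onwards)
    SELECT * FROM TABLE( DBMS_XPLAN.DISPLAY('PLAN_TABLE', 'MAP_CUSTOMER_DIM') );
    Automating exactly this kind of manual step is part of what the framework below is meant to do.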
    OWB doesn't come with any methodology or testing framework, and so apart from checking that the mapping has run, and that the number of rows inserted/updated/deleted looks correct, there is nothing really to tell you whether there are any "logical" errors. Also, there is no OWB methodology for integration testing, unit testing, or any other sort of testing, and we need to put one in place. Note - OWB does come with auditing, error reporting and so on, but there's no framework for guiding the user through a regime of unit testing, integration testing, system testing and so on, which I would imagine more complete developer GUIs come with. Certainly there's no built-in ability to use testing frameworks such as utPLSQL, or a part of the application that lets you record whether a mapping has been tested, and changes the test status of mappings when you make changes to ones that they are dependent on.
    OWB is effectively a code generator, and this code runs against the Oracle database just like any other SQL or PL/SQL code. There is a whole world of information and techniques out there for tuning SQL and PL/SQL, and one particular methodology that we quite like is the Cary Millsap/Jeff Holt "Extended SQL Trace" approach that uses Oracle diagnostic events to find out exactly what went on during the running of a batch of SQL commands. We've been pretty successful using this approach to tune customer applications and batch jobs, and we'd like to use this, together with the "Method R" performance profiling methodology detailed in the book "Optimising Oracle Performance", as a way of tuning our generated mapping code.
    Whilst we want to build performance and quality into our code, we also don't want to overburden developers with an unwieldy development approach, because we know what will happen: after a short amount of time, it won't get used. Given that we want this framework to be used for all mappings, it's got to be easy to use, cause minimal overhead, and have results that are easy to interpret. If at all possible, we'd like to use some of the ideas from agile methodologies such as eXtreme Programming, SCRUM and so on to build in quality but minimise paperwork.
    We also recognise that there are quite a few settings that can be changed at a session and instance level, that can have an effect on the performance of a mapping. Some of these include initialisation parameters that can change the amount of memory assigned to the instance and the amount of memory subsequently assigned to caches, sort areas and the like, preferences that can be set so that indexes are preferred over table scans, and other such "tweaks" to the Oracle instance we're working with. For reference, the version of Oracle we're going to use to both run our code and store our data is Oracle 10g 10.1.0.3 Enterprise Edition, running on Sun Solaris 64-bit.
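    For illustration only, the sort of session-level "tweaks" being referred to might look like the statements below; the parameters are real, but the values are arbitrary and would need to be benchmarked against the 10.1.0.3 instance mentioned above rather than taken as recommendations.
    -- Candidate session-level settings to test for an ETL job (values illustrative only)
    ALTER SESSION ENABLE PARALLEL DML;
    ALTER SESSION SET workarea_size_policy = AUTO;           -- let the PGA manage sort/hash areas
    ALTER SESSION SET db_file_multiblock_read_count = 64;    -- favour multiblock reads for bulk scans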
    Some initial thoughts on how this could be accomplished
    - Put in place some method for automatically / easily generating explain plans for OWB mappings (issue - this is only relevant for mappings that are set based, and what about pre- and post- mapping triggers)
    - Put in place a method for starting and stopping an event 10046 extended SQL trace for a mapping (see the sketch after this list)
    - Put in place a way of detecting whether the explain plan / cost / timing for a mapping changes significantly
    - Put in place a way of tracing a collection of mappings, i.e. a process flow
    - The way of enabling tracing should either be built in by default, or easily added by the OWB developer. Ideally it should be simple to switch it on or off (perhaps levels of event 10046 tracing?)
    - Perhaps store trace results in a repository? reporting? exception reporting?
    - At an instance level, come up with some stock recommendations for instance settings
    - identify the set of instance and session settings that are relevant for ETL jobs, and determine what effect changing them has on the ETL job
    - put in place a regime that records key instance indicators (STATSPACK / ASH) and allows reports to be run / exceptions to be reported
    - Incorporate any existing "performance best practices" for OWB development
    - define a lightweight regime for unit testing (as per agile methodologies) and a way of automating it (utPLSQL?) and of recording the results so we can check the status of dependent mappings easily
    - Other ideas around testing?
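    As a first stab at the event 10046 item above, the pre- and post-mapping procedures might look something like this minimal sketch. The procedure names are hypothetical, but the ALTER SESSION syntax itself is standard Oracle:
    -- Hypothetical pre-mapping procedure: switch extended SQL trace on
    CREATE OR REPLACE PROCEDURE start_mapping_trace( p_mapping_name IN VARCHAR2 ) AS
    BEGIN
      -- Tag the trace file so it is easy to find in user_dump_dest (no spaces in the tag)
      EXECUTE IMMEDIATE 'ALTER SESSION SET tracefile_identifier = ''' || p_mapping_name || '''';
      EXECUTE IMMEDIATE 'ALTER SESSION SET timed_statistics = TRUE';
      EXECUTE IMMEDIATE 'ALTER SESSION SET max_dump_file_size = UNLIMITED';
      -- Level 12 = SQL statements + bind variables + wait events
      EXECUTE IMMEDIATE 'ALTER SESSION SET events ''10046 trace name context forever, level 12''';
    END;
    /
    -- Hypothetical post-mapping procedure: switch the trace off again
    CREATE OR REPLACE PROCEDURE stop_mapping_trace AS
    BEGIN
      EXECUTE IMMEDIATE 'ALTER SESSION SET events ''10046 trace name context off''';
    END;
    /
    The resulting trace file can then be profiled with TKPROF or a Method R style profiler, as discussed above.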
    Suggested Approach
    - For mapping tracing and generation of explain plans, a pre- and post-mapping trigger that turns extended SQL trace on and off, places the trace file in a predetermined spot, formats the trace file and dumps the output to repository tables.
    - For process flows, something that does the same at the start and end of the process. Issue - how might this conflict with mapping level tracing controls?
    - Within the mapping/process flow tracing repository, store the values of historic executions, and have an exception report that tells you when a mapping execution time varies by a certain amount (see the sketch after this list)
    - get the standard set of preferred initialisation parameters for a DW, use these as the start point for the stock recommendations. Identify which ones have an effect on an ETL job.
    - identify the standard steps Oracle recommends for getting the best performance out of OWB (workstation RAM etc) - see OWB Performance Tips http://www.rittman.net/archives/001031.html and Optimizing Oracle Warehouse Builder Performance http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - Investigate what additional tuning options and advisers are available with 10g
    - Investigate the effect of system statistics & come up with recommendations.
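    As a sketch of the repository / exception-report idea above (the table name, columns and the 50% threshold are all made up for illustration):
    -- Hypothetical repository table: one row per mapping execution
    CREATE TABLE map_exec_history (
      map_name     VARCHAR2(30),
      exec_date    DATE,
      elapsed_secs NUMBER
    );
    -- Exception report: runs that took more than 50% longer than that mapping's average
    SELECT h.map_name, h.exec_date, h.elapsed_secs, ROUND(a.avg_secs) AS avg_secs
    FROM   map_exec_history h,
           ( SELECT map_name, AVG(elapsed_secs) AS avg_secs
             FROM   map_exec_history
             GROUP BY map_name ) a
    WHERE  h.map_name = a.map_name
    AND    h.elapsed_secs > a.avg_secs * 1.5
    ORDER  BY h.exec_date;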
    Further reading / resources:
    - "Diagnosing Performance Problems Using Extended Trace", Cary Millsap
    http://otn.oracle.com/oramag/oracle/04-jan/o14tech_perf.html
    - "Performance Tuning With STATSPACK" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-sep/index.html?o50tun.html
    - "Performance Tuning with Statspack, Part II" Connie Dialeris and Graham Wood
    http://otn.oracle.com/deploy/performance/pdf/statspack_tuning_otn_new.pdf
    - "Analyzing a Statspack Report: A Guide to the Detail Pages" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-nov/index.html?o60tun_ol.html
    - "Why Isn't Oracle Using My Index?!" Jonathan Lewis
    http://www.dbazine.com/jlewis12.shtml
    - "Performance Tuning Enhancements in Oracle Database 10g" Oracle-Base.com
    http://www.oracle-base.com/articles/10g/PerformanceTuningEnhancements10g.php
    - Introduction to Method R and Hotsos Profiler (Cary Millsap, free reg. required)
    http://www.hotsos.com/downloads/registered/00000029.pdf
    - Exploring the Oracle Database 10g Wait Interface (Robin Schumacher)
    http://otn.oracle.com/pub/articles/schumacher_10gwait.html
    - Article referencing an OWB forum posting
    http://www.rittman.net/archives/001031.html
    - How do I inspect error logs in Warehouse Builder? - OWB Exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case10.pdf
    - What is the fastest way to load data from files? - OWB exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case1.pdf
    - Optimizing Oracle Warehouse Builder Performance - Oracle White Paper
    http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - OWB Advanced ETL topics - including sections on operating modes, partition exchange loading
    http://www.oracle.com/technology/products/warehouse/selfserv_edu/advanced_ETL.html
    - Niall Litchfield's Simple Profiler (a creative commons-licensed trace file profiler, based on Oracle Trace Analyzer, that displays the response time profile through HTMLDB. Perhaps could be used as the basis for the repository/reporting part of the project)
    http://www.niall.litchfield.dial.pipex.com/SimpleProfiler/SimpleProfiler.html
    - Welcome to the utPLSQL Project - a PL/SQL unit testing framework by Steven Feuerstein. Could be useful for automating the process of unit testing mappings.
    http://utplsql.sourceforge.net/
    Relevant postings from the OTN OWB Forum
    - Bulk Insert - Configuration Settings in OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=291269&tstart=30&trange=15
    - Default Performance Parameters
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=213265&message=588419&q=706572666f726d616e6365#588419
    - Performance Improvements
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270350&message=820365&q=706572666f726d616e6365#820365
    - Map Operator performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=238184&message=681817&q=706572666f726d616e6365#681817
    - Performance of mapping with FILTER
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=273221&message=830732&q=706572666f726d616e6365#830732
    - Poor mapping performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=275059&message=838812&q=706572666f726d616e6365#838812
    - Optimizing Mapping Performance With OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=269552&message=815295&q=706572666f726d616e6365#815295
    - Performance of the OWB-Repository
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=66271&message=66271&q=706572666f726d616e6365#66271
    - One large JOIN or many small ones?
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=202784&message=553503&q=706572666f726d616e6365#553503
    - NATIVE PL SQL with OWB9i
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270273&message=818390&q=706572666f726d616e6365#818390
    Next Steps
    Although this is something that I'll be progressing with anyway, I'd appreciate any comments from existing OWB users as to how they currently perform performance tuning and testing. Whilst these are perhaps two distinct subject areas, they can be thought of as the core of an "OWB Best Practices" framework, and I'd be prepared to write the results up as a freely downloadable whitepaper. With this in mind, do you have any existing best practices for tuning or testing, have you tried using SQL trace and TKPROF to profile mappings and process flows, or have you used a unit testing framework such as utPLSQL to automatically test the set of mappings that make up your project?
    Any feedback, add it to this forum posting or send directly through to me at [email protected]. I'll report back on a proposed approach in due course.

    Hi Mark,
    interesting post, but I think you may be focusing on the trees, and losing sight of the forest.
    Coincidentally, I've been giving quite a lot of thought lately to some aspects of your post. They relate to some new stuff I'm doing. Maybe I'll be able to answer in more detail later, but I do have a few preliminary thoughts.
    1. 'How efficient is the generated code' is a perennial topic. There are still some people who believe that a code generator like OWB cannot be in the same league as hand-crafted SQL. I answered that question quite definitely: "We carefully timed execution of full-size runs of both the original code and the OWB versions. Take it from me, the code that OWB generates is every bit as fast as the very best hand-crafted and fully tuned code that an expert programmer can produce."
    The link is http://www.donnapkelly.pwp.blueyonder.co.uk/generated_code.htm
    That said, it still behooves the developer to have a solid understanding of what the generated code will actually do, such as how it will take advantage of indexes, and so on. If not, the developer can create such monstrosities as lookups into an un-indexed field (I've seen that).
    2. The real issue is not how fast any particular generated mapping runs, but whether or not the system as a whole is fit for purpose. Most often, that means: does it fit within its batch update window? My technique is to dump the process flow into Microsoft Project, and then to add the timings for each process. That creates a Critical Path, and then I can visually inspect it for any bottleneck processes. I usually find that there are not more than one or two dogs. I'll concentrate on those, fix them, and re-do the flow timings. I would add this: the dogs I have seen, I have invariably replaced. They were just garbage; they did not need tuning at all - just scrapping.
    Gee, but this whole thing is minimum effort and real fast! I generally figure that it takes maybe a day or two (max) to soup up system performance to the point where it whizzes.
    Fact is, I don't really care whether there are a lot of sub-optimal processes. All I really care about is the performance of the system as a whole. This technique seems to work for me. 'Course, it depends on architecting the thing properly in the first place. Otherwise, no amount of tuning is going to help worth a darn.
    Conversely (re. my note about replacing dogs) I do not think I have ever tuned a piece of OWB-generated code. Never found a need to. Not once. Not ever.
    That's not to say I do not recognise the value of playing with deployment configuration parameters. Obviously, I set auditing=none, and operating mode=set based, and sometimes, I play with a couple of different target environments to fool around with partitioning, for example. Nonetheless, if it is not a switch or a knob inside OWB, I do not touch it. This is in line with my diktat that you shall use no other tool than OWB to develop data warehouses. (And that includes all documentation!). (OK, I'll accept MS Project)
    Finally, you raise the concept of a 'testing framework'. This is a major part of what I am working on at the moment. This is a tough one. Clearly, the developer must unit test each mapping in a design-model-deploy-execute cycle, paying attention to both functionality and performance. When the developer is satisfied, that mapping will be marked as 'done' in the project workbook. Mappings will form part of a stream, executed as a process flow. Each process flow will usually terminate in a dimension, a fact, or an aggregate. Each process flow will be tested as an integrated whole. There will be test strategies devised, and test cases constructed. There will finally be system tests, to verify the validity of the system as a production-grade whole. (stuff like recovery/restart, late-arriving data, and so on)
    For me, I use EDM (TM). That's the methodology I created (and trademarked) twenty years ago: Evolutionary Development Methodology (TM). This is a spiral methodology based around prototyping cycles within Stage cycles within Release cycles. For OWB, a Stage would consist (say) of a Dimensional update. What I am trying to do now is to graft this within a traditional waterfall methodology, and I am having the same difficulties I had when I tried to do it then.
    All suggestions on how to do that grafting gratefully received!
    To sum up, I'm kinda at a loss as to why you want to go deep into OWB-generated code performance stuff. Jeepers, architect the thing right, and the code runs fast enough for anyone. I've worked on ultra-large OWB systems, including validating the largest data warehouse in the UK. I've never found any value in 'tuning' the code. What I'd like you to comment on is this: what will it buy you?
    Cheers,
    Donna
    http://www.donnapkelly.pwp.blueyonder.co.uk

  • Just some thoughts

    Just some thoughts.
    Adobe says that piracy is one of the reasons for going to the cloud. CC has to verify the subscription periodically, so why not add that process to the perpetual license for a nominal yearly fee? On the other hand, why can't a user subscribe to CC for a month to get the latest version and then go on the web to get a crack?
    I also own, unless they go to a subscription, Elements 10. It has lots of the functions that CS6 has for photo enhancements. I found some plugins that really enhance Elements, e.g. ElementsXXL, which adds new menu items, icons, buttons, key shortcuts and dialogs, so they seamlessly integrate into the user interface of Photoshop Elements. ElementsXXL bridges the gap between Photoshop Elements and Photoshop and greatly enhances the image editing experience in Photoshop Elements.
    I am sure other plugins will appear to make Elements more like Photoshop.

    "Adobe says that piracy is one of the reasons for going to the cloud. CC has to verify the subscription periodically why not add that process to the perpetual license for a nominal yearly fee."
    Why on earth would anyone want to pay a "nominal yearly fee" for another nuisance created by Adobe.  They can already shut down perpetual license holder's software if they pull the plug on the activation servers.  Don't you know that their servers check our software now as it is?
    Just recently with the release of the CS2 software to previous owners, many people could no longer use their already-installed CS2 software because Adobe's CS2 servers were shut down.  What does that tell you?
    I'm not going the way of cruising the internet for cracks.  That's for disillusioned kids who think everything should be free.  When my CS6 software no longer works I'm going with another company.  Probably Corel for some of the programs.  CorelDRAW was always better than Illustrator anyway.  Industry standard doesn't always mean it's great.  It just means everyone is using it and they expect you to do as well.  Personally, I think Macromedia made better software before Adobe took them over and MM software was a lot cheaper.

  • I can't open some applications like Preview or TextEdit after installing Mavericks; I'm missing the Base.lproj files. Can someone help?

    I can't open some applications like Preview or TextEdit after installing Mavericks. I found a post that said I'm missing the Base.lproj files; this fixed my Calendar application, but I need the file for Preview. Can someone help?

    Hi vampyro33,
    Thanks for visiting Apple Support Communities.
    If it seems like system files are missing and applications are quitting unexpectedly, I'd suggest reinstalling Mavericks:
    OS X Mavericks: Reinstall OS X
    http://support.apple.com/kb/PH13871
    Best Regards,
    Jeremy

  • Nokia X/X2 -- Some thoughts and questions

    Moved from Symbian Belle, some thoughts:
    1. Email widget doesn't automatically refresh. Have to manually tap to load new mail despite specifying a sync. interval.
    2. Asking me to Sign in to a MS, Exchange, Nokia or Google account etc. to use the calendar. Can I not get a local one for which I can choose to Sync if desired, similar to Symbian?
    3. Does the X2 have a play-via-radio capability? I don't see the app but wanted to know if the hardware capability is there.
    4. How do I know whether the phone is using GPS, GLONASS or BeiDou? The specifications listed these positioning capabilities.
    5. Don't see the mix radio app. Possibly not available in certain countries.
    6. Belle had an option to automatically switch between GSM and 3G where available. The X2 only allows a 3G-only setting and GSM/CDMA.
    7. The People and Phone apps need to be bundled together. While scrolling through contacts via the Phone app, if you accidentally tap one, it starts dialing. Plus it doesn't group entries together and repeats contacts with more than one number.
    8. Didn't really like the Metro and Fastlane UIs. One reason was that the wallpaper was not changeable. So if you have gaps, then all you see is black.
    9. Glance screen doesn't seem customizable. Time appears on 2 lines, seems a bit odd.

    Reply to 9:
    Glance Screen on X v2 is almost amazing. http://developer.nokia.com/resources/library/nokia-x-ui/essentials.html
    A past Nokia X user, now an X user.

  • Discussions: Some Thoughts From Abroad

    Hi all,
    I would like to add some thoughts on the points debate. I think we all recognise that the majority of users of Apple Discussions are in the USA; I don't have a problem with that, it's a fact of life. However, the new points system does have a bias towards those replying from the same country as the OP due to time zone differences.
    I don't ever see the day when I can imbibe in the Lounge with the 4s and 5s. By the time I get to my computer each day, all the questions I am able to answer have already been answered unless there are a few insomniacs around, or I become one myself!
    I liked the old system of awarding points. Not just because I was able to get a few myself, but mainly because I was able to give recognition to someone who had helped me expand my knowledge of Macs, even when I was not the OP or after the question had been answered. I realise that this system was open to abuse, but only by a few surely. Limiting the amount of votes one could award each day according to your Level would address that.
    In closing I would just like to say a big thankyou to all who keep these Discussions going. Your work is appreciated.
    Cheers
    NikkiG

    Hi NikkiG,
    "I liked the old system of awarding points. Not just because I was able to get a few myself, but mainly because I was able to give recognition to someone who had helped me expand my knowledge of Macs, even when I was not the OP or after the question had been answered. I realise that this system was open to abuse, but only by a few surely. Limiting the amount of votes one could award each day according to your Level would address that."
    And this possibility of awarding others was shaping a very different spirit:
    A lot of tips could be found among "side-replies" that were on topic but coming later, after the OP was already satisfied with the urgent fix.
    It was also shaping a spirit of sharing knowledge in a deeper, more elaborate way. To me this was really about a community of people who shared this kind of enthusiasm for good knowledge, for sense and for general improvement.
    Now that we live in immediateness, the "game" is much more about Shooting First, the one short and "simplified" answer that would, yes, quick-fix, but also kill most of the "side-discussion" as considered almost off-topic...
    "By the time I get to my computer each day, all the questions I am able to answer have already been answered"
    Well, the general feeling, anyway, no matter what time it is in Australia, is more like
    "By the time I'm done reading the question as it appears on top of a forum, one or two "Link-to-TheXLab"-type of quick answer have already been copy-pasted within the same minute"!
    (not only Donald but some others who are better at self-censorship, who manage equally well in the new system or who don't speak much because they don't know enough about true reasons/motivations, have left a word or two here, in their own way, about your feeling)
    "A bias towards those replying from the same country as the OP due to time zone differences"
    I see what you mean,
    although for me, as a European, it is on the contrary no longer that period of the disabled "20% chances" feeling I had in the old Discussions;
    you can still give quick replies to some newbies from Japan/New Zealand/Australia, but I agree there may not be as many of those as from the US...
    Your suggestions:
    I second and support them, but I'm not sure, now that the system works again smoothly with no slowing down
    - Thanks so much Mods!! -
    whether they are very high anymore on the list of priorities?
    This depends also on the "political" point of viewing them. If with Tuttle we consider that "many of the new features mainly enhance the forums' usefulness to Apple"
    (my main personal guess about this aspect, would be about the gain in Phone Technical Support Crew management)
    possibly there will be no more changes about this?
    Possibly even more immediateness in the future,
    after all we are seriously entering the "iPod world" now, although let's hope the good old times of computing being a matter of a few Cupertino geniuses and a bunch of truly enthusiastic pure Mac people, is not too far gone now?
    Let me finish with your words, as they are exactly those I'd have liked to write myself:
    In closing I would just like to say a big thankyou to all who keep these Discussions going. Your work is appreciated.
    Axl

  • Some thoughts on issues on portal-building

    I have been studying a few portal platforms to use in some prospective projects.
    Quite often there is the issue of integrating a portal with a content publishing system we have developed ourselves.
    By "publishing system" I refer to a system that provides an
    HTML-template-based function where users can create/edit and preview dynamic texts, which are usually stored in a database, and then publish them on an external site. I understand that the portal has some kind of repository where users can upload files, but are there any template/preview or publishing functions in the portal?
    Also, can I easily change the presentation layer of the portal? I don't mean by using the admin functions "edit stylesheet" or "edit layout"; rather, suppose a customer gives me a full site in HTML layout and I want to use Oracle Web Portal for the underlying server logic but with the new site layout. Is this possible? How tied am I to the present portal presentation?
    Another thing that has also surprised me is that there do not appear to be any Java APIs to navigate and retrieve information from the repository (read: content areas and documents). I had expected to find a full Java API, similar to your IFS product API, that would allow me to perform operations on the repository. Is this planned in future API releases?
    I would really appreciate it if anyone out there with any ideas or similar thoughts could drop me a line here.
    Greetings. Johan Ekman.

  • Colour Management - who does what - Some thoughts now the smoke is clearing

    First up, thanks very much to everyone who contributed their ideas and expertise to my recent query here, when I was seeking help for a problem with colour management issues when printing a magazine I edit. I have a ton of suggestions  to work through and study but the smoke is slowly clearing and it raises some interesting points which I think are worth recounting.
    First of all, I have been editing short run magazines now for 25 years, at first part time and later on a professional contract basis.  I am not a trained graphic designer nor a trained printer. I did start out training as a graphic designer, many years ago but gave it up for a career in IT (as a networking specialist). That was full time until 10 years ago, although I did some freelance writing and editing in my spare time.
    And yes, I did start originally with scissors and cut and paste, and moved on through black and white with spot colour and Pagemaker software  to full colour and InDesign today. One thing which may be different about my experience to most of yours is that I am a PC user and always have been. All my editing and graphics work has always been done on a PC - Pagemaker was our DTP package of choice for a long time and we supplemented this with Corel-Draw (which has a range of graphics handling options). All my software is legal and I always register it and keep it up to date. I have used the same graphic designer for quite a few years now and whenever we upgrade our software he goes and gets trained on the latest release.
    Around 10 years ago I was offered the chance to edit a specialist short run magazine (not the current one). This was a chance I took and gave up the day job and became a full time freelance. Editing is not my main or only source of income. I am also a freelance writer and photographer and heritage consultant and I have a specialist image library. I sell my own work - articles and pictures - to the national and local press. I also write books (non fiction) on commission. The magazine editing is really an extension of my interest in historic landscapes. I have never had any complaints, or problems, with the freelance work, photos and archived images I sell. Clients include national newspapers here in the UK, national magazine groups and my books are available in national bookstore chains. I supply my work digitally, naturally, and it includes photos I have taken myself and items which I have scanned into my library of historical images and store on line. No reported colour management issues there.
    I have always enjoyed a good relationship with my publishers and printers because I seek to be as professional as possible, which means delivering my stuff on time, to the required standard so that minimum intervention is required from them. This does assume that I have a clear brief from them on what they need from me.
    Recently this approach has not been enough to avoid colour management issues with the short run magazine I currently edit. I have been wondering when  and where things went astray and date it back to the upgrade to InDesign two years ago. However it may have its roots in my earlier decision to use PCs not Macs for my work.
    Until 4 years ago I had used the same printers for magazine editing for many years. They were a well respected firm specialising in short run magazines. They were not far from where I live and work and if there was a problem I would go over and discuss it with them. They were happy, and competent, to handle Pagemaker files generated on a PC and convert my rgb images to cmyk if there was any concern about the colour balance. On a few occasions I paid them to scan a photo for me. However 4 years ago the owner decided to retire and shut up shop. I needed to find a new printers and it had to be someone who specialised in short run magazines and could meet the budget of the charity I edit for. Also someone who could handle copy generated using Pagemaker running on a PC. I chose a printers I had used briefly in the past  where I knew some of the staff and was promised PC based Pagemaker would not be a problem. I even got this in writing. I started to send them proofs generated using Pagemaker v7 on my PC.
    I soon found that although they had agreed they could handle Pagemaker on a PC in fact they had only a few PC based clients and were using a single ageing PC running Pagemaker to proof their work. In fact nearly all their jobs were Quark based. I was also told we had to supply CMYK images although not given any further requirement so I now did the conversions from rgb to CMYK using my PhotoPaint software. (There are quite a few settings in Corel for the conversion but there was no guidance  by the printer on which to use so to be honest it did not occur to me that it might be a problem).
    Now of course I understand that the drive to get customers to supply CMYK images was a Quark-driven requirement back in the late 1990s. I did not and do not use Quark so knew nothing of this. I did have some early colour problems and font incompatibilities with the new printers and was pressured by their senior Graphic Designer (who designed for their own contract clients) to upgrade to InDesign and provide them with a .pdf, which I was assured would solve all my problems. The .pdf would be the same as the final printed magazine because "it would not require any further intervention by the printers".
    I expect you are collectively throwing up your hands in horror at this point, but I think he was speaking genuinely. The creation of a .pdf  using InDesign, is widely promoted as the ultimate answer to all printing issues.   I have encountered it recently with a lot of printers' salesmen and my friend, who edits a learned journal, has just been told the same thing by her printers, to get her to upgrade to ID. Incidentally she also uses a PC.
    So we upgraded our design process in house to InDesign and our graphic designer went on a course, two courses in fact. When we came to produce our first .pdf using ID, the printers' Senior Graphic Designer came on the phone and talked our designer through the ID Export function. I think he may at that time have told him to create a preset profile with MPC and the defaults, but to be honest I don't recall. We were never sent anything in writing about what settings we needed to match theirs. I continued to have intermittent colour management problems but put this down to my photos. Things came to a head with the most recent issue where the colours were badly out on the cover, supplied by a press agency and taken by a professional photographer. The printers seemed to have little or no idea about possible causes.
    Initially I thought that part of the underlying cause must lie in some mismatch between what I was sending the printers and what they expected to receive so I asked them to specify what I should send. All they said was use Profile preset as MPC setting and accept  the defaults which accompany it.
    So I came on here looking for a solution. A lot of people were keen to offer their own experience which I really appreciate. However the messages could be conflicting. Some of you suggested it was the underlying cover photo which was at fault, some that it was my monitor which needed better calibration.
    Many of you here said that part of the problem, if not the whole problem, was the way I was generating my CMYKs for the printer and I should use Photoshop to do this. You also mentioned a number of possible colour management settings which I should try.
    At times the advice seemed to change tack. There were suggestions that the colour management issues I had  were nothing to do with the printers, that it was up to me not them. Quite a lot of you said I needed to be better informed about Colour Management issues. I agree, but I had never had any previously (maybe good luck, maybe good support from my previous printer) so I was not even aware that I needed to be better informed.  Some of you mildly chastised me for not finding out more and doing more to manage my own colour management with the switch to ID. To which I can only say if I had needed to train up, I would have done. I did not realise I needed to.  Nor was my designer aware of the issues as colour management was not really covered on his ID courses which were about typesetting and design.
    Some of you even seemed to hint that unless I was prepared to use an expensive high end printer or effectively retrain as a print specialist or get my graphic designer to do so, then I probably shouldn't be in the magazine editing game at all. OK maybe that is a bit harsh but you get the drift.
    The fact is that printing is much more accessible these days to all sorts of people and in particular to people with PCs. My brother lives in a large village in an isolated area and produces a village magazine which has been a great success. It is in black and white with spot colour but he would like to move to an all colour issue. He is a bit nervous of the colour management issues as he has no experience of graphic design and is his own designer using a low end entry level design package. He too uses a PC. The printers reps all tell him the same thing they tell me, that all he needs to supply is a .pdf using InDesign.
    Somewhere I feel a black hole has developed, maybe back in the 1990s with Quark 4.11. A lot of printers standardised on that, and set up a work flow and prepress dependent on CMYK images as provided by the clients. They assumed the the clients would doing their own colour management. This approach also assumes everyone is using Quark on a Mac with the full range of Adobe software. When it became possible to generate .pdfs using InDesign, this was held out to users as the Holy Grail of magazine printing, even though their workflows and prepress were still based on Quark 4.11 principles. Any underlying colour management issues the clients now have to tackle themselves.
    So now we have the situation in which I find myself, having to learn from scratch a good deal about colour management issues so that I can tell the printers what is needed for my magazine. Meanwhile all the printing salesmen, the ones I encounter anyway, are still busy pushing the InDesign to .pdf as the "be all and end all" solution. Some re-education is needed for all parties I think.

    I am glad to see that the sun is peeping through the clouds.
    I apologise for my Aussie-style straight talk earlier, but as I said before it was not directed personally at you but in the direction of others whom you epitomize, repeating a conversation I have had many times over the last 10 years or so where respectable, well-meaning photographers, designers and other contributors refuse to accept that colour management is being thrust upon them.
    It is a simple fact of life, there is this 'new' thing that has butted into the very root of our trades and changed the most basic principles of printing and photography.  We expect that this kind of thing does not happen but the industry we now work in is not the same one we trained in twenty years ago.
    Many printers are still struggling with the same conflict, so many tradespeople cannot accept this change.
    This is exacerbated by the fact that colour management is so complicated to learn and implement and confounded by the fact that the default settings and a clumsy workflow often yield acceptable results with incorrect, generic settings, hence the old 'use InDesign and make a PDF and it will be ok' route.
    When the chain of colour management includes the photographer, the photographer's client, the designer, the other designer maybe, the prepress person, and the platemaker, and a single incorrect click by any one of those can kill the CM it is not surprising that in the end when someone is looking back to see where it fell over they usually never find out.....   They will meet someone who says ' I never touched it, I simply opened the file and scaled it and closed it'.  And that person will be a reputable photographer or designer (and CLIENT) who has no idea they just broke it.  So what do we do?  We go with the generic setting that seems to yield adequate results therefore avoiding the confrontation. 
    You need to understand the situation of the printer who took his business through the 'early' days of colour management, we had all kinds of very reputable sources supplying incorrect files, we did not have the expertise yet to be able to address the entire workflow, it would have meant training photographers and designers all through the best design houses and national institutions, because they blamed the printer.  Only in the last few years have I seen these people coming around to the fact that they bear responsibility for implementing their own cm and maintaining it through their own work.
    Sadly, many high end sources are still not there, and I mean HIGH end!  Probably the ones that don't even visit this forum because they want to keep blaming the printer... They tend to live with the poor quality reproductions and just pull up the worst ones and fiddle with those and try to avoid the 'elephant in the room'.
    I am sorry to say that it was not practical for a printer to reject mismanaged files for fear of losing clients who would happily accept less than perfect results in order to avoid the painful truth that was being told to them.  The best thing we could do was to gently make those clients aware that their workflow was imperfect and hope to show them how we could help...  Many print shops do not have someone knowledgeable enough or patient enough to do this, or the boss does not understand the issue either and tries to work around it to keep his jobs flowing in the expectation that all those experts in the chain will eventually tame the thing.
    The many experts on this holy forum are waaaaayyyy ahead of the printing industry in general and photographers and designers in general in their understanding of colour management workflow.  I have seen first hand how reputable local industry people and trainers alike are spreading misinformation and bad techniques, when I discovered these forums back in about 2002 I found that they opened up a whole new galaxy of knowledge and facts that actually worked and made sense, unlike what I had been told locally....  This forum taught me what the Adobe text books did not, the Tech' teachers did not, local 'experts' did not! 
    I tell all interested people to join these forums and learn to discriminate between the good and bad information.

  • Some thoughts on new features and workarounds

    Hey Gang,
    I am a recent FCP7 convert, and actually used to use PPro up until 2005. A couple of things have really bugged me and I think that these requests may be helpful to some of you, or perhaps you know an easy workaround.
    One thing that is annoying is the lack of "Apple" keystrokes in the MacOS version. For instance, when trying to select multiple clips with the selection tool, I am unable to hold down the Apple key to add additional clips to my selection.
    Secondly, we should have the option, when using paste attributes, to select what attributes we want to paste. I like going through a project adjusting volume throughout and then applying color correction. Many times I'll want a whole scene to use Magic Bullet Looks but I cannot copy and then "Paste Attributes" without changing all the audio as well. You can tell PPro what effects to remove but not which to paste. Yes, I can nest the sequence and apply the Magic Bullet Looks to the nested sequence, but that makes fine tuning additional clips more difficult.
    Why am I not able to cut an effect from the Effects Controls window or even drag and drop it onto multiple selected clips? That's another one that doesn't make sense. Again if I am color correcting and many clips need the same correction. I suppose I could create a million presets for all these things, but I don't want to clutter up my effects menu.
    In FCP I used to be able to right-click a selected audio or video clip near an edit and find "Add Video/Audio Crossfade". Yes, there is a shortcut for this, but sometimes the dissolves show up on tracks that are highlighted on the left. This is another funny Premiere thing that I guess I'll get used to, but I am not sure why I need to highlight the clip and the track when all I want to do is apply something to that clip.
    I won't even get into the stereo/mono track situation. That seems to have been covered all over the place.
    Just my 2 cents as I sit here editing away! Your thoughts?

    Please be sure to submit these feature requests where the right folks will be certain to see them:
    http://www.adobe.com/go/wish
    More on feedback for Premiere Pro: http://adobe.ly/q6pEBy

  • Some files not previewing in Bridge CS4, only showing icons

    This is not new but I finally got to the point where I'm bothered enough to write. I'm browsing folders that have Illustrator, Photoshop, and .tif files, and some files preview, and some just show the icon of the file type. What needs to happen for me to be able to preview ALL the files in the folder I'm browsing? b.t.w., it's not just this folder - it's happened before with other folders and there seems to be no sense to it - some folders I browse everything shows up, others only some files preview and others are as described above. Thanks for any advice...

    I would go to tools/cache/purge cache for xxx folder.  Let the thumbs rebuild and see how it acts.  It is best not to start a new action until the spinning wheel in lower right corner stops to indicate it is finished.

  • Some thoughts on Arch's way forward.

    I was reading the thread on the messed-up /opt when I came across this article:
    http://www.osnews.com/story.php?news_id=8761
    I have read a few articles which have criticised Arch (and, quoted elsewhere, MS criticising Linux), and in this one it is Arch vs. Slackware.
    One of the points is that there is paid-for support for Slackware but not for Arch. I already install Arch for money (for the company I work for), and support it afterwards, so there is paid-for support for Arch.
    Question is though, should there be "official paid support" for Arch, if so:
    1.  Who decides who can do paid support?
    2.  What could be the criteria for it?
    I also noted that Arch is supposed to be less stable on the server than Slackware, though I have never had a problem with Arch on the server as I only install what is needed and it runs fine.
    Any way, anyone any thoughts on paid for support?

    I see that is now two people who think I am the spawn of Satan!  :twisted:
    tomk wrote:
    Official support would be a bad idea, IMO. As you say, Benedict, many businesses like to pay for support, because in doing so, they gain entitlement to the rights and protections afforded to paying customers. In negotiating their contracts or support agreements, they can request SLAs, service guarantees, and related penalties to ensure that these are enforced.
    Well, yes. The company I work for already does this for Novell, Microsoft and Linux. On the Linux front it is a minority of SUSE boxes, quite a few IPCop ones and quite a lot of Arch. The businesses don't actually care what version of Linux; in fact many don't care if it is Linux, they just expect it to do what we say it does. We do that in exchange for cash.
    A support provider who enters into such an agreement becomes bound by it, immediately limiting their freedom and flexibility. I would hate to see this happen to Arch.
    Well, what support we provide for any of the platforms we support does not bind the vendor who provides the platform. So I don't see this as a problem.
    Mind you Judd and co should be making money out of it as well.
    Dusty wrote:
    Interesting idea, interesting discussion....
    I think it would be great if some company offered additional Arch support, but I think it would need a lot more funding than the current Arch project has. So third party support, with the company providing the support offering huge donations to Arch Linux would be the best case scenario.
    Thanks for saying it was an interesting idea. 
    Well, we don't sell "Arch" but we do install systems which run on Arch. (Some at any rate) and this will probably expand. Being a for profit organisation we do have the resource to do this, especially for companies who are geographically close to us.
    phrakture wrote:
    I am now providing un-official arch support - if you want me to post in your thread, you must send me $1.99 (paypal, visa, mastercard, and cashier's check only) - then I will respond... this will be my last free post...
    Well, the company I work for already charges for paid support. That does not stop me providing free support. Mind you, I am unlikely to fly over and stay for a week and fix something you broke for free.

  • Only some pictures have preview images in 'Years' view

    Has anyone else encountered a problem where many pictures have a preview image, but no matter how long you leave Photos running, some pictures' preview image is just a grey square? Is there any way to manually make Photos go through and build preview images for every photo?
    Screenshot below.

    macopp wrote:
    Here you go...
    What the heck happened to your auto-sync enable button! Is this some difference in Mac vs. Windows or something?
    (My screenshot was from Windows version).
    Maybe you'll have to use the Metadata menu to enable metadata auto-sync, e.g.
    macopp wrote:
    Yes, version 5.3.
    OK - good...
    macopp wrote:
    I think you're giving up too easily. There are some quirks in Lr's keywording to be sure, but if you are going to use Lightroom, and you want to keyword your images, it pays to learn those quirks and how to overcome...
    Yes I totally am! But sometimes keeping it simple (like using collections instead) gets the job done.
    Well, only you can determine if it's worth your while - not sure what to say...
    ........lots of stuff answered by Richard.......
    macopp wrote:
    is there a way to show multiple, larger images in develop mode?
    If you have multiple monitors, display the grid on the other monitor (see Window menu)
    macopp wrote:
    Thanks heaps for you help.
    Cheers
    Not sure how much help I've been, but you're welcome just the same.
    Cheers 2U2,
    Rob

  • OT Some thoughts on CSS

    Some interesting thoughts! See Dvorak's column
    http://www.pcmag.com/article2/0,1895,1987181,00.asp

    "Michael Hager" <[email protected]> wrote in
    message
    news:e9m65d$qfb$[email protected]..
    > the WWW consortium loves standards... that's why they
    have so many of
    > them!
    They ought to hire a good jazz band and make an album..
    Patty Ayers | www.WebDevBiz.com
    Free Articles on the Business of Web Development
    Web Design Contract, Estimate Request Form, Estimate
    Worksheet

  • Some thoughts / questions on createVirtualCopies() LR5

    I've been playing a little with the createVirtualCopies function in LR5.
    First of all, adding the createVirtualCopies() function is great!
    A thought: could there be a similar function in the LrPhoto class, so that a virtual copy is created based on a given photo?
    Then I'm not dependent upon any active selection.
    Some questions:
    Why is it a catalog namespace?
    Now as a plug-in developer I have to deal with selected photos and I would love to create a Virtual copy on a particular photo object as I like.
    And what happens if the user selects a different photo when the plug-in is running?
    What is the return value?
    At least it is not a proper LrPhoto object.
    I did a little test:
    local LrDialogs = import 'LrDialogs'
    local LrTasks = import 'LrTasks'
    local LrFunctionContext = import 'LrFunctionContext'
    local LrApplication = import 'LrApplication'
    --[[
        Name:    inspect
        Purpose: module for listing a table
        Source:  https://github.com/kikito/inspect.lua
    --]]
    local inspect = require 'inspect'
    LrFunctionContext.postAsyncTaskWithContext( "CreateVirtualCopyTest", function( context )
        LrDialogs.attachErrorDialogToFunctionContext( context )
        local catalog = LrApplication.activeCatalog()
        local activePhoto = catalog:getTargetPhoto()
        if activePhoto ~= nil then
            LrDialogs.message( inspect( activePhoto:getDevelopSettings() ) )
            -- Note: the return value does not behave like a single LrPhoto, so inspect it directly.
            local virtualCopies = catalog:createVirtualCopies( "CreateVirtualCopyTest" )
            LrDialogs.message( inspect( virtualCopies ) )
        else
            LrDialogs.message( "No photo selected! Please select a photo" )
            return
        end
    end )
    Any thoughts?
    (Example based on code of Rob Cole.)

    dhmc05 wrote:
    Any thoughts?
    Poor design (sorry Adobe).
    There are a number of new functions which bypass the catalog write methods (which is convenient) but are tied to the Lr user interface, e.g. library view mode, or develop module, which is bad - especially bad since there is no way to:
    A. Tell which module / mode Lr is in.
    B. Force which module / mode to be in.
    C. Assure photos are targeted, even if being filtered, or are underlings in a stack..
    I hope somebody who knows what they're doing with the SDK sorts this mess out before releasing Lr5 SDK.
    Rob

  • Some thoughts emerged based on the OVP sample-AkamaiSampleRandomSeek

    Thanks a lot Brian.
    Thousands of thanks to everyone!
    I have checked the AkamaiSampleRandomSeek sample, which imports AkamaiEnhancedNetStream for the jump point service's implementation.
    When doing a seek, we should run a check first.
    If the seek time is in range, which means the content is loaded already, just call the Flash seek method;
    if not, we may call the overridden seek function, build a new request based on the seek time and
    send it to the server to get the new content.
    Now, my work goes as follows:
    1. A plugin which can catch the random seek action and return the seek time.
    2. Imitating AkamaiEnhancedNetStream's implementation approach, build a new request URL such as
    http://aaa.com/a.flv?starttime=xxxx based on the seek time from step 1, where starttime indicates the seek time.
    3. Send the new request.
    Questions:
    1) Does my approach fit with the OSMF framework? Is it feasible?
    2) If yes, could you offer me some advice?
    Thanks.

    1. Yes, this should be feasible.
    2. I suspect the implementation would involve the creation of a custom seek trait (perhaps a subclass of NetStreamSeekableTrait) which implements the logic you describe (check whether the seek can work with the downloaded bytes, or whether the client needs to make another request ).  Once this custom trait is integrated into a MediaElement (probably a subclass of VideoElement), then it would just be a matter of including it in a plugin.
