Missing Histogram

Hi everyone. I am new to the forum. I recently upgraded to AP2. In the Adjustments tab I can see the main histogram, but it often disappears briefly while I am making adjustments. It usually comes back, but not always. It doesn't appear at all in the Levels adjustment section, which obviously renders Levels useless. Is there a fix for this? I have included my computer specifications. Any suggestions are greatly appreciated. I should note that I am pretty much a novice with the program, and I apologize if this has been discussed before.
Thanks,
Mark

I have seen this in the past, when I had an iMac with the same graphics card you have. The histogram just does not show. I usually had to quit Aperture completely once it started happening, which points to a possible memory leak in Aperture. I also had 4GB of RAM. In addition, sometimes the image would go totally black, which was even worse than the histogram not appearing.

Similar Messages

  • Histogram issues during DB upgrade

    Hi,
    We recently migrated our database to Oracle 11g, i.e. 11.2.0.3 (from 10.2.0.3). We are now facing application slowness on 11g, and we found the issue is missing histograms, or some unnecessary histograms being in place. My details are given below. If anyone has been through this during a migration, please help.
    When we were on 10g, we had a schema-level gather-stats job with METHOD_OPT => 'AUTO' running each weekend and gathering only STALE stats. We had faced some performance issues there because the optimizer followed the wrong execution path due to some missing histograms (and some extra ones too). So we did a skewness analysis at the DB level (for each column of each table) using the WIDTH_BUCKET function manually on each column, deleted a number of unnecessary histograms, and created some needed ones on the 10g DB. We then updated the default schema-level gather-stats job to METHOD_OPT => 'FOR ALL COLUMNS SIZE REPEAT', and we had a stable environment.
    Now we have just migrated (it was done by our techops (Oracle) team), and I am confused about how the histograms were changed/modified for columns during the migration; it is causing headaches now. Below are my plans and the fears associated with them. I need advice on them.
    1. I could make the histograms exactly the same in both environments, i.e. delete the histograms that are present in the new DB (11g) but were not in the old DB (10g), and create those that were in the old DB (10g) but are now missing in the new DB (11g). My question is: is it safe to do this, or could it interact badly with the 11g optimizer features now in place?
    2. Or should I first try the easier option of setting METHOD_OPT for the schema-level stats job to 'AUTO' on 11g, on the theory that 11g is more stable and its optimizer is more intelligent about stats gathering with AUTO? I need expert advice on this.
    3. Is there any other way to proceed in this scenario? Please advise.
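    For reference, a minimal way to see which columns currently carry histograms, so that the 10g and 11g inventories can be diffed, might look like the sketch below; the schema and object names are placeholders, not anything from this thread.
    SELECT table_name, column_name, histogram
    FROM   dba_tab_col_statistics
    WHERE  owner = 'APP_OWNER'          -- placeholder schema
    AND    histogram <> 'NONE'
    ORDER  BY table_name, column_name;
    Removing one unwanted histogram while keeping the base column stats would then be a targeted regather (SIZE 1 means "no histogram"):
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname    => 'APP_OWNER',                      -- placeholder
        tabname    => 'SOME_TABLE',                     -- placeholder
        method_opt => 'FOR COLUMNS SOME_COLUMN SIZE 1'  -- drop this column's histogram
      );
    END;
    /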

    933257 wrote:
    Now we have just migrated (it was done by our techops (Oracle) team), and I am confused about how the histograms were changed/modified for columns during the migration; it is causing headaches now. Below are my plans and the fears associated with them. I need advice on them.
    If you're now in production you have a big problem. ANY change you make could make things worse.
    If you haven't done it already, I would get the plan that techops (Oracle) used to do the migration and compare it with the plan used when you tested the upgrade, to see if there are any differences or (possibly) any gaps. Given that you think the problem is the presence of an unexpected number of histograms, I would guess that the test plan DIDN'T say very much about stats gathering after the upgrade, and that somewhere there should have been a note to ensure that any conflicting data-collection jobs in 11g were modified, or disabled, to match what you were doing in 10g.
    In the absence of hard information I would guess that either (a) the automatic overnight stats collection job has not been disabled in 11g, or (b) your weekend task had left a number of parameters in the dbms_stats calls to default and the defaults have changed in the upgrade.
    Step 1: find out what has changed in the implementation
    Step 2: revert to the previous implementation and keep your fingers crossed
    Step 3 (optional): if the stats return to their previous state but you still have performance issues, a short-term fix may be to run with optimizer_features_enable set to its previous value.
    Regards
    Jonathan Lewis
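    For reference, the optimizer_features_enable fallback in step 3 is a one-line setting; a session-level sketch (the value shown is simply the pre-upgrade release mentioned in this thread) might be:
    ALTER SESSION SET optimizer_features_enable = '10.2.0.3';
    -- or system-wide, given the appropriate privileges:
    -- ALTER SYSTEM SET optimizer_features_enable = '10.2.0.3' SCOPE = BOTH;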

  • Testing process for gathering single-object stats

    Hello Oracle Experts,
    I work a critical system and due to some high stakes all and every change is very heavily scrutinized here whatever the level is. And one of such changes which is currently under scrutiny is gathering object stats for single objects. Just to give you a background its an Oracle eBusiness site so fnd_stats is used instead of usual dbms_stats and we've an inhouse job that depending on the staleness of the objects gather stats on them using FND_STATS. (RDBMS : 10.2.0.4 Apps Release 12i).
    Now, we've seen that occasionally it leaves some of the objects that should ideally be gathered so they need to be gathered individually and our senior technical management wants a process around it - for gathering this single object stats (I know!). I think I need to explicitly mention here that this need to gather stale object stats has emerged becs one of the plans has gone pretty poor (from 2 ms to 90 mins) and sql tuning task states that stats are stale and in our PROD copy env (where the issue exists) gathering stats reverts to original good plan! So we are not gathering just because they are stale but instead because that staleness is actually causing a realtime problem!
    Anyway, my point is that it has been gathered multiple times in the past on that object and also it might get gathered anytime by that automatic job (run nightly). There arguments are:
    i. There may be several hundred sql plans depending on that object and we never know how many, and to what, those plan change and it can change for worse causing unexpected issues in the service!
    ii. There may be related objects whose objects have gone stale as well (for example sales and inventory tables both see related amount of changes on column stock_level) and if we gather stats only on one of them and since those 2 cud be highly related (in queries etc.) that may mess up the join cardinality etc. messing up the plans etc.
    Now, you see they know Oracle as well !
    My Oracle (and optimizer knowledge) clearly suggests me that these arguments are baseless BUT want to keep an open mind. So my questions are :
    i. Do the risks highlighted above stand any ground or what probably do you think is there of happening any of the above?
    ii. Any other point that I can make to convince the management.
    iii. Or if those guys are right, Do you guys use or recommend any testing strategy/process that you can suggest to us pls?
    Another interesting point is that, they are not even very clear at this stage how they are gonna 'test' this whole thing as the 'cost' option like RAT (Real Application Testing) is out of question and developing an inhouse testing tool still need analyzing in terms of efforts, worth and reliability.
    In the end, Can I request top experts from the 'Oak Table' network to make a comment so that I can take their backings!? Well I am hoping here they'll back me up but that may not necessarily the case and I obviously want an honest expert assessment of the situation and not merely my backing.
    Thanks so much in advance!

    >
    I work on a critical system, and because of the high stakes, each and every change is very heavily scrutinized here, whatever the level.
    Another interesting point is that they are not even very clear at this stage how they are going to 'test' this whole thing, as a 'cost' option like RAT (Real Application Testing) is out of the question, and developing an in-house testing tool still needs analysis in terms of effort, worth and reliability.
    Unfortunately, your management's opinion of their system as expressed in the first paragraph is not consistent with the opinion expressed in the second paragraph.
    Getting a stable strategy for statistics is not easy, requires careful analysis, and takes a lot of effort for complex systems.
    >
    Finally, can I request that top experts from the 'Oak Table' network comment, so that I can have their backing? I am hoping they'll back me up here, but that may not necessarily be the case; I obviously want an honest expert assessment of the situation, not merely my own position endorsed.
    The ideal with stats collection is to do something simple to start with, and then build in the complex bits that are needed - something along the lines suggested by Dan Morgan works: a table-driven approach to deal with the special cases, which are usually the extreme indexes, the flag columns, the time-based/sequential columns, the occasional histogram, and new partitions. Unfortunately you can't get from where you are to where you need to be without some risk (after all, you don't know which bits of your current strategy are causing problems).
    You may have to progress by letting mistakes happen - in other words, when some very bad plans show up, work out WHY they were bad (missing histogram, excess histogram, out-of-date high values) to work out the minimum necessary fix. Put a defensive measure in place (add it to the table of special cases) and run with it.
    As a direction to aim at: I avoid histograms unless really necessary, I like introducing function-based indexes where possible, and I'm perfectly happy to write small programs to fix column stats (low/high/distinct) or index stats (clustering_factor/blevel/distinct_keys) and create static histograms.
    Remember that Oracle saves old statistics when you create new ones, so any new stats that cause problems can be reversed out very promptly.
    Regards
    Jonathan Lewis
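    As a sketch of the point above about Oracle saving old statistics: since 10g, dbms_stats keeps a history that can be queried and restored from. The owner and table names below are placeholders.
    -- When were stats last changed, and how far back can we restore?
    SELECT table_name, stats_update_time
    FROM   dba_tab_stats_history
    WHERE  owner = 'APP_OWNER';
    -- Put the table's stats back as they were 24 hours ago
    BEGIN
      DBMS_STATS.RESTORE_TABLE_STATS(
        ownname         => 'APP_OWNER',
        tabname         => 'SOME_TABLE',
        as_of_timestamp => SYSTIMESTAMP - INTERVAL '1' DAY);
    END;
    /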

  • How do I do SQL statement tuning?

    I have a basic question.
    After writing a query that satisfies the functional requirements, how do I optimize its performance so that it runs faster?
    I would be grateful if you could describe the query optimization process in detail,
    especially what the query on the plan table should be to see the execution plan in a hierarchical structure.
    Thanking you in advance
    regards,
    Manoj Gokhale

    Hi,
    >
    how do I optimize its performance so that it runs faster.
    http://www.oracle.com/technology/oramag/webcolumns/2003/techarticles/burleson_cbo_pt1.html
    Start by gathering an execution plan (explain plan), and check-out autotrace:
    http://asktom.oracle.com/tkyte/article1/autotrace.html
    - Look for possible unnecessary large-table full-table scans, caused by missing indexes
    - Look for sub-optimal table join orders (see the ORDERED hint)
    - Look for sub-optimal table join methods (nested loops vs. hash join)
    You can adjust the execution plan in many ways:
    - Adding missing indexes or building materialized views (see the 10g SQLAccess Advisor). Here are my notes:
    http://www.dba-oracle.com/art_dbazine_911_pt5.htm
    - Hints (especially table join hints, ordered hint and optimizer mode hints). Only use "good" hints:
    http://www.dba-oracle.com/oracle_news/oracle_faq/faq_tune_hints_list.htm
    - Enhancing optimizer stats (especially missing histograms)
    http://www.dba-oracle.com/art_otn_cbo_p4.htm
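    As a concrete starting point, a minimal explain-plan session might look like the sketch below; the SELECT statement and table are stand-ins. DBMS_XPLAN.DISPLAY does the formatting for you, and the CONNECT BY query is the classic hand-rolled way to see the plan hierarchy that the question asked about.
    EXPLAIN PLAN FOR
    SELECT * FROM emp WHERE deptno = 10;   -- stand-in statement
    -- The easy way: let Oracle format the plan
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    -- The classic hierarchical query against PLAN_TABLE
    SELECT LPAD(' ', 2 * level) || operation || ' ' || options || ' ' || object_name AS plan_step
    FROM   plan_table
    START WITH id = 0
    CONNECT BY PRIOR id = parent_id;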

  • Cost of a query

    When we say the cost of a query is 7, what does this 7 signify?

    burleson wrote:
    Hi Richard,
    There's unfortunately way too much misinformation floating around
    Amen!
    CBO must always pick the plan with lowest cost (from available set of plans). If it doesn't, then this is a bug.
    You are of course absolutely correct.
    No, that's not quite correct, at least according to some folks that you respect:
    No, it's quite correct, and the folks I respect and the quotes you've selected don't contradict my comments in the slightest.
    The CBO will obviously (well, for some anyway) always select the plan that it has calculated to have the lowest cost. And yes, these costs are exactly those that are displayed in explain plans and the like.
    The point you fail to understand is that a plan that the CBO has calculated to have the lowest cost may not necessarily be the most efficient/fastest plan possible, because it may have miscalculated what the true cost should have been (e.g. segment statistics are incorrect, missing histograms, incorrect cardinality estimates, system statistics are incorrect, optimizer parameters are set incorrectly, etc. etc. etc.).
    If the CBO is provided with all the information it needs and it's 100% accurate, it will calculate the correct costs and pick the most efficient execution plan (unless you hit a bug or some such). If it doesn't have accurate enough information, it may calculate incorrect costs and as such select the wrong execution plan. Therefore, understanding what these costs mean, how they're generated by the CBO and (this being the important point) knowing what the correct inputs to the CBO should be, you can hence determine whether or not the selected execution is indeed costed correctly and hence likely to be correct and the most efficient.
    >
    http://jonathanlewis.wordpress.com/2009/06/23/glossary/
    "The “cost” column of an execution plan is the optimizer’s way of expressing the amount of time it will take to complete the query.
    Unfortunately there are defects and deficiencies in the optimizer’s cost model that mean the calculations may fail to produce a reasonable estimate.
    Because of this it is possible for two queries to have the same cost but hugely different execution times; similarly you can have a “low-cost” query that run for ages and a “high-cost” query that completes almost instantly"
    http://jonathanlewis.wordpress.com/2006/12/11/cost-is-time/
    *"Tom Kyte pretty much says if you are using cost to tune different queries, you’re barking up the wrong tree. Cost is not reliable."*
    *"In many cases, cost is not “very reliable” as an indicator;* but if you (a) know what it means, (b) know what makes the optimizer go wrong, and (c) don’t mess too much with poorly understood parameters – then it makes sense to take note of what it is telling you."
    I can't put it any better than your quote from Tom Kyte "if you (a) know what it means, (b) know what makes the optimizer go wrong, and (c) don’t mess too much with poorly understood parameters – then it makes sense to take note of what it is telling you".
    As Jonathan puts it in your other quote, it's possible to have two queries that have the same cost but hugely different execution times. That's because one or both of the costs have been incorrectly calculated by the CBO. For example, the CBO thinks a FTS only has a cost of (say) 10 because it thinks the table only has 1,000 rows when in actual fact it has 1,000,000,000 rows and takes hours to complete. The true cost should have been (say) 100,000, but the CBO got it wrong because the table stats are wrong. Another query has a plan with a greater cost of (say) 50 and runs much faster, because in this case the costs are correct and accurately reflect the work the CBO needs to perform.
    Understanding how the initial cost of 10 is calculated and by appreciating based on ones knowledge of the data that it's not correct, one can determine that the execution plan is likely also wrong, why it's wrong and (this being the important bit) can make informed decisions on how to rectify the problem.
    So yes, the costs in the execution plans are of course the real costs used by the CBO; yes, the CBO will choose the lowest calculated cost; and yes, the costs have meaning and can be useful in determining and resolving problematic execution plans.
    I can't emphasize enough how important it is to have a good understanding of these basic CBO concepts when dealing with an Oracle database.
    I wholeheartedly agree!
    So, how do you explain why these people disagree with your assertions?
    These people I respect of course don't disagree with my assertions, but you need to understand what we're saying to appreciate this fact.
    Richard Foote
    http://richardfoote.wordpress.com/
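    To make the point about incorrect inputs concrete, a quick sanity check along these lines (the table name is a stand-in) compares what the optimizer believes with what is actually there:
    -- What the optimizer believes (from the data dictionary)
    SELECT num_rows, blocks, last_analyzed
    FROM   user_tables
    WHERE  table_name = 'BIG_TABLE';
    -- What is actually there
    SELECT COUNT(*) FROM big_table;
    -- A large gap between NUM_ROWS and COUNT(*) means the costs, and
    -- therefore the chosen plan, were computed from stale inputs.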

  • iPhoto 2.0.1 - Missing Retouch Function & Histogram Adjust

    I have iPhoto 2.0.1 running on a Mac G4 (OS X 10.2.8) that is a gift to a newbie convert from PC.
    My version of iPhoto has a Retouch key in Edit which does nothing.
    I see from reading David Pogue's "The Missing Manual" that there could be a Histogram pane as part of "Adjust", which must be in some upgraded version I don't have in 2.0.1?
    There's mention of the Control key toggling between the before and after. I have neither the Histogram nor the toggle feature. What to do?
    Do I need to update iPhoto to get these features?
    Thanks for any help

    I'm not sure what you're expecting from the retouch function. Since retouch changes tend to be subtle, you probably aren't looking at a sufficiently zoomed area, and depending upon how large a localized area you've stroked, the full adjustment might take a few seconds.
    iPhoto 2 has no histogram adjust feature. While later versions do include it, keep in mind that iPhoto's mission is to organize your photo collection. Other applications are better suited to photo adjustments beyond the very basic options offered in iPhoto.

  • Photoshop color histogram is missing.

    This program has been in use for several years with no problems. When booting up the program today, the color histogram was missing. Several reboot attempts were unsuccessful. The other presentations on the screen were normal. What can be done to restore the color histogram?

    Go to Window in the options panel and click Colour from the drop-down menu...
    regards,
    Herugrim123

  • Lightroom 5.7 on a MacBook Pro (OS X 10.9.5): "file not found" while editing in the Develop module

    I have Lightroom 5.7 and a MacBook Pro running OS X 10.9.5. Occasionally when I am editing a photo in the Develop module, I suddenly get a "file not found" message and the editing ceases to work. There is no exclamation icon below the histogram box in the Library module or on the photo in Grid view, and the histogram is gone. What is going on? Trying to find it using "Find Missing Photos" in the Library menu does nothing.

  • Photo missing in Lightroom

    I am trying to export my photos, but the histogram panel keeps saying the photo is missing.
    How can I resolve this?
    Any help would be appreciated, thanks!

    Adobe Lightroom - Find moved or missing files and folders

  • Histogram fails to show exif info

    I shoot in RAW and import into Lightroom by doing a folder sync. Then I edit my picture using Photoshop CS6 and save it as a JPEG. Finally, I export this image as a JPEG to a subfolder called Publish, which means it's ready for the web.
    The problem is that when I look at the image in Lightroom, the histogram on the right side has no details; it won't show ISO, f-stop or anything.
    Why is all this data gone? If I look at my original RAW file I can see the EXIF info in the histogram, but on my final exported image it's all gone.
    Why is this, and can it be fixed?
    Furthermore, it loses all its tags and metadata; I have to copy the metadata back to this file.
    It would be nice if Lightroom would reinsert the meta info and EXIF once it's re-imported into Lightroom.

    Maybe I'm doing something wrong in my workflow. This problem only appeared recently, when I started using LR4.
    I import my pictures using Nikon Transfer.
    Then I use ViewNX to rename, tag and delete what I consider bad images.
    Next I geotag them using a program called GeoSetter.
    Finally I go into Lightroom 4 and tell it to sync folder 2012.
    It finds the images with no problems.
    I go into my folder, i.e. 2012-05-05.
    Next I stack my images; since I shoot HDR, I usually end up with 5 images that are part of a bracket set.
    Here is where my confusion and problems begin.
    I have a metadata preset already made, "meta info".
    It has my name and all the other details I want each image to have.
    But after I sync my folder, this meta info is not applied.
    Is there a way, after a sync, to make it apply automatically?
    The problem is I tend to forget.
    So I do this manually: I highlight all images in the folder, then on the right side panel I select my "meta info" preset from the drop-down.
    It applies OK.
    Now I start my post work: I edit the file (as original) in Photoshop, do what I need, and then save it as a JPEG, full resolution and full size.
    Now I go back to Lightroom, where the picture I edited appears with my Photoshop edits.
    But it's missing the EXIF info and the metadata too; in fact it's been stripped of all data.
    At this point I don't know what to do.
    So I pick the original RAW file, right-click and select "Metadata Presets" > Copy.
    A box appears that lets me select which fields to copy. I copy.
    Now I select the Photoshop-edited file, right-click and paste the metadata info.
    I double-check, and the info is good; it's all normal again.
    Here is my dilemma:
    if I upload this image to Flickr or the web, it looks different from what it looks like on my computer.
    I don't know why; I'm guessing it's because it has Lightroom adjustments applied to it.
    So my workaround is to export this picture.
    I right-click and choose export to hard drive, and export as a JPEG at full size and resolution.
    Now when I upload this exported JPEG it looks identical to the one on my computer, but there's a serious problem:
    the EXIF and meta info have been stripped.
    I don't get it; I did not tell it to do this.
    So I try to copy the meta info from the original file, but it fails:
    it lets me do it, but the info simply won't stick.
    I open the file in Windows Explorer to verify, and it's NOT read-only.
    So I'm at a loss.

  • I lost the Develop panel on the right side. Only the Histogram shows.

    I am using LR5, and it has been working pretty well for me, but today I may have accidentally clicked on something that resulted in the loss of the Develop panels. Only the Histogram shows. I wish to restore all the missing panels. I was too afraid to check various things in the toolbar (such as Settings, View, etc.), and at any rate the toolbar has now also disappeared. In short, I cannot do any post-processing of my RAW files.
    I am using an Intel Mac, OS X 10.7. Any help would be much appreciated.

    Can you attach a screenshot?
    http://en.wikipedia.org/wiki/Screenshot
    https://support.mozilla.org/kb/how-do-i-create-screenshot-my-problem
    Use a compressed image type like PNG or JPG to save the screenshot.
    See also:
    http://kb.mozillazine.org/Corrupt_localstore.rdf
    Start Firefox in Safe Mode to check whether one of the extensions (Firefox/Tools > Add-ons > Extensions) or hardware acceleration is causing the problem (switch to the DEFAULT theme: Firefox/Tools > Add-ons > Appearance).
    Do NOT click the Reset button on the Safe Mode start window or otherwise make changes.
    https://support.mozilla.org/kb/Safe+Mode
    https://support.mozilla.org/kb/Troubleshooting+extensions+and+themes

  • Visualizing video levels (oscilloscope or histogram)

    I'm an old television engineer, and I'm used to seeing video on an oscilloscope so I can adjust the brightness and contrast to get proper white and black levels. In FCE, it appears I have to eyeball the video and guess about the dynamic range of my content. There's no objective measurement tool. A histogram would help. For example, how can I tell if I'm clipping the whites? Am I missing a simple way to visualize the video levels?
    iMac G5 (B) 20-inch   Mac OS X (10.4)  

    Jeffery, FCE has no scopes of its own. There may be plug-ins; I'm not sure. You can search VersionTracker. I believe Final Cut Pro has scopes; I'm not sure how useful they are.
    Ian, I wish I were in your situation. As a freelance editor for several wedding studios, I am often forced to work with some very sub-par footage. Many of the cameramen involved have fairly high-end, 3-chip cameras ... and I am often struggling with bad focus, bad lighting, bad contrast, etc.
    Eyeballing is how I go about it, but I find FCE's Color Corrector my most useful tool. When I find a shot that actually has the proper contrast and color temp for the scene, I use that (and some tweaking with the mids) to color-match the funkier shots.
    Too bad that high-end equipment does not always equate to a high-end product. Oh well, that is what they pay me for.

  • Issues: missing files, duplicate files, and not enough screen space

    Hello everyone,
    I'm using a 1.67 GHz 15-inch TiBook with 1 GB of RAM.
    I had an issue yesterday with missing files. I was in the rating/keywords panel scrolling through my images; one was a bit soft, so I deleted it using the Command-Delete combo. Somehow all of the images in the project disappeared. They hadn't been moved to the trash; I looked, and looked, and looked. I couldn't find them using a search, and it should have been easy: they were the first images that had been imported into Aperture.
    So, since I had copied the 70 files to my hard drive first, I re-imported them. Couldn't see those either.
    Deleted the project and re-imported again, worked fine, and I couldn't reproduce the problem.
    When I went to export five images (all the images had been renamed with a sequential suffix: Smith_01, Smith_02, Smith_03...) to my desktop so I could copy them to our server, I remembered that I needed to update a caption on one image. (Could that caption box be any smaller?) Anyway, I exported again; I meant to export only the individual image but exported all five again. Instead of telling me there were duplicate files on the desktop, it increased the sequential number of each duplicate file by 1. So now, instead of one file of Smith_01, Smith_08, Smith_33, etc., I have the original, and the Smith_01 file now has a matching image slugged Smith_02 on the desktop as well, along with all the others that had a match with a suffix one digit higher. And of course back in the project there is a Smith_02 that is different.
    If I were to copy different files with the same name to other folders or our archive server, one of those images could easily be discarded or copied over by the file with the duplicate name.
    Also, when importing I tried to rename my files and add a generic caption. I imported the same images a few times; the first time I couldn't scroll down to see the caption box and metadata, the second time I could scroll and add the metadata below, and the third time I couldn't scroll down. Very weird!
    And lastly, is there any way to show just the adjustments or just the metadata boxes on the Adjustments & Filters layout? The boxes are so small already, I'd like to be able to at least use the whole side of the screen for one box. I'd really like to be able to put them where I want them. The adjustment HUD works this way in full screen, but I want to get rid of the strip of thumbnails in the full screen and just have the HUD and the image, nothing else.
    Any insights would be great.
    Thanks,
    Craig

    Hello everyone,
    <...>
    I had an issue yesterday with missing files. I was in the rating/keywords panel scrolling through my images; one was a bit soft, so I deleted it using the Command-Delete combo. Somehow all of the images in the project disappeared. They hadn't been moved to the trash; I looked, and looked, and looked. I couldn't find them using a search, and it should have been easy: they were the first images that had been imported into Aperture.
    I think they were in the trash - as far as I know search does not work within the trash itself.
    Nope, not in the trash; that's the first place I looked. I also found that deleting a project is the fastest way to export the masters. Just pull them out of the trash can.
    They should have been moved to the trash folder, inside an Aperture directory, and then from there inside of a folder with the name of the project the image was in.
    Alternately (and I'm not sure how this might have happened) they may have been marked as "rejected". All views have a default search criterion that starts with "unrated or higher", but rejected photos are lower than this and are not seen. By default there is a smart folder at the top of your library that shows unrated pictures; you might check to see if they are there.
    I'll double check the shortcut for rejects, this is the only plausible explanation, but it doesn't explain why the second import of images couldn't be found either.
    So, since I had copied the 70 files to my hard drive first, I re-imported them. Couldn't see those either.
    Deleted the project and re-imported again, worked fine, and I couldn't reproduce the problem.
    Well, I'm sorry to say I have no theories to offer there. Just glad it worked eventually!
    When I went to export five images (all the images had been renamed with a sequential suffix: Smith_01, Smith_02, Smith_03...) to my desktop so I could copy them to our server, I remembered that I needed to update a caption on one image. (Could that caption box be any smaller?) Anyway, I exported again; I meant to export only the individual image but exported all five again. Instead of telling me there were duplicate files on the desktop, it increased the sequential number of each duplicate file by 1. So now, instead of one file of Smith_01, Smith_08, Smith_33, etc,
    <...>
    Yes, Aperture will refuse to overwrite any file ever - and will always add those other numbers if it encounters a file with the same name as the one it is trying to generate!
    I agree that this is a good thing, but there should be an alert of some kind if you have a duplicate. The finder certainly tells you if you are trying to copy a duplicate file to the same location. Adding an increment of one to the sequential suffix without an alert is very poor planning. On deadline trying to get files into our system I could have easily overlooked the duplicates and overwritten other files with the same name.
    The only way to work around that is either to delete all files from the directory before you export from Aperture, or to export to an empty directory and copy them over yourself.
    The work around is easily figured out, but like some other things I shouldn't have to work around it.
    <...>
    And lastly, is there any way to show just the adjustments or just the metadata boxes on the Adjustments & Filters layout? The boxes are so small already, I'd like to be able to at least use the whole side of the screen for one box. I'd really like to be able to put them where I want them. The adjustment HUD works this way in full screen, but I want to get rid of the strip of thumbnails in the full screen and just have the HUD and the image, nothing else.
    Well, another option there would be to drag the divider between the metadata and the adjustments all the way up (as much as you can anyway; the histogram will still be visible).
    That would be wonderful, but I'm on a 15-inch TiBook and there isn't enough space to do that. The divider shows up but you can't move it. Ideally you should be able to view just the metadata or just the adjustments if you so choose. Binding them together is stupid: if I'm making adjustments, I don't care what is happening in the metadata screen, and when I'm adding captions and changing metadata I don't need to see the adjustments screen. Give me a metadata template like the IPTC screen in Photo Mechanic that I can save and apply, and put it on the screen where I want it.
    Then to see adjustments just use the adjustment HUD - key "H" to bring it up or dismiss it. It will stay where you put it when you make it go away and come back, and you can have it wherever you like on the screen, unlike the inspector.
    I'd love to do that, but it can't be done on a 15-inch laptop screen.
    For maximum space value, you could arrange the Adjustments HUD to exactly overlay the space where the Inspector is - then you can press "I" to make the inspector come and go, and "H" for the Adjustments HUD, all in the same space.
    For maximum space on adjustments I'm using the full screen with the HUD. At least I can then move the HUD where I want it to be, not anchored on the right side.
    In general, look over the keyboard shortcuts, as there are a lot of keys to make panels come up and go away quickly (one key, no modifiers), which really helps to maximize use of screen space.
    I'm still learning them and I'm sure that will help, but the adjustments panel and the metadata panel have to be changed to make the app more usable on a laptop, in my opinion. Maybe it's fine on a large monitor; I wouldn't know, as I can't load it on my G5 desktop without a hack. And so far I'd rather use Photo Mechanic for sorting and captioning; it's much, much faster for newspaper work.

  • Statistics gathering in 10g - Histograms

    I went through some articles on the web as well as in the forum regarding stats gathering, which I have linked here.
    http://structureddata.org/2008/03/26/choosing-an-optimal-stats-gathering-strategy/
    In the above post author mentions that
    "It may be best to change the default value of the METHOD_OPT via DBMS_STATS.SET_PARAM to 'FOR ALL COLUMNS SIZE REPEAT' and gather stats with your own job. Why REPEAT and not SIZE 1? You may find that a histogram is needed somewhere and using SIZE 1 will remove it the next time stats are gathered. Of course, the other option is to specify the value for METHOD_OPT in your gather stats script"
    The following is a post from the Oracle forums.
    Statistics
    In the above post Mr Lewis mentions adding
    method_opt => 'for all columns size 1' to the DBMS job
    And in the same forum post Mr Richard Foote has mentioned that
    "Not only does it change from 'FOR ALL COLUMNS SIZE 1' (no histograms) to 'FOR ALL COLUMNS SIZE AUTO' (histograms for those tables that Oracle deems necessary based on data distribution and whether sql statements reference the columns), but it also generates a job by default to collect these statistics for you.
    It all sounds like the ideal scenario, just let Oracle worry about it for you, except for the slight disadvantage that Oracle is not particularly "good" at determining which columns really need histograms and will likely generate many many many histograms unnecessarily while managing to still miss out on generating histograms on some of those columns that do need them."
    http://richardfoote.wordpress.com/2008/01/04/dbms_stats-method_opt-default-behaviour-changed-in-10g-be-careful/
    Our environment: Windows 2003 Server, Oracle 10.2.0.3, 64-bit.
    We use the following script for our analyze job.
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname     => 'USERNAME',
        tabname     => 'TABLE_NAME',
        method_opt  => 'FOR ALL COLUMNS SIZE AUTO',
        granularity => 'ALL',
        cascade     => TRUE,
        degree      => DBMS_STATS.DEFAULT_DEGREE);
    END;
    This analyze job runs a long time (8 hrs), and we are also facing performance issues in the production environment.
    Here are my questions
    What option should I use for the method_opt parameter?
    I am sure there are no hard and fast rules for this, and each environment is different.
    But reading all the above posts has made me somewhat confused, and I want to be sure we are using the correct options.
    I would appreciate any suggestions, insight or further readings regarding the same.
    Appreciate your time
    Thanks
    Niki

    Niki wrote:
    I went through some articles on the web as well as in the forum regarding stats gathering, which I have linked here.
    http://structureddata.org/2008/03/26/choosing-an-optimal-stats-gathering-strategy/
    In the above post author mentions that
    "It may be best to change the default value of the METHOD_OPT via DBMS_STATS.SET_PARAM to 'FOR ALL COLUMNS SIZE REPEAT' and gather stats with your own job. Why REPEAT and not SIZE 1? You may find that a histogram is needed somewhere and using SIZE 1 will remove it the next time stats are gathered. Of course, the other option is to specify the value for METHOD_OPT in your gather stats script"
    This analyze job runs a long time (8 hrs), and we are also facing performance issues in the production environment.
    Here are my questions
    What is the option I should use for method_opt parameter?
    I am sure there are no hard and fast rules for this and each environment is different.
    But reading all the above posts has made me somewhat confused, and I want to be sure we are using the correct options.
    As the author of one of the posts cited, let me make some comments. First, I would always recommend starting with the defaults. All too often people "tune" their dbms_stats call only to make it run slower and gather less accurate stats than if they did absolutely nothing and let the default autostats job gather stats in the maintenance window. With your dbms_stats command, I would comment that granularity => 'ALL' is rarely needed and certainly adds to the stats collection times. Also, if the data has not changed enough, why recollect stats? This is the advantage of using options => 'GATHER STALE'. You haven't mentioned what kind of application your database is used for: OLTP or data warehouse. If it is OLTP and the application uses bind values, then I would recommend disabling histograms or collecting them manually (bind peeking and histograms should not be used together in 10g) using size 1 or size repeat. Histograms can be very useful in a DW, where skew may be present.
    The one non-default option I find myself using is degree => dbms_stats.auto_degree. This allows dbms_stats to choose a DOP for the gather based on the size of the object. This works well if you don't want to specify a fixed degree, or if you would like dbms_stats to use a different DOP than the one the table is decorated with.
    Hope this helps.
    Regards,
    Greg Rahn
    http://structureddata.org
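    As a sketch of the two suggestions above (gather only stale stats, and let dbms_stats pick the degree), a schema-level job along these lines might look like this; the schema name is a placeholder:
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname => 'APP_OWNER',              -- placeholder schema
        options => 'GATHER STALE',           -- only re-gather objects whose data changed enough
        degree  => DBMS_STATS.AUTO_DEGREE);  -- DOP chosen per object size
    END;
    /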

  • Help with histogram related query

    Hi there,
    I'm trying to collect data through a query to draw a histogram relating lag count and user count.
    Here's my query,
    select (Y.lag_range / 10) * 10 as range_start,
           (((Y.lag_range / 10) + 1) * 10) - 1 as range_end,
           sum(Y.user_count) as user_count
    from (
      SELECT a.lag AS lag_range, COUNT(DISTINCT a.user_id) AS user_count
      FROM testobj a INNER JOIN testobj b
        ON a.user_id = b.user_id
      GROUP BY a.lag
    ) Y
    group by ((Y.lag_range / 10) * 10), (((Y.lag_range / 10) + 1) * 10) - 1
    The inner query gets the different lag counts and user counts. The outer query tries to group the lag count into ranges like 0-9, 10-19, 20-29 and so on.
    In my query, the outer query doesn't seem to have any impact and still gives the same answer as the inner one.
    I get something like this.
    RANGE_START  RANGE_END  USER_COUNT
              0          9           2
              1         10           1
              2         11           2
              3         12           1
    when the actual output should be:
    RANGE_START  RANGE_END  USER_COUNT
              0          9           6  (2 + 1 + 2 + 1)
             10         19           6
             20         29           5
             30         39           4
    Could someone tell me what I am missing?
    I appreciate any suggestion.
    Thanks,

    Hi,
    Whenever you have a question, you should post a little sample data and the results you want from that data.
    MyNewWorld wrote:
    I'm trying to collect data through a query to draw a histogram relating lag count and user count.
    Here's my query,
    select (Y.lag_range /10) * 10 as range_start,
    (((Y.lag_range/10) + 1) * 10) -1 as range_end,
    Could someone tell me what I am missing?
    You might be missing a call to TRUNC.
    (x / 10) * 10
    is just the same as
    x
    If you want to round to the next multiple of 10 toward 0, you should say
    TRUNC (x / 10) * 10
    In some languages, dividing one integer by another always returns an integer. SQL is not one of those languages. For example, if you divide 6 by 10 in SQL, the result is .6, and multiplying that by 10 brings you back to 6. TRUNC (6 / 10), however, returns 0 (because 0 is the next integer towards 0 from .6).
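    Applied to the posted query, the fix might look like the sketch below. It assumes, from the thread, that testobj has columns lag and user_id; the self-join in the original does not appear necessary for this bucketing, so it is dropped here.
    SELECT TRUNC(lag / 10) * 10       AS range_start,
           TRUNC(lag / 10) * 10 + 9   AS range_end,
           COUNT(DISTINCT user_id)    AS user_count
    FROM   testobj
    GROUP  BY TRUNC(lag / 10)
    ORDER  BY range_start;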
