H.264 Droplet producing poor results in Compressor 4

I was happy to see my Compressor droplet library updated with the new version icon, but when I ran some tests the exact same settings (via droplet) produced noticeably different results. Compressor 4 is consistently distorting moving high-contrast edges. Am I the only one seeing this? Any remedies besides avoiding Compressor 4 altogether?
(Screenshots: Compressor 4 output vs. Compressor 3 output)

It appears restarting my machines after the App Store install cleared up the issue. 

Similar Messages

  • Poor result using Compressor

    I need help. I use a Sony camera (HDR CX560) bought in the US with NTSC as standard. I live in Europe and would like to burn a DVD for my Blu-ray player, showing it on my PAL TV.
    The footage is 1080i, 1920x1080, 29.97 fps.
    I have tried several different ways of achieving good results, but with no luck:
    1. Editing in Final Cut Pro with the original specs (1080i, 29.97). The result shown in the FCP viewer is not perfect, but quite good - a little bit of stuttering and slightly blurry pictures. I tried "Send to Compressor...", selected MPEG-2 for DVD in Compressor (customized to 16:9 TV format). The process stops after a minute - a file is created, but only the first 30 seconds of the movie are visible. I also tried Share > Export File using Compressor settings. The file created does not have acceptable quality, mostly because of very disturbing stripes. I also tried Share > Master File, but the output is not good enough (using the Apple ProRes 422 (HQ) codec) - same stripes.
    2. Editing in Final Cut Pro with the European standard (1080i, 25). The result in the FCP viewer is good with one exception - a lot of stuttering while panning. I tried all the output/share alternatives mentioned above, with poor results.
    I use OS X 10.8.3, Final Cut Pro X 10.0.8 and Compressor 4.0.7.
    Can anyone help?

    pnils wrote:
    Created a project with the original settings (1080i, 1920x1080, 29.97 fps).
    FCP is usually pretty bulletproof in handling formats – and most of the time letting it set the project properties is the best way.
    From what you've said about the camera and the footage, you shouldn't encounter any problems until after you export and start the standards conversion.
    Create another project. When the new project dialogue window appears, make sure the settings are set to automatic (video properties set by first clip). Take one of the 1080i 60 clips and edit it to the timeline to set those properties. Export that clip as a test. If it plays smoothly, and it should, proceed to delete the clip from the project and paste the problem sequence to the new project. Delete all project render files and event render files. Now export (don't bother to re-render).
    Post back with results.
    Russ

  • Poor results importing PSE7 slideshows into PRE7

    I originally posted this query into the Photoshop Elements forum ('cos I couldn't find this one originally...) and received some very helpful replies from Barb__O - thanks Barb.  These have given me some useful leads on things to try, which I've done and did see some improvements, but I'm still not at the point of being happy with the results.  The problem appears to be a PRE problem, so reposting here...
    I am trying to put together a video presentation for output directly to PC monitor (more specifically, LCD data projector of 1024 x 768 resolution) and then record to DVD, in PAL SD format.
    The presentation is to contain slideshows put together in PSE7 (so they already include title slides, transitions, Ken Burns effects and music tracks) and video clips from a couple of different sources - one clip is a scanned Standard 8 film (format is a frame-to-frame AVI at 720 x 576 resolution ie SD PAL) and some clips from a 1440 x 1080 HD (H.264) video camera (Canon HF10).  Reading through some of the forums, I'm doubting that PRE7 will be able to handle this variety of input formats...
    The still images come from many sources - digital camera, scanned prints (B&W and colour) and scanned transparencies.  All images were dropped into the PSE7 slideshow in their respective original sizes/resolutions.  There are close to 100 images in the slideshow.  Apart from a few scanned B&W print enlargements that were spat out by PSE7 because they were too large (other photo editors had no problem with them - seems PSE is a little precious in this regard...) the process of putting the slide show together went pretty smoothly.
    Playing the finished slideshow in the PSE7 preview window, the results were excellent.  Exporting the slideshow as a 1024 x 768 WMV file and playing it in any media player yielded the same, excellent results!  So far, so good.
    I then wanted to put everything together using PRE7 - slideshows, video clips and titles.  From PSE7 I clicked the "send slideshow to Premiere Elements" at which point the slideshow was dutifully dropped into my currently opened PRE7 project.  As the project already contained the aforementioned scanned Standard 8 clip, the project setting was PAL DV 48kHz.  I was feeling exuberant at this stage.  I added a couple of title slides and a transition or two and played the result in the PRE7 full screen preview window...
    I was absolutely horrified with what I saw when the slideshow started to play.  All of the still images were badly pixellated, had jagged edges and any vertical wipe transitions had a horrible, angular ripple effect.  In short, unwatchable.
    As I had to get the presentation done and was clearly getting nowhere with PRE7, I exported all my slideshows out of PSE7 as WMVs, my video clip out of PRE7 as an AVI and dropped everything into Windows Movie Maker, with excellent results on the PC monitor and the LCD projector.  Clearly there was/is a problem with PRE7 or one or more of its esoteric settings.
    I now want to add another slideshow and the HD video clips mentioned above to the presentation and then burn to DVD.  I would like to use PRE7 to do this, as I can set up menu markers where I want and I want to make finer adjustments to relative positions of stills and video than Movie Maker will allow - but obviously this is all dependent on resolving the quality problems.
    After suggestions given by Barb, I played around with the PRE7 project setting and tried various options here, e.g. HDV, SVCD, etc. These certainly improved the quality of the still images in the slideshows (to the point where I was happy with them), but the transitions were still ugly and the pan/zoom effects became jerky and distracting. Not only that, but the music track that I had synchronised with the slideshow, with a nice fade out at the end, just ended abruptly - again distracting and not at all good.
    To be honest I'm confused with that project setting - should this be set to the format I want the finished project output to end up in, or is it set to the format of video input to the project?  If the latter, how does it deal with different format video clips?
    I've also read about the recommendation to set all still images to 1000 x 750 pixels - this would have been fine if I'd known it *before* I put together the slideshows; I don't want to have to re-do them all again (or can I re-size the images while they are actually *in* the slideshow?)!  Besides, PSE7 itself and Windows Movie Maker handle images larger than this quite happily.  Why can't PRE7?  Also, how is this resolved for large portrait-orientated images?  What is the primary dimension (width or height)?
    As an aside, I tried re-sizing a single image to 1000 x 750 and dropping it straight into PRE7, but I still get poor results (i.e. a grainy, pixellated image)!  ("Scale to framesize" is off.)
    So am I flogging a dead horse here or can I resurrect things so that I can end up with a reasonably professional looking result?  I really want to love these two products, they do some things really well, but so far, my overall user experience has been one of complete frustration and disappointment.  The user interfaces of these two products have to be about the worst I've come across!
    Thanks.

    Steve,
    FYI - I think that MK had already tried sending directly from PSE 7 to PRE 7 - see his mention of:
    I then wanted to put everything together using PRE7 - slideshows, video
    clips and titles.  From PSE7 I clicked the "send slideshow to Premiere
    Elements" at which point the slideshow was dutifully dropped into my
    currently opened PRE7 project.  As the project already contained the
    aforementioned scanned Standard 8 clip, the project setting was PAL DV
    48kHz.
    One of my concerns about the specific scenario is that the PE project setting was PAL SD video and the expected projector resolution was thought to be 1024x768 (and therefore higher resolution). Even with Scale to Framesize off, I was not sure if that resolution difference would be a problem.
    Your suggestion is to make the PAL SD DVD and then use a software DVD player to play that DVD to the 1024x768 projector.   That is a different approach that I had not considered -- but it would probably avoid some of the awkwardness of dealing with two distinct types and resolutions of output.
    MK,
    Your earlier post (if I understood it correctly) said that after the presentation using the projector, you would be making DVDs for distribution. Now I suspect that these would be standard definition DVDs and probably PAL.
    I recommend that you confirm in this thread what your current plans are for distribution and playback.
    Once the best fit for a PE project type is resolved, I do think it is worth another try at sending the slideshow from the PSE 7 slide show editor to the PE 7 project.
    Yes, you may experience some timing differences between the playback you saw in the PSE slide show editor and what you see in the playback under Premiere Elements (note Steve's point that you should render in PE before doing playback).   Based on what you observe for your specific slide show, these differences can be discussed.
    ADDITIONAL comments
    played the result in the PRE7 full screen preview window...
    I missed this earlier.  Because your computer monitor is probably much higher resolution, I question whether this full screen preview is an effective evaluation of the final quality. Steve and Hunt, what are your comments on this?
    MK
    can I re-size the images while they are actually *in* the slideshow?)!
    Besides, PSE7 itself and Windows Movie Maker handle images larger than
    this, quite happily.
    re-size images while they are actually in the slide show -- maybe but not easy
    1 -- When in the PSE slide show or from the PE Timeline, you can edit an individual photo and replace the existing photo with the results of that edit. However, that is a one-at-a-time operation and most probably not what you want.  It is better suited to adjusting a specific photo when you determine it needs to be different for this slide show.
    2 -- Once your slide show is on the Premiere Elements Timeline, a few people have swapped out a folder of photos and brought in a different folder of downsized copies of the same photos. This is tricky but can probably be done; it is simpler if all photos are in the same folder. Also, portrait photos probably need to be handled separately from landscape photos.
    FYI - PSE 7 slide show can sometimes handle larger images and you did not have a problem - but others do have problems and you might in the future. This seems to depend on both the specifics of the photo files and the computer system configuration.

  • Filter expression producing different results after upgrade to 11.1.1.7

    Hello,
    We recently did an upgrade and noticed that on a number of reports where we're using the FILTER expression the numbers are very inflated. Where we are not using the FILTER expression the numbers are as expected. In the example below we ran the 'Bookings' report in 10g and came up with one number, then ran the same report in 11g (11.1.1.7.0) after the upgrade and got a different result. The data source is the same database for each environment. Also, when running the physical SQL generated by the 10g and 11g versions of the report, we get the inflated numbers from the 11g SQL. Any ideas on what might be happening or causing the issue?
    10g report: 2016-Q3......Bookings..........72,017
    11g report: 2016-Q3......Bookings..........239,659
    This is the simple FILTER expression that is being used in the column formula on the report itself for this particular scenario which produces different results in 10g and 11g.
    FILTER("Fact - Opportunities"."Won Opportunity Amount" USING ("Opportunity Attributes"."Business Type" = 'New Business'))
    -------------- Physical SQL created by 10g report -------- results as expected --------------------------------------------
    WITH
    SAWITH0 AS (select sum(case when T33142.OPPORTUNITY_STATUS = 'Won-closed' then T33231.USD_LINE_AMOUNT else 0 end ) as c1,
    T28761.QUARTER_YEAR_NAME as c2,
    T28761.QUARTER_RANK as c3
    from
    XXFI.XXFI_GL_FISCAL_MONTHS_V T28761 /* Dim_Periods */ ,
    XXFI.XXFI_OSM_OPPTY_HEADER_ACCUM T33142 /* Fact_Opportunity_Headers(CloseDate) */ ,
    XXFI.XXFI_OSM_OPPTY_LINE_ACCUM T33231 /* Fact_Opportunity_Lines(CloseDate) */
    where ( T28761.PERIOD_NAME = T33142.CLOSE_PERIOD_NAME and T28761.QUARTER_YEAR_NAME = '2012-Q3' and T33142.LEAD_ID = T33231.LEAD_ID and T33231.LINES_BUSINESS_TYPE = 'New Business' and T33142.OPPORTUNITY_STATUS <> 'Duplicate' )
    group by T28761.QUARTER_YEAR_NAME, T28761.QUARTER_RANK)
    select distinct SAWITH0.c2 as c1,
    'Bookings10g' as c2,
    SAWITH0.c1 as c3,
    SAWITH0.c3 as c5,
    SAWITH0.c1 as c7
    from
    SAWITH0
    order by c1, c5
    -------------- Physical SQL created by the same report as above but in 11g (11.1.1.7.0) -------- results much higher --------------------------------------------
    WITH
    SAWITH0 AS (select sum(case when T33142.OPPORTUNITY_STATUS = 'Won-closed' then T33142.TOTAL_OPPORTUNITY_AMOUNT_USD else 0 end ) as c1,
    T28761.QUARTER_YEAR_NAME as c2,
    T28761.QUARTER_RANK as c3
    from
    XXFI.XXFI_GL_FISCAL_MONTHS_V T28761 /* Dim_Periods */ ,
    XXFI.XXFI_OSM_OPPTY_HEADER_ACCUM T33142 /* Fact_Opportunity_Headers(CloseDate) */ ,
    XXFI.XXFI_OSM_OPPTY_LINE_ACCUM T33231 /* Fact_Opportunity_Lines(CloseDate) */
    where ( T28761.PERIOD_NAME = T33142.CLOSE_PERIOD_NAME and T28761.QUARTER_YEAR_NAME = '2012-Q3' and T33142.LEAD_ID = T33231.LEAD_ID and T33231.LINES_BUSINESS_TYPE = 'New Business' and T33142.OPPORTUNITY_STATUS <> 'Duplicate' )
    group by T28761.QUARTER_YEAR_NAME, T28761.QUARTER_RANK),
    SAWITH1 AS (select distinct 0 as c1,
    D1.c2 as c2,
    'Bookings2' as c3,
    D1.c3 as c4,
    D1.c1 as c5
    from
    SAWITH0 D1),
    SAWITH2 AS (select D1.c1 as c1,
    D1.c2 as c2,
    D1.c3 as c3,
    D1.c4 as c4,
    D1.c5 as c5,
    sum(D1.c5) as c6
    from
    SAWITH1 D1
    group by D1.c1, D1.c2, D1.c3, D1.c4, D1.c5)
    select D1.c1 as c1, D1.c2 as c2, D1.c3 as c3, D1.c4 as c4, D1.c5 as c5, D1.c6 as c6 from ( select D1.c1 as c1,
    D1.c2 as c2,
    D1.c3 as c3,
    D1.c4 as c4,
    D1.c5 as c5,
    sum(D1.c6) over () as c6
    from
    SAWITH2 D1
    order by c1, c4, c3 ) D1 where rownum <= 2000001
    Thank you,
    Mike
    Edited by: Mike Jelen on Jun 7, 2013 2:05 PM

    Thank you for the info. They are definitely different values, since one is on the header and the other is on the lines. As the "Won Opportunity" logical column is mapped to multiple LTS, it appears OBIEE 11g uses a different algorithm than 10g to determine the most efficient table to use in the query generation. I'll need to spend some time researching the impact of adding a 'sort' to the LTS. I'm hoping that there's a way to get OBI to use logic similar to 10g in how it determines the table priority.
    Thx again,
    Mike
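    For reference, a minimal sketch of the fan-out (reusing the table and column names from the generated SQL above purely as an illustration, not the report's actual logical SQL): summing the header-level amount while the lines table is still in the join repeats that amount once per line, which inflates the total.
    SELECT SUM(h.TOTAL_OPPORTUNITY_AMOUNT_USD) AS inflated_total,  -- header amount counted once per line (11g plan)
           SUM(l.USD_LINE_AMOUNT)              AS expected_total   -- true line-level amounts (10g plan)
    FROM   XXFI.XXFI_OSM_OPPTY_HEADER_ACCUM h,
           XXFI.XXFI_OSM_OPPTY_LINE_ACCUM   l
    WHERE  h.LEAD_ID = l.LEAD_ID
    AND    l.LINES_BUSINESS_TYPE = 'New Business';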

  • SQL Query produces different results when inserting into a table

    I have an SQL query which produces different results when run as a simple query compared to when it is run as an INSERT INTO table SELECT ...
    The query is:
    SELECT   mhldr.account_number
    ,        NVL(MAX(DECODE(ap.party_sysid, mhldr.party_sysid,ap.empcat_code,NULL)),'UNKNWN') main_borrower_status
    ,        COUNT(1) num_apps
    FROM     app_parties ap, (
    SELECT   accsta.account_number
    ,        actply.party_sysid
    ,        RANK() OVER (PARTITION BY actply.table_sysid, actply.loanac_latype_code ORDER BY start_date, SYSID) ranking
    FROM     activity_players actply
    ,        account_status accsta
    WHERE    1 = 1
    AND      actply.table_id (+) = 'ACCGRP'
    AND      actply.acttyp_code (+) = 'MHLDRM'
    AND      NVL(actply.loanac_latype_code (+),TO_NUMBER(SUBSTR(accsta.account_number,9,2))) = TO_NUMBER(SUBSTR(accsta.account_number,9,2))
    AND      actply.table_sysid (+) = TO_NUMBER(SUBSTR(accsta.account_number,1,8))
    ) mhldr
    WHERE    1 = 1
    AND      ap.lenapp_account_number (+) = TO_NUMBER(SUBSTR(mhldr.account_number,1,8))
    GROUP BY mhldr.account_number;
    The INSERT INTO code:
    TRUNCATE TABLE applicant_summary;
    INSERT /*+ APPEND */
    INTO     applicant_summary
    (  account_number
    ,  main_borrower_status
    ,  num_apps )
    SELECT   mhldr.account_number
    ,        NVL(MAX(DECODE(ap.party_sysid, mhldr.party_sysid,ap.empcat_code,NULL)),'UNKNWN') main_borrower_status
    ,        COUNT(1) num_apps
    FROM     app_parties ap, (
    SELECT   accsta.account_number
    ,        actply.party_sysid
    ,        RANK() OVER (PARTITION BY actply.table_sysid, actply.loanac_latype_code ORDER BY start_date, SYSID) ranking
    FROM     activity_players actply
    ,        account_status accsta
    WHERE    1 = 1
    AND      actply.table_id (+) = 'ACCGRP'
    AND      actply.acttyp_code (+) = 'MHLDRM'
    AND      NVL(actply.loanac_latype_code (+),TO_NUMBER(SUBSTR(accsta.account_number,9,2))) = TO_NUMBER(SUBSTR(accsta.account_number,9,2))
    AND      actply.table_sysid (+) = TO_NUMBER(SUBSTR(accsta.account_number,1,8))
    ) mhldr
    WHERE    1 = 1
    AND      ap.lenapp_account_number (+) = TO_NUMBER(SUBSTR(mhldr.account_number,1,8))
    GROUP BY mhldr.account_number;
    When run as a query, this code consistently returns 2 for the num_apps field (for a certain group of accounts), but when run as an INSERT INTO command, the num_apps field is logged as 1. I have secured the tables used within the query to ensure that nothing is changing the data in the underlying tables.
    If I run the query as a cursor for loop with an insert into the applicant_summary table within the loop, I get the same results in the table as I get when I run as a stand alone query.
    I would appreciate any suggestions for what could be causing this odd behaviour.
    Cheers,
    Steve
    Oracle database details:
    Oracle Database 10g Release 10.2.0.2.0 - Production
    PL/SQL Release 10.2.0.2.0 - Production
    CORE 10.2.0.2.0 Production
    TNS for 32-bit Windows: Version 10.2.0.2.0 - Production
    NLSRTL Version 10.2.0.2.0 - Production
    Edited by: stevensutcliffe on Oct 10, 2008 5:26 AM
    Edited by: stevensutcliffe on Oct 10, 2008 5:27 AM

    stevensutcliffe wrote:
    Yes, using COUNT(*) gives the same result as COUNT(1).
    I have found another example of this kind of behaviour:
    Running the following INSERT statements produce different values for the total_amount_invested and num_records fields. It appears that adding the additional aggregation (MAX(amount_invested)) is causing problems with the other aggregated values.
    Again, I have ensured that the source data and destination tables are not being accessed/changed by any other processes or users. Is this potentially a bug in Oracle?
    Just as a side note, these are not INSERT statements but CTAS statements.
    The only non-bug explanation for this behaviour would be a potential query rewrite happening only under particular circumstances (but not always) in the lower integrity modes "trusted" or "stale_tolerated". So if you're not aware of any corresponding materialized views, your QUERY_REWRITE_INTEGRITY parameter is set to the default of "enforced" and your explain plan doesn't show any "MAT_VIEW REWRITE ACCESS" lines, I would consider this as a bug.
    Since you're running on 10.2.0.2 it's not unlikely that you hit one of the various "wrong result" bugs that exist(ed) in Oracle. I'm aware of a particular one I've hit in 10.2.0.2 when performing a parallel NESTED LOOP ANTI operation which returned wrong results, but only in parallel execution. Serial execution was showing the correct results.
    If you're performing parallel ddl/dml/query operations, try to do the same in serial execution to check if it is related to the parallel feature.
    You could also test if omitting the "APPEND" hint changes anything but still these are just workarounds for a buggy behaviour.
    I suggest you consider installing the latest patch set 10.2.0.4, but this requires thorough testing because there were (more or less) subtle changes/bugs introduced with [10.2.0.3|http://oracle-randolf.blogspot.com/2008/02/nasty-bug-introduced-with-patch-set.html] and [10.2.0.4|http://oracle-randolf.blogspot.com/2008/04/overview-of-new-and-changed-features-in.html].
    You could also open an SR with Oracle and clarify whether there is already a one-off patch available for your 10.2.0.2 platform release. If not, it's quite unlikely that you are going to get a backport for 10.2.0.2.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
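    As a quick way to act on the suggestions above, a hedged sketch (column name from the original post; this only rules parallel execution and the direct-path insert in or out, it is not a fix):
    -- Disable parallel execution for the session, then re-run the INSERT from the
    -- original post without the /*+ APPEND */ hint and compare num_apps against
    -- the stand-alone SELECT.
    ALTER SESSION DISABLE PARALLEL DML;
    ALTER SESSION DISABLE PARALLEL QUERY;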

  • Spatial Queries Not Always Producing Accurate Results

    Hi,
    Spatial queries are not always producing accurate results. Here are the issues. We would appreciate any clarification you could provide to resolve these issues.
    1. When querying for points inside a polygon that is not an MBR (minimum bounded rectangle), some of the coordinates returned are not inside the polygon. It is as though the primary filter is working, but not the secondary filter when using sdo_relate. How can we validate that the spatial query using sdo_relate is using the secondary filter?
    2. SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT returns true when validating geometries even though we find results that are invalid.
    3. Illegal geodetic coordinates can be inserted into a table: latitude > 90.0, latitude < -90.0, longitude > 180.0 or longitude < -180.0.
    4. Querying for coordinates outside the MBR for the world where illegal coordinates existed did NOT return any rows, yet there were coordinates of long, lat: 181,91.
    The following are examples and information relating to the above-referenced points.
    select * from USER_SDO_GEOM_METADATA
    TABLE_NAME      COLUMN_NAME      DIMINFO(SDO_DIMNAME, SDO_LB, SDO_UB, SDO_TOLERANCE)      SRID
    LASTKNOWNPOSITIONS      THE_GEOM SDO_DIM_ARRAY(SDO_DIM_ELEMENT('X', -180, 180, .05), SDO_DIM_ELEMENT('Y', -90, 90, .05))      8307
    POSITIONS     THE_GEOM SDO_DIM_ARRAY(SDO_DIM_ELEMENT('X', -180, 180, .05), SDO_DIM_ELEMENT('Y', -90, 90, .05))      8307
    Example 1: Query for coordinates inside NON-rectangular polygon includes points outside of polygon.
    SELECT l.vesselid, l.latitude, l.longitude, TO_CHAR(l.observationtime,
    'YYYY-MM-DD HH24:MI:SS') as obstime FROM lastknownpositions l where
    SDO_RELATE(l.the_geom,SDO_GEOMETRY(2003, 8307, NULL,
    SDO_ELEM_INFO_ARRAY(1, 1003, 1),
    SDO_ORDINATE_ARRAY(-98.20268,18.05079,-57.30101,18.00705,-57.08229,
    54.66061,-98.59638,32.87842,-98.20268,18.05079)),'mask=inside')='TRUE'
    This query returns the following coordinates that are outside of the polygon:
    vesselid : 1152 obstime : 2005-08-24 06:00:00 long : -82.1 lat : 45.3
    vesselid : 3140 obstime : 2005-08-28 12:00:00 long : -80.6 lat : 44.6
    vesselid : 1253 obstime : 2005-08-22 09:00:00 long : -80.0 lat : 45.3
    Example 2a: Using SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT
    Select areaid, the_geom,
    SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT(the_geom, 0.005) from area where
    areaid=24
    ResultSet:
    AREAID THE_GEOM(SDO_GTYPE, SDO_SRID, SDO_POINT(X, Y, Z), SDO_ELEM_INFO,
    SDO_ORDINATES) SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT(THE_GEOM,0.005)
    24 SDO_GEOMETRY(2003, 8307, NULL, SDO_ELEM_INFO_ARRAY(1, 1003, 1), SDO_ORDINATE_ARRAY(-98.20268, 18.05079, -57.30101, 18.00705, -57.08229, 54.66061, -98.59638, 32.87842, -98.20268, 18.05079)) TRUE
    Example 2b: Using SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT
    Select positionid, vesselid, the_geom,
    SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT(the_geom, 0.005) from positions where vesselid=1152
    ResultSet:
    POSITIONID VESSELID THE_GEOM(SDO_GTYPE, SDO_SRID, SDO_POINT(X, Y, Z),
    SDO_ELEM_INFO, SDO_ORDINATES) DO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT(THE_GEOM,0.005)
    743811 1152 SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-82.1, 45.3, NULL), NULL, NULL) TRUE
    743812 1152 SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-82.1, 45.3, NULL), NULL, NULL) TRUE
    743813 1152 SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-80.2, 42.5, NULL), NULL, NULL) TRUE
    743814 1152 SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-80.2, 42.5, NULL), NULL, NULL) TRUE
    Example 3: Invalid Coordinate values found in POSITIONS table.
    SELECT p.positionid, p.latitude, p.longitude, p.the_geom FROM positions p
    WHERE p.latitude < -180.0
    2 lines from ResultSet:
    POSITIONID LATITUDE LONGITUDE THE_GEOM(SDO_GTYPE, SDO_SRID, SDO_POINT(X, Y, Z), SDO_ELEM_INFO, SDO_ORDINATES)
    714915 -210.85408 -79.74449 SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-79.74449, -210.85408, NULL), NULL, NULL)
    714938 -211.13632 -79.951256 SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-79.951256, -211.13632, NULL), NULL, NULL)
    SELECT p.positionid, p.latitude, p.longitude, p.the_geom FROM positions p
    WHERE p.longitude > 180.0
    3 lines from ResultSet:
    POSITIONID LATITUDE LONGITUDE THE_GEOM(SDO_GTYPE, SDO_SRID, SDO_POINT(X, Y, Z), SDO_ELEM_INFO, SDO_ORDINATES)
    588434 91 181 SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(181, 91, NULL), NULL, NULL)
    589493 91 181 SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(181, 91, NULL), NULL, NULL)
    589494 91 181 SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(181, 91, NULL), NULL, NULL)
    Example 4: Failure to locate illegal coordinates by querying for disjoint coordinates outside of MBR for the world:
    SELECT p.vesselid, p.latitude, p.longitude, p.the_geom,
    TO_CHAR(p.observationtime, 'YYYY-MM-DD HH24:MI:SS') as obstime,
    SDO_GEOM.RELATE(p.the_geom, 'determine',
    SDO_GEOMETRY(2003, 8307, NULL,SDO_ELEM_INFO_ARRAY(1, 1003, 1),
    SDO_ORDINATE_ARRAY(-180.0,-90.0,180.0,-90.0,180.0,90.0,
    -180.0,90.0,-180.0,-90.0)), .005) relationship FROM positions p where
    SDO_GEOM.RELATE(p.the_geom, 'disjoint', SDO_GEOMETRY(2003, 8307,
    NULL,SDO_ELEM_INFO_ARRAY(1, 1003, 1),
    SDO_ORDINATE_ARRAY(-180.0,-90.0,180.0,-90.0,180.0,90.0,-80.0,90.0,
    -180.0,-90.0)),.005)='TRUE'
    no rows selected
    Carol Saah

    Hi Carol,
    1) I think the results are correct. Note in a geodetic coordinate system adjacent points in a linestring or polygon are connected via geodesics. You are probably applying planar thinking to an ellipsoidal problem! I don't have time to do the full analysis right now, but a first guess is that is what is happening.
    2) The query window seems to be valid. I don't think this is a problem.
    3) Oracle will let you insert almost anything into a table. In the index, it probably wraps. If you validate, I think the validation routines will tell you it is illegal if you use the signature with diminfo, where the coordinate system bounds are included in the validation.
    4) Your query window is not valid. Your data is not valid. As the previous reply stated, you need to have valid data. If you think in terms of a geodetic coordinate system, you will realize that -180.0,-90.0 and 180.0,-90.0 are really the same point. Also, Oracle has a rule that polygon geometries cannot be greater than half the surface of the Earth.
    Hope this helps.
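    For point 3, a minimal sketch of that diminfo-based validation, assuming the POSITIONS table and the USER_SDO_GEOM_METADATA entries shown earlier in the thread:
    SELECT p.positionid,
           SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT(p.the_geom, m.diminfo) AS validation_result
    FROM   positions p,
           user_sdo_geom_metadata m
    WHERE  m.table_name  = 'POSITIONS'
    AND    m.column_name = 'THE_GEOM';
    With the dimension bounds included in the check, the rows with longitude 181 / latitude 91 and the latitudes below -90 should no longer come back as TRUE.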

  • IWeb Error: "Your search did not produce any results."

    I just updated my iWeb and added the new search and comment features to my iWeb site. I am happy to see these features added but the search feature gives me a page displaying "Your search did not produce any results." Does anyone else have this issue? I am using my .Mac account and did a "Publish all to .Mac" to see if that would help. Comments work great. Can anyone help? Not good to have a feature that does not work.
    Cheers!
    Patrick

    I am having the same problem and can probably give you an answer as to why the search feature is not working...
    Look carefully and you may notice that by re-publishing your site with 1.1, iWeb has converted all text on your site to image files (probably .png). This is what happened when I re-published.
    Therefore, there is technically no text to "search" through...
    No fix as of yet... I will try to republish yet again tonight.

  • Robohelp 9 WebHelp - Searching doesn't produce any results.

    Hi, I upgraded to RoboHelp 9 last week and now searching in WebHelp doesn't produce any results. I've created a new project, used the sample project, and tried generating to a new folder, and the results remain the same.
    When I select the search tab, the status bar flutters with "Waiting for file..." displayed indefinitely, with the occasional display of javascript:(void);
    Screenshots are from the RH 9 sample project. Results are the same with my projects.  CHMs generated work fine!
    Any suggestions?

    Mary
    You added the problem to another thread that was not related as the issue there is the use of HTTPS. I have deleted that post.
    You also started a second thread with the same question. I don't understand how asking the question twice will help, so I have locked that thread but created a link to this thread so that anyone with an answer can help you. If there is a reason for starting a second thread, please let me know.
    Please just ask once and wait for a reply, as duplicates waste moderators' time that is better used trying to answer questions.
    Now to the problem. Searching with Chrome installed works just fine for most people so the problem seems to be local to you. If I have read you correctly, after installing Chrome, the search breaks no matter what browser is used.
    Was that checked on other machines or just yours?
    IGNORE THIS QUESTION - The answer is YES and that was in the post.
    Did you also check that search was broken with the supplied sample projects? I know that works with the three browsers being discussed here.
    Where was the help installed when tested by you both standalone and from the application?
    Questions 2 and 3 are really only related to problems with Chrome. I have not seen anyone report Chrome also breaking the search in the other browsers. There's more to this than meets the eye as Chrome has not caused that problem for anyone else, or at least, no one has reported it.
    See www.grainge.org for RoboHelp and Authoring tips
    @petergrainge

  • Old outer join syntax produces different results from new syntax!

    I have inherited a query that uses the old outer join syntax but that is yielding correct results. When I translate it to the new outer join syntax, I get the results I expect, but they are not correct! And I don't understand why the old syntax produces the results it produces. Bottom line: I want the results I'm getting from the old syntax, but I need it in the new syntax (I'm putting it into Reporting Services, and RS automatically converts old syntax to new).
    Here's the query with the old outer join syntax that is working correctly:
    Code Snippet
    SELECT   TE = COUNT(DISTINCT T1.ID),
             UE = COUNT(DISTINCT T2.ID),
             PE = CONVERT(MONEY, COUNT(DISTINCT T2.ID)) / 
                  CONVERT(MONEY,COUNT(DISTINCT T1.ID))
    FROM     TABLE T1, TABLE T2
    WHERE    T1.ID *= T2.ID
    In this query, much to my surprise, TE <> UE and PE <> 1. However, TE, UE, and PE seem to be accurate!
    Here's the query with the new outer join syntax that is working but not producing the results I need:
    Code Snippet
    SELECT   TE = COUNT(DISTINCT T1.ID),
             UE = COUNT(DISTINCT T2.ID),
             PE = CONVERT(MONEY, COUNT(DISTINCT T2.ID)) / 
                  CONVERT(MONEY,COUNT(DISTINCT T1.ID))
    FROM     TABLE T1 LEFT OUTER JOIN TABLE T2 ON T1.ID = T2.ID
    Though not producing the results I need, it is producing what would be expected: TE = UE and PE = 1.
    My questions:
    1) Can someone who is familiar enough with the old syntax please help me understand why TE <> UE and PE <> 1 in the first query?
    2) Can someone please tell me how to properly translate the first query to the new syntax so that it continues to produce the results in the first query?
    Thank you very much.

    How can we reproduce the issue?
    Code Snippet
    USE [master]
    GO
    EXEC sp_dbcmptlevel Northwind, 80
    GO
    USE [Northwind]
    GO
    SELECT TE = COUNT(DISTINCT T1.OrderID),
           UE = COUNT(DISTINCT T2.OrderID),
           PE = CONVERT(MONEY, COUNT(DISTINCT T2.OrderID)) /
                CONVERT(MONEY, COUNT(DISTINCT T1.OrderID))
    FROM   dbo.Orders T1, dbo.Orders T2
    WHERE  T1.OrderID *= T2.OrderID

    SELECT TE = COUNT(DISTINCT T1.OrderID),
           UE = COUNT(DISTINCT T2.OrderID),
           PE = CONVERT(MONEY, COUNT(DISTINCT T2.OrderID)) /
                CONVERT(MONEY, COUNT(DISTINCT T1.OrderID))
    FROM   dbo.Orders T1
           LEFT OUTER JOIN dbo.Orders T2
           ON T1.OrderID = T2.OrderID
    GO
    EXEC sp_dbcmptlevel Northwind, 90
    GO
    Result:
    TE     UE     PE
    830    830    1.00

    TE     UE     PE
    830    830    1.00
    As you can see, I am getting the same results.
    AMB
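    One hedged guess, since the posted query was trimmed of its column names and any extra predicates: the classic case where *= and LEFT OUTER JOIN really do differ is a filter on the inner (right-hand) table. The old syntax applies that filter as part of the outer join; with ANSI syntax the same filter in the WHERE clause turns the query back into an inner join, so it has to move into the ON clause. A sketch against a hypothetical OrderDetails table:
    -- Old syntax (compatibility level 80): the T2 predicate is part of the outer join
    --   SELECT ... FROM Orders T1, OrderDetails T2
    --   WHERE T1.OrderID *= T2.OrderID AND T2.Quantity > 10
    -- ANSI translation that preserves that behaviour: keep the predicate in ON, not WHERE
    SELECT T1.OrderID, T2.Quantity
    FROM   Orders T1
           LEFT OUTER JOIN OrderDetails T2
           ON  T1.OrderID = T2.OrderID
           AND T2.Quantity > 10;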

  • Formula Node Floor(x) Produces Different Result

    Hi, A search didn't find anything about the Floor(x) function, so... I'm using LabVIEW 6.0.2, and the Floor(x)function in a Formula Node seems to be producing different results depending on previous calculations. I've got the following 2 lines in a Formula Node:
    MSS = Ref / RefDiv;
    MDN = floor(RF / MSS);
    Ref is always 20.0M, RefDiv always 500.0, and for this calculation RF is always 1539.4, all numbers Double with 6 digits of precision. I generate an array of frequencies given a start, step, and frequency count. These frequencies then go to a subVI with a Formula Node that calculates the byte values to send to a couple of PLLs.
    If Start = 70.1, Step = .025, and Count = 20, at frequency 70.2 the Floor function gives 38.485.
    If Start = 70.0, Step = .025, and Count = 20, at frequency 70.2 the Floor function gives 38.484.
    I've omitted some calc steps here, but I've verified the starting values in the subVI are the same in both cases. Why the result changes I'm hoping someone can tell me...
    Thanks,
    Steve

    I want to thank those involved again for their help. With ideas and hints from others I found a solution without scaling.
    In recap, what had bothered me was it *appeared* like the same subVI was giving correct results one time and incorrect results only randomly. While I understand binary fractional imprecision, I wasn't doing any looped calculations 100+ times or anything.
    I did some more checking though. The problem was indeed introduced by cumulative fractional addition. In this case 10 additions were enough to cause the error. Apparently, floor() of 71.999_94 produces 71.0 rather than the expected 72.0. Using a shift register and a constant fraction to add an offset to produce an array introduces enough error in under 10 iterations to be a problem. By the time the loop got to what was supposed to be 72.0, it was actually 71.999_84 or something, enough to throw the floor function. Now I understand why the error occurred, and why it wasn't a problem before.
    I fixed this problem by converting the real frequency number to an I32 before it enters the Formula Node. This corrected the error introduced by the fractional addition by forcing 71.999_84 to 72, instead of letting it propagate through the rest of the calculations. And it was a whole lot easier than changing all the VIs to allow scaling! Also, I prefer to know where and why the problem happened, instead of just scaling all my calculations. Maybe I can recognise potential problems in the future.
    My boss wants to go back and look at his program to see if HPVee somehow bypassed the problem or if he did the calculations differently.
    Thanks again for the insight and help,
    Steve

  • Your search did not produce any results

    Hi,
    I'm publishing my iWeb site to my ME.COM account.
    I have the site password protected as offered in iWeb.
    But when the site is password protected the search with the iWeb search field does not provide any results.
    Only the message:
    "Your search did not produce any results."
    If I disable the password protection for the site in iWeb and republish, the search provides the expected results.
    Any idea what's wrong?
    Thanks and Regards,
    JO

    Sorry, didn't test enough.
    After publishing a few new blog entries under the password protected site, I tested the search again and it does not return any results.
    It was working for a short time when I republished unprotected and then again protected.
    Now that I added block entries under the protected site mode, search is brogen, again.
    Any Help?
    Thanks and Regards,
    JO

  • Having installed Aperture 3 and imported my iPhoto library (15000 photos), Aperture 3 does not render most of my photos correctly - most are blurred, pixellated and distorted. Reverting to photo produces perfect results. Any suggestions?

    Having installed Aperture 3 and imported my iPhoto library (15000 photos), Aperture 3 does not render most of my photos correctly - most are blurred, pixellated and distorted. Reverting to photo produces perfect results. Any suggestions?

    Galdaplh,
    What do you mean by "reverting to photo"?  "Revert" is not a function in Aperture so you must be talking about something else.
    If you just performed the import of that many photos, Aperture will take some time to generate thumbnails.  You can see if it is doing this sort of thing by {Command}-{Shift}-0 to show the Activity window.
    Once Aperture generates all its thumbnails, you will have nearly instantaneous access to much, much better thumbnails than Aperture uses immediately after importing photos.
    nathan

  • Getting poor results with Refine Edge tool

    Not sure if anyone else has run into this issue. But after following numerous tutorials on lynda.com as well as tv.adobe.com and youtube, I just can't get good selection results using the Refine Edge tool in Photoshop CC and I'm convinced it's something I'm doing wrong, but I have no idea what.
    After downloading this Wikipedia image of Bismarck for a history documentary, I tried isolating the figure from the background using Refine Edge and keep getting poor results. Maybe it's my settings, I'm just not sure. But after painting over my edges, I end up with terrible results. Unfortunately, I'm getting poor results with other images as well. Can anyone venture to guess what I might be doing wrong? Thanks very much in advance.

    Hi ninose11,
    For this type of project and idea, I would recommend trying the Magnetic Lasso tool. The tool is located as the third tool in the toolbar (you should see a lasso or rope-looking icon); hold down the icon and it should expand - then select the Magnetic Lasso tool. The Magnetic Lasso tool will allow you to trace over the edge of his profile and it will automatically detect the outline edges. The slower you roll over the shape, the higher the accuracy will be. If the magnet is not detecting a particular edge or sharp corner (for example, the ear), you can manually drop points by clicking at any point during the selection. If the edge is too sharp, you can also apply a light feather, which will blend the edge softly (this setting should be located at the top).
    If this is still unsuccessful, you can try selecting by color using the Magic Wand tool (fourth icon in the toolbar; again, hold down for all options). Once you are using the Magic Wand tool, you should be able to select all the black areas of the background to delete. You can increase or decrease the tolerance to change the sensitivity of the selection.
    Hope this helps!

  • SQL job running DTEXEC.EXE via PowerShell step produces wrong characters in result tables. Running the same DTEXEC.EXE in PowerShell x86 window produces correct results.

    I've created a package in SSDT (Visual Studio 2012) to import DBF tables into MSSQL 2014 via a wizard.
    Source DBF files contain tables with Russian (Cyrillic) characters in records fields.
    Created package works fine, produces correct results (imports Cyrillic characters as needed) while:
    •debugging in VS2012
    •deploying the package to SSISDB and running it via PowerShell x86:
    &"c:\Program Files (x86)\Microsoft SQL Server\120\DTS\Binn\DTExec.exe" /Server "server\mssql2014" /ISServer "\SSISDB\import\Integration Services Project16\Package1.dtsx";
    But when I create a SQL Agent job with a single PowerShell step that contains the same command and run the job, it produces wrong characters in the resulting tables. So the problem is a wrong code page (or encoding). The correct tables with records are produced and the job runs without errors.
    I tried
    chcp 1251; # 855, 866 -- all of them
    &"c:\Program Files (x86)\Microsoft SQL Server\120\DTS\Binn\DTExec.exe" /Server "server\mssql2014" /ISServer "\SSISDB\import\Integration Services Project16\Package1.dtsx";
    in the SQL Agent PowerShell step, but with no positive result.
    Has anyone encountered such behavior? How can I fix it?
    v

    What if you run this package without PowerShell?
    Arthur My Blog
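    One way to take PowerShell (and its console code page) out of the picture is to start the package from a T-SQL job step through the SSISDB catalog procedures. A sketch, assuming the folder/project/package names from the post:
    DECLARE @execution_id BIGINT;

    -- Create and start an execution for the deployed package.
    EXEC SSISDB.catalog.create_execution
         @folder_name     = N'import',
         @project_name    = N'Integration Services Project16',
         @package_name    = N'Package1.dtsx',
         @use32bitruntime = 1,            -- match the 32-bit DTExec used so far
         @execution_id    = @execution_id OUTPUT;

    EXEC SSISDB.catalog.start_execution @execution_id;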

  • Query that only returns items that will produce a result

    Thanks to Mack for his help yesterday.  I would really appreciate some help from anyone who is more SQL competent than I am.  I have an SQL problem that is just completely over my head.  I've created a nifty tagging system for the blog, that sorts by tags and by multiple tags, check out the beta here: http://committedsardine.com/blog.cfm
    When a user selects a tag, it adds it to the value list SESSION.blogTags.  If the selected tag is there already, it removes it.  When the list for tags pops up, I output all the tags, and show their state.  You'll see what I mean if you try it.
    What this leads to is the ability to select a group of tags for which there are no query results. What I want to do is show only those tags that will generate results, and how many results they'll show. Like this: selecting "fluency" by itself, there are 310 entries:
    fluency (310) | digital (234) | writing (12)
    Once fluency is selected, there are 13 articles that ALSO are tagged by "digital", but none that are tagged by writing:
    fluency | digital (12) | writing
    I have a table called blogTagLinks, that is just for tying a tag to a blog.  It lists a blogID and a tagID.  Here is a sample of it for reference:
    blogTagLinkID   blogID   tagID
    4               2        2
    5               2        3
    6               2        5
    39              1        18
    49              1        1
    42              1        9
    44              1        19
    47              5        14
    48              1        22
    54              16       22
    I'm including all my SQL, but the spot that I need help with is marked in red below:
    <!---if URL.tg is defined, check to see if it exists in the database, then the SESSION, and either add or delete it from SESSION--->
    <cfquery name="rsAllTags" datasource="">
    SELECT tagsID, tagName
            FROM tags
            WHERE tagActive = 'y'
    </cfquery>
    <cfset allTags = ValueList(rsAllTags.tagsID)>
    <cfif isDefined("URL.blogTags")>
        <cfif ListFind(allTags, URL.blogTags) NEQ 0>
            <cfif ListFind(SESSION.blogTags, URL.blogTags) NEQ 0>
                <cfset SESSION.blogTags = ListDeleteAt(SESSION.blogTags, ListFind(SESSION.blogTags, URL.blogTags))>
                <cfelse>
                <cfset SESSION.blogTags = ListAppend(SESSION.blogTags, URL.blogTags)>
            </cfif>
        </cfif>
    </cfif>
    <!---get a list of all available tags, tags that if added to the already selected tags, will return a result--->
    <cfquery name="rsAvailableTags" datasource="">
    SELECT tagsID, tagName
            FROM tags
            WHERE tagActive = 'y'
            NEED SOME STATEMENT HERE OF BLOGTAGLINKS TO DETERMINE WHAT TAGS WILL PRODUCE A RESULT
    </cfquery>
    <!---if searching by tags, get a list of the currently selected tags for display, the 0 returns an empty result if there are no tags--->
    <cfif isDefined("SESSION.sb") AND SESSION.sb EQ "tg">
        <cfquery name="rsTags" datasource="">
            SELECT tags.tagName, tagsID
            FROM tags
            WHERE tagsID <cfif SESSION.blogTags NEQ "">IN(#SESSION.blogTags#)
            <cfelse> = 0</cfif>
        </cfquery>
        <cfset variables.newrow = false>
    </cfif>
    <!---get the information for the blogs list, filtered by keyword or tag if requested--->
    <cfquery name="rsBlog" datasource="">
        SELECT blog.blogID,
            blog.storyID,
            blog.blogDate,
            blogStories.storyID,
            blogStories.blogTitle,
            SUBSTRING(blogStories.blogBody,1,200) AS blogBody,
            images.imageName
        FROM blog, blogStories, images
        WHERE blog.storyID = blogStories.storyID AND images.imageID = blog.photoID AND blog.blogDate < "#todayDate#" AND blog.deleted = 'n'
    <cfif SESSION.sb EQ "kw">AND  CONCAT(blogStories.blogBody, blogStories.blogTitle) LIKE '%#SESSION.blogKeywords#%'</cfif>
        <cfif SESSION.sb EQ "tg" AND SESSION.blogTags NEQ "">
                AND  blog.blogID IN (
                SELECT blogID
                FROM blogTagLink
                <cfif SESSION.blogTags NEQ "">
                    WHERE tagID IN(<cfqueryparam cfsqltype="cf_sql_integer" value="#SESSION.blogTags#" list="true">)
                    GROUP BY blogID
                    HAVING count(tagID) = #ListLen( SESSION.blogTags )#)
                </cfif>
         </cfif>
    ORDER BY blog.blogDate DESC
    </cfquery>

    There might be a single query solution, but here's a query that you will need to run for each tag in the database (cfloop over all the tags) and will give you the number of blogs that have the selected tags + the current tag:
    SELECT Count(*) AS blog_count
    FROM (
        SELECT blogID
        FROM blogTagLink
        WHERE tagID IN(<cfqueryparam cfsqltype="cf_sql_integer" value="#SESSION.blogTags#" list="true">)
             OR tagID = #currentTagID#
        GROUP BY blogID
        HAVING count(tagID) = #ListLen( SESSION.blogTags )#
             OR count(tagID) = #ListLen( SESSION.blogTags )# + 1
        ) AS blogs
    Mack
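    There may indeed be a single-query variant; here is an untested sketch (it assumes the blogTagLink table above, with tag IDs 2 and 5 standing in for the currently selected SESSION.blogTags) that returns the count for every additional tag in one pass:
    SELECT btl.tagID,
           COUNT(DISTINCT btl.blogID) AS blog_count
    FROM   blogTagLink btl
    WHERE  btl.blogID IN (
               SELECT blogID
               FROM   blogTagLink
               WHERE  tagID IN (2, 5)            -- the currently selected tags
               GROUP  BY blogID
               HAVING COUNT(DISTINCT tagID) = 2  -- blog must carry all selected tags
           )
    AND    btl.tagID NOT IN (2, 5)               -- count only the additional tags
    GROUP  BY btl.tagID;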
