Spatial Performance with join

I have an Oracle Spatial table with 3.5 million rows plus another auxiliary table with 3.5 million rows. A query joining the two tables returns its full result (250 rows) through a one-to-one join - here's an example:
Select count(*)
from F, N
where F.id = N.id
and sdo_relate (F.GEOM,
    mdsys.sdo_geometry (2003, 8307, null, mdsys.sdo_elem_info_array (1,1003,1),
    mdsys.sdo_ordinate_array (-120.0,49.5,-119.0,49.5,-119.0,60.35,-120.0,60.35,-120.0,49.5)),
    'mask=ANYINTERACT querytype=WINDOW') = 'TRUE'
and N.PNUM = '4';
It takes an average of 35 seconds to get the full result set back. I've gathered statistics, tweaked memory parameters and this is the best I can get. Does anyone have any suggestions?

This is an interesting problem. It looks like Oracle is doing the right thing for each of the table accesses - use the index and fetch by rowid.
The only thing you have left to play with, short of materialized views or temp tables, is how the results of the two table queries are joined.
You don't have a lot of options: the hash join you are getting seems slow, but you won't know whether nested loops or a merge join is faster until you try them.
I'd compare what you have done with something like the following to test nested loops:
select /*+ no_merge use_nl (f1,n1) */ count(*)
from
(select id
from f
where sdo_anyinteract ( F.GEOM,
sdo_geometry (2003,8307, null, sdo_elem_info_array (1,1003,1),
sdo_ordinate_array (-120.0,49.5,-119.0,49.5,-119.0,60.35,
-120.0,60.35,-120.0,49.5))) = 'TRUE') f1,
(select id
from n
where n.pnum='4') n1
where f1.id=n1.id ;
and presort with a merge join hint to see how it performs:
select /*+ no_merge use_merge (f1,n1) */ count(*)
from
(select id
from f
where sdo_anyinteract ( F.GEOM,
sdo_geometry (2003,8307, null, sdo_elem_info_array (1,1003,1),
sdo_ordinate_array (-120.0,49.5,-119.0,49.5,-119.0,60.35,
-120.0,60.35,-120.0,49.5))) = 'TRUE'
order by id) f1,
(select id
from n
where n.pnum='4'
order by id) n1
where f1.id=n1.id ;
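To confirm which join method each variant actually ends up with, you can wrap the statement in EXPLAIN PLAN and display the result (a generic check, not from the original post; the inner filter here is just a stand-in - substitute whichever variant you are testing):
explain plan for
select /*+ no_merge use_nl (f1,n1) */ count(*)
from (select id from f where id is not null) f1,   -- stand-in for the spatial subquery above
     (select id from n where n.pnum = '4') n1
where f1.id = n1.id;

select * from table(dbms_xplan.display);
Look for NESTED LOOPS, HASH JOIN, or MERGE JOIN steps in the output and compare them against the actual elapsed times.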
It might be that you already have the best Oracle can do, but I'd be curious to know how you make out.
Dan Abugov
VP Software Support and Services
Acquis Inc.

Similar Messages

  • Oracle Spatial Performance with 10-20,000 users

    Does anyone have any experience with Oracle Spatial used by, say, 20,000 concurrent users? I am not interested in MapViewer response time, but let's say there is:
    - an app using 800 different tables each having an sdo_geometry column
    - the app is configured with different tables visible on different view scales
    - let's say an average of 40-50 tables is visible at any given time
    - some tables will have only a few records, while others can hold millions.
    - there is no client side caching
    - clients can zoom in/out pan.
    Answers I am interested in:
    - What sort of server would be required
    - How can Oracle serve all that data (each refresh renders the map and retrieves the data over the wire, as there is no client-side caching)?
    - What sort of network infrastructure would be required.
    - Can clients connect to different servers and hence use load balancing or does Oracle have an automatic mechanism for that?
    Thanks in advance,
    Patrick

    Patrick, et al.
    There are lots of things one can do to improve performance in mapping environments because a lot of the visualisation is based on "background" or read-only data. Here are some tips:
    1. Spatially sort read-only data.
    This tip makes sure that data that are close to each other in space are next to each other on disk! Dan gave a good suggestion when he referenced Chapter 14, "Reorganize the Table Data to Minimize I/O", pp 580-582, Pro Oracle Spatial. But just as easily one can create a table as select ... where sdo_filter(), where the filtering object is an optimized rectangle across the whole of the dataset; see the sketch below. (This is quite quick on 10g and above but much slower on earlier releases.)
    When implementing this, make sure the new table is created such that its blocks are next to each other in the tablespace. (Consider tablespace defragmentation beforehand.) Also, if the data is READ ONLY, set PCTFREE to 0 in order to pack the data into as small a number of blocks as possible.
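    A minimal sketch of that create-table-as-select, assuming a hypothetical table roads with geometry column geom in SRID 8307 (the optimized rectangle should cover the whole dataset extent):
    CREATE TABLE roads_sorted PCTFREE 0 AS
    SELECT r.*
    FROM   roads r
    WHERE  SDO_FILTER(r.geom,
             SDO_GEOMETRY(2003, 8307, NULL,
               SDO_ELEM_INFO_ARRAY(1,1003,3),  -- optimized rectangle
               SDO_ORDINATE_ARRAY(-180,-90, 180,90))) = 'TRUE';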
    2. Generalise data
    Rendering spatial data can be expensive where the data is geometrically detailed (many vertices), especially where the data is being visualised at smaller scales than it was captured at. So, if your "zoom thresholds" allow 1:10,000 data to be used at 1:100,000 then you are going to have problems. Consider pre-generalising the data (see sdo_util.simplify) before deployment. You can add multiple columns to your base table to hold this data. Be careful with polygon data, because generalising polygons that share boundaries will create gaps etc as the data is more generalised. Often it is better to export the data to a GIS which can maintain the boundary relationships when generalising (say via topological relationships).
    Oracle's MapViewer has excellent on-the-fly generalisation but here one needs to be careful. Application tier caching (cf Bryan's comments) can help here a lot.
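    A sketch of pre-generalising into an extra column (the table, column and threshold values are assumptions; SDO_UTIL.SIMPLIFY takes the geometry, a simplification threshold and the layer tolerance):
    ALTER TABLE roads ADD (geom_gen SDO_GEOMETRY);
    UPDATE roads r
    SET    r.geom_gen = SDO_UTIL.SIMPLIFY(r.geom, 50, 0.05);  -- 50 m threshold for geodetic data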
    3. Don't draw data that is sub-pixel.
    As one zooms out objects become smaller and smaller until they reach a point where the whole object can be drawn within a single pixel. If you have control over your map visualisation application you might want to consider setting the SDO_FILTER parameter "min_resolution" flag dynamically so that its value is the same as the number of meters / pixel (eg min_resolution=10). If this is set Oracle Spatial will only include spatial objects in the returned search set if one side of a geometry's MBR is greater than or equal to this value. Thus any geometries smaller than a pixel will not be returned. Very useful for large scale data being drawn at small scales and for which no selection (eg identify) is required. With Oracle MapViewer this behaviour can be set via the generalized_pixels parameter.
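    For example, a filter that skips anything whose MBR is smaller than 10 meters per pixel might look like this (a sketch; the table is assumed and :win stands for the current map window geometry):
    SELECT r.id
    FROM   roads r
    WHERE  SDO_FILTER(r.geom, :win, 'min_resolution=10') = 'TRUE';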
    4. SDO_TOLERANCE, Clean Data
    If you are querying data other than via MBR (eg find all land parcels that touch each other) then make sure that your sdo_tolerance values are appropriate. I have seen sites where data captured to 1cm had an sdo_tolerance value set to a millionth of a meter!
    A corollary to this is make sure that all your data passes validation at the chosen sdo_tolerance value before deploying to visualisation. Run sdo_geom.validate_geometry()/validate_layer()...
    5. RTree Spatial Indexing
    At 10g and above lots of great work went into the RTree indexing. So, make sure you are using RTrees and not QuadTrees. Also, many GIS applications create sub-optimal RTrees by not using the additional parameters available at 10g and above.
    5.1 If your table/column sdo_geometry data contains only points, lines or polygons then let the RTree indexer know (via layer_gtype) as it can implement certain optimizations based on this knowledge.
    5.2 With 10g you can set the RTree's spatial index data block use via sdo_pct_free. Consider setting this parameter to 0 if the table/column sdo_geometry data is read only.
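    A sketch combining 5.1 and 5.2 for a hypothetical read-only point layer:
    CREATE INDEX wells_sidx ON wells (geom)
      INDEXTYPE IS MDSYS.SPATIAL_INDEX
      PARAMETERS ('layer_gtype=POINT sdo_pct_free=0');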
    5.3 If a table/column is in high demand (eg it is the most commonly used table in all visualisations) you can consider loading (a part of) the RTree index into memory. Now, with the RTree indexing, the sdo_non_leaf_tbl=true parameter will split the RTree index into its leaf (contains actual rowid reference) and non-leaf (the tree built on the leaves) components. Most RTrees are built without this so only the MDRT*** secondary tables are built. But if sdo_non_leaf_tbl is set to true you will see the creation of an additional MDNT*** secondary table (for the non_leaf part of the rtree index). Now, if appropriate, the non_leaf table can be loaded into memory via the following:
    ALTER TABLE MDNT*** STORAGE (BUFFER_POOL KEEP);
    This is NOT a general panacea for all performance problems. One should investigate other options before embarking on this (cf Tom Kyte's books such as Expert Oracle Database Architecture, 9i and 10g Programming Techniques and Solutions.)
    5.4 Don't forget to check your spatial index data quality regularly. Because many sites use GIS package GUI tools to create tables, load data and index them, there is a real tendency to not check what they have done or regularly monitor the objects. Check the SDO_RTREE_QUALITY column in USER_SDO_INDEX_METADATA and look for indexes with an SDO_RTREE_QUALITY setting that is > 2. If > 2 consider rebuilding or recreating the index.
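    A quick check along those lines (a sketch; the threshold follows the advice above):
    SELECT sdo_index_name, sdo_rtree_quality
    FROM   user_sdo_index_metadata
    WHERE  sdo_rtree_quality > 2;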
    6. The rendering engine.
    Whatever rendering engine one uses, make sure you try and understand fully what it can and cannot do. AutoDesk's MapGuide is an excellent product, but I have seen it simply cache table/column data and never dynamically access it. Also, I have been at one site which was running Deegree and MapViewer, and MapViewer was so fast in comparison to Deegree that I was called in to find out why. I discovered that Deegree was using SDO_RELATE(... ANYINTERACT ...) for all MBR queries while MapViewer was using SDO_FILTER. Just this difference was causing some queries to perform at < 10% of the speed of MapViewer!
    7. Consider "denormalising" data
    There is an old adage in databases: "normalise for edit, denormalise for performance". When we load spatial data we often get it from suppliers in a fairly flat or normalised form. In concert with spatial sorting, consider denormalising the data via aggregations based on a rendering attribute and some sort of spatial unit. For example, if you have 1 million points stored as single points in SDO_GEOMETRY.SDO_POINT which you want to render by a single attribute containing 20 values, consider aggregating the data using this attribute AND some sort of spatial bucket or bin. So, consider using SDO_AGGR_UNION coupled with Spatial Analysis and Mining package functions to GROUP the data BY <<column_name>> and a set of spatial extents.
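    A sketch of that aggregation (all names are assumptions; bin_id stands for whatever spatial binning you derive from the coordinates, and 0.005 is the layer tolerance):
    SELECT p.render_attr,
           p.bin_id,
           SDO_AGGR_UNION(SDOAGGRTYPE(p.geom, 0.005)) AS agg_geom
    FROM   points_table p
    GROUP  BY p.render_attr, p.bin_id;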
    8. Tablespace use
    Finally, talk to your DBA in order to find out how the Oracle database's physical and logical storage is organised. Is a SAN being used, or SAME-arranged disk arrays? Knowing this, you can organise your spatial data and indexes using more effective and efficient methods that will ensure greater scalability.
    9. Network fetch
    If your rendering engine (app server) and database are on separate machines you need to investigate what sort of fetch sizes are being used when returning data from queries to the middle tier. Fetch sizes for attribute-only data rows and rows containing spatial data can be, and normally are, radically different. Accepting the default settings for these sizes could be killing you (as could the sort_area_size of the Oracle session the application server has created on the database). For example, I have been informed that MapInfo Pro uses a fixed value of 25 records per fetch when communicating with Oracle. I have done some testing to show that this value can be too small for certain types of spatial data. SQL Developer's GeoRaptor uses 100, which is generally better (but one can modify this). Most programmers accept defaults for network properties when programming in ADO/ODBC/OLEDB/JDBC: just be careful as to what is being set here. (This is one of the great strengths of ArcSDE: its TCP/IP network transport is well written, tuneable and very efficient.)
    10. Physical Format
    Finally, while Oracle's excellent MapViewer requires its spatial data to be in Oracle, other commercial rendering engines do not. So, consider using alternate physical file formats that are more optimal for your rendering engine. For example, Google Earth Enterprise "compiles" all the source data into an optimal format which the server then serves to Google Earth Enterprise clients. Similarly, a shapefile on disk local to the application server (with spatial indexing) may be faster than storing the data back in Oracle on a database server that is being shared with other business databases (eg Oracle Financials). If you don't like this approach and want to use Oracle only, consider using a dedicated Oracle XE on the application server for the data that is read only and used in most of your generated maps, eg contour or drainage data.
    Just some things to think about.
    regards
    Simon

  • Performance with join

    Hi,
    Could you please give me tips to improve the performance of the following join?
    SELECT ekbe~ebeln
               ekbe~ebelp
               ekbe~vgabe
               ekbe~matnr
               ekbe~menge
               ekbe~shkzg
               ekpo~werks
               ekpo~meins
               ekpo~afnam
               ekpo~lgort
               INTO TABLE i_temp
               FROM ekbe
               INNER JOIN ekpo ON  ekpo~ebeln = ekbe~ebeln AND
                                    ekpo~ebelp = ekbe~ebelp
                            WHERE  ekbe~cpudt IN s_date    AND
                                   ekpo~werks IN s_werks   AND
                                   ekbe~ebeln IN s_ebeln   AND
                                   ( ekbe~vgabe EQ '1' )   OR
                                   ( ekbe~vgabe EQ '6' )
                                    GROUP BY
                                    ekbe~ebeln
                                    ekbe~ebelp
                                    ekbe~vgabe
                                    ekbe~matnr
                                    ekbe~menge
                                    ekbe~shkzg
                                    ekpo~werks
                                    ekpo~meins
                                    ekpo~afnam
                                    ekpo~lgort.
      DELETE i_temp WHERE werks NOT IN s_werks  .
      DELETE i_temp WHERE ebeln NOT IN s_ebeln.
    Thanks & regards
    Frank

    Hi Frank,
    1. Why do you want to use GROUP BY?
    2. You are not using any aggregate functions in the SQL,
       like SUM, COUNT etc.
    3. So if it is not required, you may remove the
       GROUP BY;
       it will improve the performance a lot.
    4. Further, I think that
       the brackets for the OR
       are misplaced. It should be like this:
    ekpo~lgort
    INTO TABLE i_temp
    FROM ekbe
    INNER JOIN ekpo ON ekpo~ebeln = ekbe~ebeln AND
    ekpo~ebelp = ekbe~ebelp
    WHERE ekbe~cpudt IN s_date AND
    ekpo~werks IN s_werks AND
    ekbe~ebeln IN s_ebeln AND
    ( ekbe~vgabe EQ '1'  OR  ekbe~vgabe EQ '6' )
    GROUP BY
    regards,
    amit m.

  • Performance for join 9 custom table with native SQL ?

    Hi Experts,
    I need your opinion regarding the performance of joining 9 tables with native SQL. Recently I had to tune a customized extraction cost report. This report extracts about 10 million material cost records every day.
    The current program populates the condition data, inserts it into a customized table, and then joins all the tables with native SQL to get the data.
    SELECT /*+ ordered use_hash(mst,pg,rg,ps,rs,dpg,drg,dps,drs) */
                mst.werks, ....................................
    FROM
                sapsr3.zab_info mst,
                sapsr3.zab_pc pg,
                sapsr3.zab_rc rg,
                sapsr3.zab_pc ps,
                sapsr3.zab_rc rs,
                sapsr3.zab_g_pc dpg,
                sapsr3.zab_g_rc drg,
                sapsr3.zab_s_pc dps,
                sapsr3.zab_s_rc drs
            WHERE mst.zseq_no = :p_rep_run_id
            AND mst.werks = :p_werks
            AND mst.mandt = rg.mandt(+)
            AND mst.ekorg = rg.ekorg(+)
            AND mst.lifnr = rg.lifnr(+)
            AND mst.matnr = rg.matnr(+)
            ...............................................   (and so on, until all 9 tables are joined)
            AND ps.mandt = dps.mandt(+)
            AND ps.knumh = dps.knumh(+)
            AND ps.zseq_no = dps.zseq_no(+)
            AND COALESCE (dps.kbetr, drs.kbetr, dpg.kbetr, drg.kbetr) <> 0
    It seems the query asks the database to use hash joins. Would that burden the database and impact other SAP processes?
    Please advise
    Thank You and Best Regards

    You can only argue from measurements, and there are none here.
    Judging from the code, I see only that you do not understand it at all, so better leave it as it is. It is not a hash table, but a hash join on these tables.

  • Materialized View  with Joins and Possibilities of Partitioning

    Hi,
    We have a materialized view whose query covers around 12 GB of data. The query goes something like this:
      Select a.c1,a.c2,b.c1,b.c2,c.c1,c.c2
      from a,b,c
      where a.c1=b.c1
      --and the where condition goes on...
      --i.e Basically joining 3 different tables with complex join conditions but without any group functions and subqueries.
     Now I have a few questions.
    Firstly, as the MView is created with joins, we have had to create separate logs for each table, and we have done so. The logs are created with rowid and sequence for better performance during refresh.
    Question No 1
    Is this the best approach for materializing a query with complex join conditions? Or is it better to make 3 different materialized views, one per table, and write my report query as a join of those 3 MViews? I ask this question just to make sure the refresh time is brought down by this method. But as we have created 3 different logs for the 3 tables, the refreshes must already be happening separately.
    Question No 2
    Data is likely to grow faster and faster on this. So we have the idea of making this a partitioned MView. Can the pros advise me on this, suggest the right practice for the same, and point me to links where I can pick up an example and continue from there?
    Question No 3
    How do I find whether the materialized view was refreshed in fast mode or complete mode after the last refresh?
    Apart from querying and checking the rowids, which I feel is quite cumbersome for an MView with a data volume like ours. By default, when I see the info in TOAD for this MView, it shows the Refresh Mode as "Force". But how do I ascertain this?
    Thanks in anticipation for a good round of discussion


  • Poor query performance when joining CONTAINS to another table

    We just recently began evaluating Oracle Text for a search solution. We need to be able to search a table that can have over 20+ million rows. Each user may only have visibility to a tiny fraction of those rows. The goal is to have a single Oracle Text index that represents all of the searchable columns in the table (multi-column datastore) and provide a score for each search result so that we can sort the search results in descending order by score. What we're seeing is that query performance from TOAD is extremely fast when we write a simple CONTAINS query against the Oracle Text indexed table. However, when we attempt to first reduce the rows the CONTAINS query needs to search by using a WITH clause, we find that the query performance degrades significantly.
    For example, we can find all the records a user has access to from our base table by the following query:
    SELECT d.duns_loc
    FROM duns d
    JOIN primary_contact pc
    ON d.duns_loc = pc.duns_loc
    AND pc.emp_id = :employeeID;
    This query can execute in <100 ms. In the working example, this query returns around 1200 rows of the primary key duns_loc.
    Our search query looks like this:
    SELECT score(1), d.*
    FROM duns d
    WHERE CONTAINS(TEXT_KEY, :search,1) > 0
    ORDER BY score(1) DESC;
    The :search value in this example will be 'highway'. The query can return 246k rows in around 2 seconds.
    2 seconds is good, but we should be able to have a much faster response if the search query did not have to search the entire table, right? Since each user can only "view" records they are assigned to, we reckon that if the search operation only had to scan a tiny percentage of the TEXT index we should see faster (and more relevant) results. If we now write the following query:
    WITH subset
    AS
    (SELECT d.duns_loc
    FROM duns d
    JOIN primary_contact pc
    ON d.duns_loc = pc.duns_loc
    AND pc.emp_id = :employeeID)
    SELECT score(1), d.*
    FROM duns d
    JOIN subset s
    ON d.duns_loc = s.duns_loc
    WHERE CONTAINS(TEXT_KEY, :search,1) > 0
    ORDER BY score(1) DESC;
    For reasons we have not been able to identify, this query actually takes longer to execute than the sum of the durations of the contributing parts. This query takes over 6 seconds to run. Neither we nor our DBA can figure out why this query performs worse than a wide-open search. The wide-open search is not ideal, as the query would end up returning records to the user that they don't have access to view.
    Has anyone ever run into something like this? Any suggestions on what to look at or where to go? If anyone would like more information to help in diagnosis, then let me know and I'll be happy to provide it here.
    Thanks!!

    Sometimes it can be good to separate the tables into separate sub-query factoring (with) clauses or inline views in the from clause, or an in clause as a where condition. Although there are some differences, using a sub-query factoring (with) clause is similar to using an inline view in the from clause. However, you should avoid duplication. You should not have the same table in two different places, as in your original query.
    You should have indexes on any columns that the tables are joined on, your statistics should be current, and your domain index should have regular synchronization and optimization, and be periodically rebuilt or dropped and recreated to keep it performing with maximum efficiency.
    The following demonstration uses a composite domain index (cdi) with filter by, as suggested by Roger, then shows the explained plans for your original query, and various others. Your original query has nested loops. All of the others have the same plan without the nested loops. You could also add index hints.
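    For the synchronization and optimization mentioned above, a minimal sketch (using the index name from the demonstration below; how often to run it, and the optlevel, are assumptions to adapt):
    BEGIN
      CTX_DDL.SYNC_INDEX('duns_text_key_idx');             -- pick up pending DML
      CTX_DDL.OPTIMIZE_INDEX('duns_text_key_idx', 'FULL'); -- defragment the index
    END;
    /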
    SCOTT@orcl_11gR2> -- tables:
    SCOTT@orcl_11gR2> CREATE TABLE duns
      2    (duns_loc  NUMBER,
      3       text_key  VARCHAR2 (30))
      4  /
    Table created.
    SCOTT@orcl_11gR2> CREATE TABLE primary_contact
      2    (duns_loc  NUMBER,
      3       emp_id       NUMBER)
      4  /
    Table created.
    SCOTT@orcl_11gR2> -- data:
    SCOTT@orcl_11gR2> INSERT INTO duns VALUES (1, 'highway')
      2  /
    1 row created.
    SCOTT@orcl_11gR2> INSERT INTO primary_contact VALUES (1, 1)
      2  /
    1 row created.
    SCOTT@orcl_11gR2> INSERT INTO duns
      2  SELECT object_id, object_name
      3  FROM   all_objects
      4  WHERE  object_id > 1
      5  /
    76027 rows created.
    SCOTT@orcl_11gR2> INSERT INTO primary_contact
      2  SELECT object_id, namespace
      3  FROM   all_objects
      4  WHERE  object_id > 1
      5  /
    76027 rows created.
    SCOTT@orcl_11gR2> -- indexes:
    SCOTT@orcl_11gR2> CREATE INDEX duns_duns_loc_idx
      2  ON duns (duns_loc)
      3  /
    Index created.
    SCOTT@orcl_11gR2> CREATE INDEX primary_contact_duns_loc_idx
      2  ON primary_contact (duns_loc)
      3  /
    Index created.
    SCOTT@orcl_11gR2> -- composite domain index (cdi) with filter by clause
    SCOTT@orcl_11gR2> -- as suggested by Roger:
    SCOTT@orcl_11gR2> CREATE INDEX duns_text_key_idx
      2  ON duns (text_key)
      3  INDEXTYPE IS CTXSYS.CONTEXT
      4  FILTER BY duns_loc
      5  /
    Index created.
    SCOTT@orcl_11gR2> -- gather statistics:
    SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'DUNS')
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'PRIMARY_CONTACT')
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> -- variables:
    SCOTT@orcl_11gR2> VARIABLE employeeid NUMBER
    SCOTT@orcl_11gR2> EXEC :employeeid := 1
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> VARIABLE search VARCHAR2(100)
    SCOTT@orcl_11gR2> EXEC :search := 'highway'
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> -- original query:
    SCOTT@orcl_11gR2> SET AUTOTRACE ON EXPLAIN
    SCOTT@orcl_11gR2> WITH
      2    subset AS
      3        (SELECT d.duns_loc
      4         FROM      duns d
      5         JOIN      primary_contact pc
      6         ON      d.duns_loc = pc.duns_loc
      7         AND      pc.emp_id = :employeeID)
      8  SELECT score(1), d.*
      9  FROM   duns d
    10  JOIN   subset s
    11  ON     d.duns_loc = s.duns_loc
    12  WHERE  CONTAINS (TEXT_KEY, :search,1) > 0
    13  ORDER  BY score(1) DESC
    14  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 4228563783
    | Id  | Operation                      | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |                   |     2 |    84 |   121   (4)| 00:00:02 |
    |   1 |  SORT ORDER BY                 |                   |     2 |    84 |   121   (4)| 00:00:02 |
    |*  2 |   HASH JOIN                    |                   |     2 |    84 |   120   (3)| 00:00:02 |
    |   3 |    NESTED LOOPS                |                   |    38 |  1292 |    50   (2)| 00:00:01 |
    |   4 |     TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  5 |      DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  6 |     INDEX RANGE SCAN           | DUNS_DUNS_LOC_IDX |     1 |     5 |     1   (0)| 00:00:01 |
    |*  7 |    TABLE ACCESS FULL           | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("D"."DUNS_LOC"="PC"."DUNS_LOC")
       5 - access("CTXSYS"."CONTAINS"("D"."TEXT_KEY",:SEARCH,1)>0)
       6 - access("D"."DUNS_LOC"="D"."DUNS_LOC")
       7 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- queries with better plans (no nested loops):
    SCOTT@orcl_11gR2> -- subquery factoring (with) clauses:
    SCOTT@orcl_11gR2> WITH
      2    subset1 AS
      3        (SELECT pc.duns_loc
      4         FROM      primary_contact pc
      5         WHERE  pc.emp_id = :employeeID),
      6    subset2 AS
      7        (SELECT score(1), d.*
      8         FROM      duns d
      9         WHERE  CONTAINS (TEXT_KEY, :search,1) > 0)
    10  SELECT subset2.*
    11  FROM   subset1, subset2
    12  WHERE  subset1.duns_loc = subset2.duns_loc
    13  ORDER  BY score(1) DESC
    14  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("PC"."DUNS_LOC"="D"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- inline views (sub-queries in the from clause):
    SCOTT@orcl_11gR2> SELECT subset2.*
      2  FROM   (SELECT pc.duns_loc
      3            FROM   primary_contact pc
      4            WHERE  pc.emp_id = :employeeID) subset1,
      5           (SELECT score(1), d.*
      6            FROM   duns d
      7            WHERE  CONTAINS (TEXT_KEY, :search,1) > 0) subset2
      8  WHERE  subset1.duns_loc = subset2.duns_loc
      9  ORDER  BY score(1) DESC
    10  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("PC"."DUNS_LOC"="D"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- ansi join:
    SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
      2  FROM   duns
      3  JOIN   primary_contact
      4  ON     duns.duns_loc = primary_contact.duns_loc
      5  WHERE  CONTAINS (duns.text_key, :search, 1) > 0
      6  AND    primary_contact.emp_id = :employeeid
      7  ORDER  BY SCORE(1) DESC
      8  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- old join:
    SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
      2  FROM   duns, primary_contact
      3  WHERE  CONTAINS (duns.text_key, :search, 1) > 0
      4  AND    duns.duns_loc = primary_contact.duns_loc
      5  AND    primary_contact.emp_id = :employeeid
      6  ORDER  BY SCORE(1) DESC
      7  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- in clause:
    SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
      2  FROM   duns
      3  WHERE  CONTAINS (duns.text_key, :search, 1) > 0
      4  AND    duns.duns_loc IN
      5           (SELECT primary_contact.duns_loc
      6            FROM   primary_contact
      7            WHERE  primary_contact.emp_id = :employeeid)
      8  ORDER  BY SCORE(1) DESC
      9  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 3825821668
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN SEMI              |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2>

  • Need complex query  with joins and AGGREGATE  functions.

    Hello Everyone ;
    Good Morning to all ;
    I have 3 tables with 2 lakh records. I need to check query performance. How does the CBO rewrite my query against the materialized view?
    I want to make complex join with AGGREGATE FUNCTION.
    my table details
    SQL> select * from tab;
    TNAME TABTYPE CLUSTERID
    DEPT TABLE
    PAYROLL TABLE
    EMP TABLE
    SQL> desc emp
    Name
    EID
    ENAME
    EDOB
    EGENDER
    EQUAL
    EGRADUATION
    EDESIGNATION
    ELEVEL
    EDOMAIN_ID
    EMOB_NO
    SQL> desc dept
    Name
    EID
    DNAME
    DMANAGER
    DCONTACT_NO
    DPROJ_NAME
    SQL> desc payroll
    Name
    EID
    PF_NO
    SAL_ACC_NO
    SALARY
    BONUS
    I want to make a complex query with joins and AGGREGATE functions.
    Dept names are : IT , ITES , Accounts , Mgmt , Hr
    GRADUATIONS are : Engineering , Arts , Accounts , business_applications
    I want to select records of employees who are working in IT and ITES and whose graduation is "Engineering",
    with salary > 20000 and <= 22800 and bonus > 1000 and <= 1999, with counts for males and females separately.
    Please help me to make such a complex query with joins.
    Thanks in advance ..
    Edited by: 969352 on May 25, 2013 11:34 AM

    969352 wrote:
    why do you avoid providing requested & NEEDED details?
    I do NOT understand what you expect.
    My Goal is:
    1. When executing my own query i need to check explain plan.
    please proceed to do so
    http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_9010.htm#SQLRF01601
    2. IF i enable query rewrite option .. i want to check explain plan ( how optimizer rewrites my query ) ?
    please proceed to do so
    http://docs.oracle.com/cd/E11882_01/server.112/e16638/ex_plan.htm#PFGRF009
    3. My only aim is QUERY PERFORMANCE with QUERY REWRITE clause in materialized view.
    It is an admirable goal.
    Best Wishes on your quest for performance improvements.
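    For what it's worth, here is a minimal sketch of the kind of query being described (the join keys and literal filter values are assumptions based on the DESC output and the stated requirements):
    SELECT d.dname,
           e.egender,
           COUNT(*) AS emp_count
    FROM   emp e
           JOIN dept    d ON d.eid = e.eid
           JOIN payroll p ON p.eid = e.eid
    WHERE  d.dname IN ('IT', 'ITES')
    AND    e.egraduation = 'Engineering'
    AND    p.salary > 20000 AND p.salary <= 22800
    AND    p.bonus  > 1000  AND p.bonus  <= 1999
    GROUP  BY d.dname, e.egender;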

  • SQL Optimization with join and in subselect

    Hello,
    I am having problems finding a way to optimize a query that has a join from a fact table to several dimension tables (star schema) and a constraint defined as an IN (SELECT ...). I am hoping that this constraint will filter the fact table first and then perform the joins, but I am seeing just the opposite: the optimizer joins first and filters at the very end. I am using the cost-based optimizer and saw that it evaluates IN sub-selects last in the predicate order. I tried the PUSH_SUBQ hint with no success.
    Does anyone have any other suggestions?
    Thanks in advance,
    David
    example sql:
    select ....
    from fact, dim1, dim2, .... dim<n>
    where
    fact.dim1_fk in ( select pf from dim1 where code = '10' )
    and fact.dim1_fk = dim1.pk
    and fact.dim2_fk = dim2.pk
    and fact.dim<n>_fk = dim<n>.pk

    The original query probably shouldn't use the IN clause, because in this example it is not necessary. There is no limit on the values returned if a sub-select is used; the limit is only an issue with hard-coded literals like
    .. in (1, 2, 3, 4 ...)
    Something like this is okay even in 8.1.7:
    SQL> select count(*) from all_objects
      2  where object_id in
      3    (select object_id from all_objects);
      COUNT(*)
         32378
    The IN clause has its uses and performs better than EXISTS in some conditions. Blanket statements to avoid IN and use EXISTS instead are just nonsense.
    Martin

  • Simple spatial query with different SRID

    Can anyone help me with just a really simple spatial query with different SRIDs?
    Here is the error i get when i perform the query:
    AND SDO_RELATE(A.geometrie_point, B.GEOMETRIE_POLYGONE, 'mask=anyinteract querytype=WINDOW') = 'TRUE'
    ERROR at line 4:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-13207: incorrect use of the [IN COMPATIBLE BOUNDS in SDO_RELATE] operator
    ORA-06512: at "MDSYS.SDO_INDEX_METHOD", line 84
    ORA-06512: at line 4

    Hi,
    I am using 8.1.7.2 and I have the latest spatial patch.
    From the start I had different SRIDs. The only thing I changed was the bounds of the coordinate system.
    I reset the bounds of the coordinate system with SRID 8307 to -180,
    180 in X (longitude), and -90,90 in Y (latitude) to see if the query window geometry
    (geom2) would be transformed to the coordinate system of the layer geometries (geom1).
    Here is the content of USER_SDO_GEOM_METADATA:
    GIS_PCP GEOMETRIE_POINT
    SDO_DIM_ARRAY(SDO_DIM_ELEMENT('X', -5800000, 3600000, .00000005), SDO_DIM_ELEMENT('Y', 3516000, 6000000, .00000005))
    82227
    GIS_TEST GEOMETRIE_POLYGONE
    SDO_DIM_ARRAY(SDO_DIM_ELEMENT('X', -180, 180, .00000005), SDO_DIM_ELEMENT('Y', -90, 90, .00000005))
    8307
    Here is my query :
    select /*+ ordered */ A.no_point
    from gis_pcp A, gis_test B
    WHERE SDO_RELATE(A.geometrie_POINT, B.GEOMETRIE_POLYGONE, 'mask=anyinteract querytype=WINDOW') = 'TRUE';
    Here is my result :
    WHERE SDO_RELATE(A.geometrie_POINT, B.GEOMETRIE_POLYGONE, 'mask=anyinteract querytype=WINDOW') = 'TRUE'
    ERROR at line 3:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-13207: incorrect use of the [IN COMPATIBLE BOUNDS in SDO_RELATE] operator
    ORA-06512: at "MDSYS.SDO_INDEX_METHOD", line 84
    ORA-06512: at line 4
    If I change the bounds of GIS_TEST to the exact bounds of GIS_PCP then I don't get the error message above and I get correct results. Do I have to adjust the bounds of every layer to fit the biggest bounds?
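    One thing worth testing (a sketch, not from this thread; recent releases have the two-argument SDO_CS.TRANSFORM shown here, while very old releases may need the variant that also takes an SDO_DIM_ARRAY) is to transform the query-window geometry into the layer's SRID so both sides of the operator agree:
    select /*+ ordered */ A.no_point
    from gis_pcp A, gis_test B
    WHERE SDO_RELATE(A.geometrie_point,
                     SDO_CS.TRANSFORM(B.GEOMETRIE_POLYGONE, 82227),
                     'mask=anyinteract querytype=WINDOW') = 'TRUE';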

  • EclipseLink Direct Map with Joining mapping issue

    Hi EclipseLink Team,
    We encountered an issue to map an attribute as Direct Map with Join Fetch enabled in EclipseLink Workbench 1.1.1 (Build v20090430-r4097).
    Basically, we have the following data model:
    SESSION
    SESS_NO (PK),
    PARAM_SESS
    SESS_NO (FK) (PK),
    PARAM_NAME (PK),
    PARAM_VALUE
    Then we have Session.java persistent entity associated with SESSION table. This class contains Map attribute sessionParameters which we map as Direct Map with PARAM_SESS.PARAM_NAME as key and PARAM_SESS.PARAM_VALUE as value (referenced by PARAM_SESS.SESS_NO = SESSION.SESS_NO).
    The described mapping works fine without Joining enabled (both for Lazy and Eager Loading).
    But we have cases when we want to get parameters for a number of sessions. And disabling of Joining leads to a number of performed SELECT SQL queries (both for Lazy and Eager Loading) those are bad for performance.
    So we have chosen the Eager Loading and set the Join Fetch option to Outer. Then we have got the following:
    I see in the log that only one SQL query is performed: SELECT DISTINCT t0.SESS_NO, …, t1.PARAM_VALUE, t1.PARAM_NAME FROM {oj SESSION t0 LEFT OUTER JOIN PARAM_SESS t1 ON (t1.SESS_NO = t0.SESS_NO)}. This is pretty good for us. This query returns exactly what we expect when executed on the database. But the Map attribute for every session is populated incorrectly: the Maps are empty (not corresponding to the available relational data).
    Could you please let us know if this is a bug, or kind of known issue or we made something wrong? Some hints and proposals would be very helpful and appreciated.
    I should mention that for now we want to map all of this for read only purpose.
    Thanks.
    Best Regards,
    Alexey Elisawetski

    James,
    I've tried both 1.2 release and 2.0 (v20091121-r5847) but received the same result - empty Map.
    Moreover, for both versions the following string was absent in deployed XML file:
    <direct-key-field table="PARAM_SESS" name="PARAM_NAME" xsi:type="column"/>
    Therefore, on application initialization I got an exception: org.eclipse.persistence.exceptions.DescriptorException with the message This descriptor contains a mapping with a DirectMapMapping and no key field set.
    So I was forced to add the line manually.
    This seems buggy to me...
    Regards,
    Alexey

  • Help with Joining two SharePoint lists using LINQ

    Hi Guys,
    I have found many threads with this question, although I had one doubt. I wanted to know: while performing a join operation on two SharePoint lists using LINQ, does the column on which we are performing the join need to be a Lookup column?
    I was initially using CAML, but since my lists do not contain lookup columns I switched to LINQ; my doubt still remains.
    I would really appreciate any help from you guys and also would appreciate if I could get some examples that I could refer to.
    Thank you

    Joins in LINQ to SharePoint 2010
    How to: Query Using LINQ to SharePoint
    This post is my own opinion and does not necessarily reflect the opinion or view of Slalom.

  • Hi! Everyone, I have some problems with JOIN and Sub-query; Could you help me, Please?

    Dear Sir/Madam
    I'm a student who is interested in Oracle Database and
    I have some problems with JOIN and Sub-query.
    I hope so many of you could help me.
    If I use JOIN without a sub-query, will it be faster or not?
    SELECT field1, field2 FROM tableA INNER JOIN tableB
    If I use JOIN with a sub-query, will it be faster or not?
    SELECT field1,field2,field3 FROM tableA INNER JOIN (SELECT field1,field2 FROM tableB)
    Thanks in advance!

    Hi,
    fac30d8e-74d3-42aa-b643-e30a3780e00f wrote:
    Dear Sir/Madam
    I'm a student who is interested in Oracle Database and
    I have some problems with JOIN and Sub-query.
    I hope so many of you could help me.
    If I use JOIN without a sub-query, will it be faster or not?
    SELECT field1, field2 FROM tableA INNER JOIN tableB
    If I use JOIN with a sub-query, will it be faster or not?
    SELECT field1,field2,field3 FROM tableA INNER JOIN (SELECT field1,field2 FROM tableB)
    Thanks in advance!
    As the others have said, the execution plan will give you a better idea about which is faster.
    If you're trying to see how using (or not using) a sub-query affects performance, make the rest of the queries as similar as possible.  For example, include field3 in both queries, or ignore field3 in both queries.
    In this particular case, I guess the optimizer would do the same thing either way, but that's just a guess.  I can't see your execution plans.
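    A generic way to produce one (a sketch; the join condition is an assumption, since the queries above omit one):
    EXPLAIN PLAN FOR
    SELECT a.field1, a.field2
    FROM   tableA a
    INNER JOIN tableB b ON b.field1 = a.field1;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);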
    In general, simpler code is faster, and better in other ways, too.  In this case
    tableB
    is simpler than
    (SELECT field1, field2  FROM tableB)
    Why do you want a sub-query in this example?

  • Report  performance with Hierarchies

    Hi
    How do we improve query performance with hierarchies? We have to do a lot of navigation in the query, and the volume of data is very big.
    Thanks
    P G

    Hi,
    check these:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Query Performance – Is "Aggregates" the way out for me?
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    ° the OLAP cache is architected to store query result sets and to give all users access to those result sets.
    If a user executes a query, the result set for that query’s request can be stored in the OLAP cache; if that same query (or a derivative) is then executed by another user, the subsequent query request can be filled by accessing the result set already stored in the OLAP cache.
    In this way, a query request filled from the OLAP cache is significantly faster than queries that receive their result set from database access
    ° The indexes that are created in the fact table for each dimension allow you to easily find and select the data
    see http://help.sap.com/saphelp_nw04/helpdata/en/80/1a6473e07211d2acb80000e829fbfe/content.htm
    ° when you load data into the InfoCube, each request has its own request ID, which is included in the fact table in the packet dimension.
    This (besides giving the possibility to manage/delete single request) increases the volume of data, and reduces performance in reporting, as the system has to aggregate with the request ID every time you execute a query. Using compressing, you can eliminate these disadvantages, and bring data from different requests together into one single request (request ID 0).
    This function is critical, as the compressed data can no longer be deleted from the InfoCube using its request IDs and, logically, you must be absolutely certain that the data loaded into the InfoCube is correct.
    see http://help.sap.com/saphelp_nw04/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/content.htm
    ° by using partitioning you can split up the whole dataset for an InfoCube into several, smaller, physically independent and redundancy-free units. Thanks to this separation, performance is increased when reporting, or also when deleting data from the InfoCube.
    see http://help.sap.com/saphelp_nw04/helpdata/en/33/dc2038aa3bcd23e10000009b38f8cf/content.htm
    Hope it helps!
    Thank you,
    dst

  • Are there issues with poor performance with Mavericks OS?

    Are there issues with slow and reduced performance with Mavericks OS?

    check this
    http://apple.stackexchange.com/questions/126081/10-9-2-slows-down-processes
    or
    this:
    https://discussions.apple.com/message/25341556#25341556
    I am doing a lot of analyses with 10.9.2 on a late 2013 MBP, and these analyses generally last hours or days. I observed that Mavericks slows them down considerably for some reason after a few hours of computations, making it impossible for me to work with this computer...

  • Performance with the new Mac Pros?

    I sold my old Mac Pro (first generation) a few months ago in anticipation of the new line-up. In the meantime, I purchased a i7 iMac and 12GB of RAM. This machine is faster than my old Mac for most Aperture operations (except disk-intensive stuff that I only do occasionally).
    I am ready to purchase a "real" Mac, but I'm hesitating because the improvements just don't seem that great. I have two questions:
    1. Has anyone evaluated qualitative performance with the new ATI 5870 or 5770? Long ago, Aperture seemed pretty much GPU-constrained. I'm confused about whether that's the case anymore.
    2. Has anyone evaluated any of the new Mac Pro chips for general day-to-day use? I'm interested in processing through my images as quickly as possible, so the actual latency to demosaic and render from the raw originals (Canon 1-series) is the most important metric. The second thing is having reasonable performance for multiple brushed-in effect bricks.
    I'm mostly curious if anyone has any experience to point to whether it's worth it -- disregarding the other advantages like expandability and nicer (matte) displays.
    Thanks.
    Ben

    Thanks for writing. Please don't mind if I pick apart your statements.
    "For an extra $200 the 5870 is a no brainer." I agree on a pure cost basis that it's not a hard decision. But I have a very quiet environment, and I understand this card can make a lot of noise. To pay money, end up with a louder machine, and on top of that realize no significant benefit would be a minor disaster.
    So, the more interesting question is: has anyone actually used the 5870 and can compare it to previous cards? A 16-bit 60 megapixel image won't require even .5GB of VRAM if fully tiled into it, for example, so I have no ability, a priori, to prove to myself that it will matter. I guess I'm really hoping for real-world data. Perhaps you speak from this experience, Matthew? (I can't tell.)
    Background work and exporting are helpful, but not as critical for my primary daily use. I know the CPU is also used for demosaicing or at least some subset of the render pipeline, because I have two computers that demonstrate vastly different render-from-raw response times with the same graphics card. Indeed, it is this lag that would be the most valuable of all for me to reduce. I want to be able to flip through a large shoot and see each image at 100% as instantaneously as possible. On my 2.8 i7 that process takes about 1 second on average (when Aperture doesn't get confused and mysteriously stop rendering 100% images).
    Ben
