Concatenated datastore performance with other predicates

Hi
I am using context indexes with a concatenated datastore.
The query is like this -
select *
from my_table
where contains ( my_column, 'token_1 within xx or token_2 within yy ', 1 ) > 0
and some_other_column = 'xxx'
There is no index on "some_other_column".
Would it help to include "some_other_column" in the concatenated datastore? Will this increase the performance of the query, or does it always depend on the type of data we have?
How is the query of a concatenated datastore fired? Is the $I table queried for each token in the query?
Thanks and regards
Pratap

Yes, it should generally be faster to include "some_other_column" in the
list for the concatenated datastore.
The query would then be
select * from my_table
where contains( my_column, '(token_1 within xx or token_2 within yy) and (xxx within some_other_column)', 1 ) > 0
Note that this is not exactly the same as your query - for example if some_other_column contained "abc xxx xyz" then my query would be a hit but yours would not. If you know the column will only ever contain one word, then they are identical.
- Roger
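
For reference, a minimal sketch of what adding the extra column might look like with the ctx_cd kit discussed in the threads below; the cdstore name my_cdstore and the index name my_index are assumptions, not taken from the original post:
begin
  -- add the filter column as a new section in the concatenated datastore
  ctx_cd.add_column('my_cdstore', 'some_other_column');
end;
/
-- rebuild the index so the new section gets indexed (index name is hypothetical)
alter index my_index rebuild parameters ('replace');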

Similar Messages

  • Concatenated datastores

    I have developed applications around several intermedia indexed text columns, but now would like to create a concatenated datastore indexing those same columns and using sections within new applications. Will my older applications still be able to access the individual intermedia text indexes that are already in place or will I have to modify the older applications to access the new concatenated datastore?

  • Concatenated datastore fuzzy searches and performance...

    Oracle 8.1.7:
    I am using the concatenated datastore and indexing two columns.
    The query I am executing includes an exact match on one column and a fuzzy match on the second column.
    When I execute the query, performance should improve as the exact match column matches fewer values.
    This is the case when we execute an exact match search on both columns.
    However, when one column is an exact match and the second column is a fuzzy match, this is not true.
    Is this normal processing, and why? Is this a bug?
    If you need more information, please let me know.
    We are under a deadline and this is our final road block.
    TIA
    Colleen Geislinger

    This is more information about our scenario:
    We have two groups in the datastore:
    concat:
    1.) hierarchy:(example text) 321826 325123 543123
    2.) page: Actual document text.
    321826 325123 543123 represent ids in a hierarchy structure. As you move from left to right, the number of times each id occurs decreases, so there should be fewer exact matches.
    Example: In this index all pages have 321826 as the first value. A few pages have 543123 and all others will have some other number as the last value.
    if I do this query:
    contains(concat,(321826 within hierarchy ) and ('personnel') within page)
    it takes about 10 seconds because 321826 will hit all pages.
    if I do this query:
    contains(concat,(543123 within hierarchy ) and ('personnel') within page)
    it takes only about 1 second because 543123 will hit just a few pages.
    BUT:
    Fuzzy search....
    if I do this query:
    search A.) contains(concat,(321826 within hierarchy ) and ?('personnel') within page)
    it takes about 30 seconds because 321826 will hit all pages. This performance is acceptable here.
    BUT if I do this query:
    search B.) contains(concat,(543123 within hierarchy ) and ?('personnel') within page)
    it takes about 30 seconds even though 543123 will hit only a few pages.
    This should be faster than 30 seconds because you're searching over only a fraction of material for the fuzzy search part.
    We've played with different variations on the () and the '' but nothing seems to change this.
    Any advice on how to make search B.) faster??
    We don't understand why we see the speed difference with the exact match but DON'T see it with the fuzzy search...
    I can send you some test data with the index and query scripts if you want.
    Our indexes are on large tables (2,000,000 rows).
    TIA
    Colleen Geislinger.
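
    For illustration, the two query shapes above written out as complete CONTAINS calls; the table name docs and key column id are assumptions, and the ? fuzzy operator placement follows the post:
    -- exact match in both sections: fast when the hierarchy id is selective
    select id from docs
    where contains(concat, '(543123 within hierarchy) and (personnel within page)') > 0;
    -- fuzzy match on the page term: per the post this stays at ~30 seconds
    -- even when the hierarchy id is selective
    select id from docs
    where contains(concat, '(543123 within hierarchy) and (?personnel within page)') > 0;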

  • Fuzzy searching and concatenated datastore query performance problems.

    I am using the concatenated datastore and indexing two columns.
    The query I am executing includes an exact match on one column and a fuzzy match on the second column.
    When I execute the query, performance should improve as the exact match column matches fewer values.
    This is the case when we execute an exact match search on both columns.
    However, when one column is an exact match and the second column is a fuzzy match, this is not true.
    Is this normal processing, and why? Is this a bug?
    If you need more information, please let me know.
    We are under a deadline and this is our final road block.
    TIA
    Colleen Geislinger

    I see that you have posted the message in the Oracle Text forum, good! You should get a better, more timely answer there.
    Larry

  • Problem with Concatenated Datastore

    Hello,
    We are trying to implement the Concatenated Datastore, and we've got errors.
    So we downloaded the zip file from the link "Download the kit" on the page http://www.oracle.com/technology/sample_code/products/text/htdocs/concatenated_text_datastore/cdstore_readme.html#Installation. We unzipped the file and got several files.
    1) We tried the "full_example" file, but the script "cdstore_10g_user" it calls (line 12) doesn't exist.
    2) So, to work around that, we tried to follow the instructions in the "full_example.sql" file, with small corrections :-)). We created a user test_user, loaded the cdstore.sql file, and here we got some errors: one because a table doesn't exist (ORA-00942; the first time that's normal, since the new tables don't exist yet), and another for an invalid identifier (ORA-00904). Here is the exact log:
    Warning: Package Body created with compilation errors.
    Errors for PACKAGE BODY CTX_CD:
    LINE/COL ERROR
    48/7 PL/SQL: SQL Statement ignored
    51/16 PL/SQL: ORA-00904: "CDSTORE_NAME": invalid identifier
    3) We continued with the "full_example.sql" file: created a table mytab for test_user, inserted data into this table, and dropped the existing cdstore. And when we tried to create a new cdstore, we got this error:
    SQL> prompt new...
    begin
    ctx_cd.create_cdstore('my2_cdstore', 'mytab');
    end;
    new...
    SQL> 2 3 4 begin
    ERROR at line 1:
    ORA-04063: package body "TEST_USER.CTX_CD" has errors
    ORA-06508: PL/SQL: could not find program unit being called: "TEST_USER.CTX_CD"
    ORA-06512: at line 2
    Is there anyone who can tell us what's wrong?
    Best regards
    Laurent PELLETIER

    ATTENTION ROGER FORD
    It looks like you need to make a few fixes to your full_example.sql and cdstore.sql. There are some mismatches between the names of scripts and what is called, between the names of tables created and selected from, and so forth, plus some missing values for inserts. When I initially ran it with cdstore.sql under the scott schema it ran OK, because it was using the old definitions from the previous version, but then it wouldn't run under test_user, so I suspect it was using the old definitions when you tested as well. After making the following corrections, it ran successfully with test_user.
    -- changes to full_example.sql:
    --   changed password for system (not a bug)
    --   changed @cdstore_10g_user to @cdstore
    -- changes to cdstore.sql:
    --   first: 
    --     changed each occurrence of ctx_cdstore to ctx_user_cdstore
    --   then:
    --     added column cdstore_name varchar2(30) to ctx_user_cdstore_cols table:
    --       create table ctx_user_cdstore_cols (... cdstore_name varchar2(30) ...) 
    --     added cdstore_name and l_name to both inserts into ctx_user_cdstore_cols:
    --       insert into ctx_user_cdstore_cols (... cdstore_name ...)
    --       values (... l_name ...)
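
    A sketch of the resulting pattern after those corrections; the real table has more columns, so everything except cdstore_name and the insert shape is hypothetical:
    create table ctx_user_cdstore_cols (
      column_name  varchar2(30),
      cdstore_name varchar2(30)
    );
    insert into ctx_user_cdstore_cols (column_name, cdstore_name)
    values ('MY_COLUMN', 'MY_CDSTORE');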

  • Nice to see 13" retina but it has only Intel HD Graphics 4000 and does not have NVIDIA GeForce GT 650M with 1GB of GDDR5 memory card. How it will affect the speed, performance and other things compared to 15" retina where NVIDIA GeForce card is available.

    The 15" Retina's will have better performance than any 13" Retina. Not only do the 15" machines have dedicated GPU's, but they also have quad-core processors, whereas the 13" Retina's only have dual-core processors.

  • Performance with dates in the where clause

    CREATE TABLE TEST_DATA (
      FNUMBER NUMBER,
      FSTRING VARCHAR2(4000 BYTE),
      FDATE DATE
    );
    create index t_indx on test_data(fdata);
    query 1: select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
    query 2: select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
    query 3: select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_date('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
    My questions:
    1) Why isn't the index t_indx used in Execution plan 1?
    2) From the execution plans, I see that queries 2 & 3 are better than query 1. I do not see any difference between execution plans 2 & 3. Which one is better?
    3) I read somewhere - "Always check the Access Predicates and Filter Predicates of Explain Plan carefully to determine which columns are contributing to a Range Scan and which columns are merely filtering the returned rows. Be sceptical if the same clause is shown in both."
    Is that true for Execution plan 2 & 3?
    4) Could someone explain what the filter & access predicates mean here?
    Thanks in advance.
    Execution Plan 1:
    SQL> select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
    COUNT(*)
    283
    Execution Plan
    Plan hash value: 1486387033
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 9 | 517 (20)| 00:00:07 |
    | 1 | SORT AGGREGATE | | 1 | 9 | | |
    |* 2 | TABLE ACCESS FULL| TEST_DATA | 341 | 3069 | 517 (20)| 00:00:07 |
    Predicate Information (identified by operation id):
    2 - filter(TRUNC(INTERNAL_FUNCTION("FDATE"))=TRUNC(SYSDATE@!))
    Note
    - dynamic sampling used for this statement
    Statistics
    4 recursive calls
    0 db block gets
    1610 consistent gets
    0 physical reads
    0 redo size
    412 bytes sent via SQL*Net to client
    380 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    Execution Plan 2:
    SQL> select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
    COUNT(*)
    283
    Execution Plan
    Plan hash value: 1687886199
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 9 | 3 (0)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | 9 | | |
    |* 2 | FILTER | | | | | |
    |* 3 | INDEX RANGE SCAN| T_INDX | 283 | 2547 | 3 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter(TRUNC(SYSDATE@!)<=TRUNC(SYSDATE@!)+.9999884259259259259259259259259259259259)
    3 - access("FDATE">=TRUNC(SYSDATE@!) AND "FDATE"<=TRUNC(SYSDATE@!)+.9999884259259259259259259259259259259259)
    Note
    - dynamic sampling used for this statement
    Statistics
    7 recursive calls
    0 db block gets
    76 consistent gets
    0 physical reads
    0 redo size
    412 bytes sent via SQL*Net to client
    380 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    Execution Plan 3:
    SQL> select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_date('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
    COUNT(*)
    283
    Execution Plan
    Plan hash value: 1687886199
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 9 | 3 (0)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | 9 | | |
    |* 2 | FILTER | | | | | |
    |* 3 | INDEX RANGE SCAN| T_INDX | 283 | 2547 | 3 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter(TO_DATE('21-APR-10','dd-MON-yy')<=TO_DATE('21-APR-10 23:59:59','DD-MON-YY hh24:mi:ss'))
    3 - access("FDATE">=TO_DATE('21-APR-10','dd-MON-yy') AND "FDATE"<=TO_DATE('21-APR-10 23:59:59','DD-MON-YY hh24:mi:ss'))
    Note
    - dynamic sampling used for this statement
    Statistics
    7 recursive calls
    0 db block gets
    76 consistent gets
    0 physical reads
    0 redo size
    412 bytes sent via SQL*Net to client
    380 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed

    Hi,
    user10541890 wrote:
    Performance with dates in the where clause
    CREATE TABLE TEST_DATA (
      FNUMBER NUMBER,
      FSTRING VARCHAR2(4000 BYTE),
      FDATE DATE
    );
    create index t_indx on test_data(fdata);
    Did you mean fdate (ending in e)?
    Be careful; post the code you're actually running.
    query 1: select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
    query 2: select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
    query 3: select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_date('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
    My questions:
    1) Why isn't the index t_indx used in Execution plan 1?
    To use an index, the indexed column must stand alone as one of the operands. If you had a function-based index on TRUNC(fdate), then it might be used in Query 1, because the left operand of = is TRUNC(fdate).
    2) From the execution plans, I see that queries 2 & 3 are better than query 1. I do not see any difference between execution plans 2 & 3. Which one is better?
    That depends on what you mean by "better".
    If "better" means faster, you've already shown that one is about as good as the other.
    Queries 2 and 3 are doing different things. Assuming the table stays the same, Query 2 may give different results every day, but the results of Query 3 will never change.
    For clarity, I prefer:
    WHERE fdate >= TRUNC (SYSDATE)
    AND   fdate <  TRUNC (SYSDATE) + 1
    (or replace SYSDATE with a TO_DATE expression, depending on the requirements).
    3) I read somewhere - "Always check the Access Predicates and Filter Predicates of Explain Plan carefully to determine which columns are contributing to a Range Scan and which columns are merely filtering the returned rows. Be sceptical if the same clause is shown in both."
    Is that true for execution plans 2 & 3?
    4) Could someone explain what the filter & access predicates mean here?
    Sorry, I can't.
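
    To make both suggestions concrete, a minimal sketch using the names from the test case (and assuming the indexed column really is fdate, as suspected above):
    -- a function-based index lets Query 1 use an index:
    create index t_trunc_indx on test_data (trunc(fdate));
    select count(*) from test_data where trunc(fdate) = trunc(sysdate);
    -- the clearer range form suggested above:
    select count(*) from test_data
    where fdate >= trunc(sysdate)
      and fdate <  trunc(sysdate) + 1;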

  • Concatenated Datastore doesn't work for me

    I'm using oracle InterMedia Text.
    Now I'm trying to build a Concatenated Datastore.
    I executed the sql-file cdstore.sql. 0 Errors.
    But when I'm trying to execute
    "exec ctx_cd.add_column('my_cdstore',columnname)
    I get this error:
    ORA-01749: you may not GRANT/REVOKE privileges to/from yourself
    ORA-06512: at "TWEB.CTX_cd", line 315
    ORA-06512: at "TWEB.CTX_cd", line 443
    ORA-06512: at "TWEB.CTX_cd", line 816
    ORA-06512: at line 2
    I'm fairly confused. I read that "The Concatenated Datastore is an additional datastore for Oracle Text".
    Can I use it when I'm working with interMedia, or what am I doing wrong?
    THX for help!

    There are two possibilities here. I think the most likely one is that you've accidentally run cdstore.sql as the user "tweb" at some stage. The ctx_cd package is therefore overriding the version owned by user CTXSYS.
    To check this, log on as user tweb, and do this:
    DROP PACKAGE CTX_CD;
    if that succeeds - then this is the problem. You'll also want to do all this as tweb:
    DROP PACKAGE FRIEDMAN;
    DROP TABLE CTX_CDSTORES;
    DROP TABLE CTX_CDSTORE_COLS;
    DROP SEQUENCE CTX_CDSTORE_SEQ;
    DROP VIEW CTX_USER_CDSTORES;
    DROP VIEW CTX_USER_CDSTORE_COLS;
    The other possibility is that you've modified the package so that it runs under "invoker rights" rather than "definer rights", using the AUTHID clause. This will not work.
    Please let me know if this solves your problem.
    - Roger
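
    Before dropping anything, one way to confirm which copy of the package the session resolves is to query the standard dictionary views (a sketch):
    select owner, object_type, status
    from all_objects
    where object_name = 'CTX_CD'
    order by owner;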

  • Oracle Text Concatenated Datastore

    I have read this:
    http://www.oracle.com/technology/sample_code/products/text/htdocs/concatenated_text_datastore/cdstore_readme.html
    I've been trying to follow the 'Installation' section.
    I've downloaded cdstore.sql but I get error ORA-00942 (table does not exist) because ctx_user_cdstore_cols does not exist (at line 618 in the file).
    Indeed, the table created is 'ctx_cdstore_cols' and not 'ctx_user_cdstore_cols'.
    I've changed it to ctx_cdstore_cols and now get ORA-00904 because CDSTORE_NAME is not a column of ctx_cdstores.
    Anyway, I believe that this code should work as-is, so there is something big I must be missing.
    Has anyone managed to install this package and how please?

    It's not a problem with the concatenated datastore, it's about operator precedence.
    If you search for 'A or B within SECTION', "within" has a higher precedence than "or", so this becomes 'A or (B within SECTION)'. What you need to say is '(A or B) within SECTION', or in your case '(BROOKS or BONDS) within name'.
    Hope this helps.
    Roger
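
    For example (the table and column names here are hypothetical):
    -- parses as 'BROOKS or (BONDS within name)':
    select id from mytab where contains(text_col, 'BROOKS or BONDS within name') > 0;
    -- restricts both terms to the name section:
    select id from mytab where contains(text_col, '(BROOKS or BONDS) within name') > 0;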

  • Report performance with Hierarchies

    Hi
    How do I improve query performance with hierarchies? We have to do a lot of navigation in the query and the volume of data is very big.
    Thanks
    P G

    Hi,
    Check these:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Query Performance – Is "Aggregates" the way out for me?
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    ° the OLAP cache is architected to store query result sets and to give all users access to those result sets.
    If a user executes a query, the result set for that query’s request can be stored in the OLAP cache; if that same query (or a derivative) is then executed by another user, the subsequent query request can be filled by accessing the result set already stored in the OLAP cache.
    In this way, a query request filled from the OLAP cache is significantly faster than queries that receive their result set from database access
    ° The indexes that are created in the fact table for each dimension allow you to easily find and select the data
    see http://help.sap.com/saphelp_nw04/helpdata/en/80/1a6473e07211d2acb80000e829fbfe/content.htm
    ° when you load data into the InfoCube, each request has its own request ID, which is included in the fact table in the packet dimension.
    This (besides giving the possibility to manage/delete single requests) increases the volume of data, and reduces performance in reporting, as the system has to aggregate with the request ID every time you execute a query. Using compression, you can eliminate these disadvantages, and bring data from different requests together into one single request (request ID 0).
    This function is critical, as the compressed data can no longer be deleted from the InfoCube using its request IDs and, logically, you must be absolutely certain that the data loaded into the InfoCube is correct.
    see http://help.sap.com/saphelp_nw04/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/content.htm
    ° by using partitioning you can split up the whole dataset for an InfoCube into several, smaller, physically independent and redundancy-free units. Thanks to this separation, performance is increased when reporting, or also when deleting data from the InfoCube.
    see http://help.sap.com/saphelp_nw04/helpdata/en/33/dc2038aa3bcd23e10000009b38f8cf/content.htm
    Hope it helps!
    Thank you,
    dst

  • Are there issues with poor performance with Mavericks OS?

    Are there issues with slow and reduced performance with Mavericks OS?

    check this
    http://apple.stackexchange.com/questions/126081/10-9-2-slows-down-processes
    or
    this:
    https://discussions.apple.com/message/25341556#25341556
    I am doing a lot of analyses with 10.9.2 on a late 2013 MBP, and these analyses generally last hours or days. I observed that Mavericks is slowing them down considerably for some reason after a few hours of computations, making it impossible for me to work with this computer...

  • Performance with null parameters

    The below is the main cursor for a small report with some horrible performance. The report is taking 30 to 40 minutes to generate just a few rows of data. After checking my joins and whether the indexes were correct and all the other simple things, I began removing the lines on the bottom that are just used to insert incoming parameters. What I found is that if I comment out the two lines on the bottom of the statement (with v_person_id and v_sup_id), the query returns immediately if I pass a start and end date only and leave the other parameters as null. If I run the query with those lines in and pass null for those values also, it goes back to 30 minutes or so. I am confused as passing them as null should make the line evaluate to ppf.person_id = ppf.person_id. We use this technique with all of our reporting when we pass null values but for some reason it is failing horribly in this query. Can anyone see what I am doing wrong?
    SELECT ppf.employee_number,
            ppf.full_name,
            loc.attribute3,
            loc.location_code,
            glc.segment1,
            glc.segment2,
            glc.segment3,
            org.name,
            (SELECT ppf1.employee_number||'~'||ppf1.full_name
                 FROM per_all_people_f ppf1
                WHERE trunc(sysdate) between ppf1.effective_start_date and ppf1.effective_end_date
                  AND ppf1.person_id = asg.supervisor_id
                   AND rownum = 1) supervisor,
            past.per_system_status,
            hts.approval_status
      FROM hr.per_all_people_f ppf,
              hr.per_all_assignments_f asg,
            hr.per_assignment_status_types past,
              hr.hr_all_organization_units org,
           hr.hr_locations_all loc,
            hxc.hxc_timecard_summary hts,
            hr.per_person_type_usages_f ppu,
            gl.gl_code_combinations glc,
            hr.per_person_types ppt
    WHERE ppu.effective_end_date BETWEEN ppf.effective_start_date AND ppf.effective_end_date
       AND ppu.effective_end_date BETWEEN asg.effective_start_date AND asg.effective_end_date
       AND (   :v_start BETWEEN ppu.effective_start_date AND ppu.effective_end_date
            OR :v_end BETWEEN ppu.effective_start_date AND ppu.effective_end_date
            OR ppu.effective_start_date BETWEEN :v_start AND :v_end )
       AND asg.primary_flag = 'Y'
       AND asg.assignment_type = 'E'
       AND asg.assignment_status_type_id = past.assignment_status_type_id
       AND asg.organization_id = org.organization_id
       AND asg.location_id = loc.location_id
       AND asg.default_code_comb_id = glc.code_combination_id
       AND ppf.person_id = ppu.person_id
       AND ppf.person_id = asg.person_id
       AND ppu.person_type_id = ppt.person_type_id
       AND ppt.user_person_type = 'Employee'
       AND ppf.person_id = hts.resource_id
       AND hts.start_time between :v_start and :v_end
       AND org.name = NVL(:v_department, org.name)
       AND loc.location_code = NVL(:v_location, loc.location_code)
       AND glc.code_combination_id = NVL(:v_gl_code_id, glc.code_combination_id)
       AND asg.supervisor_id = NVL(:v_sup_id, asg.supervisor_id)---------------------------------------------------------------
       AND ppf.person_id = NVL(:v_person_id, ppf.person_id)-------------------------------------------------------------------

    user7726970 wrote:
    Now I am more confused as we tried something similar yesterday and it did not help but your version makes it respond immediately. Thanks for this. Would you please explain a little why your version works where my version does not?
    and decode(:v_sup_id,null,'x',asg.supervisor_id) = decode(:v_sup_id,null,'x',:v_sup_id)
    and decode(:v_person_id,null,'x',ppf.person_id)  = decode(:v_person_id,null,'x',:v_person_id)
    That depends on your version. The example that I posted simply checks: if your variable is null, then use the character 'x'. That is like WHERE 'x' = 'x', which does not use any table column and does nothing at all. If the variable is not null, it simply compares the table column with the variable, and assuming the table column has an index on it, the optimizer will utilize it for performance.
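
    A standalone sketch of the technique; the decode predicates are from the reply above, while the table alias and count query around them are illustrative only:
    -- when :v_sup_id is null, both sides become 'x' and the predicate is
    -- trivially true; when it is not null, the indexed column is compared
    -- to the bind as usual
    select count(*)
    from hr.per_all_assignments_f asg
    where decode(:v_sup_id, null, 'x', asg.supervisor_id)
        = decode(:v_sup_id, null, 'x', :v_sup_id);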

  • Performance with the new Mac Pros?

    I sold my old Mac Pro (first generation) a few months ago in anticipation of the new line-up. In the meantime, I purchased a i7 iMac and 12GB of RAM. This machine is faster than my old Mac for most Aperture operations (except disk-intensive stuff that I only do occasionally).
    I am ready to purchase a "real" Mac, but I'm hesitating because the improvements just don't seem that great. I have two questions:
    1. Has anyone evaluated qualitative performance with the new ATI 5870 or 5770? Long ago, Aperture seemed pretty much GPU-constrained. I'm confused about whether that's the case anymore.
    2. Has anyone evaluated any of the new Mac Pro chips for general day-to-day use? I'm interested in processing through my images as quickly as possible, so the actual latency to demosaic and render from the raw originals (Canon 1-series) is the most important metric. The second thing is having reasonable performance for multiple brushed-in effect bricks.
    I'm mostly curious if anyone has any experience to point to whether it's worth it -- disregarding the other advantages like expandability and nicer (matte) displays.
    Thanks.
    Ben

    Thanks for writing. Please don't mind if I pick apart your statements.
    "For an extra $200 the 5870 is a no brainer." I agree on a pure cost basis that it's not a hard decision. But I have a very quiet environment, and I understand this card can make a lot of noise. To pay money, end up with a louder machine, and on top of that realize no significant benefit would be a minor disaster.
    So, the more interesting question is: has anyone actually used the 5870 and can compare it to previous cards? A 16-bit 60 megapixel image won't require even .5GB of VRAM if fully tiled into it, for example, so I have no ability, a priori, to prove to myself that it will matter. I guess I'm really hoping for real-world data. Perhaps you speak from this experience, Matthew? (I can't tell.)
    Background work and exporting are helpful, but not as critical for my primary daily use. I know the CPU is also used for demosaicing or at least some subset of the render pipeline, because I have two computers that demonstrate vastly different render-from-raw response times with the same graphics card. Indeed, it is this lag that would be the most valuable of all for me to reduce. I want to be able to flip through a large shoot and see each image at 100% as instantaneously as possible. On my 2.8 i7 that process takes about 1 second on average (when Aperture doesn't get confused and mysteriously stop rendering 100% images).
    Ben

  • Performance with Boot Camp/Gaming?

    Hi,
    I just acquired a MBP/2GHz IntelCD/2GB RAM/100GB/Superdrive, with AppleCare. Can anyone comment about the performance with Boot Camp -- running Windows XP SP2, and what the gaming graphics are like?
    Appreciate it, thanks...
    J.
    Powerbook G4 [15" Titanium - DVI] Mac OS X (10.4.8) 667MHz; 1GB RAM; 80GB

    Well, I didn't forget to mention what I did not know yet... so that's not exactly correct.
    As per Apple's support page, http://support.apple.com/specs/macbookpro/MacBook_Pro.html
    My new computer does have 256MB of video memory...

  • Performance with external display

    Hello,
    when I'm connecting my 19'' TFT to the MacBook, the performance (with the same applications running) is really bad. It takes longer to switch between apps, and if I don't use an app for some time, it can take up to 30 sec to "reactivate" it.
    Because the HD is working when I switch apps, it looks like the OS is swapping. My question: would it help to upgrade the MacBook to 2GB RAM? AFAIK the Intel card uses shared memory.
    Thanks for your help
    Till
    MacBook 1.83 GHz/1GB   Mac OS X (10.4.8)  

    How much RAM do you have? Remember that the MB does not have dedicated VRAM like some computers do and that it uses the system RAM to drive the graphics chipset.
    I use my MB with the mini-DVI to DVI adapter to drive a 20" widescreen monitor without any of the problems that you describe, but I have 2GB of RAM. If you only have the stock 512MB of RAM, that may be part of what you are seeing.
