How to reduce time for replicating large tables?

Hi
Any suggestions on how to reduce the amount of time it takes to replicate a large table when it is first created?
I have a table with 150 million rows, and the initial replication takes forever even when I run it in parallel, and I can’t afford the downtime.

What downtime are you referring to? The primary doesn't need to be down while you're setting up replication, and since you're presumably still doing the initial configuration on the replicated database, that one isn't really down either; it's just not up yet.
Justin

Similar Messages

  • How to reduce the time for gathering statistics on a table

    I have a table of size 520 GB.
    One of its partitions is 38 GB,
    and the total size of the table's indexes is 412 GB.
    Server/instance details.
    ==========
    56 CPUs, hyper-threading enabled
    280 GB RAM
    35 GB SGA
    27 GB buffer cache
    4.5 GB shared pool
    25 GB PGA
    90 GB undo
    150 GB temp
    Details :
    exec dbms_stats.gather_table_stats('OWNER','TAB_NAME',PARTNAME=>'PART_NAME',CASCADE=>FALSE,ESTIMATE_PERCENT=>10,DEGREE=>30,NO_INVALIDATE=>TRUE);
    Even when I run this at an idle time, when there is no load, it still takes 28 minutes to complete.
    Can anybody please tell me how we can reduce the stats gathering time?
    Thanks in advance,
    Tapas Karmakar
    Oracle DBA.

    Enable tracing to see where the time is going.
    parallel 30 seems optimistic - unless you have a large number of discs to support the I/O ?
    you haven't limited histogram collection, and most of the time spent on histograms may be wasted - which histograms do you really need, and how many does Oracle analyse and then discard ?
    Using a block sample may help slightly
    You haven't limited the granularity of the stats collection to the partition - the default is partition plus table, so I think you're also doing a massive sample on the table after completing the partition. Is this what you want to do, or do you have an alternative strategy for generating table-level stats?
    Regards
    Jonathan Lewis
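
    Putting those suggestions together, a minimal sketch (reusing the owner, table and partition names from the original command; the parameter values are illustrative only, not a recommendation):

    BEGIN
      dbms_stats.gather_table_stats(
        ownname          => 'OWNER',
        tabname          => 'TAB_NAME',
        partname         => 'PART_NAME',
        granularity      => 'PARTITION',               -- partition stats only, skip the table-level pass
        method_opt       => 'FOR ALL COLUMNS SIZE 1',  -- no histograms unless you know you need them
        estimate_percent => 10,
        block_sample     => TRUE,                      -- block sampling, as suggested above
        cascade          => FALSE,
        degree           => 8,                         -- scale the degree to the I/O you actually have
        no_invalidate    => TRUE);
    END;
    /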

  • How to create an index for a large telecom table

    Hi ,
    I'm working on DB 10g on RHEL 5 for a telecom company with more than 1 million records per day, and we need to speed up the query results.
    We know there are many types of index, and I need professional advice on creating a suitable one.
    Many of our queries depend on the MSID (the MAC address of the modem) column:
    Name           Null Type        
    STREAMNUMBER        NUMBER(9)   
    MSID                VARCHAR2(20)
    USERNAME            VARCHAR2(20)
    DOMAIN              VARCHAR2(20)
    USERIP              VARCHAR2(16)
    CORRELATION_ID      VARCHAR2(64)
    ACCOUNTREASON       NUMBER(3)   
    STARTTIME           VARCHAR2(14)
    PRIORTIME           VARCHAR2(14)
    CURTIME             VARCHAR2(14)
    SESSIONTIME         NUMBER(9)   
    SESSIONVOLUME       NUMBER(9)   
    Please, any help is appreciated.

    Really, I have 3 queries for subscriber activity (usage details, the bundle start date and the total download, and whether the subscriber is working outside the bundle or not),
    and any subscriber can check those queries at any time through the web.
    select nvl(min(substr(a.starttime,1,8)),0) Service_Start_Time, nvl(sum(a.sessionvolume),0) Total_Traffic_KB
    FROM aaa_bill a
    where msid='84A8E46E929D'
    and starttime >=(select  max(fee) FROM aaa_bill
    where msid='84A8E46E929D' and accountreason=5);
    and the expected result is
    service_start_date  total_traffic_KB
    20120225            440554
    The MSID examples:
    (84A8E46E7F43,
    84A8E46E7A82,
    84A8E46E7C84,
    84A8E46E7CBF,
    I also have this query:
    select
    substr(nvl(
    (select nvl(starttime,'0') as starttime
    from (
    select nvl(starttime,0) starttime,sum(sessionvolume) over(partition by msid order by starttime) sum1
    from aaa_bill
    where msid='84A8E46E90BC' and starttime >=(select  max(fee) FROM aaa_bill
    where msid='84A8E46E90BC' and accountreason=5))
    where sum1>=44987000
    and rownum<2)
    ,0),1,8) Reached_45GB
    from dual;
    and this one:
    select min(to_char(to_date(starttime,'yyyymmddhh24miss'),'yyyy-mm-dd hh24:mi:ss')) "Accounting Start Time",
    max(to_char(to_date(curtime,'yyyymmddhh24miss'),'yyyy-mm-dd hh24:mi:ss')) "Accounting Stop Time",sum(sessiontime) Duration1,
    TO_CHAR (TRUNC (SYSDATE) + NUMTODSINTERVAL (sum(sessiontime), 'second'),'hh24:mi:ss') hr,
    sum(sessionvolume) Traffic
    from aaa_bill
    where msid='84A8E46E78EF'
    and starttime >=(select  max(fee) FROM aaa_bill
    where msid='84A8E46E78EF' and accountreason=5)
    group by correlation_id
    order by min(starttime);
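
    Given that all three queries filter on MSID with equality (and also reference ACCOUNTREASON and STARTTIME), one possible starting point - purely a sketch, assuming the table is AAA_BILL as shown above - is a composite B-tree index:

    -- Sketch only: composite index covering the common predicates above;
    -- msid comes first because it is always supplied with an equality condition.
    CREATE INDEX aaa_bill_msid_idx
      ON aaa_bill (msid, accountreason, starttime);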

  • In a SQL query which has a join, how to reduce multiple instances of a table

    In a SQL query which has a join, how can I reduce multiple instances of a table?
    Here is an example: I am using Oracle 9i
    Is there a way to reduce the number of Person instances in the following query, or can I optimize this query further?
    TABLES:
    mail_table
    mail_id, from_person_id, to_person_id, cc_person_id, subject, body
    person_table
    person_id, name, email
    QUERY:
    SELECT p_from.name from_name, p_to.name to_name, p_cc.name cc_name, subject
    FROM mail, person p_from, person p_to, person p_cc
    WHERE from_person_id = p_from.person_id
    AND to_person_id = p_to.person_id
    AND cc_person_id = p_cc.person_id
    Thanks in advance,
    Babu.

    SQL> select * from mail;
            ID          F          T         CC
             1          1          2          3
    SQL> select * from person;
           PID NAME
             1 a
             2 b
             3 c
    --Query with only one instance of the PERSON table
    SQL> select m.id,max(decode(m.f,p.pid,p.name)) frm_name,
      2         max(decode(m.t,p.pid,p.name)) to_name,
      3         max(decode(m.cc,p.pid,p.name)) cc_name
      4  from mail m,person p
      5  where m.f = p.pid
      6  or m.t = p.pid
      7  or m.cc = p.pid
      8  group by m.id;
            ID FRM_NAME   TO_NAME    CC_NAME
             1 a          b          c
    --Explain plan for "One instance" query
    SQL> explain plan for
      2  select m.id,max(decode(m.f,p.pid,p.name)) frm_name,
      3         max(decode(m.t,p.pid,p.name)) to_name,
      4         max(decode(m.cc,p.pid,p.name)) cc_name
      5  from mail m,person p
      6  where m.f = p.pid
      7  or m.t = p.pid
      8  or m.cc = p.pid
      9  group by m.id;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 902563036
    | Id  | Operation           | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT    |        |     3 |   216 |     7  (15)| 00:00:01 |
    |   1 |  HASH GROUP BY      |        |     3 |   216 |     7  (15)| 00:00:01 |
    |   2 |   NESTED LOOPS      |        |     3 |   216 |     6   (0)| 00:00:01 |
    |   3 |    TABLE ACCESS FULL| MAIL   |     1 |    52 |     3   (0)| 00:00:01 |
    |*  4 |    TABLE ACCESS FULL| PERSON |     3 |    60 |     3   (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
       4 - filter("M"."F"="P"."PID" OR "M"."T"="P"."PID" OR
                  "M"."CC"="P"."PID")
    Note
       - dynamic sampling used for this statement
    --Explain plan for "Normal" query
    SQL> explain plan for
      2  select m.id,pf.name fname,pt.name tname,pcc.name ccname
      3  from mail m,person pf,person pt,person pcc
      4  where m.f = pf.pid
      5  and m.t = pt.pid
      6  and m.cc = pcc.pid;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 4145845855
    | Id  | Operation            | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |        |     1 |   112 |    14  (15)| 00:00:01 |
    |*  1 |  HASH JOIN           |        |     1 |   112 |    14  (15)| 00:00:01 |
    |*  2 |   HASH JOIN          |        |     1 |    92 |    10  (10)| 00:00:01 |
    |*  3 |    HASH JOIN         |        |     1 |    72 |     7  (15)| 00:00:01 |
    |   4 |     TABLE ACCESS FULL| MAIL   |     1 |    52 |     3   (0)| 00:00:01 |
    |   5 |     TABLE ACCESS FULL| PERSON |     3 |    60 |     3   (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    |   6 |    TABLE ACCESS FULL | PERSON |     3 |    60 |     3   (0)| 00:00:01 |
    |   7 |   TABLE ACCESS FULL  | PERSON |     3 |    60 |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("M"."CC"="PCC"."PID")
       2 - access("M"."T"="PT"."PID")
       3 - access("M"."F"="PF"."PID")
    PLAN_TABLE_OUTPUT
    Note
       - dynamic sampling used for this statement
    25 rows selected.
    Message was edited by:
            jeneesh
    No indexes created...

  • Down time for 2LIS_03_BX setup table

    Hello SDN,
    I have a data reconciliation issue in my BW server for inventory management, for data sources 2LIS_03_BX and 2LIS_03_BF. For this I have to refill the setup table of 2LIS_03_BX on the R3 server. For filling the setup table on the R3 system I need to ask the client for downtime.
    I need to know how to calculate the required downtime for filling the setup table for data source 2LIS_03_BX.

    Dear Pravender,
    I understand your statement (" For the complete load, check in how much time you can do initialization? That much down time only you need, later you can fill setup tables for history data and load.") to mean the following steps:
    1. In the down time, start initialization without data transfer info package in BIW for 2LIS_03_BF and 2LIS_03_UM.
    2. Then during up time (after releasing down time, transactional data posting allowed in R3), fill setup table for 2LIS_03_BX.
    3. Run full upload info package for 2LIS_03_BX
    4. Start Delta info package for data sources 2LIS_03_BF and 2LIS_03_UM
    Let me know if I am right about the above procedure. These steps will allow us to keep the downtime very short.
    Thanks for the reply
    Regards,
    Jaydeep
    Edited by: Jaydeepsinh Rathore on Sep 4, 2009 8:23 AM

  • How to find Transaction for a Known Table maintenance View

    Hello Friends,
    May I know how to find the transaction for a known table maintenance view?
    Thanks,
    Best Regards,
    Sudhanshu Garg

    Go to transaction SE16 and enter table TSTCP.
    Query with PARAM = /SM30 VIEWNAME=Table name*;UPDATE=X;
    entering your table name in place of Table name.
    Thanks
    Seshu
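
    For illustration only (ZMYTABLE is a hypothetical table name), the same lookup expressed as a query against TSTCP:

    -- Sketch: find the parameter transaction that calls SM30 for a given maintenance view
    SELECT tcode
      FROM tstcp
     WHERE param LIKE '/SM30 VIEWNAME=ZMYTABLE%UPDATE=X%';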

  • How to give comments for a particular table

    Hi,
    How do I give comments for a particular table?
    select * from user_tab_comments;
    Thanks in advance.

    Try this.
    SQL> COMMENT ON TABLE EMP IS 'THIS IS SAMPLE EMPLOYEE TABLE' ;
    Comment created.
    SQL> select * from user_tab_comments where table_name = 'EMP'
      2  /
    TABLE_NAME                     TABLE_TYPE
    COMMENTS
    EMP                            TABLE
    THIS IS SAMPLE EMPLOYEE TABLE
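
    As a related sketch (the column name is just an example), comments can also be added at column level and read back from USER_COL_COMMENTS:

    COMMENT ON COLUMN emp.empno IS 'Employee number';
    SELECT column_name, comments
      FROM user_col_comments
     WHERE table_name = 'EMP';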

  • How to configure ActiveSync for a database table in IdM 7.0

    Hi All,
    Please suggest me the steps to configure ActiveSync in IdM 7.0.
    When I try it via the resource --> ActiveSync wizard it gives:
    "The ActiveSync Wizard has been deprecated in Identity Manager 7.0 in favor of using MetaView and the resource action "Edit Synchronization Policy". "
    How do I configure ActiveSync for a database table?
    Thanx
    Shant

    Hi,
    You need to write a script and run it at the OS level. Here is an example:
    emcli relocate_targets -src_agent=agentmachine1.domain:3872
    -dest_agent=agentmachine2.domain:3872 -target_name=RACDB
    -target_type=oracle_database -copy_from_src -force=yes
    -changed_param=MachineName:agentmachine2.domain
    Regards
    Jomon
    Edited by: JohnJomon on Nov 17, 2011 2:27 PM

  • How to create IDOC for customer defined table

    hi,
    How do I create an IDOC for customer-defined table records, and how do I send this IDOC to the target system?
    What message type will be used, and how do I post these records on the receiving system?
    Thanks.
    pillac.

    Hi,
    You need to create a custom message type and a custom IDOC type for this, with whatever fields you want to send. You need to create segments (WE31), the IDOC type (WE30), the message type (WE81) and assign the message type to the IDOC type (WE82).
    You will have to trigger the IDOC using a report or something similar after doing the partner profile settings.
    Similarly, in the target system you will also have to do all the settings.
    Take a look the links to find out what settings needs to be done.
    http://help.sap.com//saphelp_470/helpdata/EN/0b/2a611c507d11d18ee90000e8366fc2/frameset.htm
    http://www.sappro.com/downloads/OneClientDistribution.pdf
    Regards,
    Ravi
    Note : Please mark the helpful answers and close the thread if the issue is resolved.

  • Reduce Time for Rman Backup

    Dear Experts;
    The RMAN level 0 backup is taking about 5:26 hours, and the backup size is now 312 GB. I have enabled block change tracking, and it reduced the time for the incremental level 1 backup from 2 hours to almost 3 minutes.
    The database shows the biggest tablespace is "users".
    I want suggestions for reducing the time, or is there any way to break up the level 0 backup? I can allocate channels, but ultimately it will take time when backing up the "users" tablespace.
    Right now I am taking the backup to a USB drive, and it is USB 2.0.
    Regards

    As you are taking the backup to a USB drive, there is not much that can be done to improve the speed. If you are concerned about the backup being slow, you could think about taking the backup on local disk (which would be faster and more efficient) and then moving the backups from the disk to the USB drive.
    This can be done in a single backup script as a 2-part operation:
    1) Take the backup to disk.
    2) Copy the backup to the USB drive and delete the backups from the disk.
    There are many additional features that you can add to enhance it, though.
    Regards,
    V
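
    A minimal sketch of the first part (the staging path and channel count are placeholders, not recommendations):

    RUN {
      ALLOCATE CHANNEL d1 DEVICE TYPE DISK FORMAT '/backup/stage/%U';
      ALLOCATE CHANNEL d2 DEVICE TYPE DISK FORMAT '/backup/stage/%U';
      -- compressed level 0 to fast local disk; copy to the USB drive afterwards at OS level
      BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;
    }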

  • How to reduce downtime for setup table

    Scenario - According to system data, the setup table will normally take 5 days to fill, but the client has agreed to a maximum of only 2 days' downtime. Users can change only the last 3 months' documents, not older ones. Filling 3 months of data into the setup table requires 1 day, so I have to manage the options accordingly.
    Datasource - 2LIS_13_VDITM -> DSO - ZBIllIG -> InfoCube
    I have to reduce the downtime for the setup table, so I am planning the following options -
    1.     First run the InfoPackage for initialization without data transfer. Then start filling the setup table without blocking the users. If users change any documents while the setup table is being filled, those changes will move to the delta queue. Once the setup table is filled, execute a full repair request and then the delta InfoPackage.
    2.     Early delta initialization - I have no idea how to perform the steps.
    Please share your views with detailed steps.
    OLI*BW doesn't have any date range in the selection criteria, so I will manually find the documents for particular dates and use that document range.
    I have checked a lot of posts on SDN but am still expecting a final answer before going ahead in production.

    Hi,
    Your requirement is Billing ODS and Cube - re-setup in the R/3 system and initialization in the BW system.
    Before starting, find the previous data load volume and size.
    1. Go to LBWG, application value = 13 (always schedule the job in background mode).
    2. Verify using transaction SE16 that there are NO records in the MC13VD0ITMSETUP table after the above delete job is complete.
    3. Suspend the process chain job in BW. This is to avoid it getting kicked off while the reload process is still in progress.
    4. Check LBWQ in the R/3 system for MCEX13, the unprocessed outbound queue (records). This should be empty, as the last delta would have processed everything.
    5. Delete the init flag in BW.
    6. Check RSA7 in the R/3 system to verify that there is NO record for 2LIS_13_VDITM (to be done right before the setup job).
    7. Create a new InfoPackage for InfoSource 2LIS_13_VDITM with the 'Initialize without Data Transfer' option and execute the package. Re-establish the delta processing flags in R/3 and BW for the billing transaction data load.
    8. Save the record count of table VBRP using SE16 right before the setup job.
    9. Schedule the billing data setup job (OLI9BW) in the R/3 system.
    10. After the billing setup job is complete in the R/3 system, get the record count of table VBRP again using SE16.
    Expected time in R/3: 5 to 7 hrs (setup jobs)
    Expected time for init and full load: 6 hrs
    ODS activation: 3 hrs
    Cube and aggregate fills: 8 hrs
    Thanks,
    naidu.

  • How to decrease the time for a bulk insert

    Hi, we have the following process:
    1) open a cursor
    2) insert into a table t1 according to the rows selected in the cursor
    3) a BEFORE INSERT trigger on t1 creates other rows in t2, t3...
    This process creates about 300,000 rows and takes nearly 3 hours... my question is, how can we reduce this time?

    Let's imagine I want to INSERT 10 rows into the following table, setting a sequential product ID for each record and a quantity of zero:
    CREATE TABLE sales (product_id NUMBER(10),
    quantity NUMBER(5)) ;
    Traditionally, I might write something like this (O.K., I know I could use a single INSERT statement for this, but let's assume you want to do some additional calculation inside the loop) :
    BEGIN
      FOR i IN 1..10 LOOP
        INSERT INTO sales (product_id, quantity)
        VALUES (i, 0);
      END LOOP;
      COMMIT;
    END;
    All well and good - except each INSERT causes the PL/SQL engine to request that the SQL engine insert 1 row. This is an overhead that can be reduced by using BULK BIND coding (I mixed my terms previously - a BULK COLLECT allows multiple rows to be retrieved into PL/SQL collections; it uses the same performance-enhancing technique, but in reverse):
    DECLARE
      TYPE t_sales IS TABLE OF sales.product_id%TYPE INDEX BY BINARY_INTEGER;
      a_sales t_sales;
    BEGIN
      FOR i IN 1..10 LOOP
        a_sales(i) := i;
      END LOOP;
      FORALL j IN a_sales.FIRST..a_sales.LAST
        INSERT INTO sales (product_id, quantity)
        VALUES (a_sales(j), 0);
      COMMIT;
    END;
    The INSERT above is now passed to the SQL engine as a single request, which gives a significant performance improvement for large numbers of rows. Just remember that all those rows need to go somewhere in memory prior to the INSERT - namely the PGA for your session.
    There are a few limitations to this method, the most significant being the fact that the PL/SQL table must be "simple" - i.e. no tables or records.
    (Oracle Moderators: Please don't ask "Why is Andy writing such a pointless piece of code?" - it's just to demonstrate the technique!)
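
    For the original poster's cursor-driven case, a minimal sketch combining both directions (STAGING_SALES is a hypothetical source table; the LIMIT value is illustrative):

    DECLARE
      CURSOR c_src IS SELECT product_id, quantity FROM staging_sales;
      TYPE t_ids  IS TABLE OF sales.product_id%TYPE INDEX BY BINARY_INTEGER;
      TYPE t_qtys IS TABLE OF sales.quantity%TYPE INDEX BY BINARY_INTEGER;
      l_ids  t_ids;
      l_qtys t_qtys;
    BEGIN
      OPEN c_src;
      LOOP
        -- fetch the driving cursor in batches so PGA use stays bounded
        FETCH c_src BULK COLLECT INTO l_ids, l_qtys LIMIT 1000;
        EXIT WHEN l_ids.COUNT = 0;
        -- one SQL engine round trip per batch instead of one per row
        FORALL i IN 1 .. l_ids.COUNT
          INSERT INTO sales (product_id, quantity)
          VALUES (l_ids(i), l_qtys(i));
      END LOOP;
      CLOSE c_src;
      COMMIT;
    END;
    /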

  • How to reduce the size of System tables(RS*) in SAP BW?

    Hi All,
    We need to reduce the size of the system tables (RS*) in an SAP BW system without impacting anything in the system.
    Could you please let us know whether there is any global program/function module to do this.
    If not, if you know of any individual program or other way to reduce the system table size, it would be very useful.
    Sample system tables (RS*) are given below.
    RSHIENODETMP
    RSBERRORLOG
    RSHIENODETMP~0
    RSBMNODES
    RSBKDATA
    RSBMNODES~001
    RSRWBSTORE
    RSBMLOGPAR
    RSBERRORLOG~0

    Sudhakar,
    There are tables you can archive / clean up and then there are tables you cannot do anything about. For example - if your system has a million queries - the RSRREPDIR , RZCOMPDIR tables will be large.
    The tables that typically get archived are :
    1. BALDAT / BALHDR - application log tables
    2. Monitor tables - search for Request archiving which will tell you how to archive the same
    The other tables -
    First you would have to understand why they are large in the first place ... if you have too many hierarchies - then some tables can be huge - delete some of the hierarchies you do not need and the table sizes should come down.
    RSRWBSTORE - this is the internal store for workbooks - this will have the last executed version of the workbook stored in the table. This information is called when the workbook is executed without refreshing the variables - which is why you get the workbook output first and then get prompted to refresh the variables.

  • Best practice for replicating Partitioned table

    Hi SQL Gurus,
    Requesting your help on the design consideration for replicating a partitioned table.
    1. 4 Partitioned tables (1 master table with foreign key constraints to 3 tables) partitioned based on monthly YYYYMM
    2. 1 table has a XML column in it
    3. Monthly partition switch to remove old data; since it has foreign key constraints, these are disabled until the switch is complete
    4. 1 month partitioned data is 60 GB
    having said the above, wanted to create a copy of the same tables to a different servers.
    I can think of
    1. Transactional replication, but I am worried about the XML column and the snapshot size, and whether the ALTER ... SWITCH will do the same thing
    on the subscriber or turn into a row-by-row delete.
    2. Log shipping with standby every 15 minutes, but that would be for the entire database, and I have another monthly partitioned table worth about 250 GB.
    3. Replicating the partitioned table as non-partitioned; in that case, how would the ALTER ... SWITCH work? Is it possible to ignore deletes when setting up the replication?
    4. An SSIS or stored procedure method of moving data on a daily basis.
    5. Backup and restore on a daily basis, but this will not work once the source partition is removed.
    Ganesh

    Please refer to
    http://msdn.microsoft.com/en-us/library/cc280940.aspx

  • How to improve Query performance on large table in MS SQL Server 2008 R2

    I have a table with 20 million records. What is the best option to improve query performance on this table? Is partitioning the table into filegroups the best option, or splitting the table into multiple smaller tables?

    Hi bala197164,
    First, I want to point out that both partitioning the table across filegroups and splitting the table into multiple smaller tables can improve query performance, and they suit different situations. For example, suppose our table has one hundred columns and some columns are not directly related to the table's main object (say a table named userinfo stores user information and has address_street, address_zip and address_province columns; we can create a new table named Address and add a foreign key in the userinfo table referencing the Address table). In that situation, by splitting the large table into smaller, individual tables, queries that access only a fraction of the data can run faster because there is less data to scan. Another situation is when the table's records can be grouped easily, for example a column named year that stores the product release date; in that case we can partition the table across filegroups to improve query performance. Usually we apply both methods together. Additionally, we can add indexes to the table to improve query performance, as sketched after this reply. For more detailed information, please refer to the following documents:
    Partitioning:
    http://msdn.microsoft.com/en-us/library/ms178148.aspx
    CREATE INDEX (Transact-SQL):
    http://msdn.microsoft.com/en-us/library/ms188783.aspx
    Allen Li
    TechNet Community Support
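
    As sketched here (table and column names are hypothetical, purely for illustration): partitioning a table by year across filegroups, plus a supporting nonclustered index.

    CREATE PARTITION FUNCTION pf_sales_year (int)
        AS RANGE RIGHT FOR VALUES (2010, 2011, 2012);

    CREATE PARTITION SCHEME ps_sales_year
        AS PARTITION pf_sales_year ALL TO ([PRIMARY]);  -- or map each range to its own filegroup

    CREATE TABLE dbo.sales
    (
        sale_id     int           NOT NULL,
        sale_year   int           NOT NULL,
        customer_id int           NOT NULL,
        amount      decimal(10,2) NOT NULL
    ) ON ps_sales_year (sale_year);

    -- index on the column the queries actually filter by
    CREATE NONCLUSTERED INDEX IX_sales_customer
        ON dbo.sales (customer_id) INCLUDE (amount);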
