Aggregation growing more than expected

Hi All,
I have loaded the cube with data and it takes up 37 GB. I ran aggregation on the cube. See following statement.
execute aggregate process on database "APP123"."DATA123" stopping when total_size exceeds 1.5;
I expected the cube to grow to about 55 GB (considering the aggregation factor of 1.5), but it grew to 117 GB.
My understanding is that since I set the factor at 1.5, the cube should aggregate until it reaches about 55 GB and then stop. Any idea why it's growing to 117 GB? Any help/pointers are appreciated.
Thanks

I'm also seeing this behavior in several cubes I've created. I have an SR open and have spent a significant amount of time conversing with the developers. The problem is the estimation algorithm: Essbase needs an estimate of each view's size to determine what will fit inside the space indicated by your stop value, but for sufficiently large cubes with a skewed data distribution (certain outline members associated with much more data than others), the estimate Essbase comes up with can be wildly off.
In practice, one or more of the views Essbase builds turns out to be a significant percentage of the overall level-0 data size.
The only workaround is to look at the generated views and manually remove the ones that are too large from the .csc file. There is no way to see what the size of an aggregate view actually is (even after the build, querying for the size still returns an estimate), so you'll need to build the aggregates one at a time to figure out which ones are too big.
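For reference, the selection-then-build flow that makes this workaround possible can be sketched in MaxL. This is a sketch from memory of the MaxL aggregate-statement grammar, not a tested script; 'agg' is a placeholder view-file name (stored on disk as agg.csc):

```
/* write the selected views to agg.csc instead of building them */
execute aggregate selection on database "APP123"."DATA123"
    stopping when total_size exceeds 1.5
    dumping to view_file 'agg';

/* after editing agg.csc to drop the oversized views: */
execute aggregate build on database "APP123"."DATA123"
    using view_file 'agg';
```

Keeping a single view in the file per build is one way to do the one-at-a-time sizing.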
Thanks,
Michael

Similar Messages

  • Backup size growing more than expected

    I have about 180GB of data on my system and a 500GB volume dedicated to Time Machine. For the past few weeks I've been scanning a lot of old photos, which go into a folder before being imported into iPhoto, and are then deleted from the folder. As expected, the free space on the Time Machine volume steadily decreased as the photos were backed up from the folder and the iPhoto library.
    When I started running out of space on the Time Machine volume I deleted all backups of the temp folder, but I only got a fraction of the expected space back. I know I can exclude the above folder from backups if I want, but how do I tell what is taking up the additional space?

    There's been some confusion while talking about iPhoto because prior to version '08 the library wasn't a package, it was just a folder. Everyone who has indicated that TimeMachine is only backing up new photos has turned out not to be using the newest version. I'll be honest, I've not taken a look at how iPhoto '08 behaves with Time Machine - I'll put that on my list of things to do - my long list of things to do.
    The link you posted makes out that TimeMachine just backs up the changes that occur in a package, not the entire package. It is possible that the engineers have specifically hard coded special treatment for iPhoto, but it makes no sense for TM to just back up changes to packages. That would make rewinding to previous applications, among other things, more difficult than it need be.

  • Backup VHD size is much more than expected and other backup related questions.

    Hello, I have a few Windows 2008 servers and I have scheduled a weekly backup (Windows Backup) which runs on Saturday.
    I recently noticed that the actual size of the data drive is only 30 GB, but the weekly backup creates a VHD of 65 GB. This is not happening on all servers, but on most of them. Why is that, and how can I get the correct VHD size? A 65 GB VHD doesn't make
    sense for 30 GB of data.
    2. If at any moment I have to restore the entire Active Directory on Windows 2008 R2, is the process the same as on Windows 2003 (going into DSRM mode, restoring the backup, authoritative restore), or is there any difference?
    3. I also noticed that if I have a backup VHD of one server (let's say A) and I copy that backup to another server (let's say B), then Windows 2008 only gives me the option to attach the VHD on server B. Is there any way to restore the
    data from the VHD through the backup utility? Currently I am copying and pasting from the VHD to server B's data drive, but that is not the correct way of doing it. Is it a limitation of Windows 2008?
    Senior System Engineer.

    Hi,
    If a large number of files have been deleted on the data drive, the backup image can contain large holes. You can compact the VHD image using the diskpart command to recover the correct VHD size.
    For more detailed information, please refer to the thread below:
    My Exchange backup is bigger than the used space on the Exchange drive.
    https://social.technet.microsoft.com/Forums/windowsserver/en-US/3ccdcb6c-e26a-4577-ad4b-f31f9cef5dd7/my-exchange-backup-is-bigger-than-the-used-space-on-the-exchange-drive?forum=windowsbackup
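As a sketch, the compaction would be done from an elevated diskpart session while the VHD is not in use (the path below is a placeholder):

```
diskpart
select vdisk file="D:\Backups\ServerA\data.vhd"
attach vdisk readonly
compact vdisk
detach vdisk
exit
```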
    For the second question, the answer is yes. If you want to restore the entire Active Directory on Windows 2008 R2, you need to start the domain controller in Directory Services Restore Mode to perform a nonauthoritative restore from backup.
    Performing Nonauthoritative Restore of Active Directory Domain Services
    http://technet.microsoft.com/en-us/library/cc816627(v=ws.10).aspx
    If you want to restore a backup to server B which is created on server A, you need to copy the WindowsImageBackup folder to server B.
    For more detailed information, please see:
    Restore After Copying the WindowsImageBackup Folder
    http://standalonelabs.wordpress.com/2011/05/30/restore-after-copying-the-windowsimagebackup-folder/
    Best Regards,
    Mandy
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]

  • Copy to Clipboard Method (Table) copies one additional empty column more than expected

    Hi,
    I'm using the Copy to Clipboard method for a Table to copy, for example, 4 rows with 3 columns. When I paste it into Excel I get 4 rows with 3 columns plus an extra empty column, so the real size is then 4x4.
    Is this a LabVIEW error, or can someone explain why this is happening? Or even better, how can I fix it?
    I have isolated the problem to an extra vi so you can reproduce the error. Just let the vi run once and then paste the clipboard to Microsoft Excel.
    My Labview Version is 11.0 32 Bit, Microsoft Office 2010, WinXP SP3
    Regards
    Marcel
    Attachments:
    LabVIEW2011_Tablebug.vi ‏11 KB

    Snippets apparently hate property and invoke nodes.
    See attached vi for proposed workaround using the Clipboard.Write method.
    Attachments:
    LabVIEW2011_Tablebug mod.vi ‏13 KB

  • RAC Dataguard Switchover timing taking more than expected time

    I have a Data Guard setup in a RAC environment; Data Guard is configured and working fine.
    Our goal is to do the switchover using DGMGRL within 5 minutes. We have followed the proper setup and the MAA tuning document and everything is working fine; it's just that the switchover timing is 8 to 10 minutes, which varies depending on some parameters but does not meet our goal of less than 5 minutes.
    The only observation we have made is as follows.
    After the switchover to <db_name> command in DGMGRL:
    1) it shuts down (abort) the 2nd instance
    2) it transfers all the archive logs (using LGWR in ASYNC mode) of instance 1
    3) it then looks for the archive logs of the 2nd instance; this step takes up to 4 minutes
    we do not know why it takes that much time or how to tune it
    4) it converts the primary to a standby
    5) it starts the old standby as the new primary
    All steps here are tuned except step 3, which is where a lot of our time is going. Any idea or explanation
    why it takes such a long time to find the exact archive logs of the 2nd (aborted) instance to transfer to the standby site?
    Can anyone give an explanation or a solution to tune this?
    Regards
    Bhushan

    Hi Robert,
    I am on 10.2.0.4 and we have used "MAA_WP_10gR2_DataGuardNetworkBestPractices.pdf", which is available on the Oracle site.
    Here are my configuration details:
    DGMGRL> connect sys@dv01aix
    Password:
    Connected.
    DGMGRL> show configuration;
    Configuration
    Name: dv00aix_dg
    Enabled: YES
    Protection Mode: MaxPerformance
    Fast-Start Failover: DISABLED
    Databases:
    dv00aix - Physical standby database
    dv01aix - Primary database
    Current status for "dv00aix_dg":
    SUCCESS
    DGMGRL> show database verbose dv00aix
    Database
    Name: dv00aix
    Role: PHYSICAL STANDBY
    Enabled: YES
    Intended State: ONLINE
    Instance(s):
    dv00aix1 (apply instance)
    dv00aix2
    Properties:
    InitialConnectIdentifier = 'dv00aix'
    ObserverConnectIdentifier = ''
    LogXptMode = 'ASYNC'
    Dependency = ''
    DelayMins = '0'
    Binding = 'OPTIONAL'
    MaxFailure = '0'
    MaxConnections = '4'
    ReopenSecs = '300'
    NetTimeout = '60'
    LogShipping = 'ON'
    PreferredApplyInstance = 'dv00aix1'
    ApplyInstanceTimeout = '0'
    ApplyParallel = 'AUTO'
    StandbyFileManagement = 'AUTO'
    ArchiveLagTarget = '900'
    LogArchiveMaxProcesses = '5'
    LogArchiveMinSucceedDest = '1'
    DbFileNameConvert = ''
    LogFileNameConvert = '+SPARE1/dv01aix/,+SPARE/dv00aix/'
    FastStartFailoverTarget = ''
    StatusReport = '(monitor)'
    InconsistentProperties = '(monitor)'
    InconsistentLogXptProps = '(monitor)'
    SendQEntries = '(monitor)'
    LogXptStatus = '(monitor)'
    RecvQEntries = '(monitor)'
    HostName(*)
    SidName(*)
    LocalListenerAddress(*)
    StandbyArchiveLocation(*)
    AlternateLocation(*)
    LogArchiveTrace(*)
    LogArchiveFormat(*)
    LatestLog(*)
    TopWaitEvents(*)
    (*) - Please check specific instance for the property value
    Current status for "dv00aix":
    SUCCESS
    DGMGRL> show database verbose dv01aix
    Database
    Name: dv01aix
    Role: PRIMARY
    Enabled: YES
    Intended State: ONLINE
    Instance(s):
    dv01aix1
    dv01aix2
    Properties:
    InitialConnectIdentifier = 'dv01aix'
    ObserverConnectIdentifier = ''
    LogXptMode = 'ASYNC'
    Dependency = ''
    DelayMins = '0'
    Binding = 'OPTIONAL'
    MaxFailure = '0'
    MaxConnections = '4'
    ReopenSecs = '300'
    NetTimeout = '60'
    LogShipping = 'ON'
    PreferredApplyInstance = 'dv01aix1'
    ApplyInstanceTimeout = '0'
    ApplyParallel = 'AUTO'
    StandbyFileManagement = 'AUTO'
    ArchiveLagTarget = '0'
    LogArchiveMaxProcesses = '2'
    LogArchiveMinSucceedDest = '1'
    DbFileNameConvert = '+SPARE/dv00aix/, +SPARE1/dv01aix/'
    LogFileNameConvert = '+SPARE/dv00aix/,+SPARE1/dv01aix/'
    FastStartFailoverTarget = ''
    StatusReport = '(monitor)'
    InconsistentProperties = '(monitor)'
    InconsistentLogXptProps = '(monitor)'
    SendQEntries = '(monitor)'
    LogXptStatus = '(monitor)'
    RecvQEntries = '(monitor)'
    HostName(*)
    SidName(*)
    LocalListenerAddress(*)
    StandbyArchiveLocation(*)
    AlternateLocation(*)
    LogArchiveTrace(*)
    LogArchiveFormat(*)
    LatestLog(*)
    TopWaitEvents(*)
    (*) - Please check specific instance for the property value
    Current status for "dv01aix":
    SUCCESS
    DGMGRL>
    NAME TYPE VALUE
    log_archive_dest_2 string service="(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=*****-vip0)(PORT=1527)))(CONNECT_DATA=(SERVICE_NAME=dv00aix_XPT)(INSTANCE_NAME=dv00aix1)(SERVER=dedicated)))", LGWR ASYNC NOAFFIRM delay=0 OPTIONAL max_failure=0 max_connections=4 reopen=300 db_unique_name="dv00aix" register net_timeout=60 valid_for=(online_logfile,primary_role)
    fal_client string (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=*****-vip0)(PORT=1527)))(CONNECT_DATA=(SERVICE_NAME=dv01aix_XPT)(INSTANCE_NAME=dv01aix1)(SERVER=dedicated)))
    fal_server string (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=*****-vip0)(PORT=1527))(ADDRESS=(PROTOCOL=TCP)(HOST=*****-vip0)(PORT=1527)))(CONNECT_DATA=(SERVICE_NAME=dv00aix_XPT)(SERVER=dedicated)))
    db_recovery_file_dest string +SPARE1
    db_recovery_file_dest_size big integer 100G
    recovery_parallelism integer 0
    fast_start_parallel_rollback string LOW
    parallel_adaptive_multi_user boolean TRUE
    parallel_automatic_tuning boolean FALSE
    parallel_execution_message_size integer 2152
    parallel_instance_group string
    parallel_max_servers integer 8
    parallel_min_percent integer 0
    parallel_min_servers integer 0
    parallel_server boolean TRUE
    parallel_server_instances integer 2
    parallel_threads_per_cpu integer 2

  • Forecast Engine forecasting 2.5 to 3 times more than expected

    Is there a system parameter which can be used to reduce the forecast qty?
    Is it possible that 2 instances with same data ( one instance cloned from another) have different forecasts, after running engine on both instances?
    Please advise
    Thanks
    D

    Hi,
    Compare the value of GLOB_PROP in UAT and PROD for a particular combination where you see this difference in forecast. If the value is 2-3 times higher in PROD than in UAT, then you can try the steps below:
    1. Update mdp_matrix set prop_changes=1;
    2. Commit;
    3. Exec proport;
    4. Run the engine in BATCH mode
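In SQL*Plus, steps 1-3 above would look roughly like this (a sketch of Raj's steps, not a tested script; verify the proport procedure signature for your Demantra version, and try it in UAT first):

```sql
UPDATE mdp_matrix SET prop_changes = 1;
COMMIT;
EXEC proport;
-- step 4: then run the engine in BATCH mode
```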
    Cheers
    Raj
    (http://oracledemantra.blogspot.com/)
    (http://www.demandgyan.com/)

  • Searching finds a lot more than expected

    I'm trying to find specific files which belong to a certain "Tape Name." I have entered multiple tape names in the "Tape Name" field in the metadata. When I search for the term:
    2012-07-29
    which is used in about half of my "Tape Name" fields, and almost certainly a unique term, the search result finds nearly everything, including items labeled:
    2012-08-01 Wed plus earlier- Sony
    I've dug around to be sure that there aren't some rogue comments that have the exact search string above, but there aren't.
    Simply put, it should only find about 300 files, but instead finds about 700.  What am I missing?

    Even if I add a special character to the search (and to one of the clips), it still finds nearly all videos
    2012-07-29$
    will find:
    2012-07-29$ Mon and Tues- Sony
    and
    2012-08-01 Wed plus earlier- Sony  (total 800 files, of what should be 1)
    On the other hand, if I add "this is a te$$t" to one of the metadata fields in one video clip, a search for
    this is a te$$t
    will return exactly one video, as expected.
    In other words, for the first search, I'm certain that all of the other 799 files do NOT have a "$" in the metadata anywhere, and certainly not in the above order.

  • After upgrade,stderr3 file is growing more than capacity of the file system

    The stderr3 file, located in the /usr/sap/C11/DVEBMGS00/work/ folder, was growing rapidly. How can I reduce the file size? How do I truncate it?

    Hi,
    1. Could you check SAP note 1140307? Update to the kernel patch stated in that note if yours is old.
    2. Normally, what I do is check the content of stderr3 to see what information keeps being written to it repeatedly.
    3. Check whether you have a lot of inconsistencies in the TemSe. See SAP note 48400.
    For now, delete the stderr file that is not active, at the operating system level.
    Kindly refer to SAP note 16513 (File system is full - what to do).
    Hope this helps.
    Regards,
    Vincent

  • TimeMachine backups more than expected ?

    Hi, I have a little issue with Time Machine. I only back up one user, for whom I excluded the Desktop and some other folders that I don't need to back up. Somehow, every time it does a backup, it finds about 6000 files totaling about 700 MB to back up, although I have done almost nothing on my Mac. I would like to check exactly which files are in the backup, so I can determine whether I can exclude extra folders. I have not found any way to get this information so far. The system.log does contain information about how much, how many files, when, etc., but that is all. I wanted to try TimeTracker, which has been recommended in this forum, but it is an alpha version that has already expired and cannot be used anymore. Any idea is welcome.

    You can use TimeTracker to see exactly what is being backed up every time. I don't know what version you are using, but the one they officially offer works and is certainly not an alpha:
    http://www.charlessoft.com/
    In general, though, TM is not a good tool if you want to back up only a few folders. TM tries to back up everything, and it's very difficult to exclude everything but those folders. I would also recommend you back up the whole system: this provides by far the best security, and given the prices and sizes of hard drives these days, there is no reason not to do it.

  • Unattended Browser Communicating More than Expected

    FireFox 23.0 is communicating over the Internet while my computer is unattended and the browser is open to just one ordinary Web site (a single satellite image from http://www.ssec.wisc.edu). The following data come from the LAN Connection Status on my XP SP3 computer with the assumption that one "packet" = 1500 bytes:
    Overall data rates are about 3.23 MB/hr up and 4.58 MB/hr down (averaged over 2 hr 17 min) with Firefox open. With Firefox closed I'm showing only about 0.12 MB/hr up and 0.050 MB/hr down (averaged over 4 hr 22 min), which I presume can be attributed to Windows.
    Question: Is this level of "chatter" reasonable, given Firefox's desire to update itself, its databases, and its plugins (I have read https://support.mozilla.org/en-US/kb/how-stop-firefox-automatically-making-connections?esab=a&s=automatic+reload&r=3&as=s; no live bookmarks, newsfeeds, etc. are set up), or is this in itself a cause for concern? I have no other obvious symptoms of infection. The above data resulted solely from my own curiosity about how much bandwidth I was using.
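The packet-to-rate arithmetic above can be checked with a short Python sketch. The packet counts below are hypothetical, not the poster's actual counters; the 1500-bytes-per-packet figure is the same assumption stated above:

```python
def mb_per_hour(packets: int, hours: float, bytes_per_packet: int = 1500) -> float:
    """Approximate transfer rate in MB/hr from a raw interface packet count."""
    return packets * bytes_per_packet / 1_000_000 / hours

# Hypothetical LAN Connection Status readings over a 2-hour window:
up = mb_per_hour(packets=4400, hours=2.0)    # ~3.3 MB/hr sent
down = mb_per_hour(packets=6200, hours=2.0)  # ~4.65 MB/hr received
print(f"up: {up:.2f} MB/hr, down: {down:.2f} MB/hr")
```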

    OK, Thanks. I have a log. Now I just have to learn how to read and make sense out of it! Any suggested references? --JCW2

  • Stuck with Exact fetch returns more than requested number of rows!

    Hi
    I've written the following code, where I want to pass an id number into a package to update a record.
    declare
      myvar NUMBER;
    begin
      SELECT App.id
        INTO myvar
        FROM people_units pu
        LEFT OUTER JOIN (SELECT pu.id, pu.record_date, pu.unit_instance_code,
                                pu.person_code, pu.calocc_code, pu.record_date AS received
                           FROM people_units pu) App
          ON pu.person_code = App.person_code
         AND Trunc(pu.record_date) = Trunc(App.record_date)
       WHERE pu.id = 79474;
      ebs_units_pkg.AddProgressHistory(myvar, 'AUTO');
    end;
    When I run the query in SQL I get the error ORA-01422: exact fetch returns more than requested number of rows.
    Can anyone help me resolve this error? The select statement may return more than one row, which I'm guessing is the cause of the problem. If the select statement does return more than one value (2 rows, for example), I would like the package to update both rows. I've never really done any work with PL/SQL before, so I'm at a loss as to where to begin!

    Do the select and the update all in one step. It will be much easier then.
    Example:
    UPDATE people_units
    SET yourColumn = calculatedValue
    WHERE id = 79474
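If each matching row really does need to go through the package call, a cursor FOR loop is the usual PL/SQL pattern. This is a sketch only; the cursor's WHERE clause must reproduce whatever condition actually identifies the rows to process:

```sql
BEGIN
  FOR r IN (SELECT pu.id
              FROM people_units pu
             WHERE pu.id = 79474)  -- replace with the real filter/join
  LOOP
    ebs_units_pkg.AddProgressHistory(r.id, 'AUTO');
  END LOOP;
END;
```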

  • Can't SSAS engines make use of more than one aggregation to answer a query?!

    I have a very simple cube (just for testing and training). This cube contains two dimensions: [Dim Soccer Player] contains one attribute hierarchy, [player name]; the other dimension,
    [Dim Match Acts], also contains one attribute, [Acts], which has values like fouls, goals, saves, tackles… etc. And of course a fact table that contains one measure, a simple count ... that simple ... so this cube can
    answer a question like how many goals were scored by "Messi", for example ... a very simple, trivial cube.
    I'm testing aggregations and their effect. so first I've designed one aggregation (Aggregation 0) on the granularity level of [Player name], then
    I run a query to get the count of ALL the[Acts] done by each [Player name] ... I've checked the SQL Profiler and I found that the aggregation was used.
    Then I cleared the cache, and I run another query, but this time to get just the number of Fouls committed by each [Player name], I checked the Profiler but the Aggregation 0 was NOT used.
    I went back to the aggregations design tab in BIDS, and I added another new aggregation (Aggregation 1) on the level of [Acts], so now I have two aggregation one on the granularity level of
    [Player name] and the second on the level of the [Acts].... I cleared the cache again and rerun the last query. NONE of the aggregation was used!
    In the third test I deleted Aggregation 1 and added [Acts] to Aggregation 0. so Aggregation 0 now on both [Player name] AND [Acts]... cleared the cache and rerun the last query. Aggregation
    0 appeared again.
    I just want to confirm (and if possible understand why) that the SSAS engine can't combine more than one aggregation to serve a query (point 2), and that to design an aggregation
    that will serve a query containing attributes from different dimensions, I have to include ALL the attributes of that query in that one aggregation, as in point 3 ... is this true?

    I think you are on the right track. You need to include all the attributes used in one query in the same aggregation (like #3) for it to be used. Example #2 works as I would expect: queries above the grain of the agg can use it (a query by player name can use an agg by player/act), but queries below the grain of the agg (example #2) can't use the agg.
    http://artisconsulting.com/Blogs/GregGalloway

  • Please, the battery of my mac air does not last more than two hours, it's been over 30 days I sent for technical assistance and nothing! I am very disappointed with the mac air, I had a very different expectation of apples products!!

    Please, the battery of my MacBook Air does not last more than two hours. It's been over 30 days since I sent it in for technical assistance, and nothing! I am very disappointed with the MacBook Air; I had very different expectations of Apple's products!

    Maybe the problem is not your MB Air…
    Try these:
    Make sure Bluetooth is turned off if you're not using it.
    Set your screen brightness to 4 bars.
    See what's loading in your login items. Delete the ones you don't need.
    Open Activity Monitor - under All Processes, see what's using most of your CPU's resources.
    Highlight the ones with the highest %CPU and hit Quit Process.
    Remember that when Apple says your battery should last 7 hours, they tested it just browsing the web, with nothing open in the background and the screen set at 50% brightness.

  • Error in sql query as "loop has run more times than expected (Loop Counter went negative)"

    Hello,
    When I run the query as below
    DECLARE @LoopCount int
    SET @LoopCount = (SELECT Count(*) FROM KC_PaymentTransactionIDConversion with (nolock) Where KC_Transaction_ID is NULL and TransactionYear is NOT NULL)
    WHILE (
        SELECT Count(*)
        FROM KC_PaymentTransactionIDConversion with (nolock)
        Where KC_Transaction_ID is NULL
        and TransactionYear is NOT NULL
    ) > 0
    BEGIN
        IF @LoopCount < 0
            RAISERROR ('Issue with data in KC_PaymentTransactionIDConversion, loop has run more times than expected (Loop Counter went negative).', -- Message text.
                       16, -- Severity.
                       1)  -- State.
        SET @LoopCount = @LoopCount - 1
    END
    I am getting the error "loop has run more times than expected (Loop Counter went negative)".
    Could anyone help with this issue?
    Thanks ,
    Vinay

    Hi Vinay,
    According to your code above, the error message makes sense: as long as the value returned by "SELECT Count(*) FROM KC_PaymentTransactionIDConversion with (nolock) Where KC_Transaction_ID is NULL and TransactionYear is NOT NULL" is bigger than 0,
    the loop decreases @LoopCount. Without changing the table data, the returned value is always bigger than 0, so the loop keeps decreasing @LoopCount until it goes negative and raises the error.
    To fix this issue with the current information, we should make the following modification:
    Change the code
    WHILE (
    SELECT Count(*)
    FROM KC_PaymentTransactionIDConversion with (nolock)
    Where KC_Transaction_ID is NULL
    and TransactionYear is NOT NULL
    ) > 0
    To
    WHILE @LoopCount > 0
    Besides, since the current loop accomplishes nothing as written, please modify the query based on your requirements.
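Applying that change to the original code, a minimal corrected sketch (keeping the poster's table and filter; the loop body is a placeholder for the actual per-row work) would be:

```sql
DECLARE @LoopCount int
SET @LoopCount = (SELECT COUNT(*)
                  FROM KC_PaymentTransactionIDConversion WITH (NOLOCK)
                  WHERE KC_Transaction_ID IS NULL AND TransactionYear IS NOT NULL)
WHILE @LoopCount > 0
BEGIN
    -- per-row conversion work for KC_Transaction_ID goes here
    SET @LoopCount = @LoopCount - 1
END
```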
    If there are any other questions, please feel free to ask.
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • Activate more than one Aggregation Level

    Hello experts,
    We use Integrated Planning and have more than 80 Aggregation Levels. When the MultiProvider is deactivated because of adding a new InfoObject, all our Aggregation Levels are deactivated as well.
    Is there a way to activate more than one Aggregation Level at once? Maybe a special report?
    Thank you.

    Hi,
    please check:
    [Re: Activate all the Aggregation level of underlying multi provider]
    Gregor wrote a small program and posted it to the forum. Please note: There is no support for this tool.
    Bye Matthias
