Aggregate Filling taking Loooong time and failing

Hello All,
I have an InfoCube with a huge number of data records in production; the fact table contains around 100 million records.
I have two aggregates built on this cube, and the 'Activate & Fill' job is scheduled to run weekly.
The problem is that this job runs for more than 5-6 days and then fails. This has been happening for many weeks now.
I also tried compressing the data in the fact table, hoping it would help the aggregate fill, but no luck.
Now both the E and F tables of the InfoCube are heavily loaded and the aggregate job is still failing.
Can anyone let me know how to proceed with this / where to check for the failure reason?
1. Is it only because of the huge data volume that the aggregates are failing?
2. Is there any other way of finding the failure reason?
Regards
Rohit

Hi,
Why are you activating and filling it every week?
You can create, activate, and fill the aggregate once, and then follow up with rollups. That will decrease the number of records to be added to the aggregate.
To use an aggregate for an InfoCube when executing a query, you must first activate it and then fill it with data. Select the aggregate that you want to activate and fill; the system creates an active version of the aggregate.
When new data packages (requests) are loaded into the InfoCube, they are not immediately available for reporting via an aggregate. To provide the aggregate with the new data from the InfoCube, you must first load the data into the aggregate tables, at a time you can set. This process is known as a rollup.
In other words, if an InfoCube has aggregates and you have loaded data into it, the newly loaded data won't be available for reporting until the rollup is done; rollup means adding the newly loaded data to the aggregates.
Performance Tuning for Queries with Aggregates
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cbd2d390-0201-0010-8eab-a8a9269a23c2
Thanks,
JituK

Similar Messages

  • HT1544 I have downloaded the Mac OS 10.5 Leopard update successfully, but when I try to install it, it takes a long time and the status bar does not move

    Hi, I have updated my Mac 10.5 to 10.5.8 successfully, but while installing it takes a long time and the status bar shows no progress

    If I remember correctly, one of the updates could hang after completing. That was fixed by a restart, but installing a combo update over the top of others rarely does any harm, and it may be the best thing to do.

  • Data from PSA to ODS taking a very, very long time and failing

    Hi Gurus,
    I have been trying to load data from my CRM system to BI with a range in the data selection.
    I was loading to the PSA and then to the data target (package by package). It failed twice after taking a lot of time.
    Then I loaded the data to the PSA only, which succeeded, but moving it to the ODS took shockingly many hours and failed.
    Please help me resolve this.
    Thanks

    Hi,
    Search the BI short dump overview for the short dump that belongs to the request.
    Pay attention to the correct time and date on the selection screen.
    You can access the short dump list using the wizard or the menu path
    "Environment -> Short dump -> In the Data Warehouse".
    There was also a tRFC timeout error. We fixed it by increasing the timeout profile parameter and clearing the old errored tRFCs that were lying in the source system.
    Yet this load has now been running for about 20 hours to move data from PSA to DSO, and only some of the data packages are turning green; some are red and some are yellow.
    Please help
    Thanks

  • Oracle 9i RMAN backup filling up all disk and failing at the very end

    Hello to all and thank you in advance for any input from any of you.
    I am new to Oracle and the only one in my company with a little bit of Oracle knowledge, which is not a whole lot.
    I inherited a few servers (Linux and Windows based). We mostly run our production environment on a Linux box hosting Oracle 10g release 10.2.0.1, but we have a legacy application server that still runs 9i release 9.2.0.1.0. This server is a Windows 2003 box; the previous DBA, or someone else with admin rights to the machine, compressed the whole drive where Oracle resides. When I started taking stock of our Oracle assets, I found this server with no backups scheduled and a compressed disk with very limited space left after the compression, only 13 GB; if uncompressed, the database would definitely fill up the whole disk. The machine is in use and cannot be taken down for more than a couple of hours. We tried adding another disk, but the machine only accepts it as a hot spare. The database is in archive mode. When I try to run an RMAN backup, it starts filling up the database drive until no space is left, and it crashes the whole machine; it seems the buffer grows and grows until it fills up the whole drive. The only way to get the server up and running again is to reboot the machine. Since the server is more than 6 years old, we are not pleased with this, because we are afraid the machine will go south in one of those reboots.
    Has anyone heard of similar issues when running RMAN on a compressed drive? Could the compressed disk be involved in this kind of odd behaviour? I tried running the backup in debug mode and creating a log on a different disk (the one where the backup is going), and still the same issue. Again, any help is much appreciated.
    Sincerely,
    Alex

    Sybrand,
    Thank you for your fast response. I am going to use your previous post to answer your questions one by one:
    1 The Oracle binaries --- on a compressed drive > Yes
    2 The Oracle database control files redo log files and datafiles on that same compressed drive? > Yes
    3 The archived redo log ditto? > Yes
    I know for a fact (as I have been using that in 9i) you can create an RMAN backup on a compressed drive, and you can have your archived redo go to a compressed drive. No problemo! > Excellent, good news!!!
    But I would never touch the data files. Please note backup sets are multiplexed files consisting of multiple datafiles, so RMAN would need to do some calculations to decompress several blocks at the same time and to calculate a new block and compress it.
    Do you have enough space to:
    a) (preferred) to make datafile copies instead of a backup >
    Not on the drive that hosts the Oracle instance, but yes on an external 1 TB USB drive I attached to use as a backup repository; maybe this is why I am having these issues. I only have 13 GB left on the drive where the Oracle instance resides. (Actually, as I explained in a previous reply to Mark, the server only has 3 slots for disks; two of them are configured as RAID 1 and the other is a hot spare, because the RAID configuration manager can only add it as a hot spare. The RAIDed disk is partitioned in two: one small partition for the OS and the other for the Oracle instance.) When you say make a copy of the datafiles, I assume you are talking about doing manual backups: copying the database files after using "alter tablespace tablespace_name begin backup;", then copying the file, then "alter tablespace tablespace_name end backup;" (see the sketch after this list).
    b) use filesperset=1 on the backup command line, so the multiplexing (and associated calculations) stop. > I have not tried the filesperset option yet, but I can try it and see if that helps.
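    As an illustration of option (a), here is a minimal sketch of the user-managed hot backup flow described above, with a hypothetical tablespace name and copy destination; in practice this must be repeated for every tablespace:
    ALTER TABLESPACE users BEGIN BACKUP;
    -- copy the tablespace's datafile(s) at the OS level while backup mode is active, e.g.
    --   host copy D:\oradata\users01.dbf E:\backup\users01.dbf
    ALTER TABLESPACE users END BACKUP;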
    Can you post the last few lines of the RMAN log where RMAN bombs out? Note: the last few lines, not all of it, or you must send some paracetamol along with it! >
    When I ran the debug option, I stopped the backup at less than 20% because I didn't want to reboot the system in the middle of production hours; the backup consumed at least 3 GB during the time it was running. The other times I ran the RMAN backup I didn't use the debug option, so I don't have a log long enough to send, unless you want the last few lines of what I have.
    Thanks again for your input,
    Sincerely,
    Alex

  • Installed 3 times and failed.

    Tried installing Mountain Lion 3x and it failed every time; I always had a disk error and needed to reboot. Stuck in a reboot loop....

    Hi and welcome to the Skype Community,
    Did you actually submit the recovery form with your information? Just answer as many questions in as much detail as you can; if the information is correct, it might be enough to restore access for you.
    Additionally, can you please send me the account name of your old Skype account, so I can check the status of your account and the recovery with Skype CS?

  • Changing status in the LMS after retrying a quiz (passed first time and failed second time)

    Good evening,
    I have a question on Captivate. Is it normal that when my client retakes a lesson he has already validated, the only result stored is the last one, even if the first attempt was OK (Complete and more than 80%) and the second failed (Incomplete and less than 80%)?
    Can I do something about this in Captivate?
    LMS platform: WBT
    Captivate version: 5
    Thank you for your answers,
    Antoine

    This should be controllable in the LMS settings. It's not controlled by Captivate.
    Check your LMS for a setting related to how the attempts are scored. In Moodle's SCORM attempt settings, for example, the options are:
    Highest attempt - retains the best overall score from any attempt
    Average attempts - calculates an average of all scores
    First attempt - retains only the score from the first attempt
    Last attempt - retains only the score from the most recent attempt
    So look for something like this in your LMS admin settings for the SCORM course.

  • Export (exp) taking a long time and reading UNDO

    Hi Guys,
    Oracle 9.2.0.7 on AIX 5.3
    A schema-level export job is scheduled at night. Since the day before yesterday, it has been taking a really long time. It used to finish in 8 hours or so, but yesterday it had been running for around 20 hours and was still going. The schema to be exported is around 1 TB in size. (I know it is a bit stupid to take such daily exports, but customer requirement, you know ;) ) Today it is again still running, even though I scheduled it to start an hour and a half earlier.
    The command used is:
    exp userid=abc/abc file=expabc.pipe buffer=100000 rows=y direct=y
    recordlength=65535 indexes=n triggers=n grants=y
    constraints=y statistics=none log=expabc.log owner=abc
    I have monitored the session, and the wait event is db file sequential read the whole time. From p1 I figured out that all the datafiles it reads belong to the UNDO tablespace. What surprises me is: when consistent=y is not specified, should it be reading UNDO so frequently?
    There are around 1,800 tables in the schema in total; from the export log I can see that it exported around 60 tables and has been stuck since then. Neither the logfile nor the dumpfile has been updated for a long time.
    Any hints or clues on which direction to take the diagnosis, please.
    Any other information required, please let me know.
    Regards,
    Amardeep Sidhu

    Thanks Hemant.
    As i wrote above, it runs from a cron job.
    Here is the output from a simple SQL querying v$session_wait & v$datafile:
    13:50:00 SQL> l
      1  select a.sid,a.p1,a.p2,a.p3,b.file#,b.name
      2* from v$session_wait a,v$datafile b where a.p1=b.file# and a.sid=154
    13:50:01 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        509     158244          1        509 /<some_path_here>/undotbs_45.dbf
    13:50:03 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        509     157566          1        509 /<some_path_here>/undotbs_45.dbf
    13:50:07 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        509     157016          1        509 /<some_path_here>/undotbs_45.dbf
    13:50:11 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        509     156269          1        509 /<some_path_here>/undotbs_45.dbf
    13:50:16 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        508     167362          1        508 /<some_path_here>/undotbs_44.dbf
    13:50:58 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        508     166816          1        508 /<some_path_here>/undotbs_44.dbf
    13:51:02 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        508     165024          1        508 /<some_path_here>/undotbs_44.dbf
    13:51:14 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        507     159019          1        507 /<some_path_here>/undotbs_43.dbf
    13:52:09 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        506     193598          1        506 /<some_path_here>/undotbs_42.dbf
    13:52:12 SQL> /
           SID         P1         P2         P3      FILE# NAME
           154        506     193178          1        506 /<some_path_here>/undotbs_42.dbf
    13:52:14 SQL>
    Regards,
    Amardeep Sidhu
    Edited by: Amardeep Sidhu on Jun 9, 2010 2:26 PM
    Replaced a few paths with <some_path_here> ;)

  • Un-registering takes a long time and fails!!!

    I am trying to unregister a schema that is already registered in XML DB. I am using JDeveloper to unregister it, and it really takes a long time and eventually fails.
    What is going on? What is broken?
    XML DB is flaky and unreliable.
    Right?

    First make sure that all connections that have used the XML schema are disconnected. Schema deletion cannot start until all sessions that are using the schema have been closed, as it needs to get an exclusive lock on the SGA entries related to the XML schema.
    If there are a large number of rows in the table(s) associated with the XML schema, truncate the table before dropping the XML schema. If there are a large number of XDB repository resources associated with the table, truncate the table and then delete the resources with DBMS_XDB.DELETERESOURCE() mode 2 or 4 to ignore errors and avoid dangling REF issues.
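    For illustration, a minimal sketch of the truncate-then-delete sequence just described, with hypothetical table and resource path names (the delete_option value follows the modes mentioned above):
    TRUNCATE TABLE myuser.my_xmltype_table;
    BEGIN
      -- mode 4 per the advice above, to ignore errors and avoid dangling REFs
      DBMS_XDB.deleteResource('/home/MYUSER/old_resource.xml', 4);
    END;
    /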
    To monitor the progress of deleteSchema itself, connect as SYS and execute select count(*) from all_objects where owner = 'XXXX', where XXXX is the name of the database schema that owned the XML schema. See if the number of objects owned by that user is decreasing; if it is, have patience.
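    Written out, that monitoring query looks like this ('XXXX' is a placeholder for the owning database user):
    SELECT COUNT(*) FROM all_objects WHERE owner = 'XXXX';
    -- re-run periodically; a decreasing count means deleteSchema is making progress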

  • Calculations in query taking a long time to load the target table

    Hi,
    I am pulling approximately 45 million records using the query below in an SSIS package, which pulls from a database on one server and loads the results into a target table on another server. In the SELECT I have a calculation for 6 columns. The target table is truncated and reloaded every day. Most of the source columns used in the calculations contain 0, and it took approximately 1 hour 45 minutes to load the target table. Is there any way to reduce the load time? Also, could I do the calculations after all 47 M records are loaded, and then calculate only for the non-zero records?
    SELECT T1.Col1,
    T1.Col2,
    T1.Col3,
    T2.Col1,
    T2.Col2,
    T3.Col1,
    convert( numeric(8,5), (convert( numeric,T3.Col2) / 1000000)) AS Colu2,
    convert( numeric(8,5), (convert( numeric,T3.Col3) / 1000000)) AS Colu3,
    convert( numeric(8,5), (convert( numeric,T3.Col4) / 1000000)) AS Colu4,
    convert( numeric(8,5), (convert( numeric,T3.Col5) / 1000000)) AS Colu5,
    convert( numeric(8,5), (convert( numeric,T3.Col6) / 1000000)) AS Colu6,
    convert( numeric(8,5), (convert( numeric,T3.Col7) / 1000000)) AS Colu7
    FROM Tab1 T1
    JOIN Tab2 T2
    ON (T1.Col1 = T2.Col1)
    JOIN Tab3 T3
    ON (T1.Col9 = T3.Col9) -- as posted this read Tab3.Col9 = Tab3.Col9, a self-comparison; the join key presumably links T3 to one of the other tables
    Anand

    So 45 or 47? Nevertheless ...
    This is hardly a heavy calculation; the savings from skipping it will be dismal. Anything numeric is very easy on the CPU in general.
    But
    convert( numeric(8,5), (convert( numeric,T3.COl7) / 1000000))
    is not optimal, while
    CONVERT( NUMERIC(8,5), 300 / 1000000.00000 )
    is: dividing by a decimal literal makes the result numeric, so the inner CONVERT becomes unnecessary.
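    Applied to the poster's column names, that form would look like the following (a sketch, not tested against their data):
    -- the decimal literal already yields a numeric result, so no inner CONVERT is needed
    CONVERT(NUMERIC(8,5), T3.Col7 / 1000000.00000) AS Colu7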
    Now it boils down to how to make the load faster: do it in parallel. Find how many sockets the machine has and split the table into that many chunks. Also profile to find out where it spends most of the time. Sometimes the network is the bottleneck, so you may want to play with buffers and packet sizes; for example, if OLE DB is used, double the packet size and see if it works faster, then double it again, and so forth.
    To help you further, you need to tell us more, e.g. what the source and destination are and how you configured the load.
    Please understand that there is no silver bullet or blanket solution anywhere, and you need to state your desired load time; e.g., if you tell me it needs to load in 5 minutes, I will give your ask a pass.
    Arthur
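    On the poster's other question, deferring the conversion until after the bulk load and touching only non-zero rows, a hedged sketch with hypothetical raw-column names might look like:
    -- assumes the load lands the raw values in RawCol2..RawCol7 first (hypothetical names)
    UPDATE dbo.TargetTable
       SET Colu2 = CONVERT(NUMERIC(8,5), RawCol2 / 1000000.00000)
     WHERE RawCol2 <> 0;
    -- repeat (or fold into a single UPDATE) for the remaining raw columns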

  • Windows 7 64-bit machine with a locally installed printer takes 2-3 minutes to print most of the time and fails to print some of the time

    Our customer has 3-4 Windows machines in a local area network (no domain login) with different OS versions and platform architectures (32/64-bit).
    I have no idea about the configuration of the machine the printer is hooked up to, but it is running Windows for sure. This machine is accessible from any other machine in the LAN. They have another machine running Windows 7 64-bit on which they have installed the printer connected to the first machine, using a local printer driver installation with a local TCP/IP port selected during installation. Installation goes well, the printer gets installed successfully and shows a ready state, but when we try to print anything it takes 2-3 minutes to complete the print job. They have tried this on printers of a couple of different models and makes, but the result is the same delayed printing. They have also tried installing different printer drivers, such as the HP model-specific driver and the HP universal driver, but the result is the same in all cases. Earlier, when they were using a 32-bit XP machine with the printer installed the same way as now on Win 7 64-bit, there was no delayed-printing issue. Is this an issue with the Windows 7 64-bit printing system, or does it have something to do with mixed and matched (32/64-bit) printer drivers running on different machines in the LAN?
    I will try to get more information about the machine the printer is connected to. In the meantime, please get back to me if anyone has faced a similar issue and was able to resolve it successfully.
    Thanks in Advance.

    Hi,
    Have you tried pinging the printer's IP address and checking whether any data is lost on the network connection? If your network connection is congested, it can cause delays for the printer.
    Also, please try temporarily disabling all security software, such as antivirus and firewalls, and clear the printer cache.
    In Task Manager, check the state of spoolsv.exe, including CPU and memory usage. If it has high CPU or memory usage, it may be caused by malware, so you had better run a full scan with the latest antivirus.
    Karen Hu
    TechNet Community Support

  • Desktop syncing taking a long time and no photos showing up on iPad

    I am syncing several collections from my desktop, but none are showing up on my iPad. I started last night and they all seem to be still syncing. I also don't see them when I click on "View synced collection on the web".
    I have synced others successfully, but that was a few months back. Suggestions? Thanks

    Hi,
    Go through this thread and see if it helps:
    Query executes in RSRT but not in BEx Analyzer
    The problem is essentially that the shared memory is getting full, due to either long calculations in the query or a very large dataset.
    Regards,
    Nikhil

  • Taking much time when trying to drill down on a characteristic in a BW report

    Hi All,
    When we execute the BW report it takes nearly 1 to 2 minutes, but when we try to drill down on a characteristic it takes much longer, nearly 30 minutes to 1 hour, and throws the error message:
    "An error has occurred during loading. Please look in the upper frame for further information."
    I executed this query in RSRT and checked the query properties; the query reads its data directly from aggregates, but some characteristics are not available in the aggregates.
    So, after execution, drilling down takes much longer for characteristics that are not available in the aggregates; for characteristics that are available in the aggregates it takes only 2 to 3 minutes.
    How can we drill down on the characteristics that are not available in the aggregates without the long runtime and the error?
    Could you kindly suggest a solution for this?
    Thanks & Regards,
    Raju. E

    Hi,
    The only solution is to include all the characteristics used in the report in the aggregates; otherwise this is the issue you will face.
    Just create an aggregate proposal before creating any new aggregates, as it will give you an idea of which characteristics are used most.
    Also, you should make sure that all the navigation characteristics are part of the aggregates.
    Thanks
    Ajeet

  • Releasing a transport request takes a long time

    Hi All,
    I am releasing transport requests in SE09; releasing a child request takes a long time, and the parent request takes much longer.
    In SM50, I didn't find any processes running.
    Can anyone tell me the solution?
    Thanks & Regards
    Uday

    Hi Uday,
    >> I am releasing transport requests in SE09; releasing a child request takes a long time, and the parent request takes much longer.
    You didn't note which release you are running, but you can check SAP Note 1541334 (Database connect takes two minutes).
    >> In SM50, I didn't find any processes running.
    That is normal, because the system exports the transport request with the "tp" command at the OS level after TMS completes its job in a DIALOG work process.
    Best regards,
    Orkun Gedik

  • Recover Database is taking more time for first archived redo log file

    Hi,
    Environment Used :
    Hardware : IBM p570 machine with P6 processor Lpar of .5 CPU and 2.5 GB Ram
    OS : AIX 5.3 ML 07
    Cluster: HACMP 5.4.1.2
    Oracle Version: 9.2.0.4 RAC
    SAN : DS8100 from IBM
    I have used the flash copy option to copy the database from the production to the test machine, then tried to recover the database to a consistent state using the command "recover automatic database until cancel". The system takes a long time, and the alert log shows that, for the first archived redo log only, it reads all the datafiles, taking 3 seconds per datafile. Since I have more than 500 datafiles, it takes nearly 25 minutes to apply the first archived redo log file. All subsequent log files are applied immediately without any delay. Any suggestion to improve the speed would be highly appreciated.
    Regards
    Sridhar

    After changing the LPAR settings to 2 CPUs and 5 GB RAM, the problem was solved.

  • Taking a huge amount of time to fetch data from CDHDR

    Hi Experts,
    Counting the entries in the CDHDR table takes a huge amount of time and throws a TIME_OUT dump.
    I believe the table holds more than a million entries. Is there any alternative way to find out the number of entries?
    We are selecting the data from CDHDR with the following conditions:
    Objectclas - 'classify'
    Udate      - a given date
    Utime      - a given value (even a selection of 1 minute)
    We also tried to index the UDATE field, but that takes a huge amount of time as well (more than 6 hours, and incomplete).
    Can you suggest an alternative way to find the number of entries?
    Regards,
    VS
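    As an illustration of the selection described above, the equivalent count at the database level would look like the sketch below; the field names come from the standard CDHDR layout, while the literal values are placeholders. Whether this avoids the timeout still depends on a suitable index over OBJECTCLAS and UDATE:
    SELECT COUNT(*)
      FROM cdhdr
     WHERE objectclas = 'CLASSIFY'
       AND udate = '20130101'                     -- placeholder date
       AND utime BETWEEN '120000' AND '120100';   -- a 1-minute window, as in the post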

    Hello,
    In SE16, on the initial display screen, enter your selection criteria and run the count in the background, creating a spool request:
    SE16 > table contents, enter the selection criteria, then Program > Execute in Background.
    Best regards,
    Peter
