Cube Refresh Failure

I am running into an issue where a particular cube refresh fails with the following message:
10:08:14 ***Error Occured in BUILD_DRIVER: In __XML_SEQUENTIAL_LOADER: In __XML_UNIT_LOADER: In __XML_LOAD_MEAS_CC_PRT_ITEM: In ___XML_LOAD_TEMPPRG: The Q12_AW!Q12_RESPONDENTS_AUMQMA00015F_PRTCOMP dimension does not have a value numbered -157717.
It looks like one of the partitions is corrupted, or something to that effect.
I built a brand new cube identical to the previous cube and it refreshes fine (same facts). I am just curious if there is any way to fix the previous cube without recreating the whole thing.
Am I missing something very obvious here?
Swapan.

You might try a drop/recreate of the composite Q12_RESPONDENTS_AUMQMA00015F_PRTCOMP, then reload that partition.
You can also drop a partition, but I have no idea what impact that has on the AW maintenance tools that use the Java API, i.e., AWM and OWB.
Basically the answer is:
1) No, you are not missing anything obvious.
2) Recreating the whole thing is easier and more reliable, especially if you have an EIF backup of the AW.
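If you do want to attempt the drop/recreate route, here is a minimal sketch of how it could be scripted, assuming SQL*Plus access and the DBMS_AW package. The AW and composite names come from the error message above, but the composite's base dimension list is a placeholder you must fill in from your own workspace, and the exact OLAP DML should be verified against your AW before running anything:

```shell
#!/bin/sh
# Hypothetical sketch: emit the SQL*Plus commands that would drop the
# partition composite via DBMS_AW.EXECUTE, redefine it, and save the AW.
# The DEFINE line is commented out because the base dimension list must
# be taken from your own workspace.
gen_fix_script() {
    aw=$1       # e.g. Q12_AW
    comp=$2     # e.g. Q12_RESPONDENTS_AUMQMA00015F_PRTCOMP
    cat <<EOF
exec dbms_aw.execute('AW ATTACH $aw RW');
exec dbms_aw.execute('DELETE $comp');
-- redefine the composite with its original base dimensions (placeholder):
-- exec dbms_aw.execute('DEFINE $comp COMPOSITE <base dimensions>');
exec dbms_aw.execute('UPDATE');
commit;
exec dbms_aw.execute('AW DETACH $aw');
EOF
}

# e.g.: gen_fix_script Q12_AW Q12_RESPONDENTS_AUMQMA00015F_PRTCOMP | sqlplus -s olap_user/pass
```

After that, you would reload just the affected partition with your normal build script.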

Similar Messages

  • Cube refresh fails with an error below

    Hi,
We are experiencing the problem below during a Planning application database refresh. We have been refreshing the database every day, but all of a sudden the following error started appearing in the log:
    Cube refresh failed with error: java.rmi.UnmarshalException: error unmarshalling return; nested exception is:
    java.io.EOFException
When the database refresh is done manually from Workspace, it completes successfully, but when triggered from a Unix script it throws the above error.
Could this be related to a provisioning issue, such as the user having been removed from MSAD? Please help me out on this.
    Thanks,
    mani
    Edited by: sdid on Jul 29, 2012 11:16 PM

I work with 'sdid' and here is a better explanation of what exactly is going on -
As part of our nightly schedule we have a Unix shell script that executes a refresh of Essbase cubes from Planning using the 'CubeRefresh.sh' shell script.
Here is what our shell script looks like -
    /opt/hyperion/Planning/bin/CubeRefresh.sh /A:<cube name> /U:<user id> /P:<password> /R /D /FS
    Here is what 'CubeRefresh.sh' looks like -
    PLN_JAR_PATH=/opt/hyperion/Planning/bin
    export PLN_JAR_PATH
    . "${PLN_JAR_PATH}/setHPenv.sh"
    "${HS_JAVA_HOME}/bin/java" -classpath ${CLASSPATH} com.hyperion.planning.HspCubeRefreshCmd $1 $2 $3 $4 $5 $6 $7
    And here is what 'setHPenv.sh' looks like -
    HS_JAVA_HOME=/opt/hyperion/common/JRE/Sun/1.5.0
    export HS_JAVA_HOME
    HYPERION_HOME=/opt/hyperion
    export HYPERION_HOME
    PLN_JAR_PATH=/opt/hyperion/Planning/lib
    export PLN_JAR_PATH
    PLN_PROPERTIES_PATH=/opt/hyperion/deployments/Tomcat5/HyperionPlanning/webapps/HyperionPlanning/WEB-INF/classes
    export PLN_PROPERTIES_PATH
CLASSPATH=${PLN_JAR_PATH}/HspJS.jar:${PLN_PROPERTIES_PATH}:${PLN_JAR_PATH}/hbrhppluginjar:${PLN_JAR_PATH}/jakarta-regexp-1.4.jar:${PLN_JAR_PATH}/hyjdbc.jar:${PLN_JAR_PATH}/iText.jar:${PLN_JAR_PATH}/iTextAsian.jar:${PLN_JAR_PATH}/mail.jar:${PLN_JAR_PATH}/jdom.jar:${PLN_JAR_PATH}/dom.jar:${PLN_JAR_PATH}/sax.jar:${PLN_JAR_PATH}/xercesImpl.jar:${PLN_JAR_PATH}/jaxp-api.jar:${PLN_JAR_PATH}/classes12.zip:${PLN_JAR_PATH}/db2java.zip:${PLN_JAR_PATH}/db2jcc.jar:${HYPERION_HOME}/common/CSS/9.3.1/lib/css-9_3_1.jar:${HYPERION_HOME}/common/CSS/9.3.1/lib/ldapbp.jar:${PLN_JAR_PATH}/log4j.jar:${PLN_JAR_PATH}/log4j-1.2.8.jar:${PLN_JAR_PATH}/hbrhppluginjar.jar:${PLN_JAR_PATH}/ess_japi.jar:${PLN_JAR_PATH}/ess_es_server.jar:${PLN_JAR_PATH}/commons-httpclient-3.0.jar:${PLN_JAR_PATH}/commons-codec-1.3.jar:${PLN_JAR_PATH}/jakarta-slide-webdavlib.jar:${PLN_JAR_PATH}/ognl-2.6.7.jar:${HYPERION_HOME}/common/CLS/9.3.1/lib/cls-9_3_1.jar:${HYPERION_HOME}/common/CLS/9.3.1/lib/EccpressoAll.jar:${HYPERION_HOME}/common/CLS/9.3.1/lib/flexlm.jar:${HYPERION_HOME}/common/CLS/9.3.1/lib/flexlmutil.jar:${HYPERION_HOME}/AdminServices/server/lib/easserverplugin.jar:${PLN_JAR_PATH}/interop-sdk.jar:${PLN_JAR_PATH}/HspCopyApp.jar:${PLN_JAR_PATH}/commons-logging.jar:${CLASSPATH}
    export CLASSPATH
case $OS in
HP-UX)
SHLIB_PATH=${HYPERION_HOME}/common/EssbaseRTC/9.3.1/bin:${HYPERION_HOME}/Planning/lib:${SHLIB_PATH:-}
export SHLIB_PATH
;;
SunOS)
LD_LIBRARY_PATH=${HYPERION_HOME}/common/EssbaseRTC/9.3.1/bin:${HYPERION_HOME}/Planning/lib:${LD_LIBRARY_PATH:-}
export LD_LIBRARY_PATH
;;
AIX)
LIBPATH=${HYPERION_HOME}/common/EssbaseRTC/9.3.1/bin:${HYPERION_HOME}/Planning/lib:${LIBPATH:-}
export LIBPATH
;;
*)
echo "$OS is not supported"
;;
esac
We have not made any changes to the shell script, 'CubeRefresh.sh', or 'setHPenv.sh'.
For the past couple of days the shell script that executes 'CubeRefresh.sh' has been failing with the error message below.
    Cube refresh failed with error: java.rmi.UnmarshalException: error unmarshalling return; nested exception is:
    java.io.EOFException
This error is preventing our Essbase cubes from being refreshed from Planning through these batch jobs.
On the other hand, a manual refresh from within Planning works.
    We are on Hyperion® Planning – System 9 - Version : 9.3.1.1.10
    Any help on this would be greatly appreciated.
    Thanks
    Andy
    Edited by: Andy_D on Jul 30, 2012 9:04 AM
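Since the manual refresh works while the scripted one intermittently fails, one pragmatic workaround while investigating is to retry the batch refresh a couple of times before declaring failure. A minimal sketch (the paths, application name, and retry count are assumptions, not part of the original scripts):

```shell
#!/bin/sh
# Hypothetical wrapper: retry a refresh command a few times before giving
# up, since a transient java.rmi.UnmarshalException sometimes clears on a
# re-run. $1 = max attempts; remaining args = the command to run.
refresh_with_retry() {
    max=$1; shift
    attempt=1
    while [ "$attempt" -le "$max" ]; do
        if "$@"; then
            echo "refresh succeeded on attempt $attempt"
            return 0
        fi
        echo "attempt $attempt failed, retrying..." >&2
        attempt=$((attempt + 1))
        sleep 2   # brief pause between attempts
    done
    echo "refresh failed after $max attempts" >&2
    return 1
}

# Example (paths and arguments are placeholders):
# refresh_with_retry 3 /opt/hyperion/Planning/bin/CubeRefresh.sh /A:PlanApp /U:admin /P:secret /R /D /FS
```

This does not fix the root cause (often an RMI/networking problem between the script host and the Planning server), but it keeps the nightly batch from failing on a one-off glitch.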

  • Performance issue in browsing SSAS cube using Excel for first time after cube refresh

    Hello Group Members,
    This is a continuation of my earlier blog question -
    https://social.msdn.microsoft.com/Forums/en-US/a1e424a2-f102-4165-a597-f464cf03ebb5/cache-and-performance-issue-in-browsing-ssas-cube-using-excel-for-first-time?forum=sqlanalysisservices
As that thread is marked as answered but my issue is not resolved, I am creating a new thread.
I am facing a cache and performance issue the first time I open an SSAS cube connection from Excel (using the Data tab -> From Other Sources -> From Analysis Services) after the daily cube refresh. On an end user's system (8 GB RAM, but around 4 GB available), it takes 10 minutes to open the cube the first time. From the next run onwards, it opens quickly, within 10 seconds.
We have a daily ETL process running on high-end servers. The dedicated SSAS cube server has 8 cores and 64 GB RAM. In total we have 4 cube DBs: 3 get a full cube refresh and 1 an incremental refresh. We have seen that after the daily cube refresh it takes 10-odd minutes to open a cube on an end user's system; from the next time onwards it opens really fast, within 10 seconds. After a cube refresh, on the server systems (32 GB RAM, around 4 GB available), it takes 2-odd minutes to open the cube.
Is there any way we could reduce the time taken for the first attempt?
As mentioned in my previous thread, we have already implemented cube cache warming, but there is no improvement.
Currently the cumulative size of all 4 cube DBs is more than 9 GB in production, with each cube DB holding 4 individual cubes on average and the largest cube DB at 3.5 GB. Now, the question is: how does Excel work with an SSAS cube after the daily cube refresh?
Does Excel create a cache of the schema and data each time the cube is refreshed, and in doing so need to download the cube schema into Excel's memory? Downloading the schema and data of each cube database from server to client will take significant time depending on the bandwidth of the network and connection.
Is it in any way dependent on client system RAM? Today the biggest cube DB is 3.5 GB; tomorrow it will be 5-6 GB. Though the client system RAM is 8 GB, the available or free RAM is around 4 GB. So what will happen then?
    Best Regards, Arka Mitra.

    Could you run the following two DMV queries filling in the name of the cube you're connecting to. Then please post back the row count returned from each of them (by copying them into Excel and counting the rows).
    I want to see if this is an issue I've run across before with thousands of dimension attributes and MDSCHEMA_CUBES performance.
    select [HIERARCHY_UNIQUE_NAME]
    from $system.mdschema_hierarchies
    where CUBE_NAME = 'YourCubeName'
    select [LEVEL_UNIQUE_NAME]
    from $system.mdschema_levels
    where CUBE_NAME = 'YourCubeName'
    Also, what version of Analysis Services is it? If you connect Object Explorer in Management Studio to SSAS, what's the exact version number it says on the top server node?
    http://artisconsulting.com/Blogs/GregGalloway

  • (-2004) Critical cache refresh failure

    Dear All,
We are using SAP Business One 2007 B PL10; the DB is SQL Server 2005 SP1.
While logging in to SAP we get the error message "(-2004) Critical cache refresh failure" and the SAP application closes.
The DB is accessible from SQL Server Management Studio and the data is visible.
However, while restoring a backup of this DB, the following error message appears:
System.Data.SqlClient.SqlError: Could not continue scan with NOLOCK due to data movement.
Please help.
    Regards,
    Brijesh Kumar Rai

    Dear Brijesh Kumar Rai,
Welcome to the forum.
You may check this thread: (-2004) Critical cache refresh failure
    Thanks,
    Gordon

  • Multiple Cubes refresh in parallel

    Hi
I have an analytic workspace where I have modeled a set of conformed dimensions and some dimensions specific to particular subject areas. There will be multiple cubes (some partitioned, some not) in this analytic workspace.
I would like to know whether these cubes can be refreshed in parallel. I have tried the parallelism parameter provided by DBMS_CUBE and kicked off a refresh of 2 cubes, but when I check CUBE_BUILD_LOG the slave process count is always 0 and the execution appears to have happened serially.
Please suggest how these cubes can be refreshed in parallel.
    Thanks

    FAILED RECORDS FROM THE LOG :
    ==========================
    127     0     FAILED     BUILD          BUILD     "(CLOB) <ERROR>
    <![CDATA[
    XOQ-01707: Oracle job "IncrMyCBMV_JOB$_812" failed while executing slave build "GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE) AS OF SCN 1533406" with error "37162: ORA-37162: OLAP error
    XOQ-00703: error executing OLAP DML command "(UPDATE GLOBAL.GLOBAL : ORA-37605: error during OLAP AW UPDATE
    ORA-00600: internal error code, arguments: [kdliLockBlock], [9708], [16859386], [0], [0], [0], [0], [], [], [], [], []
    ORA-06512: at "SYS.DBMS_CUBE", line 234
    ORA-06512: at "SYS.DBMS_CUBE", line 316
    ORA-06512: at line 1
    ".]]>>
    </ERROR>"     GLOBAL     GLOBAL               02-APR-13 12.25.43.702000000 PM ASIA/CALCUTTA     (CLOB) BUILD price_cube, units_cube     DBMS_CUBE     0               4542     0     0     2     IncrMyCBMV
    127     0     FAILED     SLAVE     UNITS_CUBE     CUBE          GLOBAL     GLOBAL     P22:1999.10     IncrMyCBMV_JOB$_821     02-APR-13 12.25.42.673000000 PM ASIA/CALCUTTA     (CLOB) GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE) AS OF SCN 1533406     DBMS_CUBE     1          S     6288     230     0     3     IncrMyCBMV
    127     0     FAILED     SLAVE     UNITS_CUBE     CUBE          GLOBAL     GLOBAL     P24:1999.12     IncrMyCBMV_JOB$_819     02-APR-13 12.25.32.533000000 PM ASIA/CALCUTTA     (CLOB) GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE) AS OF SCN 1533406     DBMS_CUBE     1          S     6272     228     0     3     IncrMyCBMV
    127     0     FAILED     SLAVE     UNITS_CUBE     CUBE          GLOBAL     GLOBAL     P25:2000.01     IncrMyCBMV_JOB$_818     02-APR-13 12.25.30.505000000 PM ASIA/CALCUTTA     (CLOB) GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE) AS OF SCN 1533406     DBMS_CUBE     1          S     6259     227     0     3     IncrMyCBMV
    127     0     FAILED     SLAVE     UNITS_CUBE     CUBE          GLOBAL     GLOBAL     P26:2000.02     IncrMyCBMV_JOB$_817     02-APR-13 12.25.28.477000000 PM ASIA/CALCUTTA     (CLOB) GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE) AS OF SCN 1533406     DBMS_CUBE     1          S     6258     226     0     3     IncrMyCBMV
    127     0     FAILED     SLAVE     UNITS_CUBE     CUBE          GLOBAL     GLOBAL     P27:2000.03     IncrMyCBMV_JOB$_816     02-APR-13 12.25.26.449000000 PM ASIA/CALCUTTA     (CLOB) GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE) AS OF SCN 1533406     DBMS_CUBE     1          S     6237     225     0     3     IncrMyCBMV
    127     0     FAILED     SLAVE     UNITS_CUBE     CUBE          GLOBAL     GLOBAL     P28:2000.04     IncrMyCBMV_JOB$_815     02-APR-13 12.25.24.421000000 PM ASIA/CALCUTTA     (CLOB) GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE) AS OF SCN 1533406     DBMS_CUBE     1          S     6235     224     0     3     IncrMyCBMV
    127     0     FAILED     SLAVE     UNITS_CUBE     CUBE          GLOBAL     GLOBAL     P29:2000.05     IncrMyCBMV_JOB$_814     02-APR-13 12.25.22.393000000 PM ASIA/CALCUTTA     (CLOB) GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE) AS OF SCN 1533406     DBMS_CUBE     1          S     6234     223     0     3     IncrMyCBMV
    127     0     FAILED     SLAVE     UNITS_CUBE     CUBE          GLOBAL     GLOBAL     P30:2000.06     IncrMyCBMV_JOB$_813     02-APR-13 12.25.20.349000000 PM ASIA/CALCUTTA     (CLOB) GLOBAL.UNITS_CUBE USING (CLEAR LEAVES, LOAD, SOLVE) AS OF SCN 1533406     DBMS_CUBE     1          S     6232     222     0     3     IncrMyCBMV

  • Cube Refresh Performance Issue

We are facing a strange performance issue related to cube refresh. The cube, which used to take 1 hour to refresh, is now taking around 3.5 to 4 hours without any change in the environment. Also, the data it processes is almost the same as before. Only this cube, out of all the cubes in the workspace, has suffered this performance degradation over time.
    Details of the cube:
This cube has 7 dimensions and 11 measures (a mix of SUM and AVG as aggregation operators). No compression. The cube is partitioned (48 partitions). The main source of the data is a materialized view that is partitioned in the same way as the cube.
    Data Volume: 2480261 records in the source to be processed daily (almost evenly distributed across the partition)
    Cube is refreshed with the below script
    DBMS_CUBE.BUILD(<<cube_name>>,'SS',true,5,false,true,false);
Has anyone faced a similar issue? Please advise on what might be the cause of the performance degradation.
    Environment - Oracle Database 11g Enterprise Edition Release 11.2.0.3.0
    AWM - awm11.2.0.2.0A

    Take a look at DBMS_CUBE.BUILD documentation at http://download.oracle.com/docs/cd/E11882_01/appdev.112/e16760/d_cube.htm#ARPLS218 and DBMS_CUBE_LOG documentation at http://download.oracle.com/docs/cd/E11882_01/appdev.112/e16760/d_cube_log.htm#ARPLS72789
    You can also search this forum for more questions/examples about DBMS_CUBE.BUILD
    David Greenfield has covered many Cube loading topics in the past on this forum.
    Mapping to Relational tables
    Re: Enabling materialized view for fast refresh method
    DBMS CUBE BUILD
    CUBE_DFLT_PARTITION_LEVEL in 11g?
    Reclaiming space in OLAP 11.1.0.7
    Re: During a cube build how do I use an IN list for dimension hierarchy?

  • Multiple Cube Refresh

    Hi,
We have multiple cubes in a single workspace. We would need to refresh more than one cube in parallel using the DBMS_CUBE package. Is that feasible? Can 2 cubes refresh in parallel? Will there not be a conflict over read-write attach mode on the analytic workspace?
    rgds,
    Prakash S

This should work:
exec dbms_cube.build('cubeA, cubeB', parallelism=>2)
All partitions of cube A will be built before cube B is started, but if parallelism is sufficiently high, then they should both begin. It will not work to call dbms_cube.build in two different sessions.
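That call can also be scripted from the shell for batch use. A minimal sketch (the helper name, user, and connect string are placeholders, not part of the reply):

```shell
#!/bin/sh
# Hypothetical helper: build the DBMS_CUBE.BUILD call from the reply as a
# string so it can be piped into SQL*Plus from a batch job.
build_call() {
    cubes=$1    # comma-separated cube list, e.g. 'cubeA, cubeB'
    par=$2      # parallelism level
    echo "exec dbms_cube.build('$cubes', parallelism=>$par)"
}

# e.g.: build_call 'cubeA, cubeB' 2 | sqlplus -s olap_user/pass@db
```

Note the reply's caveat still applies: issue one build call listing all the cubes rather than calling dbms_cube.build from two sessions.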

  • What is the Cube refresh and how to determine cube refresh time for a cube

Hi,
What is the meaning of cube refreshing? I believe it is clearing and updating the data in production on a weekly or monthly basis. How does it happen in general? Do we need to do it manually, or will there be a scheduler in production?
Please clarify how to determine the refresh timing for a cube and what factors we need to consider.
Thanks,
bhavani.

    I can't give you specific guidelines as every client/cube is different based on needs. Here is some general info.
    Cube refresh is a pretty generic name for a number of things.
It could be, as you say, clearing and updating the cube, or just updating, or running calculations, or running restructure operations. It's pretty much whatever you need it to be for your environment. As to timing, there are two timings to consider. First, the frequency of refresh: is it monthly, weekly, daily, hourly? That depends on the application and the needs of the users. The second is how long the refresh takes: seconds, minutes, hours, etc. This has to be considered to meet the users' SLA and hopefully fit into a batch processing window. Refreshes can be done manually or automated with batch or shell scripts (depending on the OS) and scheduled through whatever scheduling package is on the box (Windows AT scheduler, cron, third party, etc.).
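A minimal sketch of the automated flavor described above, for a Unix box with cron (the schedule, paths, and log naming are all assumptions):

```shell
#!/bin/sh
# Hypothetical nightly-refresh wrapper. A crontab entry like the
# (commented) line below would run it at 02:00 daily; the script path
# is a placeholder.
# 0 2 * * * /opt/scripts/nightly_refresh.sh

log_name() {
    # $1 = log directory, $2 = date stamp (YYYYMMDD); prints the log path
    echo "$1/cube_refresh_$2.log"
}

run_refresh() {
    # runs the given refresh command, appending output to a dated log
    dir=$1; shift
    log=$(log_name "$dir" "$(date +%Y%m%d)")
    "$@" >> "$log" 2>&1
}

# e.g.: run_refresh /var/log/refresh /opt/hyperion/Planning/bin/CubeRefresh.sh /A:PlanApp ...
```

The dated log gives you the second timing the reply mentions: compare timestamps at the top and bottom of each night's log to see whether the refresh still fits the batch window.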

  • (-1) Critical cache refresh failure!

    Hi Experts,
While the SAP client is up and running, the following message pops up: "(-1) Critical cache refresh failure", and the application closes.
The issue occurs randomly on some clients.
We are running SAP Business One for HANA PL09 HF1 and HANA server rev. 69.
    Please advise
    Vasilis Korolis

    Hi,
    Please check thread:
    http://scn.sap.com/thread/1030870
    Thanks & Regards,
    Nagarajan

  • Number of users logged in during cube refresh activity

I am using Hyperion 11.1.2.1 and running the Planning cube refresh activity using \Hyperion\Planning\bin\CubeRefresh.cmd.
Is there any way to find out the total number of users logged in to a particular Planning application while the cube refresh activity is running for that application?

I don't think the Planning repository has information about sessions.
Let us know if you succeed.
I was not talking about Essbase as in "Essbase": all users performing an activity in Planning (data refresh, data load, running a business rule) will show up in EAS sessions, so if you run the MaxL display session statement against the Planning application it will give you all the users.
Now OP, you can search for "line count windows cmd" to get ideas on how to count the lines; that count will be the number of users (well, not exactly the number of users, but the number of sessions).
    ORA-00001: Unique constraint violated: Count lines in multiple files using Windows command prompt
    Regards
    Celvin
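On a Unix box the same line-count idea looks like this. A minimal sketch (the MaxL output format varies by version, so the blank-line filter is an assumption; strip any header/footer rows from the captured output first):

```shell
#!/bin/sh
# Hypothetical: count session rows in captured MaxL 'display session'
# output. Reads the output on stdin and counts non-blank lines, one per
# session; adjust the filter to your MaxL output format.
count_sessions() {
    grep -vc '^[[:space:]]*$'
}

# e.g., with the MaxL output already captured to a file:
# count_sessions < sessions.txt
```

As Celvin notes, this counts sessions rather than distinct users, since one user can hold several sessions.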

  • 1 beep at startup, DRAM refresh failure? Board/RAM problem? Bios needs changing?

I am new to PC building, and I have just put together my first system. So I have everything assembled, the system turns on, and I hear 1 short beep. I can go into my motherboard's BIOS screen and even begin installing Windows 7. However, after about 4-5 minutes, the computer just turns itself off. According to my motherboard's beep codes, 1 short beep means a DRAM refresh failure. Is this RAM related? I tried removing the RAM, switching the slots on the board, and plugging it back in, but I got the same problem. Is this perhaps faulty RAM? Or is it a motherboard/power supply issue? Here are the components I am using:
    Patriot Gaming Series 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10666) Desktop Memory Model PGS34G1333ELK
    MSI H55-GD65 Motherboard
    Intel Corei5 750 Processor
    eVGA GTS 450 Video Card (01G-P3-1450-TR)
    Cooler Master 650W Power Supply ATX 12v
I am new when it comes to all this, so is it possible I need to set the RAM timings in my BIOS? As far as I know, the 2 RAM sticks should be compatible with my motherboard.
    Any help would be great, and I can supply more information if need be.

When the system continues booting something (e.g., a Windows Setup CD), that one beep is probably a power-good signal or indicates an installed USB device; it cannot really be a sign of a critical error, otherwise the system would simply halt when it beeps and you would probably see nothing on the screen at all.
Test with one memory module only and make sure the memory command rate is set to 2T. If you still experience sudden shutdowns or reboots, please check the CPU temperature in BIOS Setup (H/W Monitor section) to make sure this is not a simple overheating issue. Often, such reboots indicate a problem with the PSU, so you might want to check that as well.

  • MV refresh failure...Request for guidance.!!!

    Guys,
I've found that an MV refresh that was supposed to happen last night via an automated script has failed with the error below:
ORA-12008: error in materialized view refresh path
ORA-08103: object no longer exists
As a workaround, I manually refreshed the MV and it worked fine. Could someone shed some light on what could have led to the failure? It has been working all along, and the objects referenced in the MV are owned by the schema from which the refresh is done.
    Thanks!!!!
    -Bhagat

Can you check the dependencies of the MV? Check whether all the dependent objects used within the MV definition exist in a valid state. If a DB link is involved, check that it is working fine.
Make sure the access privileges are fine for all the objects.

  • Hyperion planning cube refresh is taking more time

    Hi,
    My planning application has around 4 cubes/databases as below:
    Cube A: 1000+ dense members and  500+ sparse members
    Cube B: 100+ dense members and  400+ sparse members
    Cube C: 100+ dense members and  600+ sparse members
    Cube D: 300+ dense members and  2000+ sparse members
When a change (e.g., adding members) is made to any of the above databases and a database refresh is performed, it takes almost 2 hours for the refresh to complete.
Is there any option to refresh only the database that was updated/changed, rather than hitting all 4 databases in the application?
Is there any tuning that can be done to minimize this refresh time?
    Kindly let me know.
    Thanks!

Well, every time you touch a dense dimension it will do a full restructure; that means it will rebuild all the .pag and .ind files.
The good side is that this will get rid of all the fragmentation too.
The problem is the amount of data you have in the cube: the bigger, the slower.
Take a look at the following during the restructure:
How much processing power the server uses.
The disk usage.
Memory (if a cube uses 20 GB it will double that during the restructure process; if the memory isn't there it will use swap and things will get really slow).
The speed at which the .pag files grow (refresh the data, see how much they have grown since the last refresh, and calculate how many KB/s it is loading).
With this you can work out how to tune the cube:
If the last point above is too slow, the cause could be the size of your block (either too big or too small). I know Oracle says that on 64-bit systems you can have blocks bigger than 100 KB, but I have never seen that work well.
If the third point is the problem, you can decrease the memory for each cube or increase the memory of the server.
If it is the first or second, you need to examine the server overall. You can put your cubes' .pag files on different disks if the problem is the disks.
Also, in the worst-case scenario you can create a backup database and put old data there. If users need it, they can access it via Smart View. That way your cube doesn't grow in size every new year.
Hope this helps.

  • Missing MDX Sets, Measures after Cube refresh

Hello, and thank you for any help on this question. I am somewhat new to MDX and cubes, but I have found an issue that I cannot work around.
I have a large Excel workbook that connects to a data cube. Within this workbook I use about 8 MDX queries to bring back date-related data such as Yesterday, MTD, Previous Month, etc. This has worked great until now. On refresh, I am receiving an error stating that the "Organization of the OLAP Cube has changed and as a result a pivot table will not update". I have identified that all of my MDX queries are missing: they still show in the field list in the rows section, but they seem to have been removed from the data connection. I cannot recreate the MDX queries, as it says the name is not unique. Any help on this error?
    EXHORTER

    Hi,
Were there any changes on your data source side, like the data structure, DB server name, etc.? Make sure your connection string points to the right server and the right cube.
You can refer to this thread and this article for some ideas. Thank you.

  • MV_Scheduled Refresh failure

    Hi,
    I have created a materialized view log on table testing_mview_rev in 11g:
    SQL> CREATE MATERIALIZED VIEW LOG ON testing_mview_rev WITH ROWID;
    Materialized view log created.
    And a MV on it in 10g:
    SQL> create materialized view testing_mview_1_rev REFRESH Force start with (sysdate) next (sysdate+1/1440) with rowid as select * from testing_mview_rev@S58_TO_S56;
    Materialized view created.
Now this is supposed to refresh itself automatically every minute, but the scheduled refresh is failing. However, a manual refresh works fine.
Please let me know what I have missed here. Is there any other way to schedule the MV to refresh every minute?

Oh my god, I couldn't believe it.. it's done, and I would like to share the solution with all:
The issue here was the DB link, because of which everything was working except the scheduled refresh.
Earlier I had created the DB link as:
create database link testdblink using '<Service name on target server>';
As this was throwing an error, I tried:
    SQL> create public database link testdblink1 connect to <username> identified by <password> using '<Service name on target server>';
    Database link created.
SQL> select * from v$version@testdblink1;
select * from v$version@testdblink1
*
    ERROR at line 1:
    ORA-01017: invalid username/password; logon denied
    ORA-02063: preceding line from TESTDBLINK1
    The correct way of doing this when you are connecting to 11g is:
    SQL> create  database link testdblink connect to "<username>" identified by "<password>" using '<Service name on target server>';
    Database link created.
SQL> select * from v$version@testdblink;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    PL/SQL Release 11.1.0.6.0 - Production
    CORE    11.1.0.6.0      Production
    TNS for Linux: Version 11.1.0.6.0 - Production
    NLSRTL Version 11.1.0.6.0 - Production
The problem turns out to be that the DB I was trying to connect to is 11g, which by default sets password case sensitivity to true.
    http://lihaobin.com/?p=78
Now the materialized view's scheduled refresh is working great:
    SQL> create materialized view testing_mview_1_rev REFRESH Force start with (sysdate) next (sysdate+1/1440) with rowid as select * from imig.testing_mview_rev@testdblink;
    Materialized view created.
    SQL> select count(*) from testing_mview_1_rev;
    COUNT(*)
    10
    SQL> insert into testing_mview_rev values ...;
    5 rows created.
    SQL> commit;
    Commit complete.
    SQL> select count(*) from testing_mview_1_rev;
    COUNT(*)
    15
    SQL> Select job, log_user, substr(to_char(last_date ,'DD.MM.YYYY HH24:MI'),1,16) "LAST DAT", substr(to_char(next_date ,'DD.MM.YYYY HH24:MI'),1,16) "NEXT DATE", failures from dba_jobs;
    JOB LOG_USER LAST DAT NEXT DATE
    FAILURES
    36 IMIG 13.07.2012 13:08 13.07.2012 13:09
    0
    :)
