Problem maintaining cube

Hi there,
I have 2 dimensions with the following levels and hierarchies:
CALENDAR_YEAR:
Levels:
YEAR
Hierarchy CALENDAR:
YEAR
LOCATION:
Levels:
ALL_LOCATIONS
CONTINENT
COUNTRY
Hierarchy PLACE:
ALL_LOCATIONS <--- CONTINENT <--- COUNTRY(default)
I created a star schema from the SQL*Plus command line using CREATE TABLE scripts. The star schema consists of 2 dimension tables, LOCATIONS and YEARS, each with a NUMBER-typed primary key ID, and a fact table whose primary key is made up of 2 foreign keys:
1) YEAR, pointing to the primary key of YEARS, and
2) COUNTRY, pointing to the primary key of LOCATIONS.
This fact table has only 1 measure.
I have already created the dimensions and the cube in AWM, and I set up the mappings as they should be.
But when I try to maintain the cube, the following error occurs:
An error has occurred on the server
Error class: Express Failure
Server error descriptions:
INI: Error creating a definition manager, Generic at TxsOqConnection::generic<BuildProcess>
INI: XOQ-01711: Aggregation path for hierarchies on a dimension "LOCATION" is inconsistent, Generic at TxsOqStdFormCommand::execute
at oracle.olapi.data.source.DataProvider.callGeneric(Unknown Source)
at oracle.olapi.data.source.DataProvider.callGeneric(Unknown Source)
at oracle.olapi.data.source.DataProvider.executeBuild(Unknown Source)
at oracle.olap.awm.wizard.awbuild.UBuildWizardHelper$1.construct(Unknown Source)
at oracle.olap.awm.ui.SwingWorker$2.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Any ideas on how to solve this?
Note: I don't have a Metalink account and I don't need one!

I created the tables LOCATIONS and YEARS with these scripts:
CREATE TABLE LOCATIONS
(COUNTRY_KEY VARCHAR2(44) NOT NULL,
COUNTRY_NAME VARCHAR2(54),
CONTINENT_KEY VARCHAR2(20) NOT NULL,
CONTINENT_NAME VARCHAR2(30),
COUNTRY_ID NUMBER,
CONTINENT_ID NUMBER NOT NULL,
PRIMARY KEY(COUNTRY_ID));
CREATE TABLE YEARS
(CALENDAR_YEAR_KEY NUMBER NOT NULL,
CALENDAR_YEAR_NAME VARCHAR2(40),
CALENDAR_YEAR_TIME_SPAN NUMBER,
CALENDAR_YEAR_END_DATE DATE,
PRIMARY KEY(CALENDAR_YEAR_KEY));
I also created a table called NAME as the fact table with this script:
CREATE TABLE NAME
(SALES NUMBER,
YEAR NUMBER,
COUNTRY NUMBER,
PRIMARY KEY(YEAR, COUNTRY),
FOREIGN KEY (YEAR) REFERENCES YEARS(CALENDAR_YEAR_KEY),
FOREIGN KEY (COUNTRY) REFERENCES LOCATIONS(COUNTRY_ID));
Then I loaded my data (I verified that the rows were actually loaded). All of this was done through SQL*Plus as user OLAPTRAIN (from the tutorial).
After that, in AWM 11.1.0.7B, I created the 2 dimensions CALENDAR_YEAR and LOCATION.
The mappings for them are:
CALENDAR_YEAR:
Member --> OLAPTRAIN.YEARS.CALENDAR_YEAR_KEY
Long Desc --> OLAPTRAIN.YEARS.CALENDAR_YEAR_NAME
Short Desc --> OLAPTRAIN.YEARS.CALENDAR_YEAR_NAME
time_span --> OLAPTRAIN.YEARS.CALENDAR_YEAR_TIME_SPAN
end_date --> OLAPTRAIN.YEARS.CALENDAR_YEAR_END_DATE
LOCATION:
ALL_LOCATIONS
Member --> 'ALL_LOCATIONS'
Long Desc --> 'All locations'
Short Desc --> 'All locations'
CONTINENT
Member --> OLAPTRAIN.LOCATIONS.CONTINENT_ID
Long Desc --> OLAPTRAIN.LOCATIONS.CONTINENT_NAME
Short Desc --> OLAPTRAIN.LOCATIONS.CONTINENT_NAME
COUNTRY
Member --> OLAPTRAIN.LOCATIONS.COUNTRY_ID
Long Desc --> OLAPTRAIN.LOCATIONS.COUNTRY_NAME
Short Desc --> OLAPTRAIN.LOCATIONS.COUNTRY_NAME
For the levels and hierarchies, see the previous post. Here are the mappings for the cube:
SALES
Sales --> OLAPTRAIN.NAME.SALES
DIMENSIONS
CALENDAR_YEAR
Year --> OLAPTRAIN.YEARS.CALENDAR_YEAR_KEY
join condition --> OLAPTRAIN.NAME.YEAR=OLAPTRAIN.YEARS.CALENDAR_YEAR_KEY
COUNTRY
ALL_LOCATIONS
CONTINENT
COUNTRY --> OLAPTRAIN.LOCATIONS.COUNTRY_ID
Join condition --> OLAPTRAIN.NAME.COUNTRY=OLAPTRAIN.LOCATIONS.COUNTRY_ID
When I press Maintain Cube, the following error appears:
Error class: Express Failure
Server error descriptions:
INI: Error creating a definition manager, Generic at TxsOqConnection::generic<BuildProcess>
INI: XOQ-01711: Aggregation path for hierarchies on a dimension "LOCATION" is inconsistent, Generic at TxsOqStdFormCommand::execute
Could you help me solve this? I have been working on it for 3 days and still cannot find a solution!
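For anyone else debugging XOQ-01711: one common cause is hierarchy source data that does not form a clean aggregation path, for example a child-level key that rolls up to more than one parent. Two quick sanity checks against the tables above (a hedged sketch, not a guaranteed diagnosis):
-- Each COUNTRY_ID should roll up to exactly one CONTINENT_ID
SELECT COUNTRY_ID, COUNT(DISTINCT CONTINENT_ID) AS parent_count
FROM OLAPTRAIN.LOCATIONS
GROUP BY COUNTRY_ID
HAVING COUNT(DISTINCT CONTINENT_ID) > 1;
-- No key value should appear at both the COUNTRY and CONTINENT levels
SELECT CONTINENT_ID FROM OLAPTRAIN.LOCATIONS
INTERSECT
SELECT COUNTRY_ID FROM OLAPTRAIN.LOCATIONS;
If both queries return no rows, the level and hierarchy mappings in AWM (rather than the data) are the more likely culprit.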
Edited by: spiros on Jun 1, 2009 4:46 AM

Similar Messages

  • Problem Maintaining CUBE using AWM

    When I try to maintain a cube using AWM: at first there was no problem and the maintain task ran well, but now, after 3 or 4 days, when I try to maintain my cube I get the error message "ORA-00933: SQL command not properly ended". This error message is generated for a new cube and for previously maintained cubes too. I don't understand why this error is generated. Please help me if anyone knows how to recover from this error.

    Hi,
    You are probably hitting bug 5757454. Please do the following to solve the problem:
    1. DELETE FROM olapsys.xml_load_log;
    2. DROP SEQUENCE "OLAPSYS"."XML_LOADID_SEQUENCE";
    -- This is how the sequence was defined in OH/olap/admin/cmwinst.sql originally:
    -- create sequence XML_LOADID_SEQUENCE;
    -- In testing, we used the following explicit statement instead of the above one:
    CREATE SEQUENCE "OLAPSYS"."XML_LOADID_SEQUENCE" MINVALUE 1 MAXVALUE
    999999999999999999999999999 INCREMENT BY 1 START WITH 1 CACHE 20 NOORDER NOCYCLE;
    3. The following line is VERY important:
    GRANT SELECT ON olapsys.XML_LOADID_SEQUENCE TO PUBLIC;
    4. Now try to maintain a dimension, cube or AW.
    It worked for me :)
    Regards,
    Jarek Przybyszewski

  • Error when attempting to maintain cube by submitting task to Job Queue

    Hi,
    I've been working with Cubes/Dimensions on AWM 10.2.0.3 for the last little while and have been maintaining cubes by selecting "Run maintenance task immediately in this session". This has been working reasonably well for me.
    I am now looking at how this can be put into production and am trying to use the "Submit the task to the Job Queue" selection. When I choose this option and select the "Run immediately" radio item I have no problems. However, when I choose to run it in the future I receive the following error:
    Action BUILDDATABASE failed on object OLAPBI.DUBDEV_OLAPBI
    oracle.AWXML.AWException: Action BUILDDATABASE failed on object OLAPBI.DUBDEV_OLAPBI
    at oracle.AWAction.BuildDatabase.Execute(BuildDatabase.java:737)
    at oracle.olap.awm.wizard.awbuild.BuildWizardHelper.runBuild(BuildWizardHelper.java:275)
    at oracle.olap.awm.navigator.node.modeler.cube.ModelerCubeNode.actionPerformed(ModelerCubeNode.java:480)
    at javax.swing.AbstractButton.fireActionPerformed(Unknown Source)
    at javax.swing.AbstractButton$ForwardActionEvents.actionPerformed(Unknown Source)
    at javax.swing.DefaultButtonModel.fireActionPerformed(Unknown Source)
    at javax.swing.DefaultButtonModel.setPressed(Unknown Source)
    at javax.swing.AbstractButton.doClick(Unknown Source)
    at javax.swing.plaf.basic.BasicMenuItemUI.doClick(Unknown Source)
    at javax.swing.plaf.basic.BasicMenuItemUI$MouseInputHandler.mouseReleased(Unknown Source)
    at java.awt.Component.processMouseEvent(Unknown Source)
    at java.awt.Component.processEvent(Unknown Source)
    at java.awt.Container.processEvent(Unknown Source)
    at java.awt.Component.dispatchEventImpl(Unknown Source)
    at java.awt.Container.dispatchEventImpl(Unknown Source)
    at java.awt.Component.dispatchEvent(Unknown Source)
    at java.awt.LightweightDispatcher.retargetMouseEvent(Unknown Source)
    at java.awt.LightweightDispatcher.processMouseEvent(Unknown Source)
    at java.awt.LightweightDispatcher.dispatchEvent(Unknown Source)
    at java.awt.Container.dispatchEventImpl(Unknown Source)
    at java.awt.Window.dispatchEventImpl(Unknown Source)
    at java.awt.Component.dispatchEvent(Unknown Source)
    at java.awt.EventQueue.dispatchEvent(Unknown Source)
    at java.awt.EventDispatchThread.pumpOneEventForHierarchy(Unknown Source)
    at java.awt.EventDispatchThread.pumpEventsForHierarchy(Unknown Source)
    at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
    at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
    at java.awt.EventDispatchThread.run(Unknown Source)
    Caused by: oracle.AWXML.AWException: ***Error Occured in BUILD_DRIVER:
    at oracle.AWAction.BuildDatabase.Execute(BuildDatabase.java:731)
    ... 27 more
    I am only setting the time to a couple of minutes in the future in order to verify that it is working, but I receive this error for the GLOBAL_AW schema as well as my own AW schema.
    Any ideas??
    Thanks in advance,
    LM

    Hi LM,
    I have not seen this specific issue before but there are a few things you can check:
    1) Did you exit AWM before the scheduled job started? You must exit AWM before the job starts, because the job will try to attach the AW in read/write mode, which is the default mode used by AWM. Therefore, if you are still in AWM when the job starts, the job will fail. Usually you will get a more helpful error message that says something like "unable to attach AW".
    2) Try scheduling the job one hour in the future. Then start Enterprise Manager and look at the job queue. You should find a job with the prefix AW$, which will be your job. Look at the details for the job and make sure the program to be executed looks correct, i.e. it actually contains a program and the name of a cube/dimension. I once created a job that for some reason did not contain a PL/SQL command block. No idea why.
    3) If the PL/SQL block is present, try running it from the command line and see what happens, logging in to SQL*Plus as the same user you use for AWM.
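    If you prefer the command line to Enterprise Manager for step 2, a query along these lines should locate the job. This is a hedged sketch that assumes AWM submitted the build through DBMS_SCHEDULER, as the AW$ prefix suggests:
    -- List pending AW build jobs and the action each one will run
    SELECT job_name, state, next_run_date, job_action
    FROM dba_scheduler_jobs
    WHERE job_name LIKE 'AW$%';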
    Hope this helps
    Keith Laker
    Oracle EMEA Consulting
    BI Blog: http://oraclebi.blogspot.com/
    DM Blog: http://oracledmt.blogspot.com/
    BI on Oracle: http://www.oracle.com/bi/
    BI on OTN: http://www.oracle.com/technology/products/bi/
    BI Samples: http://www.oracle.com/technology/products/bi/samples/

  • Analytic Workspace Manager possible bug: Maintain Cube hangs

    Hi All,
    I am a newbie; I have been following the tutorial "Building OLAP 11g Cubes": http://www.oracle.com/technology/obe/olap_cube/buildicubes.htm
    After the step "Maintain Cube SALES_CUBE", the information box "Loading facts for cube SALES_CUBE..." has been on the screen for the last 4 hours.
    Is this normal? Should I kill the process and start again?
    I am running Oracle 11g Enterprise Edition Release 11.1.0.7.0 on a Virtual Machine with Windows Server 2008 Standard SP1 and 1 GB RAM.
    Analytic Workspace Manager is 11.2.0.1.0, running on Windows XP SP3.
    Any help is much appreciated

    I'm getting a similar problem: I cannot maintain cubes that worked fine yesterday.
    An error has occurred on the server
    Error class: Express Failure
    Server error descriptions:
    INI: error creating a definition manager, Generic at TxsOqConnection::generic<BuildProcess>
    INI: XOQ-01706: An unexpected condition occurred during the build: "TxsOqLoadCommandProcessor::generatePartitionListSource-unsupported join condition-table"., Generic at xsoqBuild
    at oracle.olapi.data.source.DataProvider.callGeneric(Unknown Source)
    at oracle.olapi.data.source.DataProvider.callGeneric(Unknown Source)
    at oracle.olapi.data.source.DataProvider.executeBuild(Unknown Source)
    at oracle.olap.awm.wizard.awbuild.UBuildWizardHelper$1.construct(Unknown Source)
    at oracle.olap.awm.ui.SwingWorker$2.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
    It all started after I tried to add a calculated measure to an existing cube (something I have done before in 11g... a feature I love).
    This is too bad; so far I've been loving 11g OLAP compared to 10g OLAP. But now 11g is starting to crop up these bullshit bugs as well. I guess OLAP is still too crappy to rely on for production... good thing I won't recommend a rollout of this product to my clients. It's a great tool for having fun with development, but using Oracle OLAP + AWM in a real company is career suicide.
    AWM 11.2.0.1

  • Excessive time when maintaining cube

    Hi there,
    I have a star schema with:
    a) 2 dimensions:
    year with hierarchy: CALENDAR_YEAR ------------> all_years
    location with hierarchy: COUNTRY -------------> CONTINENT -----------> ALL_COUNTRIES
    b) 6 partitioned cubes (uncompressed)
    Each cube contains measures with different data types. In particular, each measure may have 1 of the following 3 data types:
    varchar2 ------------> with aggregation MAXIMUM
    int or dec ------------> with aggregation SUM (the cube's aggregation)
    date ------------> with aggregation non-additive
    When I run Maintain Cube (for 1 of the cubes I have), I leave my PC for 2 hours to load the data, and after that it doesn't finish; it just keeps loading. So the data load never completes. I have been at my PC for a week trying to solve this but nothing has changed. What could the problem be?
    Notes:
    (A)
    I checked the NLS parameters and the data's format and they are compatible. See for yourself:
    SQL> select value from V$NLS_Parameters;
    VALUE
    AMERICAN
    AMERICA
    $
    AMERICA
    GREGORIAN
    DD-MON-RR
    AMERICAN
    WE8MSWIN1252
    BINARY
    HH.MI.SSXFF AM
    VALUE
    DD-MON-RR HH.MI.SSXFF AM
    HH.MI.SSXFF AM TZR
    DD-MON-RR HH.MI.SSXFF AM TZR
    $
    AL16UTF16
    BINARY
    BYTE
    FALSE
    19 rows selected.
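    As a side note, selecting the parameter names alongside the values makes this output much easier to verify, since V$NLS_PARAMETERS carries both columns:
    SELECT parameter, value FROM v$nls_parameters;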
    (B)
    The mappings are also OK; I checked them. For each hierarchy, I gave each attribute values that prevent data conflicts. I think the `all_years` and `all_countries` levels are also OK, as they include everything.
    (C)
    My computer is an Intel Pentium 4 with 2x512 MB RAM. I am running Oracle 11g on Windows XP Professional Service Pack 2.
    Thanks in Advance

    I need uncompressed cubes because, as I said, I have non-numeric data types in my data: dates, numbers and varchar2.
    Anyway, I don't understand what you mean by dimension members, but I suppose you are referring to the levels and the hierarchy of each dimension. I have already included that in my previous post; check it! If you mean something else, let me know!
    As for the amount of data:
    YEAR: 2 RECORDS (1990 and 1991)
    CREATE TABLE YEARS
    (CALENDAR_YEAR_KEY NUMBER NOT NULL,
    CALENDAR_YEAR_NAME VARCHAR2(40),
    CALENDAR_YEAR_TIME_SPAN NUMBER,
    CALENDAR_YEAR_END_DATE DATE,
    PRIMARY KEY(CALENDAR_YEAR_KEY));
    LOCATION: 256 RECORDS (It also contains a CONTINENT_ID whose values range from 350 to 362, representing all oceans, continents and the world. COUNTRY_ID ranges from 1 to 253.)
    CREATE TABLE LOCATIONS
    (COUNTRY_KEY VARCHAR2(44) NOT NULL,
    COUNTRY_NAME VARCHAR2(54),
    CONTINENT_KEY VARCHAR2(20) NOT NULL,
    CONTINENT_NAME VARCHAR2(30),
    COUNTRY_ID NUMBER,
    CONTINENT_ID NUMBER NOT NULL,
    PRIMARY KEY(COUNTRY_ID));
    MEASURES: 498 RECORDS (249 records for 1990 and 249 records for 1991)
    CREATE TABLE MEASURES
    (GEOGRAPHY_TOTAL_AREA DEC(11,1),
    GEOGRAPHY_LOCAL_AREA DEC(11,1),
    GEOGRAPHY_ARABLE_LAND DEC(5,4),
    GEOGRAPHY_PERMANENT_CROPS DEC(5,4),
    ... (various other measures)
    MEASURES_YEAR NUMBER,
    MEASURES_COUNTRY NUMBER,
    PRIMARY KEY(MEASURES_YEAR, MEASURES_COUNTRY),
    FOREIGN KEY (MEASURES_YEAR) REFERENCES YEARS(CALENDAR_YEAR_KEY),
    FOREIGN KEY (MEASURES_COUNTRY) REFERENCES LOCATIONS(COUNTRY_ID));
    IN TOTAL: 268 measures.
    But to make data loading easier, I created 6 cubes in Analytic Workspace Manager, each one containing:
    GEOGRAPHY: 51 attributes
    PEOPLE: 24 attributes
    ECONOMY: 40 attributes
    GOVERNMENT: 113 attributes
    COMMUNICATION: 28 attributes
    DEFENSE FORCES: 11 attributes
    (If I miscounted anywhere, forgive me; I only wanted to show you that there are many measures.)
    So, is there anything I can do to solve the problem?

  • Maintain Cube Not Working

    Brijesh,
    I built my dimensions, levels and hierarchies successfully and also created the cube. Now that I've built my measures and run the maintenance, I'm not seeing any values in them even though I know I should.
    Based on my mapping, the keys from the fact are going to the right dimensions (and I even made a simpler, just one dimension --> measure cube as well), but no success. There are cases where I know I shouldn't get any data (based on selected values), but when I make a valid selection I see only 0.00 being displayed.
    Can you tell where I may have gone wrong here? Are the values made available for selection (the attributes) ONLY supposed to be the same one-to-one values available in the fact table?
    **I'm using the simple SUM aggregate function for my measures, and pretty much all the default configurations given.
    Brijesh Gaur
    Attributes are something related to a dimension; they are purely a property of the dimension, not the fact. Now, you said the data is not visible in the cube and you are getting 0.00 even in the simpler case (a one-dimensional cube). There are many causes for values not showing up in a cube; some are mentioned below.
    1. All records were rejected in the cube maintenance process. To check this, look in the olapsys.xml_load_log table and see if you can find any rejected records.
    2. There is some breakage in the dimension's hierarchy. That also prevents data from summing up.
    Have you tried the Global sample available on OTN? That would be a good starting point for you.
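    For point 1, a minimal check against that log table could look like this (the column layout varies by release, so select everything and scan for rejection messages):
    SELECT * FROM olapsys.xml_load_log;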
    You can check the cube as below to find out whether it is loaded or not. Consider your one-dimensional cube. Find a member that exists in the dimension and also has some fact value associated with it.
    1. Limit your dimension to that value, like this:
    lmt <dim name> to '<value>'
    2. Then check the data in the cube. For a compressed composite cube:
    rpr cubename_prt_topvar
    For an uncompressed cube:
    rpr cubename_stored
    Step 2 should show the same value that is available in the fact table.
    Thanks,
    Brijesh
    mikeyp
    Brijesh,
    Thanks for your suggestions; here are my results based on them.
    1. No records were rejected after running cube maintenance.
    2. I didn't limit my dimension to a specific value as you recommended, but I made my member the same as my Long and Short description attributes using AWM. (It's a flat dimension, i.e. no level or hierarchy, since the dimension only has one value/meaningful field.)
    Based on those steps, I still didn't get the results I was looking for. The fact table has five values for that one dimension, and I'm seeing 0.00 for 4 of them and an inaccurate value for the last one (this after comparing against a simple aggregate query on the fact table).
    Do you have any other possible reasons/solutions?
    **Loading the Global Schema into our dev environment is out of my hands unfortunately, so that's the reason for the prolonged learning curve.

    Brijesh,
    Here are the results of what you suggested:
    1. Creating a test dim and fact table with the simple case you provided was successful, and AWM was able to map the same values to a cube created on top of that model.
    2. I took it a step further and changed the dim values to be the same as the existing dim table.
    2b. I also replaced the test fact table values to mimic the existing values, so it would match what's available in the dim table, and here's where the fun/mystery begins.
    Scenario 1:
    I created the fact like this...........select dim value, sum(msr) from <existing fact table>
    As you can easily tell, my values were already aggregated in the table, and they also matched perfectly in the cube created by AWM - no issue.
    Scenario 2:
    Created the fact like this............select dim value, msr from <existing fact table>
    Clearly my values are no longer aggregated, but broken down across multiple occurrences of dim values; I did this so that I could verify that the "sum" would actually work when used in AWM.
    The results from scenario 2 led me back to the same issue as before, i.e. the values weren't being rolled up when the cube was created. No records were rejected, there was only ONE measure value showing up (and it was still incorrect), and everything else was 0.00.
    I retrieved this error from the command program that runs in the background. It was generated right after running Maintain Cube:
    <the system time> TRACE: In oracle.dss.metadataManager.............MDMMetadataProviderImpl92::..........MetadataProvider is created
    <the system time> PROBLEM: In oracle.dss.metadataManager.........MDMMetadataProviderImpl92::fillOlapObjectModel: Unable to retrieve AW metadata. Reason ORA-942
    BI Beans Graph version [3.2.3.0.28]
    <the system time> PROBLEM: In oracle.dss.graph.GraphControllerAdapter::public void perspectiveEvent( TDGEvent event ): inappropriate data: partial data is null
    <the system time> PROBLEM: In oracle.dss.graph.BILableLayout::logTruncatedError: legend text truncated
    Please tell me this helps shed some light on the main reason for no values coming back; we really need to move forward with using Oracle cubes where we are.
    Thanks
    Mike

  • ORA-37999: Serious OLAP error - while running Maintain Cube...

    While running "Maintain cube..." I'm getting error
    "Action BUILDDATABASE failed on object TEST_DB.NPE"
    Below is the stack trace. It states its a "ORA-37999: Serious OLAP error". Am I missing something or should I contact Oracle support like the statement said?
    oracle.AWXML.AWException: Action BUILDDATABASE failed on object TEST_DB.NPE
    at oracle.AWAction.BuildDatabase.Execute(BuildDatabase.java:530)
    at oracle.olap.awm.wizard.awbuild.BuildWizardHelper$1.construct(BuildWizardHelper.java:185)
    at oracle.olap.awm.ui.SwingWorker$2.run(SwingWorker.java:109)
    at java.lang.Thread.run(Unknown Source)
    Caused by: oracle.AWXML.AWException: oracle.express.ExpressServerException
    java.sql.SQLException: ORA-37999: Serious OLAP error: UPDATE. Please contact Oracle Technical Support.
    ORA-03238: unable to extend LOB segment HPMP_WH.SYS_LOB0000043088C00004$$ subpartition SYS_LOB_SUBP73 by 64 in tablespace USERS
    ORA-06512: at "SYS.GENCONNECTIONINTERFACE", line 70
    ORA-06512: at line 1
    at oracle.AWXML.AWConnection.executeCommand(AWConnection.java:279)
    at oracle.AWAction.BuildDatabase.Execute(BuildDatabase.java:513)
    ... 3 more
    Caused by: oracle.express.ExpressServerException
    java.sql.SQLException: ORA-37999: Serious OLAP error: UPDATE. Please contact Oracle Technical Support.
    ORA-03238: unable to extend LOB segment HPMP_WH.SYS_LOB0000043088C00004$$ subpartition SYS_LOB_SUBP73 by 64 in tablespace USERS
    ORA-06512: at "SYS.GENCONNECTIONINTERFACE", line 70
    ORA-06512: at line 1
    at oracle.express.spl.SPLExecutor.executeCommand(SPLExecutor.java:155)
    at oracle.AWXML.AWConnection.executeCommand(AWConnection.java:268)
    ... 4 more

    Well... it was indeed a tablespace error... no need to contact Oracle.
    It worked fine after the DBAs added more space to the tablespace!
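    For anyone hitting the same "unable to extend" condition, the DBA-side fix is along these lines; the datafile path and sizes here are illustrative only:
    ALTER TABLESPACE USERS
      ADD DATAFILE '/u01/oradata/orcl/users02.dbf' SIZE 2G
      AUTOEXTEND ON NEXT 256M MAXSIZE UNLIMITED;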

  • Problems maintaining network connectivity in hotels

    I have a problem maintaining internet connectivity when I use hotel WiFi services where you have to "login" first via the web browser. I don't have a problem with the WiFi per se, and I have no problems with initial connectivity -- I can log in and browse the web without problems. But if my MacBook goes to sleep, I can't browse, get email, etc. when it wakes up.
    I don't have this problem at home - only when I travel to places where I have to "login" first. This has been happening consistently since I got the MacBook 10 months ago, and at various hotel chains around the country. When I have another laptop with me (Windows), I don't see this problem. Sometimes on the Windows machine I have to "login" again after it wakes up, but I never get this opportunity on my MacBook -- the browser just hangs there, "loading" indefinitely.
    The only way I've found to fix this is to reboot my MacBook. I've tried Googling for a solution, or even anybody else who has this problem, and I've turned up nothing! Am I really the only person having this problem?!? LOL...

    Unfortunately this is a fact of life with these types of hotel systems. Live with it, or set your Mac to "Computer never sleep" when you are traveling and this occurs.

  • Help me! AWM "maintain cube ... " performance: too slow with a small cube

    Hi all,
    I have 3 dimensions: TIMES, PRODUCTS, STORES.
    The hierarchy of TIMES is, in order: year, quarter, month, week, day.
    The hierarchy of PRODUCTS is, in order: sector, family, sub-family, product.
    The hierarchy of STORES is, in order: area, department, store.
    Each hierarchy is mapped to a data set; the sizes are:
    730 rows (days) for the TIMES dimension
    282 rows (products) for the PRODUCTS dimension
    237 rows (stores) for the STORES dimension
    With these dimensions, I created a cube with 2 measures: QUANTITY and BENEFIT.
    My measures use all 3 dimensions: TIMES, PRODUCTS, STORES.
    Finally, I mapped my cube to a data set containing: store, day, product, quantity, benefit.
    This data set has 3921 rows.
    I chose SUM as the aggregation rule for all dimensions.
    Yet when I do "Maintain cube...", AWM takes a long time (approximately 2 hours to maintain my cube)!
    Does anyone know why it takes so long?
    What can I do to speed it up?
    Should I modify parameters in AWM or in my cube?
    Thanks in advance!

    Thanks all for your replies!
    I have already maintained the cube following your advice: I increased the rollback and temp space and changed the cube's data type from NUMBER to DECIMAL.
    But I have other questions. I hope you can help me!
    1. What is the effect of the option "Choose the regions of the cube to be presummarized and stored in the analytic workspace" in the "Summarize To" table when you create a cube? If I check this option for all hierarchy levels, will the calculation time for subsequent queries be optimized and therefore faster?
    (I checked this option when creating my cube, but when I query a level of a particular hierarchy it takes very long to get the result. I use software that can connect to and query an Oracle OLAP database.)
    2. I maintained the cube last week. Today I created a new cube and added more data sets, but I can't maintain the cube again. This is the error message; can you explain why?
    (I have already increased the rollback and temp space. To see how and what I did, please view this thread:
    Error when I do "Maintain Cubes ...")
    java.sql.SQLException: ORA-00933: SQL command not properly ended
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:111)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:330)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:287)
    at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:742)
    at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:206)
    at oracle.jdbc.driver.T4CStatement.executeForDescribe(T4CStatement.java:789)
    at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1030)
    at oracle.jdbc.driver.T4CStatement.executeMaybeDescribe(T4CStatement.java:829)
    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1123)
    at oracle.jdbc.driver.OracleStatement.executeInternal(OracleStatement.java:1678)
    at oracle.jdbc.driver.OracleStatement.execute(OracleStatement.java:1644)
    at oracle.olap.awm.dataobject.dialog.olapcatalog.TableDialog.populateTableDataModel(TableDialog.java:164)
    at oracle.olap.awm.dataobject.dialog.olapcatalog.TableDialog.initialiseAndPopulate(TableDialog.java:68)
    at oracle.olap.awm.dataobject.dialog.olapcatalog.TableDialog.<init>(TableDialog.java:57)
    at oracle.olap.awm.wizard.awbuild.BuildWizardHelper.runBuild(BuildWizardHelper.java:252)
    at oracle.olap.awm.navigator.node.modeler.cube.ModelerMeasureNode.actionPerformed(ModelerMeasureNode.java:226)
    at javax.swing.AbstractButton.fireActionPerformed(Unknown Source)
    at javax.swing.AbstractButton$ForwardActionEvents.actionPerformed(Unknown Source)
    at javax.swing.DefaultButtonModel.fireActionPerformed(Unknown Source)
    at javax.swing.DefaultButtonModel.setPressed(Unknown Source)
    at javax.swing.AbstractButton.doClick(Unknown Source)
    at javax.swing.plaf.basic.BasicMenuItemUI.doClick(Unknown Source)
    at javax.swing.plaf.basic.BasicMenuItemUI$MouseInputHandler.mouseReleased(Unknown Source)
    at java.awt.Component.processMouseEvent(Unknown Source)
    at java.awt.Component.processEvent(Unknown Source)
    at java.awt.Container.processEvent(Unknown Source)
    at java.awt.Component.dispatchEventImpl(Unknown Source)
    at java.awt.Container.dispatchEventImpl(Unknown Source)
    at java.awt.Component.dispatchEvent(Unknown Source)
    at java.awt.LightweightDispatcher.retargetMouseEvent(Unknown Source)
    at java.awt.LightweightDispatcher.processMouseEvent(Unknown Source)
    at java.awt.LightweightDispatcher.dispatchEvent(Unknown Source)
    at java.awt.Container.dispatchEventImpl(Unknown Source)
    at java.awt.Window.dispatchEventImpl(Unknown Source)
    at java.awt.Component.dispatchEvent(Unknown Source)
    at java.awt.EventQueue.dispatchEvent(Unknown Source)
    at java.awt.EventDispatchThread.pumpOneEventForHierarchy(Unknown Source)
    at java.awt.EventDispatchThread.pumpEventsForHierarchy(Unknown Source)
    at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
    at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
    at java.awt.EventDispatchThread.run(Unknown Source)
    Thanks in advance!

  • AWM maintain CUBE not refreshing data

    Hi,
    I have refreshed the data that feeds my cube, and it seems as if AWM is not deleting the old cube data. When I view the source table for the cube in the mapping I can see there is only 1 row, but after I maintain the cube, tick the "delete dimension members" checkboxes and run the process immediately in this session, the original data still exists. Any help appreciated.
    Cheers,
    Brandon

    Hello Brandon,
    My solution is quite awkward but it works: you have to maintain the cube's measure data in a way where the result is 0 processed records (weird, huh?).
    To do that, you can empty one dimension's data (members) by emptying the table/view that it is mapped to, giving you a memberless dimension. Then maintain the cube's measures (do not include the dimensions). This will give you a result of 0 processed records, with all records rejected, emptying the cube in the process.
    Then fill the table/view mapped to the memberless dimension back up with data, so the dimension gets its members back when you re-maintain it, and then maintain the cube data again.
    There's probably a more elegant solution out there, but this one works for sure.
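    If the database is 11g, a less manual alternative may be to clear the cube explicitly before reloading. A hedged sketch using DBMS_CUBE; the cube name is illustrative:
    BEGIN
      -- CLEAR wipes the stored values, then LOAD and SOLVE rebuild the cube
      DBMS_CUBE.BUILD('MYSCHEMA.MY_CUBE USING (CLEAR, LOAD, SOLVE)');
    END;
    /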

  • I'm having problems maintaining a VPN connection on iPad. Any thoughts?

    I'm having problems maintaining a VPN connection on iPad. Any thoughts?

    Sorry to be vague, but it just drops out, often citing insufficient bandwidth. However, I don't have the same problem using a 3G iPhone on the same WiFi connection.

  • Problem with Cube Viewer

    Hi techies,
    I am using OEM with Oracle 9.2.0.1.0.
    If I try to start Cube Viewer I get a black screen, and after a minute I get an alert without an error message.
    I found a patch 2323004, which can only be used on Windows, not on SuSE Linux.
    The process is as follows:
    The Oracle Enterprise Manager Cube Viewer starts loading. An error window with a fire alarm symbol and an OK button appears. There is no window title or error text displayed in the window.
    Selecting the OK button closes the Cube Viewer window, and the 9.2.0.1 RDBMS instance is no longer running.
    This problem is handled in <bug:2407685>:
    Apply the following 9.2.0.1 Oracle OLAP patch as described in the patch readme file and reboot the machine (always create a backup before applying a patch):
    Patch number: 2323002
    OLAPI API FAILS ON WINDOWS 2000 & XP
    Product: Oracle OLAP; Version: Oracle 9.2.0.1
    Platform: MS Windows NT/2000/XP Server
    To download the patch, log in to http://metalink.oracle.com, select the "Patches" button, enter the patch number 2323002 and select the "SUBMIT" button.
    Can anybody tell me if there is a patch for Linux servers?
    I will be thankful for any answer.
    Mehdi

    One-off patch 2323002 is not applicable to Linux. It addresses a bug in the OLAP API due to differences in the behavior of memory management system services between Windows platforms.
    As for your Cube Viewer problem, please make sure the OLAP API jar files on the client match those on the server/database. You can find them in the $ORACLE_HOME/olap/olapi/lib directory. Copy the database express*.jar files to the client OLAPI lib directory. Cube Viewer is based on the OLAP API.

  • Error when I try to maintain cubes

    Hi all,
    I have this error when I try to maintain some cubes with dimensions:
    Failed to Build(Refresh) OLAP_BUILD.MAIN Analytic Workspace.
    ***Error Occurred in BUILD_DRIVER: In __XML_SEQUENTIAL_LOADER: In __XML_UNIT_LOADER: In __XML_RUN_CC_AUTOSOLVE:
    Does anyone know how I can learn more about this error?
    Best Regards

    I think this might be an issue with version 10.2.0.2. I would try applying the 10.2.0.3 patch set and then the additional OLAP "A" patch for your specific platform. The patches can be downloaded from Metalink.
    Regards
    Keith Laker
    Oracle EMEA Consulting
    OLAP Blog: http://oracleOLAP.blogspot.com/
    OLAP Wiki: http://wiki.oracle.com/page/Oracle+OLAP+Option
    DM Blog: http://oracledmt.blogspot.com/
    OWB Blog : http://blogs.oracle.com/warehousebuilder/
    OWB Wiki : http://wiki.oracle.com/page/Oracle+Warehouse+Builder
    DW on OTN : http://www.oracle.com/technology/products/bi/db/11g/index.html

  • Maintain CUBE through back end

    Hi,
    I want to write a procedure which generates a CUBE from the back end. Is it possible? What is the method?
    Aneesh
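    Assuming an 11g database, the usual route is DBMS_CUBE.BUILD, which runs the same load and solve steps that AWM's "Maintain Cube" performs and can be wrapped in a stored procedure for back-end scheduling. A hedged sketch; the schema, cube and procedure names are illustrative:
    CREATE OR REPLACE PROCEDURE refresh_my_cube IS
    BEGIN
      -- LOAD reads the mapped source tables, SOLVE aggregates the cube
      DBMS_CUBE.BUILD('MYSCHEMA.MY_CUBE USING (LOAD, SOLVE)');
    END;
    /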

    hi
    I have one small doubt. The PO_REQUISITION_HEADERS_ALL table has PREPARER_ID, while the PO_ACTION_HISTORY table has EMPLOYEE_ID and also CREATED_BY. What is the main difference between these three: PREPARER_ID, EMPLOYEE_ID and CREATED_BY?
    Ex:
    select created_by, last_updated_by, PREPARER_ID, creation_date, last_update_date
    from apps.po_requisition_headers_all where requisition_header_id = 6731548
    Output: created_by, last_updated_by, preparer_id, creation_date, last_update_date
    1216, 83867, 72208, 5/26/2009 8:48:59 AM, 5/26/2009 12:41:19 PM
    select created_by, last_updated_by, employee_id, action_code, creation_date, last_update_date
    from apps.po_action_history where object_id = 6731548 order by sequence_num
    CREATED_BY, LAST_UPDATED_BY, EMPLOYEE_ID, ACTION_CODE, CREATION_DATE, LAST_UPDATE_DATE
    1216, 1216, 72208, IMPORT, 5/26/2009 8:48:59 AM, 5/26/2009 8:49:27 AM
    1216, 1216, 72208, SUBMIT, 5/26/2009 8:55:41 AM, 5/26/2009 8:55:41 AM
    1216, 1216, 72208, RESERVE, 5/26/2009 8:55:41 AM, 5/26/2009 8:55:42 AM
    1216, 1216, 72208, FORWARD, 5/26/2009 8:55:42 AM, 5/26/2009 8:55:43 AM
    1216, 83867, 72208, APPROVE, 5/26/2009 8:55:43 AM, 5/26/2009 12:41:13 PM
    My main problem is: why did the same employee forward to himself to approve the requisition, yet his last_updated_by is different?
    I do not understand the functionality of this.
    Thanks
    Thanks

  • Problem on cube while reporting

    Hello SDNers,
    I want to know: when we report on a cube, where does the data come from, the E fact table or the F fact table?
    And if I compress a cube, what happens? Where does the data come from then, the E or the F fact table?
    If two requests were compressed and the third request is now in the F table, and I want to report on this, which requests should appear in the reporting?
    Thanks in advance
    sathish

    Hi,
    Compressing InfoCubes
    Before compression, reports read the data from the F table.
    After compression, they read the data from the E table, as data is moved from the F table to the E table.
    After compression, when queries fetch both compressed and uncompressed data (data from both E and F), the query hits both tables.
    Use
    When you load data into the InfoCube, entire requests can be inserted at the same time. Each of these requests has its own request ID, which is included in the fact table in the packet dimension. This makes it possible to pay particular attention to individual requests. One advantage of the request ID concept is that you can subsequently delete complete requests from the InfoCube.
    However, the request ID concept can also cause the same data record (all characteristics agree, with the exception of the request ID) to appear more than once in the fact table. This unnecessarily increases the volume of data, and reduces performance in reporting, as the system has to perform aggregation using the request ID every time you execute a query.
    Using compressing, you can eliminate these disadvantages, and bring data from different requests together into one single request (request ID 0).
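    Conceptually, compression amounts to the SQL below. This is only a schematic with made-up table and column names, not BW's actual implementation:
    -- All characteristics except the request ID form the grouping key;
    -- rows differing only in request ID collapse into one E-table row.
    INSERT INTO e_fact_table (dim_key1, dim_key2, keyfigure)
    SELECT dim_key1, dim_key2, SUM(keyfigure)
    FROM f_fact_table
    GROUP BY dim_key1, dim_key2;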
    This function is critical, as the compressed data can no longer be deleted from the InfoCube using its request ID. You must be absolutely certain that the data loaded into the InfoCube is correct.
    Features
    You can choose request IDs and release them to be compressed. You can schedule the function immediately or in the background, and can schedule it with a process chain.
    Compressing one request takes approx. 2.5 ms per data record.
    With non-cumulative InfoCubes, compression has an additional effect on query performance. Also, the marker for non-cumulatives in non-cumulative InfoCubes is updated. This means that, on the whole, less data is read for a non-cumulative query, and the reply time is therefore reduced. See also Modeling of Non-Cumulatives with Non-Cumulative Key Figures.
    If you run the compression for a non-cumulative InfoCube, the summarization time (including the time to update the markers) will be about 5 ms per data record.
    If you are using an Oracle database as your BW database, you can also carry out a report using the relevant InfoCube in reporting while the compression is running. With other manufacturers’ databases, you will see a warning if you try to execute a query on an InfoCube while the compression is running. In this case you can execute the query once the compression has finished executing.
    If you want to avoid the InfoCube containing entries whose key figures are zero values (in reverse posting for example) you can run a zero-elimination at the same time as the compression. In this case, the entries where all key figures are equal to 0 are deleted from the fact table.
    Zero-elimination is permitted only for InfoCubes, where key figures with the aggregation behavior ‘SUM’ appear exclusively. In particular, you are not permitted to run zero-elimination with non-cumulative values.
    For non-cumulative InfoCubes, you can ensure that the non-cumulative marker is not updated by setting the indicator No Marker Updating. You have to use this option if you are loading historic non-cumulative value changes into an InfoCube after an initialization has already taken place with the current non-cumulative. Otherwise the results produced in the query will not be correct. For performance reasons, you should compress subsequent delta requests.
    Compression is done to improve the performance. When data is loaded into the InfoCube, its done request wise.Each request ID is stored in the fact table in the packet dimension.This unnecessarily increases the volume of data, and reduces performance in reporting, as the system has to perform aggregation using the request ID every time you execute a query.When you compress the request from the cube, the data is moved from F Fact Table to E Fact Table.Using compressing, you can eliminate these disadvantages, and bring data from different requests together into one single request (request ID 0). i.e. all the data will be stored at the record level & no request will then be available. This also removes the SID's, so one less Join will be there while data fetching.
    The compressed data can no longer be deleted from the InfoCube using its request ID. You must be absolutely certain that the data loaded into the InfoCube is correct before compressing.
    Note 407260 - FAQs: Compression of InfoCubes
    Summary
    Symptom
    This note gives some explanation of the compression of InfoCubes with Oracle as the database platform.
    Compression on other database platforms might differ from this.
    Other terms
    InfoCubes, Compression, Aggregates, F-table, E-table, partitioning,
    ora-4030, ORACLE, Performance, Komprimierung
    Reason and Prerequisites
    Questions:
    1. What is the extent of compression we should expect from the portion we are loading?
    2. When the compression is stopped, will we have lost any data from the cube?
    3. What is the optimum size a chunk of data to be compressed?
    4. Does compression lock the entire fact table, even if only selected records are being compressed?
    5. Should compression run with the indexes on or off?
    6. What can I do if the performance of the compression is bad or becomes bad? Or what can I do if query performance after compression is bad?
    Solution
    In general:
    First of all you should check whether the P-index on the E fact table exists. If this index is missing, compression will be practically impossible. If it does not exist, you can recreate it by activating the cube again. Please check the activation log to see whether the creation was successful.
    There is one exception to this rule: if only one request is chosen for compression and it is the first request to be compressed for that cube, then the P-index is dropped and recreated automatically after the compression. This is done for performance reasons.
    Answers:
    1. The compression ratio is completely determined by the data you are loading. Compression only means that data tuples which have an identical 'logical' key in the fact table (the logical key includes all the dimension identities except the 'technical' package dimension) are combined into a single record.
    So, for example, if you are loading data on a daily basis but your cube only contains the month as its finest time characteristic, you might get a compression ratio of 1/30.
    At the other extreme, if every record you are loading is different from the records you have loaded before (e.g. each record contains a sequence number), then the compression ratio will be 1, which means there is no compression at all. Nevertheless, even in this case you should compress the data if you are using partitioning on the E fact table, because partitioning is only used for compressed data. Please see CSS note 385163 for more details about partitioning.
    If you are absolutely sure that there are no duplicates in the records, you can consider the optimization described in CSS note 0375132.
    2. The data should never become inconsistent by running a compression. Even if you stop the process manually, a consistent state should be reached. But whether the requests (or at least some of them) end up compressed or the changes are rolled back depends on the phase the compression was in when it was canceled.
    The compression of a single request can be divided into 2 main phases.
    a) In the first phase the following actions are executed:
    Insert or update every row of the request that should be compressed into the E fact table
    Delete the entry for the corresponding request from the package dimension of the cube
    Change the 'compr-dual' flag in the table rsmdatastate
    Finally, a COMMIT is executed.
    b) In the second phase the remaining data in the F fact table is deleted.
    This is done either by a 'DROP PARTITION' or by a 'DELETE'. As this data is not accessible in queries (the entry in the package dimension is deleted), it does not matter if this phase is terminated.
    Concluding this:
    If the process is terminated while the compression of a request is in phase (a), the data is rolled back; but if the compression is terminated in phase (b), no rollback is executed. The only problem here is that the F fact table might contain unusable data. This data can be deleted with the function module RSCDS_DEL_OLD_REQUESTS. To run this function module you only have to enter the name of the InfoCube. If you want, you can also specify the dimension ID of the request you want to delete (if you know this ID); if no ID is specified, the module deletes all the entries without a corresponding entry in the package dimension.
    If you are compressing several requests in a single run and the process breaks during the compression of request x, all smaller requests are committed and only request x is handled as described above.
    3. The only size limitation for the compression is that the complete rollback information for the compression of a single request must fit into the rollback segments. For every record in the request to be compressed, either the corresponding record in the E fact table is updated or the record is newly inserted. As the deletion is normally done with a 'DROP PARTITION', it is not critical for the rollback. As both operations are not very expensive (in terms of space), this should not be critical.
    Performance is heavily dependent on the hardware. As a rule of thumb, you might expect to compress about 2 million rows per hour if the cube does not contain non-cumulative key figures; if it does contain such key figures, expect about 1 million rows per hour.
    4. It is not allowed to run two compressions concurrently on the same cube. But, for example, loading into a cube while a compression runs on it should be possible, as long as you don't try to compress requests which are still in the phase of loading/updating data into the cube.
    5. Compression is forbidden if a selective deletion is running on the cube, and compression is forbidden while an attribute/hierarchy change run is active.
    6. It is very important that either the 'P' index or the primary index '0' on the E fact table exists during the compression.
    Please verify the existence of this index with transaction DB02. Without one of these indexes the compression will not run!!
    If you are running queries in parallel with the compression, you have to leave the secondary indexes active.
    If you encounter the error ORA-4030 during the compression, you should drop the secondary indexes on the E fact table. This can be achieved using transaction SE14. If you use the tabstrip in the Administrator Workbench, the secondary indexes on the F fact table will be dropped too. (If there are requests which are smaller than 10 percent of the F fact table, then the indexes on the F fact table should stay active, because reading the requests can then be sped up by using the secondary index on the package dimension.) After that you should start the compression again.
    Deleting the secondary indexes on the E fact table of an InfoCube that should be compressed may be useful (sometimes even necessary) to prevent resource shortages on the database. Since the secondary indexes are needed for reporting (not for compression), queries may take much longer while the secondary E table indexes are not there.
    If you want to delete the secondary indexes only on the E fact table, you should use the function RSDU_INFOCUBE_INDEXES_DROP (and specify the parameters I_INFOCUBE = ). If you want to rebuild the indexes, use the function RSDU_INFOCUBE_INDEXES_REPAIR (same parameter as above).
    To check which indexes exist, you can use transaction RSRV and select the elementary database check for the indexes of an InfoCube and its aggregates. That check is more informative than the lights on the performance tabstrip in the InfoCube maintenance.
    7. As already stated above, it is absolutely necessary that a concatenated index over all dimensions exists. This index normally has the suffix 'P'. Without this index a compression is not possible! If the index does not exist, the compression tries to build it. If that fails (for whatever reason), the compression terminates.
    If you do not normally drop the secondary indexes during compression, then these indexes might degenerate after some compression runs, and therefore you should rebuild them from time to time. Otherwise you might see performance degradation over time.
    As the distribution of data between the E fact table and the F fact table is changed by a compression, query performance can be influenced significantly. Normally compression should lead to better performance, but you have to take care that the statistics are up to date, so that the optimizer can choose an appropriate access path. This means that after the first compression of a significant amount of data, the E fact table of the cube should be analyzed, because otherwise the optimizer still assumes that this table is empty. For the same reason, you should not analyze the F fact table when all the requests are compressed, because then the optimizer would assume that the F fact table is empty. Instead, analyze the F fact table when a normal amount of uncompressed requests is in the cube.
    Header Data
    Release Status: Released for Customer
    Released on: 05-17-2005 09:30:44
    Priority: Recommendations/additional info
    Category: Consulting
    Primary Component: BW-BEX-OT-DBIF-CON Condensor
    Secondary Components: BW-SYS-DB-ORA BW ORACLE
    https://forums.sdn.sap.com/click.jspa?searchID=7281332&messageID=3423284
    https://forums.sdn.sap.com/click.jspa?searchID=7281332&messageID=3214444
    http://help.sap.com/saphelp_nw04/helpdata/en/80/1a6466e07211d2acb80000e829fbfe/frameset.htm
    Thanks,
    JituK
