Point and dimension

When inserting a POINT-type geometry into an SDO_GEOMETRY column, do I really have to use the third dimension?
I have added a row to the geom_metadata table with 2 dim_elements (X and Y) for a geometry table.
My line and polygon data are working fine.
The point requires a Z element (even if it is null) and the index does not like this.
ORA-29875: failed in the execution of the ODCIINDEXINSERT routine
ORA-13000: dimension number is out of range
If I update my dimension info (add a Z dim_element) in geom_metadata, then my index re-creation fails because of incompatible old data.
Where am I going wrong?

Hi,
You can pass a null value as the third (Z) value. Say, for example, you have the following table with an SDO_GEOMETRY column:
CREATE TABLE cola_market (
mkt_id NUMBER PRIMARY KEY,
name VARCHAR2(32),
shape MDSYS.SDO_GEOMETRY);
So, to insert a point value, you can try the following statement:
insert into cola_market values(
  1,
  'cola_f',
  mdsys.sdo_geometry(2001, null,
    mdsys.sdo_point_type(11, 7, null),
    null, null)
);
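For completeness, here is a rough sketch of a matching two-dimensional metadata entry and spatial index; the bounds, tolerance values (0 to 100, 0.005) and the index name are only illustrative:
insert into user_sdo_geom_metadata (table_name, column_name, diminfo, srid)
values ('COLA_MARKET', 'SHAPE',
  mdsys.sdo_dim_array(
    mdsys.sdo_dim_element('X', 0, 100, 0.005),   -- two dim_elements only, no Z
    mdsys.sdo_dim_element('Y', 0, 100, 0.005)),
  null);
create index cola_market_sidx on cola_market(shape)
  indextype is mdsys.spatial_index;
With the gtype 2001 (a 2D point) and a two-dimensional DIMINFO, the index insert should accept the point even though SDO_POINT_TYPE carries a null Z.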
Regards
Chandan D.

Similar Messages

  • Best practice when FACT and DIMENSION table are the same

    Hi,
    In my physical model I have some tables that are both a fact and a dimension table, i.e. in the BMM they are of course separated into a Fact and a Dim source (2 different units) and it works fine. But I can see that there will be trouble when having more fact tables and, e.g., a Period dimension pointing to all the different fact tables (different sources).
    Seems like the best solution to this is to have an alias of the fact/transaction table, i.e. 2 "copies" of the transaction table (one for the fact and one for the dimension table) in the physical layer. The only bad thing is that there will then always be 2 lookups on the same table when fetching data from the dimension and the fact table.
    This is not built on a data warehouse, so the architecture is thereby more complex. Hope this was understandable (trying to make a short story of it).
    Any best practice on this? Or other suggestions.

    I'd recommend creating a view in the database. If it's an Oracle DB, materialized views would be a huge performance benefit. You just need to make sure that the MVs are refreshed when the source is updated.
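    For example, a simple materialized view over a hypothetical transaction table could serve as the dimension source (all names here are illustrative, not from the thread):
       CREATE MATERIALIZED VIEW period_dim_mv
       BUILD IMMEDIATE
       REFRESH COMPLETE ON DEMAND
       AS SELECT DISTINCT period_id, period_name, fiscal_year
          FROM   transactions;
       -- refresh whenever the source table has been loaded
       EXEC DBMS_MVIEW.REFRESH('PERIOD_DIM_MV');
    The alias approach in the physical layer would then point the dimension source at this MV instead of at the transaction table itself.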
    -Domnic

  • About Surrogate Key and Dimension Key on OWB 10.2

    Hi, everyone.
    I am using OWB 10.2 and I have a question about Surrogate key and Dimension Key.
    I defined the foreign key as VARCHAR2 type in the Fact table, and the Dimension Key, also VARCHAR2 type, acts as the primary key in the Dimension table. I made a single level in the Dimension table.
    I know that the Dimension Key stores the surrogate ID for the dimension and is the primary key of the table. Also, the Surrogate ID should be NUMBER type only.
    So, in this case, the Surrogate ID is NUMBER type, and
    the Dimension Key should be NUMBER type to store the surrogate ID.
    But the Dimension Key also has to act as the primary key related to a VARCHAR2 foreign key.
    How can I resolve this confusing situation?
    Please let me know that.
    JWS

    Hi JWS,
    From a SQL point of view it should not be a problem to join a NUMBER field to a VARCHAR2 field, because during execution there will be an implicit cast between the NUMBER and VARCHAR2 values. See the example below.
       SELECT * FROM DUAL
       WHERE  1 = '1';
    From an OWB point of view it is not possible to have a Dimension with a NUMBER-valued key that has a relation to a VARCHAR2-valued foreign key in a Fact table. This is because, when OWB creates a Fact table, the foreign keys in it are built from the Dimension tables that it refers to.
    You will lose the reference to the Dimension when changing the type of the Foreign Key.
    To resolve this issue I would advise you to use a Sequence that generates your Surrogate Key (NUMBER type) for the Dimension table and store it in the Primary Key Column (VARCHAR2 type).
    When validating the mapping you will get a warning, but when executing this should give no problems.
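    A minimal sketch of that approach, with hypothetical table and column names (the VARCHAR2 key column simply receives the sequence number through implicit conversion):
       CREATE SEQUENCE dim_customer_seq START WITH 1 INCREMENT BY 1;
       INSERT INTO dim_customer (dimension_key, customer_name)
       VALUES (dim_customer_seq.NEXTVAL, 'ACME');   -- NUMBER value stored in a VARCHAR2 key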
    Regards,
    Ilona

  • Reg: Fact table and Dimension table in Data Warehousing -

    Hi Experts,
    I'm not exactly clear on the criteria that decide which data goes into a Fact table and which into a Dimension table.
    This link http://stackoverflow.com/questions/9362854/database-fact-table-and-dimension-table states :
    Fact table contains data that can be aggregated.
    Measures are aggregated data expressions (e.g. Sum of costs, Count of calls, ...)
    Dimension contains data that is used to generate groups and filters.
    That's fine, but how does one decide which columns to consider for the Fact table and which columns for the Dimension table?
    Any help is much appreciated.
    Pardon me if this isn't the correct place for this question. My first question in the new forum.
    Thanks and Regards,
    Ranit Biswas

    ranitB wrote:
    But my main doubt was - what are the criteria to differentiate between columns for Fact tables and Dimension tables? How can one decide upon the design?
    Columns of a fact table will often be 'scalar' attributes of the 'fact' data item. A dimension table will often be 'compound' attributes of a 'fact'.
    Consider employee information. The EMPLOYEE table can be a fact table. It might have scalar attribute columns such as: DATE_HIRED, STATUS, EMPLOYEE_ID, and so on.
    Other related information that can't be specified as a single attribute value would often be stored in a 'dimension' table: ADDRESS, PHONE_NUMBER.
    Each address requires several columns to define it: ADDRESS1, ADDRESS2, CITY, STATE, ZIP, COUNTRY. And an employee might have several addresses: WORK_ADDRESS, HOME_ADDRESS. That address info would be stored in a 'dimension' table and only the primary key value of the address record would be stored in the EMPLOYEE 'fact' table.
    Same with PHONE_NUMBER. Several columns are required to define a phone number and each employee might have several of them. The dimension tables are used to help 'normalize' the data in the employee 'fact' table.
    And that EMPLOYEE table might also be a DIMENSION table for other FACT tables. A DEVELOPER table might have an EMPLOYEE_ID column with a value that points to a 'dimension' row in the EMPLOYEE dimension table.
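    As a rough sketch of that layout (the table and column names here are hypothetical, not from the thread), the address details live in the dimension table and the fact table keeps only the keys:
       CREATE TABLE address_dim (
         address_id  NUMBER PRIMARY KEY,
         address1    VARCHAR2(100),
         address2    VARCHAR2(100),
         city        VARCHAR2(50),
         state       VARCHAR2(30),
         zip         VARCHAR2(15),
         country     VARCHAR2(40)
       );
       CREATE TABLE employee_fact (
         employee_id      NUMBER PRIMARY KEY,
         date_hired       DATE,
         status           VARCHAR2(10),
         home_address_id  NUMBER REFERENCES address_dim (address_id),
         work_address_id  NUMBER REFERENCES address_dim (address_id)
       );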

  • Fact and Dimension tables

    Hi all,
    Please let me know the relation between the fact and dimension table(s).
    regards
    chandra kanth.

    Hi Chandra, it would be useful if you read a little bit about BI before starting to work with a BI tool.
    Anyway, I'll try to help you.
    A dimension is a structure that will be joined to the fact table so that users can analyze data from different points of view and at different levels. The fact table will most probably (not always) have data at the lowest possible level. So, let's suppose you have SALES data for different cities in a country (USA). Your fact table will be:
    CITY --- SalesValue
    Miami --- 100
    NYC --- 145
    Los Angeles --- 135 (because Arnold is not managing the State very well ;-) )
    You can then have a Dimension table with the "Country" structure. That is, a table containing the different cities along with their states, counties, and finally the country. So your dimension table would look like:
    NATION --- STATE --- COUNTY --- CITY
    USA --- Florida --- Miami --- Miami
    USA --- NY ---- NY --- NYC
    USA --- California --- LA --- Los Angeles
    This dimension will allow you to aggregate the data at different levels. That is, the user will not only be able to see data at the lowest level, but also at the County, State and Country level.
    You will join your fact table (field CITY) with your dimension table (field CITY). The tool will then help you with the aggregation of the values.
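    As a minimal sketch of that join and aggregation (assuming hypothetical table names SALES_FACT and COUNTRY_DIM holding the rows above):
       SELECT d.state, SUM(f.sales_value) AS total_sales
       FROM   sales_fact f
       JOIN   country_dim d ON d.city = f.city
       GROUP  BY d.state;
    The same pattern with d.county or d.nation in the SELECT and GROUP BY gives the other aggregation levels.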
    I hope this helps.
    J.-

  • Fact and dimension table partition

    My team is implementing a new data warehouse. I would like to know when we should plan the partitioning of the fact and dimension tables: before the data comes in, or after?

    Hi,
    It is recommended to partition the Fact table (where we will have huge data volumes). Automate the partitioning so that each day a new partition is created to hold the latest data (split the previous partition into two). Best practice is to create the partitions on the transaction
    timestamp, so load the incremental data into an empty staging table (Table_IN) and then switch that data into the main table (Table). Make sure both tables (Table and Table_IN) are on the same filegroup.
    Refer to the content below for detailed info.
    Designing and Administrating Partitions in SQL Server 2012
    A popular method of better managing large and active tables and indexes is the use of partitioning. Partitioning is a feature for segregating I/O workload within
    a SQL Server database so that I/O can be better balanced against the available I/O subsystems while providing better user response time, lower I/O latency, and faster backups and recovery. By partitioning tables and indexes across multiple filegroups, data retrieval
    and management are much quicker because only subsets of the data are used, while ensuring that the integrity of the database as a whole remains intact.
    Tip
    Partitioning is typically used for administrative or certain I/O performance scenarios. However, partitioning can also speed up some queries by enabling
    lock escalation to a single partition, rather than to an entire table. You must allow lock escalation to move up to the partition level by setting it with either the Lock Escalation option of Database Options page in SSMS or by using the LOCK_ESCALATION option
    of the ALTER TABLE statement.
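    For example, a hypothetical partitioned table can be allowed to escalate locks only to the partition level like this:
       ALTER TABLE dbo.TransactionHistory SET (LOCK_ESCALATION = AUTO);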
    After a table or index is partitioned, data is stored horizontally across multiple filegroups, so groups of data are mapped to individual partitions. Typical
    scenarios for partitioning include large tables that become very difficult to manage, tables that are suffering performance degradation because of excessive I/O or blocking locks, table-centric maintenance processes that exceed the available time for maintenance,
    and moving historical data from the active portion of a table to a partition with less activity.
    Partitioning tables and indexes warrants a bit of planning before putting them into production. The usual approach to partitioning a table or index follows these
    steps:
    1. Create the filegroup(s) and file(s) used to hold the partitions defined by the partitioning scheme.
    2. Create a partition function to map the rows of the table or index to specific partitions based on the values in a specified column. A very common partitioning function is based on the creation date of the record.
    3. Create a partitioning scheme to map the partitions of the partitioned table to the specified filegroup(s) and, thereby, to specific locations on the Windows file system.
    4. Create the table or index (or ALTER an existing table or index) by specifying the partition scheme as the storage location for the partitioned object.
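    As a rough Transact-SQL sketch of steps 2 through 4 (the function, scheme, table, and filegroup names here are hypothetical, and the filegroups from step 1 are assumed to exist already):
       CREATE PARTITION FUNCTION pf_OrderDate (datetime)
       AS RANGE RIGHT FOR VALUES ('2012-04-01', '2012-07-01', '2012-10-01');
       GO
       CREATE PARTITION SCHEME ps_OrderDate
       AS PARTITION pf_OrderDate TO (FG_Q1, FG_Q2, FG_Q3, FG_Q4);
       GO
       CREATE TABLE dbo.Orders (
           OrderID   int      NOT NULL,
           OrderDate datetime NOT NULL
       ) ON ps_OrderDate (OrderDate);
       GO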
    Although Transact-SQL commands are available to perform every step described earlier, the Create Partition Wizard makes the entire process quick and easy through
    an intuitive point-and-click interface. The next section provides an overview of using the Create Partition Wizard in SQL Server 2012, and an example later in this section shows the Transact-SQL commands.
    Leveraging the Create Partition Wizard to Create Table and Index Partitions
    The Create Partition Wizard can be used to divide data in large tables across multiple filegroups to increase performance and can be invoked by right-clicking
    any table or index, selecting Storage, and then selecting Create Partition. The first step is to identify which columns to partition by reviewing all the columns available in the Available Partitioning Columns section located on the Select a Partitioning Column
    dialog box, as displayed in Figure 3.13. This screen also includes additional options such as the following:
    Figure 3.13. Selecting a partitioning column.
    The next screen is called Select a Partition Function. This page is used for specifying the partition function by which the data will be partitioned. The options
    include using an existing partition function or creating a new one. The subsequent page is called New Partition Scheme. Here a DBA maps the rows of the table being partitioned to the desired filegroups. Either an existing partition scheme can
    be used or a new one needs to be created. The final screen is used for doing the actual mapping. On the Map Partitions page, specify the filegroups to be used for each partition and then enter a range for the values of the partitions. The
    ranges and settings on the grid include the following:
    Note
    By opening the Set Boundary Values dialog box, a DBA can set boundary values based on dates (for example, partition everything in a column after a specific
    date). The data types are based on dates.
    Designing table and index partitions is a DBA task that typically requires a joint effort with the database development team. The DBA must have a strong understanding
    of the database, tables, and columns to make the correct choices for partitioning. For more information on partitioning, review Books Online.
    Enhancements to Partitioning in SQL Server 2012
    SQL Server 2012 now supports as many as 15,000 partitions. When using more than 1,000 partitions, Microsoft recommends that the instance of SQL Server have at
    least 16Gb of available memory. This recommendation particularly applies to partitioned indexes, especially those that are not aligned with the base table or with the clustered index of the table. Other Data Manipulation Language statements (DML) and Data
    Definition Language statements (DDL) may also run short of memory when processing on a large number of partitions.
    Certain DBCC commands may take longer to execute when processing a large number of partitions. On the other hand, a few DBCC commands can be scoped to the partition
    level and, if so, can be used to perform their function on a subset of data in the partitioned table.
    Queries may also benefit from a new query engine enhancement called partition elimination. SQL Server uses partition elimination automatically if it is available.
    Here’s how it works. Assume a table has four partitions, with all the data for customers whose names begin with R, S, or T in the third partition. If a query’s WHERE clause
    filters on customer name looking for ‘System%’, the query engine knows that it needs only partition three to answer
    the request. Thus, it might greatly reduce I/O for that query. On the other hand, some queries might take longer if there are more than 1,000 partitions and the query is not able to perform partition elimination.
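    As an illustration (table and column names here are hypothetical), a query such as the following lets the engine read only the partition that can contain matching names:
       SELECT CustomerName, CustomerID
       FROM   dbo.Customers
       WHERE  CustomerName LIKE 'System%';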
    Finally, SQL Server 2012 introduces some changes and improvements to the algorithms used to calculate partitioned index statistics. Primarily, SQL Server 2012
    samples rows in a partitioned index when it is created or rebuilt, rather than scanning all available rows. This may sometimes result in somewhat different query behavior compared to the same queries running on earlier versions of SQL Server.
    Administrating Data Using Partition Switching
    Partitioning is useful to access and manage a subset of data while losing none of the integrity of the entire data set. There is one limitation, though. When
    a partition is created on an existing table, new data is added to a specific partition or to the default partition if none is specified. That means the default partition might grow unwieldy if it is left unmanaged. (This concept is similar to how a clustered
    index needs to be rebuilt from time to time to reestablish its fill factor setting.)
    Switching partitions is a fast operation because no physical movement of data takes place. Instead, only the metadata pointers to the physical data are altered.
    You can alter partitions using SQL Server Management Studio or with the ALTER TABLE...SWITCH
    Transact-SQL statement. Both options enable you to ensure partitions are
    well maintained. For example, you can transfer subsets of data between partitions, move tables between partitions, or combine partitions together. Because the ALTER TABLE...SWITCH statement
    does not actually move the data, a few prerequisites must be in place:
    • Partitions must use the same column when switching between two partitions.
    • The source and target table must exist prior to the switch and must be on the same filegroup, along with their corresponding indexes,
    index partitions, and indexed view partitions.
    • The target partition must exist prior to the switch, and it must be empty, whether adding a table to an existing partitioned table
    or moving a partition from one table to another. The same holds true when moving a partitioned table to a nonpartitioned table structure.
    • The source and target tables must have the same columns in identical order with the same names, data types, and data type attributes
    (length, precision, scale, and nullability). Computed columns must have identical syntax, as well as primary key constraints. The tables must also have the same settings for ANSI_NULLS and QUOTED_IDENTIFIER properties.
    Clustered and nonclustered indexes must be identical. ROWGUID properties
    and XML schemas must match. Finally, settings for in-row data storage must also be the same.
    • The source and target tables must have matching nullability on the partitioning column. Although both NULL and NOT NULL are supported, NOT NULL is strongly recommended.
    Likewise, the ALTER TABLE...SWITCH statement
    will not work under certain circumstances:
    • Full-text indexes, XML indexes, and old-fashioned SQL Server rules are not allowed (though CHECK constraints
    are allowed).
    • Tables in a merge replication scheme are not allowed. Tables in a transactional replication scheme are allowed with special caveats.
    Triggers are allowed on tables but must not fire during the switch.
    • Indexes on the source and target table must reside on the same partition as the tables themselves.
    • Indexed views make partition switching difficult and have a lot of extra rules about how and when they can be switched. Refer to
    the SQL Server Books Online if you want to perform partition switching on tables containing indexed views.
    • Referential integrity can impact the use of partition switching. First, foreign keys on other tables cannot reference the source
    table. If the source table holds the primary key, it cannot have a primary or foreign key relationship with the target table. If the target table holds the foreign key, it cannot have a primary or foreign key relationship with the source table.
    In summary, simple tables can easily accommodate partition switching. The more complexity a source or target table exhibits, the more likely that careful planning
    and extra work will be required to even make partition switching possible, let alone efficient.
    Here’s an example where we create a partitioned table using a previously created partition scheme, called Date_Range_PartScheme1.
    We then create a new, nonpartitioned table identical to the partitioned table residing on the same filegroup. We finish up switching the data from the partitioned table into the nonpartitioned table:
    CREATE TABLE TransactionHistory_Partn1 (Xn_Hst_ID int, Xn_Type char(10))
    ON Date_Range_PartScheme1 (Xn_Hst_ID);
    GO
    CREATE TABLE TransactionHistory_No_Partn (Xn_Hst_ID int, Xn_Type char(10))
    ON main_filegroup;
    GO
    ALTER TABLE TransactionHistory_Partn1 SWITCH PARTITION 1 TO TransactionHistory_No_Partn;
    GO
    The next section shows how to use a more sophisticated, but very popular, approach to partition switching called a sliding
    window partition.
    Example and Best Practices for Managing Sliding Window Partitions
    Assume that our AdventureWorks business is booming. The sales staff, and by extension the AdventureWorks2012 database, is very busy. We noticed over time that
    the TransactionHistory table is very active as sales transactions are first entered and are still very active over their first month in the database. But the older the transactions are, the less activity they see. Consequently, we’d like to automatically group
    transactions into four partitions per year, basically containing one quarter of the year’s data each, in a rolling partitioning. Any transaction older than one year will be purged or archived.
    The answer to a scenario like the preceding one is called a sliding window partition because
    we are constantly loading new data in and sliding old data over, eventually to be purged or archived. Before you begin, you must choose either a LEFT partition function window or a RIGHT partition function window:
    1. How data is handled varies according to the choice of a LEFT or RIGHT partition function window:
    • With a LEFT strategy, partition1 holds the oldest data (Q4 data), partition2 holds data that is 6- to 9-months old (Q3), partition3 holds data that is 3- to 6-months old (Q2), and partition4 holds recent data less than 3-months old.
    • With a RIGHT strategy, partition4 holds the oldest data (Q4), partition3 holds Q3 data, partition2 holds Q2 data, and partition1 holds recent data.
    • Following the best practice, make sure there are empty partitions on both the leading edge (partition0) and trailing edge (partition5) of the partition range.
    • RIGHT range functions usually make more sense to most people because it is natural to start ranges at their lowest value and work upward from there.
    2. Assuming that a RIGHT partition function window is used, we first use the SPLIT subclause of the ALTER PARTITION FUNCTION statement to split empty partition5 into two empty partitions, 5 and 6.
    3. We use the SWITCH subclause of ALTER TABLE to switch out partition4 to a staging table for archiving or simply to drop and purge the data. Partition4 is now empty.
    4. We can then use MERGE to combine the empty partitions 4 and 5, so that we’re back to the same number of partitions as when we started. This way, partition3 becomes the new partition4, partition2 becomes the new partition3, and partition1 becomes the new partition2.
    5. We can use SWITCH to push the new quarter’s data into the spot of partition1.
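    As a rough Transact-SQL sketch of steps 2 through 5 (the function name, table names, boundary dates, and partition numbers below are hypothetical, and the partition scheme is assumed to have a NEXT USED filegroup set before the SPLIT):
       -- Step 2: split the empty trailing partition to make room for the next quarter.
       ALTER PARTITION FUNCTION pf_TxnDate() SPLIT RANGE ('2013-04-01');
       GO
       -- Step 3: switch the oldest populated partition out to a staging table for archiving.
       ALTER TABLE TransactionHistory SWITCH PARTITION 2 TO TransactionHistory_Unload;
       GO
       -- Step 4: merge away the now-empty oldest boundary so the partition count stays constant.
       ALTER PARTITION FUNCTION pf_TxnDate() MERGE RANGE ('2012-04-01');
       GO
       -- Step 5: switch the newly loaded quarter in from a staging table.
       ALTER TABLE TransactionHistory_Load SWITCH TO TransactionHistory PARTITION 5;
       GO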
    Tip
    Use the $PARTITION system
    function to determine where a partition function places values within a range of partitions.
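    For example, a quick check like the following (with a hypothetical partition function name) returns the partition number that a given value maps to:
       SELECT $PARTITION.pf_TxnDate('2013-02-15') AS PartitionNumber;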
    Some best practices to consider for using a slide window partition include the following:
    • Load newest data into a heap, and then add indexes after the load is finished. Delete oldest data or, when working with very large
    data sets, drop the partition with the oldest data.
    • Keep an empty staging partition at the leftmost and rightmost ends of the partition range to ensure that the partition splits performed when
    loading in new data, and the merges performed after unloading old data, do not cause data movement.
    • Do not split or merge a partition already populated with data because this can cause severe locking and explosive log growth.
    • Create the load staging table in the same filegroup as the partition you are loading.
    • Create the unload staging table in the same filegroup as the partition you are deleting.
    • Don’t load a partition until its range boundary is met. For example, don’t create and load a partition meant to hold data that is
    one to two months older before the current data has aged one month. Instead, continue to allow the latest partition to accumulate data until the data is ready for a new, full partition.
    • Unload one partition at a time.
    • The ALTER TABLE...SWITCH statement
    issues a schema lock on the entire table. Keep this in mind if regular transactional activity is still going on while a table is being partitioned.
    Thanks, Shiven :) If the answer is helpful, please vote.

  • What are stray points and where do they come from?

    Just curious.
    CS3
    XP2.
    I recently made a drawing incorporating some elements from Inventor (PDF'd and placed). After finishing filling in the details, I drew a marquee around the entire drawing and noticed the dimensions of the drawing were 10x what they should have been. I zoomed out and saw two pairs of points, one pair on the right, one on the left, showing as selected. I tried selecting just the points and could not. After re-selecting only my drawing, they would show up. It's like they were grouped, but not.
    Anyway, going to Select > Object > Stray Points and deleting them solved it.
    What are they and where did they come from?  Obviously Illustrator recognizes they can be there because of the built-in menu allowance.

    Stray points can occur in a multitude of ways. They occur much more frequently in Illustrator than other drawing programs because of its basic selection interface.
    However, some programs' exports intentionally place stray points at opposite corners of the page bounds in order to convey those bounds to the importing program. (I often see this in imported DXFs.) That sounds like the kind of situation you are describing.
    JET

  • In Answers am seeing "Folder is Empty" for Logical Fact and Dimension Table

    Hi All,
    I am working in OBIEE Answers; all of a sudden, when I clicked on a Logical Fact table it showed "folder is empty". I restarted all the services and tried again, and it still shows the same for the Logical Fact and Dimension tables, but I am able to see all my reports in Shared Folders. I restarted the machine too, but no change. Please help me out to resolve this issue.
    Thanks in Advance.
    Regards,
    Rajkumar.

    First of all, follow the forum etiquette :
    http://forums.oracle.com/forums/ann.jspa?annID=939
    React to or mark as answered the posts that helped you.
    And for your question, you must check the log for a possible corrupt catalog:
    OracleBIData_Home\web\log\sawlog0.log

  • Can't view my Cube and Dimension Data with the Cube Viewer

    I'm new to using OWB; I'm using Oracle 10g Release 1 with OWB R2 and Oracle Workflow 2.6.3.
    When following the steps from the OTN pages (start01, flat-file02, relational-wh-03, etl-mappings, deployingobjects, loading-warehouse and bi-modeling),
    the loading was successful, I guess...
    But when I want to see the data in the cube and dimension, an error occurs.
    It says
    " CubeDV_OLAPSchemaConnectionException_ENT_06952??
    CubeDV_OLAPSchemaConnectionException_ENT_06952??
         at oracle.wh.ui.owbcommon.dataviewer.dimensional.DataViewerConnection.connect(DataViewerConnection.java:115)
         at oracle.wh.ui.owbcommon.dataviewer.dimensional.DimDataViewerMain.BIBeansConnect(DimDataViewerMain.java:433)
         at oracle.wh.ui.owbcommon.dataviewer.dimensional.DimDataViewerMain.init(DimDataViewerMain.java:202)
         at oracle.wh.ui.owbcommon.dataviewer.dimensional.DimDataViewerEditor._init(DimDataViewerEditor.java:68)
         at oracle.wh.ui.editor.Editor.init(Editor.java:1115)
         at oracle.wh.ui.editor.Editor.showEditor(Editor.java:1431)
         at oracle.wh.ui.owbcommon.IdeUtils._tryLaunchEditorByClass(IdeUtils.java:1431)
         at oracle.wh.ui.owbcommon.IdeUtils._doLaunchEditor(IdeUtils.java:1344)
         at oracle.wh.ui.owbcommon.IdeUtils._doLaunchEditor(IdeUtils.java:1362)
         at oracle.wh.ui.owbcommon.IdeUtils.showDataViewer(IdeUtils.java:864)
         at oracle.wh.ui.owbcommon.IdeUtils.showDataViewer(IdeUtils.java:851)
         at oracle.wh.ui.console.commands.DataViewerCmd.performAction(DataViewerCmd.java:19)
         at oracle.wh.ui.console.commands.TreeMenuHandler$1.run(TreeMenuHandler.java:188)
         at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:178)
         at java.awt.EventQueue.dispatchEvent(EventQueue.java:454)
         at java.awt.EventDispatchThread.pumpOneEventForHierarchy(EventDispatchThread.java:201)
         at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:151)
         at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:145)
         at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:137)
         at java.awt.EventDispatchThread.run(EventDispatchThread.java:100) "
    Can somebody explain what is happening? I really don't understand; when the cube viewer window appears, there's no data in it....
    I really need help with this...


  • The Difference between "Cell Data" and "Dimension Data"?

    What is the difference between the tab "Cell Data" and "Dimension Data" in SSAS?

    Article quote: "SSAS provides a way to secure Analysis Services database/cube data from unauthorized access. Analysis Services provides secure access by creating objects called "roles". After creation of a role, a user's Windows login credentials can be used
    to enroll them into a particular role, because Analysis Services identifies users by their Windows login credentials. You can protect your data in roles at two levels:
    1) Dimension level
    2) Cell level
    If a user has been assigned more than one role, Analysis Services loops through all assigned roles after login. Analysis Services finds all permission levels for the particular user and unions all the permission levels.
    If two roles have contradictory access for a user, then the permissive access will be allowed. Suppose role1 grants Australia data access and role2 denies Australia data access; then access to Australia data will be allowed."
    LINK:
    http://www.msbiconcepts.com/2010/10/ssas-data-security-dimension-and-cell.html
    Kalman Toth Database & OLAP Architect

  • Update new material master weight and dimension in open sales orders and de

    Hi,
    I'm maintaining gross weight, net weight, and volume in the material master. At the time of sales order entry in VA01 it calculates weight and dimensions based on the material master and the order quantity. And I create deliveries in VL01N.
    If I correct the net and gross weight in the material master after I created the sales order, will the corrected net and gross weight be picked up at delivery creation?
    In other terms, is the VL01N net and gross weight taken from what is available in the SO or from the material master?
    Is there any standard transaction to update the net and gross weight of already existing open sales orders and deliveries once it has been corrected in the material master?
    Please advice.
    Sam

    Is there any standard transaction to update
    No, it is not possible to update the weight in an existing sales order or delivery. You have to change it manually or create a new sales order. The weight in the delivery will be fetched from the sales order only; hence, whatever delivery you create referencing a sales order, the system will copy whatever is there.
    thanks
    G. Lakshmipathi

  • Unable to embed External CSS file; dropdown list, filter panel and dimension filter are not working in SAP Design Studio Tool

    Hello Everyone,
    I am new to SAP Design Studio. I am working on creating dashboards and I am using Design Studio 1.2 version. Please suggest some solutions for the following issues. Thanks in Advance.
    1. The external CSS file, which is embedded using the custom CSS option under "application component properties", is not working.
        * I kept the external CSS file inside the repository -> my application folder.
    2. During runtime, I get a JavaScript error while selecting the '-' option from the filter panel.
    3. Unable to select or type a dimension name under the "dimension filter component" properties.
    4. Getting a runtime error for the dropdown list; I have inserted the following code under the "onStartup" option of the Application component properties.
    "DROPDOWN_1.setItems(DS_1.getMemberList("PRODUCTREF", MemberPresentation.INTERNAL_KEY, MemberDisplay.TEXT, 20));
    and dimension values are not populated in auto suggest."

    There should be an Error Log under View > Error Log that gives you more details of these errors.  Could you add that?  There is also a View > Script Problems log that provides more details as well.

  • I have an Air running Mountain Lion. I also have a digital point-and-shoot camera, but have never downloaded any images from the camera to this Air. If I connect the camera to the Air, what might I expect? Will connecting the camera to the Air by a cable

    I have an 11" Air with lots of remaining capacity running Mountain Lion. I also have a digital point-and-shoot camera. I have never downloaded images from the camera to this Air. However, if I just connect the camera to the Air's USB port with the camera's available cable, and the Air is open, will a program like Image Capture or iPhoto automatically come up so I can begin to download wanted images? Or will this sort of connection delete or fry everything on the Air? Do I need any other software on board? Will current Apple onboard programs allow me to download images from my camera and not destroy everything else?

    Answered. Thanks.

  • My MacBook seems to be going crazy. At certain points (and for hours) I get the "oh snap" page on Chrome, Safari doesn't work, and the MacBook won't let me create any files or, for example, upload music to iTunes. I restored my Mac, so I'm sure it's not a malware problem.

    My MacBook seems to be going crazy. At certain points (and for hours) I get the "oh snap" page on Chrome, Safari doesn't work, and the MacBook won't let me create any files or, for example, upload music to iTunes. I restored my Mac, so I'm sure it's not a malware problem. The only thing that solves it is switching it off and on, but sometimes I have to do that 6-7 times before it's OK, or wait for a few hours. Some help please.

    Please read this whole message before doing anything.
    This procedure is a test, not a solution. Don’t be disappointed when you find that nothing has changed after you complete it.
    Step 1
    The purpose of this step is to determine whether the problem is localized to your user account.
    Enable guest logins* and log in as Guest. Don't use the Safari-only “Guest User” login created by “Find My Mac.”
    While logged in as Guest, you won’t have access to any of your personal files or settings. Applications will behave as if you were running them for the first time. Don’t be alarmed by this; it’s normal. If you need any passwords or other personal data in order to complete the test, memorize, print, or write them down before you begin.
    Test while logged in as Guest. Same problem?
    After testing, log out of the guest account and, in your own account, disable it if you wish. Any files you created in the guest account will be deleted automatically when you log out of it.
    *Note: If you’ve activated “Find My Mac” or FileVault, then you can’t enable the Guest account. The “Guest User” login created by “Find My Mac” is not the same. Create a new account in which to test, and delete it, including its home folder, after testing.
    Step 2
    The purpose of this step is to determine whether the problem is caused by third-party system modifications that load automatically at startup or login, by a peripheral device, by a font conflict, or by corruption of the file system or of certain system caches.
    Disconnect all wired peripherals except those needed for the test, and remove all aftermarket expansion cards, if applicable. Start up in safe mode and log in to the account with the problem. You must hold down the shift key twice: once when you turn on the computer, and again when you log in.
    Note: If FileVault is enabled, or if a firmware password is set, or if the startup volume is a Fusion Drive or a software RAID, you can’t do this. Ask for further instructions.
    Safe mode is much slower to start up and run than normal, with limited graphics performance, and some things won’t work at all, including sound output and Wi-Fi on certain models. The next normal startup may also be somewhat slow.
    The login screen appears even if you usually log in automatically. You must know your login password in order to log in. If you’ve forgotten the password, you will need to reset it before you begin.
    Test while in safe mode. Same problem?
    After testing, restart as usual (not in safe mode) and verify that you still have the problem. Post the results of Steps 1 and 2.

  • TS3297 iTunes puts up a message "The item you've requested is not currently available in the Hong Kong Store." I had not requested anything from the HK store at that point, and I have never used any store other than the HK one. How I can clear that messag

    I've been running iTunes on WinXP for years with few problems.  I have all my iTunes files on an external USB drive. I just bought a new PC running Win7, 64-bit.
    I connected the external USB drive to the new PC and installed 64-bit iTunes on it. When iTunes started up, I pointed it at the iTunes directory in the external drive, using Edit > Preferences > Advanced > iTunes Media Folder Location. It showed a progress bar that it was updating the iTunes Library. I signed in and authorized the new machine. I have one spare authorization.
    Then iTunes put up a message "The item you've requested is not currently available in the Hong Kong Store." I had not requested anything from the HK store at that point, and I have never used any store other than the HK one.
    When I click OK, the iTunes Store page remains blank, apart from saying "iTunes Store" in the middle of the page.
    I went to "View My Account" and pressed the Reset button to "Reset all warnings for buying and downloading items", but that doesn’t fix this particular warning. I also tried Edit > Preferences > Advanced > Reset Warnings and Reset Cache.
    But still, every time I click the “App Store” button in the iTunes Store window, the message appears. If I click the Books, Podcasts or iTunesU buttons, these display normally.  So I’m stuck with being unable to purchase apps, other than through my iPad and iPhone.
    If I move the external drive back to the XP machine, the same thing happens.  If I go to another PC - a notebook running Vista - everything is normal.
    Any idea how I can clear that message?
    Thanks for any help you can offer.

    Further info on my question above.
    I have tried re-validating my credit card, which apparently fixed it for some. 
    I have also tried uninstalling, re-downloading and installing again.
    Neither of these steps fixed the problem.
