Problem of querying a data warehouse

Hi,
I need to create a data warehouse. I used Oracle Warehouse Builder version 10.2, but I ran into problems with the interrogation (querying) of my warehouse, so I fell back on Excel as a workaround.
Can someone help me or suggest an alternative? I don't know whether there is another version that resolves this problem.
Thanks

Can you explain the problem in detail so we can understand it better?
What exactly does "interrogation" mean in OWB terms?

Similar Messages

  • Oracle OCI: Problem in Query with Date field

    Client compiled with OCI: 10.2.0.4.0
    Server: Oracle9i Enterprise Edition Release 9.2.0.4.0
    The problematic query is:
SELECT CODIGO FROM LOG WHERE TEL = :telnumber AND DATE_PROC = '05-JUL-08'
Table description:
SQL> describe LOG;
TEL NOT NULL VARCHAR2(15)
CODIGO NOT NULL VARCHAR2(20)
DATE_PROC NOT NULL DATE
As simple as it might look, when executed directly on the server with SQL*Plus it returns a result, but when executed from the app that uses OCI, this query always returns OCI_NO_DATA. In the beginning the date value was also a placeholder, but I found out that even passing a literal like '05-JUL-08' didn't work. I have tried the following:
- I've tried the basics: querying the DB from the client does work in general. It's this one query that gives me trouble.
- The query SELECT CODIGO FROM LOG WHERE TEL = :telnumber does work.
- Executing ALTER SESSION SET NLS_DATE_FORMAT="DD-MM-YYYY"; before the query on both the server and the client. Same result: the server returns data, the client gets OCI_NO_DATA.
- Tried changing the DATE_PROC format, combining this with the use of TO_DATE(). Same result.
- Searched, searched, searched. No answer.
    I'm a bit desperate to find an answer, would appreciate any help and can provide as many further details as needed. Thanks.

Hi,
I've recreated your table and populated it with your data.
I've run your select using OCILIB on a 10gR2 (client and server); the code runs fine and gives the expected results.
So the problem must reside in your code (I don't think it can be an OCI bug).
Here is the OCILIB code:
    #include "ocilib.h"
    int main(void)
        OCI_Connection *cn;
        OCI_Statement *st;
        OCI_Resultset *rs;
        char msisdn[100] = "11223344";
        char datetime[100] = "";
        if (!OCI_Initialize(err_handler, NULL, OCI_ENV_DEFAULT))
            return EXIT_FAILURE;
        cn = OCI_ConnectionCreate("db", "usr", "pwd", OCI_SESSION_DEFAULT);
        st = OCI_StatementCreate(cn);
        OCI_Prepare(st, "SELECT "
                        "  CODIGO_BANCO "
                        "FROM "
                        "  VTA_LOG "
                        "WHERE "
                        "  TELEFONO = :msisdn AND "
                        "  FECHA_PROCESO = TO_DATE(:datetime, 'YYYYMMDDHH24MISS')");
        OCI_BindString(st, "msisdn", msisdn, sizeof(msisdn)-1);
        OCI_BindString(st, "datetime", datetime, sizeof(datetime)-1);
        strcpy(datetime, "20080705162918");
        OCI_Execute(st); 
        rs = OCI_GetResultset(st);
        OCI_FetchNext(rs);
        printf("%s\n", OCI_GetString(rs, 1));
        strcpy(datetime, "20080705062918");
        OCI_Execute(st); 
        rs = OCI_GetResultset(st);
        OCI_FetchNext(rs);
        printf("%s\n", OCI_GetString(rs, 1));
        OCI_Cleanup();
        return EXIT_SUCCESS;
    }Output is :
    BancoOne
    BancoTwoi recreated your data with
create table VTA_LOG (
     TELEFONO      VARCHAR2(15) NOT NULL,
     CODIGO_BANCO  VARCHAR2(20) NOT NULL,
     FECHA_PROCESO DATE NOT NULL
);
insert into VTA_LOG values ('11223344', 'BancoOne', to_date('20080705162918', 'YYYYMMDDHH24MISS'));
insert into VTA_LOG values ('11223344', 'BancoTwo', to_date('20080705062918', 'YYYYMMDDHH24MISS'));
commit;

Regards,
    Vincent
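
One detail worth underlining in Vincent's example: an Oracle DATE always carries a time-of-day component, so an equality test against a day-only literal such as '05-JUL-08' only matches rows stamped at exactly midnight. A minimal sketch of a more robust day filter, assuming the LOG table from the original question:

SELECT codigo
FROM   log
WHERE  tel = :telnumber
AND    date_proc >= TO_DATE('2008-07-05', 'YYYY-MM-DD')
AND    date_proc <  TO_DATE('2008-07-05', 'YYYY-MM-DD') + 1;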

  • Problem inserting and querying XML data with a recursive XML schema

    Hello,
    I'm facing a problem with querying XML data that is valid against a recursive XML Schema. I have got a table category that stores data as binary XML using Oracle 11g Rel 2 on Windows XP. The XML Schema is the following:
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:complexType name="bold_type" mixed="true">
        <xs:choice minOccurs="0" maxOccurs="unbounded">
            <xs:element name="bold" type="bold_type"/>
            <xs:element name="keyword" type="keyword_type"/>
            <xs:element name="emph" type="emph_type"/>
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="keyword_type" mixed="true">
        <xs:choice minOccurs="0" maxOccurs="unbounded">
            <xs:element name="bold" type="bold_type"/>
            <xs:element name="keyword" type="keyword_type"/>
            <xs:element name="emph" type="emph_type"/>
            <xs:element name="plain_text" type="xs:string"/>
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="emph_type" mixed="true">
        <xs:choice minOccurs="0" maxOccurs="unbounded">
            <xs:element name="bold" type="bold_type"/>
            <xs:element name="keyword" type="keyword_type"/>
            <xs:element name="emph" type="emph_type"/>
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="text_type" mixed="true">
        <xs:choice minOccurs="0" maxOccurs="unbounded">
            <xs:element name="bold" type="bold_type"/>
            <xs:element name="keyword" type="keyword_type"/>
            <xs:element name="emph" type="emph_type"/>
        </xs:choice>
    </xs:complexType>
    <xs:complexType name="parlist_type">
        <xs:sequence>
            <xs:element name="listitem" minOccurs="0" maxOccurs="unbounded" type="listitem_type"/>
        </xs:sequence>
    </xs:complexType>
    <xs:complexType name="listitem_type">
        <xs:choice minOccurs="0" maxOccurs="unbounded">
            <xs:element name="parlist" type="parlist_type"/>
            <xs:element name="text" type="text_type"/>
        </xs:choice>
    </xs:complexType>
    <xs:element name="category">
        <xs:complexType>
            <xs:sequence>
                <xs:element name="name"/>
                <xs:element name="description">
                    <xs:complexType>
                        <xs:choice>
                            <xs:element name="text" type="text_type"/>
                            <xs:element name="parlist" type="parlist_type"/>
                        </xs:choice>
                    </xs:complexType>
                </xs:element>
            </xs:sequence>
            <xs:attribute name="id"/>
        </xs:complexType>
    </xs:element>
</xs:schema>
I registered this schema and created the category table. Then I inserted a new row using the code below:
insert into category_a values
(XMLElement("category",
      xmlattributes('categoryAAA' as "id"),
      xmlforest('ma categ' as "name"),
      xmlelement("description",
           xmlelement("text", 'find doors blest now whiles favours carriage tailor spacious senses defect threat ope willow please exeunt truest assembly <keyword> staring travels <bold> balthasar parts attach </bold> enshelter two <emph> inconsiderate ways preventions </emph> preventions clasps better affections comes perish </keyword> lucretia permit street full meddle yond general nature whipp <emph> lowness </emph> grievous pedro'))));
The row is successfully inserted, as witnessed by the row count. However, I cannot extract data from the table. First I tried SQL*Plus, which hangs and quits after a while. I then tried SQL Developer, but got no result either. Here follow some examples of queries and their results in SQL Developer:
    Query 1
    select * from category
    Result : the whole row is returned
    Query 2
    select xmlquery('$p/category/description' passing object_value as "p" returning content) from category
    Result: "SYS.XMLTYPE"
Now I tried to fully respect the nested structure of the description element in order to extract the text portion of <bold>, using this query:
    Query 3
    select  xmlquery('$p/category/description/text/keyword/bold/text()' passing object_value as "p" returning content) from  category_a
    Result: null
I also tried to extract the text portion of the <text> element, using this query:
    Query 4
    select  xmlquery('$p/category/description/text/text()' passing object_value as "p" returning content) from  category_a
    Result: "SYS.XMLTYPE".
On the other hand, I noticed from the result of query 1 that the opening tags of the keyword and bold elements are escaped as "&lt;". This explains why query 3 returns NULL. However, query 4 should still display the text content of <text>, which is not the case.
My questions are:
1. How to properly insert the XML data while preserving the tags (especially the opening tags).
2. How to display the data (the text portion of the main element or of the nested elements).
The problem with question 1 is that it is quite unfeasible to write a single insert statement, because the structure of <description> is recursive. In other words, if the structure of <description> were not recursive, it would be possible to build the elements with the xmlelement function during the insertion.
In fact, I need to insert the content of <description> from a source table (called category_a) into a target table (category_b) automatically.
I filled category_a using the SAXLoader utility from a flat XML file that I generated from a benchmark. The content of <description> differs from one row to another, but it is always valid with regard to the XML Schema. The data is properly inserted, as witnessed by the "select * from category_a" instruction (500 rows inserted). Besides, the opening tags of the nested elements under <description> are preserved (no "&lt;"). Then I wrote a PL/SQL procedure in which a cursor extracts the category id and category name into varchar2 variables, and the description into an XMLType variable, from category_a. When I try to insert the values into category_b, I get the following error:
LSX-00213: only 0 occurrences of particle "text", minimum is 1
which says that the <text> element is absent (actually it is present in the source table).
So my third question is: why are the tags not recognized during the insertion?
    Can anyone help please?

Hello,
Indeed, I was using an old version of SQL*Plus (8.0.60.0.0) left over from a previous installation (Oracle 10g XE). Instead, I used the SQL*Plus shipped with the 11gR2 database (version 11.2.0.1.0). All the queries that I wrote work fine and display the data correctly.
I also used the XMLSERIALIZE function and can now display the description content in SQL Developer.
Thank you very much.
To answer your question, Marco: I registered the XML Schema using the following code:
    declare
      doc varchar2(4000) := '<?xml version="1.0" encoding="UTF-8"?>
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:complexType name="bold_type" mixed="true">
              <xs:choice minOccurs="0" maxOccurs="unbounded">
                   <xs:element name="bold" type="bold_type"/>
                   <xs:element name="keyword" type="keyword_type"/>
                   <xs:element name="emph" type="emph_type"/>
              </xs:choice>
         </xs:complexType>
         <xs:complexType name="keyword_type" mixed="true">
              <xs:choice minOccurs="0" maxOccurs="unbounded">
                   <xs:element name="bold" type="bold_type"/>
                   <xs:element name="keyword" type="keyword_type"/>
                   <xs:element name="emph" type="emph_type"/>
                   <xs:element name="plain_text" type="xs:string"/>
              </xs:choice>
         </xs:complexType>
         <xs:complexType name="emph_type" mixed="true">
              <xs:choice minOccurs="0" maxOccurs="unbounded">
                   <xs:element name="bold" type="bold_type"/>
                   <xs:element name="keyword" type="keyword_type"/>
                   <xs:element name="emph" type="emph_type"/>
              </xs:choice>
         </xs:complexType>
         <xs:complexType name="text_type" mixed="true">
              <xs:choice minOccurs="0" maxOccurs="unbounded">
                   <xs:element name="bold" type="bold_type"/>
                   <xs:element name="keyword" type="keyword_type"/>
                   <xs:element name="emph" type="emph_type"/>
              </xs:choice>
         </xs:complexType>
         <xs:complexType name="parlist_type">
              <xs:sequence>
                   <xs:element name="listitem" minOccurs="0" maxOccurs="unbounded" type="listitem_type"/>
              </xs:sequence>
         </xs:complexType>
         <xs:complexType name="listitem_type">
              <xs:choice minOccurs="0" maxOccurs="unbounded">
                   <xs:element name="parlist" type="parlist_type"/>
                   <xs:element name="text" type="text_type"/>
              </xs:choice>
         </xs:complexType>
         <xs:element name="category">
              <xs:complexType>
                   <xs:sequence>
                        <xs:element name="name"/>
                        <xs:element name="description">
                                  <xs:complexType>
                                            <xs:choice>
                                                           <xs:element name="text" type="text_type"/>
                                                           <xs:element name="parlist" type="parlist_type"/>
                                            </xs:choice>
                                  </xs:complexType>
                        </xs:element>
                                                                </xs:sequence>
                                                                <xs:attribute name="id"/>
                                            </xs:complexType>
                        </xs:element>
    </xs:schema>';
    begin
  dbms_xmlschema.registerSchema('/xmldb/category_auction.xsd', doc,
        LOCAL      => FALSE,
        GENTYPES   => FALSE,
        GENBEAN    => FALSE,
        GENTABLES  => FALSE,
        FORCE      => FALSE,
        OPTIONS    => DBMS_XMLSCHEMA.REGISTER_BINARYXML,
        OWNER      => USER);
end;
Then I created the category table as follows:
CREATE TABLE category_a OF XMLType
    XMLTYPE STORE AS BINARY XML
    XMLSCHEMA "/xmldb/category_auction.xsd" ELEMENT "category";
Now there still remains the problem of how to insert the "description" content, which I serialized as CLOB data, into another table as XML. To this purpose, I wrote a view over the category_a table as follows:
CREATE OR REPLACE FORCE VIEW "AUCTION_XWH"."CATEGORY_V" ("CATEGORY_ID", "CNAME", "DESCRIPTION") AS
  select category_v."CATEGORY_ID", category_v."CNAME",
         XMLSerialize(content (xmlquery('$p/category/description/*' passing object_value as "p" returning content)) as clob) as "DESCRIPTION"
  from auction.category_a p,
       xmltable('$a/category' passing p.Object_Value as "a"
                columns category_id varchar2(15) path '@id',
                        cname       varchar2(20) path 'name') category_v;
Then I wrote a procedure to insert data into the category_xwh table (the source and target tables are slightly different: the common elements are just copied, whereas new elements are created in the target table). The code of the procedure is the following:
create or replace PROCEDURE I_CATEGORY AS
  v_cname       VARCHAR2(30);
  v_description CLOB;
  v_category_id VARCHAR2(15);
  cursor mycursor is
    select category_id, cname, description from category_v;
BEGIN
  open mycursor;
  loop
    /* retrieve the columns */
    fetch mycursor into v_category_id, v_cname, v_description;
    exit when mycursor%notfound;
    insert into category_xwh values
      (XMLElement("category",
         xmlattributes(v_category_id as "category_id"),
         xmlelement("Hierarchies",
           xmlelement("ObjHierarchy",
             xmlelement("H_Cat"),
             xmlelement("Rollsup",
               xmlelement("all_categories",
                 xmlattributes('allcategories' as "all_category_id"))))),
         xmlforest(v_cname       as "cat_name",
                   v_description as "description")));
  end loop;
  commit;
  close mycursor;
END I_CATEGORY;
When I execute the procedure, I get the following error:
LSX-00201: contents of "description" should be elements only
so I just wonder if this is because v_description is treated as plain text and not as XML, even though its content is XML. Do I need to use a special function to cast the CLOB to XML?
    Thanks for your help.
    Doulkifli
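
For what it's worth, the LSX-00201 error is consistent with exactly that: xmlforest escapes a CLOB argument as plain text. A minimal sketch of the usual remedy, parsing the serialized CLOB back into XMLType before wrapping it (assuming the description column holds well-formed XML, as produced by XMLSerialize):

-- parse the CLOB back to XML so the element structure is preserved
select XMLElement("description",
         XMLParse(CONTENT v.description WELLFORMED))
from   category_v v;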

  • Problem with extract huge data in WEBI (Errors:WIO 30280 and ERR_WIS_30270)

    Hi gurus, need your help.
When we run the query for the report, we get an error in Web Intelligence.
The error can be one of two types, and they occur in random rotation.
First error: There is no memory available. Please close document to free memory (WIO 30280)
Second error: An internal error occurred while calling the 'processDPCommands' API. (ERR_WIS_30270)
When we reopen Web Intelligence, or restart the servers or the machine, we get no results and hit the error again.
I have tried changing various parameters for the connection and the universe in Universe Designer, and for WebI in the Central Management Console, but that did not solve the problem.
The query selects data from a huge table (4.7 million rows), and when I set a limit on the maximum number of rows in WebI, my report works correctly.
Is there any way to solve these problems associated with large data volumes? What triggers the problem when we try to retrieve the full data set?
We use BusinessObjects Enterprise XI 3.1, version 12.4.0.966.
Our system: Windows Server 2008 R2 Enterprise SP1 (64-bit), Intel 2.67 GHz (2 processors), 8 GB RAM.
Thanks.
Ruslan

Hi Brad,
here we are talking about XI 3.1, which has the memory limitations inherited from the operating system, as has been explained in detail elsewhere.
In BI 4.0 you can leverage larger datasets without any problems, but you need to properly size and configure the services.
    There are several documents out there:
    How to configure the APS  http://scn.sap.com/docs/DOC-31711
    Companion Guide
    https://service.sap.com/~sapdownload/011000358700000307202011E/SBO_BI_4_0_Companion_V4.pdf
    Web Intelligence Sizing Guide
    https://service.sap.com/~sapdownload/011000358700001403692011E/BI_4_0_WEBI_NEW.pdf
    Best regards,
    Simone

  • B Tree in Data Warehouse

Is it true that in a data warehouse (controlled DML, batch loads, essentially no online updates, so locking will not happen and is not an issue), B-tree indexes can be used for unique columns or other columns with very high cardinality (that is, columns that are almost unique)?

user11159529 wrote:
"If it is a choice between B-tree and bitmap - which is better?"
It depends. It depends on the environment. It depends on the queries you're trying to optimize.
"I would think the bitmap index will be bigger in size than the B-tree, considering the NDV for the column is so high. Will bitmap be able to support better queries (considering bitmap operations), especially if queried in combination with other columns?"
If the column is essentially unique, why would there be a benefit to using bitmap operations? If a single-column B-tree index is going to yield a handful of ROWIDs, there wouldn't seem to be a benefit to combining the nearly-unique criteria with some other criteria that might eliminate 2 or 3 rows. Of course, you would rarely query a data warehouse looking to return a handful of rows, so it's not obvious what the use case is.
    Justin
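
For illustration, the two options under discussion look like this in DDL (a minimal sketch with hypothetical table and column names):

-- B-tree: suited to the nearly-unique, high-NDV column discussed above
CREATE INDEX sales_fact_order_btx ON sales_fact (order_id);

-- Bitmap: usually reserved for low-cardinality columns that warehouse
-- queries combine in ad hoc ways
CREATE BITMAP INDEX sales_fact_status_bx ON sales_fact (order_status);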

  • Design the data warehouse around the reporting system?

    Hi All,
    A Jr. data warehouse developer resisted my suggestion to flatten out activity tables of differing grains into a single fact table.  (Think sales order header, sales order detail, and even a 3rd level of details to each sales order detail.)  Although
    he agreed that flattening out the fact tables into a single fact would be proper for a data warehouse, he's concerned that report developers will have an easier time querying the data warehouse with the 3 separate fact tables.  I'm not sure if it's because
    the report developers don't like learning new schemas or if their reporting tool is just severely limited, mainly because I've never used Cognos.  I assured him that a properly-designed data warehouse will save on query execution time, but he's concerned
    about the reporting tool and how it may not work so well with the data warehouse.  
    Did I give him the proper advice?  It seems like a data warehouse should be built properly regardless of reporting tool shortcomings.  Assuming this tool is lousy, maybe they need a new reporting system for their new data warehouse.
    Thanks,
    Eric

    Hi Eric,
    one of the hard and fast rules of building a data warehouse is that from a logical point of view the fact table presents data at a certain level of granularity and that you do not mix facts in fact tables. This is data warehousing 101.
    From your comment you seem to be suggesting mixing data of different granularity in the one table.
    Now, we have ways and means of co-habiting data that will appear as different fact tables in the one physical table. We control the physical placement of data in fact tables. But on SQL Server we would never mix facts at different granularities or representing
    different data in the one fact table. SQL Server supports that quite poorly.
It is sad that in 2015 people are still messing up data warehouse projects through pure ignorance of what is available. We have data warehouse data models that are extremely extensive, but people just have to start from scratch, reinvent the wheel, and fail over and over again. Sad but true.
    Best Regards 
    Peter Nolan
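
To make the grain point concrete, a sketch of the conventional design (hypothetical sales example, one fact table per grain):

-- header-grain facts
CREATE TABLE fact_order_header (
  order_key    NUMBER,
  customer_key NUMBER,
  date_key     NUMBER,
  order_total  NUMBER      -- measured at order level
);

-- line-grain facts, kept in their own table
CREATE TABLE fact_order_line (
  order_key    NUMBER,
  product_key  NUMBER,
  date_key     NUMBER,
  quantity     NUMBER,
  line_amount  NUMBER      -- measured at line level
);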

  • Data Warehouse Cursor Problem

I am trying to complete a piece of work for college but am having trouble completing a cursor. The object of this small project is to create a very basic data warehouse from an operational system.
I have populated all of the dimension tables except one, which is to be populated along with the FACT table. These tables are to be populated by the cursor I am trying to complete.
I am having difficulty understanding what the first select statement in the cursor does. For the region dimension table, I was asked to create a sequence to use as the primary key (region_id). The region_id in the operational table has different values, e.g. 6000, 6001.
    'dw_op' is the schema on the operation table which is accessed through the DB link 'q_link'.
    Any thoughts on what is required to complete this cursor would be a big help.
    Here is the incomplete anonymous block and cursor:
declare
     cursor c_sales is
          select order_line.product_id, order_line.quantity,
                 product.unit_cost, product.unit_price, ord.client_id,
                 ord.sales_rep_id, ord.order_date
          from dw_op.ord@q_link, dw_op.order_line@q_link,
               dw_op.product@q_link
          where ord.order_id = order_line.order_id and
                order_line.product_id = product.product_id...
     r_id    number;
     s_time  number;
     s_value number;
     s_cost  number;
begin
     for c_rec in c_sales loop
          select region_id into r_id
          from region where region_name =
               (select region_name from dw_op.sales_region@q_link,
                dw_op.sales_rep@q_link where sales_region.region_id =
                sales_rep.region_id and sr_id = c_rec.sales_rep_id);
          select time_seq.nextval into s_time from dual;
          insert into time values (s_time, s_day, s_month, s_year ... );
          s_value := ... -- how much it costs the company, unit_price * something?
          s_cost := ...  -- times something by the quantity
          insert into sale values (c_rec.client_id, c_rec.product_id, c_rec.sales_rep_id,
               r_id, s_time, c_rec.quantity,
               s_value, s_cost); -- need to find out how to enter select info into table
     end loop;
end;
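
A hedged sketch of how those gaps are often filled in (assumptions, not the assignment's official answer: value is unit_price times quantity, cost is unit_cost times quantity, the date parts come from c_rec.order_date, and s_day, s_month, s_year would need NUMBER declarations):

s_day   := to_number(to_char(c_rec.order_date, 'DD'));
s_month := to_number(to_char(c_rec.order_date, 'MM'));
s_year  := to_number(to_char(c_rec.order_date, 'YYYY'));
s_value := c_rec.unit_price * c_rec.quantity;  -- revenue for the order line
s_cost  := c_rec.unit_cost  * c_rec.quantity;  -- cost for the order line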

You may have an IO problem, but you may also have a design or configuration issue. What you are seeing is multiple sessions waiting for the same block: if 20 sessions all request the same block, one will read it from disk (db file scattered read or db file sequential read) and 19 sessions will wait on read by other session and then get the block from the cache.
There does seem to be a very high number of waits for read by other session in the database, so you may want to investigate exactly what SQL is waiting on this event, and whether you could benefit from either a larger buffer cache or from using the keep and recycle pools to manage frequently accessed tables better. Otherwise, investigate the SQL that is performing the most IO and tune it to do less work.
Chris

  • Data warehouse problem plz help

hi, I've got a problem building my first warehouse.
First of all,
I have many operational databases, and I want to build a warehouse that collects the data from these DBs and stores it according to time...
Also, how do I connect a VB.NET application to the data warehouse to retrieve data and run queries?
Is this possible? Can anyone help, please?

"Because of this our server gets shut down automatically" - No. Just because the connection pool got suspended, the server should not go down; there is some other issue which you did not notice. For the Data Source to function properly, make sure that the initial and maximum connection limits have been set appropriately (preferably both equal), and make sure that the database is always up and running and has that many connections open. Check with the DBA for the DB connection limit settings.
    Raise a SR with support if you are not able to figure out the exact issue.
    Regards,
    Anuj

  • Compression and query performance in data warehouses

    Hi,
Using Oracle 11.2.0.3, we have a large fact table with bitmap indexes to the associated dimensions.
I understand bitmap indexes are compressed by default, so I assume they cannot be compressed further.
Is this correct?
I wish to try compressing the large fact table to see if this will reduce the I/O on reads and therefore give performance benefits.
ETL speed is fine; I just want to increase report performance.
Thoughts - has anyone seen significant gains in data warehouse report performance from compression?
Also, PCTFREE on the table is currently 10%.
As we only ever insert into the table, I am considering making this 1% to improve report performance.
Thoughts?
    Thanks

    First of all:
    Table Compression and Bitmap Indexes
    To use table compression on partitioned tables with bitmap indexes, you must do the following before you introduce the compression attribute for the first time:
    Mark bitmap indexes unusable.
    Set the compression attribute.
    Rebuild the indexes.
    The first time you make a compressed partition part of an existing, fully uncompressed partitioned table, you must either drop all existing bitmap indexes or mark them UNUSABLE before adding a compressed partition. This must be done irrespective of whether any partition contains any data. It is also independent of the operation that causes one or more compressed partitions to become part of the table. This does not apply to a partitioned table having B-tree indexes only.
    This rebuilding of the bitmap index structures is necessary to accommodate the potentially higher number of rows stored for each data block with table compression enabled. Enabling table compression must be done only for the first time. All subsequent operations, whether they affect compressed or uncompressed partitions, or change the compression attribute, behave identically for uncompressed, partially compressed, or fully compressed partitioned tables.
    To avoid the recreation of any bitmap index structure, Oracle recommends creating every partitioned table with at least one compressed partition whenever you plan to partially or fully compress the partitioned table in the future. This compressed partition can stay empty or even can be dropped after the partition table creation.
    Having a partitioned table with compressed partitions can lead to slightly larger bitmap index structures for the uncompressed partitions. The bitmap index structures for the compressed partitions, however, are usually smaller than the appropriate bitmap index structure before table compression. This highly depends on the achieved compression rates.
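
As a concrete illustration of that sequence, a minimal sketch against a hypothetical partitioned fact table SALES_FACT with a local bitmap index SALES_BIX (every affected index partition must be rebuilt; exact DDL depends on your partitioning scheme):

ALTER INDEX sales_bix UNUSABLE;                                -- 1. mark the bitmap index unusable
ALTER TABLE sales_fact MOVE PARTITION sales_2011q1 COMPRESS;   -- 2. set the attribute and compress existing rows
ALTER INDEX sales_bix REBUILD PARTITION sales_2011q1;          -- 3. rebuild the local index partition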

  • Configuration Dataset = 90% of Data Warehouse - Event Errors 31552

    Hi All,
I'm currently running SCOM 2012 R2 and have recently had some problems with the Data Warehouse data sync. We currently have around 800 servers in our production environment and no network devices. We use Orchestrator for integration with our call-logging system, and I believe this is where our problems started: we had a runbook which got itself into a loop and was constantly updating alerts, and it also contributed to a large number of state changes. We have resolved that problem now, but I started to receive alerts saying SCOM couldn't sync alert data, under event 31552.
    Failed to store data in the Data Warehouse.
    Exception 'SqlException': Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding. 
    One or more workflows were affected by this.  
    Workflow name: Microsoft.SystemCenter.DataWarehouse.StandardDataSetMaintenance 
    Instance name: Alert data set 
    Instance ID: XX
    Management group: XX
I have been researching problems with syncing alert data, and came across the queries to run the database maintenance manually. I ran that on the Alert instance and it took around 16.5 hours on the first night; then it ran fast (2 seconds) for most of the day, but at about the same time the next day it took another 9.5 hours, so I'm not sure why the results differ.
Initially it appeared that all of our datasets were out of sync; after the first night all appear to be in sync bar the hourly Performance dataset, which still has around 161 OutstandingAggregations. When I run the maintenance on Performance it doesn't appear to fix it. (It runs in about 2 seconds, successfully.)
I recently ran DWDatarp on the database to see how the Alert dataset was looking, and to my surprise I found that the Configuration dataset has blown out to take up 90% of the Data Warehouse; see the table below. Does anyone have any ideas on what might cause this, or how I can fix it?
Dataset name                                                                 Aggregation name     Max Age   Current Size, Kb
Alert data set                                                               Raw data                 400        132,224 (  0%)
Client Monitoring data set                                                   Raw data                  30              0 (  0%)
Client Monitoring data set                                                   Daily aggregations       400             16 (  0%)
Configuration dataset                                                        Raw data                 400    683,981,456 ( 90%)
Event data set                                                               Raw data                 100     17,971,872 (  2%)
Performance data set                                                         Raw data                  10      4,937,536 (  1%)
Performance data set                                                         Hourly aggregations      400     28,487,376 (  4%)
Performance data set                                                         Daily aggregations       400      1,302,368 (  0%)
State data set                                                               Raw data                 180        296,392 (  0%)
State data set                                                               Hourly aggregations      400     17,752,280 (  2%)
State data set                                                               Daily aggregations       400      1,094,240 (  0%)
Microsoft.Exchange.2010.Dataset.AlertImpact                                  Raw data                   7              0 (  0%)
Microsoft.Exchange.2010.Dataset.AlertImpact                                  Hourly aggregations        3              0 (  0%)
Microsoft.Exchange.2010.Dataset.AlertImpact                                  Daily aggregations       182              0 (  0%)
Microsoft.Exchange.2010.Reports.Dataset.Availability                         Raw data                 400            176 (  0%)
Microsoft.Exchange.2010.Reports.Dataset.Availability                         Daily aggregations       400              0 (  0%)
Microsoft.Exchange.2010.Reports.Dataset.TenantMapping                        Raw data                   7              0 (  0%)
Microsoft.Exchange.2010.Reports.Dataset.TenantMapping                        Daily aggregations       400              0 (  0%)
Microsoft.Exchange.2010.Reports.Transport.ActiveUserMailflowStatistics.Data  Raw data                   3         84,864 (  0%)
Microsoft.Exchange.2010.Reports.Transport.ActiveUserMailflowStatistics.Data  Hourly aggregations        7        407,416 (  0%)
Microsoft.Exchange.2010.Reports.Transport.ActiveUserMailflowStatistics.Data  Daily aggregations       182        143,128 (  0%)
Microsoft.Exchange.2010.Reports.Transport.ServerMailflowStatistics.Data      Raw data                   7          6,088 (  0%)
Microsoft.Exchange.2010.Reports.Transport.ServerMailflowStatistics.Data      Hourly aggregations       31         20,056 (  0%)
Microsoft.Exchange.2010.Reports.Transport.ServerMailflowStatistics.Data      Daily aggregations       182          3,720 (  0%)
    I have one other 31553 event showing up on one of the Management servers as follows,
    Data was written to the Data Warehouse staging area but processing failed on one of the subsequent operations.
    Exception 'SqlException': Sql execution failed. Error 2627, Level 14, State 1, Procedure ManagedEntityChange, Line 368, Message: Violation of UNIQUE KEY constraint 'UN_ManagedEntityProperty_ManagedEntityRowIdFromDAteTime'. Cannot insert duplicate key in
    object 'dbo.ManagedEntityProperty'. The duplicate key value is (263, Aug 26 2013  6:02AM). 
    One or more workflows were affected by this.  
    Workflow name: Microsoft.SystemCenter.DataWarehouse.Synchronization.ManagedEntity 
    Instance name: XX 
    Instance ID: XX
    Management group: XX
which from my reading means I'm likely in for an MS support call... :( But I just wanted to see if anyone has any information about the Configuration dataset, as I couldn't find much in my searching.

    Hi All,
The results of the MS support call were as follows. I don't recommend doing these steps without an MS support case; any damage you do is your own fault. These particular actions resolved our problems:
1. Regarding the Configuration dataset being so large.
This was caused by our AlertStage table, which was also very large. We truncated the AlertStage table and ran the maintenance tasks manually to clear this up. As I didn't require any of the alerts sitting in the AlertStage table, we simply did a straight truncation of the table. The document linked by MHG above shows the process of doing a backup & restore on the AlertStage table if you need to. It took a few days of running maintenance tasks to resolve this problem properly. As soon as the truncation had taken place, the Configuration dataset dropped in size to less than a gig.
    2. Error 31553 Duplicate Key Error
This was a problem with duplicate keys in the ManagedEntityProperty table. We identified rows which had duplicate information, which could be gathered from the events being logged on the Management Server.
We then updated a few of these rows to have a slightly different time from what was already in the database. We noticed that the event kept logging with a different row each time we updated the previous row. We ran the following query to find out how many rows actually had duplicates:
    select * from ManagedEntityProperty mep
    inner join ManagedEntity me on mep.ManagedEntityRowId = me.ManagedEntityRowId
    inner join ManagedEntityStage mes on mes.ManagedEntityGuid = me.ManagedEntityGuid
    where mes.ChangeDateTime = mep.FromDateTime
    order by mep.ManagedEntityRowId
This returned over 25,000 duplicate rows. Rather than replace the times for all the rows, we removed all duplicates from the database. (Best to have MS check this one out for you if you have a lot of data.)
After doing this there was a lot of data moving around the staging tables (I assume from the management server that couldn't communicate properly), so once again we truncated the AlertStage table as it wasn't keeping up. Once this was done, everything worked properly and all the queues stayed under control.
To confirm things had been cleared up, we checked that the AlertStage table had no entries and the ManagedEntityStage table had no entries. We also confirmed that the 31553 events stopped on the Management server.
    Hopefully this can help someone, or provide a bit more information on these problems.
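
For reference, the manual dataset maintenance mentioned above is commonly invoked with something like the following against the OperationsManagerDW database (a sketch based on the standard data warehouse schema; run it only under guidance from support):

-- run maintenance for one dataset, here the Alert dataset
DECLARE @DatasetId uniqueidentifier;
SELECT @DatasetId = DatasetId
FROM   StandardDataset
WHERE  SchemaName = 'Alert';   -- or 'Perf', 'State', 'Event', ...
EXEC   StandardDatasetMaintenance @DatasetId;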

  • Service manager console can't connect to Service manager data warehouse SQL reporting services

    When I start Service manager console, it gives this kind of error:
    The Service Manager data warehouse SQL Reporting Services server is currently unavailable. You will be unable to execute reports until this server is available. Please contact your system administrator. After the server becomes available please close your
    console and re-open to view reports.
    Also in EventViewer says:
    cannot connect to SQL Reporting Services Server. Message= An unexpected error occured while connecting to SQL Reporting Services server: System.Net.WebException: The request failed with HTTP status 401: Unauthorized.
    at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall)
    at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
    at Microsoft.EnterpriseManagement.Reporting.ReportingService.ReportingService2005.FindItems(String Folder, BooleanOperatorEnum BooleanOperator, SearchCondition[] Conditions)
    at Microsoft.EnterpriseManagement.Reporting.EnterpriseReporting.FindItems(String searchPath, IList`1 criteria, Boolean And)
    at Microsoft.EnterpriseManagement.Reporting.EnterpriseReporting.FindItems(String itemPath)
    at Microsoft.EnterpriseManagement.Reporting.EnterpriseReporting.FindItem(String itemPath, ItemTypeEnum[] desiredTypes)
    at Microsoft.EnterpriseManagement.Reporting.EnterpriseReporting.GetFolder(String path)
    at Microsoft.EnterpriseManagement.Reporting.EnterpriseReportingGroup.Initialize()
    at Microsoft.EnterpriseManagement.Reporting.ServiceManagerReportingGroup..ctor(DataWarehouseManagementGroup managementGroup, String reportingServerURL, String reportsFolderPath, NetworkCredential credentials)
    at Microsoft.EnterpriseManagement.Reporting.ServiceManagerReportingGroup..ctor(DataWarehouseManagementGroup managementGroup, String reportingServerURL, String reportsFolderPath)
    at Microsoft.EnterpriseManagement.UI.SdkDataAccess.ManagementGroupServerSession.TryConnectToReportingManagementGroup() Remediation = Please contact your Administrator.
We have a four-server setup where SCSM, SCDW, and the SQL servers for both are on different machines. Also, I have read that this could be an SPN problem, but this was worked on last week without the SPNs.

On the computer where you get the "SQL Reporting Services server is currently unavailable" message, please open Internet Explorer and try to connect to the URL http://<NameOfReportingServer>/reports
This should open the reporting website in IE. If it doesn't, you should check the proxy settings in IE. If the URL doesn't work in IE, it won't work in the SCSM console either (and vice versa).
    Andreas Baumgarten | H&D International Group
Actually, I can't access the reporting website. It asks me for credentials 3 times and then returns a blank page. An error also appears in the Event Viewer System log, with ID 4 and source Security-Kerberos.
    The Kerberos client received a KRB_AP_ERR_MODIFIED error from the server "accountname".
    The target name used was HTTP/"reporting services fqn". This indicates that the target server failed to decrypt the ticket provided by the client.
    This can occur when the target server principal name (SPN) is registered on an account other than the account the target service is using.
    Ensure that the target SPN is only registered on the account used by the server.
    This error can also happen if the target service account password is different than what is configured on the Kerberos Key Distribution Center for that target service.
    Ensure that the service on the server and the KDC are both configured to use the same password.
    If the server name is not fully qualified, and the target domain (domain.com) is different from the client domain (domain.com), check if there are identically named server accounts in these two domains,
    or use the fully-qualified name to identify the server.
    I can access the website directly from the server which hosts Reporting Services.
I also ran setspn -Q HTTP/"reporting services fqdn", with the result: NO SUCH SPN FOUND.

  • Accessing Data Warehouse with HTML DB

I have a test data warehouse database (10g) comprising seven dimension tables and one fact table. When I access one table at a time, the query runs fine, but when I join two or more dimension tables to the fact table, the result set comes out wrong. The performance is also very poor. Is HTML DB not capable of properly accessing data warehouse data?
    Here is the query I'm having problem with:
    SELECT p.prod_name, s.store_name, pr.week, sl.dollars
    FROM sales sl, product p, period pr, store s
    WHERE p.prodkey = sl.prodkey
    AND pr.perkey = sl.perkey
    AND p.prod_name LIKE 'Assam Gold%'
    OR p.prod_name LIKE 'Earl%'
    AND s.store_name LIKE 'Instant%'
    AND pr.month = 'NOV'
    AND pr.year = 2003
    ORDER BY p.prod_name, sl.dollars DESC
    Your input would be appreciated.

I doubt this was intentional, but you are not joining the store table to anything. You do filter the rows from that table with the AND s.store_name LIKE 'Instant%' predicate, but it is not joined to any of the other 3 tables. Your query will essentially return the number of rows from the other 3 tables multiplied by the number of rows returned from store. You might also think about grouping some of your predicates, for readability and possibly for correct logic:
SELECT p.prod_name, s.store_name, pr.week, sl.dollars
  FROM sales sl, product p, period pr, store s
 WHERE p.prodkey = sl.prodkey
   AND pr.perkey = sl.perkey
   -- Add missing predicate here
   -- AND s.something = sl, p, or pr .something
   -- end missing predicate
   AND (p.prod_name LIKE 'Assam Gold%'
        OR
        p.prod_name LIKE 'Earl%')
   AND s.store_name LIKE 'Instant%'
   AND pr.month = 'NOV'
   AND pr.year = 2003
 ORDER BY p.prod_name, sl.dollars DESC
Hope this helps,
    Tyler
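
An ANSI-join rewrite makes a missing join like this harder to overlook (a sketch; the store-to-sales join column storekey is a guess at the schema):

SELECT p.prod_name, s.store_name, pr.week, sl.dollars
FROM   sales sl
JOIN   product p ON p.prodkey  = sl.prodkey
JOIN   period pr ON pr.perkey  = sl.perkey
JOIN   store s   ON s.storekey = sl.storekey  -- hypothetical join column
WHERE  (p.prod_name LIKE 'Assam Gold%' OR p.prod_name LIKE 'Earl%')
AND    s.store_name LIKE 'Instant%'
AND    pr.month = 'NOV'
AND    pr.year  = 2003
ORDER BY p.prod_name, sl.dollars DESC;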

  • Availability data not visible in data warehouse

I'm having a problem with our data warehouse. I can't run, or even find, availability reports for some of the objects that are visible and clearly monitored in our SCOM. For example, I created a web transaction monitor with the wizard, but when I try to run an availability report from it, there is no object for it, so I cannot even run the report. I know about the 500-object limit and I have set the registry key to see more objects. We use SCOM 2012 R2 UR2.
Is there anything else I should check? Can I somehow run a SQL query against my data warehouse to see if there is any availability data?

    Hello SamiKoskivaara, 
Could you please check if event ID 31553 is being logged on one of your SCOM management servers?
    Event ID 31553:
    "Data was written to the Data Warehouse staging area but processing failed on one of the subsequent operations. Exception 'SqlException': Sql execution failed. Error 2627, Level
    14, State 1, Procedure ManagedEntityChange, Line 368, Message: Violation of UNIQUE KEY constraint 'UN_ManagedEntityProperty_ManagedEntityRowIdFromDAteTime'. Cannot insert duplicate key in object 'dbo.ManagedEntityProperty'. The duplicate key value is (184,
    Mar 1 2013 9:42AM). One or more workflows were affected by this...
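
To the last question in the original post: yes. As a sketch, something like the following against the OperationsManagerDW database shows whether hourly state (availability) data is being written at all (view name per the standard data warehouse datamart; adjust to your schema):

-- most recent hourly state rows, if any exist
SELECT TOP 10 *
FROM   State.vStateHourly
ORDER BY DateTime DESC;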

  • OBJECT TABLES in DATA WAREHOUSE

Can anyone give a high-level overview of the benefits as well as
drawbacks of using nested object tables as a permanent means of
storage in an Oracle data warehouse? In particular, is there a
performance enhancement/degradation, are there maintenance problems,
scalability issues etc. in comparison to standard HEAP tables?
    Most books that I have read suggest NOT to use object tables as
    a permanent means of storage.
    Thanks for any help that you can provide.

    Hi Hakan,
    it's not easy to give a quick answer here.
    a) you want to compare warehouse tables with spatial tables
To answer this properly, you should define what a warehouse table is and what a spatial table is. In this forum you will get, for spatial tables, the definition "A spatial table is a table with one or more columns of type SDO_GEOMETRY". How do you define a spatial table, and how do you define a warehouse table?
    b) you ask for performance
    Performance is not depending only on table type. Performance is depending on a lot of steps. For example on the work which has to be done to answer the question, can data read from buffer cache or must they be read from physical devices, execution plan .....
I don't know if there is an official statement from Oracle that would be useful for answering your query.
Can you make your question a bit more specific, please?
In my experience it is not advisable to store tables in different databases (instances) when they have to be queried together. If you split the tables across instances, execution has much more overhead because you have to talk to 2 instances, and it is very hard for the optimizer to find the best execution plan.
    Regards
    U. Martin
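
For reference, the two storage styles the original question compares (a minimal sketch with hypothetical order/item names):

-- nested-table storage
CREATE TYPE order_item_t AS OBJECT (product_id NUMBER, qty NUMBER);
/
CREATE TYPE order_item_tab_t AS TABLE OF order_item_t;
/
CREATE TABLE orders_nested (
  order_id NUMBER PRIMARY KEY,
  items    order_item_tab_t
) NESTED TABLE items STORE AS order_items_nt;

-- conventional heap design of the same data
CREATE TABLE order_items_heap (
  order_id   NUMBER,
  product_id NUMBER,
  qty        NUMBER
);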

  • Oracle Data Warehouse DB and OLAP_OBIEE DB [OBIEE11g]

    Hi Experts,
I have a problem joining the 2 DBs in an Analysis report.
    The scenario is this:
    Oracle Data Warehouse is DB_1 and OLAP_OBIEE is DB_2
    DB_1
    -col1
    -col2
    DB_2
    -col3
In an Analysis report, when they are queried or run separately, a report is generated as expected.
But when columns from these 2 DBs are joined together in one Analysis report, I always get 'No Results'.
Please help - how do I fix this issue? :(
    Thanks,

    Hi,
    check the following links:
    http://www.rittmanmead.com/2007/10/reporting-against-multiple-datasources-in-obiee/
    http://108obiee.blogspot.com/2009/01/fragmentation-in-obiee.html
    http://gerardnico.com/wiki/dat/obiee/multiple_subject_area
