Some questions on data warehousing

My questions are:
(1.) Is any dimension, except the time dimension, a candidate for an SCD (slowly changing dimension)? For example, a sales rep dimension can have a slowly changing column such as the state the rep belongs to, and an employee dimension can have a column
such as highest education level that changes slowly over time. So can any dimension, other than the time dimension, have columns that are candidates for SCD handling?
(2.) When designing a DW, do you have to think about SCDs at design time, or will the need for an SCD only arise later when the system is running live? Is it best practice to review every column in the dimensions at design time, determine whether
its data can change slowly over time, and make room for that, or do we handle it as and when the requirement comes up, after the system goes live?
(3.) Can a dimension have more than one column that changes slowly over time? For example, in a product dimension both the product price and the supplier change slowly over time. What is the solution for this scenario?
(4.) What is the MOST COMMON solution practiced in real life for an SCD problem? Is it creating more than one row per member, with a version number or begin/end dates?
(5.) Does a solution to an SCD require rebuilding the fact table?

1. Yes.
2. We designate them as SCDs at design time itself. If an attribute does not need historic analysis, we make it Type 1, which preserves only the latest value.
3. The same dimension can have attributes (columns) that are handled differently, i.e. some columns might be processed the Type 1 way (overwrite with the latest value), some the historic Type 2 way (multiple records, one per intermediate value) and some as fixed attributes.
4. The common method is to use ValidFrom and ValidTo date fields. The latest record has ValidTo set to NULL to indicate that it is the currently valid record. We also add a bit field IsCurrent for quick retrieval; it is 1 only for the latest record of each member (see the sketch below).
5. Yes, because the surrogate key changes when you implement Type 2 changes. Each intermediate value gets a different surrogate key, and the fact table has to carry the correct surrogate key reference so that you get the attribute value that was in effect for the required period.
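To make the Type 2 mechanics concrete, here is a minimal SQL sketch in SQL Server style, matching the bit/IsCurrent suggestion above. The table and column names (DimProduct, ProductKey, FactSales, StagingSales) are illustrative, not taken from the thread. It shows a dimension keyed by a surrogate key with ValidFrom/ValidTo/IsCurrent as in point 4, the expire-and-insert step when an attribute changes, and the date-range lookup that gives a fact row the surrogate key that was current on the transaction date, as in point 5.

-- Hypothetical Type 2 dimension: one row per version of a product
CREATE TABLE DimProduct (
    ProductKey   INT IDENTITY(1,1) PRIMARY KEY, -- surrogate key, new value per version
    ProductId    INT           NOT NULL,        -- business (natural) key
    ProductName  VARCHAR(100)  NOT NULL,
    UnitPrice    DECIMAL(10,2) NOT NULL,        -- slowly changing attribute
    SupplierName VARCHAR(100)  NOT NULL,        -- another slowly changing attribute
    ValidFrom    DATE NOT NULL,
    ValidTo      DATE NULL,                     -- NULL means "still current"
    IsCurrent    BIT  NOT NULL DEFAULT 1
);

-- When the price of product 42 changes on 2014-01-01:
-- 1) expire the current version
UPDATE DimProduct
SET ValidTo = '2013-12-31', IsCurrent = 0
WHERE ProductId = 42 AND IsCurrent = 1;

-- 2) insert the new version, which receives a new surrogate key
INSERT INTO DimProduct (ProductId, ProductName, UnitPrice, SupplierName, ValidFrom, ValidTo, IsCurrent)
VALUES (42, 'Widget', 12.50, 'Acme Ltd', '2014-01-01', NULL, 1);

-- Fact load: look up the surrogate key that was valid on the transaction date
-- (FactSales and StagingSales are assumed to exist for this sketch)
INSERT INTO FactSales (ProductKey, OrderDate, Amount)
SELECT d.ProductKey, s.OrderDate, s.Amount
FROM StagingSales s
JOIN DimProduct d
  ON d.ProductId = s.ProductId
 AND s.OrderDate >= d.ValidFrom
 AND (d.ValidTo IS NULL OR s.OrderDate <= d.ValidTo);

The same pattern extends to any number of slowly changing columns in the dimension (point 3), since each change simply produces a new version row.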
Visakh

Similar Messages

  • Some questions about data traffic

    Hi Guys,
    I live in Hong Kong and have an iPhone 3G at the latest version 2.2. I am on a 3 HK plan and all is well when I am in Hong Kong. My work pays for all my phone costs and up until now there have not been any issues. The problem is that I travel a lot with my work to many countries in Asia and some in Europe, and when I am there, most times I wait until I have access to a wireless network to get my emails, or, as I always carry an AirPort Express with me, I wait until I am in the hotel room, set up my wireless network and away I go. The problem is that sometimes I need to get my emails, so I turn on data roaming and download my emails. I have the account set to manual pull so I can monitor the data that I download. The current problem is that the IT department has flagged my increased costs when roaming compared to my colleagues who roam with a BlackBerry, and in some cases it is 10 times the cost. For example, for 3 days in China last month, my bill was about 1000 HKD, which equates to about 10MB of data. My colleague who was there at the same time with a BlackBerry, and who would have a similar amount of emails, had a cost of about 90 HKD, which equates to about 900b of data.
    I decided to check today and so reset my data usage monitor and downloaded 9 emails of normal size, no pictures just text and it came to 406KB. Surely this cannot be correct. If it is, and this is my question, is there anywhere that I can limit the amount of data that is downloaded for each email and is there anywhere I can set the emails downloaded for work to be text only and not HTML. My work email is syncing using activesync with my work exchange server.
    If I cannot find a way to overcome this, I will either have to stop using data roaming when I am abroad or I will have to take back my blackberry and start using it again which is not what I want at all.
    I love the iPhone and actually my IT Manager does not want me to have to use the blackberry and cannot believe it is so data intensive.
    Any help from you fellow iPhone lovers would be much appreciated.
    Thanks,
    Paul

    I don't know about your questions 1 and 2. It seems like it could be a workout that wasn't ended, or just a glitch.
    What was your goal? It will measure whatever you set it to measure. Calories burned, distance, run more often, etc.
    I set a run more often goal of at least 3 times in 16 weeks. It is averaging and making a calculated guess from that average as to when those runs would take place. So if I run three days in a row and not again that week, I will be ahead of my goal, and then the system catches up and I'm on target again.
    I imagine it's doing the same for your goal.

  • Some question on IDOC (Control Record/Data Record/Status Record)

    Dear all,
    I am new to this area and would like to ask some questions on this topic.
    When I view an IDOC via WE02, each IDOC consists of a control record, data records and status records.
    Questions:
    I notice that the data records consist of many segments (e.g. E1EDK01, etc.) which are used to store application data.
    1 - My question is: do I have to manually create all these segments and map them to my application fields one by one (i.e. when I want to create a brand new message type from scratch)?
    2 - If the answer to question no. 1 is yes, how do I do it? What are the transaction codes to create it? Can you show me the steps?
    3 - I don't have to create the control record and the status record for my new message type, right? Those field values will automatically be pulled from the partner profile and system status messages, am I correct?
    Thanks.
    Tuff

    Hi Tuff,
    As with everything in SAP, with IDOCs too there are
    1) Standard IDOCs
    2) Extended standard IDOCs (an enhancement to an IDOC, to accommodate custom values)
    3) Custom IDOCs
    And every IDOC has,
    Control record - EDIDC structure - This mostly reflects the partner profile information, along with a few more details which are used for IDOC extension, sequencing etc.
    Data Records - EDID4 Structure - These records contain the actual business data of the document in concern. So for ORDERS05 it would contain order details, INVOIC02 - Invoice details so on...
    Status Records - These records capture the status of an IDOC from the time it is received/sent from your system and a corresponding business document is created/changed. So this will have messages like "IDOC sent to the port OK" etc which are status from the communication layer(ALE) to application specific messages like "Sales Order XXX created" or "Invalid Material" etc.
    You would have noticed something called a process code in the partner profile; this is associated with an FM (or workflow task etc.) which has the business logic coded in.
    So in the case of an inbound IDOC, the sending system updates the IDOC - control and data records - and sends it to the receiving system. On the receiving system the IDOC's control record is validated against the partner profiles that have been set up; if an entry is found, then using the process code it finds the associated FM, which decodes the data from the IDOC data records as per the IDOC type and then uses it to post data into SAP (via BDC, batch input, BAPI etc.).
    And all this while the Status records are being updated accordingly.
    So with the above context will try to answer your questions,
    1 - My question is: do I have to manually create all these segments and map them to my application fields one by one (i.e. when I want to create a brand new message type from scratch)?
    In the case of a custom IDOC, yes, you will have to.
    In the case of a standard IDOC, you wouldn't have to; you just have to set up the necessary configuration (partner profile, process code etc.).
    In the case of a standard IDOC extended to accommodate some custom values (for which there are no fields in the standard IDOC - let us say you have added some new fields on VA01), you can still use the standard process code and the standard FM associated with it; SAP provides several function exits in these FMs which you can leverage to add your custom logic.
    2 - If the answer to question no. 1 is yes, how do I do it? What are the transaction codes to create it? Can you show me the steps?
    There are several documents available on the net and on SDN detailing the step-by-step approach for all three cases above;
    just search for "step by step guide for IDOCs - SAP".
    3 - I don't have to create the control record and the status record for my new message type, right? Those field values will automatically be pulled from the partner profile and system status messages, am I correct?
    Again it depends: in the case of a standard IDOC you wouldn't have to, but if you have customizations/enhancements then you might have to.
    For example, updating the control record to indicate that you have extended the standard IDOC, or appending custom messages to the status record as per the business logic.
    Try out the examples you find on the net and post any specific questions you might have.
    Regards,
    Chen

  • Data warehousing question/best practices

    I have been given the task of copying a few tables from our production database to a data warehousing database on a once-a-day (overnight) basis. The number of tables will grow over time; currently it is 10. I am interested in not only task success but also best practices. Here's what I've come up with:
    1) drop the table in the destination database.
    2) re-create the destination table from the script provided by SQL Developer when you click on the 'SQL' tab while you're viewing the table.
    3) INSERT INTO the destination table from the source table using a database link. Note: I am not aware of any columns in the tables themselves which could be used to filter added/deleted/modified rows only.
    4) After data import, create primary key and indexes.
    Questions:
    1) SQL Developer included the following lines when generating the table creation script:
    <table creation DDL commands>
    then
    PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE (INITIAL 251658240 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBLSPC_PGROW"
    it generated this code snippet for the table, the primary key and every index.
    Is this necessary to include in my code if they are all default values? For example, one of the indexes gets scripted as follows:
    CREATE INDEX "XYZ"."PATIENT_INDEX" ON "XYZ"."PATIENT" ("Patient")
    -- do I need the following four lines?
    PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
    STORAGE(INITIAL 60817408 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBLSPC_IGROW"
    2) Anyone with advice on best practices for warehousing data like this, I am very willing to learn from your experience.
    Thanks in advance,
    Carl

    I would strongly suggest not dropping and recreating tables every day.
    The simplest option would be to create a materialized view on the destination database that queries the source database and to do a nightly refresh of that materialized view. You could then create a materialized view log on the source table and then do an incremental refresh of the materialized view.
    You can schedule the refresh of the materialized view either in the materialized view definition, as a separate job, or by creating a refresh group and adding one or more materialized views.
    Justin
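    For reference, here is a minimal sketch of the approach Justin describes, assuming a source table XYZ.PATIENT and a database link named PROD_LINK (both names are illustrative, not from the thread). The materialized view log on the source enables fast (incremental) refresh, and the materialized view on the destination is scheduled to refresh nightly.

    -- On the SOURCE (production) database: record changes so incremental refresh is possible
    CREATE MATERIALIZED VIEW LOG ON xyz.patient WITH PRIMARY KEY;

    -- On the DESTINATION (warehouse) database: build the MV over the database link
    -- and refresh it incrementally every night at 02:00
    CREATE MATERIALIZED VIEW patient_mv
      BUILD IMMEDIATE
      REFRESH FAST
      START WITH SYSDATE
      NEXT TRUNC(SYSDATE) + 1 + 2/24
    AS
    SELECT * FROM xyz.patient@prod_link;

    -- Indexes can be created on the materialized view just like on an ordinary table
    CREATE INDEX patient_mv_idx ON patient_mv ("Patient");

    With this in place there is no need to drop and re-create tables; the nightly refresh pulls only the changed rows recorded in the log.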

  • Some questions on versioning and synchronizing metadata

    Hi all!
    I am quite new to warehousing and Oracle Warehouse Builder, so I have some questions regarding some common issues. I would appreciate it if you guys who have experience in this domain would share some good-practice knowledge :)
    I am using OWB 10.2
    So first of all I would like to know whether you have a proposal for version control and for synchronizing projects between team members when working on a bigger project, team members that don't work on the same repository (because I saw that OWB has integrated multiuser support for handling object locks and user sessions).
    I saw that one way of migrating data from one place to another is using the import/export options integrated in OWB. This creates .mdl files which are a kind of "dump" of the metadata. The thing with these .mdl files which I don't think makes them a good way to synchronize is that the .mdx and .xml files contained in the .mdl (which is a kind of zip) hold a lot of information (like creation date, some timestamps, etc.) which is always updated on export, so if we synchronized these files, say using CVS, we would always get differences between the files although they contain the same thing, only with changed timestamps.
    Another issue with this is that we have two alternatives: dump the whole project, which makes it awkward to synchronize a single file between users, especially on a big project; or create a separate .mdl file for each object in the project (each mapping, each table, etc.) and synchronize each object's file, which is inefficient when re-importing each file individually.
    So please, if you can share the way you work on a big project with many implementers with OWB, I would really appreciate it.
    Another thing I would like to know is: is there a way to generate from an existing project (like one created with OWB) the equivalent OMB command dump (maybe as a Tcl script)? I saw that the way experienced users implement warehousing is using Tcl with the OMB language. I downloaded the example warehouse project from Oracle and saw that it is made entirely of Tcl scripts (so no .mdl file involved). It would be nice to have the OMB commands generated from an existing project.
    I see an OWB project like a database which can be built up from OMB commands alone, with OWB as a graphical tool to do this (the same as constructing a database from DDL commands only, or using SQL Developer to do it); this is why I am asking about a way of dumping the OMB commands for creating an OWB project.
    Please give me some advice, and correct me if I said some dumb things :D but I really am new to warehousing and I would really appreciate it if you guys with experience could share some information.
    Thank you very much!
    Alex21

    Depends. Having everyone working on the same project certainly simplifies things a lot regarding merging and is generally my preference. But I also recognize that some projects are complex enough that people wind up stepping on each other's toes if this is the case. In those cases, though, I try to minimize the issue of merging changes by having common structural objects (code libraries, tables, views, etc) retained in a single, strictly controlled, central project schema and having the developer's personal work areas reference them by synonym, thus being unable to alter them to the detriment of others.
    If they want to change a common object, they then need to drop their synonym and make a local copy which they can alter, and then there is a managed process by which these get merged back into the main project schema.
    This way any changes MUST go through a central schema, we can put processes in place to notify all of the team of any impending changes, and can also script updates across the team.
    Every hour a script runs automatically that checks for dropped synonyms and notifies the project leader. It especially checks for two developers who have built local copies of the same object and notifies each that they need to coordinate with each other as they are risking a conflict. When a structural change is submitted back to the central shared schema, it is added to a batch that is installed at end of business and a list of those impending changes is circulated to the team along with impact analysis for dependencies. The install script updates the main schema, then also drops the local copy of the object in the developer's schema who made the change and re-establishes the synonym there to get back to status quo for the change monitoring. Finally, it then updates itself in all of the developer areas via OMBPlus. So, each morning the developers return to an updated and synched environment as far as the underlying structure.
    This takes care of merging structural issues, and the management of the team should minimize other metadata merging by managing the worklist of who is to be working on a given mapping or process flow at a given time. Anyone found to be making extraneous changes to a mapping or process flow when it is not in their job queue, without getting pre-approval, will be spoken to VERY firmly as this is counter to policy. And yes, OWB objects such as mappings are then also coordinated to the central project via import/export. OMBPlus scripts also propagate these changes daily across the team as well.
    Yep, there is a whole lot of scripting involved to get set up... but it saves a ton of time merging things and solving conflicts down the road.
    Cheers,
    Mike
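    As a rough illustration of the synonym mechanism Mike describes (the schema name CENTRAL_PROJ and the table CUSTOMER_DIM are hypothetical, not from the thread): developers reference the shared objects through synonyms, and taking a local working copy means dropping the synonym and copying the object.

    -- In a developer's schema: reference the central project schema through a synonym
    CREATE SYNONYM customer_dim FOR central_proj.customer_dim;

    -- To work on the object locally: drop the synonym and take a private copy
    -- (a simplified copy; constraints and indexes would be scripted separately)
    DROP SYNONYM customer_dim;
    CREATE TABLE customer_dim AS SELECT * FROM central_proj.customer_dim;

    -- After the change has been merged back into CENTRAL_PROJ by the managed
    -- install process, the local copy is dropped and the synonym re-created:
    -- DROP TABLE customer_dim;
    -- CREATE SYNONYM customer_dim FOR central_proj.customer_dim;

    The monitoring script Mike mentions can then detect local copies simply by looking for dropped synonyms in the developer schemas.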

  • ALM - Visual Studio Database Projects and Data Warehousing

    I have moved onto a team doing data warehousing. In the past I have performed application lifecycle management using Visual Studio and TFS for web and test projects. I am helping my current team define a plan to start using TFS 2010 and Visual Studio 2012
    to manage our schema and T-SQL code (develop, build, and deploy). I was wondering if there is a set of best practices specific to databases with large data volumes on multi-database Windows servers. Some of our deployment edge cases involve long-running
    data migrations (which we can manage in an ETL tool) and schema upgrades to larger tables. That last one is what I need some guidance on. Any guidance on applying ALM best practices to warehouses with larger volumes of data would be greatly appreciated. Thanks.
    Eric Aldinger

    Any ideas for Eric? Is this the right forum?
    Thanks!
    Ed Price (a.k.a. User Ed), SQL Server Customer Program Manager

  • Newbie Question - Live Data

    Hi All,
    I am very new to the world of BI but have been programming with MS SQL Server and Crystal Reports for the past 15 years.  I'm looking into SQL BI to replace our Crystal Reports suite and provide additional reporting functionality such
    as drilldown capability and allowing users to edit/build their own reports.  Some reports will require drilldown functionality and some do not, also, some reports require live data and others do not.  A live data report might
    include employee time punches, or an open orders report run at the end of the day.
    To build drilldown reports, I understand that I must create data cubes which require Dimension and Fact tables that are generally populated by a data warehousing process using SSIS.  In doing that, I no longer have live
    data and the warehouse tables must be updated at some point.
    I believe that if the report does not require drill-down functionality, you don't have to create a cube and can therefore connect directly to the live data.
    What are the generally accepted practices for these scenarios?  Should I build reports directly against a live system in SQL reports?   If not, how would I go about updating the warehouse tables as needed?
    Am I correct about the usage of the Dimension and Fact tables?  Do they have to be based on a data warehouse?  Could a cube pull data from a live table?
    I really want to implement BI correctly, so this information would be of great help to me.
    Thanks

    There is a myth that Business Intelligence (which includes Analysis Services) replaces existing reporting in systems. This is not the case, even though people are trying to do it or being "told" to do it in their current environment.
    My suggestion is to separate analysis of data from real-time reporting. Analysis might include showing a trend of employee turnover over the last 5 years to see if there is a growing problem. Real-time reporting might be getting a list of new employees
    in the last 3 months who have not finished all required training.
    The real-time data can be updated by an employee today and re-reported the same day. The analysis of data would be run once a month, with the report's end month lagging two months behind the current month because
    the data is not yet complete.
    Now to answer your questions:
    1. What are the generally accepted practices for these scenarios?  Should I build reports directly
    against a live system in SQL reports?   If not, how would I go about updating the warehouse tables as needed?
    --> If you want to build the data warehouse to do the analysis type (Business Intelligence) reporting,
    you will want to do it according to Best Practices in the community and Dimensions and Facts (Kimball Methodology) is a standard that holds up very well with Analysis Service cubes, even though there are other options - Data Vault or Inmon methods. Reports
    in this scenario should not be against live data. SSIS is the Microsoft tool for ETL (Extract, Transform and Load) into a Data Warehouse.
    You should only build those types of reports against real-time data as the example above with current employees.
    2. Am I correct about the usage of the Dimension and Fact tables? Do they have to be based on a data warehouse? Could a cube pull data from a live table?
    --> Yes, you are correct; yes, it is best in a data warehouse; and yes, a cube can pull live data. BUT
    to pull live data the cube would have to be what is called ROLAP-connected, and direct queries would run against live data, which can cause headaches (blocking, long-running queries) in the transaction system. A company I worked for in the past had a replicated database
    for real-time reporting that was 2-15 seconds behind the transaction system and it worked well - but not with a cube running on top of it. Dimensions and Facts are best for a cube.
    Hope this helps.
    http://www.kimballgroup.com/
    http://www.inmon.com/
    https://www.youtube.com/user/DataVaultAcademy
    Thomas
    TheSmilingDBA Thomas LeBlanc MCITP 2008 DBA

  • SQL Model Clause / Example 2 in Data Warehousing Guide 11g, Chapter 24

    Hi SQL-Experts
    I have a RH 5.7/Oracle 11.2-Environment!
    The sample schemas are installed!
    I executed as in Example 2 in Data Warehousing Guide 11G/Chapter 24:
    CREATE TABLE currency (
       country         VARCHAR2(20),
       year            NUMBER,
       month           NUMBER,
       to_us           NUMBER);
    INSERT INTO currency
    (SELECT distinct
    SUBSTR(country_name,1,20), calendar_year, calendar_month_number, 1
    FROM countries
    CROSS JOIN times t
    WHERE calendar_year IN (2000, 2001, 2002));
    UPDATE currency SET to_us = .74 WHERE country = 'Canada';
    and then:
    WITH  prod_sales_mo AS       --Product sales per month for one country
    SELECT country_name c, prod_id p, calendar_year  y,
       calendar_month_number  m, SUM(amount_sold) s
    FROM sales s, customers c, times t, countries cn, promotions p, channels ch
    WHERE  s.promo_id = p.promo_id AND p.promo_total_id = 1 AND
           s.channel_id = ch.channel_id AND ch.channel_total_id = 1 AND
           s.cust_id=c.cust_id  AND
           c.country_id=cn.country_id AND country_name='France' AND
           s.time_id=t.time_id  AND t.calendar_year IN  (2000, 2001,2002)
    GROUP BY cn.country_name, prod_id, calendar_year, calendar_month_number
                        -- Time data used for ensuring that model has all dates
    time_summary AS(  SELECT DISTINCT calendar_year cal_y, calendar_month_number cal_m
      FROM times
      WHERE  calendar_year IN  (2000, 2001, 2002)
                       --START: main query block
    SELECT c, p, y, m, s,  nr FROM (
    SELECT c, p, y, m, s,  nr
    FROM prod_sales_mo s
                       --Use partition outer join to make sure that each combination
                       --of country and product has rows for all month values
      PARTITION BY (s.c, s.p)
         RIGHT OUTER JOIN time_summary ts ON
            (s.m = ts.cal_m
             AND s.y = ts.cal_y
    MODEL
      REFERENCE curr_conversion ON
          (SELECT country, year, month, to_us
          FROM currency)
          DIMENSION BY (country, year y,month m) MEASURES (to_us)
                                    --START: main model
       PARTITION BY (s.c c)
       DIMENSION BY (s.p p, ts.cal_y y, ts.cal_m m)
       MEASURES (s.s s, CAST(NULL AS NUMBER) nr,
                 s.c cc ) --country is used for currency conversion
       RULES (
                          --first rule fills in missing data with average values
          nr[ANY, ANY, ANY]
             = CASE WHEN s[CV(), CV(), CV()] IS NOT NULL
                  THEN s[CV(), CV(), CV()]
                  ELSE ROUND(AVG(s)[CV(), CV(), m BETWEEN 1 AND 12],2)
               END,
                          --second rule calculates projected values for 2002
          nr[ANY, 2002, ANY] = ROUND(
             ((nr[CV(),2001,CV()] - nr[CV(),2000, CV()])
              / nr[CV(),2000, CV()]) * nr[CV(),2001, CV()]
             + nr[CV(),2001, CV()],2),
                          --third rule converts 2002 projections to US dollars
          nr[ANY,y != 2002,ANY]
             = ROUND(nr[CV(),CV(),CV()]
               * curr_conversion.to_us[ cc[CV(),CV(),CV()], CV(y), CV(m)], 2)
    ORDER BY c, p, y, m)
    WHERE y = '2002'
    ORDER BY c, p, y, m;
    I got the following error:
    ORA-00947: not enough values
    00947. 00000 -  "not enough values"
    *Cause:   
    *Action:
    Error at Line: 39 Column: 83
    But when I changed the part
    curr_conversion.to_us[ cc[CV(),CV(),CV()], CV(y), CV(m)], 2)
    of the 3rd rule to
    curr_conversion.to_us[ cc[CV(),CV(),CV()] || '', CV(y), CV(m)], 2)
    or
    curr_conversion.to_us[ cc[CV(),CV(),CV()] || null, CV(y), CV(m)], 2)
    it worked!
    My questions:
    1. Can anyone explain to me why it works with the change and why it didn't work before?
    2. Rule 3 does not have the same meaning as its comment - is this an error, or did I misunderstand something?
    The comment says "third rule converts 2002 projections to US dollars", but the left side has y != 2002. Thanks for any help!
    regards
    hqt200475
    Edited by: hqt200475 on Dec 20, 2012 4:45 AM


  • I have some questions regarding setting up a software RAID 0 on a Mac Pro

    I have some questions regarding setting up a software RAID 0 on a Mac pro (early 2009).
    These questions might seem stupid to many of you, but, as my last, in fact my one and only, computer before the Mac Pro was a IICX/4/80 running System 7.5, I am a complete novice regarding this particular matter.
    A few days ago I installed a WD3000HLFS VelociRaptor 300GB in bay 1, and moved the original 640GB HD to bay 2. I now have 2 bootable internal drives, and currently I am using the VR300 as my startup disk. Instead of cloning from the original drive, I have reinstalled the Mac OS, and all my applications & software onto the VR300. Everything is backed up onto a WD SE II 2TB external drive, using Time Machine. The original 640GB has an eDrive partition, which was created some time ago using TechTool Pro 5.
    The system will be used primarily for photo editing, digital imaging, and to produce colour prints up to A2 size. Some of the image files, from scanned imports of film negatives & transparencies, will be 40MB or larger. Next year I hope to buy a high resolution full frame digital SLR, which will also generate large files.
    Currently I am using Apple's bundled iPhoto, Aperture 2, Photoshop Elements 8, Silverfast Ai, ColorMunki Photo, EZcolor and other applications/software. I will also be using Photoshop CS5, when it becomes available, and I will probably change over to Lightroom 3, which is currently in Beta, because I have had problems with Aperture, which, until recent upgrades (HD, RAM & graphics card) to my system, would not even load images for print. All I had was a blank preview page, and a constant, frozen "loading" message - the symbol underneath remained static, instead of revolving!
    It is now possible to print images from within Aperture 2, but I am not happy with the colour fidelity, whereas it is possible to produce excellent, natural colour prints using its "minnow" sibling, iPhoto!
    My intention is to buy another 3 VR300s to form a 4 drive Raid 0 array for optimum performance, and to store the original 640GB drive as an emergency bootable back-up. I would have ordered the additional VR300s already, but for the fact that there appears to have been a run on them, and currently they are out of stock at all, but the more expensive, UK resellers.
    I should be most grateful to receive advice regarding the following questions:
    QUESTION 1:
    I have had a look at the RAID setting up facility in Disk Utility and it states: "To create a RAID set, drag disks or partitions into the list below".
    If I install another 3 VR300s, can I drag all 4 of them into the "list below" box, without any risk of losing everything I have already installed on the existing VR300?
    Or would I have to reinstall the OS, applications and software again?
    I mention this, because one of the applications, Personal accountz, has a label on its CD wallet stating that the Licence Key can only be used once, and I have already used it when I installed it on the existing VR300.
    QUESTION 2:
    I understand that the failure of just one drive will result in all the data in a Raid 0 array being lost.
    Does this mean that I would not be able to boot up from the 4 drive array in that scenario?
    Even so, it would be worth the risk to gain the optimum performance provide by Raid 0 over the other RAID setup options, and, in addition to the SE II, I will probably back up all my image files onto a portable drive as an additional precaution.
    QUESTION 3:
    Is it possible to create an eDrive partition, using TechTool Pro 5, on the VR300 in bay 1?
    Or would this not be of any use anyway, in the event of a single drive failure?
    QUESTION 4:
    Would there be a significant increase in performance using a 4 x VR300 drive RAID 0 array, compared to only 2 or 3 drives?
    QUESTION 5:
    If I used a 3 x VR300 RAID 0 array, and installed either a cloned VR300 or the original 640GB HD in bay 4, and I left the Startup Disk in System Preferences unlocked, would the system boot up automatically from the 4th. drive in the event of a single drive failure in the 3 drive RAID 0 array which had been selected for startup?
    Apologies if these seem stupid questions, but I am trying to determine the best option without foregoing optimum performance.

    Well said.
    Steps to set up RAID
    Setting up a RAID array in Mac OS X is part of the installation process. This procedure assumes that you have already installed Mac OS 10.1 and the hard drive subsystem (two hard drives and a PCI controller card, for example) that RAID will be implemented on. Follow these steps:
    1. Open Disk Utility (/Applications/Utilities).
    2. When the disks appear in the pane on the left, select the disks you wish to be in the array and drag them to the disk panel.
    3. Choose Stripe or Mirror from the RAID Scheme pop-up menu.
    4. Name the RAID set.
    5. Choose a volume format. The size of the array will be automatically determined based on what you selected.
    6. Click Create.
    Recovering from a hard drive failure on a mirrored array
    1. Open Disk Utility in (/Applications/Utilities).
    2. Click the RAID tab. If an issue has occurred, a dialog box will appear that describes it.
    3. If an issue with the disk is indicated, click Rebuild.
    4. If Rebuild does not work, shut down the computer and replace the damaged hard disk.
    5. Repeat steps 1 and 2.
    6. Drag the icon of the new disk on top of that of the removed disk.
    7. Click Rebuild.
    http://support.apple.com/kb/HT2559
    Drive A + B = VOLUME ONE
    Drive C + D = VOLUME TWO
    What you put on those volumes is of course up to you and easy to do.
    A system really only needs to be backed up "as needed" like before you add or update or install anything.
    /Users can be backed up hourly, daily, weekly schedule
    Media files as needed.
    Things that hurt performance:
    Page outs
    Spotlight - disable this for boot drive and 'scratch'
    SCRATCH: Temporary space; erased between projects and steps.
    http://en.wikipedia.org/wiki/Standard_RAID_levels
    (normally I'd link to Wikipedia but I can't load right now)
    Disk drives are the slowest component, so tackling that has always made sense. Easy way to make a difference. More RAM only if it will be of value and used. Same with more/faster processors, or graphic card.
    To help understand and configure your 2009 Nehalem Mac Pro:
    http://arstechnica.com/apple/reviews/2009/04/266ghz-8-core-mac-pro-review.ars/1
    http://macperformanceguide.com/
    http://www.macgurus.com/guides/storageaccelguide.php
    http://www.macintouch.com/readerreports/harddrives/index.html
    http://macperformanceguide.com/OptimizingPhotoshop-Configuration.html
    http://kb2.adobe.com/cps/404/kb404440.html

  • Some Questions on Adobe PDF Forms

    Hi,
    I have some questions on Adobe Forms development (especially Adobe Forms in ABAP)
    1)     In a form interface, in the Code Initialization, can we use object-oriented syntax such as class method calls, etc.?
    2)     Can we declare global data variables with reference to class references / interfaces?
    3)     Can we use the latest ECC 6.0 enhancement framework to add code to the standard form interface to enhance/enrich some logic within the form? This is to avoid copying the form interface into a Z version in order to enrich the form via coding.
    Please let me know if this is possible at all, and how.
    Will award points.
    Thanking you in advance.
    Regards,
    Aditya

    That is possible, of course. You should use the PDFObject API for those requirements.
    1. define a xml schema with all data given by SAP system.
    2. create a form template which uses this schema as data connection
    3. implement a java application (i.e. J2EE application) which:
    - gets the data from SAP system and generates some pdf files (using PDFObject API)
    - reads pdf files back to xml and submits the data to a BAPI
    Regards
    Sebastian

  • Basic questions on data modeling

    Hi experts,
    I have some basic questions regarding data modeling within MDM. I understand the available table types and the concept of lookup fields. I know that the MDM data modeling concept is different from the relational concept. But having a strong database background, my first step was to design a relational data model which I would like to transfer to an MDM repository. Unfortunately I didn't find good information material on this. So here are some questions; maybe you can help me:
    1) Is it the right approach to model n:m relationships with multivalued lookup fields? E.g. main table Users with lookup field from subtable SapAccounts (a user can have accounts in different SAP systems, that means more than one account).
    2) Has a record always be unique in MDM repositories (e.g. should we use Auto ID's in every table or do we have to mark a combination of fields as unique)? Is a composite key of 2 or more fields represented with marking these fields as unique?
    3) The concept of relationships in MDM is only based on relationships between single records (not valid for all records in a table)? Is it necessary to define all relationships similar to the relational data model in MDM? Is there something similar to referential integrity in MDM?
    4) Is it possible to change the main table to a sub table later on if we realize that it has also to be used as a lookup table for another table (when extending the data model) or do we have to create a new repository from scratch?
    Thank you for your answers.
    Regards, bd

    Yes, you are correct. It is quite difficult to map a relational database to an MDM one. But again, MDM is not 'just' a database. It holds much more 'master' information compared to any relational DB.
    1) Is it the right approach to model n:m relationships with multivalued lookup fields? E.g. main table Users with lookup field from subtable SapAccounts (a user can have accounts in different SAP systems, that means more than one account).
    Yes. Here you need to use multivalued (MV) lookup tables, or you can also try qualified tables if it gets more complex.
    2) Has a record always be unique in MDM repositories (e.g. should we use Auto ID's in every table or do we have to mark a combination of fields as unique)? Is a composite key of 2 or more fields represented with marking these fields as unique?
    The concept of uniqueness differs here in that you also have something called Display Fields (DF). A combination of DFs can also be treated as a unique key. For instance, while importing records, if you select these DFs as a combination you will eliminate any possible duplicates based on that combination. An Auto ID is one of the ways to have a unique id once a record is within MDM, while you use UFs or DFs to eliminate any possible duplicates at import level.
    3) The concept of relationships in MDM is only based on relationships between single records (not valid for all records in a table)? Is it necessary to define all relationships similar to the relational data model in MDM? Is there something similar to referential integrity in MDM?
    Hmm... good one. Referential integrity. What I assume you are saying is that if you have relationships between tables, then removing a record will not be possible as it is a foreign key for some record. Here MDM does not allow that. Relationships within MDM are physical and not conceptual. For instance, a material can have components. Now if the material does not exist, then any relationship to components is not worthwhile to maintain; hence the relationship is eliminated. In a relational model, relationships are more conceptual. Hence, with MDM's usage of lookups and the main table, you do not need to maintain these kinds of relationships on your own.
    4) Is it possible to change the main table to a sub table later on if we realize that it has also to be used as a lookup table for another table (when extending the data model) or do we have to create a new repository from scratch?
    No. It is not possible to convert the main table. There is only one main table and it cannot be changed.
    I went for the same option but it did not work. What I suggest is to look at your legacy systems one by one and see which fields can in general be classified as master, reference or transactional data - you will start getting answers immediately.

  • Some questions on SAP DMS

    Hello
    some questions on SAP DMS:-
    - Can versions of the same document be shown graphically?
    - Can links be created for documents? Can we make sure that links are not broken when a document is moved (links to the document in its first location shall not be broken)?
    - Is there a notification mechanism (can a user receive an e-mail when a document gets changed)?
    - Is there a review mechanism before a document is archived (a sort of life cycle)?
    Thanks

    Hi Shovon,
    1. can versions of same document be shown graphically?
    You can display all existing versions of a document info record by using the menu 'Extras' >> 'Versions'. Then you will see all existing document version for this specific document number. Further you will see which is the current version at the moment.
    2. can links be created for documents. can we make sure that links are not broken when document is moved ?(links to document on 1st location shall not be broken)
    I'm not quite sure what you mean by links to documents. If you mean linking document info records together, I can confirm that this works also in DMS. You can maintain the object DRAW for object links in the customizing and then you will be able to link one document info record to another. The data for all document info records is stored in table DRAW.
    If you are talking about originals which are linked to document info records, please note that if you check in an original the system creates a unique LOIO- and PHIO-ID which is stored in the Content Server and in the database tables DMS_DOC2LOIO and DMS_PHIO2FILE. But there are also different reports to migrate original data from one content Server to another (e.g. DMS_RELOCATE_CONTENT).
    3. Is there a notification mechanism(user can recieve e-mail when document gets changed)
    Yes you can activate an event type linkage for object
    DRAW     CHANGED     DOKST
    DRAW     CHANGED     DWNAM
    in transaction SWE2. Here you just have to set the flag for 'Type linkage' (linkage activated). Then every time the document is changed, a notification is sent to the USER entered in this document info record.
    4. Is there a review mechanism, before doc. is archived.(sort life cycle)
    Normally there is no special life cycle. You can control this by maintaining a released status; if there are several versions, the version with the released status is always the current one. But there is no specific date on which the document info record expires or anything like that.
    Best regards,
    Christoph

  • Some questions regarding ESB system.

    Hi all,
    I've used my ESB system for a few months now, so I thought it would be interesting to look at what's going on in the database schema created by my ESB system (oraesb). This led to some questions (and raised eyebrows); I hope some of you SOA experts might have an answer.
    * Is my system installed properly?
    I noticed that the oraesb schema created by running IRCA.zip installs only tables, views, topic queues and 1 procedure (create_queue). However, looking at the SQL scripts in ${soaSuite_home}/integration/esb/sql/oracle there are far more stored procedures defined. Is it normal, or rather okay, that these objects are not installed, or is my ESB system faulty?
    * No constraints or indexes:
    My system is still very small, so it is still performing well/fast. I imagine that as the ESB system grows, performance and locking will become an issue due to the lack of indexes on foreign key columns and of primary/foreign key constraints.
    * Only small part of schema used:
    When browsing through the tables of the oraesb user I notice that only a few tables are filled with data. For example all "Slide Tables" (this is the name given to these group tables in file ${soaSuite_home}/integration/esb/sql/oracle/wfeventc.sql) are empty. Is this normal? What kind of processes should enter data in these tables? What is the use of the "Slide tables"?
    * AQ-tables not being used:
    My ESB system has five AQ queue tables (ESB topics), but they are never used! I recall another thread on this forum about these queue tables growing enormous in size. I guess there must be a switch somewhere to choose between JMS queues and AQ queues? Can anyone please explain how to switch on the AQ queues, or point me to the proper documentation? I must have overlooked this in the documentation.
    Kind regards,
    Happy new year,
    H

    When your podcast is accepted you should receive an email telling you this and giving the URL for its page in the iTunes Store. The string of numbers at the end is the ID number.
    It usually takes somewhat longer for a new podcast to appear in the search results. Once you can find it by searching on the title, you can get the Store page URL, if you still don't know it, by control-clicking on the podcast image (or where it should be) and choosing 'Copy Podcast URL'.
    You may find this page helpful in giving you basic information about podcasting:
    http://www.wilmut.org.uk/pc

  • Some questions about the integration between BIEE and EBS

    Hi, dear,
    I'm a newbie with BIEE. These days I have been having a look at the BIEE architecture and the BIEE components. In the next project there is some work on BIEE development based on an EBS application. I have some questions about the integration:
    1) Generally, are the BIEE database and application server separate (decentralized) from the EBS database and application? Can both the BIEE 10g and 11g versions be integrated with EBS R12?
    2) In the BIEE Administration tool, the first step is to create physical tables. If the source application is EBS, is it still necessary to create the physical tables?
    3) If the physical table creation is needed, how do we complete the data transfer from the EBS source tables to the BIEE physical tables? Which ETL tool do most developers prefer: Warehouse Builder or Oracle Data Integrator?
    4) During the data transfer phase, if there is a large volume of data to transfer, how do we keep it complete? For example, if 1 million rows need to be transferred from the source database to the BIEE physical tables and users open a BIEE report when 50% is completed, can they see the new 50% of the data in the reports? Is there some transaction control in the ETL phase?
    Could anyone give me some guidance? I would also appreciate any other information you can give.
    Thanks in advance.

    1) Generally, are the BIEE database and application server separate (decentralized) from the EBS database and application? Can both the BIEE 10g and 11g versions be integrated with EBS R12?
    You should consider OBI Applications here, which uses OBIEE as a reporting tool with different pre-built modules. Both 10g and 11g come with different versions of BI Apps, which supports sources like Siebel CRM, EBS, PeopleSoft, JD Edwards etc.
    2) In the BIEE Administration tool, the first step is to create physical tables. If the source application is EBS, is it still necessary to create the physical tables?
    It is independent of the source. This is OBIEE modeling, to create the RPD with all its layers. If you build it from scratch you will have to create all the layers; if BI Apps is used, you get a pre-built RPD along with the other pre-built components.
    3) If the physical table creation is needed, how do we complete the data transfer from the EBS source tables to the BIEE physical tables? Which ETL tool do most developers prefer: Warehouse Builder or Oracle Data Integrator?
    BI Apps comes with pre-built ETL mappings, mainly for use with Informatica. Only BI Apps 7.9.5.2 comes with ODI, but Oracle plans to use only ODI for further releases.
    4) During the data transfer phase, if there is a large volume of data to transfer, how do we keep it complete? For example, if 1 million rows need to be transferred from the source database to the BIEE physical tables and users open a BIEE report when 50% is completed, can they see the new 50% of the data in the reports? Is there some transaction control in the ETL phase?
    Users will still see the old data, because it is good practice to turn on the cache and purge it after every load.
    Refer..http://www.oracle.com/us/solutions/ent-performance-bi/bi-applications-066544.html
    and many more docs on google
    Hope this helps

  • I have a question about Data Rates.

    Hello All.
    This is a bit of a noob question I'm sure. I don't think I really understand Data Rates and how it applies to Motion... therefore I'm not even sure what kind of questions to ask. I've been reading up online and thought I would ask some questions here. Thanks to all in advance.
    I've never really worried about Data Rates until now. I am creating an Apple Motion piece with about 15 different video clips in it. And 1/2 of them have alpha channels.
    What exactly is Data Rate? Is it the rate in which video clip data is read (in bits/second) from the Disc and placed into my screen? In Motion- is the Data Rate for video only? What if the clip has audio? If a HDD is simply a plastic disc with a dye read by "1" laser... how come my computer can pull "2" files off the disc at the same time? Is that what data transfer is all about? Is that were RAM comes into play?
    I have crunched my clips as much as I can. They are short clips (10-15seconds each). I've compressed them with the Animation codec to preserve the Alpha channel and sized them proportionally smaller (320x240). This dropped their data rate significantly. I've also taken out any audio that was associated with them.
    Is data rate what is slowing my system down?
    The data rates are all under 2MBs. Some are as low as 230Kbs. They were MUCH higher. However, my animation still plays VERY slowly.
    I'm running a 3GigRam Powerbook Pro 2.33GHz.
    I store all my media on a 1TB GRaid Firewire 800 drive. However for portability I'm using a USB 2 smartdisk external drive. I think the speed is 5200rpm.
    I'm guessing this all plays into the speed at which motion can function.
    If I total my data rate transfer I get somewhere in the vicinity of 11MBs/second. Is that what motion needs for it to play smoothly a 11MBs/second data connection? USB 2.0 is like what 480Mbs/second. So there is no way it's going to play quickly. What if I played it from my hard drive? What is the data rate of my internal HDD?
    I guess my overall question is.
    #1. Is my thinking correct on all of these topics? Do my bits, bytes and megs make sense. Is my thought process correct?
    #2. Barring getting a new machine or buying new hardware. What can I do to speed up this workflow? Working with 15 different video clips is bogging Motion down and becoming frustrating to work with. Even if only 3-4 of the clips are up at a time it bogs things down. Especially if I throw on a glow effect or something.
    Any help is greatly appreciated.
    -Fraky

    Data rate DOES make a difference, but I'd say your real problem has more to do with the fact that you're working on a Powerbook. Motion's real time capabilities derive from the capability of the video card. Not the processor. Some cards do better than others, but laptops are not even recommended for running Motion.
    To improve your workflow on a laptop will be limited, but there are a few things that you can try.
    Make sure that thumbnails and previews are turned off.
    Make sure that you are operating in Draft Mode.
    Lower the display resolution to half, or quarter.
    Don't expect to be getting real time playback. Treat it more like After Effects.
    Compressing your clips into smaller Animations does help because it lowers the data rate, but you're still dealing with the animation codec which is a high data rate codec. Unfortunately, it sounds necessary in your case because you're dealing with alpha channels.
    The data rate comes into play with your setup trying to play through your USB drive. USB drives are never recommended for editing or Motion work. Their throughput is not consistent enough for video work. a small FW drive would be better, though your real problem as I said is the Powerbook.
    If you must work on the powerbook, then don't expect real-time playback. Instead, build your animation, step through it, and do RAM previews to view sections in real time.
    I hope this helps.
    Andy
