Oracle Data Compression on SID tables and Dimension Tables

Hello Community,
We have had great success with Oracle compression on ODS tables that are no longer loaded.
We'd now like to move on to other types of BW tables that are very large.
OSS Note 701235 provides answers to questions concerning the possible use of Oracle compression together with SAP BW.
But the Note does not give suggestions for (or against) Oracle compression on SID tables or Dimension tables.
I believe both table types would exhibit the same behaviour: mostly inserts of new SID IDs and new DIM IDs, but few updates to existing SID or dimension records. If this is true, then both are good candidates for Oracle compression.
Do you also agree that this is the typical behaviour for SID tables and dimension tables? And that these types of tables are good candidates for Oracle compression in a large BW system?
Thanks kindly!
Keith Helfrich

Hi all,
Although this is an old thread that I stumbled on during my own investigations, I can provide some answers to your questions.
Table candidates for compression can be found using these criteria:
       - Is the table big enough?
       - Is a long lifetime planned for the object?
       - No or only rare structural changes to the table?
       - Low update rate: is your data mostly "read only"?
         (For the widely used rolling-window partitioning of tables in BW this is not a problem:
          mostly INSERTs into the current partition, not affecting other partitions.)
BW tables that can benefit from compression (see SAP Notes 105047 and 701235):
       - PSA tables with data that must be kept for a longer time
       - ODS change logs (no updates of old data, only inserts of new data)
       - "Historical" cubes which get no changes in table structure anymore
Limitations:
       - Normal INSERT or UPDATE statements are ALWAYS stored in uncompressed format and
         must be compressed separately (<= Oracle 10g).
       - Slight CPU overhead of compression, but...
         -- the CPU consumption is more than compensated by doing less I/O, e.g. for bulk
            loads or parallel processing;
         -- SAP BW transformations already take a significant share of the overall load time
            into cubes, and that is spent on the application server, not the database.
       - The table must not have more than 255 columns.
       - Adding columns with an initial value or dropping columns requires uncompressing the
         complete table (the strongest limitation).
Considering all of the above, you may decide that tables which go through UPDATEs, or tables whose structure can change (like fact or dimension tables), are not good candidates for compression.
Now, my questions to you:
Which Oracle version do you use?
Which tool do you use for Oracle compression?
- BRSPACE (can you give an example?)
- ALTER ... MOVE COMPRESS (a minimal sketch of this SQL route follows below)
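For the plain SQL route, a minimal sketch (the SID table and index names are hypothetical; note that on Oracle <= 10g the MOVE rewrites the segment compressed but leaves the table's indexes UNUSABLE, so they have to be rebuilt):
-- hypothetical SAP SID table; quoted because of the '/' in the name
ALTER TABLE "/BIC/SZZMATERIAL" MOVE COMPRESS;
-- the MOVE invalidates the indexes on the table, so rebuild them afterwards
ALTER INDEX "/BIC/SZZMATERIAL~0" REBUILD;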
bye
yk

Similar Messages

  • ORA-01578: ORACLE data block corrupted on tables in sysaux tablespace

    Dear Experts,
    From the alert log file we noticed data block corruptions on one of our datafiles. After further investigation, we realized that the corruptions were on 3 of the AWR related tables in the SYSAUX tablespace:
    1. WRH$_LIBRARYCACHE
    2. WRH$_TEMPSTATXS
    3. WRI$_ALERT_OUTSTANDING
The bad news is that we may not have a valid RMAN backup to do the recovery, due to the retention policy of RECOVERY WINDOW OF 2 DAYS. Since this is a development database with limited monitoring, we did not discover the corruption until 6 days later. The issue happened about 6 days ago (around Christmas time).
So, what are our recovery options? Can someone advise? We are thinking about dropping and recreating the 3 affected WRH$/WRI$ tables, but we are not quite sure about the impact on the system if we drop and recreate the 3 objects. Has anyone experienced this type of recovery? If you did, what was your approach?
We are running Oracle version 10.2.0.3.
I greatly appreciate your input and suggestions. Thanks!

As long as you have a backup of your database from before Christmas, and you have not run DELETE OBSOLETE, you can use the MAXDAYS option to get RMAN to find your backup. I had the same situation: I had a backup, but every attempt to restore it kept saying there was no valid backup. After going through some things, I found the MAXDAYS option, which let RMAN use my backup. Here is an example:
    $ rman target /
    Recovery Manager: Release 10.2.0.2.0 - Production on Sun Apr 6 09:05:44 2008
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    connected to target database (not started)
    RMAN> SET DBID=1528894801
    executing command: SET DBID
    RMAN> startup force nomount;
    startup failed: ORA-01078: failure in processing system parameters
    LRM-00109: could not open parameter file '/u01/app/oracle/product/10.2.0/db_1/dbs/initsameera.ora'
    starting Oracle instance without parameter file for retrival of spfile
    Oracle instance started
    Total System Global Area 159383552 bytes
    Fixed Size 1259672 bytes
    Variable Size 58722152 bytes
    Database Buffers 92274688 bytes
    Redo Buffers 7127040 bytes
    RMAN> set controlfile autobackup format for device type disk to '/u99/backup/sameera/control_spfile_%F';
    executing command: SET CONTROLFILE AUTOBACKUP FORMAT
    using target database control file instead of recovery catalog
    RMAN> run
    2> {
    3> allocate channel p1 type disk;
    4> restore spfile to pfile '/u01/app/oracle/product/10.2.0/db_1/dbs/initsameera.ora' from autobackup;
    5> shutdown abort;
    6> }
    allocated channel: p1
    channel p1: sid=36 devtype=DISK
    Starting restore at 06-APR-08
    channel p1: looking for autobackup on day: 20080406
    channel p1: looking for autobackup on day: 20080405
    channel p1: looking for autobackup on day: 20080404
    channel p1: looking for autobackup on day: 20080403
    channel p1: looking for autobackup on day: 20080402
    channel p1: looking for autobackup on day: 20080401
    channel p1: looking for autobackup on day: 20080331
    channel p1: no autobackup in 7 days found
    released channel: p1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of restore command at 04/06/2008 09:09:09
    RMAN-06172: no autobackup found or specified handle is not a valid copy or piece
    Solution:
    RMAN> shutdown abort;
    RMAN> EXIT;
    $ ps -ef |grep pmon
    oracle 2891 2856 0 09:05 pts/1 00:00:00 grep pmon
    oracle 7448 1 0 Apr05 ? 00:00:00 ora_pmon_primary
    $export ORACLE_SID=sameera
    $ rman target /
    Recovery Manager: Release 10.2.0.2.0 - Production on Sun Apr 6 09:05:44 2008
    Copyright (c) 1982, 2005, Oracle. All rights reserved.
    connected to target database (not started)
    RMAN> SET DBID=1528894801
    executing command: SET DBID
    RMAN> startup force nomount;
    startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/10.2.0/db_1/dbs/initsameera.ora'
    starting Oracle instance without parameter file for retrival of spfile
    Oracle instance started
    Total System Global Area 159383552 bytes
    Fixed Size 1259672 bytes
    Variable Size 58722152 bytes
    Database Buffers 92274688 bytes
    Redo Buffers 7127040 bytes
    RMAN> set controlfile autobackup format for device type disk to '/u99/backup/sameera/control_spfile_%F';
    executing command: SET CONTROLFILE AUTOBACKUP FORMAT
    using target database control file instead of recovery catalog
    RMAN> run
    2> {
    3> allocate channel p1 type disk;
    4> restore spfile to pfile '/u01/app/oracle/product/10.2.0/db_1/dbs/initsameera.ora' from autobackup maxdays 15;
    5> shutdown abort;
    6> }
    released channel: ORA_DISK_1
    allocated channel: p1
    channel p1: sid=36 devtype=DISK
    Starting restore at 06-APR-08
    channel p1: looking for autobackup on day: 20080406
    channel p1: looking for autobackup on day: 20080405
    channel p1: looking for autobackup on day: 20080404
    channel p1: looking for autobackup on day: 20080403
    channel p1: looking for autobackup on day: 20080402
    channel p1: looking for autobackup on day: 20080401
    channel p1: looking for autobackup on day: 20080331
    channel p1: looking for autobackup on day: 20080330
    channel p1: looking for autobackup on day: 20080329
    channel p1: looking for autobackup on day: 20080328
    channel p1: looking for autobackup on day: 20080327
    channel p1: looking for autobackup on day: 20080326
    channel p1: looking for autobackup on day: 20080325
    channel p1: looking for autobackup on day: 20080324
    channel p1: looking for autobackup on day: 20080323
    channel p1: autobackup found: /u99/backup/sameera/control_spfile_c-1528894801-20080323-00
    channel p1: SPFILE restore from autobackup complete
    Finished restore at 06-APR-08
    Oracle instance shut down
Check to make sure that initsameera.ora exists in the $ORACLE_HOME/dbs location.
    $ cd $ORACLE_HOME/dbs
    $ ls -ltr
    total 7052
    -rw-r----- 1 oracle oinstall 2560 Apr 5 13:21 spfileprimary.ora
    -rw-r----- 1 oracle oinstall 7061504 Apr 5 13:23 snapcf_primary.f
    -rw-rw---- 1 oracle oinstall 1544 Apr 5 18:42 hc_sameera.dat
    -rw-r--r-- 1 oracle oinstall 1087 Apr 6 09:12 initsameera.ora
    $ pwd
    /u01/app/oracle/product/10.2.0/db_1/dbs
    $

  • Oracle data guard configuration for primary and standby db_name

I am working on configuring an Active Data Guard setup with one primary DB and one standby DB. I have a few questions:
1. Can I use a different db_name, db_unique_name and instance_name for primary and standby? For example: primary (db_name, db_unique_name and instance_name) = chicago. When I create the standby DB with an RMAN backup and a copy of the pfile and control file from the primary DB, or use Grid Control to create the standby database, the Oracle documentation and Grid Control both keep the standby db_name=chicago and only make the standby db_unique_name and instance_name=boston. Because of my application system, I want to make db_name=boston, not keep it the same as the primary=chicago. Is this a valid configuration?
2. In the primary, the application system generates datafile names like hr_chicago_01.dbf and fn_chicago_01.dbf. When I move the datafiles to the standby server, and if I plan to use db_name=boston for the standby DB, can I rename the datafiles to hr_boston_01.dbf and fn_boston_01.dbf so that the datafile names match the db_name? I will create the standby log groups and members identically on primary and standby, so that a future switchover will not cause problems.
3. If I don't use a primary DB backup, and instead copy all datafiles and redo log files (no control files) to the standby, then run "alter database backup controlfile to trace" on the primary, create pfile='/xxx/initSTANDBY.ora' from the primary, modify the init.ora and controlfile, and run control.sql to bring the standby DB up, and after that configure redo log shipping and apply with Data Guard or SQL. Is this an acceptable way to create a physical standby DB?
Please advise. Thanks in advance.

I want to make db_name=boston, not keep it the same as the primary=chicago. Is this a valid configuration?
No. DB_NAME must be the same ("chicago") at both sites. The standby will use a different DB_UNIQUE_NAME (e.g. "boston") and can use a different instance name / SID (e.g. "boston").
Can I rename datafiles?
Yes. The database file names can be changed.
If I don't use a primary DB backup. Instead, I copy all datafiles and redo log files (no control files) to the standby.
What is the difference between the first sentence (a backup of the primary) and the second sentence (a copy of the primary)? A copy is a backup.
Are you intending to differentiate between an RMAN backup and a user-managed (aka "scripted") backup?
Normally, for Data Guard, you can use non-RMAN methods to copy the database, but there is no added value in doing so.
You'd still have to set up Data Guard! (And I wonder if you'd have complications setting up Active Data Guard.)
But remember that you MUST create the standby controlfile from the primary and copy it over to your standby, particularly as you are planning to use Data Guard. It is not created by 'alter database backup controlfile to trace', but by ALTER DATABASE CREATE STANDBY CONTROLFILE AS 'filename'.
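For illustration, a minimal sketch of that controlfile step (the file path is a placeholder):
-- on the primary: create the standby controlfile, then copy it to the standby host
ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/chicago_stby.ctl';
-- on the standby: DB_NAME stays chicago; only DB_UNIQUE_NAME (and optionally the SID) differ, e.g.
--   db_name=chicago
--   db_unique_name=boston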
    Hemant K Chitale

  • Oracle data integrator , reverse the tables

    Hi all,
I am new to ODI. Is there any possibility of passing the table name as a parameter to reverse-engineer it? I am facing issues reversing the tables: as there are a large number of views and tables in the schema, it takes a long time to import. I even tried using selective reverse.
    kindly let me know.
    Thanks in advance,
    nithya

Hi All,
I need to compare two tables, say A in the source and A1 in the target, based on an id. If the id exists in A1, then I need to load the contents of table C in the source into table C1 in the target; if not, I need to throw an error message. That is the scenario.
Please let me know how to do this in ODI. I am completely new to ODI. The problem is that I don't know how to compare the ids before using the IKM Incremental Update knowledge module.
Thanks in advance
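As a rough plain-SQL sketch of that check (outside ODI, where it would typically be a package step or an interface filter; the id column and the C/C1 structures are assumptions):
-- step 1: does an id from source table A exist in target table A1?
SELECT COUNT(*) FROM a1 WHERE a1.id IN (SELECT a.id FROM a);
-- step 2: if the count is > 0, run the load from C to C1; otherwise raise the error step in the package
INSERT INTO c1 SELECT * FROM c;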

  • Oracle Data Profiling and Data Quality

    Hi,
How do I create a metabase for Oracle Data Profiling and Data Quality? Are the metabase and the repository the same thing?

    Hi,
    You can create a metabase in the Metabase Manager:
    - Expand Control Admin
    - Click on Metabases
    - in the Metabases window, right-click on the white area and select Add...
    - go through the wizard to create your metabase
    This is documented in the ODQ/ODP tutorial (http://www.oracle.com/technology/products/oracle-data-quality/pdf/oracledq_tutorial.pdf) and in the Documentation (in Metabase Manager or Oracle Data Quality go to Help and then Manuals).
    Thanks,
    Julien

  • Deleting data from SID Table

    Hi,
I have deleted the master data of an InfoObject, but this does not remove all the data from the SID table.
I selected the 'with SIDs' option, but not all the data was removed.
I am getting the error below when transporting:
"SID table /BIC/S2QKHITIND contains data: characteristic ZQKHITIND cannot be activated."
What needs to be done to get the characteristic activated?
    Thanks
    James

    Hi Selva,
Thank you for the info. I have deleted the data from all the InfoProviders but still get the message:
"The system is unable to delete all the master data because some of it is still in use. (See log: object RSDMD, sub-object MD_DEL.)
Do you want to delete the data records that are no longer in use?"
What does this mean? Is it because a query is open somewhere, or something else?
Meanwhile, I am not able to delete the SID table entries because the delete option is not active. Please help.
    Thanks
    James

  • Visual Web Developer 2010 Express and Oracle ODAC - Oracle data provider

    system - xp pro
    Microsoft Visual Web Developer 2010 Express and Oracle ODAC112021.
I can't seem to get ODAC to work correctly with Visual Web Developer Express. When I put in a GridView I only see the standard adapters; the one from Oracle does not appear. I have spent days trying to figure out why, but no luck. No one else seems to have this problem. I did not see any errors when I installed ODAC. I've followed a few tutorials:
    http://weblogs.asp.net/nannettethacker/archive/2010/09/17/installing-oracle-data-access-components-odac.aspx
    and
    http://www.smart-soft-nz.com/oracle-and-asp-dot-net.html
I tried using the Windows adapters, but they just don't seem to work. I can get results in the query builder, but when I go to the next step in the wizard I get the constraints error. I've done all the research on that with no luck, so I wanted to use ODAC. This is killing me! It really shouldn't be this hard. I've added the DLL to the references, but again it's just not an option. If anyone could give me a little direction, that would be great!
    Thank you
    John

    Dear Friends,
    Installing SP1 and updates to it solved the first issue. So the database now seems to be of type 2008 R2.
    Deleting some files solved the second issue.
    However, another issue remains.
    My most important projects were created with a version of VWDX 2010 that was downloaded from Microsoft and installed on July 3, 2012. This version of VWDX has now been lost, because my hard drive recently ceased to function and had to be replaced.
    On the new hard drive I have a version of VWDX 2010 that was downloaded yesterday (May 28, 2014). As indicated above, I also installed SP1 and related updates.
    When I load a project that was created in the version of VWDX 2010 that was installed almost two years ago and try to open the database connection, I get, in essence, the following error message:
    "The database C:\Users\...\Database.mdf cannot be opened, because it is version 661. This server supports version 655 and earlier. A downgrade path is not supported."
    So it seems that the original database version was 661, whereas the version downloaded yesterday is 655. It seems very strange that a version downloaded now should be earlier than a version downloaded two years ago. Does anyone have an explanation or a solution?
    Thank you all for your trouble, and best regards.
    Eric (orexin)
    P.S. The site from which I obtained VWDX yesterday was:
www.microsoft.com/visualstudio/en-us/products/2010-editions/visual-web-developer-express and the executable was called VWD2010SP1AzurePack.exe.

How to enable Oracle Advanced Compression for an existing partitioned table

    Hi All,
I have to enable Oracle Advanced Compression for an existing table which is partitioned by range and then subpartitioned by hash.
Oracle version: 11.2.0.2.0
Please point me to any relevant documentation or share any experience.
    Thanks in advance.

could not see any text for how to enable Oracle Advanced Compression for an existing partitioned table.
RTFM. From the resource above:
"How do I compress an existing table?
There are multiple options available to compress existing tables. For offline compression, one could use the ALTER TABLE table_name MOVE COMPRESS statement. A compressed copy of an existing table can be created by using CREATE TABLE table_name COMPRESS FOR ALL OPERATIONS AS SELECT *. For online compression, Oracle's online redefinition utility can be used. More details for online redefinition are available here."

  • Oracle Data Modeler - Impact Analysis option

    Hi
I am using Oracle Data Modeler 3.1.0683 and reverse engineering my existing relational models into logical models. I have 3 relational models and am reverse engineering them into 1 logical model.
In the logical model, under the entity's properties -> Impact Analysis, how do I add which relational table the logical entity depends on? For example, in the relational models I have the tables Class, Student and Teacher in 3 separate relational models. In the logical model I created the entity Person, which depends on the table Student from relational model 1 and Teacher from relational model 2; I want to view (add) these tables under "Impact Analysis".
The help window says:
<quote>
"Impact Analysis
Enables you to view and specify information to be used by Oracle Warehouse Builder for impact analysis."
</quote>
Though I couldn't figure out where to specify it.
    Thanks in advance.
    Regards
    Lahoria

So any suggestions on how I can bring those tables (as I mentioned in the original post) to show up under Impact Analysis?
If your entity is the result of reverse engineering from a relational model, then you can find the related table under Mappings. The same applies if you engineer a logical model to a relational model.
If you start from a column, then you can see the related attribute in the logical model and its usage in data flow diagrams and dimensional models.
    Philip

  • How to check if Oracle Data Access Components  is installed?

How do I check whether Oracle Data Access Components is installed on my computer, and which version?
Also, how do I check whether the Oracle Data Provider is installed, and which version?
TIA
    Steve42

Regedit: HKLM -> Software -> Oracle. See what's there...
At the very least, that can give you the installation paths, and you can check file versions from there.

  • SID table in the general tab and master data/text tab

    Hello Bi Experts,
For example, take the 0COMP_CODE InfoObject:
There is a SID table on the General tab, i.e. /BI0/SCOMP_CODE.
There is another SID table, the attribute SID table, on the Master data/texts tab, i.e. /BI0/XCOMP_CODE; it contains the navigational attributes, with names like S__0COMPANY.
When I look at the 0COMPANY InfoObject, it has its own SID table, i.e. /BI0/SCOMPANY.
Can somebody explain to me the significance of the attribute SID table on the Master data/texts tab, and what the difference is from the SID table of the attribute itself?
    Cheers,
    Stalin

    Hi,
The SID is a surrogate ID generated by the system. The SID tables are created when we create a master data InfoObject. In the SAP BW star schema, a distinction is made between two self-contained areas: the InfoCube on one side, and the master data tables/SID tables on the other.
The master data does not reside in the star schema itself but in separate tables which are shared across all the star schemas in SAP BW. A numeric SID is generated which connects the dimension tables of the InfoCube to the master data tables.
The dimension tables contain the DIM ID and the SIDs of the InfoObjects assigned to that dimension. Using the SID, the attributes and texts of a master data InfoObject are accessed.
The SID table is connected to the associated master data tables via the characteristic key.
SID tables are like pointers in C.
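To make the wiring concrete, a rough SQL sketch of how the tables join for 0COMP_CODE (the cube ZSALES, its dimension number and the column names are assumptions; real names vary by cube):
SELECT d.dimid, s.comp_code, t.txtmd
FROM   "/BIC/DZSALES1"   d                                 -- dimension table of a made-up cube ZSALES: DIM ID -> SIDs
JOIN   "/BI0/SCOMP_CODE" s ON s.sid = d.sid_0comp_code     -- SID table: SID -> characteristic value
JOIN   "/BI0/TCOMP_CODE" t ON t.comp_code = s.comp_code    -- text table keyed by the characteristic value
JOIN   "/BIC/FZSALES"    f ON f.key_zsales1 = d.dimid;     -- fact table rows reference the dimension via the DIM ID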
Table name prefixes and their meaning:
M - View of master data table
Q - Time-dependent master data table
H - Hierarchy table
K - Hierarchy SID table
I - SID hierarchy structure
J - Hierarchy interval table
S - SID table
Y - Time-dependent SID table
T - Text table
F - Fact table - direct data for the cube (B-tree index)
E - Fact table - compressed cube (bitmap index)
For more info, go through the link below:
    http://www.sap-img.com/bw010.htm
    Regards,
    Marasa.

  • Insert /delete data from SAP Z table to Oracle table and opposite

    Hi,
Can you help me write this function module from the SAP side?
So, I have two tables: ZTABLE in SAP and the Oracle table ORAC.
Let's put a few columns in each of them, for example
TEL1
TEL2
ADRESS
NAME
where the TEL field is the primary key, from ZTABLE to ORAC...
(In the FM there should be ABAP code for writing data into ZTABLE after we press a pushbutton made in SAP Screen Painter.)
For example, when we write a new record into ZTABLE
    00
    112233
    Street 4
    Name1
this data should be inserted into the Oracle table ORAC.
When we write a new record into the Oracle table, for example
    01
    445566
    New Street
    Name2
this data should be inserted into ZTABLE.
Field TEL1 can only have one of two values, 01 or 02; any other combination is not valid...
I must have all the data from the Oracle table ORAC in ZTABLE, and vice versa.
It should be the same scenario for DELETE...
And this communication between SAP and the table in the Oracle database should be online...
Can you help me from the SAP side, and give an idea how to configure the Oracle side?
    Thanks a lot,
    Nihad

I don't know if we can connect directly to an Oracle database (wait for the answers from others on this), but in XI we have the JDBC adapter to insert and retrieve data.
So for the outbound direction from SAP, the flow can be something like this (with XI in the landscape):
1) You have a screen to maintain a new entry / delete an entry.
2) On save, this record gets saved to or deleted from the Z table in SAP.
3) In the same screen you can call a proxy class method (generated using the SPROXY transaction) to send the record to XI.
4) XI formats it and inserts it into the Oracle table.
Mathews

  • How to find out when data was deleted from table in oracle and Who deleted that

    HI Experts,
    Help me for below query:
How can I find out when data was deleted from a table in Oracle, and who deleted it?
I ran the query below against dba_tab_modifications, but I am not sure what the TIMESTAMP column shows: whether it is the time of an update, an insert or a delete.
    SQL> select TABLE_OWNER,TABLE_NAME,INSERTS,UPDATES,DELETES,TIMESTAMP,DROP_SEGMENTS,TRUNCATED from dba_tab_modifications where TABLE_NAME='F9001';
TABLE_OWNER   TABLE_NAME   INSERTS   UPDATES   DELETES   TIMESTAMP           DROP_SEGMENTS   TRU
PRODCTL       F9001        1683      46        2171      11-12-13 18:23:39   0               NO
Is auditing enabled in my environment? (parameters below)
The customer is facing the issue of missing data in the table, and after seeing the DELETES column and the timestamp in dba_tab_modifications I told him that yes, there was a delete on the table at 11-12-13 18:23:39, but I am not sure whether I am right or not.
    SQL> show parameter audit
    NAME                                 TYPE        VALUE
    audit_file_dest                      string      /oracle/admin/pbowe/adump
    audit_sys_operations                 boolean     TRUE
    audit_syslog_level                   string
    audit_trail                          string      DB, EXTENDED
    please help
    Thanks
    Sam

    LOGMiner --> Using LogMiner to Analyze Redo Log Files
    AUDIT --> Configuring and Administering Auditing
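Unless object auditing was explicitly enabled for that table, the standard audit trail will not show the DELETE, so LogMiner is the usual way to find the who and when after the fact. A minimal sketch, assuming the archived log covering the suspected time is still available (the log file name is a placeholder):
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/oracle/arch/1_12345_987654321.arc',
                          options     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/
SELECT username, os_username, timestamp, sql_redo
FROM   v$logmnr_contents
WHERE  operation = 'DELETE'
AND    seg_name  = 'F9001';
EXEC DBMS_LOGMNR.END_LOGMNR;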

  • Reg: Fact table and Dimension table in Data Warehousing -

    Hi Experts,
I'm not exactly getting the criteria that decide how to design a fact table versus a dimension table.
This link http://stackoverflow.com/questions/9362854/database-fact-table-and-dimension-table states:
A fact table contains data that can be aggregated.
Measures are aggregated data expressions (e.g. sum of costs, count of calls, ...).
A dimension contains data that is used to generate groups and filters.
That's fine, but how does one decide which columns to consider for the fact table and which columns for the dimension tables?
Any help is much appreciated.
Pardon me if this is not the correct place for this question. It is my first question in the new forum.
    Thanks and Regards,
    Ranit Biswas

    ranitB wrote:
But my main doubt was: what are the criteria to differentiate between columns for fact tables and dimension tables? How can one decide upon the design?
Columns of a fact table will often be 'scalar' attributes of the 'fact' data item. A dimension table will often hold 'compound' attributes of a 'fact'.
    Consider employee information. The EMPLOYEE table can be a fact table. It might have scalar attribute columns such as: DATE_HIRED, STATUS, EMPLOYEE_ID, and so on.
    Other related information that can't be specified as a single attribute value would often be stored in a 'dimension' table: ADDRESS, PHONE_NUMBER.
    Each address requires several columns to define it: ADDRESS1, ADDRESS2, CITY, STATE, ZIP, COUNTRY. And an employee might have several addresses: WORK_ADDRESS, HOME_ADDRESS. That address info would be stored in a 'dimension' table and only the primary key value of the address record would be stored in the EMPLOYEE 'fact' table.
    Same with PHONE_NUMBER. Several columns are required to define a phone number and each employee might have several of them. The dimension tables are used to help 'normalize' the data in the employee 'fact' table.
    And that EMPLOYEE table might also be a DIMENSION table for other FACT tables. A DEVELOPER table might have an EMPLOYEE_ID column with a value that points to a 'dimension' row in the EMPLOYEE dimension table.
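To make that concrete, a toy DDL sketch of the employee/address example (all names invented):
-- 'dimension'-style table holding the compound attribute
CREATE TABLE address (
  address_id  NUMBER PRIMARY KEY,
  address1    VARCHAR2(80),
  address2    VARCHAR2(80),
  city        VARCHAR2(40),
  state       VARCHAR2(2),
  zip         VARCHAR2(10),
  country     VARCHAR2(40)
);
-- 'fact'-style table with scalar attributes plus keys pointing at dimension rows
CREATE TABLE employee (
  employee_id      NUMBER PRIMARY KEY,
  date_hired       DATE,
  status           VARCHAR2(10),
  home_address_id  NUMBER REFERENCES address (address_id),
  work_address_id  NUMBER REFERENCES address (address_id)
);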

  • Difference between Data staging and Dimension Table ?

    Difference between Data staging  and Dimension Table ?

Data staging:
Data extraction and transformation are done here.
Meaning that, if we have source data in a flat file, we extract it and load it into staging tables; we take care of NULLs, change the datetime format, etc., and after such cleansing/transformation, at the end, load it into the dimension/fact tables.
Pros: makes the process simpler and easier, and we can also keep track of the data since we have it in staging.
Cons: staging tables need storage space.
Dimension table:
A table which describes/stores the attributes of specific objects.
A star schema, for example, has dimensions storing information related to Product, Customer, etc.
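A minimal sketch of the flow described above, from a staging table into the dimensional model (all table and column names are invented):
-- staging: raw load from the flat file, loose datatypes, no constraints
CREATE TABLE stg_sales (sale_dt VARCHAR2(20), product_cd VARCHAR2(20), amount VARCHAR2(20));
-- cleanse/transform while loading into the dimension/fact layer
INSERT INTO fact_sales (sale_date, product_key, amount)
SELECT TO_DATE(s.sale_dt, 'YYYY-MM-DD'),
       d.product_key,
       TO_NUMBER(NVL(s.amount, '0'))
FROM   stg_sales   s
JOIN   dim_product d ON d.product_cd = s.product_cd;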
    -Vaibhav Chaudhari
