Materialized view on a Partitioned Table (Data through Exchange Partition)

Hello,
We have a scenario to create an MV on a partitioned table that gets its data through an exchange-partition strategy. Obviously, with exchange partition the snapshot logs are not updated, so FAST refreshes are not possible. Also, since this partitioned table has about 450 million rows, a COMPLETE refresh is not an option for us.
I would like to know the alternatives to this approach.
Any suggestions would be appreciated.
Thank you

From your post it seems that you are trying to create a fast-refresh MV (as you are creating an MV log). There are limitations on fast refresh, which are documented in the Oracle documentation:
http://docs.oracle.com/cd/B28359_01/server.111/b28313/basicmv.htm#i1007028
If you are not planning to do a fast refresh then, as already mentioned by Solomon, it is a valid approach used in multiple scenarios.
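One alternative worth testing, since the base table is partitioned, is Partition Change Tracking (PCT) refresh. PCT does not depend on the MV log and can handle partition maintenance operations such as exchange partition: only the changed partitions are re-read, not all 450 million rows. A minimal sketch, with table and column names invented for illustration (the partition key must appear in the MV's select list for PCT to work):

create materialized view mv_sales_by_day
  build immediate
  refresh fast on demand
  as
  select sale_date,              -- assumed partition key of SALES; required for PCT
         count(*)      cnt,
         count(amount) cnt_amt,
         sum(amount)   total_amt
  from   sales                   -- assumed: range partitioned on sale_date
  group  by sale_date;

-- after each exchange partition on SALES, refresh only the changed partitions:
begin
  dbms_mview.refresh('MV_SALES_BY_DAY', method => 'P');  -- 'P' requests a PCT refresh
end;
/

Before relying on this, run DBMS_MVIEW.EXPLAIN_MVIEW against the view to confirm it is actually PCT-refreshable in your version.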
Thanks,
Jayadeep

Similar Messages

  • Importing partitioned table data into non-partitioned table

    Hi Friends,
    SOURCE SERVER
    OS:Linux
    Database Version:10.2.0.2.0
    i have exported one partition of my partitioned table like below..
expdp system/manager DIRECTORY=DIR4 DUMPFILE=mapping.dmp LOGFILE=mapping_exp.log TABLES=MAPPING.MAPPING:DATASET_NAP
TARGET SERVER
    OS:Linux
    Database Version:10.2.0.4.0
    Now when i am importing into another server i am getting below error
    Import: Release 10.2.0.4.0 - 64bit Production on Tuesday, 17 January, 2012 11:22:32
    Copyright (c) 2003, 2007, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "MAPPING"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
    Starting "MAPPING"."SYS_IMPORT_FULL_01":  MAPPING/******** DIRECTORY=DIR3 DUMPFILE=mapping.dmp LOGFILE=mapping_imp.log TABLE_EXISTS_ACTION=APPEND
    Processing object type TABLE_EXPORT/TABLE/TABLE
    ORA-39083: Object type TABLE failed to create with error:
    ORA-00959: tablespace 'MAPPING_ABC' does not exist
    Failing sql is:
    CREATE TABLE "MAPPING"."MAPPING" ("SAP_ID" NUMBER(38,0) NOT NULL ENABLE, "TG_ID" NUMBER(38,0) NOT NULL ENABLE, "TT_ID" NUMBER(38,0) NOT NULL ENABLE, "PARENT_CT_ID" NUMBER(38,0), "MAPPINGTIME" TIMESTAMP (6) WITH TIME ZONE NOT NULL ENABLE, "CLASS" NUMBER(38,0) NOT NULL ENABLE, "TYPE" NUMBER(38,0) NOT NULL ENABLE, "ID" NUMBER(38,0) NOT NULL ENABLE, "UREID"
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type OBJECT_GRANT:"MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    ORA-39112: Dependent object type INDEX:"MAPPING"."IDX_TG_ID" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type INDEX:"MAPPING"."PK_MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type INDEX:"MAPPING"."IDX_UREID" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type INDEX:"MAPPING"."IDX_V2" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type INDEX:"MAPPING"."IDX_PARENT_CT" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    ORA-39112: Dependent object type CONSTRAINT:"MAPPING"."CKC_SMAPPING_MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type CONSTRAINT:"MAPPING"."PK_MAPPING_ITM" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_TG_ID" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."PK_MAPPING" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_UREID" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_V2" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_PARENT_CT" creation failed
    Processing object type TABLE_EXPORT/TABLE/COMMENT
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
    ORA-39112: Dependent object type REF_CONSTRAINT:"MAPPING"."FK_MAPPING_MAPPING" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type REF_CONSTRAINT:"MAPPING"."FK_MAPPING_CT" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type REF_CONSTRAINT:"MAPPING"."FK_TG" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type REF_CONSTRAINT:"MAPPING"."FK_TT" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    Processing object type TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
    ORA-39112: Dependent object type INDEX:"MAPPING"."X_PART" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type INDEX:"MAPPING"."X_TIME_T" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type INDEX:"MAPPING"."X_DAY" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    ORA-39112: Dependent object type INDEX:"MAPPING"."X_BTMP" skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_TG_ID" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_V2_T" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."PK_MAPPING" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_PARENT_CT" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"MAPPING"."IDX_UREID" creation failed
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    ORA-39112: Dependent object type TABLE_STATISTICS skipped, base object type TABLE:"MAPPING"."MAPPING" creation failed
    Job "MAPPING"."SYS_IMPORT_FULL_01" completed with 52 error(s) at 11:22:39Please help..!!
    Regards
    Umesh Gupta

    Yes, I have tried that option as well.
    But when I write one tablespace name in the REMAP_TABLESPACE clause, it gives an error for the second one, and if I include the 1st and 2nd tablespaces it gives an error for the 3rd one.
    The one option I know of is to write all the tablespace names in REMAP_TABLESPACE, but that is a lengthy process. Is there any other way?
    Regards
    Umesh

    AFAIK the option you have is the one I recommended, though it is lengthy :-(
    Wait for some EXPERT and GURU's review on this issue.
    Good luck ....
    --neeraj
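    For what it's worth, Data Pump lets you repeat REMAP_TABLESPACE once per source tablespace, so one hedged approach is to generate the list from the source database and paste it into a parfile. The target tablespace USERS below is just an assumed example:

    select distinct 'REMAP_TABLESPACE=' || tablespace_name || ':USERS'
    from   dba_segments
    where  owner = 'MAPPING';

    impdp mapping/******** DIRECTORY=DIR3 DUMPFILE=mapping.dmp LOGFILE=mapping_imp.log TABLE_EXISTS_ACTION=APPEND REMAP_TABLESPACE=MAPPING_ABC:USERS REMAP_TABLESPACE=MAPPING_DEF:USERS

    Here MAPPING_ABC comes from the original error and MAPPING_DEF is hypothetical; repeat the clause for every tablespace the query returns.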

  • How to delete partition table data of Solman 7.01

    Hi, all.
    The partitioned tables of Solman, such as '/BI0/F0CCMARSH', are too big. That one alone is almost 60 GB including its index.
    There are many such partitioned tables, and together they total 125 GB.
    Is there an SAP report to remove the needless data?
    Best Regards

    1. Execute t-code RSA1.
    2. Select the menu > InfoProvider.
    3. Search for the valid InfoProvider using the partition table name.
    4. You can remove all data using the popup menu 'Delete Data',
    5. or select the popup menu 'Manage' to delete data for a specific day.

  • ORA-14098: Index mismatch for tables in ALTER TABLE ... EXCHANGE PARTITION

    Hi All,
    I want to exchange data from the RETEK schema to the CONV schema. Both tables have the same partitions, but there is no data in the CONV table.
    So I'm populating the data of one particular partition from RETEK into a staging table, and then exchanging that partition of the CONV table with the staging table.
    I have created the same indexes and constraints on the staging table as exist on the CONV table.
    But when I do the exchange partition I get an index mismatch error.
    v_partition_name := 'mar 2012';
    v_stmt := 'create table staging_tab_st_hist as ( select * from retek.abc_st_hist partition(' ||
              v_partition_name || ') )';
    execute immediate v_stmt;
    v_stmt := 'alter table conv.abc_st_hist exchange partition ' ||
              v_partition_name ||
              ' with table staging_tab_st_hist including indexes without validation';
    execute immediate v_stmt;

    Welcome to the forum!
    Whenever you post, provide your 4-digit Oracle version (the result of SELECT * FROM V$VERSION).
    >
    Hi All,
    I want to exchange data from the RETEK schema to the CONV schema. Both tables have the same partitions, but there is no data in the CONV table.
    So I'm populating the data of one particular partition from RETEK into a staging table, and then exchanging that partition of the CONV table with the staging table.
    I have created the same indexes and constraints on the staging table as exist on the CONV table.
    But when I do the exchange partition I get an index mismatch error.
    v_partition_name := 'mar 2012';
    v_stmt := 'create table staging_tab_st_hist as ( select * from retek.abc_st_hist partition(' ||
              v_partition_name || ') )';
    execute immediate v_stmt;
    v_stmt := 'alter table conv.abc_st_hist exchange partition ' ||
              v_partition_name ||
              ' with table staging_tab_st_hist including indexes without validation';
    execute immediate v_stmt;
    >
    I don't see any index creation on the staging table. You said this
    >
    I have created the same indexes and constraints on the staging table as exist on the CONV table.
    >
    But you didn't create the indexes. When you do a CTAS (create table as select), it only creates the table with the same structure; it doesn't create ANY indexes.
    Add code to create the necessary indexes after you populate the staging table, as in the sketch below.
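    A hedged sketch of the missing step, with an assumed partition name MAR_2012 and an assumed index column; copy the real column lists from the local indexes on conv.abc_st_hist:

    create table staging_tab_st_hist as
      select * from retek.abc_st_hist partition (MAR_2012);

    -- recreate every local index of conv.abc_st_hist on the staging table
    -- (index name and column are placeholders)
    create index stg_st_hist_ix1 on staging_tab_st_hist (st_hist_date);

    alter table conv.abc_st_hist
      exchange partition MAR_2012
      with table staging_tab_st_hist
      including indexes without validation;

    Also note that a partition literally named 'mar 2012' (lowercase, with a space) would have to be referenced in double quotes, which is another common source of errors in dynamic SQL like this.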

  • Questions on Materialized Views and MV Log tables

    Hello all,
    Have a few questions with regards to Materialized View.
    1) Once the materialized view reads the records from the MLOG table, the MLOG's records get purged, correct? Or is that not the case? In some cases I still see (old) records in the MLOG table even after the MV refresh.
    2) How does the MLOG table distinguish between a read that comes from an MV and a read that comes from a user? If I manually execute "select * from <MLOG table>", would the MLOG table's records get purged the same way they do after an MV refresh?
    3) One of our MV refreshes hangs intermittently. Based on the wait events, I noticed it was doing a "db file sequential read" against the master table. Finally I had to terminate the refresh. I'm not sure why it was doing sequential reads on the master table when it should be reading from the MLOG table. Any ideas?
    4) I've seen "db file scattered read" (full table scan) against tables in general, but I was surprised to see "db file sequential read" against a table. I thought sequential reads normally happen against indexes. Has anyone noticed this behaviour?
    Thanks for your time.

    1) Once all registered materialized views have read a particular row from a materialized view log, it is removed, yes. If there are multiple materialized views that depend on the same log, they would all need to refresh before it would be safe to remove the MV log entry. If one of the materialized views does a non-incremental refresh, there may be cases where the log doesn't get purged automatically.
    2) No, your query wouldn't cause anything to be purged (though you wouldn't see anything interesting unless you happen to implement lots of code to parse the change vectors stored in the log). I don't know that the exact mechanism that Oracle uses has been published, though you could go through and trace a session to get an idea of the moving pieces. From a practical standpoint, you just need to know that when you create a fast-refreshable materialized view, it's going to register itself as being interested in particular MV logs.
    3) It would depend on what is stored in the MV log. The refresh process may need to grab particular columns from the table if your log is just storing the fact that data for a particular key changed. You can specify, when you create a materialized view log, that you want to store particular columns or to include new values (with the INCLUDING NEW VALUES clause); see the sketch below. That may be beneficial (or necessary) to the fast refresh process, but it tends to increase the storage space for the materialized view log and the cost of maintaining it.
    4) Sequential reads against a table are perfectly normal-- it just implies that someone is looking at a particular block in the table (i.e. looking up a row in the table by ROWID based on the ROWID in an index or in a materialized view log).
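    For illustration, a log that captures a column and its new values might look like this (the table and column names are assumptions, not from the original post):

    create materialized view log on orders
      with primary key, rowid (order_total)
      including new values;

    Storing the column in the log lets the refresh avoid revisiting the master table, at the cost of a bigger log.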
    Justin

  • Creation of materialized view from remote linked table

    Hi ,
    I am facing a problem creating a materialized view that is based on a remote link; my query involves one equi-join, and the two tables together contribute around 2.75 crore (27.5 million) rows. I am trying to create two different MVs, but the views are taking a very long time to create. I also cannot compromise on performance, so if you have any ideas or suggestions, please post them below.
    Thanks,

    user13104802 wrote:
    Hi ,
    I am facing a problem creating a materialized view that is based on a remote link; my query involves one equi-join, and the two tables together contribute around 2.75 crore (27.5 million) rows. I am trying to create two different MVs, but the views are taking a very long time to create. I also cannot compromise on performance, so if you have any ideas or suggestions, please post them below.
    Thanks,
    Welcome to the forum.
    You will need to provide more information if you are interested in getting an intelligent response to your post. It appears that you are creating 2 different MVs, but the details of each are not provided, i.e.:
    Where does each of the source tables exist, i.e. local or remote?
    How many rows are in each?
    How will the MVs be refreshed?
    There are other considerations too: competition for resources, processing power, network bandwidth, etc.
    If all of the source tables exist on the remote database, then consider creating the MV there and creating a local view across the db link, or possibly create an MV on the remote server for a subset of the remote data and link to that MV locally, as in the sketch below.
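    A minimal sketch of that last suggestion, with all object names assumed:

    -- on the remote database, where both source tables live:
    create materialized view mv_joined
      build immediate
      refresh force on demand
      as
      select a.id, a.col1, b.col2
      from   big_tab_a a, big_tab_b b
      where  b.a_id = a.id;

    -- on the local database, read it through the db link:
    create or replace view v_joined as
      select * from mv_joined@remote_db;

    This moves the expensive join to the machine that owns the data, so only the joined result crosses the network.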

  • How to send internal table data through mail from report in foreground

    hi all,
    I am trying to convert internal table data into Excel format and send it through mail by running the report in the foreground.
    The mail goes out successfully in Excel format, but I am facing a problem with the format of the material number column:
    the material number shows in the wrong format, 2.63E+11, instead of the correct value, 263215000000.
    Please suggest an alternative approach for the above problem.
    Thanks,
    Sivagopal R.

    Hi Siva,
      Try copying 263215000000 into one of the cells of an Excel sheet and pressing Enter: it will automatically be converted to 2.63E+11.
      This means the default formatting of the Excel sheet makes this happen. If you convert the format of the cell to "NUMBER" then you will get the required result.
      But I doubt whether or not this is possible through ABAP programming.
    Regards,
    Vimal.

  • Query on Materialized View created on Prebuilt Table

    Hi,
    I will try to explain the scenario below, concerning a materialized view I want to set up for our database.
    I have created a materialized view on a prebuilt table on the target side, on the primary key. Now, to purge history data more than 5 years old from the target side, I executed the following steps:
    1. Dropped the materialized view which was previously created on the prebuilt table on the target side.
    2. Deleted the data more than 5 years old from that target table.
    3. Recreated the materialized view on the prebuilt table.
    Now, the problem I am facing is that if any changes happen on the source side during the above 3 steps, those changed records are not captured even after successfully rebuilding the materialized view on the target side (i.e., after completing step 3 above, the intermediate changes were lost).
    Can you please let me know exactly what I am doing wrong here and how I can achieve my intended result?
    Regards,
    Koushik

    See Metalink (My Oracle Support) Doc ID 252246.1. The document describes this situation: a materialized view was defined on a table, Table A. This table had a referential constraint defined against another table, Table B, declared as 'ON DELETE CASCADE'. An 'ON DELETE CASCADE' constraint is not allowed on views, but as the constraint was created on the table underlying the materialized view, Table A, it could be created, although it would behave as a constraint on the view. The constraint existed for performance reasons, which is permitted, and was disabled when created, but a general script had been run to enable all constraints. When a delete was performed on Table B, the error above was reported although there was no view created on Table B.
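    On the original question: recreating an MV on a prebuilt table does not replay changes that happened at the source while the view did not exist, because nothing was tracking them on the target's behalf during that window. A hedged sketch of the usual fix, with names assumed, is to recreate the view and then force one COMPLETE refresh to resynchronize before resuming fast refreshes:

    create materialized view hist_tab
      on prebuilt table
      refresh fast with primary key
      as select * from hist_tab@src_link;

    begin
      dbms_mview.refresh('HIST_TAB', method => 'C');  -- one-off complete refresh to resync
    end;
    /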

  • Materialized view or a big table

    Hi all,
    In our project based on Documentum we are expecting to store 140 million of invoices of third party consultant companies in our Oracle 10g database. We received two different approaches from data modeling team.
    The first is, in order to speed up queries, to “denormalize” the data and store it in one table which will have 140 million records with a total row length of 403 bytes; the alternative is to split it into two tables, Company and Invoice.
    In that case the Company table will contain 2,000,000 records x 116 bytes and the Invoice table 140 million records x 171 bytes. They also suggested creating a materialized view based on those two tables to take advantage of the “fast refresh” feature.
    I don’t have any experience with materialized views, and my question is: which of those approaches is the best?
    Any suggestion will be much appreciated.
    Thanks in advance,
    Alex

    I'd say that it mostly depends on other requirements at least including:
    1) How many changes (inserts/updates) do you expect, especially in peak hours?
    2) How quickly would you like to see changes in the mat view if you choose this approach: ON COMMIT, or on a regular interval? If on commit, then if you predict performance problems in peak hours (see above) you'll have even more problems, because on-commit refresh is rather expensive in terms of total statements executed (see the tables under “Bright idea – materialized views” in http://www.gplivna.eu/papers/mat_views_search.htm).
    3) How big is the possibility, and will you have a requirement at all, to update existing companies' data in already-registered invoices?
    4) What kind of queries will you have, and would at least some of them benefit from the 2-table approach, i.e. could they be satisfied from the Company table alone? (A sketch of the MV approach follows below.)
    Gints Plivna
    http://www.gplivna.eu
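    If the two-table-plus-MV route is chosen, here is a minimal sketch of a fast-refreshable join MV, with column names assumed: both master tables need MV logs WITH ROWID, and the MV's select list must carry the rowid of every table in the join:

    create materialized view log on company with rowid;
    create materialized view log on invoice with rowid;

    create materialized view mv_invoice_flat
      build immediate
      refresh fast on commit
      as
      select c.rowid  c_rid,
             i.rowid  i_rid,
             c.company_id, c.company_name,
             i.invoice_id, i.invoice_date, i.amount
      from   company c, invoice i
      where  i.company_id = c.company_id;

    Whether ON COMMIT is affordable at your insert rates is exactly point 2 above; REFRESH FAST ON DEMAND on an interval is the cheaper alternative.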

  • Help needed in Exporting tables data through SQL query

    Hi All,
    I need to write a shell script (ksh) to take a backup of some tables' data.
    The table list is not static; the tables are selected through a dynamic SQL query.
    Can anybody tell me how to write the export command to export tables which are selected dynamically through a SQL query?
    I tried this:
    exp ------ tables = query \" select empno from emp where ename\= \'SSS\' \"
    but it throws the following error:
    EXP-00035: QUERY parameter valid only for table mode exports
    Thanks in advance,

    Hi,
    You can dynamically generate the parameter file for the export utility using a shell script; the parameter file can contain whatever table list you want each time. Then simply run the command
    $ exp parfile=myfile.txt
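    A hedged ksh sketch of that idea; the credentials, the driving query, and the paths are all assumptions:

    #!/bin/ksh
    # build the table list from a dynamic query
    sqlplus -s scott/tiger <<EOF > /tmp/tab.lst
    set heading off feedback off pagesize 0
    select table_name from user_tables where table_name like 'EMP%';
    EOF

    # join the list into a comma-separated string and generate the parfile
    TABLES=$(paste -sd, /tmp/tab.lst)
    cat > /tmp/myfile.txt <<EOF2
    tables=($TABLES)
    file=/tmp/backup.dmp
    log=/tmp/backup.log
    EOF2

    exp scott/tiger parfile=/tmp/myfile.txt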

  • ORA-12096: error in materialized view log on a table

    Hi, while updating a table I get the error below:
    ORA-12096: error in materialized view log on "FII"."FII_GL_JE_SUMMARY_B"
    What might be the problem?
    Thanks.

    The table definition has probably changed; you may need to drop the mview log and recreate it.
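    A minimal sketch (the log options here are assumptions; check DBA_MVIEW_LOGS and what the dependent MVs actually need before recreating):

    drop materialized view log on fii.fii_gl_je_summary_b;
    create materialized view log on fii.fii_gl_je_summary_b
      with rowid;

    Any dependent fast-refresh MVs will likely need a complete refresh afterwards, since dropping the log loses the recorded changes.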

  • Create materialized view in SE referring table in EE

    Hi DBAs,
    I am trying to create a Materialized view on Oracle 10g Standard Edition which is installed on Linux Debian Lenny.
    The master table is on Oracle 10g Enterprise Edition installed on Red Hat Linux 4.1.2.
    When I run,
    create materialized view t1 build immediate refresh fast as (select * from aksharaemsmigrated.t1@db102)
    i get,
    ORA-12028: materialized view type is not supported by master site
    When I run,
    create materialized view t1 build immediate refresh fast on commit as (select * from aksharaemsmigrated.t1@db102);
    I get,
    ORA-01031: insufficient privileges
    When I run,
    select * from aksharaemsmigrated.t1@db102
    I get,
    the rows from aksharaemsmigrated.t1@db102
    NOTE: 'db102' is remote DB on 10g EE for which I created db link.
    I am able to create Mview for local table.
    Is this not supported? I mean, creating an MView on Oracle SE that refers to a base table on Oracle EE.
    I have to deliver a solution soon. Can you please throw some light on this?
    Regards,
    Vijay

    Don't put the select statement inside parentheses.
    Test these:
    create table x1 as select * from aksharaemsmigrated.t1@db102;
    create materialized view t1 build immediate refresh fast as select * from aksharaemsmigrated.t1@db102;
    Note that you cannot create an MV that is REFRESH ON COMMIT across databases.
    See my explanation at http://hemantoracledba.blogspot.com/2008/06/mvs-with-refresh-on-commit-cannot-be.html
    Hemant K Chitale

  • Transfer table data through DB Link

    Hi, hope somebody can help me with this one.
    I have two Oracle servers (one is v8 and the other v9i) and I want to transfer a table (structure + data) from the v8 one to the other. Let's assume that I am issuing SQL commands from the TARGET (v9i) server. I already have a DB link pointing to the source server, so reading from the source table is not a problem; in fact, it works quickly and nicely with commands like:
    INSERT INTO Table
    SELECT *
    FROM Table@DBLink_to_Remote_DB
    The problem starts when I try to transfer a table with a LONG column in its structure. It gives me an ORA-00600 Oracle internal error, which doesn't look good at all. Should I assume that someone else has already tried this with no problems and, thus, consider this an Oracle bug? Anyway, could I transfer this table using a DB link at all? Any other suggestions?
    Thanks a lot in advance
    Francisco

    I don't think it's possible to transfer LONG data through a DB link; it's an Oracle restriction.
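    One workaround that is documented to handle LONG columns is the SQL*Plus COPY command, which moves the data client-side instead of through a database link (credentials and object names below are assumed):

    SQL> COPY FROM scott/tiger@v8db TO scott/tiger@v9db -
    > CREATE target_table USING SELECT * FROM source_table

    COPY supports CHAR, DATE, LONG, NUMBER and VARCHAR2 columns, and SET LONG controls how many bytes of each LONG value are copied.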

  • Does hash partition distribute data evenly across partitions?

    Per the Oracle documentation, hash partitioning uses Oracle's hashing algorithm to assign a hash value to each row's partitioning key and place the row in the appropriate partition, and the data will be evenly distributed across the partitions, provided the following conditions hold:
    1. The partition count should follow the 2^n rule.
    2. The data in the partition key column should have high cardinality.
    I have used hash partitioning in some of our application tables, but the data isn't distributed evenly across partitions. To verify this, I performed a small test:
    Table script :
    create table ch_acct_mast_hash (
      cod_acct_no number)
    partition by hash (cod_acct_no)
    partitions 128;
    Data population script :
    declare
      i number;
    begin
      i := 1000000000000000;
      for l in 1 .. 100000 loop
        insert into ch_acct_mast_hash values (i);
        i := i + 1;
      end loop;
      commit;
    end;
    Row-count check :
    select count(1) from Ch_Acct_Mast_hash ; --rowcount is 100000
    Gather stats script :
    begin
    dbms_stats.gather_table_stats('C43HDEV', 'CH_ACCT_MAST_HASH');
    end;
    Data distribution check :
    Select min(num_rows), max(num_rows) from dba_tab_partitions
    where table_name = 'CH_ACCT_MAST_HASH';
    Result is :
    min(num_rows) = 700
    max(num_rows) = 853
    As per the result, it seems there is a lot of skew in the data distribution across partitions. Maybe I am missing something, or something is not right.
    Can anybody help me to understand this behavior?
    Edited by: Kshitij Kasliwal on Nov 2, 2012 4:49 AM

    >
    I have used hash partitioning in some of our application tables, but data isn't distributed evenly across partitions.
    >
    All keys with the same data value will also have the same hash value and so will be in the same partition.
    So the actual hash distribution in any particular case will depend on the actual data distribution. And, as Iordan showed, the data distribution depends not only on cardinality but on the standard deviation of the key values.
    To use a shorter version of that example, consider these data samples, each of which has 10 values. There is a standard deviation calculator here:
    http://easycalculation.com/statistics/standard-deviation.php
    0,1,0,2,0,3,0,4,0,5 - total 10, distinct 6, %distinct 60, mean 1.5, stan deviation 1.9, variance 3.6 - similar to Iordan's example
    0,5,0,5,0,5,0,5,0,5 - total 10, distinct 2, %distinct 20, mean 2.5, stan dev. 2.64, variance 6.9
    5,5,5,5,5,5,5,5,5,5 - total 10, distinct 1, %distinct 10, mean 5, stan dev. 0, variance 0
    0,1,2,3,4,5,6,7,8,9 - total 10, distinct 10, %distinct 100, mean 4.5, stan dev. 3.03, variance 9.2
    The first and last examples have the highest cardinality but only the last has unique values (i.e. 100% distinct).
    Note that the first example is lower for all other attributes but that doesn't mean it would hash more evenly.
    Also note that the last example, the unique values, has the highest variance.
    So there is no single attribute that controls this. As Iordan showed, the first example has a high %distinct, but all of those '0' values will hash to the same partition, so even using a perfect hash the data would use only 6 partitions.
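    As a sanity check on the numbers posted: with 100,000 unique keys hashed uniformly into 128 partitions, the expected count per partition is 100000/128 = 781.25, and the binomial standard deviation is sqrt(100000 * (1/128) * (127/128)) ≈ 27.8. Three standard deviations either side gives roughly 698 to 865 rows per partition, so the observed min of 700 and max of 853 is just normal random variation for a uniform hash, not skew.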

  • Send table data through mail in oracle 10g

    Hi ,
    I am trying to send mail through Oracle 10g.
    I can send mail through utl_mail.
    The text that I need to send is data from a table.
    The table contains information about all the employees.
    The table name is person. If an employee is absent on any day without any reason, there will be a row for that employee in the person table.
    There is also a column named email in this table.
    I need to write a stored procedure which will send the data about each particular employee to their respective email, for all the employees in the person table.
    Can anyone please help me with this?
    Thank you.

    Try this forum thread first:
    Re: send email by procedure
    There are lots of articles on how to accomplish this task on the web:
    -- utl_smtp example
    http://it.toolbox.com/wiki/index.php/Send_email_from_Oracle_Database
    http://www.databasejournal.com/features/oracle/article.php/3423431/Sending-e-mail-from-within-Oracle.htm
    -- From Application Express product
    http://www.oracle.com/technology/products/database/application_express/howtos/howto_workflow.html
    HTH -- Mark D Powell --
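    A hedged PL/SQL sketch of the loop itself, once mail is working; the sender address, subject text and the emp_name column are assumptions (UTL_MAIL must be installed and SMTP_OUT_SERVER set):

    begin
      for r in (select email, emp_name from person) loop
        utl_mail.send(
          sender     => 'hr@example.com',
          recipients => r.email,
          subject    => 'Unexplained absence',
          message    => 'Dear ' || r.emp_name ||
                        ', our records show an absence without a stated reason.');
      end loop;
    end;
    /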
