Flashback table and partitions??

Hi All,
Can we use flashback table with partitioned tables?
Any idea or document about the relationship between flashback table and partitions in 10g or 11g would help...
Thanks
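For anyone finding this later: FLASHBACK TABLE does work on partitioned tables, with two caveats worth knowing: row movement must be enabled, and flashback cannot cross DDL on the table (adding, dropping, or splitting a partition counts as DDL). A minimal sketch, assuming an illustrative table named SALES:

ALTER TABLE sales ENABLE ROW MOVEMENT;
FLASHBACK TABLE sales TO TIMESTAMP SYSTIMESTAMP - INTERVAL '15' MINUTE;

If partition maintenance has happened since the target point in time, the flashback typically fails and you would need FLASHBACK QUERY or a point-in-time recovery instead.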

Hi,
we need to input a PRT material number and find all the task lists in which it is used.
Our PRT material number is 12 characters long, and I could not find a suitable field in table PLFH to enter the PRT material number.
Our PRT looks like this:
FA115200A001
How do I enter these PRT material numbers and get the required data?
regards,
Madhu Kiran

Similar Messages

  • How to Create a Table with Merge and partitions in HANA

    Hi,
    What is the best way to create a table with MERGE, PARTITION, and UNLOAD PRIORITY options?
    Can anybody please give me some examples?
    Regards,
    Deva

    Ok,
    1) the UNLOAD PRIORITY has nothing to do with the order of data loads in your ETL process
    2) Unloading of columns will happen automatically. Don't specify anything for the tables and SAP HANA will take care of it
    3) Not sure where you get your ideas from, but there is no need to manually "flush" tables or anything like that. SAP HANA will take care of memory housekeeping.
    4) Partitioning and how to specify it for tables has been largely documented. Just read up on it.
    5) Delta Merge will happen automatically, as long as you don't prevent it (e.g. by trying to outsmart the mergedog rules)
    Seriously, I get the impression that this list of requirements is based on hearsay and a lack of actual information and experience with SAP HANA. There are a couple of extensive discussions on data loading optimization available here in SCN and on SAPHANA.COM. Please read those first.
    All of this has been discussed at length several times.
    - Lars
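    For reference, a minimal sketch of the syntax involved (all names and values are illustrative, not a recommendation; check the SAP HANA SQL reference for your revision):

    CREATE COLUMN TABLE sales_facts (
        id     BIGINT,
        region NVARCHAR(2),
        amount DECIMAL(15,2)
    )
    PARTITION BY HASH (id) PARTITIONS 4
    UNLOAD PRIORITY 5;

    -- the delta merge normally runs automatically via mergedog;
    -- a manual merge, if you really need one, is:
    MERGE DELTA OF sales_facts;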

  • Partitioned tables and IOT

    Hi All,
    We have a database using IOTs, but for performance reasons we now want
    to eliminate the IOT tables and use partitioned tables instead.
    Have you ever used partitioned tables in Oracle with your database?
    If you have, please share your experiences or point me to some documents
    on Oracle partitioned tables to make my job easier.
    Thanks in advance,
    JP

    Hi,
    You will get good information from this Oracle site,
    http://www.oracle.com/technology/documentation/index.html.
    Thanks
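    Since the thread shows no DDL, here is a minimal sketch (illustrative names only) of replacing an IOT with a range-partitioned heap table:

    -- old index-organized table
    CREATE TABLE orders_iot (
        order_id   NUMBER PRIMARY KEY,
        order_date DATE NOT NULL,
        amount     NUMBER
    ) ORGANIZATION INDEX;

    -- new heap table, range partitioned by date
    CREATE TABLE orders_part (
        order_id   NUMBER PRIMARY KEY,
        order_date DATE NOT NULL,
        amount     NUMBER
    )
    PARTITION BY RANGE (order_date)
    (PARTITION p2009 VALUES LESS THAN (DATE '2010-01-01'),
     PARTITION pmax  VALUES LESS THAN (MAXVALUE));

    INSERT /*+ APPEND */ INTO orders_part SELECT * FROM orders_iot;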

  • Question on Creating table from another table and copying the partition str

    Dear All,
    I want to know whether there is any way to create a table from another table and have the partitions of the new table exactly match those of the source table.
    Like
    CREATE TABLE TEST AS SELECT * FROM TEMP;
    The table TEMP has range and hash partitions.
    Is there any way, when using the above command, to have the partitions of TEMP copied to the new table TEST?
    Appreciate your suggestions on this one.
    Thanks,
    Madhu K.

    maybe this answers your question...
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:595568200346856483
    Ravi Kumar
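    A sketch of one common approach (I have not verified it matches the linked answer): CTAS does not carry partitioning over, so pull the DDL with DBMS_METADATA, rename, and then load:

    SELECT DBMS_METADATA.GET_DDL('TABLE', 'TEMP') FROM dual;
    -- edit the output, replacing TEMP with TEST, run it, then:
    INSERT /*+ APPEND */ INTO test SELECT * FROM temp;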

  • Cannot INSERT records into Partitioned Spatial Table and Index

    I am trying to tune our spatial storage by partitioning our spatial_entity table and index. I used the World Geographic Reference System (GEOREF), creating a partition for each 15 x 15 degree grid square and assigning a partition key of (decimal_longitude, decimal_latitude). The build went OK; however, when trying to insert a data record, I receive ORA-14400: inserted partition key does not map to any partition.
    I validated the CREATE statements, and all appears correct, but obviously something is not, which prompts me to ask for expert help in this forum.
    I would be very grateful for your help.
    Below are the code snippets for the table and index, and an insert statement.
    CREATE TABLE spatial_entity
    (geoloc_type VARCHAR2 (60 BYTE) NOT NULL
    ,entity_id NUMBER NOT NULL
    ,metadata_xml_uuid VARCHAR2 (40 BYTE) NOT NULL
    ,geoloc MDSYS.sdo_geometry NOT NULL
    ,nee_method CHAR (1 BYTE) NOT NULL
    ,nee_status CHAR (1 BYTE) NOT NULL
    ,decimal_latitude NUMBER (15, 6) NOT NULL
    ,decimal_longitude NUMBER (15, 6) NOT NULL
    )
    PARTITION BY RANGE (decimal_longitude, decimal_latitude)
    (PARTITION p_lt_0_90s
              VALUES LESS THAN (1, -90)
         ,PARTITION p_lt_0_75s
              VALUES LESS THAN (1, -75)
         ,PARTITION p_lt_0_60s
              VALUES LESS THAN (1, -60)
         ,PARTITION p_lt_0_45s
              VALUES LESS THAN (1, -45)
         ,PARTITION p_lt_0_30s
              VALUES LESS THAN (1, -30)
         ,PARTITION p_lt_0_15s
              VALUES LESS THAN (1, -15)
         ,PARTITION p_lt_0_0
              VALUES LESS THAN (1, 0)
         ,PARTITION p_lt_0_15n
              VALUES LESS THAN (1, 15)
         ,PARTITION p_lt_0_30n
              VALUES LESS THAN (1, 30)
         ,PARTITION p_lt_0_45n
              VALUES LESS THAN (1, 45)
         ,PARTITION p_lt_0_60n
              VALUES LESS THAN (1, 60)
         ,PARTITION p_lt_0_75n
              VALUES LESS THAN (1, 75)
         ,PARTITION p_lt_0_90n
              VALUES LESS THAN (1, maxvalue)
    );
    CREATE INDEX geo_spatial_ind ON spatial_entity (geoloc)
    INDEXTYPE IS mdsys.spatial_index
    PARAMETERS ('layer_gtype=MULTIPOINT TABLESPACE=GEO_SPATIAL_IND') LOCAL
    (PARTITION p_lt_0_90s,
    PARTITION p_lt_0_75s,
    PARTITION p_lt_0_60s,
    PARTITION p_lt_0_45s,
    PARTITION p_lt_0_30s,
    PARTITION p_lt_0_15s,
    PARTITION p_lt_0_0,
    PARTITION p_lt_0_15n,
    PARTITION p_lt_0_30n,
    PARTITION p_lt_0_45n,
    PARTITION p_lt_0_60n,
    PARTITION p_lt_0_75n,
    PARTITION p_lt_0_90n);
    INSERT INTO spatial_entity
    (geoloc_type
         ,entity_id
         ,metadata_xml_uuid
         ,geoloc
         ,nee_method
         ,nee_status
         ,decimal_latitude
         ,decimal_longitude
    )
    VALUES
              ('BATCH'
                   ,0
                   ,'6EC25B76-8482-4F95-E0440003BAD57EDF'
                   ,"MDSYS"."SDO_GEOMETRY"
                        2001
                        ,8307
                        ,"MDSYS"."SDO_POINT_TYPE" (32.915286, 44.337902, NULL)
                        ,NULL
                   ,NULL)
                   ,'M'
                   ,'U'
                   ,32.915286
              ,44.337902
    );
    Thank you for you help.
    Dave

    Thank you for your quick reply. I did not post the entire CREATE script as it is quite long. The portion of the script that is applicable to the INSERT is:
    ,PARTITION p_lt_45e_90s
              VALUES LESS THAN (23, -90)
         ,PARTITION p_lt_45e_75s
              VALUES LESS THAN (23, -75)
         ,PARTITION p_lt_45e_60s
              VALUES LESS THAN (23, -60)
         ,PARTITION p_lt_45e_45s
              VALUES LESS THAN (23, -45)
         ,PARTITION p_lt_45e_30s
              VALUES LESS THAN (23, -30)
         ,PARTITION p_lt_45e_15s
              VALUES LESS THAN (23, -15)
         ,PARTITION p_lt_45e_0
              VALUES LESS THAN (23, 0)
         ,PARTITION p_lt_45e_15n
              VALUES LESS THAN (23, 15)
         ,PARTITION p_lt_45e_30n
              VALUES LESS THAN (23, 30)
         ,PARTITION p_lt_45e_45n
              VALUES LESS THAN (23, 45)
         ,PARTITION p_lt_45e_60n
              VALUES LESS THAN (23, 60)
         ,PARTITION p_lt_45e_75n
              VALUES LESS THAN (23, 75)
         ,PARTITION p_lt_45e_90n
              VALUES LESS THAN (23, maxvalue)
    Or perhaps I do not fully understand. Are you indicating that I must explicitly state the longitude in each clause,
    e.g ,PARTITION p_lt_45e_45n
              VALUES LESS THAN (45, 45)
    ,PARTITION p_lt_45w_45n
              VALUES LESS THAN (-45, 45)
    If so, that answers the question of why it cannot find a partition; however, an Oracle white paper, "Oracle Spatial Partitioning Best Practices" (Sept 2004), discusses multi-column partitioning such as represented by this problem, and gives an example of:
    CREATE TABLE multi_partn_table (in_date DATE,
    geom SDO_GEOMETRY, x_value NUMBER, y_value NUMBER)
    PARTITION BY RANGE (X_VALUE, Y_VALUE)
    (PARTITION P_LT_90W_45S VALUES LESS THAN (1,-45),
    PARTITION P_LT_90W_0 VALUES LESS THAN (1,0),
    PARTITION P_LT_90W_45N VALUES LESS THAN (1,45),
    PARTITION P_LT_90W_90N VALUES LESS THAN (1,MAXVALUE));
    and as I am writing this I am seeing that I failed to include the longitude and latitude in the SDO_GEOMETRY clause, so it does appear that I need to explicitly state the longitude values.
    What is your judgement sir?
    Dave
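    For anyone else hitting ORA-14400 here: multi-column range bounds are compared lexicographically, so the second column only matters when the first column ties with the bound value. A small sketch (illustrative names) of bounds that do carry the longitude explicitly:

    CREATE TABLE georef_demo (lon NUMBER, lat NUMBER)
    PARTITION BY RANGE (lon, lat)
    (PARTITION p_45e_45n VALUES LESS THAN (45, 45),
     PARTITION p_45e_max VALUES LESS THAN (45, MAXVALUE),
     PARTITION p_90e_45n VALUES LESS THAN (90, 45),
     PARTITION p_90e_max VALUES LESS THAN (90, MAXVALUE));

    -- (44.34, 32.92) maps to p_45e_45n because 44.34 < 45;
    -- (45, 50) maps to p_45e_max because lon ties and 50 < MAXVALUE;
    -- (100, 0) raises ORA-14400 because no bound exceeds it.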

  • Top space consuming Tables (which considers LOBs and Partitions )

    11.2.0.3/Solaris
    When you run a query like the one below, which lists the top space-consuming segments, it lists LOB segments and table partitions separately from their tables..
    select segment_name, bytes/1024/1024/1024 gb, segment_type from dba_segments where owner = 'MCS_WM_USR'  order by gb desc;
    SEGMENT_NAME                                                                             GB SEGMENT_TYPE
    WMERROR                                                                          14.4287109 TABLE
    SYS_LOB0000737548C00014$$                                                        9.93554687 LOBSEGMENT
    WMSERVICE                                                                        6.00488281 TABLE
    WMSESSION                                                                        5.11621093 TABLE
    SYS_LOB0000737571C00017$$                                                        4.61914062 LOBSEGMENT
    WMCUSTOMPROCESSDATA                                                               2.8046875 TABLE
    SYS_C00511081                                                                    1.25097656 INDEX
    IDX_SVC_COM1                                                                     0.95501708 INDEX
    IDX_SESS_AUDTM                                                                   0.92291259 INDEX
    Does anyone have a query that lists actual table sizes, calculated by summing up all LOBs and partitions belonging to each table?
    For example;
    If table EMP has 3 LOB columns, in user_segments it will appear something like
    SEGMENT_NAME                                                                          GB     SEGMENT_TYPE
    EMP                                                                               3           TABLE
    SYS_LOB0000737548C00014$$                                                         7           LOBSEGMENT
    SYS_LOB0000451978C00014$$                                                         2           LOBSEGMENT
    SYS_LOB0000875128C00014$$                                                         5           LOBSEGMENT
    The total size of the EMP table is 3 + 7 + 2 + 5 = 17 GB.
    I am looking for query which will do a lookup in DBA_LOBS and DBA_PARTITIONS and add up space for each table.
    So, my required output will look like
    TABLE_NAME          Size
    EMP               17g
    DEPT                1g
    WMERRORS          104g

    Hi Garry,
    the segment_name in DBA_SEGMENTS can be used to group all table partitions of a partitioned table. So we only have to join DBA_LOBS to DBA_SEGMENTS and group the result rows:
    with
    basedata as (
    select owner
         , segment_name
         , segment_type
         , round(sum(bytes)/1024/1024/1024) gb
         , sum(bytes) bytes
         , count(*) segment_count
      from dba_segments s
    group by owner, segment_name, segment_type
    ),
    lobs as (
    select owner
         , table_name
         , segment_name
      from dba_lobs
    ),
    all_segs as (
    select coalesce(lobs.table_name, basedata.segment_name) table_name
         , basedata.*
      from basedata
      left outer join
           lobs
        on (basedata.segment_name = lobs.segment_name
            and basedata.owner = lobs.owner)
    )
    select table_name
         , sum(bytes) bytes
         , sum(gb) gb
      from all_segs 
    group by table_name
    having sum(gb) > .1;
    Regards
    Martin

  • [SOLVED] UEFI system booting from MBR partition table and GRUB legacy

    I'm trying to understand once and for all the process by which Arch can be booted on a system with UEFI firmware and an MBR partition table. Some of the information on the wiki seems conflicting / nonsensical at times. Apologies in advance if this has been answered time and time again, but I did search around, and all I found was fixes to get Arch to boot rather than comprehensive explanations of the boot process.
    Now, the way I would imagine it works is that it's just completely identical to the way it would work with a BIOS firmware. The UEFI firmware detects an MBR partitioning scheme (or is configured to know it's an MBR partitioning scheme), activates some "legacy" mode and executes the MBR boot code, just like a BIOS firmware would.
    The wiki, however, says otherwise. From the Macbook article: "Do not install GRUB onto /dev/sda !!! Doing so is likely to lead to an unstable post-environment."?
    So what is there in the MBR boot sector? Nothing?
    How does the firmware know what to boot if there's no 0xEF BIOS boot partition and no Grub stage 1 in the MBR boot sector?
    Also, how does installing Grub stage 1 to a partition work? Does it have to be at the beginning of the partition? Wouldn't that overwrite some existing data?
    I'm especially puzzled since many guides to installing Vista on a macbook recommend simply formatting as MBR, and installing as normal, which I suppose entails having the Windows installation process write its boot code to the MBR, ie the equivalent of installing grub stage 1 to /dev/sda rather than to the /boot partition, as the Macbook article suggests.
    Any input is appreciated.
    P.S. I realize it's probably simpler, if I just want to dual boot Windows and Arch, to install Windows 7 in UEFI-GPT mode, let it create the EFI System Partition, and then install GRUB 2 to that partition, but I'm still curious about the UEFI-MBR boot process.
    Last edited by padavoine (2012-06-06 09:35:10)

    padavoine wrote:
    CSM in UEFI firmwares does the exact same job as normal BIOS firmware.
    So it's something specific to the Mac that it's able to boot from a partition's VBR while ignoring the MBR?
    The reason that warning is given is that grub-legacy modifies more than just the MBR boot code region. It can overwrite some parts of the GPT header.
    Not true, the instruction is given in the context of an MBR format, not in the context of a GPT format, so there's nothing to overwrite and Stage 1.5 should be safely embeddable in the post-MBR gap.
    BIOS boot (the normal case with non-UEFI firmware, or CSM in UEFI firmware) does not read the partition table (at least it is supposed to be dumb in this regard); it simply launches whatever boot code exists in the first 440 bytes of the MBR region.
    So again, you're saying it's specific to the Mac UEFI that it lets you choose a partition whose VBR to load, regardless of what's in the MBR?
    I haven't used Macs so I can't comment on Mac firmware behaviour. But normal BIOS firmwares (legacy and CSM) launch only the MBR boot code and not the partition boot code. We need some chainload-capable boot manager in the MBR to launch the partition VBR.
    grub-legacy does not know anything about GPT. So when you install grub-legacy to /dev/sda, it installs the MBR boot code (stage1) and stage 1.5 code to the (supposed) post-MBR gap. Since there is no actual post-MBR gap in GPT (it has been taken over by the header and partition table), and grub-legacy does not check for GPT, it assumes the post-MBR gap exists, which is invalid in the case of GPT. grub-legacy embeds the stage 1.5 code in the GPT header and table region (which grub assumes to be an unused post-MBR gap) and thus corrupts it.
    0xEF is the MBR type code for UEFISYS partition. grub stage 1 (used in grub-legacy, not in grub2) is the 440-byte boot code stored in MBR for use in BIOS boot.
    That's precisely my point: with neither proper executable code in the MBR (since grub was installed to a partition, not to the MBR) nor a UEFI system partition, what does the firmware default to, and how does it know what partition to boot from?
    In that case it might fall back to the UEFI Shell (if it exists) or give an error similar to the case where a BIOS does not find any bootable code in the 440-byte MBR region.
    So even with bootcamp/CSM, the disk also needs to be MBR partitioned. So Macs use something called "Hybrid GPT/MBR" ( http://rodsbooks.com/gdisk/hybrid.html ) where the MBR table is synced to match the first 3 partitions in the GPT table.
    I know what Bootcamp does, and that's not what I was referring to. I was referring to standalone Vista installs. I wasn't puzzled at the fact that they were using MBR, I was puzzled at the fact that contrary to the recommendations for the standalone Arch install on the wiki (with MBR partitioning, not GPT), they didn't do anything to try and prevent Windows from writing to the MBR.
    You can't prevent Windows from overwriting the MBR region. You have to re-install the bootloader (grub2/syslinux etc.) after installing Windows. That is the reason why it is recommended to install Windows first and linux later.
    That's not true. I actually find it is much easier to install Windows UEFI-GPT using a USB rather than a DVD.
    I haven't done it since the only UEFI system I own has no DVD drive, but I was under the impression that it was simply a matter of choosing DVD UEFI boot in the firmware's boot menu.
    format the USB as FAT32 and extract the iso to it. That's it.
    No, that's not it precisely; it doesn't work out of the box with a standard Windows install USB, you need to fiddle around:
    2.3 Extract bootmgfw.efi from [WINDOWS_x86_64_ISO]/sources/install.wim => [INSTALL.WIM]/1/Windows/Boot/EFI/bootmgfw.efi (using 7-zip aka p7zip for both the files), or copy it from C:\Windows\Boot\EFI\bootmgfw.efi from a working Windows x86_64 installation.
    2.4 Copy the extracted bootmgfw.efi file to [MOUNTPOINT]/efi/microsoft/boot/bootmgfw.efi .
    Most of the Windows isos already have /EFI/BOOT/BOOTX64.EFI file, so no need to extract the bootmgfw.efi file.
    There is no difference between BIOS booting in UEFI firmwares and BIOS booting with legacy firmware.
    There has to be a difference, at least in the Mac firmware (sorry, I keep switching), since legacy firmware, AFAIK, cannot chainload a bootloader in a partition's VBR without there being some sort of "stage1" code in the MBR.
    No idea about Mac EFI. Apple made spaghetti out of the UEFI spec. To actually understand how Mac firmwares work, read the blog posts by Matthew Garrett of Red Hat about his efforts in getting Fedora to boot on Macs.

  • How to partition tables and indexes in this scenario?

    So our situation is pretty simple. We have 3 tables.
    A, B and C
    the model is A->>B->>C
    Currently A, B and C are range partitioned on a key, created_date; however, typically only C is ever qualified with created_date. There is a foreign key from B -> A and from C -> B.
    We have many queries where the data is identified by state, which is indexed, currently non-partitioned, on columns in A ... there are also indexes on the foreign keys that get from C -> B -> A. Again, these are non-partitioned indexes at this time.
    It is typical that we qualify A on either account or user or both. There are (non-partitioned) indexes on these.
    We have a problem now because many of the queries use leading wildcards, i.e. account LIKE '%ACCOUNT' etc. This often results in large full table scans. Our solution has been to remove the leading wildcard, but this isn't always possible.
    We are wondering how we can benefit from partitioning and/or subpartitioning table A, since it's partitioned on created_date but we rarely qualify by that.
    We are also wondering where and how we can benefit from either a global partitioned index or local partitioned indexes on table A. We suspect that the index on the foreign key from C to B could be a local partitioned index.
    I am also wondering what advantage pushing the state that's used to qualify A down to C would have.
    C is the table that we currently qualify with the partition key, so I figure we could also push down the state from A that's used to qualify the set of C's we want, based on the set of B's we want, based on the set of A's we get through qualifying on columns within A.
    If we push down some of those values to C and simply use C when filtering, I'm wondering what the plans will look like compared to having to work all the way up from the bottom to the top before it begins qualifying A.
    Edited by: steffi on Jan 14, 2011 11:36 PM

    > We are wondering how we can benefit from partitioning and/or subpartitioning table A, since it's partitioned on created_date but rarely qualify by that.
    Very good question. Why did you partition on it? You will never have INSERTs on these partitions, but maybe deletes and updates? The only advantage (I can think of) would be to set these partitions in a read-only tablespace to ease backup... but that's a weird reason.
    > We have many queries where the data is identified by state that is indexed, currently non-partitioned, on columns in A ... there are also indexes on the foreign keys that get from C -> B -> A. Again these are non-partitioned indexes at this time.
    Of course. Why should they be partitioned by created_date?
    > It is typical that we qualify A on either account or user or both. There are indexes (non partitioned) on these.
    > We have a problem now because many of the queries use leading wildcards, i.e. account LIKE '%ACCOUNT' etc. This often results in large full table scans. Our solution has been to remove the leading wildcard but this isn't always possible.
    I would suspect a full index scan. Isn't it?
    > We are also wondering where and how we can benefit from either global partitioned index or local partitioned indexes on table A. We suspect that the index on the foreign key from C to B could be a local partitioned index.
    As A is not accessed by any partition key, why should C and B profit? You should look to partition by the key you are using for access. But you are looking to tune your SQLs where the access on A is LIKE '%ACCOUNT'. Then, when there is a match, Oracle joins via your index and nested loops (right?) to B and C.
    > I am also wondering what impact pushing the state from A that's used to qualify A down to C would have.
    Why should it? It just makes the table and indexes larger => more IO.
    > C is the table that currently we qualify with the partition key so I figure if you also pushed down the state from A that's used to qualify the set of C's we want based on the set of B's we want based on the set of A through qualifying on columns within A.
    If the access from A to C were ... AND A.CREATE_DATE = C.CREATE_DATE AND c.key LIKE '%what I want%' (which does not qualify for a FK ;-) ), then, as that could result in a partition scan, you could "profit". But I'm sure that's not your model.
    > If we push down some of those values to C and simply use C when filtering I'm wondering what the plans will look like compared to having to work all the way up from the bottom to the top before it begins qualifying A.
    So you want to denormalize A, B, and C into one table? With the same access LIKE '%ACCOUNT' you would get a full scan on an index as before, just the objects would be larger due to redundancy and harder to maintain. In the end you would have a worse and slower design.
    Maybe you can explain what the problem is.
    A full index scan cannot be avoided, but it can be made faster by e.g. parallel query, and then the join to B and C should be a "snip" if you identify only a small subset of rows in these tables.
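    To make the local/global distinction above concrete, a sketch with made-up names (not a recommendation for this particular model):

    -- local: equipartitioned with C, one index partition per table partition
    CREATE INDEX c_b_fk_ix ON c (b_id) LOCAL;

    -- global: independently hash partitioned on the indexed column
    CREATE INDEX a_account_ix ON a (account)
    GLOBAL PARTITION BY HASH (account) PARTITIONS 8;

    Neither helps a leading-wildcard LIKE, though; that predicate cannot use a b-tree range scan regardless of partitioning.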

  • How to select data from AZure table storage without row key and partition key

    Hi 
    I need to select data from Azure table storage without a RowKey or PartitionKey, the way clicking Query in the Azure storage emulator displays all the data in a table.
    thanks 
    rajesh 

    Hi rajesh,
    It seems that you didn't manage to query the data using the storage emulator. I recommend using the Azure Server Explorer in Visual Studio to view and query your data. Please see this document (http://msdn.microsoft.com/en-us/library/azure/ff683677.aspx).
    And based on my experience, you may need to input the command in the Azure storage emulator, as on this page (http://msdn.microsoft.com/en-us/library/azure/gg433005.aspx).
    Regards,
    Will

  • Issue with Multiple LTS for a fact table and filters

    Hello,
    I am facing an issue with OBIEE 10g.
    In my model, I have a huge fact table F1 (partitioned and indexed). The average response time for queries targeting it was ~30-60 seconds, which did not really satisfy our end users.
    So we decided to create a materialized view that removes some dimensions that are not used by default but might be used if the end user adds some filters. I added the materialized view in the Physical Layer and in the corresponding Logical Table Source.
    I then tried to see if it works, but I was a bit surprised by the result. Indeed,
    -> If the report does not reference a truncated dimension, it targets the materialized view. -> Perfect
    -> If the report does reference a truncated dimension in the columns, it targets the Fact Table. -> Perfect
    -> If the report does reference a truncated dimension in the filters, it targets the materialized view. For this reason, the filter is never resolved and no join on the dimension table is applied, even though it exists in the generated logical SQL. -> KO.
    A suggestion could be to add the filters into the columns, but I am not satisfied with this approach because the materialized view would never be used in that case.
    Another suggestion could be to use query rewrite, but I'd like to have full control over the generation of the queries.
    Does someone know if the filters are not evaluated to determine which LTS to use? How can I force this evaluation?
    Regards,

    Hi,
    If I understand your description correctly, then your materialized view skips some dimensions (infrequent ones). However, when you reference these skipped dimensions in filters, the queries are hitting the materialized view and failing as these values do not exist. In this case, you could resolve it as follows
    1. Create dimensional hierarchies for all dimensions.
    2. In the fact table's logical sources set the content tabs properly. (Yes, I think this is it).
    When you skipped some dimensions, the grain of the new fact source (the materialized view in this case) is changed. For example:
    Say a fact is available with the keys for Product, Customer, Promotion dimensions. The grain for this is Product * Customer * Promotion
    Say another fact is available with the keys for Product, Customer. The grain for this is Product * Customer (In fact, I would say it is Product * Customer * Promotion Total).
    So in the second case, the grain of the table is changed. So setting appropriate content levels for these sources would automatically switch the sources.
    So, I request you to try these settings and let me know if it works.
    Thank you,
    Dhar

  • Table and Index compression in data warehouse - thoughts?

    Hi,
    We have a data warehouse with large fact tables and materialized views of this data.
    Approx. 3 million inserts per day, about 12 million at weekends.
    The fact tables are expected to have 200 million rows, and a couple will have 1-3 billion.
    Tables are partitioned and have bitmap indexes.
    Just wondered what your thoughts were about compressing large fact tables and mviews, both from the point of view of ETL into them and of reporting from them afterwards.
    I take it we can compress/uncompress accordingly without any problems?
    Many Thanks

    After compression, most SELECT statements would not get slower. Actually, many can get faster due to reduced IO and buffer needs.
    The situation with DMLs is more complex. It depends on the exact compression options (basic or advanced) and the DML (INSERT,UPDATE, direct load,..),but generally DML are negatively affected by compression.
    In a Data Warehouses (DWs), it is usually quite beneficial to compress partitions or tables that contain data that is not supposed to be modified (read only or read mostly). Please note that in many cases you do not have to compress while you are loading the data – you can do that later.
    You can also consider compressing some of your B-tree indexes (if you use them in your DW system).
    Iordan Iotzov
    http://iiotzov.wordpress.com/
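    A minimal sketch of the partition-level approach Iordan describes (names are illustrative):

    -- compress the data already in a read-mostly partition (MOVE rewrites the blocks)
    ALTER TABLE fact_sales MOVE PARTITION p_2010_q4 COMPRESS;

    -- local index partitions go UNUSABLE after the move, so rebuild them
    ALTER INDEX fact_sales_bix REBUILD PARTITION p_2010_q4;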

  • Index and partition not visible !!!

    Hello,
    I am creating a table with range partitioning and a primary key constraint, like:
    CREATE TABLE employees
    (employee_id NUMBER(4) NOT NULL,
    last_name VARCHAR2(10),
    department_id NUMBER(2),
    CONSTRAINT SUP_EMP_PK
    PRIMARY KEY
    (employee_id,department_id))
    PARTITION BY RANGE (department_id)
    (PARTITION employees_part1 VALUES LESS THAN (10) TABLESPACE DATA_003_P001_HUB3);
    but when I query the user_ind_partitions as
    select * from user_ind_partitions where partition_name='employee_part1';
    it returns nothing
    Why is this problem coming up?
    Help..
    Regards
    Abhinav

    Actually the problem statement is:
    I have table as :
    CREATE TABLE SUPERVISION_BPS
    (ID NUMBER NOT NULL,
    servicetype NUMBER NOT NULL,
    type VARCHAR2(6 BYTE) NOT NULL,
    insertDate DATE NOT NULL,
    filename VARCHAR2(100 BYTE) NOT NULL,
    numberOK NUMBER NOT NULL,
    numberKO NUMBER NOT NULL,
    numberCRE NUMBER NOT NULL,
    numberUPD NUMBER NOT NULL,
    numberDEL NUMBER NOT NULL,
    CONSTRAINT SUP_BPS_PK
    PRIMARY KEY
    (ID, servicetype, type)
    using index local
    TABLESPACE DATA_003_P001_HUB3
    )
    PARTITION BY RANGE (InsertDate)
    (PARTITION "PARTITION_LAST" VALUES LESS THAN (MAXVALUE)
    );
    create sequence seq_sup_bps start with 1 increment by 1;
    The problem is whenever I am deleting a partition as
    alter table &1 drop partition &2;
    and rebuilding the partition as:
    alter index &1 parallel;
    alter index &1 rebuild partition &2 compute statistics;
    alter index &1 noparallel;
    based on the unusable index partitions found by:
    select user_ind_partitions.index_name || '|' || user_ind_partitions.partition_name
    from user_indexes, user_ind_partitions
    where user_indexes.index_name=user_ind_partitions.index_name
    and user_ind_partitions.status='UNUSABLE'
    and upper(user_indexes.table_name)='&2';
    Now when an entry with the same primary key value is inserted, it gives an error about putting the same value in again.
    Kindly help.
    I have tried everything but have not found the solution.
    Note: I have tried removing the UNUSABLE check, but how do I get this done as required?
    Regards
    Abhinav
    Edited by: user8744860 on Jul 2, 2012 2:35 AM
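    For reference, two things worth checking on the original question (my reading, not confirmed in the thread): the dictionary stores partition names in uppercase, and USER_IND_PARTITIONS only has rows for LOCAL indexes; the primary key in the first example is not declared LOCAL, so its index is global and non-partitioned and has no index partitions to list. An uppercase query against the table partitions, for comparison:

    SELECT index_name, partition_name, status
      FROM user_ind_partitions
     WHERE partition_name = 'EMPLOYEES_PART1';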

  • How to Find what are tables and "Table Name" in particular "Data File"

    Hi Team,
    I have a database that is 300 GB in size, and it has 6 .ndf files.
    How do I find which tables (and their names) are in a particular data file [.ndf]?

    Hi,
    In addition to Prashanth's suggestion, you can also use the following Transact-SQL statement to get more detailed information, including all objects and indexes per filegroup / partition and the allocated data size.
    The script works with Microsoft SQL Server 2005 and higher versions in all editions. For more details, please review this article:
    List all Objects and Indexes per Filegroup / Partition.
    -- List all Objects and Indexes
    -- per Filegroup / Partition and Allocation Type
    -- including the allocated data size
    SELECT DS.name AS DataSpaceName
    ,AU.type_desc AS AllocationDesc
    ,AU.total_pages / 128 AS TotalSizeMB
    ,AU.used_pages / 128 AS UsedSizeMB
    ,AU.data_pages / 128 AS DataSizeMB
    ,SCH.name AS SchemaName
    ,OBJ.type_desc AS ObjectType
    ,OBJ.name AS ObjectName
    ,IDX.type_desc AS IndexType
    ,IDX.name AS IndexName
    FROM sys.data_spaces AS DS
    INNER JOIN sys.allocation_units AS AU
    ON DS.data_space_id = AU.data_space_id
    INNER JOIN sys.partitions AS PA
    ON (AU.type IN (1, 3)
    AND AU.container_id = PA.hobt_id)
    OR
    (AU.type = 2
    AND AU.container_id = PA.partition_id)
    INNER JOIN sys.objects AS OBJ
    ON PA.object_id = OBJ.object_id
    INNER JOIN sys.schemas AS SCH
    ON OBJ.schema_id = SCH.schema_id
    LEFT JOIN sys.indexes AS IDX
    ON PA.object_id = IDX.object_id
    AND PA.index_id = IDX.index_id
    WHERE OBJ.type_desc = 'USER_TABLE' -- add this WHERE clause to display only user tables
    ORDER BY DS.name
    ,SCH.name
    ,OBJ.name
    ,IDX.name
    Thanks,
    Lydia Zhang

  • Dbms_redefinition used to convert a non partitioned table to partitioned

    A table is created with the DDL below:
    CREATE TABLE TEST1("EQUIPMENT_DIM_ID" NUMBER(9,0) NOT NULL ENABLE,
    "CARD_DIM_ID" NUMBER(9,0),
    "NH21_DIM_ID" NUMBER(5,0) NOT NULL ENABLE);
    The interim table is created with:
    CREATE TABLE INTERIM("EQUIPMENT_DIM_ID" NUMBER(9,0) NOT NULL ENABLE,
    "CARD_DIM_ID" NUMBER(9,0),
    "NH21_DIM_ID" NUMBER(5,0) NOT NULL ENABLE);
    PARTITION BY RANGE ("EQUIPMENT_DIM_ID")
    (PARTITION "P0" VALUES LESS THAN (1)
    PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) NOCOMPRESS ,
    PARTITION "P1" VALUES LESS THAN (2)
    PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) NOCOMPRESS );
    Performed DBMS_REDEFINITION (start, sync, and finish) to get the table TEST1 data from the non-partitioned table into the partitioned one. At the end of the conversion, dbms_metadata.get_ddl shows the following:
    CREATE TABLE TEST("EQUIPMENT_DIM_ID" NUMBER(9,0) CONSTRAINT "SYS_C005605" NOT NULL ENABLE,
    "CARD_DIM_ID" NUMBER(9,0),
    "NH21_DIM_ID" NUMBER(5,0) CONSTRAINT "SYS_C005601" NOT NULL ENABLE);
    PARTITION BY RANGE ("EQUIPMENT_DIM_ID")
    (PARTITION "P0" VALUES LESS THAN (1)
    PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) NOCOMPRESS ,
    PARTITION "P1" VALUES LESS THAN (2)
    PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) NOCOMPRESS );
    Can you help me hide or remove the CONSTRAINT "SYS_C005605" sections shown in the DDL for the table? The reason being, if I take this DDL and load it in another database, it can give an error if a constraint with the same name already exists.
    Many thanks
    Regards
    Manoj Thakkan
    Oracle DBA
    Bangalore

    Create your NOT NULL check constraints using ALTER TABLE and name them as you would a primary or foreign key ... with a name that makes sense.
    I personally have a strong dislike for system generated naming and would be thrilled if this lazy practice of defining columns as NOT NULL during table creation went away.
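    A sketch of both options (names are illustrative). Either name the constraint up front:

    ALTER TABLE test1 MODIFY (equipment_dim_id CONSTRAINT test1_equip_nn NOT NULL);

    or, worth testing on your version, strip constraints from the generated DDL before moving it to another database:

    BEGIN
        DBMS_METADATA.SET_TRANSFORM_PARAM(
            DBMS_METADATA.SESSION_TRANSFORM, 'CONSTRAINTS', FALSE);
    END;
    /
    SELECT DBMS_METADATA.GET_DDL('TABLE', 'TEST') FROM dual;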

  • Truncate table and materialized view log

    I use Oracle 10gR2.
    I have a table and on that table a materialized view log.
    I execute in a pl/sql procedure:
    1) execute immediate('drop materialized view log on tab1');
    then:
    2) execute immediate('truncate table tab1');
    3) Now I insert a lot of records in tab1
    4) execute immediate('create materialized view log on tab1 WITH rowid INCLUDING NEW VALUES');
    When I create the materialized view log I receive this message:
    ORA-32321: refresh fast on tab2 unsupported after detail table truncate
    Why?

    Fast refresh after a truncate operation on the container table is not supported, regardless of whether the container table is partitioned.
    Perform a refresh complete.
    ORA-32321:
    Cause:     A detail table has been truncated and no materialized view
         supports fast refresh after a detail table has been truncated
    Action:     Use REFRESH COMPLETE. Note: you can determine why your
         materialized view does not support fast refresh after TRUNCATE
         using the DBMS_MVIEW.EXPLAIN_MV() API.
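    A sketch of that suggested action (the MV name is illustrative):

    -- complete refresh instead of fast
    EXEC DBMS_MVIEW.REFRESH('TAB2', method => 'C');

    -- to see exactly why fast refresh is not supported
    -- (MV_CAPABILITIES_TABLE is created by $ORACLE_HOME/rdbms/admin/utlxmv.sql):
    EXEC DBMS_MVIEW.EXPLAIN_MV('TAB2');
    SELECT capability_name, possible, msgtxt FROM mv_capabilities_table;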

Maybe you are looking for

  • To get the table for the purchase order , Completely delivered

    Hi Experts, I need to find a table for a given purchase order no. EKKO-EBELN, which shows whether the status of the purchase order is completely delivered or completely invoiced. I tried finding it using the F1 help and the where-used list, but couldn't. Thank

  • Client Object Model Sharepoint 2010

    We are using the client object model in SharePoint 2010. When we execute the ExecuteQuery method of the client object model, we are getting the following error: The security validation for this page is invalid. Click Back in your Web browser, refresh the page, and tr

  • Excise(serious issue)

    Hi friends... when excise was changed from 10% to 8%, I changed the condition records for all tax codes with the JMOP condition type from 10% to 8% directly in t-code FV12 and saved. My issue is, before the change, BED was calculating properly

  • Jaggies and artefacts

    Having seemingly resolved my issues regarding contrast, I find I have another problem with quite obvious jaggies and shimmering, which is more obvious on my Mac than on my LCD TV, although I suspect being so close to the screen would have something to d

  • Pci card driver

    Hello everyone, I am completely new to LabVIEW, so I want some basic help. Actually, I want to take the 16-bit digital data from the PCI card NI DAQ 6534 and then plot the spectrum of that data. So I have installed LabVIEW from the NI evaluat