ONLINE TABLE REDEFINITION

Hi All,
One of our production tables has 5500 extents even though it contains no data, and it is slowing down our application (it is an Advanced Queuing table).
The application vendor recommends that we recreate this table, and they have a script for that.
I want to do it online, without any downtime for the application.
Can I reorganize this table online?
Regards,
Umair

As far as I know, your table will be locked during an ALTER TABLE ... MOVE, so no insert/update/delete will be possible during this operation.
If your table is really empty, you may simply execute a TRUNCATE TABLE ... DROP STORAGE.
If you are using 10g, you may also have a look at ALTER TABLE ... SHRINK.
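For the empty-table case, a minimal sketch (the table name is hypothetical; whether truncating an AQ queue table directly is appropriate is a question for the vendor script mentioned above):

```sql
-- Release the table's extents back to the tablespace
TRUNCATE TABLE my_aq_table DROP STORAGE;
```

TRUNCATE ... DROP STORAGE deallocates all space above MINEXTENTS, which is what collapses the 5500 extents; the 10g SHRINK option is the alternative when existing rows must be kept.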

Similar Messages

  • Shell script for online table redefinition

    Hi,
Could someone help me out in building a script for online table redefinition on AIX with 11g, moving the table into a new tablespace.
    Thanks

You are embarking upon a voyage in which you will expend substantial effort reinventing the wheel.
Look at the Oracle DBMS_REDEFINITION built-in package.
http://www.morganslibrary.org/reference/pkgs/dbms_redefinition.html
And never do something outside the database, in a proprietary language, that can be done far more efficiently inside the RDBMS in a platform-independent language.
In other words, inside the database, I could code your entire project with error handling in far less than 15 minutes, including testing.
With a simple DDL statement, issued at the command prompt in SQL*Plus ... I could do it in less than 15 seconds. Your choice:
    ALTER TABLE <table_name> MOVE TABLESPACE <new_tablespace_name>;
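One practical caveat worth adding: a plain MOVE marks the table's indexes UNUSABLE, so they must be rebuilt afterwards (the names below are placeholders):

```sql
-- Move the table, then rebuild each invalidated index
ALTER TABLE my_table MOVE TABLESPACE new_ts;
ALTER INDEX my_table_pk REBUILD;
```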

  • Problem with DB6CONV online table move

    Hello,
I tried to schedule a few concurrent online table moves using DB6CONV (version 4.08, on NW07 SP17, DB2 9.5), and I got the error message below in two of the table-move sessions:
    20090917 174517 ONLINE CONVERSION: Start of Step 1 (INIT)
    20090917 174517 CALL SAPTOOLS.ONLINE_TABLE_MOVE( 'SAPBD1', '/BIC/AZGL_D00600',
                      'BD1#ZDGLAD', 'BD1#ZDGLAI', '', '', '', 'INIT' )
    20090917 174518 ONLINE_TABLE_MOVE - INIT step aborted.
    20090917 174518 SQL0443N  Routine "*BLE_MOVE" (specific name "") has returned an error
                      SQLSTATE with diagnostic text "SQL0624N  Table
                      "SAPTOOLS.ONLINE_TABLE_MOVE" already has a ".  SQLSTATE=42889
    20090917 174518 Step 1 aborted with errors
I then chose "Continue" on one session alone, and it completed successfully.
Is this a problem when running DB6CONV online table moves in parallel? Is there any way to fix this problem?
    Thanks,
    Patrick

    Hi Siegfried,
My trace file would exceed the maximum 15000 characters allowed in this forum. If you like, I can send it to you by email.
With a single table move it does not fail, so the trace file below shows a successful INIT step:
    13286: entry: SQLT_db2sap_online_table_move     "SAPBD1"        "/BIC/B0001321000"      "INIT,TRACE"
      838: entry: SQLT_db2sap_Otm_constructor       fffffffffff5c40
        145: entry: SQLT_db2sap_StoredProcedure_connect     fffffffffff5c40
        172: exit:  SQLT_db2sap_StoredProcedure_connect=0
        113: entry: SQLT_db2sap_StoredProcedure_initVersion fffffffffff5c40
          130: SQLT_db2sap_StoredProcedure_initVersion      "driverVer"     "09.05.0002"
        136: exit:  SQLT_db2sap_StoredProcedure_initVersion=0       true    true
        877: SQLT_db2sap_Otm_constructor    "indexNameSz"   128
        880: SQLT_db2sap_Otm_constructor    "triggerNameSz" 128
      883: exit:  SQLT_db2sap_Otm_constructor=0
      2951: entry: SQLT_db2sap_Otm_getProtocolEntryAsInt    "TRACE" fffffffffff5c40 true    0
    2921: entry: SQLT_db2sap_Otm_getProtocolEntry       "TRACE" fffffffffff5138 129     fffffffffff5c40 true    "<null>"
      2847: entry: SQLT_db2sap_Otm_getProtocolEntry     "SAPBD1"        "/BIC/B0001321000"      "TRACE" fffffffffff5138       129     fffffffffff5c40
        305: entry: SQLT_db2sap_StoredProcedure_prepare "SELECT COALESCE(value, longvalue) AS VALUE FROM SAPTOOLS.ONLINE_TABLE_MOVE WHERE tabschema = ? and tabname = ? and key = ? OPTIMIZE FOR 1 ROW"       fffffffffff5c40 false
            323: exit:  SQLT_db2sap_StoredProcedure_prepare=0
            2890: SQLT_db2sap_HANDLE_SQLCA  -100014
          2903: exit:  SQLT_db2sap_Otm_getProtocolEntry=-100014     ""
      2847: entry: SQLT_db2sap_Otm_getProtocolEntry     ""      ""      "TRACE" fffffffffff5138 129     fffffffffff5c40
            2890: SQLT_db2sap_HANDLE_SQLCA  -100014
          2903: exit:  SQLT_db2sap_Otm_getProtocolEntry=-100014     ""
          2940: SQLT_db2sap_HANDLE_SQLCA    -100014
        2943: exit:  SQLT_db2sap_Otm_getProtocolEntry=-100014       ""
      2981: exit:  SQLT_db2sap_Otm_getProtocolEntryAsInt=0  0
      3469: entry: SQLT_db2sap_Otm_setLock  fffffffffff5c40
        2921: entry: SQLT_db2sap_Otm_getProtocolEntry       "LOCK"  fffffffffff50d8 129     fffffffffff5c40 false   ""
      2847: entry: SQLT_db2sap_Otm_getProtocolEntry     "SAPBD1"        "/BIC/B0001321000"      "LOCK"  fffffffffff50d8 129     fffffffffff5c40
            2890: SQLT_db2sap_HANDLE_SQLCA  -100014
          2903: exit:  SQLT_db2sap_Otm_getProtocolEntry=-100014     ""
        2943: exit:  SQLT_db2sap_Otm_getProtocolEntry=0     ""
        2993: entry: SQLT_db2sap_Otm_protocolTime   "LOCK"  fffffffffff5c40
          305: entry: SQLT_db2sap_StoredProcedure_prepare   "MERGE INTO SAPTOOLS.ONLINE_TABLE_MOVE AS t USING TABLE
      323: exit:  SQLT_db2sap_StoredProcedure_prepare=0
      922: entry: SQLT_db2sap_Otm_destroy   110029290       fffffffffff5c40
        3522: entry: SQLT_db2sap_Otm_releaseLock    fffffffffff5c40
        3536: exit:  SQLT_db2sap_Otm_releaseLock=0
        770: entry: SQLT_db2sap_Otm_deinitReplay    fffffffffff5c40
        788: exit:  SQLT_db2sap_Otm_deinitReplay=0
        181: entry: SQLT_db2sap_StoredProcedure_disconnect  fffffffffff5c40
        260: exit:  SQLT_db2sap_StoredProcedure_disconnect=0
        648: entry: SQLT_db2sap_StmtPool_destructor
          649: SQLT_db2sap_StmtPool_destructor      2
        654: exit:  SQLT_db2sap_StmtPool_destructor=0
        181: entry: SQLT_db2sap_StoredProcedure_disconnect  fffffffffff5148
        260: exit:  SQLT_db2sap_StoredProcedure_disconnect=0
      997: exit:  SQLT_db2sap_Otm_destroy=0
    13634: exit:  SQLT_db2sap_online_table_move=0   "00000" ""

  • What are the step involved in online table reorg in oracle 10g?

    Hi All,
Could you please provide step-by-step instructions on how to perform an online table reorg?
Thanks
    Bala

Etbin wrote:
You mean http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_redefi.htm#ARPLS042 ?
Regards
Etbin
No, not table redefinition, I think he means a tablespace reorg as you can do through Enterprise Manager.

  • Online Table redefinition(partition maintenance) in 10g

    Hi,
I would like to know more about the 10g feature of online table redefinition. I am specifically looking at removing a partition from, or adding a new partition to, an existing table while sessions are accessing the table. Can you please guide me on this?
    Thanks
    Anand

I recommend writing procedures for appending partitions and dropping partitions on your existing partitioned tables.
Here is a simple example of dropping partitions; remember, you should only drop a partition when it is no longer used by the application. Similarly, you can write PL/SQL to append partitions:
DECLARE
   v_msg   VARCHAR2 (200);
   v_table VARCHAR2 (30);
   v_sql   VARCHAR2 (200);
   CURSOR table_cur IS
      SELECT DISTINCT table_name
        FROM user_tab_partitions
       WHERE <your condition> = criteria;
   -- Retrieve partitions qualified to be dropped; the retention period
   -- comes from a configuration table
   CURSOR partition_date_cur (i_table_name IN VARCHAR2) IS
      SELECT TO_DATE (SUBSTR (partition_name, 9, 10), 'YYYYMMDDHH24') partition_date,
             partition_name,
             tablespace_name,
             table_name
        FROM user_tab_partitions
       WHERE table_name = i_table_name
         AND TO_DATE (SUBSTR (partition_name, 9, 10), 'YYYYMMDDHH24') <
                (SELECT TRUNC (SYSDATE) - (SELECT value
                                             FROM some_retention_period_conf
                                            WHERE code = 'RETENTION_PERIOD')
                   FROM DUAL);
BEGIN
   -- some logging message
   OPEN table_cur;
   FETCH table_cur INTO v_table;
   CLOSE table_cur;
   FOR partition_date_rec IN partition_date_cur (v_table)
   LOOP
      v_sql := 'ALTER TABLE '
            || partition_date_rec.table_name
            || ' DROP PARTITION '
            || partition_date_rec.partition_name;
      EXECUTE IMMEDIATE v_sql;
   END LOOP;
EXCEPTION
   WHEN OTHERS THEN
      v_msg := 'Partition Removal Procedure Failed: ' || SUBSTR (SQLERRM, 1, 150);
END;
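The individual DDL statements that such a procedure issues are themselves the online partition-maintenance operations the question asks about; a minimal sketch with hypothetical table and partition names:

```sql
-- Append a new range partition to an existing partitioned table
ALTER TABLE my_part_table
  ADD PARTITION p_2006_07
  VALUES LESS THAN (TO_DATE('2006-08-01', 'YYYY-MM-DD'));

-- Drop an obsolete partition; UPDATE GLOBAL INDEXES keeps global
-- indexes usable for sessions accessing the table concurrently
ALTER TABLE my_part_table
  DROP PARTITION p_2005_01 UPDATE GLOBAL INDEXES;
```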

  • DB6CONV Online table move

    Hi all,
We used the SAP standard report DB6CONV to move some tables from one tablespace to another in ECC 5.0.
All steps finished following SAP Note 362325 and no errors occurred. What confuses me is that the table which has been moved still shows up in the old tablespace in the SAP system (transaction DB13), but when we go to the DB level we find the table already resides in the new tablespace.
We use the latest version of DB6CONV, 4.08, and do an online table move.
Should it be like that? Can somebody clarify my confusion? Thank you very much!
PS: SAP NW640, DB2 8.2.2, AIX 5.3
    Best Regards,
    Terry

Hi Terry,
The historical tablespace information in SAP is collected by a standard SAP background job. Just wait until that job has run for the information to be updated.
To check whether your table actually resides in the new tablespace, you can go to SE11, Technical Settings, and check the data class. Your moved table should reside in the data class that is associated with the target tablespace.
    Regards,
    -Beck

  • 10G online table redefinition

We have a large DWH table T1 (one single partition) which needs to be loaded incrementally, but due to the large number of updates and deletes, we create another table T2 (one single partition) where we load all the data as inserts and then use partition exchange to load T1 and retire T2, as we can't have any downtime. This entire process takes a long time. Is there a better way, using Oracle 10g online table redefinition, so that the entire process doesn't take too long?
    Thanks

If you are getting a large number of updates and deletes, then I believe you have more of an OLTP processing system than a data warehouse.
Have you thought about redesigning the system to accommodate your large number of updates and deletes as well as your incremental loads?
What about using partitioning, as you alluded to briefly?
    Regards
    Tim

  • ONLINE TABLE RANGE PARTITIONING

    Hello Gurus,
I have a huge table which I want to partition by RANGE (date).
    select count(*) from ECTMT.TEST_BORAL;
    COUNT(*)
    1070985
    Columns are :
    NID NUMBER,
    TICKET_NBR VARCHAR2(20),
    CHANGE_DATE DATE, (I want to partitioned by this column value )
    CHANGE_DESC VARCHAR2(4000),
    OPERATOR VARCHAR2(255),
    TYPE VARCHAR2(255),
    LAST_MODIFIED DATE
Now, the issue is space. I know the simple partitioning method in which we create an empty partitioned table and load the data into it.
But I don't want to do that. I want to partition the existing table (ECTMT.TEST_BORAL) in place; in other words, an online split partitioning.
If somebody can help, I'd really appreciate it.
    Regards,
    Srinivas kumar

    You cannot partition a table that is not currently partitioned. You will have to create a new partitioned table and move the data from the current table into the partitioned table.
    I am a little concerned that your problem is with space. You have a 1 million row table (not particularly big) and each row is no more than 5k, so we're only talking about 5 GB of data (plus another few GB, likely, for indexes). If you're trying to use partitioning in a database that doesn't have 5 GB of free space, you're probably going to have much bigger problems down the line.
    You could create a new, empty, partitioned table that had a single partition that covered all CHANGE_DATE values, do a partition exchange to load the current table into your new partitioned table, and then proceed to split the one partition into the proper size and number of partitions. That is going to be slower than the alternative options, however, and involve more downtime, so it's not something that would be generally recommended. Getting 5 GB of space temporarily is likely a far easier solution.
    Justin
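A sketch of the exchange-then-split approach Justin describes, using the columns from the post (the partition names and boundary date are hypothetical):

```sql
-- New partitioned table with a single catch-all partition
CREATE TABLE ectmt.test_boral_part (
  nid           NUMBER,
  ticket_nbr    VARCHAR2(20),
  change_date   DATE,
  change_desc   VARCHAR2(4000),
  operator      VARCHAR2(255),
  type          VARCHAR2(255),
  last_modified DATE
)
PARTITION BY RANGE (change_date)
  (PARTITION p_all VALUES LESS THAN (MAXVALUE));

-- Swap the existing segment in (a fast, metadata-only operation)
ALTER TABLE ectmt.test_boral_part
  EXCHANGE PARTITION p_all WITH TABLE ectmt.test_boral;

-- Then split p_all repeatedly into the desired ranges
ALTER TABLE ectmt.test_boral_part SPLIT PARTITION p_all
  AT (TO_DATE('2008-01-01', 'YYYY-MM-DD'))
  INTO (PARTITION p_pre_2008, PARTITION p_all);
```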

  • How to do a mass UPDATE on a table that must be kept "online"

    Hello,
I am using Oracle 10g and would like to know how best to update a very large table (20 million rows) globally (one column in all rows) in such a way as to make the UPDATE as fast as possible, avoiding contention on indexes, row locks, etc.
The table has no foreign keys and no triggers. I have read about creating a temporary table, filling it, dropping the original and then renaming, but this does not seem reasonable considering that the table must be online at all times.
I have also tried a parallel hint, but it was slow.
    Any insights greatly appreciated.

    Is this identical to the question you asked Strategy for a fast global UPDATE on a large online table?? Or is there some difference between the two that I'm missing?
    Justin

  • TABLESPACE cleanup not happening after table data delete

    Hi all
I have a partitioned table with 4 partitions, all residing in a single tablespace. I populated the table with sample data and then deleted it all with a simple DELETE statement followed by a COMMIT. But the tablespace still shows data from the 4 partitions under the BYTES column in the query below:
    select segment_name, partition_name, tablespace_name, bytes
    from dba_segments
    where tablespace_name='tb_name'
    Am I missing something here? Do I need to run some other command after delete and commit to fully flush out the data?
    FYI, my system specs are:
    Windows 2008 Server Standard (32-bit)
    ORACLE 11.1.0.7.0

Hi,
LMT is Locally Managed Tablespace.
"Moreover, when you say the blocks will not be available immediately, when do you reckon they will be available? How do I go about automating this to free up space as soon as data is deleted?"
You cannot automate this job; it is already automated by Oracle.
"FYI, both pct_free and pct_used come up as empty values for my table."
The reason for a NULL value for PCT_USED is that the tablespace where the table resides uses Automatic Segment Space Management (ASSM). With ASSM there is no need for PCT_USED, only for PCT_FREE; Oracle manages blocks automatically.
Don't worry about releasing the space; Oracle will take care of it automatically.
    Go through the below link
    [http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/tspaces.htm#ADMIN10065]
    [http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/schema.htm#sthref2100]
    [http://www.dba-oracle.com/art_builder_assm.htm]
    This is an extract form Oracle docs
    Understanding Reclaimable Unused Space
    Over time, updates and deletes on objects within a tablespace can create pockets of empty space that individually are not large enough to be reused for new data. This type of empty space is referred to as fragmented free space.
    Objects with fragmented free space can result in much wasted space, and can impact database performance. The preferred way to defragment and reclaim this space is to perform an online segment shrink. This process consolidates fragmented free space below the high water mark and compacts the segment. After compaction, the high water mark is moved, resulting in new free space above the high water mark. That space above the high water mark is then deallocated. The segment remains available for queries and DML during most of the operation, and no extra disk space need be allocated.
    You use the Segment Advisor to identify segments that would benefit from online segment shrink. Only segments in locally managed tablespaces with automatic segment space management (ASSM) are eligible. Other restrictions on segment type exist. For more information, see "Shrinking Database Segments Online".
If a table with reclaimable space is not eligible for online segment shrink, or if you want to make changes to logical or physical attributes of the table while reclaiming space, you can use online table redefinition as an alternative to segment shrink. Online redefinition is also referred to as reorganization. Unlike online segment shrink, it requires extra disk space to be allocated. See "Redefining Tables Online" for more information.
Regards,
    Vijayaraghavan K
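The online segment shrink described in the extract boils down to a couple of statements (the table name is hypothetical; the table must live in an ASSM tablespace):

```sql
-- Row movement must be enabled before a shrink
ALTER TABLE my_table ENABLE ROW MOVEMENT;

-- Optional first pass: compact rows below the high water mark only
ALTER TABLE my_table SHRINK SPACE COMPACT;

-- Final pass: move the high water mark and release the freed space
ALTER TABLE my_table SHRINK SPACE;
```

Splitting the work into COMPACT and a final pass keeps the brief table lock needed to move the high water mark as short as possible.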

  • Long time reorganization table

    Hi Support
I tried to run a reorganization of a 128 GB table; the run took 15 hours.
BRtools ran this reorganization; the options used were:
    - brtools
    - (3)
    - (1)
    - BRSPACE options for reorganization of tables
    1 - BRSPACE profile (profile) ...... [initRQ1.sap]
    2 - Database user/password (user) .. [/]
    3 ~ Reorganization action (action) . []
    4 ~ Tablespace names (tablespace) .. []
    5 ~ Table owner (owner) ............ []
    6 ~ Table names (table) ............ [GLPCA]
    7 - Confirmation mode (confirm) .... [yes]
    8 - Extended output (output) ....... [no]
    9 - Scrolling line count (scroll) .. [20]
    10 - Message language (language) .... [E]
    11 - BRSPACE command line (command) . [-p initRQ1.sap -s 20 -l E -f
    tbreorg -t "GLPCA"]
    Standard keys: c - cont, b - back, s - stop, r - refr, h - help
    BR0662I Enter your choice:
    BR1001I BRSPACE 7.00 (41)
    BR1002I Start of BRSPACE processing: sehujakt.tbr 2012-02-01 16.11.59
    BR0484I BRSPACE log file: /oracle/RD1/sapreorg/sehujakt.tbr
    BR0280I BRSPACE time stamp: 2012-02-01 16.12.03
    BR1009I Name of database instance: RD1
    BR1010I BRSPACE action ID: sehujakt
    BR1011I BRSPACE function ID: tbr
    BR1012I BRSPACE function: tbreorg
    BR0280I BRSPACE time stamp: 2012-02-01 16.12.08
    BR0657I Input menu 353 - please enter/check input values
    Options for reorganization of tables: SAPRD1.GLPCA (degree 1)
    1 ~ New destination tablespace (newts) ........ []
    2 ~ Separate index tablespace (indts) ......... []
    3 - Parallel threads (parallel) ............... [1]
    4 ~ Table/index parallel degree (degree) ...... []
    5 - Create DDL statements (ddl) ............... [yes]
    6 ~ Category of initial extent size (initial) . []
    7 ~ Sort by fields of index (sortind) ......... []
    8 - Table reorganization mode (mode) .......... [online]
    Standard keys: c - cont, b - back, s - stop, r - refr, h - help
Its execution took 15 hours.
Now we have to do this same process in production, where table GLPCA is approx. 329 GB.
An archiving run has been done on this table, so the reorganization is now required to reclaim the real space in the table.
I need to run this reorganization with the best possible option set to avoid the high execution time: 15 hours in quality means production would take on average around 35 hours for this table, and so far I have not found the best way to reduce the execution time of this reorganization.
Thanks for your attention, I look forward to your comments
    Regards

    Hi,
Here is a simple example of an online table reorganization:
-- Check the table can be redefined
EXEC Dbms_Redefinition.Can_Redef_Table('SCOTT', 'EMPLOYEES');
-- Create the interim table with CTAS
CREATE TABLE scott.employees2
TABLESPACE tools AS
SELECT empno, first_name, salary AS sal
FROM scott.employees WHERE 1=2;
-- Start the redefinition
EXEC Dbms_Redefinition.Start_Redef_Table( -
  'SCOTT', -
  'EMPLOYEES', -
  'EMPLOYEES2', -
  'EMPNO EMPNO, FIRST_NAME FIRST_NAME, SALARY*1.10 SAL');
-- Optionally synchronize the interim table with intervening DML
EXEC Dbms_Redefinition.Sync_Interim_Table( -
  'SCOTT', 'EMPLOYEES', 'EMPLOYEES2');
-- Add keys, FKs and triggers to the interim table
ALTER TABLE scott.employees2 ADD
(CONSTRAINT emp_pk2 PRIMARY KEY (empno)
USING INDEX
TABLESPACE indx);
-- Complete the redefinition
EXEC Dbms_Redefinition.Finish_Redef_Table( -
  'SCOTT', 'EMPLOYEES', 'EMPLOYEES2');
-- Drop the interim table, which now holds the original data
DROP TABLE scott.employees2;
    Regards,
    Venkata S Pagolu
    Edited by: Venkata Pagolu on Feb 17, 2012 1:48 PM

  • DB02 view is empty on Table and Index analyses  DB2 9.7 after system copy

    Dear All,
I did the quality refresh by the system copy export/import method; ECC6 on HP-UX, DB2 9.7.
After the import, the runstats status in DB02 for table and index analysis was empty and all values showed '-1', even though:
a) all standard background jobs are scheduled in SM36
b) automatic runstats is enabled in the DB2 parameters
c) REORGCHK is scheduled periodically from DB13 and has already run twice
d) 'reorgchk update statistics on table all' was also run at the DB2 level.
But the runstats status in DB02 is not getting updated; it is empty.
Please suggest.
    Please suggest.
    Regards
    Vinay

    Hi Deepak,
Yes, that is possible (but only with an offline backup). But for the new features like reclaimable tablespaces (to lower the high water mark)
it's better to export/import with a system copy.
With a system copy you can also use index compression.
After backup and restore you can also get reclaimable tablespaces, but then you have to create new tablespaces
and use DB6CONV with online table move to move the contents of each old tablespace online to a new one.
    Best regards,
    Joachim

  • How do I use Aggregate formulas with multiple measures from different tables?

    I have three measures:
Cash - this sums the £ column in my 'Cash' table.
Online - this sums the £ column in my 'Online' table.
Phone - this sums the £ column in my 'Phone' table.
How do I now write aggregate formulas that combine these three measures, for example:
    Find the MIN or MAX of the three measures
    Average the three measures
    I have worked out how to use simple aggregation like add, subtract etc through the CALCULATION formula, but doing the average or MIN/MAX does not work.
    Thanks.

    Hi, thanks for the suggestions.
    Re: Julian
    I had thought about this method, unfortunately it is not always three measures so this doesn't work.
    Re: Tim
    I was not aware of the APPEND formula however I will definitely give it a try and report back - I can see this one working.
    Re: Michael
Apologies, I have never found an easy way of simulating some of my issues, since it would mean creating a new Power Pivot model and establishing quite a number of relationships. I definitely see the benefit when posting on the forum, since it makes my issue far more accessible; unfortunately, when I've posted before I've generally been racing against time and have not had time to prepare this anonymised data. Is there a quick way of doing it?

  • Advice needed on designing schema to accomodate multiple transaction tables.

    Hi,
    The attached images shows my current schema. It consists of three transaction tables, a product table and a calendar table.
    - Background -
The product table 'Q1 Data Set' contains all unique sales. In addition it contains a number of columns by which I will later filter my pivot tables (e.g. whether the customer of the order is new/returning). This table also contains a column named 'DateOrdered', the date the order was originally placed (but not paid).
Each sale can be paid either in a single transaction, or across multiple transactions of different transaction types.
An example of a sale paid in multiple parts would be an order that has three transactions: one online (table 'trans_sagepay'), one over the phone (table 'trans_epdq') and another by card (table 'trans_manual'). Furthermore, there can be more than one transaction of each type for a sale.
I have created measures which total the sales in each transaction table.
Each transaction has a 'transaction_date', which is the date of that individual transaction.
The calendar is simply a date table that has some friendly formatted columns for laying out pivot tables. An example column is FiscalMonthAbbrv, which displays months like '(04) - January' to accommodate our fiscal year.
    - Problem -
My problem is that I need the ability to create some tables that have Date Ordered as the rows (listed by Year > Month), and I need to produce other tables that have Transaction Date as the rows.
Date Ordered works fine; the problem comes when I try to create a table based on the transaction date.
With the current model seen in the attached image I cannot do it, because the transactions have a relationship to Q1 Data Set and that table has the relationship with the Cal_Trans table. In this scenario, whenever I set the rows to FiscalMonthAbbrv, the values displayed group the transactions not by transaction date but by date ordered. To explain further:
If I have an order A with a DateOrdered of 01/01/2014, but the £100 transaction for that order was made later, on 05/01/2014, that £100 is incorrectly attributed to 01/01/2014.
To clarify the type of table I am aiming for, see the mock-up below; I however NEED the ability to filter this table using columns found in Q1 Data Set.
How can I design the schema so that I can use both DateOrdered and TransactionDate? I cannot combine all three transaction tables into one because each transaction type has columns unique to that specific type.

Thanks for your suggestions. At the moment I don't have time to prepare a non-confidential copy of the data model; however, I've taken one step forward and one step back!
First, to clarify: to calculate sales of each transaction type I have created the following measures (I've given them friendly names):
rev_cash
rev_online
rev_phone
I then have a measure called rev_total which sums the above measures together. This allows me to calculate total revenue, but also to break it down by transaction type.
With this in mind I revised the schema based on Visakh's original suggestion to look like this:
Using this I was able to produce a table like the one below.
There were two issues with this:
If I add the individual measures for each transaction type I get no errors; as soon as I add the 'Total Sales' measure at the end of the table I get the error "Relationship between tables may be needed". Seemingly, however, the numbers still calculate as expected - what is causing this error and how do I remove it?
I CAN in this scenario filter by 'phd', which is a column in the Q1 Data Set table, and it works as expected. I cannot, however, filter by all columns in this table; an example would be 'word count'. 'Word Count' is an integer column, and each record in the Q1 Data Set table has a value set for it.
I would like to take the column above and add a new measure called 'Total Word Count' (which I have created) which will calculate the total number of words in that monthly period. When I add this, however, I get the same relationship error as above, and it displays the word-count total for the entire source table on every row of the pivot table.
How can I get this schema working so that I can filter by word count and other columns from the product table? It is confusing me how I can filter by one column, but not by another in the same table.
Also, I don't fully understand how I would add a second date table or how it would help my issues.
Thanks very much for your help.

  • Long running table partitioning job

    Dear HANA grus,
I've just finished table partitioning jobs for CDPOS (change document items), with 4 partitions by hash on 3 columns.
Total data volume is around 340 GB and the table size was 32 GB!!!!!
(The migration job was done without disabling CD, so we are currently deleting data from the table with RSCDOK99.)
Before partitioning, the data volume of the table was around 32 GB.
After partitioning, the size has changed to 25 GB.
It took around one and a half hours with an exclusive lock, as mentioned in the HANA administration guide.
(It is the QA DB, so fewer complaints.)
I don't think I could do this in the production DB.
Does anyone have any idea for accelerating this task?? (This is the fastest DBMS, HANA!!!!)
Or do you have any plan for online table partitioning functionality? (To the HANA development team)
Any comments would be appreciated.
    Cheers,
    - Jason

    Jason,
    looks like we're cross talking here...
    What was your rationale to partition the table in the first place?
           => To reduce deleting time of CDPOS            (As I mentioned it was almost 10% quantity of whole Data volume, So I would like to save deleting time of the table from any pros of partitioning table like partitioning pruning)
Ok, I see where you're coming from, but did you ever try out whether your idea would actually work?
As deletion of data is heavily related to locating the records to be deleted, creating an index would probably have been the better choice.
Thinking about it... you want to get rid of 10% of your data, and in order to speed the overall process up, you decide to move 100% of the data into sets of 25% of the data - each equally holding its 25% share of the 10% of records to be deleted.
The deletion then should run along these 4 sets of 25% of data.
It's surely me, but where is the speedup potential here?
    How many unloads happened during the re-partitioning?
           => It was fully uploaded in the memory before partitioning the table by myself.(from HANA studio)
    I was actually asking about unloads _during_ the re-partitioning process. Check M_CS_UNLOADS for the time frame in question.
    How do the now longer running SQL statements look like?
           => As i mentioned selecting/deleting increased almost twice.
    That's not what I asked.
    Post the SQL statement text that was taking longer.
    What are the three columns you picked for partitioning?
           => mandant, objectclas, tabname(QA has 2 clients and each of them have nearly same rows of the table)
    Why those? Because these are the primary key?
    I wouldn't be surprised if the SQL statements only refer to e.g. MANDT and TABNAME in the WHERE clause.
    In that case the partition pruning cannot work and all partitions have to be searched.
    How did you come up with 4 partitions? Why not 13, 72 or 213?
           => I thought each partitions' size would be 8GB(32GB/4) if they are divided into same size(just simple thought), and 8GB size is almost same size like other largest top20 tables in the HANA DB.
    Alright, so basically that was arbitrary.
    For the last comment of your reply, most people would do partition for their existing large tables to get any benefit of partitioning(just like me). I think your comment can be applied for the newly inserting data.
Well, I'm not sure what "most people" would do.
HASH partitioning a large existing table certainly is not an activity that is just triggered off in a production system. Adding partitions to a range-partitioned table, however, happens all the time.
    - Lars
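For reference, the re-partitioning discussed in this thread corresponds to a single DDL statement in HANA (the table and columns are as given above; this is a sketch of the syntax, not a recommendation to run it against production):

```sql
-- Hash re-partitioning of an existing column table into 4 parts
ALTER TABLE cdpos
  PARTITION BY HASH (mandant, objectclas, tabname) PARTITIONS 4;
```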
