Update on Partitioned table

Hi Friends,
I have a table which is partitioned based on the day of the year. I need to know whether an UPDATE statement on this table would consider ONLY the specific partition even if I DON'T SPECIFY the PARTITION clause in my UPDATE statement.
I hope my question is clear...
Any views on this are welcome...
Regards,
Aniruddh

Assuming your WHERE clause narrows the update down to a specific partition, and your indexes are partitioned, it should. I'm not sure what happens if you have global indexes that the query would benefit from.
Justin
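For illustration (the table, column, and partition names below are hypothetical, not from the original post): with a table range-partitioned on a date column, an UPDATE whose WHERE clause restricts that column should be pruned to the matching partition even without a PARTITION clause.

    CREATE TABLE sales_by_day (
      sale_date  DATE,
      amount     NUMBER
    )
    PARTITION BY RANGE (sale_date) (
      PARTITION p_20230101 VALUES LESS THAN (DATE '2023-01-02'),
      PARTITION p_20230102 VALUES LESS THAN (DATE '2023-01-03'),
      PARTITION p_max      VALUES LESS THAN (MAXVALUE)
    );

    -- No PARTITION clause; the predicate on the partition key lets the
    -- optimizer touch only p_20230101 (visible as Pstart/Pstop in the plan).
    UPDATE sales_by_day
    SET    amount = amount * 1.1
    WHERE  sale_date >= DATE '2023-01-01'
    AND    sale_date <  DATE '2023-01-02';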

Similar Messages

  • Compression getting disabled when performing Update on partitioned tables

    Hi All,
    I am on Oracle Database 11g Enterprise Edition Release 11.2.0.3.0.
    My question is related to Oracle Compression.
    I have a sub-partitioned table with Basic Compression enabled. In this compressed state I am updating a few columns of the table (with a normal UPDATE command, and with MERGE as well), but the end result shows an increase in the table size even though compression is still in the ENABLED state. After that, if I compress the sub-partition explicitly, the table comes back to its original size.
    Is this a bug? I read a white paper on 11g itself stating that compression remains enabled for all DML operations, so why this behaviour?
    Thanks,
    Ishan

    Ishan,
    Taking a look at http://docs.oracle.com/cd/E11882_01/server.112/e25494/tables.htm#CJAGFBFG, it seems that the distinction between OLTP and basic compression is sometimes a little vague ("Operations that permit compression include: ..."), but I can also find the statement "Rows inserted without using direct-path insert and updated rows are uncompressed." So I would say it's not a bug but a limitation of the feature. Updates just don't mix well with compression.
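    For what it's worth, the explicit recompression step Ishan describes would look something like this (table and sub-partition names here are hypothetical):

    -- Rebuild the bloated sub-partition with basic compression;
    -- any local indexes on it become UNUSABLE and need rebuilding afterwards.
    ALTER TABLE sales MOVE SUBPARTITION p2013_east COMPRESS;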
    Martin

  • Updating a Partitioned Table

    Hi
    I have a partitioned table with 1000 million records with avg 20 million records in each partition
    and I want to update the entire table.
    The way I am updating the table is for e.g.
    UPDATE /*+ PARALLEL(A,8) */ BKG_FAC A
    SET
    A.BKG_FLAG = DECODE(A.BKG_FLAG, NULL, 'N','Y'),
    A.BKG_ITEM = DECODE('ABC','123','DEF','456','999')
    WHERE PART_KEY BETWEEN 1 AND 5     -- stPart and endPart
    AND SUBSTR(A.ROWID, 15, 1) BETWEEN STARTCHAR AND ENDCHAR;
    Passing STARTCHAR and ENDCHAR so that the rows are split into 8 parts based on distinct ROWIDs.
    This works fine with non-partitioned tables.
    It looks like it takes a lock on the entire table and only one partition is processed at a time, so it takes a long time to complete.
    Could someone shed some light on this and suggest the best/fastest way to do it?
    Thanks,
    Chandu.

    Yes, I do, with the following command in my procedure:
    EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL DML';
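    For reference, a minimal sketch of the full sequence (using the table and columns from the original post); parallel DML has to be enabled in the session before the hinted UPDATE runs, and the transaction must be committed before the table is queried again:

    ALTER SESSION ENABLE PARALLEL DML;

    UPDATE /*+ PARALLEL(A,8) */ BKG_FAC A
    SET    A.BKG_FLAG = DECODE(A.BKG_FLAG, NULL, 'N', 'Y')
    WHERE  A.PART_KEY BETWEEN 1 AND 5;

    COMMIT;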

  • Problem in bulk update on partitioned table

    Hi,
    As per my earlier discussions on my huge table t_utr with 220 million rows,
    I'm running a bulk update statement on the table which may update 10 to 10 million rows in a single update statement.
    The problem is that when the statement has to update a larger number of rows, the update statement takes more time.
    Here I want to know: when an update statement has to update more rows, will it impact performance?
    Regards
    Deepak

    > I'm running a bulk update statement on the table which may update 10 to 10 million rows in a single update statement.
    Bulk updates do not make SQL statements execute any faster.
    > The problem is that when the statement has to update a larger number of rows, the update statement takes more time.
    It is not a problem, but a fact.
    > Here I want to know: when an update statement has to update more rows, will it impact performance?
    You have a car capable of traveling 120km/h. You drive from point A to point B. These are 10 km apart. It takes 5 minutes.
    Obviously when you travel from A to Z that are a 1000 km apart, it is going to take a lot longer than just 5 minutes.
    Will updating more rows impact performance? No. Because you cannot compare the time it takes to travel from point A to B with the time it takes to travel from point A to Z. It does not make sense to want to compare the two, or to think that a 1000km journey will be as fast to travel as a 10km journey.
    Updating 10 rows cannot be compared to updating 10 million rows. Expecting a 10 million row update to be equivalent in "performance" to a 10 row update is ludicrous.
    The correct question to ask is how to optimise a 10 million row update. The optimisation methods for a large update are obviously very different from those for a small update. E.g. 5 I/Os per row updated is insignificant when updating 10 rows, but very significant when updating 10 million rows.
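    To put rough numbers on that last point: at 5 I/Os per row, a 10-row update costs about 50 I/Os, while a 10 million row update costs about 50,000,000 I/Os. At, say, 5 ms per physical I/O that is on the order of 70 hours of I/O time, unless most of those I/Os are avoided through caching, batching, or a different access path - which is exactly why a large update has to be optimised differently.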

  • Issue with updating partitioned table

    Hi,
    Anyone seen this bug with updating partitioned tables.
    It's very esoteric - it occurs when we update a partitioned table using a join to a temp table (not a non-temp table), when the join has multiple joins, we're updating the partitioned column (which isn't the first column in the primary key), and the table contains a bit field. We've tried changing just one of these features and the bug disappears.
    We've tested this on 15.5 and 15.7 SP122 and the error occurs in both of them.
    Here's the test case - it does the same operation on a partitioned table and a non-partitioned table, but the partitioned table shows an error of "Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'".
    I'd be interested if anyone has seen this and has a version of Sybase without the issue.
    Unfortunately, when it happens on a replicated table it takes down the rep server.
    CREATE TABLE #table1
        (   PK          char(8) null,
            FileDate        date,
            changed         bit
        )
    CREATE TABLE partitioned  (
      PK         char(8) NOT NULL,
      ValidFrom     date DEFAULT current_date() NOT NULL,
      ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
      )
    LOCK DATAROWS
      PARTITION BY RANGE (ValidTo)
      ( p2014 VALUES <= ('20141231') ON [default],
      p2015 VALUES <= ('20151231') ON [default],
      pMAX VALUES <= (MAX) ON [default]
      )
    CREATE UNIQUE CLUSTERED INDEX pk
      ON partitioned(PK, ValidFrom, ValidTo)
      LOCAL INDEX
    CREATE TABLE unpartitioned  (
      PK         char(8) NOT NULL,
      ValidFrom     date DEFAULT current_date() NOT NULL,
      ValidTo       date DEFAULT '31-Dec-9999' NOT NULL,
      )
    LOCK DATAROWS
    CREATE UNIQUE CLUSTERED INDEX pk
      ON unpartitioned(PK, ValidFrom, ValidTo)
    insert partitioned
    select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    insert unpartitioned
    select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    insert #table1
    select "ET00jPzh", "Jan 15 2015", 1
    union all
    select "ET00jPzh", "Jan 15 2015", 1
    go
    update partitioned
    set    ValidTo = dateadd(dd,-1,FileDate)
    from   #table1 t
    inner  join partitioned p on (p.PK = t.PK)
    where  p.ValidTo = '99991231'
    and    t.changed = 1
    go
    update unpartitioned
    set    ValidTo = dateadd(dd,-1,FileDate)
    from   #table1 t
    inner  join unpartitioned u on (u.PK = t.PK)
    where  u.ValidTo = '99991231'
    and    t.changed = 1
    go
    drop table #table1
    go
    drop table partitioned
    drop table unpartitioned
    go

    With regard to replication - it is a bit unclear, as not enough information has been given to point out what happened. I am also not sure that your DBAs are accurately telling you what happened - they may have made the problem worse by not knowing themselves what to do; e.g. 'losing' the log points to the fact that someone doesn't know what they should. You can *always* disable the replication secondary truncation point and resync a standby system, so claims about 'losing' the log are a bit strange to be making.
    With regard to ASE versions, I suspect that if there are any differences, they may have to do with endian-ness and not the version of ASE itself. There may be other factors... but I would suggest the best thing would be to open a separate message/case on it.
    Adaptive Server Enterprise/15.7/EBF 23010 SMP SP130 /P/X64/Windows Server/ase157sp13x/3819/64-bit/OPT/Fri Aug 22 22:28:21 2014:
    -- testing with tinyint
    1> use demo_db
    1>
    2> CREATE TABLE #table1
    3>     (   PK          char(8) null,
    4>         FileDate        date,
    5> --        changed         bit
    6>  changed tinyint
    7>     )
    8>
    9> CREATE TABLE partitioned  (
    10>   PK         char(8) NOT NULL,
    11>   ValidFrom     date DEFAULT current_date() NOT NULL,
    12>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
    13>   )
    14>
    15> LOCK DATAROWS
    16>   PARTITION BY RANGE (ValidTo)
    17>   ( p2014 VALUES <= ('20141231') ON [default],
    18>   p2015 VALUES <= ('20151231') ON [default],
    19>   pMAX VALUES <= (MAX) ON [default]
    20>         )
    21>
    22> CREATE UNIQUE CLUSTERED INDEX pk
    23>   ON partitioned(PK, ValidFrom, ValidTo)
    24>   LOCAL INDEX
    25>
    26> CREATE TABLE unpartitioned  (
    27>   PK         char(8) NOT NULL,
    28>   ValidFrom     date DEFAULT current_date() NOT NULL,
    29>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL,
    30>   )
    31> LOCK DATAROWS
    32>
    33> CREATE UNIQUE CLUSTERED INDEX pk
    34>   ON unpartitioned(PK, ValidFrom, ValidTo)
    35>
    36> insert partitioned
    37> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    38>
    39> insert unpartitioned
    40> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    41>
    42> insert #table1
    43> select "ET00jPzh", "Jan 15 2015", 1
    44> union all
    45> select "ET00jPzh", "Jan 15 2015", 1
    (1 row affected)
    (1 row affected)
    (2 rows affected)
    1>
    2> update partitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join partitioned p on (p.PK = t.PK)
    6> where  p.ValidTo = '99991231'
    7> and    t.changed = 1
    Msg 2601, Level 14, State 6:
    Server 'PHILLY_ASE', Line 2:
    Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'
    Command has been aborted.
    (0 rows affected)
    1>
    2> update unpartitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join unpartitioned u on (u.PK = t.PK)
    6> where  u.ValidTo = '99991231'
    7> and    t.changed = 1
    (1 row affected)
    1>
    2> drop table #table1
    1>
    2> drop table partitioned
    3> drop table unpartitioned
    -- duplicating with 'int'
    1> use demo_db
    1>
    2> CREATE TABLE #table1
    3>     (   PK          char(8) null,
    4>         FileDate        date,
    5> --        changed         bit
    6>  changed int
    7>     )
    8>
    9> CREATE TABLE partitioned  (
    10>   PK         char(8) NOT NULL,
    11>   ValidFrom     date DEFAULT current_date() NOT NULL,
    12>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
    13>   )
    14>
    15> LOCK DATAROWS
    16>   PARTITION BY RANGE (ValidTo)
    17>   ( p2014 VALUES <= ('20141231') ON [default],
    18>   p2015 VALUES <= ('20151231') ON [default],
    19>   pMAX VALUES <= (MAX) ON [default]
    20>         )
    21>
    22> CREATE UNIQUE CLUSTERED INDEX pk
    23>   ON partitioned(PK, ValidFrom, ValidTo)
    24>   LOCAL INDEX
    25>
    26> CREATE TABLE unpartitioned  (
    27>   PK         char(8) NOT NULL,
    28>   ValidFrom     date DEFAULT current_date() NOT NULL,
    29>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL,
    30>   )
    31> LOCK DATAROWS
    32>
    33> CREATE UNIQUE CLUSTERED INDEX pk
    34>   ON unpartitioned(PK, ValidFrom, ValidTo)
    35>
    36> insert partitioned
    37> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    38>
    39> insert unpartitioned
    40> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
    41>
    42> insert #table1
    43> select "ET00jPzh", "Jan 15 2015", 1
    44> union all
    45> select "ET00jPzh", "Jan 15 2015", 1
    (1 row affected)
    (1 row affected)
    (2 rows affected)
    1>
    2> update partitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join partitioned p on (p.PK = t.PK)
    6> where  p.ValidTo = '99991231'
    7> and    t.changed = 1
    Msg 2601, Level 14, State 6:
    Server 'PHILLY_ASE', Line 2:
    Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'
    Command has been aborted.
    (0 rows affected)
    1>
    2> update unpartitioned
    3> set    ValidTo = dateadd(dd,-1,FileDate)
    4> from   #table1 t
    5> inner  join unpartitioned u on (u.PK = t.PK)
    6> where  u.ValidTo = '99991231'
    7> and    t.changed = 1
    (1 row affected)
    1>
    2> drop table #table1
    1>
    2> drop table partitioned
    3> drop table unpartitioned

  • Does partition pruning happen while updating a partition

    Hi,
    I want to know: if I am doing a conditional update on a partitioned table, do I need to mention the partition name?
    Or does Oracle do it by itself?

    No. You should only specify the partition name if you are doing some sort of partition maintenance operation (dropping a partition, adding a partition, etc.). Assuming your UPDATE statement has a WHERE clause that includes the partition key(s), Oracle should automatically prune the partitions if it is reasonable to do so.
    Of course, it's a good idea to check the query plan to confirm that the optimizer is actually pruning as expected.
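    For example (table and column names are hypothetical), a quick way to check is to look at the Pstart/Pstop columns of the plan:

    EXPLAIN PLAN FOR
      UPDATE sales SET amount = 0
      WHERE  sale_date = DATE '2015-01-07';

    SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY);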
    Justin

  • Performance issues with version enable partitioned tables?

    Hi all,
    Are there any known performance issues with version-enabled partitioned tables?
    I've been doing some performance tests with a large version-enabled partitioned table and it seems that the CBO is choosing very expensive plans during merge operations.
    Thanks in advance,
    Vitor
    Example:
         Object Name     Rows     Bytes     Cost     Object Node     In/Out     PStart     PStop
    UPDATE STATEMENT Optimizer Mode=CHOOSE          1          249                    
    UPDATE     SIG.SIG_QUA_IMG_LT                                   
    NESTED LOOPS SEMI          1     266     249                    
    PARTITION RANGE ALL                                   1     9
    TABLE ACCESS FULL     SIG.SIG_QUA_IMG_LT     1     259     2               1     9
    VIEW     SYS.VW_NSO_1     1     7     247                    
    NESTED LOOPS          1     739     247                    
    NESTED LOOPS          1     677     247                    
    NESTED LOOPS          1     412     246                    
    NESTED LOOPS          1     114     244                    
    INDEX RANGE SCAN     WMSYS.MODIFIED_TABLES_PK     1     62     2                    
    INDEX RANGE SCAN     SIG.QIM_PK     1     52     243                    
    TABLE ACCESS BY GLOBAL INDEX ROWID     SIG.SIG_QUA_IMG_LT     1     298     2               ROWID     ROW L
    INDEX RANGE SCAN     SIG.SIG_QUA_IMG_PKI$     1          1                    
    INDEX RANGE SCAN     WMSYS.WM$NEXTVER_TABLE_NV_INDX     1     265     1                    
    INDEX UNIQUE SCAN     WMSYS.MODIFIED_TABLES_PK     1     62                         
    /* Formatted on 2004/04/19 18:57 (Formatter Plus v4.8.0) */                                        
    UPDATE /*+ USE_NL(Z1) ROWID(Z1) */sig.sig_qua_img_lt z1                                        
    SET z1.nextver =                                        
    SYS.ltutil.subsversion                                        
    (z1.nextver,                                        
    SYS.ltutil.getcontainedverinrange (z1.nextver,                                        
    'SIG.SIG_QUA_IMG',                                        
    'NpCyPCX3dkOAHSuBMjGioQ==',                                        
    4574,                                        
    4575),
    4574)
    WHERE z1.ROWID IN (
    (SELECT /*+ ORDERED USE_NL(T1) USE_NL(T2) USE_NL(J2) USE_NL(J3)
    INDEX(T1 QIM_PK) INDEX(T2 SIG_QUA_IMG_PKI$)
    INDEX(J2 WM$NEXTVER_TABLE_NV_INDX) INDEX(J3 MODIFIED_TABLES_PK) */
    t2.ROWID
    FROM (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
    UNIQUE VERSION
    FROM wmsys.wm$modified_tables
    WHERE table_name = 'SIG.SIG_QUA_IMG'
    AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
    AND VERSION > 4574
    AND VERSION <= 4575) j1,
    sig.sig_qua_img_lt t1,
    sig.sig_qua_img_lt t2,
    wmsys.wm$nextver_table j2,
    (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
    UNIQUE VERSION
    FROM wmsys.wm$modified_tables
    WHERE table_name = 'SIG.SIG_QUA_IMG'
    AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
    AND VERSION > 4574
    AND VERSION <= 4575) j3
    WHERE t1.VERSION = j1.VERSION
    AND t1.ima_id = t2.ima_id
    AND t1.qim_inf_esq_x_tile = t2.qim_inf_esq_x_tile
    AND t1.qim_inf_esq_y_tile = t2.qim_inf_esq_y_tile
    AND t2.nextver != '-1'
    AND t2.nextver = j2.next_vers
    AND j2.VERSION = j3.VERSION))

    Hello Vitor,
    There are currently no known issues with version enabled tables that are partitioned. The merge operation may need to access all of the partitions of a table depending on the data that needs to be moved/copied from the child to the parent. This is the reason for the 'Partition Range All' step in the plan that you provided. The majority of the remaining steps are due to the hints that have been added, since this plan has provided the best performance for us in the past for this particular statement. If this is not the case for you, and you feel that another plan would yield better performance, then please let me know and I will take a look at it.
    One suggestion would be to make sure that the table has been analyzed recently so that the optimizer has the most current data about the table.
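    For instance, a minimal sketch using the owner and table from the plan above (adjust to your own environment):

    BEGIN
      DBMS_STATS.gather_table_stats (
        ownname     => 'SIG',
        tabname     => 'SIG_QUA_IMG_LT',
        granularity => 'ALL',     -- global and (sub)partition-level statistics
        cascade     => TRUE);     -- include the indexes
    END;
    /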
    Performance issues are very hard to fix without a reproducible test case, so it may be advisable to file a TAR if you continue to have significant performance issues with the mergeWorkspace operation.
    Thank You,
    Ben

  • Update a partitioned table dynamically

    Hi All,
    Parameters:
    MANDATORY input parameters, FROM_DATE and TO_DATE.
    OPTIONAL input parameter REVENUE_ID
    My requirement:
    To update table "ABC" based on the MANDATORY input parameters, FROM_DATE and TO_DATE. If the optional parameter, REVENUE_ID is entered, then, the column dept in ABC should get updated for the particular REVENUE_ID.
    Steps Taken:
    1) Since ABC has 3 million rows, a simple UPDATE will take a long time and affect performance. So, I am updating it partition-wise.
    2) I am fetching the PARTITION name in the cursor (the table is partitioned period-wise).
    Code Created:
    CREATE OR REPLACE PROCEDURE apps.test_dept_upd_prc (
    p_from_date IN VARCHAR2,
    p_to_date IN VARCHAR2,
    p_revenue_id IN NUMBER
    )
    AS
    CURSOR c_period (l_from_date DATE, l_to_date DATE)
    IS
    SELECT DISTINCT REPLACE (period_name, '-', '_') period_name,
    start_date,
    'ABC_'
    || REPLACE (period_name, '-', '_') partition_name
    FROM gl_period_statuses
    WHERE start_date BETWEEN TRUNC (TO_DATE (l_from_date), 'MM')
    AND TRUNC (TO_DATE (l_to_date), 'MM')
    AND set_of_books_id = 1
    AND application_id = 101
    ORDER BY start_date ASC;
    TYPE tbl_period IS TABLE OF c_period%ROWTYPE
    INDEX BY BINARY_INTEGER;
    rec_period tbl_period;
    l_from_date DATE;
    l_to_date DATE;
    l_partition VARCHAR2 (100);
    BEGIN
    -- DBMS_OUTPUT.ENABLE (1000000);
    l_from_date := TO_DATE (fnd_date.canonical_to_date (p_from_date));
    l_to_date := TO_DATE (fnd_date.canonical_to_date (p_to_date));
    OPEN c_period (l_from_date, l_to_date);
    LOOP
    FETCH c_period
    BULK COLLECT INTO rec_period LIMIT 100;
    EXIT WHEN c_period%NOTFOUND;
    FORALL i IN 1 .. rec_period.COUNT
    UPDATE ABC PARTITION (rec_period (i).partition_name) kr
    SET kr.dept =
    NVL ((SELECT rt.attribute1
    FROM ra_territories rt, hz_cust_site_uses_all hc
    WHERE rt.territory_id = hc.territory_id
    AND kr.ship_to_site_use_id = hc.site_use_id),
    kr.dept)
    WHERE kr.dept NOT IN (SELECT attribute1
    FROM ra_territories)
    AND TRUNC (kr.creation_date) BETWEEN TO_DATE (l_from_date)
    AND TO_DATE (l_to_date)
    AND kr.revenue_id =
    NVL2 (p_revenue_id, p_revenue_id, kr.revenue_id);
    END LOOP;
    COMMIT;
    CLOSE c_period;
    END test_dept_upd_prc;
    Issue:
    When I compile the code, it gives me an error at line
    UPDATE ABC PARTITION (rec_period (i).partition_name) kr
    as below:
    PL/SQL: ORA-00971: missing SET keyword
    This is the first time I am using the bulk-binding concept, but I have not been able to debug it.
    Please help me solve this issue.
    Thanks!
    Regards,
    AB

    Partitioning is a PHYSICAL database design feature. Similar to the physical storage clause for the table. The block size of the tablespace for that table. The name of the primary key constraint of that table. Etc.
    Application code works with the LOGICAL database design. So the app code needs to know the name of the table, the columns, and what the relationships are (e.g. for correctly joining that table with others).
    App code does this in order to satisfy business requirements. Like processing an invoice. Or processing a leave application.
    Why should app code ever need to know physical features and characteristics of a table? What business requirement says that "+process the invoice+" and then "+oh yes, use index SYS_C102344 to verify the customer's id+"?
    Application DOES NOT need to know ANYTHING about the physical database implementation. It does not need to know whether the table is a hash table or index organised table. It does not need to know whether or not the table is clustered.
    And it does not need to know whether the table is partitioned.. nor what partitions should or should not be used.
    If your application needs to know physical attributes of the table, like partition name, then your application code and design are flawed.
    Oracle's CBO (Cost Based Optimiser) has all the data needed for it to make the decision of how to best use the physical features and attributes of the data objects used by your application SQL. It can perfectly well determine what partitions to use and not use.
    It makes no sense at all for your application code to infringe on what the CBO does - and does very well. Your code does not have the same abilities. Your code lacks the intelligence. Your application code will never be able to compete with the CBO.
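    To illustrate, a hedged sketch of the same update with no PARTITION clause at all, assuming creation_date is the partition key so the CBO can prune on the date range by itself (the NVL on p_revenue_id keeps the optional filter; everything else is taken from the original procedure):

    UPDATE abc kr
    SET    kr.dept = NVL ((SELECT rt.attribute1
                           FROM   ra_territories rt, hz_cust_site_uses_all hc
                           WHERE  rt.territory_id = hc.territory_id
                           AND    kr.ship_to_site_use_id = hc.site_use_id),
                          kr.dept)
    WHERE  kr.dept NOT IN (SELECT attribute1 FROM ra_territories)
    AND    kr.creation_date >= l_from_date
    AND    kr.creation_date <  l_to_date + 1
    AND    kr.revenue_id = NVL (p_revenue_id, kr.revenue_id);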

  • I want to install OS X Mavericks on my MacBook Pro which is not with guid partition table

    I want to install OS X Mavericks on my MacBook Pro, which had both Mountain Lion and Windows 8.1. It is not in GUID partition table format, so I couldn't install Mavericks. Is there any way to change it to GUID partition table format and install Mavericks without losing Windows from my hard disk?

    Open the App Store and upgrade iPhoto to the Mavericks version.
    iWork and iLife for Mac come free with every new Mac purchase. Existing users running Mavericks can update their apps for free from the Mac App Store℠. iWork and iLife for iOS are available for free from the App Store℠ for any new device running iOS 7, and are also available as free updates for existing users. GarageBand for Mac and iOS are free for all OS X Mavericks and iOS 7 users. Additional GarageBand instruments and sounds are available for a one-time in-app purchase of $4.99 for each platform.
    The iWork apps are free with a new iOS device since 1 SEP 2013. They are free with a new Mac since 1 OCT 2013. They are also free with the upgrade to OS X Mavericks 10.9 if you had the previous version installed when you upgraded.

  • Deadlock issue in Oracle 10g Partitioned Tables

    Hi ALL,
    I am facing an issue of deadlock while inserting data into a partitioned table.
    I get the error "ORA-00600: Deadlock detected". When I look at the trace files, the following line appears in them:
    "Single resource deadlock: blocking enqueue which blocks itself".
    Here is the detail of my test case:
    1. I have a list-partitioned table, with partitioning defined on some business codes.
    2. I have a query that merges data into the partitioned table (actually it compares unique keys between a temporary table and the partitioned table and then issues an insert if the keys do not match; there is no update part).
    3. The temporary table contains transactional data against many business codes.
    4. When calling the above query from multiple (PL/SQL) sessions, I observe that the deadlock occurs when we merge data into the same partition (from different sessions); otherwise it is OK.
    5. Note that all sessions are executed at the same time. Also note that COMMIT is called after each session completes. Each session contains 2-3 more queries after the mentioned merge statement.
    Is there an issue with Oracle merge/insert into the same partition (from different sessions)? What is the locking mechanism for this particular case (partitioned tables)?
    My Oracle version is Oracle 10g (10.2.0.4). Kindly advise.
    Thanks,
    QQ.

    Could you print the deadlock tree so we can see the type and mode of the locking? (Please use the 'code' tags - see the FAQ at the top right of the screen - to show the output in fixed font.) Can you list any SQL operated by this session that gets reported in the trace file?
    Does the table reference itself in a foreign key?
    Is this table involved in any referential integrity constraints?
    Do you have a global primary key index, or a local primary key index?
    Are there any triggers on the table - if so, do they contain autonomous transactions?
    At present the only thought that springs to mind is that the merge command has to lock the target table to do the insert/update, but it also has to lock any child table. The mode of the child lock depends on whether it has a suitable index or not, and whether the child table is also the parent table. If you have two merges to the same partition, one may get its locks and the other may be in a state where it can't get one of the locks because it is waiting for the other. (This shouldn't be a self-deadlock, though, but the scenario might be heading in the right direction for a self-deadlock.)
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge." (Stephen Hawking)

  • Partition Table Looks Different Between OSX and Windows 7

    Hey all,
    I recently replaced the hard drive on my 2007 iMac, going from the 320GB drive to a 1 TB drive. It actually worked! The previous drive was failing in very odd ways, though booting into the Windows side (more on dual boot later) always seemed to work, and S.M.A.R.T. always reported that the physical drive seemed OK.
    The previous drive (320 GB) had around 200GB devoted to OSX and 100GB partitioned off for a working Windows 7 installation (custom installed x64 Win7 Ultimate). I had the Windows system image backed up to my NAS, and had a Windows system bootable disc to restore that image.
    After replacing the drive (and almost crying that I had actually done it right), I first restored OSX from a Time Machine backup, and let it take the full 1TB of space as Journalled HFS+. Then, I used Disk Utility to shrink OSX down to 500GB, and created a second partition (formatted to NTFS) with the remaining 500GB.
    Now, restoring a Windows system image is an odd thing, as it tries to do a lot of partition work as opposed to simply restoring the Windows install to a partition. I tried Macrium Reflect first (made a backup in that, too), and it looked like it was going to let me restore to the second partition. It completed the restore...and the entire hard drive was hosed. Partitions had been moved, renamed, resized, and nothing was bootable. I had to use Recovery from an external USB thumb drive to go back to the single, full-drive install of OSX.
    Then I tried again. Made the second NTFS partition and used the basic Windows System Restore disk to restore from the standard system image I had on the NAS. I was not expecting this to work. But it did. Windows started showing up in Startup Manager when "option" was pressed on bootup, and both OSX and Windows booted properly and ran fine. This is where I (finally) get to the supreme oddities:
    OSX Disk Utility still reports two 500GB partitions, one for OSX and one for Windows.
    In OSX the Windows partition shows as having NO DATA on it. Not sure what would happen if I tried to write a file to it when mounted, but there is no data on it when viewed from OSX (I was always able to see the Windows files when I mounted that partition on the previous drive).
    The Windows partition does not show up as a valid bootable system in System Prefs --> Startup Disk (naturally, I suppose, since OSX doesn't think there is anything there).
    From the WINDOWS side, Windows still sees the old partition table: 200GB for the "unknown" HFS partition, and then the rest of the space can be devoted to Windows (started as 100GB, but I was able to expand it to use the remaining ~750GB!).  Windows thinks it can have 750GB of space even though I know its partition is only 500GB in size!
    Windows cannot see the OSX HFS partition data using HFSExplorer. It CAN see the HFS partition on the attached backup drive (the drive I use for Time Machine).
    GParted (a partition program on a Linux bootable CD-ROM) shows the same partitions as OSX Disk Utility (2x500GB), and also thinks the Windows NTFS partition is empty (all space reports as "unused").
    Did I mention both OSX and Windows work fine???
    There are, of course, two other partitions on the drive: the first partition is the 200MB one I always see (EFI/GUID portion?), and then between the HFS and NTFS partitions is the 600MB recovery partition (which also shows at option-pressed boot time). OSX, GParted, and Windows see all four partitions, and in the same order. It is just that Windows sees the wrong sizes, and OSX cannot see any data in the Windows partition.
    Surely this is all going to break spectacularly at some point, isn't it? What if I ever did write a file to the Windows side from OSX, or what if OSX starts taking more space than the 200GB Windows thinks is the max for that partition? What if I try to make Windows use more than 500GB because it thinks it has almost 800GB to use? What if I defrag the Windows drive?
    I had no idea a partition table could look this goofy and yet still have everything be bootable and workable. Is there something I can do to get everything in sync? Basically, I am assuming I need to get Windows to do some low-level kung fu in Disk Manager in order to properly get everything lined up with the "right" partitions as reported by both GParted and OSX Disk Utility. But how do I do that?
    By the way, any ideas that totally nuke the drive and start from scratch are completely fine (if it seems like they are doing something different enough that I'd give it a try). I have good backups of both OSX and Windows and have restored them about a half dozen times already as I dealt with the previous failing hard drive and with trying to get dual-boot working again. Not to mention, this iMac is now my secondary machine to the new Mac Mini I got a couple weeks back when I wasn't sure how much more life I was going to get from this 6+ year old iMac.
    Thanks for listening to me ramble about this very odd issue, and a huge THANK YOU in advance to anyone who has ideas to help.
    Thanks,
    sutekh138

    Update:
    I am pretty sure the issue is a simple GPT/MBR discrepancy.
    I installed rEFIt and used its partitioning tool (gptsync, built in) upon bootup. It was able to show the GPT table and the MBR table, but it thinks the second partition of the drive (the Mac OSX bootable partition) is "extended" in the MBR table and says "will not touch this disk."
    However, it does look like an MBR sync should be straightforward, as there are four partitions in the GPT table and four in the MBR (and MBR allows a max of four, AFAIK). I just need gptsync to relax some rules. I found a link to a supposedly newer version of gptsync compiled for OSX, so I will try that later.
    First, I will try Partition Wizard, a free tool I found for the Windows side. It has a "Repair MBR" option that I would have tried last night if I weren't running a new Windows Image Backup in case all of this goes haywire.  *smile*  The PW tool also has an option to change the MBR over to GPT entirely. That might work, but then I am not sure Windows 7 will boot (from what I read, x64 Win7 running on EFI-enabled hardware should work, but who knows).
    Anyway, I will try the following things, in order, until something works, when I get home tonight:
    From Windows, run Partition Wizard and try "Repair MBR".
    From OSX, download recent gptsync and try to run it.
    From Windows, use Partition Wizard to do a full MBR --> GPT conversion.
    Nuke the Windows partition in OSX Disk Utility, expand the HFS partition to take up the whole drive, and then add a Windows-bootable partition via Boot Camp-ish command line commands (diskutil). Because if nothing else works, I have to assume I just created the partitions wrong in the first place such that a Windows restore miraculously works, but the partition weirdness is just a timebomb waiting to happen.
    Finally, if none of the above work, I'll just get things back to the way they now work and wait for the timebomb to (possibly never) go off.  *smile*
    I'll update this thread if I get something figured out, in case anyone else stumbles upon it...
    Thanks,
    sutekh138

  • CTAS using dbms_metadata.get_ddl for Partitioned table

    Hi,
    I would like to create a temporary table from a partitioned table using CTAS. I plan to use the following steps in a PL/SQL procedure:
    1. Use dbms_metadata.get_ddl to get the script
    2. Use the REPLACE function to change the table name to the temp table name
    3. execute the script to get the temp table created.
    SQL> create or replace procedure p1 as
    2 l_clob clob;
    3 str long;
    4 begin
    5 SELECT dbms_metadata.get_ddl('TABLE', 'FACT_TABLE','USER1') into l_clob FROM DUAL;
    6 dbms_output.put_line('CLOB Length:'||dbms_lob.getlength(l_clob));
    7 str:=dbms_lob.substr(l_clob,dbms_lob.getlength(l_clob),1);
    8 dbms_output.put_line('DDL:'||str);
    9 end;
    12 /
    Procedure created.
    SQL> exec p1;
    CLOB Length:73376
    DDL:
    PL/SQL procedure successfully completed.
    I cannot see the DDL at all. Please help.

    Thanks Adam. The following piece of code is supposed to do that, but it's failing because dbms_lob.substr(l_clob,4000,4000*v_intIdx +1) returns chunks that contain newlines, and therefore dbms_sql.parse
    is failing.
    Please advise.
    create table my_metadata(stmt_no number, ddl_stmt clob);
    CREATE OR REPLACE package USER1.genTempTable is
    procedure getDDL;
    procedure createTempTab;
    end;
    CREATE OR REPLACE package body USER1.genTempTable is
    procedure getDDL as
    -- Description: get the DDL from a partitioned table and change the table name
    -- Reference: Q: How Could I Format The Output From Dbms_metadata.Get_ddl Utility? [ID 394143.1]
    l_clob clob := empty_clob();
    str long;
    l_dummy varchar2(25);
    -- dbms_lob does not have any replace function; the following function is a trick to do that
    procedure lob_replace( p_lob in out clob, p_what in varchar2, p_with in varchar2 )as
    n number;
    begin
    n := dbms_lob.instr( p_lob, p_what );
    if ( nvl(n,0) > 0 )
    then
    dbms_lob.copy( p_lob,
    p_lob,
    dbms_lob.getlength(p_lob),
    n+length(p_with),
    n+length(p_what) );
    dbms_lob.write( p_lob, length(p_with), n, p_with );
    if ( length(p_what) > length(p_with) )
    then
    dbms_lob.trim( p_lob,
    dbms_lob.getlength(p_lob)-(length(p_what)-length(p_with)) );
    end if;
    end if;
    end lob_replace;
    begin
    DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'STORAGE',false);
    DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'SEGMENT_ATTRIBUTES',false);
    DBMS_METADATA.SET_TRANSFORM_PARAM (DBMS_METADATA.SESSION_TRANSFORM,'SQLTERMINATOR',true);
    DBMS_METADATA.SET_TRANSFORM_PARAM (DBMS_METADATA.SESSION_TRANSFORM,'SEGMENT_ATTRIBUTES',false);
    execute immediate 'truncate table my_metadata';
    -- Get DDL
    SELECT dbms_metadata.get_ddl('TABLE', 'FACT','USER1') into l_clob FROM DUAL;
    -- Insert the DDL into the metadata table
    insert into my_metadata values(1,l_clob);
    commit;
    -- Change the table name into a temporary table
    select ddl_stmt into l_clob from my_metadata where stmt_no =1 for update;
    lob_replace(l_clob,'"FACT"','"FACT_T"');
    insert into my_metadata values(2,l_clob);
    commit;
    -- execute immediate l_clob; <---- Cannot be executed in 10.2.0.5; supported in 11gR2
    DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'DEFAULT');
    end getDDL;
    -- Procedure to create temporary table
    procedure createTempTab as
    v_intCur pls_integer;
    v_intIdx pls_integer;
    v_intNumRows pls_integer;
    v_vcStmt dbms_sql.varchar2a;
    l_clob clob := empty_clob();
    l_str varchar2(4000);
    l_length number;
    l_loops number;
    begin
    select ddl_stmt into l_clob from my_metadata where stmt_no=2;
    l_length := dbms_lob.getlength(l_clob);
    l_loops := ceil(l_length/4000);
    for v_intIdx in 0..l_loops loop
    l_str:=dbms_lob.substr(l_clob,4000,4000*v_intIdx +1);
    l_str := replace(l_str,chr(10),'');
    l_str := replace(l_str,chr(13),'');
    l_str := replace(l_str,chr(9),'');
    v_vcStmt(v_intIdx) := l_str;
    end loop;
    for v_intIdx in 0..l_loops loop
    dbms_output.put_line(v_vcStmt(v_intIdx));
    end loop;
    v_intCur := dbms_sql.open_cursor;
    dbms_sql.parse(
    c => v_intCur,
    statement => v_vcStmt,
    lb => 0,
    --ub => v_intIdx,
    ub => l_loops,
    lfflg => true,
    language_flag => dbms_sql.native);
    v_intNumRows := dbms_sql.execute(v_intCur);
    dbms_sql.close_cursor(v_intCur);
    end createTempTab;
    end;
    /
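    For completeness, a minimal sketch of the 11gR2 shortcut that the comment in getDDL refers to - on 11gR2 and later, EXECUTE IMMEDIATE accepts a CLOB directly, so no 4000-byte chunking is needed (owner and table names are the ones used above):

    DECLARE
      l_ddl CLOB;
    BEGIN
      l_ddl := DBMS_METADATA.GET_DDL ('TABLE', 'FACT', 'USER1');
      l_ddl := REPLACE (l_ddl, '"FACT"', '"FACT_T"');
      EXECUTE IMMEDIATE l_ddl;
    END;
    /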

  • Deadlock issue in Partitioned Tables

    Hi ALL,
    I am facing an issue of deadlock while inserting data into a partitioned table.
    I get the error "ORA-00600: Deadlock detected". When I look at the trace files, the following line appears in them:
    "Single resource deadlock: blocking enqueue which blocks itself".
    Here is the detail of my test case:
    1. I have a list-partitioned table, with partitioning defined on some business codes.
    2. I have a query that merges data into the partitioned table (actually it compares unique keys between a temporary table and the partitioned table and then issues an insert if the keys do not match; there is no update part).
    3. The temporary table contains transactional data against many business codes.
    4. When calling the above query from multiple (PL/SQL) sessions, I observe that the deadlock occurs when we merge data into the same partition (from different sessions); otherwise it is OK.
    5. Note that all sessions are executed at the same time. Also note that COMMIT is called after each session completes. Each session contains 2-3 more queries after the mentioned merge statement.
    Is there an issue with Oracle merge/insert into the same partition (from different sessions)? What is the locking mechanism for this particular case (partitioned tables)?
    My Oracle version is Oracle 10g (10.2.0.4). Kindly advise.
    Thanks,
    QQ.

    Oracle MERGE statements are slow as they must validate every record before insert.
    If you use array processing with BULK COLLECT and FORALL with the SAVE EXCEPTIONS clause you can avoid most of the overhead. Just collect your rows in an array, issue a FORALL INSERT SAVE EXCEPTIONS and let Oracle handle whatever happens.
    When Oracle is done, and it will be hundreds of times faster than what you are doing now, you can either process or ignore the records in the exceptions array.
    Another solution, more efficient if you can do it, is to just do an INSERT INTO ... SELECT using an error logging table created with DBMS_ERRLOG.
    www.psoug.org/reference/dbms_errlog.html
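    As a rough sketch of the FORALL ... SAVE EXCEPTIONS pattern described above (table names are hypothetical):

    DECLARE
      TYPE t_rows IS TABLE OF target_tab%ROWTYPE;
      l_rows     t_rows;
      dml_errors EXCEPTION;
      PRAGMA EXCEPTION_INIT (dml_errors, -24381);   -- "error(s) in array DML"
    BEGIN
      SELECT * BULK COLLECT INTO l_rows FROM staging_tab;
      BEGIN
        FORALL i IN 1 .. l_rows.COUNT SAVE EXCEPTIONS
          INSERT INTO target_tab VALUES l_rows (i);
      EXCEPTION
        WHEN dml_errors THEN
          FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
            DBMS_OUTPUT.put_line ('Row ' || SQL%BULK_EXCEPTIONS (j).ERROR_INDEX ||
                                  ': '   || SQLERRM (-SQL%BULK_EXCEPTIONS (j).ERROR_CODE));
          END LOOP;
      END;
      COMMIT;
    END;
    /

    The DBMS_ERRLOG alternative would be DBMS_ERRLOG.CREATE_ERROR_LOG on the target table, followed by a plain INSERT ... SELECT with LOG ERRORS REJECT LIMIT UNLIMITED.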

  • Partition Tables - issue

    I have lots of partitioned tables in my schema, and the requirement is to identify the current partition of a table.
    Let me explain this with an example.
    Let's say I have a table T with 24 partitions, starting from Jan-04 to Dec-05. Now, my current partition is Dec-04, i.e. the partition with respect to SYSDATE.
    How can I achieve this by querying the data dictionary?

    120196, I think you are mistaken. There is no such thing as a "current" partition. Oracle offers three types of partitioning, i.e. range partitioning, hash partitioning and composite partitioning. For all three types, you specify the column you want to partition on. Then, when you insert/update/delete rows, Oracle does "partition elimination" (if the column you partitioned on is part of the query), which greatly improves performance. So, the partition you deal with (I guess this is what you mean by current partition) is the same one your row has to deal with. Hope this makes it clearer and gives you a 10,000-foot view. There is lots more to partitioning.
    -Raj Suchak
    [email protected]
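    As a rough illustration of one data dictionary approach (table name is hypothetical; HIGH_VALUE is a LONG holding a date expression, so it has to be evaluated dynamically, and a MAXVALUE partition has to be skipped):

    DECLARE
      l_high  DATE;
      l_found VARCHAR2 (30);
    BEGIN
      FOR p IN (SELECT partition_name, high_value          -- HIGH_VALUE is a LONG
                FROM   user_tab_partitions
                WHERE  table_name = 'T'
                ORDER  BY partition_position) LOOP
        IF p.high_value <> 'MAXVALUE' THEN                 -- skip the catch-all partition
          EXECUTE IMMEDIATE 'SELECT ' || p.high_value || ' FROM dual' INTO l_high;
          IF SYSDATE < l_high THEN
            l_found := p.partition_name;                   -- first upper bound above SYSDATE
            EXIT;
          END IF;
        END IF;
      END LOOP;
      DBMS_OUTPUT.put_line ('Current partition: ' || NVL (l_found, 'none'));
    END;
    /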

  • Materialized view on a Partitioned Table (Data through Exchange Partition)

    Hello,
    We have a scenario to create an MV on a partitioned table which gets its data through an exchange-partition strategy. Obviously, with exchange partition, snapshot logs are not updated and FAST refreshes are not possible. Also, with this partitioned table being 450 million rows, a COMPLETE refresh is not an option for us.
    I would like to know the alternatives for this approach,
    Any suggestions would be appreciated,
    thank you

    From your post it seems that you are trying to create a fast refresh MV (as you are creating an MV log). There are limitations on fast refresh which are documented in the Oracle documentation.
    http://docs.oracle.com/cd/B28359_01/server.111/b28313/basicmv.htm#i1007028
    If you are not planning to do a fast refresh then as already mentioned by Solomon it is a valid approach used in multiple scenarios.
    Thanks,
    Jayadeep
