Reference partition with Trigger
This post is an extended version of my previous post, "Dynamic Reference partition by Trigger", from Feb 24.
I need to create a Dynamic Reference partition with Trigger.
Requirements:
1. It should be a list partition.
2. It should be a reference partition (there are 2 tables with a parent-child relation).
3. It should be dynamic, via a trigger: whenever a new value enters the list, a new partition should be created by the trigger. As I understand it, interval partitioning is dynamic, but it works only for range partitioning.
Put another way, I could manually create both tables with list partitions up front, but when a new value is added to the list the table is supposed to add a new partition. The list values are unknown at creation time. Since the list may grow (in future there will be more than five values, and five values means five partitions), a partition with a DEFAULT clause is also not possible here.
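For comparison, this is roughly what interval partitioning looks like (a hedged sketch; the table and columns are made up for illustration). The key must be a range over a NUMBER or DATE column, which is why it cannot stand in for a list partition:

```sql
-- Hypothetical example: Oracle adds monthly partitions on demand,
-- but only because the key is a RANGE over a DATE column --
-- there is no list equivalent of INTERVAL.
CREATE TABLE orders (
  order_id   NUMBER PRIMARY KEY,
  order_date DATE   NOT NULL
)
PARTITION BY RANGE (order_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(
  PARTITION p0 VALUES LESS THAN (DATE '2013-01-01')
);
```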
I have almost completed this task but am facing some issues. Please find the example I have put together below. (Note that it is just an example, not my actual requirement.)
==============================================================================================================
--TABLE 1
create table customers (
  cust_id number primary key,
  cust_name varchar2(200),
  rating varchar2(1) not null
)
partition by list (rating)
(
  partition pA values ('A'),
  partition pB values ('B')
);
--TABLE 2
create table sales (
  sales_id number primary key,
  cust_id number not null,
  sales_amt number,
  constraint fk_sales_01
    foreign key (cust_id)
    references customers
)
partition by reference (fk_sales_01);
--TRIGGER
CREATE OR REPLACE TRIGGER CUST_INSERT
  BEFORE INSERT ON CUSTOMERS
  FOR EACH ROW
DECLARE
  V_PARTITION_COUNT NUMBER;
  V_PART_NAME       VARCHAR2(100) := :NEW.RATING;
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  SELECT COUNT (PARTITION_NAME)
    INTO V_PARTITION_COUNT
    FROM USER_TAB_PARTITIONS
   WHERE TABLE_NAME = 'CUSTOMERS'
     AND PARTITION_NAME = V_PART_NAME;

  IF V_PARTITION_COUNT = 0 THEN
    EXECUTE IMMEDIATE 'ALTER TABLE CUSTOMERS ADD PARTITION ' || V_PART_NAME
                   || ' VALUES (''' || V_PART_NAME || ''')';
  END IF;
END;
/
--INSERT TO CUSTOMER
insert into customers values (111, 'OOO', 'C'); -- 'A' and 'B' are already in the customers list from the create script, so I am inserting a new row with 'C'
==============================================================================================================
When I insert the new value 'C' into the customers table, it is supposed to create a new partition along with the new row.
But I get an error: "ORA-14400: inserted partition key does not map to any partition", as if the partition does not exist.
Yet if I execute the same insert statement again, it inserts the row.
That means that even though it is a BEFORE INSERT trigger, the insert is attempted before the partition creation.
Since it is a BEFORE INSERT trigger, my expectation was:
a) Create the partition first
b) Insert the record second
But the actual behavior is:
a) Try the insert first, which fails
b) Create the partition second
That is why the record is inserted when I try a second time.
Can anyone help me with this? Thanks in advance.
Shijo
You can't do this with a trigger. And you really, really don't want to.
As you've discovered, Oracle performs a number of checks to determine whether an insert is valid before any triggers are executed. It is going to discover that there is no appropriate partition before your trigger runs. Even if you somehow worked around that problem, it's likely that you'd end up in a rather problematic position from a lock standpoint since you'd be trying to do DDL on the object in the autonomous transaction while doing an insert likely from a stored procedure that would itself be made invalid because of the DDL in another-- that's quite likely to cause an error anyway. And that's before we get into the effects of doing DDL on other sessions running at the same time.
Why do you believe that you need to add partitions dynamically? If you are doing ETL into a data warehouse, it makes sense to add any partitions you need at the beginning of the load before you start inserting the data, not in a trigger. If this is some sort of OLTP application, it doesn't make sense to do DDL while the application is running. I could see potentially having a default partition and then having a nightly job that does a split partition to add whatever list partitions you need but that would strike me as a corner case at best. If you really don't know what the possible values are and users are constantly creating new ones, list partitioning seems like a poor choice to me-- wouldn't a hash partition make more sense?
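To illustrate the suggestion of doing the DDL up front in the load rather than in a trigger, here is a hedged sketch (procedure and partition names are hypothetical, not from the original post): add any missing list partitions first, then do the inserts as a normal step.

```sql
-- Hypothetical load procedure: the ALTER TABLE runs (and commits,
-- since DDL commits) BEFORE any rows are inserted, so ORA-14400
-- cannot occur for the new value.
CREATE OR REPLACE PROCEDURE load_customers (p_rating IN VARCHAR2) AS
  v_count NUMBER;
BEGIN
  SELECT COUNT(*)
    INTO v_count
    FROM user_tab_partitions
   WHERE table_name = 'CUSTOMERS'
     AND partition_name = 'P' || p_rating;

  IF v_count = 0 THEN
    EXECUTE IMMEDIATE
      'ALTER TABLE customers ADD PARTITION p' || p_rating ||
      ' VALUES (''' || p_rating || ''')';
  END IF;

  -- ... now perform the actual inserts for this rating ...
END;
```

The point is the ordering: the partition exists before the first INSERT is parsed, so none of the trigger-timing problems described above arise.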
Justin
Similar Messages
-
Hi everybody,
I have a little problem with distributed reference partition.
We work with Forte 30L2 and several connected environments, each one corresponding to a node (a server).
1. ENV-DEVELOP: environment that contains repositories in which developers
are coding applications.
2. ENV-TEST: environment that contains repositories where some users and
help-desk test our applications.
3. ENV-DISTRIB: Super-Environment that contains repositories from where I
distribute, then deploy the versions for each connected env.
4. ENV-PRODUCTION: environment which contains some SOs that communicate with our mainframes.
5-xx. ENVxx: our remote environments, which use our distributed applications.
The normal flow of our development is:
The developers work in ENV-DEVELOP
Once done, I export pex or cex into the ENV-TEST
Once ok, I export pex or cex into ENV-DISTRIB from where I distribute the
final applications into ENVxx.
Obviously, you can imagine that the versions differ across this consecutive flow.
The only way we can communicate with our mainframe is through ENV-PRODUCTION, where I have sockets available only from that server. On this server I have one SO which comes from ENV-DISTRIB (the only way I can distribute the app).
Let's say that this reference partition plan.SO's name is
MF_central.MF_online(CL1).
Now, my problem resides when I have to update and test this reference
partition with the different versions of our client applications (different
environments and repositories).
What I mean is :
Install a new version of MF_central.MF_online(CL2) on ENV-PRODUCTION (from
ENV-DISTRIB)
The source of this plan is the same in ENV-DEVELOP,ENV-TEST and
ENV-DISTRIB.
But I cannot communicate... The scope of this class is not the same.
The only way to run the client app properly is when I distribute the client
app. in the same environment from which I've distributed the ref.part. The
goal is to join this with all our versions of our application.
Thank you very much for your knowledge!
-
How to use Oracle partitioning with JPA @OneToOne reference?
Hi!
A little bit late in the project we have realized that we need to use Oracle partitioning both for performance and admin of the data. (Partitioning by range (month) and after a year we will move the oldest month of data to an archive db)
We have an object model with a main/root entity "Trans" with @OneToMany and @OneToOne relationships.
How do we use Oracle partitioning on the @OneToOne relationships?
(We'd rather not change the model as we already have millions of rows in the db.)
On the main entity "Trans" we use: partition by range (month) on a date column.
And on all @OneToMany we use: partition by reference (as they have a primary-foreign key relationship).
But for the @OneToOne, the key of the referenced object is placed in the main/source object, as in the example below:
@Entity
public class Employee {
    @Id
    @Column(name="EMP_ID")
    private long id;

    @OneToOne(fetch=FetchType.LAZY)
    @JoinColumn(name="ADDRESS_ID")
    private Address address;
    // ...
}
EMPLOYEE (table)
EMP_ID FIRSTNAME LASTNAME SALARY ADDRESS_ID
1 Bob Way 50000 6
2 Sarah Smith 60000 7
ADDRESS (table)
ADDRESS_ID STREET CITY PROVINCE COUNTRY P_CODE
6 17 Bank St Ottawa ON Canada K2H7Z5
7 22 Main St Toronto ON Canada L5H2D5
From the Oracle documentation: "Reference partitioning allows the partitioning of two tables related to one another by referential constraints. The partitioning key is resolved through an existing parent-child relationship, enforced by enabled and active primary key and foreign key constraints."
How can we use "partition by reference" on @OneToOne relationships, or are there other solutions?
Thanks for any advice.
/Mats
Crosspost! How to use Oracle partitioning with JPA @OneToOne reference?
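For reference, Oracle's reference partitioning requires the foreign key to live in the table being partitioned. One sketch of what that could look like here, under the assumption that the join column can be moved to the ADDRESS side (the names below are from the example above, the constraint name is hypothetical):

```sql
-- Hypothetical: ADDRESS now carries the FK, so it can be
-- reference-partitioned off EMPLOYEE's partitioning scheme.
CREATE TABLE address (
  address_id NUMBER PRIMARY KEY,
  emp_id     NUMBER NOT NULL UNIQUE,   -- UNIQUE preserves the one-to-one
  street     VARCHAR2(100),
  CONSTRAINT fk_address_emp
    FOREIGN KEY (emp_id) REFERENCES employee
)
PARTITION BY REFERENCE (fk_address_emp);
```

On the JPA side this would mean the owning side moves to Address (with mappedBy on Employee), which conflicts with the poster's wish not to change the model; the sketch only illustrates the constraint Oracle imposes, not a recommendation.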
-
Reference partitioning and partition pruning
Hi All,
I am on v 11.2.0.3.
I have a pair of typical parent-child tables. The child table is growing like hell, hence we want to partition it, which we will later use for deleting/dropping old data.
There is no partitioning key in the child table that I can use to relate the data to the time it was created. So I thought I could use the timestamp from the parent table to partition the parent table and reference-partition the child.
I am more concerned about the child table (or the queries running on it) in terms of performance. ITEM_LIST_ID from the child table is extensively used in queries to access data from the child table.
How will partition pruning work when the child table is queried on the foreign key? Will it go to the parent table every time, find the partition there, and then resolve the partition for the child table?
The setup is given in the scripts below. Will it cause a lot of locking (to resolve partitions)? Or am I worrying about nothing?
Here are the scripts
CREATE TABLE ITEM_LISTS /* Parent table, tens of thousands of records */
(
  ITEM_LIST_ID NUMBER(10) NOT NULL PRIMARY KEY, /* Global index on partitioned table !!! */
  LIST_NAME VARCHAR2(500) NOT NULL,
  FIRST_INSERTED TIMESTAMP(6) NOT NULL
)
PARTITION BY RANGE ( FIRST_INSERTED )
(
  partition p0 values less than ( to_date('20130101','YYYYMMDD') ),
  partition p201301 values less than ( to_date('20130201','YYYYMMDD') ),
  partition p201302 values less than ( to_date('20130301','YYYYMMDD') ),
  partition p201303 values less than ( to_date('20130401','YYYYMMDD') ),
  partition p201304 values less than ( to_date('20130501','YYYYMMDD') ),
  partition p201305 values less than ( to_date('20130601','YYYYMMDD') )
);
CREATE INDEX ITEM_LISTS_IDX1 ON ITEM_LISTS ( LIST_NAME ) LOCAL ;
CREATE TABLE ITEM_LIST_DETAILS /* Child table, millions of records */
(
  ITEM_ID NUMBER(10) NOT NULL,
  ITEM_LIST_ID NUMBER(10) NOT NULL, /* Always used in WHERE clause by lots of big queries */
  CODE VARCHAR2(30) NOT NULL,
  ALT_CODE VARCHAR2(30) NOT NULL,
  CONSTRAINT ITEM_LIST_DETAILS_FK
    FOREIGN KEY ( ITEM_LIST_ID ) REFERENCES ITEM_LISTS
)
PARTITION BY REFERENCE ( ITEM_LIST_DETAILS_FK );
CREATE INDEX ITEM_LIST_DETAILS_IDX1 ON ITEM_LIST_DETAILS (ITEM_ID) LOCAL;
CREATE INDEX ITEM_LIST_DETAILS_IDX2 ON ITEM_LIST_DETAILS (ITEM_LIST_ID, CODE) LOCAL;
Any thoughts / opinions / corrections?
Thanks in advance
To check how partition pruning works here, I inserted some data into these tables. I populated ITEM_LISTS (parent) from DBA_OBJECTS (object_id => item_list_id, object_name => list_name, first_inserted => manually created). I also created corresponding child data in ITEM_LIST_DETAILS, so that for every item_list_id in the parent about 5000 records go into the child table.
Looking at the queries and plans below, my question is: what exactly do the operations "PARTITION REFERENCE SINGLE" and "PARTITION REFERENCE ITERATOR" imply?
I searched for "PARTITION REFERENCE ITERATOR" in the Oracle 11.2 documentation, and it says "No exact match found"!
/* Direct query on child table */
SQL> select count(*) from item_list_details where item_list_id = 6323 ;
COUNT(*)
5000
1 row selected.
Execution Plan
Plan hash value: 2798904155
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 1 | 5 | 22 (0)| 00:00:01 | | |
| 1 | SORT AGGREGATE | | 1 | 5 | | | | |
| 2 | PARTITION REFERENCE SINGLE| | 5000 | 25000 | 22 (0)| 00:00:01 | KEY | KEY |
|* 3 | INDEX RANGE SCAN | ITEM_LIST_DETAILS_IDX2 | 5000 | 25000 | 22 (0)| 00:00:01 | KEY | KEY |
Predicate Information (identified by operation id):
3 - access("ITEM_LIST_ID"=6323)
SQL> select * from temp1; /* Dummy table to try out some joins */
OBJECT_ID OBJECT_NAME
6598 WRH$_INTERCONNECT_PINGS
1 row selected.
/* Query on child table, joining with some other table */
SQL> select count(*)
2 from temp1 d, ITEM_LIST_DETAILS i1
3 where d.object_id = i1.item_list_id
4 and d.object_name = 'WRH$_INTERCONNECT_PINGS';
COUNT(*)
5000
1 row selected.
Execution Plan
Plan hash value: 2288153583
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 1 | 70 | 24 (0)| 00:00:01 | | |
| 1 | SORT AGGREGATE | | 1 | 70 | | | | |
| 2 | NESTED LOOPS | | 5000 | 341K| 24 (0)| 00:00:01 | | |
|* 3 | TABLE ACCESS FULL | TEMP1 | 1 | 65 | 3 (0)| 00:00:01 | | |
| 4 | PARTITION REFERENCE ITERATOR| | 5000 | 25000 | 21 (0)| 00:00:01 | KEY | KEY |
|* 5 | INDEX RANGE SCAN | ITEM_LIST_DETAILS_IDX2 | 5000 | 25000 | 21 (0)| 00:00:01 | KEY | KEY |
Predicate Information (identified by operation id):
3 - filter("D"."OBJECT_NAME"='WRH$_INTERCONNECT_PINGS')
5 - access("D"."OBJECT_ID"="I1"."ITEM_LIST_ID") -
If I have tables
A, B, C where I have 1->M->M
if I have reference partitioning, everything ends up in the same partition based on A's partitioning definition.
When I query, I only specify the partition key to qualify C. When joining from A to B to C, is the optimizer smart enough to know that everything is in the same partition, so that even if I do LIKE '%some state about an A' it results in only a full partition scan rather than a full table scan, because the query has qualified C with the partition key?
Is this a classic case where reference partitioning is desired because you're qualifying your query only at the lowest level, but without this form of partitioning you risk full table scans at the highest level because the query often uses wildcards and works top down instead of bottom up?
The alternative approach was to push the state '%some state about an A' down into C's table: instead of qualifying A with wildcards, you'd qualify C with wildcards on the same column, so that the optimizer works bottom up instead of top down.
If you just have range partitions on all three tables, why does the optimizer choose a full table scan on A when you use LIKE '%some state about an A', even when you're joining to a C through a B and specifying the partition key for C in the query?
Which Oracle SQL hint forces it to follow the partition key rather than evaluating the query top down by first table-scanning A? Instead, you want it to figure out from the C's what the B's are, then from the B's narrow down to the set of A's, and then apply the wildcard pattern to all the rowids for A.
ie. bottom up instead of top down.
The idea then is that you avoid full tablescans on A.
Top down results in a full table scan
Bottom up results in a full partition scan only.
Edited by: steffi on Jan 19, 2011 9:08 PM
It sounds like you've found that it is not working to push the predicate on the partition key up to A, and therefore prune partitions on A. Do you have an explain plan to demonstrate that? I wouldn't be very surprised (though I would be disappointed) to find that it was so, as I think the optimizer might reasonably assume that the partition-pruning predicate could (should?) be placed on the parent table as easily as on the child.
-
Reference partitioning - thoughts
Hi,
At the moment we use range-hash partitioning of a large dimension table (dimensional model warehouse) with 2 levels, range partitioned on columns only available at the bottom level of the hierarchy: date and issue_id.
The result is a partition with a null value; I assume we would get a null partition in the large fact table if it were partitioned by reference to the large dimension.
The large fact table is similarly partitioned (date range-hash) with local bitmap indexes.
It was suggested that we would get automatic partition-wise joins if we used reference partitioning.
I would have thought we would get that with range-hash on both.
Are there any disadvantages to reference partitioning?
I know we can't use range-interval partitioning.
Thanks
>
At the moment, the large dimension table and large fact table have the same partitioning strategy but are partitioned independently (range-hash);
the range column is a date datatype and the hash column is the surrogate key
>
As long as the 'hash column' is the SAME key value in both tables there is no problem. Obviously you can't hash on one column/value in one table and a different one in the other table.
>
With regard to null values, the dimension table has 3 levels in it (part of a dimensional model data warehouse), i.e. the date on which the table is partitioned exists only at the lowest level of the dimension.
>
High or low doesn't matter and, as you ask in your other thread (Order of columsn in table - how important from performance perspective the column order generally doesn't matter.
>
By default in a dimensional model data warehouse, this attribute is not populated at the higher levels, and therefore has a default null value in the dimension table for such records.
>
Still not clear what you mean by this. The columns must be populated at some point or they wouldn't need to be in the table. Can you provide a small sample of data that shows what you mean?
>
The problem the performance team are attempting to solve is as follows:
the two tables are joined on the sub-partition key; they have tried joining the two tables on the entire partition key, but then complained that they don't get star transformation.
>
Which means that team isn't trying to 'solve' a problem at all. They are just trying to mechanically achieve a 'star transformation'.
A full partition-wise join REQUIRES that the partitioning be on the join columns or you need to use reference partitioning. See the doc I provided the link for earlier:
>
Full Partition-Wise Joins
A full partition-wise join divides a large join into smaller joins between a pair of partitions from the two joined tables. To use this feature, you must equipartition both tables on their join keys, or use reference partitioning.
>
They believe that by partitioning by reference as opposed to indepently they will get a partition-wise join automatically.
>
They may. But you don't need to partition by reference to get partition-wise joins. And you don't need to get 'star transformation' to get the best performance.
Static partition pruning will occur, if possible, whether a star transformation is done or not. It is dynamic pruning that is done AFTER a star transform. Again, you need to review all of the relevant sections of that doc. They cover most of this, with example code and example execution plans.
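For what it's worth, the equipartitioning requirement mentioned above could be sketched as follows (a minimal example with hypothetical table names, not the poster's schema): both tables hash-partitioned on the same join column with the same partition count, so the join can proceed partition-to-partition.

```sql
-- Hypothetical sketch: DIM and FACT are equipartitioned on the join
-- key (cust_id), which is what a full partition-wise join needs when
-- reference partitioning is not used.
CREATE TABLE dim_customer (
  cust_id   NUMBER PRIMARY KEY,
  cust_name VARCHAR2(200)
)
PARTITION BY HASH (cust_id) PARTITIONS 8;

CREATE TABLE fact_sales (
  sale_id  NUMBER,
  cust_id  NUMBER NOT NULL,
  amount   NUMBER
)
PARTITION BY HASH (cust_id) PARTITIONS 8;
```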
>
Dynamic Pruning with Star Transformation
Statements that get transformed by the database using the star transformation result in dynamic pruning.
>
Also, there are some requirements before star transformation can even be considered. The main one is that it must be ENABLED; it is NOT enabled by default. Has your team enabled the use of the star transform?
The database data warehousing guide discusses star queries and how to tune them:
http://docs.oracle.com/cd/E11882_01/server.112/e25554/schemas.htm#CIHFGCEJ
>
Tuning Star Queries
To get the best possible performance for star queries, it is important to follow some basic guidelines:
A bitmap index should be built on each of the foreign key columns of the fact table or tables.
The initialization parameter STAR_TRANSFORMATION_ENABLED should be set to TRUE. This enables an important optimizer feature for star-queries. It is set to FALSE by default for backward-compatibility.
When a data warehouse satisfies these conditions, the majority of the star queries running in the data warehouse uses a query execution strategy known as the star transformation. The star transformation provides very efficient query performance for star queries.
>
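The two guidelines quoted above boil down to something like this (a sketch; the index and column names are hypothetical):

```sql
-- STAR_TRANSFORMATION_ENABLED is FALSE by default; the optimizer will
-- not even consider the transform until it is enabled.
ALTER SESSION SET star_transformation_enabled = TRUE;

-- One bitmap index per foreign key column of the fact table
-- (LOCAL, since the fact table is partitioned).
CREATE BITMAP INDEX fact_sales_cust_bix ON fact_sales (cust_id) LOCAL;
CREATE BITMAP INDEX fact_sales_date_bix ON fact_sales (date_id) LOCAL;
```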
And that doc section ALSO has example code and an example execution plan that shows that the star transform is being use.
That doc section also has some important info about how Oracle chooses to use a star transform, and a large list of restrictions where the transform is NOT supported.
>
How Oracle Chooses to Use Star Transformation
The optimizer generates and saves the best plan it can produce without the transformation. If the transformation is enabled, the optimizer then tries to apply it to the query and, if applicable, generates the best plan using the transformed query. Based on a comparison of the cost estimates between the best plans for the two versions of the query, the optimizer then decides whether to use the best plan for the transformed or untransformed version.
If the query requires accessing a large percentage of the rows in the fact table, it might be better to use a full table scan and not use the transformations. However, if the constraining predicates on the dimension tables are sufficiently selective that only a small portion of the fact table must be retrieved, the plan based on the transformation will probably be superior.
Note that the optimizer generates a subquery for a dimension table only if it decides that it is reasonable to do so based on a number of criteria. There is no guarantee that subqueries will be generated for all dimension tables. The optimizer may also decide, based on the properties of the tables and the query, that the transformation does not merit being applied to a particular query. In this case the best regular plan will be used.
Star Transformation Restrictions
Star transformation is not supported for tables with any of the following characteristics:
>
Re reference partitioning
>
Also this is a data warehouse star model and mentioned to us that reference partitioning not great with local indexes - the large fact table has several local bitmpa indexes.
Any thoughts on reference partitioning negatively impacting performance in this way compared to standalone partitioned table.
>
Reference partitioning is for those situations where your child table does NOT have a column that the parent table is being partitioned on. That is NOT your use case. Don't use reference partitioning unless your use case is appropriate.
I suggest that you and your team thoroughly review all of the relevant sections of both the database data warehousing guide and the VLDB and partitioning guide.
Then create a SIMPLE data model that only includes your partitioning keys and not all of the other columns. Experiment with that simple model with a small amount of data and run the traces and execution plans until you get the behaviour you think you are wanting.
Then scale it up and test it. You cannot design it all ahead of time and expect it to work the way you want.
You need to use an iterative approach. That starts by collecting all the relevant information about your data: how much data, how is it organized, how is it updated (batch or online), how is it queried. You already mention using hash subpartitioning but haven't posted ANYTHING that indicates you even need to use hash. So why has that decision already been made when you haven't even gotten past the basics yet? -
Will JDeveloper support INTERVAL and REFERENCES partitioning methods soon?
JDeveloper is a fantastic tool, and I'm very impressed with its data modeling capabilities as they are quite robust. However, currently its support for Table Partitioning is limited to the feature level of Oracle 10g, as it does not support 11g INTERVAL or REFERENCES partitioning methods, nor the Composite List partitioning methods List-Range, List-Hash, and List-List.
Any idea when JDeveloper will support these Partitioning enhancements available in Oracle Database 11g? If it's in the works, I'd be happy to help beta test it!
Best regards,
Terry
Hi Terry,
I love it that you are working with the data modeling and would love to hear how you are using it. Please email me direct with more info
We are working to get composite partitions (11g) into a future release but I don't have any timeframe for you at the moment.
rgds
Susan Duncan
Product Manager
susan.duncan@oracle -
I am reading the Reference partitioning in 11g, it says "The child table is partitioned using the same partitioning key as the parent table without having to duplicate the key columns."
Does this mean the key column value is also not duplicated on disk?
It means that you, the user, do not need to duplicate the parent table's partitioning key columns in the child table.
Did you review the examples provided in the 'Reference Partitioning' section of the VLDB and Partitioning doc?
http://docs.oracle.com/cd/B28359_01/server.111/b32024/partition.htm#sthref60
Reference partitioning allows the partitioning of two tables related to one another by referential constraints. The partitioning key is resolved through an existing parent-child relationship, enforced by enabled and active primary key and foreign key constraints.
The benefit of this extension is that tables with a parent-child relationship can be logically equi-partitioned by inheriting the partitioning key from the parent table without duplicating the key columns. The logical dependency will also automatically cascade partition maintenance operations, thus making application development easier and less error-prone.
Immediately following that description is an example. -
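A minimal sketch of the point being made (table and constraint names are hypothetical, in the spirit of the doc's example): the child table carries no copy of the parent's partitioning column, yet ends up with matching partitions.

```sql
-- Parent is partitioned on order_date; the child has NO order_date
-- column at all -- the partitioning key is inherited through the FK.
CREATE TABLE orders (
  order_id   NUMBER PRIMARY KEY,
  order_date DATE NOT NULL
)
PARTITION BY RANGE (order_date)
( PARTITION p2013 VALUES LESS THAN (DATE '2014-01-01') );

CREATE TABLE order_items (
  item_id  NUMBER,
  order_id NUMBER NOT NULL,
  CONSTRAINT fk_items_orders
    FOREIGN KEY (order_id) REFERENCES orders
)
PARTITION BY REFERENCE (fk_items_orders);
-- order_items gets a partition corresponding to p2013; the DATE value
-- itself is stored only in ORDERS.
```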
Problem with trigger and entity in JHeadstart, JBO-25019
Hi to all,
I am using JDeveloper 10.1.2 and developing an application using ADF Business Components and JheadStart 10.1.2.27
I have a problem with a trigger and an entity in JHeadstart.
I have 3 entity and 3 views
DsitTelephoneView based on DsitTelephone entity based on DSIT_TELEPHONE database table.
TelUoView based on TelUo entity based on TEL_UO database table.
NewAnnuaireView based on NewAnnuaire entity based on NEW_ANNUAIRE database view.
I am using JHS to create :
A JHS table-form based on DsitTelephoneView
A JHS table based on TelUoView
A JHS table based on NewAnnuaireView
LIB_POSTE is a :
DSIT_TELEPHONE column
TEL_UO column
NEW_ANNUAIRE column
NEW_ANNUAIRE database view is built from DSIT_TELEPHONE database table.
Lib_poste is an updatable attribute in the TelUo, DsitTelephone, and NewAnnuaire entities.
Lib_poste is updated in the JHS table based on TelUoView.
I added a trigger on my database schema "IAN" to update LIB_POSTE in the DSIT_TELEPHONE database table:
CREATE OR REPLACE TRIGGER "IAN".TEL_UO_UPDATE_LIB_POSTE
AFTER INSERT OR UPDATE OF lib_poste ON IAN.TEL_UO
FOR EACH ROW
BEGIN
  UPDATE DSIT_TELEPHONE t
     SET t.lib_poste = :new.lib_poste
   WHERE t.id_tel = :new.id_tel;
END;
When I change the lib_poste with the application :
- the lib_poste in the DSIT_TELEPHONE database table is correctly updated by the trigger.
- but in the JHS table-form based on DsitTelephoneView the lib_poste is not updated. If I do a quicksearch, it is updated.
- in the JHS table based on NewAnnuaireView the lib_poste is not updated. If I do a quicksearch, I get an error:
oracle.jbo.RowAlreadyDeletedException: JBO-25019: The row of entity of the key oracle.jbo. Key [null 25588] is not found in NewAnnuaire.
25588 is the primary key of the row in NEW_ANNUAIRE whose lib_poste was updated by the trigger.
It is as if it had lost the link with the row in the entity.
Could you help me please ?
Regards
Laurent
The following example should help.
SQL> create sequence workorders_seq
2 start with 1
3 increment by 1
4 nocycle
5 nocache;
Sequence created.
SQL> create table workorders(workorder_id number,
2 description varchar2(30),
3 created_date date default sysdate);
Table created.
SQL> CREATE OR REPLACE TRIGGER TIMESTAMP_CREATED
2 BEFORE INSERT ON workorders
3 FOR EACH ROW
4 BEGIN
5 SELECT workorders_seq.nextval
6 INTO :new.workorder_id
7 FROM dual;
8 END;
9 /
Trigger created.
SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';
Session altered.
SQL> insert into workorders(description) values('test1');
1 row created.
SQL> insert into workorders(description) values('test2');
1 row created.
SQL> select * from workorders;
WORKORDER_ID DESCRIPTION CREATED_DATE
1 test1 30-NOV-2004 15:30:34
2 test2 30-NOV-2004 15:30:42
2 rows selected. -
I've lost the use of AppleWorks by upgrading to 10.9.2.
Is it possible to partition the internal hard drive of my MacBook Pro and install an older Mac OS (10.6.8) on the second partition with OS 10.9.2 on the other? I'd like to be able to boot to the older OS when I need Appleworks and few other applications that aren't available on OS 10.9.2.
Any suggestions?
Thank you for your help.
Hello again, WZZZ,
Here's an update. I was successful in creating two partitions on my internal drive, and in installing OS 10.6.6 on the second partition, as per your guidence. I now have it up to 10.6.8 with all the security updates and AppleWorks. A great thing.
Some thoughts:
• The partitioning had one hitch; it failed at first. But once I "repaired" the disc with Disc Utility the partitioning went thru.
• The partitioning took a long time in 'resizing the partition.' A few hours I think it was. Lots of progress bar watching.
• If I had it to do again, I'd size the two partitions differently. My original data was occupying about 230 Gb of the 320 Gb disc. I made the new partitions share the space, about 230 and 75Gb. That left very little available space for the main disc. I ought to have put some breathing room in there. As it is, it's an incentive to clean up all those files, especially all those iTunes files. I now have about 10% of available space there and mean to continue deleting.
So, all in all a good project that got me where I wanted to go. Thank you for your help.
Appreciatively,
wallah -
Can't boot into Windows after splitting Mac partition with Disk Utility
Hi everyone,
I installed Windows 8 with BOOTCAMP, creating a large 871 GB partition with a smaller 127 GB partition for Windows. After installing Windows, I then went back to Disk Utility to shrink the Mac partition and add two new partitions, one for storage and another blank, hoping to use it to install Linux into one day.
I did that, and now Windows doesn't boot. A Windows blue screen tells me I need to use the Windows DVD to repair it.
I tried to use Disk Utility to delete the two new partitions I made and grow the Mac partition back to the maximum size of 871 GB, but it doesn't let me do this. The Disk Utility log doesn't report an error and thinks it worked, but the partition stays the same size. However, if I make the partition a little smaller, like 870 GB, then it works. I'm wondering if the Recovery partition is hiding there and preventing me from fully expanding the Mac partition.
What can I do?
Here's what I get if I type in sudo gpt -r -vv show disk0:
Code:
gpt show: disk0: mediasize=1000204886016; sectorsize=512; blocks=1953525168
gpt show: disk0: Suspicious MBR at sector 0
gpt show: disk0: Pri GPT at sector 1
gpt show: disk0: Sec GPT at sector 1953525167
      start       size  index  contents
          0          1         MBR
          1          1         Pri GPT header
          2         32         Pri GPT table
         34          6
         40     409600      1  GPT part - C12A7328-F81F-11D2-BA4B-00A0C93EC93B
     409640 1701278816      2  GPT part - 48465300-0000-11AA-AA11-00306543ECAC
 1701688456    1269536      3  GPT part - 426F6F74-0000-11AA-AA11-00306543ECAC
 1702957992    1846360
 1704804352  248002560      4  GPT part - EBD0A0A2-B9E5-4433-87C0-68B6B72699C7
 1952806912     718223
 1953525135         32         Sec GPT table
 1953525167          1         Sec GPT header
Here's what I get if I type sudo fdisk /dev/rdisk0:
Code:
Disk: /dev/rdisk0   geometry: 121601/255/63 [1953525168 sectors]
Signature: 0xAA55
         Starting       Ending
 #: id  cyl  hd sec -  cyl  hd sec [     start -       size]
------------------------------------------------------------------------
 1: EE 1023 254  63 - 1023 254  63 [         1 -     409639] <Unknown ID>
 2: AF 1023 254  63 - 1023 254  63 [    409640 - 1701278816] HFS+
 3: AB 1023 254  63 - 1023 254  63 [1701688456 -    1269536] Darwin Boot
*4: 07 1023 254  63 - 1023 254  63 [1704804352 -  248002560] HPFS/QNX/AUX
Here's what I get with gdisk:
Code:
Disk /dev/rdisk0: 1953525168 sectors, 931.5 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 6075110F-7CEF-4604-85EE-6231B850E2AE
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 1953525134
Partitions will be aligned on 8-sector boundaries
Total free space is 2564589 sectors (1.2 GiB)

Number  Start (sector)  End (sector)  Size       Code  Name
   1              40        409639    200.0 MiB  EF00  EFI System Partition
   2          409640    1701688455    811.2 GiB  AF00  1 TB APPLE HDD HTS54101
   3      1701688456    1702957991    619.9 MiB  AB00  Recovery HD
   4      1704804352    1952806911    118.3 GiB  0700  BOOTCAMP
Kappy,
I can boot into OS X just fine, so if possible I'd like to avoid reformatting everything. If you know of a way I can add more than two partitions with OS X and Boot Camp, please let me know. As far as I know, rEFInd can recognize more than two partitions, but I don't think it can edit partitions on a GPT / hybrid MBR system. Or am I wrong? -
Where, on a Lion install disc made from the installer, can I find a bootable file with disk utilities?
I downloaded lion a few hours ago and installed.
I have 4 large hard drives (1.5 TB) arranged in 8 partitions to allow me to work on a varied group of projects. I have some projects that need to be worked on with software running under Tiger, some with software that runs under Leopard, some with Snow Leopard, and now Lion. I own multiple copies of software and multiple-user-pack system install discs.
On a regular basis, I back up chunks of work on projects to external discs, and special backup areas on one of my drives. I don't like using time machine because I am running many operating environments on my mac pro and I don't ever want to be tied to one operating environment for important functionality, and I want to maximize the open space on my hard drives.
Several partitions involve large video files. I am working on them in various versions of final cut, premiere and imovie. Because I have to use the same software versions my different clients are running, I don't want to move all the files " up" to a modern version. It would be professional suicide to stop accommodating my various clients. I say this to try to head off being told to make my clients upgrade. There are too many different clients and they are not going to replace all their equipment.
On these volumes with video files, I often fill them up and copy off what I need to back up before I erase and do a clean install. I also run VMware Fusion and Windows XP. I absolutely do not want a Recovery HD partition left on the drive when I erase, or influencing the other partitions using different OS versions on that same drive.
Is there a bootable disc image on the installer disk I just made? Can I use the disk utilities to do a low-level erase on the Lion volume that will remove the recovery partition?
Can I go back to booting from Snow Leopard and erasing the Recovery partition with the drive partition that way? Will the Snow Leopard utility take out the Lion HD recovery partition?
I am used to erasing my drives and rebuilding my machine, and I believe it is the right way to use my multiple drives in my workflow. Now that I've got this invisible recovery partition, can you help me remove it and create a bootable disc that includes disk utilities?

I assume the unix pdisk command will show you what is going on.
You may not have a big worry. There have always been a lot of hidden partitions. Disk Utility under 10.4.11 reports that this drive has three (mine now reports four) partitions, when pdisk reports that there are 15.
Macintosh-HD -> Applications -> Utilities -> Terminal
Type sudo pdisk -l and press return when done. (The -l option is a lowercase L.)
The sudo command will ask for your administrator password. No characters will appear while you type the password; press return when done typing. sudo stands for "super user do". It's just like root. Be careful.
mac $ sudo pdisk -l
Password:
Partition map (with 512 byte blocks) on '/dev/rdisk0'
#: type name length base ( size )
1: Apple_partition_map Apple 63 @ 1
2: Apple_Driver43*Macintosh 56 @ 64
3: Apple_Driver43*Macintosh 56 @ 120
4: Apple_Driver_ATA*Macintosh 56 @ 176
5: Apple_Driver_ATA*Macintosh 56 @ 232
6: Apple_FWDriver Macintosh 512 @ 288
7: Apple_Driver_IOKit Macintosh 512 @ 800
8: Apple_Patches Patch Partition 512 @ 1312
9: Apple_Bootstrap untitled 1954 @ 149319048
10: Apple_HFS Apple_HFS_Untitled_1 2254440 @ 263968 ( 1.1G)
11: Apple_UNIX_SVR2 untitled 6617188 @ 149321002 ( 3.2G)
12: Apple_HFS Apple_HFS_Untitled_2 146538496 @ 2780552 ( 69.9G)
13: Apple_UNIX_SVR2 swap 363298 @ 155938190 (177.4M)
14: Apple_Free Extra 262144 @ 1824 (128.0M)
15: Apple_Free Extra 262144 @ 2518408 (128.0M)
Device block size=512, Number of Blocks=156301488 (74.5G)
DeviceType=0x0, DeviceId=0x0
Drivers-
1: 23 @ 64, type=0x1
2: 36 @ 120, type=0xffff
3: 21 @ 176, type=0x701
4: 34 @ 232, type=0xf8ff -
How do I create a Boot Camp partition with Windows & blank NTFS partitions?
I'm trying to create this kind of setup:
OS X partition
Windows 7 partition
blank NTFS partition (no OS)
blank NTFS partition (no OS)
This would be much easier if I created just the OS X partition and the Windows 7 partition with the Boot Camp Assistant tool (done this many times before successfully on other computers). The problem begins when I try to split the Boot Camp partition through the Windows 7 DVD partition manager during setup by deleting the Boot Camp partition, and recreating three partitions from the unallocated space. After installing Windows 7 on one of those new three partitions, I'm getting all kinds of startup errors when I try to install the Boot Camp drivers.
What would be the best way to achieve this setup?

Yes, I researched many options for three-partition dual-boot setups. After many trials and tribulations, there is a simple method that I have used on multiple MBPs.
1. Run Boot Camp Assistant, as per the Boot Camp Installation and Setup Guide. Once you have Mac OS X and Windows 7 set up, check the partitions and back them up (with Time Machine, and Winclone).
2. Get iPartition, and resize the Mac and Windows partitions to what you want, say 100 GB each, and set up your other partitions at the sizes you want. I put mine "after" the Windows partition, at the end of the disk, and have had no problems. It takes a few minutes to create the bootable CD for iPartition, but you get everything you need to do so from Coriolis Systems. You will need your Mac OS X install disc.
3. Install Paragon's HFS+ for Windows and NTFS for Mac, and everybody can read and write everything.
4. I have both Time Machine and Norton 360 back up the Data partitions, just in case -- to an external drive, of course.
You can boot to either OS and access any partition. -
Windows 7 won't boot after adding a third partition with gparted
I needed extra space on my Boot Camp partition, so I:
1. resized the Macintosh HD partition with Disk Utility
2. booted with the gparted live disk and grew the Boot Camp partition
Now Windows doesn't boot. Any suggestions?

Hi,
your topic says "...adding a third partition", whereas in your message you don't mention this third partition.
So which is the right one?
Nonetheless you might have a look at EasyBCD http://neosmart.net/dl.php?id=1 to repair the Windows 7 bootloader.
Regards
Stefan -
How to implement 30-day range partitioning on a date column, without INTERVAL
Hi,
I am using the db:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit
Current table structure is:
CREATE TABLE A
(
  a            NUMBER,
  CreationDate DATE
)
PARTITION BY RANGE (CreationDate)
INTERVAL (NUMTODSINTERVAL (30, 'DAY'))
(PARTITION P_FIRST
   VALUES LESS THAN (TIMESTAMP '2001-01-01 00:00:00'));
How can I define virtual-column-based partitioning with RANGE partitioning, without using INTERVAL partitioning, and with intervals of 30 days?
For monthly I am trying as
CREATE TABLE A
(
  a                NUMBER,
  CreationDate     DATE,
  monthly_interval DATE AS (TO_CHAR(CreationDate, 'MM-YYYY')) VIRTUAL
)
PARTITION BY RANGE (monthly_interval)
(
  PARTITION p_AUG12 VALUES LESS THAN (TO_DATE('08-2012', 'mm-yyyy')),
  PARTITION p_SEP12 VALUES LESS THAN (TO_DATE('09-2012', 'mm-yyyy')),
  PARTITION p_OCT12 VALUES LESS THAN (TO_DATE('10-2012', 'mm-yyyy'))
)
ENABLE ROW MOVEMENT;
BUT I CAN'T INSERT data even for that:
INSERT INTO a (a, CreationDate)
VALUES (1, '12-10-2012');
Please suggest.

Hi rp,
"Interval Partitioned to Range. Created Daily Partitions from Monthly Part." got complicated, so I am posting here.
Basically,
I know Interval Partitioning is a kind of Range partitioning. But explicitly for Interval Partitioned tables XML Indexes are not allowed as discussed here:
XMLIndexes on an Interval Partitioned Table??
I can do monthly partitions as:
CREATE TABLE A
(
  a                NUMBER,
  CreationDate     DATE,
  monthly_interval VARCHAR2(8) AS (TO_CHAR(CreationDate, 'MM-YYYY')) VIRTUAL
)
PARTITION BY RANGE (monthly_interval)
(
  PARTITION p_AUG12 VALUES LESS THAN ('09-2012'),
  PARTITION p_SEP12 VALUES LESS THAN ('10-2012'),
  PARTITION p_OCT12 VALUES LESS THAN ('11-2012')
)
ENABLE ROW MOVEMENT;
INSERT INTO a (a, CreationDate)
VALUES (1, '12-SEP-2012');
INSERT INTO a (a, CreationDate)
VALUES (1, '14-SEP-2012');

SELECT * FROM A PARTITION (p_SEP12);
SELECT * FROM A PARTITION (p_AUG12);
SELECT * FROM A PARTITION (p_OCT12);
Select * from A partition (p_OCT12)
Can we do it with 30-day partitions instead of the monthly partitions? Any suggestions?
Thanks..
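One possible approach, sketched below as an untested illustration (the table name A30, the column name day_bucket, and the anchor date DATE '2012-01-01' are all my own assumptions, not from the question): partition on a NUMBER virtual column that assigns each row a 30-day bucket counted from a fixed anchor date. Date-minus-date yields a number of days in Oracle, so the expression is deterministic and should be usable as a virtual column, and the table stays plain RANGE-partitioned (no INTERVAL clause, so XML index restrictions on interval-partitioned tables would not apply).

```sql
CREATE TABLE A30
(
  a            NUMBER,
  CreationDate DATE,
  -- bucket 0 = days 0-29 after the anchor date, bucket 1 = days 30-59, ...
  day_bucket   NUMBER AS (FLOOR((CreationDate - DATE '2012-01-01') / 30)) VIRTUAL
)
PARTITION BY RANGE (day_bucket)
(
  PARTITION p_0 VALUES LESS THAN (1),  -- first 30-day window
  PARTITION p_1 VALUES LESS THAN (2),  -- second 30-day window
  PARTITION p_2 VALUES LESS THAN (3)
)
ENABLE ROW MOVEMENT;
```

The trade-off versus INTERVAL partitioning is that partitions are not created automatically: as data moves past the last bucket you would have to run ALTER TABLE A30 ADD PARTITION yourself (or from a scheduled job).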