Constraint Enable

Hi All,
I have created a table like this:
CREATE TABLE table2(a NUMBER CONSTRAINT a_pk PRIMARY KEY);
INSERT INTO table2 VALUES(1);
INSERT INTO table2 VALUES(2);
After that I disabled the primary key like this:
ALTER TABLE table2 DISABLE CONSTRAINT a_pk;
Then I inserted duplicate values:
INSERT INTO table2 VALUES(1);
INSERT INTO table2 VALUES(1);
Now I want to restrict duplicate values in that column. For that I used:
ALTER TABLE table2 MODIFY CONSTRAINT a_pk ENABLE NOVALIDATE;
but it fails with:
ORA-02437: cannot validate (RAMESH.A_PK) - primary key violated
How can we enable the key?
Please help me.
Thanks in advance

user9077483 wrote:
How can we enable the key?

The reason it fails: a PK is enforced by a unique index. So even though you ask to NOVALIDATE existing data and validate new data only, a unique index is not capable of that - it rejects any duplicate, new or old. To take advantage of NOVALIDATE you must implement the PK via a non-unique index:
SQL> CREATE TABLE table2(a NUMBER)
  2  /
Table created.
SQL> CREATE INDEX a_PK on table2(a)
  2  /
Index created.
SQL> ALTER TABLE table2
  2    ADD CONSTRAINT a_PK
  3      PRIMARY KEY(a)
  4      USING INDEX a_PK
  5  /
Table altered.
SQL> INSERT INTO table2 VALUES(1);
1 row created.
SQL> INSERT INTO table2 VALUES(2);
1 row created.
SQL> ALTER TABLE table2 DISABLE CONSTRAINT a_PK;
Table altered.
SQL> INSERT INTO table2 VALUES(1);
1 row created.
SQL> INSERT INTO table2 VALUES(1);
1 row created.
SQL> ALTER TABLE table2 MODIFY CONSTRAINT a_PK ENABLE NOVALIDATE
  2  /
Table altered.
SQL> select * from table2
  2  /
         A
----------
         1
         2
         1
         1
SQL> INSERT INTO table2 VALUES(1);
INSERT INTO table2 VALUES(1)
ERROR at line 1:
ORA-00001: unique constraint (SCOTT.A_PK) violated
SQL>
As you can see, a non-unique index allows us to ignore existing duplicates but rejects new ones.
SY.
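
To confirm afterwards which index enforces the constraint and that it was enabled without validation, you can query the data dictionary. A quick check (standard views USER_CONSTRAINTS and USER_INDEXES; TABLE2 as in the example above):

SELECT c.constraint_name, c.status, c.validated, i.uniqueness
FROM   user_constraints c
JOIN   user_indexes i ON i.index_name = c.index_name
WHERE  c.table_name = 'TABLE2';

For the scenario above this should report STATUS = ENABLED, VALIDATED = NOT VALIDATED, and UNIQUENESS = NONUNIQUE.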

Similar Messages

  • ALTER TABLE privilege and CREATE/DROP/ENABLE/DISABLE constraint privilege

    Hi,
    I am looking for some detailed info regarding the privileges below:
    ALTER TABLE, CREATE CONSTRAINT, DROP CONSTRAINT, ENABLE CONSTRAINT, and DISABLE CONSTRAINT.
    I have two schemas 'A' and 'B'. I want to provide user 'A' with alter table, create or drop constraint, and enable or disable constraint on schema B.
    Please let me know how to make this work.
    Thank you

    I got the answer for my second question; I have the option to grant the 'ALTER ANY TABLE' privilege to the user. -- Yes, but you should not do that.
    Regarding question one: suppose I have two schemas A and B and I want schema A to have the alter table privilege on all tables of schema B. Can I do this in one command? -- No.
    Or do I need to grant alter on each table separately? -- Yes.
    If I choose the second option, granting on each table separately, then whenever a table is added to schema B we need to grant the privilege on that table as well. -- Yes, but nothing strange there. Designing and creating objects includes the privileges on them.
    If user A is granted the alter table privilege on a table that user B owns, can user A drop/create/enable/disable constraints on that table? -- Yes, isn't that what all this is about?
    Again, letting one user alter the objects of another user is generally not such a good idea. I hope you see this from our discussion.
    The alter table privilege includes adding and dropping columns. This is why I suggested writing a procedure that does exactly what you need, and then granting execute on that to A; see the sketch below.
    The best thing, of course, would be NOT to disable the constraints; they are probably there for a reason.
    I am currently handling an issue where one session doing this deadlocks with another session doing only selects - from other tables, that is!
    Regards
    Peter
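
    A minimal sketch of that procedure approach (all names here are hypothetical; the point is that B owns a definer's-rights procedure that performs the specific DDL, and grants only EXECUTE on it to A):

    -- Run as user B, the table owner. The procedure executes with B's
    -- rights, so A never needs any ALTER privilege on B's tables.
    CREATE OR REPLACE PROCEDURE b_constraint_admin (
        p_table      IN VARCHAR2,
        p_constraint IN VARCHAR2,
        p_action     IN VARCHAR2   -- only 'ENABLE' or 'DISABLE' allowed
    ) AUTHID DEFINER
    AS
    BEGIN
        IF UPPER(p_action) NOT IN ('ENABLE', 'DISABLE') THEN
            RAISE_APPLICATION_ERROR(-20001, 'Action must be ENABLE or DISABLE');
        END IF;
        -- DBMS_ASSERT guards the identifiers against SQL injection.
        EXECUTE IMMEDIATE 'ALTER TABLE '
            || SYS.DBMS_ASSERT.SIMPLE_SQL_NAME(p_table)
            || ' ' || UPPER(p_action) || ' CONSTRAINT '
            || SYS.DBMS_ASSERT.SIMPLE_SQL_NAME(p_constraint);
    END;
    /
    GRANT EXECUTE ON b_constraint_admin TO a;

    User A can then call, for example, EXEC b.b_constraint_admin('EMP', 'EMP_FK', 'DISABLE'); without holding ALTER on any of B's tables, and the procedure body limits exactly what A is allowed to do.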

  • Enable constraints in parallel

    Hi,
    I am in the process of speeding up an import.
    In our data warehouse environment, enabling constraints during import takes a lot of time. To improve the performance of this step, I created the constraints script separately and added parallelism to all commands (e.g.: alter table x1 add constraint xc1 ... enable parallel 4;).
    I changed the degree of parallelism of each table to 4, and I included the following command at the start of the script:
    alter session force parallel ddl parallel 4;
    Still, the constraints are not being enabled in parallel.
    How can I make the constraint-enable step use parallelism?
    Is there any other way to improve the enable-constraint performance during import?
    We are on 10.2.0.4, Sun Solaris 10, and we are using Data Pump.
    Thanks
    Pramod

    To speed up the import, use the exclude=statistics parameter.
    See this link for enabling indexes:
    http://www.asktherealtom.ch/?p=214
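
    For the validation step itself, one pattern that can use parallelism (a sketch, with a hypothetical table BIG_TABLE and constraint BIG_TABLE_PK; the validation scan picks up the table's parallel degree):

    -- Enable without checking existing rows: a fast, dictionary-only step.
    ALTER TABLE big_table MODIFY CONSTRAINT big_table_pk ENABLE NOVALIDATE;
    -- Give the table a parallel degree so the validation scan can use it.
    ALTER TABLE big_table PARALLEL 4;
    -- The full scan performed by VALIDATE may now run in parallel.
    ALTER TABLE big_table MODIFY CONSTRAINT big_table_pk VALIDATE;
    -- Reset the degree afterwards if queries should not inherit it.
    ALTER TABLE big_table NOPARALLEL;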

  • Deleting Datas from a table without disabling constraints.

    Hi,
    I am working on Oracle9i and Solaris 5.8. I want to delete half of the data in a table. The table contains one lakh (100,000) rows, but I need to delete only 50 thousand of them. However, the table has constraints enabled. Please send me some queries showing how to delete the data without disabling the constraints.

    What type of constraint do you have?
    In the case of a not null, unique, or primary key constraint, you can delete the rows without disabling the constraints.
    In the case of a referential integrity constraint, you can also delete the rows without disabling the constraints, but you have to define the foreign key with the ON DELETE CASCADE clause. By doing so, Oracle will delete the rows in the child table as well.
    http://www.techonthenet.com/oracle/foreign_keys/foreign_delete.php
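
    A minimal sketch of the ON DELETE CASCADE behaviour, using hypothetical parent/child tables:

    CREATE TABLE dept_demo (
        deptno NUMBER PRIMARY KEY
    );
    CREATE TABLE emp_demo (
        empno  NUMBER PRIMARY KEY,
        deptno NUMBER,
        CONSTRAINT emp_demo_fk FOREIGN KEY (deptno)
            REFERENCES dept_demo (deptno)
            ON DELETE CASCADE   -- child rows are removed with the parent
    );
    INSERT INTO dept_demo VALUES (10);
    INSERT INTO emp_demo VALUES (1, 10);
    -- With all constraints still enabled, this deletes the dept_demo row
    -- AND its dependent emp_demo row in one statement.
    DELETE FROM dept_demo WHERE deptno = 10;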

  • ----Constraints and Performance issues----

    Hi all,
    I have a major concern and I would like your suggestions on the best way to handle it.
    I have a staging table cust_staging. I have two target tables, customer and customer_address, which must be populated from this staging table.
    The customer key is the primary key in all three tables. For table customer_address, the customer_key is also referenced from the customer table.
    Incremental data will be available in the staging table (around 0.2 million rows) and the customer table will have approximately 2 million records.
    The concern I have is that I have to insert/update this information into the target tables without disabling the foreign key constraints.
    I tried to insert into both target tables with the constraints enabled, but the mapping just hangs and I am forced to kill the process. I had tried using a single mapping to populate both tables, and when that went into hang mode, I tried first customer and then another mapping for customer_address. This also just goes into hang mode.
    Next I tried to disable the constraint and enable it again in the mapping itself. My concern here is that if I do a blind insert and there is a violation when I re-enable the constraints, the target table may go into an unusable state.
    My question is how to tackle this problem. Can I first disable the constraints, incorporate some logic using the pre-mappings to apply business rules that check the constraints explicitly, and then redirect the bad records to a reject target and the other records to the actual target?
    Please let me know how I should handle this situation in OWB, bearing in mind the performance issues as well.
    We use OWB 9.2.

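    The thread ends without a reply, but one standard Oracle technique for the "unusable state" concern is ENABLE ... EXCEPTIONS INTO, which records the violating rows instead of leaving you to guess. A minimal sketch, assuming the hypothetical constraint name customer_address_fk:

    -- One-time setup: create the EXCEPTIONS table with the Oracle-supplied
    -- script ($ORACLE_HOME/rdbms/admin/utlexcpt.sql).
    @?/rdbms/admin/utlexcpt.sql
    -- Try to re-enable the constraint. If violations exist the ALTER still
    -- fails, but every offending row's rowid is logged into EXCEPTIONS.
    ALTER TABLE customer_address
        ENABLE CONSTRAINT customer_address_fk
        EXCEPTIONS INTO exceptions;
    -- Inspect (or redirect to a reject table) the offending rows.
    SELECT ca.*
    FROM   customer_address ca
    WHERE  ca.rowid IN (SELECT row_id FROM exceptions);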

  • Time Constraints - Hr ABAP

    Hi Everyone
    Can anybody throw some light on the various time constraints used in HR-ABAP and their relation to the infotypes?
    Also, please specify if there is some transaction code from where we can get to know the values of the time constraints.
    And also whether there is any table associated with them.
    <REMOVED BY MODERATOR>
    Thanks & Regards
    Gaurav
    Edited by: Alvaro Tejada Galindo on Feb 22, 2008 10:24 AM

    Hi,
    Please refer to the document below:
    When you update an infotype, old data is not lost but archived for historical evaluation. The system records a specific period of validity for each infotype. This enables the system to store more than one infotype record at the same time, even if their validity periods overlap. This means that the time relationships between infotype records must be defined. The concept of time constraints enables you to do this.
    HR master data uses the following three time constraints:
    Time Constraint 1
    For the entire time that the employee works at the enterprise, exactly one valid infotype record must exist. The validity periods of the individual records must not overlap. If a new record is created, the system automatically uses the start date of the new record as the delimitation date of the old record. Gaps are only allowed between the employee’s entry date and the start date of the first record.
    Time constraint 1 must be used for all of the infotypes containing information that must be available at all times. This is particularly true of personal and organizational assignment data.
    If a record is delimited because of time constraint 1, the system displays an appropriate message.
    Time Constraint 2
    No more than one valid record can exist at any one time. Records with constraint 2 must not overlap. Their existence is not obligatory. If a new record is created, the system automatically delimits the previous record, if one exists.
    If a record is delimited because of time constraint 2, the system displays an appropriate message.
    Time Constraint 3
    Any number of valid records can exist at any one time. The individual records do not conflict with each other.
    The system also contains the following time constraint indicators:
    Time Constraint A
    Infotypes with time constraint A must have no more than one record. The system automatically assigns the record a validity period from January 01, 1800 through December 31, 9999. This validity period cannot be subdivided.
    Infotype records with time constraint A cannot be deleted.
    Time Constraint B
    Infotypes with time constraint B must have no more than one record. The system automatically assigns the record a validity period from January 01, 1800 through December 31, 9999. This validity period cannot be subdivided.
    Infotype records with time constraint B can be deleted.
    Time Constraint T
    Infotype records with time constraint T depend on the subtype.
    The principles of data entry and time constraints that apply to infotypes ensure that data is consistent and accurate. They also constitute the basis of time recording, payroll accounting, and reporting.
    Thanks,
    Sriram Ponna.

  • Time Constraint Class

    What do the numbers (0 to 7) stand for in the time constraint class for absences on screen number 2001? I have a leave type CL (casual leave). What time constraint class shall I assign to it?
    Regards,
    Chinmay

    For your info, Raghu has already given the answer; anyway, check this too:
    A: Only one record may ever exist for the infotype (from 01/01/1800 - to 31/12/9999). Infotypes with time constraint A may not be deleted.
    B: Only one record may ever exist for the infotype (from 01/01/1800 - to 31/12/9999). Infotypes with time constraint B may be deleted.
    T: Time constraint varies depending on subtype.
    Z : Refers to time management infotypes.Time constraint for these ITs depend on time constraint class in table V_T554S_I. Collision checks : V_T554Y
    Apart from 1, 2, 3 there are some other types of Time Constraints: A, B, T, Z.
    The Infotypes with TC type A must exist, must have only one record in its lifetime, and these ITs cannot be deleted.
    Example: IT0003 (Payroll Status)
    The Infotypes with TC type B must have only one record in its lifetime.
    Example: IT0031 (Reference Personnel numbers)
    The Infotypes with TC type T will have subtypes, and the TC is based on the subtype.
    Example: IT0009 (Bank Details)
    The Time Management infotypes will have TC type Z.
    Example: IT2001 (Absences)
    A time constraint is a rule that determines whether collisions in time data are allowed, and if so, specifies how the system reacts to such collisions.
    Time constraints comprise the following:
    Time constraint classes that determine which collisions in time data records are allowed
    Time constraint table that contains the time-based collisions allowed in the time data records
    Time constraint indicator that displays whether a new data record that collides with an existing time data record can be transferred to the system or whether the transfer is prohibited
    When you update an infotype, old data is not lost but archived for historical evaluation. The system records a specific period of validity for each infotype. This enables the system to store more than one infotype record at the same time, even if their validity periods overlap. This means that the time relationships between infotype records must be defined. The concept of time constraints enables you to do this.
    HR master data uses the following three time constraints:
    Time Constraint 1
    For the entire time that the employee works at the enterprise, exactly one valid infotype record must exist. The validity periods of the individual records must not overlap. If a new record is created, the system automatically uses the start date of the new record as the delimitation date of the old record. Gaps are only allowed between the employee's entry date and the start date of the first record.
    Time constraint 1 must be used for all of the infotypes containing information that must be available at all times. This is particularly true of personal and organizational assignment data.
    If a record is delimited because of time constraint 1, the system displays an appropriate message.
    Time Constraint 2
    No more than one valid record can exist at any one time. Records with constraint 2 must not overlap. Their existence is not obligatory. If a new record is created, the system automatically delimits the previous record, if one exists.
    If a record is delimited because of time constraint 2, the system displays an appropriate message.
    Time Constraint 3
    Any number of valid records can exist at any one time. The individual records do not conflict with each other.
    Time Constraints:
    When an infotype is updated, the old data is not lost. Instead, it remains in the system so that you can perform historical evaluations. Each infotype is stored with a specific validity period. This means that the system can contain more than one record of the same infotype at the same time, even if their validity periods coincide.
    If you enter and save new information in an infotype, the system checks whether a record already exists for this infotype. If this is the case, the system reacts based on rules or TIME CONSTRAINTS set up for that particular infotype or subtype.
    Time Constraint 1: This is mandatory information that must be uniquely available, gaps are not allowed
    Time Constraint 2: This is optional information that, if available, must be unique, gaps allowed.
    Time Constraint 3: This is optional information that, if available, can exist more than once.

  • Empty String

    Hi,
    I need to confirm this issue.
    My table contains SO_ID (a nullable column). Data comes from an output dataset into the Oracle table. In the output dataset, SO_ID is loaded with a space. When it comes to the Oracle table, that space is trimmed and loaded as empty (null).
    I thought a space could not take the place of null, so I changed the column from null to a not null constraint, and then all the space values got loaded into my Oracle table. With the not null constraint everything worked fine. But I am stuck on this question: when I inserted a single row with a space (' '), it inserted one row. From my perspective a space is as good as null and should have been rejected. Why was it inserted into the table?
    e.g.: INSERT INTO emp VALUES(' ');
    1 row inserted
    Please help on this

    user613197 wrote:
    ... when I inserted a single row with a space (' '), it inserted one row. From my perspective a space is as good as null and should have been rejected. Why was it inserted into the table?

    The definition of null is "I don't know what this value is", so any null value is unknown. This results in various kinds of weirdness: null never compares (=, !=, <, >, etc.) to anything, including itself, so expressions like null = null will always be false - if you don't know what null is, then you can't know whether it is equal, not equal, <, etc. to another value, including another null. The IS [NOT] NULL operator exists to detect nulls in expressions, along with NVL() and its variant functions.
    A space is a known value and NOT null - a space is a space, an ASCII 32 - so your insert should work. Try using the null keyword in your test with the NOT NULL constraint enabled and see what happens.
    Edited by: riedelme on Aug 6, 2010 11:26 AM
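
    A quick illustration of the distinction (a sketch using a hypothetical one-column table t): in Oracle, an empty string '' is treated as NULL, while a single space ' ' is a real one-character value.

    CREATE TABLE t (c VARCHAR2(10) NOT NULL);
    INSERT INTO t VALUES (' ');   -- succeeds: a space is a known value (ASCII 32)
    INSERT INTO t VALUES ('');    -- fails with ORA-01400: Oracle treats '' as NULL
    INSERT INTO t VALUES (NULL);  -- fails for the same reason
    SELECT LENGTH(c) FROM t;      -- returns 1: the space row holds one character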

  • Problems to Download Dreamweaver CS4 with Akamai Download Manager!

    I want to download Dreamweaver CS4 Italian Windows Trial version.
    I have Internet Explorer 7, PC with OS Windows XP x64bit Corporate, Core2Quad Q9650, Motherboard MSI P45Platinum with Ethernet 10/100/1000,
    4GBRAM,3 HDD 500GB WesternDigital,Nvidia GeForce 9400GT.
    I have installed "Akamai Download Manager ActiveX" control asked on the top of window popup.
    Clicked Yes:Installation confirmation.
    But I have tried to click the button "Download" and just one window with this message appears and disappears immediately:
    Per scaricare il software è necessario installare Akamai Download Manager...
    (To download the software you must install Akamai Download Manager...)
    Problemi di download? (Download problems?)
    Assistenza per il download... (Download assistance...)
    Il sistema non dispone dei requisiti minimi per eseguire Akamai Download Manager.
    (The system doesn't meet the minimum requirements to run Akamai Download Manager.)
    I can't download anything. Why?
    I have disabled the pop-up blocker, and also disabled the anti-phishing filter.
    I also disabled the antivirus and the firewall.
    I set and lowered Security Protection from Tools > Internet Options to "Medium"; and also, another time, set it to "Personalized" with all the controls switched to "Active".
    I set Privacy > Settings > "Accetta tutti i cookie" ("Accept all cookies"; the slider lowered to the very bottom).
    I have read this Akamai Download Manager FAQ:
    -"Nothing happens when I click the download link"
    If nothing happened after you clicked the download link, either your pop-up blocker or a high security setting on your web browser may have stopped
    the Akamai DLM window from opening.
    If you have a pop-up blocker enabled in your web browser or in your Internet toolbar (such as the Yahoo! toolbar or the Google toolbar), you will
    need to disable your pop-up blocker to start your download. For additional information, and for instructions on changing your security setting, see
    the "Akamai Download Manager system requirements" (ServiceNote kb400530).
    Note: The download manager can sometimes take up to a minute to load. Please allow time for the initial loading before assuming there is a problem
    Thus, I have followed these rules from the Akamai Download Manager System Requirements:
    Internet Explorer 7
    1) Access your Internet options from the Windows Control Panel (Internet Options) or by selecting Tools > Internet Options in Internet Explorer
    2) Select the Security Tab
    3) Click Custom Level
    4) Scroll through the list of security options and set:
    -ActiveX controls and plug-ins > Download signed ActiveX controls > Enable or Prompt
    -ActiveX controls and plug-ins > Run ActiveX controls and plug-ins > Enable or Prompt
    -ActiveX controls and plug-ins > Script ActiveX controls marked safe for scripting > Enable or Prompt
    -Downloads > File download > Enable
    -Miscellaneous > Allow script-initiated windows without size or position constraints > Enable
    -Scripting > Active scripting > Enable
    5) Click OK
    6) Click OK again
    But nothing changes.
    On my PC I have also this installation: "J2SE Runtime Environment 5.0 Update 5" and "Java(TM)6 Update 15".
    But the download doesn't start at all.
    1) How can I do to download Dreamweaver CS4-Windows-ITALIAN language?
    2) Is it possible to download 364.8 MegaBytes without Akamai Download Manager?
    Please Hurry!
    Horsepower0171.

    Another option you may have is resettofactory:
    start- run- type:
    cmd
    press enter
    type:
    cd c:\program files\common files\research in motion\apploader
    press enter
    type:
    loader.exe /resettofactory
    press enter
    please perform a backup first
    test your browser before you restore your backup, and to be safe I would do a selective restore
    open DTM, go to Backup/Restore - Advanced
    top left, click on File - Open
    find your backup; it will load on the left side
    your current working BB will be on the right side
    transfer ONLY the absolutely important databases from the left to the right (address, calendar, phone logs, messages, SMS, tasks, memo)... don't do any that you don't know and don't do any OPTIONS ones
    Message Edited by drizzt09 on 07-13-2009 08:40 PM

  • Logical Model - Entity Attribute?

    Hi,
    Thanks for responding to my posting.
    I am using SQL DM 3.0.0.665. I need your thoughts on the following.
    1) Are we able to select multiple entities for entity reporting? I find an option for all entities, but then it asks us to pick only one entity to report on.
    2) Are we able to build a report for all entities in a subview?
    3) I see Formula (Default Value), Preferred Abbreviation, and Synonyms for each attribute in the Entity Details report. I am not able to find a place to fill those in the attribute definitions. Where can we supply those details, so they can be printed in the reports?
    Thanks for helping us out.

    I found the following for Attribute Properties in the SQL DM Help:
    =============
    Attribute Properties
    This dialog box displays the properties of an attribute, which is a component of an entity in the Logical Model.
    General
    Name: Name of the attribute.
    Synonym: Synonym for the attribute.
    Preferred Abbreviation: Name that will be used for any corresponding table column during forward-engineering if the Use Preferred Abbreviations option is enabled in the Engineering dialog box.
    Long Name: Long name in the format: entity-name.attribute-name.
    Allow Nulls: Controls whether null values are allowed for the attribute. If this option is not enabled, a non-null value is mandatory.
    Datatype: Enables you to specify a domain, logical type, distinct type, collection type, or structured type as the data type of the attribute. You can click the ellipsis (...) button to specify further details for the selected type.
    Entity: Name of the entity with which the attribute is associated.
    Source Name: User-specified name of the source for this attribute.
    Source Type: Manual, System, Derived, or Aggregate.
    Formula Description: For a derived or aggregate source type, the formula for the attribute.
    Scope: For a structured type with Reference enabled, limits the scope by specifying the table in which the type is implemented.
    Type Substitution: For a structured type with Reference disabled, or for a structured type applied to an entity, controls whether a substitutional structured type is generated in the DDL.
    Default and Constraint
    Constraint Name: Name of the constraint.
    Default Value: Default value for the attribute.
    Use Domain Constraints: Controls whether the properties defined in Domains Administration for the associated domain are used. If this option is disabled, you can use the remaining fields to specify the database type for the constraint and the ranges or a list of values.
    Constraint: Enables you to specify a constraint for one or more types of databases.
    Ranges: Enables you to specify one or more value ranges for the attribute.
    Value List: Enables you to specify a list of valid values for the attribute.
    Permitted Subtypes
    For a structured data type, lists all subtypes for the attribute, and lets you specify whether each is permitted for the attribute.
    =============
    but most of the above are not showing up in the attribute definition window. How do we see them?

  • BRIDGE statement in a loop : dynamic destination and source table names ...

    Hello,
    I can't find the right syntax to do what I need, if it's even possible.
    Context:
    I am currently working on a migration from MS Access applications to Oracle (data only). So I copied all the MS Access tables into Oracle and manually created all the relational constraints like primary and foreign keys (because constraints are not included in 'Copy to Oracle').
    I often have to refresh my data, because the MS Access applications are still in use. Therefore I wrote PL/SQL scripts. They do the following, using dynamic SQL with the 'Execute Immediate' statement:
    Script 1
    - disable all user's constraints
    - disable all user's triggers
    - truncate all user's tables
    (Here, I have to do a manual copy of all MS Access tables to Oracle, checking the Append check-box, because the BRIDGE statement doesn't support 'Execute Immediate', and wait...)
    Script 2
    - enable all user's constraints
    - enable all user's triggers
    Could someone let me know how I could do the same as:
    For t in (select table_name from user_tables) loop
    -- Copy the datas from an Access table into the same Oracle table
    execute immediate ('BRIDGE ' || t.table_name || ' AS MyAccessConnName(select * from ' || t.table_name || ') APPEND') ;
    -- News flash ...
    dbms_output.put_line('Table ' || t.table_name || ' filled') ;
    end loop;
    ==> 00900. 00000 - "invalid SQL statement"
    If dynamic table name substitution is possible in the BRIDGE statement built from a query, I'll take it with joy!
    Thank you for helping me...
    Daniel

    Hi Daniel,
    The BRIDGE statement is just an extra command I implemented in the SQL Developer worksheet script runner.
    It gets interpreted by SQL Developer and it dynamically creates (CREATE TABLE , INSERT INTO , SELECT ... ) statements and runs them against the connections specified.
    It was developed to improve certain migration features of SQL Developer. We haven't really spent any time developing it into a customer friendly statement to be used in custom scripts.
    Hence the lack of doc. But it is there and if you can make it work for yourself all the better.
    When I say "one way of doing what you want", I mean I haven't thought about your particular problem exhaustively, and I wouldn't want you to take my solution as gospel :)
    If you are happy running a script in SQL Developer, but would rather not run two scripts or cut and paste results around, you could SPOOL the results and execute them.
    --call your other scripts to disable constraints during the data move
    set echo off;
    set feedback off;
    set linesize 1000;
    set pagesize 0;
    set headsep off;
    set termout off;
    set verify off;
    set heading off;
    SET PAGES 0;
    SET HEAD OFF;
    spool c:\mydynamicscript.sql
    select 'BRIDGE ' || table_name || ' AS MyAccessConnName(select * from ' || table_name||');' from user_tables ;
    spool off
    @c:\mydynamicscript.sql
    --call another script to enable your constraints again
    Regards,
    Dermot.
    SQL Developer Team.

  • Capturing error in Exceptions Block in a PLSQL Procedure

    Hi,
    I am creating a procedure where I need to update a table with some constraints.
    I need to update at least a million records with data from another table.
    But here is the catch: while updating a million records, there may be some records which won't be valid because of the constraints and cannot be updated, and hence will give an error.
    In my procedure I want to write an exception block that captures the error, ignores it, and keeps the procedure going so it updates all the remaining records instead of stopping at the point of error.
    How can I do this?
    I know I can disable the constraints on the table.
    But I want the constraints enabled, so that the errors are trapped and skipped and only the records that are valid are updated.
    Can someone help me write an exception block which does this?
    Thanks,
    Philip.

    Hi,
    I used the exception block as you said.
    I have a sample of 20 records, and I know the 11th record is not valid and should be inserted in a different way from the other 19 records.
    So I ran the same query with the exception block.
    But here is what happened: until the 10th record everything was fine.
    On the 11th record the execution went into the exception block and executed whatever was there, but then the script just exited after the exception block; it did not go back to fetch the remaining records from the 12th until the 20th.
    How can I fix this?
    Philip.
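
    The behaviour described here is what happens when the exception handler sits outside the loop: once control jumps to the handler, the loop is finished. Putting a nested block inside the loop lets each iteration fail independently. A minimal sketch with hypothetical tables src and tgt:

    BEGIN
        FOR r IN (SELECT id, val FROM src) LOOP
            BEGIN  -- nested block: an error here aborts only this iteration
                UPDATE tgt SET val = r.val WHERE id = r.id;
            EXCEPTION
                WHEN OTHERS THEN
                    -- record the failure and continue with the next record
                    DBMS_OUTPUT.PUT_LINE('Skipped id ' || r.id || ': ' || SQLERRM);
            END;
        END LOOP;
    END;
    /

    On 10gR2 and later, DML error logging does the same thing set-based and much faster: create the log table once with EXEC DBMS_ERRLOG.CREATE_ERROR_LOG('TGT'); and then run the single statement UPDATE tgt ... LOG ERRORS INTO err$_tgt REJECT LIMIT UNLIMITED; failed rows land in ERR$_TGT instead of aborting the update.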

  • Migration from 7 , 9 version to 11.1.2

    Hi All,
    Where can I find information related to migrating version 7 and version 9 to 11.1.2?
    Can I directly copy all the artifacts from the essbasepath/App/database of the lower version to the higher version, except data, after creating the application and database in the new version? I understand that data should be exported and then re-imported.
    How about the security and filters? Does everything need to be re-created, or can I import them as well?
    Any other areas which require attention?

    No need to re-create them; use the CSSExportImport utility, it will take care of that.
    Prior to installation, make sure you do all of this:
    It is safest to do a clean install. An install on top of a pre-existing Hyperion System 9 or 7 installation would lead to the loss of prior data and configuration files and lead to a new environment... Make sure you clean all registry entries with regedit.
    Since many of the applications of EPM 11.1.2.1 are 64-bit savvy and could benefit from the optimization features of Microsoft Windows 2008 R2 64-bit, that would be recommended.
    Ensure that your compression/decompression utility will handle long file path names (greater than the 260-character Microsoft Windows path limitation).
    If all Hyperion products that can be accessed via Oracle EPM Foundation 11.1.2.x were installed and activated, it could easily require around 14 gigabytes or more of RAM. This pretty much eliminates consideration of a 32-bit install (where each machine can access at best 4 gigabytes of memory, and no less than 1 gigabyte of that would be for the operating system). The processing load for many separate Java virtual machines would also make it practical to have four or more CPUs. The memory and CPU requirements might be reduced somewhat if fewer applications or JVMs were run, but it would not be advisable to run even a test and development install with fewer than two CPUs and 8 gigabytes of RAM, as there are around two dozen processes to support... there are too many variables involved. It would be best to ramp up the system with actual processes and loads and project from that.
    For Essbase, there is a migration wizard in EAS which takes care of this.
    Oracle Data Provider (ODP) for .NET 2.0 (from the Oracle Data Access Component (ODAC) package) is required and must be installed by a user with Windows administrator rights for the following products:
    For Microsoft Windows 2008 servers, ensure that IIS 7 is installed with IIS 6 compatibility features (needed for HFM). Ensure that ASP.NET has been installed as well. In Windows 2008: Start > All Programs > Administrative Tools > Server Manager (or use icon in tool-bar) > Roles Summary > click Add Roles > ...complete verifications... > On Select Server Roles screen select Web Server (IIS) > et cetera.
    The Microsoft Windows Services control panel should have a domain user with rights to start a service as the owner of each Oracle EPM service, so that user should be determined before installation.
    Internet Options > Security Settings tab > Custom level...
    Allow script-initiated windows without size or position constraints (Enable)
    Allow websites to open windows without address or status bar (Enable)
    Internet Options > Security tab > Enable Protected Mode (Uncheck)

  • Best approach to do Range partitioning on Huge tables.

    Hi All,
    I am working on 11gR2 oracle 3node RAC database. below are the db details.
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE 11.2.0.3.0 Production
    TNS for Linux: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    in my environment we have 10 big transaction tables (10 billion rows) and they are growing bigger and bigger. Now the management is planning to do a range partition based on the created_dt partition key column.
    We tested this partitioning strategy with a few million records in another environment with the steps below.
    1. CREATE TABLE TRANSACTION_N
    PARTITION BY RANGE ("CREATED_DT")
    ( PARTITION DATA1 VALUES LESS THAN (TO_DATE(' 2012-08-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART1,
    PARTITION DATA2 VALUES LESS THAN (TO_DATE(' 2012-09-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART2,
    PARTITION DATA3 VALUES LESS THAN (TO_DATE(' 2012-10-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART3
    as (select * from TRANSACTION where 1=2);
    2. Exchange partition to move the data from the old table into the new partitioned table.
    ALTER TABLE TRANSACTION_N
    EXCHANGE PARTITION DATA1
    WITH TABLE TRANSACTION
    WITHOUT VALIDATION;
    3. create required indexes (took almost 3.5 hrs with parallel 16).
    4. Rename the table names and drop the old tables.
    This took around 8 hrs for one table which has 70 million records, so for billions of records it will take much longer. The problem is we get only 2 to 3 hrs of down time in production to implement these changes for all tables.
    Can you please suggest the best approach I can take to copy that much data from the existing table to the newly created partitioned table and create the required indexes.
    Thanks,
    Hari

    >
    in my environment we have 10 big transaction tables (10 billion rows) and they are growing bigger and bigger. [...]
    Can you please suggest the best approach I can take to copy that much data from the existing table to the newly created partitioned table and create the required indexes.
    >
    Sorry to tell you, but that test and partitioning strategy is essentially useless and won't work for your entire table anyway. One reason is that if you use the WITHOUT VALIDATION clause you must ensure that the data being exchanged actually belongs to the partition you are putting it in. If it doesn't, you won't be able to re-enable or rebuild any primary key or unique constraints that exist on the table.
    See Exchanging Partitions in the VLDB and Partitioning doc
    http://docs.oracle.com/cd/E18283_01/server.112/e16541/part_admin002.htm#i1107555
    >
    When you specify WITHOUT VALIDATION for the exchange partition operation, this is normally a fast operation because it involves only data dictionary updates. However, if the table or partitioned table involved in the exchange operation has a primary key or unique constraint enabled, then the exchange operation is performed as if WITH VALIDATION were specified to maintain the integrity of the constraints.
    If you specify WITHOUT VALIDATION, then you must ensure that the data to be exchanged belongs in the partition you exchange.
    >
    Comments below are limited to working with ONE table only.
    ISSUE #1 - ALL data will have to be moved regardless of the approach used. This should be obvious since your current data is all in one segment but each partition of a partitioned table requires its own segment. So the nut of partitioning is splitting the existing data into multiple segments almost as if you were splitting it up and inserting it into multiple tables, one table for each partition.
    ISSUE #2 - You likely cannot move that much data in the 2 to 3 hour window that you have available for down time, even if all you had to do was copy the existing datafiles.
    ISSUE #3 - Even if you can avoid issue #2, you likely cannot rebuild ALL of the required indexes in whatever remains of the outage window after moving the data itself.
    ISSUE #4 - Unless you have conducted full volume performance testing in another environment prior to doing this in production, you are taking on a tremendous amount of risk.
    ISSUE #5 - Unless you have fully documented the current, actual execution plans for your most critical queries in your existing system, you will have great difficulty overcoming issue #4, since you won't have the requisite plan baseline to know if the new partitioning and indexing strategies are giving you equivalent, or better, performance.
    ISSUE #6 - Things can, and will, go wrong and cause delays no matter which approach you take.
    So assuming you plan to take care of issues #4 and #5 you will probably have three viable alternatives:
    1. use DBMS_REDEFINITION to do the partitioning on-line. See the Oracle docs and this example from oracle-base for more info.
    Redefining Tables Online - http://docs.oracle.com/cd/B28359_01/server.111/b28310/tables007.htm
    Partitioning an Existing Table using DBMS_REDEFINITION
    http://www.oracle-base.com/articles/misc/partitioning-an-existing-table.php
    2. do the partitioning offline and hope that you don't exceed your outage window. Recover by continuing to use the existing table.
    3. do the partitioning offline but remove the oldest data to minimize the amount of data that has to be worked with.
    You should review all of the tables to see if you can remove older data from the current system. If you can you could use online redefinition that ignores older data. Then afterwards you can extract this old data from the old table for archiving.
    If the amount of old data is substantial you can extract the new data to a new partitioned table in parallel and not deal with the old data at all.
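
    For alternative 1, the online redefinition flow looks roughly like this (a sketch only; HARI is the schema from the post, TRANSACTION_N is assumed to already exist with the desired partitioning, and the exact parameters should be checked against the 11.2 DBMS_REDEFINITION documentation):

    -- Check the table qualifies for online redefinition (by primary key).
    EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('HARI', 'TRANSACTION', DBMS_REDEFINITION.CONS_USE_PK);
    -- Start: links TRANSACTION to the partitioned interim table.
    EXEC DBMS_REDEFINITION.START_REDEF_TABLE('HARI', 'TRANSACTION', 'TRANSACTION_N');
    -- Copy indexes, constraints, triggers, and grants to the interim table.
    DECLARE
        l_errors PLS_INTEGER;
    BEGIN
        DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
            uname        => 'HARI',
            orig_table   => 'TRANSACTION',
            int_table    => 'TRANSACTION_N',
            copy_indexes => DBMS_REDEFINITION.CONS_ORIG_PARAMS,
            num_errors   => l_errors);
        DBMS_OUTPUT.PUT_LINE('errors: ' || l_errors);
    END;
    /
    -- Resync changes made during the copy, then swap the tables. Only the
    -- FINISH step takes a short exclusive lock, so the application outage
    -- is minutes rather than the hours a full offline rebuild would need.
    EXEC DBMS_REDEFINITION.SYNC_INTERIM_TABLE('HARI', 'TRANSACTION', 'TRANSACTION_N');
    EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('HARI', 'TRANSACTION', 'TRANSACTION_N');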

  • Various ways to clear the contents of the DEFERRED TRANSACTION QUEUE (replication)

    Product: ORACLE SERVER
    Date written: 2004-08-13
    PURPOSE
    In an advanced replication environment, a master site or updatable snapshot site maintains a deferred transaction queue at the local site in order to propagate data to remote master sites.
    Data that has been successfully delivered to the remote sites is periodically purged from the deferred transaction queue (DEFCALL) by the dbms_defer_sys.purge job. However, if a problem occurs and the contents of DEFCALL cannot be delivered to the remote site, they may remain in the queue without ever being deleted.
    Stale entries like these can interfere with the progress or delivery of subsequent transactions, so situations arise where you want to delete them forcibly. This note explains in detail what to do in those cases.
    SCOPE
    The Advanced Replication feature is not supported in the Standard Edition of releases 8 through 10g.
    Explanation
    1. Deleting a specific deferred transaction ID
    The basic way to delete a transaction accumulated in the deferred transaction queue is:
    dbms_defer_sys.delete_tran(deferred_tran_id, destination)
    For example:
    SQL>exec dbms_defer_sys.delete_tran('2.7.10', 'rep2.world');
    To delete all transactions for a given destination, pass NULL for the first argument; to delete a specific transaction ID for all destinations, pass NULL for the second argument.
    Consequently, to delete every stored deferred transaction:
    SQL>exec dbms_defer_sys.delete_tran(null,null);
    2. Deleting only the entries for a specific table
    For example, to remove the entries for a specific table - here the DEPT table - from DEFCALL, proceed as follows:
    SQL>connect repadmin/repadmin
    SQL>set pagesize 1000
    SQL>set head off
    SQL>spool purgedefcall.sql
    SQL>select 'exec dbms_defer_sys.delete_tran('''
    || deferred_tran_id || ''', null);'
    from defcall
    where packagename like 'DEPT%';
    SQL>spool off
    Clean up the purgedefcall.sql file produced by the spool, save it again, and run it:
    SQL>connect repadmin/repadmin
    SQL>@purgedefcall.sql
    If you only want to stop delivery to a specific site, specify the database link name pointing to that site, such as MS_B.WORLD, instead of NULL.
    3. Clearing the entire queue
    To delete the whole contents of DEFCALL, you can simply use the DBMS_DEFER_SYS.DELETE_TRAN procedure shown above.
    SQL>connect repadmin/repadmin
    SQL>exec dbms_defer_sys.delete_tran(null,null);
    To delete the whole contents of DEFERROR, use DBMS_DEFER_SYS.DELETE_ERROR.
    SQL>exec dbms_defer_sys.delete_error(null,null);
    However, delete_tran and delete_error internally issue DELETE statements, which write undo records to rollback segments; when there is a large amount of data to delete, this is slow and can also run into rollback space errors.
    In that case you can clean out the deferred transaction queue simply and quickly with the TRUNCATE command, as follows.
    (1) Oracle7
    Truncate all of DEF$_CALL, DEF$_CALLDEST, and DEF$_ERROR.
    Note, however, that because DEF$_CALLDEST has a constraint referencing DEF$_CALL, DEF$_CALL cannot be truncated even after DEF$_CALLDEST has been truncated and contains no data at all.
    With a DELETE operation, the master table's rows can be removed without error as long as the child table is empty, but TRUNCATE removes data without checking it; so even if the child table holds no data, the master table cannot be truncated as long as an enabled constraint on a child table references it.
    SQL>connect system/password
    SQL>alter table system.DEF$_CALLDEST disable constraint
    DEF$_CALLDEST_CALL;
    SQL>truncate table system.DEF$_CALL;
    SQL>truncate table system.DEF$_CALLDEST;
    SQL>truncate table system.DEF$_ERROR;
    SQL>alter table system.DEF$_CALLDEST enable constraint DEF$_CALLDEST_CALL;
    (2) Oracle8
    In Oracle8, DEF$_CALLDEST no longer has the DEF$_CALLDEST_CALL constraint, so this is no longer a concern; just truncate the relevant tables as follows.
    SQL>truncate table system.DEF$_AQCALL;
    SQL>truncate table system.DEF$_CALLDEST;
    SQL>truncate table system.DEF$_ERROR;
    SQL>truncate table system.DEF$_AQERROR;
    4. Clearing the contents of DEFERROR
    When a deferred transaction is delivered to the remote site and an error occurs while it is being applied, the entry disappears from the queue of the source database and accumulates in the DEFERROR and DEFCALL/DEFTRAN of the destination database where it was being applied.
    In that case you either retry applying the error entries, or review their contents and then delete them.
    Delete them as follows:
    SQL>exec dbms_defer_sys.delete_error(null,null);
    For reference, to re-execute them:
    SQL>exec dbms_defer_sys.execute_error(null,null);
    Reference Documents
    <Note:190885.1> How to Clear Down the Deferred Queue and DBMS_DEFER_SYS.DELETE_TRAN

    In attempting to install NetWare for the iPhone 3G (another story!), I must have inadvertently deleted all of the Network and Airport settings that had allowed me to use Airport alone to pick up an open signal/network in my neighborhood. I had no rec