Restore data to an altered table.

Hello,
I'd like to alter a table in my database and add a column.
A lot of our old data is stored on tapes.
What will happen if I later try to restore the data from the tapes into the modified table?
Do I have to restore all our tapes, then alter the table, and then back up again?
The DB is 9i.
Thank you

Hi,
I'd like to alter a table in my database and add a column.
A lot of our old data is stored on tapes. - Are those backups?
What will happen if I later try to restore the data from the tapes into the modified table? - You cannot restore a single table; you can restore a tablespace or a datafile, but not individual objects.
So none of that will work here. Do one thing: before altering the table, take an export of it:
$exp system/**** file=exp_table.dmp tables=user1.abc
Then make whatever changes you want; if something goes wrong, drop the table and import it back again.
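If it does come to that, a minimal sketch of the matching import (assuming the dump file and table from the export above; ignore=y lets imp load rows into an already-existing table instead of failing on the CREATE):
$imp system/**** file=exp_table.dmp fromuser=user1 touser=user1 tables=abc ignore=y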
Thanks.

Similar Messages

  • NEED ALTER TABLE ALTER COLUMN FOR DATE FORMAT

    Need something like this:
    ALTER TABLE ABC
    ALTER COLUMN DATE1 AS (DD/MM/YYYY)
    The date needs to appear in this format (29/03/2014) in the table,
    and it also needs to be recorded in the DB in that format.

    Changed the system date format - works. Thanks!
    You should always store values as actual dates in date/datetime fields in SQL Server.
    The formatting can very easily be done in your front end (presentation layer) using a format function.
    Even in T-SQL you can use the CONVERT or FORMAT functions to get the date values in the format you want.
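    For example (style 103 is the built-in dd/mm/yyyy convert style; FORMAT requires SQL Server 2012 or later):
    SELECT CONVERT(varchar(10), GETDATE(), 103);   -- e.g. 29/03/2014
    SELECT FORMAT(GETDATE(), 'dd/MM/yyyy');        -- same result on SQL Server 2012+
    Both return the current date rendered as 29/03/2014-style text, while the underlying column stays a real date.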
    Visakh
    I don't follow you? Anyway, Olaf Helper's answer was the solution.
    SQL Server has a data type called DATETIME. To query dates correctly, it is easiest to use this type, as it allows the SQL engine to do all the calculations for you. The canonical format for dates in any database language is YYYY-MM-DD, which ensures that stored dates sort in the right order. The way you store data and the way you display it should be kept as two separate concerns. This is the ISO standard and has been thoroughly investigated... a lot, to say the least. This is where the terms "front-end"
    and "back-end" developer come from, and also the distinction between server-side and client-side scripting.
    To cut a long story short: take the advice of the multiple professionals here and follow standards, otherwise you'll find yourself stuck, or worse, your legacy code will make someone tear their hair out.

  • Can I restore the deleted statistical data from the database tables?

    Hi all,
    I have deleted the statistical data from database tables (e.g. RSDDSTAT, RSDDSTATWHM, ...) by mistake, through RSA1 > Tools > BW Statistics for InfoProviders > Delete.
    Is there any way to restore the deleted data? Thanks in advance.

    Now I'm really confused.
    Your first post said
    "I have deleted the statistical data from the database tables like (Ex: RSDDSTAT, RSDDSTATWHM, ...) by mistake through RSA1 > Tools > BW Statistics for Infoproviders > Delete."
    but your last response said
    "I have deleted the BW Statistics data, not the actual data in the RSDDSTAT tables, through
    RSA1 -> Tools -> BW Statistics for InfoProviders -> clicked the 'Delete' button to delete data."
    If you used RSA1 -> Tools -> BW Statistics for InfoProviders -> 'Delete', then you deleted the data from the RSDDSTAT tables. This assumes you accepted the default date range that popped up after clicking the Delete button, which specifies deletion through the current date. If this is what you did, the data is gone. Your only hope is to recover it from a DB backup.
    The data in the RSDDSTAT tables is what is used to feed the BW Statistics cubes, generally on a daily basis.

  • Restore data in fact table

    Hi All
    Can you please let me know whether the data in the fact table can be moved from Dev to QA?
    If yes, how?
    Many Thanks!
    Regards,
    Afreen

    It's not possible with a transport.
    For testing purposes you can add the DEV source system in QUA, create a datasource for the DEV cube in QUA and a transformation to the cube in QUA (direct mapping for all fields), and load the data with the standard BW flow: InfoPackage and DTP.
    You can generate a datasource for the cube by right-clicking the InfoCube -> Additional functions -> Generate export DataSource. The datasource name will be 8<cube_name>.

  • Need help restoring data

    I have several disparate Windows 2003-based databases with limited storage and memory resources. Each database is running in NOARCHIVELOG mode. I need to back up each database prior to upgrades; if errors occur, I'd like to be able to restore the database to its previous state.
    However, each time I perform a restore/recover, I see the updated data in all tables rather than the original data from the backup. Am I under the false impression that RMAN can restore data within the same database? Any help is appreciated.
    I'll attach the scripts I used; please see if there is anything I am missing.
    CREATING A BACKUP
    1. rman nocatalog
    2. RMAN> connect target sys@ncr1
    3. RMAN> shutdown immediate;
    4. RMAN> startup mount;
    5. RMAN> report schema;
    6. RMAN> backup tag 'ncr178_pre_upgrade' database;
    7. RMAN> list backup;
    8. RMAN> startup
    RESTORING THE DATABASE
    1. rman nocatalog
    2. RMAN> connect target sys@ncr1
    3. RMAN> startup force mount;
    4. RMAN> restore database;
    5. RMAN> recover database;
    6. RMAN> alter database open;

    RMAN will always try to recover the DB to the point of failure unless told otherwise. If the changes you made are still available in the online logs (since your DB is running in NOARCHIVELOG mode), it will apply those changes too while recovering.
    If you execute the restore commands you have shown, RMAN will restore the datafiles (if any file is missing and a backup is available), apply the changes (if available in the logs), and open the database.
    RMAN is better suited to production DBs, which run most of the time and in ARCHIVELOG mode. In your scenario, I think, a cold backup is more suitable...
    Rgds
    Sameer
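    For what it's worth, a sketch of a restore that deliberately ignores the post-backup changes in this NOARCHIVELOG setup (it assumes a consistent backup taken in MOUNT state, as in the script above; NOREDO tells RMAN not to apply the online redo logs):
    RMAN> startup force mount;
    RMAN> restore database;
    RMAN> recover database noredo;
    RMAN> alter database open resetlogs;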

  • Generate DDL - change is a new column and I want to generate a alter table

    Morning all,
    I have searched and looked all over Data Modeler and I cannot find this option... yet I found it easily in Designer.
    I hope you can help me.
    SQL Developer Data Modeler v3.0.0.665.
    I have added a new column to a table and when I generate the DDL I would like it to be an alter table add column rather than a create table.
    This feature is in Designer so I would think it would be in data modeler.
    Just in case my description is not clear, here are the high-level steps.
    1. create the logical model
    2. create the relational from the logical.
    3. create the physical from the relational.
    4. generate DDL and run in database. At this point I go to production with my system and all is well.
    5. At this point we have an enhancement request. For the model it will be a new column in a table.
    6. update logical model.
    7. update relational from logical
    8. update physical from relational
    9. generate DDL. Here I would like the generator to be aware that it only needs to generate an ALTER TABLE ADD column, not a CREATE TABLE.
    This is something I do a lot, as all my models are in production. I cannot find how to get Data Modeler to generate this ALTER.
    Designer does this exceptionally well.
    Quite often it is more than a single column. The changes can be many and made over time, and at the time of generating the DDL you may not recall every single change you made. Having the tool discover those changes for you and generate the appropriate DDL is a feature I regard as very valuable.
    I hope this is clear and you can help me.
    Cheers
    Chris ....

    Hi Chris,
    you need to compare your model against the database - import from the database into the same relational model and use the "swap target" option - in this case "alter statements" against the database will be generated.
    You can look at the demonstrations here http://www.oracle.com/technetwork/developer-tools/datamodeler/demonstrations-224554.html
    Probably this particular one will be most helpful http://download.oracle.com/otn_hosted_doc/sqldev/importddl/importddl.html
    Sooner or later your changes will require the table to be recreated and you'll need to back up your data - you can consider using the "Advanced DDL" option - a script will be generated that unloads the content of your table (including LOBs) to a file system accessible from the database and restores it after the changes. Just don't try it directly on a production system :).
    Philip
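    For reference, after the compare with "swap target" the generated script contains incremental statements along these lines (table and column names here are made up purely for illustration):
    ALTER TABLE orders ADD order_channel VARCHAR2(30);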

  • Is there any way to restore data truncated with TRUNCATE query in Oracle9i

    Hi, is there any way to restore data truncated with a TRUNCATE statement in an Oracle9i DB?
    Thanks in advance...

    Hi,
    you can flash back DML statements like INSERT, UPDATE and DELETE,
    but it is not possible for DDL like TRUNCATE, ALTER and DROP COLUMN, which change the structure of the table.
    e.g.
    SQL> select * from t2;
    C1
    3
    SQL> select dbms_flashback.get_system_change_number from dual;
    GET_SYSTEM_CHANGE_NUMBER
    496378
    SQL> truncate table t2;
    Table truncated.
    SQL> select * from t2 as of scn(496378);
    select * from t2 as of scn(496378)
    ERROR at line 1:
    ORA-01466: unable to read data - table definition has changed
    Andrey
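    By contrast, if the rows had been removed with DML rather than TRUNCATE, the same flashback query would work (a sketch reusing the table and SCN above; it assumes the undo generated by the DELETE is still available):
    SQL> delete from t2;
    SQL> commit;
    SQL> select * from t2 as of scn(496378);
    That query would return the row as it existed at SCN 496378.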

  • How to Restore Data.

    Hi
    I have accidentally truncated all the data in all of my tables. I have an RMAN backup from the previous day, and the deletion occurred at 9:15 am. I want to go back to 9:10 am. My database is in archivelog mode.
    How can I restore the data as of 9:10 am? Can anybody explain the RMAN commands?
    Arif.

    Set the NLS parameters at the shell prompt:
    NLS_LANG=american; export NLS_LANG
    NLS_DATE_FORMAT='Mon DD YYYY HH24:MI:SS'; export NLS_DATE_FORMAT
    At the RMAN prompt do the following (change the channel type to disk if restoring from disk media instead of tape):
    connect target /                          # connect to the target database
    connect rcvcat rman/rman@rcat             # connect to the recovery catalog database
    shutdown immediate;                       # shut down the target database for point-in-time recovery
    startup mount;                            # mount the target database
    run {
    set until time 'Dec 20 2004 09:10:00';    # time to which the database should be recovered
    allocate channel ch1 type 'sbt_tape';     # allocate a tape channel
    restore database;                         # restore the database from backup
    recover database;                         # recover the database by applying archived logs
    release channel ch1;                      # release the channel
    }
    alter database open resetlogs;            # open the database with RESETLOGS after a successful recovery

  • Can not restore data files from backup set

    I am trying to restore Server A's backup data to Server B (both are Oracle 11g) using RMAN. The restore commands are below:
    rman target /
    shutdown immediate;
    startup nomount;
    restore controlfile from '/usr/local/oracle/backup/20100418/ctl_xxx';
    alter database mount;
    catalog start with '/usr/local/oracle/backup/20100418/';
    restore database;
    recover database;
    alter database open resetlogs;
    The first time, this works. But when I tried to restore another backup the same way:
    rman target /
    shutdown immediate;
    startup nomount;
    restore controlfile from '/usr/local/oracle/backup/20100425/ctl_xxx';
    alter database mount;
    catalog start with '/usr/local/oracle/backup/20100425/';
    restore database;
    recover database;
    alter database open resetlogs;
    the second time I found that RMAN restored the old backup data, i.e. it restored the datafiles under '/usr/local/oracle/backup/20100418/' instead of '/usr/local/oracle/backup/20100425/'.
    So I ran 'list backup of database summary' to see the backup sets listed in the controlfile.
    List of Backups
    ===============
    Key TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
    910 B 0 A DISK 18-APR-10 1 1 NO TAG20100418T020007
    945 B 0 A DISK 25-APR-10 1 1 NO TAG20100425T020007
    But when I ran 'restore database preview summary' to see the backup sets that would be used for the restore:
    List of Backups
    ===============
    Key TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
    910 B 0 A DISK 18-APR-10 1 1 NO TAG20100418T020007
    there is no backup set 945 at all. That is why I could not restore the datafiles under '/usr/local/oracle/backup/20100425/' the second time.
    So why are the two backup lists different? How can I restore the datafiles under '/usr/local/oracle/backup/20100425/'?
    My backup script is below:
    run {
    allocate channel c1 type disk;
    backup incremental level 0 as backupset format '$DIR/`date +%Y%m%d`/data_%d_c0_%T_%u' database;
    sql 'alter system archive log current';
    backup archivelog from time 'sysdate-14' format '$DIR/`date +%Y%m%d`/log_%d_%T_%u';
    backup current controlfile format '$DIR/`date +%Y%m%d`/ctl_%d_%T_%I_%u';
    release channel c1;
    }
    Thanks

    yeah, I am sure Tag: TAG20100425T020007 exists
    RMAN> list backupset 945;
    List of Backup Sets
    BS Key Type LV Size Device Type Elapsed Time Completion Time
    945 Incr 0 6.40G DISK 00:05:46 25-APR-10
    BP Key: 945 Status: AVAILABLE Compressed: NO Tag: TAG20100425T020007
    Piece Name: /usr/local/oracle/backup/20100425/data_QIANGL_c0_20100425_thlbvjt7
    List of Datafiles in backup set 945
    File LV Type Ckp SCN Ckp Time Name
    1 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/system01.dbf
    2 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/sysaux01.dbf
    3 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/undotbs01.dbf
    4 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/users01.dbf
    5 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/dict01.dbf
    6 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/support01.dbf
    7 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/supportindex01.dbf
    8 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/log01.dbf
    9 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/logindex01.dbf
    10 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/lobindex01.dbf
    11 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/data01.dbf
    12 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/indexes01.dbf
    13 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/image001.dbf
    14 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/tongbuimage001.dbf
    15 0 Incr 2498880165194 25-APR-10 /usr/local/oracle/oradata/qiangl/imagebackup001.dbf
    My purpose is to use the newest backup set from Server A to update Server B, so that if Server A crashes, Server B will be usable. Is there any other way to do that?
    Retention policy: 'configure retention policy to redundancy 4'.
    Server A does a LV0 backup every 7 days and a LV1 backup every other day.
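    Once the 20100425 pieces are cataloged, one way to push RMAN past the older default (a sketch; the tag is the one shown in the listing above) is to restore by tag:
    RMAN> restore database from tag 'TAG20100425T020007';
    RMAN> recover database;
    RMAN> alter database open resetlogs;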

  • Unable to view data in some HR tables

    Since I upgraded to SQL Developer 2.1.1.64 I have not been able to view data in the Data tab for the following tables that I have run across: per_all_people_f, per_all_assignments_f or per_all_positions_f.
    Other tables that I have been using seem to be working fine, but these I need to use all the time. I get the column numbers returned, but no data or column headings.
    Another co-worker that has upgraded is experiencing the same problem.

    I could not get the DDL to show up on the SQL tab in version 2.1.1.64, but I did run it in version 1.5.4 and this is what was returned.
    -- Unable to Render DDL with DBMS_METADATA using internal generator.
    CREATE TABLE HR.PER_ALL_PEOPLE_F (
    PERSON_ID NUMBER(10, 0) NOT NULL,
    EFFECTIVE_START_DATE DATE NOT NULL,
    EFFECTIVE_END_DATE DATE NOT NULL,
    BUSINESS_GROUP_ID NUMBER(15, 0) NOT NULL,
    PERSON_TYPE_ID NUMBER(15, 0) NOT NULL,
    LAST_NAME VARCHAR2(150 BYTE) NOT NULL,
    START_DATE DATE NOT NULL,
    APPLICANT_NUMBER VARCHAR2(30 BYTE),
    BACKGROUND_CHECK_STATUS VARCHAR2(30 BYTE),
    BACKGROUND_DATE_CHECK DATE,
    BLOOD_TYPE VARCHAR2(30 BYTE),
    COMMENT_ID NUMBER(15, 0),
    CORRESPONDENCE_LANGUAGE VARCHAR2(30 BYTE),
    CURRENT_APPLICANT_FLAG VARCHAR2(30 BYTE),
    CURRENT_EMP_OR_APL_FLAG VARCHAR2(30 BYTE),
    CURRENT_EMPLOYEE_FLAG VARCHAR2(30 BYTE),
    DATE_EMPLOYEE_DATA_VERIFIED DATE,
    DATE_OF_BIRTH DATE,
    EMAIL_ADDRESS VARCHAR2(240 BYTE),
    EMPLOYEE_NUMBER VARCHAR2(30 BYTE),
    EXPENSE_CHECK_SEND_TO_ADDRESS VARCHAR2(30 BYTE),
    FAST_PATH_EMPLOYEE VARCHAR2(30 BYTE),
    FIRST_NAME VARCHAR2(150 BYTE),
    FTE_CAPACITY NUMBER(5, 2),
    FULL_NAME VARCHAR2(240 BYTE),
    HOLD_APPLICANT_DATE_UNTIL DATE,
    HONORS VARCHAR2(45 BYTE),
    INTERNAL_LOCATION VARCHAR2(45 BYTE),
    KNOWN_AS VARCHAR2(80 BYTE),
    LAST_MEDICAL_TEST_BY VARCHAR2(60 BYTE),
    LAST_MEDICAL_TEST_DATE DATE,
    MAILSTOP VARCHAR2(45 BYTE),
    MARITAL_STATUS VARCHAR2(30 BYTE),
    MIDDLE_NAMES VARCHAR2(60 BYTE),
    NATIONALITY VARCHAR2(30 BYTE),
    NATIONAL_IDENTIFIER VARCHAR2(30 BYTE),
    OFFICE_NUMBER VARCHAR2(45 BYTE),
    ON_MILITARY_SERVICE VARCHAR2(30 BYTE),
    ORDER_NAME VARCHAR2(240 BYTE),
    PRE_NAME_ADJUNCT VARCHAR2(30 BYTE),
    PREVIOUS_LAST_NAME VARCHAR2(150 BYTE),
    PROJECTED_START_DATE DATE,
    REHIRE_AUTHORIZOR VARCHAR2(30 BYTE),
    REHIRE_REASON VARCHAR2(60 BYTE),
    REHIRE_RECOMMENDATION VARCHAR2(30 BYTE),
    RESUME_EXISTS VARCHAR2(30 BYTE),
    RESUME_LAST_UPDATED DATE,
    REGISTERED_DISABLED_FLAG VARCHAR2(30 BYTE),
    SECOND_PASSPORT_EXISTS VARCHAR2(30 BYTE),
    SEX VARCHAR2(30 BYTE),
    STUDENT_STATUS VARCHAR2(30 BYTE),
    SUFFIX VARCHAR2(30 BYTE),
    TITLE VARCHAR2(30 BYTE),
    VENDOR_ID NUMBER(15, 0),
    WORK_SCHEDULE VARCHAR2(30 BYTE),
    WORK_TELEPHONE VARCHAR2(60 BYTE),
    COORD_BEN_MED_PLN_NO VARCHAR2(30 BYTE),
    COORD_BEN_NO_CVG_FLAG VARCHAR2(30 BYTE),
    DPDNT_ADOPTION_DATE DATE,
    DPDNT_VLNTRY_SVCE_FLAG VARCHAR2(30 BYTE),
    RECEIPT_OF_DEATH_CERT_DATE DATE,
    USES_TOBACCO_FLAG VARCHAR2(30 BYTE),
    BENEFIT_GROUP_ID NUMBER(15, 0),
    REQUEST_ID NUMBER(15, 0),
    PROGRAM_APPLICATION_ID NUMBER(15, 0),
    PROGRAM_ID NUMBER(15, 0),
    PROGRAM_UPDATE_DATE DATE,
    ATTRIBUTE_CATEGORY VARCHAR2(30 BYTE),
    ATTRIBUTE1 VARCHAR2(150 BYTE),
    ATTRIBUTE2 VARCHAR2(150 BYTE),
    ATTRIBUTE3 VARCHAR2(150 BYTE),
    ATTRIBUTE4 VARCHAR2(150 BYTE),
    ATTRIBUTE5 VARCHAR2(150 BYTE),
    ATTRIBUTE6 VARCHAR2(150 BYTE),
    ATTRIBUTE7 VARCHAR2(150 BYTE),
    ATTRIBUTE8 VARCHAR2(150 BYTE),
    ATTRIBUTE9 VARCHAR2(150 BYTE),
    ATTRIBUTE10 VARCHAR2(150 BYTE),
    ATTRIBUTE11 VARCHAR2(150 BYTE),
    ATTRIBUTE12 VARCHAR2(150 BYTE),
    ATTRIBUTE13 VARCHAR2(150 BYTE),
    ATTRIBUTE14 VARCHAR2(150 BYTE),
    ATTRIBUTE15 VARCHAR2(150 BYTE),
    ATTRIBUTE16 VARCHAR2(150 BYTE),
    ATTRIBUTE17 VARCHAR2(150 BYTE),
    ATTRIBUTE18 VARCHAR2(150 BYTE),
    ATTRIBUTE19 VARCHAR2(150 BYTE),
    ATTRIBUTE20 VARCHAR2(150 BYTE),
    ATTRIBUTE21 VARCHAR2(150 BYTE),
    ATTRIBUTE22 VARCHAR2(150 BYTE),
    ATTRIBUTE23 VARCHAR2(150 BYTE),
    ATTRIBUTE24 VARCHAR2(150 BYTE),
    ATTRIBUTE25 VARCHAR2(150 BYTE),
    ATTRIBUTE26 VARCHAR2(150 BYTE),
    ATTRIBUTE27 VARCHAR2(150 BYTE),
    ATTRIBUTE28 VARCHAR2(150 BYTE),
    ATTRIBUTE29 VARCHAR2(150 BYTE),
    ATTRIBUTE30 VARCHAR2(150 BYTE),
    LAST_UPDATE_DATE DATE,
    LAST_UPDATED_BY NUMBER(15, 0),
    LAST_UPDATE_LOGIN NUMBER(15, 0),
    CREATED_BY NUMBER(15, 0),
    CREATION_DATE DATE,
    PER_INFORMATION_CATEGORY VARCHAR2(30 BYTE),
    PER_INFORMATION1 VARCHAR2(150 BYTE),
    PER_INFORMATION2 VARCHAR2(150 BYTE),
    PER_INFORMATION3 VARCHAR2(150 BYTE),
    PER_INFORMATION4 VARCHAR2(150 BYTE),
    PER_INFORMATION5 VARCHAR2(150 BYTE),
    PER_INFORMATION6 VARCHAR2(150 BYTE),
    PER_INFORMATION7 VARCHAR2(150 BYTE),
    PER_INFORMATION8 VARCHAR2(150 BYTE),
    PER_INFORMATION9 VARCHAR2(150 BYTE),
    PER_INFORMATION10 VARCHAR2(150 BYTE),
    PER_INFORMATION11 VARCHAR2(150 BYTE),
    PER_INFORMATION12 VARCHAR2(150 BYTE),
    PER_INFORMATION13 VARCHAR2(150 BYTE),
    PER_INFORMATION14 VARCHAR2(150 BYTE),
    PER_INFORMATION15 VARCHAR2(150 BYTE),
    PER_INFORMATION16 VARCHAR2(150 BYTE),
    PER_INFORMATION17 VARCHAR2(150 BYTE),
    PER_INFORMATION18 VARCHAR2(150 BYTE),
    PER_INFORMATION19 VARCHAR2(150 BYTE),
    PER_INFORMATION20 VARCHAR2(150 BYTE),
    PER_INFORMATION21 VARCHAR2(150 BYTE),
    PER_INFORMATION22 VARCHAR2(150 BYTE),
    PER_INFORMATION23 VARCHAR2(150 BYTE),
    PER_INFORMATION24 VARCHAR2(150 BYTE),
    PER_INFORMATION25 VARCHAR2(150 BYTE),
    PER_INFORMATION26 VARCHAR2(150 BYTE),
    PER_INFORMATION27 VARCHAR2(150 BYTE),
    PER_INFORMATION28 VARCHAR2(150 BYTE),
    PER_INFORMATION29 VARCHAR2(150 BYTE),
    PER_INFORMATION30 VARCHAR2(150 BYTE),
    OBJECT_VERSION_NUMBER NUMBER(9, 0),
    DATE_OF_DEATH DATE,
    ORIGINAL_DATE_OF_HIRE DATE,
    TOWN_OF_BIRTH VARCHAR2(90 BYTE),
    REGION_OF_BIRTH VARCHAR2(90 BYTE),
    COUNTRY_OF_BIRTH VARCHAR2(90 BYTE),
    GLOBAL_PERSON_ID VARCHAR2(30 BYTE),
    COORD_BEN_MED_PL_NAME VARCHAR2(80 BYTE),
    COORD_BEN_MED_INSR_CRR_NAME VARCHAR2(80 BYTE),
    COORD_BEN_MED_INSR_CRR_IDENT VARCHAR2(80 BYTE),
    COORD_BEN_MED_EXT_ER VARCHAR2(80 BYTE),
    COORD_BEN_MED_CVG_STRT_DT DATE,
    COORD_BEN_MED_CVG_END_DT DATE,
    PARTY_ID NUMBER(15, 0),
    NPW_NUMBER VARCHAR2(30 BYTE),
    CURRENT_NPW_FLAG VARCHAR2(30 BYTE),
    GLOBAL_NAME VARCHAR2(240 BYTE),
    LOCAL_NAME VARCHAR2(240 BYTE)
    , CONSTRAINT PER_PEOPLE_F_PK PRIMARY KEY
    (PERSON_ID, EFFECTIVE_START_DATE, EFFECTIVE_END_DATE) ENABLE
    )
    TABLESPACE "HR_DATA_SPACE_01" LOGGING PCTFREE 10 PCTUSED 40 INITRANS 16 MAXTRANS 255
    STORAGE (INITIAL 48K NEXT 8000K MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 1 BUFFER_POOL DEFAULT);
    ALTER TABLE HR.PER_ALL_PEOPLE_F
    ADD CONSTRAINT PER_PEOPLE_F_FK1 FOREIGN KEY (BUSINESS_GROUP_ID)
    REFERENCES HR.HR_ALL_ORGANIZATION_UNITS (ORGANIZATION_ID) ENABLE;
    ALTER TABLE HR.PER_ALL_PEOPLE_F
    ADD CONSTRAINT PER_PEOPLE_F_FK2 FOREIGN KEY (PERSON_TYPE_ID)
    REFERENCES HR.PER_PERSON_TYPES (PERSON_TYPE_ID) ENABLE;
    ALTER TABLE HR.PER_ALL_PEOPLE_F
    ADD CONSTRAINT HR_PER_DATE_OF_DEATH CHECK (DATE_OF_DEATH >= DATE_OF_BIRTH) ENABLE;
    ALTER TABLE HR.PER_ALL_PEOPLE_F
    ADD CONSTRAINT PER_PER_ON_MILITARY_SRV_CHK CHECK (ON_MILITARY_SERVICE IN ('Y','N')) ENABLE;
    ALTER TABLE HR.PER_ALL_PEOPLE_F
    ADD CONSTRAINT PER_DPDNT_VLNTRY_SVCE_FLAG_CHK CHECK (DPDNT_VLNTRY_SVCE_FLAG IN ('Y','N')) ENABLE;
    ALTER TABLE HR.PER_ALL_PEOPLE_F
    ADD CONSTRAINT PER_PER_SECOND_PASSPORT_CHK CHECK (SECOND_PASSPORT_EXISTS IN ('Y','N')) ENABLE;
    ALTER TABLE HR.PER_ALL_PEOPLE_F
    ADD CONSTRAINT PER_PER_FAST_PATH_EMPLOYEE_CHK CHECK (FAST_PATH_EMPLOYEE IN ('Y','N')) ENABLE;
    ALTER TABLE HR.PER_ALL_PEOPLE_F
    ADD CONSTRAINT PER_COORD_BEN_NO_CVG_FLAG CHECK (COORD_BEN_NO_CVG_FLAG IN ('Y','N')) ENABLE;
    ALTER TABLE HR.PER_ALL_PEOPLE_F
    ADD CONSTRAINT PER_PER_RESUME_EXISTS_CHK CHECK (RESUME_EXISTS IN ('Y','N')) ENABLE;
    ALTER TABLE HR.PER_ALL_PEOPLE_F
    ADD CONSTRAINT PER_PER_SEX_CHK CHECK (SEX IN ('M', 'F')) ENABLE;
    ALTER TABLE HR.PER_ALL_PEOPLE_F
    ADD CONSTRAINT PER_PER_EXPENSE_CHECK_SEND_CHK CHECK (EXPENSE_CHECK_SEND_TO_ADDRESS IN ('H', 'O', 'P')) ENABLE;
    CREATE INDEX HR.CSUH_PPF_ATTR12_IDX ON HR.PER_ALL_PEOPLE_F (ATTRIBUTE12 ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 10 INITRANS 2 MAXTRANS 255
    STORAGE (INITIAL 1M NEXT 104K MINEXTENTS 1 MAXEXTENTS 8192 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.CSUH_PPF_ATTR1_IDX ON HR.PER_ALL_PEOPLE_F (ATTRIBUTE1 ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 0 INITRANS 16 MAXTRANS 255
    STORAGE (INITIAL 560K NEXT 160K MINEXTENTS 1 MAXEXTENTS 8192 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.PER_PEOPLE_F_N1 ON HR.PER_ALL_PEOPLE_F (UPPER(FULL_NAME) ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 10 INITRANS 11 MAXTRANS 255
    STORAGE (INITIAL 16K NEXT 256K MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 4 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.PER_PEOPLE_F_N2 ON HR.PER_ALL_PEOPLE_F (UPPER(LAST_NAME) ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 10 INITRANS 11 MAXTRANS 255
    STORAGE (INITIAL 16K NEXT 256K MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 4 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.PER_PEOPLE_F_N50 ON HR.PER_ALL_PEOPLE_F (LAST_NAME ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 0 INITRANS 16 MAXTRANS 255
    STORAGE (INITIAL 496K NEXT 160K MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 1 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.PER_PEOPLE_F_N51 ON HR.PER_ALL_PEOPLE_F (EMPLOYEE_NUMBER ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 0 INITRANS 16 MAXTRANS 255
    STORAGE (INITIAL 328K NEXT 80K MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 1 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.PER_PEOPLE_F_N52 ON HR.PER_ALL_PEOPLE_F (APPLICANT_NUMBER ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 0 INITRANS 16 MAXTRANS 255
    STORAGE (INITIAL 16K NEXT 8K MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 1 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.PER_PEOPLE_F_N53 ON HR.PER_ALL_PEOPLE_F (NATIONAL_IDENTIFIER ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 0 INITRANS 16 MAXTRANS 255
    STORAGE (INITIAL 496K NEXT 160K MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 1 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.PER_PEOPLE_F_N54 ON HR.PER_ALL_PEOPLE_F (FULL_NAME ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 0 INITRANS 16 MAXTRANS 255
    STORAGE (INITIAL 760K NEXT 240K MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 1 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.PER_PEOPLE_F_N55 ON HR.PER_ALL_PEOPLE_F (PARTY_ID ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 10 INITRANS 11 MAXTRANS 255
    STORAGE (INITIAL 16K NEXT 4M MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 4 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.PER_PEOPLE_F_N56 ON HR.PER_ALL_PEOPLE_F (NPW_NUMBER ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 10 INITRANS 11 MAXTRANS 255
    STORAGE (INITIAL 16K NEXT 4M MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 4 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.PER_PEOPLE_F_N57 ON HR.PER_ALL_PEOPLE_F (UPPER(GLOBAL_NAME) ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 10 INITRANS 11 MAXTRANS 255
    STORAGE (INITIAL 16K NEXT 256K MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 4 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.PER_PEOPLE_F_N58 ON HR.PER_ALL_PEOPLE_F (UPPER(LOCAL_NAME) ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 10 INITRANS 11 MAXTRANS 255
    STORAGE (INITIAL 16K NEXT 256K MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 4 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.PER_PEOPLE_F_N59 ON HR.PER_ALL_PEOPLE_F (EMAIL_ADDRESS ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 10 INITRANS 11 MAXTRANS 255
    STORAGE (INITIAL 16K NEXT 4M MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 4 BUFFER_POOL DEFAULT);
    CREATE INDEX HR.PER_PEOPLE_F_N60 ON HR.PER_ALL_PEOPLE_F (GLOBAL_NAME ASC)
    TABLESPACE "HR_INDEX_SPACE_01" LOGGING PCTFREE 10 INITRANS 11 MAXTRANS 255
    STORAGE (INITIAL 16K NEXT 4M MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 4 FREELIST GROUPS 4 BUFFER_POOL DEFAULT);

  • How to alter table in sql server2008

    Hi Friends, I have imported one table and want to alter it. How do I do this?

    Hi,
    You need the ALTER command for this, but you have to specify what you want to alter.
    For reference:
    To add a column to a table, use the following syntax:
    ALTER TABLE table_name
    ADD column_name datatype
    To delete a column from a table, use the following syntax (note that some database systems don't allow deleting a column):
    ALTER TABLE table_name
    DROP COLUMN column_name
    To change the data type of a column in a table, use the following syntax:
    ALTER TABLE table_name
    ALTER COLUMN column_name datatype
    http://www.techonthenet.com/sql/tables/alter_table.php
    http://www.tutorialspoint.com/sql/sql-alter-command.htm
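    Putting the syntax above together, a small worked example (dbo.Employees and HireDate are hypothetical names used only for illustration):
    ALTER TABLE dbo.Employees ADD HireDate date NULL;
    ALTER TABLE dbo.Employees ALTER COLUMN HireDate datetime2 NULL;
    ALTER TABLE dbo.Employees DROP COLUMN HireDate;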
    Thanks

  • Alter Table Add column not null default value

    I want to add two columns to a table, NOT NULL and with a default of 0 for both columns.
    Can I write the whole thing in one statement, or do I have to split it?
    I tried this, but it didn't work:
    alter table DWSODS01.DWT00301_ORD_DTL_OMS add (
    COMB_ORD_FLG NUMBER(5,0) default 0 not null,
    COMB_ORD_NO NUMBER(12,0)
    default 0 not null);
    How can I modify the code?

    user10390682 wrote:
    I tried this, but it didn't work
    Since you are specifying default values, it should work (regardless of whether table DWSODS01.DWT00301_ORD_DTL_OMS is empty or not):
    SQL> select count(*) from emp1
      2  /
      COUNT(*)
            14
    SQL> alter table emp1 add (
      2  COMB_ORD_FLG NUMBER(5,0) default 0 not null,
      3  COMB_ORD_NO NUMBER(12,0)
      4  default 0 not null);
    Table altered.
    SQL> desc emp1
    Name                                      Null?    Type
    EMPNO                                              NUMBER(4)
    ENAME                                              VARCHAR2(10)
    JOB                                                VARCHAR2(9)
    MGR                                                NUMBER(4)
    HIREDATE                                           DATE
    SAL                                                NUMBER(7,2)
    COMM                                               NUMBER(7,2)
    DEPTNO                                             NUMBER(2)
    COMB_ORD_FLG                              NOT NULL NUMBER(5)
    COMB_ORD_NO                               NOT NULL NUMBER(12)
    SQL>
    What error are you getting?
    SY.

  • Deadlocks with ALTER TABLE DISABLE CONSTRAINT

    Hello,
    We're deleting millions of redundant rows from a particular table in our live 10g database. This is being done online because the downtime would be unacceptable. The table in question has 30 child tables, so for speed I am disabling the foreign keys using ALTER TABLE DISABLE CONSTRAINT before the deletion (we haven't had any constraint violations for ages). Without this, deletion takes about 1 second per row i.e. a very long time.
    However, we're finding that ALTER TABLE DISABLE CONSTRAINT often reports ORA-00060: deadlock detected. This is causing problems with the live system. Can anyone think of the reason why a deadlock might occur in this situation and what we could do to prevent it happening? Note that any solution has to be doable without downtime unless it takes less than 30 minutes.
    Thanks a lot
    Ed

    Look at the suggestions in this similar thread:
    Re: Deadlock when deleting a not linked data record in a parent table
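    As a side note, one way to script the disable step across all 30 child tables is to generate the statements from the dictionary (a sketch; PARENT_TAB is a placeholder for the parent table name, and the same query with ENABLE rebuilds them afterwards):
    SELECT 'ALTER TABLE ' || owner || '.' || table_name ||
           ' DISABLE CONSTRAINT ' || constraint_name
    FROM   dba_constraints
    WHERE  constraint_type = 'R'
    AND    r_constraint_name IN (SELECT constraint_name
                                 FROM   dba_constraints
                                 WHERE  table_name = 'PARENT_TAB');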

  • How to select the data efficiently from the table

    Hi everyone,
    I need some help selecting data from the FAGLFLEXA table. I have to select many amounts from different groups of G/L accounts
    (the groups are predefined here, and each contains a set of G/L account numbers).
    If I run a separate SELECT for each group it will be a performance issue. To avoid that, what should I do? Can anyone suggest a method or a sample query so that I can perform the task efficiently?

    Hi,
    1. Select once and keep the data in an internal table.
    2. Avoid SELECT inside LOOP ... ENDLOOP.
    3. Try to use FOR ALL ENTRIES.
    Check the details below.
    Hi Praveen,
    Performance Notes
    1.Keep the Result Set Small
    You should aim to keep the result set small. This reduces both the amount of memory used in the database system and the network load when transferring data to the application server. To reduce the size of your result sets, use the WHERE and HAVING clauses.
    Using the WHERE Clause
    Whenever you access a database table, you should use a WHERE clause in the corresponding Open SQL statement. Even if a program containing a SELECT statement with no WHERE clause performs well in tests, it may slow down rapidly in your production system, where the data volume increases daily. You should only dispense with the WHERE clause in exceptional cases where you really need the entire contents of the database table every time the statement is executed.
    When you use the WHERE clause, the database system optimizes the access and only transfers the required data. You should never transfer unwanted data to the application server and then filter it using ABAP statements.
    Using the HAVING Clause
    After selecting the required lines in the WHERE clause, the system then processes the GROUP BY clause, if one exists, and summarizes the database lines selected. The HAVING clause allows you to restrict the grouped lines, and in particular, the aggregate expressions, by applying further conditions.
    Effect
    If you use the WHERE and HAVING clauses correctly:
    • There are no more physical I/Os in the database than necessary
    • No unwanted data is stored in the database cache (it could otherwise displace data that is actually required)
    • The CPU usage of the database host is minimized
    • The network load is reduced, since only the data that is required by the application is transferred to the application server.
    Minimize the Amount of Data Transferred
    Data is transferred between the database system and the application server in blocks. Each block is up to 32 KB in size (the precise size depends on your network communication hardware). Administration information is transported in the blocks as well as the data.
    To minimize the network load, you should transfer as few blocks as possible. Open SQL allows you to do this as follows:
    Restrict the Number of Lines
    If you only want to read a certain number of lines in a SELECT statement, use the UP TO <n> ROWS addition in the FROM clause. This tells the database system only to transfer <n> lines back to the application server. This is more efficient than transferring more lines than necessary back to the application server and then discarding them in your ABAP program.
    If you expect your WHERE clause to return a large number of duplicate entries, you can use the DISTINCT addition in the SELECT clause.
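    A minimal Open SQL sketch of the UP TO addition (sflight is the standard SAP demo flight table, used purely for illustration):
    DATA lt_flights TYPE TABLE OF sflight.
    * transfer at most 100 matching rows to the application server
    SELECT * FROM sflight
      INTO TABLE lt_flights
      UP TO 100 ROWS
      WHERE carrid = 'LH'.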
    Restrict the Number of Columns
    You should only read the columns from a database table that you actually need in the program. To do this, list the columns in the SELECT clause. Note here that the INTO CORRESPONDING FIELDS addition in the INTO clause is only efficient with large volumes of data, otherwise the runtime required to compare the names is too great. For small amounts of data, use a list of variables in the INTO clause.
    Do not use * to select all columns unless you really need them. However, if you list individual columns, you may have to adjust the program if the structure of the database table is changed in the ABAP Dictionary. If you specify the database table dynamically, you must always read all of its columns.
    Use Aggregate Functions
    If you only want to use data for calculations, it is often more efficient to use the aggregate functions of the SELECT clause than to read the individual entries from the database and perform the calculations in the ABAP program.
    Aggregate functions allow you to find out the number of values and find the sum, average, minimum, and maximum values.
    Following an aggregate expression, only its result is transferred from the database.
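    For example, letting the database add up a column instead of looping over the rows in ABAP (demo table again; only the single total crosses the network):
    DATA lv_occupied TYPE i.
    SELECT SUM( seatsocc ) FROM sflight
      INTO lv_occupied
      WHERE carrid = 'LH'.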
    Data Transfer when Changing Table Lines
    When you use the UPDATE statement to change lines in the table, you should use the WHERE clause to specify the relevant lines, and then SET statements to change only the required columns.
    When you use a work area to overwrite table lines, too much data is often transferred. Furthermore, this method requires an extra SELECT statement to fill the work area.
    Minimize the Number of Data Transfers
    In every Open SQL statement, data is transferred between the application server and the database system. Furthermore, the database system has to construct or reopen the appropriate administration data for each database access. You can therefore minimize the load on the network and the database system by minimizing the number of times you access the database.
    Multiple Operations Instead of Single Operations
    When you change data using INSERT, UPDATE, and DELETE, use internal tables instead of single entries. If you read data using SELECT, it is worth using multiple operations if you want to process the data more than once; otherwise, a simple SELECT loop is more efficient.
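    As an illustration, one array INSERT instead of inserting row by row (ztab stands in for any transparent table):
    DATA lt_rows TYPE TABLE OF ztab.
    * ... fill lt_rows ...
    * a single database round trip for all rows
    INSERT ztab FROM TABLE lt_rows.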
    Avoid Repeated Access
    As a rule you should read a given set of data once only in your program, and using a single access. Avoid accessing the same data more than once (for example, SELECT before an UPDATE).
    Avoid Nested SELECT Loops
    A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. You should therefore only use nested SELECT loops if the selection in the outer loop contains very few lines.
    However, using combinations of data from different database tables is more the rule than the exception in the relational data model. You can use the following techniques to avoid nested SELECT statements:
    ABAP Dictionary Views
    You can define joins between database tables statically and systemwide as views in the ABAP Dictionary. ABAP Dictionary views can be used by all ABAP programs. One of their advantages is that fields that are common to both tables (join fields) are only transferred once from the database to the application server.
    Views in the ABAP Dictionary are implemented as inner joins. If the inner table contains no lines that correspond to lines in the outer table, no data is transferred. This is not always the desired result. For example, when you read data from a text table, you want to include lines in the selection even if the corresponding text does not exist in the required language. If you want to include all of the data from the outer table, you can program a left outer join in ABAP.
    The links between the tables in the view are created and optimized by the database system. Like database tables, you can buffer views on the application server. The same buffering rules apply to views as to tables. In other words, it is most appropriate for views that you use mostly to read data. This reduces the network load and the amount of physical I/O in the database.
    Joins in the FROM Clause
    You can read data from more than one database table in a single SELECT statement by using inner or left outer joins in the FROM clause.
    The disadvantage of using joins is that redundant data is read from the hierarchically-superior table if there is a 1:N relationship between the outer and inner tables. This can considerably increase the amount of data transferred from the database to the application server. Therefore, when you program a join, you should ensure that the SELECT clause contains a list of only the columns that you really need. Furthermore, joins bypass the table buffer and read directly from the database. For this reason, you should use an ABAP Dictionary view instead of a join if you only want to read the data.
    The runtime of a join statement is heavily dependent on the database optimizer, especially when it contains more than two database tables. However, joins are nearly always quicker than using nested SELECT statements.
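    A sketch of such a join, selecting only the columns that are really needed (sflight/spfli are the demo tables; lt_result is typed to match the field list):
    TYPES: BEGIN OF ty_result,
             carrid   TYPE sflight-carrid,
             connid   TYPE sflight-connid,
             cityfrom TYPE spfli-cityfrom,
           END OF ty_result.
    DATA lt_result TYPE TABLE OF ty_result.
    * inner join instead of a nested SELECT loop; only three columns are transferred
    SELECT f~carrid f~connid p~cityfrom
      INTO TABLE lt_result
      FROM sflight AS f
      INNER JOIN spfli AS p
        ON f~carrid = p~carrid AND f~connid = p~connid
      WHERE f~carrid = 'LH'.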
    Subqueries in the WHERE and HAVING Clauses
    Another way of accessing more than one database table in the same Open SQL statement is to use subqueries in the WHERE or HAVING clause. The data from a subquery is not transferred to the application server. Instead, it is used to evaluate conditions in the database system. This is a simple and effective way of programming complex database operations.
    Using Internal Tables
    It is also possible to avoid nested SELECT loops by placing the selection from the outer loop in an internal table and then running the inner selection once only using the FOR ALL ENTRIES addition. This technique stems from the time before joins were allowed in the FROM clause. On the other hand, it does prevent redundant data from being transferred from the database.
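    A FOR ALL ENTRIES sketch (lt_conn is assumed to hold the keys from the outer selection; the emptiness check matters because an empty FOR ALL ENTRIES table would select every row):
    DATA lt_flights TYPE TABLE OF sflight.
    IF lt_conn IS NOT INITIAL.
      " one selection for all outer-loop keys instead of a nested SELECT loop
      SELECT * FROM sflight
        INTO TABLE lt_flights
        FOR ALL ENTRIES IN lt_conn
        WHERE carrid = lt_conn-carrid
          AND connid = lt_conn-connid.
    ENDIF.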
    Using a Cursor to Read Data
    A further method is to decouple the INTO clause from the SELECT statement by opening a cursor using OPEN CURSOR and reading data line by line using FETCH NEXT CURSOR. You must open a new cursor for each nested loop. In this case, you must ensure yourself that the correct lines are read from the database tables in the correct order. This usually requires a foreign key relationship between the database tables, and that they are sorted by the foreign key.
    Minimize the Search Overhead
    You minimize the size of the result set by using the WHERE and HAVING clauses. To increase the efficiency of these clauses, you should formulate them to fit with the database table indexes.
    Database Indexes
    Indexes speed up data selection from the database. They consist of selected fields of a table, of which a copy is then made in sorted order. If you specify the index fields correctly in a condition in the WHERE or HAVING clause, the system only searches part of the index (index range scan).
    The primary index is always created automatically in the R/3 System. It consists of the primary key fields of the database table. This means that for each combination of fields in the index, there is a maximum of one line in the table. This kind of index is also known as UNIQUE.
    If you cannot use the primary index to determine the result set because, for example, none of the primary index fields occur in the WHERE or HAVING clause, the system searches through the entire table (full table scan). For this case, you can create secondary indexes, which can restrict the number of table entries searched to form the result set.
    You specify the fields of secondary indexes using the ABAP Dictionary. You can also determine whether the index is unique or not. However, you should not create secondary indexes to cover all possible combinations of fields.
    Only create one if you select data by fields that are not contained in another index, and the performance is very poor. Furthermore, you should only create secondary indexes for database tables from which you mainly read, since indexes have to be updated each time the database table is changed. As a rule, secondary indexes should not contain more than four fields, and you should not have more than five indexes for a single database table.
    If a table has more than five indexes, you run the risk of the optimizer choosing the wrong one for a particular operation. For this reason, you should avoid indexes with overlapping contents.
    Secondary indexes should contain columns that you use frequently in a selection, and that are as highly selective as possible. The fewer table entries that can be selected by a certain column, the higher that column’s selectivity. Place the most selective fields at the beginning of the index. Your secondary index should be so selective that each index entry corresponds to at most five percent of the table entries. If this is not the case, it is not worth creating the index. You should also avoid creating indexes for fields that are not always filled, where their value is initial for most entries in the table.
    If all of the columns in the SELECT clause are contained in the index, the system does not have to search the actual table data after reading from the index. If you have a SELECT clause with very few columns, you can improve performance dramatically by including these columns in a secondary index.
    Formulating Conditions for Indexes
    You should bear in mind the following when formulating conditions for the WHERE and HAVING clauses so that the system can use a database index and does not have to use a full table scan.
    Check for Equality and Link Using AND
    The database index search is particularly efficient if you check all index fields for equality (= or EQ) and link the expressions using AND.
    Use Positive Conditions
    The database system only supports queries that describe the result in positive terms, for example, EQ or LIKE. It does not support negative expressions like NE or NOT LIKE.
    If possible, avoid using the NOT operator in the WHERE clause, because it is not supported by database indexes; invert the logical expression instead.
    Using OR
    The optimizer usually stops working when an OR expression occurs in the condition. This means that the columns checked using OR are not included in the index search. An exception to this are OR expressions at the outside of conditions. You should try to reformulate conditions that apply OR expressions to columns relevant to the index, for example, into an IN condition.
    Using Part of the Index
    If you construct an index from several columns, the system can still use it even if you only specify a few of the columns in a condition. However, in this case, the sequence of the columns in the index is important. A column can only be used in the index search if all of the columns before it in the index definition have also been specified in the condition.
    Checking for Null Values
    The IS NULL condition can cause problems with indexes. Some database systems do not store null values in the index structure. Consequently, this field cannot be used in the index.
    Avoid Complex Conditions
    Avoid complex conditions, since the statements have to be broken down into their individual components by the database system.
    Reduce the Database Load
    Unlike application servers and presentation servers, there is only one database server in your system. You should therefore aim to reduce the database load as much as possible. You can use the following methods:
    Buffer Tables on the Application Server
    You can considerably reduce the time required to access data by buffering it in the application server table buffer. Reading a single entry from table T001 can take between 8 and 600 milliseconds, while reading it from the table buffer takes 0.2 - 1 milliseconds.
    Whether a table can be buffered or not depends its technical attributes in the ABAP Dictionary. There are three buffering types:
    • Resident buffering (100%) The first time the table is accessed, its entire contents are loaded in the table buffer.
    • Generic buffering In this case, you need to specify a generic key (some of the key fields) in the technical settings of the table in the ABAP Dictionary. The table contents are then divided into generic areas. When you access data with one of the generic keys, the whole generic area is loaded into the table buffer. Client-specific tables are often buffered generically by client.
    • Partial buffering (single entry) Only single entries are read from the database and stored in the table buffer.
    When you read from buffered tables, the following happens:
    1. An ABAP program requests data from a buffered table.
    2. The ABAP processor interprets the Open SQL statement. If the table is defined as a buffered table in the ABAP Dictionary, the ABAP processor checks in the local buffer on the application server to see if the table (or part of it) has already been buffered.
    3. If the table has not yet been buffered, the request is passed on to the database. If the data exists in the buffer, it is sent to the program.
    4. The database server passes the data to the application server, which places it in the table buffer.
    5. The data is passed to the program.
    When you change a buffered table, the following happens:
    1. The database table is changed and the buffer on the application server is updated. The database interface logs the update statement in the table DDLOG. If the system has more than one application server, the buffer on the other servers is not updated at once.
    2. All application servers periodically read the contents of table DDLOG, and delete the corresponding contents from their buffers where necessary. The granularity depends on the buffering type. The table buffers in a distributed system are generally synchronized every 60 seconds (parameter: rsdisp/bufreftime).
    3. Within this period, users on non-synchronized application servers will read old data. The data is not recognized as obsolete until the next buffer synchronization. The next time it is accessed, it is re-read from the database.
    You should buffer the following types of tables:
    • Tables that are read very frequently
    • Tables that are changed very infrequently
    • Relatively small tables (few lines, few columns, or short columns)
    • Tables where delayed update is acceptable.
    Once you have buffered a table, take care not to use any Open SQL statements that bypass the buffer.
    The SELECT statement bypasses the buffer when you use any of the following:
    • The BYPASSING BUFFER addition in the FROM clause
    • The DISTINCT addition in the SELECT clause
    • Aggregate expressions in the SELECT clause
    • Joins in the FROM clause
    • The IS NULL condition in the WHERE clause
    • Subqueries in the WHERE clause
    • The ORDER BY clause
    • The GROUP BY clause
    • The FOR UPDATE addition
    Furthermore, all Native SQL statements bypass the buffer.
    Avoid Reading Data Repeatedly
    If you avoid reading the same data repeatedly, you both reduce the number of database accesses and reduce the load on the database. Furthermore, a "dirty read" may occur with database tables other than Oracle. This means that the second time you read data from a database table, it may be different from the data read the first time. To ensure that the data in your program is consistent, you should read it once only and then store it in an internal table.
    Sort Data in Your ABAP Programs
    The ORDER BY clause in the SELECT statement is not necessarily optimized by the database system or executed with the correct index. This can result in increased runtime costs. You should only use ORDER BY if the database sort uses the same index with which the table is read. To find out which index the system uses, use SQL Trace in the ABAP Workbench Performance Trace. If the indexes are not the same, it is more efficient to read the data into an internal table or extract and sort it in the ABAP program using the SORT statement.
    Use Logical Databases
    SAP supplies logical databases for all applications. A logical database is an ABAP program that decouples Open SQL statements from application programs. They are optimized for the best possible database performance. However, it is important that you use the right logical database. The hierarchy of the data you want to read must reflect the structure of the logical database, otherwise, they can have a negative effect on performance. For example, if you want to read data from a table right at the bottom of the hierarchy of the logical database, it has to read at least the key fields of all tables above it in the hierarchy. In this case, it is more efficient to use a SELECT statement.
    Work Processes
    Work processes execute the individual dialog steps in R/3 applications. The next two sections describe firstly the structure of a work process, and secondly the different types of work process in the R/3 System.
    Structure of a Work Process
    Work processes execute the dialog steps of application programs. They are components of an application server. The following diagram shows the components of a work process:
    Each work process contains two software processors and a database interface.
    Screen Processor
    In R/3 application programming, there is a difference between user interaction and processing logic. From a programming point of view, user interaction is controlled by screens. As well as the actual input mask, a screen also consists of flow logic. The screen flow logic controls a large part of the user interaction. The R/3 Basis system contains a special language for programming screen flow logic. The screen processor executes the screen flow logic. Via the dispatcher, it takes over the responsibility for communication between the work process and the SAPgui, calls modules in the flow logic, and ensures that the field contents are transferred from the screen to the flow logic.
    ABAP Processor
    The actual processing logic of an application program is written in ABAP - SAP’s own programming language. The ABAP processor executes the processing logic of the application program, and communicates with the database interface. The screen processor tells the ABAP processor which module of the screen flow logic should be processed next. The following screen illustrates the interaction between the screen and the ABAP processors when an application program is running.
    Database Interface
    The database interface provides the following services:
    • Establishing and terminating connections between the work process and the database.
    • Access to database tables
    • Access to R/3 Repository objects (ABAP programs, screens and so on)
    • Access to catalog information (ABAP Dictionary)
    • Controlling transactions (commit and rollback handling)
    • Table buffer administration on the application server.
    The following diagram shows the individual components of the database interface:
    The diagram shows that there are two different ways of accessing databases: Open SQL and Native SQL.
    Open SQL statements are a subset of Standard SQL that is fully integrated in ABAP. They allow you to access data irrespective of the database system that the R/3 installation is using. Open SQL consists of the Data Manipulation Language (DML) part of Standard SQL; in other words, it allows you to read (SELECT) and change (INSERT, UPDATE, DELETE) data. The tasks of the Data Definition Language (DDL) and Data Control Language (DCL) parts of Standard SQL are performed in the R/3 System by the ABAP Dictionary and the authorization system. These provide a unified range of functions, irrespective of database, and also contain functions beyond those offered by the various database systems.
    Open SQL also goes beyond Standard SQL to provide statements that, in conjunction with other ABAP constructions, can simplify or speed up database access. It also allows you to buffer certain tables on the application server, saving excessive database access. In this case, the database interface is responsible for comparing the buffer with the database. Buffers are partly stored in the working memory of the current work process, and partly in the shared memory for all work processes on an application server. Where an R/3 System is distributed across more than one application server, the data in the various buffers is synchronized at set intervals by the buffer management. When buffering the database, you must remember that data in the buffer is not always up to date. For this reason, you should only use the buffer for data which does not often change.
    Native SQL is only loosely integrated into ABAP, and allows access to all of the functions contained in the programming interface of the respective database system. Unlike Open SQL statements, Native SQL statements are not checked and converted, but instead are sent directly to the database system. Programs that use Native SQL are specific to the database system for which they were written. R/3 applications contain as little Native SQL as possible. In fact, it is only used in a few Basis components (for example, to create or change table definitions in the ABAP Dictionary).
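    For comparison, a Native SQL sketch: the statement between EXEC SQL and ENDEXEC is passed to the database unchanged (so it may be database-specific), host variables carry a colon prefix, and - unlike in Open SQL - the client field must be handled by hand:
    DATA lv_cityfrom TYPE spfli-cityfrom.
    EXEC SQL.
      SELECT cityfrom INTO :lv_cityfrom FROM spfli
        WHERE mandt = :sy-mandt
          AND carrid = 'LH' AND connid = '0400'
    ENDEXEC.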
    The database-dependent layer in the diagram serves to hide the differences between database systems from the rest of the database interface. You choose the appropriate layer when you install the Basis system. Thanks to the standardization of SQL, the differences in the syntax of statements are very slight. However, the semantics and behavior of the statements have not been fully standardized, and the differences in these areas can be greater. When you use Native SQL, the function of the database-dependent layer is minimal.
    Types of Work Process
    Although all work processes contain the components described above, they can still be divided into different types. The type of a work process determines the kind of task for which it is responsible in the application server. It does not specify a particular set of technical attributes. The individual tasks are distributed to the work processes by the dispatcher.
    Before you start your R/3 System, you determine how many work processes it will have, and what their types will be. The dispatcher starts the work processes and only assigns them tasks that correspond to their type. This means that you can distribute work process types to optimize the use of the resources on your application servers.
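    For illustration, the work process mix is typically configured in the instance profile before system start, along these lines (a sketch; the rdisp/* parameter names are the standard ones, the values purely illustrative):
    rdisp/wp_no_dia = 10   # dialog work processes
    rdisp/wp_no_btc = 3    # background work processes
    rdisp/wp_no_vb  = 2    # update work processes
    rdisp/wp_no_enq = 1    # enqueue work process
    rdisp/wp_no_spo = 1    # spool work process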
    The following diagram shows again the structure of an application server, but this time, includes the various possible work process types:
    The various work processes are described briefly below. Other parts of this documentation describe the individual components of the application server and the R/3 System in more detail.
    Dialog Work Process
    Dialog work processes deal with requests from an active user to execute dialog steps.
    Update Work Process
    Update work processes execute database update requests. Update requests are part of an SAP LUW; they bundle the database operations resulting from the dialog into a single database LUW, which is processed in the background.
    Background Work Process
    Background work processes process programs that can be executed without user interaction (background jobs).
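    A sketch of how a program hands work to a background work process using the standard job API (the report name ZREPORT is hypothetical):
    DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'ZDEMO_JOB',
          lv_jobcount TYPE tbtcjob-jobcount.
    " Open a job definition
    CALL FUNCTION 'JOB_OPEN'
      EXPORTING
        jobname  = lv_jobname
      IMPORTING
        jobcount = lv_jobcount.
    " Register the report as a job step
    SUBMIT zreport VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.
    " Close the job and release it for immediate background execution
    CALL FUNCTION 'JOB_CLOSE'
      EXPORTING
        jobname   = lv_jobname
        jobcount  = lv_jobcount
        strtimmed = 'X'.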
    Enqueue Work Process
    The enqueue work process administers a lock table in the shared memory area. The lock table contains the logical database locks for the R/3 System and is an important part of the SAP LUW concept. In an R/3 System, you may only have one lock table. You may therefore also only have one application server with enqueue work processes.
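    Programs request these logical locks by calling the enqueue/dequeue function modules that the ABAP Dictionary generates for a lock object; a sketch (ENQUEUE_EZORDER, DEQUEUE_EZORDER, and the key field ORDER_NO are hypothetical names):
    " Request the logical lock before changing the order
    CALL FUNCTION 'ENQUEUE_EZORDER'
      EXPORTING
        order_no       = '4711'
      EXCEPTIONS
        foreign_lock   = 1    " another user already holds the lock
        system_failure = 2
        OTHERS         = 3.
    IF sy-subrc <> 0.
      " lock held by another user: handle the conflict
      EXIT.
    ENDIF.
    " ... change the order ...
    " Release the lock again
    CALL FUNCTION 'DEQUEUE_EZORDER'
      EXPORTING
        order_no = '4711'.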
    Spool Work Process
    The spool work process passes sequential datasets to a printer or to optical archiving. Each application server may contain several spool work processes.
    The services offered by an application server are determined by the types of its work processes. One application server may, of course, have more than one function. For example, it may be both a dialog server and the enqueue server, if it has several dialog work processes and an enqueue work process.
    You can use the system administration functions to switch a work process between dialog and background modes while the system is still running. This allows you, for example, to switch an R/3 System between day and night operation, where you have more dialog than background work processes during the day, and the other way around during the night.
    ABAP Application Server
    R/3 programs run on application servers. They are an important component of the R/3 System. The following sections describe application servers in more detail.
    Structure of an ABAP Application Server
    The application layer of an R/3 System is made up of the application servers and the message server. Application programs in an R/3 System are run on application servers. The application servers communicate with the presentation components and the database, and also with each other via the message server.
    The following diagram shows the structure of an application server:
    The individual components are:
    Work Processes
    An application server contains work processes, which are components that can each execute one dialog step of an application. Each work process is linked to a memory area containing the context of the application being run. The context contains the current data for the application program, which needs to be available in each dialog step. Further information about the different types of work process is given later on in this documentation.
    Dispatcher
    Each application server contains a dispatcher. The dispatcher is the link between the work processes and the users logged onto the application server. Its task is to receive requests for dialog steps from the SAP GUI and direct them to a free work process. In the same way, it directs screen output resulting from the dialog step back to the appropriate user.
    Gateway
    Each application server contains a gateway. This is the interface for the R/3 communication protocols (RFC, CPI/C). It can communicate with other application servers in the same R/3 System, with other R/3 Systems, with R/2 Systems, or with non-SAP systems.
    The application server structure as described here aids the performance and scalability of the entire R/3 System. The fixed number of work processes and dispatching of dialog steps leads to optimal memory use, since it means that certain components and the memory areas of a work process are application-independent and reusable. The fact that the individual work processes work independently makes them suitable for a multi-processor architecture. The methods used in the dispatcher to distribute tasks to work processes are discussed more closely in the section Dispatching Dialog Steps.
    Shared Memory
    All of the work processes on an application server use a common main memory area called shared memory to save contexts or to buffer constant data locally.
    The resources that all work processes use (such as programs and table contents) are contained in shared memory. Memory management in the R/3 System ensures that the work processes always address the correct context, that is the data relevant to the current state of the program that is running. A mapping process projects the required context for a dialog step from shared memory into the address of the relevant work process. This reduces the actual copying to a minimum.
    Local buffering of data in the shared memory of the application server reduces the number of database reads required. This reduces access times for application programs considerably. For optimal use of the buffer, you can concentrate individual applications (financial accounting, logistics, human resources) into separate application server groups.
    Database Connection
    When you start up an R/3 System, each application server registers its work processes with the database layer, and receives a single dedicated channel for each. While the system is running, each work process is a user (client) of the database system (server). You cannot change the work process registration while the system is running. Neither can you reassign a database channel from one work process to another. For this reason, a work process can only make database changes within a single database logical unit of work (LUW). A database LUW is an inseparable sequence of database operations. This has important consequences for the programming model explained below.
    Dispatching Dialog Steps
    The number of users logged onto an application server is often many times greater than the number of available work processes, and it is not restricted by the R/3 System architecture. Furthermore, each user can run several applications at once. The dispatcher has the important task of distributing all dialog steps among the work processes on the application server.
    The following diagram is an example of how this might happen:
    1. The dispatcher receives the request to execute a dialog step from user 1 and directs it to work process 1, which happens to be free. The work process addresses the context of the application program (in shared memory) and executes the dialog step. It then becomes free again.
    2. The dispatcher receives the request to execute a dialog step from user 2 and directs it to work process 1, which is now free again. The work process executes the dialog step as in step 1.
    3. While work process 1 is still working, the dispatcher receives a further request from user 1 and directs it to work process 2, which is free.
    4. After work processes 1 and 2 have finished processing their dialog steps, the dispatcher receives another request from user 1 and directs it to work process 1, which is free again.
    5. While work process 1 is still working, the dispatcher receives a further request from user 2 and directs it to work process 2, which is free.
    From this example, we can see that:
    • A dialog step from a program is assigned to a single work process for execution.
    • The individual dialog steps of a program can be executed on different work processes, and the program context must be addressed for each new work process.
    • A work process can execute dialog steps of different programs from different users.
    The example does not show that the dispatcher tries to distribute the requests to the work processes such that the same work process is used as often as possible for the successive dialog steps in an application. This is useful, since it saves the program context having to be addressed each time a dialog step is executed.
    Dispatching and the Programming Model
    The separation of application and presentation layer made it necessary to split up application programs into dialog steps. This, and the fact that dialog steps are dispatched to individual work processes, has had important consequences for the programming model.
    As mentioned above, a work process can only make database changes within a single database logical unit of work (LUW). A database LUW is an inseparable sequence of database operations. The contents of the database must be consistent at its beginning and end. The beginning and end of a database LUW are defined by a commit command to the database system (database commit). During a database LUW, that is, between two database commits, the database system itself ensures consistency within the database. In other words, it takes over tasks such as locking database entries while they are being edited, or restoring the old data (rollback) if a step terminates in an error.
    A typical SAP application program extends over several screens and the corresponding dialog steps. The user requests database changes on the individual screens that should lead to the database being consistent once the screens have all been processed. However, the individual dialog steps run on different work processes, and a single work process can process dialog steps from other applications. It is clear that two or more independent applications whose dialog steps happen to be processed on the same work process cannot be allowed to work with the same database LUW.
    Consequently, a work process must open a separate database LUW for each dialog step. The work process sends a commit command (database commit) to the database at the end of each dialog step in which it makes database changes. These commit commands are called implicit database commits, since they are not explicitly written into the application program.
    These implicit database commits mean that a database LUW can be kept open for a maximum of one dialog step. This leads to a considerable reduction in database load, serialization, and deadlocks, and enables a large number of users to use the same system.
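    In ABAP terms, any statement that releases the work process, such as sending a screen, implicitly ends the current database LUW (a sketch; the screen number is illustrative):
    UPDATE spfli SET deptime = '100000'
      WHERE carrid = 'LH' AND connid = '0400'.
    " End of the dialog step: the work process is released, an
    " implicit database commit occurs, and the UPDATE above can
    " no longer be rolled back.
    CALL SCREEN 0200.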
    However, the question now arises of how this method (1 dialog step = 1 database LUW) can be reconciled with the demand to make commits and rollbacks dependent on the logical flow of the application program instead of the technical distribution of dialog steps. Database update requests that depend on one another form logical units in the program that extend over more than one dialog step. The database changes associated with these logical units must be executed together and must also be able to be undone together.
    The SAP programming model contains a series of bundling techniques that allow you to group database updates together in logical units. The section of an R/3 application program that bundles a set of logically associated database operations is called an SAP LUW. Unlike a database LUW, an SAP LUW includes all of the dialog steps in a logical unit, including the database update.
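    The classic bundling technique looks roughly like this (a sketch; ZUPD_SAVE_ORDER is a hypothetical function module flagged as an update module, and the ZORDER structure is invented):
    DATA ls_order TYPE zorder.
    " Only registers the update request; nothing is written yet
    CALL FUNCTION 'ZUPD_SAVE_ORDER' IN UPDATE TASK
      EXPORTING
        is_order = ls_order.
    " ... further dialog steps may register more update requests ...
    " Closes the SAP LUW: an update work process now executes all
    " registered requests inside a single database LUW
    COMMIT WORK.
    " ROLLBACK WORK instead would discard every registered request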
    Happy Reading...
    shibu

  • Multiple Alter Table Statements in one batch

    Hi Team,
    In one of our upcoming releases, two columns are being added to a table that has over 20 million records and 14 indexes.
    We needed to add two columns to the table, both NOT NULL (bit). Because adding the columns was taking a while, we thought that putting the two ALTER statements in one batch would speed up the operation significantly, but to my surprise it did not.
    Conclusion from my test: individual ALTER statements and batched ALTER statements take the same time.
    Here are my tests and results; tables Order1 and Order2 have exactly the same structure and data.
    Test case 1:
    ===================
    ALTER TABLE Order1
    ADD OR_N BIT DEFAULT 0 NOT NULL
    go
    ALTER TABLE Order1
    ADD OR_S BIT DEFAULT 0 NOT NULL
    Go
    Elapsed Time: 2 hrs
                 Mar 18 2015 5:56PM
    (1 row affected)
    Non-clustered index (index id = 3) is being rebuilt.
    Non-clustered index (index id = 4) is being rebuilt.
    Non-clustered index (index id = 5) is being rebuilt.
    Non-clustered index (index id = 6) is being rebuilt.
    Non-clustered index (index id = 7) is being rebuilt.
    Non-clustered index (index id = 8) is being rebuilt.
    Non-clustered index (index id = 9) is being rebuilt.
    Non-clustered index (index id = 10) is being rebuilt.
    Non-clustered index (index id = 11) is being rebuilt.
    Non-clustered index (index id = 12) is being rebuilt.
    Non-clustered index (index id = 13) is being rebuilt.
    Non-clustered index (index id = 14) is being rebuilt.
    (21777920 rows affected)
    Non-clustered index (index id = 3) is being rebuilt.
    Non-clustered index (index id = 4) is being rebuilt.
    Non-clustered index (index id = 5) is being rebuilt.
    Non-clustered index (index id = 6) is being rebuilt.
    Non-clustered index (index id = 7) is being rebuilt.
    Non-clustered index (index id = 8) is being rebuilt.
    Non-clustered index (index id = 9) is being rebuilt.
    Non-clustered index (index id = 10) is being rebuilt.
    Non-clustered index (index id = 11) is being rebuilt.
    Non-clustered index (index id = 12) is being rebuilt.
    Non-clustered index (index id = 13) is being rebuilt.
    Non-clustered index (index id = 14) is being rebuilt.
    (21777920 rows affected)
                 Mar 18 2015 7:52PM
    Test case 2:
    ===================
    ALTER TABLE Order2
    ADD OR_N BIT DEFAULT 0 NOT NULL, OR_S BIT DEFAULT 0 NOT NULL
    go
    2 hrs elapsed time
                 Mar 20 2015 11:10AM
    (1 row affected)
    Non-clustered index (index id = 3) is being rebuilt.
    Non-clustered index (index id = 4) is being rebuilt.
    Non-clustered index (index id = 5) is being rebuilt.
    Non-clustered index (index id = 6) is being rebuilt.
    Non-clustered index (index id = 7) is being rebuilt.
    Non-clustered index (index id = 8) is being rebuilt.
    Non-clustered index (index id = 9) is being rebuilt.
    Non-clustered index (index id = 10) is being rebuilt.
    Non-clustered index (index id = 11) is being rebuilt.
    Non-clustered index (index id = 12) is being rebuilt.
    Non-clustered index (index id = 13) is being rebuilt.
    Non-clustered index (index id = 14) is being rebuilt.
    (21777920 rows affected)
                 Mar 20 2015 1:12PM

    Hi Kiran,
    I have read your response a few times and I was not able to understand your angle. Based on the results of my test, I assume that Sybase processes the ALTER statements as follows:
    ALTER TABLE Order2
    ADD OR_N BIT DEFAULT 0 NOT NULL, OR_S BIT DEFAULT 0 NOT NULL
    go
    process ALTER ADD OR_N BIT
      --> make a copy of the table
      --> alter the original table
      --> put the data back in
    process ALTER ADD OR_S BIT
      --> make a copy of the table
      --> alter the original table
      --> put the data back in
    rebuild indexes
    My expectation was that it would make a copy of the table only once and process both ALTER statements against it. Also, when running the ALTER statements separately (test 1), the indexes were rebuilt twice; with the batch, they were rebuilt only once (at least, the rebuild messages appeared only once).
    Regards.
