R12 on Oracle Linux: how to tune R12?

When I log in to R12 and open a new page, the system is very busy.
sar reports idle at 0.
top shows idle under 50%.
The areas for performance tuning seem to be the database, the JVM, and so on. Could you please give the details, or some methods for allocating the resources the system provides, with reasons?
regards
maratsafin

Top Oracle Tuning Tips By Alan Kendall May 2007
#1) Set your pga_aggregate_target large enough that the average disk sort is greater than 200 MB. That way most sorts will be done in memory and I/O waits on sorting will not slow down other processing.
DISK_SORTS MEMORY_SORTS PCT_DISK_SORTS
95 14030323 0
AVERAGE_DISK_SORT_IN_MEG
241.61179
VALUE NAME
740294656 pga_aggregate_target
AUTO workarea_size_policy
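The post does not include the script behind these figures; here is a hedged sketch of queries that could produce the sort counts and parameter values above from v$sysstat and v$parameter (the percentage column is my own arithmetic, and the average-disk-sort-in-meg figure is not covered by this sketch):
SELECT d.value                                                AS disk_sorts,
       m.value                                                AS memory_sorts,
       ROUND(100 * d.value / NULLIF(d.value + m.value, 0), 2) AS pct_disk_sorts
FROM   (SELECT value FROM v$sysstat WHERE name = 'sorts (disk)')   d,
       (SELECT value FROM v$sysstat WHERE name = 'sorts (memory)') m;
SELECT value, name
FROM   v$parameter
WHERE  name IN ('pga_aggregate_target', 'workarea_size_policy');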
#2) When you calculate the buffer hit ratio, also calculate the average physical reads per hour and try to keep it below a million per hour.
In the following example the physical reads per hour are above a million, so I increased the block buffers by 50%.
INSTANCE_N UPDAYS CONSISTENT BLKHIT PHYSRDS_PER_HOUR
CURACAO9 86.37 430521443475 99.16 1766521
STARTUP_NAME MEMORY_VALUE
db_block_buffers 240000
Alter system set db_block_buffers=360000 scope=spfile;
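A hedged sketch of how numbers like these can be derived (the hit-ratio and per-hour arithmetic are assumptions based on the standard v$sysstat counters, not the author's exact report):
SELECT i.instance_name,
       ROUND(SYSDATE - i.startup_time, 2)                         AS updays,
       ROUND(100 * (1 - phy.value / (cons.value + dbg.value)), 2) AS blkhit,
       ROUND(phy.value / ((SYSDATE - i.startup_time) * 24))       AS physrds_per_hour
FROM   v$instance i,
       (SELECT value FROM v$sysstat WHERE name = 'physical reads')  phy,
       (SELECT value FROM v$sysstat WHERE name = 'consistent gets') cons,
       (SELECT value FROM v$sysstat WHERE name = 'db block gets')   dbg;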
#3) Set log_buffer=14 MB (the 10g default on Unix) because today's CPUs can write several megabytes per second.
SYS AS SYSDBA> alter system set log_buffer=163840 scope=spfile;
SYS AS SYSDBA> shutdown immediate;
SYS AS SYSDBA> startup
SYS AS SYSDBA> @insert_a_million_rows
Elapsed: 00:00:01.89 (time to insert a million rows is 1.89 seconds.)
SYS AS SYSDBA> alter system set log_buffer=14680064 scope=spfile;
SYS AS SYSDBA> shutdown immediate;
SYS AS SYSDBA> startup
SYS AS SYSDBA> @insert_a_million_rows
Elapsed: 00:00:01.84 (time to insert a million rows is now about 2% better.)
In this case, the 2% benefit was not that much but on some systems where there is heavy write contention, it is not uncommon to get 35% to 50% write benefits.
#4) Make sure you are not doing log swaps more than three per hour.
SQL> select count(*)/5/24 average_log_swaps_per_hour from v$loghist where first_time > sysdate-5;
AVERAGE_LOG_SWAPS_PER_HOUR
5.48333333
In this case the average is greater than three, so we need to drop the redo log groups and recreate them with larger members. This can be done safely online.
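A hedged sketch of the online recreate (group numbers, sizes, and file paths below are illustrative, not from the post): add new, larger groups, switch out of the old ones, then drop them once V$LOG shows them INACTIVE.
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u02/oradata/PROD/redo04.log') SIZE 512M;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u02/oradata/PROD/redo05.log') SIZE 512M;
ALTER DATABASE ADD LOGFILE GROUP 6 ('/u02/oradata/PROD/redo06.log') SIZE 512M;
ALTER SYSTEM SWITCH LOGFILE;   -- repeat until groups 1-3 show INACTIVE in V$LOG
ALTER DATABASE DROP LOGFILE GROUP 1;
ALTER DATABASE DROP LOGFILE GROUP 2;
ALTER DATABASE DROP LOGFILE GROUP 3;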
#5) Run statspack or AWR and identify the top waits in Oracle. Oracle tells us what it is waiting on, and we should pay attention to what it is telling us. The following example shows the top wait events that statspack would report (I queried sys.v_$system_event directly to get this report of waits since startup).
Seconds_Wait WAITS_PER_HOUR %_wait EVENT_NAME
45,859,349 22116.2837 79 db file sequential read
4,188,633 2020.024 7 library cache load lock
3,878,043 1870.23811 6 buffer busy waits
1,592,185 767.852326 2 library cache pin
748,263 360.860025 1 db file scattered read
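The author queried sys.v_$system_event directly; a hedged sketch of a query that could produce a report in this shape (the per-hour and percentage arithmetic are assumptions, and idle events are not filtered out here):
SELECT e.time_waited / 100                                 AS seconds_wait,
       e.total_waits / ((SYSDATE - i.startup_time) * 24)   AS waits_per_hour,
       ROUND(100 * RATIO_TO_REPORT(e.time_waited) OVER ()) AS pct_wait,
       e.event                                             AS event_name
FROM   v$system_event e, v$instance i
ORDER  BY e.time_waited DESC;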
We see that there is a large amount of waiting on sequential reads. This can be helped by increasing the buffer cache (and archiving historic rows), better indexing, tuning SQL, and upgrading the hardware.
The system is also latching on the library cache, and v$latch shows this:
ECURACAO9 > select name,wait_time from v$latch
2 where wait_time > (select 1*avg(wait_time) from v$latch)
3 order by wait_time;
NAME WAIT_TIME
library cache pin allocation 7484686
process allocation 10019421
row cache objects 23045242
parallel query alloc buffer 33908108
library cache load lock 54770430
SQL memory manager latch 108655668
library cache 139125871
session allocation 174105073
shared pool 278591232
library cache pin 803601839
This can be helped by increasing the shared_pool, setting cursor_sharing=similar, adding bind variables to the code, pinning packages, and in this case upgrading Oracle because of a bug in this version.
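A hedged sketch of pinning a heavily used package in the shared pool (the package name is only an example; DBMS_SHARED_POOL is created by dbmspool.sql if it is not already installed):
BEGIN
  DBMS_SHARED_POOL.KEEP('SYS.STANDARD', 'P');  -- 'P' = package/procedure/function
END;
/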
#6) Look at sar and identify whether we have mostly a CPU problem, an I/O problem, or no problem.
a) If a CPU problem, identify the worst CPU hogs with:
select cpu_time,sql_text from v$sqlarea order by cpu_time desc;
b) If an I/O problem, identify the worst I/O hogs with:
select disk_reads,sql_text from v$sqlarea order by disk_reads desc;
#7) Look at the logical reads and writes versus the physical reads and writes:
EDEVPROD> alter system set statistics_level=TYPICAL scope=both;
System altered.
EDEVPROD > select sum(value),statistic_name,owner||'.'||object_name
2 from v$segment_statistics
3 where upper(statistic_name) like '%WRITE%' or
4 upper(statistic_name) like '%READ%' or
5 upper(statistic_name) like '%LOGICAL%'
6 group by statistic_name,owner||'.'||object_name
7 order by sum(value);
SUM(VALUE) STATISTIC_NAME OWNER||'.'||OBJECT_NAME
249477 physical reads GUIDE_PROD.SAVED_FOLDERS
251904 logical reads SYS.I_OBJAUTH2
286880 logical reads SYS.I_SYSAUTH1
301760 logical reads SYS.DUAL
307696 logical reads SYS.I_ARGUMENT2
348992 logical reads SYSTEM.AQ$_QUEUE_TABLES
361168 logical reads GUIDE_PROD.XPKSEARCH_KEYWORDS
463728 logical reads SYS.OBJ$
639504 logical reads GUIDE_PROD.SRATING_RATINGS
668688 logical reads GUIDE_PROD.REPLOG_MIG
8333408 logical reads GUIDE_PROD.SAVED_FOLDERS
8516864 logical reads GUIDE_PROD.SYS_C004048
Logical reads burn up CPU, and can be helped by adding indexes and tuning SQL. It is rare, but CPU can also be impacted by too many indexes slowing down inserts, and that can be helped by removing indexes or removing writes.
Physical reads burn up I/O, and that can be helped by better partitioning, better indexing, purging historic rows, and tuning SQL. If a SQL statement burns up a lot of I/O on a small table, it is because the query is not run frequently enough for the blocks to stay in memory, and you have to cache small lookup tables to avoid the reads with "alter table scott.emp cache;"
#8) Look at what is taking up most of the memory and work to tune those reads and writes:
EPANAY9 > SELECT COUNT(*)*8192/1024/1024 meg_in_memory,
2 o.object_type,o.OBJECT_NAME Object_in_Memory
3 FROM DBA_OBJECTS o, V$BH bh
4 WHERE o.DATA_OBJECT_ID = bh.OBJD
5 GROUP BY o.OBJECT_NAME,o.object_type
6 having COUNT(*)*8192/1024/1024>100
7 ORDER BY COUNT(*);
MEG_IN_MEMORY OBJECT_TYPE OBJECT_IN_MEMORY
120 INDEX SYS_C004290
138 INDEX XPKLOCATIONS
148 INDEX XPKENTITIES
215 TABLE ENTITIES
262 TABLE USERS
267 TABLE LOCATIONS
1405 TABLE REVIEWS
In this case the reviews table needs better indexing or a purge of rows. If the table is partitioned, sometimes recreating the partitions smaller will help.
#9) Look at when you do the most log swaps, and add the APPEND hint and NOLOGGING to create and insert statements on intermediate (work) tables that do not need to be replicated or restored via the redo logs.
EMUSH9 > select to_char(first_time,'MM-DD-RRRR HH24:MI:SS')
2 "swaps_last_day"
3 from v$loghist where first_time > sysdate-1;
swaps_last_day
05-22-2007 12:47:02
05-22-2007 12:58:35
05-22-2007 13:11:06
05-22-2007 13:22:29
05-22-2007 13:33:39
05-22-2007 13:44:32
05-22-2007 14:12:47
05-22-2007 15:05:42
05-22-2007 18:19:47
05-22-2007 18:31:54
05-22-2007 18:43:12
05-22-2007 18:54:42
05-22-2007 19:08:02
05-22-2007 19:19:51
05-22-2007 19:37:17
05-22-2007 19:49:57
05-22-2007 20:02:12
05-22-2007 20:14:28
05-22-2007 20:25:53
05-22-2007 20:39:32
05-22-2007 20:54:16
05-22-2007 21:12:06
05-22-2007 23:05:08
05-23-2007 05:32:15
05-23-2007 06:25:37
05-23-2007 07:40:25
05-23-2007 07:51:29
05-23-2007 08:04:13
05-23-2007 08:17:07
05-23-2007 08:29:35
05-23-2007 08:40:27
05-23-2007 09:07:18
05-23-2007 10:51:10
05-23-2007 11:13:20
34 rows selected.
In this database, a job does a lot of log swaps between 6 and 8 in the evening, and again between 7:30 and 9 in the morning. These jobs should be examined to reduce the redo they generate, for example by putting the work table in NOLOGGING mode and using a direct-path insert:
alter table owner.intermediate_table nologging;
insert /*+ APPEND */ into owner.intermediate_table select * from some_other_table; and
create table owner.intermediate_table nologging as select * from some_other_table;

Similar Messages

  • How to Tune the Transactions/ Z - reports /Progr..of High response time

    Dear friends,
    In the ST03 workload analysis menu, some Z-reports, transactions, and programs are continuously showing the maximum response time (and mostly >90% of that time is DB time).
    How do I tune the above situation?
    Thank you.

    Siva,
    You can start with something like:
    ST04  -> Detail Analysis -> SQL Request (look at top disk reads and buffer get SQL statements)
    For the top SQL statements identified, you'd want to look at the explain plan to determine:
    1) whether the SQL statement is inefficient
    2) whether your DB stats are up to date on the tables (note: up-to-date stats do not always mean they are the best)
    3) whether better indexes are available; if not, would a more suitable index help?
    4) if there are many slow disk reads, whether there is an I/O issue
    etc...
    While you're in ST04 make sure your buffers are sized adequately.
    Also make sure your Oracle parameters are set according to this OSS note.
    Note 830576 - Parameter recommendations for Oracle 10g

  • Rapidwiz - libXtst.so.6 error when installing EBS R12 on Oracle Linux 6.3

    Hello All,
    I am getting libXtst.so.6 error when installing EBS R12 on Oracle Linux 6.3
    I could not find the exact file to download. Can any of you point me to the proper location?
    Thanks in advance.

    987696 wrote:
    Hello All,
    I am getting libXtst.so.6 error when installing EBS R12 on Oracle Linux 6.3
    I could not find the exact file to download. Can any of you point me to the proper location?
    Thanks in advance.
    Please see these docs -- search for "libXtst.so.6":
    Oracle E-Business Suite Installation and Upgrade Notes Release 12 (12.1.1) for Linux x86 [ID 761564.1]
    Oracle E-Business Suite Installation and Upgrade Notes Release 12 (12.1.1) for Linux x86-64 [ID 761566.1]
    Oracle Forms Upgrade to 10.1.2.3 fails with error /usr/lib/libXtst.so.6: undefined reference [ID 1120527.1]
    Thanks,
    Hussein

  • Hi, I am using Oracle 10g. How to view the content of a stored procedure or trigger?

    Hi, I am using Oracle 10g. How do I edit the content of a stored procedure or trigger?

    jklopkjl wrote:
    Hi, I am using Oracle 10g. How to view the content of a stored procedure or trigger?
    query ALL_SOURCE
    SQL> desc all_source
    Name                                      Null?    Type
    OWNER                                              VARCHAR2(30)
    NAME                                               VARCHAR2(30)
    TYPE                                               VARCHAR2(12)
    LINE                                               NUMBER
    TEXT                                               VARCHAR2(4000)
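    A hedged example of pulling a unit's source back out of ALL_SOURCE (the owner and object name are illustrative):
    SELECT text
    FROM   all_source
    WHERE  owner = 'SCOTT'
    AND    name  = 'MY_PROCEDURE'
    AND    type  = 'PROCEDURE'
    ORDER  BY line;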

  • How to tune the following procedure?

    create or replace procedure sample(verror_msg in out varchar2,
    vbrn_num in tb_branches.brn_num%type) is
    ltext1 varchar2(500);
    ltext2 varchar2(500);
    ltable_name varchar2(50);
    lcolumn_name varchar2(50);
    ldata_type varchar2(50);
    lold_rcn_num number;
    lnew_rcn_num number;
    lvalue varchar2(50);
    lunit_type char(1);
    lsql_stmt1 varchar2(500);
    lstring varchar2(500);
    lcol varchar2(10);
    lstart_time VARCHAR2(100);
    lend_time VARCHAR2(100);
    lcommit VARCHAR2(10) := 'COMMIT;';
    lfile_handle1 utl_file.file_type;
    lfile_handle2 utl_file.file_type;
    lfile_handle3 utl_file.file_type;
    lfile_handle4 utl_file.file_type;
    lfile_name1 VARCHAR2(50) := 'RCN_UPDATE_STMTS_' || vbrn_num || '.SQL';
    lfile_name2 VARCHAR2(50) := 'RCNSUCCESS_' || vbrn_num || '.TXT';
    lfile_name3 VARCHAR2(50) := 'RCNFAIL_' || vbrn_num || '.TXT';
    lfile_name4 VARCHAR2(50) := 'RCNERROR_' || vbrn_num || '.TXT';
    ldirectory_name VARCHAR2(100);
    ldirectory_path VARCHAR2(100);
    lspool_on VARCHAR2(100);
    lspool_off VARCHAR2(100);
    TYPE ref_cur IS REF CURSOR;
    cur_tab_cols ref_cur;
    cursor c1 is
    SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
    FROM USER_TAB_COLS
    WHERE TABLE_NAME NOT LIKE 'TB_CONV%'
    and TABLE_NAME LIKE 'TB_%'
    AND COLUMN_NAME LIKE '%RCN%'
    UNION
    SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
    FROM USER_TAB_COLS
    WHERE TABLE_NAME in ('TB_UNITCODES', 'TB_HIST_UNITCODES')
    AND COLUMN_NAME = 'UNIT_CODE'
    order by table_name;
    BEGIN
    verror_msg := nvl(verror_msg, 0);
    begin
    SELECT DISTINCT directory_path, directory_name
    INTO ldirectory_path, ldirectory_name
    FROM tb_conv_path
    WHERE brn_num = vbrn_num;
    EXCEPTION
    WHEN NO_DATA_FOUND THEN
    SP_CO_RAISEERROR('00SY00402', 'F', 'T');
    WHEN others THEN
    SP_CO_RAISEERROR('00SY00401X', 'F', 'T');
    END;
    lfile_handle1 := utl_file.fopen(ldirectory_name, lfile_name1, 'W', 32767);
    lfile_handle2 := utl_file.fopen(ldirectory_name, lfile_name2, 'W', 32767);
    lfile_handle3 := utl_file.fopen(ldirectory_name, lfile_name3, 'W', 32767);
    lfile_handle4 := utl_file.fopen(ldirectory_name, lfile_name4, 'W', 32767);
    SELECT 'SPOOL ' || ldirectory_path || '/LOG_' || lfile_name1
    INTO lspool_on
    FROM dual;
    utl_file.put_line(lfile_handle1, lspool_on);
    utl_file.new_line(lfile_handle1, 1);
    select 'EXEC SP_CONV_START_TIMELOG(' || '''' || 'LOG_' || lfile_name1 || '''' || ',' ||
    vbrn_num || ');'
    into lstart_time
    from dual;
    UTL_FILE.PUT_LINE(lfile_handle1, lstart_time);
    UTL_FILE.NEW_LINE(lfile_handle1, 1);
    open C1;
    loop
    Fetch C1
    into ltable_name, lcolumn_name, ldata_type;
    Exit When C1%notFound;
    lsql_stmt1 := 'select column_name from user_tab_columns where table_name =' || '''' ||
    ltable_name || '''' ||
    ' AND column_name in (''BRN_NUM'',''BRANCH'',''BRANCH_NUMBER'')';
    begin
    execute immediate lsql_stmt1
    into lcol;
    exception
    when no_data_found then
    lcol := null;
    end;
    if lcol is not null then
    if ltable_name in ('TB_UNITCODES', 'TB_HIST_UNITCODES') then
    ltext2 := 'select distinct ' || lcolumn_name || ' from ' ||
    ltable_name ||
    ' a, (select distinct new_rcn_num col from tb_conv_rcn_mapping where brn_num = ' ||
    vbrn_num || ') b where a.' || lcolumn_name ||
    ' = b.col(+) and b.col is null and a.' || lcolumn_name ||
    ' is not null and ' || lcol || ' = ' || vbrn_num ||
    ' and a.unit_type=''9''';
    else
    ltext2 := 'select distinct ' || lcolumn_name || ' from ' ||
    ltable_name ||
    ' a, (select distinct new_rcn_num col from tb_conv_rcn_mapping where brn_num = ' ||
    vbrn_num || ') b where a.' || lcolumn_name ||
    ' = b.col(+) and b.col is null and a.' || lcolumn_name ||
    ' is not null and ' || lcol || ' = ' || vbrn_num;
    end if;
    OPEN cur_tab_cols FOR ltext2;
    loop
    fetch cur_tab_cols
    into lvalue;
    exit when cur_tab_cols%notfound;
    begin
    -- IF VBRN_NUM IN (21, 6, 7, 8) THEN  -- Commented during NAP HK SIT cycle1
    SELECT DISTINCT NEW_RCN_NUM, OLD_RCN_NUM
    INTO LNEW_RCN_NUM, LOLD_RCN_NUM
    FROM TB_CONV_RCN_MAPPING
    WHERE OLD_RCN_NUM = LVALUE
    AND BRN_NUM = VBRN_NUM;
    /* ELSE
    SELECT DISTINCT NEW_RCN_NUM, OLD_RCN_NUM
    INTO LNEW_RCN_NUM, LOLD_RCN_NUM
    FROM TB_CONV_RCN_MAPPING
    WHERE OLD_RCN_NUM = LVALUE
    AND NEW_RCN_NUM NOT LIKE '40%'
    AND NEW_RCN_NUM NOT LIKE '41%'
    AND NEW_RCN_NUM NOT LIKE '42%'
    AND NEW_RCN_NUM NOT LIKE '65%'
    AND BRN_NUM = VBRN_NUM;
    END IF; */ -- Commented during NAP HK SIT cycle1
    if ldata_type = 'NUMBER' then
    if ltable_name in ('TB_UNITCODES', 'TB_HIST_UNITCODES') and
    lcolumn_name = 'UNIT_CODE' then
    begin
    select distinct unit_type
    into lunit_type
    from TB_UNITCODES
    where lcol = vbrn_num
    and unit_code = lvalue
    and unit_type = '9';
    exception
    when no_data_found then
    lunit_type := null;
    end;
    if lunit_type is not null then
    ltext1 := 'update ' || ltable_name || ' set ' ||
    lcolumn_name || ' = ' || lnew_rcn_num ||
    ' where ' || lcolumn_name || ' = ' ||
    lold_rcn_num || ' and ' || lcol || ' = ' ||
    vbrn_num || ' and unit_type = ' || '''9''' || ';';
    utl_file.put_line(lfile_handle1, ltext1);
    utl_file.new_line(lfile_handle1, 0);
    utl_file.put_line(lfile_handle1, lcommit);
    utl_file.put_line(lfile_handle2,
    ltable_name || ' - ' || lcolumn_name ||
    ' - ' || lold_rcn_num || ' - ' ||
    lnew_rcn_num || ' - ' || vbrn_num);
    utl_file.new_line(lfile_handle2, 0);
    end if;
    else
    ltext1 := 'update ' || ltable_name || ' set ' || lcolumn_name ||
    ' = ' || lnew_rcn_num || ' where ' || lcolumn_name ||
    ' = ' || lold_rcn_num || ' and ' || lcol || ' = ' ||
    vbrn_num || ';';
    utl_file.put_line(lfile_handle1, ltext1);
    utl_file.new_line(lfile_handle1, 0);
    utl_file.put_line(lfile_handle1, lcommit);
    utl_file.new_line(lfile_handle1, 0);
    utl_file.put_line(lfile_handle2,
    ltable_name || ' - ' || lcolumn_name ||
    ' - ' || lold_rcn_num || ' - ' ||
    lnew_rcn_num || ' - ' || vbrn_num);
    utl_file.new_line(lfile_handle2, 0);
    end if;
    else
    if ltable_name in ('TB_UNITCODES', 'TB_HIST_UNITCODES') and
    lcolumn_name = 'UNIT_CODE' then
    begin
    lstring := 'select distinct unit_type from ' || ltable_name ||
    ' where ' || lcol || ' = ' || vbrn_num ||
    ' and ' || lcolumn_name || ' = ' || '''' ||
    lvalue || '''' || ' and unit_type = ' || '''9''';
    execute immediate lstring
    into lunit_type;
    exception
    when no_data_found then
    lunit_type := null;
    end;
    if lunit_type is not null then
    ltext1 := 'update ' || ltable_name || ' set ' ||
    lcolumn_name || ' = ' || '''' || lnew_rcn_num || '''' ||
    ' where ' || lcolumn_name || ' = ' || '''' ||
    lold_rcn_num || '''' || ' and ' || lcol || ' = ' ||
    vbrn_num || ' and unit_type = ' || '''9''' || ';';
    utl_file.put_line(lfile_handle1, ltext1);
    utl_file.new_line(lfile_handle1, 0);
    utl_file.put_line(lfile_handle1, lcommit);
    utl_file.new_line(lfile_handle1, 0);
    utl_file.put_line(lfile_handle2,
    ltable_name || ' - ' || lcolumn_name ||
    ' - ' || lold_rcn_num || ' - ' ||
    lnew_rcn_num || ' - ' || vbrn_num);
    utl_file.new_line(lfile_handle2, 0);
    end if;
    else
    ltext1 := 'update ' || ltable_name || ' set ' || lcolumn_name ||
    ' = ' || '''' || lnew_rcn_num || '''' || ' where ' ||
    lcolumn_name || ' = ' || '''' || lold_rcn_num || '''' ||
    ' and ' || lcol || ' = ' || vbrn_num || ';';
    utl_file.put_line(lfile_handle1, ltext1);
    utl_file.new_line(lfile_handle1, 0);
    utl_file.put_line(lfile_handle1, lcommit);
    utl_file.new_line(lfile_handle1, 0);
    utl_file.put_line(lfile_handle2,
    ltable_name || ' - ' || lcolumn_name ||
    ' - ' || lold_rcn_num || ' - ' ||
    lnew_rcn_num || ' - ' || vbrn_num);
    utl_file.new_line(lfile_handle2, 0);
    end if;
    end if;
    exception
    When NO_DATA_FOUND THEN
    utl_file.put_line(lfile_handle3,
    ltable_name || ' - ' || lcolumn_name || ' - ' ||
    lvalue || ' - ' || 'NO MAPPING FOUND' ||
    ' - ' || vbrn_num);
    utl_file.new_line(lfile_handle3, 0);
    when others then
    utl_file.put_line(lfile_handle4,
    ltable_name || ' - ' || lcolumn_name || ' - ' ||
    lvalue || ' - ' || SQLERRM || ' - ' ||
    vbrn_num);
    utl_file.new_line(lfile_handle4, 0);
    end;
    end loop;
    ELSE
    ltext2 := 'select distinct ' || lcolumn_name || ' from ' ||
    ltable_name ||
    ' a, (select distinct new_rcn_num col from tb_conv_rcn_mapping where brn_num = ' ||
    vbrn_num || ') b where a.' || lcolumn_name ||
    ' = b.col(+) and b.col is null and a.' || lcolumn_name ||
    ' is not null';
    OPEN cur_tab_cols FOR ltext2;
    loop
    fetch cur_tab_cols
    into lvalue;
    exit when cur_tab_cols%notfound;
    begin
    -- IF VBRN_NUM IN (21, 6, 7, 8) THEN  -- Commented during NAP HK SIT cycle1
    SELECT DISTINCT NEW_RCN_NUM, OLD_RCN_NUM
    INTO LNEW_RCN_NUM, LOLD_RCN_NUM
    FROM TB_CONV_RCN_MAPPING
    WHERE OLD_RCN_NUM = LVALUE
    AND BRN_NUM = VBRN_NUM;
    /* ELSE
    SELECT DISTINCT NEW_RCN_NUM, OLD_RCN_NUM
    INTO LNEW_RCN_NUM, LOLD_RCN_NUM
    FROM TB_CONV_RCN_MAPPING
    WHERE OLD_RCN_NUM = LVALUE
    AND NEW_RCN_NUM NOT LIKE '40%'
    AND NEW_RCN_NUM NOT LIKE '41%'
    AND NEW_RCN_NUM NOT LIKE '42%'
    AND NEW_RCN_NUM NOT LIKE '65%'
    AND BRN_NUM = VBRN_NUM;
    END IF; */ -- Commented during NAP HK SIT cycle1
    if ldata_type = 'NUMBER' then
    ltext1 := 'update ' || ltable_name || ' set ' || lcolumn_name ||
    ' = ' || lnew_rcn_num || ' where ' || lcolumn_name ||
    ' = ' || lold_rcn_num || ';';
    utl_file.put_line(lfile_handle1, ltext1);
    utl_file.new_line(lfile_handle1, 0);
    utl_file.put_line(lfile_handle1, lcommit);
    utl_file.new_line(lfile_handle1, 0);
    utl_file.put_line(lfile_handle2,
    ltable_name || ' - ' || lcolumn_name || ' - ' ||
    lold_rcn_num || ' - ' || lnew_rcn_num ||
    ' - ' || vbrn_num);
    utl_file.new_line(lfile_handle2, 0);
    else
    ltext1 := 'update ' || ltable_name || ' set ' || lcolumn_name ||
    ' = ' || '''' || lnew_rcn_num || '''' || ' where ' ||
    lcolumn_name || ' = ' || '''' || lold_rcn_num || '''' || ';';
    utl_file.put_line(lfile_handle1, ltext1);
    utl_file.new_line(lfile_handle1, 0);
    utl_file.put_line(lfile_handle1, lcommit);
    utl_file.new_line(lfile_handle1, 0);
    utl_file.put_line(lfile_handle2,
    ltable_name || ' - ' || lcolumn_name || ' - ' ||
    lold_rcn_num || ' - ' || lnew_rcn_num ||
    ' - ' || vbrn_num);
    utl_file.new_line(lfile_handle2, 0);
    end if;
    exception
    When NO_DATA_FOUND THEN
    utl_file.put_line(lfile_handle3,
    ltable_name || ' - ' || lcolumn_name || ' - ' ||
    lvalue || ' - ' || 'NO MAPPING FOUND' ||
    ' - ' || vbrn_num);
    utl_file.new_line(lfile_handle3, 0);
    when others then
    utl_file.put_line(lfile_handle4,
    ltable_name || ' - ' || lcolumn_name || ' - ' ||
    lvalue || ' - ' || SQLERRM || ' - ' ||
    vbrn_num);
    utl_file.new_line(lfile_handle4, 0);
    end;
    end loop;
    end if;
    end loop;
    close c1;
    utl_file.new_line(lfile_handle1, 1);
    select 'EXEC SP_CONV_END_TIMELOG(' || '''' || 'LOG_' || lfile_name1 || '''' || ',' ||
    vbrn_num || ');'
    into lend_time
    from dual;
    UTL_FILE.PUT_LINE(lfile_handle1, lend_time);
    UTL_FILE.NEW_LINE(lfile_handle1, 1);
    SELECT 'SPOOL OFF;' INTO lspool_off FROM dual;
    utl_file.put_line(lfile_handle1, lspool_off);
    utl_file.new_line(lfile_handle1, 1);
    utl_file.fclose(lfile_handle1);
    utl_file.fclose(lfile_handle2);
    utl_file.fclose(lfile_handle3);
    utl_file.fclose(lfile_handle4);
    exception
    when others then
    verror_msg := sqlcode || ' ~ ' || sqlerrm;
    utl_file.put_line(lfile_handle4,
    ltable_name || ' - ' || lcolumn_name || ' - ' ||
    lvalue || ' - ' || SQLERRM || ' - ' || vbrn_num);
    utl_file.new_line(lfile_handle4, 0);
    utl_file.new_line(lfile_handle4, 0);
    utl_file.fclose(lfile_handle1);
    utl_file.fclose(lfile_handle2);
    utl_file.fclose(lfile_handle3);
    utl_file.fclose(lfile_handle4);
    end sample;

    duplicate:
    how to tune the following procedure?

  • How to tune the query...?

    Hi all,
    I have a table with millions of records and the query is taking hours.
    How can I tune the query apart from doing the following things?
    1. Creating or Deleting indexes.
    2. Using Bind variables.
    3. Using Hints.
    4. Updating the statistics regularly.
    Actually, I was asked this question in an interview: how to tune the query.
    I told him the above 4 things. Then he said these are not working, so how
    would you tune this query?
    Thanks in advance,
    Pal

    user546710 wrote:
    Actually, I was asked this question in an interview: how to tune the query.
    I told him the above 4 things. Then he said these are not working, so how
    would you tune this query?
    It actually depends on the scenario/problem given.
    You may want to read this first.
    When your query takes too long ...
    HOW TO: Post a SQL statement tuning request - template posting
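    As a hedged aside, one concrete first step those links walk through is looking at the actual execution plan (the table and predicate below are illustrative, not from the original post):
    EXPLAIN PLAN FOR
      SELECT * FROM big_table WHERE status = 'OPEN';
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    Full scans of large tables, badly wrong cardinality estimates, and expensive sort or hash steps in that output are the usual places to start.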

  • How to tune the performance of Oracle SQL/XML query?

    Hi all,
    I am running Oracle 9i and would like to run the following Oracle SQL/XML query. It takes 3+ hours and still does not finish. If I remove all the XML constructs it only takes minutes to run. Does anybody know the reason it is this slow and how to tune it?
    SELECT XMLElement("CUSTOMER",
    XMLForest(C_CUSTKEY "C_CUSTKEY", C_NAME "C_NAME", C_ADDRESS "C_ADDRESS", C_PHONE "C_PHONE", C_MKTSEGMENT "C_MKTSEGMENT", C_COMMENT "C_COMMENT"),
    (SELECT XMLAgg(XMLElement("ORDERS",
    XMLForest(O_ORDERKEY "O_ORDERKEY", O_CUSTKEY "O_CUSTKEY", O_ORDERSTATUS "O_ORDERSTATUS", O_ORDERPRIORITY "O_ORDERPRIORITY", O_CLERK "O_CLERK", O_COMMENT "O_COMMENT"),
    (SELECT XMLAgg(XMLElement("LINEITEM",
    XMLForest(L_ORDERKEY "L_ORDERKEY", L_RETURNFLAG "L_RETURNFLAG", L_LINESTATUS "L_LINESTATUS", L_SHIPINSTRUCT "L_SHIPINSTRUCT", L_SHIPMODE "L_SHIPMODE", L_COMMENT "L_COMMENT")
    FROM LINEITEM
    WHERE LINEITEM.L_ORDERKEY = ORDERS.O_ORDERKEY)
    FROM ORDERS
    WHERE ORDERS.O_CUSTKEY = CUSTOMER.C_CUSTKEY)
    FROM CUSTOMER ;
    Thanks very much in advance for your time,
    Jinghao Liu

    ajallen wrote:
    Why not something more like
    SELECT *
    FROM fact1 l,
    FULL OUTER JOIN fact1 d
    ON l.company = d.company
    AND l.transactiontypeid = 1
    AND d.transactiontypeid = 2;
    Because this is not an equivalent of the original query.
    drop table t1 cascade constraints purge;
    drop table t2 cascade constraints purge;
    create table t1 as select rownum t1_id from dual connect by level <= 5;
    create table t2 as select rownum+2 t2_id from dual connect by level <= 5;
    select * from (select * from t1 where t1_id > 2) t1 full outer join t2 on (t1_id = t2_id);
    select * from t1 full outer join t2 on (t1_id = t2_id and t1_id > 2);
         T1_ID      T2_ID
             3          3
             4          4
             5          5
                        6
                        7
         T1_ID      T2_ID
             1
             2
             3          3
             4          4
             5          5
                        6
                        7

  • Excel import on Oracle Linux - How to create an ODBC Connection

    Hi,
    We have Oracle BI EE on Oracle Linux. We need to create an ODBC DSN and import tables into the Admin tool.
    How would you create an ODBC connection on Linux to a Microsoft Excel file using unixODBC? What drivers do we need?
    Please let us know.
    Thanks!
    Nilaksha.

    See this post here: Re: [NQODBC][SQL_STATE: HY000][nQSError: 100[nQSError: 43093][nQSError: 16023]
    You need to find an Excel ODBC driver for Linux. The ones that come with OBIEE won't read Excel as far as I know.
    For info on creating an ODBC connection for OBIEE on Linux check the manual or search this forum. You don't need unixodbc for it.

  • How to tune the SQL & resolve a UNIQUE constraint issue without duplicates

    CREATE TABLE REL_ENT_REF (
      ROLL_ENT        VARCHAR2(4 BYTE)              NOT NULL,
      ROLL_SUB_ENT    VARCHAR2(3 BYTE)              NOT NULL,
      ROLL_ENT_DESCR  VARCHAR2(50 BYTE),
      ENT             VARCHAR2(4 BYTE)              NOT NULL,
      SUB_ENT         VARCHAR2(3 BYTE)              NOT NULL,
      ENT_DESCR       VARCHAR2(50 BYTE)
    );
    CREATE UNIQUE INDEX REL_ENT_REF_IDX_PK ON REL_ENT_REF
    (ROLL_ENT, ROLL_SUB_ENT, ENT, SUB_ENT);
    ALTER TABLE REL_ENT_REF ADD (
      CONSTRAINT REL_ENT_REF_IDX_PK
      PRIMARY KEY (ROLL_ENT, ROLL_SUB_ENT, ENT, SUB_ENT));
    TOTAL NUMBER OF RECORDS FOR TABLE REL_ENT_REF : 123542
    CREATE TABLE REL_COA_REF (
      ACCT                    VARCHAR2(9 BYTE)      NOT NULL,
      ACCT_LVL                VARCHAR2(2 BYTE)      NOT NULL,
      ACCT_ID                 VARCHAR2(9 BYTE)      NOT NULL,
      REL_TYPE                VARCHAR2(10 BYTE)     NOT NULL,
      ACCT_TYPE               VARCHAR2(2 BYTE)      NOT NULL,
      ACCT_DESCR              VARCHAR2(43 BYTE),
      POST_ACCT               VARCHAR2(9 BYTE)      NOT NULL,
      POST_ACCT_TYPE          VARCHAR2(2 BYTE),
      POST_ACCT_DESCR         VARCHAR2(43 BYTE),
      SIGN_REVRSL             NUMBER
    );
    CREATE INDEX REL_COA_REF_IDX_01 ON REL_COA_REF
    (ACCT_ID, REL_TYPE, POST_ACCT);
    CREATE UNIQUE INDEX REL_COA_REF_IDX_PK ON REL_COA_REF
    (ACCT_ID, ACCT, REL_TYPE, POST_ACCT);
    TOTAL NUMBER OF RECORDS FOR TABLE REL_COA_REF : 4721918
    CREATE TABLE REL_CTR_HIER_REF (
      ENT           VARCHAR2(4 BYTE)                NOT NULL,
      SUB_ENT       VARCHAR2(3 BYTE)                NOT NULL,
      HIER_TBL_NUM  VARCHAR2(3 BYTE)                NOT NULL,
      HIER_ROLL     VARCHAR2(14 BYTE)               NOT NULL,
      HIER_CODE     VARCHAR2(14 BYTE)               NOT NULL,
      SUM_FLAG      VARCHAR2(14 BYTE)               NOT NULL,
      CTR_OR_HIER   VARCHAR2(14 BYTE)               NOT NULL,
      CTR_DETAIL    VARCHAR2(14 BYTE)               NOT NULL,
      CTR_DESCR     VARCHAR2(50 BYTE)
    );
    CREATE INDEX REL_CTR_HIER_REF_IDX_01 ON REL_CTR_HIER_REF
    (HIER_TBL_NUM, HIER_ROLL, SUM_FLAG);
    CREATE UNIQUE INDEX REL_CTR_HIER_REF_IDX_PK ON REL_CTR_HIER_REF
    (ENT, SUB_ENT, HIER_TBL_NUM, HIER_ROLL, SUM_FLAG,
    CTR_DETAIL, CTR_OR_HIER, HIER_CODE);
    CREATE INDEX REL_CTR_HIER_REF_IDX_02 ON REL_CTR_HIER_REF
    (ENT, SUB_ENT, HIER_TBL_NUM, CTR_OR_HIER, SUM_FLAG,
    CTR_DETAIL);
    TOTAL NUMBER OF RECORDS FOR TABLE REL_CTR_HIER_REF : 24151811
    CREATE TABLE REL_TXN_ACT_CM (
      ENT               VARCHAR2(4 BYTE),
      SUB_ENT           VARCHAR2(3 BYTE),
      POST_ACCT         VARCHAR2(9 BYTE),
      CTR               VARCHAR2(7 BYTE),
      POST_DATE         DATE,
      EFF_DATE          DATE,
      TXN_CODE          VARCHAR2(2 BYTE),
      TXN_TYPE          VARCHAR2(1 BYTE),
      TXN_AMOUNT        NUMBER(17,2),
      TXN_DESCR         VARCHAR2(46 BYTE),
      TXN_SOURCE        VARCHAR2(1 BYTE)
    );
    CREATE INDEX REL_TXN_ACT_CM_IDX_01 ON REL_TXN_ACT_CM
    (ENT, SUB_ENT, POST_ACCT, POST_DATE, EFF_DATE,
    TXN_AMOUNT);
    CREATE INDEX REL_TXN_ACT_CM_IDX_PK ON REL_TXN_ACT_CM
    (ENT, SUB_ENT, CTR, POST_ACCT, POST_DATE,
    EFF_DATE, TXN_AMOUNT);
    TOTAL NUMBER OF RECORDS FOR TABLE REL_TXN_ACT_CM : 111042301
    CREATE TABLE REL_CLPR_TBOX_GL_TXN (
      ORGANIZATION  VARCHAR2(10 BYTE)               NOT NULL,
      ACCOUNT       VARCHAR2(10 BYTE)               NOT NULL,
      APPLICATION   VARCHAR2(10 BYTE)               NOT NULL,
      AMOUNT        NUMBER(17,2)                    NOT NULL
    );
    CREATE UNIQUE INDEX REL_CLPR_TBOX_GL_TXN_IDX ON REL_CLPR_TBOX_GL_TXN
    (ORGANIZATION, ACCOUNT, APPLICATION);
        DELETE FROM REL_CLPR_TBOX_GL_TXN;
        INSERT INTO REL_CLPR_TBOX_GL_TXN (
          ORGANIZATION,
          ACCOUNT,
          APPLICATION,
          AMOUNT
        )
        SELECT  --+ INDEX(T REL_TXN_ACT_CM_IDX_PK)
          SUBSTR(REL_CTR_HIER_REF.HIER_CODE, 1, 5) || '.....',
          REL_TXN.POST_ACCT,
          'GL-' || SUBSTR(REL_TXN.TXN_DESCR, 1, 3),
          SUM(
            CASE
              WHEN REL_TXN.TXN_CODE IN ('01', '21') THEN 1
              WHEN REL_TXN.TXN_CODE IN ('02', '22') THEN -1
              ELSE 0
            END
            *
            CASE
              WHEN REL_COA_REF.ACCT_TYPE IN ('01', '25', '30', '40', '90', '95') THEN 1
              WHEN REL_COA_REF.ACCT_TYPE IN ('05', '10', '20', '35') THEN -1
              ELSE 0
            END
            * REL_COA_REF.SIGN_REVRSL
            * REL_TXN.TXN_AMOUNT
          )
        FROM
          REL_TXN_ACT_CM REL_TXN
            INNER JOIN
          REL_CTR_HIER_REF
            ON
              REL_TXN.ENT = REL_CTR_HIER_REF.ENT AND
              REL_TXN.SUB_ENT = REL_CTR_HIER_REF.SUB_ENT AND
              REL_TXN.CTR = REL_CTR_HIER_REF.CTR_DETAIL
            INNER JOIN
          REL_COA_REF
            ON REL_TXN.POST_ACCT = REL_COA_REF.POST_ACCT
            INNER JOIN
          REL_ENT_REF
            ON
              REL_CTR_HIER_REF.ENT = REL_ENT_REF.ENT AND
              REL_CTR_HIER_REF.SUB_ENT = REL_ENT_REF.SUB_ENT
        WHERE
          REL_TXN.EFF_DATE BETWEEN L_MONTH AND LAST_DAY(L_MONTH) AND
          REL_CTR_HIER_REF.HIER_TBL_NUM = '001' AND
          REL_CTR_HIER_REF.SUM_FLAG = 'D' AND
          REL_CTR_HIER_REF.CTR_OR_HIER = REL_CTR_HIER_REF.CTR_DETAIL AND
          REL_CTR_HIER_REF.HIER_CODE BETWEEN 'AAA' AND 'ZZZ' AND
          REL_COA_REF.REL_TYPE = ' ' AND
          REL_COA_REF.ACCT_ID = 'ALPTER' AND
          REL_COA_REF.ACCT_LVL = '9' AND
          REL_ENT_REF.ROLL_ENT = '999' AND
          REL_ENT_REF.ROLL_SUB_ENT = '111'
        GROUP BY
          SUBSTR(REL_CTR_HIER_REF.HIER_CODE, 1, 5),
          REL_TXN.POST_ACCT,
          SUBSTR(REL_TXN.TXN_DESCR, 1, 3)
        HAVING
          SUM(
            CASE
              WHEN REL_TXN.TXN_CODE IN ('01', '21') THEN 1
              WHEN REL_TXN.TXN_CODE IN ('02', '22') THEN -1
              ELSE 0
            END
            *
            CASE
              WHEN REL_COA_REF.ACCT_TYPE IN ('01', '25', '30', '40', '90', '95') THEN 1
              WHEN REL_COA_REF.ACCT_TYPE IN ('05', '10', '20', '35') THEN -1
              ELSE 0
            END
            * REL_COA_REF.SIGN_REVRSL
            * REL_TXN.TXN_AMOUNT
          ) <> 0;
    When I run just the SELECT, it takes around 3+ hours, and when I try to insert the result of the same query into the table, I get ORA-00001: unique constraint (INSIGHT.CLPR_TBOX_GL_TXN_IDX) violated.
    How can I tune the query and resolve this UNIQUE constraint issue without producing duplicates?

    Should the SELECT statement be returning duplicate rows? If you know that there are duplicate rows in the underlying tables, you could add a DISTINCT to the select, though that forces Oracle to do an extra sort and will slow down the insert. If you don't expect duplicate rows, you need to figure out what join criteria are missing from your query and add them.
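    A hedged way to see whether the SELECT itself produces key collisions (the view name below is hypothetical, standing in for the SELECT that feeds the insert):
    SELECT organization, account, application, COUNT(*) AS dup_rows
    FROM   v_clpr_tbox_gl_txn_src   -- hypothetical view wrapping the SELECT above
    GROUP  BY organization, account, application
    HAVING COUNT(*) > 1
    ORDER  BY dup_rows DESC;
    Any rows returned identify the key combinations that violate the unique index and point at the join that needs an extra condition.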
    Justin

  • How to tune the query and difference between CBO AND RBO.. Which is good

    Hello Friends,
    Here are some questions I have; please reply with a complete description and a URL if any.
    1) How did you tune a query?
    2) What approach do you take to tune a query? Do you use hints?
    3) Where did you tune the query, and what were the issues with the query?
    4) What is the difference between RBO and CBO? Where do you use RBO and CBO?
    5) Give some information about hash joins.
    6) Using the explain plan, how do you know where the bottleneck in the query is? How do you identify the bottleneck from the explain plan?
    thanks/Kumar

    Hi,
    kumar73 wrote:
    Hello Friends,
    Here are some questions I have; please reply with a complete description and a URL if any.
    1) How did you tune a query?
    Use EXPLAIN PLAN to see exactly where it is spending its time, and address those areas.
    See the forum FAQ
    SQL and PL/SQL FAQ
    "3. How to improve the performance of my query?"
    2) What approach do you take to tune a query? Do you use hints?
    Hints can help.
    Even more helpful is writing the SQL efficiently (avoiding multiple scans of the same table, filtering early, using built-in rather than user-defined functions, ...), creating and using indexes, and, for large tables, partitioning.
    Table design can have a big impact on performance.
    Look for ways to do part of what you need before the query. This includes denormalizing (when appropriate), the kind of pre-digesting that often takes place in data warehouses, function-based indexes, and, starting in Oracle 11, virtual columns.
    3) Where did you tune the query, and what were the issues with the query?
    Either this question is a vague summary of the entire thread, or I don't understand it. Can you re-phrase this part?
    4) What is the difference between RBO and CBO? Where do you use RBO and CBO?
    Basically, use RBO if you have Oracle 7 or earlier.
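    As a hedged footnote to the CBO answer: the cost-based optimizer can only make good choices when statistics exist, so gathering them is usually step zero (the table name is illustrative):
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => USER,
        tabname          => 'EMP',
        cascade          => TRUE,          -- also gather index statistics
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
    END;
    /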

  • Oracle WebADI: How to Download the WebADI Excel File with Parameter

    Hello Friends,
    How do you download the Oracle WebADI Excel file with a parameter?
    For example: how to download the employees for a specific department from Oracle WebADI,
    and afterwards upload the specific changes made to the employee data.
    Thanks in Advance.

    Hi team,
    any advice on this?

  • Oracle 11g: How to ensure the same transaction across several BPEL calls?

    How do we ensure transaction semantics across invocations of several BPEL services that perform database operations (insert, update)? We are using the transaction REQUIRED property in all of our BPELs. We are using a web service and JCA to access and modify the same row. Our code uses a combination of JCA, Spring beans, entity services, and EJBs in these BPELs. The code could be more efficient, but at this point we have no option but to fix the transaction issue in this code. So our question is: how do we ensure the same transaction context is used in all these BPELs to insert/update the same row? We tried setting GetUnitOfWork in the JCA adapter but it did not provide a solution. Apart from setting the transaction in BPEL to REQUIRED and the JCA adapter to use unit of work, we are out of ideas. Any help is much appreciated. We are using Oracle SOA Suite 11g, version 11.1.1.5. --chary

    Hi,
    I can help you if you can describe the processes.
    There can be some difficulties when you try to use the same transaction, especially when you use many DB transactions & BPEL processes.
    Using unit of work alone might not be enough.
    Thanks
    Arik

  • How to tune the threshold to a very low value

    Hi,
    How do I set the threshold to a very low value for a 9i database? On which criteria should we tune this threshold value of the DB?

    user606944 wrote:
    I/O threshold value?
    There is no threshold value... unless there is some baseline to measure against.
    Also, I/O threshold values in what respect? Number of reads/sec? Writes/sec? Latency per I/O call? Number of bytes per second?
    A solution is only as good as the problem definition. You have not defined any problem.. thus it is quite difficult to suggest any type of solution to you.

  • Using Oracle 11g: how to change the log mode from NoArchivelog to Archivelog

    Hi,
    I am currently using Oracle 11g. How can I change the database from NoArchivelog mode to Archivelog mode using the spfile?
    And where exactly will the spfile be located?
    My instance is EPM11; my local Oracle installation is on the D: drive. Where can I find the pfile?
    In this path on my local machine I found one pfile:
    "D:/Oracle/Product/11g/admin/epm11/pfile". I have added the following lines to this pfile:
    # Archive Log Destinations -benr(10/15/04)
    log_archive_dest_1='location=/u02/oradata/cuddle/archive'
    log_archive_start=TRUE
    Then I ran the shutdown command and the database instance shut down.
    After that I am not able to start it up.
    So please suggest how to change the mode using the spfile, and tell me where the spfile and pfile should be located.
    Also, do I need to set ORACLE_HOME in my environment variables?
    Thanks In Advance,
    Chandana

    user11225122 wrote:
    Hi,
    I am currently using Oracle 11g. How can I change the database from NoArchivelog mode to Archivelog mode using the spfile?
    And where exactly will the spfile be located?
    My instance is EPM11; my local Oracle installation is on the D: drive. Where can I find the pfile?
    In this path on my local machine I found one pfile:
    "D:/Oracle/Product/11g/admin/epm11/pfile". I have added the following lines to this pfile:
    # Archive Log Destinations -benr(10/15/04)
    log_archive_dest_1='location=/u02/oradata/cuddle/archive'
    log_archive_start=TRUE
    Then I ran the shutdown command and the database instance shut down.
    After that I am not able to start it up.
    So please suggest how to change the mode using the spfile, and tell me where the spfile and pfile should be located.
    Also, do I need to set ORACLE_HOME in my environment variables?
    Thanks In Advance,
    Chandana
    Remove log_archive_start=TRUE from the pfile (it is deprecated from 10g onwards).
    SQL>startup nomount pfile="D:/Oracle/Product/11g/admin/epm11/pfile/initYOUR_SID_NAME.ora"
    SQL>Create spfile from pfile="D:/Oracle/Product/11g/admin/epm11/pfile/initYOUR_SID_NAME.ora"
    SQL>SHUT IMMEDIATE;
    SQL>STARTUP MOUNT
    SQL>ALTER DATABASE ARCHIVELOG;
    SQL>ALTER DATABASE OPEN;
    SQL>ARCHIVE LOG LIST;
    SQL>SHOW PARAMETER SPFILE;
    This will show the location of the spfile.
    SQL>

  • How to tune the Update statement for 20 million rows

    Hi,
    I want to update 20 million rows of a table. I wrote the PL/SQL code like this:
    DECLARE
    v1
    v2
    cursor C1 is
    select ....
    BEGIN
    Open C1;
    loop
    fetch C1 bulk collect into v1,v2 LIMIT 1000;
    exit when C1%NOTFOUND;
    forall i in v1.first..v1.last
    update /*+INDEX(tab indx)*/....
    end loop;
    commit;
    close C1;
    END;
    The above code took 24 mins to update 100k records, so for around 20 million records it will take 4800 mins (80 hrs).
    How can I tune the code further ? Will a simple Update statement, instead of PL/SQL make the update faster ?
    Will adding few more hints help ?
    Thanks for your suggestions.
    Regards,
    Yogini Joshi

    Hello
    You have implemented this update in the slowest possible way. Cursor FOR loops should be absolute last resort. If you post the SQL in your cursor there is a very good chance we can re-code it to be a single update statement with a subquery which will be the fastest possible way to run this. Please remember to use the {noformat}{noformat} tags before and after your code so the formatting is preserved.
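    A hedged sketch of the single-statement shape being suggested (table, column, and join names are illustrative, since the cursor SQL was not posted); a MERGE or a correlated UPDATE usually replaces the whole fetch/FORALL loop:
    MERGE INTO big_table t
    USING (SELECT id, new_value FROM staging_table) s
    ON (t.id = s.id)
    WHEN MATCHED THEN
      UPDATE SET t.some_column = s.new_value;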
    David
