"Partitioning" option during table creation...
While creating a table using "SQL Developer", there is an option on the left pane named "Partitioning". I noticed that this option is available for one or two tables but is not visible for others. What is this option for, and how is it related to tables?
Read about Partitions in:
1. Database Concepts
http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14220/toc.htm
2. Partitioned Tables and Indexes
http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14220/partconc.htm#sthref2570
3. Creating Tables
http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_7002.htm#i2095331
Similar Messages
-
Field 'material group' needs to be made optional during PO creation
Dear Friends,
I want to maintain the field 'material group' as optional while creating a PO line item with item category 'D'. Is this possible?
Please suggest.
Regards,
amitava
Hi,
You have the material group both on PO item and service level (EKPO-MATKL and ESLL-MATKL).
Please be informed that the material group is not updated at service level automatically by changing the
material group in the item. Services can have different material groups, therefore you must change the material group at service level afterwards.
To check the field selection at purchasing (PO item) level, please use note 30316 to find out which field selection keys are used while creating the PO.
For ESLL-MATKL please check: SPRO - materials management - external services management - Define Screen Layout - for field selection keys 4, ME21, ME22, NB, PT9...
Regards,
Edit -
Bad file is not created during external table creation.
Hello Experts,
I have created a script for an external table in an Oracle 10g DB. Everything works fine except that it does not create the bad file, though it does create the log file. I can't figure out what the issue is, because my shell script is failing and so the entire program is failing. I am attaching the table creation script, the shell script where it is referenced, and the error. Kindly let me know if something is missing. Thanks in advance.
Table Creation Script:
-------------------------------
create table RGIS_TCA_DATA_EXT
(
guid VARCHAR2(250),
badge VARCHAR2(250),
scheduled_store_id VARCHAR2(250),
parent_event_id VARCHAR2(250),
event_id VARCHAR2(250),
organization_number VARCHAR2(250),
customer_number VARCHAR2(250),
store_number VARCHAR2(250),
inventory_date VARCHAR2(250),
full_name VARCHAR2(250),
punch_type VARCHAR2(250),
punch_start_date_time VARCHAR2(250),
punch_end_date_time VARCHAR2(250),
event_meet_site_id VARCHAR2(250),
vehicle_number VARCHAR2(250),
vehicle_description VARCHAR2(250),
vehicle_type VARCHAR2(250),
is_owner VARCHAR2(250),
driver_passenger VARCHAR2(250),
mileage VARCHAR2(250),
adder_code VARCHAR2(250),
bonus_qualifier_code VARCHAR2(250),
store_accuracy VARCHAR2(250),
store_length VARCHAR2(250),
badge_input_type VARCHAR2(250),
source VARCHAR2(250),
created_by VARCHAR2(250),
created_date_time VARCHAR2(250),
updated_by VARCHAR2(250),
updated_date_time VARCHAR2(250),
approver_badge_id VARCHAR2(250),
approver_name VARCHAR2(250),
orig_guid VARCHAR2(250),
edit_type VARCHAR2(250)
)
organization external
(
type ORACLE_LOADER
default directory ETIME_LOAD_DIR
access parameters
(
RECORDS DELIMITED BY NEWLINE
BADFILE ETIME_LOAD_DIR:'tstlms.bad'
LOGFILE ETIME_LOAD_DIR:'tstlms.log'
READSIZE 1048576
FIELDS TERMINATED BY '|'
MISSING FIELD VALUES ARE NULL
(
GUID
,BADGE
,SCHEDULED_STORE_ID
,PARENT_EVENT_ID
,EVENT_ID
,ORGANIZATION_NUMBER
,CUSTOMER_NUMBER
,STORE_NUMBER
,INVENTORY_DATE char date_format date mask "YYYYMMDD HH24:MI:SS"
,FULL_NAME
,PUNCH_TYPE
,PUNCH_START_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
,PUNCH_END_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
,EVENT_MEET_SITE_ID
,VEHICLE_NUMBER
,VEHICLE_DESCRIPTION
,VEHICLE_TYPE
,IS_OWNER
,DRIVER_PASSENGER
,MILEAGE
,ADDER_CODE
,BONUS_QUALIFIER_CODE
,STORE_ACCURACY
,STORE_LENGTH
,BADGE_INPUT_TYPE
,SOURCE
,CREATED_BY
,CREATED_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
,UPDATED_BY
,UPDATED_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
,APPROVER_BADGE_ID
,APPROVER_NAME
,ORIG_GUID
,EDIT_TYPE
)
)
location (ETIME_LOAD_DIR:'tstlms.dat')
)
reject limit UNLIMITED;
Shell Script:
----------------
version=1.0
umask 000
DATE=`date +%Y%m%d%H%M%S`
TIME=`date +"%H%M%S"`
SOURCE=`hostname`
fcp_login=`echo $1|awk '{print $3}'|sed 's/"//g'|awk -F= '{print $2}'`
fcp_reqid=`echo $1|awk '{print $2}'|sed 's/"//g'|awk -F= '{print $2}'`
TXT1_PATH=/home/ac1/oracle/in/tsdata
TXT2_PATH=/home/ac2/oracle/in/tsdata
ARCH1_PATH=/home/ac1/oracle/in/tsdata
ARCH2_PATH=/home/ac2/oracle/in/tsdata
DEST_PATH=/home/custom/sched/in
PROGLOG=/home/custom/sched/logs/rgis_tca_to_tlms_create.sh.log
PROGNAME=`basename $0`
PROGPATH=/home/custom/sched/scripts
cd $TXT2_PATH
FILELIST2="`ls -lrt tstlmsedits*.dat |awk '{print $9}'`"
NO_OF_FILES2="`ls -lrt tstlmsedits*.dat |awk '{print $9}'|wc -l`"
>$DEST_PATH/tstlmsedits.dat
for i in $FILELIST2
do
cat $i >> $DEST_PATH/tstlmsedits.dat
printf "\n" >> $DEST_PATH/tstlmsedits.dat
mv $i $i.$DATE
#mv $i $TXT2_PATH/test/.
mv $i.$DATE $TXT2_PATH/test/.
done
if test $NO_OF_FILES2 -eq 0
then
echo " no tstlmsedits.dat file exists " >> $PROGLOG
else
echo "created dat file tstlmsedits.dat at $DATE" >> $PROGLOG
echo "-------------------------------------------" >> $PROGLOG
fi
NO_OF_FILES1="`ls -lrt tstlms*.dat |awk '{print $9}'|wc -l`"
FILELIST1="`ls -lrt tstlms*.dat |awk '{print $9}'`"
>$DEST_PATH/tstlms.dat
for i in $FILELIST1
do
cat $i >> $DEST_PATH/tstlms.dat
printf "\n" >> $DEST_PATH/tstlms.dat
mv $i $i.$DATE
# mv $i $TXT2_PATH/test/.
mv $i.$DATE $TXT2_PATH/test/.
done
if test $NO_OF_FILES1 -eq 0
then
echo " no tstlms.dat file exists " >> $PROGLOG
else
echo "created dat file tstlms.dat at $DATE" >> $PROGLOG
fi
cd $TXT1_PATH
FILELIST3="`ls -lrt tstlmsedits*.dat |awk '{print $9}'`"
NO_OF_FILES3="`ls -lrt tstlmsedits*.dat |awk '{print $9}'|wc -l`"
>$DEST_PATH/tstlmsedits.dat
for i in $FILELIST3
do
cat $i >> $DEST_PATH/tstlmsedits.dat
printf "\n" >> $DEST_PATH/tstlmsedits.dat
mv $i $i.$DATE
#mv $i $TXT1_PATH/test/.
mv $i.$DATE $TXT1_PATH/test/.
done
if test $NO_OF_FILES3 -eq 0
then
echo " no tstlmsedits.dat file exists " >> $PROGLOG
else
echo "created dat file tstlmsedits.dat at $DATE" >> $PROGLOG
echo "-------------------------------------------" >> $PROGLOG
fi
NO_OF_FILES4="`ls -lrt tstlms*.dat |awk '{print $9}'|wc -l`"
FILELIST4="`ls -lrt tstlms*.dat |awk '{print $9}'`"
>$DEST_PATH/tstlms.dat
for i in $FILELIST4
do
cat $i >> $DEST_PATH/tstlms.dat
printf "\n" >> $DEST_PATH/tstlms.dat
mv $i $i.$DATE
# mv $i $TXT1_PATH/test/.
mv $i.$DATE $TXT1_PATH/test/.
done
if test $NO_OF_FILES4 -eq 0
then
echo " no tstlms.dat file exists " >> $PROGLOG
else
echo "created dat file tstlms.dat at $DATE" >> $PROGLOG
fi
#connecting to oracle to generate bad files
sqlplus -s $fcp_login<<EOF
select count(*) from rgis_tca_data_ext;
select count(*) from rgis_tca_data_history_ext;
exit;
EOF
#counting the records in files
tot_rec_in_tstlms=`wc -l $DEST_PATH/tstlms.dat | awk ' { print $1 } '`
tot_rec_in_tstlmsedits=`wc -l $DEST_PATH/tstlmsedits.dat | awk ' { print $1 } '`
tot_rec_in_tstlms_bad=`wc -l $DEST_PATH/tstlms.bad | awk ' { print $1 } '`
tot_rec_in_tstlmsedits_bad=`wc -l $DEST_PATH/tstlmsedits.bad | awk ' { print $1 } '`
#updating log table
echo "pl/sql block started"
sqlplus -s $fcp_login<<EOF
define tot_rec_in_tstlms = '$tot_rec_in_tstlms';
define tot_rec_in_tstlmsedits = '$tot_rec_in_tstlmsedits';
define tot_rec_in_tstlms_bad = '$tot_rec_in_tstlms_bad';
define tot_rec_in_tstlmsedits_bad='$tot_rec_in_tstlmsedits_bad';
define fcp_reqid ='$fcp_reqid';
declare
l_tstlms_file_id number := null;
l_tstlmsedits_file_id number := null;
l_tot_rec_in_tstlms number := 0;
l_tot_rec_in_tstlmsedits number := 0;
l_tot_rec_in_tstlms_bad number := 0;
l_tot_rec_in_tstlmsedits_bad number := 0;
l_request_id fnd_concurrent_requests.request_id%type;
l_start_date fnd_concurrent_requests.actual_start_date%type;
l_end_date fnd_concurrent_requests.actual_completion_date%type;
l_conc_prog_name fnd_concurrent_programs.concurrent_program_name%type;
l_requested_by fnd_concurrent_requests.requested_by%type;
l_requested_date fnd_concurrent_requests.request_date%type;
begin
--getting concurrent request details
begin
SELECT fcp.concurrent_program_name,
fcr.request_id,
fcr.actual_start_date,
fcr.actual_completion_date,
fcr.requested_by,
fcr.request_date
INTO l_conc_prog_name,
l_request_id,
l_start_date,
l_end_date,
l_requested_by,
l_requested_date
FROM fnd_concurrent_requests fcr, fnd_concurrent_programs fcp
WHERE fcp.concurrent_program_id = fcr.concurrent_program_id
AND fcr.request_id = &fcp_reqid; --fnd_global.conc_request_id();
exception
when no_data_found then
fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
fnd_file.put_line(fnd_file.log, 'No data found for request_id');
fnd_file.put_line(fnd_file.log, sqlerrm);
raise_application_error(-20001,
'Error occured when executing RGIS_TCA_TO_TLMS_CREATE.sh ' ||
sqlerrm);
when others then
fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
fnd_file.put_line(fnd_file.log,
'Error occured when retrieving request_id request_id');
fnd_file.put_line(fnd_file.log, sqlerrm);
raise_application_error(-20001,
'Error occured when executing RGIS_TCA_TO_TLMS_CREATE.sh ' ||
sqlerrm);
end;
--calling ins_or_upd_tca_process_log to update log table for tstlms.dat file
begin
rgis_tca_to_tlms_process.ins_or_upd_tca_process_log
(l_tstlms_file_id,
'tstlms.dat',
l_conc_prog_name,
l_request_id,
l_start_date,
l_end_date,
&tot_rec_in_tstlms,
&tot_rec_in_tstlms_bad,
null,
null,
null,
null,
null,
null,
null,
l_requested_by,
l_requested_date,
null,
null,
null,
null,
null);
exception
when others then
fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
fnd_file.put_line(fnd_file.log,
'Error occured when executing rgis_tca_to_tlms_process.ins_or_upd_tca_process_log for tstlms file');
fnd_file.put_line(fnd_file.log, sqlerrm);
end;
--calling ins_or_upd_tca_process_log to update log table for tstlmsedits.dat file
begin
rgis_tca_to_tlms_process.ins_or_upd_tca_process_log
(l_tstlmsedits_file_id,
'tstlmsedits.dat',
l_conc_prog_name,
l_request_id,
l_start_date,
l_end_date,
&tot_rec_in_tstlmsedits,
&tot_rec_in_tstlmsedits_bad,
null,
null,
null,
null,
null,
null,
null,
l_requested_by,
l_requested_date,
null,
null,
null,
null,
null);
exception
when others then
fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
fnd_file.put_line(fnd_file.log,
'Error occured when executing rgis_tca_to_tlms_process.ins_or_upd_tca_process_log for tstlmsedits file');
fnd_file.put_line(fnd_file.log, sqlerrm);
end;
end;
exit;
EOF
echo "rgis_tca_to_tlms_process.sql started"
sqlplus -s $fcp_login @$SCHED_TOP/sql/rgis_tca_to_tlms_process.sql $fcp_reqid
exit;
echo "rgis_tca_to_tlms_process.sql ended"
Error:
----------------------------------
RGIS Scheduling: Version : UNKNOWN
Copyright (c) 1979, 1999, Oracle Corporation. All rights reserved.
TCATLMS module: TCA To TLMS Import Process
Current system time is 18-AUG-2011 06:13:27
COUNT(*)
16
COUNT(*)
25
wc: cannot open /home/custom/sched/in/tstlms.bad
wc: cannot open /home/custom/sched/in/tstlmsedits.bad
pl/sql block started
old 33: AND fcr.request_id = &fcp_reqid; --fnd_global.conc_request_id();
new 33: AND fcr.request_id = 18661823; --fnd_global.conc_request_id();
old 63: &tot_rec_in_tstlms,
new 63: 16,
old 64: &tot_rec_in_tstlms_bad,
new 64: ,
old 97: &tot_rec_in_tstlmsedits,
new 97: 25,
old 98: &tot_rec_in_tstlmsedits_bad,
new 98: ,
ERROR at line 64:
ORA-06550: line 64, column 4:
PLS-00103: Encountered the symbol "," when expecting one of the following:
( - + case mod new not null others <an identifier>
<a double-quoted delimited-identifier> <a bind variable> avg
count current exists max min prior sql stddev sum variance
execute forall merge time timestamp interval date
<a string literal with character set specification>
<a number> <a single-quoted SQL string> pipe
<an alternatively-quoted string literal with character set specification>
<an alternatively-q
ORA-06550: line 98, column 4:
PLS-00103: Encountered the symbol "," when expecting one of the following:
( - + case mod new not null others <an identifier>
<a double-quoted delimited-identifier> <a bind variable> avg
count current exists max min prior sql st
rgis_tca_to_tlms_process.sql started
old 12: and concurrent_request_id = '&1';
new 12: and concurrent_request_id = '18661823';
old 18: and concurrent_request_id = '&1';
new 18: and concurrent_request_id = '18661823';
old 22: rgis_tca_to_tlms_process.run_tca_data(l_tstlms_file_id,&1);
new 22: rgis_tca_to_tlms_process.run_tca_data(l_tstlms_file_id,18661823);
old 33: rgis_tca_to_tlms_process.run_tca_data_history(l_tstlmsedits_file_id,&1);
new 33: rgis_tca_to_tlms_process.run_tca_data_history(l_tstlmsedits_file_id,18661823);
old 44: rgis_tca_to_tlms_process.send_tca_email('TCATLMS',&1);
new 44: rgis_tca_to_tlms_process.send_tca_email('TCATLMS',18661823);
declare
ERROR at line 1:
ORA-20001: Error occured when executing RGIS_TCA_TO_TLMS_PROCESS.sql ORA-01403:
no data found
ORA-06512: at line 59
Executing request completion options...
------------- 1) PRINT -------------
Printing output file.
Request ID : 18661823
Number of copies : 0
Printer : noprint
Finished executing request completion options.
Concurrent request completed successfully
Current system time is 18-AUG-2011 06:13:29
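The two "wc: cannot open" lines in the error output above come from counting a bad file that was never produced: the loader writes its BADFILE only when rows are actually rejected, and it writes it to the BADFILE directory (ETIME_LOAD_DIR in the DDL), not necessarily under $DEST_PATH. A minimal defensive sketch of the counting and merging steps (the helper names count_lines and merge_dats are my own, not part of the original script):

```shell
#!/bin/sh
# count_lines FILE -> number of lines, or 0 when FILE does not exist.
# Guarding the wc call keeps the count variables non-empty, so the
# SQL*Plus DEFINEs no longer expand to nothing (the PLS-00103 above).
count_lines() {
    if [ -f "$1" ]; then
        wc -l < "$1" | tr -d ' '
    else
        echo 0
    fi
}

# merge_dats SRC_DIR PATTERN DEST -> concatenate matching files into DEST,
# archiving each consumed file with a timestamp suffix. Globbing replaces
# the `ls -lrt | awk` parsing, which breaks on unusual file names.
merge_dats() {
    src_dir=$1; pattern=$2; dest=$3
    stamp=$(date +%Y%m%d%H%M%S)
    : > "$dest"                       # start from an empty merged file
    for f in "$src_dir"/$pattern; do
        [ -e "$f" ] || continue       # glob matched nothing
        cat "$f" >> "$dest"
        printf '\n' >> "$dest"        # keep a newline between files
        mv "$f" "$f.$stamp"           # archive the consumed file
    done
}
```

With count_lines, `tot_rec_in_tstlms_bad=$(count_lines "$DEST_PATH/tstlms.bad")` yields 0 instead of an empty string when no rows were rejected. Note that merge_dats truncates its destination, so when appending files from several source directories into one target, truncate once up front instead of calling it per directory.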
---------------------------------------------------------------------------
Hi,
Check the status of the batch in SM35 transaction.
If the batch is locked by mistake or due to any other error, you can now release it and also process it again.
To release: Shift+F4.
You can also analyse the job status via the F2 button.
Bye -
Following is the table creation script with partitioning:
CREATE TABLE customer_entity_temp (
BRANCH_ID NUMBER (4),
ACTIVE_FROM_YEAR VARCHAR2 (4),
ACTIVE_FROM_MONTH VARCHAR2 (3)
)
partition by range (ACTIVE_FROM_YEAR,ACTIVE_FROM_MONTH)
(partition yr7_1999 values less than ('1999',TO_DATE('Jul','Mon')),
partition yr12_1999 values less than ('1999',TO_DATE('Dec','Mon')));
it gives an error
ORA-14036: partition bound value too large for column
but if I increase the size of the ACTIVE_FROM_MONTH column to 9, the script works and creates the table. Why is that?
Also, by creating a table in this way and populating the partitions, all rows with month less than 'Jul' will go into the yr7_1999 partition, and all rows with month value between 'Jul' and 'Dec' will go into the second partition yr12_1999. Where will the data with month value equal to 'Dec' go?
Please help me solve this problem.
Thanks and regards,
Moloy
Hi,
You declared ACTIVE_FROM_MONTH VARCHAR2(3) and you try to check it against a DATE in your partitioning clause: TO_DATE('Jul','Mon'). So you should first check your data model and what you are trying to achieve exactly.
With such a partition declaration, you will not be able to insert dates from December 1999 included and onward. The bounds are strictly less than (<), not less than or equal (<=), hence such rows can't be inserted. I'd advise you to look at the MAXVALUE keyword and the ENABLE ROW MOVEMENT partitioning clause.
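The string comparison pitfall described above can be seen from the command line: month abbreviations stored in a VARCHAR2 column compare alphabetically, not chronologically, while a zero-padded year-month string orders in calendar order. (These sort calls are only a string-ordering demo, not Oracle syntax.)

```shell
#!/bin/sh
# Month abbreviations compared as plain strings order alphabetically:
# 'Dec' < 'Jan' < 'Jul', even though December is the latest month.
printf 'Jan\nJul\nDec\n' | LC_ALL=C sort
# prints: Dec Jan Jul (one per line)

# A zero-padded year-month string orders chronologically,
# which is what a range partition bound needs:
printf '1999-07\n1999-12\n1999-01\n' | LC_ALL=C sort
# prints: 1999-01 1999-07 1999-12 (one per line)
```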
Regards,
Yoann. -
Different database tables affected during the creation of a PR
Can someone please explain the different database tables affected during the creation of a Purchase Requisition, other than EBAN?
Hi Ajit,
Thanks for the reply.
Kindly let me know how these tables (STXH, STXL) will be affected when the PR is created.
Thanks,
Lina -
No option to specify interval partition when creating table
I am trying to create a table in APPS which has Range partition with interval 1 (interval partition). When I go through the options to create the table, I see only Range, List and Hash partitions. How do I create interval partitions?
Thanks Chris for the update. Where can I find the correct syntax to use? Below is the current content for partition in the .table file.
<entry>
<key>OracleTablePartitions</key>
<value class="oracle.javatools.db.ora.OracleTablePartitions">
<ID class="oracle.javatools.db.IdentifierBasedID">
<name><![CDATA[PARTITION]]></name>
<identifier class="java.lang.String"><![CDATA[8dd284eb-9b8a-4eb9-9f35-2f05b1911094]]></identifier>
<parent class="oracle.javatools.db.TemporaryObjectID">
</parent>
<schemaName><![CDATA[FUSION]]></schemaName>
<type><![CDATA[PARTITION MODEL]]></type>
</ID>
<objectType>PARTITION</objectType>
<partitionColumns>
<partitionColumn class="oracle.javatools.db.IdentifierBasedID">
<name><![CDATA[ATTR_GROUP_ID]]></name>
<identifier class="java.lang.String"><![CDATA[979467ab-e769-42f9-a631-735b24b01670]]></identifier>
<parent class="oracle.javatools.db.IdentifierBasedID">
<name><![CDATA[EGO_ITEM_EFF_B]]></name>
<identifier class="java.lang.String"><![CDATA[3b9592e8-c9d7-4e83-a2b6-9bebec1ea955]]></identifier>
<schemaName><![CDATA[FUSION]]></schemaName>
<type><![CDATA[TABLE]]></type>
</parent>
<schemaName><![CDATA[FUSION]]></schemaName>
<type><![CDATA[COLUMN]]></type>
</partitionColumn>
</partitionColumns>
<partitionType>RANGE</partitionType>
<partitions>
<partition>
<ID class="oracle.javatools.db.IdentifierBasedID">
<name><![CDATA[AG_ZERO]]></name>
<identifier class="java.lang.String"><![CDATA[3a8e802d-41eb-4fc5-bac6-853fb54c0864]]></identifier>
<parent class="oracle.javatools.db.IdentifierBasedID">
<name><![CDATA[PARTITION]]></name>
<identifier class="java.lang.String"><![CDATA[8dd284eb-9b8a-4eb9-9f35-2f05b1911094]]></identifier>
<parent class="oracle.javatools.db.TemporaryObjectID">
</parent>
<schemaName><![CDATA[FUSION]]></schemaName>
<type><![CDATA[PARTITION MODEL]]></type>
</parent>
<schemaName><![CDATA[FUSION]]></schemaName>
<type><![CDATA[PARTITION]]></type>
</ID>
<name><![CDATA[AG_ZERO]]></name>
<compression><![CDATA[NOCOMPRESS]]></compression>
<objectType>PARTITION</objectType>
<partitionType>RANGE</partitionType>
<values>
<value class="java.lang.String"><![CDATA[1]]></value>
</values>
</partition>
</partitions> -
I've recently completed a database upgrade from 10.2.0.3 to 11.2.0.1 using the DBUA.
I've since encountered a slowdown when running a script which drops and recreates a series of ~250 tables. The script normally runs in around 19 seconds. After the upgrade, the script requires ~2 minutes to run.
By chance has anyone encountered something similar?
The problem may be related to the behavior of an "after CREATE on schema" trigger which grants select privileges to a role through a dbms_job call; the behavior differs between a native 10g database and the database that was upgraded from 10g to 11g. I am currently researching this angle.
I will be using the following table creation DDL for this abbreviated test case:
{code}
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
)
tablespace LIVE_DATA;
{code}
When calling the above DDL, an "after CREATE on schema" trigger is fired which schedules a job to immediately run to grant select privilege to a role for the table which was just created:
{code}
create or replace
trigger select_grant
after CREATE on schema
declare
l_str varchar2(255);
l_job number;
begin
if ( ora_dict_obj_type = 'TABLE' ) then
l_str := 'execute immediate "grant select on ' ||
ora_dict_obj_name ||
' to select_role";';
dbms_job.submit( l_job, replace(l_str,'"','''') );
end if;
end;
{code}
Below I've included data on two separate test runs. The first is on the upgraded database and includes optimizer parameters and an abbreviated TKPROF. I've also included the offending sys-generated SQL, which is not issued when the same test is run on a 10g environment set up with a similar test case. The 10g test run's TKPROF is also included below.
The version of the database is 11.2.0.1.
These are the parameters relevant to the optimizer for the test run on the upgraded 11g SID:
{code}
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_capture_sql_plan_baselines boolean FALSE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 11.2.0.1
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
optimizer_use_invisible_indexes boolean FALSE
optimizer_use_pending_statistics boolean FALSE
optimizer_use_sql_plan_baselines boolean TRUE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 8
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 03-11-2010 16:33
SYSSTATS_INFO DSTOP 03-11-2010 17:03
SYSSTATS_INFO FLAGS 0
SYSSTATS_MAIN CPUSPEEDNW 713.978495
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM 1565.746
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED 2310
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
{code}
Output from TKPROF on the 11g SID:
{code}
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
)
tablespace LIVE_DATA
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 4 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 0 4 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 324
{code}
... large section omitted ...
Here is the performance hit portion of the TKPROF on the 11g SID:
{code}
SQL ID: fsbqktj5vw6n9
Plan Hash: 1443566277
select next_run_date, obj#, run_job, sch_job
from
(select decode(bitand(a.flags, 16384), 0, a.next_run_date,
a.last_enabled_time) next_run_date, a.obj# obj#,
decode(bitand(a.flags, 16384), 0, 0, 1) run_job, a.sch_job sch_job from
(select p.obj# obj#, p.flags flags, p.next_run_date next_run_date,
p.job_status job_status, p.class_oid class_oid, p.last_enabled_time
last_enabled_time, p.instance_id instance_id, 1 sch_job from
sys.scheduler$_job p where bitand(p.job_status, 3) = 1 and
((bitand(p.flags, 134217728 + 268435456) = 0) or
(bitand(p.job_status, 1024) <> 0)) and bitand(p.flags, 4096) = 0 and
p.instance_id is NULL and (p.class_oid is null or (p.class_oid is
not null and p.class_oid in (select b.obj# from sys.scheduler$_class b
where b.affinity is null))) UNION ALL select
q.obj#, q.flags, q.next_run_date, q.job_status, q.class_oid,
q.last_enabled_time, q.instance_id, 1 from sys.scheduler$_lightweight_job
q where bitand(q.job_status, 3) = 1 and ((bitand(q.flags, 134217728 +
268435456) = 0) or (bitand(q.job_status, 1024) <> 0)) and
bitand(q.flags, 4096) = 0 and q.instance_id is NULL and (q.class_oid
is null or (q.class_oid is not null and q.class_oid in (select
c.obj# from sys.scheduler$_class c where
c.affinity is null))) UNION ALL select j.job, 0,
from_tz(cast(j.next_date as timestamp), to_char(systimestamp,'TZH:TZM')
), 1, NULL, from_tz(cast(j.next_date as timestamp),
to_char(systimestamp,'TZH:TZM')), NULL, 0 from sys.job$ j where
(j.field1 is null or j.field1 = 0) and j.this_date is null) a order by
1) where rownum = 1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.47 0.47 0 9384 0 1
total 3 0.48 0.48 0 9384 0 1
Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
1 COUNT STOPKEY (cr=9384 pr=0 pw=0 time=0 us)
1 VIEW (cr=9384 pr=0 pw=0 time=0 us cost=5344 size=6615380 card=194570)
1 SORT ORDER BY STOPKEY (cr=9384 pr=0 pw=0 time=0 us cost=5344 size=11479630 card=194570)
194790 VIEW (cr=9384 pr=0 pw=0 time=537269 us cost=2563 size=11479630 card=194570)
194790 UNION-ALL (cr=9384 pr=0 pw=0 time=439235 us)
231 FILTER (cr=68 pr=0 pw=0 time=920 us)
231 TABLE ACCESS FULL SCHEDULER$_JOB (cr=66 pr=0 pw=0 time=690 us cost=19 size=13157 card=223)
1 TABLE ACCESS BY INDEX ROWID SCHEDULER$_CLASS (cr=2 pr=0 pw=0 time=0 us cost=1 size=40 card=1)
1 INDEX UNIQUE SCAN SCHEDULER$_CLASS_PK (cr=1 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 5056)
0 FILTER (cr=3 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL SCHEDULER$_LIGHTWEIGHT_JOB (cr=3 pr=0 pw=0 time=0 us cost=2 size=95 card=1)
0 TABLE ACCESS BY INDEX ROWID SCHEDULER$_CLASS (cr=0 pr=0 pw=0 time=0 us cost=1 size=40 card=1)
0 INDEX UNIQUE SCAN SCHEDULER$_CLASS_PK (cr=0 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 5056)
194559 TABLE ACCESS FULL JOB$ (cr=9313 pr=0 pw=0 time=167294 us cost=2542 size=2529254 card=194558)
{code}
and the totals at the end of the TKPROF on the 11g SID:
{code}
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 4 0
Fetch 0 0.00 0.00 0 0 0 0
total 3 0.00 0.00 0 0 4 0
Misses in library cache during parse: 1
Misses in library cache during execute: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 70 0.00 0.00 0 0 0 0
Execute 85 0.01 0.01 0 62 208 37
Fetch 49 0.48 0.49 0 9490 0 35
total 204 0.51 0.51 0 9552 208 72
Misses in library cache during parse: 5
Misses in library cache during execute: 3
35 user SQL statements in session.
53 internal SQL statements in session.
88 SQL statements in session.
Trace file: 11gSID_ora_17721.trc
Trace file compatibility: 11.1.0.7
Sort options: default
1 session in tracefile.
35 user SQL statements in trace file.
53 internal SQL statements in trace file.
88 SQL statements in trace file.
51 unique SQL statements in trace file.
1590 lines in trace file.
18 elapsed seconds in trace file.
{code}
The version of the database is 10.2.0.3.0.
These are the parameters relevant to the optimizer for the test run on the 10g SID:
{code}
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.3
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 8
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 09-24-2007 11:09
SYSSTATS_INFO DSTOP 09-24-2007 11:09
SYSSTATS_INFO FLAGS 1
SYSSTATS_MAIN CPUSPEEDNW 2110.16949
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
{code}
Now for the TKPROF of a mirrored test environment running on a 10G SID:
{code}
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
)
tablespace LIVE_DATA
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.01 0 2 16 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.01 0.01 0 2 16 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 113
{code}
... large section omitted ...
Totals for the TKPROF on the 10g SID:
{code}
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.02 0 0 0 0
Execute 1 0.00 0.00 0 2 16 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.02 0 2 16 0
Misses in library cache during parse: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 65 0.01 0.01 0 1 32 0
Execute 84 0.04 0.09 20 90 272 35
Fetch 88 0.00 0.10 30 281 0 64
total 237 0.07 0.21 50 372 304 99
Misses in library cache during parse: 38
Misses in library cache during execute: 32
10 user SQL statements in session.
76 internal SQL statements in session.
86 SQL statements in session.
Trace file: 10gSID_ora_32003.trc
Trace file compatibility: 10.01.00
Sort options: default
1 session in tracefile.
10 user SQL statements in trace file.
76 internal SQL statements in trace file.
86 SQL statements in trace file.
43 unique SQL statements in trace file.
949 lines in trace file.
0 elapsed seconds in trace file.
{code}
Edited by: user8598842 on Mar 11, 2010 5:08 PM
So while this certainly isn't the most elegant of solutions, and most assuredly isn't in the realm of supported by Oracle...
I've used the DBMS_IJOB.DROP_USER_JOBS('username'); procedure to remove the 194558 orphaned job entries from the JOB$ table. Don't ask; I've no clue how they all got there, but I've prepared some evil looks to unleash upon certain developers tomorrow morning.
Not being able to reorganize the JOB$ table to free the now-wasted ~67MB of space, I've opted to create a new index on the JOB$ table to sidestep the full table scan:
CREATE INDEX SYS.JOB_F1_THIS_NEXT ON SYS.JOB$ (FIELD1, THIS_DATE, NEXT_DATE) TABLESPACE SYSTEM;
The next option would be to find a way to grant the select privilege to the role without using the aforementioned "after CREATE on schema" trigger and dbms_job call. This method was adopted to cover situations in which a developer manually added a table directly to the database rather than using the provided scripts to recreate their test environment.
I assume that the following quote from the 11gR2 documentation is mistaken, and there is no such beast as "create or replace table" in 11g:
http://download.oracle.com/docs/cd/E11882_01/server.112/e10592/statements_9003.htm#i2061306
"Dropping a table invalidates dependent objects and removes object privileges on the table. If you want to re-create the table, then you must regrant object privileges on the table, re-create the indexes, integrity constraints, and triggers for the table, and respecify its storage parameters. Truncating and replacing have none of these effects. Therefore, removing rows with the TRUNCATE statement or replacing the table with a *CREATE OR REPLACE TABLE* statement can be more efficient than dropping and re-creating a table." -
Vendor Text / Internal notes during Invoice creation
Hello,
We can create a user ID for a Vendor contact using "Employee for Business Partner" under the "Manage Business Partner" option.
With this user ID, while trying to create an Invoice, the Vendor text / Internal notes drop-down does not appear under the "Document" tab at item level. We only see one option, "Comments from Vendor".
But during invoice creation by a regular user, we do see the drop-down for "Vendor Text / Internal notes", and we don't see the option "Comments from Vendor".
Question: Is it possible to have the "Vendor Text / Internal notes" drop-down made available for Vendor Contacts just like regular users?
FYI - We are using the Standalone scenario.
Thanks for your reply in advance.
Regards,
UM.
Hi Jon,
As Yann said, try to use the BBP_DOC_CHECK BAdI.
In this BAdI you can use the function module BBP_PD_SC_GETDETAIL.
Example:
* Read the full document detail, including all attached long texts
CALL FUNCTION 'BBP_PD_SC_GETDETAIL'
  EXPORTING
    i_guid     = iv_doc_guid
  IMPORTING
    e_header   = wa_e_header
  TABLES
    e_item     = e_item
    e_account  = e_account
    e_longtext = e_longtext
    e_messages = e_messages.
This FM returns all the long texts attached to the document.
Regards,
Marcin Gajewski -
Copying STO PO pricing condition to Billing During Billing creation
Hi,
The MM and SD pricing conditions are not similar, and I can't use "Copy PO basic price" in the billing document.
I was advised to try USEREXIT_PRICING_PREPARE_TKOMP or USEREXIT_PRICING_PREPARE_TKOMK.
How do I copy the pricing condition from the STO PO to the billing document during billing creation?
Is there an FM I can use to extract the pricing condition, given the STO PO?
thanks,
NC
Hi
If the option from Raghavendra doesn't work for you (parameters in the copy rule for the invoice), try VOFM subroutines (condition formula for alternative calculation type) in the pricing procedure. Surely it will be slower, but it should work. How? In KOMP you have the delivery number and item in the fields VGBEL and VGPOS, so search for the PO number in the same way. When you have the PO number, use the value of its KNUMV field to look up the condition value in table KONV.
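The lookup chain just described can be sketched schematically, with plain Python dicts standing in for the SAP tables (the data, the key layout, and the condition type ZSTO are all invented for illustration; in ABAP each step would be a SELECT on the corresponding table):

```python
# Dicts standing in for the tables named in the post:
# delivery item (VGBEL/VGPOS) -> PO, PO header -> KNUMV, KONV -> value.
delivery_items = {("80000001", "000010"): ("4500000123", "00010")}  # like LIPS
po_headers = {"4500000123": "0000012345"}                            # PO -> KNUMV
konv = {("0000012345", "ZSTO"): 150.00}                              # KONV value

def sto_condition_value(vgbel, vgpos, kschl):
    """Follow delivery item -> PO -> KNUMV -> condition value."""
    po, _item = delivery_items[(vgbel, vgpos)]
    knumv = po_headers[po]
    return konv[(knumv, kschl)]

print(sto_condition_value("80000001", "000010", "ZSTO"))  # 150.0
```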
I hope this helps you
Regards
Eduardo
PD: I forgot. The user exits that you mention are for populating KOMP/KOMK with customer fields. See SAP Note 531835 - Using field PSTYV in the condition access for further information.
Edited by: E_Hinojosa on May 16, 2011 9:21 AM -
User Exit/ Badi for Changing Quant parameters during TO Creation
Hi Gurus,
Could you please advise which user exit/BAdI can be used for changing quant data during TO creation.
User requirement: use the "Recipient" field in MIGO as a key value for FIFO in WM during goods issue. The recipient is copied into the TR and TO (standard SAP functionality). For stock removal based on the recipient value, we need to copy this value into the quant data field Certificate Number (LQUA-ZEUGN).
I will highly appreciate reply from Gurus.
Regards,
Gupta M
Hi Manish,
Use the Exit MWMTO001 for this purpose and modify the table accordingly. This will solve your problem.
Thanks,
Shibashis -
Error message during cube creation
Hi expert,
The system shows the following error message during cube creation; please advise how to solve this problem. Thanks.
Define the characteristics of the validity table for non-cumulatives
Message no. R7846
Diagnosis
The InfoCube contains non-cumulative values. A validity table is created for these non-cumulative values, in which the time interval is stored, for which the non-cumulative values are valid.
The validity table automatically contains the "most detailed" of the selected time characteristics (if such a characteristic does not exist, it must be defined by the user, that is, transferred into the InfoCube).
Besides the most detailed time characteristic, additional characteristics for the InfoCube can be included in the validity table:
Examples of such characteristics are:
A "plan/actual" indicator, if differing time references exist for plan and actual values (actual values for the current fiscal year, plan values already for the next fiscal year),
the characteristic "plant", if the non-cumulative values are reported per plant, and it can occur that a plant already exists for plant A for a particular time period, but not yet for plant B.
Procedure
Define all additional characteristics that should be contained in the validity table by selecting the characteristics.
In the example above, the characteristics "plan/actual" and "plant" must be selected.
The system automatically generates the validity table according to the definition made. This table is automatically updated when data is loaded.
Hi,
Go to the Extras tab in the definition of the cube, select "Maintain Non-Cumulatives", and there select the Plant checkbox. You can also check Material, but it is not recommended. Now try to activate the cube. I think it will work for you. -
Automatic Partner Determination Std Contact Person during Activity Creation
Hi ,
There is a requirement in my project that the standard contact person of the sold-to party should be determined automatically during activity creation. I have completed the partner function configuration, and the standard contact person is now being successfully determined as the partner.
However, there is a problem in the scenario where a contact person other than the standard contact person is to be determined during activity creation. In that case, the option to select the other person is not offered through a popup window; instead, the system automatically populates the next contact person based on the BP ID (next in the number range).
Is there any way to incorporate the normal procedure of selecting one contact person from a list in a popup window, once the standard contact person has been removed from the contact person field in the business transaction?
<< Moderator message - Please do not promise points >>
Regards,
Maroof
Edited by: Rob Burbank on Jan 5, 2011 5:06 PM
Hello Maroof,
you should first check the standard relationship flag of the account. That means: open the account and go to the contact person relationship.
There you will find the standard relationship flag, which is what the system uses to determine the main CP.
Furthermore, please check the partner determination procedure:
For each partner function you can enter a value in the fields Maximum and Selection Limit. Please consult the F1 help for the field Selection Limit, where you can find an explanation of when and how the selection popup is triggered:
Example 1
You enter 3 in this field.
When the system finds three or more partners who could fulfill this partner function, it displays a selection list.
You choose from the list which partners to enter in the transaction.
Example 2
You do not mark this field, but enter 5 as the Maximum for this partner function.
When the system finds six or more appropriate partners it displays a selection list.
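The two examples describe a simple threshold rule; here is a hedged sketch of that logic as I read the paraphrased F1 help (my own restatement, not actual SAP code):

```python
# Popup rule per the two examples above: a selection list appears when the
# number of partners found reaches the Selection Limit, or, if no Selection
# Limit is set, when it exceeds the Maximum.
def shows_selection_list(found, maximum=None, selection_limit=None):
    if selection_limit is not None:
        return found >= selection_limit
    if maximum is not None:
        return found > maximum
    return False

print(shows_selection_list(3, selection_limit=3))  # Example 1: True
print(shows_selection_list(6, maximum=5))          # Example 2: True
print(shows_selection_list(5, maximum=5))          # at the maximum: False
```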
Hopefully this information helps you.
Regards
Rene -
Serial Number validation during Delivery Creation itself(PGI-system checks)
Process:
Sales Order -> PR (Purchase Order) -> GR (new serial numbers are created automatically or manually keyed in)
Once the goods receipt is received, we do the (SO) Delivery -> Serial Number Assignment -> Post Goods Issue.
Issue:
The current serial number profile management does not do a valid serial number check during delivery creation, where we do the serial number assignment.
Only during post goods issue does the check for valid serial numbers happen; this is too late in the game for the business, as there is a time lag of 3 days between delivery creation and PGI.
The reason being: we don't check the "Existing Stock Check" option (which does a serial number validation during delivery creation as well) under:
SPRO > Plant Maintenance and Customer Service > Master Data in Plant Maintenance and Customer Service > Technical Objects > Serial Number Management > Define Serial Number Profiles
If we check this option, the business requirement of a valid serial number check during delivery creation is met; however, we then cannot create new serial numbers during purchase order goods receipt.
QUESTION:
Can we have the system check for valid serial numbers from stock during delivery creation and serial number assignment, and also create new serial numbers during purchase order goods receipt?
Hi
1. In standard SAP it is not possible to check the serial number during delivery creation; through an enhancement it can be done.
2. During GR for a PO, serial numbers can be created.
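For point 1, a sketch of the check such an enhancement could perform at delivery creation (the names and data here are invented for illustration; a real enhancement would read the serial number master data instead):

```python
# Toy stand-in for "serial numbers currently in stock per material".
stock_serials = {"MAT-100": {"SN001", "SN002", "SN003"}}

def validate_assignment(material, assigned):
    """Return the assigned serial numbers that are NOT in stock.

    An empty result means the assignment is valid; a non-empty result is
    what the enhancement would raise as an error at delivery creation,
    instead of waiting for PGI to catch it.
    """
    return sorted(set(assigned) - stock_serials.get(material, set()))

print(validate_assignment("MAT-100", ["SN001", "SN002"]))  # [] -> valid
print(validate_assignment("MAT-100", ["SN001", "SN999"]))  # ['SN999'] -> reject
```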
Rgds
Ramesh -
Automating Goods issue during Delivery creation
Hi all,
We have a requirement to automate goods issue for certain types of orders when the delivery is created. The orders that need an automatic goods issue during delivery creation are identified by certain plants. These plants are linked to a particular output type, and the output type routine is the standard program RVADEK01 with some additional code for automating the goods issue.
We have a custom table that holds the status of orders, and there is code in user exit userexit_save_document_prepare that sets the order status to closed once the goods issue is done.
But when the delivery is saved in this case, i.e. when an automatic goods issue needs to happen, the output type code has not yet been executed when the flow reaches this user exit, so the goods issue is not done and the custom table is not updated with the closed status. We therefore need to find a place where we can update the status of the order in that table.
The output type code is not executed even before the other user exit, userexit_save_document. It gets executed, and the goods issue done, after userexit_save_document, when the COMMIT statement runs in the subroutine BELEG_SICHERN_POST in the include FV50XF0B_BELEG_SICHERN.
I need help finding out whether any user exit or BAdI is called after this COMMIT statement, so that I can add my code there to close the order status in my custom table. Just after this COMMIT the goods issue happens, and the VBFA table gets the 'R' records for the goods issue.
Thanks,
Packia
Dear Siva,
As mentioned yesterday, I changed the language from DE to EN to match the settings of the other shipping points in table V_TVST; this did not bring the solution.
Please let me summarize; I am really desperate here:
This is only IM related, not WM.
Picking lists are not printed for any shipping point from this warehouse, which is just a small subsidiary of my customer in Finland.
The issue is not automatic PGI.
VP01SHP has not been configured for any shipping point, yet we do get the PR everywhere except for the new shipping point.
In the deliveries of correctly processed shipping points I do not find any picking output type.
The item category in the new shipping point is equal to the item category in the already existing shipping points, so no configuration is needed here.
There is no picking block active.
PR creation happens once I enter the picked quantity in the delivery in VL02N. This is the part that we need to have automated.
Can you please try to help me out?
Tnx & regards,
Chris -
Exchange rate issue during PO creation
Dear Friends
I am getting the below error whenever I try to create an import PO:
Enter rate USD / INR rate type for 28.10.2010 in the system settings
Message no. SG105
I have entered the rates in OB08 - both direct and indirect exchange rates for rate type M.
I have also checked OBBS and made entries for the date 28.10.2010.
But still I am getting the error
How do I proceed
Samuel
Edited by: samuel mendis on Oct 28, 2010 1:07 PM
Guys
We have solved this issue. Even though we were getting the SG105 error, the real problem was somewhere else.
We have never used CIN in our organisation, as it is not applicable. What we did was use the import pricing procedure, which was copied from "IN: Purchasing for Imported materials".
Whenever we tried to create an import PO, it validated the CVD entries that are made in the company code settings for Tax on Goods Movements. Once I maintained a dummy G/L account, the error was gone.
Why it validated CVD during PO creation we never understood, since in my opinion CVD capturing is done during the invoice verification stage and not during the PO stage. Either way, we have already reported these details to the SAP support bench.
Thanks
Samuel