DW Tables Creation - Columns Missing
Hi All,
When I run the ETL for the execution plan containing the SCM-related tasks, some tasks fail, most of them because columns are missing from the DW tables.
I dropped and recreated the DW tables (Tools -> ETLMgmt -> Configure), but it didn't help.
I inspected the ctl file, oracle_bi_dw.ctl, in \OracleBI\DAC\conf\sqlgen\ctl-file, which I believe is used to create the DW tables. The CTL file has definitions for all the tables needed, but certain columns are missing, and because of this my ETL fails. I tried creating the columns manually after each ETL task failed.
I need guidance on how to recreate the tables, or which configuration step to redo, so that the correct CTL file is used for the DW table creation.
Thanks,
Raghav
Hi All,
Re-installing BI Apps did not work. I guess the ctl file is created when you import the DAC metadata.
I copied the ctl file from a fellow developer, then dropped and recreated all the tables. All the DW table definitions referenced from the new ctl file are fine now.
From now on I will always keep a backup of a known-good ctl file to deal with such problems in the future :)
Thanks,
Raghav
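Since the fix was ultimately to keep a known-good copy of oracle_bi_dw.ctl, that backup habit can be sketched as a small shell helper. This is a minimal sketch; the path in the comment is the one quoted in the thread, and the helper name is made up for illustration:

```shell
#!/bin/sh
# Hypothetical helper: copy a ctl file to a date-stamped backup next to it.
# The path quoted in the thread is \OracleBI\DAC\conf\sqlgen\ctl-file\oracle_bi_dw.ctl;
# adjust for your own install and platform.
backup_ctl() {
  src="$1"
  # refuse to back up a file that does not exist
  [ -f "$src" ] || { echo "no such ctl file: $src" >&2; return 1; }
  cp "$src" "$src.bak.$(date +%Y%m%d)"
}
# Example (path assumed from the thread):
# backup_ctl /OracleBI/DAC/conf/sqlgen/ctl-file/oracle_bi_dw.ctl
```

Running it once after a clean DAC metadata import gives you a reference copy to diff against whenever the generated table definitions look wrong.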
Similar Messages
-
I've recently completed a database upgrade from 10.2.0.3 to 11.2.0.1 using the DBUA.
I've since encountered a slowdown when running a script which drops and recreates a series of ~250 tables. The script normally runs in around 19 seconds. After the upgrade, the script requires ~2 minutes to run.
By chance has anyone encountered something similar?
The problem may be related to the behavior of an "after CREATE on schema" trigger which grants select privileges to a role through a dbms_job call; the behavior differs between the 10g environment and the database that was upgraded from 10g to 11g. I am currently researching this angle.
I will be using the following table creation DDL for this abbreviated test case:
{code}
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
)
tablespace LIVE_DATA;
{code}
When calling the above DDL, an "after CREATE on schema" trigger is fired which schedules a job to run immediately and grant the select privilege to a role on the table which was just created:
{code}
create or replace
trigger select_grant
after CREATE on schema
declare
l_str varchar2(255);
l_job number;
begin
if ( ora_dict_obj_type = 'TABLE' ) then
l_str := 'execute immediate "grant select on ' ||
ora_dict_obj_name ||
' to select_role";';
dbms_job.submit( l_job, replace(l_str,'"','''') );
end if;
end;
{code}
Below I've included data on two separate test runs. The first is on the upgraded database and includes optimizer parameters and an abbreviated TKPROF. I've also included the offending sys-generated SQL, which is not issued when the same test is run on a 10g environment set up with a similar test case. The 10g test run's TKPROF is also included below.
The version of the database is 11.2.0.1.
These are the parameters relevant to the optimizer for the test run on the upgraded 11g SID:
{code}
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_capture_sql_plan_baselines boolean FALSE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 11.2.0.1
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
optimizer_use_invisible_indexes boolean FALSE
optimizer_use_pending_statistics boolean FALSE
optimizer_use_sql_plan_baselines boolean TRUE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 8
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 03-11-2010 16:33
SYSSTATS_INFO DSTOP 03-11-2010 17:03
SYSSTATS_INFO FLAGS 0
SYSSTATS_MAIN CPUSPEEDNW 713.978495
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM 1565.746
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED 2310
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
{code}
Output from TKPROF on the 11g SID:
{code}
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
)
tablespace LIVE_DATA
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 4 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 0 4 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 324
{code}
... large section omitted ...
Here is the performance hit portion of the TKPROF on the 11g SID:
{code}
SQL ID: fsbqktj5vw6n9
Plan Hash: 1443566277
select next_run_date, obj#, run_job, sch_job
from
(select decode(bitand(a.flags, 16384), 0, a.next_run_date,
a.last_enabled_time) next_run_date, a.obj# obj#,
decode(bitand(a.flags, 16384), 0, 0, 1) run_job, a.sch_job sch_job from
(select p.obj# obj#, p.flags flags, p.next_run_date next_run_date,
p.job_status job_status, p.class_oid class_oid, p.last_enabled_time
last_enabled_time, p.instance_id instance_id, 1 sch_job from
sys.scheduler$_job p where bitand(p.job_status, 3) = 1 and
((bitand(p.flags, 134217728 + 268435456) = 0) or
(bitand(p.job_status, 1024) <> 0)) and bitand(p.flags, 4096) = 0 and
p.instance_id is NULL and (p.class_oid is null or (p.class_oid is
not null and p.class_oid in (select b.obj# from sys.scheduler$_class b
where b.affinity is null))) UNION ALL select
q.obj#, q.flags, q.next_run_date, q.job_status, q.class_oid,
q.last_enabled_time, q.instance_id, 1 from sys.scheduler$_lightweight_job
q where bitand(q.job_status, 3) = 1 and ((bitand(q.flags, 134217728 +
268435456) = 0) or (bitand(q.job_status, 1024) <> 0)) and
bitand(q.flags, 4096) = 0 and q.instance_id is NULL and (q.class_oid
is null or (q.class_oid is not null and q.class_oid in (select
c.obj# from sys.scheduler$_class c where
c.affinity is null))) UNION ALL select j.job, 0,
from_tz(cast(j.next_date as timestamp), to_char(systimestamp,'TZH:TZM')
), 1, NULL, from_tz(cast(j.next_date as timestamp),
to_char(systimestamp,'TZH:TZM')), NULL, 0 from sys.job$ j where
(j.field1 is null or j.field1 = 0) and j.this_date is null) a order by
1) where rownum = 1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.47 0.47 0 9384 0 1
total 3 0.48 0.48 0 9384 0 1
Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
1 COUNT STOPKEY (cr=9384 pr=0 pw=0 time=0 us)
1 VIEW (cr=9384 pr=0 pw=0 time=0 us cost=5344 size=6615380 card=194570)
1 SORT ORDER BY STOPKEY (cr=9384 pr=0 pw=0 time=0 us cost=5344 size=11479630 card=194570)
194790 VIEW (cr=9384 pr=0 pw=0 time=537269 us cost=2563 size=11479630 card=194570)
194790 UNION-ALL (cr=9384 pr=0 pw=0 time=439235 us)
231 FILTER (cr=68 pr=0 pw=0 time=920 us)
231 TABLE ACCESS FULL SCHEDULER$_JOB (cr=66 pr=0 pw=0 time=690 us cost=19 size=13157 card=223)
1 TABLE ACCESS BY INDEX ROWID SCHEDULER$_CLASS (cr=2 pr=0 pw=0 time=0 us cost=1 size=40 card=1)
1 INDEX UNIQUE SCAN SCHEDULER$_CLASS_PK (cr=1 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 5056)
0 FILTER (cr=3 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL SCHEDULER$_LIGHTWEIGHT_JOB (cr=3 pr=0 pw=0 time=0 us cost=2 size=95 card=1)
0 TABLE ACCESS BY INDEX ROWID SCHEDULER$_CLASS (cr=0 pr=0 pw=0 time=0 us cost=1 size=40 card=1)
0 INDEX UNIQUE SCAN SCHEDULER$_CLASS_PK (cr=0 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 5056)
194559 TABLE ACCESS FULL JOB$ (cr=9313 pr=0 pw=0 time=167294 us cost=2542 size=2529254 card=194558)
{code}
and the totals at the end of the TKPROF on the 11g SID:
{code}
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 4 0
Fetch 0 0.00 0.00 0 0 0 0
total 3 0.00 0.00 0 0 4 0
Misses in library cache during parse: 1
Misses in library cache during execute: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 70 0.00 0.00 0 0 0 0
Execute 85 0.01 0.01 0 62 208 37
Fetch 49 0.48 0.49 0 9490 0 35
total 204 0.51 0.51 0 9552 208 72
Misses in library cache during parse: 5
Misses in library cache during execute: 3
35 user SQL statements in session.
53 internal SQL statements in session.
88 SQL statements in session.
Trace file: 11gSID_ora_17721.trc
Trace file compatibility: 11.1.0.7
Sort options: default
1 session in tracefile.
35 user SQL statements in trace file.
53 internal SQL statements in trace file.
88 SQL statements in trace file.
51 unique SQL statements in trace file.
1590 lines in trace file.
18 elapsed seconds in trace file.
{code}
The version of the database is 10.2.0.3.0.
These are the parameters relevant to the optimizer for the test run on the 10g SID:
{code}
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.3
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 8
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 09-24-2007 11:09
SYSSTATS_INFO DSTOP 09-24-2007 11:09
SYSSTATS_INFO FLAGS 1
SYSSTATS_MAIN CPUSPEEDNW 2110.16949
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
{code}
Now for the TKPROF of a mirrored test environment running on a 10G SID:
{code}
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
)
tablespace LIVE_DATA
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.01 0 2 16 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.01 0.01 0 2 16 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 113
{code}
... large section omitted ...
Totals for the TKPROF on the 10g SID:
{code}
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.02 0 0 0 0
Execute 1 0.00 0.00 0 2 16 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.02 0 2 16 0
Misses in library cache during parse: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 65 0.01 0.01 0 1 32 0
Execute 84 0.04 0.09 20 90 272 35
Fetch 88 0.00 0.10 30 281 0 64
total 237 0.07 0.21 50 372 304 99
Misses in library cache during parse: 38
Misses in library cache during execute: 32
10 user SQL statements in session.
76 internal SQL statements in session.
86 SQL statements in session.
Trace file: 10gSID_ora_32003.trc
Trace file compatibility: 10.01.00
Sort options: default
1 session in tracefile.
10 user SQL statements in trace file.
76 internal SQL statements in trace file.
86 SQL statements in trace file.
43 unique SQL statements in trace file.
949 lines in trace file.
0 elapsed seconds in trace file.
{code}
Edited by: user8598842 on Mar 11, 2010 5:08 PM
So while this certainly isn't the most elegant of solutions, and most assuredly isn't supported by Oracle...
I've used the DBMS_IJOB.DROP_USER_JOBS('username') procedure to remove the 194558 orphaned job entries from the job$ table. Don't ask; I've no clue how they all got there, but I've prepared some evil looks to unleash upon certain developers tomorrow morning.
Not being able to reorganize the JOB$ table to free the now-wasted ~67MB of space, I've opted to create a new index on the JOB$ table to sidestep the full table scan.
CREATE INDEX SYS.JOB_F1_THIS_NEXT ON SYS.JOB$ (FIELD1, THIS_DATE, NEXT_DATE) TABLESPACE SYSTEM;
The next option would be to find a way to grant the select privilege to the role without using the aforementioned "after CREATE on schema" trigger and dbms_job call. This method was adopted to cover situations in which a developer manually added a table directly to the database rather than using the provided scripts to recreate their test environment.
I assume that the following quote from the 11gR2 documentation is mistaken, and there is no such beast as "create or replace table" in 11g:
http://download.oracle.com/docs/cd/E11882_01/server.112/e10592/statements_9003.htm#i2061306
"Dropping a table invalidates dependent objects and removes object privileges on the table. If you want to re-create the table, then you must regrant object privileges on the table, re-create the indexes, integrity constraints, and triggers for the table, and respecify its storage parameters. Truncating and replacing have none of these effects. Therefore, removing rows with the TRUNCATE statement or replacing the table with a *CREATE OR REPLACE TABLE* statement can be more efficient than dropping and re-creating a table." -
Bad file is not created during the external table creation.
Hello Experts,
I have created a script for an external table in an Oracle 10g DB. Everything is working fine except that it does not create the bad file, although it does create the log file. I can't figure out what the issue is. Because of this my shell script is failing and the entire program fails. I am attaching the table creation script, the shell script where it is referenced, and the error. Kindly let me know if something is missing. Thanks in advance.
Table Creation Script:
-------------------------------
create table RGIS_TCA_DATA_EXT
(
guid VARCHAR2(250),
badge VARCHAR2(250),
scheduled_store_id VARCHAR2(250),
parent_event_id VARCHAR2(250),
event_id VARCHAR2(250),
organization_number VARCHAR2(250),
customer_number VARCHAR2(250),
store_number VARCHAR2(250),
inventory_date VARCHAR2(250),
full_name VARCHAR2(250),
punch_type VARCHAR2(250),
punch_start_date_time VARCHAR2(250),
punch_end_date_time VARCHAR2(250),
event_meet_site_id VARCHAR2(250),
vehicle_number VARCHAR2(250),
vehicle_description VARCHAR2(250),
vehicle_type VARCHAR2(250),
is_owner VARCHAR2(250),
driver_passenger VARCHAR2(250),
mileage VARCHAR2(250),
adder_code VARCHAR2(250),
bonus_qualifier_code VARCHAR2(250),
store_accuracy VARCHAR2(250),
store_length VARCHAR2(250),
badge_input_type VARCHAR2(250),
source VARCHAR2(250),
created_by VARCHAR2(250),
created_date_time VARCHAR2(250),
updated_by VARCHAR2(250),
updated_date_time VARCHAR2(250),
approver_badge_id VARCHAR2(250),
approver_name VARCHAR2(250),
orig_guid VARCHAR2(250),
edit_type VARCHAR2(250)
)
organization external
(
type ORACLE_LOADER
default directory ETIME_LOAD_DIR
access parameters
(
RECORDS DELIMITED BY NEWLINE
BADFILE ETIME_LOAD_DIR:'tstlms.bad'
LOGFILE ETIME_LOAD_DIR:'tstlms.log'
READSIZE 1048576
FIELDS TERMINATED BY '|'
MISSING FIELD VALUES ARE NULL(
GUID
,BADGE
,SCHEDULED_STORE_ID
,PARENT_EVENT_ID
,EVENT_ID
,ORGANIZATION_NUMBER
,CUSTOMER_NUMBER
,STORE_NUMBER
,INVENTORY_DATE char date_format date mask "YYYYMMDD HH24:MI:SS"
,FULL_NAME
,PUNCH_TYPE
,PUNCH_START_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
,PUNCH_END_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
,EVENT_MEET_SITE_ID
,VEHICLE_NUMBER
,VEHICLE_DESCRIPTION
,VEHICLE_TYPE
,IS_OWNER
,DRIVER_PASSENGER
,MILEAGE
,ADDER_CODE
,BONUS_QUALIFIER_CODE
,STORE_ACCURACY
,STORE_LENGTH
,BADGE_INPUT_TYPE
,SOURCE
,CREATED_BY
,CREATED_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
,UPDATED_BY
,UPDATED_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
,APPROVER_BADGE_ID
,APPROVER_NAME
,ORIG_GUID
,EDIT_TYPE
)
)
location (ETIME_LOAD_DIR:'tstlms.dat')
)
reject limit UNLIMITED;
Shell Script:
----------------
version=1.0
umask 000
DATE=`date +%Y%m%d%H%M%S`
TIME=`date +"%H%M%S"`
SOURCE=`hostname`
fcp_login=`echo $1|awk '{print $3}'|sed 's/"//g'|awk -F= '{print $2}'`
fcp_reqid=`echo $1|awk '{print $2}'|sed 's/"//g'|awk -F= '{print $2}'`
TXT1_PATH=/home/ac1/oracle/in/tsdata
TXT2_PATH=/home/ac2/oracle/in/tsdata
ARCH1_PATH=/home/ac1/oracle/in/tsdata
ARCH2_PATH=/home/ac2/oracle/in/tsdata
DEST_PATH=/home/custom/sched/in
PROGLOG=/home/custom/sched/logs/rgis_tca_to_tlms_create.sh.log
PROGNAME=`basename $0`
PROGPATH=/home/custom/sched/scripts
cd $TXT2_PATH
FILELIST2="`ls -lrt tstlmsedits*.dat |awk '{print $9}'`"
NO_OF_FILES2="`ls -lrt tstlmsedits*.dat |awk '{print $9}'|wc -l`"
> $DEST_PATH/tstlmsedits.dat
for i in $FILELIST2
do
cat $i >> $DEST_PATH/tstlmsedits.dat
printf "\n" >> $DEST_PATH/tstlmsedits.dat
mv $i $i.$DATE
#mv $i $TXT2_PATH/test/.
mv $i.$DATE $TXT2_PATH/test/.
done
if test $NO_OF_FILES2 -eq 0
then
echo " no tstlmsedits.dat file exists " >> $PROGLOG
else
echo "created dat file tstlmsedits.dat at $DATE" >> $PROGLOG
echo "-------------------------------------------" >> $PROGLOG
fi
NO_OF_FILES1="`ls -lrt tstlms*.dat |awk '{print $9}'|wc -l`"
FILELIST1="`ls -lrt tstlms*.dat |awk '{print $9}'`"
> $DEST_PATH/tstlms.dat
for i in $FILELIST1
do
cat $i >> $DEST_PATH/tstlms.dat
printf "\n" >> $DEST_PATH/tstlms.dat
mv $i $i.$DATE
# mv $i $TXT2_PATH/test/.
mv $i.$DATE $TXT2_PATH/test/.
done
if test $NO_OF_FILES1 -eq 0
then
echo " no tstlms.dat file exists " >> $PROGLOG
else
echo "created dat file tstlms.dat at $DATE" >> $PROGLOG
fi
cd $TXT1_PATH
FILELIST3="`ls -lrt tstlmsedits*.dat |awk '{print $9}'`"
NO_OF_FILES3="`ls -lrt tstlmsedits*.dat |awk '{print $9}'|wc -l`"
> $DEST_PATH/tstlmsedits.dat
for i in $FILELIST3
do
cat $i >> $DEST_PATH/tstlmsedits.dat
printf "\n" >> $DEST_PATH/tstlmsedits.dat
mv $i $i.$DATE
#mv $i $TXT1_PATH/test/.
mv $i.$DATE $TXT1_PATH/test/.
done
if test $NO_OF_FILES3 -eq 0
then
echo " no tstlmsedits.dat file exists " >> $PROGLOG
else
echo "created dat file tstlmsedits.dat at $DATE" >> $PROGLOG
echo "-------------------------------------------" >> $PROGLOG
fi
NO_OF_FILES4="`ls -lrt tstlms*.dat |awk '{print $9}'|wc -l`"
FILELIST4="`ls -lrt tstlms*.dat |awk '{print $9}'`"
> $DEST_PATH/tstlms.dat
for i in $FILELIST4
do
cat $i >> $DEST_PATH/tstlms.dat
printf "\n" >> $DEST_PATH/tstlms.dat
mv $i $i.$DATE
# mv $i $TXT1_PATH/test/.
mv $i.$DATE $TXT1_PATH/test/.
done
if test $NO_OF_FILES4 -eq 0
then
echo " no tstlms.dat file exists " >> $PROGLOG
else
echo "created dat file tstlms.dat at $DATE" >> $PROGLOG
fi
#connecting to oracle to generate bad files
sqlplus -s $fcp_login<<EOF
select count(*) from rgis_tca_data_ext;
select count(*) from rgis_tca_data_history_ext;
exit;
EOF
#counting the records in files
tot_rec_in_tstlms=`wc -l $DEST_PATH/tstlms.dat | awk ' { print $1 } '`
tot_rec_in_tstlmsedits=`wc -l $DEST_PATH/tstlmsedits.dat | awk ' { print $1 } '`
tot_rec_in_tstlms_bad=`wc -l $DEST_PATH/tstlms.bad | awk ' { print $1 } '`
tot_rec_in_tstlmsedits_bad=`wc -l $DEST_PATH/tstlmsedits.bad | awk ' { print $1 } '`
#updating log table
echo "pl/sql block started"
sqlplus -s $fcp_login<<EOF
define tot_rec_in_tstlms = '$tot_rec_in_tstlms';
define tot_rec_in_tstlmsedits = '$tot_rec_in_tstlmsedits';
define tot_rec_in_tstlms_bad = '$tot_rec_in_tstlms_bad';
define tot_rec_in_tstlmsedits_bad='$tot_rec_in_tstlmsedits_bad';
define fcp_reqid ='$fcp_reqid';
declare
l_tstlms_file_id number := null;
l_tstlmsedits_file_id number := null;
l_tot_rec_in_tstlms number := 0;
l_tot_rec_in_tstlmsedits number := 0;
l_tot_rec_in_tstlms_bad number := 0;
l_tot_rec_in_tstlmsedits_bad number := 0;
l_request_id fnd_concurrent_requests.request_id%type;
l_start_date fnd_concurrent_requests.actual_start_date%type;
l_end_date fnd_concurrent_requests.actual_completion_date%type;
l_conc_prog_name fnd_concurrent_programs.concurrent_program_name%type;
l_requested_by fnd_concurrent_requests.requested_by%type;
l_requested_date fnd_concurrent_requests.request_date%type;
begin
--getting concurrent request details
begin
SELECT fcp.concurrent_program_name,
fcr.request_id,
fcr.actual_start_date,
fcr.actual_completion_date,
fcr.requested_by,
fcr.request_date
INTO l_conc_prog_name,
l_request_id,
l_start_date,
l_end_date,
l_requested_by,
l_requested_date
FROM fnd_concurrent_requests fcr, fnd_concurrent_programs fcp
WHERE fcp.concurrent_program_id = fcr.concurrent_program_id
AND fcr.request_id = &fcp_reqid; --fnd_global.conc_request_id();
exception
when no_data_found then
fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
fnd_file.put_line(fnd_file.log, 'No data found for request_id');
fnd_file.put_line(fnd_file.log, sqlerrm);
raise_application_error(-20001,
'Error occured when executing RGIS_TCA_TO_TLMS_CREATE.sh ' ||
sqlerrm);
when others then
fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
fnd_file.put_line(fnd_file.log,
'Error occured when retrieving request_id request_id');
fnd_file.put_line(fnd_file.log, sqlerrm);
raise_application_error(-20001,
'Error occured when executing RGIS_TCA_TO_TLMS_CREATE.sh ' ||
sqlerrm);
end;
--calling ins_or_upd_tca_process_log to update log table for tstlms.dat file
begin
rgis_tca_to_tlms_process.ins_or_upd_tca_process_log
(l_tstlms_file_id,
'tstlms.dat',
l_conc_prog_name,
l_request_id,
l_start_date,
l_end_date,
&tot_rec_in_tstlms,
&tot_rec_in_tstlms_bad,
null,
null,
null,
null,
null,
null,
null,
l_requested_by,
l_requested_date,
null,
null,
null,
null,
null);
exception
when others then
fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
fnd_file.put_line(fnd_file.log,
'Error occured when executing rgis_tca_to_tlms_process.ins_or_upd_tca_process_log for tstlms file');
fnd_file.put_line(fnd_file.log, sqlerrm);
end;
--calling ins_or_upd_tca_process_log to update log table for tstlmsedits.dat file
begin
rgis_tca_to_tlms_process.ins_or_upd_tca_process_log
(l_tstlmsedits_file_id,
'tstlmsedits.dat',
l_conc_prog_name,
l_request_id,
l_start_date,
l_end_date,
&tot_rec_in_tstlmsedits,
&tot_rec_in_tstlmsedits_bad,
null,
null,
null,
null,
null,
null,
null,
l_requested_by,
l_requested_date,
null,
null,
null,
null,
null);
exception
when others then
fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
fnd_file.put_line(fnd_file.log,
'Error occured when executing rgis_tca_to_tlms_process.ins_or_upd_tca_process_log for tstlmsedits file');
fnd_file.put_line(fnd_file.log, sqlerrm);
end;
end;
exit;
EOF
echo "rgis_tca_to_tlms_process.sql started"
sqlplus -s $fcp_login @$SCHED_TOP/sql/rgis_tca_to_tlms_process.sql $fcp_reqid
echo "rgis_tca_to_tlms_process.sql ended"
exit;
Error:
----------------------------------
RGIS Scheduling: Version : UNKNOWN
Copyright (c) 1979, 1999, Oracle Corporation. All rights reserved.
TCATLMS module: TCA To TLMS Import Process
Current system time is 18-AUG-2011 06:13:27
COUNT(*)
16
COUNT(*)
25
wc: cannot open /home/custom/sched/in/tstlms.bad
wc: cannot open /home/custom/sched/in/tstlmsedits.bad
pl/sql block started
old 33: AND fcr.request_id = &fcp_reqid; --fnd_global.conc_request_id();
new 33: AND fcr.request_id = 18661823; --fnd_global.conc_request_id();
old 63: &tot_rec_in_tstlms,
new 63: 16,
old 64: &tot_rec_in_tstlms_bad,
new 64: ,
old 97: &tot_rec_in_tstlmsedits,
new 97: 25,
old 98: &tot_rec_in_tstlmsedits_bad,
new 98: ,
ERROR at line 64:
ORA-06550: line 64, column 4:
PLS-00103: Encountered the symbol "," when expecting one of the following:
( - + case mod new not null others <an identifier>
<a double-quoted delimited-identifier> <a bind variable> avg
count current exists max min prior sql stddev sum variance
execute forall merge time timestamp interval date
<a string literal with character set specification>
<a number> <a single-quoted SQL string> pipe
<an alternatively-quoted string literal with character set specification>
<an alternatively-q
ORA-06550: line 98, column 4:
PLS-00103: Encountered the symbol "," when expecting one of the following:
( - + case mod new not null others <an identifier>
<a double-quoted delimited-identifier> <a bind variable> avg
count current exists max min prior sql st
rgis_tca_to_tlms_process.sql started
old 12: and concurrent_request_id = '&1';
new 12: and concurrent_request_id = '18661823';
old 18: and concurrent_request_id = '&1';
new 18: and concurrent_request_id = '18661823';
old 22: rgis_tca_to_tlms_process.run_tca_data(l_tstlms_file_id,&1);
new 22: rgis_tca_to_tlms_process.run_tca_data(l_tstlms_file_id,18661823);
old 33: rgis_tca_to_tlms_process.run_tca_data_history(l_tstlmsedits_file_id,&1);
new 33: rgis_tca_to_tlms_process.run_tca_data_history(l_tstlmsedits_file_id,18661823);
old 44: rgis_tca_to_tlms_process.send_tca_email('TCATLMS',&1);
new 44: rgis_tca_to_tlms_process.send_tca_email('TCATLMS',18661823);
declare
ERROR at line 1:
ORA-20001: Error occured when executing RGIS_TCA_TO_TLMS_PROCESS.sql ORA-01403:
no data found
ORA-06512: at line 59
Executing request completion options...
------------- 1) PRINT -------------
Printing output file.
Request ID : 18661823
Number of copies : 0
Printer : noprint
Finished executing request completion options.
Concurrent request completed successfully
Current system time is 18-AUG-2011 06:13:29
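The two `wc: cannot open` errors above are the key: the ORACLE_LOADER driver typically writes the bad file only when rows are actually rejected, so when every row loads cleanly the `wc -l` calls fail and the shell variables stay empty, which is what produces the empty substitutions (`new 64: ,`, `new 98: ,`) and the PLS-00103 errors. A minimal sketch of a guard, with a made-up helper name, would be:

```shell
#!/bin/sh
# Hypothetical helper: line count that defaults to 0 when the file is absent,
# e.g. when the loader rejected no rows and never wrote a .bad file.
count_lines() {
  if [ -f "$1" ]; then
    wc -l < "$1" | tr -d ' '
  else
    echo 0
  fi
}
# In the script above the counting lines would then become, e.g.:
# tot_rec_in_tstlms_bad=`count_lines $DEST_PATH/tstlms.bad`
# tot_rec_in_tstlmsedits_bad=`count_lines $DEST_PATH/tstlmsedits.bad`
```

With the counts defaulting to 0, the sqlplus `define` lines always receive a number and the PL/SQL block compiles even on a clean load.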
---------------------------------------------------------------------------
Hi,
Check the status of the batch in transaction SM35.
If the batch is locked by mistake or stopped by some other error, you can now release it and process it again.
To release: Shift+F4.
You can also analyse the job status with the F2 button.
Bye -
How to create Dynamic internal table with columns also created dynamically.
Hi All,
Any info on how to create a dynamic internal table whose columns (fields) are also created dynamically?
My requirement is: on the selection screen I enter the number of fields to be in the internal table, which then gets created dynamically.
I had gone through some posts on dynamic table creation, but couldn't find any on dynamic field creation.
Any suggestions pls?
Thanks
Nara
I don't understand ...
something like that ?
* Form P_MODIFY_HEADER. *
form p_modify_header.
data : is_fieldcatalog type lvc_s_fcat ,
v_count(2) type n ,
v_date type d ,
v_buff(30).
* Update the fieldcatalog.
loop at it_fieldcatalog into is_fieldcatalog.
check is_fieldcatalog-fieldname+0(3) eq 'ABS' or
is_fieldcatalog-fieldname+0(3) eq 'VAL' .
move : is_fieldcatalog-fieldname+3(2) to v_count ,
p_perb2+5(2) to v_date+4(2) ,
p_perb2+0(4) to v_date+0(4) ,
'01' to v_date+6(2) .
v_count = v_count - 1.
call function 'RE_ADD_MONTH_TO_DATE'
exporting
months = v_count
olddate = v_date
importing
newdate = v_date.
if is_fieldcatalog-fieldname+0(3) eq 'ABS'.
concatenate 'Quantité 0'
v_date+4(2)
v_date+0(4)
into v_buff.
else.
concatenate 'Montant 0'
v_date+4(2)
v_date+0(4)
into v_buff.
endif.
move : v_buff to is_fieldcatalog-scrtext_s ,
v_buff to is_fieldcatalog-scrtext_m ,
v_buff to is_fieldcatalog-scrtext_l ,
v_buff to is_fieldcatalog-reptext .
modify it_fieldcatalog from is_fieldcatalog.
endloop.
* Modify the fieldcatalog.
call method obj_grid->set_frontend_fieldcatalog
exporting it_fieldcatalog = it_fieldcatalog.
* Refresh the display of the grid.
call method obj_grid->refresh_table_display.
endform. " P_MODIFY_HEADER -
following is the table creation script with partition
CREATE TABLE customer_entity_temp (
BRANCH_ID NUMBER (4),
ACTIVE_FROM_YEAR VARCHAR2 (4),
ACTIVE_FROM_MONTH VARCHAR2 (3)
)
partition by range (ACTIVE_FROM_YEAR,ACTIVE_FROM_MONTH)
(partition yr7_1999 values less than ('1999',TO_DATE('Jul','Mon')),
partition yr12_1999 values less than ('1999',TO_DATE('Dec','Mon')),
it gives an error
ORA-14036: partition bound value too large for column
but if I increase the size of the ACTIVE_FROM_MONTH column to 9, the script works and creates the table. Why is that?
Also, by creating a table this way and populating the data into the respective partitions, all rows with a month value less than "Jul" will go in the yr7_1999 partition and all rows with a month value between "Jul" and "Dec" will go in the second partition, yr12_1999. Where will the data with a month value equal to "Dec" go?
Please help me in solving this problem.
thanks n regards
Moloy
Hi,
You declared ACTIVE_FROM_MONTH as VARCHAR2(3), yet in your partitioning clause you check it against a date: TO_DATE('Jul','Mon'). So you should first check your data model and what you are trying to achieve exactly.
With such a partition declaration, you will not be able to insert dates from December 1998 included and onward. The bound values are strictly less than (<), not less than or equal (<=), hence such rows can't be inserted. I'd advise you to look at the MAXVALUE wildcard and the ENABLE ROW MOVEMENT partitioning clause.
Regards,
Yoann. -
Dynamic Table Creation & Fill Up
Hello,
Can anyone please guide where I can find examples for dynamic table creation (programmaticaly), with dynamic number of columns and rows, used to place inside text components (or whatever) to fill them with data.
All programmatic.
Using JSF, ADF BC
JDeveloper 10.1.3.1
Thanks
Message was edited by:
RJundi
Hi,
Meybe this article helps: http://technology.amis.nl/blog/?p=2306
Kuba -
Hi,
I am new to SAPUI5 and I am trying to create a table.
I used the code given in the SAPUI5 demokit, but when I run the application the table is not displayed.
Can anyone help me with this?
Thanks in advance.
var aData = [
{lastName: "Dente", name: "Al", gender : "male"},
{lastName: "Friese", name: "Andy", gender : "female"},
{lastName: "Mann", name: "Anita", gender : "female"}
];
//table creation
var oTable = new sap.ui.table.Table({
title: "Guest House list",
visibleRowCount: 3,
firstVisibleRow: 2,
selectionMode: sap.ui.table.SelectionMode.Single
});
//column creation
oTable.addColumn(new sap.ui.table.Column({
label: new sap.ui.commons.Label({text: "Last Name"}),
template: new sap.ui.commons.TextView().bindProperty("text", "lastName"),
sortProperty: "lastName",
filterProperty: "lastName",
width: "200px"
}));
oTable.addColumn(new sap.ui.table.Column({
label: new sap.ui.commons.Label({text: "First Name"}),
template: new sap.ui.commons.TextField().bindProperty("value", "name"),
sortProperty: "name",
filterProperty: "name",
width: "100px"
}));
oTable.addColumn(new sap.ui.table.Column({
label: new sap.ui.commons.Label({text: "Gender"}),
template: new sap.ui.commons.ComboBox({items: [
new sap.ui.core.ListItem({text: "female"}),
new sap.ui.core.ListItem({text: "male"})
]}).bindProperty("value","gender"),
sortProperty: "gender",
filterProperty: "gender"
}));
//data collection
var oModel = new sap.ui.model.json.JSONModel();
oModel.setData({modelData: aData});
oTable.setModel(oModel);
oTable.bindRows("/modelData");
//Initially sort the table
oTable.sort(oTable.getColumns()[0]);
oTable.placeAt("table");
Hi Anshul
it should be
press: function() {
oTable.setVisible(true); // or false
}
am I right?
-D -
Dear experts,
is it possible to build a form with a table defined in Web Dynpro ABAP?
In my scenario, there is a Web Dynpro screen defining which columns are to be visible or not. These values have to be passed to an interactive form.
Does anyone know how to achieve this?
The only way I know to build a table on an interactive form is using the table creation assistant on the Layout tab.
Regards,
Florian
Hello,
1) You should not use the tables at all, I mean tables like in MS Word, tables not based on subforms; they're not flexible and are weird in all aspects.
2) You should understand how it works in Adobe: you must create everything you could possibly use in the form and then hide the parts you don't use. For a table that means you must create the whole table, all columns, and at runtime you can hide a few columns if you like. But you cannot make visible something you have not created beforehand.
3) I am looking forward to seeing you around with some SDN points received. That would, in my eyes, mean that you help people with their beginners' questions as well as asking your own.
Regards Otto -
PointBase automatic table creation mapping reliability?
If I specify this in the weblogic-cmp-rdbms-jar.xml file for Automatic Table Creation:
<field-map>
<cmp-field>ADDRESS</cmp-field>
<dbms-column>U_ADDRESS</dbms-column>
<dbms-column-type>VARCHAR(30)</dbms-column-type>
</field-map>
<field-map>
<cmp-field>addresses</cmp-field>
<dbms-column>U_ADDRESSES</dbms-column>
<dbms-column-type>RAW(5000)</dbms-column-type>
</field-map>
<create-default-dbms-tables>True</create-default-dbms-tables>
PointBase creates a table w/ the two columns defined as:
U_ADDRESS VARCHAR(150)
U_ADDRESSES BLOB(1000)
If I specify this:
<field-map>
<cmp-field>ADDRESS</cmp-field>
<dbms-column>U_ADDRESS</dbms-column>
<dbms-column-type>VARCHAR(30)</dbms-column-type>
</field-map>
<field-map>
<cmp-field>addresses</cmp-field>
<dbms-column>U_ADDRESSES</dbms-column>
<dbms-column-type>RAW(5000)</dbms-column-type>
</field-map>
<create-default-dbms-tables>True</create-default-dbms-tables>
<database-type>POINTBASE</database-type>
PointBase creates a table w/ the two columns defined as:
U_ADDRESS VARCHAR(150)
U_ADDRESSES BLOB(1)
What's wrong? And how reliable is the PointBase mapping?
Hi,
The <dbms-column-type> is not intended for generally specifying
the desired column type for the cmp-field. It is meant to
cause the container to generate code to handle the cmp-field
as a java.sql.Blob in the persistence layer.
What the default table creation does is to examine the
java type of the cmp-field and then make its best guess
at a DBMS column type that will support the cmp-field.
In the case of POINTBASE, byte[] fields are made into
BLOB.
Here's the conversion that the container uses to map
java types to POINTBASE Column types:
if(type.isPrimitive()) {
    if(type == Boolean.TYPE) return "BOOLEAN";
    if(type == Byte.TYPE) return "SMALLINT";
    if(type == Character.TYPE) return "CHAR(1)";
    if(type == Double.TYPE) return "DOUBLE PRECISION";
    if(type == Float.TYPE) return "FLOAT";
    if(type == Integer.TYPE) return "INTEGER";
    if(type == Long.TYPE) return "DECIMAL"; // PointBase DECIMAL is DECIMAL(38,0);
                                            // 10**38 is approx. 2**126 - it's big enough
    if(type == Short.TYPE) return "SMALLINT";
} else {
    if (type == String.class) return "VARCHAR(150)";
    if (type == BigDecimal.class) return "DECIMAL(38,19)";
    if (type == Boolean.class) return "BOOLEAN";
    if (type == Byte.class) return "SMALLINT";
    if (type == Character.class) return "CHAR(1)";
    if (type == Double.class) return "DOUBLE PRECISION";
    if (type == Float.class) return "FLOAT";
    if (type == Integer.class) return "INTEGER";
    if (type == Long.class) return "DECIMAL";
    if (type == Short.class) return "SMALLINT";
    if (type == java.util.Date.class) return "DATE";
    if (type == java.sql.Date.class) return "DATE";
    if (type == java.sql.Time.class) return "TIME";
    if (type == java.sql.Timestamp.class) return "TIMESTAMP";
    if (type.isArray() &&
        type.getComponentType() == Byte.TYPE) return "BLOB";
    if (!ClassUtils.isValidSQLType(type) &&
        java.io.Serializable.class.isAssignableFrom(type)) return "BLOB";
}
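So if you need VARCHAR(30) and BLOB(5000) exactly, a sketch of a workaround (not documented container behaviour, and the table name here is made up) is to pre-create the table yourself and not rely on automatic table creation:

```sql
-- Sketch: create the table by hand with the exact column types you want,
-- then leave <create-default-dbms-tables> set to False so the container
-- does not regenerate it with its own type guesses.
-- (Table name USERS_TAB is hypothetical.)
CREATE TABLE USERS_TAB (
    U_ADDRESS   VARCHAR(30),
    U_ADDRESSES BLOB(5000)
);
```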
"Brian L" <[email protected]> wrote:
>
If I specify this in the weblogic-cmp-rdbms-jar.xml file for Automatic Table
Creation:
<field-map>
<cmp-field>ADDRESS</cmp-field>
<dbms-column>U_ADDRESS</dbms-column>
<dbms-column-type>VARCHAR(30)</dbms-column-type>
</field-map>
<field-map>
<cmp-field>addresses</cmp-field>
<dbms-column>U_ADDRESSES</dbms-column>
<dbms-column-type>RAW(5000)</dbms-column-type>
</field-map>
<create-default-dbms-tables>True</create-default-dbms-tables>
PointBase creates a table w/ the two columns defined as:
U_ADDRESS VARCHAR(150)
U_ADDRESSES BLOB(1000)
If I specify this:
<field-map>
<cmp-field>ADDRESS</cmp-field>
<dbms-column>U_ADDRESS</dbms-column>
<dbms-column-type>VARCHAR(30)</dbms-column-type>
</field-map>
<field-map>
<cmp-field>addresses</cmp-field>
<dbms-column>U_ADDRESSES</dbms-column>
<dbms-column-type>RAW(5000)</dbms-column-type>
</field-map>
<create-default-dbms-tables>True</create-default-dbms-tables>
<database-type>POINTBASE</database-type>
PointBase creates a table w/ the two columns defined as:
U_ADDRESS VARCHAR(150)
U_ADDRESSES BLOB(1)
What's wrong? And how reliable is the PointBase mapping? -
Problem with table creation using CTAS parallel hint
Hi,
We have a base table (CARDS_TAB) with 1,083,565,232 rows, and created a replica table called T_CARDS_NEW_201111. But the count in the new table is 1,083,566,976 - a difference of 1,744 additional rows. I have no idea how the new table can contain more rows than the original table!!
Oracle version is 11.2.0.2.0.
Both table counts were taken after table creation. The script that was used to create the replica table is:
CREATE TABLE T_CARDS_NEW_201111
TABLESPACE T_DATA_XLARGE07
PARTITION BY RANGE (CPS01_DATE_GENERATED)
SUBPARTITION BY LIST (CPS01_CURRENT_STATUS)
SUBPARTITION TEMPLATE
(SUBPARTITION T_NULL VALUES (NULL),
SUBPARTITION T_0 VALUES (0),
SUBPARTITION T_1 VALUES (1),
SUBPARTITION T_3 VALUES (3),
SUBPARTITION T_OTHERS VALUES (DEFAULT))
(PARTITION T_200612 VALUES LESS THAN (TO_DATE(' 2007-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
TABLESPACE T_DATA_XLARGE07
( SUBPARTITION T_200612_T_NULL VALUES (NULL) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_200612_T_0 VALUES (0) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_200612_T_1 VALUES (1) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_200612_T_3 VALUES (3) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_200612_T_OTHERS VALUES (DEFAULT) TABLESPACE T_DATA_XLARGE07 ),
PARTITION T_200701 VALUES LESS THAN (TO_DATE(' 2007-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
TABLESPACE T_DATA_XLARGE07
( SUBPARTITION T_200701_T_NULL VALUES (NULL) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_200701_T_0 VALUES (0) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_200701_T_1 VALUES (1) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_200701_T_3 VALUES (3) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_200701_T_OTHERS VALUES (DEFAULT) TABLESPACE T_DATA_XLARGE07 )
PARTITION T_201211 VALUES LESS THAN (TO_DATE(' 2012-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
TABLESPACE T_DATA_XLARGE07
( SUBPARTITION T_201211_T_NULL VALUES (NULL) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_201211_T_0 VALUES (0) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_201211_T_1 VALUES (1) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_201211_T_3 VALUES (3) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_201211_T_OTHERS VALUES (DEFAULT) TABLESPACE T_DATA_XLARGE07 ),
PARTITION T_201212 VALUES LESS THAN (TO_DATE(' 2013-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
TABLESPACE T_DATA_XLARGE07
( SUBPARTITION T_201212_T_NULL VALUES (NULL) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_201212_T_0 VALUES (0) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_201212_T_1 VALUES (1) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_201212_T_3 VALUES (3) TABLESPACE T_DATA_XLARGE07,
SUBPARTITION T_201212_T_OTHERS VALUES (DEFAULT) TABLESPACE T_DATA_XLARGE07 ))
NOCACHE
NOPARALLEL
MONITORING
ENABLE ROW MOVEMENT
AS
SELECT /*+ PARALLEL (T,40) */ SERIAL_NUMBER ,
PIN_NUMBER ,
CARD_TYPE ,
DENOMINATION ,
DATE_GENERATED ,
LOG_PHY_IND ,
CARD_ID ,
OUTLET_CODE ,
MSISDN ,
BATCH_NUMBER ,
DATE_SOLD ,
DIST_CHANNEL ,
DATE_CEASED ,
DATE_PRINTED ,
DATE_RECHARGE ,
LOGICAL_ORDER_NR ,
DATE_AVAILABLE ,
CURRENT_STATUS ,
ACCESS_CODE from CARDS_TAB T
/
Also, the base table CARDS_TAB has a primary key on the SERIAL_NUMBER column. When trying to create a primary key on the new table it throws an exception:
ALTER TABLE T_CARDS_NEW_201111 ADD
CONSTRAINT T_PK2_1
PRIMARY KEY (SERIAL_NUMBER) USING INDEX
TABLESPACE T_INDEX_XLARGE07
PARALLEL 10 NOLOGGING;
CONSTRAINT TP_PK2_1
ERROR at line 2:
ORA-02437: cannot validate (T_PK2_1) - primary key violated
Thanks in advance.
With Regards,
Farooq Abdulla
For parallel processing, the documentation suggests using automatic degree of parallelism (determined by the system at run time) or choosing a power-of-2 value.
Look at Florian's post in the neighbouring thread "How to Delete Duplicate rows from a Table" to locate the violating rows (seemingly due to the parallel processing).
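Such a check could be sketched like this, using the table and column names from the post above:

```sql
-- Find SERIAL_NUMBER values that occur more than once in the replica;
-- these are the rows that make the primary key fail with ORA-02437
SELECT serial_number, COUNT(*) AS cnt
FROM t_cards_new_201111
GROUP BY serial_number
HAVING COUNT(*) > 1;
```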
Regards
Etbin -
Weblogic LLR tables RECORDSTR column size issues
Hi ,
We have configured an LLR (Last Logging Resource, to emulate two-phase-commit XA with a non-XA data source) in our WebLogic 10.3.5 domain, and I see that WebLogic internally creates a WL_LLR_<servername> table per managed server in the schema configured for that LLR data source.
But during some transaction commits we face an error stating that the RECORDSTR column size (1000 bytes by default) is too small for the COMMIT RECORD. It works fine when we increase the RECORDSTR column to 2000 bytes. But what is the criterion for sizing the column? Our application has large global transactions - will 2000 bytes be enough to hold them?
So my question is: what actually gets stored within the LLR tables for a transaction - transaction logs or the COMMIT record? What does COMMIT RECORD actually mean?
Is there a way to specify the table and column length of the LLR tables during creation of the LLR data sources itself?
Maybe this can shed some light on things: http://docs.oracle.com/cd/E21764_01/web.1111/e13737/transactions.htm#i1145819
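For what it's worth, widening the column manually is a one-line DDL. A sketch (Oracle syntax assumed; the managed-server name MS1 is made up - the real table is WL_LLR_<servername>):

```sql
-- Widen the commit-record column on the per-server LLR table
ALTER TABLE WL_LLR_MS1 MODIFY (RECORDSTR VARCHAR2(4000));
```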
-
Issue with DWH DB tables creation
Hi,
While generating the data warehouse tables (section 4.10.1, How to Create Data Warehouse Tables), I ended up with an error that states "Creating Datawarehouse tables Failure".
But when I checked the log file 'generate_ctl.log', it has the below message:
"Schema will be created from the following containers:
Oracle 11.5.10
Oracle R12
Universal
Conflict(s) between containers:
Table Name : W_BOM_ITEM_FS
Column Name: INTEGRATION_ID.
The column properties that are different: [keyTypeCode]
Success! "
When I checked the DWH database, I could find DWH tables, but I'm not sure whether all the tables were created.
Can anyone tell me whether my DWH tables were all created? How many tables should be created for the above EBS containers?
Also, should I drop any of the EBS containers to create the DWH tables successfully?
The installation guide states that when DWH table creation fails, 'createtables.log' won't be created. But in my case, this log file got created!
Edited by: userOO7 on Nov 19, 2008 2:41 PM
I saw the same message. I also noticed I am unable to load any BOM items into that fact table. It looks like the BOM_EXPLODER package call is not keeping any rows in BOM_EXPLOSION_TEMP, so no rows are loaded into that fact table. Someone needs to log an SR for this.
*****START LOAD SESSION*****
Load Start Time: Wed Nov 19 17:13:42 2008
Target tables:
W_BOM_ITEM_FS
READER_2_1_1> BLKR_16019 Read [0] rows, read [0] error rows for source table [BOM_EXPLOSION_TEMP] instance name [mplt_BC_ORA_BOMItemFact.BOM_EXPLOSION_TEMP]
READER_2_1_1> BLKR_16008 Reader run completed.
TRANSF_2_1_1> DBG_21216 Finished transformations for Source Qualifier [mplt_BC_ORA_BOMItemFact.SQ_BOM_EXPLOSION_TEMP]. Total errors [0]
WRITER_2_*_1> WRT_8167 Start loading table [W_BOM_ITEM_FS] at: Wed Nov 19 17:13:42 2008
WRITER_2_*_1> WRT_8168 End loading table [W_BOM_ITEM_FS] at: Wed Nov 19 17:13:42 2008
WRITER_2_*_1> WRT_8035 Load complete time: Wed Nov 19 17:13:42 2008
LOAD SUMMARY
============
WRT_8036 Target: W_BOM_ITEM_FS (Instance Name: [W_BOM_ITEM_FS])
WRT_8044 No data loaded for this target
WRITER_2__1> WRT_8043 ****END LOAD SESSION*****
WRITER_2_*_1> WRT_8006 Writer run completed.
I now see it is covered in the release notes:
http://download.oracle.com/docs/cd/E12127_01/doc/bia.795/e12087/chapter.htm#CHDFJHHB
1.3.31 No Data Is Loaded Into W_BOM_ITEM_F And W_BOM_ITEM_FS
The mapping SDE_ORA_BOMItemFact needs to call a Stored Procedure (SP) in the Oracle EBS instance, which inserts rows into a global temporary table (duration SYS$SESSION, that is, the data will be lost if the session is closed). This Stored Procedure does not have an explicit commit. The Stored Procedure then needs to read the rows in the temporary table into the warehouse.
In order for the mapping to work, Informatica needs to share the same connection for the SP and the SQL qualifier during ETL. This feature was available in the Informatica 7.x release, but it is not available in Informatica release 8.1.1 (SP4). As a result, W_BOM_ITEM_FS and W_BOM_ITEM_F are not loaded properly.
Workaround
For all Oracle EBS customers:
Open package body bompexpl.
Look for text "END exploder_userexit;", scroll a few lines above, and add a "commit;" command before "EXCEPTION".
Save and compile the package. -
How do I control a table's column visible in Java
Using JDeveloper 11.1.1.4.0
I want to control a rich tree table's column visibility programatically in Java. I've looked for syntax and do not find an example like I need. I need to directly control the column similar to how a panel collection does. The visibility of the column will be set and then the table refreshed (I've got the refresh part working), I just need to correctly reference the column. This logic will be triggered by a rowDisclosureListener that is defined as pageFlowScope. I've tried experimenting on myTable which is a RichTreeTable, but the plethora of syntax choices after "myTable." is enormous. Also, is it possible to allow the panel collection to override this logic or remove the column from its columns list?
Thanks in advance,
Troy
Wow, I wish I could include a screenshot to show what is happening. My bean has somewhat similar logic, only I am trying to have my tree table show an additional column when a node of the tree table is expanded. I thought this problem might be caused by the panel collection, so I tried taking it off, but it still behaves the same. I can show the column, but no header shows up for it. The original column headers stay fixed, as does the data of node 0, but the subsequent (node 1) data after the first column is shifted to the right, with each column added between the first column and the end column. If I do a browser refresh after expanding a node, it refreshes with the column header that was missing - strange.
from my bean:
public void onNodeDisclosure(RowDisclosureEvent rowDisclosureEvent) {
    boolean isCloseEvent = false;
    RowKeySet rowKeySet = rowDisclosureEvent.getAddedSet();
    //did the disclosure event open a new node?
    if (rowKeySet.iterator().hasNext()) {
        isCloseEvent = false;
        nodeLevel++;
    } else {
        isCloseEvent = true;
        nodeLevel--;
        //get the previously disclosed set
        rowKeySet = rowDisclosureEvent.getRemovedSet();
    }
    if (nodeLevel == 1 && isCloseEvent == false) {
        setShowTaxyear(Boolean.TRUE);
    } else if (nodeLevel == 1 && isCloseEvent == true) {
        setShowTaxyear(Boolean.FALSE);
        setShowTaxunit(Boolean.FALSE);
    }
    if (nodeLevel == 2 && isCloseEvent == false) {
        setShowTaxyear(Boolean.TRUE);
        setShowTaxunit(Boolean.TRUE);
    } else if (nodeLevel == 2 && isCloseEvent == true) {
        showTaxunit = true;
    }
    if (nodeLevel == 3 && isCloseEvent == false) {
        setShowTaxyear(Boolean.TRUE);
        setShowTaxunit(Boolean.TRUE);
    }
    partiallyrefreshUIComponent();
}
public void setAmtsOwedTreeTbl(RichTreeTable amtsOwedTreeTbl) {
    this.amtsOwedTreeTbl = amtsOwedTreeTbl;
}
public RichTreeTable getAmtsOwedTreeTbl() {
    return amtsOwedTreeTbl;
}
public void setShowTaxyear(boolean showTaxyear) {
    this.showTaxyear = showTaxyear;
}
public boolean isShowTaxyear() {
    return showTaxyear;
}
public void setShowTaxunit(boolean showTaxunit) {
    this.showTaxunit = showTaxunit;
}
public boolean isShowTaxunit() {
    return showTaxunit;
}
// PRIVATE METHOD
private void partiallyrefreshUIComponent() {
    AdfFacesContext adfFacesContext = AdfFacesContext.getCurrentInstance();
    adfFacesContext.addPartialTarget(getAmtsOwedTreeTbl());
}
my treetable:
<af:treeTable value="#{bindings.GetAmtsGrandTotal_VO2.treeModel}"
var="node"
selectionListener="#{bindings.GetAmtsGrandTotal_VO2.treeModel.makeCurrent}"
rowSelection="multiple" id="amtsowedtt1" width="900"
columnSelection="multiple"
inlineStyle="border-style:none;"
summary="This table dynamically displays the amounts due, (interest, penalties and attorney fees--if any) and a total due. The rows can be expanded to show the above amounts by year, tax unit and owner."
shortDesc="Amounts Due" autoHeightRows="0"
immediate="false"
clientComponent="true"
rowDisclosureListener="#{pageFlowScope.browseAmtsOwedTreeTblBean.onNodeDisclosure}"
binding="#{pageFlowScope.browseAmtsOwedTreeTblBean.amtsOwedTreeTbl}">
<f:facet name="nodeStamp">
<!-- this column is to always show -->
<af:column id="amtsowedc1" width="130"
inlineStyle="#{node.bindings.NodeType=='aggregate'?'font-weight:bold;':''}"
visible="#{true}">
<af:outputText value="#{node.bindings.NodeLabel}"
id="amtsowedot1"/>
</af:column>
</f:facet>
<!-- this column should show when node 1 or greater is exposed -->
<af:column width="45" id="amtsowedc2"
visible="#{pageFlowScope.browseAmtsOwedTreeTblBean.showTaxyear}" inlineStyle="text-align:center;"
sortable="true" sortProperty="#{node.bindings.Taxyear}">
<af:outputText value="#{node.bindings.Taxyear}" id="amtsowedot2"/>
<f:facet name="header">
<af:outputText value="Tax Year" id="amtsowedot33"
inlineStyle="text-align:center;"
visible="#{pageFlowScope.browseAmtsOwedTreeTblBean.showTaxyear}"
noWrap="true" rendered="#{pageFlowScope.browseAmtsOwedTreeTblBean.showTaxyear}"/>
</f:facet>
</af:column>
<!-- this column should show when node 2 or greater is exposed -->
<af:column width="40" id="amtsowedc3" visible="#{pageFlowScope.browseAmtsOwedTreeTblBean.showTaxunit}"
inlineStyle="text-align:center;"
sortable="true" sortProperty="#{node.bindings.Taxunit}"
filterable="true" filterFeatures="caseInsensitive">
<af:outputText value="#{node.bindings.Taxunit}" id="amtsowedot3"/>
<f:facet name="header">
<af:outputText value="Tax Unit" id="amtsowedot66"
inlineStyle="text-align:center;"
visible="#{pageFlowScope.browseAmtsOwedTreeTblBean.showTaxunit}"
noWrap="true" rendered="#{pageFlowScope.browseAmtsOwedTreeTblBean.showTaxunit}"/>
</f:facet>
</af:column>
<af:column id="amtsowedc4" align="right" headerText="Calculated Levy"
inlineStyle="#{node.bindings.NodeType=='aggregate'?'font-weight:bold;':''};"
visible="false">
<af:outputText value="#{node.bindings.Calclevy}" id="amtsowedot4"/>
</af:column>
<af:column id="amtsowedc5" align="right" headerText="Levy Due"
inlineStyle="#{node.bindings.NodeType=='aggregate'?'font-weight:bold;':''};"
visible="false">
<af:outputText value="#{node.bindings.Ballevydue}"
id="amtsowedot5"/>
</af:column>
<af:column id="amtsowedc6" align="right" headerText="Interest Due"
inlineStyle="#{node.bindings.NodeType=='aggregate'?'font-weight:bold;':''};"
visible="false">
<af:outputText value="#{node.bindings.Intdue}" id="amtsowedot6"/>
</af:column>
<af:column id="amtsowedc7" align="right" headerText="Penalty Due"
inlineStyle="#{node.NodeType=='aggregate'?'font-weight:bold;':''};"
visible="false">
<af:outputText value="#{node.bindings.Pendue}" id="amtsowedot7"/>
</af:column>
<af:column id="amtsowedc8" align="right" headerText="Attorney Due"
inlineStyle="#{node.bindings.NodeType=='aggregate'?'font-weight:bold;':''};"
visible="false">
<af:outputText value="#{node.bindings.Attydue}" id="amtsowedot8"/>
</af:column>
<!-- this column is to always show -->
<af:column id="amtsowedc9" align="right" headerText="Total Due"
inlineStyle="#{node.bindings.NodeType=='aggregate'?'font-weight:bold;':''};"
visible="#{true}">
<af:outputText value="#{node.bindings.Totalbaldue}"
id="amtsowedot9"/>
</af:column>
<f:facet name="pathStamp">
<af:outputText value="#{node}" id="amtsowedot0"/>
</f:facet>
</af:treeTable> -
Table creation - order of events
I am trying to get some help on the order I should be carrying out table creation tasks.
Say I create a simple table:
create table title (
title_id number(2) not null,
title varchar2(10) not null,
effective_from date not null,
effective_to date not null,
constraint pk_title primary key (title_id)
);
I believe I should populate the data, then create my index:
create unique index title_title_id_idx on title (title_id asc)
But I have read that Oracle will automatically create an index for my primary key if I do not do so myself.
At what point does Oracle create the index on my behalf and how do I stop it?
Should I only apply the primary key constraint after the data has been loaded as well?
Even then, if I add the primary key constraint, will Oracle not immediately create an index for me when I am about to create a specific one matching my naming conventions?
Yeah, but just handle it the way you would handle any other constraint violation - with the EXCEPTIONS INTO clause...
SQL> select index_name, uniqueness from user_indexes
2 where table_name = 'APC'
3 /
no rows selected
SQL> insert into apc values (1)
2 /
1 row created.
SQL> insert into apc values (2)
2 /
1 row created.
SQL> alter table apc add constraint apc_pk primary key (col1)
2 using index ( create unique index my_new_index on apc (col1))
3 /
Table altered.
SQL> insert into apc values (2)
2 /
insert into apc values (2)
ERROR at line 1:
ORA-00001: unique constraint (APC.APC_PK) violated
SQL> alter table apc drop constraint apc_pk
2 /
Table altered.
SQL> insert into apc values (2)
2 /
1 row created.
SQL> alter table apc add constraint apc_pk primary key (col1)
2 using index ( create unique index my_new_index on apc (col1))
3 /
alter table apc add constraint apc_pk primary key (col1)
ERROR at line 1:
ORA-02437: cannot validate (APC.APC_PK) - primary key violated
SQL> @%ORACLE_HOME%/rdbms/admin/utlexcpt.sql
Table created.
SQL> alter table apc add constraint apc_pk primary key (col1)
2 using index ( create unique index my_new_index on apc (col1))
3 exceptions into EXCEPTIONS
4 /
alter table apc add constraint apc_pk primary key (col1)
ERROR at line 1:
ORA-02437: cannot validate (APC.APC_PK) - primary key violated
SQL> select * from apc where rowid in ( select row_id from exceptions)
2 /
COL1
2
2
SQL> All this is in the documentation. Find out more.
Cheers, APC -
Good day,
I searched through the forum and cant find anything.
I have around 300 published reports on SSRS and we are busy migrating to a new system.
They have already set up their tables on the new system, and I need to provide them with a list of the table names and column names currently being used to generate the 300 reports on SSRS.
We use various tables and databases to generate these reports, and it would take me forever to go through each query to get this information.
Is it at all possible to write a query in SQL 2008 that will give me all the table names and columns being used?
Your assistance is greatly appreciated.
I thank you.
Andre.
There's no straightforward method for that, I guess. There are a couple of things you can use to get these details:
1. Query the ReportServer.dbo.Catalog table for the details. You may use the script below for that:
http://gallery.technet.microsoft.com/scriptcenter/42440a6b-c5b1-4acc-9632-d608d1c40a5c
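For example, a sketch of pulling each published report's definition (RDL) out of the catalog as text, so you can search it for table and column names (assumes the default ReportServer database; in the Catalog table, Type = 2 marks reports):

```sql
-- Extract each report's RDL; the Content column is stored as an image,
-- so it needs the double CONVERT to become searchable text
SELECT Name,
       CONVERT(NVARCHAR(MAX), CONVERT(VARBINARY(MAX), Content)) AS ReportDefinition
FROM ReportServer.dbo.Catalog
WHERE Type = 2;
```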
2. Another method is to run the reports with a SQL Profiler trace running in the background to capture the queries used.
But in some cases a report might use a stored procedure, and then you will only get the procedure name. It is up to you to get the remaining details from the procedure, such as the tables and columns it uses.
Visakh -