Text table creation
Hi all,
Can you explain how to create text tables for a transparent table in the ABAP Dictionary?
Thanks in Advance.
parashuram
Moderator message: please search for available information.
Edited by: Thomas Zloch on Feb 6, 2012
Hi
I've just created a table and its text table for my development (I haven't yet generated the maintenance program for SM30).
When I defined the text table I've inserted:
MANDT (foreign key with T000), key field
SPRAS (foreign key with T002), key field
<my field> (foreign key with my z-table, checked text table, cardinality 1:CN), key field
descr (domain BEZEI40)
Now when I display the records of my Z-table in SE16, I can also see the descr field of the text table.
Max
Similar Messages
-
Dynamic Table Creation & Fill Up
Hello,
Can anyone please point me to examples of dynamic table creation (programmatically), with a dynamic number of columns and rows, used to place inside text components (or whatever) and fill them with data.
All programmatic.
Using JSF, ADF BC
JDeveloper 10.1.3.1
Thanks
Message was edited by:
RJundi
Hi,
Maybe this article helps: http://technology.amis.nl/blog/?p=2306
Kuba -
Hi,
I am new to SAP UI5 and I am trying to create a table.
I used the code given in the SAP UI5 demo kit, but when I run the application the table is not displayed.
Can anyone help me with this?
Thanks in advance.
var aData = [
    {lastName: "Dente", name: "Al", gender: "male"},
    {lastName: "Friese", name: "Andy", gender: "female"},
    {lastName: "Mann", name: "Anita", gender: "female"}
];
//table creation
var oTable = new sap.ui.table.Table({
    title: "Guest House list",
    visibleRowCount: 3,
    firstVisibleRow: 2,
    selectionMode: sap.ui.table.SelectionMode.Single
});
//column creation
oTable.addColumn(new sap.ui.table.Column({
    label: new sap.ui.commons.Label({text: "Last Name"}),
    template: new sap.ui.commons.TextView().bindProperty("text", "lastName"),
    sortProperty: "lastName",
    filterProperty: "lastName",
    width: "200px"
}));
oTable.addColumn(new sap.ui.table.Column({
    label: new sap.ui.commons.Label({text: "First Name"}),
    template: new sap.ui.commons.TextField().bindProperty("value", "name"),
    sortProperty: "name",
    filterProperty: "name",
    width: "100px"
}));
oTable.addColumn(new sap.ui.table.Column({
    label: new sap.ui.commons.Label({text: "Gender"}),
    template: new sap.ui.commons.ComboBox({items: [
        new sap.ui.core.ListItem({text: "female"}),
        new sap.ui.core.ListItem({text: "male"})
    ]}).bindProperty("value", "gender"),
    sortProperty: "gender",
    filterProperty: "gender"
}));
//data collection
var oModel = new sap.ui.model.json.JSONModel();
oModel.setData({modelData: aData});
oTable.setModel(oModel);
oTable.bindRows("/modelData");
//Initially sort the table
oTable.sort(oTable.getColumns()[0]);
oTable.placeAt("table");
Hi Anshul
it should be
press: function() {
    oTable.setVisible(true); // or false
}
am I right?
-D -
Issue with DWH DB tables creation
Hi,
While generating the Data Warehouse tables (section 4.10.1, How to Create Data Warehouse Tables), I ended up with an error that states "Creating Datawarehouse tables Failure".
But when I checked the log file 'generate_ctl.log', it has the following message:
+"Schema will be created from the following containers:+
Oracle 11.5.10
Oracle R12
Universal
Conflict(s) between containers:
Table Name : W_BOM_ITEM_FS
Column Name: INTEGRATION_ID.
The column properties that are different: [keyTypeCode]
Success! "
When I checked the DWH database I could find DWH tables, but I am not sure whether all tables were created.
Can anyone tell me whether my DWH tables were all created? How many tables would be created for the above EBS containers?
Also, do I need to drop any of the EBS containers to create the DWH tables successfully?
The Installation Guide states that when DWH table creation fails, 'createtables.log' won't be created. But in my case this log file was created!
Edited by: userOO7 on Nov 19, 2008 2:41 PM
I saw the same message. I also noticed I am unable to load any BOM Items into that fact table. It looks like the BOM_EXPLODER package call is not keeping any rows in BOM_EXPLOSION_TEMP, so no rows are loaded into that fact table. Someone needs to log an SR for this.
*****START LOAD SESSION*****
Load Start Time: Wed Nov 19 17:13:42 2008
Target tables:
W_BOM_ITEM_FS
READER_2_1_1> BLKR_16019 Read [0] rows, read [0] error rows for source table [BOM_EXPLOSION_TEMP] instance name [mplt_BC_ORA_BOMItemFact.BOM_EXPLOSION_TEMP]
READER_2_1_1> BLKR_16008 Reader run completed.
TRANSF_2_1_1> DBG_21216 Finished transformations for Source Qualifier [mplt_BC_ORA_BOMItemFact.SQ_BOM_EXPLOSION_TEMP]. Total errors [0]
WRITER_2_*_1> WRT_8167 Start loading table [W_BOM_ITEM_FS] at: Wed Nov 19 17:13:42 2008
WRITER_2_*_1> WRT_8168 End loading table [W_BOM_ITEM_FS] at: Wed Nov 19 17:13:42 2008
WRITER_2_*_1> WRT_8035 Load complete time: Wed Nov 19 17:13:42 2008
LOAD SUMMARY
============
WRT_8036 Target: W_BOM_ITEM_FS (Instance Name: [W_BOM_ITEM_FS])
WRT_8044 No data loaded for this target
WRITER_2_*_1> WRT_8043 *****END LOAD SESSION*****
WRITER_2_*_1> WRT_8006 Writer run completed.
I now see it is covered in the release notes:
http://download.oracle.com/docs/cd/E12127_01/doc/bia.795/e12087/chapter.htm#CHDFJHHB
1.3.31 No Data Is Loaded Into W_BOM_ITEM_F And W_BOM_ITEM_FS
The mapping SDE_ORA_BOMItemFact needs to call a Stored Procedure (SP) in the Oracle EBS instance, which inserts rows into a global temporary table (duration SYS$SESSION, that is, the data will be lost if the session is closed). This Stored Procedure does not have an explicit commit. The Stored Procedure then needs to read the rows in the temporary table into the warehouse.
In order for the mapping to work, Informatica needs to share the same connection for the SP and the SQL qualifier during ETL. This feature was available in the Informatica 7.x release, but it is not available in Informatica release 8.1.1 (SP4). As a result, W_BOM_ITEM_FS and W_BOM_ITEM_F are not loaded properly.
Workaround
For all Oracle EBS customers:
Open package body bompexpl.
Look for text "END exploder_userexit;", scroll a few lines above, and add a "commit;" command before "EXCEPTION".
Save and compile the package.
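In case it helps to visualize the edit, here is a minimal, self-contained sketch of the pattern that workaround describes: a session-duration global temporary table filled by a procedure that commits explicitly just before its EXCEPTION block. All object names below are illustrative; the real change is a one-line edit inside the existing BOMPEXPL package body in the EBS instance.
{code}
-- Illustrative only: mimics the "add commit; before EXCEPTION" edit.
CREATE GLOBAL TEMPORARY TABLE demo_explosion_temp (
  item_id NUMBER
) ON COMMIT PRESERVE ROWS;

CREATE OR REPLACE PROCEDURE demo_exploder_userexit AS
BEGIN
  INSERT INTO demo_explosion_temp (item_id) VALUES (1);
  COMMIT;  -- the added COMMIT, mirroring the workaround's edit before EXCEPTION
EXCEPTION
  WHEN OTHERS THEN
    RAISE;
END demo_exploder_userexit;
/
{code}
Once the real package compiles cleanly, rerun the failed load and check whether W_BOM_ITEM_FS picks up rows from BOM_EXPLOSION_TEMP. -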
How to observe a table creation/insertion ?
Hello,
Questions :
1] How can I observe a table insert operation ?
2] When we cut/paste existing table frame what is the order of page items hierarchy creation ? like create text frame first then insert table in it.
Currently I have implemented code to observe page item creation (of text frames, rectangle frames etc.) using IID_IHIERARCHY_DOCUMENT and kNewPageItemCmdBoss. This is working fine.
But when I tried to copy/cut-paste an existing table frame, I got the kNewPageItemCmdBoss for the text frame page item which contains the table frame, but I did not get a notification for the table creation.
What I want to do is: when the user pastes a table frame, I need to access the newly created table and get its model UID (using GetNthModelUID) just after the paste process (i.e. during processing of kNewPageItemCmdBoss).
I have tried the following code, but it gives me a table model count of 0.
Note: the following code is added in the IID_IHIERARCHY_DOCUMENT protocol observer code (please refer to the PstLstDocObserver.cpp code snippet in the InDesign SDK).
if (theChange == kNewPageItemCmdBoss)
{
    int32 SelectedFrames = itemList.Length();
    if (SelectedFrames > 0)
    {
        UIDRef frameUIDRef, childUIDRef, parentUIDRef;
        UID frameModelUID, childUID, parentUID;
        for (int i = 0; i < SelectedFrames; i++)
        {
            frameUIDRef = itemList.GetRef(i);
            IDataBase *db = frameUIDRef.GetDataBase();
            InterfacePtr<IHierarchy> itemHierarchy(frameUIDRef, IID_IHIERARCHY);
            if (itemHierarchy)
            {
                if (itemHierarchy->GetChildCount() > 0)
                {
                    childUID = itemHierarchy->GetChildUID(0);
                    childUIDRef = UIDRef(db, childUID);
                    if (childUIDRef)
                    {
                        int32 tableCount = 0;
                        UIDRef tableModelUIDRef;
                        InterfacePtr<IMultiColumnTextFrame> mcFrame(childUIDRef, UseDefaultIID());
                        if (mcFrame)
                        {
                            InterfacePtr<ITextModel> textModel(mcFrame->QueryTextModel(), UseDefaultIID());
                            if (textModel)
                            {
                                InterfacePtr<ITableModelList> tableList(textModel, UseDefaultIID());
                                if (tableList)
                                {
                                    // Here I get the tableCount as 0
                                    tableCount = tableList->GetModelCount();
                                    InterfacePtr<ITableModel> table(tableList->QueryNthModel(tableCount - 1));
                                    if (table)
                                    {
                                        tableModelUIDRef = ::GetUIDRef(table);
                                        frameModelUID = tableModelUIDRef.GetUID();
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
Note : using CS6/CC SDK
Thanks,
-Harsh
An observer attached on kDocBoss will not give a notification on table creation, as a table does not fall into the document hierarchy.
You can try to attach an observer on kDocWorkspaceBoss and see if you get the notification.
Also, as for your problem of not getting the table at the time of copy/paste, are you getting the reference for IMultiColumnTextFrame?
You might not get some entities (XML tags, labels etc.) on a page item in kDocObserver as soon as the frame is pasted.
You can use a workaround to capture the pasted table frame and get its table by hooking up to the responder service 'kNewStorySignalResponderService'
and then use your above logic inside it. -
How to create Text Tables and Value Tables ?
Hi all,
How can we create a Text Table? Step by step, please.
How can we create a Value Table? Step by step, please.
Note: I am not asking about the creation of a simple transparent table.
Thanks in advance.
How do we use Text Tables?
With an example, please.
Thanks in advance.
Regards.
Raj
Hi Raj,
Table A is a text table of table B if the key of A comprises the key of B and an additional language key field (field of data type LANG). Table A may therefore contain explanatory text in several languages for each key entry of B.
To link the key entries with the text, text table A must be linked with table B using a foreign key. 'Key fields of a text table' must be selected here as the type of the foreign key fields.
If table B is the check table of a field, the existing key entries of table B are displayed as possible input values when the input help (F4) is pressed. The explanatory text (contents of the first character-like non-key-field of text table A) is also displayed in the user's logon language for each key value in table B.
Only one text table can be created for table B! The system checks this when you attempt to activate a table with text foreign keys for B.
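To make the key structure concrete, here is a rough sketch in plain SQL terms - purely an analogy, since the real objects are ABAP Dictionary tables created in SE11, and the table and field names below are made up:
{code}
-- Table B holds the key values; text table A repeats B's key plus a language key
-- and carries the description (the first character-like non-key field).
CREATE TABLE zcategory (
  mandt    CHAR(3) NOT NULL,
  category CHAR(4) NOT NULL,
  PRIMARY KEY (mandt, category)
);

CREATE TABLE zcategory_t (
  mandt       CHAR(3) NOT NULL,
  langu       CHAR(1) NOT NULL,   -- language key (data type LANG, usually field SPRAS)
  category    CHAR(4) NOT NULL,
  description VARCHAR2(40),
  PRIMARY KEY (mandt, langu, category),
  FOREIGN KEY (mandt, category) REFERENCES zcategory (mandt, category)
);
{code}
In the Dictionary itself you achieve the same thing by giving the text table the check table's key plus a LANG key field and flagging the foreign key as 'Key fields of a text table', exactly as described above.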
Regards
Aneesh. -
I've recently completed a database upgrade from 10.2.0.3 to 11.2.0.1 using the DBUA.
I've since encountered a slowdown when running a script which drops and recreates a series of ~250 tables. The script normally runs in around 19 seconds. After the upgrade, the script requires ~2 minutes to run.
By chance has anyone encountered something similar?
The problem may be related to the behavior of an "after CREATE on schema" trigger which grants select privileges to a role through a dbms_job call; the behavior differs between 10g and the database that was upgraded from 10g to 11g. I am currently researching this angle.
I will be using the following table creation DDL for this abbreviated test case:
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
)
tablespace LIVE_DATA;
When calling the above DDL, an "after CREATE on schema" trigger is fired which schedules a job to run immediately and grant the select privilege to a role for the table that was just created:
{code}
create or replace
trigger select_grant
after CREATE on schema
declare
l_str varchar2(255);
l_job number;
begin
if ( ora_dict_obj_type = 'TABLE' ) then
l_str := 'execute immediate "grant select on ' ||
ora_dict_obj_name ||
' to select_role";';
dbms_job.submit( l_job, replace(l_str,'"','''') );
end if;
end;
{code}
Below I've included data from two separate test runs. The first is on the upgraded database and includes optimizer parameters and an abbreviated TKPROF. I've also included the offending SYS-generated SQL, which is not issued when the same test is run on a 10g environment that has been set up with a similar test case. The 10g test run's TKPROF is also included below.
The version of the database is 11.2.0.1.
These are the parameters relevant to the optimizer for the test run on the upgraded 11g SID:
{code}
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_capture_sql_plan_baselines boolean FALSE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 11.2.0.1
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
optimizer_use_invisible_indexes boolean FALSE
optimizer_use_pending_statistics boolean FALSE
optimizer_use_sql_plan_baselines boolean TRUE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 8
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 03-11-2010 16:33
SYSSTATS_INFO DSTOP 03-11-2010 17:03
SYSSTATS_INFO FLAGS 0
SYSSTATS_MAIN CPUSPEEDNW 713.978495
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM 1565.746
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED 2310
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
{code}
Output from TKPROF on the 11g SID:
{code}
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
tablespace LIVE_DATA
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 4 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 0 4 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 324
{code}
... large section omitted ...
Here is the performance hit portion of the TKPROF on the 11g SID:
{code}
SQL ID: fsbqktj5vw6n9
Plan Hash: 1443566277
select next_run_date, obj#, run_job, sch_job
from
(select decode(bitand(a.flags, 16384), 0, a.next_run_date,
a.last_enabled_time) next_run_date, a.obj# obj#,
decode(bitand(a.flags, 16384), 0, 0, 1) run_job, a.sch_job sch_job from
(select p.obj# obj#, p.flags flags, p.next_run_date next_run_date,
p.job_status job_status, p.class_oid class_oid, p.last_enabled_time
last_enabled_time, p.instance_id instance_id, 1 sch_job from
sys.scheduler$_job p where bitand(p.job_status, 3) = 1 and
((bitand(p.flags, 134217728 + 268435456) = 0) or
(bitand(p.job_status, 1024) <> 0)) and bitand(p.flags, 4096) = 0 and
p.instance_id is NULL and (p.class_oid is null or (p.class_oid is
not null and p.class_oid in (select b.obj# from sys.scheduler$_class b
where b.affinity is null))) UNION ALL select
q.obj#, q.flags, q.next_run_date, q.job_status, q.class_oid,
q.last_enabled_time, q.instance_id, 1 from sys.scheduler$_lightweight_job
q where bitand(q.job_status, 3) = 1 and ((bitand(q.flags, 134217728 +
268435456) = 0) or (bitand(q.job_status, 1024) <> 0)) and
bitand(q.flags, 4096) = 0 and q.instance_id is NULL and (q.class_oid
is null or (q.class_oid is not null and q.class_oid in (select
c.obj# from sys.scheduler$_class c where
c.affinity is null))) UNION ALL select j.job, 0,
from_tz(cast(j.next_date as timestamp), to_char(systimestamp,'TZH:TZM')
), 1, NULL, from_tz(cast(j.next_date as timestamp),
to_char(systimestamp,'TZH:TZM')), NULL, 0 from sys.job$ j where
(j.field1 is null or j.field1 = 0) and j.this_date is null) a order by
1) where rownum = 1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.47 0.47 0 9384 0 1
total 3 0.48 0.48 0 9384 0 1
Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
1 COUNT STOPKEY (cr=9384 pr=0 pw=0 time=0 us)
1 VIEW (cr=9384 pr=0 pw=0 time=0 us cost=5344 size=6615380 card=194570)
1 SORT ORDER BY STOPKEY (cr=9384 pr=0 pw=0 time=0 us cost=5344 size=11479630 card=194570)
194790 VIEW (cr=9384 pr=0 pw=0 time=537269 us cost=2563 size=11479630 card=194570)
194790 UNION-ALL (cr=9384 pr=0 pw=0 time=439235 us)
231 FILTER (cr=68 pr=0 pw=0 time=920 us)
231 TABLE ACCESS FULL SCHEDULER$_JOB (cr=66 pr=0 pw=0 time=690 us cost=19 size=13157 card=223)
1 TABLE ACCESS BY INDEX ROWID SCHEDULER$_CLASS (cr=2 pr=0 pw=0 time=0 us cost=1 size=40 card=1)
1 INDEX UNIQUE SCAN SCHEDULER$_CLASS_PK (cr=1 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 5056)
0 FILTER (cr=3 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL SCHEDULER$_LIGHTWEIGHT_JOB (cr=3 pr=0 pw=0 time=0 us cost=2 size=95 card=1)
0 TABLE ACCESS BY INDEX ROWID SCHEDULER$_CLASS (cr=0 pr=0 pw=0 time=0 us cost=1 size=40 card=1)
0 INDEX UNIQUE SCAN SCHEDULER$_CLASS_PK (cr=0 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 5056)
194559 TABLE ACCESS FULL JOB$ (cr=9313 pr=0 pw=0 time=167294 us cost=2542 size=2529254 card=194558)
{code}
and the totals at the end of the TKPROF on the 11g SID:
{code}
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 4 0
Fetch 0 0.00 0.00 0 0 0 0
total 3 0.00 0.00 0 0 4 0
Misses in library cache during parse: 1
Misses in library cache during execute: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 70 0.00 0.00 0 0 0 0
Execute 85 0.01 0.01 0 62 208 37
Fetch 49 0.48 0.49 0 9490 0 35
total 204 0.51 0.51 0 9552 208 72
Misses in library cache during parse: 5
Misses in library cache during execute: 3
35 user SQL statements in session.
53 internal SQL statements in session.
88 SQL statements in session.
Trace file: 11gSID_ora_17721.trc
Trace file compatibility: 11.1.0.7
Sort options: default
1 session in tracefile.
35 user SQL statements in trace file.
53 internal SQL statements in trace file.
88 SQL statements in trace file.
51 unique SQL statements in trace file.
1590 lines in trace file.
18 elapsed seconds in trace file.
{code}
The version of the database is 10.2.0.3.0.
These are the parameters relevant to the optimizer for the test run on the 10g SID:
{code}
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.3
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 8
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 09-24-2007 11:09
SYSSTATS_INFO DSTOP 09-24-2007 11:09
SYSSTATS_INFO FLAGS 1
SYSSTATS_MAIN CPUSPEEDNW 2110.16949
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
{code}
Now for the TKPROF of a mirrored test environment running on a 10G SID:
{code}
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
tablespace LIVE_DATA
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.01 0 2 16 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.01 0.01 0 2 16 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 113
{code}
... large section omitted ...
Totals for the TKPROF on the 10g SID:
{code}
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.02 0 0 0 0
Execute 1 0.00 0.00 0 2 16 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.02 0 2 16 0
Misses in library cache during parse: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 65 0.01 0.01 0 1 32 0
Execute 84 0.04 0.09 20 90 272 35
Fetch 88 0.00 0.10 30 281 0 64
total 237 0.07 0.21 50 372 304 99
Misses in library cache during parse: 38
Misses in library cache during execute: 32
10 user SQL statements in session.
76 internal SQL statements in session.
86 SQL statements in session.
Trace file: 10gSID_ora_32003.trc
Trace file compatibility: 10.01.00
Sort options: default
1 session in tracefile.
10 user SQL statements in trace file.
76 internal SQL statements in trace file.
86 SQL statements in trace file.
43 unique SQL statements in trace file.
949 lines in trace file.
0 elapsed seconds in trace file.
{code}
Edited by: user8598842 on Mar 11, 2010 5:08 PM
So while this certainly isn't the most elegant of solutions, and most assuredly isn't in the realm of supported by Oracle...
I've used the DBMS_IJOB.DROP_USER_JOBS('username'); package to remove the 194558 orphaned job entries from the job$ table. Don't ask, I've no clue how they all got there; but I've prepared some evil looks to unleash upon certain developers tomorrow morning.
Not being able to reorganize the JOB$ table to free the now wasted ~67MB of space I've opted to create a new index on the JOB$ table to sidestep the full table scan.
CREATE INDEX SYS.JOB_F1_THIS_NEXT ON SYS.JOB$ (FIELD1, THIS_DATE, NEXT_DATE) TABLESPACE SYSTEM;
The next option would be to try to find a way to grant the select privilege to the role without using the aforementioned "after CREATE on schema" trigger and dbms_job call. This method was adopted to cover situations in which a developer manually added a table directly to the database rather than using the provided scripts to recreate their test environment.
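One possible direction, sketched below under the assumption that a periodic catch-up is acceptable: instead of a DDL trigger submitting one dbms_job per CREATE, a recurring DBMS_SCHEDULER job could grant SELECT on any table in the schema that the role does not yet have. The job name, interval and role name here are illustrative, not taken from the original setup.
{code}
-- Hypothetical sketch: periodic job that back-fills missing SELECT grants.
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name        => 'SYNC_SELECT_GRANTS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[
      BEGIN
        FOR t IN (SELECT table_name
                    FROM user_tables
                   WHERE table_name NOT IN (SELECT table_name
                                              FROM user_tab_privs
                                             WHERE grantee   = 'SELECT_ROLE'
                                               AND privilege = 'SELECT'))
        LOOP
          EXECUTE IMMEDIATE 'grant select on "' || t.table_name || '" to select_role';
        END LOOP;
      END;]',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=5',
    enabled         => TRUE);
END;
/
{code}
That keeps the grants out of the DDL path entirely, so JOB$ is no longer touched for every table creation.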
I assume that the following quote from the 11gR2 documentation is mistaken, and there is no such beast as "create or replace table" in 11g:
http://download.oracle.com/docs/cd/E11882_01/server.112/e10592/statements_9003.htm#i2061306
"Dropping a table invalidates dependent objects and removes object privileges on the table. If you want to re-create the table, then you must regrant object privileges on the table, re-create the indexes, integrity constraints, and triggers for the table, and respecify its storage parameters. Truncating and replacing have none of these effects. Therefore, removing rows with the TRUNCATE statement or replacing the table with a *CREATE OR REPLACE TABLE* statement can be more efficient than dropping and re-creating a table." -
Table creation - order of events
I am trying to get some help on the order I should be carrying out table creation tasks.
Say I create a simple table:
create table title (
title_id number(2) not null,
title varchar2(10) not null,
effective_from date not null,
effective_to date not null,
constraint pk_title primary key (title_id)
);
I believe I should populate the data, then create my index:
create unique index title_title_id_idx on title (title_id asc)
But I have read that Oracle will automatically create an index for my primary key if I do not do so myself.
At what point does Oracle create the index on my behalf and how do I stop it?
Should I only apply the primary key constraint after the data has been loaded as well?
Even then, if I add the primary key constraint, will Oracle not immediately create an index for me when I am about to create a specific one matching my naming conventions?
Yeah, but just handle it the way you would handle any other constraint violation - with the EXCEPTIONS INTO clause...
SQL> select index_name, uniqueness from user_indexes
2 where table_name = 'APC'
3 /
no rows selected
SQL> insert into apc values (1)
2 /
1 row created.
SQL> insert into apc values (2)
2 /
1 row created.
SQL> alter table apc add constraint apc_pk primary key (col1)
2 using index ( create unique index my_new_index on apc (col1))
3 /
Table altered.
SQL> insert into apc values (2)
2 /
insert into apc values (2)
ERROR at line 1:
ORA-00001: unique constraint (APC.APC_PK) violated
SQL> alter table apc drop constraint apc_pk
2 /
Table altered.
SQL> insert into apc values (2)
2 /
1 row created.
SQL> alter table apc add constraint apc_pk primary key (col1)
2 using index ( create unique index my_new_index on apc (col1))
3 /
alter table apc add constraint apc_pk primary key (col1)
ERROR at line 1:
ORA-02437: cannot validate (APC.APC_PK) - primary key violated
SQL> @%ORACLE_HOME%/rdbms/admin/utlexcpt.sql
Table created.
SQL> alter table apc add constraint apc_pk primary key (col1)
2 using index ( create unique index my_new_index on apc (col1))
3 exceptions into EXCEPTIONS
4 /
alter table apc add constraint apc_pk primary key (col1)
ERROR at line 1:
ORA-02437: cannot validate (APC.APC_PK) - primary key violated
SQL> select * from apc where rowid in ( select row_id from exceptions)
2 /
COL1
2
2
SQL>
All this is in the documentation. Find out more.
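On the earlier question of when the index appears: Oracle builds (or picks) the index at the moment the primary key constraint is enabled; if a suitable index already exists, or you name one in a USING INDEX clause, no extra index is created. A minimal sketch of the load-then-index-then-constrain ordering, reusing the table from the original post:
{code}
create table title (
  title_id       number(2)    not null,
  title          varchar2(10) not null,
  effective_from date         not null,
  effective_to   date         not null
);

-- load the data here, before any index exists

create unique index title_title_id_idx on title (title_id asc);

-- attaching the constraint to the existing index avoids a second, system-named index
alter table title add constraint pk_title primary key (title_id)
  using index title_title_id_idx;
{code}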
Cheers, APC -
0LANGU - missing in text table of an InfoObject - 0PLANT
Hi there,
I'm looking for some assistance.
The IO 0PLANT was fine in BI 7.0. I viewed the transformation code in an InfoSpoke (not OHD) migrated from BW 3.5, and when I ran the syntax check it reported "0LANGU" as an unknown field in the /BI0/TPLANT table. When I checked there, it is indeed not present in the text table, and the 0PLANT object is in the $TMP package. I have no clue how 0PLANT got changed and/or how 0LANGU went missing.
I checked this in the Dev server; everything is fine in the Prod server.
Any comments?
Praveen,
Happy to hear from you again - and you've spotted my silly mistake once again.
The "Lang. Depnt" field is unmarked, but I don't know how that happened.
Now tell me one thing: if I change the IO and move it to production, will that transport affect all the other objects using this IO, e.g. will the DSO/Cube become inactive? Though I'm going to perform the changes under a DUMMY transport, I just want to be clear on this part. -
How to create monthly table creation?
Hi Mates,
We are unable to create tables by month in the analytics database; the data keeps loading into the previous table continuously, as shown in the attached screenshot. The schema user has the creation privilege. We are using WebCenter Interaction 10gR4.
How can we set up monthly table creation, please?
Thanks,
Katherine
Hi Trevor,
Thanks for your help. We were able to create tables and load data until April, as attached.
However, the analytics user privileges were modified in April due to a server operation.
Since then, there has been a message in the analytics log saying there is no permission to create tables;
the analytics user privileges were granted again after we checked this message. As I suspected, the issue occurred after the analytics user privileges were modified.
Currently, the analytics users are granted all privileges.
Any ideas, please?
Thanks,
Kathy -
Maintenance view does not bring the default language in the Text table
Hi Experts,
I am creating a maintenance view with the join of two tables
TAB1 has following fields
MANDT
CATEGORY
TAB2 (is the text table for TAB1) has the following fields
MANDT
LANGU
CATEGORY
DESCRIPTION
I have created the maintenance view with
MANDT
CATEGORY
DESCRIPTION
When I try to maintain the data through the view, I can insert the data in TAB1, but in TAB2 it does not take the default language key.
I want my default language key to be 'EN'.
I have referred to V_T77TMC_EDUTYP, but I cannot find the reason why I don't get the language key in my custom view.
Request your help.
Thanks
Anu.
Hello Anu,
I believe that according to the definition of a maintenance view, for text tables there should be an automatic entry of sy-langu in the SPRAS field. So you should be able to get this without activating any TMG events.
I suppose you have created a maintenance generator for your maintenance view and are updating entries using SM30 for that maintenance view.
Please check that you have created the text table properly: when you enter TAB1 in SE16N and press Enter, do you see TAB2 in the 'text table' field?
Make sure that while defining the foreign key relationship you have used the 'Key fields of a text table' option.
Do share if you face any problems.
regards,
Diwakar -
"Text Table must be converted" error
BW Gurus,
We just applied stack 15 to our BI 7.0 system (SAP_BW 0017), and are now getting an error with one of our infoobjects.
The error states that the text table must be converted, as below, but we're not quite sure what this means. This InfoObject is in several InfoProviders, and we get a follow-up error that we must remove the InfoObject from the transfer rules before it can be converted. Has anyone come up with a solution/workaround for this?
Thanks
First Error:
Characteristic 0IND_CODE: The Long Text Field will be deleted, text table must be converted
Message no. R7451
Diagnosis
The Long Text Field was deleted from the text table of characteristic 0IND_CODE.
System Response
All texts, that have been saved in this field, are deleted in the activation.
The text table must be converted after the activation if it contains data.
Second error:
InfoObject 0TXTLG is used in transfer rules (T) of characteristic 0IND_CODE
Message no. R7256
Diagnosis
InfoObject 0TXTLG is still used for characteristic 0IND_CODE in the transfer rules of type T ('M': Master data, 'T': Texts). You have the following options:
1. InfoObject 0TXTLG is an attribute of characteristic 0IND_CODE
2. InfoObject 0TXTLG is the 'Valid to' field of the time-dependent characteristic 0IND_CODE (0DATETO)
3. InfoObject 0TXTLG is the 'Valid from' field of the time-dependent characteristic 0IND_CODE (0DATEFROM)
4. Characteristic 0IND_CODE is compounded to InfoObject 0TXTLG.
5. Characteristic 0TXTLG is a text field of characteristic 0IND_CODE (0TXTSH, 0TXTMD or 0TXTLG).
6. Characteristic 0TXTLG is the language field for characteristic 0IND_CODE (0LANGU).
System Response
2 cases have to be distinguished between here:
InfoObject 0TXTLG with characteristic 0IND_CODE can not be removed in the dialog.
InfoObject 0TXTLG is removed with characteristic 0IND_CODE in the transport post-processing.
Procedure
You must first remove InfoObject 0TXTLG from the transfer rules for characteristic 0IND_CODE in the dialog before you can remove it with characteristic 0IND_CODE.
If this messages appears with transport post-processing, you should check the affected transfer rules.
If there is still an InfoObject 0TXTLG in them, you should either remove it from the transfer rules or reinsert it with characteristic 0IND_CODE.
Thanks - sometimes the simplest answer is the best.
I had been trying to figure out why the medium & long text options had become unchecked, but rechecking them solved the problem.
Thanks -
Dynamic Internal Table creation and population
Hi gurus !
My issue refers to slide 10 of this slideshow: https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/b332e090-0201-0010-bdbd-b735e96fe0ae
My example is going to sound dumb, but anyway: I want to dynamically select from a table into a dynamically created itab.
Let's use only EKPO, and only the field MENGE.
For this, I use the classes cl_abap_elemdescr and cl_sql_result_set, and a data reference for the table creation. But while fetching the result set, the program dumps when fields like MENGE or WRBTR are accessed. Obviously their types are not correctly taken into account by my program.
Here it comes:
DATA: element_ref TYPE REF TO cl_abap_elemdescr,
vl_fieldname TYPE string,
tl_components TYPE abap_component_tab,
sl_components LIKE LINE OF tl_components,
linetype_lcl TYPE REF TO cl_abap_structdescr,
ty_table_type TYPE REF TO cl_abap_tabledescr,
g_resultset TYPE REF TO cl_sql_result_set.
...
CONCATENATE sg_columns-table_name '-' sg_columns-column_name INTO vl_fieldname.
* sg_columns-table_name contains 'EKPO'
* sg_columns-column_name contains 'MENGE'
* getting the element as a component
element_ref ?= cl_abap_elemdescr=>describe_by_name( vl_fieldname ).
sl_components-name = sg_columns-column_name.
sl_components-type ?= element_ref.
APPEND sl_components TO tl_components.
* dynamic creation of internal table
linetype_lcl = cl_abap_structdescr=>create( tl_components ).
ty_table_type = cl_abap_tabledescr=>create(
p_line_type = linetype_lcl ).
...
* Then I will create my field symbol table and line. Code has been cut here.
CREATE DATA dy_line LIKE LINE OF <dyn_table>.
...
* Then I will execute my query. Here it's: Select MENGE From EKPO Where Rownum = 1.
g_resultset = g_stmt_ref->execute_query( stmt_str ).
* Then structure for the Resultset is set
CALL METHOD g_resultset->set_param_struct
EXPORTING
struct_ref = dy_line.
* Fetching the lines of the resultset => Dump...
WHILE g_resultset->next( ) > 0.
ASSIGN dy_line->* TO <dyn_wa>.
APPEND <dyn_wa> TO <dyn_table>.
ENDWHILE.
Does anyone have any clue how to prevent my dump?
The component for MENGE seems to be described as a P7 with 2 decimals, and the result set wants to use a QUAN type... or something like that!
Hello
I have expanded your sample coding for selecting three fields out of EKPO:
*& Report ZUS_SDN_SQL_RESULT_SET
*& Thread: Dynamic Internal Table creation and population
*& NOTE: Coding for dynamic structure / itab creation taken from:
*& Creating Flat and Complex Internal Tables Dynamically using RTTI
*& https://wiki.sdn.sap.com/wiki/display/Snippets/Creating+Flat+and+
*& Complex+Internal+Tables+Dynamically+using+RTTI
REPORT zus_sdn_sql_result_set.
TYPE-POOLS: abap.
DATA:
go_sql_stmt TYPE REF TO cl_sql_statement,
go_resultset TYPE REF TO cl_sql_result_set,
gd_sql_clause TYPE string.
DATA:
gd_tabfield TYPE string,
go_table TYPE REF TO cl_salv_table,
go_sdescr_new TYPE REF TO cl_abap_structdescr,
go_tdescr TYPE REF TO cl_abap_tabledescr,
gdo_handle TYPE REF TO data,
gdo_record TYPE REF TO data,
gs_comp TYPE abap_componentdescr,
gt_components TYPE abap_component_tab.
FIELD-SYMBOLS:
<gs_record> TYPE ANY,
<gt_itab> TYPE STANDARD TABLE.
START-OF-SELECTION.
continued. -
Bad file is not created during the external table creation.
Hello Experts,
I have created a script for an external table in an Oracle 10g DB. Everything is working fine except that it does not create the bad file, but it does create the log file. I can't figure out what the issue is, because my shell script is failing and the entire program is failing. I am attaching the table creation script, the shell script where it is referenced, and the error. Kindly let me know if something is missing. Thanks in advance.
Table Creation Script:
-------------------------------
create table RGIS_TCA_DATA_EXT
(
guid VARCHAR2(250),
badge VARCHAR2(250),
scheduled_store_id VARCHAR2(250),
parent_event_id VARCHAR2(250),
event_id VARCHAR2(250),
organization_number VARCHAR2(250),
customer_number VARCHAR2(250),
store_number VARCHAR2(250),
inventory_date VARCHAR2(250),
full_name VARCHAR2(250),
punch_type VARCHAR2(250),
punch_start_date_time VARCHAR2(250),
punch_end_date_time VARCHAR2(250),
event_meet_site_id VARCHAR2(250),
vehicle_number VARCHAR2(250),
vehicle_description VARCHAR2(250),
vehicle_type VARCHAR2(250),
is_owner VARCHAR2(250),
driver_passenger VARCHAR2(250),
mileage VARCHAR2(250),
adder_code VARCHAR2(250),
bonus_qualifier_code VARCHAR2(250),
store_accuracy VARCHAR2(250),
store_length VARCHAR2(250),
badge_input_type VARCHAR2(250),
source VARCHAR2(250),
created_by VARCHAR2(250),
created_date_time VARCHAR2(250),
updated_by VARCHAR2(250),
updated_date_time VARCHAR2(250),
approver_badge_id VARCHAR2(250),
approver_name VARCHAR2(250),
orig_guid VARCHAR2(250),
edit_type VARCHAR2(250)
)
organization external
(
type ORACLE_LOADER
default directory ETIME_LOAD_DIR
access parameters
(
RECORDS DELIMITED BY NEWLINE
BADFILE ETIME_LOAD_DIR:'tstlms.bad'
LOGFILE ETIME_LOAD_DIR:'tstlms.log'
READSIZE 1048576
FIELDS TERMINATED BY '|'
MISSING FIELD VALUES ARE NULL(
GUID
,BADGE
,SCHEDULED_STORE_ID
,PARENT_EVENT_ID
,EVENT_ID
,ORGANIZATION_NUMBER
,CUSTOMER_NUMBER
,STORE_NUMBER
,INVENTORY_DATE char date_format date mask "YYYYMMDD HH24:MI:SS"
,FULL_NAME
,PUNCH_TYPE
,PUNCH_START_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
,PUNCH_END_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
,EVENT_MEET_SITE_ID
,VEHICLE_NUMBER
,VEHICLE_DESCRIPTION
,VEHICLE_TYPE
,IS_OWNER
,DRIVER_PASSENGER
,MILEAGE
,ADDER_CODE
,BONUS_QUALIFIER_CODE
,STORE_ACCURACY
,STORE_LENGTH
,BADGE_INPUT_TYPE
,SOURCE
,CREATED_BY
,CREATED_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
,UPDATED_BY
,UPDATED_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
,APPROVER_BADGE_ID
,APPROVER_NAME
,ORIG_GUID
,EDIT_TYPE
)
)
location (ETIME_LOAD_DIR:'tstlms.dat')
)
reject limit UNLIMITED;
Shell Script:
----------------
version=1.0
umask 000
DATE=`date +%Y%m%d%H%M%S`
TIME=`date +"%H%M%S"`
SOURCE=`hostname`
fcp_login=`echo $1|awk '{print $3}'|sed 's/"//g'|awk -F= '{print $2}'`
fcp_reqid=`echo $1|awk '{print $2}'|sed 's/"//g'|awk -F= '{print $2}'`
TXT1_PATH=/home/ac1/oracle/in/tsdata
TXT2_PATH=/home/ac2/oracle/in/tsdata
ARCH1_PATH=/home/ac1/oracle/in/tsdata
ARCH2_PATH=/home/ac2/oracle/in/tsdata
DEST_PATH=/home/custom/sched/in
PROGLOG=/home/custom/sched/logs/rgis_tca_to_tlms_create.sh.log
PROGNAME=`basename $0`
PROGPATH=/home/custom/sched/scripts
cd $TXT2_PATH
FILELIST2="`ls -lrt tstlmsedits*.dat |awk '{print $9}'`"
NO_OF_FILES2="`ls -lrt tstlmsedits*.dat |awk '{print $9}'|wc -l`"
> $DEST_PATH/tstlmsedits.dat
for i in $FILELIST2
do
cat $i >> $DEST_PATH/tstlmsedits.dat
printf "\n" >> $DEST_PATH/tstlmsedits.dat
mv $i $i.$DATE
#mv $i $TXT2_PATH/test/.
mv $i.$DATE $TXT2_PATH/test/.
done
if test $NO_OF_FILES2 -eq 0
then
echo " no tstlmsedits.dat file exists " >> $PROGLOG
else
echo "created dat file tstlmsedits.dat at $DATE" >> $PROGLOG
echo "-------------------------------------------" >> $PROGLOG
fi
NO_OF_FILES1="`ls -lrt tstlms*.dat |awk '{print $9}'|wc -l`"
FILELIST1="`ls -lrt tstlms*.dat |awk '{print $9}'`"
> $DEST_PATH/tstlms.dat
for i in $FILELIST1
do
cat $i >> $DEST_PATH/tstlms.dat
printf "\n" >> $DEST_PATH/tstlms.dat
mv $i $i.$DATE
# mv $i $TXT2_PATH/test/.
mv $i.$DATE $TXT2_PATH/test/.
done
if test $NO_OF_FILES1 -eq 0
then
echo " no tstlms.dat file exists " >> $PROGLOG
else
echo "created dat file tstlms.dat at $DATE" >> $PROGLOG
fi
cd $TXT1_PATH
FILELIST3="`ls -lrt tstlmsedits*.dat |awk '{print $9}'`"
NO_OF_FILES3="`ls -lrt tstlmsedits*.dat |awk '{print $9}'|wc -l`"
> $DEST_PATH/tstlmsedits.dat
for i in $FILELIST3
do
cat $i >> $DEST_PATH/tstlmsedits.dat
printf "\n" >> $DEST_PATH/tstlmsedits.dat
mv $i $i.$DATE
#mv $i $TXT1_PATH/test/.
mv $i.$DATE $TXT1_PATH/test/.
done
if test $NO_OF_FILES3 -eq 0
then
echo " no tstlmsedits.dat file exists " >> $PROGLOG
else
echo "created dat file tstlmsedits.dat at $DATE" >> $PROGLOG
echo "-------------------------------------------" >> $PROGLOG
fi
NO_OF_FILES4="`ls -lrt tstlms*.dat |awk '{print $9}'|wc -l`"
FILELIST4="`ls -lrt tstlms*.dat |awk '{print $9}'`"
> $DEST_PATH/tstlms.dat
for i in $FILELIST4
do
cat $i >> $DEST_PATH/tstlms.dat
printf "\n" >> $DEST_PATH/tstlms.dat
mv $i $i.$DATE
# mv $i $TXT1_PATH/test/.
mv $i.$DATE $TXT1_PATH/test/.
done
if test $NO_OF_FILES4 -eq 0
then
echo " no tstlms.dat file exists " >> $PROGLOG
else
echo "created dat file tstlms.dat at $DATE" >> $PROGLOG
fi
#connecting to oracle to generate bad files
sqlplus -s $fcp_login<<EOF
select count(*) from rgis_tca_data_ext;
select count(*) from rgis_tca_data_history_ext;
exit;
EOF
#counting the records in files
tot_rec_in_tstlms=`wc -l $DEST_PATH/tstlms.dat | awk ' { print $1 } '`
tot_rec_in_tstlmsedits=`wc -l $DEST_PATH/tstlmsedits.dat | awk ' { print $1 } '`
tot_rec_in_tstlms_bad=`wc -l $DEST_PATH/tstlms.bad | awk ' { print $1 } '`
tot_rec_in_tstlmsedits_bad=`wc -l $DEST_PATH/tstlmsedits.bad | awk ' { print $1 } '`
#updating log table
echo "pl/sql block started"
sqlplus -s $fcp_login<<EOF
define tot_rec_in_tstlms = '$tot_rec_in_tstlms';
define tot_rec_in_tstlmsedits = '$tot_rec_in_tstlmsedits';
define tot_rec_in_tstlms_bad = '$tot_rec_in_tstlms_bad';
define tot_rec_in_tstlmsedits_bad='$tot_rec_in_tstlmsedits_bad';
define fcp_reqid ='$fcp_reqid';
declare
l_tstlms_file_id number := null;
l_tstlmsedits_file_id number := null;
l_tot_rec_in_tstlms number := 0;
l_tot_rec_in_tstlmsedits number := 0;
l_tot_rec_in_tstlms_bad number := 0;
l_tot_rec_in_tstlmsedits_bad number := 0;
l_request_id fnd_concurrent_requests.request_id%type;
l_start_date fnd_concurrent_requests.actual_start_date%type;
l_end_date fnd_concurrent_requests.actual_completion_date%type;
l_conc_prog_name fnd_concurrent_programs.concurrent_program_name%type;
l_requested_by fnd_concurrent_requests.requested_by%type;
l_requested_date fnd_concurrent_requests.request_date%type;
begin
--getting concurrent request details
begin
SELECT fcp.concurrent_program_name,
fcr.request_id,
fcr.actual_start_date,
fcr.actual_completion_date,
fcr.requested_by,
fcr.request_date
INTO l_conc_prog_name,
l_request_id,
l_start_date,
l_end_date,
l_requested_by,
l_requested_date
FROM fnd_concurrent_requests fcr, fnd_concurrent_programs fcp
WHERE fcp.concurrent_program_id = fcr.concurrent_program_id
AND fcr.request_id = &fcp_reqid; --fnd_global.conc_request_id();
exception
when no_data_found then
fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
fnd_file.put_line(fnd_file.log, 'No data found for request_id');
fnd_file.put_line(fnd_file.log, sqlerrm);
raise_application_error(-20001,
'Error occured when executing RGIS_TCA_TO_TLMS_CREATE.sh ' ||
sqlerrm);
when others then
fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
fnd_file.put_line(fnd_file.log,
'Error occured when retrieving request_id request_id');
fnd_file.put_line(fnd_file.log, sqlerrm);
raise_application_error(-20001,
'Error occured when executing RGIS_TCA_TO_TLMS_CREATE.sh ' ||
sqlerrm);
end;
--calling ins_or_upd_tca_process_log to update log table for tstlms.dat file
begin
rgis_tca_to_tlms_process.ins_or_upd_tca_process_log
(l_tstlms_file_id,
'tstlms.dat',
l_conc_prog_name,
l_request_id,
l_start_date,
l_end_date,
&tot_rec_in_tstlms,
&tot_rec_in_tstlms_bad,
null,
null,
null,
null,
null,
null,
null,
l_requested_by,
l_requested_date,
null,
null,
null,
null,
null);
exception
when others then
fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
fnd_file.put_line(fnd_file.log,
'Error occured when executing rgis_tca_to_tlms_process.ins_or_upd_tca_process_log for tstlms file');
fnd_file.put_line(fnd_file.log, sqlerrm);
end;
--calling ins_or_upd_tca_process_log to update log table for tstlmsedits.dat file
begin
rgis_tca_to_tlms_process.ins_or_upd_tca_process_log
(l_tstlmsedits_file_id,
'tstlmsedits.dat',
l_conc_prog_name,
l_request_id,
l_start_date,
l_end_date,
&tot_rec_in_tstlmsedits,
&tot_rec_in_tstlmsedits_bad,
null,
null,
null,
null,
null,
null,
null,
l_requested_by,
l_requested_date,
null,
null,
null,
null,
null);
exception
when others then
fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
fnd_file.put_line(fnd_file.log,
'Error occured when executing rgis_tca_to_tlms_process.ins_or_upd_tca_process_log for tstlmsedits file');
fnd_file.put_line(fnd_file.log, sqlerrm);
end;
end;
exit;
EOF
echo "rgis_tca_to_tlms_process.sql started"
sqlplus -s $fcp_login @$SCHED_TOP/sql/rgis_tca_to_tlms_process.sql $fcp_reqid
exit;
echo "rgis_tca_to_tlms_process.sql ended"
Error:
----------------------------------
RGIS Scheduling: Version : UNKNOWN
Copyright (c) 1979, 1999, Oracle Corporation. All rights reserved.
TCATLMS module: TCA To TLMS Import Process
Current system time is 18-AUG-2011 06:13:27
COUNT(*)
16
COUNT(*)
25
wc: cannot open /home/custom/sched/in/tstlms.bad
wc: cannot open /home/custom/sched/in/tstlmsedits.bad
pl/sql block started
old 33: AND fcr.request_id = &fcp_reqid; --fnd_global.conc_request_id();
new 33: AND fcr.request_id = 18661823; --fnd_global.conc_request_id();
old 63: &tot_rec_in_tstlms,
new 63: 16,
old 64: &tot_rec_in_tstlms_bad,
new 64: ,
old 97: &tot_rec_in_tstlmsedits,
new 97: 25,
old 98: &tot_rec_in_tstlmsedits_bad,
new 98: ,
ERROR at line 64:
ORA-06550: line 64, column 4:
PLS-00103: Encountered the symbol "," when expecting one of the following:
( - + case mod new not null others <an identifier>
<a double-quoted delimited-identifier> <a bind variable> avg
count current exists max min prior sql stddev sum variance
execute forall merge time timestamp interval date
<a string literal with character set specification>
<a number> <a single-quoted SQL string> pipe
<an alternatively-quoted string literal with character set specification>
<an alternatively-q
ORA-06550: line 98, column 4:
PLS-00103: Encountered the symbol "," when expecting one of the following:
( - + case mod new not null others <an identifier>
<a double-quoted delimited-identifier> <a bind variable> avg
count current exists max min prior sql st
rgis_tca_to_tlms_process.sql started
old 12: and concurrent_request_id = '&1';
new 12: and concurrent_request_id = '18661823';
old 18: and concurrent_request_id = '&1';
new 18: and concurrent_request_id = '18661823';
old 22: rgis_tca_to_tlms_process.run_tca_data(l_tstlms_file_id,&1);
new 22: rgis_tca_to_tlms_process.run_tca_data(l_tstlms_file_id,18661823);
old 33: rgis_tca_to_tlms_process.run_tca_data_history(l_tstlmsedits_file_id,&1);
new 33: rgis_tca_to_tlms_process.run_tca_data_history(l_tstlmsedits_file_id,18661823);
old 44: rgis_tca_to_tlms_process.send_tca_email('TCATLMS',&1);
new 44: rgis_tca_to_tlms_process.send_tca_email('TCATLMS',18661823);
declare
ERROR at line 1:
ORA-20001: Error occured when executing RGIS_TCA_TO_TLMS_PROCESS.sql ORA-01403:
no data found
ORA-06512: at line 59
Executing request completion options...
------------- 1) PRINT -------------
Printing output file.
Request ID : 18661823
Number of copies : 0
Printer : noprint
Finished executing request completion options.
Concurrent request completed successfully
Current system time is 18-AUG-2011 06:13:29
---------------------------------------------------------------------------
Hi,
Check the status of the batch in SM35 transaction.
If the batch is locked by mistake or due to any other error, you can now release it and also process it again.
To release it - Shift+F4.
You can also analyse the job status via the F2 button.
Bye -
following is the table creation script with partition
CREATE TABLE customer_entity_temp (
BRANCH_ID NUMBER (4),
ACTIVE_FROM_YEAR VARCHAR2 (4),
ACTIVE_FROM_MONTH VARCHAR2 (3),
partition by range (ACTIVE_FROM_YEAR,ACTIVE_FROM_MONTH)
(partition yr7_1999 values less than ('1999',TO_DATE('Jul','Mon')),
partition yr12_1999 values less than ('1999',TO_DATE('Dec','Mon')),
it gives an error
ORA-14036: partition bound value too large for column
but if I increase the size of the ACTIVE_FROM_MONTH column to 9, the script works and creates the table. Why is that so?
Also, by creating a table in this way and populating the table data into the respective partitions, all rows with a month less than "JULY" will go into the yr7_1999 partition and all rows with a month value between "JUL" and "DEC" will go into the second partition yr12_1999; where will the data with a month value equal to "DEC" go?
Please help me in solving this problem.
Thanks and regards
Moloy
Hi,
You declared ACTIVE_FROM_MONTH as VARCHAR2(3) and you try to check it against a date in your partitioning clause: TO_DATE('Jul','Mon'). So you should first check your data model and what you are trying to achieve exactly.
With such a partition declaration, you will not be able to insert dates from December 1998 onward (inclusive). The values are strictly less than (<), not less than or equal to (<=), hence such rows can't be inserted. I'd advise you to look at the MAXVALUE joker value and the ENABLE ROW MOVEMENT partitioning clause.
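If the model can be changed, here is a minimal sketch of that direction (assuming the activation date can be stored as a single DATE column; names shortened from the original script):
{code}
-- Range-partition on a real DATE column and close the list with MAXVALUE,
-- so rows on or after the last explicit bound still have somewhere to go.
CREATE TABLE customer_entity_temp (
  branch_id   NUMBER(4),
  active_from DATE
)
PARTITION BY RANGE (active_from)
( PARTITION yr7_1999  VALUES LESS THAN (TO_DATE('01-07-1999','DD-MM-YYYY')),
  PARTITION yr12_1999 VALUES LESS THAN (TO_DATE('01-12-1999','DD-MM-YYYY')),
  PARTITION yr_max    VALUES LESS THAN (MAXVALUE)
);
{code}
With a layout like this, rows dated December 1999 or later land in yr_max; without a MAXVALUE partition they would simply be rejected, which answers the "where does DEC go" part of the question.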
Regards,
Yoann.