Using ADF against changing table structures
Hello,
We are developing a JSP app that will hit the same tables in different dbs/schemas, but the attributes (columns) of the tables could change (all except the primary key and the name of the table). I want to write the app once and allow it to hit the same tables even if they have different columns (depending on which db is connected). As an example, using ResultSetMetaData I can easily loop through all column labels and then loop through the ResultSet to display the data.
I assume that ADF would NOT be a good fit for this type of application, but I wanted to see what others had to say.
Thanks.
The biggest problem is that the number of columns in the tables could differ. Is it possible/feasible to create dynamic ViewObjects based on the structure of the currently connected db? As in, once the app initially connects, it dynamically builds the ViewObjects and then allows the app to access these ViewObjects as needed.
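The ResultSetMetaData approach described above can be sketched as below, independent of ADF. This is a sketch in plain JDBC; the class and method names (DynamicTable, rows, renderRow) are illustrative, not from any framework, and the ResultSet-reading helper assumes you already hold an open Connection. For the dynamic ViewObject question, it may also be worth looking at ApplicationModule.createViewObjectFromQueryStmt in ADF BC, which builds a view object from a SQL statement at runtime.

```java
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

final class DynamicTable {

    // Read the column labels once, so nothing is hard-coded per schema.
    static List<String> labels(ResultSetMetaData md) throws SQLException {
        List<String> labels = new ArrayList<>();
        for (int i = 1; i <= md.getColumnCount(); i++) {
            labels.add(md.getColumnLabel(i));
        }
        return labels;
    }

    // Copy every row into a label->value map, whatever columns this db has.
    static List<Map<String, Object>> rows(ResultSet rs) throws SQLException {
        List<String> labels = labels(rs.getMetaData());
        List<Map<String, Object>> out = new ArrayList<>();
        while (rs.next()) {
            Map<String, Object> row = new LinkedHashMap<>();
            for (String label : labels) {
                row.put(label, rs.getObject(label));
            }
            out.add(row);
        }
        return out;
    }

    // Render one generic row (e.g. from a JSP) without knowing the columns.
    static String renderRow(Map<String, Object> row) {
        StringBuilder sb = new StringBuilder("<tr>");
        for (Object value : row.values()) {
            sb.append("<td>").append(value).append("</td>");
        }
        return sb.append("</tr>").toString();
    }
}
```

The rendering half is testable without a database, since it only ever sees the generic row maps; only rows() needs a live ResultSet.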
Similar Messages
-
Problem changing table structure CDS
I created a table using a .hdbdd file. I was then able to import data from a csv file. Everything worked like a charm. Later I tried to add a field to the table, but when I tried to save the updated hdbdd and csv files I got an error that the number of columns in the table and the csv file don't match. So I tried to delete the entry for the table from the hdbti file. It allowed me to save the file, but I still get the column mismatch error.
Then I deleted the database table but now I get an error: Table import target table cannot be found. Table import target table cannot be found.
Finally, I tried to delete the csv file but the file appears in the list of files but I can't open it and I still get the same error.
Now I can't change any part of the data model without getting the error. Is there no way out of this maze?
Ross
My package structure:
The hdbdd file (The customers file is the one I want to delete):
If I activate this file, it activates fine but if I delete the customer table definition I get the error "Error while activating /f14e/S001/GBI/data/GBI_S001.hdbdd:Unknown object name "f14e.S001.GBI.data::GBI_S001.CUSTOMERS"."
namespace f14e.S001.GBI.data;
@Schema: 'F14E_STUDENT_001'
context GBI_S001 {
    @Catalog.tableType: #COLUMN
    entity SALES_ORG {
        key SALES_ORG : String(4);
        DESCRIPTION : String(16);
    };
    @Catalog.tableType: #COLUMN
    entity CUSTOMERS {
        key ID : String(10);
        COMPANY_NAME : String(35);
        CITY : String(20);
        key SALES_ORG : String(4);
        COUNTRY : String(2);
    };
    @Catalog.tableType: #COLUMN
    entity PRODUCTS {
        key product : String(8);
        product_name : String(40);
        product_category : String(3);
        division : String(2);
        internal_price : Decimal(17,3);
        price : Decimal(17,3);
        color : String(10);
        product_group : String(20);
    };
    @Catalog.tableType: #COLUMN
    entity SALES_ORDERS {
        created_at : LocalDate;
        created_by : String(20);
        customer_number : String(10);
        key order_number : String(10);
        gross_amount : Decimal(17,3);
        discount : Decimal(17,3);
        currency : String(3);
        status : String(15);
    };
    @Catalog.tableType: #COLUMN
    entity SALES_ORDER_DETAILS {
        key order_number : String(10);
        key order_item : String(3);
        product : String(8);
        quantity : Integer;
        unit_of_measure : String(3);
        revenue : Decimal(17,3);
        currency : String(3);
        discount : Decimal(17,3);
    };
};
Tables in the catalog
Customer table structure
hdbti file (with other tables removed)
import = [
    {
        table = "f14e.S001.GBI.data::GBI_S001.CUSTOMERS";
        schema = "F14A_STUDENT_001";
        file = "f14e.S001.GBI.data:customers2.csv";
        header = false;
    }
];
If I attempt to activate this file I get the error "Error while activating /f14e/S001/GBI/data/importdata.hdbti:Table import target table cannot be found. Table import target table cannot be found."
I appreciate your help on this.
Ross -
Use of value change table cdpos
I got the idea from Madhvi (thanks) about using two tables when fetching data from the CDPOS table, like giving tabname = 'EKET' or tabname = 'EKPO'. But how do I write the field names associated with these two tables? For the first two fields, which belong to table EKET, I write fname IN ('EINDT', 'MENGE'), but what about the third field, NETPR, which belongs to the second table, EKPO? In the output, only the changes for the first two fields show up when I use OR to distinguish between tables and fields.
Please refer to:
1) [Applications Electronic Technical Reference Manuals (eTRM) |https://metalink2.oracle.com/metalink/plsql/tec_main.etrm?p_version=ETRM_SITE_URL]
2) [Oracle Integration Repository |http://irep.oracle.com/] -
TIPS(18) : CREATING SCRIPTS TO RECREATE A TABLE STRUCTURE
Product: SQL*Plus
Date written: 1996-11-12
TIPS(18) : Creating Scripts to Recreate a Table Structure
=========================================================
This script creates scripts that can be used to recreate a table structure. For example, it can be used when a table has become fragmented, or to get a definition that can be run on another database.
CREATES SCRIPT TO RECREATE A TABLE-STRUCTURE
INCL. STORAGE, CONSTRAINTS, TRIGGERS ETC.
This script creates scripts to recreate a table structure.
Use the script to reorganise a table that has become fragmented,
to get a definition that can be run on another database/schema or
as a basis for altering the table structure (eg. drop a column!).
IMPORTANT: Running the script is safe as it only creates two new scripts and
does not do anything to your database! To get anything done you have to run the
scripts created.
The created scripts do the following:
1. Save the content of the table.
2. Drop any foreign key constraints referencing the table.
3. Drop the table.
4. Create the table with an Initial storage parameter that will accommodate the entire content of the table. The Next parameter is 25% of the Initial.
The storage parameters are picked from the following list: 64K, 128K, 256K, 512K, multiples of 1M.
5. Create table and column comments.
6. Fill the table with the original content.
7. Create all the indexes, including storage parameters as above.
8. Add primary key, unique key and check constraints.
9. Add foreign key constraints for the table and for referencing tables.
10. Create the table's triggers.
11. Compile any depending objects (cascading).
12. Grant table and column privileges.
13. Create synonyms.
This script must be run as the owner of the table.
If your table contains a LONG-column, use the COPY
command in SQL*Plus to store/restore the data.
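The Initial/Next sizing rule in step 4 (and in the nested decode(sign(...)) chains later in the script) can be restated as a small function. A sketch in Java, assuming the input is the segment size in bytes as summed from user_extents; ExtentSizing is an illustrative name, not part of the script:

```java
final class ExtentSizing {

    // Pick a storage bucket: 64K, 128K, 256K, 512K, 1M, or whole megabytes
    // rounded up, mirroring the script's nested decode(sign(...)) chain.
    static String bucket(long bytes) {
        double kb = bytes / 1024.0;
        if (kb > 1024) {
            // SQL does round(bytes/1M)+1, i.e. round up to the next megabyte.
            return (Math.round(bytes / (1024.0 * 1024.0)) + 1) + "M";
        }
        if (kb > 512) return "1M";
        if (kb > 256) return "512K";
        if (kb > 128) return "256K";
        if (kb > 64)  return "128K";
        return "64K";
    }

    // Initial accommodates the whole table.
    static String initialExtent(long tableBytes) {
        return bucket(tableBytes);
    }

    // Next is sized from 25% of the table; the script achieves the same by
    // dividing by 4096 instead of 1024 in the second decode chain.
    static String nextExtent(long tableBytes) {
        return bucket(tableBytes / 4);
    }
}
```

So a 3 MB table gets Initial 4M and Next 1M-or-less, depending on how the quarter size falls into the buckets.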
USAGE
from SQL*Plus:
start reorgtb
This will create the scripts REORGS1.SQL and REORGS2.SQL
REORGS1.SQL contains code to save the current content of the table.
REORGS2.SQL contains code to rebuild the table structure.
undef tab;
set echo off
column a1 new_val stor
column b1 new_val nxt
select
decode(sign(1024-sum(bytes)/1024),-1,to_char((round(sum(bytes)/(1024*1024))+1))||'M', /* > 1M rounded up to nearest megabyte */
decode(sign(512-sum(bytes)/1024), -1,'1M',
decode(sign(256-sum(bytes)/1024), -1,'512K',
decode(sign(128-sum(bytes)/1024), -1,'256K',
decode(sign(64-sum(bytes)/1024) , -1,'128K',
'64K'))))) a1,
decode(sign(1024-sum(bytes)/4096),-1,to_char((round(sum(bytes)/(4096*1024))+1))||'M', /* Next = 25% of Initial */
decode(sign(512-sum(bytes)/4096), -1,'1M',
decode(sign(256-sum(bytes)/4096), -1,'512K',
decode(sign(128-sum(bytes)/4096), -1,'256K',
decode(sign(64-sum(bytes)/4096) , -1,'128K',
'64K'))))) b1
from user_extents
where segment_name=upper('&1');
set pages 0 feed off verify off lines 150
col c1 format a80
spool reorgs1.sql
PROMPT drop table bk_&1
prompt /
PROMPT create table bk_&1 storage (initial &stor) as select * from &1
prompt /
spool off
spool reorgs2.sql
PROMPT spool reorgs2
select 'alter table '||table_name||' drop constraint '||constraint_name||';'
from user_constraints
where r_constraint_name in
  (select constraint_name from user_constraints
   where table_name=upper('&1')
   and constraint_type in ('P','U'));
PROMPT drop table &1
prompt /
prompt create table &1
select decode(column_id,1,'(',',')
||rpad(column_name,40)
||decode(data_type,'DATE' ,'DATE '
,'LONG' ,'LONG '
,'LONG RAW','LONG RAW '
,'RAW' ,'RAW '
,'CHAR' ,'CHAR '
,'VARCHAR' ,'VARCHAR '
,'VARCHAR2','VARCHAR2 '
,'NUMBER' ,'NUMBER '
,'unknown')
||rpad(
decode(data_type,'DATE' ,null
,'LONG' ,null
,'LONG RAW',null
,'RAW' ,decode(data_length,null,null
,'('||data_length||')')
,'CHAR' ,decode(data_length,null,null
,'('||data_length||')')
,'VARCHAR' ,decode(data_length,null,null
,'('||data_length||')')
,'VARCHAR2',decode(data_length,null,null
,'('||data_length||')')
,'NUMBER' ,decode(data_precision,null,' '
,'('||data_precision||
decode(data_scale,null,null
,','||data_scale)||')')
,'unknown'),8,' ')
||decode(nullable,'Y','NULL','NOT NULL') c1
from user_tab_columns
where table_name = upper('&1')
order by column_id
prompt )
select 'pctfree '||t.pct_free c1
,'pctused '||t.pct_used c1
,'initrans '||t.ini_trans c1
,'maxtrans '||t.max_trans c1
,'tablespace '||s.tablespace_name c1
,'storage (initial '||'&stor' c1
,' next '||'&stor' c1
,' minextents '||t.min_extents c1
,' maxextents '||t.max_extents c1
,' pctincrease '||t.pct_increase||')' c1
from user_Segments s, user_tables t
where s.segment_name = upper('&1') and
t.table_name = upper('&1')
and s.segment_type = 'TABLE'
prompt /
select 'comment on table &1 is '''||comments||''';' c1 from
user_tab_comments
where table_name=upper('&1');
select 'comment on column &1..'||column_name||
' is '''||comments||''';' c1 from user_col_comments
where table_name=upper('&1');
prompt insert into &1 select * from bk_&1
prompt /
set serveroutput on
declare
cursor c1 is select index_name, decode(uniqueness,'UNIQUE','UNIQUE') unq
from user_indexes where table_name = upper('&1');
indname varchar2(50);
cursor c2 is select
decode(column_position,1,'(',',')||rpad(column_name,40) cl
from user_ind_columns where table_name = upper('&1') and
index_name = indname
order by column_position;
l1 varchar2(100);
l2 varchar2(100);
l3 varchar2(100);
l4 varchar2(100);
l5 varchar2(100);
l6 varchar2(100);
l7 varchar2(100);
l8 varchar2(100);
l9 varchar2(100);
begin
dbms_output.enable(100000);
for c in c1 loop
dbms_output.put_line('create '||c.unq||' index '||c.index_name||' on &1');
indname := c.index_name;
for q in c2 loop
dbms_output.put_line(q.cl);
end loop;
dbms_output.put_line(')');
select 'pctfree '||i.pct_free ,
'initrans '||i.ini_trans ,
'maxtrans '||i.max_trans ,
'tablespace '||i.tablespace_name ,
'storage (initial '||
decode(sign(1024-sum(e.bytes)/1024),-1,
to_char((round(sum(e.bytes)/(1024*1024))+1))||'M',
decode(sign(512-sum(e.bytes)/1024), -1,'1M',
decode(sign(256-sum(e.bytes)/1024), -1,'512K',
decode(sign(128-sum(e.bytes)/1024), -1,'256K',
decode(sign(64-sum(e.bytes)/1024) , -1,'128K',
'64K'))))) ,
' next '||
decode(sign(1024-sum(e.bytes)/4096),-1,
to_char((round(sum(e.bytes)/(4096*1024))+1))||'M',
decode(sign(512-sum(e.bytes)/4096), -1,'1M',
decode(sign(256-sum(e.bytes)/4096), -1,'512K',
decode(sign(128-sum(e.bytes)/4096), -1,'256K',
decode(sign(64-sum(e.bytes)/4096) , -1,'128K',
'64K'))))) ,
' minextents '||s.min_extents ,
' maxextents '||s.max_extents ,
' pctincrease '||s.pct_increase||')'
into l1,l2,l3,l4,l5,l6,l7,l8,l9
from user_extents e,user_segments s, user_indexes i
where s.segment_name = c.index_name
and s.segment_type = 'INDEX'
and i.index_name = c.index_name
and e.segment_name=s.segment_name
group by s.min_extents,s.max_extents,s.pct_increase,
i.pct_free,i.ini_trans,i.max_trans,i.tablespace_name ;
dbms_output.put_line(l1);
dbms_output.put_line(l2);
dbms_output.put_line(l3);
dbms_output.put_line(l4);
dbms_output.put_line(l5);
dbms_output.put_line(l6);
dbms_output.put_line(l7);
dbms_output.put_line(l8);
dbms_output.put_line(l9);
dbms_output.put_line('/');
end loop;
end;
/
declare
cursor c1 is
select constraint_name, decode(constraint_type,'U',' UNIQUE',' PRIMARY KEY') typ,
decode(status,'DISABLED','DISABLE',' ') status
from user_constraints
where table_name = upper('&1')
and constraint_type in ('U','P');
cname varchar2(100);
cursor c2 is
select decode(position,1,'(',',')||rpad(column_name,40) coln
from user_cons_columns
where table_name = upper('&1')
and constraint_name = cname
order by position;
begin
for q1 in c1 loop
cname := q1.constraint_name;
dbms_output.put_line('alter table &1');
dbms_output.put_line('add constraint '||cname||q1.typ);
for q2 in c2 loop
dbms_output.put_line(q2.coln);
end loop;
dbms_output.put_line(')' ||q1.status);
dbms_output.put_line('/');
end loop;
end;
/
declare
cursor c1 is
select c.constraint_name,c.r_constraint_name cname2,
c.table_name table1, r.table_name table2,
decode(c.status,'DISABLED','DISABLE',' ') status,
decode(c.delete_rule,'CASCADE',' on delete cascade ',' ') delete_rule
from user_constraints c,
user_constraints r
where c.constraint_type='R' and
c.r_constraint_name = r.constraint_name and
c.table_name = upper('&1')
union
select c.constraint_name,c.r_constraint_name cname2,
c.table_name table1, r.table_name table2,
decode(c.status,'DISABLED','DISABLE',' ') status,
decode(c.delete_rule,'CASCADE',' on delete cascade ',' ') delete_rule
from user_constraints c,
user_constraints r
where c.constraint_type='R' and
c.r_constraint_name = r.constraint_name and
r.table_name = upper('&1');
cname varchar2(50);
cname2 varchar2(50);
cursor c2 is
select decode(position,1,'(',',')||rpad(column_name,40) colname
from user_cons_columns
where constraint_name = cname
order by position;
cursor c3 is
select decode(position,1,'(',',')||rpad(column_name,40) refcol
from user_cons_columns
where constraint_name = cname2
order by position;
begin
dbms_output.enable(100000);
for q1 in c1 loop
cname := q1.constraint_name;
cname2 := q1.cname2;
dbms_output.put_line('alter table '||q1.table1||' add constraint ');
dbms_output.put_line(cname||' foreign key');
for q2 in c2 loop
dbms_output.put_line(q2.colname);
end loop;
dbms_output.put_line(') references '||q1.table2);
for q3 in c3 loop
dbms_output.put_line(q3.refcol);
end loop;
dbms_output.put_line(') '||q1.delete_rule||q1.status);
dbms_output.put_line('/');
end loop;
end;
/
col c1 format a79 word_wrap
set long 32000
set arraysize 1
select 'create or replace trigger ' c1,
description c1,
'WHEN ('||when_clause||')' c1,
trigger_body ,
'/' c1
from user_triggers
where table_name = upper('&1') and when_clause is not null
select 'create or replace trigger ' c1,
description c1,
trigger_body ,
'/' c1
from user_triggers
where table_name = upper('&1') and when_clause is null
select 'alter trigger '||trigger_name||decode(status,'DISABLED',' DISABLE',' ENABLE')
from user_triggers where table_name='&1';
set serveroutput on
declare
cursor c1 is
select 'alter table '||'&1'||decode(substr(constraint_name,1,4),'SYS_',' ',
' add constraint ') a1,
decode(substr(constraint_name,1,4),'SYS_',' ',constraint_name)||' check (' a2,
search_condition a3,
') '||decode(status,'DISABLED','DISABLE','') a4,
'/' a5
from user_constraints
where table_name = upper('&1') and
constraint_type='C';
b1 varchar2(100);
b2 varchar2(100);
b3 varchar2(32000);
b4 varchar2(100);
b5 varchar2(100);
fl number;
begin
open c1;
loop
fetch c1 into b1,b2,b3,b4,b5;
exit when c1%NOTFOUND;
select count(*) into fl
from user_tab_columns
where table_name = upper('&1')
and upper(column_name)||' IS NOT NULL' = upper(b3);
if fl = 0 then
dbms_output.put_line(b1);
dbms_output.put_line(b2);
dbms_output.put_line(b3);
dbms_output.put_line(b4);
dbms_output.put_line(b5);
end if;
end loop;
end;
/
create or replace procedure dumzxcvreorg_dep(nam varchar2, typ varchar2) as
cursor cur is
select type,decode(type,'PACKAGE BODY','PACKAGE',type) type1,
name from user_dependencies
where referenced_name=upper(nam) and referenced_type=upper(typ);
begin
dbms_output.enable(500000);
for c in cur loop
dbms_output.put_line('alter '||c.type1||' '||c.name||' compile;');
dumzxcvreorg_dep(c.name,c.type);
end loop;
end;
/
exec dumzxcvreorg_dep('&1','TABLE');
drop procedure dumzxcvreorg_Dep;
select 'grant '||privilege||' on '||table_name||' to '||grantee||
decode(grantable,'YES',' with grant option;',';')
from user_tab_privs where table_name = upper('&1');
select 'grant '||privilege||' ('||column_name||') on &1 to '||grantee||
decode(grantable,'YES',' with grant option;',';')
from user_col_privs
where grantor=user and table_name=upper('&1')
order by grantee, privilege;
select 'create synonym '||synonym_name||' for '||table_owner||'.'||table_name||';'
from user_synonyms where table_name=upper('&1');
PROMPT REM
PROMPT REM YOU MAY HAVE TO LOG ON AS SYSTEM TO BE
PROMPT REM ABLE TO CREATE ANY OF THE PUBLIC SYNONYMS!
PROMPT REM
select 'create public synonym '||synonym_name||' for '||table_owner||'.'||table_name||';'
from all_synonyms
where owner='PUBLIC' and table_name=upper('&1') and table_owner=user;
prompt spool off
spool off
set echo on feed on verify on
The scripts REORGS1.SQL and REORGS2.SQL have been created. Alter these scripts as necessary.
To recreate the table-structure, first run REORGS1.SQL.
This script saves the content of your table in a table
called bk_.
If this script runs successfully run REORGS2.SQL.
The result is spooled to REORGTB.LST.
Check this file before dropping the bk_ table.
*/
Please do NOT cross-post: create a deep structure for dynamic internal table
Regards
Uwe -
Change Data Capture,Change Tables
Hi,
We installed Change Data Capture in our DB (10gR2), however there is a problem with the change tables: they are being cleaned somehow. According to the text quoted below, the job which cleans the change tables is cdc$_default_purge_job. However
select log_date
     , job_name
     , status
from dba_scheduler_job_log
where job_name='CDC$_DEFAULT_PURGE_JOB';
returns no data. Also, this morning I came across that data cleansing again and ran the query below. Again, no result.
select job_name
     , session_id
     , running_instance
     , elapsed_time
     , cpu_used
from dba_scheduler_running_jobs;
/*We didn't even use DBMS_SCHEDULER.SET_ATTRIBUTE procedure so far*/
What is cleaning the change tables and how can I control it?
Tnx in advance
Quotation:
Change Data Capture creates a purge job using the
DBMS_SCHEDULER package (which runs under the account of the publisher who created the first change table). This purge job calls the
DBMS_CDC_PUBLISH.PURGE procedure to remove data that subscribers are no longer using from the change tables. This job has the name
cdc$_default_purge_job. By default, this job runs every 24 hours. You can change the schedule of this job using
DBMS_SCHEDULER.SET_ATTRIBUTE and set the
repeat_interval attribute. You can verify or modify any other attributes with the
DBMS_SCHEDULER package.
Setting up CDC is a fairly complex process with different options. Setting just the filter in OWB is only a very small part.
There is a blog post below on how to use code templates to do CDC, which gives some insight:
http://www.rittmanmead.com/2009/10/changed-data-capture-and-owb11gr2/
Plus an older one illustrating how to use Oracle logs:
http://www.rittmanmead.com/2006/04/asynchronous-hotlog-distributed-change-data-capture-and-owb-paris/
Cheers
David -
ADF Faces Core Table, TableSelectOne question
Greetings
I am using an ADF Faces Core Table to display a list, and I have it set up with TableSelectOne; it works great. I set required = true on the select button and it works: if I don't select a row, it no longer submits. My question is, how do I get the error message back out? I tried adding an af:message tag with for = tableSelectOne1 and I don't get anything back out. Is there a way to get this error message out so I can display it for the end user?
Thanks
tro
Hi,
Try using af:messages instead, as it displays all messages added to the FacesContext.
Regards.
Fábio -
Importing table structure only consumes space in GB
Hello,
I have used the Oracle 9i exp command to export a schema structure (no rows). There are only 150 tables and the export completes without any warnings.
When I try to import into a different schema, I can see that it consumes 3 GB of space in the default tablespace for that schema. I checked the free space in the tablespace before importing... there is 3 GB of free space.
When I start the import, the tablespace gets full and the import hangs. It does not proceed any further. I'm not able to understand why it is using 3 GB of space for just the table structure.
Please help
Thanks
UNKNOWN007 wrote:
Hello,
I have used the Oracle 9i exp command to export a schema structure (no rows). There are only 150 tables and the export completes without any warnings.
When I try to import into a different schema, I can see that it consumes 3 GB of space in the default tablespace for that schema. I checked the free space in the tablespace before importing... there is 3 GB of free space.
When I start the import, the tablespace gets full and the import hangs. It does not proceed any further. I'm not able to understand why it is using 3 GB of space for just the table structure.
Neither are we, without your export and import commands. I could take a guess, though: your export uses compress=y, either explicitly or because it's the default, and you are also using dictionary-managed tablespaces on your target. In that case each new table gets created with an initial extent the same size as the data on the source. The immediate solution is not to use compress, and the longer-term one is also to move to locally managed tablespaces.
Still, if we could see the commands you used, then we wouldn't need to guess.
Niall Litchfield
http://www.orawin.info/ -
Error inserting a row into a table with identity column using cfgrid on change
I got an error on trying to insert a row into a table with an identity column using cfgrid onchange; see below.
Also, I would like to use cfstoreproc instead of cfquery, but which arguments do I need to pass and how do I use it? Usually I use a stored procedure:
update table (xxx,xxx,xxx)
values (uu,uuu,uu)
My component
<!--- Edit a Media Type --->
<cffunction name="cfn_MediaType_Update" access="remote">
<cfargument name="gridaction" type="string" required="yes">
<cfargument name="gridrow" type="struct" required="yes">
<cfargument name="gridchanged" type="struct" required="yes">
<!--- Local variables --->
<cfset var colname="">
<cfset var value="">
<!--- Process gridaction --->
<cfswitch expression="#ARGUMENTS.gridaction#">
<!--- Process updates --->
<cfcase value="U">
<!--- Get column name and value --->
<cfset colname=StructKeyList(ARGUMENTS.gridchanged)>
<cfset value=ARGUMENTS.gridchanged[colname]>
<!--- Perform actual update --->
<cfquery datasource="#application.dsn#">
UPDATE SP.MediaType
SET #colname# = '#value#'
WHERE MediaTypeID = #ARGUMENTS.gridrow.MediaTypeID#
</cfquery>
</cfcase>
<!--- Process deletes --->
<cfcase value="D">
<!--- Perform actual delete --->
<cfquery datasource="#application.dsn#">
update SP.MediaType
set Deleted=1
WHERE MediaTypeID = #ARGUMENTS.gridrow.MediaTypeID#
</cfquery>
</cfcase>
<cfcase value="I">
<!--- Get column name and value --->
<cfset colname=StructKeyList(ARGUMENTS.gridchanged)>
<cfset value=ARGUMENTS.gridchanged[colname]>
<!--- Perform actual update --->
<cfquery datasource="#application.dsn#">
insert into SP.MediaType (#colname#)
Values ('#value#')
</cfquery>
</cfcase>
</cfswitch>
</cffunction>
my table
mediatype:
mediatypeid primary key,identity
mediatypename
my code is
<cfform method="post" name="GridExampleForm">
<cfgrid format="html" name="grid_Tables2" pagesize="3" selectmode="edit" width="800px"
delete="yes"
insert="yes"
bind="cfc:sp3.testing.MediaType.cfn_MediaType_All
({cfgridpage},{cfgridpagesize},{cfgridsortcolumn},{cfgridsortdirection})"
onchange="cfc:sp3.testing.MediaType.cfn_MediaType_Update({cfgridaction},
{cfgridrow},
{cfgridchanged})">
<cfgridcolumn name="MediaTypeID" header="ID" display="no"/>
<cfgridcolumn name="MediaTypeName" header="Media Type" />
</cfgrid>
</cfform>
On insert I get the following error message (ajax logging error message):
http: Error invoking xxxxxxx/MediaType.cfc : Element '' is undefined in a CFML structure referenced as part of an expression.
{"gridaction":"I","gridrow":{"MEDIATYPEID":"","MEDIATYPENAME":"uuuuuu","CFGRIDROWINDEX":4} ,"gridchanged":{}}
Thanks
Is this with the Travel database or another database?
If it's another database then make sure your columns
allow nulls. To check this in the Server Navigator, expand
your DataSource down to the column.
Select the column and view the Is Nullable property
in the Property Sheet
If still no luck, check out a tutorial, like Performing Inserts, ...
http://developers.sun.com/prodtech/javatools/jscreator/learning/tutorials/index.jsp
John -
How to refresh a table of data after inserting a row using ADF BC
Hi,
i am using ADF BC, JSF to construct a page to browse and another to create/update record. But after creating or updating the record it is not reflected when i come back to browse page.
I tried to insert invokeaction in page definition page but i can not find the iterator bindings for that table in Binds option.
can any one provide any example how to do it.
Please help me.On the page def for your submit page, add inside the executables (on the structure pane) your iterator you want refreshed (if it isn't already there).
Then double-click the submit button and bind it to a backing bean. My method in the backing bean looks like this (the method signature and the bindings/operationBinding variables come from the JDeveloper-generated action code that executes the operation binding first):
public String commandButton_action() {
    // ... generated code that executes operationBinding ...
    if ( !operationBinding.getErrors().isEmpty() )
        return null;
    DCBindingContainer dcbindings = (DCBindingContainer)bindings;
    DCIteratorBinding appsIter = dcbindings.findIteratorBinding("ApplicationsIterator");
    appsIter.executeQuery();
    return null;
}
The executeQuery method will refresh that iterator for your return page. The "ApplicationsIterator" is what the structure pane calls your iterator. Works for me.
Brian -
Authenticate ADF application using adf security wizard against LDAP OID
I have an ADF application which I intend to authorize using LDAP. For now, I have hand-coded the authentication in Java: using JNDI, I connect directly to LDAP and authenticate users. However, recently it came to my notice that I can also do this using the ADF Security wizard, but I am unable to do so: while securing the ADF application, nowhere in the wizard is LDAP configuration mentioned. Do I have to change some file manually? I have no idea how to proceed on that.
I have set up WLS, making the OIDAuthenticator Sufficient, but I don't know how to configure things from the ADF side so that it can authenticate against LDAP. When I try the ADF Security wizard option, it tells me to create new roles. Is there any way to import the LDAP credentials into the security wizard?
-
Demantra PTP table structure changed in 7.3.X.X
The staging table structure changed in Demantra version 7.3 onwards as compared to Demantra 7.2.x. Some of the "Required" columns in the "BIIO_Promotion" table, which is used to load promotion data, have been removed in Demantra 7.3, and some new "Required" columns have been added.
The disappointing thing is that the implementation and user guides don't mention it. How is the implementer to know what values to load into these mandatory columns to populate the promotion data in Demantra?
Regards,
Milind
...go to the table statements -> double-click on the structure -> it will take you to that structure -
Change of structure of database table
A custom database table has been created and is already in use by 10 programs. Now a new requirement has come to add a field (a checkbox) to the database table, to be used by one of the 10 programs. Can we change the structure of the database table after activating it, while the table is in use? Please provide me inputs on this.
Thanks and regards
Radhika
Yes, you can change it, but I suggest doing it through an 'Append Structure'.
If you have any problem after changing, follow this process:
Go to Utilities > Database Utility > Activate and Adjust Database
Award points if useful
Bhupal -
Suggestion required in target table structure change
Hi all good morning,
I have one requirement. I have a target table T with some columns. Now I need to add one additional column to T, and there is a procedure to populate the data for the new column. This target table T is used by 50 mappings. My question is: if we change the structure of the table, do we need to re-import it (I think we must)? If we re-import, what is the impact? Do we need to rebind and validate all the mappings and then deploy?
the other possibility what I can do is I will create a replica of T as T1 with one additional column. I will create a new mapping which dumps data from T to T1and I will replace my T with T1 in my reporting environment.
Please suggest which is the better option. It is urgent.
thanks in advance.
Viki
Hello, if you don't need the new column in the 50 mappings, you can temporarily leave them alone (you say it's urgent) and just feed the new column.
When you have more time, you should reconcile the mappings with table T and re-deploy them. OMBPlus scripting is ideal for this task.
Hope this helps, Antonio -
Oracle Data Pump - Table Structure change
Hi,
we have a daily partitioned table, and for backup we are using Data Pump (expdp). Our policy is to drop each partition after backup (archiving).
We have archived dump files for 1 year. A few days back a developer made a change to the table structure: they added one new column to the table.
Now we are unable to restore old partitions. Is there a way to restore a partition if a new column has been added to / dropped from the current table?
Thanks
Sachin
If a new column has been added to the table, you can import only the data from the old structure into the new structure. Use the parameter CONTENT=DATA_ONLY.
-
Assuming that OraLite 10g is now running on CE.NET and MSync is working fine to synchronize data between OraLite 10g and an enterprise 9i database: if I make structure changes to a table on the enterprise database, how do I reflect the changes to the OraLite database on the CE.NET device?
Thank you.
There's a good post about this on Metalink...
A) Add column
1. Upload all client data. Clients should not add new data until they are told by the administrator to sync again!!
2. Stop Mobile Server listener
3. Change the Oracle8i/9i database schema (add column)
4. Create a Java program to call the Consolidator Admin API AlterPublicationItem()
5. Start Mobile Server
6. Execute a sync from the client
7. The new column should be seen on the client. Use MSQL to check snapshot definitions.
B) Drop column
1. Upload all client data. Clients should not add new data until they are told by the administrator to sync again!!
2. Stop Mobile Server listener
3. Delete column of the base table in the Oracle database schema
4. Create a Java program to call the Consolidator Admin API DropPublicationItem()
5. Create a Java program to call the Consolidator Admin API CreatePublicationItem() and AddPublicationItem().
6. Start Mobile Server
7. Execute a sync from the client
8. The new column should be seen on the client. Use MSQL to check snapshot definitions.
C) Change column datatype
Changing datatypes in a replicated system is not an easy task. You have to follow certain procedures in order to make it work. Use the DropPublicationItem, CreatePublicationItem and AddPublicationItem methods from the Consolidator Admin API. You must stop/start the Mobile Server listener to refresh the cache.
1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
2. Stop Mobile Server listener
3. Drop/create the column (do not use conversion procedures) at the base table
4. Call DropPublicationItem(). Check if the ErrorQueue and InQueue no longer exist.
5. Call CreatePublicationItem() and AddPublicationItem(). Check if the ErrorQueue and InQueue reflect the new column datatype
6. Start Mobile Server. This automatically resumes application
7. Client executes sync. This should drop the old snapshot and recreate the new snapshot. Use MSQL to check
snapshot definitions.
D) Drop table
1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
2. Stop Mobile Server listener
3. Drop base table
4. Call DropPublicationItem(). Check if the ErrorQueue and InQueue no longer exist.
5. Start Mobile Server. This automatically resumes application
6. Client executes sync. This should drop the old snapshot. Use MSQL to check snapshot definitions.
E) Add table
1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
2. Stop Mobile Server listener
3. Add new base table
4. Call CreatePublicationItem() and AddPublicationItem() method
5. Start Mobile Server. This automatically resumes application
6. Client executes sync. This should add the new snapshot. Use MSQL to check snapshot definitions.
F) Changing Primary Keys
Changing the PK is a severe operation which must be executed manually. A snapshot must be deleted and recreated to propagate the changes to the clients. This causes a full refresh on this snapshot.
1. Client syncs. Clients should not add new data until they are told by the administrator to sync again!!
2. Stop Mobile Server listener
3. Drop the snapshot using DropPublicationItem() method o
4. Alter the base table
5. Call CreatePublicationItem()and AddPublicationItem() methods for the altered table
6. Start Mobile Server. This automatically resumes application
7. Client executes sync. The old snapshot will be replaced by the new snapshot using a full refresh. Use MSQL to check snapshot definitions.
G) To Change a Table Weight =>
Follow the procedure below to change the Table Weight parameter. The table weight is used by the Mobile Server synchronization to determine the sequence in which client records are applied to the Oracle database.
1. Run MGP to apply any changes in the InQueue to the Oracle database
2. Change table weight using SetTemplateItemMetadata() method
3. Add/change the constraint on the base table which reflects the change in table weight
4. Synchronize