All_tab_columns and all_tab_cols
Hey guys,
I've come across this problem; what can I do? I know that all_tab_cols and all_tab_columns are supposed to hold information about all the tables in the database, and that they're very useful.
The problem is, all_tab_cols will show that a certain table exists, or that a certain column belongs to a table, but when I try to query that table directly I get a message saying the table or column doesn't exist.
For example,
If I execute the following:
select table_name, column_name from all_tab_cols
where owner='ME' and table_name = 'THE_TABLE_I_WANT'
then I get this response:
table_name - column_name
THE_TABLE_I_WANT - first_column
THE_TABLE_I_WANT - second_column
THE_TABLE_I_WANT - third_column
Then suppose I do the following:
select first_column from THE_TABLE_I_WANT
An error shows up saying that the column doesn't exist.
This happens with whole tables as well, where the table will show up in the all_tab_columns view but can't be queried directly with a select *.
What can cause this, and how can I get accurate information from all_tab_columns/all_tab_cols?
user11764599 wrote:
What can cause this and how can I get accurate information from all_tab_columns/all_tab_cols?
Do you have SELECT privileges against THE_TABLE_I_WANT?
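One hedged way to check is to look at the grant views directly; the owner and table names below are taken from the post, and the second statement just illustrates qualifying the table with its schema:

```sql
-- Does the current account hold SELECT on ME.THE_TABLE_I_WANT (directly or via a role)?
SELECT grantee, privilege
FROM   all_tab_privs
WHERE  table_schema = 'ME'
AND    table_name   = 'THE_TABLE_I_WANT'
AND    privilege    = 'SELECT';

-- When the table lives in another schema, it must be qualified in the query:
SELECT first_column FROM me.the_table_i_want;
```

If no row comes back from all_tab_privs, the error on the direct query is expected. Also note that when the table belongs to another schema, an unqualified select will fail with "table or view does not exist" even when you do have the privilege, while ALL_TAB_COLS still lists the table.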
Similar Messages
-
Difference between ALL_TAB_COLUMNS and USER_TAB_COLUMNS
Hi,
I want to know the difference between ALL_tab_columns and User_tab_columns.
Kindly tell me the solution.
Regards
Selva
ALL_TAB_COLUMNS describes the columns of the tables, views, and clusters accessible to the current user.
USER_TAB_COLUMNS describes the columns of the tables, views, and clusters owned by the current user. Its columns (except for OWNER) are the same as those in ALL_TAB_COLUMNS. -
Difference between user_tab_columns and all_tab_columns
Hi,
Can anybody please let me know what are the differeneces between user_tab_columns and all_tab_columns.
Thank you.
Hi,
In addition to sybrand_b's information:
ALL_TAB_COLUMNS is a view that has an entry for every column of every table, view, and cluster accessible to the current user.
USER_TAB_COLUMNS is a view that has an entry for every column of every table in the current user's schema; its columns (except for OWNER) are the same as those in ALL_TAB_COLUMNS. When you use DESC on a table, the order of the columns shown is determined by the position you can see in the column_id of all_tab_columns and/or user_tab_columns. The DBA may change the order of these columns, so it is very important not to assume that the columns are in a certain order and will stay that way.
:) -
Good morning,
I have here a SQL query that searches for entries in "user_tab_columns".
Unfortunately the mentioned entries are not present there (though they are present in "all_tab_columns"), but I also think that the query is correct.
Therefore I wonder:
is it possible to modify the database (by granting permissions, DB links or other means) so that the mentioned entries in "all_tab_columns" also become visible in "user_tab_columns"?
Thanks
Dominique
Hi, Dominique,
You can create a private synonym called user_tab_columns that will stand for all_tab_columns
CREATE SYNONYM user_tab_columns FOR sys.all_tab_columns;
That means, whenever you say user_tab_columns, Oracle will understand you really want sys.all_tab_columns.
If you need to reference the real user_tab_columns, then you can create another synonym like this:
CREATE SYNONYM real_user_tab_columns FOR sys.user_tab_columns;
This will affect only the user who creates the synonym. If you log in as DOMINIQUE, then anyone logged in as DOMINIQUE will be affected, but people who log in as SCOTT (for example) will not be. (Of course, SCOTT can also create a private synonym, which would only affect users logged in as SCOTT.)
Remember that sys.all_tab_columns and sys.user_tab_columns are different. Something that was designed to work on user_tab_columns will not necessarily work on all_tab_columns. For example, consider a query that finds how many columns are in the emp table. If dominique.emp has 5 columns, then when dominique runs
SELECT COUNT (*) AS total_columns
FROM user_tab_columns
WHERE table_name = 'EMP';
the answer will be 5, but if dominique has privileges on the scott.emp table, which has 8 columns, then the result of:
SELECT COUNT (*) AS total_columns
FROM all_tab_columns
WHERE table_name = 'EMP';
will be 5 + 8 = 13, which is not the correct number of columns in either emp table.
Edited by: Frank Kulash on Aug 11, 2011 12:46 PM
Corrected typos -
Accessing ALL_TAB_COLUMNS from a Procedure
Hi,
I want help in accessing ALL_TAB_COLUMNS from a procedure.
I am getting an "insufficient privileges" error while executing the procedure.
Any help will be beneficial.
Thanks and Regards
You should not really be using SYS.
Is this a general issue with accessing "ALL_" views (i.e. information about other schemas than your own) within procedures? If so, perhaps the account needs the SELECT ANY DICTIONARY system privilege, although as with any system privilege you should consider the security implications.
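As a sketch of what the grants could look like (run from a suitably privileged account; PROC_OWNER is a placeholder for the schema that owns the procedure):

```sql
-- Broad route: lets proc_owner's definer-rights procedures read any dictionary view.
-- Weigh the security implications before using a system privilege this wide.
GRANT SELECT ANY DICTIONARY TO proc_owner;

-- Narrower route: grant only the one view, and grant it directly (not via a role,
-- since privileges received through roles are disabled inside definer-rights PL/SQL).
GRANT SELECT ON sys.all_tab_columns TO proc_owner;
```

The direct-grant detail matters because roles being disabled in definer-rights PL/SQL is a common cause of ORA-01031 in procedures that work fine in SQL*Plus.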
If it's specifically ALL_TAB_COLUMNS and not the rest of the dictionary views, then a grant on that view to the owner of the package will do it. -
Want to construct insert statement from all_tab_columns
I am using Oracle 10gR2 and am a new user of PL/SQL.
I want to write a plsql block or SQL in order to insert some dummy data.
I want to use the column list from all_tab_columns and want to construct insert statement
EX: I am trying to construct SQL as below
for i in (select column_name from all_tab_columns where table_name = 'EMP')
loop
v_sql := 'insert into emptest values (' || i.column_name || ')';
end loop;
could you please let me know any pointer for the same ..?
can I use any other technique for the same, like collection, nested table etc.?
Am not clear on this requirement; if possible, could you please elaborate?
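A hedged sketch of one way the idea could be fleshed out with dynamic SQL (EMPTEST is assumed to have EMP's columns; only NUMBER, VARCHAR2 and DATE are handled, with DBMS_RANDOM supplying the dummy values):

```sql
DECLARE
  v_cols VARCHAR2(4000);
  v_vals VARCHAR2(4000);
  v_sql  VARCHAR2(4000);
BEGIN
  FOR i IN (SELECT column_name, data_type
            FROM   all_tab_columns
            WHERE  table_name = 'EMP'
            ORDER  BY column_id)
  LOOP
    v_cols := v_cols || i.column_name || ',';
    -- Pick a dummy expression per data type (simplified; real code would also
    -- honour data_length, precision, NOT NULL and the table's constraints)
    v_vals := v_vals ||
              CASE i.data_type
                WHEN 'NUMBER'   THEN 'round(dbms_random.value(1, 9999))'
                WHEN 'VARCHAR2' THEN 'dbms_random.string(''U'', 5)'
                WHEN 'DATE'     THEN 'sysdate'
                ELSE 'NULL'
              END || ',';
  END LOOP;
  v_sql := 'insert into emptest (' || rtrim(v_cols, ',') || ') values ('
           || rtrim(v_vals, ',') || ')';
  EXECUTE IMMEDIATE v_sql;
END;
/
```

Collections aren't strictly needed for this: building the column and value lists as strings and executing once per row is usually enough for dummy data.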
Also specify the requirement, inputs and expected outputs with your query in code format...
This will help.. -
Table compare and derive alter script between 2 schemas
I am in Oracle 10g.
We are in need to synchronise table structures between two different database.
and execute the alter script in the target database.
I have an idea to find the tables whose table definition has changed, using all_tab_columns and a database link.
A sample of it as below:
prompt
prompt columns having same name but difference in datatype or length:
prompt -------------------------------------------------------------------------
select
a.column_name, a.data_type, a.data_length, a.data_scale,a.data_precision,
b.column_name, b.data_type, b.data_length, b.data_scale,b.data_precision
from
all_tab_columns a, all_tab_columns@link b
where
a.table_name in (select tablename from test) and
a.table_name = b.table_name and
a.column_name = b.column_name and
-- parentheses are needed around the ORs; otherwise AND binds tighter and the join predicates above are bypassed
(a.data_type <> b.data_type or
a.data_length <> b.data_length or
a.data_scale <> b.data_scale or
a.data_precision <> b.data_precision);
prompt columns present in &table1 but not in &table2
prompt ----------------------------------------------
select
column_name
from
all_tab_columns@link
where
table_name in (select tablename from test)
minus
select
column_name --, data_type, data_length, data_scale, data_precision
from
all_tab_columns
where
table_name in (select tablename from test) ;
prompt columns present in &table2 but not in &table1
prompt ----------------------------------------------
select
column_name --, data_type, data_length, data_scale, data_precision
from
all_tab_columns@link
where
table_name in (select tablename from test)
minus
select
column_name
from
all_tab_columns
where
table_name in (select tablename from test)
just looking for ideas on how to derive the alter scripts from these? Please share your ideas on this.
You don't have to write lots of triggers. You only need to write one trigger, e.g.:
create table ddl_audit
(audit_date date,
username varchar2(30),
instance_number integer,
database_name varchar2(9),
object_type varchar2(19),
object_owner varchar2(30),
object_name varchar2(128),
sql_text varchar2(2000));
create or replace trigger BEFORE_DDL_TRG before ddl on database
declare
l_sql_text ora_name_list_t;
l_count NUMBER;
l_puser VARCHAR2(30) := NULL;
l_sql varchar2(2000);
begin
l_count := ora_sql_txt(l_sql_text);
l_puser := SYS_CONTEXT('USERENV', 'KZVDVCU');
for i in 1..l_count
loop
l_sql := l_sql||l_sql_text(i);
end loop;
insert into ddl_audit (audit_date, username, instance_number, database_name, object_type, object_owner, object_name, sql_text)
values (sysdate, l_puser,ora_instance_num,ora_database_name,ora_dict_obj_type,ora_dict_obj_owner,ora_dict_obj_name,l_sql);
exception
when others then
null; -- note: swallowing every error keeps DDL from ever failing, but also hides audit problems
end;
show errors; -
How to generate test data for all the tables in oracle
I am planning to use PL/SQL to generate test data for all the tables in a schema. The schema name is given as an input parameter, along with the minimum records for master tables and the minimum records for child tables. The data should be consistent in the columns which are used for constraints, i.e. using the same column values.
planning to implement something like
execute sp_schema_data_gen (schemaname, minrecinmstrtbl, minrecsforchildtable);
schemaname = owner,
minrecinmstrtbl= minimum records to insert into each parent table,
minrecsforchildtable = minimum records to enter into each child table of a each master table;
all_tables where owner= schemaname;
all_tab_columns and all_constraints - where owner = schemaname;
using dbms_random pkg.
Does anyone have a better idea for doing this? Is this functionality already there in the Oracle DB?
Ah, damorgan, data, test data, metadata and table-driven processes. Love the stuff!
There are two approaches you can take with this. I'll mention both and then ask which
one you think you would find most useful for your requirements.
One approach I would call the generic bottom-up approach which is the one I think you
are referring to.
This system is a generic test data generator. It isn't designed to generate data for any
particular existing table or application but is the general case solution.
Building on damorgan's advice define the basic hierarchy: table collection, tables, data; so start at the data level.
1. Identify/document the data types that you need to support. Start small (NUMBER, VARCHAR2, DATE) and add as you go along
2. For each data type identify the functionality and attributes that you need. For instance for VARCHAR2
a. min length - the minimum length to generate
b. max length - the maximum length
c. prefix - a prefix for the generated data; e.g. for an address field you might want a 'add1' prefix
d. suffix - a suffix for the generated data; see prefix
e. whether to generate NULLs
3. For NUMBER you will probably want at least precision and scale but might want minimum and maximum values or even min/max precision,
min/max scale.
4. store the attribute combinations in Oracle tables
5. build functionality for each data type that can create the range and type of data that you need. These functions should take parameters that can be used to control the attributes and the amount of data generated.
6. At the table level you will need business rules that control how the different columns of the table relate to each other. For example, for ADDRESS information your business rule might be that ADDRESS1, CITY, STATE, ZIP are required and ADDRESS2 is optional.
7. Add table-level processes, driven by the saved metadata, that can generate data at the record level by leveraging the data type functionality you have built previously.
8. Then add the metadata, business rules and functionality to control the TABLE-TO-TABLE relationships; that is, the data model. You need the same DEPTNO values in the SCOTT.EMP table that exist in the SCOTT.DEPT table.
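A hedged sketch of the kind of data-type generator step 5 describes, for VARCHAR2, taking the attributes from step 2 as parameters (the function name is illustrative; DBMS_RANDOM does the actual value generation):

```sql
CREATE OR REPLACE FUNCTION gen_varchar2 (
  p_min_len  IN PLS_INTEGER,
  p_max_len  IN PLS_INTEGER,
  p_prefix   IN VARCHAR2 DEFAULT NULL,
  p_suffix   IN VARCHAR2 DEFAULT NULL,
  p_null_pct IN NUMBER   DEFAULT 0   -- percentage of calls that return NULL
) RETURN VARCHAR2
IS
  l_len PLS_INTEGER;
BEGIN
  -- Occasionally generate NULLs, per attribute (e) above
  IF dbms_random.value(0, 100) < p_null_pct THEN
    RETURN NULL;
  END IF;
  -- Random length between the min and max attributes
  l_len := round(dbms_random.value(p_min_len, p_max_len));
  RETURN p_prefix || dbms_random.string('l', l_len) || p_suffix;
END gen_varchar2;
/
```

A table-level driver can then read the stored attribute rows and call the matching generator for each column.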
The second approach I have used more often. I would call it the top-down approach and I use
it when test data is needed for an existing system. The main use case here is to avoid
having to copy production data to QA, TEST or DEV environments.
QA people want to test with data that they are familiar with: names, companies, code values.
I've found they aren't often fond of random character strings for names of things.
The second approach I use for mature systems where there is already plenty of data to choose from.
It involves selecting subsets of data from each of the existing tables and saving that data in a
set of test tables. This data can then be used for regression testing and for automated unit testing of
existing functionality and functionality that is being developed.
QA can use data they are already familiar with and can test the application (GUI?) interface on that
data to see if they get the expected changes.
For each table to be tested (e.g. DEPT) I create two test system tables. A BEFORE table and an EXPECTED table.
1. DEPT_TEST_BEFORE
This table has all DEPT table columns and a TEST_CASE column.
It holds DEPT-image rows for each test case that show the row as it should look BEFORE the
test for that test case is performed.
CREATE TABLE DEPT_TEST_BEFORE
(
TESTCASE NUMBER,
DEPTNO NUMBER(2),
DNAME VARCHAR2(14 BYTE),
LOC VARCHAR2(13 BYTE)
);
2. DEPT_TEST_EXPECTED
This table also has all DEPT table columns and a TEST_CASE column.
It holds DEPT-image rows for each test case that show the row as it should look AFTER the
test for that test case is performed.
Each of these tables are a mirror image of the actual application table with one new column
added that contains a value representing the TESTCASE_NUMBER.
To create test case #3 identify or create the DEPT records you want to use for test case #3.
Insert these records into DEPT_TEST_BEFORE:
INSERT INTO DEPT_TEST_BEFORE
SELECT 3, D.* FROM DEPT D WHERE DEPTNO = 20;
Insert records for test case #3 into DEPT_TEST_EXPECTED that show the rows as they should
look after test #3 is run. For example, if test #3 creates one new record, add all the
records from the BEFORE data set and add a new one for the new record.
When you want to run test case #3 the process is basically (ignore for this illustration that
there is a foreign key between DEPT and EMP):
1. delete the records from SCOTT.DEPT that correspond to test case #3 DEPT records.
DELETE FROM DEPT
WHERE DEPTNO IN (SELECT DEPTNO FROM DEPT_TEST_BEFORE WHERE TESTCASE = 3);
2. insert the test data set records for SCOTT.DEPT for test case #3.
INSERT INTO DEPT
SELECT DEPTNO, DNAME, LOC FROM DEPT_TEST_BEFORE WHERE TESTCASE = 3;
3. perform the test.
4. compare the actual results with the expected results.
This is done by a function that compares the records in DEPT with the records
in DEPT_TEST_EXPECTED for test #3.
I usually store these results in yet another table or just report them out.
5. Report out the differences.
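A hedged sketch of the step-4 comparison for test case #3, using MINUS in both directions so that missing rows and unexpected rows both surface (table names as defined above):

```sql
-- Any row returned is a difference: the first branch finds expected rows that
-- are missing from DEPT, the second finds DEPT rows that were not expected.
(SELECT DEPTNO, DNAME, LOC
 FROM   dept_test_expected
 WHERE  testcase = 3
 MINUS
 SELECT DEPTNO, DNAME, LOC FROM dept)
UNION ALL
(SELECT DEPTNO, DNAME, LOC FROM dept
 MINUS
 SELECT DEPTNO, DNAME, LOC
 FROM   dept_test_expected
 WHERE  testcase = 3);
```

An empty result means the actual table matches the expected snapshot for that test case.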
This second approach uses data the users (QA) are already familiar with, is scaleable and
is easy to add new data that meets business requirements.
It is also easy to automatically generate the necessary tables and test setup/breakdown
using a table-driven metadata approach. Adding a new test table is as easy as calling
a stored procedure; the procedure can generate the DDL or create the actual tables needed
for the BEFORE and AFTER snapshots.
The main disadvantage is that existing data will almost never cover the corner cases.
But you can add data for these. By corner cases I mean data that defines the limits
for a data type: a VARCHAR2(30) name field should have at least one test record that
has a name that is 30 characters long.
Which of these approaches makes the most sense for you? -
How to reference dynamically :new value in a trigger
Hi,
I have a trigger in which I have to check all fields of a table that has many fields, so I retrieve the table's fields from all_tab_columns and would like to check the :new values, but I do not know how to do that. Does someone have an idea? Thanks.
Tabit7 wrote:
I have a trigger in which i have to check all fields of a table that have many fields
Not fields. Records have fields. Tables have columns.
Why so many columns? That is often a sign of a poor data model or incorrect normalisation.
so i retrieve table fields from all_tab_columns and would like to check :new value but do not know how to do that.
Sounds like a bad idea.
Does someone have an idea?
That depends on the actual problem. You've only described what you think a potential solution is to this unknown problem - dynamically accessing column values in a trigger.
We need to know what that problem is, in order to comment on your approach and what other approaches can be considered. -
Query to list tables with a particular column name in a schema!!
Is there any way to know?
Thanks in advance.
You can query the data dictionary views to get this information.
For example
untested
select table_name
from user_tab_columns
where column_name = 'my particular column name';
There are also all_tab_columns and dba_tab_columns, both of which include the schema name.
Edited by: Sven W. on Sep 9, 2011 2:24 PM -
Using toad to search for columns in entire datamart
Can anyone inform me if it is possible, using Toad, to conduct a "quick search" through an entire datamart for a specific column name, and if possible, how?
e.g.: column <pubs_visited_in_cambridge> exists in table <things_to_do_in_cambridge> in a datamart, and I want to find out if the column <pubs_visited_in_cambridge> is repeated in another table within the datamart.
<QUOTE>Not really true, all_tab_columns will give you all tables on which you've been granted privileges.</QUOTE>
I tried executing the query with both DBA_TAB_COLUMNS and ALL_TAB_COLUMNS and received different results (more results with the DBA view). Apparently, the schema I queried in our datamart does not contain some of the tables that were listed by the DBA view.
How could it be possible that tables which do not exist in a specified schema get listed when querying with the DBA view?
Query to know number of columns in a table
Please can anyone suggest a query to find the number of columns in a table, i.e.
if I want to know how many columns are present in a specific table, what would the query be?
Message was edited by:
user625519
Give this a shot:
SELECT table_name,count(*) as "# of Columns"
FROM dba_tab_cols
WHERE table_name = <table name>
GROUP BY table_name
ORDER BY table_name;
There are other views as well, such as USER_TAB_COLS and ALL_TAB_COLS.
HTH! -
Loop through tables based on data dict values
Hi,
I'm working on an old v7.3.4 database that I'm not familiar with, and I want to loop through the tables and count the occurrence of a field value based on table names I've retrieved from the data dictionary. None of the tables have relational keys defined.
In a cursor I can loop thru all_tab_columns and load variables with the table, column names, and the datatype, but then I want to use these values in a second nested cursor to loop through each table found by the first cursor.
When I do :
Select var_colname from var_tabname
i get
The following error has occurred:
ORA-06550: line 23, column 10:
PLS-00356: 'V_TABNAME' must name a table to which the user has access
ORA-06550: line 22, column 5:
PL/SQL: SQL Statement ignored
ORA-06550: line 22, column 12:
PLS-00320: the declaration of the type of this expression is incomplete or malformed
ORA-06550: line 27, column 7:
PL/SQL: SQL Statement ignored
so it would seem I can't use a variable to substitute the table name in the 'from' clause. Does anyone know of a way round this?
Thanks in advance
Hi,
You will have to use dynamic sql to create your second cursor.
DECLARE
v_sql_query VARCHAR2(400);
TYPE cur_typ IS REF CURSOR;
c1 cur_typ;
mYRec MyTable%rowtype;
BEGIN
v_sql_query := 'select * from MyTable';
OPEN c1 FOR v_sql_query;
LOOP
FETCH c1 INTO mYRec;
EXIT WHEN c1%NOTFOUND;
/*processing here*/
END LOOP;
CLOSE c1;
END;
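If only a scalar such as a count is needed per table, native dynamic SQL is a shorter alternative to the ref cursor; note that EXECUTE IMMEDIATE arrived with Oracle 8i, so on the 7.3.4 database mentioned above you would have to use DBMS_SQL instead. A hedged sketch (the :val binding and 'SOME_VALUE' are placeholders for the value being counted):

```sql
DECLARE
  v_count PLS_INTEGER;
BEGIN
  -- Outer cursor over the dictionary, as in the original post
  FOR t IN (SELECT table_name, column_name
            FROM   all_tab_columns
            WHERE  owner = user
            AND    data_type = 'VARCHAR2')
  LOOP
    -- Table and column names cannot be bind variables, so they are
    -- concatenated into the statement; the searched value is bound.
    EXECUTE IMMEDIATE
      'select count(*) from ' || t.table_name ||
      ' where ' || t.column_name || ' = :val'
      INTO v_count
      USING 'SOME_VALUE';
    dbms_output.put_line(t.table_name || '.' || t.column_name || ': ' || v_count);
  END LOOP;
END;
/
```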
Regards -
How to Compare Data length of staging table with base table definition
Hi,
I've two tables :staging table and base table.
I'm getting data from flat files into the staging table. As per the requirement, the structures of the staging table and the base table are different (the length of each column in the staging table is 25% greater, so data can be loaded without errors); for example, if we have a city column of VARCHAR2 length 40 in the staging table, it has length 25 in the base table. Once data is loaded into the staging table, I want to compare the actual data length of every column in the staging table with the definition of the base table (data_length for each column from all_tab_columns), and if any column's length differs I need to update the corresponding row in the staging table, which also has a flag column called err_length.
so for this I'm using cursor c1 is select length(a.id),length(a.name)... from staging_table;
cursor c2(name varchar2) is select data_length from all_tab_columns where table_name='BASE_TABLE' and column_name=name;
But we're getting all the data at once in the first query, whereas with the second cursor I need to get each and every column and then compare it with the first.
Can anyone tell me how to get desired results?
Thanks,
Mahender.
This is a shot in the dark, but take a look at this example below:
SQL> DROP TABLE STAGING;
Table dropped.
SQL> DROP TABLE BASE;
Table dropped.
SQL> CREATE TABLE STAGING
(
ID NUMBER
, A VARCHAR2(40)
, B VARCHAR2(40)
, ERR_LENGTH VARCHAR2(1)
);
Table created.
SQL> CREATE TABLE BASE
(
ID NUMBER
, A VARCHAR2(25)
, B VARCHAR2(25)
);
Table created.
SQL> INSERT INTO STAGING VALUES (1,RPAD('X',26,'X'),RPAD('X',25,'X'),NULL);
1 row created.
SQL> INSERT INTO STAGING VALUES (2,RPAD('X',25,'X'),RPAD('X',26,'X'),NULL);
1 row created.
SQL> INSERT INTO STAGING VALUES (3,RPAD('X',25,'X'),RPAD('X',25,'X'),NULL);
1 row created.
SQL> COMMIT;
Commit complete.
SQL> SELECT * FROM STAGING;
ID A B E
1 XXXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXX
2 XXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXXX
3 XXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXX
SQL> UPDATE STAGING ST
SET ERR_LENGTH = 'Y'
WHERE EXISTS
(
WITH columns_in_staging AS
(
/* Retrieve all the column names for the staging table, with the exception of the primary key column,
* and order them alphabetically.
*/
SELECT COLUMN_NAME
, ROW_NUMBER() OVER (ORDER BY COLUMN_NAME) RN
FROM ALL_TAB_COLUMNS
WHERE TABLE_NAME = 'STAGING'
AND COLUMN_NAME != 'ID'
ORDER BY 1
), staging_unpivot AS
(
/* Using the columns_in_staging above, UNPIVOT the result set so you get a record for each COLUMN value
* for each record. The DECODE performs the unpivot, and it works if the DECODE specifies the columns
* in the same order as the ROW_NUMBER() function in columns_in_staging.
*/
SELECT ID
, COLUMN_NAME
, DECODE
(
RN
, 1, A
, 2, B
) AS VAL
FROM STAGING
CROSS JOIN COLUMNS_IN_STAGING
)
/* Only return IDs for records that have at least one column value that exceeds the length. */
SELECT ID
FROM
(
/* Join the unpivoted staging table to the ALL_TAB_COLUMNS table on the column names. Here we perform
* the check to see if there are any differences in the length and, if so, set a flag.
*/
SELECT STAGING_UNPIVOT.ID
, (CASE WHEN ATC.DATA_LENGTH < LENGTH(STAGING_UNPIVOT.VAL) THEN 'Y' END) AS ERR_LENGTH_A
, (CASE WHEN ATC.DATA_LENGTH < LENGTH(STAGING_UNPIVOT.VAL) THEN 'Y' END) AS ERR_LENGTH_B
FROM STAGING_UNPIVOT
JOIN ALL_TAB_COLUMNS ATC ON ATC.COLUMN_NAME = STAGING_UNPIVOT.COLUMN_NAME
WHERE ATC.TABLE_NAME = 'BASE'
) A
WHERE COALESCE(ERR_LENGTH_A, ERR_LENGTH_B) IS NOT NULL
AND ST.ID = A.ID
)
/
2 rows updated.
SQL> SELECT * FROM STAGING;
ID A B E
1 XXXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXX Y
2 XXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXXX Y
3 XXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXX
Hopefully the comments make sense. If you have any questions please let me know.
This assumes the column names are the same between the staging and base tables. In addition as you add more columns to this table you'll have to add more CASE statements to check the length and update the COALESCE check as necessary.
Thanks! -
Hi,
Can anybody give me suggestions on how I can implement hierarchical posting of data into the database?
For ex. I have a XML document which contains records for Dept. and Emp tables. I want to post these datagrams into respective tables.
<ROWSET>
<ROW>
<DEPTNO>10</DEPTNO>
<LOC>New York</LOC>
<NAME>Accounting</NAME>
<tname>DEPT</tname>
</ROW>
<ROW>
<EMPNO>3456</EMPNO>
<ENAME>SMITH</ENAME>
<MGR>5677</MGR>
<tname>EMP</tname>
</ROW>
</ROWSET>
here tname identifies the table into which the data will go; we don't know in advance which tables' info the XML document will have, or the relationship between those tables.
I am looking for a more generalized mechanism to handle this. I mean the XML document may contain info for more than two tables, and it may arrive in random order.
Note:
I think if we get the hierarchy from all_constraints view we could solve the problem.
Thanks in adv.
Hari.
For a totally generic solution that's driven off information in ALL_TABLES, ALL_TAB_COLUMNS, ALL_CONS_COLUMNS, and ALL_CONSTRAINTS, you'd need to author it yourself, unfortunately.
If you're doing it in PL/SQL, you might find some useful constraint-walking code in the DBXML package that is part of our sample code we put on the web a long time ago called the "PLSXML Examples and Utilities".
Specifically, the DBXML.SQL package has some generic procedures for retrieving the inbound and outbound constraints on a table.
http://technet.oracle.com/tech/xml/info/index2.htm?Info&plsxml/xml4plsql.htm
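As a hedged starting point for the constraint-walking, the parent-child pairs can be read straight from ALL_CONSTRAINTS by joining each foreign key (CONSTRAINT_TYPE 'R') to the constraint it references ('SCOTT' below is just an example schema):

```sql
-- Child table -> parent table pairs, derived from foreign key constraints.
SELECT c.table_name      AS child_table,
       p.table_name      AS parent_table,
       c.constraint_name AS fk_name
FROM   all_constraints c
JOIN   all_constraints p
       ON  p.owner           = c.r_owner
       AND p.constraint_name = c.r_constraint_name
WHERE  c.constraint_type = 'R'
AND    c.owner = 'SCOTT'
ORDER  BY parent_table, child_table;
```

Loading parents before children then amounts to a topological sort over these pairs, which gives the hierarchy needed to post the DEPT rows before the EMP rows.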