Defining a descriptor without a base table
Is it possible? I see this defined in TopLink 2.5; what is the equivalent
methodology in 10g? Thanks
Generally a TopLink descriptor relates a class to one or more tables. The only exceptions I can think of are interface descriptors and aggregate descriptors.
Can you explain your required mapping scenario in more detail? If it was possible in TopLink 2.5 it should still be possible, although the mechanism may have evolved.
Doug
Similar Messages
-
What is a non-base table when referred to in Forms?
moorthy kathirava (guest) wrote:
: What is a non-base table when referred to in Forms?
When you create a block/form without any base table, i.e. a form
containing controls which are not necessarily related to one
table, it is called a non-base-table block/form.
Murugan
-
Hello All,
We are working on a project where we are analysing the pros and cons of using SQL named queries vs. named queries defined in Java.
Can we define a TopLink descriptor without a table definition (none of the fields are mapped)? I was able to generate the deployment XML file, but the application failed at runtime saying "Descriptor must have a table name defined".
Also, if that were possible, what are the implications of not mapping the object to a table and just having named SQL queries load directly into the Java objects?
How do update, insert and delete work in this scenario?
Has anyone tried custom SQL / SQL named queries?
Thanks,
Neeraj
You need to map a class to a table, but it doesn't have to be a "real" table. You can just fake one in the Mapping Workbench, and then if all your SQL overrides don't actually use that table, no problems. We need to map to a table, if anything, to have a place to store column names that can be used to map the results of the SQL to the attributes in the class. I see this kind of thing when people want to map classes to stored procedures: they fake a table in the MW, map the class to it, and then in all the CRUD overrides they put their stored procedure calls, and the results from the stored procedures need to have "column" names that match those in the fake table.
- Don -
Adding 2 more rows to a select without inserting rows into the base table
hello all,
I have the simple select statement below, which queries a table.
select * from STUDY_SCHED_INTERVAL_TEMP
where STUDY_KEY = 1063;
But here is the situation: as you can see it returns 7 rows, but I need to add
2 more rows, with everything else keeping its default or existing values.
I cannot insert into the base table. I want my end result to increment
measurement_date_taken by days up to 01-apr-09, so that measurement_date_taken
ends at study_end_date.
Is that even possible without inserting rows into the table, just by playing
around with the select statement?
Sorry if this is confusing. I am on 10.2.0.3.
Edited by: S2K on Aug 13, 2009 2:19 PM
Well, I'm not sure if this query looks as good as my lawn, but it seems to work anyway ;)
I've used the 'simplified version', but the principle should work for your table too, S2K.
As Frank already pointed out (and I stumbled upon it while kludging): you just select your already existing rows and union them with the 'missing' records; you calculate the number of days you're 'missing' based on the study_end_date:
MHO%xe> alter session set nls_date_language='AMERICAN';
Session altered.
Elapsed: 00:00:00.01
MHO%xe> with t as ( -- generating your data here, simplified by me due to cat and lawn
2 select 1063 study_key
3 , to_date('01-MAR-09', 'dd-mon-rr') phase_start_date
4 , to_date('02-MAR-09', 'dd-mon-rr') measurement_date_taken
5 , to_date('01-APR-09', 'dd-mon-rr') study_end_date
6 from dual union all
7 select 1063, to_date('03-MAR-09', 'dd-mon-rr') , to_date('04-MAR-09', 'dd-mon-rr') , to_date('01-APR-09', 'dd-mon-rr') from dual union all
8 select 1063, to_date('03-MAR-09', 'dd-mon-rr') , to_date('09-MAR-09', 'dd-mon-rr') , to_date('01-APR-09', 'dd-mon-rr') from dual union all
9 select 1063, to_date('03-MAR-09', 'dd-mon-rr') , to_date('14-MAR-09', 'dd-mon-rr') , to_date('01-APR-09', 'dd-mon-rr') from dual union all
10 select 1063, to_date('03-MAR-09', 'dd-mon-rr') , to_date('19-MAR-09', 'dd-mon-rr') , to_date('01-APR-09', 'dd-mon-rr') from dual union all
11 select 1063, to_date('22-MAR-09', 'dd-mon-rr') , to_date('23-MAR-09', 'dd-mon-rr') , to_date('01-APR-09', 'dd-mon-rr') from dual union all
12 select 1063, to_date('22-MAR-09', 'dd-mon-rr') , to_date('30-MAR-09', 'dd-mon-rr') , to_date('01-APR-09', 'dd-mon-rr') from dual
13 ) -- actual query:
14 select study_key
15 , phase_start_date
16 , measurement_date_taken
17 , study_end_date
18 from t
19 union all
20 select study_key
21 , phase_start_date
22 , measurement_date_taken + level -- or rownum
23 , study_end_date
24 from ( select study_key
25 , phase_start_date
26 , measurement_date_taken
27 , study_end_date
28 , add_up
29 from (
30 select study_key
31 , phase_start_date
32 , measurement_date_taken
33 , study_end_date
34 , study_end_date - max(measurement_date_taken) over (partition by study_key
35 order by measurement_date_taken ) add_up
36 , lead(measurement_date_taken) over (partition by study_key
37 order by measurement_date_taken ) last_rec
38 from t
39 )
40 where last_rec is null
41 )
42 where rownum <= add_up
43 connect by level <= add_up;
STUDY_KEY PHASE_START_DATE MEASUREMENT_DATE_TA STUDY_END_DATE
1063 01-03-2009 00:00:00 02-03-2009 00:00:00 01-04-2009 00:00:00
1063 03-03-2009 00:00:00 04-03-2009 00:00:00 01-04-2009 00:00:00
1063 03-03-2009 00:00:00 09-03-2009 00:00:00 01-04-2009 00:00:00
1063 03-03-2009 00:00:00 14-03-2009 00:00:00 01-04-2009 00:00:00
1063 03-03-2009 00:00:00 19-03-2009 00:00:00 01-04-2009 00:00:00
1063 22-03-2009 00:00:00 23-03-2009 00:00:00 01-04-2009 00:00:00
1063 22-03-2009 00:00:00 30-03-2009 00:00:00 01-04-2009 00:00:00
1063 22-03-2009 00:00:00 31-03-2009 00:00:00 01-04-2009 00:00:00
1063 22-03-2009 00:00:00 01-04-2009 00:00:00 01-04-2009 00:00:00
9 rows selected.
If there's a simpler way (in SQL), I hope others will join and share their examples/ideas/thoughts.
I have a feeling that this is using more resources than needed.
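For what it's worth, a somewhat shorter variant of the same idea is sketched below. It is untested against the real table and assumes the STUDY_SCHED_INTERVAL_TEMP columns used in this thread: keep the existing rows, then union them with one generated row per missing day, built from the latest measurement row per study.

```sql
select study_key, phase_start_date, measurement_date_taken, study_end_date
from   study_sched_interval_temp
where  study_key = 1063
union all
select t.study_key
     , t.phase_start_date
     , t.measurement_date_taken + g.n   -- one extra row per missing day
     , t.study_end_date
from   ( select s.*
              , row_number() over ( partition by study_key
                                    order by measurement_date_taken desc ) rn
         from   study_sched_interval_temp s
         where  study_key = 1063 ) t
join   ( select level n
         from   dual
         connect by level <= 366 ) g   -- row generator, capped at a year
on     t.rn = 1                        -- only the latest measurement row
and    t.measurement_date_taken + g.n <= t.study_end_date
order  by measurement_date_taken;
```

The generator subquery is the usual CONNECT BY LEVEL trick against DUAL; the join condition stops it as soon as study_end_date is reached.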
But I've got to cut the daisies first now; they interfere with my 'lawn-green-ness' ;) -
How to update view without modifying the base table ?
Hi experts, I need help with two questions:
1. How to update a view without modifying the base table?
2. How to write a file to the Unix operating system in PL/SQL? Is there any built-in procedure for this?
Thank you
Hi,
I'm not sure what you're asking in either question. It would help if you gave a specific example of what you want to do.
SowmyRaj wrote:
Hi experts, I need help with two questions
1. How to update a view without modifying the base table?
You can't.
Views don't contain any data; they just query base tables.
You can change the definition of a view (CREATE OR REPLACE VIEW ...) so that it appears that the base table(s) have changed; that won't change the base tables.
2. How to write a file to the Unix operating system in PL/SQL? Is there any built-in procedure for this?
The package utl_file has routines for working with files. -
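To illustrate the UTL_FILE answer above, a minimal PL/SQL sketch follows. The directory object name MY_DIR and the path are assumptions; the directory must be created and granted by a DBA first.

```sql
-- Assumes a directory object exists and is granted, e.g.:
--   CREATE OR REPLACE DIRECTORY my_dir AS '/tmp';
--   GRANT READ, WRITE ON DIRECTORY my_dir TO your_user;
DECLARE
  f UTL_FILE.FILE_TYPE;
BEGIN
  f := UTL_FILE.FOPEN('MY_DIR', 'out.txt', 'w');   -- open for writing
  UTL_FILE.PUT_LINE(f, 'hello from PL/SQL');       -- write one line
  UTL_FILE.FCLOSE(f);
EXCEPTION
  WHEN OTHERS THEN
    IF UTL_FILE.IS_OPEN(f) THEN
      UTL_FILE.FCLOSE(f);                          -- don't leak the handle
    END IF;
    RAISE;
END;
/
```

Note that UTL_FILE writes on the database server's file system, not the client's.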
Create a materialized view without a primary key constraint on the base table?
Hi
I tried to create a materialized view but I got this error:
SQL> CREATE MATERIALIZED VIEW TABLE1_MV REFRESH FAST
START WITH
to_date('04-25-2009 03:00:13','MM-dd-yyyy hh24:mi:ss')
NEXT
sysdate + 1
AS
select * from TABLE1@remote_db
SQL> /
CREATE MATERIALIZED VIEW TABLE1_MV REFRESH FAST
ERROR at line 1:
ORA-12014: table 'TABLE1' does not contain a primary key constraint.
TABLE1 in remote_db doesn't have a primary key constraint. Is there any way I can create a materialized view on a base table which doesn't have a primary key constraint?
Thanks
Liz
Hi,
Thanks for your helpful info. I created a materialized view log in the source db with rowid:
SQL> CREATE MATERIALIZED VIEW LOG ON TABLE1 WITH ROWID;
Materialized view log created.
Then I created a MV on the target DB:
CREATE MATERIALIZED VIEW my_schema.TABLE1_MV
REFRESH FAST
with rowid
START WITH
to_date('04-25-2009 03:00:13','MM-dd-yyyy hh24:mi:ss')
NEXT
sysdate + 1
AS
select * from TABLE1@remote_db
SQL> /
CREATE MATERIALIZED VIEW my_schema.TABLE1_MV
ERROR at line 1:
ORA-12018: following error encountered during code generation for
"my_schema"."TABLE1_MV"
ORA-00942: table or view does not exist
TABLE1 exists in remote_db:
SQL> select count(*) from TABLE1@remote_db;
COUNT(*)
9034459
Any clue what is wrong?
Thanks
Liz -
Problems when trying to replace the base table of a custom Business Object
Dear experts,
some time ago we created two of our own (not inherited) business objects for use
in workflow scenarios. Now we have come into the situation that for organizational
reasons we want to exchange the base tables of both objects.
The fields and keys in the new tables are all the same as in the old ones;
basically just the table names have changed.
The frustrating result of trying to do so is two inconsistent business objects
that cannot be activated/generated any more, with several errors that I cannot
get rid of.
Does anybody have experience with this kind of change in a BO?
Am I trying in vain? Do I have to create them both entirely anew?
Please help.
Thanks in advance
Andreas Flügel
Hi Mike,
thanks for the prompt answer.
The first error is a syntax error concerning the unexpected end of a statement
with "LIKE". It occurred in the following generated source-code section after
the generation, and it is generated this way again and again (I already tried
to complete it manually according to the way it was before):
BEGIN_DATA OBJECT. " Do not change.. DATA is generated
" end of private,
BEGIN OF KEY,
JOURNALNUMBER LIKE /HOAG/P_DATJOURN-JOURNALNR,
END OF KEY,
_ LIKE.
END_DATA OBJECT. " Do not change.. DATA is generated
The second error points to the table name in the statement
get_table_property /hoag/p_datjourn.
saying "The table name /hoag/p_datjourn is implemented but not defined".
The third error comes up with a popup (again and again) saying
"The table is not implemented yet. Do you want a sourcecode sample to be
created for the missing part?". I've already answered that with "Yes" a
couple of times without anything being changed.
Thanks
Andreas
P.S.: the messages may actually read a bit differently in English because I just
translated them myself from German, but I hope you know them well enough to
know what the system wants from me.
Creation of a field in a database table
Hi,
I want to create a field in a database table which holds float values, but I don't want to use the FLTP data type, because if I use that data type the field does not appear on the selection screen of the table. I don't want to use the QUAN option either, because there I need to define a reference table and reference field.
Please explain the way to create this field.
regards
Krishna
Use NUMC.
Award points if useful
Bhupal -
CC&B 2.3.1 - Custom indexes for base tables
Hi,
We are seeing a couple of statements in the database whose performance could be improved with new custom indexes on base tables. Our questions are:
- can we create new indexes on base tables?
- are there any recommendations about naming, characteristics and location for these indexes?
- is there any additional step to do in CC&B in order to use the index (define metadata or ...)?
Thanks.
Regards.
Hi,
if necessary you can create a custom index.
In this situation you should follow the naming convention from the Database Design Standards:
Indexes
Index names are composed of the following parts:
[X][C/M/T]NNN[P/S/C]
• X – the letter X is used as the leading character of all base index names prior to Version 2.0.0. Now the first character of the product owner flag value should be used instead of the letter X. For a client-specific implementation index in Oracle, use CM.
• C/M/T – the second character can be C, M or T. C is used for control tables (admin tables), M is for master tables, T is reserved for transaction tables.
• NNN – a three-digit number that uniquely identifies the table on which the index is defined.
• P/S/C – P indicates that this index is the primary key index. S is used for indexes other than primary keys. Use C to indicate a client-specific implementation index in a DB2 implementation.
Some examples are:
• XC001P0
• XT206S1
• XT206C2
• CM206S2
Warning! Do not use index names in the application, as the names can change due to unforeseeable reasons.
There is no additional metadata information for indexes in the CI_MD* tables, because a change of indexes does not influence the generated Java code.
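As a hedged illustration of the convention above, a client-specific index might be created like this; the table name, column, tablespace and the table number 206 are all invented for the example:

```sql
-- Hypothetical client-specific (CM) index on a table whose
-- metadata table number is 206; S2 = second non-primary-key index.
CREATE INDEX CM206S2 ON CI_SOME_TABLE (SOME_DT)
  TABLESPACE CM_INDEX_TS;
```

Only the name follows the standard; sizing and placement should follow your site's normal index guidelines.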
Hope that helps.
Regards,
Bartlomiej -
Hello All,
I need to develop a conversion interface for open purchase orders.
I need help to know which base tables are used for this.
I also need to know what validations need to be done.
I have identified a few.
1 PO Number already exists in Oracle EBS
2 Vendor not defined in Oracle EBS
3 Currency Code not defined in Oracle EBS
4 Bill To Location not defined
5 Ship To Location not defined
6 Payment Terms not defined in Oracle EBS
7 Vendor Site not defined
8 Buyer not defined in EBS
9 EBS Ship To Org does not exist
10 UOM Code not defined in EBS
11 Account Code not valid
Can someone please help with their experiences.
Thanks
Hi,
The 3 important tables when you are converting POs through the Oracle interface are
PO_HEADERS_INTERFACE
PO_LINES_INTERFACE
PO_DISTRIBUTIONS_INTERFACE
Normally when you convert the data through the standard interface, all validations are taken care of by the above interface tables. Import Standard Purchase Orders is the standard concurrent program provided by Oracle to load the data from the standard interface tables to the base tables: PO_HEADERS_ALL, PO_LINES_ALL, PO_LINE_LOCATIONS_ALL, PO_DISTRIBUTIONS_ALL.
You also need to take care of partially received POs and partially billed POs.
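As a rough, hedged sketch of how a header row might be staged for Import Standard Purchase Orders (the column list is abridged, the values and batch id are placeholders, and you should verify the interface table definitions in your EBS release before relying on this):

```sql
INSERT INTO po_headers_interface
       ( interface_header_id
       , batch_id
       , action
       , document_type_code
       , vendor_name
       , currency_code )
VALUES ( po_headers_interface_s.NEXTVAL
       , 1001               -- placeholder batch id
       , 'ORIGINAL'         -- new document, not an update
       , 'STANDARD'         -- standard purchase order
       , 'ACME SUPPLIES'    -- must already exist as a supplier
       , 'USD' );           -- must be a defined currency
```

Matching rows go into PO_LINES_INTERFACE and PO_DISTRIBUTIONS_INTERFACE, after which the Import Standard Purchase Orders program validates and moves them to the base tables.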
Hope this Helps,
Raghav -
Aggregation table - Different agg levels for base table and agg table
Is it possible to have different aggregation levels for the base table and the aggregation table? Say SUM on a column in the AGG table and COUNT for the same column in the fact table.
Example
Region, Day, Product, sales person and customer are dimensions and Call is a fact measure
FACT_TABLE has columns Region, Day, Product, Sales person,Customer, Call
AGG_TABLE has columns Region, Month,Product, call
We already have a Logical Table definition for the fact table say FACT_CALL
We have a Logical column called No of customers.
For the data source FACT_TABLE, the formula for the column is "Customer" and the aggregation rule is count distinct.
But in the agg table we already have a calculated column called TOT_CUSTOMERS, which is calculated and aggregated in the ETL.
If we map this to the logical column we have to set the formula to TOT_CUSTOMERS and define the aggregation type as SUM, as this is at the REGION, MONTH and PRODUCT level. But OBI does not allow us to do so.
Is there a workaround for this? Can you please let us know.
Regards
Arun D
The way the BI server picks the table that will satisfy a query is through column mappings and content levels. You have set the column mapping to TOT_CUSTOMERS, which is right. When it comes to aggregation, since it is already precalculated through the ETL, you want to set the aggregation to SUM, which I would say is not correct: set the aggregation to COUNT DISTINCT, the same as that of the detailed fact, but set the content level to month in the date table, and appropriate levels in region etc. Now the BI server will be aware of how to aggregate the rows when it chooses the agg table.
-
Index Compression in SAP - system/basis tables?
Hi!
In the thread "Oracle compression in SAP environments" the Oracle 10g feature index compression was discussed. We are now going to implement it as well. SAP and Oracle say this can be done for any index.
So we selected the biggest and most frequently used indexes and analyzed them. We could save about 100 GB of disk space.
But here comes my question:
In the hit list of our most frequently used and biggest indexes there are also some basis table indexes.
A few samples:
BALHDR~0
BALHDR~1
BALHDR~2
BALHDR~3
BDCP~0
BDCP~1
BDCP~POS
BDCPS~0
BDCPS~1
CDCLS~0
CDHDR~0
D010INC~0
D010INC~1
D010TAB~0
D010TAB~1
DD01L~0
DD03L~5
DD07L~0
E071K~0
E071K~ULI
GVD_LATCHCHILDS~0
GVD_OBJECT_DEPEN~0
GVD_SEGSTAT~0
QRFCTRACE~0
QRFCTRACE~001
QRFCTRACE~002
REPOSRC~0
SCPRSVALS~0
SEOCOMPODF~0
SMSELKRIT~0
SRRELROLES~0
SRRELROLES~002
STXH~0
STXH~REF
STXL~0
SWW_CONT~0
TBTCS~1
TODIR~0
TRFCQOUT~5
USR02~0
UST04~0
VBDATA~0
VBMOD~0
WBCROSSGT~0
Is it really recommended to compress indexes of SAP basis tables as well - especially in the area of Repository/Dictionary, t/qRFC and/or update ("Verbuchung", VB...) tables?
Thanks for any hint and/or comment!
Regards,
Volker
Hi Volker,
I have successfully tested Oracle index compression in an ECC5 environment for the following tables in a sandbox environment:
ppoix
pcl2
pcl4
In total I saved around 60 GB in the tablespaces.
Before compression I started a payroll run to see what time this would take without compression.
After compressing the indexes I re-executed the payroll, which took exactly the same time as without compression (2 hours). So no impact on performance.
Also did an update statistics in DB13 -> no impact.
With brtools: forced update of a specific table -> no impact.
So we are seriously thinking about taking this into production.
I have also looked at a BI environment but concluded that there was nothing to gain.
Unfortunately our InfoCubes are well built, meaning that the fact tables contain the actual data and the corresponding dimension tables only the surrogate IDs (SIDs).
Those dimension tables are actually very small (64k) and not suitable for index compression.
Next step will be some Workflow tables.
E.g.:
SWW_CONT~0 INDEX PSAPFIN 26.583.040
SWPNODELOG~0 INDEX PSAPFIN 15.589.376
SWWLOGHIST~0 INDEX PSAPFIN 13.353.984
SWWLOGHIST~1 INDEX PSAPFIN 8.642.560
SWW_CONTOB~0 INDEX PSAPFIN 8.488.960
SWPSTEPLOG~0 INDEX PSAPFIN 6.808.576
SWW_CONTOB~A INDEX PSAPFIN 6.707.200
SWWLOGHIST~2 INDEX PSAPFIN 6.507.520
SWW_WI2OBJ~Z01 INDEX PSAPFIN 2.777.088
SWW_WI2OBJ~0 INDEX PSAPFIN 2.399.232
SWWWIHEAD~E INDEX PSAPFIN 2.352.128
SWP_NODEWI~0 INDEX PSAPFIN 2.304.000
SWW_WI2OBJ~001 INDEX PSAPFIN 2.289.664
SWWWIHEAD~A INDEX PSAPFIN 2.144.256
SWPNODE~0 INDEX PSAPFIN 2.007.040
SWWWIRET~0 INDEX PSAPFIN 2.004.992
SWW_WI2OBJ~002 INDEX PSAPFIN 1.907.712
If you would like to know, I can post the results on the workflow tables (indexes) in an ECC6 environment.
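For anyone repeating this exercise, the per-index procedure can be sketched as follows in 10g (the schema, index name and prefix length are examples only; ANALYZE ... VALIDATE STRUCTURE populates INDEX_STATS with Oracle's suggested prefix length and estimated saving):

```sql
-- Ask Oracle for the optimal compression prefix length and the
-- estimated space saving for one index:
ANALYZE INDEX "SAPR3"."BALHDR~1" VALIDATE STRUCTURE;
SELECT name, opt_cmpr_count, opt_cmpr_pctsave FROM index_stats;

-- Rebuild with the suggested prefix length; ONLINE keeps the
-- index available to the application during the rebuild:
ALTER INDEX "SAPR3"."BALHDR~1" REBUILD COMPRESS 2 ONLINE;
```

INDEX_STATS holds the result of the most recent ANALYZE only, so run the two statements per index, one index at a time.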
Please reward some points if you like.
Regards,
Stephan van Loon -
How to Compare Data length of staging table with base table definition
Hi,
I have two tables: a staging table and a base table.
I'm getting data from flat files into the staging table. As per the requirement, the structures of the staging table and the base table differ (the length of every column in the staging table is 25% greater, to load the data without errors); for example, a city column that is varchar2(40) in the staging table is varchar2(25) in the base table. Once data is loaded into the staging table I want to compare the actual data length of every column in the staging table with the base table definition (data_length for every column from all_tab_columns), and if any column's length differs I need to update the corresponding row in the staging table, which also has a flag called err_length.
So for this I'm using:
cursor c1 is select length(a.id), length(a.name), ... from staging_table;
cursor c2(name varchar2) is select data_length from all_tab_columns where table_name = 'BASE_TABLE' and column_name = name;
But we get the data all at once in the first query, whereas with the second cursor I need to fetch each column and then compare it with the first.
Can anyone tell me how to get desired results?
Thanks,
Mahender.This is a shot in the dark but, take a look at this example below:
SQL> DROP TABLE STAGING;
Table dropped.
SQL> DROP TABLE BASE;
Table dropped.
SQL> CREATE TABLE STAGING
2 (
3 ID NUMBER
4 , A VARCHAR2(40)
5 , B VARCHAR2(40)
6 , ERR_LENGTH VARCHAR2(1)
7 );
Table created.
SQL> CREATE TABLE BASE
2 (
3 ID NUMBER
4 , A VARCHAR2(25)
5 , B VARCHAR2(25)
6 );
Table created.
SQL> INSERT INTO STAGING VALUES (1,RPAD('X',26,'X'),RPAD('X',25,'X'),NULL);
1 row created.
SQL> INSERT INTO STAGING VALUES (2,RPAD('X',25,'X'),RPAD('X',26,'X'),NULL);
1 row created.
SQL> INSERT INTO STAGING VALUES (3,RPAD('X',25,'X'),RPAD('X',25,'X'),NULL);
1 row created.
SQL> COMMIT;
Commit complete.
SQL> SELECT * FROM STAGING;
ID A B E
1 XXXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXX
2 XXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXXX
3 XXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXX
SQL> UPDATE STAGING ST
2 SET ERR_LENGTH = 'Y'
3 WHERE EXISTS
4 (
5 WITH columns_in_staging AS
6 (
7 /* Retrieve all the columns names for the staging table with the exception of the primary key column
8 * and order them alphabetically.
9 */
10 SELECT COLUMN_NAME
11 , ROW_NUMBER() OVER (ORDER BY COLUMN_NAME) RN
12 FROM ALL_TAB_COLUMNS
13 WHERE TABLE_NAME='STAGING'
14 AND COLUMN_NAME != 'ID'
15 ORDER BY 1
16 ), staging_unpivot AS
17 (
18 /* Using the columns_in_staging above UNPIVOT the result set so you get a record for each COLUMN value
19 * for each record. The DECODE performs the unpivot and it works if the decode specifies the columns
20 * in the same order as the ROW_NUMBER() function in columns_in_staging
21 */
22 SELECT ID
23 , COLUMN_NAME
24 , DECODE
25 (
26 RN
27 , 1,A
28 , 2,B
29 ) AS VAL
30 FROM STAGING
31 CROSS JOIN COLUMNS_IN_STAGING
32 )
33 /* Only return IDs for records that have at least one column value that exceeds the length. */
34 SELECT ID
35 FROM
36 (
37 /* Join the unpivoted staging table to the ALL_TAB_COLUMNS table on the column names. Here we perform
38 * the check to see if there are any differences in the length if so set a flag.
39 */
40 SELECT STAGING_UNPIVOT.ID
41 , (CASE WHEN ATC.DATA_LENGTH < LENGTH(STAGING_UNPIVOT.VAL) THEN 'Y' END) AS ERR_LENGTH_A
42 , (CASE WHEN ATC.DATA_LENGTH < LENGTH(STAGING_UNPIVOT.VAL) THEN 'Y' END) AS ERR_LENGTH_B
43 FROM STAGING_UNPIVOT
44 JOIN ALL_TAB_COLUMNS ATC ON ATC.COLUMN_NAME = STAGING_UNPIVOT.COLUMN_NAME
45 WHERE ATC.TABLE_NAME='BASE'
46 ) A
47 WHERE COALESCE(ERR_LENGTH_A,ERR_LENGTH_B) IS NOT NULL
48 AND ST.ID = A.ID
49 )
50 /
2 rows updated.
SQL> SELECT * FROM STAGING;
ID A B E
1 XXXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXX Y
2 XXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXXX Y
3 XXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXX
Hopefully the comments make sense. If you have any questions please let me know.
This assumes the column names are the same between the staging and base tables. In addition as you add more columns to this table you'll have to add more CASE statements to check the length and update the COALESCE check as necessary.
Thanks! -
Steps to create Universe without using Fact Table
Dear All,
I am confronted with a problem creating a universe.
The problem is that we do not have any fact table.
Could you please explain the steps for creating a universe without a fact table?
Thanks
Pat
The first thing to do is identify the tables in your schema that contain measures. These will be your base tables for contexts.
Then identify all the tables that relate to each of your candidate fact tables.
You may identify two related tables, both with facts in, which would give you a fan trap.
Say you have a schema with only three tables and they are related as: T1 -< T2 -< T3
T2 and T3 both have measure columns.
What you would need to do is create an alias of T2 (AT2) and join it to T2.
You would then have two contexts: T1-<T2, T2-<T3 and T1-<T2, T2-AT2.
For objects from T2, derive the dimensions from T2 and the measures from AT2.
Beyond that, it's fairly standard.
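In SQL terms, the two contexts roughly correspond to the two hedged queries below (table and column names are invented; the point is that the T2 measure is summed without joining through to T3, so the one-to-many fan-out cannot inflate it):

```sql
-- Context 1: T3 measures, safe to join all three tables.
SELECT t1.region, SUM(t3.amount) AS t3_total
FROM   t1
JOIN   t2 ON t2.t1_id = t1.id
JOIN   t3 ON t3.t2_id = t2.id
GROUP  BY t1.region;

-- Context 2: T2 measures, taken from the alias AT2 and never
-- joined to T3, so each T2 row is counted exactly once.
SELECT t1.region, SUM(at2.qty) AS t2_total
FROM   t1
JOIN   t2 at2 ON at2.t1_id = t1.id
GROUP  BY t1.region;
```

The alias exists so the query generator produces two separate SQL statements like these instead of one combined (and measure-inflating) join.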
If you have a data schema it is going to be much easier for you. -
Hi experts, help me.
I have one requirement:
there are two machines (plants), fert1 and fert2, and each one has some materials.
One of the plants (fert1) is going to be stopped.
We then need to fetch the issued material from fert1 in reverse order to store it, then post the material into fert2 based on the production number.
Please help me.
Thanks in advance
Moderator message: "spec dumping", please work yourself first on your requirement.
Edited by: Thomas Zloch on Oct 29, 2010 1:54 PM