Duplicate data in table
I am designing a form that has two tables. The user is supposed to enter their data in the first table, and that data automatically copies to the second table (so when they print the form, they have two copies of the same thing while only having to enter the data once).
I created a dynamic input table with one row and set that row to repeat row for each data item, minimum count 9. This way, nine rows appear when the form renders and if we need to increase them later, it is a simple change of that number. Then, I set all of the table fields to global data binding and copied the table, setting the second table to read only.
My problem is, when I enter a value in the first data entry field, it copies the data to the entire column of the table! I don't understand why it is doing this, since Adobe automatically renames duplicate fields (i.e. row1Item becomes row1Item[1], row1Item[2] and so on). I even went so far as to add a script to change the names of the fields:
var rownum = this.parent.instanceIndex + 1;
this.name = "rowItemInput" + rownum;
But that hasn't helped at all. I still have the same problem.
Any ideas as to what I am doing wrong?
I set all of the table fields to global data binding
This is what is causing the problem. Remove the global binding from the fields and try again; it should work, and you might then get different data in each field.
Instead of making the fields global, put the script below on the initialize event of the corresponding field in the duplicate table:
this.rawValue = xfa.resolveNode("originalTable.Row2.Remarks").rawValue;
Hope this works!
Vjay
Similar Messages
-
Powershell and oracle and duplicate data in table
I have created a PowerShell script to insert data into an Oracle table from a CSV file, and I want to know how to stop duplicate rows from being inserted when the script runs multiple times. My PowerShell script is as follows:
'{0,-60}{1,20}' -f "Insert TEEN PREGNANCY ICD9 AND ICD10 CODES into the su_edit_detail ", (Get-Date -Format yyyyMMdd:hhmmss);
$myQuery = @"
SET PAGES 600;
SET LINES 4000;
SET ECHO ON;
SET serveroutput on;
WHENEVER sqlerror exit sql.sqlcode;
"@
foreach ($file in dir "$($UCMCSVLoadLocation2)" -recurse -filter "*.csv") {
    $fileContents = Import-Csv -Path $file.fullName
    foreach ($line in $fileContents) {
        $null = Execute-NonQuery-Oracle -sql @"
insert into SU_EDIT_DETAIL (EDIT_FUNCTION, TABLE_FUNCTION, CODE_FUNCTION, CODE_TYPE, CODE_BEGIN, CODE_END, EXCLUDE, INCLUDE_X, OP_NBR, TRANSCODE, VOID, YMDEFF, YMDEND, YMDTRANS)
values ('$($line."EDIT_FUNCTION")', '$($line."TABLE_FUNCTION")', '$($line."CODE_FUNCTION")', '$($line."CODE_TYPE")', '$($line."CODE_BEGIN")', '$($line."CODE_END")', ' ', ' ', 'MIS', 'C', ' ', 20141001, 99991231, 20131120)
"@
    }
}
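The thread never got an answer here, but a common, language-agnostic way to make such a load idempotent is to check a natural key before each insert. A hedged Python sketch (the key columns below are illustrative, not taken from the script above):

```python
import csv, io

# Skip rows whose natural key already exists, so re-running the load
# does not insert duplicate rows. The key columns are illustrative.
KEY = ("EDIT_FUNCTION", "CODE_BEGIN", "CODE_END")

def load_once(csv_text, existing_keys):
    inserted = []
    for line in csv.DictReader(io.StringIO(csv_text)):
        key = tuple(line[c] for c in KEY)
        if key not in existing_keys:      # insert only unseen keys
            existing_keys.add(key)
            inserted.append(line)
    return inserted

data = "EDIT_FUNCTION,CODE_BEGIN,CODE_END\nTP,640,649\nTP,640,649\n"
keys = set()
first = load_once(data, keys)    # first run inserts one row
second = load_once(data, keys)   # a re-run inserts nothing
```

In a real Oracle load the same idea is usually expressed as a MERGE (upsert) or a unique constraint on the key columns.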
Vijay Patel
Please read "PLEASE READ BEFORE POSTING". This forum is about the Small Basic programming language. Try another forum.
Jan [ WhTurner ] The Netherlands -
Find duplicates from different tables
I need to write a SQL query to find out which table holds the duplicate data.
Duplicate: company AAA should be present (mapped) only once with partner XXX. If it is repeated, it would be considered a duplicate.
Please help in writing this query.
Table A                          Table B               Table C
company  Partners                company  Partners     company  Partners
AAA      XXX                     AAA      XXX          AAA      XXX
BBB      YYY                     BBB      YYY          BBB      YYY
AAA      XXXX  (duplicate data)
Expected o/p :
Table A contains duplicate data, or Table B contains duplicate data, etc.
Chelseasadhu, I do not think AAA is a duplicate, as the partners are different: XXX and XXXX.
Could you please clarify how you decide it is a duplicate value? Is a duplicate company alone your criterion?
You may try something below:
create Table TableA(Company varchar(100), Partners varchar(100))
Insert into TableA Select 'AAA','XXX'
Insert into TableA Select 'BBB','XXX'
Insert into TableA Select 'AAA','XXXX'
create Table TableB(Company varchar(100), Partners varchar(100))
Insert into TableB Select 'AAA','XXX'
Insert into TableB Select 'BBB','XXX'
--Insert into TableB Select 'BBB','XXXCC'
create Table TableC(Company varchar(100), Partners varchar(100))
Insert into TableC Select 'AAA','XXX'
Insert into TableC Select 'BBB','XXX'
Declare @DuplicateTableA int =0,@DuplicateTableB int=0,@DuplicateTableC int=0
Set @DuplicateTableA=(Select Case when exists(Select COUNT(1) From TableA Group by Company having COUNT(1)>1) then 1 else 0 end)
Set @DuplicateTableB=(Select Case when exists(Select COUNT(1) From TableB Group by Company having COUNT(1)>1) then 1 else 0 end)
Set @DuplicateTableC=(Select Case when exists(Select COUNT(1) From TableC Group by Company having COUNT(1)>1) then 1 else 0 end)
/* Once you have the existence info, its easy for you to do the display at your application layer*/
Declare @DisplayText Varchar(MAX) =''
Set @DisplayText = (Select Case when @DuplicateTableA = 1 then 'TableA contains duplicate data' else '' end)
Set @DisplayText = @DisplayText+(Select Case when Len(@DisplayText)=0 then '' else ' ' End + Case when @DuplicateTableB = 1 then 'TableB contains duplicate data' else '' end)
Set @DisplayText = @DisplayText+(Select Case when Len(@DisplayText)=0 then '' else ' ' End + Case when @DuplicateTableC = 1 then 'TableC contains duplicate data' else '' end)
Select @DisplayText
Drop table TableA,TableB,TableC -
How to delete the duplicate data from PSA Table
Dear All,
How do I delete duplicate data from the PSA table? I have the purchase cube and I am getting the data from the item DataSource.
In the PSA table I found some cancellation records: for those particular records the quantity is negative, while for the same record the value is positive.
Because of this, the quantity is updated correctly in the target, but the values get summarized, so I get the summarized value of all normal and cancellation records.
Please let me know how to delete this data while updating to the target.
Thanks
Regards,
Sai
Hi,
Deleting records in the PSA table is difficult, and it depends on how many you would have to delete. You can achieve this in different ways:
1. Create a DSO and maintain key fields; the DSO will overwrite based on the key fields.
2. Write ABAP logic to delete the duplicate records at the InfoPackage level; check with your ABAPer.
3. Restrict the cancellation records at query level.
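Option 1's overwrite-by-key behaviour can be sketched generically (a Python sketch; the field names are illustrative, not from SAP):

```python
# Simulate a DSO "overwrite by key": the last record loaded for a given
# semantic key wins, which removes duplicates from repeated loads.
def overwrite_by_key(records, key_fields):
    active = {}
    for rec in records:
        key = tuple(rec[f] for f in key_fields)
        active[key] = rec          # later loads overwrite earlier ones
    return list(active.values())

loads = [
    {"costcenter": "1000", "amount": 50},
    {"costcenter": "2000", "amount": 70},
    {"costcenter": "1000", "amount": 50},   # duplicate from a re-run
]
deduped = overwrite_by_key(loads, ["costcenter"])
```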
Thanks,
Phani. -
Insert data into table 1 but remove the duplicate data
Hello friends,
I am trying to insert data into table tab0 using hints. The query is like this:
INSERT /*+ APPEND PARALLEL(tab0) */ INTO tab0
(select /*+ parallel(tab1) */
        col1, col2
   from tab1 a
  where a.rowid = (select max(b.rowid) from tab2 b))
But this query takes too much time, around 5 hours, because the data is almost 40-50 lakh rows.
I am using
a.rowid = (select max(b.rowid) from tab2 b)
to remove the duplicate data, but it takes too much time. Can you suggest any other option to remove the duplicate data, so that the performance problem is resolved?
Thanks in advance.
In the code you posted, you're inserting two columns into the destination table. Are you saying that you are allowed to have duplicates in those two columns but you need to filter out duplicates based on additional columns that are not being inserted?
If you've traced the session, please post your tkprof results.
What does "table makes bulky" mean? You understand that the APPEND hint forces the insert to happen above the high water mark of the table, right? And that this prevents the insert from reusing space that has been freed up by deletes in the table? And that this can substantially increase the cost of full scans on the table. Did you benchmark the INSERT without the APPEND hint?
Justin -
How to load duplicate data to a temporary table in ssis
I have duplicate data in my table. I want to load the unique records to one destination, and load the duplicate data into a temporary table at another destination. How can we implement a package for this?
Hi V60,
To achieve your goal, you can use the following two approaches:
Use Script Component to redirect the duplicate rows.
Use Fuzzy Grouping Transformation which performs data cleaning tasks by identifying rows of data that are likely to be duplicates and selecting a canonical row of data to use in standardizing the data. Then, use a Conditional Split Transform to redirect
the unique rows and the duplicate rows to different destinations.
For the step-by-step guidance about the above two methods, walk through the following blogs:
http://microsoft-ssis.blogspot.in/2011/12/redirect-duplicate-rows.html
http://hussain-msbi.blogspot.in/2013/02/redirect-duplicate-rows-using-ssis-step.html
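The redirect logic those blogs describe can be sketched outside SSIS as well (a hedged Python sketch of the split, with illustrative column names):

```python
# Split rows into a "unique" stream (first occurrence of each key) and
# a "duplicates" stream (every later occurrence), mirroring a
# Conditional Split on a duplicate flag.
def split_duplicates(rows, key):
    seen, unique, dupes = set(), [], []
    for row in rows:
        k = row[key]
        if k in seen:
            dupes.append(row)      # redirect to the temporary table
        else:
            seen.add(k)
            unique.append(row)     # redirect to the main destination
    return unique, dupes

rows = [{"id": 1}, {"id": 2}, {"id": 1}]
unique, dupes = split_duplicates(rows, "id")
```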
Regards,
Mike Yin
TechNet Community Support -
Duplicate data coming in HANA table
Hi,
We are getting duplicate data: both old and new records are coming into HANA. The table has both old and new values in the 'created on' field. Is it required to re-trigger that particular table load? Please, any help on this.
Hello Rama,
there is a separate forum for HANA-related questions: the SAP HANA Development Center.
Could you please ask this question there (as that seems to be a more relevant space for HANA issues)?
Regards,
Laszlo -
How to avoid duplicate data while inserting from sample.dat file to table
Hi Guys,
We have an issue with duplicate data in a flat file while loading data from sample.dat into a table. How can we avoid the duplicate data via the control file?
Can any one help me on this.
Thanks in advance!
Regards,
LKR
No, a control file will not remove duplicate data.
You would be better to use an external table and then remove duplicate data using SQL as you query the data to insert it to your destination table. -
BTREE and duplicate data items: over 300 people read this, nobody answers?
I have a btree consisting of keys (a 4 byte integer) - and data (a 8 byte integer).
Both integral values are "most significant byte (MSB) first" since BDB does key compression, though I doubt there is much to compress with such small key size. But MSB also allows me to use the default lexical order for comparison and I'm cool with that.
The special thing about it is that with a given key, there can be a LOT of associated data, thousands to tens of thousands. To illustrate, a btree with an 8192-byte page size has 3 levels, 0 overflow pages and 35208 duplicate pages!
In other words, my keys have a large "fan-out". Note that I wrote "can", since some keys only have a few dozen or so associated data items.
So I configure the b-tree for DB_DUPSORT. The default lexical ordering with set_dup_compare is OK, so I don't touch that. I'm getting the data items sorted as a bonus, but I don't need that in my application.
However, I'm seeing very poor "put (DB_NODUPDATA) performance", due to a lot of disk read operations.
While there may be a lot of reasons for this anomaly, I suspect BDB spends a lot of time tracking down duplicate data items.
I wonder if in my case it would be more efficient to have a b-tree with as key the combined (4 byte integer, 8 byte integer) and a zero-length or 1-length dummy data (in case zero-length is not an option).
I would lose the ability to iterate with a cursor using DB_NEXT_DUP, but I could simulate it using DB_SET_RANGE and DB_NEXT, checking whether my composite key still has the correct "prefix". That would be a pain in the butt for me, but still workable if there's no other solution.
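The DB_SET_RANGE/DB_NEXT simulation described above can be sketched generically (a Python sketch over a sorted list standing in for the btree; this is not BDB API code):

```python
import bisect

# Composite keys (key, data) stored in sorted order, as they would be in
# a btree keyed on the combined value with empty data items.
index = sorted([(1, 10), (1, 11), (2, 5), (4, 99), (4, 100)])

def iter_dups(index, key):
    """Simulate DB_SET_RANGE + DB_NEXT: seek to the first entry whose
    composite key is >= (key,), then iterate while the prefix matches."""
    pos = bisect.bisect_left(index, (key,))
    while pos < len(index) and index[pos][0] == key:
        yield index[pos][1]
        pos += 1

print(list(iter_dups(index, 4)))   # -> [99, 100]
```

The seek is O(log n) and the scan stops at the first non-matching prefix, which is exactly the cursor behaviour being simulated.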
Another possibility would be to just add all the data integers as a single big giant data blob item associated with a single (unique) key. But maybe this is just doing what BDB does... and would probably exchange "duplicate pages" for "overflow pages"
Or, the slowdown is a BTREE thing and I could use a hash table instead. In fact, what I don't know is how duplicate pages influence insertion speed. But the BDB source code indicates that in contrast to BTREE the duplicate search in a hash table is LINEAR (!!!) which is a no-no (from hash_dup.c):
while (i < hcp->dup_tlen) {
        memcpy(&len, data, sizeof(db_indx_t));
        data += sizeof(db_indx_t);
        DB_SET_DBT(cur, data, len);
        /*
         * If we find an exact match, we're done. If in a sorted
         * duplicate set and the item is larger than our test item,
         * we're done. In the latter case, if permitting partial
         * matches, it's not a failure.
         */
        *cmpp = func(dbp, dbt, &cur);
        if (*cmpp == 0)
                break;
        if (*cmpp < 0 && dbp->dup_compare != NULL) {
                if (flags == DB_GET_BOTH_RANGE)
                        *cmpp = 0;
                break;
        }
What's the expert opinion on this subject?
Vincent
Message was edited by:
user552628
Hi,
The special thing about it is that with a given key,
there can be a LOT of associated data, thousands to
tens of thousands. To illustrate, a btree with a 8192
byte page size has 3 levels, 0 overflow pages and
35208 duplicate pages!
In other words, my keys have a large "fan-out". Note
that I wrote "can", since some keys only have a few
dozen or so associated data items.
So I configure the b-tree for DB_DUPSORT. The default
lexical ordering with set_dup_compare is OK, so I
don't touch that. I'm getting the data items sorted
as a bonus, but I don't need that in my application.
However, I'm seeing very poor "put (DB_NODUPDATA)
performance", due to a lot of disk read operations.
In general, performance slowly decreases when there are a lot of duplicates associated with a key. For the Btree access method, lookups and inserts have O(log n) complexity (which implies that the search time depends on the number of keys stored in the underlying db tree). When doing puts with DB_NODUPDATA, leaf pages have to be searched in order to determine whether the data is a duplicate. Thus, given that for each key (in most cases) there is a large number of associated data items (up to thousands, or tens of thousands), an impressive number of pages has to be brought into the cache to check against the duplicate criteria.
Of course, the problem of sizing the cache and the database's pages arises here. Your settings for these should tend toward large values; that way the cache will be able to accommodate large pages (each hosting hundreds of records).
Setting the cache and the page size to their ideal values is a process of experimenting.
http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/pagesize.html
http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/cachesize.html
While there may be a lot of reasons for this anomaly,
I suspect BDB spends a lot of time tracking down
duplicate data items.
I wonder if in my case it would be more efficient to
have a b-tree with as key the combined (4 byte
integer, 8 byte integer) and a zero-length or
1-length dummy data (in case zero-length is not an
option).
Indeed, this should be the best alternative, but testing must be done first. Try this approach and provide us with feedback.
You can have records with a zero-length data portion.
Also, you could provide more information on whether or not you're using an environment, if so, how did you configure it etc. Have you thought of using multiple threads to load the data ?
Another possibility would be to just add all the
data integers as a single big giant data blob item
associated with a single (unique) key. But maybe this
is just doing what BDB does... and would probably
exchange "duplicate pages" for "overflow pages".
This is a terrible approach, since bringing an overflow page into the cache is more time consuming than bringing in a regular page, and thus a performance penalty results. Also, processing the entire collection of keys and data implies more work from a programming point of view.
Or, the slowdown is a BTREE thing and I could use a
hash table instead. In fact, what I don't know is how
duplicate pages influence insertion speed. But the
BDB source code indicates that in contrast to BTREE
the duplicate search in a hash table is LINEAR (!!!)
which is a no-no (from hash_dup.c):
The Hash access method has, as you observed, a linear search within a bucket's duplicate set (and thus a search and lookup time proportional to the number of items in the bucket, i.e. O(n) rather than constant time). Combined with the fact that you don't want duplicate data, using the Hash access method may not improve performance.
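The cost difference between the linear duplicate scan and a binary search over a sorted duplicate set can be illustrated generically (a hedged Python sketch, not BDB internals):

```python
import bisect

# Linear scan of an unsorted duplicate set vs. binary search over a
# sorted one: O(n) comparisons vs. O(log n).
def linear_contains(items, x):
    steps = 0
    for it in items:
        steps += 1
        if it == x:
            return True, steps
    return False, steps

def binary_contains(sorted_items, x):
    steps = len(sorted_items).bit_length()   # ~log2(n) comparisons
    pos = bisect.bisect_left(sorted_items, x)
    return pos < len(sorted_items) and sorted_items[pos] == x, steps

data = list(range(10_000))
print(linear_contains(data, 9_999)[1])   # 10000 comparisons
print(binary_contains(data, 9_999)[1])   # ~14
```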
This is a performance/tuning problem, and investigating it involves a lot of resources on our part. If you have a support contract with Oracle, then please don't hesitate to raise your issue on Metalink, or indicate that you want this issue handled privately and we will create an SR for you.
Regards,
Andrei -
DTP Error: Duplicate data record detected
Hi experts,
I have a problem with loading data from a DataSource to a standard DSO.
In the DS there are master data attributes with a key containing id_field.
In the end routine I perform some operations which multiply the lines in the result package and fill a new date field, defined in the DSO (and also in the result_package definition).
E.g., result_package before the end routine:

Id_field  attr_a  attr_b ... attr_x  date_field
1         a1      b1         x1
2         a2      b2         x2

Result_package after the end routine:

Id_field  attr_a  attr_b ... attr_x  date_field
1         a1      b1         x1      d1
2         a1      b1         x1      d2
3         a2      b2         x2      d1
4         a2      b2         x2      d2
The date_field (date type) is in a key fields in DSO
When I execute DTP I have an error in section Update to DataStore Object: "Duplicate data record detected "
"During loading, there was a key violation. You tried to save more than one data record with the same semantic key."
As far as I know, the result_package key contains all fields except fields of type i, p, and f.
In simulate mode (debuging) everything is correct and the status is green.
In the DSO the "Unique Data Records" checkbox is unchecked.
Any ideas?
Thanks in advance.
MG
Hi,
In the end routine, try:
DELETE ADJACENT DUPLICATES FROM RESULT_PACKAGE COMPARING xxx yyy.
Here xxx and yyy are the key fields, so that you eliminate the extra duplicate records.
Note that DELETE ADJACENT DUPLICATES only removes rows that sit next to each other, so sort first:
SORT itab_xxx BY field1 field2 field3 ASCENDING.
DELETE ADJACENT DUPLICATES FROM itab_xxx COMPARING field1 field2 field3.
This can be done before you loop over your internal table (in case you are using internal tables and loops); itab_xxx is the internal table. field1, field2 and field3 may vary depending on your requirement.
By using the above lines, you can get rid of duplicates coming through the end routine.
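The SORT + DELETE ADJACENT DUPLICATES pattern can be sketched generically (a Python sketch, not ABAP):

```python
from itertools import groupby
from operator import itemgetter

# Sort by the comparing fields, then keep only the first row of each
# adjacent group -- the same effect as ABAP's
# SORT ... ASCENDING. / DELETE ADJACENT DUPLICATES ... COMPARING ...
def delete_adjacent_duplicates(rows, fields):
    key = itemgetter(*fields)
    rows = sorted(rows, key=key)
    return [next(group) for _, group in groupby(rows, key=key)]

pkg = [
    {"id": 1, "date": "d1"},
    {"id": 2, "date": "d1"},
    {"id": 1, "date": "d1"},   # duplicate semantic key
]
result = delete_adjacent_duplicates(pkg, ["id", "date"])
```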
Regards
Sunil
Edited by: Sunny84 on Aug 7, 2009 1:13 PM -
Page level validation to prevent duplicate data entry into the database
Hello,
Can anyone please help me out with this issue.
I have a form with two items based on a table. I already have an item level validation to check for null. Now I would like to create a page level validation to check that duplicate data are not entered into the database. I would like to check the database when the user clicks on ‘Create’ button to ensure they are not inserting duplicate record. If data already exist, then show the error message and redirect them to another page. I am using apex 3.2
Thanks
Hi,
Have you tried writing a PLSQL function to check this?
I haven't tested this specifically, but something like this should work:
1) Create a Page Level Validation
2) Choose PLSQL for the method
3) Choose Function Returning Boolean for the Type
For the validation code, you could do something like this:
DECLARE
v_cnt number;
BEGIN
select count(*)
into v_cnt
from
your_table
where
col1 = :P1_field1 AND
col2 = :P1_field2;
if v_cnt > 0 then return false;
else return true;
end if;
END;
If the function returns false, your error message will be displayed.
Not sure how you would go about redirecting after this page though. Maybe just allow the user to try again with another value (in case they made a mistake) or just press a 'cancel' button to finish trying to create a new record.
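The same check-before-insert pattern, outside APEX (a hedged Python/sqlite3 sketch; the table and column names are illustrative):

```python
import sqlite3

# Return True when no row with the same (col1, col2) pair exists yet,
# mirroring the PL/SQL validation's count(*) test.
def can_insert(conn, col1, col2):
    cur = conn.execute(
        "SELECT COUNT(*) FROM your_table WHERE col1 = ? AND col2 = ?",
        (col1, col2),
    )
    return cur.fetchone()[0] == 0

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE your_table (col1 TEXT, col2 TEXT)")
conn.execute("INSERT INTO your_table VALUES ('a', 'b')")
print(can_insert(conn, "a", "b"))   # False: would be a duplicate
print(can_insert(conn, "a", "c"))   # True
```

In production a unique constraint on (col1, col2) is the safer choice, since a check-then-insert has a race window under concurrent users.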
Amanda. -
How to avoid 'duplicate data record' error message when loading master data
Dear Experts
We have a custom extractor on table CSKS called ZCOSTCENTER_ATTR. The settings of this datasource are the same as the settings of 0COSTCENTER_ATTR. The problem is that when loading to BW it seems that validity (DATEFROM and DATETO) is not taken into account. If there is a cost center with several entries having different validity, I get this duplicate data record error. There is no error when loading 0COSTCENTER_ATTR.
Enhancing 0COSTCENTER_ATTR to have one datasource instead of two is not an option.
I know that you can set ignore duplicates in the infopackage, but that is not a nice solution. 0COSTCENTER_ATTR can run without this!
Is there a trick you know to tell the system that the date fields are also part of the key??
Thank you for your help
Peter
Alessandro - ZCOSTCENTER_ATTR is loading 0COSTCENTER, just like 0COSTCENTER_ATTR.
Siggi - I don't have the error message described in the note.
"There are duplicates of the data record 2 & with the key 'NO010000122077 &' for characteristic 0COSTCENTER &."
In PSA the records are marked red with the same message (MSG no 191).
As you see the key does not contain the date when the record is valid. How do I add it? How is it working for 0COSTCENTER_ATTR with the same records? Is it done on the R/3 or on the BW side?
Thanks
Peter -
How to avoid duplicate data loading from SAP-r/3 to BI
Hi!
I have created a process chain that loads data into some ODS objects from R/3, where (in R/3) the DataSources/tables are updated daily.
I want to schedule the system such that, if on any day the source data is not updated (if the tables are unchanged), that data should not be loaded into the ODS.
Can anyone suggest such a mechanism, so that I always have unique data in my data targets?
Please reply soon. Thank you!
Pankaj K.
Hello Pankaj,
By setting the unique records option, you pretty much are letting the system know to not check the uniqueness of the records using the change log and the ODS active table log.
Also, to avoid the problem of two requests getting activated at the same time, make sure you select the options "Set Quality Status to 'OK' Automatically" and "Activate Data Automatically"; that way you can delete a single request as required without having to delete all the data.
This is all to avoid the issue where even the new request has to be deleted to delete the duplicate data.
Unless a timestamp field is available in the table on top of which you have created the DataSource, it will be difficult to check the delta load.
Check the underlying table for a timestamp field or any other numeric counter field which could be used for creating a delta queue for the DataSource you are dealing with.
Let me know if the information is helpful or if you need additional information regarding the same.
Thanks
Dharma. -
Hi,
how do I find the duplicates in the table in the example below?
Id  Empfirstname  Emplastname  empdesig
1   Xyz           Abc          Software Engg
2   Xyz           Abc          Software Engg
3   Kkk           Ddd          Architect

I need a query to display the duplicate records:

1   Xyz           Abc          Software Engg
2   Xyz           Abc          Software Engg

In addition:
You might want to think about the 'upper-/lowercase-thing' and still being/being not duplicate in these cases.
If so, then spot the difference:
MHO%xe> select * from
  (
  with all_your_data_are_belong_to_us -- generate some data on the fly here
  as (
    select 1 Id, 'Xyz' empfirstname, 'Abc' emplastname, 'Software Eng' empdesig from dual union all
    select 2, 'Xyz', 'Abc', 'Software Eng' from dual union all
    select 3, 'aaa', 'AAA', 'Fairy' from dual union all
    select 4, 'AAA', 'aaa', 'Fairy' from dual union all
    select 5, 'Zlad', 'Molvania', 'Electrician' from dual union all
    select 6, 'Kkk', 'Ddd', 'Architect' from dual
  )
  select id
       , empfirstname
       , emplastname
       , empdesig
       , count(*) over ( partition by upper(empfirstname), upper(emplastname), upper(empdesig)
                         order by 'geronimo' ) rn
    from all_your_data_are_belong_to_us --<< this could be YOUR table name, if so, omit the with-clause! ;-)
  )
  where rn > 1;

 ID EMPF EMPLASTN EMPDESIG      RN
  3 aaa  AAA      Fairy          2
  4 AAA  aaa      Fairy          2
  2 Xyz  Abc      Software Eng   2
  1 Xyz  Abc      Software Eng   2

Elapsed: 00:00:00.68
MHO%xe> select * from
  (
  with all_your_data_are_belong_to_us -- generate some data on the fly here
  as (
    select 1 Id, 'Xyz' empfirstname, 'Abc' emplastname, 'Software Eng' empdesig from dual union all
    select 2, 'Xyz', 'Abc', 'Software Eng' from dual union all
    select 3, 'aaa', 'AAA', 'Fairy' from dual union all
    select 4, 'AAA', 'aaa', 'Fairy' from dual union all
    select 5, 'Zlad', 'Molvania', 'Electrician' from dual union all
    select 6, 'Kkk', 'Ddd', 'Architect' from dual
  )
  select id
       , empfirstname
       , emplastname
       , empdesig
       , count(*) over ( partition by empfirstname, emplastname, empdesig
                         order by 'geronimo' ) rn
    from all_your_data_are_belong_to_us --<< this could be YOUR table name, if so, omit the with-clause! ;-)
  )
  where rn > 1;

 ID EMPF EMPLASTN EMPDESIG      RN
  1 Xyz  Abc      Software Eng   2
  2 Xyz  Abc      Software Eng   2

( As you can see, I used Karthick's example, and it worked after translating the 'French' part (partitoin vs. partition) ;-) ) -
Need help on How to delete duplicate data
Hi All
I need your help in finding the best way to delete duplicate data from a table.
The table is like:
create table table1 (col1 varchar2(10), col2 varchar2(20), col3 varchar2(30));
Insert into table1 values ('ABC', 'DEF', 'FGH');
Insert into table1 values ('ABC', 'DEF', 'FGH');
Insert into table1 values ('ABC', 'DEF', 'FGH');
Insert into table1 values ('ABC', 'DEF', 'FGH');
Now I want a SQL statement which will delete the duplicate rows and leave one row of that data in the table, i.e. in the above example delete three of the duplicate rows and leave one. My Oracle version is 8i.
Appreciate your help in advance.
Either of these will work; the best approach will depend on how big the table is and how many duplicates there are.
SQL> SELECT * FROM table1;
COL1 COL2 COL3
ABC DEF FGH
ABC DEF FGH
ABC DEF FGH
ABC DEF FGH
ABC DEF IJK
BCD EFG HIJ
BCD EFG HIJ
SQL> DELETE FROM table1 o
2 WHERE rowid <> (SELECT MAX(rowid)
3 FROM table1 i
4 WHERE o.col1 = i.col1 and
5 o.col2 = i.col2 and
6 o.col3 = i.col3);
4 rows deleted.
SQL> SELECT * FROM table1;
COL1 COL2 COL3
ABC DEF FGH
ABC DEF IJK
BCD EFG HIJ
SQL> ROLLBACK;
Rollback complete.
SQL> CREATE TABLE table1_tmp AS
2 SELECT DISTINCT * FROM table1;
Table created.
SQL> TRUNCATE TABLE table1;
Table truncated.
SQL> INSERT INTO table1
2 SELECT * FROM table1_tmp;
3 rows created.
SQL> SELECT * FROM table1;
COL1 COL2 COL3
ABC DEF FGH
ABC DEF IJK
BCD EFG HIJ
There are other approaches as well (e.g. see the EXCEPTIONS INTO clause of the ALTER TABLE command), but these are the most straightforward.
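The same two approaches (delete all duplicates but one, vs. copy out the distinct rows), sketched generically in Python rather than SQL:

```python
# Keep one representative of each duplicate row, analogous to the
# DELETE ... WHERE rowid <> (SELECT MAX(rowid) ...) technique: the last
# occurrence (highest "rowid", i.e. list position) survives.
def dedup_keep_last(rows):
    last_pos = {tuple(r): i for i, r in enumerate(rows)}  # max "rowid" per value
    return [r for i, r in enumerate(rows) if last_pos[tuple(r)] == i]

table1 = [
    ("ABC", "DEF", "FGH"),
    ("ABC", "DEF", "FGH"),
    ("ABC", "DEF", "IJK"),
    ("BCD", "EFG", "HIJ"),
    ("BCD", "EFG", "HIJ"),
]
deduped = dedup_keep_last(table1)

# The SELECT DISTINCT / temp-table variant is simply:
distinct = list(dict.fromkeys(table1))
```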
HTH
John