Multi table delete
Hello,
I have a scenario in which I want to delete rows from three tables at once.
They do not all have the same key, but each pair of tables shares a common column:
for instance, table one and table two both have USER_NAME (with equal values), and table two and table three both have USER_EMAIL (with equal values).
I've tried to wrap this in a stored procedure, since the DML statements worked fine when I ran them manually.
I'm getting one error on it, about the syntax.
Here is my procedure's syntax:
CREATE OR REPLACE PROCEDURE "CCTRL_REMOVE_APPROVERS_SP"
(userid IN USERS.USER_NAME%TYPE,
uemail IN USERS.USER_EMAIL%TYPE,
useridrole IN USER_ROLES.USER_NAME%TYPE,
uemailvp IN APPROVERS.BUSVP%TYPE)
AS
BEGIN
DELETE
FROM USERS a
WHERE
a.USER_NAME = UPPER(userid)
DELETE FROM
USER_ROLES b
WHERE b.USER_NAME = UPPER(useridrole)
DELETE FROM
APPROVERS c
WHERE
c.BUSVP = uemailvp
commit;
end CCTRL_REMOVE_APPROVERS_SP;
/
Getting an ORA-00936 error, about a missing expression. If I hard code values in, this will work properly.
If anyone can please point out what they see might be going wrong I'd appreciate it.
Thank you very much.
> If I hard code values in, this will work properly.
No - it won't. The procedure won't compile. Each individual statement will work in SQL*Plus, but only because of the '/' terminators, as Peter mentioned. Inside a PL/SQL block each statement must end with a semicolon, so replace the '/' characters with ';'.
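For reference, here is a sketch of the corrected procedure, with each DELETE terminated by a semicolon (table, column and parameter names are copied from the original post; the uemail parameter is unused there as well):

```sql
CREATE OR REPLACE PROCEDURE CCTRL_REMOVE_APPROVERS_SP (
    userid     IN USERS.USER_NAME%TYPE,
    uemail     IN USERS.USER_EMAIL%TYPE,   -- unused in the original as posted
    useridrole IN USER_ROLES.USER_NAME%TYPE,
    uemailvp   IN APPROVERS.BUSVP%TYPE)
AS
BEGIN
    -- inside a PL/SQL block, every statement must end with ';'
    DELETE FROM USERS a
     WHERE a.USER_NAME = UPPER(userid);

    DELETE FROM USER_ROLES b
     WHERE b.USER_NAME = UPPER(useridrole);

    DELETE FROM APPROVERS c
     WHERE c.BUSVP = uemailvp;

    COMMIT;
END CCTRL_REMOVE_APPROVERS_SP;
/
```

Whether to COMMIT inside the procedure or leave transaction control to the caller is a separate design choice.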
Similar Messages
-
Multi-Row Delete from a table via a button without submitting the page
Hi,
I have a simple page based on a (temporary) table. There is a submit button that calls a PL/SQL process. However, I would like to have an 'Abort' button that deletes all rows from the table belonging to a specific user.
I had a look at 'Processes -> Data Manipulation --> Multi Row Delete' but this can only be linked to a page level event such as onSubmit. My onSubmit is linked to another process so this is not an option for me.
I thought of creating a PL/SQL function for the deletion and calling it from JavaScript linked to a button. I have done the PL/SQL and the button but don't know how to call PL/SQL from JS. And is this the correct way of doing something like a deletion? Any documents that show how this can be done will be much appreciated....
I had a look at the forum and the documentation but could not find anything for multi-row deletion triggered from a button.
Your help is appreciated as I'm a newbie :-)
Thanks
Angela
Hi,
I actually found the solution. I created a button (that submits) and a computation that calls the PL/SQL function, conditional on that button being pressed. Initially I got confused because I already had another PL/SQL function attached to a different button. I didn't think that having two buttons that submit the page and call different functions was possible.
Thanks
Angela -
Deleting with multi-table mapping
I have an object that is mapped to two tables, i.e., one object is split across two tables. When I do a deleteallquery it only executes a delete for one of the tables:
DELETE FROM DIRECT2TABLESPART1
Inserts, selects, and updates all work correctly and manipulate both tables. Is there something explicit I have to do to get it to execute the delete for the other table?
The class DeleteAllQuery should only be used internally by TopLink; it is not for external use. It is only used in TopLink from a 1-m mapping, and only in the case where multiple tables and private relations do not exist, such that the delete of all of the objects can be optimized.
TopLink does not currently provide delete-all/update-all style of queries, you must either delete the objects one by one, or issue custom SQL to delete a set of objects. -
Hi, I'm new to Apex and I tried to build a master detail report on a view. Everything is cool but "delete checked" doesn't work.
"ORA-20001: Error in multi row delete operation: row= , ORA-06502: PL/SQL: numeric or value error: NULL index table key value,"
The problem is that I don't know what is wrong :). I have a special trigger "instead of delete on MY_VIEW", but the problem is not explained in this error.
Anybody know what can be wrong? Is it a problem with the trigger, or does multi row delete not work with views? I couldn't find how MRD decides what kind of statement to use to delete rows, so I don't know if the statement the program used is correct. In debug it looks like this:
0.32: ...Do not run process "ApplyMRU", process point=AFTER_SUBMIT, condition type=REQUEST_IN_CONDITION, when button pressed=
0.32: ...Process "ApplyMRD": MULTI_ROW_DELETE (AFTER_SUBMIT) #OWNER#:MY_VIEW:ITEM1:ITEM2
0.33: Show ERROR page...
0.33: Performing rollback...
thanks for any help
//sorry for english mistakes
edit: it doesn't matter if in the trigger I use delete from ... where item1=:OLD.item1; or item1=:P4_item1 (which actually saves correct values)
Edited by: user5931224 on 2009-06-13 08:55
I realised that this is not a problem with the trigger; I changed the trigger body to "NULL;" and the problem is the same. Maybe somebody has used master-detail on a view, not only on tables, and knows what can be wrong in this situation?
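For comparison, a minimal INSTEAD OF DELETE trigger on a view typically looks like this (MY_VIEW, MY_TABLE and ITEM1 are placeholders matching the names in the post):

```sql
CREATE OR REPLACE TRIGGER my_view_iod
    INSTEAD OF DELETE ON my_view
    FOR EACH ROW
BEGIN
    -- :OLD holds the column values of the view row being deleted
    DELETE FROM my_table t
     WHERE t.item1 = :OLD.item1;
END;
/
```

Since the error persisted even with a NULL; trigger body, the trigger itself is unlikely to be at fault; the view/report key metadata is the more likely suspect.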
-
Tabular form - Multi row delete error
Apex 4.0.2
We have a simple CRUD type of application on a bunch of tables, built using Apex v1.6, that has over the years been upgraded to v4.0.2 and is working mostly fine. It uses all out-of-the-box standard components, forms, classic reports, nothing too fancy. Recently one of the tabular forms started to misbehave: the multi-row-delete process raises a No Data Found error. The tabular form is based on a view with an INSTEAD OF trigger to handle the DML. Manually deleting the row in SQL*Plus works fine (delete from mytab where pk_id = :pk_id) but selecting the same row in Apex and clicking Delete raises the error.
How does one go about troubleshooting & fixing this sort of thing? I tried re-saving the region in the Builder, exporting/importing the entire app, nothing. Running in Debug mode doesn't really provide any additional information, just that the MRD process failed. Tabular forms are the most frustrating, opaque component in Apex, wish they were easier to troubleshoot.
Any ideas?
Hello Vikas,
>> How does one go about troubleshooting & fixing this sort of thing?
By giving us a bit more information :)
• Is it a manual Tabular Form (using the ITEM API) or a wizard created one?
• Do the Insert/Update operations work correctly? If not, what is the type of your PK column(s)?
• If the problem is limited to the Delete operation, maybe the problem lies with the checkbox column. Are you sure that on the page it is rendered as the f01 column?
• As triggers are involved, can you save the PK that the trigger sees? Is it the expected value?
• Are there any other processes that are fired before the DML process? If so, maybe the problem is with them. You can temporarily disable them and see if it change anything.
>> Tabular forms are the most frustrating, opaque component in Apex, wish they were easier to troubleshoot
Yes, I agree. However, I believe that 4.1 made some serious advancements where Tabular Forms are concerned. Having simplified Tabular Form related Validations and Processes should make things easier and, as a result, less error-prone. Still, the main problem is that the type of error you are talking about is usually the result of metadata problems, and these are indeed very hard to track.
Regards,
Arie.
♦ Please remember to mark appropriate posts as correct/helpful. For the long run, it will benefit us all.
♦ Author of Oracle Application Express 3.2 – The Essentials and More -
ORA-20001: Error in multi row delete operation: ORA-01403: no data
Whenever I attempt a multi-row delete on my master detail page, I receive the error:
ORA-20001: Error in multi row delete operation: ORA-01403: no data
I have seen in other threads that the primary key attribute of the underlying table needs to be set to 'Show' in the report attributes. I have tried this both with it displaying as 'Hidden' ('Show' is unchecked) and with it displaying as text. Either way still gives me the same error.
Is there anything else not mentioned in the other threads that could be causing this error for me?
Thanks.
BoilerUP
Jimmy,
In your multi row delete process you specify schema name, table and column name. Your report needs to be of type "SQL Query (updateable report)". And your report needs to include the primary key column of your table. The column or alias name of that report column needs to correspond with the actual column name of your table.
Marc -
Multi Row Delete and then I get a unique constraint violation on my PK
I have a simple table with 2 columns, one a PK. I have a checkbox style, multi-row delete function setup on this (to be honest, APEX set this up automatically).
I removed the add/edit functionality to keep just the delete button and delete procedure.
When I select an item, and then click delete, I get a unique constraint violation that I'm violating my Primary Key.
How can I fix this, or see what it's doing when it tries to delete the row?
Hi,
It sounds as though you haven't properly removed all of the add/edit functionality, or you still have some form of validation and/or computation in place, or you have a trigger that is trying to insert records into, for example, a history table. (Is the constraint on the table you are deleting from? The error message should tell you this.)
Check that the only process you have is ApplyMRD and that this is pointing to the correct table and has the correct primary key set. Ensure that this has Conditional Processing set for a Request of "MULTI_ROW_DELETE".
Check for any validations - there is no need to perform validations if your user can not insert or update data unless you want to check that they've ticked one or more checkboxes.
Check for processes that could run if the user clicks the Delete button. Validations and processes could be conditional on either the button click or on request = "MULTI_ROW_DELETE".
Review any triggers that you have on the table to ensure that deletions do not try to insert records into another table where the primary key on that table is not being populated.
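On that last point, a history trigger that does not populate its own primary key is a classic cause of this kind of constraint violation; a safe pattern (all names here are hypothetical) is:

```sql
CREATE OR REPLACE TRIGGER mytab_del_hist
    AFTER DELETE ON mytab
    FOR EACH ROW
BEGIN
    -- give the history row its own key from a sequence instead of
    -- reusing (and eventually colliding on) the deleted row's key
    INSERT INTO mytab_hist (hist_id, old_pk, deleted_on)
    VALUES (mytab_hist_seq.NEXTVAL, :OLD.pk_id, SYSDATE);
END;
/
```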
Regards
Andy -
VLD-1119: Unable to generate Multi-table Insert statement for some or all t
Hi All -
I have a map in OWB 10.2.0.4 which is ending with following error: -
VLD-1119: Unable to generate Multi-table Insert statement for some or all targets.
Multi-table insert statement cannot be generated for some or all of the targets due to upstream graphs of those targets are not identical on "active operators" such as "join".
The map is created with following logic in mind. Let me know if you need more info. Any directions are highly appreciated and many thanks for your inputs in advance: -
I have two source tables, say T1 and T2. They are full outer joined in a joiner, and the output of this join is passed to an expression to evaluate column values based on business logic, i.e. if T1 is available then take T1.C1, else take T2.C1, and so on.
A flag is also evaluated in the expression because these intermediate results need to be joined to a third source table, say T3, with a different condition.
Based on the flag value evaluated in the expression, a splitter routes the results into three intermediate tables.
These three intermediate tables are all truncate/insert, and they are unioned to fill a final target table.
Visually it is something like this:

T1 --+
     +--> Join1 (FULL OUTER) --> Expression --> SPLITTER --+--> JOINER1 (with T3) --+
T2 --+                                                     +--> JOINER2 (with T3) --+--> UNION --> Target Table
                                                           +--> JOINER3 (with T3) --+
Please suggest.
I verified that there is a limitation with the splitter operator which will not let you generate a multi-split having more than 999 columns in all.
I had to use two separate splitters to achieve what I was trying to do.
So the situation is now:
Source -> Split -> Split 1 -> Insert into table -> Union1 --> Final table A
Source -> Split -> Split 2 -> Insert into table -> Union1 -
Multi-table INSERT with PARALLEL hint on 2 node RAC
Multi-table INSERT statement with parallelism set to 5 works fine and spawns multiple parallel servers to execute. It's just that it sticks to only one instance of a 2-node RAC. The code I used is given below.
create table t1 ( x int );
create table t2 ( x int );
insert /*+ APPEND parallel(t1,5) parallel (t2,5) */
when (dummy='X') then into t1(x) values (y)
when (dummy='Y') then into t2(x) values (y)
select dummy, 1 y from dual;
I can see multiple sessions using the below query, but only on one instance. This happens not only for the above statement but also for statements where real tables (with more than 20 million records) are used.
select p.server_name,ps.sid,ps.qcsid,ps.inst_id,ps.qcinst_id,degree,req_degree,
sql.sql_text
from Gv$px_process p, Gv$sql sql, Gv$session s , gv$px_session ps
WHERE p.sid = s.sid
and p.serial# = s.serial#
and p.sid = ps.sid
and p.serial# = ps.serial#
and s.sql_address = sql.address
and s.sql_hash_value = sql.hash_value
and qcsid=945
Won't parallel servers be spawned across instances for multi-table insert with parallelism on RAC?
Thanks,
Mahesh
Please take a look at these 2 articles below
http://christianbilien.wordpress.com/2007/09/12/strategies-for-rac-inter-instance-parallelized-queries-part-12/
http://christianbilien.wordpress.com/2007/09/14/strategies-for-parallelized-queries-across-rac-instances-part-22/
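As those articles discuss, in 10g whether PX slaves may span RAC instances is governed by the INSTANCE_GROUPS and PARALLEL_INSTANCE_GROUP parameters. A quick way to check and widen the session's scope is something like this (the group name ALLNODES is an example and must match an instance group actually configured on your nodes):

```sql
-- SQL*Plus: see which instance group, if any, this session is limited to
SHOW PARAMETER parallel_instance_group

-- allow slaves on every instance belonging to a group that spans both nodes
ALTER SESSION SET parallel_instance_group = 'ALLNODES';
```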
thanks
http://swervedba.wordpress.com -
View links in multi table relations
Is it advisable (in terms of performance, e.g.) to create view links and view objects as local variables in multi-table relations?
Example: the JDev online help says to use such multi-table relations like this:
// A (one) -> B (many) -> C (many)
ViewLink a2b = appMod.findViewLink("AtoB");
ViewLink b2c = appMod.findViewLink("BtoC");
ViewObject aV = a2b.getSource();
ViewObject bV = a2b.getDestination();
ViewObject cV = b2c.getDestination();
while (aV.hasNext()) {
    Row aR = aV.next();
    while (bV.hasNext()) {
        Row bR = bV.next();
        while (cV.hasNext()) {
            Row cR = cV.next();
        }
    }
}
I would rather keep everything concerning
a, b, c together, especially when more
tables (d, e, ...) are added, like this
ViewLink a2b = appMod.findViewLink("AtoB");
ViewObject aV = a2b.getSource();
while (aV.hasNext()) {
    Row aR = aV.next();
    ViewLink b2c = appMod.findViewLink("BtoC");
    ViewObject bV = a2b.getDestination();
    while (bV.hasNext()) {
        Row bR = bV.next();
        ViewObject cV = b2c.getDestination();
        while (cV.hasNext()) {
            Row cR = cV.next();
        }
    }
}
Is there anything to say against this approach (in terms of performance, for example)? I don't remember if this was the approach used in the HotelReservationSystem example.
Thanks.
Rx
For this to work you have to either build a view based on the entities from which you need attributes (joined by the FK) or build a ViewObject with the SQL statement giving you all the attributes you need.
The first case enables you to edit the attributes; the second gives you read-only access to the attributes.
What you try to do isn't a master-detail connection, you are doing a join of some tables.
Timo -
Any general tips on getting better performance out of multi table insert?
I have been struggling with coding a multi-table insert, which is the first time I've ever used one, and my Oracle skills are pretty poor in general, so now that the query is built and works fine I am sad to see it's quite slow.
I have checked numerous articles on optimizing, but the things I try don't seem to get me much better performance.
First let me describe my scenario to see if you agree that my performance is slow...
It's an INSERT ALL command, which ends up inserting into 5 separate tables, conditionally (at least 4 inserts, sometimes 5, but the fifth is the smallest table). Some stats on these tables as follows:
Source table: 5.3M rows, ~150 columns wide. Parallel degree 4. everything else default.
Target table 1: 0 rows, 27 columns wide. Parallel 4. everything else default.
Target table 2: 0 rows, 63 columns wide. Parallel 4. default.
Target table 3: 0 rows, 33 columns wide. Parallel 4. default.
Target table 4: 0 rows, 9 columns wide. Parallel 4. default.
Target table 5: 0 rows, 13 columns wide. Parallel 4. default.
The parallelism is just about the only customization I myself have done. Why 4? I don't know; it's pretty arbitrary, to be honest.
Indexes?
Table 1 has 3 index + PK.
Table 2 has 0 index + FK + PK.
Table 3 has 4 index + FK + PK
Table 4 has 3 index + FK + PK
Table 5 has 4 index + FK + PK
None of the indexes are anything crazy, maybe 3 or 4 of all of them are on multiple columns, 2-3 max. The rest are on single columns.
The query itself looks something like this:
insert /*+ append */ all
when 1=1 then
into table1 (...) values (...)
into table2 (...) values (...)
when a=b then
into table3 (...) values (...)
when a=c then
into table3 (...) values (...)
when p=q then
into table4(...) values (...)
when x=y then
into table5(...) values (...)
select .... from source_table
Hints I tried are with append, without append, and parallel (though adding parallel seemed to make the query behave in serial, according to my session browser).
Now for the performance:
It does about 8,000 rows per minute on table1. So that means it should also have that much in table2, table3 and table4, and then a subset of that in table5.
Does that seem normal or am I expecting too much?
I find articles talking about millions of rows per minute... Obviously I don't think I can achieve that much... but maybe 30k or so on each table is a reasonable goal?
If it seems my performance is slow, what else do you think I should try? Is there any information I may try to get to see if maybe it's a poorly configured database for this?
P.S. Is it possible to run this so that it commits every x rows or something? I had the heartbreaking event of a network issue giving me a sudden "ora-25402: transaction must roll back" after it had been running for 3.5 hours. So I lost all the progress it made and have to start over. Plus I wonder if the sheer amount of data being queued for commit/rollback is causing some of the problem?
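On the P.S.: INSERT ALL is a single statement, so it is all-or-nothing; to commit every x rows you would have to drive it from PL/SQL in chunks, for example by key range. A rough sketch (source_table and its id column are placeholders, and the column lists are elided just as in the post):

```sql
DECLARE
    c_chunk CONSTANT PLS_INTEGER := 100000;  -- rows per committed chunk
    v_lo NUMBER;
    v_hi NUMBER;
BEGIN
    SELECT MIN(id), MAX(id) INTO v_lo, v_hi FROM source_table;
    WHILE v_lo <= v_hi LOOP
        INSERT /*+ APPEND */ ALL
            WHEN 1=1 THEN INTO table1 (...) VALUES (...)
            WHEN a=b THEN INTO table3 (...) VALUES (...)
            SELECT ... FROM source_table
             WHERE id BETWEEN v_lo AND v_lo + c_chunk - 1;
        COMMIT;  -- a mid-run failure now loses only the current chunk
        v_lo := v_lo + c_chunk;
    END LOOP;
END;
/
```

The per-chunk COMMIT also keeps each direct-path (APPEND) insert legal, since a direct-path insert must be committed before the next statement touches the table.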
Edited by: trant on Jun 27, 2011 9:29 PM
Looks like there are about 54 sessions on my database; 7 of the sessions belong to me (2 taken by TOAD, 4 by my parallel slave sessions, and 1 by the master of those 4).
In v$session_event there are 546 rows, if i filter it to the SIDs of my current session and order my micro_wait_time desc:
510 events in waitclass Other 30670 9161 329759 10.75 196 3297590639 1736664284 1893977003 0 Other
512 events in waitclass Other 32428 10920 329728 10.17 196 3297276553 1736664284 1893977003 0 Other
243 events in waitclass Other 21513 5 329594 15.32 196 3295935977 1736664284 1893977003 0 Other
223 events in waitclass Other 21570 52 329590 15.28 196 3295898897 1736664284 1893977003 0 Other
241 row cache lock 1273669 0 42137 0.03 267 421374408 1714089451 3875070507 4 Concurrency
241 events in waitclass Other 614793 0 34266 0.06 12 342660764 1736664284 1893977003 0 Other
241 db file sequential read 13323 0 3948 0.3 13 39475015 2652584166 1740759767 8 User I/O
241 SQL*Net message from client 7 0 1608 229.65 1566 16075283 1421975091 2723168908 6 Idle
241 log file switch completion 83 0 459 5.54 73 4594763 3834950329 3290255840 2 Configuration
241 gc current grant 2-way 5023 0 159 0.03 0 1591377 2685450749 3871361733 11 Cluster
241 os thread startup 4 0 55 13.82 26 552895 86156091 3875070507 4 Concurrency
241 enq: HW - contention 574 0 38 0.07 0 378395 1645217925 3290255840 2 Configuration
512 PX Deq: Execution Msg 3 0 28 9.45 28 283374 98582416 2723168908 6 Idle
243 PX Deq: Execution Msg 3 0 27 9.1 27 272983 98582416 2723168908 6 Idle
223 PX Deq: Execution Msg 3 0 25 8.26 24 247673 98582416 2723168908 6 Idle
510 PX Deq: Execution Msg 3 0 24 7.86 23 235777 98582416 2723168908 6 Idle
243 PX Deq Credit: need buffer 1 0 17 17.2 17 171964 2267953574 2723168908 6 Idle
223 PX Deq Credit: need buffer 1 0 16 15.92 16 159230 2267953574 2723168908 6 Idle
512 PX Deq Credit: need buffer 1 0 16 15.84 16 158420 2267953574 2723168908 6 Idle
510 direct path read 360 0 15 0.04 4 153411 3926164927 1740759767 8 User I/O
243 direct path read 352 0 13 0.04 6 134188 3926164927 1740759767 8 User I/O
223 direct path read 359 0 13 0.04 5 129859 3926164927 1740759767 8 User I/O
241 PX Deq: Execute Reply 6 0 13 2.12 10 127246 2599037852 2723168908 6 Idle
510 PX Deq Credit: need buffer 1 0 12 12.28 12 122777 2267953574 2723168908 6 Idle
512 direct path read 351 0 12 0.03 5 121579 3926164927 1740759767 8 User I/O
241 PX Deq: Parse Reply 7 0 9 1.28 6 89348 4255662421 2723168908 6 Idle
241 SQL*Net break/reset to client 2 0 6 2.91 6 58253 1963888671 4217450380 1 Application
241 log file sync 1 0 5 5.14 5 51417 1328744198 3386400367 5 Commit
510 cursor: pin S wait on X 3 2 2 0.83 1 24922 1729366244 3875070507 4 Concurrency
512 cursor: pin S wait on X 2 2 2 1.07 1 21407 1729366244 3875070507 4 Concurrency
243 cursor: pin S wait on X 2 2 2 1.06 1 21251 1729366244 3875070507 4 Concurrency
241 library cache lock 29 0 1 0.05 0 13228 916468430 3875070507 4 Concurrency
241 PX Deq: Join ACK 4 0 0 0.07 0 2789 4205438796 2723168908 6 Idle
241 SQL*Net more data from client 6 0 0 0.04 0 2474 3530226808 2000153315 7 Network
241 gc current block 2-way 5 0 0 0.04 0 2090 111015833 3871361733 11 Cluster
241 enq: KO - fast object checkpoint 4 0 0 0.04 0 1735 4205197519 4217450380 1 Application
241 gc current grant busy 4 0 0 0.03 0 1337 2277737081 3871361733 11 Cluster
241 gc cr block 2-way 1 0 0 0.06 0 586 737661873 3871361733 11 Cluster
223 db file sequential read 1 0 0 0.05 0 461 2652584166 1740759767 8 User I/O
223 gc current block 2-way 1 0 0 0.05 0 452 111015833 3871361733 11 Cluster
241 latch: row cache objects 2 0 0 0.02 0 434 1117386924 3875070507 4 Concurrency
241 enq: TM - contention 1 0 0 0.04 0 379 668627480 4217450380 1 Application
512 PX Deq: Msg Fragment 4 0 0 0.01 0 269 77145095 2723168908 6 Idle
241 latch: library cache 3 0 0 0.01 0 243 589947255 3875070507 4 Concurrency
510 PX Deq: Msg Fragment 3 0 0 0.01 0 215 77145095 2723168908 6 Idle
223 PX Deq: Msg Fragment 4 0 0 0 0 145 77145095 2723168908 6 Idle
241 buffer busy waits 1 0 0 0.01 0 142 2161531084 3875070507 4 Concurrency
243 PX Deq: Msg Fragment 2 0 0 0 0 84 77145095 2723168908 6 Idle
241 latch: cache buffers chains 4 0 0 0 0 73 2779959231 3875070507 4 Concurrency
241 SQL*Net message to client 7 0 0 0 0 51 2067390145 2000153315 7 Network
(yikes, is there a way to wrap that in equivalent of other forums' tag?)
v$session_wait;
223 835 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 10 WAITING
241 22819 row cache lock cache id 13 000000000000000D mode 0 00 request 5 0000000000000005 3875070507 4 Concurrency -1 0 WAITED SHORT TIME
243 747 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 7 WAITING
510 10729 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 2 WAITING
512 12718 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 4 WAITING
v$sess_io:
223 0 5779 5741 0 0
241 38773810 2544298 15107 27274891 0
243 0 5702 5688 0 0
510 0 5729 5724 0 0
512 0 5682 5678 0 0 -
Problems with ReportDocument and multi-table dataset...
Hi,
First post here and hoping someone can help with me with this problem.
I have an ASP.NET app running using CR 2008 Basic to produce invoices - these are fairly complex and use multiple tables - data used is provided in lists and they work fine when displayed/printed/exported from the viewer.
My client now wants the ability to batch print these, so I'm trying to develop a page which will create a ReportDocument object and then print/export as required.
As soon as I started down this path I received the 'invalid logon params' problem - so to simplify things and get past this I've developed a simple 1-page report which takes data from a dataset I've populated and passed to it.
Here's the problem:
1) If I put one table in the dataset and add a field from that table to the report it works OK.
2) If I use a second table and field (without the first) it works OK.
3) as soon as I have both tables and the report has a field from each, I get the 'Invalid logon parameters' error.
The tables are fine (since I can use each individually), the report is fine (only has 1 or 2 fields and works with individual tables)... it's only when I have both tables that I get problems...
... this is driving me up the wall and if CR can't handle this there's no way it's going to handle the more complex invoices with subreports.
Can anyone suggest what I'm doing wrong... or tell me whether I'm just pushing CR beyond its capabilities?
The code I'm using to generate the ReportDocument is:
List<Invoice> rinv = Invoice.SelectOneContract(inv.Invoice_ID);
List<InvoiceLine> rline = InvoiceLine.SelectInvoice(inv.Invoice_ID);
DataSet ds = new DataSet();
ds.Tables.Add(InvoiceLineTable(rline));
ds.Tables.Add(InvoiceTable(rinv));
rdoc.FileName = Server.MapPath("~/Invoicing/test.rpt");
rdoc.SetDataSource(ds.Tables);
rdoc.ExportToDisk(ExportFormatType.PortableDocFormat, @"c:\test\test.pdf");
... so not rocket science and the error is always caused at the 'ExportToDisk' line.
Thanks in advance!
I've got nowhere trying to create a ReportDocument and pass it a multi-table dataset, so I decided to do it the 'dirty' way by adding all the controls to an aspx page and referring to them.
I know I can do this because the whole issue is printing a report that is currently viewed.
So ... I've now added the ObjectDataSources to the page as well as the CrystalReportSource and the CrystalViewer ...
.. I've tested the page and the report appears within the viewer with all the correct data ...
...so with a certain amount of excitement I've added the following line to the code behind file:
rptSrcContract.ReportDocument.PrintToPrinter(1, true, 1, 1);
... then I run the page and predictably the first thing that comes up is:
Unable to connect: incorrect log on parameters.
.. this is madness!
1) The data is retrieved and is in the correct format otherwise the report would not display.
2) the rptSrcContract.ReportDocument exists ... otherwise it would not display in the viewer.
So why does this want to logon to a database when the data is retrieved successfully and the security is running with a 'Network Services' account anyway????? (actually I know it has nothing to do with logging onto the database this is just the generic Crystal Reports 'I have a problem' message)
... sorry if this is a bit of an angry rant .. didn't get much sleep last night because of this ... all I want to be able to do is print a report from code .... surely this should be possible?? -
In which tables deleted schedule lines are stored
Hi All,
I want to know in which tables deleted schedule lines in VA02 VA32 are stored.
How can I get the deleted schedule line item quantity? Please give me a reply as early as possible.
Thanks,
Saritha
Hi Saritha,
To my understanding there is no table that stores the deleted schedule line data.
You have to identify a BAdI or user-exit that runs before the schedule line data is deleted in the VA02 program, and in that exit write code to populate the deleted schedule line data into your own custom tables (which you have to create). I have used the same method for deleted deliveries.
In SAP, CDHDR/CDPOS contain only changed data, not deleted data.
Thanks,
shyla -
In which table deleted user information is stored
Hi all,
I have made one user ZTEST in SAP through SU01. Its details are stored in USR01.
When I deleted this user, its details were deleted from the table USR01.
After deletion, in which table is the deleted user's information stored?
Is any BAPI available which gives the deleted table list?
Thanks & regards
Hi
You can get current database status using the following BAPIs-
BAPI_USER_EXISTENCE_CHECK
BAPI_USER_GETLIST
BAPI_USER_GET_DETAIL
Also check the report RPUAUD00 in which you can find out new infotype creation/modification etc.
Regards -
Remove multi table property defined in Parent descriptor
I tried to remove a multi table property (which is defined in a parent descriptor) in a child, but it does not get removed.
<!-- Parent descr -->
<item-descriptor name="parent" ...>
<table name="testTable" type="multi" id-column-name="id" multi-column-name="seq_num">
<property name="testProperty" column-name="property1_id" data-type="map" component-data-type="string">
</property>
</table>
</item-descriptor>
<!-- Child descr -->
<item-descriptor name="child" super-type="parent" sub-type-value="child" xml-combine="append">
<table name="testTable" type="multi" id-column-name="id" multi-column-name="seq_num">
<property name="testProperty" xml-combine="remove">
</property>
</table>
</item-descriptor>
Above code does not remove the property "testProperty" in child descriptor.
If I have an auxiliary table type, rather than a multi table type, then it works fine. Not sure why it is not working for a multi table property.
Does anyone have any idea?
Thanks!!!
Hi,
xml-combine is applicable when combining two or more xml definition files.
When you say it worked for you for a table of type auxiliary, were the properties in the same xml or in different xml files?
In your case, you are actually trying to override a property from the parent item descriptor in a child item descriptor.
I would suggest removing testProperty from the parent item descriptor and letting the child item descriptors define it.
Hope this helps.
Keep posting the updates / questions.
Thanks,
Gopinath Ramasamy