Costs for 1.000 rows of PL/SQL code?
Hi to all of you!
How much time do you think is necessary to develop one PL/SQL package (from first analysis to production) with 1.000 lines of code?
Assume averaged complexity, specification, etc. of the requirements.
All estimations are welcome...
Hi to all of you!
What do you think how much time is necessary to
develop one PL/SQL package (from analysis to
production) with 1.000 lines of code?
Depends on the task, I'd say. What is that PL/SQL
package supposed to do?
Averaged complexity, specification, .... of the
requirements.
Please define averaged complexity, specification, etc.
All answers are welcome...
Try this:
SELECT TRUNC(ABS(dbms_random.NORMAL) * 10) no_of_dev_days_for_package
FROM dual
It may be a wild guess, but is that some kind of in-house
quiz?
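For fun, the same tongue-in-cheek "estimator" can be reproduced outside the database. This is a sketch in Python, with `random.gauss` standing in for `dbms_random.NORMAL`; the function name is just borrowed from the column alias above:

```python
import random

def no_of_dev_days_for_package(seed=None):
    """Python equivalent of TRUNC(ABS(dbms_random.NORMAL) * 10):
    draw from a standard normal, take the absolute value, scale by
    ten, and truncate -- a half-normal 'estimate' in whole days."""
    rng = random.Random(seed)
    return int(abs(rng.gauss(0.0, 1.0)) * 10)
```

Like the SQL version, it produces a non-negative whole number of days with no relation whatsoever to the actual requirements.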
C.
Similar Messages
-
Pk index design for 500,000 row table
I have 3 tables as following relation:
http://www.visionfly.com/images/dgm.png
Intinerary - (1:N) FlightItem - (1:N) CabinPrice
DepCity, ArrCity and DepDate in Itinerary identify one row in that table. To reduce space in FlightItem and CabinPrice, I added an auto-increment field flightId as the PK of Itinerary, and I also added an index on DepCity, ArrCity, DepDate in Itinerary.
FlightId and FlightNo form the PK of FlightItem; FlightId is an FK of FlightItem. FlightId, FlightNo, Cabin and PriceType form the PK of CabinPrice; FlightId, FlightNo is an FK of CabinPrice.
Itinerary will keep about 10,000 rows.
FlightItem will keep about 50,000 rows.
CabinPrice will keep about 500,000 rows.
These 3 tables can be regarded as a whole. There are two operations on them. One is:
select * from itinerary a, flightitem f, cabinprice c where a.flightId=f.flightId and f.flightId=c.flightId and f.flightNo=c.flightNo
and a.depcity='zuh' and a.arrcity='sha' and a.depdate='2004-7-1'.
It uses the index on Itinerary.
At peak there are 100 SELECTs per second.
The other operation is delete-and-insert. I use cascading delete; the DELETE's WHERE clause is the same as the SELECT's. The peak is 50 operations per minute.
I intend to use EJB CMP to control them. Is this a good design for the above performance requirements? Any suggestion will be appreciated.
Stephen
This is the current design, based on MS SQL Server. We are planning to move to Oracle; ignore the data-type design.
Stephen -
MatchCode for 100.000 rows = dump
Hi!
I use FM F4IF_INT_TABLE_VALUE_REQUEST for a personal matchcode in a dynpro field.
But if the internal table has 100.000 rows, the system dumps.
How can I display the matchcode without a dump?
Thanks very much!
A matchcode where you have more than 100.000 rows is not a good matchcode!
You should provide at least some criterion to restrict the list. The maximum number of hits is only 4 digits in SAP, and you should always restrict your list to this maximum.
You do this by adding to your SELECT statement:
UP TO callcontrol-maxrecords ROWS -
How to Post Actual Costs for Overhead
Hi All,
Kindly let me know the steps or T-code for posting the actual costs for the overheads.
Hi,
Use the T-code KGI2 and post the actual costs for the overheads.
Award points if it is useful.
Thanks & Regards,
A.Anandarajan. -
Dear all,
With respect to the above subject, we are trying to calculate the costing
for the semi-finished material using T-code CK11N.
But most of the operations are carried out by job workers on our
machinery. So whenever we create a routing for an operation in our
factory, we specify the work center and the control key PP02. But
when we try to capture the operation cost for PP02, it takes
only the cost mentioned in the purchase info records or in the routing.
So please specify what changes to the fields are required so that we can capture the
exact cost of the components.
Best regards,
Shailesh
How about creating Activity Types for each machine and assigning these activities to the material BOM? The cost centre(s) assigned to the corresponding work centres will help calculate the unit cost for the sake of product costing.
Cheers. -
SQL*Plus two fetches for getting one row.
Hi all.
I have tested following script.
alter session set events '10046 trace name context forever, level 12';
select * from dual;
And got these results (extracts from the .trc files):
SQL*Plus: Release 11.2.0.1.0; (Oracle Version 10.2.0.4.0, 11.2.0.1.0)
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 0 0 0 1
total 4 0.00 0.00 0 0 0 1
SQL*Plus: Release 8.1.7.0.0; (Oracle Version 8.1.7.0.0)
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 0 1 4 1
total 5 0.00 0.00 0 1 4 1
Allround Automations PL/SQL Developer 8.0.4; (Oracle Version 10.2.0.4.0, 11.2.0.1.0)
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 0 0 1
total 3 0.00 0.00 0 0 0 1
Allround Automations PL/SQL Developer 8.0.4; (Oracle Version 8.1.7.0.0)
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 1 4 1
total 3 0.00 0.00 0 1 4 1
1) I can't figure out why SQL*Plus does TWO fetches to get ONE row (unlike PL/SQL Developer).
8i raw trace
PARSING IN CURSOR #1 len=31 dep=0 uid=0 oct=3 lid=0 tim=0 hv=3549852361 ad='4a0155c'
select 'hello world' from dual
END OF STMT
PARSE #1:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=0
BINDS #1:
EXEC #1:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4,tim=0
WAIT #1: nam='SQL*Net message to client' ela= 0 p1=1111838976 p2=1 p3=0
FETCH #1:c=0,e=0,p=0,cr=1,cu=4,mis=0,r=1,dep=0,og=4,tim=0
WAIT #1: nam='SQL*Net message from client' ela= 0 p1=1111838976 p2=1 p3=0
FETCH #1:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,tim=0
WAIT #1: nam='SQL*Net message to client' ela= 0 p1=1111838976 p2=1 p3=0
WAIT #1: nam='SQL*Net message from client' ela= 0 p1=1111838976 p2=1 p3=0
STAT #1 id=1 cnt=1 pid=0 pos=0 obj=195 op='TABLE ACCESS FULL DUAL '
11g raw trace
PARSING IN CURSOR #3 len=30 dep=0 uid=96 oct=3 lid=96 tim=1581355246985 hv=1158622143 ad='b8a1bcdc' sqlid='5h2yvx92hyaxz'
select 'hello world' from dual
END OF STMT
PARSE #3:c=0,e=130,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1388734953,tim=1581355246984
EXEC #3:c=0,e=40,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1388734953,tim=1581355247154
WAIT #3: nam='SQL*Net message to client' ela= 7 driver id=1111838976 #bytes=1 p3=0 obj#=-1 tim=1581355247252
FETCH #3:c=0,e=18,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=1,plh=1388734953,tim=1581355247324
STAT #3 id=1 cnt=1 pid=0 pos=1 obj=0 op='FAST DUAL (cr=0 pr=0 pw=0 time=0 us cost=2 size=0 card=1)'
WAIT #3: nam='SQL*Net message from client' ela= 193 driver id=1111838976 #bytes=1 p3=0 obj#=-1 tim=1581355247735
FETCH #3:c=0,e=2,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=1388734953,tim=1581355247800
WAIT #3: nam='SQL*Net message to client' ela= 5 driver id=1111838976 #bytes=1 p3=0 obj#=-1 tim=1581355247855
2) Is there any possibility to view the data provided by each fetch?
Thanks in advance!
P.S.
SQL> sho arraysize
arraysize 15
Thanks.
I have tested two statements.
select 'hello world' from dual where 1=1;
select 'hello world' from dual where 1=0;
When the query returns no data, there is only one SQL*Net roundtrip (and one fetch):
SQL> set autot on statistics
SQL> select 'hello world' from dual where 1=1;
'HELLOWORLD
hello world
Statistics
0 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
528 bytes sent via SQL*Net to client
492 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> select 'hello world' from dual where 1=0;
no rows selected
Statistics
0 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
329 bytes sent via SQL*Net to client
481 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed
But in both cases I found this sequence of bytes in the client trace:
] nsprecv: 00 00 36 01 00 00 00 00 |..6.....|
] nsprecv: 00 00 00 00 00 00 00 00 |........|
] nsprecv: 00 00 90 19 43 13 00 00 |....C...|
] nsprecv: 00 00 00 00 00 00 00 00 |........|
] nsprecv: 00 00 00 00 00 00 00 00 |........|
] nsprecv: 00 00 00 00 00 00 00 00 |........|
] nsprecv: 00 00 00 00 00 00 00 00 |........|
] nsprecv: 00 00 00 00 00 00 00 00 |........|
] nsprecv: 00 00 00 00 00 00 00 00 |........|
] nsprecv: 00 00 00 00 00 00 00 00 |........|
] nsprecv: 00 00 19 4F 52 41 2D 30 |...ORA-0|
] nsprecv: 31 34 30 33 3A 20 6E 6F |1403:.no|
] nsprecv: 20 64 61 74 61 20 66 6F |.data.fo|
] nsprecv: 75 6E 64 0A |und. |
In the first case it was in the 2nd packet; in the second case (query returns no data) it was part of the 1st packet. -
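A plausible model for the two-fetch behaviour, sketched as a hypothetical helper rather than anything Oracle ships: when a fetch returns exactly as many rows as the client asked for, the client cannot yet know it has reached end-of-data, so it must issue one more fetch to receive the ORA-01403 "no data found" marker; a zero-row query gets the marker on its very first fetch. SQL*Plus appears to request a single row on the first fetch, which is why a one-row result costs it two fetches, whereas a client whose first fetch asks for more than one row learns end-of-data immediately:

```python
def count_fetches(total_rows, first_fetch=1, arraysize=15):
    """Model the client fetch loop: keep issuing fetch calls until one
    returns fewer rows than requested (the server piggybacks the
    ORA-01403 end-of-data marker on that short fetch)."""
    fetches, remaining, request = 0, total_rows, first_fetch
    while True:
        got = min(request, remaining)   # rows this fetch actually returns
        remaining -= got
        fetches += 1
        if got < request:               # short fetch => end of data is known
            break
        request = arraysize             # subsequent fetches use arraysize
    return fetches
```

Under this model, `count_fetches(1)` is 2 (the SQL*Plus trace above) while `count_fetches(1, first_fetch=2)` is 1 (matching a client such as PL/SQL Developer that asks for more than one row up front).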
Extract SQL Query Results to 'xlsx' bogs down at 150,000 rows
Environment:
SQL*Developer 3.1.07.42 on Windows XP SP3
Oracle 11.2.0.3 EE on Solaris 10.5
I ran a query in a worksheet window and the first page of results came back in 10 seconds, whoo hooo!
I right-clicked the first column in the first row and selected 'Count Rows' and it returned 527,563 after thinking a bit.
I right-clicked 'Export', selected a format of 'xlsx', unchecked the box for 'Query Worksheet Name' and browsed to specify the output file directory (my local C: drive) and file name. I clicked 'Next' and then 'Finish'.
I watched the row counter at the bottom right of the window: it went very fast until it hit about 150,000 rows, then it got slower and slower and slower, well you get the picture, and I finally killed the process when it took over 15 seconds to get from 156,245 to 156,250.
Why would this be?
Additional information:
I ran the exact same query again and exported the same 527,563 rows using the 'xls' format instead of 'xlsx'; the process proceeded very quickly all the way to the end and completed successfully in just a few minutes. The resultant spreadsheet contained eight (8) worksheets, since it could only put 65,536 rows into each worksheet. This was acceptable to the user, who simply merged the data manually.
Are there issues with using 'xlsx' as the output format, as opposed to just using it as an input format?
Does SQL Developer try to create a spreadsheet with as many rows as the data, up to the max in Excel 2010 (which is more than 527,563)?
Thanks very much for any light shed on this issue. If I've left out any important details please let me know and I'll try to include them.
-gary
Hi gary,
You may have already seen one or more threads like the following on the issue of increased memory overhead for the Excel formats:
Re: Sql Developer 3.1 - Exporting a result set in xls generates an empty file
Basically SQL Developer uses a third-party API to read and write these Excel formats. There are distinct readers and formatters for each of the xls and xlsx forms.
There is a newer version of the API that supports streaming of xlsx Workbooks. Basically it achieves a much lower footprint by keeping in memory only rows that are within a sliding window, while the older, non-streaming version gives access to all rows in the document. The programmer may define the size of this window. I believe the newer API version was either not available or not stable during our 3.1 development cycle. Possibly a future SQL Developer version might use it.
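The thread does not name the third-party API, but the sliding-window idea it describes can be sketched generically. In this illustrative Python outline (all names hypothetical, CSV standing in for the xlsx writer), at most `window` rows ever sit in memory; older rows are flushed to the underlying stream as new ones arrive, which is why memory stays flat no matter how many rows are exported:

```python
import csv
import io
from collections import deque

class WindowedWriter:
    """Sketch of a sliding-window streaming writer: at most `window`
    rows are held in memory; older rows are flushed to the stream."""
    def __init__(self, stream, window=100):
        self._writer = csv.writer(stream)
        self._buf = deque()
        self._window = window

    def add_row(self, row):
        self._buf.append(row)
        if len(self._buf) > self._window:
            # flush the oldest buffered row once the window overflows
            self._writer.writerow(self._buf.popleft())

    def close(self):
        while self._buf:
            self._writer.writerow(self._buf.popleft())

out = io.StringIO()
w = WindowedWriter(out, window=100)
for i in range(10_000):
    w.add_row([i, f"row-{i}"])
    assert len(w._buf) <= 100   # memory stays bounded regardless of row count
w.close()
```

The non-streaming approach, by contrast, keeps every row of the document reachable until the file is written, which matches the memory growth reported for large xlsx exports.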
Regards,
Gary
SQL Developer Team -
My scenario is like this:
Hi, I have 2 fact tables, Fact1 and Fact2, and four dimension tables D1, D2, D3, D4, plus D1.1 and D1.2. The relations in the data model are as follows:
NOTE: D1.1 and D1.2 are derived from D1, so D1 might be a snowflake.
[( D1.. 1:M..> Fact 1 , D1.. 1:M..> Fact 2 ), (D2.. 1:M..> Fact 1 , D2.. 1:M..> Fact 2 ), ( D3.. 1: M.> Fact 1 , D3.. 1:M..> Fact 2 ),( D4.. 1:M..> Fact 1 , D4 ... 1:M..> Fact 2 )]
Now from D1 there is a child level like this: [D1 --(1:M)..> D1.1 and from D1.1.. 1:M..> D1.2.. 1:M..> D4]
Please help me model these for making a report of 10,000 rows, and also let me know for which tables I need to enable caching.
PS: There shouldn't be performance issue so please help me in modeling this.
Thanks in advance to the experts who have been helping me for a while.
Shouldn't be much of a problem with just this many rows...
Model something like this only Re: URGENT MODELING SNOW FLAKE SCHEMA
There are various ways of handling performance issues if any in OBIEE.
Go for a caching strategy for the complete warehouse. Make sure to purge it after every data load. If you have aggregate calculations at a higher level, then you can also go for aggregated tables in OBIEE for better performance.
http://www.rittmanmead.com/2007/10/using-the-obiee-aggregate-persistence-wizard/
Hope this is clear. Go ahead with the actual implementation and let us know in case you encounter any major issues.
Cheers -
Processing a cursor of 11,000 rows and Query completed with errors
So I have 3rd-party data that I have loaded into a SQL Server table. I am trying to determine whether the 3rd-party members reside in our database by using a cursor and going through all 11,000 rows, substituting the #Parameter values in a LIKE statement and trying
to keep it pretty broad. I tried running this in SQL Server Management Studio and it chugged for about 5 minutes and then just quit. I figured I was pushing the buffer limits within SQL Server Management Studio, so instead I created it as a stored
procedure, changed my Query Options/Results, and checked 'Discard results after execution'. This time it chugged away for 38 minutes and then stopped, saying
'Query completed with errors'. I did throw a COMMIT in there, thinking that the COMMIT would free up resources and I'd see the table being loaded in chunks, but that didn't seem to work.
I'm kind of at a loss here in terms of trying to tie back this data.
Can anyone suggest anything on this???
Thanks for your review and am hopeful for a reply.
WHILE (@@FETCH_STATUS=0)
BEGIN
SET @SQLString = 'INSERT INTO [dbo].[FBMCNameMatch]' + @NewLineChar;
SET @SQLString = @SQLString + ' (' + @NewLineChar;
SET @SQLString = @SQLString + ' [FBMCMemberKey],' + @NewLineChar;
SET @SQLString = @SQLString + ' [HFHPMemberNbr]' + @NewLineChar;
SET @SQLString = @SQLString + ' )' + @NewLineChar;
SET @SQLString = @SQLString + 'SELECT ';
SET @SQLString = @SQLString + CAST(@FBMCMemberKey AS VARCHAR) + ',' + @NewLineChar;
SET @SQLString = @SQLString + ' [member].[MEMBER_NBR]' + @NewLineChar;
SET @SQLString = @SQLString + 'FROM [Report].[dbo].[member] ' + @NewLineChar;
SET @SQLString = @SQLString + 'WHERE [member].[NAME_FIRST] LIKE ' + '''' + '%' + @FirstName + '%' + '''' + ' ' + @NewLineChar;
SET @SQLString = @SQLString + 'AND [member].[NAME_LAST] LIKE ' + '''' + '%' + @LastName + '%' + '''' + ' ' + @NewLineChar;
EXEC (@SQLString)
--SELECT @SQLReturnValue
SET @CountFBMCNameMatchINSERT = @CountFBMCNameMatchINSERT + 1
IF @CountFBMCNameMatchINSERT = 100
BEGIN
COMMIT;
SET @CountFBMCNameMatchINSERT = 0;
END
FETCH NEXT
FROM FBMC_Member_Roster_Cursor
INTO @MemberIdentity,
@FBMCMemberKey,
@ClientName,
@MemberSSN,
@FirstName,
@MiddleInitial,
@LastName,
@AddressLine1,
@AddressLine2,
@City,
@State,
@Zipcode,
@TelephoneNumber,
@BirthDate,
@Gender,
@EmailAddress,
@Relation
END
--SELECT *
--FROM [#TempTable_FBMC_Name_Match]
CLOSE FBMC_Member_Roster_Cursor;
DEALLOCATE FBMC_Member_Roster_Cursor;
GO
Hi ITBobbyP,
As Erland suggested, you can compare all rows at once. Basing on my understanding on your code, the below code can lead to the same output as yours but have a better performance than cursor I believe.
CREATE TABLE [MemberRoster]
(
MemberKey INT,
FirstName VARCHAR(99),
LastName VARCHAR(99)
);
INSERT INTO [MemberRoster]
VALUES
(1,'Eric','Zhang'),
(2,'Jackie','Cheng'),
(3,'Bruce','Lin');
CREATE TABLE [yourCursorTable]
(
MemberNbr INT,
FirstName VARCHAR(99),
LastName VARCHAR(99)
);
INSERT INTO [yourCursorTable]
VALUES
(1,'Bruce','Li'),
(2,'Jack','Chen');
SELECT * FROM [MemberRoster]
SELECT * FROM [yourCursorTable]
--INSERT INTO [dbo].[NameMatch]
--[MemberNbr],
--[MemberKey]
SELECT y.MemberNbr,
n.[MemberKey]
FROM [dbo].[MemberRoster] n
JOIN [yourCursorTable] y
ON n.[FirstName] LIKE '%'+y.FirstName+'%'
AND n.[LastName] LIKE '%'+y.LastName+'%'
DROP TABLE [MemberRoster], [yourCursorTable]
If you have any question, feel free to let me know.
Eric Zhang
TechNet Community Support -
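The set-based idea is easy to check outside SQL Server. This Python sketch (hypothetical names; note Python's `in` is case-sensitive, unlike the default SQL Server collation) mirrors the LIKE join on the demo data above: one pass over the cross product instead of 11,000 dynamic-SQL executions:

```python
# Sample data mirroring the T-SQL demo tables above.
member_roster = [(1, "Eric", "Zhang"), (2, "Jackie", "Cheng"), (3, "Bruce", "Lin")]
cursor_table  = [(1, "Bruce", "Li"),   (2, "Jack",  "Chen")]

def like_join(roster, candidates):
    """Set-based equivalent of the LIKE join: match every candidate
    against every roster row by substring, in a single pass."""
    return [
        (nbr, key)
        for nbr, cf, cl in candidates
        for key, rf, rl in roster
        if cf in rf and cl in rl   # '%x%' LIKE semantics via substring test
    ]

matches = like_join(member_roster, cursor_table)
# 'Bruce Li' matches 'Bruce Lin', and 'Jack Chen' matches 'Jackie Cheng'
```

A real database applies exactly this logic inside the join, but with the benefit of a single optimized execution plan rather than a per-row cursor round trip.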
Hi,
I am trying to display a report with 250,000 rows in BI Publisher 11.1.1.6.
Running the SQL request in TOAD takes 20 s.
From BI Publisher 11.1.1.6 this operation takes more than 2 hours without a result.
The temp directory shows an XML file which keeps growing (53 MB to 70 MB to 100 MB).
I configured the JVM (1.6_029) with the following parameters: -Xms512m -Xmx2048m -XX:MaxPermSize=3072m
My configuration is the following :
RHEL 5, 64-bit
8 GB RAM
100 GB file system and 50 GB tmp space for BI Publisher
4 CPUs
JDK parameters:
-Xms512m -Xmx2048m -XX:MaxPermSize=3072m -XX:+UseParallelGC
Total CPU usage : 25%
Live Threads : 85 threads
Used: 665 MB
Committed: 908 MB
GC time :
8.047 s on PS MarkSweep (3 collections)
8.625 s on PS Scavenge (242 collections)
Any ideas to increase performance, or anything else, will be appreciated.
Thank you
Mams
If you are generating a PDF output, select the "PDF Compression" option in the properties. Ensure you reduce all the log levels to "Low". Ensure there are no (or minimal) calculations/formulas in the report template.
-
Calculating values from row to row with pure sql?
Hello,
I'm searching for a way to calculate values from row to row with pure SQL. I need to create an amortisation table. Here is how it should work:
Known values at start: (they can be derived with an ordinary sql-statement)
- redemption amount RA
- number of payment terms NT
- annuity P (is constant in every month)
- interest rate IR
What has to be calculated:
First row:
RA1 = RA - P
Z1 = (RA1 * (IR/100/12))
T1 = P - Z1
2nd row
RA2 = RA1 - T1
Z2 = (RA2 * (IR/100/12))
T2 = P - Z2
and so on until NT is reached.
It should look like this (NT = payment term, P = annuity, Tn = redemption share, Zn = interest share, RAn = remaining amount):
NT | P      | Tn     | Zn     | RAn
1  | 372,17 | 262,9  | 109,27 | 22224,83
2  | 372,17 | 264,19 | 107,98 | 21961,93
3  | 372,17 | 265,49 | 106,68 | 21697,74
4  | 372,17 | 266,8  | 105,38 | 21432,25
5  | 372,17 | 268,11 | 104,06 | 21165,45
6  | 372,17 | 269,43 | 102,75 | 20897,34
7  | 372,17 | 270,75 | 101,42 | 20627,91
8  | 372,17 | 272,09 | 100,09 | 20357,16
9  | 372,17 | 273,42 | 98,75  | 20085,07
10 | 372,17 | 274,77 | 97,41  | 19811,65
11 | 372,17 | 276,12 | 96,06  | 19536,88
12 | 372,17 | 277,48 | 94,7   | 19260,76
13 | 372,17 | 278,84 | 93,33  | 18983,28
14 | 372,17 | 280,21 | 91,96  | 18704,44
15 | 372,17 | 281,59 | 90,59  | 18424,23
16 | 372,17 | 282,97 | 89,2   | 18142,64
17 | 372,17 | 284,36 | 87,81  | 17859,67
18 | 372,17 | 285,76 | 86,41  | 17575,31
19 | 372,17 | 287,17 | 85,01  | 17289,55
20 | 372,17 | 288,58 | 83,59  | 17002,38
21 | 372,17 | 290    | 82,18  | 16713,8
22 | 372,17 | 291,42 | 80,75  | 16423,8
23 | 372,17 | 292,86 | 79,32  | 16132,38
24 | 372,17 | 294,3  | 77,88  | 15839,52
25 | 372,17 | 295,74 | 76,43  | 15545,22
26 | 372,17 | 297,2  | 74,98  | 15249,48
27 | 372,17 | 298,66 | 73,52  | 14952,28
28 | 372,17 | 300,13 | 72,05  | 14653,62
29 | 372,17 | 301,6  | 70,57  | 14353,49
30 | 372,17 | 303,09 | 69,09  | 14051,89
31 | 372,17 | 304,58 | 67,6   | 13748,8
32 | 372,17 | 306,07 | 66,1   | 13444,22
33 | 372,17 | 307,58 | 64,6   | 13138,15
34 | 372,17 | 309,09 | 63,08  | 12830,57
35 | 372,17 | 310,61 | 61,56  | 12521,48
36 | 372,17 | 312,14 | 60,04  | 12210,87
37 | 372,17 | 313,67 | 58,5   | 11898,73
38 | 372,17 | 315,21 | 56,96  | 11585,06
39 | 372,17 | 316,76 | 55,41  | 11269,85
40 | 372,17 | 318,32 | 53,85  | 10953,09
41 | 372,17 | 319,89 | 52,29  | 10634,77
42 | 372,17 | 321,46 | 50,71  | 10314,88
43 | 372,17 | 323,04 | 49,13  | 9993,42
44 | 372,17 | 324,63 | 47,55  | 9670,38
45 | 372,17 | 326,22 | 45,95  | 9345,75
46 | 372,17 | 327,83 | 44,35  | 9019,53
47 | 372,17 | 329,44 | 42,73  | 8691,7
48 | 372,17 | 331,06 | 41,11  | 8362,26
I would appreciate every help and idea to solve the problem solely with sql.
Thanks and regards
Carsten
This can be done using the MODEL clause and/or recursive WITH (sometimes maybe both).
Regards
Etbin
with
rec_proc(nt,i,ra,p,ir,z,t) as
(select nt,i,ra - p,p,ir,round((ra - p) * 0.01 * ir / 12,2),p - round((ra - p) * 0.01 * ir / 12,2)
 from (select 48 nt,22597 ra,372.17 p,5.9 ir,0 z,0 t,1 i
       from dual)
 union all
 select nt,i + 1,ra - t,p,ir,round((ra - t) * 0.01 * ir / 12,2),p - round((ra - t) * 0.01 * ir / 12,2)
 from rec_proc
 where i < nt
)
select * from rec_proc;
try to adjust initial values and rounding please
NT | I  | RA       | P      | IR  | Z      | T
48 | 1  | 22224.83 | 372.17 | 5.9 | 109.27 | 262.9
48 | 2  | 21961.93 | 372.17 | 5.9 | 107.98 | 264.19
48 | 3  | 21697.74 | 372.17 | 5.9 | 106.68 | 265.49
48 | 4  | 21432.25 | 372.17 | 5.9 | 105.38 | 266.79
48 | 5  | 21165.46 | 372.17 | 5.9 | 104.06 | 268.11
48 | 6  | 20897.35 | 372.17 | 5.9 | 102.75 | 269.42
48 | 7  | 20627.93 | 372.17 | 5.9 | 101.42 | 270.75
48 | 8  | 20357.18 | 372.17 | 5.9 | 100.09 | 272.08
48 | 9  | 20085.1  | 372.17 | 5.9 | 98.75  | 273.42
48 | 10 | 19811.68 | 372.17 | 5.9 | 97.41  | 274.76
48 | 11 | 19536.92 | 372.17 | 5.9 | 96.06  | 276.11
48 | 12 | 19260.81 | 372.17 | 5.9 | 94.7   | 277.47
48 | 13 | 18983.34 | 372.17 | 5.9 | 93.33  | 278.84
48 | 14 | 18704.5  | 372.17 | 5.9 | 91.96  | 280.21
48 | 15 | 18424.29 | 372.17 | 5.9 | 90.59  | 281.58
48 | 16 | 18142.71 | 372.17 | 5.9 | 89.2   | 282.97
48 | 17 | 17859.74 | 372.17 | 5.9 | 87.81  | 284.36
48 | 18 | 17575.38 | 372.17 | 5.9 | 86.41  | 285.76
48 | 19 | 17289.62 | 372.17 | 5.9 | 85.01  | 287.16
48 | 20 | 17002.46 | 372.17 | 5.9 | 83.6   | 288.57
48 | 21 | 16713.89 | 372.17 | 5.9 | 82.18  | 289.99
48 | 22 | 16423.9  | 372.17 | 5.9 | 80.75  | 291.42
48 | 23 | 16132.48 | 372.17 | 5.9 | 79.32  | 292.85
48 | 24 | 15839.63 | 372.17 | 5.9 | 77.88  | 294.29
48 | 25 | 15545.34 | 372.17 | 5.9 | 76.43  | 295.74
48 | 26 | 15249.6  | 372.17 | 5.9 | 74.98  | 297.19
48 | 27 | 14952.41 | 372.17 | 5.9 | 73.52  | 298.65
48 | 28 | 14653.76 | 372.17 | 5.9 | 72.05  | 300.12
48 | 29 | 14353.64 | 372.17 | 5.9 | 70.57  | 301.6
48 | 30 -
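As a cross-check on the recursive WITH above, the same row-by-row recurrence can be expressed in plain Python. This is a sketch using the thread's starting values (RA = 22597, P = 372.17, IR = 5.9, NT = 48) and Etbin's initialisation, where the first row's remaining amount is already reduced by one annuity:

```python
def amortization_schedule(ra=22597.0, p=372.17, ir=5.9, nt=48):
    """Row-by-row recurrence from the thread: Zn is the interest share
    of the remaining amount RAn, Tn = P - Zn is the redemption share,
    and RA(n+1) = RAn - Tn.  Returns (n, ra, p, z, t) tuples."""
    rows = []
    ra = round(ra - p, 2)                 # first row: RA1 = RA - P
    for n in range(1, nt + 1):
        z = round(ra * ir / 100 / 12, 2)  # Zn = RAn * (IR/100/12)
        t = round(p - z, 2)               # Tn = P - Zn
        rows.append((n, ra, p, z, t))
        ra = round(ra - t, 2)             # RA(n+1) = RAn - Tn
    return rows
```

The first rows reproduce the posted figures (RA1 = 22224.83, Z1 = 109.27, T1 = 262.90; RA2 = 21961.93, Z2 = 107.98, T2 = 264.19); small divergences in later rows come from where the rounding is applied, as Etbin already noted.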
SSRS and a report of 3,000,000 rows
I have SQL Server 2008 R2 with SSRS. I have created an SSRS report that may contain up to 3,000,000 rows.
When I tried to generate such huge report I saw the following picture:
The stored procedure (one that brings the data into the Report) worked 50 seconds
After this, the SSRS ReportingServicesService.exe started to consume a lot of memory. Its working set grew to 11 GB. It took 6 minutes, and then the report generation failed with the following error message:
An error has occurred during report processing. (rsProcessingAborted)
There is not enough space on the disk.
“There is not enough space on the disk.” – this was probably about the disk drive on that server where the Windows page file was mapped into. The drive had 14 GB of free space.
A NOTE: the report was not designed as a single-page report. It is divided on pages by 40 rows. When I try to generate the same report with 10,000 rows – it takes just 1 minute.
The question is: can this be fixed somehow? -
Extracting differences in attributes for a given row in two different tables
Hope you are doing great. Below is the problem I am facing. There are two tables:
1: employeeinfo (HR maintains this table and we cannot alter its records)
2: cmp_employee_info (I have created this table with the limited set of fields we need to monitor for changes; the records in this table are limited).
Procedure:
1: The second table stores the same information as employeeinfo but is not updated by HR, while the first table can be updated by HR at any point.
2: A program checks every day for differences between employeeinfo and cmp_employee_info. If any changes are found, it needs to alert us via email about the changes, and it also updates the cmp_employee_info table.
I have this query in place.
(SELECT cmp_employeeinfo_rms.logonid,cmp_employeeinfo_rms.firstname,cmp_employeeinfo_rms.lastname,cmp_employeeinfo_rms.emailaddr,cmp_employeeinfo_rms.Locationname,cmp_employeeinfo_rms.termdate,cmp_employeeinfo_rms.persontype,cmp_employeeinfo_rms.jobclass,cmp_employeeinfo_rms.assignmentstatus,cmp_employeeinfo_rms.mgrlogonid FROM cmp_employeeinfo_rms
LEFT OUTER JOIN employeeinfo
ON cmp_employeeinfo_rms.logonid = employeeinfo.logonid
where (cmp_employeeinfo_rms.mgrlogonid != employeeinfo.mgrlogonid or cmp_employeeinfo_rms.lastname != employeeinfo.lastname or cmp_employeeinfo_rms.locationname != employeeinfo.locationname or cmp_employeeinfo_rms.termdate != employeeinfo.termdate or cmp_employeeinfo_rms.persontype != employeeinfo.persontype or cmp_employeeinfo_rms.jobclass != employeeinfo.jobclass or cmp_employeeinfo_rms.assignmentstatus != employeeinfo.assignmentstatus))
UNION
(SELECT employeeinfo.logonid,employeeinfo.firstname,employeeinfo.lastname,employeeinfo.emailaddr,employeeinfo.Locationname,employeeinfo.termdate,employeeinfo.persontype,employeeinfo.jobclass,employeeinfo.assignmentstatus,employeeinfo.mgrlogonid FROM employeeinfo JOIN cmp_employeeinfo_rms on cmp_employeeinfo_rms.logonid=employeeinfo.logonid)
The output to the above query is.
LOGONID | FIRSTNAME | LASTNAME | EMAIL   | LOCATION | TERMDATE | TYPE      | DESIGNATION | ASSIGNMENTSTATUS  | MGRID
TEST1   | FIRST1    | LAST1    | EMAIL1  | L1       | NULL     | Associate | D1          | Active Assignment | M1
TEST2   | FIRST2    | LAST2    | EMAIL2  | L2       | NULL     | Associate | D2          | Active Assignment | M2
TEST3   | FIRST3    | LAST3    | EMAIL3  | L3       | NULL     | Associate | D3          | Active Assignment | M3
TEST4   | FIRST4    | LAST4    | EMAIL4  | L4       | NULL     | Associate | D4          | Active Assignment | M4
TEST5   | FIRST5    | LAST5    | EMAIL5  | L5       | NULL     | Associate | D5          | Active Assignment | M5
TEST6   | FIRST6    | LAST6    | EMAIL6  | L6       | NULL     | Associate | D6          | Active Assignment | M6
TEST7   | FIRST7    | LAST7    | EMAIL7  | L7       | NULL     | Associate | D7          | Active Assignment | M7
TEST8   | FIRST8    | LAST8    | EMAIL8  | L8       | NULL     | Associate | D8          | Active Assignment | M8
TEST9   | FIRST9    | LAST9    | EMAIL9  | L9       | NULL     | Associate | D9          | Active Assignment | M9
TEST10  | FIRST10   | LAST10   | EMAIL10 | L10      | NULL     | Associate | D10         | Active Assignment | M10
TEST11  | FIRST11   | LAST11   | EMAIL11 | L11      | NULL     | Associate | D11         | Active Assignment | M11
TEST12  | FIRST12   | LAST12   | EMAIL12 | L12      | NULL     | Associate | D12         | Active Assignment | M12
TEST13  | FIRST13   | LAST13   | EMAIL13 | L13      | NULL     | Associate | D13         | Active Assignment | M13
TEST14  | FIRST14   | LAST14   | EMAIL14 | L14      | NULL     | Associate | D14         | Active Assignment | M14
TEST15  | FIRST15   | LAST15   | EMAIL15 | L15      | NULL     | Associate | D15         | Active Assignment | M15
TEST16  | FIRST16   | LAST16   | EMAIL16 | L16      | NULL     | Associate | D16         | Active Assignment | M16
TEST17  | FIRST17   | LAST17   | EMAIL17 | L17      | NULL     | Associate | D17         | Active Assignment | M17
TEST18  | FIRST18   | LAST18   | EMAIL18 | L18      | NULL     | Associate | D18         | Active Assignment | M18
TEST18  | FIRST18   | LAST18   | EMAIL18 | L18      | NULL     | Outsorced | D18         | Active Assignment | M18
TEST19  | FIRST19   | LAST19   | EMAIL19 | L19      | NULL     | Associate | D19         | Active Assignment | M19
TEST20  | FIRST20   | LAST20   | EMAIL20 | L20      | NULL     | Associate | D20         | Active Assignment | M20
TEST21  | FIRST21   | LAST21   | EMAIL21 | L21      | NULL     | Associate | D21         | Active Assignment | M21
TEST22  | FIRST22   | LAST22   | EMAIL22 | L22      | NULL     | Associate | D22         | Active Assignment | M22
TEST23  | FIRST23   | LAST23   | EMAIL23 | L23      | NULL     | Associate | D23         | Active Assignment | M23
As you will have noticed, the following record appears twice, because there is a change in persontype:
TEST18 | FIRST18 | LAST18 | EMAIL18 | L18 | NULL | Associate | D18 | Active Assignment | M18
TEST18 | FIRST18 | LAST18 | EMAIL18 | L18 | NULL | Outsorced | D18 | Active Assignment | M18
I wish to extract any such change along with the field that changed. After recording the change, I would update the second table as well.
Edited by: Prashant_Dixit on Apr 20, 2012 10:25 PM
Prashant_Dixit wrote:
Hi,
I just need a report of what the changes are.
As another poster has suggested above, you may want to have a look at using a materialized view log instead of your approach. The MV log approach is better as it
a) is maintained by database
b) is easy to setup
c) is easy to maintain
d) is easy to query.
Following is a demo of how you can use it.
SQL> select * from v$version ;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
SQL> create user a identified by a ;
User created.
SQL> grant create session, create table to a ;
Grant succeeded.
SQL> alter user a quota unlimited on users ;
User altered.
SQL> create table a.empinfo as select * from hr.employees ;
Table created.
SQL> conn a/a
Connected.
SQL> select * from tab ;
TNAME TABTYPE CLUSTERID
EMPINFO TABLE
SQL> alter table empinfo add constraint e_pk primary key (employee_id) ;
Table altered.
SQL> desc empinfo
Name Null? Type
EMPLOYEE_ID NOT NULL NUMBER(6)
FIRST_NAME VARCHAR2(20)
LAST_NAME NOT NULL VARCHAR2(25)
EMAIL NOT NULL VARCHAR2(25)
PHONE_NUMBER VARCHAR2(20)
HIRE_DATE NOT NULL DATE
JOB_ID NOT NULL VARCHAR2(10)
SALARY NUMBER(8,2)
COMMISSION_PCT NUMBER(2,2)
MANAGER_ID NUMBER(6)
DEPARTMENT_ID NUMBER(4)
SQL> create materialized view log on empinfo with (salary) including new values ;
Materialized view log created.
SQL> select * from tab ;
TNAME TABTYPE CLUSTERID
EMPINFO TABLE
MLOG$_EMPINFO TABLE
RUPD$_EMPINFO TABLE
SQL> desc mlog$_empinfo
Name Null? Type
EMPLOYEE_ID NUMBER(6)
SALARY NUMBER(8,2)
SNAPTIME$$ DATE
DMLTYPE$$ VARCHAR2(1)
OLD_NEW$$ VARCHAR2(1)
CHANGE_VECTOR$$ RAW(255)
XID$$ NUMBER
SQL> update empinfo set salary = salary * 1.2 where department_id = 10 ;
1 row updated.
SQL> update empinfo set salary = salary * 1.1 where department_id = 30 ;
6 rows updated.
SQL> commit ;
Commit complete.
SQL> select count(*) from mlog$_empinfo ;
COUNT(*)
14
SQL> select employee_id, salary, dmltype$$ from mlog$_empinfo order by snaptime$$ ;
EMPLOYEE_ID SALARY D
200 4400 U
200 5280 U
114 11000 U
114 12100 U
115 3100 U
115 3410 U
119 2750 U
116 3190 U
117 2800 U
117 3080 U
118 2600 U
EMPLOYEE_ID SALARY D
118 2860 U
119 2500 U
116 2900 U
14 rows selected.
Hope this helps.
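If a full MV-log setup is more than the poster needs, the field-level diff report can also be sketched client-side. This is a hypothetical Python outline (dict-per-row, keyed on `logonid`), not anything from the thread, showing the comparison the daily program would have to perform:

```python
def field_changes(hr_rows, shadow_rows, key="logonid"):
    """Report which fields changed per employee between the HR table
    (hr_rows) and the shadow copy (shadow_rows).  Each row is a dict;
    the result maps key -> {field: (old_value, new_value)}."""
    shadow = {r[key]: r for r in shadow_rows}
    changes = {}
    for row in hr_rows:
        old = shadow.get(row[key])
        if old is None:
            continue  # new employee, not in the shadow copy yet
        diff = {f: (old[f], row[f]) for f in row if old.get(f) != row[f]}
        if diff:
            changes[row[key]] = diff
    return changes
```

For the TEST18 example above, the report would name `persontype` with its old value 'Associate' and new value 'Outsorced', which is exactly the per-field detail the UNION query cannot produce.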
How to set a MessageTextInput to be Read Only for a specific row?
Hi,
In Benefits Self Service, particularly the Update Beneficiaries page, it lists all your eligible Beneficiaries including yourself. The table has the following columns displayed for each beneficiary: Beneficiary, Relationship, Social Security Number, Primary %, Contingent %, Clear. This is the page where you allocate the Primary % and the Contingent %. The Primary % and Contingent % are MessageText Input.
We have a requirement that we wouldn't want an employee to designate himself/herself as a beneficiary. However, the employee is being listed as a beneficiary along with all the other eligible beneficiaries. Oracle says it is an intended functionality to include the employee in the beneficiary. Is there a way thru Controller Object extension to set the Primary % and Contingent % as Read Only but only for the row corresponding to the employee to prevent the employee from accidentally allocating himself/herself as a beneficiary.
I know there is the SPEL functionality, but this requires me to add a transient attribute to the View Object corresponding to that table. But every time I make a change to the View Object, either by adding a transient attribute or some other change, I end up getting an error on the standard Relation attribute of that view object. The error is something like "Relation set attribute failed for View Object". I cannot get past this error and I'm not sure what's causing it. I see other people in this forum have encountered the same problem when extending view objects, but no specific solution was offered nor any clear explanation of the cause.
So I thought if there's any way this could be done thru Controller Object extension. If so, please let me know how. The challenge for me I think is finding the bean corresponding to those Message Text Input but only for a specific row in the table.
Thanks,
Ronaldo
Hi,
I also tried extending the View Object without changing anything in the SQL, so it is exactly the same as the standard view object, but I still get "Attribute set for Relation in view object BeneficiaryPeopleVO1 failed". Based on this, extending the view object, whether any change has been made or not, affects the standard transient Relation attribute.
Here is the standard XML definition of the View Object, if anyone is curious:
<ViewObject
Name="BeneficiaryPeopleVO"
BindingStyle="Oracle"
CustomQuery="true"
RowClass="oracle.apps.ben.selfservice.enrollment.server.BeneficiaryPeopleVORowImpl"
ComponentClass="oracle.apps.ben.selfservice.enrollment.server.BeneficiaryPeopleVOImpl"
MsgBundleClass="oracle.jbo.common.JboResourceBundle"
FetchMode="FETCH_AS_NEEDED"
FetchSize="10"
UseGlueCode="false" >
<SQLQuery><![CDATA[
SELECT pcr.person_id person_id,
pen.prtt_enrt_rslt_id prtt_enrt_rslt_id,
con.first_name ||' '||con.last_name || ' ' || con.suffix Beneficiary,
con.national_identifier Ssn,
sum(decode(pbn.prmry_cntngnt_cd,'PRIMY',pbn.pct_dsgd_num,0)) primary_pct,
sum(decode(pbn.prmry_cntngnt_cd,'CNTNGNT',pbn.pct_dsgd_num,0)) contingent_pct,
con.person_id bnf_person_id,
con.business_group_id,
pen.per_in_ler_id,
pbn.pl_bnf_id pl_bnf_id,
pbn.object_version_number object_version_number,
pbn.effective_start_date,
pcr.contact_type contact_type,
con.full_name beneficiary_full_name,
sum(decode(pbn.prmry_cntngnt_cd,'PRIMY',pbn.pct_dsgd_num,0)) Db_Primary_pct,
sum(decode(pbn.prmry_cntngnt_cd,'CNTNGNT',pbn.pct_dsgd_num,0)) db_contingent_pct,
DECODE(pbn.pl_bnf_id, null, 'DeleteIconDisabled', 'DeleteIconEnabled') delete_switcher,
pen.pl_id pl_id,
pen.oipl_id oipl_id,
nvl((select 'Y' from dual where (to_date(:1 , 'rrrr/mm/dd') between
nvl(pcr.date_start,to_date(:2 , 'rrrr/mm/dd')) and
nvl(pcr.date_end,to_date(:3 , 'rrrr/mm/dd'))) and
(pil.lf_evt_ocrd_dt <= nvl(con.date_of_death,pil.lf_evt_ocrd_dt))), 'N') rel_exists_flag
FROM per_people_f con,
per_contact_relationships pcr,
ben_pl_bnf_f pbn,
ben_prtt_enrt_rslt_f pen,
ben_per_in_ler pil
WHERE pcr.personal_flag = 'Y'
AND (pbn.pl_bnf_id is not null or
(to_date(:4 ,'rrrr/mm/dd') between
nvl(pcr.date_start,to_date(:5 ,'rrrr/mm/dd'))
AND nvl(pcr.date_end,to_date(:6 ,'rrrr/mm/dd'))
and pil.lf_evt_ocrd_dt <= nvl(con.date_of_death,pil.lf_evt_ocrd_dt))) --Bug 4297137
AND pcr.contact_person_id = con.person_id
and pen.person_id = pcr.person_id
AND pbn.bnf_person_id (+) = con.person_id
AND to_date(:7 ,'rrrr/mm/dd') between pbn.effective_start_date (+)
AND pbn.effective_end_date (+)
AND pcr.person_id = :8
and pen.prtt_enrt_rslt_id = :9
and to_date(:10 , 'rrrr/mm/dd') between con.effective_start_date and con.effective_end_date
and to_date(:11 , 'rrrr/mm/dd') between pen.effective_start_date and pen.effective_end_date
and pil.per_in_ler_id = pen.per_in_ler_id
and pil.per_in_ler_stat_cd NOT IN('VOIDD', 'BCKDT')
and pbn.prtt_enrt_rslt_id (+) = :12
and ((to_date(:13 ,'rrrr/mm/dd') between
nvl(pcr.date_start,to_date(:14 ,'rrrr/mm/dd')) and
nvl(pcr.date_end,to_date(:15 ,'rrrr/mm/dd'))) or
((pcr.date_start = (select max(pcr2.date_start)
from per_contact_relationships pcr2
where pcr2.contact_person_id = pcr.contact_person_id
and pcr2.person_id = pcr.person_id
and pcr2.personal_flag = 'Y')) and
not exists (select null
from per_contact_relationships pcr3
where pcr3.contact_person_id = pcr.contact_person_id
and pcr3.person_id = pcr.person_id
and pcr3.personal_flag = 'Y'
and to_date(:16 ,'rrrr/mm/dd') between
nvl(pcr3.date_start,to_date(:17 ,'rrrr/mm/dd'))
and nvl(pcr3.date_end,to_date(:18 ,'rrrr/mm/dd')))
and (pbn.pl_bnf_id is null or
exists (select null from ben_per_in_ler pil2
where pil2.per_in_ler_id = pbn.per_in_ler_id
and pil2.per_in_ler_stat_cd not in ('VOIDD','BCKDT')))
GROUP BY pcr.person_id,
pen.prtt_enrt_rslt_id,
con.last_name,con.first_name,con.suffix,
con.full_name,
con.national_identifier,
pcr.contact_type,
con.date_of_birth,
con.person_id,
con.business_group_id,
pen.per_in_ler_id,
pl_bnf_id,
pbn.object_version_number,
pbn.effective_start_date,
pen.pl_id,
pen.oipl_id,
pcr.date_start,
pcr.date_end,
pil.lf_evt_ocrd_dt,
con.date_of_death
union
SELECT to_number(null) person_id,
pen.prtt_enrt_rslt_id prtt_enrt_rslt_id,
con.first_name ||' '||con.last_name || ' ' || con.suffix Beneficiary,
con.national_identifier Ssn,
sum(decode(pbn.prmry_cntngnt_cd,'PRIMY',pbn.pct_dsgd_num,0)) primary_pct,
sum(decode(pbn.prmry_cntngnt_cd,'CNTNGNT',pbn.pct_dsgd_num,0)) contingent_pct,
con.person_id bnf_person_id,
con.business_group_id,
pen.per_in_ler_id,
pbn.pl_bnf_id pl_bnf_id,
pbn.object_version_number object_version_number,
pbn.effective_start_date,
'SLF' contact_type,
con.full_name beneficiary_full_name,
sum(decode(pbn.prmry_cntngnt_cd,'PRIMY',pbn.pct_dsgd_num,0)) Db_Primary_pct,
sum(decode(pbn.prmry_cntngnt_cd,'CNTNGNT',pbn.pct_dsgd_num,0)) db_contingent_pct,
DECODE(pbn.pl_bnf_id, null, 'DeleteIconDisabled', 'DeleteIconEnabled') delete_switcher,
pen.pl_id pl_id,
pen.oipl_id oipl_id,
'Y' rel_exists_flag
FROM per_people_f con,
ben_pl_bnf_f pbn,
ben_prtt_enrt_rslt_f pen,
ben_per_in_ler pil
WHERE
pbn.bnf_person_id (+) = con.person_id
AND to_date(:19 ,'rrrr/mm/dd') between pbn.effective_start_date (+)
AND pbn.effective_end_date (+)
AND con.person_id = :20
and con.person_id = pen.person_id
and pen.prtt_enrt_rslt_id = :21
and to_date(:22 , 'rrrr/mm/dd') between con.effective_start_date and con.effective_end_date
and to_date(:23 , 'rrrr/mm/dd') between pen.effective_start_date and pen.effective_end_date
and pil.per_in_ler_id = pen.per_in_ler_id
and pil.per_in_ler_stat_cd NOT IN('VOIDD', 'BCKDT')
and (pbn.pl_bnf_id is not null or pil.lf_evt_ocrd_dt <= nvl(con.date_of_death,pil.lf_evt_ocrd_dt)) --Bug 4297137
and pbn.prtt_enrt_rslt_id (+) = :24
and (pbn.pl_bnf_id is null or
exists (select null from ben_per_in_ler pil2
where pil2.per_in_ler_id = pbn.per_in_ler_id
and pil2.per_in_ler_stat_cd not in ('VOIDD','BCKDT')))
GROUP BY pen.prtt_enrt_rslt_id,
con.last_name,con.first_name,con.suffix,
con.full_name,
con.national_identifier,
con.date_of_birth,
con.person_id,
con.business_group_id,
pen.per_in_ler_id,
pl_bnf_id,
pbn.object_version_number,
pbn.effective_start_date,
pen.pl_id,
pen.oipl_id
ORDER BY 3,14
]]></SQLQuery>
<DesignTime>
<Attr Name="_isCodegen" Value="true" />
<Attr Name="_version" Value="9.0.3.13.75" />
<Attr Name="_CodeGenFlagNew" Value="36" />
<Attr Name="_rowSuperClassName" Value="oracle.apps.fnd.framework.server.OAPlsqlViewRowImpl" />
<Attr Name="_objectSuperClassName" Value="oracle.apps.fnd.framework.server.OAPlsqlViewObjectImpl" />
</DesignTime>
<ViewAttribute
Name="PersonId"
IsQueriable="false"
IsPersistent="false"
Precision="15"
Scale="0"
Type="oracle.jbo.domain.Number"
AliasName="PERSON_ID"
ColumnType="$none$"
Expression="'See the SQL...'"
SQLType="NUMERIC" >
</ViewAttribute>
<ViewAttribute
Name="PrttEnrtRsltId"
IsQueriable="false"
IsPersistent="false"
Precision="15"
Scale="0"
Type="oracle.jbo.domain.Number"
AliasName="PRTT_ENRT_RSLT_ID"
ColumnType="$none$"
Expression="'See the SQL...'"
SQLType="NUMERIC" >
</ViewAttribute>
<ViewAttribute
Name="Beneficiary"
IsQueriable="false"
IsPersistent="false"
Precision="332"
Type="java.lang.String"
AliasName="BENEFICIARY"
ColumnType="$none$"
Expression="'See the SQL...'"
SQLType="VARCHAR" >
</ViewAttribute>
<ViewAttribute
Name="Ssn"
IsQueriable="false"
IsPersistent="false"
Precision="30"
Type="java.lang.String"
AliasName="SSN"
ColumnType="$none$"
Expression="'See the SQL...'"
SQLType="VARCHAR" >
</ViewAttribute>
<ViewAttribute
Name="PrimaryPct"
IsQueriable="false"
IsPersistent="false"
Precision="15"
Type="oracle.jbo.domain.Number"
AliasName="PRIMARYPCT"
ColumnType="$none$"
Expression="'See the SQL...'"
SQLType="NUMERIC" >
</ViewAttribute>
<ViewAttribute
Name="ContingentPct"
IsQueriable="false"
IsPersistent="false"
Precision="15"
Scale="0"
Type="oracle.jbo.domain.Number"
AliasName="CONTINGENTPCT"
ColumnType="$none$"
Expression="'See the SQL...'"
SQLType="NUMERIC" >
</ViewAttribute>
<ViewAttribute
Name="BnfPersonId"
IsQueriable="false"
IsPersistent="false"
Precision="15"
Scale="0"
Type="oracle.jbo.domain.Number"
AliasName="BNFPERSONID"
ColumnType="$none$"
Expression="'See the SQL...'"
SQLType="NUMERIC" >
</ViewAttribute>
<ViewAttribute
Name="BusinessGroupId"
IsQueriable="false"
IsPersistent="false"
Precision="15"
Scale="0"
Type="oracle.jbo.domain.Number"
AliasName="BUSINESSGROUPID"
ColumnType="$none$"
Expression="'See the SQL...'"
SQLType="NUMERIC" >
</ViewAttribute>
<ViewAttribute
Name="PerInLerId"
IsQueriable="false"
IsPersistent="false"
Precision="15"
Scale="0"
Type="oracle.jbo.domain.Number"
AliasName="PERINLERID"
ColumnType="$none$"
Expression="'See the SQL...'"
SQLType="NUMERIC" >
</ViewAttribute>
<ViewAttribute
Name="PlBnfId"
IsQueriable="false"
IsPersistent="false"
Precision="15"
Scale="0"
Type="oracle.jbo.domain.Number"
AliasName="PL_BNF_ID"
ColumnType="$none$"
Expression="PL_BNF_ID"
SQLType="NUMERIC" >
</ViewAttribute>
<ViewAttribute
Name="ObjectVersionNumber"
IsQueriable="false"
IsPersistent="false"
Precision="9"
Scale="0"
Type="oracle.jbo.domain.Number"
AliasName="OBJECT_VERSION_NUMBER"
ColumnType="$none$"
Expression="OBJECT_VERSION_NUMBER"
SQLType="NUMERIC" >
</ViewAttribute>
<ViewAttribute
Name="EffectiveStartDate"
IsQueriable="false"
IsPersistent="false"
Type="oracle.jbo.domain.Date"
AliasName="EFFECTIVE_START_DATE"
ColumnType="$none$"
Expression="EFFECTIVE_START_DATE"
SQLType="DATE" >
</ViewAttribute>
<ViewAttribute
Name="ContactType"
IsQueriable="false"
IsPersistent="false"
IsNotNull="true"
Precision="30"
Type="java.lang.String"
AliasName="CONTACT_TYPE"
ColumnType="$none$"
Expression="CONTACT_TYPE"
SQLType="VARCHAR" >
<DesignTime>
<Attr Name="_DisplaySize" Value="30" />
</DesignTime>
</ViewAttribute>
<ViewAttribute
Name="BeneficiaryFullName"
IsQueriable="false"
IsPersistent="false"
Precision="240"
Type="java.lang.String"
AliasName="BENEFICIARY_FULL_NAME"
ColumnType="$none$"
Expression="BENEFICIARY_FULL_NAME"
SQLType="VARCHAR" >
<DesignTime>
<Attr Name="_DisplaySize" Value="240" />
</DesignTime>
</ViewAttribute>
<ViewAttribute
Name="DbPrimaryPct"
IsQueriable="false"
IsPersistent="false"
Type="oracle.jbo.domain.Number"
AliasName="DB_PRIMARY_PCT"
ColumnType="$none$"
Expression="DB_PRIMARY_PCT"
SQLType="NUMERIC" >
</ViewAttribute>
<ViewAttribute
Name="DbContingentPct"
IsQueriable="false"
IsPersistent="false"
Type="oracle.jbo.domain.Number"
AliasName="DB_CONTINGENT_PCT"
ColumnType="$none$"
Expression="DB_CONTINGENT_PCT"
SQLType="NUMERIC" >
</ViewAttribute>
<ViewAttribute
Name="Relation"
IsQueriable="false"
IsPersistent="false"
Type="java.lang.String"
AliasName="Relation"
ColumnType="$none$"
SQLType="VARCHAR" >
</ViewAttribute>
<ViewAttribute
Name="DeleteSwitcher"
IsPersistent="false"
Precision="30"
Type="java.lang.String"
AliasName="DeleteSwitcher"
ColumnType="VARCHAR2"
Expression="DeleteSwitcher"
SQLType="VARCHAR" >
</ViewAttribute>
<ViewAttribute
Name="PlId"
IsPersistent="false"
Precision="15"
Scale="0"
Type="oracle.jbo.domain.Number"
AliasName="PlId"
ColumnType="VARCHAR2"
Expression="PlId"
SQLType="NUMERIC" >
</ViewAttribute>
<ViewAttribute
Name="OiplId"
IsUpdateable="false"
IsPersistent="false"
Precision="15"
Scale="0"
Type="oracle.jbo.domain.Number"
AliasName="OiplId"
ColumnType="VARCHAR2"
Expression="OiplId"
SQLType="NUMERIC" >
</ViewAttribute>
<ViewAttribute
Name="RelExistsFlag"
IsQueriable="false"
IsPersistent="false"
Precision="30"
Type="java.lang.String"
AliasName="REL_EXISTS_FLAG"
ColumnType="VARCHAR2"
Expression="REL_EXISTS_FLAG"
SQLType="VARCHAR" >
</ViewAttribute>
</ViewObject>
Thanks,
Ronaldo -
How to delete the row for client 300 / SAP* in table USR02 in SQL Server Management Studio
Hello
I used to delete the row for client 300's SAP* user in table USR02 with SQL Enterprise Manager; afterwards I could log on again with the password PASS. I wonder how to delete it in SQL Server Management Studio. When I expand the database it takes so long that it becomes unmanageable.

Hello,
you have to delete the row with a SQL statement.
Open a new query window and run a script like this:
use <your SID DB>                 -- e.g. use PRD
setuser 'your sid in lowercase'   -- e.g. setuser 'prd'
delete from USR02 where MANDT = '300' and BNAME = 'SAP*'
go
Run a complete backup before deleting data manually.
Regards
Clas
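A slightly safer variant of the script above is to wrap the delete in an explicit transaction and inspect the row first. Two assumptions worth stating: SAP stores user names in upper case in USR02, and SAP databases on SQL Server typically use a case-sensitive binary collation, so `BNAME = 'SAP*'` is the value to match; `PRD`/`prd` stand in for your SID database and schema, as in the original script.

```sql
USE PRD;                 -- your <SID> database
BEGIN TRANSACTION;

-- Inspect the row you are about to remove.
SELECT MANDT, BNAME FROM prd.USR02 WHERE MANDT = '300' AND BNAME = 'SAP*';

DELETE FROM prd.USR02 WHERE MANDT = '300' AND BNAME = 'SAP*';

-- COMMIT only if exactly one row was deleted; otherwise ROLLBACK:
-- COMMIT TRANSACTION;
-- ROLLBACK TRANSACTION;
```

This keeps the manual change reversible until you are sure only the intended row was hit, in addition to the full backup recommended above.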