Query Performance
Hi,
The two statements below do the same work. I want to know which one performs worse.
UPDATE (SELECT p.ph_country_code,c.country_code
FROM address a,
phone p,
country c
WHERE a.user_id = p.user_id
AND a.country_id = c.country_id) u
SET u.ph_country_code = u.country_code;
DECLARE
v_ph_country_code country.country_code%TYPE;
CURSOR cur_ph IS
SELECT user_id
FROM phone;
BEGIN
FOR rec_ph IN cur_ph LOOP
SELECT c.country_code
INTO v_ph_country_code
FROM address a,
country c
WHERE a.country_id = c.country_id
AND a.user_id = rec_ph.user_id;
UPDATE phone
SET ph_country_code = v_ph_country_code
WHERE user_id = rec_ph.user_id;
END LOOP;
END;
Thank you,
Vi
GHD wrote:
The two statements below do the same work. I want to know which one performs worse.
Let's say there is a substantial volume of data and the CBO can parallelise access.
This means that (under the right circumstances) your native SQL can be executed using parallel processing (Oracle supports both parallel DML and DDL).
The PL/SQL code is your manual attempt at doing the same thing as that single SQL statement. How are you going to scale it for processing huge volumes in parallel? The PL/SQL engine cannot do that for you. The SQL your code generates for the SQL engine to execute is row-by-row processing. There is no way the SQL engine can run that in parallel, as your code is a single session expecting a single row at a time.
Native SQL is therefore not only faster (as the others indicated in their responses); it is also a lot more scalable than doing row-by-row processing in PL/SQL.
There is a simple mantra for this: Maximise SQL. Minimise PL/SQL.
SQL is the superior language for crunching data in the database, not PL/SQL.
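The set-based versus row-by-row contrast above can be sketched outside Oracle too. The snippet below (SQLite via Python, with invented table contents mirroring the thread's phone/address/country schema) performs the same fix both ways; both produce identical results, but the set-based form is a single statement the engine can optimise and, in Oracle's case, parallelise:

```python
import sqlite3

# Illustrative sketch (SQLite, not Oracle): the same ph_country_code fix done
# two ways -- one set-based UPDATE vs. a row-by-row loop. Table and column
# names mirror the thread; the data itself is made up.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE country (country_id INTEGER PRIMARY KEY, country_code TEXT);
    CREATE TABLE address (user_id INTEGER PRIMARY KEY, country_id INTEGER);
    CREATE TABLE phone   (user_id INTEGER PRIMARY KEY, ph_country_code TEXT);
    INSERT INTO country VALUES (1, '+1'), (2, '+44');
    INSERT INTO address VALUES (10, 1), (11, 2);
    INSERT INTO phone   VALUES (10, NULL), (11, NULL);
""")

# Set-based: one statement, the engine does the join.
conn.execute("""
    UPDATE phone
       SET ph_country_code = (SELECT c.country_code
                                FROM address a JOIN country c USING (country_id)
                               WHERE a.user_id = phone.user_id)
""")
set_based = dict(conn.execute("SELECT user_id, ph_country_code FROM phone"))

# Row-by-row: one SELECT plus one UPDATE per phone row, like the PL/SQL loop.
conn.execute("UPDATE phone SET ph_country_code = NULL")
for (user_id,) in conn.execute("SELECT user_id FROM phone").fetchall():
    (code,) = conn.execute(
        "SELECT c.country_code FROM address a JOIN country c USING (country_id) "
        "WHERE a.user_id = ?", (user_id,)).fetchone()
    conn.execute("UPDATE phone SET ph_country_code = ? WHERE user_id = ?",
                 (code, user_id))
row_by_row = dict(conn.execute("SELECT user_id, ph_country_code FROM phone"))

print(set_based == row_by_row)  # True -- same result either way
```

On real volumes the loop also pays one statement execution per row, which is where most of the time goes even before parallelism enters the picture.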
Similar Messages
-
Hi,
I have a question about the performance of this SQL query. I need to display only 8 rows from it. How will this query be executed? What happens if the inner query returns more than 500 rows? Will it slow down?
SELECT * FROM (SELECT prod_date, create_date, prod_name, priority, status FROM CUST_PROD
WHERE cust_id = 100 AND status IN (1, 5) ORDER BY priority, prod_name) WHERE ROWNUM <= 8
Thank you.
It's hard to tell what effect more data will have on a query until it happens. As BluShadow pointed out, you're not talking about huge amounts of data; with luck the numbers you mentioned will not make any difference.
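For illustration, the inline-view top-N pattern from the question can be sketched portably (SQLite via Python rather than Oracle; ROW_NUMBER() stands in for ROWNUM and needs SQLite 3.25+; the CUST_PROD rows are invented):

```python
import sqlite3

# Sketch of the top-N pattern from the thread, in SQLite. The Oracle form
# wraps an ORDER BY inline view and filters ROWNUM <= 8; ROW_NUMBER() is the
# portable equivalent shown here. CUST_PROD and its columns are from the
# thread; the rows are invented.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE cust_prod (
    cust_id INTEGER, prod_name TEXT, priority INTEGER, status INTEGER)""")
conn.executemany("INSERT INTO cust_prod VALUES (?,?,?,?)",
                 [(100, f"prod{i:03d}", i % 5, 1 if i % 2 else 5)
                  for i in range(500)])

top8 = conn.execute("""
    SELECT prod_name, priority
      FROM (SELECT prod_name, priority,
                   ROW_NUMBER() OVER (ORDER BY priority, prod_name) AS rn
              FROM cust_prod
             WHERE cust_id = 100 AND status IN (1, 5))
     WHERE rn <= 8
""").fetchall()

print(len(top8))  # 8 -- only 8 rows come back, however many the inner query matches
print(top8[0])    # the lowest (priority, prod_name) pair
```

The inner query here matches all 500 rows; the engine still has to examine them to find the top 8, but a top-N sort only ever keeps 8 rows in memory, so the cost grows much more slowly than a full sort-and-return.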
It looks like you're using the inline view to do a top-N query. You could look into using the RANK() function to do something similar. -
SDO_RDF_MATCH() Query Performance on 10g - ontology model size ~5000 triples
Hi,
We are observing severe degradation in query performance as query size increases. Is query performance dependent on the query pattern size? For example, a query with 25 triples in the query pattern takes nearly 10 seconds to retrieve results from an ontology of about 5000 triples (no rule bases attached, no inferencing).
Does using the query API in the Jena Adapter (instead of calling sdo_rdf_match() directly) help performance? Also, can you suggest any tuning of the ontology model to speed up query execution?
Thanks,
Rajesh.
Edited by: rajesh narni on Oct 8, 2009 4:29 AM
Please see my comments in the following post.
Failing to execute large queries --> 11.1.0.7.0
Cheers,
Zhe Wu -
Query performance with "partition by" clause.
Below is my query
>
select event_type, time, count(event_id) as no_of_events from (
select e.event_type, t.time , e.id as event_id from time t
left outer join events e partition by (event_type)
on t.time < e.end_time and (t.time + 1) > e.start_time
where t.time >= '2008-01-01' and t.time < '2008-02-01'
) events_by_event_type
group by event_type, time
order by event_type, time
The idea is to get a count of active "events" of each "event_type" for each day between two dates. The "time" table has one row per day. An event is considered active on a day when its start_time/end_time interval overlaps that day's beginning and end.
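That per-day overlap count can be sketched portably (a hypothetical SQLite example with integer day numbers standing in for dates; the cross join against the distinct event types plays the role that `partition by (event_type)` plays in the Oracle query, preserving zero-count days):

```python
import sqlite3

# Portable sketch of what the partitioned outer join computes: a count of
# active events per (event_type, day), with zero-count days preserved. Days
# are plain integers here to keep the +1 date arithmetic trivial; an event is
# active on day t when t < end_time AND t + 1 > start_time, as in the thread.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE time_dim (time INTEGER);            -- one row per day
    CREATE TABLE events (id INTEGER PRIMARY KEY,
                         event_type INTEGER, start_time INTEGER, end_time INTEGER);
    INSERT INTO time_dim VALUES (1), (2), (3);
    INSERT INTO events VALUES (1, 7, 1, 3),          -- type 7, active days 1 and 2
                              (2, 7, 2, 9),          -- type 7, active days 2 and 3
                              (3, 8, 9, 9);          -- type 8, never active here
""")

rows = conn.execute("""
    SELECT et.event_type, t.time, COUNT(e.id) AS no_of_events
      FROM (SELECT DISTINCT event_type FROM events) et
     CROSS JOIN time_dim t                            -- densify: every type x day
      LEFT JOIN events e
             ON e.event_type = et.event_type
            AND t.time < e.end_time AND t.time + 1 > e.start_time
     GROUP BY et.event_type, t.time
     ORDER BY et.event_type, t.time
""").fetchall()

for r in rows:
    print(r)
```

Note the (8, day, 0) rows in the output: that densification, one row per type per day even with no matching event, is exactly what the partitioned outer join buys over a plain outer join.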
The query works but always does a full table scan of the events table.
I tried creating the following indexes on the events table, but none of them is ever used.
(event_type,start_time)
(event_type,end_time)
(event_type,start_time,end_time)
(start_time)
(end_time)
(start_time,end_time)
How can I avoid the full table scan of the "events" table in the above query ?
fyi the events table looks like
>
id number not null primary key,
event_type number not null,
start_time date not null,
end_time date not null
What I want is to avoid the full table scan on the "events" table. I don't think adding an index on the 'time' table will help there.
The conditions you have on events are
t.time < e.end_time and (t.time + 1) > e.start_time
So you should have an index on the columns end_time and start_time to avoid a full table scan.
But anyway is that query slow?
Bye Alessandro
Message was edited by:
Alessandro Rossi
Plus, I would add two more predicates to the query to enforce a range scan. If I did this right, they should always be true for the rows you want; they are there just to tell the CBO that the scan on end_time can start from '2008-01-01' and the scan on start_time can stop at '2008-02-01'. Sometimes this kind of additional condition has helped me.
select event_type, time, count(event_id) as no_of_events
from (
select e.event_type, t.time, e.id as event_id from time t
left outer join events e partition by (event_type) on (
t.time between e.start_time - 1 and e.end_time
and e.end_time > '2008-01-01' and e.start_time < '2008-02-01'
)
where t.time >= '2008-01-01' and t.time < '2008-02-01'
) events_by_event_type
group by event_type, time
order by event_type, time -
Hello All -
I have a query which executes in 250 ms. The moment I add an order by clause, the query takes 25 seconds to execute.
The order by clause is on the PK (just one column, of type NUMBER).
Records returned: 300K.
Note that I only need the top 500 rows after the sorting is done.
Can you please suggest some ways to reduce the query execution time while keeping the order by clause?
Thanks for your help.
Show us your query. Maybe we can find a way to make it faster. Also, do you order ascending or descending, and what is your Oracle database version?
This won't help much probably.
SELECT *
FROM (SELECT yourOtherCols
FROM yourTables
ORDER BY PK)
WHERE ROWNUM <= 500;
Message was edited by:
Sven W. -
Query Performance Issue - Usage of the SAP_DROP_EMPTY_FPARTITIONS Program
Hi Experts,
We are facing a query performance issue in our BW production system. Queries on the Sales MultiProvider are taking a lot of time to run, and we need to tune query performance.
We need to drop the empty partitions at the database level. Has anyone used the program SAP_DROP_EMPTY_FPARTITIONS to drop empty partitions? If yes, please share the details of your experience with it, and let me know whether there are any disadvantages to using this program in a production system.
Kindly treat this as an urgent requirement.
Your help will be appreciated.
Thanks,
Shalaka
Hi Shwetha,
I think that program drops a partition if it contains no records (DEL_CNT), or if the partition's request ID is not in the dimension table (DEL_DIM).
Hope it helps!
(and don't forget to reward the answer, if you want !)
Bye,
Roberto -
How to compare same SQL query performance in different DB servers.
We have production and validation environments of Oracle 11g DB on two Solaris hosts.
The H/W and DB configurations of the two Oracle DBs are almost the same in PROD and VAL.
But we detected a large performance difference between the PROD DB and the VAL DB for the same SQL query.
I would like to find and solve the cause of this situation. How can I do that?
I plan to compare the SQL execution plans in the PROD and VAL DBs, and index fragmentation.
Before that, I thought I needed the same caching conditions in the PROD and VAL DBs, so I plan to execute ALTER SYSTEM FLUSH BUFFER_CACHE;.
But I am worried about bad effects of ALTER SYSTEM FLUSH BUFFER_CACHE; on end users.
If we flush the buffer cache and get the execution plan of that SQL query at a time when end users are not using the system, will there be any large bad effect on end users after those operations?
Could you please let me know your recommendations for comparing SQL query performance?
Thank you.
I got an AWR report for the VAL DB server only, but it looks strange.
Is there anything wrong in the DB, or in how the AWR report was generated?
Host Name: xxxx; Platform: Solaris[tm] OE (64-bit); CPUs, Cores, Sockets blank; Memory (GB): .00
Begin Snap: xxxx at 13-Apr-15 04:00:04; End Snap: xxxx at 14-Apr-15 04:00:22; Elapsed: 1,440.30 (mins); DB Time: 0.00 (mins)
Report Summary
Cache Sizes: Buffer Cache, Std Block Size and Log Buffer blank; Shared Pool Size: 0M (begin and end)
Load Profile: DB Time(s) and DB CPU(s): 0.0 per second and per transaction; Logical reads, Block changes, Physical reads, Physical writes, User calls, Parses and Executes: 0.0 per second, 1.0 per transaction; W/A MB processed: 16.7 per second, 1,442,472.0 per transaction; Redo size, Hard parses, Logons, Rollbacks and Transactions blank
Instance Efficiency Percentages (Target 100%): Library Hit %: 96.69; Execute to Parse %: 0.00; all other ratios blank
Shared Pool Statistics (begin / end): Memory Usage % blank; % SQL with executions>1: 34.82 / 48.31; % Memory for SQL w/exec>1: 63.66 / 73.05
Top 5 Timed Foreground Events: DB CPU only, 0 s, 100.00 % DB time
Memory Statistics (begin / end): SGA use (MB): 46,336.0 / 46,336.0; PGA use (MB): 713.6 / 662.6
Time Model Statistics, Operating System Statistics, Foreground Wait Events, the Wait Event Histograms, Service Statistics, every "SQL ordered by ..." section, the Complete List of SQL Text and the Instance Activity Stats all show: "No data exists for this section of the report."
Foreground Wait Class: DB CPU only, 0 s, 100.00 % DB time; Captured Time accounts for .00 (s) of Total DB time
Background Wait Events (ordered by wait time desc; waits / total wait time (s) / avg wait (ms)):
log file parallel write: 527,034 / 2,209 / 4
db file parallel write: 381,966 / 249 / 1
os thread startup: 2,650 / 151 / 57
latch: messages: 125,526 / 89 / 1
control file sequential read: 148,662 / 54 / 0
control file parallel write: 41,935 / 28 / 1
Log archive I/O: 5,070 / 14 / 3
Disk file operations I/O: 8,091 / 10 / 1
(the remaining non-idle events total 6 s or less each; idle events include rdbms ipc message 4,058,370 s, Streams AQ: qmn slave idle wait 172,828 s, DIAG idle wait 172,681 s, dispatcher timer 86,417 s and SQL*Net message from client 419 s)
Instance Activity Stats - Thread Activity: log switches (derived): 69 total, 2.87 per hour
IOStat by Function summary (Reads: Data / Writes: Data): Others: 28.8G / 16.7G; Direct Reads: 43.6G / 411M; LGWR: 19M / 41.9G; Direct Writes: 16M / 8.9G; DBWR: 0M / 6.7G; Buffer Cache Reads: 3.1G / 0M; TOTAL: 75.6G / 74.7G
IOStat by Filetype summary (Reads / Writes; Small Read / Large Read avg ms): Data File: 53.2G / 8.9G, 0.37 / 21.51; Log File: 13.9G / 41.9G, 0.02 / 2.93; Archive Log: 0M / 13.9G; Temp File: 5.6G / 8.1G, 5.33 / 3,713.27; Control File: 2.9G / 2G, 0.05 / 19.98 -
Query based on ODS showing connection time out
Hi,
I have a query based on an ODS. The selection criterion for the query is Country, which is a navigational attribute of the ODS. When I execute the query for a country with a smaller volume of data, the query runs fine. But when I choose a country with more records, the query sometimes shows a 500 connection timeout error and sometimes runs fine for the same country.
I tried to execute the report using country as an input variable: first I executed it for a country with a smaller volume of data and it worked fine as usual; then I added another country with a larger volume of data to the filter criteria and the old problem cropped up.
Could you please suggest a solution?
Appreciate your response.
Thanks
Ashok
Hi,
It's not recommended to have a query on a DSO that returns a huge amount of data as output; for small amounts it should not be a problem.
Shift your queries to a MultiCube as recommended by SAP, and build the MultiCube over the cube which loads from the DSO. SAP recommends creating queries only on MultiProviders.
If that is not possible, you can try to create indexes on the DSO on the characteristic fields used in filters and global selections in the query. This can improve query performance, but it will create a performance issue while loading data into the DSO, so you need to make a trade-off.
Thanks
Ajeet -
hi gurus
Can you please tell me what tools we can use to improve query performance, other than aggregates and indexes?
My performance degrades on selection of the material hierarchy. Before, my query run time was 25 min; after I created indexes it improved by 8 min, so the run time is 17 min now. The data comes from around 100k materials, and the end users still want the query run time to come down.
So what measures should I take to improve my query performance?
regards
raju
Hi
Fill OLAP cache overnight
Refer this thread
Filling the OLAP cache for queries overnight
Regards
N Ganesh -
Sales & Order report performance is too slow!
Hi All,
The Sales report is pretty slow: 90% of the time is OLAP time. I tried all the possible options in RSRT and also set up OLAP cache filling at the query level, but with no result. Please help me.
Thanks
Vasu.
Hi Vasu,
Can you please refer to the link below:
http://wiki.sdn.sap.com/wiki/display/BI/HowtoImproveQueryPerformance-A+Checklist
It explains how you can improve your query performance.
Also, as I said, since you are fetching data directly from master data the report will be a bit slow, so try to get the data directly from the cube; maybe you can use filters as well.
Hope this may help you.
Regards
Nilesh -
What is database hinging, with regard to MultiProvider performance improvement?
Hello,
I have heard that instead of creating a query on a single cube, you should create a MultiProvider on that single cube and create the query on that.
How is a MultiProvider going to improve query runtime performance when there is only a single cube underneath?
I heard about database hinging as an answer to my question. Does anyone have any idea about database hinging and how we can use it?
Expert ideas appreciated!!!
Hello,
I think you have got the concept wrong. It should be database hinging (in terms of a hinge).
Going by the simple meaning of hinge, which is a joint: if a MultiProvider is used, it can help us in future if the need comes to club more providers together for reporting. So it improves the efficiency of redesigning or remodelling; it has nothing to do with enhancing query performance.
Hope it helps!!
Regards,
Shashank -
Use of technical business content
Help me understand technical business content: what is it, and what is it used for in projects?
Please give a detailed explanation with a suitable scenario.
thanks & Regards
JPReddyM
JP,
Technical content is SAP-delivered and is used for monitoring system activities. Activities related to loading, query execution and other administration can be analyzed from the technical content infrastructure. Once you install it and enable objects for statistics, the details of the various activities are picked from the respective tables and loaded into the technical content cubes and ODSs, which can be analyzed with the technical content queries.
For example, you can analyze query performance and which users access a query, and compare with previous activity.
How to identify that an InfoCube needs to be compressed?
How to identify that an InfoCube needs to be compressed?
Is there any ratio or method to check whether the InfoCube is ready to compress?
Hi Chandran,
When a request has been loaded and aggregated (rolled up), it is ready for compression. I don't understand what you mean by <i>"ready enough"</i>.
As SAP says, compressing the requests in an InfoCube improves query performance, as it reduces the amount of data by eliminating request IDs and doing zero elimination. See the link below for the detailed advantages and disadvantages of compressing a cube.
http://help.sap.com/saphelp_nw04/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/frameset.htm
Regards,
Sree -
Does platform affect report performance?
Hi
We have a report that currently runs on a unix reports server. It completes within 3 minutes, sometimes 5. We moved it to a Windows-based reports server, where it takes more than an hour.
I tried to run it in my local Report Builder on Windows and it takes extremely long. The same report with the same parameters runs fast on the unix server.
I created a trace file for this report; it seems like a lot of time is spent executing the queries. Why should a change in platform affect query performance? Any ideas please. This is a 6i report.
thanks
Usually no. But there are a lot of parameters to take into consideration:
RAM, free space available, number of processors, processor speed, latest operating system version/patches
Rajesh -
BI THread : Data modelling
Hi Experts,
I have been doing data modelling for a month now, and I would like to know more about it. What are the dos and don'ts for data modelling in BI? What key points should be remembered while doing data modelling in BI, how can data modelling be done more effectively, and are there any new approaches for data modelling in BI 7.0?
kindly advise.
thanks & regards,
M.S
Hi M.S,
To do effective modelling, you first need to concentrate on two things:
1. Load performance
2. Query performance
Load performance:
This depends on the design of your DSOs, cubes, etc. For example, you need to think about how many DSOs or cubes need to be developed.
Query performance:
How much time the queries take to execute. To increase performance, you need to think about the facilities provided by SAP, e.g. MultiProviders, BW Statistics, InfoSets, etc.
Some more info:
http://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/2f5aa43f-0c01-0010-a990-9641d3d4eef7
Do's & Don'ts:
http://en.sap.info/dos-and-don%E2%80%99ts-of-bi-usage/3692
Regards
Ram.