Need help/suggestion in performance tuning
Hi,
I have the following tkprof session summary:
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 460 4700.00 5239.96 2 6 0 0
Execute 6234640362100.0040673190.99 102043906 110604822 123086 49656
Fetch 150442561100.0011454381.51 515184 13365552 0 92801
total 7785042927900.00 9183139.50 102559092 123970380 123086 142457
Misses in library cache during parse: 27
Misses in library cache during execute: 14
40 user SQL statements in session.
585 internal SQL statements in session.
625 SQL statements in session.
10 statements EXPLAINed in this session.
Can someone suggest how to check this?
Aman.... wrote:
What do you want us to say about it since it's a summary of the session's trace file? Do you have any particular query which you think is not performing well?
It's a summary that records an elapsed time of 9.1 million seconds - that's about 106 days - which is quite a long time for a single session running 40 end-user statements.
Regards
Jonathan Lewis
Similar Messages
-
EP6 sp12 Performance Issue, Need help to improve performance
We have a Portal development environment with EP6.0 sp12.
What we are experiencing is a performance issue. It's not extremely slow, but it is slow compared to normal (compared to our prod box). For example, after entering the username and password and clicking the <Log on> button, it takes more than 10 seconds for the first home page to appear. Also, we currently have the Portal hooked to three xApps systems and one BW system. The time taken for a BW query to appear (with selection screen) is also more than 10 seconds. However, access to one of the other xApps is comparatively faster.
Do we have a simple-to-use guide (not a very elaborate one) with step-by-step guidance to immediately improve the performance of the Portal?
Simple guide, easy to implement, with immediate effect is what we are looking for in the short term
Thanks
Arunabha
Hi Eric,
I have searched but didn't find the Portal Tuning and Optimization Guide you suggested. Can you help me find it?
Subrato,
This is good and I will certainly read through it, but the issue is that this one covers only the network.
Do you know of any other guide which is very basic (maybe 10 steps) and shows the process step by step? That would be very helpful. I already have some information from the thread Portal Performance - page loads slow, client cache reset/cleared too often.
But I'm really looking for an answer (steps to do it quickly and effectively) instead of a list of various guides.
It would be very helpful if you or anybody who has actually done some performance tuning could send a basic list of steps that I can apply immediately, instead of reading through these large guides.
I know I am looking for a shortcut, but this is the need of the hour.
Thanks
Arun -
Need help with raw fine tuning version
I am a new user of the aperture 2. I just activated my copy of aperture by the trial key provided by apple.
Everything looked nice except that when I imported some RAW images from a Fujifilm S5 Pro, the only RAW decoder versions available were 1.0 and 1.1. There is no option for 2.0. Also, the Recovery function under Exposure is disabled.
Need help with this, how could I solve and get back to a 2.0 decoder?
Aperture version is 2.0.1
Mac OS X 10.5.2
Hi KelseyyBevann,
Welcome to the Support Communities!
Is this the update that you tried to download?
Digital Camera RAW Compatibility Update 4.06
http://support.apple.com/kb/DL1656
What version of the operating system do you have?
When you attach your camera do you see it in Finder?
If so, you can import the photos into iPhoto this way:
iPhoto '11: Import photos from your hard disk or other computers
http://support.apple.com/kb/PH2357
Judy -
Need suggestion on Performance tuning
Hi,
We have a container with a single document of around 3 MB. This document contains master data that is used across the application.
We recently implemented transactions for the whole application so that we could avoid the DB locks. But this introduced some performance overhead, especially for this single-document container.
For example, we have a scenario that displays all the records from the document on the screen at once. This now takes much longer than it did before the transaction implementation. And we use (read) this container's data very frequently (on most pages), as it is master data.
Please let me know, is there any approach/solution for the above case particularly to boost the performance on the container?
Thanks,
Balakrishna.
Thank you for the response.
Here 'display all records' means a read operation on the document. We perform read operations very frequently.
We do have write operations to update the document in between read operations.
Also, indexes have been created for the frequently used nodes in the document.
Hope the above information gives you more details. Please let me know if you need any information.
Thanks,
Balakrishna. -
WM/Menu/Panel - Need Help/Suggestions
At the moment, I'm running KDE 3.5... with OpenBox3 as the window manager. I like the KDE apps, and the OpenBox look and feel. I really, really like Xfce4's panel and right click on desktop menu, and was wondering if it is possible to give up the DE and just have OpenBox with Xfce4's menu and panel. I could still use the KDE apps but I would not use the DE. I do not like the OpenBox menu, even if it is xml/standard. I know the panel is possible, but not so sure about the menu. I don't want to have to download all of Xfce4 though. I don't need Terminal or Thunar or that stuff.
If anyone has any ideas or suggestions I'd appreciate it.
Thanks,
Zack
If you want to run XFCE and just substitute Openbox as the window manager, you can look here:
http://icculus.org/openbox/index.php/Help:XFCE/Openbox
It's also possible to run Openbox, and then launch the XFCE panel on top of it (xfce4-panel).
I'll also second that the XFCE apps (Thunar and their terminal program) are quite light and fast. If you don't want them, they shouldn't get in the way.
Chris
Sjoden wrote: [quoted original message] -
Need help troubleshooting poor performance loading cubes
I need ideas on how to troubleshoot performance issues we are having when loading our InfoCube. There are eight InfoPackages running in parallel to update the cube. Each InfoPackage can execute three data packages at a time. The load performance is erratic. For example, if an InfoPackage needs five data packages to load the data, data package 1 is sometimes the last one to complete. Sometimes the slow performance is in the update rules processing, and other times it is on the insert into the fact table.
Sometimes there are no performance problems and the load completes in 20 mins. Other times, the loads complete in 1.5+ hours.
Does anyone know how to tell which server a data package was executed on? Can someone tell me any transactions to use to monitor the loads while they are running to help pinpoint what the bottleneck is?
Thanks.
Regards,
Ryan
Some suggestions:
1. Collect BW statistics for all the cubes. Go to RSA1, select the cube, and on the toolbar choose Tools - BW Statistics. Check the boxes to collect both OLAP and WHM.
2. Activate all the technical content cubes and reports and relevant objects. You will find them if you search with 0BWTC* in the business content.
3. Start loading data to the Technical content cubes.
4. Run the reports delivered on top of these statistics cubes; they will give you some ideas.
5. Try to schedule sequentially instead of parallel loads.
Ravi Thothadri -
Help ! SQL Performance Tuning
Hi,
I have the following three SQL statements. I am using Oracle 8i.
====================================================================================================================
Statement1 : Insert
Insert Into DBSchema.DstTableName ( dstCol1, dstColP, dstColKey, dstCol2, dstCol3, dstCol4, dstCol5, dstCol6 )
( SELECT DBSchema.Seq.nextval, srcColP, srcColKey, srcCol1, srcCol2, nvl(srcCol3,0), nvl(srcCol4,0), SYSDATE
  FROM SrcTableName SRC
  WHERE srcColP IS NOT NULL
    AND NOT EXISTS
        ( SELECT 1
          FROM DBSchema.DstTableName DST
          WHERE SRC.srcColP = DST.dstColP
            AND SRC.srcColKey = DST.dstColKey ) );
====================================================================================================================
Statement2 : Update
Update DBSchema.DstTableName DST
SET ( dstCol1, dstCol2, dstCol3, dstCol4, dstCol5 ) =
    ( SELECT srcCol1, srcCol2, nvl(srcCol3,0), nvl(srcCol4,0), SYSDATE
      FROM SrcTableName SRC
      WHERE SRC.srcColP = DST.dstColP
        AND SRC.srcColKey = DST.dstColKey )
WHERE EXISTS
    ( SELECT 1
      FROM SrcTableName SRC
      WHERE SRC.srcColP = DST.dstColP
        AND SRC.srcColKey = DST.dstColKey ) ;
====================================================================================================================
Statement3 : Delete
Delete
FROM DBSchema.DstTableName DST
WHERE EXISTS
    ( SELECT 1
      FROM SrcTableName SRC
      WHERE SRC.srcColP = DST.dstColP )
  AND NOT EXISTS
    ( SELECT 1
      FROM SrcTableName SRC
      WHERE SRC.srcColP = DST.dstColP
        AND SRC.srcColKey = DST.dstColKey ) ;
====================================================================================================================
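As an aside, on Oracle 9i or later the insert and update above can usually be collapsed into one MERGE. This is a sketch only, reusing the post's table and column names; the poster's 8i does not support MERGE, so there the two set-based statements must stay separate:

```sql
-- Sketch: MERGE combining Statement1 (insert) and Statement2 (update).
-- Requires Oracle 9i or later; column/table names are taken from the post.
MERGE INTO DBSchema.DstTableName DST
USING ( SELECT srcColP, srcColKey, srcCol1, srcCol2,
               nvl(srcCol3,0) srcCol3, nvl(srcCol4,0) srcCol4
        FROM   SrcTableName
        WHERE  srcColP IS NOT NULL ) SRC
ON ( SRC.srcColP = DST.dstColP AND SRC.srcColKey = DST.dstColKey )
WHEN MATCHED THEN UPDATE SET
     DST.dstCol1 = SRC.srcCol1,
     DST.dstCol2 = SRC.srcCol2,
     DST.dstCol3 = SRC.srcCol3,
     DST.dstCol4 = SRC.srcCol4,
     DST.dstCol5 = SYSDATE
WHEN NOT MATCHED THEN INSERT
     ( dstCol1, dstColP, dstColKey, dstCol2, dstCol3, dstCol4, dstCol5, dstCol6 )
     VALUES ( DBSchema.Seq.nextval, SRC.srcColP, SRC.srcColKey,
              SRC.srcCol1, SRC.srcCol2, SRC.srcCol3, SRC.srcCol4, SYSDATE );
```

The delete (Statement3) would still run separately, as MERGE before 10g has no DELETE clause.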
For the above three statement I have written the following procedure with cursor.
Equivalent Cursor:
PROCEDURE DEMOPROC
is
loop_Count integer := 0;
insert_Count integer := 0;
CURSOR c1
IS
SELECT src.srcCol1,
src.srcCol2,
src.srcCol3,
src.srcCol4,
src.srcCol5,
src.srcCol6,
src.srcCol7,
src.srcCol8,
src.srcCol9,
src.srcColKey,
src.srcColP
FROM
SrcTableName SRC
Where src.srcColP IS NOT NULL
AND NOT EXISTS
(SELECT 1
From
DBSchema.DstTableName Dst
Where
src.srcColP = DST.dstColP AND src.srcColKey = DST.dstColKey ) ;
BEGIN
FOR r1 in c1 LOOP
Insert Into DBSchema.DstTableName( dstCol1, dstColP, dstColKey, dstCol2, dstCol3, dstCol4, dstCol5, dstCol6 )
values(DBSchema.Seq.nextval, r1.srcColP, r1.srcColKey, r1.srcCol1, r1.srcCol2, nvl(r1.srcCol3,0), nvl(r1.srcCol4,0), SYSDATE);
Update DBSchema.DstTableName dst
SET dst.dstCol1=r1.srcCol1 , dst.dstCol2=r1.srcCol2,
dst.dstCol3=nvl(r1.srcCol3,0),
dst.dstCol4=nvl(r1.srcCol4,0),
dst.dstCol5=SYSDATE
Where
r1.srcColP = dst.dstColP
AND
r1.srcColKey = DST.dstColKey ;
Delete
FROM DBSchema.DstTableName DST
Where
r1.srcColP = dst.dstColP ;
insert_Count := insert_Count + 1 ;
/* commit on a pre-defined interval */
if loop_Count > 999
then begin
commit;
loop_Count := 0;
end;
else loop_Count := loop_Count + 1;
end if;
end loop;
/* once the loop ends, commit and display the total number of records inserted */
commit;
dbms_output.put_line('total rows processed: '||TO_CHAR(insert_Count)); /*display insert count*/
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('Error '||TO_CHAR(SQLCODE)||': '||SQLERRM);
END;
====================================================================================================================
I am not sure whether this cursor is right or not, have to verify it.
The delete and insert statements share the same WHERE NOT EXISTS clause in the originals, so I included it in my cursor declaration, but I am not sure whether the update will work with it.
I have to run the three statements above against quite a few source and destination tables, each with many rows. How do I tune this?
What else can be done to improve performance with the three statements mentioned above?
Any help will be highly appreciated.
Thanks !
Regards,
Hi Tom,
Thanks for replying.
I tried the three statements separately.
As described in my problem statement, I am moving data from one table to another. There are 50 such tables. One of my procedures reads the source and destination tables, builds these three statements dynamically (creates a PL/SQL block) and runs it.
As you have seen from the three statements above, I am not able to write the cursor properly for them. Someone suggested writing a cursor, so I started. But as you can see, my cursor won't satisfy all the WHERE NOT EXISTS and WHERE EXISTS conditions correctly.
I only tried the insert inside the cursor and compared it with the set-based insert; my procedure with the cursor was slower. But since I didn't try the three operations (update, delete and insert) together, I guess that in theory the cursor would read the source table only once, which should improve performance. I just can't get it to work. The other way to solve this is to write two procedures, one having the insert and delete in it (as the conditions are the same in both) and the other having the update statement, but I guess this won't increase the performance that much?
Do you have any other solutions to write the 3 DML statements which I have written above.
Any help would be highly appreciated.
Thanks! -
Need Help with site performance
Looking for Help..
In particular we would like help from experts in ssl, browser experts
(how browsers handle encryption, de-encryption), iPlanet experts, Sun
crypto card experts, webdesign for performance experts.
Our website is hosted on a Sun Enterprise 450 server running Solaris v7
The machine is hosted at Exodus. These are the following software
servers that perform the core functions of the website:
iPlanet Web Server v. 4.1 ( Java server is enabled)
IBM db2 v. 7.1
SAA uses SmartSite, a proprietary system developed by Adaptations
(www.adaptations.com). At the level of individual HTML pages, SmartSite
uses
proprietary markup tags and Tcl code embedded in HTML comments to
publish
content stored in a database. SmartSite allows for control over when,
how and
to whom content appears. It is implemented as a java servlet which
stores its data on the db2 server and uses a tcl like scripting language
(jacl- orginally developed by Sun)
CHALLENGE:
In late June this year we launched a redesigned website with ssl enabled
on all pages. (a departure from the previous practice of maintaining
most of the site on non-secure server and only some pages on a ssl
server). We also introduced a new website design with greater use of
images, nested tables and javascript.
We have found that the introduction of the "secure everywhere" policy
has had a detrimental effect on the web site user experience, due to
decreased web server and web browser performance. In other words, the
site got slower. Specifically, we have
identified the following problems:
1. Web server performance degradation. Due to unidentified increases in
web
server resource demand caused (probably) by the global usage of SSL, the
web
server experienced instability. This was resolved by increasing the
amount of
operating system (OS) resources available to the server.
2. Web browser performance degradation. Several categories are noted:
2.1. Page load and rendering. Page load and rendering time has
increased dramatically on the new site, particularly in the case of
Netscape Navigator. Some of this may be attributed to the usage of SSL.
Particularly, the rendering time of complex tables and images may be
markedly slower on slower client machines.
2.2. Non-caching of content. Web browsers should not cache any content
derived from https on the local hard disk. The amount of RAM caching
ability varies from browser to browser, and machine to machine, but is
generally much less than for disk caching. In addition, some browser may
not cache content in RAM cache at all. The overall effect of reduced
caching is increased accesses to the web server to retrieve content.
This
will degrade server performance, as it services more content, and also
web browser performance, as it will spend more time waiting for page
content before and while rendering it.
Things that have been attempted to improve performance:
1) Reducing javascript redundancy (less compiling time required)
2) Optimizing HTML code (taking out nested tables, hard coding in specs
where possible to reduce compiling time)
3) Optimizing page content assembly (reducing routine redundancy,
enabling things to be compiled ahead of time)
4) Installing an encryption card (to speed page encryption rate) - was
removed as it did not seem to improve performance, but seemed to have
degraded performanceFred Martinez wrote:
Looking for Help..
In particular we would like help from experts in ssl, browser experts
(how browsers handle encryption, de-encryption), iPlanet experts, Sun
crypto card experts, webdesign for performance experts.
Our website is hosted on a Sun Enterprise 450 server running Solaris v7
The machine is hosted at Exodus. These are the following software
servers that perform the core functions of the website:
iPlanet Web Server v. 4.1 ( Java server is enabled)
IBM db2 v. 7.1
SAA uses SmartSite, a proprietary system developed by Adaptations
(www.adaptations.com). Since I don't see iPlanet's application server in the mix here this (a
newsgroup
for performance questions for iAS) is not the newsgroup to ask in.
Kent -
Cursor For Loop SQL/PL right application? Need help with PL Performance
I will preface this post by saying that I am a novice Oracle PL user, so an overexplanation would not be an issue here.
Goal: Run a hierarchical query for over 120k rows and insert the output into Table 1. Currently I am using a cursor FOR loop that takes the first record and puts its 2 columns into the "start with" section and the "connect by" section. The hierarchical query runs and then inserts the output into another table. I do this 120k times (I know it's not very efficient). Now the hierarchical query by itself doesn't take too long (run separately for many parts), but this loop process takes over 9 hours to run all 120k records. I am looking for a way to make this run faster. I've read about "bulk collect" and "forall", but I am not understanding how they would help in my specific case.
Is there anyway I can rewrite the PL/SQL Statement below with the Cursor For loop or with another methodology to accomplish the goal significantly quicker?
Below is the code ( I am leaving some parts out for space)
CREATE OR REPLACE PROCEDURE INV_BOM is
CURSOR DISPATCH_CSR IS
select materialid,plantid
from INV_SAP_BOM_MAKE_UNIQUE;
Begin
For Row_value in Dispatch_CSR Loop
begin
insert into Table 1
select column1
,column2
,column3
,column4
from( select ..
from table 3
start with materialid = row_value.materialid
and plantid = row_value.plantid
connect by prior plantid = row_value.plantid
exception...
end loop
exception..
commit
BluShadow:
The table that the cursor is pulling from ( INV_SAP_BOM_MAKE_UNIQUE) has only 2 columns
Materialid and Plantid
Example
Materialid Plantid
100-C 1000
100-B 1010
X-2 2004
I use the cursor to go down the list one by one and run a hierarchical query for each row. The only reason I do this is that I have 120,000 materialid/plantid combinations that I need to run, and SQL has a limit of 1000 items in the "start with", if I'm semi-correct on that.
Structure of Table it would be inserted into ( Table 1) after Hierarchical SQL Statement runs:
Materialid Plantid User Create Column1 Col2
100-C 1000 25 EA
The Hierarchical query ran gives the 2 columns at the end.
I am looking for a way to either run a quicker SQL statement or find a more efficient way of running all 120,000 materialid/plantid rows through the hierarchical query.
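One set-based alternative worth testing is to drive the whole hierarchy walk from the pairs table in a single INSERT ... SELECT instead of looping. This is a sketch only: the table names come from the post, while child_id and parent_id are hypothetical stand-ins for the hierarchy columns that were elided from the original code:

```sql
-- Sketch: one statement over all 120k (materialid, plantid) pairs,
-- replacing the cursor FOR loop. child_id/parent_id are hypothetical;
-- substitute the real CONNECT BY condition from the original query.
INSERT INTO table1 (column1, column2, column3, column4)
SELECT t.column1, t.column2, t.column3, t.column4
FROM   table3 t
START WITH (t.materialid, t.plantid) IN
       (SELECT u.materialid, u.plantid
        FROM   inv_sap_bom_make_unique u)
CONNECT BY PRIOR t.child_id = t.parent_id;
```

A START WITH condition accepts a subquery, so the 1000-item IN-list limit that forces the loop does not apply here; whether it beats the loop depends on how the optimizer handles the combined plan, so it needs to be benchmarked against a sample of the pairs.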
Any Advice? I really appreciate it. Thank You. -
RE: Need help to improve performance!!
Hi Experts,
There is a standard SAP tcode, FPREPT, which is used to re-print a receipt. The query execution takes over 5 minutes.
Can anybody suggest the best way to improve this, and help me with any SAP Note available for the same?
vishal
Hi,
Check this note
Note 607651 - FPREPT/FPY1: Performance for receipt number assignment
It is an old one, for release 471 (FI-CA).
What is your release ?
Regards -
Help!!performance tuning
SELECT vbrk~vbeln
vbrp~posnr
vbrp~fkimg
vbrp~vrkme
vbrp~netwr
vbrp~aubel
vbrp~aupos
vbrp~matnr
vbrp~charg
vbrk~fkart
vbrk~fkdat
vbrk~erdat
vbrk~kunag
FROM vbrk
INNER JOIN vbrp
ON vbrp~vbeln = vbrk~vbeln
INTO TABLE gt_vbrk
WHERE vbrk~vbeln IN s_vbeln
AND vbrk~fkdat IN s_fkdat
AND vbrk~fkart IN s_fkart
AND vbrp~matnr IN s_matnr
AND vbrp~charg IN s_charg
AND vbrk~kunag IN s_kunnr.
IF sy-subrc = 0.
SORT gt_vbrk BY vbeln posnr ASCENDING.
DELETE ADJACENT DUPLICATES FROM gt_vbrk COMPARING vbeln posnr.
*sold-to-party
SELECT kunnr
name1
FROM kna1
INTO TABLE gt_kna1_soldn
FOR ALL ENTRIES IN gt_vbrk
WHERE kunnr = gt_vbrk-kunag.
IF sy-subrc = 0.
SORT gt_kna1_soldn BY kunnr ASCENDING.
ENDIF.
*ship-to-party
SELECT vbeln
posnr
parvw
kunnr
FROM vbpa
INTO TABLE gt_vbpa
FOR ALL ENTRIES IN gt_vbrk
WHERE vbeln = gt_vbrk-aubel
AND parvw = 'WE'
AND kunnr IN s_kunnr1.
IF sy-subrc = 0.
SORT gt_vbpa BY vbeln ASCENDING.
DELETE ADJACENT DUPLICATES FROM gt_vbpa COMPARING vbeln posnr.
SELECT kunnr
name1
FROM kna1
INTO TABLE gt_kna1_shipn
FOR ALL ENTRIES IN gt_vbpa
WHERE kunnr = gt_vbpa-kunnr.
IF sy-subrc = 0.
SORT gt_kna1_shipn BY kunnr ASCENDING.
ENDIF.
ENDIF.
*tomg
SELECT vbeln
parvw
kunnr
pernr
FROM vbpa
INTO TABLE gt_vbpa_tomg
FOR ALL ENTRIES IN gt_vbrk
WHERE vbeln = gt_vbrk-aubel
AND parvw = 'A1'.
IF sy-subrc = 0.
SORT gt_vbpa_tomg BY vbeln ASCENDING.
*description
SELECT pernr
sname
FROM pa0001
INTO TABLE gt_pa0001_tomg
FOR ALL ENTRIES IN gt_vbpa_tomg
WHERE pernr = gt_vbpa_tomg-pernr.
ENDIF.
*sales rep
SELECT vbeln
parvw
kunnr
pernr
FROM vbpa
INTO TABLE gt_vbpa_sr
FOR ALL ENTRIES IN gt_vbrk
WHERE vbeln = gt_vbrk-aubel
AND parvw = 'AF'.
IF sy-subrc = 0.
SORT gt_vbpa_sr BY vbeln ASCENDING.
*description
SELECT pernr
sname
FROM pa0001
INTO TABLE gt_pa0001_sr
FOR ALL ENTRIES IN gt_vbpa_sr
WHERE pernr = gt_vbpa_sr-pernr.
ENDIF.
*if s_vbeln2 is initial.
IF s_vbeln2[] IS INITIAL.
SELECT vbak~vbeln
vbap~posnr
vbak~audat
vbak~auart
vbak~vkgrp
vbak~bstnk
vbak~kunnr
FROM vbak
INNER JOIN vbap
ON vbap~vbeln = vbak~vbeln
INTO TABLE gt_vbak
FOR ALL ENTRIES IN gt_vbrk
WHERE vbak~vbeln = gt_vbrk-aubel
AND vbap~posnr = gt_vbrk-aupos
AND vbak~audat IN s_audat
AND vbak~auart IN s_auart.
IF sy-subrc = 0.
SORT gt_vbak BY vbeln posnr ASCENDING.
DELETE ADJACENT DUPLICATES FROM gt_vbak COMPARING vbeln posnr.
*delivery
SELECT lips~vgbel
lips~vgpos
likp~vbeln
lips~posnr
likp~erdat
likp~kunnr
likp~podat
FROM likp
INNER JOIN lips
ON lips~vbeln = likp~vbeln
INTO TABLE gt_billing
FOR ALL ENTRIES IN gt_vbak
WHERE lips~vgbel = gt_vbak-vbeln
AND lips~vgpos = gt_vbak-posnr.
IF sy-subrc = 0.
SORT gt_billing BY vgbel vgpos ASCENDING.
DELETE ADJACENT DUPLICATES FROM gt_billing COMPARING vgbel vgpos.
SELECT vttp~vbeln
vttk~tknum
vttk~dtabf
FROM vttk
INNER JOIN vttp
ON vttp~tknum = vttk~tknum
INTO TABLE gt_shipment
FOR ALL ENTRIES IN gt_billing
WHERE vttp~vbeln = gt_billing-vbeln.
IF sy-subrc = 0.
SORT gt_shipment BY vbeln ASCENDING.
ENDIF.
ENDIF. " likp endif.
ENDIF. "gt_vbak endif.
* s_vbeln2 is not initial.
ELSE.
SELECT vbak~vbeln
vbap~posnr
vbak~audat
vbak~vkgrp
vbak~bstnk
vbak~kunnr
FROM vbak
INNER JOIN vbap
ON vbap~vbeln = vbak~vbeln
INTO TABLE gt_vbak
FOR ALL ENTRIES IN gt_vbrk
WHERE vbak~vbeln IN s_vbeln2
AND vbap~posnr = gt_vbrk-aupos
AND vbak~audat IN s_audat
AND vbak~auart IN s_auart.
IF sy-subrc = 0.
SORT gt_vbak BY vbeln posnr ASCENDING.
DELETE ADJACENT DUPLICATES FROM gt_vbak COMPARING vbeln posnr.
SELECT lips~vgbel
lips~vgpos
likp~vbeln
lips~posnr
likp~erdat
likp~kunnr
likp~podat
FROM likp
INNER JOIN lips
ON lips~vbeln = likp~vbeln
INTO TABLE gt_billing
FOR ALL ENTRIES IN gt_vbak
WHERE lips~vgbel = gt_vbak-vbeln
AND lips~vgpos = gt_vbak-posnr.
IF sy-subrc = 0.
SORT gt_billing BY vgbel vgpos ASCENDING.
DELETE ADJACENT DUPLICATES FROM gt_billing COMPARING vgbel vgpos.
SELECT vttp~vbeln
vttk~tknum
vttk~dtabf
FROM vttk
INNER JOIN vttp
ON vttp~tknum = vttk~tknum
INTO TABLE gt_shipment
FOR ALL ENTRIES IN gt_billing
WHERE vttp~vbeln = gt_billing-vbeln.
IF sy-subrc = 0.
SORT gt_shipment BY vbeln ASCENDING.
ENDIF.
ENDIF. " likp endif.
ENDIF. "gt_vbak endif.
ENDIF.
ENDIF.
*Header fieldnames
CONCATENATE text-005 text-006 text-007
text-008 text-009 text-010
text-011 text-012 text-013
text-014 text-015 text-016
text-017 text-018 text-019
text-020 text-021 text-022
text-023 text-024 text-025
text-026 text-027
INTO gv_header
SEPARATED BY ','.
APPEND gv_header TO gt_output.
IF gt_vbrk[] IS INITIAL.
WRITE text-004.
ENDIF.
LOOP AT gt_vbrk ASSIGNING <fs_vbrk>.
lv_netwr_vbrp = <fs_vbrk>-netwr .
lv_fkimg_vbrp = <fs_vbrk>-fkimg .
lv_vbeln_vbrk = <fs_vbrk>-vbeln .
lv_vrkme_vbrp = <fs_vbrk>-vrkme .
lv_matnr_vbrp = <fs_vbrk>-matnr .
lv_charg_vbrp = <fs_vbrk>-charg .
lv_fkart_vbrk = <fs_vbrk>-fkart .
lv_fkdat_vbrk = <fs_vbrk>-fkdat .
lv_erdat_vbrk = <fs_vbrk>-erdat .
lv_kunag_vbrk = <fs_vbrk>-kunag .
IF lv_netwr_vbrp IS INITIAL.
lv_netwr_output = 0.
ENDIF.
IF lv_fkimg_vbrp IS INITIAL.
lv_fkimg_output = 0.
ENDIF.
SPLIT lv_netwr_vbrp AT '.' INTO lv_string2 lv_string3.
CONCATENATE lv_string2 '.' lv_string3 INTO lv_netwr_output.
SPLIT lv_fkimg_vbrp AT '.' INTO lv_string lv_string1.
CONCATENATE lv_string '.' lv_string1 INTO lv_fkimg_output.
READ TABLE gt_kna1_soldn ASSIGNING <fs_kna1_soldn> WITH KEY kunnr = <fs_vbrk>-kunag
BINARY SEARCH.
IF sy-subrc = 0.
lv_name1 = <fs_kna1_soldn>-name1.
REPLACE ALL OCCURRENCES OF ',' IN lv_name1 WITH space.
ENDIF.
READ TABLE gt_vbpa ASSIGNING <fs_vbpa> WITH KEY vbeln = <fs_vbrk>-aubel
parvw = 'WE'
BINARY SEARCH.
IF sy-subrc = 0.
lv_kunnr_vbpa = <fs_vbpa>-kunnr.
READ TABLE gt_kna1_shipn ASSIGNING <fs_kna1_shipn> WITH KEY kunnr = <fs_vbpa>-kunnr
BINARY SEARCH.
IF sy-subrc = 0.
lv_name2 = <fs_kna1_shipn>-name1.
REPLACE ALL OCCURRENCES OF ',' IN lv_name2 WITH space.
ENDIF.
ENDIF.
READ TABLE gt_vbpa_tomg ASSIGNING <fs_vbpa_tomg> WITH KEY vbeln = <fs_vbrk>-aubel
parvw = 'A1'
BINARY SEARCH.
IF sy-subrc = 0.
READ TABLE gt_pa0001_tomg ASSIGNING <fs_pa0001_tomg> WITH KEY pernr = <fs_vbpa_tomg>-pernr.
IF sy-subrc = 0.
lv_desc_tomg = <fs_pa0001_tomg>-sname.
ENDIF.
ENDIF.
READ TABLE gt_vbpa_sr ASSIGNING <fs_vbpa_sr> WITH KEY vbeln = <fs_vbrk>-aubel
parvw = 'AF'
BINARY SEARCH.
IF sy-subrc = 0.
READ TABLE gt_pa0001_sr ASSIGNING <fs_pa0001_sr> WITH KEY pernr = <fs_vbpa_sr>-pernr.
IF sy-subrc = 0.
lv_desc_sr = <fs_pa0001_sr>-sname.
ENDIF.
ENDIF.
LOOP AT gt_vbak ASSIGNING <fs_vbak> WHERE vbeln = <fs_vbrk>-aubel
AND posnr = <fs_vbrk>-aupos.
IF <fs_vbak>-bstnk IS NOT INITIAL.
REPLACE ALL OCCURRENCES OF ',' IN <fs_vbak>-bstnk WITH space.
ENDIF.
lv_vbeln_vbak = <fs_vbak>-vbeln.
lv_audat = <fs_vbak>-audat.
lv_vkgrp = <fs_vbak>-vkgrp.
lv_bstnk = <fs_vbak>-bstnk.
LOOP AT gt_billing ASSIGNING <fs_billing> WHERE vgbel = <fs_vbak>-vbeln
AND vgpos = <fs_vbak>-posnr.
lv_vbeln_likp = <fs_billing>-vbeln.
lv_erdat_likp = <fs_billing>-erdat.
lv_podat_likp = <fs_billing>-podat.
READ TABLE gt_shipment ASSIGNING <fs_shipment> WITH KEY vbeln = <fs_billing>-vbeln
BINARY SEARCH.
IF sy-subrc = 0.
lv_dtabf = <fs_shipment>-dtabf.
ENDIF.
ENDLOOP.
ENDLOOP.
IF sy-subrc NE 0 AND s_vbeln2[] IS NOT INITIAL.
CLEAR: gv_string ,
lv_vbeln_vbrk, lv_fkart_vbrk ,
lv_erdat_vbrk, lv_fkdat_vbrk,
lv_vbeln_vbak, lv_audat,
lv_bstnk, lv_vkgrp,
lv_desc_tomg , lv_desc_sr ,
lv_kunag_vbrk,
lv_name1,
lv_kunnr_vbpa, lv_name2,
lv_matnr_vbrp, lv_charg_vbrp,
lv_fkimg_output, lv_vrkme_vbrp,
lv_netwr_output, lv_vbeln_likp ,
lv_erdat_likp, lv_podat_likp ,
lv_dtabf.
CONTINUE.
ENDIF.
CONCATENATE lv_vbeln_vbrk lv_fkart_vbrk
lv_erdat_vbrk lv_fkdat_vbrk
lv_vbeln_vbak lv_audat
lv_bstnk lv_vkgrp
lv_desc_tomg lv_desc_sr
lv_kunag_vbrk
lv_name1
lv_kunnr_vbpa lv_name2
lv_matnr_vbrp lv_charg_vbrp
lv_fkimg_output lv_vrkme_vbrp
lv_netwr_output lv_vbeln_likp
lv_erdat_likp lv_podat_likp
lv_dtabf
INTO gv_string
SEPARATED BY ','.
APPEND gv_string TO gt_output.
CLEAR: gv_string ,
lv_vbeln_vbrk, lv_fkart_vbrk ,
lv_erdat_vbrk,lv_fkdat_vbrk,
lv_vbeln_vbak, lv_audat,
lv_bstnk, lv_vkgrp,
lv_desc_tomg , lv_desc_sr ,
lv_kunag_vbrk,
lv_name1,
lv_kunnr_vbpa, lv_name2,
lv_matnr_vbrp, lv_charg_vbrp,
lv_fkimg_output, lv_vrkme_vbrp,
lv_netwr_output, lv_vbeln_likp ,
lv_erdat_likp, lv_podat_likp ,
lv_dtabf.
ENDLOOP.
* Guys, that's my whole code for retrieving my data. My program is kind of slow.
Help me, experts, to make it faster.
Thanks!!
Hi,
In your program you are using many SELECT queries, and most of those SELECT queries are on joins.
If you use joins, you know what happens: the database connectivity is held for your whole program execution, so it creates more of a performance issue.
<b>Better to use the FOR ALL ENTRIES option instead of joins.</b>
With FOR ALL ENTRIES there won't be any such database connectivity, so it will execute a bit faster than using joins:
<b>SELECT data FROM dbtable INTO itab WHERE condition.
IF itab IS NOT INITIAL.
SELECT data FROM dbtable2 INTO itab1 FOR ALL ENTRIES IN itab WHERE condition.
ENDIF.</b>
<b>Reward if useful.</b> -
hi ,
i need to develop a report --
a report with columns: emp_id, emp_name, manager_number
Basically i have 4 tables -- dept_table, emp_table, time_table, role_table
dept_table has columns ---> dept_id, dept_name
emp_table has columns --> emp_name, dept_number, emp_key, emp_id
time_table ---> time_key, year
role_table --> time_key, emp_key
joins:
emp_table and role_table are joined thru ---- emp_key
role_table and time_table are joined thru ------ time_key
dept_table.dept_id = emp_table.dept_number
parameter of the report:
year from time_table.
for example
the dept_table has 20 depts => 20 dept ids
the emp_table has 100 employees.
these 100 employees would be assigned to one of the 20 depts (some employees might be given a wrong dept id which actually doesn't exist).
Now i need to develop a report with the field columns of the emp_table: emp_id, emp_name, dept_number
where dept_number is not a valid dept_number (i.e. dept_id not present in dept_table)
How should I proceed?
In Discoverer Admin, how should I create the join between the database folders dept_table and emp_table? Like this:
master_item detail_item
dept_table.dept_id = emp_table.dept_number (???)
or the other way round?
Under the options of Create Join, which join details do I need to select?
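For what it's worth, the invalid-department report itself can be written as an anti-join. This is a sketch only, using the table and column names given above:

```sql
-- Sketch: employees whose dept_number has no match in dept_table.
-- Table/column names are taken from the post; adjust to the real schema.
SELECT e.emp_id, e.emp_name, e.dept_number
FROM   emp_table e
WHERE  NOT EXISTS (SELECT 1
                   FROM   dept_table d
                   WHERE  d.dept_id = e.dept_number);
```

In Discoverer terms this corresponds to an outer join from emp_table (detail) to dept_table (master) plus a dept_table.dept_id IS NULL condition, which is the shape the reply below discusses.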
thanks
Message was edited by:
boyz
Hi Rodwest,
Thanks for the suggestion, but it did not work.
(i) For all the employees that are not assigned to a department:
I created the join and the report as you suggested. I put 4 fields from emp_table and 1 field (dept_table.dept_id).
There I could see that the employees who are not assigned to any dept have dept_table.dept_id as NULL. But when I apply the condition dept_table.dept_id IS NULL to pick up all employees without a valid department, it still shows some employees with valid depts.
I don't understand what to do.
(ii) I proceeded just as you suggested, but I don't see all the depts; some depts are filtered out by the parameters (this is the main issue).
I need to have all the depts and their corresponding employee counts irrespective of the parameters.
Some depts are filtered by the parameters. The dept_table doesn't have any of the parameters. There are three parameters, one from time_table and 2 from emp_table. These 3 parameters are filtering the dept_ids and not allowing all the depts to show up.
Awaiting your response.
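On point (ii), a count per department that keeps every department can be sketched in SQL like this (an assumption-laden sketch: it uses the post's table and column names and old-style Oracle outer-join syntax, with the (+) marker on the emp_table side so no dept_table row is dropped; any emp_table or time_table parameter conditions would also need the (+) marker, or a pre-filtered subquery, to avoid turning the outer join back into an inner join):

```sql
-- Sketch: employee count per department, preserving all departments.
-- The (+) sits on the deficient (employee) side of each predicate.
SELECT d.dept_id, d.dept_name, COUNT(e.emp_id) AS emp_count
FROM   dept_table d, emp_table e
WHERE  e.dept_number (+) = d.dept_id
GROUP BY d.dept_id, d.dept_name;
```

In Discoverer this maps to making dept_table the master and marking the join as an outer join toward emp_table; the parameter conditions then belong on the employee folder, not the department folder.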
Thanks -
I need help/suggestions with the HTC Rhyme
Hello,
I have the HTC Rhyme and I am so unhappy with it. I hate knowing I have to wait until 2014 before I can upgrade. I am wondering if anyone else has the issues I have with mine. 1) It drops calls a lot, even when I have all bars. 2) It is very slow if I try to pull something up online. 3) My phone numbers seem to disappear for no reason. 4) My phone cuts off randomly without my knowing it. 5) My charge never lasts long at all; in fact, I had Straight Talk before and that phone stayed charged forever, and it was a smartphone as well. 6) My apps disappear as well. 7) When I am talking to family and friends, they are always asking me to repeat myself, or I can't hear them. 8) Trying to send a picture is like pulling teeth, near impossible for me to do. 9) Text messages get to where they are going sometimes and sometimes not. 10) And trying to find a decent case for this phone is not easy; in fact, I've not found one yet that I like that's not plastic or silicone.
Is anyone else having these problems? I believe my phone is a lemon if I'm the only one having them. I am not a person to complain about things, but even my family and friends say I have a piece of crap for a phone. What do I do? I just got the phone in May.

I totally agree with Ginny. I have had my phone for about a month. It won't hold a charge. Someone suggested turning it off overnight to see if it will take a full charge. After I did, it wouldn't come on, and it still didn't get a full charge. When I place it in the docking station, it gets so hot I can't hold it to my ear. It is dropping calls even when I have full bars. At times, my phone shows that I am still connected on the call, but the other person can't hear me and hangs up. This is crap. I can't go until 2014 to get a decent phone. I want a different phone. I had a little crappy Samsung Galaxy with T-Mobile and it stayed charged, never overheated, and seldom dropped calls, and I gave it up for THIS? Surely you people must have some kind of replacement plan for this phone given all the people that are having the same issues that apparently are not getting fixed. When I switched from T-Mobile, the guy that helped me with my new phone and plan swore by this phone. I had done my homework and had seen all the negative responses to the phone, so when I asked him about it during his sales pitch, he assured me everything had been updated and it wasn't happening any longer. Given that he was a Verizon representative, I believed him.
-
Need help to debug SQL Tuning Advisor Error Message
Hi,
I am getting an error message while trying to get recommendations from the SQL Tuning Advisor.
Environment:
Oracle Version: 11.2.0.3.0
O/S: AIX
Following is my code:
declare
my_task_name varchar2 (30);
my_sqltext clob;
begin
my_sqltext := 'SELECT DISTINCT MRKT_AREA AS DIVISION, PROMO_ID,
PROMO_CODE,
RBR_DTL_TYPE.PERF_DETL_TYP,
RBR_DTL_TYPE.PERF_DETL_DESC,
RBR_DTL_TYPE.PERF_DETL_SUB_TYP,
RBR_DTL_TYPE.PERF_DETL_SUB_DESC,
BU_SYS_ITM_NUM,
RBR_CPN_LOC_ITEM_ARCHIVE.CLI_SYS_ITM_DESC,
PROMO_START_DATE,
PROMO_END_DATE,
PROMO_VALUE2,
PROMO_VALUE1,
EXEC_COMMENTS,
PAGE_NUM,
BLOCK_NUM,
AD_PLACEMENT,
BUYER_CODE,
RBR_CPN_LOC_ITEM_ARCHIVE.CLI_STAT_TYP,
RBR_MASTER_CAL_ARCHIVE.STATUS_FLAG
FROM (PROMO_REPT_OWNER.RBR_CPN_LOC_ITEM_ARCHIVE
INNER JOIN PROMO_REPT_OWNER.RBR_MASTER_CAL_ARCHIVE
ON (RBR_CPN_LOC_ITEM_ARCHIVE.CLI_PROMO_ID = PROMO_ID)
AND (RBR_CPN_LOC_ITEM_ARCHIVE.CLI_PERF_DTL_ID = PERF_DETAIL_ID)
AND (RBR_CPN_LOC_ITEM_ARCHIVE.CLI_STR_NBR = STORE_ZONE)
AND (RBR_CPN_LOC_ITEM_ARCHIVE.CLI_ITM_ID = ITM_ID))
INNER JOIN PROMO_REPT_OWNER.RBR_DTL_TYPE
ON (RBR_MASTER_CAL_ARCHIVE.PERF_DETL_TYP = RBR_DTL_TYPE.PERF_DETL_TYP)
AND (RBR_MASTER_CAL_ARCHIVE.PERF_DETL_SUB_TYP = RBR_DTL_TYPE.PERF_DETL_SUB_TYP)
WHERE ( ((MRKT_AREA)=40)
AND ((RBR_DTL_TYPE.PERF_DETL_TYP)=1)
AND ((RBR_DTL_TYPE.PERF_DETL_SUB_TYP)=1) )
AND ((CLI_STAT_TYP)=1 Or (CLI_STAT_TYP)=6)
AND ((RBR_MASTER_CAL_ARCHIVE.STATUS_FLAG)=''A'')
AND ( ((PROMO_START_DATE) >= to_date(''2011-10-20'', ''YYYY-MM-DD'')
And (PROMO_END_DATE) <= to_date(''2011-10-26'', ''YYYY-MM-DD'')) )
ORDER BY MRKT_AREA';
my_task_name := dbms_sqltune.create_tuning_task
(sql_text => my_sqltext,
user_name => 'PROMO_REPT_OWNER',
scope => 'COMPREHENSIVE',
time_limit => 3600,
task_name => 'Test_Query',
description => 'Test Query');
end;
/
begin
dbms_sqltune.execute_tuning_task(task_name => 'Test_Query');
end;
/
set serveroutput on size unlimited;
set pagesize 5000
set linesize 130
set long 50000
set longchunksize 500000
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('Test_Query') FROM DUAL;
Output:
snippet .....
FINDINGS SECTION (1 finding)
1- Index Finding (see explain plans section below)
The execution plan of this statement can be improved by creating one or more
indices.
Recommendation (estimated benefit: 71.48%)
- Consider running the Access Advisor to improve the physical schema design
or creating the recommended index.
Error: Cannot fetch actions for recommendation: INDEX
Error: ORA-06502: PL/SQL: numeric or value error: character string buffer too small
Rationale
Creating the recommended indices significantly improves the execution plan
of this statement. However, it might be preferable to run "Access Advisor"
using a representative SQL workload as opposed to a single statement. This
will allow to get comprehensive index recommendations which takes into
account index maintenance overhead and additional space consumption.
snippet
Any ideas why I am getting ORA-06502 error?
Thanks in advance
Rogers

Bug 14407401 - ORA-6502 from index recommendation section of DBMS_SQLTUNE output (Doc ID 14407401.8)
Fixed: The fix for 14407401 is first included in 12.1.0.1 (Base Release) -
Need help / suggestion in creating a realistic thunder particle effect
I have a lot of problems creating a realistic thunder particle effect:
1) The "particle" I use is a tiny white square; when it duplicates, it creates a weird thunder... simply put, the lightning effect looks very fake.
2) The motion of the thunder is too "sharp"; it makes the thunder look like "zig zag".
3) So far I can only create one thunder bolt, but I cannot make it split into a few thunder bolts that move in different directions.
Can anyone give me some suggestions, either in graphics, math or logic?
Here is my swf file:
http://www.mediafire.com/?3tayypgnwya
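For the "zig zag" motion and the splitting problem, one standard technique is recursive midpoint displacement: split the line from start to strike point, jitter the midpoint, recurse with smaller jitter, and occasionally fork a branch. The sketch below is in Python rather than ActionScript, and the function name and parameters are my own illustration; in Flash you would draw each returned segment with `lineTo` on a movie clip instead of duplicating square particles.

```python
import random

def lightning(p1, p2, displace=40.0, detail=4.0, branch_chance=0.2, rng=None):
    """Return a list of line segments ((x1, y1), (x2, y2)) forming one
    jagged bolt from p1 to p2 plus random shorter branches."""
    rng = rng or random.Random()
    segments = []

    def split(a, b, d):
        if d < detail:
            segments.append((a, b))
            return
        # Jitter the midpoint; halving d each level gives fine, natural
        # jaggedness instead of a harsh "zig zag".
        mx = (a[0] + b[0]) / 2 + rng.uniform(-d, d)
        my = (a[1] + b[1]) / 2 + rng.uniform(-d, d)
        mid = (mx, my)
        split(a, mid, d / 2)
        split(mid, b, d / 2)
        # Occasionally fork a shorter branch continuing roughly in the
        # bolt's direction, giving the split-into-several-bolts look.
        if rng.random() < branch_chance:
            end = (mx + (b[0] - a[0]) * 0.3 + rng.uniform(-d, d),
                   my + (b[1] - a[1]) * 0.3 + rng.uniform(-d, d))
            split(mid, end, d / 2)

    split(p1, p2, displace)
    return segments

bolt = lightning((0, 0), (0, 300), rng=random.Random(42))
print(len(bolt), "segments")
```

Drawing each segment as a thin bright line, with a second thicker translucent pass for glow, usually looks far more convincing than duplicated square particle clips.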