Execution time too low
I was trying to measure the execution time. The rate is 1 kS/s per channel and the number of samples to read is 100 per channel, in a for loop of 10 iterations. The time should be around 1000 ms, but it's only 500-600 ms. And when I change the rate/number of samples, the execution time doesn't change... how could this happen?
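For reference, the expected wall-clock time for a finite acquisition like this can be sanity-checked with a quick calculation (a sketch using the numbers from the question; if the measured time is half this and insensitive to the parameters, the timed section probably isn't the part doing the waiting):

```python
# Expected time for a finite DAQ acquisition, per the numbers in the question:
# 1 kS/s per channel, 100 samples per channel per read, in a loop of 10.
rate_hz = 1000          # samples per second, per channel
samples_per_read = 100  # samples per channel, per loop iteration
iterations = 10

expected_seconds = iterations * samples_per_read / rate_hz
print(expected_seconds)  # -> 1.0, i.e. the ~1000 ms the poster expected
```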
Solved!
Go to Solution.
Attachments:
trial 6.vi 19 KB
JudeLi wrote:
I've tried to drag the Clear Task out of the loop but every time I did it, it ended up in a broken wire saying that the source is a 1-D array of DAQmx event and the type of the sink is DAQmx event...
You can right-click on the output tunnel and tell it to not auto index. But even better would be to use shift registers just in case you tell that FOR loop to run 0 times.
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
Attachments:
trial 6.vi 13 KB
Similar Messages
-
Execution time of a simple vi too long
I'm working with LabVIEW 6.0.2 on a computer (AMD ~700 MHz) under Windows 2000. The computer is connected to the instruments (e.g. a Keithley 2400 SourceMeter) via GPIB (NI PCI-GPIB card). When trying to read the output of the K2400 with a very simple VI (sending the string READ? to the instrument with GPIBWrite (mode 2) and subsequently reading 100 bytes with GPIBRead (mode 2) from the instrument), the execution time mostly exceeds 1 s (execution highlighting disabled). Sometimes it can be much faster, but this is very irreproducible. I played around with the GPIBRead and GPIBWrite modes and with the number of bytes to be read from the device, as well as with the hardware settings of the Keithley 2400, but nothing seemed to work. The API calls captured by NI Spy mainly (lines 8 - 160) consist of ThreadIberr() and ibwait(UD0, 0x0000).
As this problem is the main factor limiting our measurement speed, I would be grateful for any help.
Thanks a lot
Bettina Welter
Hello,
Thanks for contacting National Instruments. It seems like the 1 second delay that is occurring is due to the operation being called. ThreadIberr returns the value of iberr, while ibwait simply implements a wait. These two get called repeatedly while the GPIB driver waits for the instrument (the K2400, in your case) to finish its operation and respond. It is quite possible that when you query the Keithley to send back 100 bytes of data, it has to gather them from its buffer (if they have already been generated). And if there aren't 100 bytes of data in the buffer, the Keithley will keep the NRFD line asserted while it gathers 100 bytes of data. After the data has been gathered, the NRFD line is deasserted, at which point ThreadIberr will detect the change in the ibsta status bit and read the 100 bytes.
So make sure that the 100 bytes of data that you are requesting don't take too long to be gathered, since this is where the main delay lies. Hope this information helps. Please let us know if you have further questions.
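One way to confirm where the delay lies is to time the query itself on the host side. This is a minimal pattern sketch, not the poster's VI; `query_instrument` is a hypothetical stand-in for the GPIB write/read pair, and here it just sleeps to simulate an instrument that is slow to answer:

```python
import time

def query_instrument():
    """Hypothetical stand-in for a GPIB 'READ?' write followed by a read.

    A real implementation would call the GPIB driver; here we simulate
    an instrument that takes ~50 ms to gather its reply.
    """
    time.sleep(0.05)
    return b"+1.234567E+00"

t0 = time.perf_counter()
reply = query_instrument()
elapsed = time.perf_counter() - t0
print(f"query took {elapsed * 1000:.1f} ms")
```

If almost all of the elapsed time sits inside the single query call, the bottleneck is the instrument gathering its data, not the host-side code.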
A Saha
Applications Engineering
National Instruments
Anu Saha
Academic Product Marketing Engineer
National Instruments -
Execution time is too high.
Hi,
I've got several xsql files that take approx. 1 minute to execute by themselves. I want to run one xsql that includes them all, but it times out, and I get a server error. Is there any way I can get the execution time down, or change the timeout setting?
I am running the xsql that comes with 8.1.7
Terje K.
If Oracle8i JServer is included in the Oracle 8i package, then yes. The database itself is not large (approx. 50 MB with data), but the results of the queries can get somewhat large. Here is an example:
1. first I made a view:
create view view_section2_issue as SELECT SPVSPC.OPN, OPE, PCA, PCS, PCSR, OSC, PWTT, OVSM, PLWD, PCSOD, PDC, TO_CHAR(ISOD,'dd.MM.YY') AS ISOD, TO_CHAR(PCSOD,'dd.MM.YY') AS OD, PSCN, PMDP1, PMDP2, PMDP3, PMDP4, PMDP5, PMDP6, PMDP7, PMDP8, PMDP9, PMDP10, PMDP11, PMDP12,PDT1, PDT2, PDT3, PDT4, PDT5, PDT6, PDT7, PDT8, PDT9, PDT10, PDT11, PDT12, PMDC, PMCA, PMMDP, PMDT, PMLWD, PMWTT, PMSCN, PMNS, PMWTH, PMSCH, PMOD
from SPVSISSU, SPVSISS2, SPVSPCS2, SPVSPC
where SPVSISSU.OPN = SPVSPC.OPN
and SPVSISSU.ISS is not null
and SPVSISS2.OPN = SPVSISSU.OPN
and SPVSISS2.ISSUE = SPVSISSU.ISR2
and SPVSPCS2.OPCS = SPVSISS2.IOPCS
and SPVSPCS2.PCSR = SPVSISS2.IPCS_REV
then I made the query (with some cursors):
SELECT OPE, PCA, PCS, PCSR, OSC, PWTT, OVSM, PLWD, PCSOD, PDC, OD, PSCN, PMDP1, PMDP2, PMDP3, PMDP4, PMDP5, PMDP6, PMDP7, PMDP8, PMDP9, PMDP10, PMDP11, PMDP12, PDT1, PDT2, PDT3, PDT4, PDT5, PDT6, PDT7, PDT8, PDT9, PDT10, PDT11, PDT12, PMDC, PMCA, PMMDP, PMDT, PMLWD, PMWTT, PMSCN, PMNS, PMWTH, PMSCH, PMOD,
CURSOR( SELECT PNS, POD, PWTH, PSCH
FROM spvspcs4
WHERE spvspcs4.opn = view_section2_issue.opn
and spvspcs4.pcs = view_section2_issue.pcs
and spvspcs4.pcsr = view_section2_issue.pcsr ) as wallThickness,
CURSOR( SELECT PELM, SST, PDSTD, PFS, PTS, PTY, PMN, MDS, ESK, PRM, PAGEBREAK, PMELL, page, start_remark(opn,pcs,pcsr,pel,pell) starten,
end_remark(opn,pcs,pcsr,pel,pell,start_remark(opn,pcs,pcsr,pel,pell)) as slutt
FROM spvspcs6
WHERE spvspcs6.opn = view_section2_issue.opn
and spvspcs6.pcs = view_section2_issue.pcs
and spvspcs6.pcsr = view_section2_issue.pcsr ) as elements,
CURSOR( SELECT PVELM, VDS, PVFS, PVTS, PVRM, PMVELL
FROM spvspcs7
WHERE spvspcs7.opn = view_section2_issue.opn
and spvspcs7.pcs = view_section2_issue.pcs
and spvspcs7.pcsr = view_section2_issue.pcsr ) as vType,
CURSOR( SELECT PBLP, PAGEBREAK, LTXT
FROM spvspcs5
WHERE spvspcs5.opn = view_section2_issue.opn
and spvspcs5.pcs = view_section2_issue.pcs
and spvspcs5.pcsr = view_section2_issue.pcsr ) as kommentar,
CURSOR( SELECT count(*) as tot
FROM spvspcs5
WHERE pagebreak = 'P'
and spvspcs5.opn = view_section2_issue.opn
and spvspcs5.pcs = view_section2_issue.pcs
and spvspcs5.pcsr = view_section2_issue.pcsr ) as kpages,
CURSOR( SELECT count(*) as tot
FROM spvspcs6
WHERE pagebreak = 'P'
and spvspcs6.opn = view_section2_issue.opn
and spvspcs6.pcs = view_section2_issue.pcs
and spvspcs6.pcsr = view_section2_issue.pcsr ) as tpages
from view_section2_issue
where OPN = {@opn} -
How to improve the execution time of my VI?
My VI does data processing for hundreds of files and takes more than 20 minutes to complete. The setup is: first I use the directory LIST function to list all the files in a dir. to a string array. Then I index this string array into a for loop, in which each file is opened one at a time inside the loop, and some other subVIs are called to do data analysis. Is there a way to improve my execution time? Maybe loading all files into memory at once? It would be nice to be able to know which section of my VI takes the longest time too. Thanks for any help.
Bryan,
If "read from spreadsheet file" is the main time hog, consider dropping it! It is a high-level, very multipurpose VI and thus carries a lot of baggage around with it. (you can double-click it and look at the "guts" )
If the files come from a just-executed "list files", you can assume the files all exist and you want to read them in one single swoop. All that extra detailed error checking for valid filenames is not needed, and you never want it to e.g. pop up a file dialog if a file goes missing, but simply skip it silently. If open generates an error, just skip to the next in line. Case closed.
I would do a streamlined low-level "open->read->close" for each and do the "spreadsheet string to array" in your own code, optimized to the exact format of your files. For example, notice that "read from spreadsheet file" converts everything to SGL, a waste of CPU if you later need to convert it to DBL for some signal processing anyway.
Anything involving formatted text is not very efficient. Consider a direct binary file format for your data files, it will read MUCH faster and take up less disk space.
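To illustrate the text-versus-binary point in a language-neutral way (a Python sketch, not LabVIEW): parsing formatted text means formatting and re-scanning every character of every number, while a binary file stores fixed-size values that read back in a single block copy.

```python
import struct

values = [1.5, -2.25, 3.0, 1e6]

# Text representation: every value must be formatted, then re-parsed.
text = "\t".join(str(v) for v in values)
parsed_from_text = [float(s) for s in text.split("\t")]

# Binary representation: fixed-size doubles, written and read in one call.
blob = struct.pack(f"{len(values)}d", *values)
parsed_from_binary = list(struct.unpack(f"{len(blob) // 8}d", blob))

# Both paths recover the data, but the binary path does no string
# scanning and occupies exactly 8 bytes per value on disk.
print(parsed_from_text == parsed_from_binary)  # -> True
```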
LabVIEW Champion . Do more with less code and in less time . -
Transaction execution time and block size
Hi,
I have Oracle Database 11g R2 64 bit database on Oracle Linux 5.6. My system has ONE hard drive.
Recently I experimented with an 8.5 GB database in a TPC-E test. I was watching transaction time for 2K, 4K and 8K Oracle block sizes. Each time I started a new test on a different block size, I would create a new database from scratch to avoid messing something up (each time the SGA and PGA parameters were identical).
In all experiments I gave my own tablespace (NEWTS) a different configuration because of Oracle block/datafile size limits:
2K oracle block database had 3 datafiles, each 7GB.
4K oracle block database had 2 datafiles, each 10GB.
8K oracle block database had 1 datafile of 20GB.
Now the best transaction execution time was on the 8K block; the 4K block had a slightly longer transaction time, but the 2K block had definitely the worst transaction time.
I identified SQL query(when using 2K and 4K block) that was creating hot segments on E_TRANSACTION table, that is largest table in database (2.9GB), and was slowly executed (number of executions was low compared to 8K numbers).
Now here is my question. Is it possible that multiple datafiles are the reason for these slow transaction times? I have AWR reports from that period, but as someone who is still learning about DBA work, I would like to ask how I could identify this multi-datafile problem (if that is THE problem) by looking inside the AWR statistics.
Thanks to all.
It's always interesting to see the results of serious attempts to quantify the effects of variation in block sizes, but it's hard to do proper tests and eliminate side effects.
I have Oracle Database 11g R2 64 bit database on Oracle Linux 5.6. My system has ONE hard drive.
A single drive does make it a little too easy to get apparently random variation in performance.
Recently I experimented with an 8.5 GB database in a TPC-E test. I was watching transaction time for 2K, 4K, 8K Oracle block size. Each time I started a new test on a different block size, I would create a new database from scratch to avoid messing something up.
Did you do anything to ensure that the physical location of the data files was a very close match across databases? Inner tracks vs. outer tracks could make a difference.
(each time the SGA and PGA parameters were identical).
Can you give us the list of parameters you set? As you change the block size, identical parameters DON'T necessarily result in the same configuration. Typically a large change in response time turns out to be due to a change in execution plan, and this can often be associated with a different configuration. Did you also check that the system statistics were appropriately matched (which doesn't mean identical across all databases)?
In all experiments I gave my own tablespace (NEWTS) a different configuration because of Oracle block/datafile size limits:
2K oracle block database had 3 datafiles, each 7GB.
4K oracle block database had 2 datafiles, each 10GB.
8K oracle block database had 1 datafile of 20GB.
If you use bigfile tablespaces I think you can get 8TB in a single file for a tablespace.
Now the best transaction execution time was on the 8K block; the 4K block had a slightly longer transaction time, but the 2K block had definitely the worst transaction time.
We need some values here, not just "best/worst"; it doesn't even begin to get interesting unless you have at least a 5% variation, and then it has to be consistent and reproducible.
I identified an SQL query (when using the 2K and 4K blocks) that was creating hot segments on the E_TRANSACTION table, which is the largest table in the database (2.9GB), and was slowly executed (the number of executions was low compared to the 8K numbers).
Query, or DML? What do you mean by "hot"? Is E_TRANSACTION a partitioned table? If not then it consists of one segment, so did you mean to say "blocks" rather than segments? If blocks, which class of blocks?
Now here is my question. Is it possible that multiple datafiles are the reason for these slow transaction times? I have AWR reports from that period, but as someone who is still learning about DBA work, I would like to ask how I could identify this multi-datafile problem (if that is THE problem) by looking inside the AWR statistics.
On a single disc drive I could probably set something up that ensured you got different performance because of different numbers of files per tablespace. As SB has pointed out there are some aspects of extent allocation that could have an effect: roughly speaking, extents for a single object go round-robin on the files, so if you have small extent sizes for a large object then a tablescan is more likely to result in larger (slower) head movements if the tablespace is made from multiple files.
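The round-robin point can be made concrete with a toy model (a sketch, nothing Oracle-specific): if a large object's extents alternate across the tablespace's files, a sequential scan of that object keeps jumping between files, which on a single disk means extra head movement.

```python
def extent_layout(num_extents, num_files):
    """Toy model: extent i of an object lands in file (i mod num_files)."""
    return [i % num_files for i in range(num_extents)]

one_file = extent_layout(6, 1)      # [0, 0, 0, 0, 0, 0]
three_files = extent_layout(6, 3)   # [0, 1, 2, 0, 1, 2]

def file_switches(layout):
    """How often two consecutive extents sit in different files."""
    return sum(a != b for a, b in zip(layout, layout[1:]))

print(file_switches(one_file), file_switches(three_files))  # -> 0 5
```

With one file a scan never changes file; with three files nearly every step of the scan does, which is harmless on striped storage but costly on a single spindle.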
If the results are reproducible, then enable extended tracing (dbms_monitor, with waits) and show us what the tkprof summaries for the slow transactions look like. That may give us some clues.
Regards
Jonathan Lewis -
Pop up window saying Memory is too low to continue in Logic.Why?Way to Fix?
I recently purchased my iMac (Mac OS X 10.6.3) as well as Logic. I have about a half dozen projects on my hard drive now, which is also being backed up to a "time machine" external drive. The projects are pretty small (just piano, or piano and string quartet, etc.). I also just installed Office Suite for Mac and the full Adobe Creative Suite.
Recently when I have been working in the Score Editor(In Logic Pro 9), a pop up window appears saying something to the effect of "memory is getting low, don't install more plugins and remove any unused ones." Then today, after that pop up, another one showed up and said "memory is too low to continue. we will close project, autosave, etc. etc."
Why is this happening? Is there not enough memory on my computer? Will this actually affect editing procedures in Logic? (Because it seems like some things aren't working like they used to.) Is there a quick and easy fix? Is this related to the space remaining on the hard drive, or is it completely different? Could it be related to the iMac backing up files to my "time machine" drive at the same time I am working in Logic?
Could someone who has a little time on their hands please answer these questions IN DETAIL? I am new to Mac and Logic, so I really have no idea what's going on with the memory issue. I don't even understand how memory corresponds to available space, quality of running programs, etc.
Please help. I'm working on a deadline to finish a Master's portfolio and this could potentially be a big bump in the road for me.
Thanks....
The System Requirements for Logic Studio call for a minimum of 1 GB RAM with 2 GB recommended. So you should not have a problem with your 4 GB.
First make sure your RAM is being recognized by your computer. Go to the Apple menu (the apple in the upper left-hand corner) and click on "About This Mac". Make sure it says 4 GB. RAM chips at times will go bad or become unseated.
Now for troubleshooting, here is what I would do. The next time you want to use Logic, restart your computer. Restarting "flushes" the RAM; it may not sound like it but that is a good thing. Then do not open any other programs other than Logic (except ones you may need for it). You listed an awesome group of software you own. Your computer is not set up for all these programs to automatically start up when you turn on the computer, is it??
So I am betting that you will not get the low memory message when only Logic is running. With some trial and error you will be able to open more and more programs at the same time without affecting Logic.
You can better understand your RAM by opening Activity Monitor (Applications >> Utilities >> Activity Monitor). Click on the System Memory tab at the bottom and you can visualize what your RAM (Memory) is doing. There is also a list of programs that are open and what resources they are sucking up.
Good luck with your computer and your masters.
Arch -
Volume too low on ringer and with hands-free headsets
Both the ringer's volume and the volume within headsets are too low in my new Droid X. When holding it to my ear while talking it is loud enough well below max volume, but hearing people through headsets while driving is often too low with all volumes set to max (on loud freeways, for example). I usually keep my phone's ringer on silent vibrate, but it would be nice to be able to turn it up louder when the phone is sitting on a desk at home. Cell phone ringers can be obnoxiously loud, but there are times when people need to have them turned up.
Because it is an issue in both the ringer speaker and the jack for wired headsets, it seems like there is a good chance of getting a software fix to raise the max volume levels. Reading posts on forums, it seems clear that this problem is widespread in most, if not all, Droid X devices.
If you are having similar problems, please take a moment to call both Verizon and Motorola with a polite report of the issue. I spoke with both today and they said they were aware of the issue, and the tech at Motorola said they were "looking into" a software fix for it. If enough people call and report it, we have a better chance of getting a fix sometime soon.
Verizon's support line is: (800) 922-0204.
Motorola's is: (800) 734-5870 (I chose #6 for "other issues" a couple of times after routing into Droid support to get a live tech)
It can be frustrating to filter through the support systems and report it, especially if they tell you to try obvious things like rebooting the phone and turning the volume up, but it really is the only way to get them to fix this issue. Both techs I spoke with actually looked in their systems right away and acknowledged the issue is being reported by people.
Of course, if anyone has a real solution to this issue, please let us know what it is!
Thanks for the suggestion JKramer. I should have been more clear about having adjusted all volumes up in the settings screen. Assuming all sound options are located within the "Settings --> Sound & Display" screen, then yes, I have tried changing them. I have also tried changing the option to "use incoming call volume for notifications" under the "ringer volume" settings, after reading that someone had success in improving volume that way.
This issue is not a problem all of the time. It is only an issue when there is an elevated level of background noise, such as driving on the freeway, or having music or a movie playing in the room.
The phone is otherwise excellent! I live and commute in areas of very spotty reception (with all cell providers), and this phone will keep calls and not even have voices garble where my past two phones have consistently dropped calls in at least five problem locations. So far, everything else about it seems to work great!
Maybe some of these phones don't have this issue. Or, maybe the people who say they don't encounter the problem are lucky enough to live in relatively quiet environments most of the time. In a quiet house or office, it is certainly loud enough. It just needs to have the option for increased volume when in noisy places. The issue is that its "max volume" settings are not as loud as on other phones.
Searching for something like "Droid X volume" will indeed show many people experiencing these same problems on various forums... -
How to reduce execution time ?
Hi friends...
I have created a report to display vendor opening balances,
total debit, total credit, total balance & closing balance for the given date range. It is working fine, but it takes a long time to execute. How can I reduce the execution time?
Please help me. It's a very urgent report...
The coding is as below.....
report yfiin_rep_vendordetail no standard page heading.
tables : bsik,bsak,lfb1,lfa1.
type-pools : slis .
*--TABLE STRUCTURE--*
types : begin of tt_bsik,
bukrs type bukrs,
lifnr type lifnr,
budat type budat,
augdt type augdt,
dmbtr type dmbtr,
wrbtr type wrbtr,
shkzg type shkzg,
hkont type hkont,
bstat type bstat_d ,
prctr type prctr,
name1 type name1,
end of tt_bsik,
begin of tt_lfb1,
lifnr type lifnr,
mindk type mindk,
end of tt_lfb1,
begin of tt_lfa1,
lifnr type lifnr,
name1 type name1,
ktokk type ktokk,
end of tt_lfa1,
begin of tt_opbal,
bukrs type bukrs,
lifnr type lifnr,
gjahr type gjahr,
belnr type belnr_d,
budat type budat,
bldat type bldat,
waers type waers,
dmbtr type dmbtr,
wrbtr type wrbtr,
shkzg type shkzg,
blart type blart,
monat type monat,
hkont type hkont,
bstat type bstat_d ,
prctr type prctr,
name1 type name1,
tdr type dmbtr,
tcr type dmbtr,
tbal type dmbtr,
end of tt_opbal,
begin of tt_bs ,
bukrs type bukrs,
lifnr type lifnr,
name1 type name1,
prctr type prctr,
tbal type dmbtr,
bala type dmbtr,
balb type dmbtr,
balc type dmbtr,
bald type dmbtr,
bale type dmbtr,
gbal type dmbtr,
end of tt_bs.
************WORK AREA DECLARATION *********************
data : gs_bsik type tt_bsik,
gs_bsak type tt_bsik,
gs_lfb1 type tt_lfb1,
gs_lfa1 type tt_lfa1,
gs_ageing type tt_ageing,
gs_bs type tt_bs,
gs_opdisp type tt_bs,
gs_final type tt_bsik,
gs_opbal type tt_opbal,
gs_opfinal type tt_opbal.
************INTERNAL TABLE DECLARATION*************
data : gt_bsik type standard table of tt_bsik,
gt_bsak type standard table of tt_bsik,
gt_lfb1 type standard table of tt_lfb1,
gt_lfa1 type standard table of tt_lfa1,
gt_ageing type standard table of tt_ageing,
gt_bs type standard table of tt_bs,
gt_opdisp type standard table of tt_bs,
gt_final type standard table of tt_bsik,
gt_opbal type standard table of tt_opbal,
gt_opfinal type standard table of tt_opbal.
**************** ALV DECLARATIONS *******************
data : gs_fcat type slis_fieldcat_alv ,
gt_fcat type slis_t_fieldcat_alv ,
gs_sort type slis_sortinfo_alv,
gs_fcats type slis_fieldcat_alv ,
gt_fcats type slis_t_fieldcat_alv.
**********global data declration***************
data : kb type dmbtr ,
return like bapireturn ,
balancespgli like bapi3008-bal_sglind,
noteditems like bapi3008-ntditms_rq,
keybalance type table of bapi3008_3 with header line,
opbalance type p.
**************** SELECTION SCREEN DECLARATIONS *********************
selection-screen begin of block b1 with frame .
select-options : so_bukrs for bsik-bukrs obligatory,
so_lifnr for bsik-lifnr,
so_hkont for bsik-hkont,
so_prctr for bsik-prctr ,
so_mindk for lfb1-mindk,
so_ktokk for lfa1-ktokk.
selection-screen end of block b1.
selection-screen : begin of block b1 with frame.
parameters : p_rb1 radiobutton group rad1 .
select-options : so_date for sy-datum .
selection-screen : end of block b1.
********************************ASSIGNING ALV GRID
****field catalog for balance report
gs_fcats-col_pos = 1.
gs_fcats-fieldname = 'BUKRS'.
gs_fcats-seltext_m = text-001.
append gs_fcats to gt_fcats .
gs_fcats-col_pos = 2 .
gs_fcats-fieldname = 'LIFNR'.
gs_fcats-seltext_m = text-002.
append gs_fcats to gt_fcats .
gs_fcats-col_pos = 3.
gs_fcats-fieldname = 'NAME1'.
gs_fcats-seltext_m = text-003.
append gs_fcats to gt_fcats .
gs_fcats-col_pos = 4.
gs_fcats-fieldname = 'BALC'.
gs_fcats-seltext_m = text-016.
append gs_fcats to gt_fcats .
gs_fcats-col_pos = 5.
gs_fcats-fieldname = 'BALA'.
gs_fcats-seltext_m = text-012.
append gs_fcats to gt_fcats .
gs_fcats-col_pos = 6.
gs_fcats-fieldname = 'BALB'.
gs_fcats-seltext_m = text-013.
append gs_fcats to gt_fcats .
gs_fcats-col_pos = 7.
gs_fcats-fieldname = 'TBAL'.
gs_fcats-seltext_m = text-014.
append gs_fcats to gt_fcats .
gs_fcats-col_pos = 8.
gs_fcats-fieldname = 'GBAL'.
gs_fcats-seltext_m = text-015.
append gs_fcats to gt_fcats .
data : repid1 type sy-repid.
repid1 = sy-repid.
**************** INITIALIZATION EVENTS ******************************
initialization.
*Clearing the work area.
clear gs_bsik.
* Refreshing the internal tables.
refresh gt_bsik.
******************START OF SELECTION EVENTS **************************
start-of-selection.
*get data for balance report.
perform sub_openbal.
perform sub_openbal_display.
*&---------------------------------------------------------------------*
*&      Form  sub_openbal
*&---------------------------------------------------------------------*
*       text
*  -->  p1        text
*  <--  p2        text
*----------------------------------------------------------------------*
form sub_openbal .
if so_date-low > sy-datum or so_date-high > sy-datum .
message i005(yfi02).
leave screen.
endif.
select bukrs lifnr gjahr belnr budat bldat
waers dmbtr wrbtr shkzg blart monat hkont prctr
from bsik into table gt_opbal
where bukrs in so_bukrs and lifnr in so_lifnr
and hkont in so_hkont and prctr in so_prctr
and budat in so_date .
select bukrs lifnr gjahr belnr budat bldat
waers dmbtr wrbtr shkzg blart monat hkont prctr
from bsak appending table gt_opbal
for all entries in gt_opbal
where lifnr = gt_opbal-lifnr
and budat in so_date .
if sy-subrc <> 0.
message i007(yfi02).
leave screen.
endif.
select lifnr mindk from lfb1 into table gt_lfb1
for all entries in gt_opbal
where lifnr = gt_opbal-lifnr and mindk in so_mindk.
select lifnr name1 ktokk from lfa1 into table gt_lfa1
for all entries in gt_opbal
where lifnr = gt_opbal-lifnr and ktokk in so_ktokk.
loop at gt_opbal into gs_opbal .
loop at gt_lfb1 into gs_lfb1 where lifnr = gs_opbal-lifnr.
loop at gt_lfa1 into gs_lfa1 where lifnr = gs_opbal-lifnr.
gs_opfinal-bukrs = gs_opbal-bukrs.
gs_opfinal-lifnr = gs_opbal-lifnr.
gs_opfinal-gjahr = gs_opbal-gjahr.
gs_opfinal-belnr = gs_opbal-belnr.
gs_opfinal-budat = gs_opbal-budat.
gs_opfinal-bldat = gs_opbal-bldat.
gs_opfinal-waers = gs_opbal-waers.
gs_opfinal-dmbtr = gs_opbal-dmbtr.
gs_opfinal-wrbtr = gs_opbal-wrbtr.
gs_opfinal-shkzg = gs_opbal-shkzg.
gs_opfinal-blart = gs_opbal-blart.
gs_opfinal-monat = gs_opbal-monat.
gs_opfinal-hkont = gs_opbal-hkont.
gs_opfinal-prctr = gs_opbal-prctr.
gs_opfinal-name1 = gs_lfa1-name1.
if gs_opbal-shkzg = 'H'.
gs_opfinal-tcr = gs_opbal-dmbtr * -1.
gs_opfinal-tdr = '000000'.
else.
gs_opfinal-tdr = gs_opbal-dmbtr.
gs_opfinal-tcr = '000000'.
endif.
append gs_opfinal to gt_opfinal.
endloop.
endloop.
endloop.
sort gt_opfinal by bukrs lifnr prctr .
so_date-low = so_date-low - 1 .
loop at gt_opfinal into gs_opfinal.
call function 'BAPI_AP_ACC_GETKEYDATEBALANCE'
exporting
companycode = gs_opfinal-bukrs
vendor = gs_opfinal-lifnr
keydate = so_date-low
balancespgli = ' '
noteditems = ' '
importing
return = return
tables
keybalance = keybalance.
clear kb .
loop at keybalance .
kb = keybalance-lc_bal + kb .
endloop.
gs_opdisp-balc = kb.
gs_opdisp-bukrs = gs_opfinal-bukrs.
gs_opdisp-lifnr = gs_opfinal-lifnr.
gs_opdisp-name1 = gs_opfinal-name1.
at new lifnr .
sum .
gs_opfinal-tbal = gs_opfinal-tdr + gs_opfinal-tcr .
gs_opdisp-tbal = gs_opfinal-tbal.
gs_opdisp-bala = gs_opfinal-tdr .
gs_opdisp-balb = gs_opfinal-tcr .
gs_opdisp-gbal = keybalance-lc_bal + gs_opfinal-tbal .
append gs_opdisp to gt_opdisp.
endat.
clear gs_opdisp.
clear keybalance .
endloop.
delete adjacent duplicates from gt_opdisp.
endform. " sub_openbal
*&---------------------------------------------------------------------*
*&      Form  sub_openbal_display
*&---------------------------------------------------------------------*
*       text
*  -->  p1        text
*  <--  p2        text
*----------------------------------------------------------------------*
form sub_openbal_display .
call function 'REUSE_ALV_GRID_DISPLAY'
exporting
I_INTERFACE_CHECK = ' '
I_BYPASSING_BUFFER = ' '
I_BUFFER_ACTIVE = ' '
i_callback_program = repid1
I_CALLBACK_PF_STATUS_SET = ' '
I_CALLBACK_USER_COMMAND = ' '
I_CALLBACK_TOP_OF_PAGE = ' '
I_CALLBACK_HTML_TOP_OF_PAGE = ' '
I_CALLBACK_HTML_END_OF_LIST = ' '
I_STRUCTURE_NAME =
I_BACKGROUND_ID = ' '
I_GRID_TITLE =
I_GRID_SETTINGS =
IS_LAYOUT =
it_fieldcat = gt_fcats
IT_EXCLUDING =
IT_SPECIAL_GROUPS =
IT_SORT =
IT_FILTER =
IS_SEL_HIDE =
I_DEFAULT = 'X'
I_SAVE = 'X'
IS_VARIANT =
it_events =
IT_EVENT_EXIT =
IS_PRINT =
IS_REPREP_ID =
I_SCREEN_START_COLUMN = 0
I_SCREEN_START_LINE = 0
I_SCREEN_END_COLUMN = 0
I_SCREEN_END_LINE = 0
IT_ALV_GRAPHICS =
IT_HYPERLINK =
IT_ADD_FIELDCAT =
IT_EXCEPT_QINFO =
I_HTML_HEIGHT_TOP =
I_HTML_HEIGHT_END =
IMPORTING
E_EXIT_CAUSED_BY_CALLER =
ES_EXIT_CAUSED_BY_USER =
tables
t_outtab = gt_opdisp
exceptions
program_error = 1
others = 2.
if sy-subrc <> 0.
message id sy-msgid type sy-msgty number sy-msgno
with sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
endif.
endform. " sub_openbal_displayI think you are using for all entries statement in almost all select statements but i didnt see any condtion before you are using for all entries statement.
If you are using FOR ALL ENTRIES IN gt_opbal, make sure that gt_opbal has some records; otherwise it will read all records from the database table.
Try to add a check before using FOR ALL ENTRIES in the SELECT statement, like:
if gt_opbal is not initial.
  select adfda adfadf afdadf into table
    for all entries in gt_opbal.
else.
  select abdf afad into table
    from abcd
    where a = 1
    and b = 2.
endif.
I didn't see anything else wrong in your report, but FOR ALL ENTRIES is a major time consumer when there are no records in the driver table it refers to.
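The same guard pattern, sketched outside ABAP (Python, with hypothetical names): run the keyed lookup only when the driver list is non-empty, and make the unrestricted fallback an explicit branch rather than an accident.

```python
def select_details(all_rows, driver_keys):
    """Mimic FOR ALL ENTRIES semantics safely.

    With an empty driver table, ABAP's FOR ALL ENTRIES drops the
    condition and reads *everything*; the guard makes that explicit.
    """
    if driver_keys:                      # the "if gt_opbal is not initial." check
        wanted = set(driver_keys)
        return [r for r in all_rows if r["lifnr"] in wanted]
    # Explicit fallback branch, analogous to the unrestricted SELECT.
    return list(all_rows)

rows = [{"lifnr": "V1"}, {"lifnr": "V2"}, {"lifnr": "V3"}]
print(select_details(rows, ["V2"]))   # only the matching vendor
print(select_details(rows, []))       # everything, but on purpose
```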
Reduce execution time with selects
Hi,
I have to reduce the execution time of a report; most of the time consumed is in the select query.
I have a table, gt_result:
DATA: BEGIN OF gwa_result,
tknum LIKE vttk-tknum,
stabf LIKE vttk-stabf,
shtyp LIKE vttk-shtyp,
route LIKE vttk-route,
vsart LIKE vttk-vsart,
signi LIKE vttk-signi,
dtabf LIKE vttk-dtabf,
vbeln LIKE likp-vbeln,
/bshm/le_nr_cust LIKE likp-/bshm/le_nr_cust,
vkorg LIKE likp-vkorg,
werks LIKE likp-werks,
regio LIKE kna1-regio,
land1 LIKE kna1-land1,
xegld LIKE t005-xegld,
intca LIKE t005-intca,
bezei LIKE tvrot-bezei,
bezei1 LIKE t173t-bezei,
fecha(10) type c.
DATA: END OF gwa_result.
DATA: gt_result LIKE STANDARD TABLE OF gwa_result.
And the select query is this:
SELECT k~tknum k~stabf k~shtyp k~route k~vsart k~signi
k~dtabf
l~vbeln l~/bshm/le_nr_cust l~vkorg l~werks n~regio n~land1 o~xegld o~intca
t~bezei tt~bezei
FROM vttk AS k
INNER JOIN vttp AS p ON k~tknum = p~tknum
INNER JOIN likp AS l ON p~vbeln = l~vbeln
INNER JOIN kna1 AS n ON l~kunnr = n~kunnr
INNER JOIN t005 AS o ON n~land1 = o~land1
INNER JOIN tvrot AS t ON t~route = k~route AND t~spras = sy-langu
INNER JOIN t173t AS tt ON tt~vsart = k~vsart AND tt~spras = sy-langu
INTO TABLE gt_result
WHERE k~tknum IN s_tknum AND k~tplst IN s_tplst AND k~route IN s_route AND
k~erdat BETWEEN s_erdat-low AND s_erdat-high AND
l~/bshm/le_nr_cust <> ' ' "IS NOT NULL
AND k~stabf = 'X'
AND k~tknum NOT IN ( SELECT tk~tknum FROM vttk AS tk
INNER JOIN vttp AS tp ON tk~tknum = tp~tknum
INNER JOIN likp AS tl ON tp~vbeln = tl~vbeln
WHERE tl~/bshm/le_nr_cust IS NULL )
AND k~tknum NOT IN ( SELECT tknum FROM /bshs/ssm_eship )
AND ( o~xegld = ' '
OR ( o~xegld = 'X' AND
( ( n~land1 = 'ES'
AND ( n~regio = '51' OR n~regio = '52'
OR n~regio = '35' OR n~regio = '38' ) )
OR n~land1 = 'ESC' ) )
OR o~intca = 'AD' OR o~intca = 'GI' ).
Does somebody know how to reduce the execution time?
Thanks.
Hi,
Try to remove the join. Use separate selects as shown in the example below and, for the sake of the selection, keep some key fields in your internal table.
Then once your final table is created, you can copy the table into GT_FINAL which will contain only fields you need.
EX
data : begin of it_likp occurs 0,
vbeln like likp-vbeln,
/bshm/le_nr_cust like likp-/bshm/le_nr_cust,
vkorg like likp-vkorg,
werks like likp-werks,
kunnr like likp-kunnr,
end of it_likp.
data : begin of it_kna1 occurs 0,
kunnr like...
regio....
land1...
end of it_kna1.
Select tknum stabf shtyp route vsart signi dtabf
from vttk
into table gt_result
WHERE tknum IN s_tknum AND
tplst IN s_tplst AND
route IN s_route AND
erdat BETWEEN s_erdat-low AND s_erdat-high.
select vbeln /bshm/le_nr_cust
vkorg werks kunnr
from likp
into table it_likp
for all entries in gt_result
where vbeln = gt_result-vbeln.
select kunnr
regio
land1
from kna1
into table it_kna1
for all entries in it_likp
where kunnr = it_likp-kunnr.
similarly for other tables.
Then loop at gt_result, read the corresponding tables, and populate the entire record:
loop at gt_result.
read table it_likp with key vbeln = gt_result-vbeln.
if sy-subrc eq 0.
move-corresponding it_likp to gt_result.
gt_result-kunnr = it_likp-kunnr.
modify gt_result.
endif.
read table it_kna1 with key kunnr = it_likp-kunnr.
if sy-subrc eq 0.
gt_result-regio = it_kna1-regio.
gt_result-land1 = it_kna1-land1.
modify gt_result.
endif.
endloop. -
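Outside ABAP, the same "split the join into lookups" idea is usually done with a hash index so each probe is O(1) instead of a linear READ TABLE scan. A sketch (Python, with hypothetical field names echoing the post):

```python
# Per-table result sets, as the separate SELECTs would return them.
gt_result = [{"tknum": "T1", "vbeln": "D1"}, {"tknum": "T2", "vbeln": "D2"}]
it_likp = [{"vbeln": "D1", "kunnr": "K1"}, {"vbeln": "D2", "kunnr": "K2"}]
it_kna1 = [{"kunnr": "K1", "land1": "ES"}, {"kunnr": "K2", "land1": "DE"}]

# Build one index per lookup table: O(n) once, O(1) per probe.
likp_by_vbeln = {r["vbeln"]: r for r in it_likp}
kna1_by_kunnr = {r["kunnr"]: r for r in it_kna1}

# Enrich each result row via the two keyed lookups.
for row in gt_result:
    likp = likp_by_vbeln.get(row["vbeln"])
    if likp:
        row["kunnr"] = likp["kunnr"]
        kna1 = kna1_by_kunnr.get(likp["kunnr"])
        if kna1:
            row["land1"] = kna1["land1"]

print(gt_result[0])  # -> {'tknum': 'T1', 'vbeln': 'D1', 'kunnr': 'K1', 'land1': 'ES'}
```

In ABAP the equivalent would be sorted or hashed internal tables with READ TABLE ... WITH TABLE KEY, which avoids the linear scan of a standard table.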
How do I know if my image resolution is too low now that the yellow triangle warning is gone?
I am creating a book in iPhoto 11 for the first time. In older versions, I would get a yellow triangle indicating my image quality was too low for that size image. Based on other threads, it looks like iPhoto no longer gives the warning. What is the best way to know if the image will print okay? Is there a workaround so I can check the quality of the image prior to printing out the book?
I noticed that there's no resolution warning either. Create a PDF file of the book, look at the pages at full size in Preview, and determine if the photos are sharp enough for you.
I ran a test with two images, one 4000 x 3000 pixels and the other 230 x 200 pixels, in a 2-photos-per-page book layout, and then created a PDF of the book (Modern Lines theme). This is a screenshot of the page in Preview at View ➙ View Full Sized:
This screenshot is of the same photo in a different 2-photos-per-page layout in the same theme. It does look more like I expected:
Right-click on the image and select Show in New Window from the contextual menu. I'm really surprised the first example of the small image looked as good as it did. I expected a very pixelated image like the second example, but iPhoto upsized it rather nicely. So iPhoto can upsize a small photo to some degree, but I don't recommend relying on it for such small images - and that is a very, very small image. So if your image is reasonably large you should be OK. But check it in the PDF file.
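A quick way to sanity-check resolution yourself is the effective print DPI: pixel dimensions divided by the printed size in inches. A minimal sketch (the ~150-300 DPI thresholds are common print rules of thumb, not iPhoto's actual internal cutoffs, and the layout sizes are illustrative):

```python
def effective_dpi(px_w, px_h, print_w_in, print_h_in):
    """Return the limiting (smaller) DPI for an image printed at a given size."""
    return min(px_w / print_w_in, px_h / print_h_in)

# 4000 x 3000 px on a hypothetical 6 x 4.5 inch layout: plenty of resolution.
print(round(effective_dpi(4000, 3000, 6, 4.5)))  # → 667, well above 300 DPI

# 230 x 200 px on the same layout: far below ~150 DPI, will look soft in print.
print(round(effective_dpi(230, 200, 6, 4.5)))    # → 38
```

If the limiting DPI is comfortably above ~150-300 for the printed size, the photo should hold up; below that, expect the softness visible in the PDF check described above.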
OT -
Volume too low on album purchased on iTunes
I purchased an album on iTunes, and when I burn it to CD the volume is too low. I have done all the suggested things to "enhance" the sound. All other items I have purchased are fine; it just seems to be this one album. Any suggestions? Is there a way to re-download it without paying again?
JeneanCraw4dm,
If you go to the iTunes menu, select Preferences and click on the "Playback" button, you'll see a check box called "Sound Check". This enables the computer to adjust the volume of each track so that they all sound similar in volume. So no, you don't have to go through each track! It's a global setting of your music player.
I have it switched off at all times. I also have sound enhancer switched off at all times.
These features have no effect on volume (or anything, for that matter) when you burn CDs.
As to the original poster: if the CD sounds quiet, then that is how it's supposed to be. If you buy the CD in the shops it will sound the same. Note that there is a big trend at the moment to make albums sound very loud using special processing at the mastering stage (when audio is prepared for album release). It is a trend that most "audiophiles" (or, IMHO, anyone with ears!) detest. Look up "loudness wars" or similar on Google.
Indeed, the "Sound Check" facility in iTunes is there to counteract the fact that CDs differ in volume - a fact that is made all the worse by the record company practice I have just outlined!
I suspect the "quiet CD" just sounds more like it should, and it's being compared to a CD that's been mastered overly loud. -
Hello,
I am getting the below error while processing MIRO:
Price too low (below tolerance limit of 1000.00 INR)
Message No: M8084
We have set the Unplanned Delivery Cost tolerance limit to an unlimited negative amount but only +1000 INR positive. I got some suggestions last time. Thanks for that.
Now they want a different thing: the tolerance should combine both options, i.e. 10% of the total value of the PO or Rs 1000, whichever is less. Can we activate both options in the tolerance?
Please suggest.
SP

Hi Gaurav,
Now the user wants only a warning message, so I tried OMRM. I got the message 084 details as below:
Online: Error
Batch: Error
Standard: Warning
So do I have to change both Online & Batch from Error to Warning, or just one of them?
Need suggestions.
SP -
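The "whichever is less" requirement is just the minimum of the two limits: when both an absolute and a percentage limit are maintained for a tolerance key, the stricter (lower) one is the one that effectively triggers the message. A small sketch of that logic (the function name and values are illustrative, not SAP configuration):

```python
def effective_tolerance_inr(po_value_inr, pct_limit=10.0, abs_limit_inr=1000.0):
    """Effective tolerance: 10% of the PO value or 1000 INR, whichever is less."""
    return min(po_value_inr * pct_limit / 100.0, abs_limit_inr)

# Small PO: the percentage limit is the stricter one.
print(effective_tolerance_inr(5000))    # 10% = 500 INR  → 500.0

# Large PO: the absolute limit caps the tolerance.
print(effective_tolerance_inr(50000))   # 10% = 5000 INR → 1000.0
```

In customizing terms this corresponds to maintaining both the percentage and the absolute limit on the tolerance key, since the system checks each limit and flags the invoice as soon as either one is exceeded.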
I just bought an iPhone for my wife. She said the earphone volume is too low, and I have already set the volume to max. Any ideas? Thanks, Alex
The volume issue WAS a problem. Apple fixed it for most people a number of software releases ago, and most of the volume complaints stopped a long time ago after the fix. Now, with the 3G iPhone, my volume is very loud and much better than even the fixed first-version iPhone. I think if you read the many initial reviews of the 3G version, most mentioned how much better the handset, speaker and speakerphone quality and loudness are. I know this is not solace for someone with a problem now, but I don't think it is widespread. You could just have a defective phone. Take it to the Apple Genius Bar and see if they will get you a new one.
-
I have a problem with the black level. There are too low all the time. When I do color correction and i render the blacks going up to high. Someone can help me?
Thank you,
MarioI think you are referring to this:
http://prolost.blogspot.com/2008/01/colorista-clips-no-more.html -
The maximum volume on my iPod is too low. When I am driving, I cannot hear it. When I use the car jack, I cannot hear it. How do I turn up the volume on the iPod so it works? I don't have the same problem with my iPhone; it works fine.

armin75 wrote:
And why is the crappy supermarket mp3 player louder than my iPod Classic then???
I don't know, you tell us!
I also use an extra AKG headphone with a low impedance which fits the iPod's output; otherwise it would be even more silent.
I have an old pair of AKG K340 headphones and I use a Cmoy "altoids" amp to drive them from the iPod.
I also compared it with my old 5.5 Video iPod, and with the volume cap enabled (I can also uncap it, of course :-)) it is 5dB louder than the Classic.
I believe it is a known fact that the 5th gen iPod can be hacked, the Classic cannot.
I also work in a very big service department for Nokia Siemens Networks, and there I learned that the customer is king - but this doesn't seem to apply to Apple.
As has already been said, it is an EU law - and all mp3 players sold within the EU must comply with EU legislation. If you know of a brand that doesn't, perhaps it's not actually approved for sale within the EU. Apple are not going to ignore an EU law, whatever you think or say! Do you think the EU would simply roll over and let Apple ignore this law?
I am not the only complaining customer, I suppose, and normally when a company is losing (normally satisfied) customers it should think about customer-oriented solutions.
As above!
When I buy an Apple product I automatically assume a very good product, but I am really, really annoyed this time. Other producers within the EU don't play by the book and no one does anything about it - how else can the supermarket mp3 player be much louder than the Classic (I measured the same output impedance!)?
I believe other manufacturers do play by the book. Is "supermarket mp3 player" a euphemism for something that should not be sold within the EU?
You know - since you feel this strongly about it, you should take the matter up with your MEP (Member of the European Parliament), since what you say here will not change a thing.
Phil