Performance concern

Dear Experts,
The query below is causing serious performance problems. Kindly go through it and let me know what modifications I can make.
    select vbeln
           fkart
           vkorg
           vtweg
           fkdat
           sum( fkimg ) as fkimg
           matnr
           aubel
           vstel
           ktgrm
           matkl
           prctr
           spart
           sakn1
      from zv_bseg_vbrp_rk
      into table it_sales
      where fkdat_i in s_fkdat
        and vtweg in s_vtweg
        and vkorg in s_vkorg
        and fkart in so_fkart
        and spart in s_spart
        and vstel in s_werks
        and matnr in s_matnr
        and vbeln in s_vbeln
        and fkimg ne '0'
        and ktgrm in ('01','02','03','04','05','06','07','08','09','10','11','12')
        and fksto ne 'X'
      group by vbeln fkart vkorg vtweg fkdat matnr aubel vstel ktgrm matkl prctr spart sakn1.
    sort it_sales by vbeln matnr prctr.
    if it_sales[] is not initial.
    select vbeln
           fkdat
           gjahr
           vkorg
      from vbrk
      into table it_vbeln
      for all entries in it_sales
      where vbeln = it_sales-vbeln.
    select belnr
           shkzg
           dmbtr
           hkont
           matnr
           prctr
      from bseg
      into table it_fin1
      for all entries in it_vbeln
      where belnr = it_vbeln-vbeln
        and bukrs eq it_vbeln-vkorg
        and hkont >= '0000400001' and hkont <= '0000400251'.
*          hkont = it_sales-sakn1.
*          group by belnr shkzg hkont prctr.
    sort it_fin by hkont.
*delete it_fin where hkont >= '0000400001' and hkont <= '0000400251'.
    sort it_fin by belnr matnr prctr.
    if it_fin1[] is not initial.
      loop at it_fin1.
        move-corresponding it_fin1 to it_fin.
        collect it_fin.
        append it_fin.
      endloop.
          loop at it_fin.
            it_data_1-vbeln = it_fin-belnr.
            it_data_1-matnr = it_fin-matnr.
            it_data_1-prctr = it_fin-prctr.
*read table it_sales transporting no fields with key vbeln = it_data_1-vbeln matnr = it_Data_1-matnr prctr = it_data_1-prctr.
*if sy-subrc = 0.
*tabix = sy-tabix.
*FOR SALES INVOICE
            read table it_sales with key vbeln = it_data_1-vbeln
                                         matnr = it_data_1-matnr
                                         prctr = it_data_1-prctr.
            it_data_1-aubel = it_sales-aubel.
            it_data_1-vstel = it_sales-vstel.
            it_data_1-ktgrm = it_sales-ktgrm.
            it_data_1-matkl = it_sales-matkl.
            it_data_1-fkart = it_sales-fkart.
            it_data_1-vkorg = it_sales-vkorg.
            it_data_1-vtweg = it_sales-vtweg.
            it_data_1-fkdat = it_sales-fkdat.
            it_data_1-spart = it_sales-spart.
            if it_data_1-fkart = 'F2' or  it_data_1-fkart = 'IV' or  it_data_1-fkart = 'ZF2' or it_data_1-fkart = 'ZMIS' or  it_data_1-fkart = 'ZSF2' or it_data_1-fkart = 'ZMF2'.
              it_data_1-fkimg = it_sales-fkimg.
              it_data_1-shkzg = it_fin-shkzg.
              if it_data_1-shkzg = 'H'.
              it_data_1-dmbtr_1 = it_data_1-dmbtr_1 + it_fin-dmbtr.
              endif.
              if it_data_1-shkzg = 'S'.
              it_data_1-dmbtr_2 = it_data_1-dmbtr_2 + it_fin-dmbtr.
              endif.
              endif.
* H - Credit
* S - Debit
            if it_data_1-fkart = 'G2' or  it_data_1-fkart = 'IG' or  it_data_1-fkart = 'RE' or  it_data_1-fkart = 'ZCRE'.
              if it_data_1-fkart = 'G2'.
                it_data_1-cr_qty = '0'.
              else.
                it_data_1-cr_qty = it_sales-fkimg.
              endif.
              it_data_1-shkzg = it_fin-shkzg.
              if it_data_1-shkzg = 'H'.
              it_data_1-dmbtr_cr_1 = it_data_1-dmbtr_cr_1 + it_fin-dmbtr.
              endif.
              if it_data_1-shkzg = 'S'.
              it_data_1-dmbtr_cr_2 = it_data_1-dmbtr_cr_2 + it_fin-dmbtr.
              endif.
              endif.
            if it_data_1-fkart = 'L2'.
              it_data_1-dr_qty = 0.
              it_data_1-shkzg = it_fin-shkzg.
              if it_data_1-shkzg = 'H'.
              it_data_1-dmbtr_dr_1 = it_data_1-dmbtr_dr_1 + it_fin-dmbtr.
              endif.
              if it_data_1-shkzg = 'S'.
              it_data_1-dmbtr_dr_2 = it_data_1-dmbtr_dr_2 + it_fin-dmbtr .
              endif.
            endif.
            select single vtext into it_data_1-vtext from tvkmt where spras = 'EN' and ktgrm = it_data_1-ktgrm.
            select single txt20 into it_data_1-gltxt from skat where spras = 'EN' and saknr = it_data_1-hkont and ktopl = '1000'.
            select single maktg into it_data_1-maktg from makt where matnr = it_data_1-matnr.
            select single vtext into it_data_1-division from tspat where spart = it_data_1-spart and spras = 'EN'.
            append it_data_1.
            clear it_data_1.
            clear it_sales.
          endloop.
          sort it_data_1 by matnr vbeln.
if p_check ne 'X'.
          loop at it_data_1.
            concatenate it_data_1-matnr ' ' it_data_1-matkl ' ' it_data_1-ktgrm into it_final_1-count.
            move: it_data_1-matnr to it_final_1-matnr,
                  it_data_1-matkl to it_final_1-matkl,
                  it_data_1-ktgrm to it_final_1-ktgrm,
                  it_data_1-vtext to it_final_1-vtext,
                  it_data_1-sakn1 to it_final_1-sakn1,
                  it_data_1-spart to it_final_1-spart,
                  it_data_1-gltxt to it_final_1-gltxt,
                  it_data_1-division to it_final_1-division,
                  it_data_1-prctr to it_final_1-prctr,
                  it_data_1-maktg to it_final_1-maktg,
                  it_data_1-fkimg to it_final_1-fkimg,
*                  it_data_1-dmbtr to it_final_1-dmbtr,
*                  it_data_1-dmbtr_cr to it_final_1-dmbtr_cr,
*                  it_data_1-dmbtr_dr to it_final_1-dmbtr_dr,
                  it_data_1-dr_qty to it_final_1-dr_qty,
                  it_data_1-cr_qty to it_final_1-cr_qty,
                  it_data_1-dmbtr_1 TO it_final_1-dmbtr_1,
                  it_data_1-dmbtr_2 TO it_final_1-dmbtr_2,
                  it_data_1-dmbtr_dr_1 TO it_final_1-dmbtr_dr_1,
                  it_data_1-dmbtr_dr_2 TO it_final_1-dmbtr_dr_2,
                  it_data_1-dmbtr_cr_1 TO it_final_1-dmbtr_cr_1,
                  it_data_1-dmbtr_cr_2 TO it_final_1-dmbtr_cr_2.
            append it_final_1.
            clear it_data_1.
          endloop.
          data: wa_matnr_1 like mara-matnr,
                wa_matkl_1 like vbrp-matkl,
                wa_ktgrm_1 like vbrp-ktgrm,
                wa_hkont like bsis-hkont,
                wa_gltxt like skat-txt20,
                wa_hkont_dr like bsis-hkont,
                wa_hkont_cr like bsis-hkont,
                wa_maktg_1 like makt-maktg,
                wa_vtext_1 like tvkmt-vtext,
                wa_vtext_2 like tspat-vtext,
                wa_spart like vbrk-spart,
                wa_prctr like vbrp-prctr.
          sort it_final_1 by matnr matkl ktgrm division prctr hkont.
          loop at it_final_1.
            wa_matnr_1 = it_final_1-matnr.
            wa_matkl_1 = it_final_1-matkl.
            wa_ktgrm_1 = it_final_1-ktgrm.
            wa_vtext_1 = it_final_1-vtext.
            wa_spart = it_final_1-spart.
            wa_hkont = it_final_1-sakn1.
            wa_gltxt = it_final_1-gltxt.
            wa_vtext_2 = it_final_1-division.
            wa_prctr = it_final_1-prctr.
            wa_maktg_1 = it_final_1-maktg.
*        wa_hkont_cr = it_final_1-hkont_cr.
            at end of count.
              sum.
*          it_gl-hkont_dr = it_final_1-hkont_dr.
*          it_gl-hkont_cr = it_final_1-hkont_cr.
              it_gl-fkimg = it_final_1-fkimg.
              it_gl-dr_qty = it_final_1-dr_qty.
              it_gl-cr_qty = it_final_1-cr_qty.
              it_gl-dmbtr_1 = it_final_1-dmbtr_1.
              it_gl-dmbtr_2 = it_final_1-dmbtr_2.
              it_gl-dmbtr = it_final_1-dmbtr_1 - it_final_1-dmbtr_2.
*it_gl-dmbtr = it_final_1-dmbtr.
              it_gl-dmbtr_dr_1 = it_final_1-dmbtr_dr_1.
              it_gl-dmbtr_dr_2 = it_final_1-dmbtr_dr_2.
              it_gl-dmbtr_dr = it_final_1-dmbtr_dr_1 - it_final_1-dmbtr_dr_2.
*it_gl-dmbtr_dr = it_final_1-dmbtr_dr.
              it_gl-dmbtr_cr_1 = it_final_1-dmbtr_cr_1.
              it_gl-dmbtr_cr_2 = it_final_1-dmbtr_cr_2.
              it_gl-dmbtr_cr = it_final_1-dmbtr_cr_1 - it_final_1-dmbtr_cr_2.
*it_gl-dmbtr_cr = it_final_1-dmbtr_cr.
              it_gl-gltxt = wa_gltxt.
              it_gl-matnr = wa_matnr_1.
              it_gl-matkl = wa_matkl_1.
              it_gl-vtext = wa_vtext_1.
              it_gl-spart = wa_spart.
              it_gl-division = wa_vtext_2.
              it_gl-prctr = wa_prctr.
              it_gl-hkont = wa_hkont.
              it_gl-maktg = wa_maktg_1.
              it_gl-netqty = ( it_gl-fkimg + it_gl-dr_qty ) - ( it_gl-cr_qty ).
              it_gl-netval = ( it_gl-dmbtr + it_gl-dmbtr_dr ) - ( it_gl-dmbtr_cr ).
              append it_gl.
              clear wa_matnr_1.
              clear wa_vtext_1.
              clear wa_matkl_1.
              clear wa_ktgrm_1.
              clear wa_hkont.
              clear wa_hkont_dr.
              clear wa_hkont_cr.
            endat.
            clear it_final_1.
            clear it_gl.
          endloop.
        endif.
      endif.
  endif.
Do provide your valuable suggestions.
Regards,
Jitesh

Assuming you are using standard tables instead of sorted or hashed, your problem is likely here:
loop at it_fin.
  read table it_sales with
    key vbeln = it_data_1-vbeln
    matnr = it_data_1-matnr
    prctr = it_data_1-prctr.
endloop.
The READ without the BINARY SEARCH option is in effect a nested loop, so have a look at:
[Performance of Nested Loops|/people/rob.burbank/blog/2006/02/07/performance-of-nested-loops]
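A minimal sketch of that fix, assuming it_sales keeps its header line and stays a standard table (the report above already sorts it by vbeln matnr prctr, which is exactly the key needed):

sort it_sales by vbeln matnr prctr.   " sort once by the lookup key
loop at it_fin.
* BINARY SEARCH turns each lookup from a linear scan of
* it_sales into a binary search on the sorted key.
  read table it_sales with key vbeln = it_fin-belnr
                               matnr = it_fin-matnr
                               prctr = it_fin-prctr
                      binary search.
  if sy-subrc = 0.
*   ... fill it_data_1 from it_sales and it_fin as before ...
  endif.
endloop.

A sorted or hashed internal table with a matching key would achieve the same without the explicit SORT.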
Rob

Similar Messages

  • Performance concern with directory server implementation

I first posted this at the Metalink forum, and it was suggested that I post it here instead.
    Hi,
I'd like to get any feedback regarding the performance of an Oracle directory server implementation. Below is what I copied and pasted from the 9i Net Services Administrator's Guide. I found no 'directory server vendor documentation', so anything on that is welcome too.
    Performance
    Connect identifiers are stored in a directory server for all clients to access.
    Depending on the number of clients, there can be a significant load on a directory
    server.
    During a connect identifier lookup, a name is searched under a specific Oracle
    Context. Because of the scope of the lookup, you probably want users to experience
    relatively quick performance so that the database connect time is not affected. Users
may begin to notice slow connect times if lookups take more than one second.
You can resolve performance problems by changing the network topology or by
implementing replication.
    See Also: Directory server vendor documentation for details on
    resolving performance issues
    Thanks.
    Shannon

    Shannon,
you can find some tuning advice in the following:
a) OID Capacity Planning Considerations
    http://download-west.oracle.com/docs/cd/B10501_01/network.920/a96574/cap_plan.htm#1030019
    b) Tuning Considerations
    http://download-west.oracle.com/docs/cd/B10501_01/network.920/a96574/tuning.htm#999468
    c) oracle net services
    http://download-west.oracle.com/docs/cd/B10501_01/network.920/a96579/products.htm#1005697
you should start with a) to get an overview of what to be aware of
    --Olaf

Are there any performance concerns when referencing an image located in a central location?

    Are there any performance concerns when referencing an image located in a central location (example application server)?

    Hi
    Should not be an issue at all - we are only going to be limited by the network bandwidth
    Regards
    Tim
    http://blogs.oracle.com/xmlpublisher

Performance concerns when uploading a flat file into SEM-BPS

    Hi,
we are using the how-to document to upload a flat file into SEM-BPS.
In the same exit function, we need to derive missing characteristic values from reference data.
So, we are reading the reference data using API_SEMBPS_GETDATA.
The upload is taking around 10 minutes in the test systems. Our concern is: if it takes this long in test systems with less data, how long will it take in production with more data?
From what I can see, most of the time is spent reading the reference data.
I'm dealing with around 14,000 records of reference data and around the same number of uploaded records. Initially the system status is "number of cells to be formed: 33092"; after about 5 seconds it changes to "formed cells: 33000", and it stays there for about 8 minutes.
I'm not using any input/output layouts, just the exit planning function in the planning folder. So I assume the above status occurs while reading the reference data, not because of a huge number of uploaded records.
When I ran the same exit function with just the "read reference data" code commented out, the upload execution time came out as 2 minutes.
What is the best way to deal with this scenario?
In general, what is the best approach to reading reference data / deriving missing characteristic values?
I wasn't able to use "characteristic relationships using reference data", as it might not be suitable in my case. Even if it is, I'm missing enough documents/info/examples to deal with characteristic relationships; the documentation on help.sap.com is not enough in this case.
PS: Initially, when I tried to read the 14,000 records at the detailed level, the exit function returned no reference data, and in debug mode I could see a message "too many records".
Can a given layout read only a maximum of 9999 records / Excel rows?
The records are at CALMONTH level. As I don't need CALMONTH for the derivation of characteristic values, I have deleted CALMONTH from the "read reference data" level to avoid the above message. Does this relate to performance in any way?
    Appreciate any help

    Hello Hari,
it is tough to say what exactly causes this performance problem. Since you are dealing with custom coding (the how-to plus your derivation logic), I suggest doing an ABAP trace (SE30) to see where the time is really spent.
    The API call to read data should not take more than a few seconds. Test it separately by putting the API call into a simple ABAP program.
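For example, a minimal sketch of such a test (the report name is made up; the GETDATA call itself is only indicated by a comment, since its interface depends on your planning area and level):

report z_getdata_timing.

data: lv_t0 type i,
      lv_t1 type i.

get run time field lv_t0.
* ... call API_SEMBPS_GETDATA here with the same planning area,
* level and selection that the exit function uses ...
get run time field lv_t1.

* GET RUN TIME delivers microseconds, so this prints the
* elapsed time of the call under test.
write: / 'Runtime in microseconds:', lv_t1 - lv_t0.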
As Mary pointed out already, there's a 9999-line limit for layouts, and therefore for the GETDATA API as well.
    Note: The file upload/download how-to solution was never meant for mass data loads. This needs to be done using regular BW functionality.
    Regards
    Marc
    SAP NetWeaver RIG

  • Performance concern with deferred loading.

I've been reading a great tutorial on making Space Invaders. Each frame, it checks whether the sprite is in a HashMap instance. If it finds the key, the value is used for drawing; if it's not found, we load it. This is referred to as deferred loading, for later, more complex games. Anyway, my concern is that every frame it checks this hash map. Wouldn't this kill performance? Any better ways of handling this?
    Here's a link to the code:
    http://www.planetalia.com/cursos/Java-Invaders/JAVA-INVADERS-11.tutorial
    Thanks,
    Phil

Anyway, my concern is every frame, it checks this hash map. Wouldn't this kill performance?
Yeah, it might take the frame as much as one microsecond longer to display. But probably not nearly that much.
Computers are fast. Displays are slow. People's perceptions of displays are extremely slow. If you saw the code that runs as you drag a piece of code from one place to another in your text editor, you would (a) be astonished at how much code gets run repeatedly, and (b) notice that that code is much more complicated than just a little hashmap lookup.

  • TREX Performance Concerns

    Hi all,
We are interested in using TREX to handle duplicate prevention for business partners using BAS. Should there be any concerns with regard to performance if we have TREX installed on our CRM system?
    Thanks in advance. Points will be awarded for helpful answers.

    Hi,
    Please see TREX Performance Settings.
    Most probably you will need to adjust some settings to optimize performance.
    http://wiki.colar.net/sap_crm5_0_isa_trex_performance_settings
    Regards
    -jemaru

  • Performance concerns: WRT310N + WGA600N

Last night I picked up a WRT310N router and a WGA600N adapter, and I'm a bit concerned with the wireless-N performance. When first setting up the router I kept it on WEP encryption, just because I had multiple devices configured for WEP already and had not felt like going around updating them yet. I got the router and adapter working great and got everything set up. Now, the connection on the WGA600N was running in wireless-G because I was using WEP (the adapter notes that N will only work unencrypted / WPA*). So OK, I set the router to use a WPA2 key, and connect again. I notice things responding a bit slowly. I test-download a file from a local university mirror that gives 1.5 MB/s when using Ethernet and on wireless-G. However, it was capping at about 150 KB/s on wireless-N! I thought it had to be something wrong with the encryption, so I turned off encryption altogether. Nope, still the same results. Thinking that's because the wireless is in mixed mode on the router, I turned it to wireless-N only, so the adapter is the only wireless device. Same exact results. I tried between standard and wide signal, same thing. So I'm very confused as to why I'm getting such horrible performance through wireless-N compared to G.

Try upgrading the drivers of the adapter ... from http://linksys.com/download ...
Uninstall the older drivers & try installing the newer drivers ...

  • Performance concerns - Is what I have good enough?

    Before I ask my question, a quick comment about Aperture. Today, I went down to my local Apple store and spent about 2 hours working with Aperture and talking to some fairly knowledgeable Apple store employees. I told them what I had learned here on the forum, and they showed me how to work around many of the concerns I had with buying the program. My Nikon .NEF RAW files looked great!
So, what stopped me from buying the program today? One thing. No one could tell me whether I will get good performance from Aperture on my system the way it is currently configured. One person said I needed a new video card; the second person said no. I'd rather not buy a new video card right now if I don't have to.
    Therefore, I need some help from the participants here on the forum. Below are the specs to my system.
System: DP 2.5 GHz PowerMac G5 (this is fine)
Memory: 1.5 GB RAM (a little low, but I'm told still very acceptable)
    Video: (here's my concern)
    ATI Radeon 9600 XT:
    Chipset Model: ATY,RV360
    Type: Display
    Bus: AGP
    VRAM (Total): 128 MB
    Cinema HD Display: (fine)
    Display Type: LCD
    Resolution: 1920 x 1200
    Depth: 32-bit Color
    Core Image: Supported
    Main Display: Yes
    Mirror: Off
    Online: Yes
    Quartz Extreme: Supported
    Rotation: Supported
The Apple store had Aperture running on a Quad 2.5 GHz system with 2.5 GB of RAM and a recommended video card driving a 30" Cinema display. About 40% of the time I was using the program, there were noticeable delays between when I clicked on something and when it happened.
    So, here's my question. Have any of you with similarly configured systems gotten good performance from Aperture?
    Jeff Weinberg

I have the 9800 on a DP 1.8 GHz. The 9600, I believe, is a little more than half as fast as that card (on Core Image operations), which is again half as fast as the latest 800XT cards.
So if you can imagine some of the visual operations running at about 1/4 the speed you saw at the store, you'll have some idea of the speed you'd get. Many of the other operations, like loading RAW files, would be as fast, thanks to your faster computer. On my computer most operations are pretty quick, but sometimes things like straightening bog down a little.
It may still be usable, but parts of the UI will not be as snappy, for sure. It's a good concern to have; perhaps someone who has a 9600 will pipe up.
    How much memory does your 9600 have? If it has a lot that might help.

  • Regarding Performance concerns during the creation of Infocube

Hi,
I'm going to create an InfoCube on top of an ODS.
Please tell me some design tips for performance during the creation of the InfoCube, like partitioning and indexes.
Basically I'm loading from Oracle database tables using DB Connect.
Please advise.
I'll assign the points.
Bye,
Rizwan

    hi Rizwan,
    check these:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3a699d90-0201-0010-bc99-d5c0e3a2c87b
    assign points if useful ***
    Thanks,
    Raj

  • SQL Query Performance Concern

    Hi All,
I have two queries, shown below.
Query 1:
select A.name, C.description
  from A
  join B on A.id = B.id
  left join C on B.id = C.id
 where B.status <> 'PASS'
Query 2:
select A.name, C.description
  from A
  join B on A.id = B.id and B.status <> 'PASS'
  left join C on B.id = C.id
Which one will give the better performance? Tables A, B, and C each have more than a million records.
Regards,
Thiru

As it's an INNER JOIN, in the above case the queries are equivalent, and so is the plan created by the optimizer.
    Visakh

  • Troubleshooting Performance Concerns

    I have been having some serious 'disk thrashing' and beach balls when working inside Aperture.
And this is while doing things like sorting photos, moving between projects, and stacking/unstacking/extracting an item from a stack. Nothing I would deem 'processor intensive'.
I've got 4 GB of memory on my Mac Pro 2.66 GHz. I've launched Activity Monitor and I typically have 2 GB wired/active, 1.5 GB inactive, and about 600-700 MB free. So I'm not thinking that memory is the issue.
What is Aperture thrashing my disk for? It's a 500 GB 7200 RPM drive I got with the system, but when I do a few simple tasks like selecting about 50 photos (Canon 5D CR2 files), it pauses and the hard drive rattles away for 30 to 90 seconds, then comes back; and if I de-select them it'll do it again, frustratingly so if I click by accident and de-select.
    Anything I can check out to see if there is an issue with my installation of Aperture or is this standard behavior?
    This is a library that consists of RAW files from a Canon D60 (2002 - 2004), a Canon 20D (2004 - 2006) and a Canon 5D.
My largest project has 4655 files, the next has 2112, then 1229, and one more with 1557; the rest are under 500. I have 17 projects.
I had imported everything I had on an external disk. My library is in the neighborhood of 89 GB. I'm going through it, sorting, rejecting the really nasty ones, and then nuking the reject folder to remove them from the library. I have one vault on another RAID 0 array of 500 GB drives I've installed in my Mac Pro.
    Any Ideas?
    Thank you

    I have the same problem as you. My Mac pro is exactly the same as yours and I see the same behaviour. My video card is the ATi X1900.
In an attempt to cure this disk thrashing, I once removed Aperture & reinstalled from DVD. I did this after the 1.5 update came out, and I didn't see the slightest performance improvement.
I have the feeling that the hard disk (Seagate 500 GB) is slow and that it's the biggest bottleneck in my system. I'm planning to buy 4x Hitachi or Maxtor 500 GB hard disks and put them in one RAID 0 array. Backup will then be handled on external FW/USB drives or an internal fifth disk underneath the opticals. That and a fresh install of OS X should cure any performance problem, I hope.
    As of now, I'm still using the factory OS X install. Maybe this is also part of the problem?

  • Performance concerns about NSVs in large RT applications

    Hello,
    The end of http://www.ni.com/white-paper/12176/en says "...misuse of Shared Variables in a LabVIEW Real-Time application can result in poor machine performance.... Typical misuses include using too many Shared Variables.... In applications deployed to small Real-Time targets such as Compact RIO or Compact FieldPoint, no more than 25 variables should be used."
Does this apply to network-published I/O Aliases as well? I was planning to have ~100 I/O Aliases (cRIO-9082 + 2x NI-9144), exposed to a Windows PC for DSC logging and/or alarming -- would that be a problem?

    Hi JKSH,
Using the 9082 cRIO, you should be more than OK hosting that many I/O Aliases. Depending on the controller, you could run into memory limitations; however, that controller should have more than enough memory to support that many variables. The best way to get an idea of how the variables are affecting your system is to monitor the CPU and memory usage in MAX. I would also recommend installing only the necessary software onto the cRIO, not everything that is possible. I hope this helps, and if you have any more questions please let me know.
    Patrick H | National Instruments | Software Engineer

  • Performance concern about ttbulkcp

    Hi,
Can anyone give some suggestions on how to improve the performance of ttbulkcp? I find it loads data much slower than programs written with ODBC. Does it have a batch commit mode? In fact, importing large data files is so common for our customers that we need to provide a good, high-performance tool to facilitate it.
    thanks,
    michael

ttBulkCp is a general-purpose tool and so will always be slower than an ODBC program written to handle a specific case. For example, ttBulkCp has to first analyse the structure of the table. Then it has to parse each row read from the load file based on the structure of the table, possibly converting data types in the process. Then it has to actually do the insert (via ODBC). It's actually pretty efficient given its generic nature, but will never be as efficient as a specifically written ODBC program that targets a known table format and maybe also does not have to parse the input file so rigorously (or maybe does not even read from a file). Also, its performance depends heavily on the command line options that you use. If you commit too frequently (or not frequently enough) or checkpoint too often, this will also hurt its performance.
I believe that ttBulkCp does use batch insert in TT 7.0. As for 'batch commit' mode, you control that using the -xp command line option.
    Chris

  • Query performance concerning count

    Hi
    I join three tables based on indexed fields.
    Two tables have 25000 records, the third has only
    a couple of records. The result of the join is
    25000 records.
    The query runs in 0.2 seconds. Very fast.
I try to count the records, replacing the select of ID with count(ID) or count(*), and it takes 2 seconds to count the rows.
    Any ideas ?
Thank you!!

    COUNT(*) can most certainly make use of indexes, assuming you're using the cost-based optimizer and you have gathered statistics recently.
    First, let's walk through what happens when there are no statistics on a table
    scott@jcave > ANALYZE TABLE my_table DELETE STATISTICS;
    scott@jcave > desc my_table;
    Name                  Null?    Type
    MYKEY                 NOT NULL NUMBER
    VALUE                          VARCHAR2(100)
    scott@jcave > set autotrace on;
    scott@jcave > select count(*) from my_table;
      COUNT(*)
        227610
    Elapsed: 00:00:03.03
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE
       1    0   SORT (AGGREGATE)
       2    1     TABLE ACCESS (FULL) OF 'MY_TABLE'
    Statistics
             36  recursive calls
              0  db block gets
           1145  consistent gets
           1126  physical reads
            480  redo size
            381  bytes sent via SQL*Net to client
            503  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          1  rows processed
So, it took 3 seconds to full-scan the table when there were no statistics. Now, let's gather some statistics and try again. Note that I'll be using the ANALYZE command, but production databases should probably be gathering statistics with the dbms_stats package regularly.
    scott@jcave > analyze table my_table compute statistics;
    Table analyzed.
    Elapsed: 00:00:09.07
    scott@jcave > analyze table my_table compute statistics for all indexes;
    Table analyzed.
    Elapsed: 00:00:04.00
    scott@jcave > select count(*) from my_table;
      COUNT(*)
        227610
    Elapsed: 00:00:00.08
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=43 Card=1)
       1    0   SORT (AGGREGATE)
   2    1     INDEX (FAST FULL SCAN) OF 'SYS_C003704' (UNIQUE) (Cost=43 Card=227610)
    Statistics
             55  recursive calls
              2  db block gets
            517  consistent gets
            489  physical reads
            120  redo size
            381  bytes sent via SQL*Net to client
            503  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
          1  rows processed
We've gone down to a fraction of a second, now that the CBO knows that the index will be useful.
    The moral of the story is that if you haven't gathered statistics recently, you may want to try doing so.
    Justin
    Distributed Database Consulting, Inc.
    www.ddbcinc.com/askDDBC

  • Query performance concerning order by

    Hi
The following query takes 0.2 seconds:
    select * from messagerecipients order by messageid
The following one also takes the same amount of time:
    select * from messagerecipients order by groupid
I want to order by both messageid and groupid, so I try the following two ways:
    1)
    select * from messagerecipients order by messageid, groupid
    2)
    select * from
    ( select * from messagerecipients order by messageid)
    order by groupid
Both ways take around 2 seconds (10 times slower than each query on its own).
    Any ideas ?
Thanks!!

    I just noticed the following:
    messagerecipients has 39998 rows.
    The following runs in 0.2 secs
    select * from (
    select * from messagerecipients
    order by messageid, groupid
    ) where rownum < 38000
While the following runs in 2.5 secs:
    select * from (
    select * from messagerecipients
    order by messageid, groupid
    ) where rownum < 39000
Very strange!!!
Does that give you any ideas maybe?
Thank you once again!!
