Optimizing EtherCAT Performance when using Scan Engine

Hello everyone,
This week I have been researching the limitations of LabVIEW's Scan Engine. Our system is an EtherCAT setup (cRIO-9074) with two slaves (NI 9144). We have four 9235s, two 9237s, and two 9239 modules per chassis, for a total of 144 channels. I have read that a conservative estimate is 10 µs per channel per scan. With our setup, assuming we only scan, that works out to 1.44 ms per cycle, or a rate of roughly 694 Hz. I know that when using a shared variable, the biggest bottleneck is transmitting the data. For instance, if you scan at 100 Hz, it's difficult to transmit each sample that quickly, so it's best to send packets of scans (which you can see in my code).
With all of that said, I'm having difficulty scanning any faster than 125 Hz without railing out my CPU. I can record at 125 Hz at 96% CPU usage, but if I go down to 100 Hz, I'm at 80%. I noticed that the biggest factor in performance is the period of my top (scan) loop: scanning every period is far more demanding than scanning every other period. I have also adjusted the scan period in the EtherCAT preferences and see the same performance issues. Varying the transmission frequency (bottom loop) doesn't affect performance at all.
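[Editor's note: since LabVIEW block diagrams can't be shown inline, here is a minimal text-language sketch (Python) of the scan/transmit split described above: a fast producer loop batches individual scans into packets, and a slower consumer loop transmits whole packets. The packet size and helper names are made up for illustration, not taken from the attached VIs.]

```python
import queue

SCANS_PER_PACKET = 50  # made-up packet size; tune to your network throughput

def scan_loop(read_channels, tx_queue, n_scans):
    """Producer: batch individual scans into packets before handing
    them to the (slower) transmit loop."""
    packet = []
    for _ in range(n_scans):
        packet.append(read_channels())       # one scan of all channels
        if len(packet) == SCANS_PER_PACKET:  # flush a full packet
            tx_queue.put(packet)
            packet = []
    if packet:                               # flush the remainder
        tx_queue.put(packet)

def transmit_loop(tx_queue, send):
    """Consumer: runs at the packet rate, not the scan rate."""
    while True:
        pkt = tx_queue.get()
        if pkt is None:                      # sentinel: shut down
            break
        send(pkt)
```

The point of the split is that the transmit side only has to keep up with packets per second, not scans per second, which is why dropping an occasional packet is tolerable while missing a scan period is not.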
Basically, I have a few questions:
1. What rate can I reasonably expect from the EtherCAT system using the Scan Engine with 144 channels?
2. What percentage of CPU should be used when running a program? (Just because it can hit 100% doesn't mean you should aim for the max. Is 80% appropriate? Is 90% too high?)
3. Could you look through my code and see if I have any major issues? Does my transmission loop need to be a timed structure? I know that transmitting is not as critical as scanning, so if a queued packet doesn't get sent, it's not a big deal. This is my first time working with a real-time system, so I wouldn't be surprised if I've made a mistake there.
I have looked through almost every guide I could find on using the scan engine and programming the cRIO (that's how I learned the importance of synchronizing the timing to the scan engine and other useful facts) and haven't really found a definitive answer. I would appreciate any help on this subject.
P.S. I attached my scan/transmit loop, the host program and the VI where I get all of the shared variables (I use the same one three times to prevent 144 shared variables from being on the screen at the same time).
Thanks,
Seth
Attachments:
target - multi rate - variables - fileIO.vi ‏61 KB
Get Strain Values.vi ‏24 KB
Chasis 1 (Master).vi ‏85 KB

Hi,
It looks like you are using a 9074 chassis and two 9144 chassis, all three full of modules, and you are trying to read all the I/O channels in one scan?
First of all, if you set the Scan Engine rate on the controller (9074), then you have to synchronize your timed loop to the Scan Engine rather than using a different timebase, as you do in your scan VI.
Second, the best performance is achieved with I/O variables, not shared variables, and you should make sure not to allocate memory inside your timed loop. Memory will be allocated if an input of a variable is left unwired (the error cluster, for example) or if you build arrays from scratch, as you do in your scan VI.
Once you resolve these issues, you can time the code inside your loop to see how long it really takes and adjust your scan period accordingly. The 9074 does not have much processing power, so you should not expect microsecond timing. 500 Hz is probably a good estimate for the maximum rate with 144 channels, depending on how much time the additional microstrain calculation takes.
The EtherCAT driver ships with examples that show how to program these kinds of applications. Another way of avoiding the variables would be the programmatic approach using the variable API.
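[Editor's note: the "don't allocate memory inside the timed loop" advice is language-independent. A hedged illustration in Python (the LabVIEW analogue is wiring Initialize Array once outside the loop and using Replace Array Subset inside it; the function names below are invented for this sketch):]

```python
def process_scans_preallocated(read_channel, n_channels, n_scans):
    """Fill a preallocated buffer in place instead of growing a new
    array on every iteration, so no allocation happens inside the
    time-critical loop."""
    buffer = [0.0] * n_channels              # allocated once, outside the loop
    results = []
    for _ in range(n_scans):
        for ch in range(n_channels):
            buffer[ch] = read_channel(ch)    # overwrite in place
        results.append(sum(buffer) / n_channels)  # e.g. a per-scan mean
    return results
```

On a real-time target, allocation inside the loop is worse than it looks: it introduces jitter from the memory manager, which is exactly what a timed loop synchronized to the Scan Engine is trying to avoid.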
DirkW

Similar Messages

  • Poor performance when using kde desktop effect

    Hey,
    I'm having trouble when using kde effet (system settings -> desktop -> desktop effects).
    I have a dual-core E5200 3 GHz, 2 GB of PC8500 memory, and an HD4850 using the fglrx driver, but I get incredibly bad performance when using desktop effects and watching video. I can barely watch an 800x600 video in full-screen mode without stuttering, with X getting up to 40% CPU usage.
    It's really as if my graphics card isn't handling the rendering, but 3D acceleration is working; I can play 3D games without problems so far (as long as the desktop effects aren't enabled, because the CPU has a hard time handling both for recent games).
    So I guess it's some trouble with 2D acceleration or something like that. I read that some people have had this issue, but I didn't figure out a way to fix it.
    Here is my xorg.conf, in case something is wrong with it :
    Section "ServerLayout"
    Identifier "X.org Configured"
    Screen 0 "aticonfig-Screen[0]-0" 0 0
    InputDevice "Mouse0" "CorePointer"
    InputDevice "Keyboard0" "CoreKeyboard"
    EndSection
    Section "Files"
    ModulePath "/usr/lib/xorg/modules"
    FontPath "/usr/share/fonts/misc"
    FontPath "/usr/share/fonts/100dpi:unscaled"
    FontPath "/usr/share/fonts/75dpi:unscaled"
    FontPath "/usr/share/fonts/TTF"
    FontPath "/usr/share/fonts/Type1"
    EndSection
    Section "Module"
    Load "dri2"
    Load "extmod"
    Load "dbe"
    Load "record"
    Load "glx"
    Load "dri"
    EndSection
    Section "InputDevice"
    Identifier "Keyboard0"
    Driver "kbd"
    EndSection
    Section "InputDevice"
    Identifier "Mouse0"
    Driver "mouse"
    Option "Protocol" "auto"
    Option "Device" "/dev/input/mice"
    Option "ZAxisMapping" "4 5 6 7"
    EndSection
    Section "Monitor"
    Identifier "Monitor0"
    VendorName "Monitor Vendor"
    ModelName "Monitor Model"
    EndSection
    Section "Monitor"
    Identifier "aticonfig-Monitor[0]-0"
    Option "VendorName" "ATI Proprietary Driver"
    Option "ModelName" "Generic Autodetecting Monitor"
    Option "DPMS" "true"
    EndSection
    Section "Device"
    ### Available Driver options are:-
    ### Values: <i>: integer, <f>: float, <bool>: "True"/"False",
    ### <string>: "String", <freq>: "<f> Hz/kHz/MHz"
    ### [arg]: arg optional
    #Option "ShadowFB" # [<bool>]
    #Option "DefaultRefresh" # [<bool>]
    #Option "ModeSetClearScreen" # [<bool>]
    Identifier "Card0"
    Driver "vesa"
    VendorName "ATI Technologies Inc"
    BoardName "RV770 [Radeon HD 4850]"
    BusID "PCI:8:0:0"
    EndSection
    Section "Device"
    Identifier "aticonfig-Device[0]-0"
    Driver "fglrx"
    BusID "PCI:8:0:0"
    EndSection
    Section "Screen"
    Identifier "Screen0"
    Device "Card0"
    Monitor "Monitor0"
    SubSection "Display"
    Viewport 0 0
    Depth 1
    EndSubSection
    SubSection "Display"
    Viewport 0 0
    Depth 4
    EndSubSection
    SubSection "Display"
    Viewport 0 0
    Depth 8
    EndSubSection
    SubSection "Display"
    Viewport 0 0
    Depth 15
    EndSubSection
    SubSection "Display"
    Viewport 0 0
    Depth 16
    EndSubSection
    SubSection "Display"
    Viewport 0 0
    Depth 24
    EndSubSection
    EndSection
    Section "Screen"
    Identifier "aticonfig-Screen[0]-0"
    Device "aticonfig-Device[0]-0"
    Monitor "aticonfig-Monitor[0]-0"
    DefaultDepth 24
    SubSection "Display"
    Viewport 0 0
    Depth 24
    EndSubSection
    EndSection
    Thank you for any help.

    Section "Device"
    ### Available Driver options are:-
    ### Values: <i>: integer, <f>: float, <bool>: "True"/"False",
    ### <string>: "String", <freq>: "<f> Hz/kHz/MHz"
    ### [arg]: arg optional
    #Option "ShadowFB" # [<bool>]
    #Option "DefaultRefresh" # [<bool>]
    #Option "ModeSetClearScreen" # [<bool>]
    Identifier "Card0"
    Driver "vesa"
    VendorName "ATI Technologies Inc"
    BoardName "RV770 [Radeon HD 4850]"
    BusID "PCI:8:0:0"
    EndSection
    and
    Section "Monitor"
    Identifier "Monitor0"
    VendorName "Monitor Vendor"
    ModelName "Monitor Model"
    EndSection
    I see no reason for those to be there.
    Make a backup of your xorg.conf and remove or comment out those lines.

  • When use scan mode and FPGA simultaneously, why does the FIFO Read can not be used?

    Hello, I am using a CompactRIO-9025 for a project and am trying to use Scan Mode and FPGA simultaneously (hybrid mode). I have already built a project following this tutorial:
    http://digital.ni.com/public.nsf/allkb/0DB7FEF37C26AF85862575C400531690 and I have NI 9205, NI 9023 in Scan mode and NI 9871 in FPGA mode in the same project.
    In the FPGA target of the project, I added a FIFO and tried to use it to log data from the NI 9871. I wired the module I/O node to FIFO Write in the target VI. However, when I dropped the Invoke Method node onto the block diagram of the host VI and right-clicked it, there was no FIFO Read to choose. Could you please help me solve this problem?
    Thank you very much!!

    I am not aware of the Scan Engine blocking access to a transfer FIFO. But please reread this from the KB you quoted:
    "Secondly, the number of DMA FIFOs that can be used in the FPGA code will be reduced, since the scan engine uses two DMA FIFOs. Most FPGAs have 3 DMA FIFOs, so there will only be one DMA channel left to use in the FPGA code."
    This means you have only a single DMA FIFO left, which can be either Target-to-Host or Host-to-Target.
    Make sure that you configured the FIFO to be the correct direction for your needs...
    hope this helps,
    Norbert
    CEO: What exactly is stopping us from doing this?
    Expert: Geometry
    Marketing Manager: Just ignore it.

  • Is anyone able to explain really poor performance when using 'If Exists'?

    Hello all. We've recently seen a performance regression when using the 'if exists' construct, which we use throughout much of our code. The problem appears illogical, since it can be removed via a tiny modification that does not change the core code.
    I can demonstrate.
    This is the (simplified) form of the original code. Its purpose is to identify when a column value has changed by comparing a main table against a complex view:
    select 1 from MainTable m
    inner join ComplexView v on m.col2 = v.col2
    where m.col3 <> v.col3
    This does a table scan; however, the table has only 17,000 rows and the view only 7,000 rows. The SQL executes in approximately 3 seconds.
    However if we add the 'If Exists' construct around the original query, like such:
    if exists (
    select 1 from MainTable m
    inner join ComplexView v on m.col2 = v.col2
    where m.col3 <> v.col3
    )
    print 1
    The SQL now takes over 2 minutes to run. Note that the core SQL is unchanged; all I have done is wrap it with 'if exists'.
    I can't fathom why the 'if exists' construct takes so much longer, especially since the core code is unchanged. More importantly, I would like to understand why, since we commonly use this type of syntax.
    Any advice would be greatly appreciated

    OK, that's interesting.  Adding the top 1 clause greatly affects the performance (in a bad way).
    The original query (as below) still runs in a few seconds.
    select 1 from MainTable m
    inner join ComplexView v on m.col2 = v.col2
    where m.col3 <> v.col3
    The 'Top 1' query (as below) takes almost 2 minutes however.  It's exactly the same query, but with 'top 1' added to it.
    select top 1 1 from MainTable m
    inner join ComplexView v on m.col2 = v.col2
    where m.col3 <> v.col3
    I suspect that the TOP 1 performs a very similar operation to EXISTS, in that it is 'supposed' to exit as soon as it finds a single row that satisfies its predicate.
    It's still not really any closer to making me understand what is causing the issue, however.
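    [Editor's note: a common explanation, not confirmed in this thread, is that EXISTS and TOP 1 introduce a "row goal": the optimizer plans to stop after one row, which can push it into a nested-loops plan that performs badly against a complex view. One frequently suggested workaround is to run the full query (which already has a good plan) into a variable and branch on that instead of on EXISTS. A hedged T-SQL sketch, using the table and column names from the posts above:]

    ```sql
    -- Run the query that already has a good plan, capturing whether
    -- any row exists, then branch on the variable instead of EXISTS.
    DECLARE @found int = 0;

    SELECT @found = 1
    FROM MainTable m
    INNER JOIN ComplexView v ON m.col2 = v.col2
    WHERE m.col3 <> v.col3;

    IF @found = 1
        PRINT 1;
    ```

    This trades the early-exit behavior for the faster scan plan, which is a net win here because the full query only takes a few seconds. Comparing the actual execution plans of the two forms would confirm whether the row goal is really the cause.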

  • Poor Performance when using Question Pooling

    I'm wondering if anyone else out there is experiencing Captivate running very slowly when using question pooling. We have about 195 questions, some using screenshots in JPEG format.
    Looking at the Windows Task Manager, Captivate is using anywhere between 130 and 160 K of memory. What is going on here? It's hammering the system pretty hard. It takes a large effort just to reposition a screenshot or even move a distractor.
    I'm running this on a 3.20GHz machine with 3GB of RAM.
    Any Captivate Gurus out there care to tackle this one?
    Help.

    MtnBiker1966,
    I have noticed the same problem. I only have 60 slides with 43 questions, and the question pool appears to be a big drain on performance. Changing the buttons from Continue to Go to Next Slide helped a little, but performance still drags compared to not using a question pool. I even tried reducing the number of question pools, but that did not affect performance. The search continues.
    Darin

  • Decreased Battery Performance when using SL. Apple states I am alone..

    This thread is mainly aimed at people who are using a mid-2009 unibody MacBook Pro 13" or upwards that came pre-installed with Leopard. Since upgrading to Snow Leopard, have you noticed a significant decrease in how long the battery lasts while using the machine in the same fashion?
    Am I alone? I first reported this issue to the Dutch customer service team back in October 2009 and since then have fought it all the way to make Apple admit that there is a problem with a minority of customers. To this day according to executive customer relations, I am the only person who has reported this issue to them. I find this hard to believe as there is a mass of information/threads/blogs etc on the internet with many people complaining about the same issue.
    The following is an email I sent to the [email protected] email as a last resort.
    Dear Steve,
    I am writing to you as a last resort hoping that my complaint will be dealt with professionally as I have actually given up all hope with the Apple customer service. I do not expect that you will personally read this letter but I just trust that someone will care and pass this to someone that will treat my complaint seriously as I really am losing all faith with Apple.
    I purchased a brand new Macbook Pro 13" mid 2009 model at the end of June 2009. Around 3 weeks after I purchased the machine I received a message from Apple stating that I was eligible for a copy of Snow Leopard as it was compatible with my machine and because I had just purchased a new Mac. I upgraded the Mac to Snow Leopard without any issues. It wasn't until a few weeks later that I noticed a massive reduction in battery performance. When I ran the machine on Leopard I would easily get the advertised 7 hour battery life while doing basic web browsing with all the battery settings set to the Mac default recommended settings. On Snow Leopard I would not get anything more than 4 hours completing the same tasks.
    I contacted Apple about this and was provided with the following reference number XXXXXXXXX I was basically told that I should monitor what I was doing etc and try using the screen on the lowest setting to see if this would improve the performance, it did not. I am a busy man and work in graphic design so unfortunately I put up with the battery issue until my Hard Drive decided to die in April. I contacted Apple again about this and was again provided with a reference number XXXXXXXXXX this time I was asked to do various steps to help fix the machine which did not work. Eventually I had the machine booked in for repair. When the machine was booked in for repair I also stated that I would like the battery issue addressed once and for all as I still only get around 4 hours charge. I was told this would be looked into. After 2 weeks of having no machine (and losing out on work) I received my Mac back with a new Hard drive installed as the old one was corrupted. I asked the Apple employee if my battery had been addressed and it had not. I refused to except my machine back as the battery issue was not resolved. After another 2 weeks of not having my Mac (more loss of work on my account) I received it back this time a new battery and motherboard had been installed.
    I returned to my office, reinstalled Snow Leopard and my Adobe Master Collection, charged to 100%, and then tested the machine on battery. Again I got no more than 4 hours using BELOW the recommended Mac battery settings. I contacted Apple again by telephone (another 0900-number call at my expense) and was messed around left, right and centre. The employee on the telephone did not know what to do, so transferred me to one of the Mac level 2 agents. I have never been spoken to so rudely, and was even told that the only way I could expect 7 hours of battery life from a MacBook Pro 2009 model is to use it with the lid closed. Ridiculous, hey? I was also told by one of the previous customer service agents that I could try using the machine with the battery disconnected. Even more ridiculous, considering it is a unibody model.
    At this point I started doing research on the issue and found that I was far from alone with the battery issue. I have found numerous people suffering from the same issue. Below are a few links containing the same issue.
    http://discussions.apple.com/thread.jspa?messageID=10337524&#10337524
    http://discussions.apple.com/thread.jspa?messageID=11365892&#11365892
    http://forums.cnet.com/5208-21565_102-0.html?threadID=357366
    At this point I decided to try something different, something that was not recommended or mentioned by any Apple employee, sales or technical. I decided to reformat the HD and install Leopard back on the machine. I then completed a battery test and, lo and behold, my 7 hours of battery life were fully back. I think we can safely say that the issue is clearly with the OS and not the hardware.
    I then contacted Apple customer service by phone (more 0900 numbers at my expense). I spoke to a senior AppleCare advisor called XXXXX XXXXXXXX and explained my issue in full. I told xxxxxxx that I wanted this matter looked into and that I wanted a machine that would give me what is advertised using Snow Leopard, as nowhere on the Apple site or in the Snow Leopard documentation does it state 'Expect to lose 3 hours'.
    I provided xxxxxxxx with a system info file and he sent it to the Apple engineering team. A week went by and I received an email from him with the report from engineering. Engineering basically blamed it ALL on 3rd-party applications; they even blamed the weather widget, stating that it is constantly updating and will drain power. My remote access software was also blamed, which was not even running in the background.
    1: The weather widget updates when you select it not constantly.
    2: The weather widget is by DEFAULT installed on a machine so thus would be classed as an Apple product in my opinion.
    That evening I decided to reformat my HD and reinstall Snow Leopard. This time I installed ONLY Apple software updates and nothing else. I then tested the battery again (getting very boring now) using the same settings as previous tests. This time I managed to have 4 hrs and 15 mins. Just 15 minutes more than what I had previously had. I then contacted xxxxxxx again and provided him with another system information file from the clean install. I explained that I had tested this with just the airport on and basic web browsing on various sites (my exact words) Another week went by and I received a call from xxxxxxx with his report from engineering. He said that Engineering wanted to know if I had tested this with basic web browsing or sat on Youtube or other flash based video streaming sites for 4 hours as this could reduce the battery life.
    As you could imagine I was furious. I repeated myself to xxxxxxx about my tests and was told that he would go back to engineering. I received another call from xxxxxxx stating that engineering are looking into it. I asked how long it would take for an answer on this and we agreed on a 2 week time frame. It is now over 2 weeks and I have sent 2 emails to xxxxxx and still have not received a reply.
    This issue has been going on since October 2009, when I first reported it. We are now in July 2010 and I am still suffering with this; as you can imagine, this is unacceptable and not what one would expect from a company like Apple. My complaints are summarized below; I believe they speak for themselves.
    1: There is no disclaimer on the Apple site stating that Snow Leopard will decrease the performance of a battery on a machine that did not come preinstalled with Snow Leopard. Neither is this information provided on the installation PDF instructions that come with the Snow Leopard disc.
    2: If there were such a disclaimer on the Apple site, then why would I have my battery and motherboard replaced by the Apple repair team when there was clearly nothing wrong with them in the first place? Why was I never told this by any agent on the phone or in the Mac store? The answer seems clearly that this information is not available to the public or to Apple staff, which is a big waste of my time as well as theirs.
    3: I have missed out on work because of having my Mac in for repair for a total of 4 weeks. As mentioned above there was clearly no need to have had the battery or motherboard replaced.
    4: I have spent money calling the 0900 numbers on numerous occasions simply getting passed around from Apple employee to Apple employee and still have no answer to my problem.
    5: I have wasted so much of my own time clean erasing my hard drive including zero out of sectors and reinstalling operating systems, not to mention other software and Apple updates. All of this comes to many hours of total wasted time at my expense when I could have been working.
    Where we stand now is that I am on the verge of taking legal proceedings with the Dutch Consumentenbond as this machine is not suitable to run with the Snow Leopard OSX when Apple advertise that it is.
    I expect either a full refund of my MacBook Pro 13" mid 2009 or an exchange for a Macbook Pro 13" that comes with Snow Leopard preinstalled with the disc in the box. I also expect to be compensated for my loss of time, money I have spent on the phone to Apple and the loss of work while the machine was in for unnecessary repair.
    I am really at the end of my patience with this issue and feel that I have been much more than fair with my reports, honesty and patience.
    I trust to hear from somebody soon regarding this issue so that we can finally put it to a close once and for all in a professional manner.
    Regards
    I received a phone call within 14 hours of sending the email from executive customer relations. They came back with new information from engineering but did not think it was acceptable as engineering were now blaming my battery decrease on the assumption that my apartment might be too hot this causing the CPU to run at a higher temperature making the battery decrease quicker. We both agreed that it was an unacceptable answer.
    The day after, I received another call from them stating that engineering had now finished tests and come to the conclusion that, when testing Leopard and Snow Leopard on the MacBook Pro 2009, there was no difference in battery run time whatsoever. I found this hard to believe, so I asked exactly how the test was conducted. Was it conducted in the same fashion that I had spent the last 9 months talking about: 'Using below-default energy-saving settings with only AirPort on while completing basic web browsing'???
    They confirmed this was not the case; engineering had basically turned the machines on, turned the screen right down to 1 bar, and left them on the screen saver overnight. As you can imagine, I was speechless (for a few seconds).
    Apple seem consistent in NOT admitting that there is an issue with battery performance for a minority of customers. As it stands now, engineering in Europe have passed the case over to engineering in the US for further 'in-depth' tests.
    I know there are people out there that are suffering the same issues as myself and I welcome you to please contact me with your experiences. As the more of us that complain about this make it harder for Apple to deny this and brush it under the carpet.
    Please feel free to reply to this thread or email me directly on [email protected]

    Thomas A Reed wrote:
    I mean this in the kindest way but please read the post
    Please, nobody's going to waste their time reading all that! I don't know what you possibly could have said about this issue that took up so much space, but if you can't express it more concisely, it's probably not worth reading.
    I have used SL on both a unibody MBP and an older MBP. On the older MBP, when I upgraded from Leopard to SL, I got a noticeable increase in battery life immediately. I have no such comparison on the unibody MBP, since it had SL installed to begin with, but I'll put it this way: I haven't come close to running out of power on it. I used it once for about 4 hours and still had more than a 1/3 charge. I could have easily used it for another couple hours, at least. And this was with a 17" MBP, which uses more power than your 13", and without much in the way of battery-saving going on. (Screen at full brightness, AirPort on and in use, Bluetooth on, etc.)
    It's difficult to say that SL is the problem when it clearly isn't for everyone. What your problem is, I don't know, but maybe if you posted a more concise question you'd get some answers.
    I think ..
    This thread is mainly aimed at people that are using a mid 2009 unibody MacBook Pro 13" or upwards that came pre-installed with Leopard.....
    at the beginning of the thread I made it pretty clear, if you are not affected by this then I'm happy for you, but then the thread is clearly not aimed at you. I can guarantee that I have tested this exactly the same way I did with Leopard and there is a significant decrease. I also know there are people out there that have gained battery life but there have also been many threads containing people who have lost battery life.

  • Improving performance when using LineStripArray?

    I'm rendering approximately 680 LineStripArrays to represent an airport on a situation display. I read the data from a DXF file, and I imagine I can strip it down by removing parts of the airport I don't want to show.
    However, performance is poor: I'm only hitting 50 fps when I'm barely rendering anything. Apart from not displaying some LineStripArrays, what can I do to improve performance? Should I merge some LineStripArrays? Is there another geometry class I should use?
    My code is:
        // This array will hold the vertices.
        float[] vertices = new float[count * 3];
        // Create the colour.
        Color3f colour = new Color3f(1.0f, 1.0f, 1.0f);
        // Iterate over every vertex of the polyline.
        for (int i = 0; i < count; i++) {
            vertices[(i * 3) + 0] = (float) vertex.getX();
            vertices[(i * 3) + 1] = (float) vertex.getY();
            vertices[(i * 3) + 2] = (float) vertex.getZ();
        }
    And then:
        layerData = new LineStripArray(count, LineArray.COORDINATES, strip_counts);
        layerData.setCoordinates(0, vertices);
    Thanks
    Edited by: BobCrivens on Sep 18, 2008 7:39 AM
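    [Editor's note: one common Java 3D approach to the question above — a sketch, not tested against the poster's scene graph — is to merge many small strips into a single LineStripArray by concatenating the per-polyline vertex arrays and passing one combined stripVertexCounts array, so the scene graph holds one Shape3D instead of ~680. The array-merging part is plain Java and might look like this (class and field names invented for illustration):]

    ```java
    import java.util.List;

    // Concatenates per-polyline vertex arrays (x,y,z triplets) and records
    // each polyline's vertex count, ready to pass to a single
    // new LineStripArray(totalVertices, GeometryArray.COORDINATES, stripCounts).
    public final class StripMerger {
        public final float[] vertices;    // all strips, back to back
        public final int[] stripCounts;   // number of vertices in each strip

        public StripMerger(List<float[]> strips) {
            int total = 0;
            stripCounts = new int[strips.size()];
            for (int i = 0; i < strips.size(); i++) {
                stripCounts[i] = strips.get(i).length / 3;  // 3 floats per vertex
                total += strips.get(i).length;
            }
            vertices = new float[total];
            int offset = 0;
            for (float[] strip : strips) {
                System.arraycopy(strip, 0, vertices, offset, strip.length);
                offset += strip.length;
            }
        }
    }
    ```

    The rationale: each geometry node typically costs a draw call plus per-node scene-graph overhead, so collapsing hundreds of tiny strips into one geometry usually helps far more than trimming a few strips out.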

    Yes, it will cause performance issues.  Whether you notice it or not may be a different story.
    LabVIEW drawing engine starts at the bottom layer and works its way up.  So, it has to redraw the image and then redraw the control when you update the control/indicator.
    It's been a while since I benchmarked this on a project, but in LabVIEW 6.1, I looked into why my tests ran so slow, and saw a 10-15% decrease in test time by removing the background decorations I used to make the window pretty.  If I didn't show the GUI feedback for the test at all (no GUI windows for each test), I saw a 30% decrease in test time.
    You will also find that better video cards will have a positive effect on this, as they redraw the screen faster.  In the same benchmark, I was able to outperform the early PXI controllers with a slower PC because NI was using a lower end video chip for their onboard graphics.

  • Risk Analysis not performed when using IDM WS

    Hi ,
    We are using the SAP delivered IDM WebService for submitting Access requests to CUP 5.3 SP8 Patch1.
    We have defined the properties:
    1. Perform Risk Analysis on Request Submission - YES
    2. Risk Analysis Mandatory (approval stage) - YES, When Access Changed
    3. Approve Request Despite Risks - NO
    (This setting will enable the approver to approve the access request without performing a Risk Analysis, if the initial risk analysis doesn't identify any risk with the access request. But if there are risks, the approver need to mitigate the same before he can approve it.)
    But we have found that when submitting a request through the SAP-delivered IDM WS 'SAPGRC_AC_IDM_SUBMITREQUEST', the system DOESN'T perform risk analysis during request submission. But when the request is submitted directly in CUP, it does.
    We've referred to Note 1168508, where it's mentioned that this issue was fixed with SP7 Patch 1, but we are already on SP8.
    The Note says:
    "The following issues are resolved as part of Support Package 7 Patch 1:"
    and the last bullet point states that:
    "While submitting a CUP Request from web service, if the flag for Risk Analysis on submission is set not performing the Risk Analysis on submission."
    This feature was not working before, and hence we thought SAP had fixed it as mentioned in the Note. Has anybody succeeded in getting this feature working???
    Thanks & Regards,
    Anil

    Yes Dries, we have tried both, and we happen to see some exceptions on request submission through the WS.
    But the request still gets created. I have an open ticket with SAP to follow up; I'll update once I get this fixed.
    Exception Details:
    Exception during EJB call, Ignoring and trying Webservice Call 
[EXCEPTION]
com.virsa.ae.service.ServiceException: Exception in getting the results from the EJB service : com/virsa/cc/xsys/ejb/RiskAnalysis.execRiskAnalysis(Lcom/virsa/cc/xsys/webservices/dto/WSRAInputParamDTO;)Lcom/virsa/cc/xsys/w...
    Full Message Text
    Exception during EJB call, Ignoring and trying Webservice Call
     com.virsa.ae.service.ServiceException: Exception in getting the results from the EJB service : com/virsa/cc/xsys/ejb/RiskAnalysis.execRiskAnalysis(Lcom/virsa/cc/xsys/webservices/dto/WSRAInputParamDTO;)Lcom/virsa/cc/xsys/webservices/dto/RAResultDTO;
    at com.virsa.ae.service.sap.RiskAnalysisEJB53DAO.getViolations(RiskAnalysisEJB53DAO.java:294)
    at com.virsa.ae.service.sap.RiskAnalysisEJB53DAO.getViolations(RiskAnalysisEJB53DAO.java:418)....
    at com.sap.engine.core.thread.impl3.SingleThread.run(SingleThread.java:176)
    Caused by: java.lang.VerifyError: com/virsa/cc/xsys/ejb/RiskAnalysis.execRiskAnalysis(Lcom/virsa/cc/xsys/webservices/dto/WSRAInputParamDTO;)Lcom/virsa/cc/xsys/webservices/dto/RAResultDTO;
    at com.virsa.ae.service.sap.RiskAnalysisEJB53DAO.execRiskAnalysis(RiskAnalysisEJB53DAO.java:304)
    at com.virsa.ae.service.sap.RiskAnalysisEJB53DAO.getViolations(RiskAnalysisEJB53DAO.java:276)
    ... 44 more
    Thx,
    Anil

  • Poor Lightroom 5.7 Performance when using part repair tool

    Hi Lightroom pros,
    I've been using Lightroom 5.7 for about half a year and am very happy with it, except when I'm working on photos where I have to use the repair tool a lot.
    I'm specialized in car photography and often have to remove dust from the cars' paintwork and chrome. When I have to work on huge areas, Lightroom becomes slower and slower. I read that the repair tool costs a lot of performance, but I'm wondering if I could optimize something or if there is a workaround, because right now it's not usable: one click, then waiting 2 minutes...
    I'm using the newest Lightroom 64-bit version on a Windows 8.1 Enterprise 64-bit Laptop with an internal SSD drive. LR is installed on that drive and the Catalogue also is stored there. I have 70 GB of free space on that drive, my catalogue is around 5 GB, my Camera Raw Cache Size is 20 GB.
    The machine has 16 GB of RAM and an Intel Core i7 3 GHz Quadcore processor. My RAW files are stored on an external USB 3.0 Drive (not SSD).
    When working on that kind of picutres I usually close as much programms as possible, so that most of the time LR is the only working desktop programm. What seems strange to me is, that even if I'm waiting for minutes for LR to answer, that my processor and RAM are only using about 40% of it's power. The rest seems not to be used by LR?!
    I would be really thankful for tipps or ideas how to improve that issue because right now I'm really wasting a lot of time.
    Kind regards
    Torsten

    I guess this depends on what you are doing, what type of original files you are using, and your own perception of quality. I would believe that in most cases, if you do all the editing in LR except for the spot removal, get the photo to appear the way you want, and then lastly move it to PSE and remove the spots, there isn't really any noticeable loss of quality.
    I must admit I'm not a pro concerning color management. But I already downloaded the PSE trial version and realized that it cannot handle 16-bit color depth. Wouldn't that be noticeable after the final spot removal step?
    So, you should tell Adobe this, not me. This is a forum for users (like me) to help other users (like you).
    I know. I just wanted to emphasize why switching to PS wouldn't be the perfect solution for me (even if I'm considering it).
    Thanks so far!

  • Bad performance when using a complex database view with TopLink

    Hi
    Problem description
    1. I have a complex query that collects data from many DB tables. For this reason I created a database view based on this select. Using EJB 3.0 with TopLink I mapped this view to a Java object the same way I map database tables. The method I use to get the results is shown in this code snippet:
    public List<VwKartela> VwKartela(Integer pperid) {
        List<VwKartela> results = null;
        Session session = getSessionFactory().acquireSession();
        try {
            ExpressionBuilder bankfile = new ExpressionBuilder();
            Expression exp1 = bankfile.get("perid").equal(pperid);
            ReadAllQuery query = new ReadAllQuery();
            query.setReferenceClass(VwKartela.class);
            query.setSelectionCriteria(exp1);
            results = (List<VwKartela>) session.executeQuery(query);
        } finally {
            session.release(); // return the session to TopLink
        }
        return results;
    }
    When running the select on the view from SQL*Plus I don't have any performance problem.
    2. Question: How can I improve the performance? I consulted the TopLink docs but couldn't improve it.
    Does anyone have experience with such cases?
    Thank you
    Thanos

    Hi
    After my last tests I conclude the following:
    The query returns 1-30 records.
    Test 1: Using Form Builder
    - Execution time 7-8 seconds
    Test 2: Using JDeveloper/TopLink/EJB 3.0/ADF and Oracle AS 10.1.3.0
    - Execution time 25-27 seconds
    Test 3: Using JDBC/ADF and Oracle AS 10.1.3.0
    - Execution time 17-18 seconds
    When I use:
    session.setLogLevel(SessionLog.FINE) and
    session.setProfiler(new PerformanceProfiler())
    I don’t see any improvement in the execution time of the query.
    Thank you
    Thanos

  • Performance when using bind variables

    I'm trying to show myself that bind variables improve performance (I believe it, I just want to see it).
    I've created a simple table of 100,000 records, each row having a single column of type integer, and populated it with numbers between 1 and 100,000.
    Now, with a Java program, I delete 2,000 of the records by looping and using the loop counter in my WHERE predicate.
    My first Java program runs without bind variables:
    loop
        stmt.executeUpdate("delete from nobind_test where id = " + i);
    end loop
    My second Java program uses bind variables:
    pstmt = conn.prepareStatement("delete from bind_test where id = ?");
    loop
        pstmt.setInt(1, i);    // setInt instead of setString avoids an implicit type conversion
        pstmt.executeUpdate(); // executeUpdate, not executeQuery, for a DELETE
    end loop
    Monitoring of v$SQL shows that program one doesn't use bind variables, and program two does use bind variables.
    The trouble is that the program that does not use bind variables runs faster than the bind variable program.
    Can anyone tell me why this would be? Is my test too simple?
    Thanks.

    [email protected] wrote:
    I'm trying to show myself that bind variables improve performance (I believe it, I just want to see it).
    I've created a simple table of 100,000 records each row a single column of type integer. I populate it with a number between 1 and 100,000
    Now, with a JAVA program I delete 2,000 of the records by performing a loop and using the loop counter in my where predicate.
    Monitoring of v$SQL shows that program one doesn't use bind variables, and program two does use bind variables.
    The trouble is that the program that does not use bind variables runs faster than the bind variable program.
    Can anyone tell me why this would be? Is my test too simple?
    The point is that you have to find out where your test is spending most of the time.
    If you've just populated a table with 100,000 records and then start to delete randomly 2,000 of them, the database has to perform a full table scan for each of the records to be deleted.
    So probably most of the time is spent scanning the table over and over again, although most of the blocks might already be in your database buffer cache.
    The difference between the hard parse and the soft parse of such a simple statement might be negligible compared to the effort it takes to fulfill each delete execution.
    You might want to change the setup of your test: Add a primary key constraint to your test table and delete the rows using this primary key as predicate. Then the time it takes to locate the row to delete should be negligible compared to the hard parse / soft parse difference.
    You probably need to increase your iteration count, because deleting 2,000 records this way takes too little time and introduces measurement issues. Try to delete more rows; then you should be able to spot a significant and consistent difference between the two approaches.
    In order to prevent any performance issues from a potentially degenerated index due to numerous DML activities, you could also just change your test case to query for a particular column of the row corresponding to your predicate rather than deleting it.
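    A minimal sketch of the revised test described above (assuming an open Oracle JDBC `Connection` and a hypothetical table `bind_test(id number primary key)` populated with ids 1..n; the table name and counts are illustrative, not from the original post):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class BindTest {

    // Builds the literal-SQL text used in the no-bind variant; a new
    // SQL text per id forces a hard parse on every execution.
    static String literalDelete(int id) {
        return "delete from bind_test where id = " + id;
    }

    // No bind variables: every iteration submits a distinct statement text.
    static long runWithoutBinds(Connection conn, int n) throws Exception {
        long start = System.nanoTime();
        try (Statement stmt = conn.createStatement()) {
            for (int i = 1; i <= n; i++) {
                stmt.executeUpdate(literalDelete(i));
            }
        }
        return System.nanoTime() - start;
    }

    // Bind variable: one parse, n executions, each row located via the
    // primary-key index rather than a full table scan.
    static long runWithBinds(Connection conn, int n) throws Exception {
        long start = System.nanoTime();
        try (PreparedStatement pstmt =
                 conn.prepareStatement("delete from bind_test where id = ?")) {
            for (int i = 1; i <= n; i++) {
                pstmt.setInt(1, i);
                pstmt.executeUpdate();
            }
        }
        return System.nanoTime() - start;
    }
}
```

    With the primary-key predicate in place, the per-delete work is small and constant, so the hard-parse overhead of the literal-SQL variant should dominate once the iteration count is large enough.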
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Bad INSERT performance when using GUIDs for indexes

    Hi,
    We use an Oracle 9.2.0.6 database on Windows XP Pro. The application (.NET v1.1) uses ODP.NET. All PKs of the tables are GUIDs, represented in Oracle as RAW(16) columns.
    When testing with mass data we increasingly see a problem with bad INSERT performance on some tables that contain many rows (~10M). Those tables have a RAW(16) PK and an additional non-unique index that is also on a RAW(16) column (both are standard B*tree indexes). A PerfStat report shows that there is a lot of activity on the index tablespace.
    When I analyze the related table and its indexes I see a very high clustering factor.
    Is there a way to improve the insert performance in that case? Use another type of index? Generally avoid indexed RAW columns?
    Please help.
    Daniel
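    One common mitigation for this pattern (a sketch, not a confirmed fix for this particular schema) is to make the 16-byte keys roughly time-ordered, so new entries land on the right-hand edge of the B*tree instead of scattering across random leaf blocks. A hypothetical generator in Java:

```java
import java.nio.ByteBuffer;
import java.security.SecureRandom;

public class SequentialGuid {
    private static final SecureRandom RND = new SecureRandom();

    // 16-byte key: an 8-byte millisecond-timestamp prefix followed by
    // 8 random bytes. Keys generated later compare higher byte-wise,
    // so index inserts cluster on the right-most leaf blocks.
    public static byte[] next() {
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.putLong(System.currentTimeMillis()); // ordered prefix
        byte[] tail = new byte[8];
        RND.nextBytes(tail);                     // uniqueness within one millisecond
        buf.put(tail);
        return buf.array();
    }
}
```

    The trade-off is that keys are no longer opaque (they leak creation time), and a hot right-most index block can become a contention point under concurrent inserts; whether that matters depends on the application.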


  • Performance when using two computers

    I have an iMac and a MacBook Air connected to an AirPort Extreme wireless internet connection, but when both are being used the performance of each drops considerably. Can the Extreme not support two devices?
    The internet connection is a standard 5000 kbit/s. Do I need to upgrade?
    Thanks

    The AirPort Extreme base station (AEBS) can easily support two clients. However, the RF environment you are in may be experiencing significant interference and may only be able to support one full-speed client.

  • Poor performance when using skip-level hierarchies

    Hi there,
    currently we have big performance issues when drilling in a skip-level hierarchy (each drill takes around 10 seconds).
    OBIEE produces 4 physical SQL statements when drilling, for example, into the 4th level (one SQL statement for each level). The statements run in parallel and are pretty fast (the database doesn't need more than 0.5 seconds to produce the results), but ... and here is probably where the problem lies ... putting the 4 results together in OBIEE takes another 8 seconds.
    These are not big datasets the database is returning: around 5-20 records for each select statement.
    The question is: why does it take so long to put the data together on the server? Do we have to reconfigure some parameters to make it faster?
    Please guide.
    Regards,
    Rafael

    If you really and exclusively want "OBIEE can handle such queries" - i.e. not touch the database - then you had best put a clever caching strategy in place.
    The first angle of attack should be the database itself, though. Best sit down with a data architect and/or your DBA to find the best physical setup possible, and once you've optimized that (with regard to the kind of queries emitted against it) you can move up to the OBIS. Always try to fix the issue as close to the source as possible.

  • Same AppDomain when using an Engine.NewExecution

    Hi everyone,
    I looked at the forum but I don't see any solution for this.
    Right now, I'm running sequence files through the TestStand Engine from a custom C# application we made. My problem is that my C# application and the TestStand execution don't share the same AppDomain.
    Is there a way, when I load the TestStand Engine, to share the same AppDomain between them?
    Thank you
    I hope I was clear about my issue. If anything is missing, let me know and I will add more details.
    Ross

    You can't currently make the executions use the same AppDomain as your UI's. However, you can share .NET objects between the AppDomains if they are derived from MarshalByRefObject or are serializable (serializable objects are copied rather than passed by reference). If you are using TestStand 2010 or higher (this might work in older versions too, but I'm not 100% sure), you can then pass the object between AppDomains using a TestStand Object Reference variable. Use a StationGlobal or another variable that is accessible from both locations: call SetValInterface from one, and GetValInterface from the other. The assembly containing these shared objects must also either be in the GAC or in your application's base path (i.e. the directory of the exe), or you will get errors.
    Hope this helps,
    -Doug
