How does the Performance Meter in Vision Assistant calculate the estimated time of each step?

Hello,
I want to know how the Performance Meter in Vision Assistant calculates the estimated time of each step. What is the concept behind this?
Thanks in advance.

Hello,
I have performed the same operations in NI Vision Assistant and in LabVIEW (without OCR):
The VI is attached.
The values are very similar. Maybe I do not understand your problem correctly. Please explain, if I missed the point.
Best regards,
https://decibel.ni.com/content/blogs/kl3m3n
"Kudos: Users may give one another Kudos on the forums for posts that they found particularly helpful or insightful."
Attachments:
time Estimation_2010.vi ‏22 KB
NIVISION.png ‏150 KB

Similar Messages

  • Profile Performance in LabVIEW vs. Performance Meter in Vision Assistant: results don't match

    Hi everyone,
    I ran into a strange problem with the performance timing between these two measurements.
    Here is my test:
    - I used the built-in Bracket example provided with LabVIEW / Vision Assistant. It uses two pattern matches, one edge detection algorithm, and two calipers (one for calculating a midpoint and the other for finding the angle between three points).
    - When I ran the script provided by NI in Vision Assistant, it reported an average inspection time of 12.45 ms (this varies from 12-13 ms; my guess is the little variation might be due to my CPU/processing load).
    - Then I converted the script to a VI and used Profile Performance in LabVIEW, and surprisingly it showed far more than expected, almost ~300 ms (at first I thought it was because of the rotated search etc., but none of that makes sense to me here).
    Now my questions are:
    - Are the algorithms used in both tools the same? (I thought they were.)
    - IMAQ Read Image and Vision Info takes more than 100 ms in LabVIEW, which doesn't count for Vision Assistant. Why? (I thought the template image might be loaded into cache; am I right?)
    - What about IMAQ Read File? (Doesn't it count for Vision Assistant? In LabVIEW it takes around 15 ms.)
    - Same for pattern matching: in Vision Assistant it takes around 3 ms (this is also not consistent); in LabVIEW it takes almost 3 times as long (around 15 ms).
    - Is this a bug, am I missing something, or is this how it is expected to behave?
    Please find the attachments below.
    - Vision Assistant: v12, Build 20120605072143
    - LabVIEW: 12.0f3
    Thanks
    uday,
    Please mark the solution as accepted if your problem is solved and help the author by clicking on Kudos.
    Certified LabVIEW Associate Developer (CLAD), using LV13
    Attachments:
    Performance_test.zip ‏546 KB
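    One plausible explanation for part of the gap (a sketch of the idea, not NI's documented behavior) is one-time costs: LabVIEW's Profile Performance window counts every call, including first-run loads such as reading a file or template into memory, while the Performance Meter reports steady-state per-step times on an image that is already in memory. The effect is easy to demonstrate; the 50 ms / 5 ms figures below are made up purely for illustration:

```python
import time

def timed_runs(func, n=10):
    """Return per-run durations in ms for n calls of func."""
    durations = []
    for _ in range(n):
        t0 = time.perf_counter()
        func()
        durations.append((time.perf_counter() - t0) * 1000.0)
    return durations

# Simulated "inspection": the first call pays a one-time setup cost
# (analogous to loading an image or template into cache).
_cache = {}
def inspect():
    if "template" not in _cache:
        time.sleep(0.05)          # one-time load, ~50 ms (made-up figure)
        _cache["template"] = True
    time.sleep(0.005)             # steady-state processing, ~5 ms (made-up figure)

runs = timed_runs(inspect)
print(f"first run: {runs[0]:.1f} ms, later runs: {sum(runs[1:]) / len(runs[1:]):.1f} ms avg")
```

    A profiler that averages over all runs (or reports only the first) will disagree with one that reports the steady state, even though both are "correct".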

    Hmm Bruce, thanks again for the reply.
    - When I first read your reply, I was OK with it. But after reading it multiple times, I realized that you hadn't checked my code and explanation first.
    - I have added the code and a screenshot of the profile in both VA and LabVIEW.
    In both Vision Assistant and LabVIEW I am loading the image only once.
    - It is accounted for in LabVIEW but not in VA, because the image is already in cache. But what about the time to put the image into the cache?
    I do understand that when capturing the image live from a camera, things are completely different.
    - Loading the template image multiple times?
    This is where I was very confused, because I didn't even think of doing that. I am well aware of that issue.
    - Run Setup Match Pattern once?
    Sorry, so far I haven't seen any example that does pattern matching on multiple images and runs Setup Match Pattern every time. But it is negligible time, so I wouldn't mind.
    - Loading images for processing and loading a different template for each image?
    You are completely mistaken here, and I don't see how it relates to my specific question.
    Briefly explaining again:
    - I open an image both in LabVIEW and VA.
    - I create two pattern match steps and calipers (negligible).
    - The pattern match step in VA shows a longest time of 4.65 ms, whereas IMAQ Match Pattern showed me 15.6 ms.
    - I am convinced about the IMAQ Read and Vision Info timing, because it only counts in the initial phase when running a multi-image inspection.
    But I am running only once, so shouldn't Vision Assistant show that time as well?
    - I do understand that LabVIEW has many more features for parallel execution and other things than Vision Assistant.
    - Yeah, about the 100 ms vs. 10 ms, I completely agree. I take the Vision Assistant profile timing as the ideal values (correct me if I am wrong).
    - I especially like the last line: you cannot compare the speeds of the two methods.
    Please let me know if I am thinking in a completely wrong way, or at least somewhat on the right path.
    Thanks
    uday,

  • How Program RCPMAU03 or FM CX_IN_HOUSE_PRODUCTION_TIME calculates Prod time

    Hi All,
    FM CX_IN_HOUSE_PRODUCTION_TIME is used in program RCPMAU03, which is used for the material master update from transaction CA47 or CA47N. I want to know how this FM calculates the Processing (BEARZ) and Interoperation (MARC-TRANZ) times.
    Thanks in advance. Helpful answers will be rewarded.
    Kartavya Kaushik.

    hi,
    From the routing screen: Extras > Scheduling > Schedule (give a variant name) > Scheduling results > Update material master. This writes the info to a temp file. Use transaction CA96 to update the material master. This updates the lot-size-dependent in-house production time in the Work Scheduling view of the material master and deletes the lot-size-independent in-house production time in the MRP 2 view. This tool gives more accurate results for your in-house production time.
    > Hello experts,
    > I would like to calculate the Scheduled in-house
    > production time based on route.
    > Actually this is done in the prod. route in tcode
    > CA02. (via menu Extras => scheduling => schedule).
    > After getting the schedule table press the
    > 'scheduling results' button I received the Processing
    > time, Setup and teardown time,... Scheduled in-house
    > production time.
    > So I'm looking for a function or any tool to use it
    > in report.
    >
    > thanks,
    > Eli

  • Vision Assistant or Vision Builder tools in LabVIEW

    hi, I just wanted to know if anyone knows how to use Vision Assistant or Vision Builder in LabVIEW with a live image from a camera that is already installed as a VISA resource. Cheers

    Hi reivaxseniab,
    I think the best place to start would be one of the tutorials (link below).
    http://digital.ni.com/manuals.nsf/websearch/7449F8D894ED3F9D8625745100771596
    This should help you with using a live image from a camera in LabVIEW.
    Kind regards,
    Josh E
    Applications Engineer
    National Instruments UK & Ireland

  • How do you track a moving object using LabVIEW and Vision Assistant

    I am using Vision and LabVIEW to create a program that tracks and follows a moving object using a high-end camera. Basically, it detects a foreign object, locks on to it, and follows it wherever it goes in a controlled-size room.
    I have no idea how to do this. Please help. Or is there an available example?
    Thanks.

    Hello,
    It sounds like you want to look into a Vision technique called pattern matching. Using our Vision tools, you can look for an image, called a template, within another image. Vision will scan over the entire image of interest, checking whether there are any matches with the template. It will return the number of matches and their coordinates within the image of interest. You would take a picture of the object and use it as the template to search for. Then, take a picture of the entire room and use pattern matching to determine at what coordinates that template is found in the picture. Doing this multiple times, you can track the movement of the object as it moves throughout the room. If you have a motion system that has to move the camera for you, it will complicate matters very much, but it would still be possible. You would need a feedback loop that, depending on where the object is located, adjusts the angle of the camera appropriately.
    There are a number of different examples that perform pattern matching. There are three available in the example finder. In LabVIEW, navigate to "Help » Find Examples". On the "Browse" tab, browse according to "Directory Structure". Navigate to "Vision » 2. Functions". There are examples for "Pattern Matching", "Color Pattern Matching", and "Geometric Matching". There are also dozens of pattern matching documents and example programs on our website. From the homepage at www.ni.com, you can search the entire site (top-right corner) for the keywords "pattern matching".
    If you have Vision Assistant, you can use this to set up the pattern matching sequence.  When it is complete and customized to your liking, you can convert that into LabVIEW code by navigating to "Tools » Create LabVIEW VI..."  This is probably the easiest way to customize any type of vision application in general.
    I hope this helps you get started.  Take care and good luck!
    Regards, Aaron B.
    Applications Engineering
    National Instruments
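    To make the template-search idea above concrete, here is a minimal, hypothetical Python sketch: a brute-force sum-of-squared-differences scan over every offset. Real tools such as IMAQ Match Pattern use far faster pyramid-based and rotation-aware search, so treat this only as the concept, not the NI implementation:

```python
import random

def match_template(image, template):
    """Brute-force template search: return the (row, col) offset that
    minimizes the sum of squared differences between the template and
    the image patch at that offset (a toy stand-in for pattern matching)."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

random.seed(0)
scene = [[random.random() for _ in range(40)] for _ in range(40)]
tmpl = [row[25:33] for row in scene[12:20]]   # template planted at (12, 25)
print(match_template(scene, tmpl))            # → (12, 25)
```

    Extending this to tracking is exactly what the post describes: run the search on each new frame and record the best-match coordinates over time.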

  • So I want to use an imaging device with Vision Assistant, then incorporate that into my LabVIEW 8.6 code. How can I do this?

    So here's what I have so far: a LabVIEW program to interface with my microscope stage and autofocus. I now want to interface the camera using Vision Assistant.
    The procedure I want the program to perform is taking these CR-39 detectors, placing them on the stage, and having the camera image the detector frame by frame. The program should go to the very top corner of the detector on the stage. Then the camera would autofocus (if needed) and take an image, capturing an image for every micron of the detector on that row, then move to the next row, and so on, until it has successfully imaged the whole detector. Is anyone well versed in Vision Assistant and willing to give some advice on this? It would be greatly appreciated.
    The purpose is to eliminate the previous method of acquiring images by manually focusing on the tracks created by the particles. I want this program to run itself without a user being present, so just pressing a button would send the program into autopilot and it would perform the procedure.
    What are CR-39 detectors?
    Allyl Diglycol Carbonate (ADC) is a plastic polymer. When ionizing radiation passes through the CR-39, it breaks the chemical bonds of the polymer and creates a latent damage trail along the trajectory of the particle. These trails are then etched using a chemical solution to make the tracks larger, so they can finally be seen using an optical microscope.
    Thank you,
    HumanCondition

    Hi Justin,
    Yes I already have a vi that controls the stage.
    So the camera is stationary above the stage. What I want to do is get the camera to take pictures of the detector by moving the stage frame by frame: for example, starting from the top right corner of the detector (taking an image), ending at the top left corner (taking an image), then having the stage move down a row and starting the process over again until each frame (or piece) has been imaged.
    My goal is that while the stage moves to each little frame of the detector, it autofocuses if necessary and takes an image for each frame of the detector.
    Using the auto focus requires the moving of the stage up and down.
    HumanCondition

  • How to get the actual value rather than the pixel value in Vision Assistant

    How do I get the actual (real-world) value rather than the pixel value from Vision Assistant in my program? The attached figures show my program, the result I currently get, and the result I want.

    Check the documentation for the "ADF Dialog Framework"
    It may help
    Atilio
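    For reference, mapping pixel measurements to real-world units is what the Image Calibration step in Vision Assistant is for. At its simplest, the mapping is just a per-axis scale and an origin offset, as in this hypothetical sketch (real calibration can also correct perspective and lens distortion, which this ignores):

```python
def pixel_to_real(px, py, scale_x, scale_y, origin=(0.0, 0.0)):
    """Convert pixel coordinates to real-world units using a simple
    per-axis scale (e.g. mm per pixel) and an origin offset.
    This is the basic idea behind image calibration, not NI's full model."""
    ox, oy = origin
    return ((px - ox) * scale_x, (py - oy) * scale_y)

# A known target: a 100 px span corresponds to 25 mm → 0.25 mm/px
print(pixel_to_real(120, 80, 0.25, 0.25, origin=(20, 20)))  # → (25.0, 15.0)
```

    The scale factors come from imaging an object of known size (a calibration grid or ruler) and dividing real size by its pixel size.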

  • Vision DevMod: How to create large Gaussian filter kernels with integer values like Vision Assistant uses

    Hello,
    in Vision Assistant it is possible to create filter kernels of arbitrary size, e.g. a 75x75 Gaussian kernel. These kernels contain integer-only elements. I tried to find an equivalent function in the Vision Development Module, but have been unable to find one so far. So I still create a Vision Assistant script with such a kernel and export it to a LabVIEW VI in order to obtain the kernel. Any ideas on how to create such kernels programmatically?

    - You can generate custom kernels in LabVIEW as well, but you need the kernel either as a string or as a numeric array. Without one of these two, I am not sure how to do it.
    - Please check this for building kernels:
    http://zone.ni.com/reference/en-XX/help/370281M-01/imaqvision/imaq_buildkernel/
    - If you can programmatically generate the kernel string, it will be converted to a kernel in double format automatically; connect this to the convolution VI.
    Thanks
    uday,
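    One way to approximate Vision Assistant-style integer kernels programmatically is to sample a 2D Gaussian and normalize it so the smallest (corner) weight becomes 1 before rounding. This is a guess at the construction, not NI's documented algorithm, and the sigma heuristic below is made up:

```python
import math

def integer_gaussian_kernel(size, sigma=None):
    """Build a size x size Gaussian kernel with integer-only elements by
    scaling so the corner (smallest) weight rounds to 1.
    Mimics the *style* of Vision Assistant's kernels; the exact values
    NI generates are not documented, so treat this as an approximation."""
    if sigma is None:
        sigma = size / 6.0              # assumed heuristic, not NI's rule
    half = size // 2
    g = [[math.exp(-((x - half) ** 2 + (y - half) ** 2) / (2 * sigma ** 2))
          for x in range(size)] for y in range(size)]
    corner = g[0][0]                    # smallest weight in the grid
    return [[max(1, round(v / corner)) for v in row] for row in g]

k = integer_gaussian_kernel(5)
for row in k:
    print(row)
```

    The resulting kernel is symmetric, has 1s in the corners, and peaks at the center; for a 75x75 kernel you would only change the `size` argument.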

  • How to train OCR using VISION ASSISTANT for multiple character recognition

    Sir, I have tried training OCR using Vision Assistant for character recognition. For the process I used a fixed-focus camera, but the characters I had trained were undetectable. Please provide me a workable solution to the problem.
    Thank you.
    I have attached my project description and also the .vi file of my work towards it.
    Attachments:
    Project phase I.vi ‏138 KB
    WP_20140814_17_27_38_Pro.jpg ‏1444 KB

    Can you post a real jpg instead of renaming a bmp to jpg?

  • Performance meter

    Hello,
    Recently I found that in Vision Builder for Automated Inspection there is a tool called "Performance Meter" in the "Vision Assistant" block; basically we can use it to measure the processing time. I'm wondering whether there is a similar tool in other blocks, or for instance a performance meter for the whole process, not only the "Vision Assistant" block? Thanks!
    Best regards,
    Yuquan  

    You can find some vision examples at https://decibel.ni.com/content/docs/DOC-31892
    Atom
    Certified LabVIEW Associate Developer

  • Image processing with LabVIEW Vision Assistant - free NI Vision Assistant tutorial

    The attachment is more complete, because I cannot post this much text here.
    http://shixinhua.com/  (Shi Xinhua's homepage)
    http://shixinhua.com/imganalyse/2013/09/762.html  (full text)
    "A Practical Tutorial on Image Processing Based on Vision Assistant"
    Chapter 2: Interface and Menus
    Section 1: The startup welcome screen
    http://shixinhua.com/imganalyse/2013/01/278.html
    Section 2: The functional interface
    http://shixinhua.com/imganalyse/2013/01/280.html
    Acquire Images interface
    Browse Images interface
    Process Images interface
    Section 3: The File menu
    http://shixinhua.com/imganalyse/2013/01/283.html
    Open Image
    Open AVI File
    Save Image
    New Script
    Open Script
    Save Script
    Save Script As
    Acquire Image
    Browse Images
    Process Images
    Print Image
    Preferences
    Exit
    Section 4: The Edit menu
    http://shixinhua.com/imganalyse/2013/01/284.html
    Edit Step
    Cut
    Copy
    Paste
    Delete
    Section 5: The View menu
    http://shixinhua.com/imganalyse/2013/01/286.html
    Zoom In
    Zoom Out
    Zoom 1:1
    Zoom to Fit
    Section 6: The Image menu
    http://shixinhua.com/imganalyse/2013/01/288.html
    Histogram
    Line Profile
    Measure
    3D View
    Brightness
    Set Coordinate System
    Image Mask
    Geometry
    Image Buffer
    Get Image
    Image Calibration
    Image Correction
    Overlay
    Run LabVIEW VI
    Section 7: The Color menu
    http://shixinhua.com/imganalyse/2013/01/289.html
    Color Operators
    Color Plane Extraction
    Color Threshold
    Color Classification
    Color Segmentation
    Color Matching
    Color Location
    Color Pattern Matching
    Section 8: The Grayscale menu
    http://shixinhua.com/imganalyse/2013/01/291.html
    Lookup Table
    Filters
    Gray Morphology
    Gray Morphological Reconstruction
    FFT Filter
    Threshold
    Watershed Segmentation
    Operators
    Conversion
    Quantify
    Centroid
    Detect Texture Defects
    Section 9: The Binary menu
    http://shixinhua.com/imganalyse/2013/01/293.html
    Basic Morphology
    Adv. Morphology
    Binary Morphological Reconstruction
    Particle Filter
    Binary Image Inversion
    Particle Analysis
    Shape Matching
    Circle Detection
    Section 10: The Machine Vision menu
    http://shixinhua.com/imganalyse/2013/01/295.html
    Edge Detector
    Find Straight Edge
    Adv. Straight Edge
    Find Circular Edge
    Max Clamp
    Clamp (Rake)
    Pattern Matching
    Geometric Matching
    Contour Analysis
    Shape Detection
    Golden Template Comparison
    Caliper
    Section 11: The Identification menu
    http://shixinhua.com/imganalyse/2013/01/296.html
    OCR/OCV
    Particle Classification
    Barcode Reader
    2D Barcode Reader
    Section 12: The Tools menu
    http://shixinhua.com/imganalyse/2013/02/297.html
    Batch Processing
    Performance Meter
    View Measurements
    Create LabVIEW VI
    Create C Code
    Create .NET Code
    Activate Vision Assistant
    Section 13: The Help menu
    http://shixinhua.com/imganalyse/2013/02/298.html
    Show Context Help
    Online Help
    Solution Wizard
    Patents
    About Vision Assistant
    Chapter 3: Acquiring Images
    Section 1: Acquire Image
    http://shixinhua.com/imganalyse/2013/02/299.html
    Section 2: Acquire Image (1394, GigE, or USB)
    http://shixinhua.com/imganalyse/2013/02/301.html
    Main tab
    Attributes tab
    Section 3: Acquire Image (Smart Camera)
    http://shixinhua.com/imganalyse/2013/02/302.html
    Section 4: Simulate Acquisition
    http://shixinhua.com/imganalyse/2013/02/304.html
    Chapter 4: Browsing Images

    Wetzer wrote:
    You can think of the purple "Image" Wire as a Pointer to the Buffer containing the actual Image data.
    That's what I thought too, but why would it behave differently during execution highlighting? (sorry, I don't have IMAQ, just curious )
    LabVIEW Champion . Do more with less code and in less time .

  • Use a web browser as the source for Vision Assistant

    I want to access an IP camera over the internet and then use its video feed to do some processing with Vision Assistant. I was wondering if someone could tell me whether this is possible and how it can be done. What I have so far is an application that works with local cameras, and I also have an example of a web browser in LabVIEW. I thought I could use the web browser and a local variable from the browser to get the image, but this can't be wired to the grab or snap functions because it is not an image. So can someone please tell me how to convert the browser output into a video feed, so I can use it in my application?

    Crop the image out of the print screen. I imagine your screen will have a much larger resolution than the camera, and the camera view will only take up a portion of your browser window.
    Unofficial Forum Rules and Guidelines - Hooovahh - LabVIEW Overlord
    If 10 out of 10 experts in any field say something is bad, you should probably take their opinion seriously.

  • Make a line fit with Vision Assistant for a polynomial function?!

    Hello
    I have the following problem to solve: I have a laser spot that draws a (non-linear) line on a wall. For this line I need to know the (exact) mathematical function. I can capture an image of the line, but I do not know how to extract the mathematical function with a line fit, for example. If I could convert the line into points, I would use the line-fit functions of LabVIEW, which should work without problems.
    Is there a way to solve this with Vision Assistant, or...?
    Thanks in advance
    Thanks in advance

    Hello,
    by now I have learned that (almost) anything is possible. You can achieve this using LabVIEW, Matlab, C++, etc. In any case, getting the coordinates of a single laser line should be really simple (with a single line you don't need to find correspondences, as opposed to multi-line projection). If you place an appropriate filter in front of the camera, it is even simpler!
    If you want proof that it can be done (and the description/procedure I used), check out the link in my signature and search for the laser scanner posts (I think there are three of them, if I remember correctly). I have made a really cheap scanner (total cost was around 45 EUR). The only problem is that it is not fully calibrated. If you want to make precise distance measurements, you need to calibrate it, for example using a body of known shape. There are quite a few calibration methods - search the papers online.
    Best regards,
    K
    https://decibel.ni.com/content/blogs/kl3m3n
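    As a concrete starting point for "converting the line into points": for a single laser line, a per-column brightest-pixel scan is often enough. This Python sketch is a toy stand-in for that step (a real implementation would add thresholding and sub-pixel refinement); its output points can then go into a polynomial fit:

```python
def laser_line_points(image):
    """For each column, return (row, col) of the brightest pixel.
    A minimal way to turn the image of a single laser line into a
    point list suitable for a line/polynomial fit."""
    h, w = len(image), len(image[0])
    points = []
    for c in range(w):
        best_r = max(range(h), key=lambda r: image[r][c])
        points.append((best_r, c))
    return points

# Synthetic 8x6 image with a bright diagonal "laser" line
img = [[0.0] * 6 for _ in range(8)]
for c in range(6):
    img[c + 1][c] = 1.0
print(laser_line_points(img))  # → [(1, 0), (2, 1), (3, 2), (4, 3), (5, 4), (6, 5)]
```

    With the points extracted, LabVIEW's General Polynomial Fit (or any least-squares fit) gives the mathematical function of the line.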

  • Modifying VI - Vision Assistant

    I have images that consist of an area of particles and an area of no particles. I am trying to fit a circle to the edge between the regions where there are and are not particles. I want to use the Find Edge tool to find the pixel where this transition takes place for every row of pixels: for example, draw a horizontal line that gives me the location of the edge, then move down one row and repeat. I have tried using Find Circular Edge, but since I am trying to fit a circle to an edge that isn't well defined, I need a lot more data points to average over. I figure there is a way to modify the VI to perform the process described above. Any help would be much appreciated. I have attached the images to give you a better idea of what I'm trying to do.
    Attachments:
    half circle.JPG ‏554 KB

    Hi Windom,
    If the find circle edge is not working for you, I would suggest thresholding the image. Then you could use the morphology functions (such as Close, Fill Holes, and Erode) to further manipulate the image to get a stronger edge between the areas of particles and not particles.
    You can use a For Loop (initialized to start looking at the top of picture) and have it iterate vertically down the picture with the Edge Detector. You can do that by changing the ROI Descriptor for the line you are detecting edges with, and then you can read the Edge Information out of the VI. These all need to be checked in the "Select Controls" menu, which is found at the bottom right of the Vision Assistant window.
    I hope this helps, let me know if you need any further clarification.
    Best Regards,
    Nathan B
    Applications Engineer
    National Instruments
    Attachments:
    SelectROI.JPG ‏12 KB
    Edge Info.JPG ‏10 KB
    SelectControls.JPG ‏14 KB
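    The row-by-row edge scan plus a circle fit can be prototyped quickly outside LabVIEW. Below is a hedged Python sketch: `first_edge_per_row` is a toy stand-in for the per-row Edge Detector loop described above, and the fit uses the algebraic (Kasa) least-squares method, which is an assumed choice, not necessarily what IMAQ Find Circular Edge does:

```python
import math

def first_edge_per_row(image, threshold=0.5):
    """Return (x, y) of the first left-to-right pixel above threshold in each row
    (a stand-in for running an edge detector along a horizontal ROI per row)."""
    pts = []
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v > threshold:
                pts.append((float(x), float(y)))
                break
    return pts

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit.
    Minimizes sum((x^2 + y^2 - 2ax - 2by - c)^2); returns center (a, b), radius r."""
    n = float(len(points))
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); syy = sum(y * y for _, y in points)
    sxy = sum(x * y for x, y in points)
    sxz = sum(x * (x * x + y * y) for x, y in points)
    syz = sum(y * (x * x + y * y) for x, y in points)
    sz = sum(x * x + y * y for x, y in points)
    # Normal equations as a 3x3 system, solved via Cramer's rule
    A = [[2 * sxx, 2 * sxy, sx], [2 * sxy, 2 * syy, sy], [2 * sx, 2 * sy, n]]
    rhs = [sxz, syz, sz]
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(A)
    sol = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = rhs[r]
        sol.append(det3(Ai) / d)
    a, b, c = sol
    return a, b, math.sqrt(c + a * a + b * b)

# Points sampled exactly on a circle centered at (3, 4) with radius 5
pts = [(3 + 5 * math.cos(0.2 * k), 4 + 5 * math.sin(0.2 * k)) for k in range(10)]
a, b, r = fit_circle(pts)
print(round(a, 3), round(b, 3), round(r, 3))
```

    In practice you would feed `first_edge_per_row`'s output (on the thresholded, morphology-cleaned image) into `fit_circle`; averaging over one edge point per row gives the extra data points the question asks for.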

  • Performance meter in weblogic console

    All,
    Semi-new to memory analysis on the server side (distributed garbage collection and all); I was rather used to doing client application memory leak analysis. Are there any good starter guides for memory analysis when doing load testing?
    I am currently working on a project where we do severe load testing against servlets that hit session-level EJBs and then entity EJBs. We see the heap (Server -> Monitoring -> Performance) sawtooth slowly upward until the server throws out-of-memory exceptions and starts to fail. We have already fixed a few glaring problems with our code, but the sawtooth remains.
    What is the garbage collection routine under WebLogic? Does it run periodically? Does it run during high/low activity? Should it not be called before any OutOfMemory exception is thrown?
    If you set the EJBs to keep too many EJB references in the cache, will the server sanity-check this and bump some out of the cache when memory gets tight? Basically, I'm asking: could I shoot myself in the foot in weblogic-ejb-jar.xml, or will the server help me out?
    Finally, if you start mydomain as shipped and go to the performance graph, it slowly sawtooths up at a very slow pace. If you let it run overnight it eats up 10 MB or so. Why is this? It looks rather bad, but I can't assume it is as it looks on the surface.
    Sorry for all the questions, but maybe this will start a good thread.
    Andrew Spyker ([email protected])
    Software Engineer
    http://spyker.dyndns.org

    JVM heap usage will grow and shrink throughout the life of the server. Even an
    idle WebLogic server is allocating and freeing object instances. Therefore,
    when looking for a "leak", make sure that you account for the state of the JVM
    garbage collection and compare "apples" to "apples".
    I find that it is best to compare the heap usage after full garbage collections
    (GC[1]). Try enabling -verbose:gc and comparing each of the full GC results.
    Provided that your server is handling a constant and steady load (or is
    constantly idle), you should see little change in heap usage after each full
    GC. However, if your server is idle you may need to force GC to run every
    hour or so. By default, you need to use nearly all of the allocated heap
    before full GC will kick in.
    -Charlie
    Andrew Spyker wrote:
    "Mike Reiche" <[email protected]> wrote in message
    news:[email protected]...
    > The garbage collector is a function of the JVM, not of WebLogic. Look at
    > the JVM documentation for an explanation of garbage collection.
    Gotcha.
    > WebLogic, without any application code, runs fine. When you add YOUR
    > application code, then you might run into problems. Hence it is your code
    > that is chewing up (and not releasing) the memory. JProbe is good at
    > finding this kind of problem. You can also narrow things down - for
    > instance - hit only one servlet and see if the memory grows. Or remove
    > references to session beans, and hit the servlet.
    I should be more specific. I installed WLS SP2 on Red Hat Linux 7.1 and ran
    startWeblogic.sh from the default mydomain. I didn't make any
    modifications. I ran the performance meter and the memory did grow
    overnight significantly. This surprised me. Do I attribute this to Linux?
    Our production environment is definitely Solaris, but I'd like to be able
    to diagnose memory leaks on Linux. I don't want this false positive in the
    tests. Am I missing something here?
    > OutOfMemoryExceptions occur when the JVM attempts to allocate more memory
    > than is available - even after running garbage collection. The JVM option
    > -verbose:gc will also shed some light on GC.
    > The Java heap is not the only part of the JVM that requires memory. In
    > rare cases, much memory is consumed by classes and methods in memory, and
    > by heap (not Java heap) used by native code.
    > Mike Reiche
    Thanks for the response ...
    "Cameron Purdy" <[email protected]> wrote:
    It is not easy. OptimizeIt and JProbe can help, as can a well-architected
    application with classes that support dumping debugging information (to
    evaluate object graphs) and serialization (to gauge sizes).
    Typical leaks are in session objects, static fields, and database driver
    usage without proper closing of objects.
    Peace,
    Cameron Purdy
    Tangosol Inc.
    << Tangosol Server: How Weblogic applications are customized >>
    << Download now from http://www.tangosol.com/download.jsp >>
