Using JRockit Real Time with Large Heap

I want to know whether JRockit Real Time can work with an extremely large heap, e.g. 100 GB, and still provide effective deterministic GC, before proceeding with an evaluation for my application.
Thanks in advance!
Wing

Hi Wing,
In general, extremely large heaps can make all GCs slower, and this is true for Deterministic GC as well. However, it is very application-dependent. A 100 GB heap is above our standard recommendation, but you can download JRockit Real Time and give it a try.
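If you do evaluate it, note that Deterministic GC is selected with startup options rather than being enabled by default. A minimal launch-line sketch (flag spellings as I recall them from the JRockit Real Time documentation, so please verify the exact syntax and pick a realistic pause target for your release; the heap size here just mirrors the 100 GB from the question):

    java -Xms100g -Xmx100g -Xgcprio:deterministic -XpauseTarget=200ms -jar yourapp.jar

Running with verbose GC pause logging (e.g. -Xverbose:gcpause, again subject to the documentation for your release) prints the actual pause times, which is the quickest way to see whether the pause target holds at that heap size.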

Similar Messages

  • Can I use LabVIEW Real-Time with the PCI-6013?

    How much does LabVIEW Real-Time cost?
    Do I need additional hardware to use LabVIEW Real-Time with the PCI-6013?

    You cannot use the PCI-6013 board with LabVIEW Real-Time.
    We offer three platforms for LabVIEW Real-Time: PXI embedded controllers, 7030 data-acquisition plug-in boards (7030/xxxx), and FieldPoint. For more information on these options go to www.ni.com/products and then choose Real-Time from the menu on the left. There are PCI versions of the 7030 boards. Each 7030 board has a data-acquisition board attached to it (6030E, 6040E or 6533). When you embed a LabVIEW Real-Time program on the 7030 you will be able to access the daughter board. You cannot access other boards from the embedded program. If you want to use several DAQ cards in your real-time system I would recommend a PXI chassis and PXI DAQ boards.

  • Do I have to use LabVIEW Real Time with a reflective memory node?

    With reference to an external data system that will be temporarily installed at a customer's site: they have asked that I tie into their data network to record data from their control system. They apparently use a reflective memory network for data sharing. I have no prior experience with reflective memory, but all references to it involve real-time systems. I do not need absolute determinism to acquire this data; I can be late by several milliseconds with no problem. Do I still need to use LabVIEW Real-Time to interface with the PXI reflective memory node?

    Hi AEI, 
    I have worked with that card briefly before. It has a VISA-based driver and RT isn't required. However, I haven't worked with the card on a non-RT system and am not sure if there are any issues to be aware of.
    A lot of work has gone into integrating support for the card into VeriStand; it may save you enough development time that using an RT VeriStand system is worth the extra cost.
    Jesse Dennis
    Design Engineer
    Erdos Miller

  • Welcome to the Oracle JRockit Real Time forum

    The JRockit forum over at forums.bea.com has been in read-only mode since the first of July; this is now the official forum for the JRockit Real Time product.
    Best regards,
    Stefan

    Bonjour "Etudiant from Tunisia",
    I'm not sure what you mean with "please give me the user/password".
    As far as I know there is no online OWB available where you could try out the product (if that is what you mean).
    If you would like to learn more about OWB, read the online documentation or, for instance, the link provided in an earlier post in this thread.
    Otherwise simply install OWB and try it yourself. The Installation and Configuration Guide is clear enough even without much experience installing Oracle software, and the OWB User Guide provides some basic insight on working with OWB itself.
    Good luck, Patrick
    PS: If people are still expecting what Igor mentioned when he started this forum ("Oracle Warehouse Builder (OWB) product management and development will monitor the discussion forum on regular basis"), don't count on it; fortunately the product itself has been around long enough now that there are quite a few users who can share useful insights and/or drop some helpful lines on new threads... ;-)

  • Oracle JRockit Real Time 3.1.2 and weblogic?

    I went to the website http://www.oracle.com/technology/software/products/jrockit/index.html and found software called "Oracle JRockit Real Time 3.1.2 for Windows 64-bit". My question is: in order for WebLogic 9.2 to work on a Windows 2003 R2 64-bit machine, must I download "Oracle JRockit Real Time 3.1.2 for Windows 64-bit", or do I not need to download anything? Actually, I just downloaded a file called jdk-6u18-windows-x64.exe. Please clear up my question, thanks! One more thing: what is the functionality of Oracle JRockit Real Time 3.1.2? Is it a kind of JRE?

    Hi,
    For more information on JRockit Real Time have a read of http://www.oracle.com/technology/products/jrockit/jrrt/index.html
    As far as I am aware you don't have to install the Real Time product; you can install the JRockit JVM and use that with WebLogic. I think newer versions of JRockit Real Time come bundled with the JVM plus monitoring and analysis tools. There are still standalone JRockit JVM installs available; you can get them from Oracle eDelivery.
    JRockit is a JVM optimized for use with WebLogic, so you should get better overall performance.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • What are the important concepts in ABAP which are used in real time

    Hi,
    This is Suresh. I'm learning SAP ABAP and just want to know the important SAP ABAP concepts that are used in real time.

    Hi,
    Refer.
    /people/thomas.jung/blog/2007/12/19/update-your-abap-development-skills-to-sap-netweaver-70
    /people/horst.keller/blog/2007/03/21/1000-pages-full-of-abap
    /people/anubhav.mishra/blog/2007/11/20/first-experience-with-abap
    Also
    Re: Any Bw abap course for writing routines?
    Re: ABAP for BW
    Re: Seeking advice ABAP With BW
    Re: ABAP in BW
    Hope this helps.
    Thanks,
    JituK

  • How to verify RTSJ uses the Linux real-time kernel?

    Hi,
    I'm a complete newbie in this area and I'm trying out the RTSJ 2.1 beta on Linux. I'm using it on Ubuntu with the Linux real-time kernel. Is there any way to find out whether the real-time kernel is actually being used; in other words, is there a way to find out whether RTSJ works correctly with the real-time kernel on Ubuntu? I see that the programs compile and run regardless of whether I use the real-time kernel or the generic kernel.
    Thanks,
    Vidura

    Hi,
    I would assume, like other distributions, that you either boot the real-time kernel or you don't. uname should show you what you are running, but you'd have to ask the Ubuntu folk what you should see for the RT kernel.
    To see if you are benefiting from real-time, you need to run an RTSJ app that tracks deadline misses or measures latency/jitter. Try it on the non-real-time kernel and the real-time one and see what you get. Some of the examples in the "Getting Started" guide should be usable for these purposes, or you can start from something as small as the sketch below.
    David Holmes
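
    For the latency/jitter check, even a plain Java loop can make the kernel difference visible before you bring in the RTSJ APIs. A minimal sketch (plain java.lang only; on the real-time VM you would typically use RealtimeThread and waitForNextPeriod() instead, and the 1 ms period and iteration count here are arbitrary):

        public class JitterProbe {
            public static void main(String[] args) throws InterruptedException {
                final long periodNanos = 1000000L; // 1 ms target period (arbitrary)
                final int iterations = 10000;
                long worst = 0, sum = 0;
                long next = System.nanoTime() + periodNanos;
                for (int i = 0; i < iterations; i++) {
                    long toSleep = next - System.nanoTime();
                    if (toSleep > 0) {
                        Thread.sleep(toSleep / 1000000L, (int) (toSleep % 1000000L));
                    }
                    long lateness = System.nanoTime() - next; // how late this wake-up was
                    if (lateness > worst) worst = lateness;
                    sum += Math.max(lateness, 0);
                    next += periodNanos;
                }
                System.out.printf("average lateness %.1f us, worst %.1f us%n",
                        sum / (double) iterations / 1000.0, worst / 1000.0);
            }
        }

    Run it under both kernels (ideally with the thread given a real-time priority, for example via chrt); on the real-time kernel the worst-case lateness should drop noticeably.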

  • What is the ODS concept in BW? How is it used in real time?

    What is the ODS concept in BW? How is it used in real time?

    Hi,
    Please go through the following links:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/eb8f1895-0501-0010-73ad-e0bef4a2000a
    http://help.sap.com/saphelp_nw2004s/helpdata/en/e3/e60138fede083de10000009b38f8cf/frameset.htm
    you can find more links in SDN.
    Regards,
    Chandan

  • Permission to use SAP Real-Time Collaboration capabilities

    Hi Experts,
    I get the following error message when I log in to portal EP 7.0 (in one small window); apart from that, I am able to use my portal.
    You do not have permission to use SAP Real-Time Collaboration capabilities. For more information, contact your portal administrator
    Can anyone help me with which permission I need to set for Real-Time Collaboration (RTC)? I have already enabled all services ("yes") in the iView properties to use the RTC services.
    Thank you
    Regards
    Vijai

    refer
    https://www.sdn.sap.com/irj/scn/thread?threadID=740397

  • Using SIT connection manager with large simulink models is extremely slow

    Hello,
    I'm trying to use a large Simulink model in LabVIEW, but once the DLL is correctly generated and the SIT connection manager is invoked to explore the model sinks, sources and parameters, it takes hours to generate the model connections tree. Afterwards, when the connections tree is completed, it is impossible to handle because every operation performed takes a lot of time and memory (e.g. expanding a block to see which parameters are inside).
    The version of SIT I'm using is 2.0.3 with LabVIEW 7.1.
    Is there anybody experienced with large Simulink models and SIT?
    Thanks and regards.
    Ignacio Sánchez Peral
    SENER Ingeniería y Sistemas S.A.
    Control Systems Section
    Aerospace Division
    C/ Severo Ochoa, 4 (PTM).
    28760 Tres Cantos (Madrid) Spain.
    [email protected]
    Tel + 34 91 807 74 34
    Fax + 34 91 807 72 08
    http://www.sener.es

    The VI in the Driver VI called SIT Initialize Model.vi has an input called time step (sec) (-1: use model time step) which does what you want it to. It doesn't actually affect the time step of the solver used in the built model DLL, just the rate at which the main base rate loop actually runs in real time. In fact, base rate loop period would be a better name for this control. If you set it, you won't alter your model, but you will be able to adjust the rate of the base rate loop.
    Simply create a control from this input terminal on your driver VI and fill in an appropriate period in seconds. Make sure to set this value as default in the control so that the Driver VI remembers it.
    You will have to take into account that your model will still think it's running at the time step it was compiled at. So your model simulation time and the actual wall clock time won't match up.
    Jarrod S.
    National Instruments

  • Anyone using 3.5.1 with large JVMs?  Thoughts?

    Hello all,
    I'm very interested in the possibilities with large storage nodes. We've been running 3.4.1 for about a year or so, with 2.5GB JVMs and rock-solid performance after some careful GC tuning. What I've observed is that the app needs ~800MB of the heap for itself, leaving 1.7GB for storage. We're running 24 nodes, with that costing us 19GB of potential storage space (800 MB × 24). If we could run, for example, 1 node (fictitious example) with a 60GB JVM, using 800MB for the application, we end up with ~60GB of storage space vs. ~40GB. That's a lot more room :)
    Obviously, GC tuning is going to be different with a JVM of this size.
    Has anyone had any experience with large JVMs? What OS? What GC params did you use?
    I'll be doing some testing in the next few weeks and will post back results, but wanted to see what kinds of results others were having.
    Thanks for your time.
    chris

    Hi Chris
    My first observation about 3.5 is increased memory overhead for the distributed cache scheme.
    On a 32-bit JVM, Coherence 3.4 overhead is about 116 bytes per entry and 3.5 overhead is 164 bytes per entry (in both cases a lower bound).
    I'm working with a large number of quite small objects, ~80 bytes each (using POF), so for me the additional 48 bytes are very significant.
    I believe it is a trade-off to speed up the partition migration process.
    Unfortunately I cannot say anything about performance; I didn't test it.
    I'm looking forward to the results of your tests.
    Regards,
    Alexey
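
    To put those per-entry numbers in perspective for heap sizing, here is a rough back-of-the-envelope sketch (the 116/164-byte figures are the lower bounds quoted above; the entry count and value size are placeholders):

        public class OverheadEstimate {
            public static void main(String[] args) {
                long entries = 10000000L;  // 10 million entries (placeholder)
                long valueBytes = 80L;     // ~80-byte POF-serialized values, as above
                long per34 = 116L;         // Coherence 3.4 lower bound, 32-bit JVM
                long per35 = 164L;         // Coherence 3.5 lower bound, 32-bit JVM
                System.out.printf("data: %d MB, 3.4 overhead: %d MB, 3.5 overhead: %d MB%n",
                        toMb(entries * valueBytes), toMb(entries * per34), toMb(entries * per35));
            }
            private static long toMb(long bytes) {
                return bytes / (1024 * 1024);
            }
        }

    For values that small, the per-entry overhead is already larger than the data itself, which is why the extra 48 bytes per entry matter so much when you size a large heap for storage.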

  • Can I acquire and analyse in real time with regular LabVIEW?

    I have to acquire samples (which vary cyclically in a roughly sinusoidal fashion) from a sensor, and check every sample to see if it is the minimum (the valley) of a cycle. If it is, and it does not fall within an expected range, I have to take a corrective action that involves rejecting the part that was just measured as well as 30 more parts (to make sure that the defective part has been rejected). The signal from the sensor is not very noisy, but because of the nature of the measured object, there could be local minima and maxima. To guard against that, a point is considered to be a valley only if subsequent readings deviate above that point by a certain amount. If a part is indeed defective, a digital output has to be issued to reject that part.
    Can all this be done using regular LabVIEW (not RT)? I tried it out with a prototype VI, using DAQmx VIs, continuously acquiring samples but reading one sample at a time from within a loop (the VI I used is attached with this question). The result has been disappointing, since each time the loop executes there is a delay that keeps building up. Finally, even after the part feed has been turned off, I can see LabVIEW processing signals from parts that have long since gone past the measuring head.
    Another perplexing thing I found is that the time taken to execute the while loop in the VI is not consistent; it takes anything from 6 to 50ms to execute.
    I will need at least 8-12 samples from a part to build its profile, and the feed rate is about 3000 parts per minute. I am using LabVIEW 7.0 with an NI-6013 card in a Windows 2000 environment.
    Thanks for any suggestions / recommendations.
    Attachments:
    find_trough_2.vi ‏378 KB
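
    A minimal sketch of the valley-detection rule described above, written in Java purely to make the logic explicit (the deviation threshold, the expected range, and the sample values are placeholders; in the actual system this logic would sit inside the LabVIEW acquisition loop):

        public class ValleyDetector {
            private final double deviation;  // rise above the candidate needed to confirm a valley
            private double candidate = Double.POSITIVE_INFINITY;

            public ValleyDetector(double deviation) {
                this.deviation = deviation;
            }

            // Feed one sample; returns the confirmed valley value, or NaN if none yet.
            public double addSample(double sample) {
                if (sample < candidate) {
                    candidate = sample;                     // new lowest point of this cycle
                } else if (sample - candidate >= deviation) {
                    double valley = candidate;              // readings rose enough: a true valley
                    candidate = Double.POSITIVE_INFINITY;   // start looking for the next cycle
                    return valley;
                }
                return Double.NaN;
            }

            public static void main(String[] args) {
                ValleyDetector detector = new ValleyDetector(0.5);
                double[] samples = {3.0, 2.1, 1.4, 1.5, 1.3, 1.6, 2.2, 3.1};
                for (double s : samples) {
                    double valley = detector.addSample(s);
                    if (!Double.isNaN(valley)) {
                        boolean reject = valley < 1.0 || valley > 2.0;  // expected range (placeholder)
                        System.out.println("valley " + valley + (reject ? " -> reject part + next 30" : " -> OK"));
                    }
                }
            }
        }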

    Hello,
    Thank you for your suggestions; I had already resigned myself to going for a Real Time system, and your answer convinced me to commit to it!
    That said, your reply leads to a couple of (related) questions...
    1. Your point regarding the use of Local Variables is well taken; I have been repeatedly told at various training sessions how the necessity of updating the LVs during each loop iteration slows the computation time. However, what alternative do we have when there are several controls to which we have to write AND read data multiple times during a loop iteration, and perform different computations based on the value held by these controls? (You have seen the VI I attached with the original question.) Some of these conditional computations further change the value of the controls. Does LabVIEW have any other mechanism to store and manipulate the intermediate value from a computation?
    2. I did a simple experiment to determine the average loop time, and the results were surprising. I placed the entire content of the VI I used (Find the Valley in cycle.vi) in a stacked sequence structure, and wired the index counter "i" to a control to count the number of iterations the loop executes. I placed a frame before this with a tick count instruction to get the start time of the loop, and a frame after this to get the end time of the loop. Dividing the difference of these by the number of iterations, I got the average loop time to be around 1.2ms! Am I interpreting my results incorrectly?
    Thanks once again for your response. I would really appreciate your views on the questions I have raised in this comment.
    Regards
    Arun P. Madangarli

  • How do you interface a Renishaw laser real-time with NI DAQ?

    I understand that it is possible to acquire "real time" feedback from a Renishaw laser while measuring linear position, with the following modifiers applied.
    *The standard output is a digitized value to a proprietary (Renishaw) data acquisition (DAQ) card.
    *Renishaw lasers have a quadrature output facility on the back of the laser head. This permits capture of the "raw" interferometry signal. This raw signal is not compensated for environmental factors such as temperature, air pressure, humidity, etc.
    *The standard system integrates signals from an environmental unit for air temperature, pressure, and humidity, as well as signals from up to three material probes. Acquisition and processing of these or equivalent signals would have to be considered to provide data for environmental compensation calculations (Edlen equation). This is mandatory in order to obtain accurate distance and velocity readings.
    *The quadrature output can communicate with other cards. (NI DAQ?)
    Specifics with regard to Labview software and required DAQ to read real time Laser output would be very much appreciated.

    You can use a counter available on the E-series MIO DAQ boards or the 660x counter boards. You can measure the position, frequency, and period of the output from a quadrature encoder. Examples of how to measure signals from a quadrature encoder in LabVIEW can be found on the web at www.ni.com/support, in the technical resources section; search under the example programs section.
    You may also want to contact Renishaw to see if they have drivers available for LabVIEW and their DAQ equipment.
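
    As background on what the counter is doing with the quadrature A and B signals, here is a small software sketch of a simple decoding rule (counting both edges of channel A; purely illustrative, since the E-series/660x counters do this in hardware):

        public class QuadratureDecoder {
            private int lastA;
            private long count;

            public QuadratureDecoder(int initialA) {
                lastA = initialA;
            }

            // Update with the current A/B logic levels (0 or 1); returns the position count.
            public long update(int a, int b) {
                if (a != lastA) {               // edge on channel A
                    count += (a == b) ? -1 : 1; // direction from the phase of B relative to A
                    lastA = a;
                }
                return count;
            }

            public static void main(String[] args) {
                QuadratureDecoder decoder = new QuadratureDecoder(0);
                int[][] forward = {{1, 0}, {1, 1}, {0, 1}, {0, 0}};  // A leading B: counts up
                for (int[] ab : forward) {
                    System.out.println(decoder.update(ab[0], ab[1]));
                }
            }
        }

    The raw counts would still need the environmental (Edlen) compensation described above before they become accurate distances.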

  • RAM Preview in real time with Matrox Mini

    I followed this thread with interest. I upgraded from Production Premium CS5 to CS6 last month. I haven't been able to get real-time RAM playback since. I'm also monitoring using a Matrox mini and am wondering if this is factoring into the problem. I recently replaced my mini because I was losing communication with the box. I did notice, with the box out of the equation, that RAM playback was in real time. My test comp runs 5 seconds at 29.97 fps, just a photo moving from point a to b to c, with blur, over black. When I try RAM playback, the message I see is: fps 11.xxx (xxx because the frames vary slightly throughout playback)/29.97. Note: zoom is not at 100%. The interesting thing is that when I delete the photo layer and just keep the black, RAM playback seems fine.
    I did adjust my RAM allocation a little, to no effect. Here are my specs:
    Computer:
    Windows 7 Professional, SP1
    i7-4930k 3.4 GHz (6 cores, 12 with hyperthreading)
    RAM: 32 GB
    GPU: Nvidia GeForce GTX 770
    Here are some memory settings:
    Installed RAM: 31.9 GB
    RAM for other applications: 6 GB
    RAM shared by AE, Premiere, Prelude, Encore, Adobe Media Encoder, Photoshop: 25.9 GB
    Installed CPUs: 12
    CPU reserved for other applications: 2
    RAM per CPU: 2 GB
    Actual CPUs that will be used: 8
    Preview info:
    Texture memory: 796 MB
    Ray-tracing: GPU
    Open GL
    Vendor: NVIDIA Corporation
    Versions: 2/1/2
    Total memory: 1.94 GB
    Shader model: 4.0 or later
    Cuda
    Driver version: 6.0
    Devices: 1 (GeForce GTX 770)
    GeForce GTX 770
    Current Usable Memory: 1.68 GB (at application launch)
    Maximum Usable Memory: 2.00 GB
    Let me know what else I can tell you. Thanks for any suggestions.

    If you disable video preview in After Effects to the Matrox device, does RAM preview run in real time?
    In After Effects CS6: Preferences > Video Preview, set Output Device to Computer Monitor Only. Matrox devices may also require you to disable WYSIWYG preview in their control panel.
    When video preview is enabled, every frame is transcoded on the fly at display time to the format (dimensions, PAR, frame rate) requested by the device. This requires some memory and processor power and has an impact on performance. If having video preview enabled is in fact contributing to the issue, you can reduce the memory requirements of each frame (and therefore the processing power required) by reducing the RAM preview frame rate or resolution values in the Preview panel, reducing the project color depth, or disabling color management in the Project Settings.
    Worth noting here that After Effects CC 2014 (13.0) uses Mercury Transmit for video preview, instead of the older QuickTime and DirectShow technologies, and while it is still subject to the limitations above it provides better performance.

  • How many interfaces can be driven in "real-time" with a VI in LabVIEW 6i

    In my application I need to merge data from different devices attached to my computer via different interface types. Each data set is to be specifically formatted in a file. Each data set comes at a rate of ~10 Hz.
    As acquisition devices I have
    - 4 digitizers connected to the PCI bus, whose LabVIEW drivers are already provided by the manufacturer
    - an instrument from which data are to be imported via a GPIB interface
    - an instrument from which data are imported via a serial port
    - and probably in the future, 2 devices mounted on two USB ports
    I wish to develop LabVIEW 6i code which permits me to make the request to all instruments, import the data, and merge them as quickly as possible so that I can store data sets coming every 0.1 s. All data from all instruments must be recorded for the same event at the same time (in a triggered manner) before the acquisition for the next event is performed.
    I would like to know the best way to reach my goal.
    - Is it possible to control all this with one VI (containing lots of sub-VIs)?
    - Should I execute a VI for each interface, run them at the same time, save all data sets with time stamps in a file, and then, off-line, process all the files and merge my information?
    In general, are there any rules which indicate the maximum number of interfaces (according to their nature) that can be properly controlled on a given platform from one computer?
    Thank you very much for any answers.

    It's been a couple of days with no reply....
    Look at the occurrence functions.
    You can have one loop send occurrences at the rate you need.
    Put each of the other functions in separate loops that wait for the occurrence on each iteration.
    Provided each of your loops runs fast enough, this approach should serve you until you run into the indeterminism of Windows. If the indeterminism will be an issue, you may have to go with LV Real-Time.
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction
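
    In a text-based language the same occurrence pattern looks roughly like this (an illustrative Java sketch only, with placeholder interface names and a 100 ms period to match the ~10 Hz rate; in LabVIEW you would of course use the occurrence functions themselves):

        import java.util.concurrent.Semaphore;

        public class OccurrencePattern {
            public static void main(String[] args) {
                final String[] interfaces = {"GPIB", "serial", "PCI digitizers"};
                final Semaphore[] occurrences = new Semaphore[interfaces.length];

                // One waiting loop per interface: block until the occurrence fires,
                // read one data set, time-stamp it, then wait again.
                for (int i = 0; i < interfaces.length; i++) {
                    occurrences[i] = new Semaphore(0);
                    final Semaphore mine = occurrences[i];
                    final String name = interfaces[i];
                    new Thread(new Runnable() {
                        public void run() {
                            try {
                                while (true) {
                                    mine.acquire();   // wait for the occurrence
                                    // placeholder for reading and time-stamping one data set
                                    System.out.println(name + " read at " + System.currentTimeMillis());
                                }
                            } catch (InterruptedException e) {
                                // loop stopped
                            }
                        }
                    }).start();
                }

                // Timing loop: fire the occurrence for every interface loop at ~10 Hz.
                new Thread(new Runnable() {
                    public void run() {
                        try {
                            while (true) {
                                Thread.sleep(100);
                                for (Semaphore s : occurrences) {
                                    s.release();
                                }
                            }
                        } catch (InterruptedException e) {
                            // loop stopped
                        }
                    }
                }).start();
            }
        }

    As Ben says, this only holds together as long as every loop finishes its work within the period; once Windows jitter eats into that margin, LabVIEW Real-Time is the way out.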
