6020E Counter measurement slow

Slow performance using the 6020E counters to measure pulse width.
Using Traditional NI-DAQ (device not supported by DAQmx).
Using hardware-triggered measurement.
After configuration, doing the following:
- program the counter
- loop until it is no longer armed
- read the count and check for overflow
This cycle takes 100 ms on a signal that pulses every 10 ms (pulse width between 1 ms and 8 ms).
As I understand it, after programming, the counter arms itself on a falling edge (hardware trigger) and measures the next pulse that passes. That would mean a worst case of 20 ms plus communication overhead. Either the hardware triggering isn't working, or the overhead is 80 ms?
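For reference, the cycle above in Traditional NI-DAQ C calls looks roughly like this (a sketch only: the GPCTR names follow the DAQ-STC C examples and should be checked against your NI-DAQ header, and the 100 kHz timebase is an assumption):

#include "nidaq.h"
#include "nidaqcns.h"

/* Measure the width of the next gate pulse on counter 0; the pulse width
   in seconds is the returned count divided by the timebase frequency. */
u32 measure_one_pulse(i16 dev)
{
    u32 armed = 1, count = 0, overflow = 0;

    GPCTR_Control(dev, ND_COUNTER_0, ND_RESET);
    GPCTR_Set_Application(dev, ND_COUNTER_0, ND_SINGLE_PULSE_WIDTH_MSR);
    GPCTR_Change_Parameter(dev, ND_COUNTER_0, ND_SOURCE, ND_INTERNAL_100_KHZ);
    GPCTR_Control(dev, ND_COUNTER_0, ND_PROGRAM);    /* arm; the gate pulse starts it */

    do {                                             /* poll until disarmed = done */
        GPCTR_Watch(dev, ND_COUNTER_0, ND_ARMED, &armed);
    } while (armed);

    GPCTR_Watch(dev, ND_COUNTER_0, ND_TC_REACHED, &overflow);  /* overflow check */
    GPCTR_Watch(dev, ND_COUNTER_0, ND_COUNT, &count);
    return count;
}

If timing the polling loop itself accounts for the 100 ms, that would point at per-call communication overhead rather than the hardware trigger.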
Any help appreciated A LOT!
Peter

Thanks JV,
Unfortunately, we want to read the pulse width of the first pulse to come (hardware triggered), then do some things with it, do some other stuff in parallel, and then read some more. So buffered reading isn't going to help much.
We started from the 'Measure Pulse (DAQ-STC).vi' sample, changing it to do triggered measurements and tweaking the timeout and error handling to our needs.
It does seem that the 6020E was a bad choice for the task at hand (many single reads), as we've had performance issues with analog readings as well. We'll probably have to dump it all and go for some PCI solution.
Peter

Similar Messages

  • Regular measures (measures with SUM function) are not working alongside Distinct count measures

    Hi All,
    I am creating a cube that needs to have a distinct count measure and a sum measure. If I create only the sum measure, it works fine. If I create both measures and process the cube, only the distinct count measure is populated; the sum measure shows all blank values. I am using 2008 R2 and creating 2 different measure groups for the two measures. After I include the distinct count measure, the sum measure becomes null. Can you please help me with this? I have been breaking my head over this for the last 2 days. Thank you

    Ramesh, measures are affected by the context of the queries that contain them. For example, in some cases you can get a different total count of something from two different queries, because the context of the first query is different from that of the second one... keep this in mind.
    Now, I've noticed that you are "creating 2 different measure GROUPS for both measures", and I guess that you are trying to view those two measures (which are from different measure groups) at the same time and in the same report.
    Considering the info in the first point, and as you are creating the calculated measures in two different measure groups, I'm not sure but I guess that this is the problem. I suggest you create those two calculated measures in the same measure group, then try to view them again and let's see.
    If the previous point didn't solve it, please post the expressions you are using to create the calculated measures; maybe this will help in finding the problem.

  • Count Measure by Minimum values in Dimension only

    Hi, I am drawing a huge blank here -
    I have a Count measure on my fact table, and need to count the rows only if a code in the dimension table is the minimum code for the fact.
    For my fact, I need to count from the following dims:
    Dim A (count all from this dim - which would be a distinct?)
    Dim B (count only the minimum code from this dim)
    Any suggestions?
    Thanks

    Hi oroborus, you are basically trying to have two new measures in your fact, based on some conditions.
    Ex: You have a measure Total_Orders in your fact, and based on some conditions you want two new measures, say Total_US_Orders and Total_OnLine_Orders.
    Let's assume you have two dimensions, Country_Dim and Store_Dim, which give you the location and the store of an order.
    In your business model, duplicate the Total_Orders measure and name it Total_US_Orders. Check the "Use existing logical columns as the source" box and go into the Expression Builder.
    Here you define the formula for this new column, which will be something like
    FILTER(Fact.Total_Orders USING Country.Country_Code='US').
    Similarly, create the Total_OnLine_Orders by duplicating the Total_Orders measure and write a formula something like
    FILTER(Fact.Total_ORDERS USING Store.Store_Type = 'Online')
    I hope this gives you a brief idea of how to create calculated columns in the business layer.
    Good Luck
    Sai

  • PCI-6224 counter measurement

    We need to make simultaneous counter pulse-width measurements with our PCI-6224 DAQmx card. It has two counters, and the function panel info/help indicates specifically that it can do simultaneous measurements by creating 2 different tasks (DAQmxCreateTask()) and then using two separate DAQmxStartTask() calls. We then do 2 separate DAQmxReadCounterScalarF64() calls.
    The problem is that DAQmxStartTask() doesn't initiate the measurement like the function panel documentation indicates. The measurement(s) aren't initiated until DAQmxReadCounterScalarF64() is executed, which makes simultaneous measurements impossible (i.e. if the measurement isn't initiated until the Read, then the first Read will be executing while both simultaneous pulses are generated, and the second Read will be too late). I've put breakpoints in and verified this repeatedly.
    How do I get DAQmxStartTask() to initiate the counter measurement immediately?

    Hi nap3n,
    What Dustin said is true of buffered tasks, but it sounds like the original poster is using an on-demand counter task, which is where the device starts counting when you call DAQmx Read. I think your best bet might be to use buffered tasks by calling DAQmx Timing (Implicit) like in the Meas Pulse Width-Buffered-Cont example. It may also be helpful to synchronize their arm start triggers, though implicit timing will still cause them to drift apart depending on the input signals.
    Brad
    Brad Keryan
    NI R&D
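    A minimal C sketch of what Brad describes (buffered pulse-width tasks with implicit timing, armed together by a shared arm start trigger; "Dev1" and "/Dev1/PFI0" are placeholders, and error checking is omitted):

    #include <NIDAQmx.h>

    int main(void)
    {
        TaskHandle t0 = 0, t1 = 0;
        float64 w0 = 0, w1 = 0;

        DAQmxCreateTask("", &t0);
        DAQmxCreateTask("", &t1);
        DAQmxCreateCIPulseWidthChan(t0, "Dev1/ctr0", "", 1e-6, 1.0,
                                    DAQmx_Val_Seconds, DAQmx_Val_Rising, NULL);
        DAQmxCreateCIPulseWidthChan(t1, "Dev1/ctr1", "", 1e-6, 1.0,
                                    DAQmx_Val_Seconds, DAQmx_Val_Rising, NULL);
        /* Buffered (implicitly timed): the hardware measures every pulse,
           so the measurement no longer waits for the software read. */
        DAQmxCfgImplicitTiming(t0, DAQmx_Val_ContSamps, 1000);
        DAQmxCfgImplicitTiming(t1, DAQmx_Val_ContSamps, 1000);
        /* Arm both counters from the same digital edge so counting starts
           together at start time, not at the first read. */
        DAQmxSetArmStartTrigType(t0, DAQmx_Val_DigEdge);
        DAQmxSetArmStartTrigType(t1, DAQmx_Val_DigEdge);
        DAQmxSetDigEdgeArmStartTrigSrc(t0, "/Dev1/PFI0");
        DAQmxSetDigEdgeArmStartTrigSrc(t1, "/Dev1/PFI0");
        DAQmxSetDigEdgeArmStartTrigEdge(t0, DAQmx_Val_Rising);
        DAQmxSetDigEdgeArmStartTrigEdge(t1, DAQmx_Val_Rising);

        DAQmxStartTask(t0);
        DAQmxStartTask(t1);
        DAQmxReadCounterScalarF64(t0, 10.0, &w0, NULL);   /* oldest buffered width */
        DAQmxReadCounterScalarF64(t1, 10.0, &w1, NULL);

        DAQmxClearTask(t0);
        DAQmxClearTask(t1);
        return 0;
    }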

  • Measurement slowed by counter

    Hi,
    I am using Labview with a NI DAQPad-6015 to measure temperature (via thermocouple) and control via digital output (counter).
    The measurement rate I get is very slow (about 10 Hz).
    How can I speed up the measurement? I tried a sample clock for the measurement, but together with the implicitly timed counter the sampling rate was still about 10 Hz.
    Thanks in advance
    Gianluca

    From what I can tell, you are reading single samples of N channels in a tight loop, separating the channels, and stuffing each individual channel into its own queue. In a parallel loop, you are building these samples into an array, and if the size is 100, you average them, stream them to a file, and zero the intermediary arrays. All arrays grow at the same rate, so why all these separate size checks and case structures?
    This is all way too complicated and inefficient. Why not read 100 samples at once, average the channels, place the array of averaged channels into a single queue, and write them all at once to TDMS?
    You also seem to be unaware of common simplifications. Index Array is resizable. Most primitives can operate on arrays. For example, on the left you explicitly index out four individual string elements, see if they are empty strings, then invert the booleans and form an array of four elements. You would get exactly the same resulting boolean array by taking the four-element array subset, checking for empty, and inverting.
    I think with little effort your code could be reduced to 20% of its current size, and it would run much more efficiently and be easier to maintain and debug. Try it!
    LabVIEW Champion. Do more with less code and in less time.
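    In DAQmx C terms, the "read 100 samples at once and average" suggestion amounts to something like this (a sketch; the channel count is assumed and the task is configured elsewhere):

    #include <NIDAQmx.h>

    #define NCHANS 4     /* hypothetical channel count */
    #define NSAMPS 100   /* average in blocks of 100, as suggested above */

    void read_and_average(TaskHandle task, float64 avg[NCHANS])
    {
        float64 data[NCHANS * NSAMPS];
        int32 read = 0;

        /* One driver call returns up to 100 samples per channel, grouped
           by channel, instead of 100 single-sample reads. */
        DAQmxReadAnalogF64(task, NSAMPS, 10.0, DAQmx_Val_GroupByChannel,
                           data, NCHANS * NSAMPS, &read, NULL);
        if (read == 0)
            return;
        for (int ch = 0; ch < NCHANS; ch++) {
            float64 sum = 0;
            for (int i = 0; i < read; i++)
                sum += data[ch * read + i];
            avg[ch] = sum / read;    /* one averaged value per channel */
        }
        /* Enqueue avg[] once; let a parallel loop stream it to TDMS. */
    }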

  • NI-5133 Measurement slow

    Trying to measure a signal that will be pulsing every ms. Right now I am measuring and triggering off of a pulse generator to simulate that signal; the generator is pulsing at very close to 1 kHz, so that's fine. I noticed how slowly the program was operating, so in an effort to find the source I have bit by bit erased nearly everything aside from just measuring and showing the waveform, and it's still too slow. If it is hard to tell at first, watch the "pulse" counter: taking 40 measurements at 100 MS/s should take 400 ns, so repeating this measurement should still take considerably less than 1 ms. However, watching the counter, it takes well over a second to get through the 128 measurements. Any idea why this is the case? I've attached the VI, which is clearly quite bare-bones. With the 0.2 trigger threshold there are no instances where the scope misses a signal, as no errors are thrown, so I can really come up with no reason aside from computational speed for why this program can't keep up.
    Thanks,
    Wolley
    Attachments:
    Test6.vi (25 KB)

    Hi Wolley,
    I would recommend pulling everything out of the For Loops that does not have to be there. I would pull all configuration VIs out of the loops, including Configure Trigger Edge.vi; this should be placed in line before the loops start.
    In addition, the Initiate Acquisition and the Abort VIs should be placed before and after the loops respectively.  We probably don't need to continually abort and then restart our acquisition.  Removing these from the loops should reduce the overhead in each loop.
    Another thing we can remove from the loops to improve loop speed is the graphing.  Updating displays can be very processor intensive.  Consider placing the graph in a separate parallel loop and passing the data to the parallel loop via a queue structure.
    Hopefully some of this information is useful!
    Josh B
    Applications Engineer
    National Instruments
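    Sketched in niScope C terms, the structure Josh suggests looks roughly like this (illustrative only: verify each call against niScope.h and the shipping examples before relying on it):

    #include "niScope.h"

    int main(void)
    {
        ViSession vi;
        ViReal64 wfm[40];
        struct niScope_wfmInfo info;

        niScope_init("Dev1", VI_TRUE, VI_TRUE, &vi);
        /* Configure once, before the loop: 128 records of 40 points. */
        niScope_ConfigureHorizontalTiming(vi, 100e6, 40, 50.0, 128, VI_TRUE);
        niScope_ConfigureTriggerEdge(vi, "0", 0.2, NISCOPE_VAL_POSITIVE,
                                     NISCOPE_VAL_DC, 0.0, 0.0);
        niScope_InitiateAcquisition(vi);      /* initiate once, not per iteration */

        for (ViInt32 rec = 0; rec < 128; rec++) {
            /* Fetch only, inside the loop: no re-init, no re-config, no graphing. */
            niScope_SetAttributeViInt32(vi, "", NISCOPE_ATTR_FETCH_RECORD_NUMBER, rec);
            niScope_Fetch(vi, "0", 1.0, 40, wfm, &info);
            /* Hand wfm off to a parallel display loop via a queue. */
        }
        niScope_Abort(vi);
        niScope_close(vi);
        return 0;
    }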

  • How to measure slow disk i/o impact on video graphics performance???

    The problem is when an application runs in fullscreen with a high FPS and still shows lag/stutter issues in graphics...
    Windows 7 doesn't seem to have any tools that report stuck disk I/O traffic, and neither do the graphics card controls (nvidia). The missing information seems to be:
    1) Whether it is because of a slow drive (e.g. http://www.youtube.com/watch?v=QF-SBypQBuw); (Solution: buy a faster drive)
    2) Whether it is because of other programs using the same disk, e.g. the Windows pagefile(?); (Solution: move the app to another drive)
    3) Or whether it is because of a layer program, e.g. Sandboxie, which redirects file searches etc.; (Solution: run the app outside the layer)
    Is it really necessary to buy, install, and configure a new hard drive and then install the app on it, uncertain of the result, just to compare, to try solving some graphics lag caused by disk I/O issues?

    1) The Windows performance index gives a view of the hardware's expected performance; the subscores for the hard drive tests should allow you to determine if your hard disk is 'slow'.
    2) Resource Monitor can help to identify processes that have a lot of I/O. The most important measurement will be the disk queue length. To get even more detailed information, you could use perfmon.
    http://blogs.technet.com/b/askcore/archive/2012/02/07/measuring-disk-latency-with-windows-performance-monitor-perfmon.aspx
    3) Virtualisation and/or sandboxing always has a performance impact. You should check with the software vendor how to check for/test the performance impact the application has.
    No, it is not necessary to buy hardware or software to identify a performance issue. Note that in some cases it might be easier/cheaper to buy some new hardware which you are sure will meet all requirements.
    PS: consider purchasing an SSD if you suspect I/O issues and are willing to spend some: it will be the best hardware upgrade you have done in the last 10 years!
    MCP/MCSA/MCTS/MCITP

  • Distinct Count Measure Total in Two Dimensions

    I have a report that pulls data by Site and then drills down to User for Content Usage over six months. These are separate dimensions in the cube. I have a DistinctContent measure that pulls correctly for both Site and User when I use separate queries, but it is at the lowest level, which is User. A sum of DistinctContent at the Site level is not appropriate - it needs to be the DistinctContent for the Site, and then drill down to the DistinctContent for User.
    Aggregate cannot be used because there are filters on the report, and they have to be there for various reasons.
    I've tried Lookup, but it only looks up one field. Multilookup doesn't work either.
    I've tried a drill-down to a subreport, so that the initial dataset would be for Site and the subreport for User with a different dataset, but you cannot merge the cells so that the subreport fits nicely under the top level.
    How do I get the DistinctCount for the top level, Site?
    As a workaround I'm currently creating a cube that will only count the distinct content by Site and then combining the two cubes as a virtual cube, but I'm not sure of the full implications of a virtual cube. I feel like I'm missing something, because this seems to be something that everyone must need at some point, right?
    I'm beating my head against a wall. Thanks so much to anyone who can help out. I'm hitting the deadline, and everyone is stressing out because I've been working on this one issue for days and I'm the everything-IT person, so other things are slipping.
    Here is (the abbreviated version of) the query:
    SELECT
      NON EMPTY
        {[Measures].[Distinct Content]} ON COLUMNS
     ,NON EMPTY
        Filter([Site].[by Type].[Site].ALLMEMBERS, [Measures].[Views] > 0)
      * Filter([User].[by Type].[User].ALLMEMBERS, [Measures].[Views] > 0)
      * [Time].[Month Year].[Month Year]
        DIMENSION PROPERTIES MEMBER_CAPTION, MEMBER_UNIQUE_NAME
      ON ROWS
    FROM (
      SELECT {[Content].[by Domain Type Item].[Type].&[3]&[1]&[Art]} ON COLUMNS
      FROM [Cntnt]
    )
    Here is a picture of the report currently. Unique Articles is the measure I'm having issues with; you can see that the top level is the site name, and below that the user.
    Thanks so much to anyone who can help me out.  I really, really appreciate it. 
    Julia

    Hi Julia,
    Thank you for your question. 
    I am trying to involve someone more familiar with this topic to take a further look at this issue. Some delay might be expected while the issue is transferred. Your patience is greatly appreciated.
    Thank you for your understanding and support.
    Regards,
    Charlie Liao
    TechNet Community Support

  • How to create count() measure for certain set of records in BMM Layer

    Hello all.
    I have a logical table like this one (Table AAA):
    Key      | Name
    ---------+------
    1-2EMHS9 | AAA
    1-2EMWMO | BBB
    NULL     | CCC
    I need to calculate the count() of records where Key is not NULL. In this example, it must return count() = 2.
    I suppose the CASE operator may help me with that, but I do not know for sure how to do this.
    Thank you for help,
    Alex.

    Thank you.
    But I should make my issue more concrete.
    I need to calculate the number of records (e.g. order_id) that satisfy an appropriate condition (one of the columns, e.g. loy_member_id, is set to null).
    I created a logical column that returns order_id if the condition (loy_member_id is null) is met.
    Look at my logical column:
    Q Orders (LPM) - must return the number of orders where loyalty_member_id is null.
    It has this expression:
    CASE WHEN NOT "Foxtrot (my) replica"."Sales Orders".LOY_MEMBER_ID IS NULL THEN "Foxtrot (my) replica"."Sales Orders".ORDER_ID ELSE NULL END
    So, this returns the order_id I need.
    But the question is how to count the number of order_id's returned by this column in the BMM layer.
    When I define my column with the following expression:
    COUNT(CASE WHEN NOT "Foxtrot (my) replica"."Sales Orders".LOY_MEMBER_ID IS NULL THEN "Foxtrot (my) replica"."Sales Orders".ORDER_ID ELSE NULL END)
    I receive this error:
    [38083] The Attribute 'Q Orders LPM' defines a measure using an obsolete method.
    Thank you,
    Alex.
    Edited by: Alex B on 3/3/2009 19:59

  • Why is the select Count too slow

    I am doing the following select count, calling it from my JSP to get the total number of records... why is it so slow?
    select count(*)
    from
    (select distinct o.receive_id, o.name, o.address
    from order o, item i
    where o.id = i.id
    and o.status = 2 and i.status = 0)

    If the data in the table that you are referring to in the query gets refreshed very often and the high water mark on your table is not reset, then this query will always run longer. When deleting data from the table, use TRUNCATE rather than DELETE; that would help reset the high water mark, and your count() queries will run very fast.

  • Why is the Tick Count function slow when used with a .dll but fine with normal LabVIEW code?

    When using the Tick Count millisecond timer with a .dll I've written in C, I'm getting some odd timing issues.
    When I code the function I want (I'll explain it below in case it helps) in LV and run it as a subVI, feeding it the Tick Count as an argument, the function runs quickly, but not quite as quickly as I would like. When I feed this same subVI just an integer constant rather than the Tick Count, it takes about the same amount of time, maybe a tiny bit more on average.
    When I bring in my function from a .dll, however, I start to run into problems. When I feed my function an integer constant, it is much faster than my subVI written in LV. When I feed my .dll the Tick Count, however, it slows down tremendously. I'm including a table with the times below:
             | Clock   | Constant |
    SubVI    | 450 ms  | 465 ms   |
    .dll     | 4900 ms | 75 ms    |
    This is running the function 100,000 times. The function basically shifts the contents of a 2-dimensional array one place. For this function it probably won't be a huge deal for me, but I plan on moving some of my other code out of LV and into C to speed it up, so I'd really like to figure this out.
    Thanks,
    Aaron

    Hi Aaron,
    Thanks for posting the code -- that made things a lot clearer for me. I believe I know what's going on here, and the good news is that it's easy to correct! (You shouldn't apologize for this though, as even an experienced LabVIEW programmer could run into a similar situation.) Let me explain...
    When you set your Call Library Function Node to run in the UI Thread you're telling LabVIEW that your DLL is not Thread-safe -- this means that under no circumstances should the DLL be called from more than one place at a time. Since LabVIEW itself is inherently multithreaded the way to work with a "thread-unsafe" DLL is to run it in a dedicated thread -- in this case, the UI thread. This safety comes at a price, however, as your program will have to constantly thread-swap to call the DLL and then execute block diagram code. This thread-swapping can come with a performance hit, which is what you're seeing in your application.
    The reason your "MSTick fine behavior.vi" works is that it isn't swapping threads with each iteration of the for loop -- same with the "MSTick bad behavior.vi" without the Tick Count function. When you introduce the Tick Count Function in the for loop, LabVIEW now has to swap threads every single iteration -- this is where your performance issues originate. In fact, you could reproduce the same behavior with any function (not just TIck Count) or any DLL. You could even make your "MSTick fine behavior.vi" misbehave by placing a control property node in the for loop. (Property nodes are also executed in the UI thread).
    So what's the solution? If your DLL is thread-safe, configure the Call Library Function Node to be "reentrant." You should see a pretty drastic reduction in the amount of time it takes your code to execute. In general, the code is thread-safe when:
    - it does not store any global data, such as global variables, files on disk, and so on;
    - it does not access any hardware (in other words, it contains no register-level programming);
    - it does not make any calls to functions, shared libraries, or drivers that are not thread-safe;
    - it uses semaphores or mutexes to protect access to global resources;
    - it is called by only one non-reentrant VI.
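    As a concrete (hypothetical) illustration, a shift routine like the one described stays thread-safe as long as it touches only the buffer passed in, so its Call Library Function Node can be marked reentrant:

    #include <string.h>

    /* Shift all rows of a 2-D array up by one and clear the last row.
       No globals, no hardware access, no unsafe calls: concurrent calls
       on different buffers cannot interfere with each other. */
    void shift_rows(double *data, int rows, int cols)
    {
        memmove(data, data + cols, (size_t)(rows - 1) * cols * sizeof(double));
        memset(data + (size_t)(rows - 1) * cols, 0, (size_t)cols * sizeof(double));
    }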
    There are also a few documents on the website that you may want to take a look at, if you want some more details on this:
    Configuring the Call Library Function Node
    An Overview of Accessing DLLs or Shared Libraries from LabVIEW
    VI Execution Speed
    I hope this helps clear up some confusion -- best of luck with your application!
    Charlie S.
    Visit ni.com/gettingstarted for step-by-step help in setting up your system

  • Distinct Count Measure on Dimension table via Bridge table.

    Hi Team,
    I have a Dim_Devices table, which is linked to other dimensions like Dim_User and Dim_City.
    All these tables have many-to-many relationships defined in a bridge table, i.e. B_Devices_User_City, using referential keys.
    I want to derive a measure such as:
    Select Count(Dim_Devices[Device_Id])
    where Dim_Devices.Validity_End_Date is null
    Note: Device_Id and Validity_End_Date are present in the same Dim_Devices dimension table.
    Could you please help me define the cube structure and how to create such a measure from the dimension table?

    Hi Charlie,
    Now, I have defined a DistinctCount aggregation on the Dim_Devices table, which uses the reference relationship on B_Devices_User_City. My count is correct.
    But I came across one more issue: from the processed and deployed cube, I am trying to create an offline/global cube (I have given the syntax below) for a limited set of dimensions and measures, and I found that it shows me an error due to that relationship.
    Does it mean that, for a global cube, we can't use DistinctCount/Count aggregation?
    CREATE GLOBAL CUBE [Device OLAP_Cube_1_3]
    STORAGE 'C:\Exportcube.cub'
    FROM [Device DATA CUBE]
      -- Measures
      -- Cube

  • Count measure

    Hi,
    I need to create a measure in a cube that counts the number of fact rows where some condition is met in a column of the table.
    My fact looks like this (simplified):
    Company number
    Invoiceno number
    Source varchar2
    What I need in my cube is a count of Invoiceno where Source is equal to sales or credit memo.
    I am using 11g R2 and I am very new to Oracle - so a detailed step-by-step would be appreciated.
    Thanks in advance
    Kind Regards,
    Søren
    Edited by: sdjensen on 2010-05-31 06:49

    Hi Soren,
    you can use an Expression operator to calculate a flag indicating that your condition is true (the expression returns 1 or 0),
    and then in an Aggregator operator apply the SUM function to this flag.
    In SQL it looks like:
    select ..., sum(CASE WHEN SOURCE = 'SALES_MEMO' OR SOURCE = 'CREDIT_MEMO' THEN 1 ELSE 0 END) YOUR_MEASURE
    from fact_table group by ...
    Regards,
    Oleg

  • Count is slow

    When there is an insert happening in the table, will there be an impact on reading data from the table?
    When there is an insert going on, I just ran
    select count(*) from mytable
    and it takes some 10-13 seconds to return the count as the table size keeps growing.
    Is there any faster way to check the count?

    BluShadow wrote:
    Ensure you have an index/primary key, and then a count(*) will often use the index instead which typically takes less I/O as there is usually less data blocks to an index than the base table.
    Counter-intuitive, but this could make the count slower if the table is subject to a lot of inserts (unless the PK is based on a monotonic sequence).
    A count(*) which is much slower than seems reasonable for the hardware may be spending a lot of its time reconstructing read-consistent copies of rapidly changing blocks.
    When you do a tablescan, Oracle starts with a read-consistent view of the segment header to identify the high water marks for the table, so that the count(*) can stop at the earliest possible moment and not have to reconstruct (to empty) blocks that were above the HWM when the query started. (For tables with lots of space below the HWM being used by the inserts this doesn't necessarily help - and there are various reasons why there might be a lot of usable space below the HWM.)
    When a count(*) uses an index fast full scan, the same applies - but if the key values being inserted are effectively randomly distributed (and not at the high value, as sometimes happens) then every block scanned may have to be cloned and made read-consistent.
    Step 1 (for OP):  Check the execution path
    Step 2: check if time required is reasonable for number of blocks that would be read if no inserts taking place
    Step 3: check statistics during execution, noting particularly details of "undo records applied"
    Step 4: check what read waits are occurring - multiblock from the data tablespace, or single block from the undo tablespace
    Regards
    Jonathan Lewis

  • Is there a better approach to show the distinct count measures?

    Experts,
    I have a requirement in which I want to display 20+ calculated columns. The columns are something like this:
    No. of clients with income < 10000 (to check against the sales measure in the fact table)
    No. of clients with sales > 500000 (to check against the sales measure in the fact table)
    No. of clients whose join date > 1st Jan of the current year (to check against cust_start_dt of the Client dim)
    No. of clients with size "Medium" (to check against the client_size column of the Client dim)
    No. of clients with sold product units > 500 (need to check units in the PROD dim)
    and so on.
    Well, I can write a CASE statement using the expression builder for each column in the criteria, but the report performance is very bad. My client dimension is a huge partitioned table. As I keep adding columns, the report takes more time to fetch data, and at some point it never comes back.
    We tried to push the calculations to the database, but since users can provide any selection criteria from the dashboard prompt, it doesn't seem to work.
    If anyone has handled a similar request in the past, please direct.

    Jared,
    Thank you for responding to my posted message. Rendezvous is a new concept to me; maybe it's the solution to my problem. I have been trying to read the online manual and example VIs from the National Instruments website, but I still have a hard time understanding the concept.
    One of the examples I saw uses a rendezvous to run some subVIs. But in my case, I have only one VI, which is a while loop. Inside the while loop there are a few tasks running simultaneously. I don't know whether it will serve my purpose.
    Guangde Wang
    Jared O'Mara wrote:
    > Guangde,
    >
    > Referring to your 2nd method, use rendezvous (under Advanced>>Synchronize
    > on the function palette) to synchronize two processes. There are good examples
    > that come with LabVIEW. Basically, you create a rendezvous, giving it a
    > size. Using the Wait on Rendezvous VI, a function will not continue until
    > all rendezvous have reached that point. Using this method, you can synchronize
    > your 2 while loops.
    >
    > Jared
    >
    > Guangde Wang wrote:
    > > I tried two ways to control the tempo of my program.
    > > One is to use the While Loop's counter and a wait. The drawback of this
    > > method is that the cycle length is dependent on the measuring load. So if
    > > the program runs for days, it will be significantly off the real time.
    > > The other way is to use the difference of the Tick Count. It provides
    > > accurate timing, but the problem is the synchronization of the clock cycle
    > > and the While Loop cycle. I can try to put in a little bit of wait, but I
    > > still cannot synchronize them very well. So after a while, two measures
    > > will get together.
    > > I don't know whether there are some better ways to control the program,
    > > or whether we have some ways to improve either or both of the above two
    > > methods to make them work better. Please let me know if you have any
    > > suggestion.
    > > Thank you in advance,
    > > Guangde Wang
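    (Aside: the rendezvous described above is what most languages call a barrier; no participant continues until all have arrived. A minimal POSIX C sketch of the same idea, assuming two loops that must stay in step:)

    #include <pthread.h>
    #include <stdio.h>

    static pthread_barrier_t rendezvous;

    static void *timed_loop(void *name)
    {
        for (int cycle = 0; cycle < 3; cycle++) {
            printf("loop %s reached cycle %d\n", (const char *)name, cycle);
            pthread_barrier_wait(&rendezvous);   /* both loops sync here */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_barrier_init(&rendezvous, NULL, 2);  /* size = 2 participants */
        pthread_create(&a, NULL, timed_loop, "A");
        pthread_create(&b, NULL, timed_loop, "B");
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        pthread_barrier_destroy(&rendezvous);
        return 0;
    }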
