Are there any general guidelines/algorithm to perform arc/circular motion?

My system consists of a PC with an IEEE 488.2 interface from NI that connects to two Newport MM4006 controllers, which in turn control a high-precision stage.
I would like to know if you can provide me with any guidance/algorithm to learn about programming arc/circular motion.
I learned that there are advanced features not included in my version, such as Draw Pictures and Draw Circles under Pictures. Are these features merely for plotting on the computer screen, and NOT for commanding the controllers to physically trace arcs or circles?
Any advice or suggestion for arc or circular motion?
Thanks

You are restricted to the types of moves which are allowed by your Newport MM4006 controllers. Some controllers have built-in advanced trajectory generation for arcs, curves, and contoured moves (such as the PCI-7344). Other boards are restricted to a series of straight-line moves. The tools you are given to perform an arc depend on the functionality of your controller. If you can only do straight-line moves, you'll need to develop an algorithm which breaks an arc up into a series of many small straight-line moves. I am not aware of any such pre-built algorithms here.
Jack Arnold
Application Engineer
National Instruments
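A sketch of such an arc-interpolation algorithm (in Python, with function and parameter names of my own invention; this is not MM4006-specific code): pick a chord tolerance, derive the largest sweep angle per segment that stays within it, and emit waypoints for a series of straight-line moves.

```python
import math

def arc_waypoints(cx, cy, radius, start_deg, end_deg, chord_tol):
    """Approximate an arc with straight-line waypoints.

    chord_tol is the largest allowed deviation (sagitta) between the
    true arc and any chord segment; smaller values give more points.
    """
    # sagitta = r * (1 - cos(theta / 2))  =>  theta = 2 * acos(1 - tol / r)
    max_step = 2.0 * math.acos(max(-1.0, 1.0 - chord_tol / radius))
    sweep = math.radians(end_deg - start_deg)
    n = max(1, math.ceil(abs(sweep) / max_step))
    return [
        (cx + radius * math.cos(math.radians(start_deg) + sweep * i / n),
         cy + radius * math.sin(math.radians(start_deg) + sweep * i / n))
        for i in range(n + 1)
    ]
```

Each consecutive pair of waypoints would then be sent to the controller as one linear move; for a smooth path you would also want both axes to start and stop together on every segment.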

Similar Messages

  • Is there any JCOP functionality to measure performance within the card?

    Hello,
    We are trying to measure the performance of some functions, but we don't have any specific tool for that.
    We made some tests with PC applications, but they are not precise (as you know, there are some smart card library delays).
    Is there any internal card (JCOP) functionality to estimate the time interval for our card functions, or do you have any other suggestions?
    Thank you in advance.

    Hi,
    Given that a card does not have a real-time clock, it is not really aware of time in this way, and adding a measurement would affect the performance as well (note there is some timing in the crypto chip to help protect against certain attacks, but this is not exposed to the JC API).
    The best you can do is measure the time before and after transmitting an APDU at the lowest level possible. It is safe enough to assume that the PC/SC overhead is going to be consistent for the same amount of data. You could even benchmark just sending data to the card without doing anything with it, to measure the time for transmitting your data. This is not generally needed, as you are interested in the performance as perceived by a client application or a user, and this transmission overhead is a part of that as well.
    Just my $0.02 anyway.
    Cheers,
    Shane
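    A minimal sketch of that host-side timing approach (Python; the `transmit` argument stands in for whatever call your PC/SC stack exposes, e.g. a pyscard connection's transmit method - the names here are placeholders, not a JCOP API):

```python
import time

def average_exchange_time(transmit, apdu, repeats=100):
    """Average wall-clock time of one card exchange.

    Averaging over many repeats smooths out host-side jitter; subtracting
    the time of a do-nothing APDU of the same length then estimates the
    on-card processing time alone.
    """
    start = time.perf_counter()
    for _ in range(repeats):
        transmit(apdu)
    return (time.perf_counter() - start) / repeats
```

    As Shane notes, the transmission overhead is roughly constant for a given data length, so the difference between a real command and a same-length no-op command approximates the applet's processing time.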

  • Are there any set guidelines for customer master data cleansing.

    1) I was wondering if there are any set guidelines for customer master data cleansing.
    2) Part of the cleansing effort involves consolidating number of divisions etc. Any pointers to this regard?
    3) Also how do we deal with open docs, billing plans etc?

    The data cleansing requirement arises while moving data from a legacy system to a new target ERP system, i.e. the data needs to be cleansed, enriched, de-duplicated, and standardized.
    The existing data may not have a consistent format, since it is often derived from various sources. It may contain duplicate information as well as missing or incomplete information. Cleanse and normalize content to achieve accuracy, consistency, and a proper understanding of the data.
    It is also a process of organizing the data, which ensures your data is enriched, up-to-date, accurate, and complete. These processes are manual and labor-intensive and require a fair bit of specialization.
    If you need more details, please feel free to contact us as below.
    Thanks
    Kumar
    www.deebrostech.com

  • Are there any example vi's for implementing a circular buffer between a plc, opc server, and labview dsc??

    I am storing a block of data inside PLC registers and reading this group into LabVIEW as a continuous set of data points. I am counting the number of scans in the PLC, and sometimes the number of points collected inside LabVIEW doesn't match.

    To explain a little bit about tag updating:
    The LabVIEW DSC tag engine is not updated on every change of the value within the PLC. There are, in fact, several "deadbands" that must be crossed before those tags are updated:
    1) The OPC Server has a deadband - the PLC register value has to change a certain % before it is recorded.
    2) In the LabVIEW DSC module, there is an I/O Group Deadband that determines when the tag engine is actually updated.
    Both of these deadbands must be satisfied before a new "value" is recorded in the LabVIEW DSC tag engine.
    Therefore, I would check your OPC Server's deadband (configurable in the OPC Server configuration utility) and also the I/O Group deadband for those tags (configurable in the tag configuration
    editor).
    If this doesn't resolve the issue, please let me know. Thanks.
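    The deadband logic can be illustrated with a short sketch (Python; the function and parameter names are illustrative, not the actual OPC or DSC implementation). A value only propagates when it clears every deadband stage in the chain, which is why the point counts can disagree:

```python
def clears_deadband(last_recorded, new_value, deadband_pct, value_range):
    """True if new_value moved more than deadband_pct % of the full
    range since the last value that was actually recorded."""
    return abs(new_value - last_recorded) > (deadband_pct / 100.0) * value_range

def propagate(values, deadband_pct, value_range):
    """Simulate one deadband stage: keep only updates that clear it."""
    recorded = [values[0]]
    for v in values[1:]:
        if clears_deadband(recorded[-1], v, deadband_pct, value_range):
            recorded.append(v)
    return recorded

# Two cascaded stages (OPC server, then DSC I/O group) each drop updates,
# so fewer points arrive in the tag engine than the PLC scanned.
plc_scans = [0.0, 0.4, 1.2, 1.3, 2.6, 2.7]
after_opc = propagate(plc_scans, 1.0, 100.0)   # OPC server deadband
after_dsc = propagate(after_opc, 2.0, 100.0)   # DSC I/O group deadband
```

    Setting both deadbands to 0% (where the server and DSC allow it) is the usual way to make every PLC scan reach the tag engine.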

  • Are there any guidelines on how to customize R/3 reports into BW

    Hi Everyone,
    I have quite a number of R/3 reports that are required to be converted into BW reports, but I do not know what I am supposed to do.
    Are there any guidelines on how to determine if an R/3 report should be created in BW? If it should be done in BW, how do I decide whether it should be reported off a cube or an ODS?
    I have been doing development for around 3 years, and this is the first time I have to analyze and come up with a functional/design spec. Need help.
    If there are any documents on the guidelines, please send them to [email protected]
    Thanks.
    Regards,
    Shunhui.

    Hi Shunhui
    TABS -> here Venkat was referring to SAP education courses on BW covering extraction, reporting, etc.
    I am not aware of any guidelines given by SAP.
    I will try to help you build the approach for this:
    1. Collect the functional specification of the R/3 report. This will include the functionality of that report, the logic and calculations involved, usage of the report output by end users, etc.
    2. Collect the technical specification - create an Excel sheet with all fields in the R/3 report and relevant details such as formulas used, calculations involved, etc.
    3. Map these technical specs to BW Business Content. Try to see if you can find a Business Content datasource that fits the majority of your R/3 report fields.
    4. Carry out a gap analysis and, at the end of this exercise, determine how you are going to fill the gap - whether by using Z InfoObjects, routines, datasource enhancements, etc.
    5. Classify the reports as aggregate or transactional level. If a report is at transactional level, you need to make use of an ODS. This will determine your data model in BW.
    6. Based on the 5 steps above, create a design document with dataflow and data model details.
    7. Get a sign-off on the functional and technical specs and the design document with the proposed dataflow from users, and implement it.
    For templates of functional and technical specs, you can refer to the blueprint section of Accelerated SAP.
    Hope this helps
    Regards
    Pradip

  • Are there any known issues with Adobe Edge Animate and Yosemite? Experiencing performance issues since upgrading OS

    Are there any known issues with Adobe Edge Animate and Yosemite? I have been experiencing performance issues since upgrading the OS. An animation I was working on that had been performing fine in the browser suddenly stopped working, and this was not related to any action I had taken at that point. I was also working in it today, and the program stopped responding to keyboard shortcut commands.

    I am having a whole slew of odd interface problems with a fresh 2014.1.1 install on a fresh MacBook Pro with the latest Yosemite. The program locks up, cursor selections don't show, things disappear. I also have a Mac mini, and the program runs fine on it. Is there possibly something related to the solid-state drive in new Macs?

  • Are there any performance concerns when referencing an image located in a c

    Are there any performance concerns when referencing an image located in a central location (example application server)?

    Hi
    Should not be an issue at all - we are only going to be limited by the network bandwidth
    Regards
    Tim
    http://blogs.oracle.com/xmlpublisher

  • Are there any fixes for iPad 2 with iOS 8.1.1?

    Are there any fixes for the iPad 2 on iOS 8.1.1?  I've had problems with Safari and other apps crashing or locking up, and with extremely slow touch response.  These problems have been occurring since the last update.  At times the iPad becomes unusable.  I have been keeping all programs closed, and history and website data cleared.  Thanks for any help you can give.

    Hi, dannilee,
    I have exactly the same problem. The device was my favorite computer until 8.1.1. Now it runs s-l-o-w-l-y and requires endless reboots.
    To improve the performance, I have tried these:
    1. Go to Settings -> General -> Accessibility -> Reduce Motion. Turn it on.
    2. Reset the device settings using Settings -> General -> Reset -> Reset All Settings. This improves iPad performance a little.
    3. Hold the sleep/wake button and home button for up to 8 seconds. This will turn off your iPad. Then turn the iPad on. Wait a few moments. Do the same thing twice to solve this issue.
    Solution 3 worked for me. And I always recommend you back up your iPad before taking any action, to avoid any data loss.
    Hope it is useful for you.

  • Are there any OSM bespoke cartridges available?

    Hello,
    I was reading through the SFS documentation for a vendor analysis - I can see Oracle provides bespoke ASAP cartridges for various NEs, and Tech Packs for UIM.
    I was wondering if there are any bespoke cartridges (except for the demo sales one) offered along with the product or licensed separately.
    My understanding is that the O2A cartridge is a prebuilt set of integrated orchestration flows, application integration logic, and extensible enterprise business objects and services required to manage the state and performance of a defined set of activities or tasks between specific Oracle applications. This, however, is service agnostic.
    So are there any bespoke OSM cartridges available leveraging domain knowledge and based on working with telcos?
    Regards
    Sain

    OSM provides only the O2A PIP cartridges and does not provide any bespoke cartridges. As most of the network elements used by different telcos are similar, and the way ASAP communicates with these network elements for various operations is the same, it is easy to provide cartridges for various NEs. But in OSM, no two companies have the same flow for providing a specific service to their customers, so it is difficult to provide such cartridges.
    The O2A PIP cartridges can be customized per end-user requirements.

  • Are there any alternatives for mseg and mkpf

    I had to display the following fields from table S032:
    S032-LETZTABG --> Date: last (i.e. most recent) goods issue
    S032-LETZTVER --> Date: last (i.e. most recent) consumption
    However, the data was not properly filled in the S032 table, so I went to the MSEG and MKPF tables to get BUDAT based on movement types.
        SELECT   MSEG~MATNR
                 MSEG~WERKS
                 MSEG~LGORT
                 MSEG~BWART
                 MKPF~BUDAT
                 INTO TABLE IT_MSEG
                 FROM MKPF AS MKPF  INNER JOIN MSEG AS MSEG
                 ON
                     MKPF~MBLNR  =  MSEG~MBLNR  AND
                     MKPF~MJAHR  =  MSEG~MJAHR
                 FOR ALL ENTRIES  IN  T_OUT_TMP
                WHERE MSEG~MATNR  EQ  T_OUT_TMP-MATNR
                  AND MSEG~WERKS  EQ  T_OUT_TMP-WERKS.
    Are there any other alternative tables for MSEG and MKPF?
    My code above (which includes the SELECT on MSEG and MKPF) has a performance issue.
    Could you please suggest any other alternative to MSEG and MKPF?

    Try to include BUDAT in the selection on MKPF. If you don't have any restriction on MKPF, then just pass an empty range.
    RANGES: S_BUDAT FOR MKPF-BUDAT.
    SELECT
    MKPF~BUDAT   "<<<
    MSEG~MATNR
    MSEG~WERKS
    MSEG~LGORT
    MSEG~BWART
    INTO TABLE IT_MSEG
    FROM MKPF AS MKPF INNER JOIN MSEG AS MSEG
    ON
    MKPF~MBLNR = MSEG~MBLNR AND
    MKPF~MJAHR = MSEG~MJAHR
    FOR ALL ENTRIES IN T_OUT_TMP
    WHERE
    MKPF~BUDAT IN S_BUDAT   " <<<
    AND MSEG~MATNR EQ T_OUT_TMP-MATNR
    AND MSEG~WERKS EQ T_OUT_TMP-WERKS.
    Regards,
    Naimesh Patel

  • Are there any differences between e.g. the Nvidia FX 3800 and GTX 260?

    Does anybody understand why I feel frustrated and confused about all these video card discussions?
    Some tell me that the FX 3800 is a better card - more pro, more stability, etc. - than low-price cards like the GTX 260/285/295, and some tell me that GTX 260 = FX 3800. I read on other webpages about people having problems getting their GTX 260 recognized as an FX 3800 (not in PPro, but by Nvidia software). Other places tell how to softmod a GTX 260 into an FX 3800. Some tell me that there is a big difference between these cards - others that they are almost alike - same GPU, etc. I read that the difference between the GTX 260 and the FX 3800 is only the looks from the outside - inside they are alike - and that apart from that, only the drivers are different.
    I have ordered the FX 3800 - believing it will perform better and more stably and give me really good playback rendering inside PPro CS5 compared to my GTX 260. If I get the same performance and stability by just changing two lines in a text file with my old GTX 260, I will feel cheated... I hope this is not happening.
    Someone from Adobe or Nvidia - please tell us - what is the difference between e.g. the GTX 260 and the FX 3800, besides the outside look and some drivers? If there is no difference between them, I cannot see any other reason for only supporting the FX cards than $$$... If there is a difference in Quadro/FX's favour - and I sincerely hope so - I understand why you chose Quadro cards - but if they are just alike inside, I just feel we have all been misled. I could have saved my money... time will show - I get the FX 3800 in a few days... can't return it (Danish rules when you buy it as a company) - so I'm $1000 short this month... :(
    I would feel much more comfortable having ordered the FX 3800 if someone from Adobe could tell me exactly how much better the FX 3800 is than the GTX 260 - apart from being supported while the GTX is not. And is there more to it than just changing a simple text file? Are there any more things going on inside PPro - deep inside - that affect performance with FX cards, besides two lines in a text file? Who can help answer that question - and I mean really answer it in a serious way - not just saying "of course the FX is better, because it's more expensive".
    Haven't we all experienced this in other situations? Like buying a car? E.g., inside a Skoda Fabia is a VW engine like the ones used in much more expensive VW cars. The only difference is the name, the outside look, and some comfort. The engine is the same - and in a way you could say that, from an engine point of view, the cars are alike. But you pay for comfort.
    Is the same thing going on here? I pay more for the FX 3800 for the looks from the outside and some software drivers that make it more stable - but inside it is just like the GTX 260? So from a graphics-performance point of view, I actually have two cards that are alike, and I am really paying for software and not hardware?
    /Morten

    You are right - and I keep spanking myself on the head with a wet towel.
    I will lie in my bed, thinking of the nice new bicycle I could have bought instead of the FX 3800... ;-)
    No, seriously - I know what I have done - and maybe I have done right - I don't know? Time will show, and I will sell or buy.
    What I am frustrated about is the fact that I actually just decided to buy the FX 3800 because Adobe and Nvidia told me to - to get support for the MPE - and now it seems that my old GTX 260 is much like the FX 3800, if not the same inside? And I am frustrated that it is possible to change a line in a simple text file and suddenly get support for the GTX 260 to run the MPE. This fact only puts more fuel on the conspiracy fire about Adobe and Nvidia having made a simple software trick to make some cards work and others not = making a lot of money on fools like me...
    You must admit it is a rather tempting conclusion - if one is in the conspiracy mood. Well, I try to think well of people. And Adobe convinced me at last that I needed the FX 3800 card. So I did order it eventually - because I need the MPE NOW - can't wait - having such bad performance and issues with PPro 4 - been so frustrated about editing for almost half a year now - constantly having issues, etc. And I am going to start a new big project in 7 days. I have just spent $2000 on an upgrade to Master Collection and have been thinking for a long time about which card to buy - the GTX 285 or the FX 3800 - to get the MPE support. And the same day I ordered, I suddenly saw that the softmod turns my old GTX 260 into an FX 3800. And other posts claim that the cards are the same inside. No wonder I get frustrated - others must feel the same.
    So what I am trying to say is: I feel like Adobe and Nvidia are not being honest with us. If it turns out that the only thing Adobe has done is make a simple text file that makes cards supported or not by entering or changing the names of the cards inside the text file - and nothing else... And if e.g. a softmodded GTX 260 turns out to perform exactly the same and be as stable as an FX 3800 at 5-6x the price, I will feel cheated and like a stupid fool.
    I certainly hope this is not what is going on. But it seems no one can help me but myself - testing and trying these cards - and I will try to post my results in this forum, so others might have it easier to make a choice...
    I'll be back....
    Cheers

  • Are there any types of Garbage Collector ??

    are there any types of Garbage Collector ??

    AmitChalwade123456 wrote:
    this question was asked to me in an interview!! I could not answer that
    kajbj wrote:
    I doubt that they asked you the question that you first posted, because the answer would be yes. Did they ask you what different types of garbage collectors are implemented in the VM?
    AmitChalwade123456 wrote:
    That means there are types of GC!! Can you give some info about them?
    kajbj wrote:
    The answer to that depends on what VM you are talking about, and what version of VM you are talking about. E.g. an IBM VM isn't implemented in the same way as a Sun VM, and a Sun 1.3 VM has other algorithms than a 1.5 VM.

  • Are there any hot fixes or releases due soon?

    It's been a while since I've seen anything released from the update centre. Are there any fixes due for release soon?
    Despite several months, a few hot fixes and the JSC2 Update 1 release, JSC2 doesn't seem to have improved all that much since the initial release.
    The main issue for me at the minute is performance. I know this has been discussed several times on this forum, but there is very little information on when we might see a release that starts to address this issue. I realise that the issue is complex and probably covers many functional areas of the app, so a hot fix or two will not address the problem. However, at the minute, working with JSC2 on a medium-sized application is unproductive, and it is becoming almost unusable. For example, even minor code tweaks or property changes completely lock the IDE for around 30 seconds and send my CPU usage to 100%. The thing is that, functionally, JSC2 is excellent. This is the way I want to be building my JSF applications, and I'm very grateful to Sun for making it available for free. If it wasn't hampered by these performance issues I'd be knocking out web apps in no time at all, but unfortunately, as it stands now, it is almost a hindrance. I have some new projects coming up where I would like to use JSC2, but I cannot recommend it on the grounds of productivity.
    When can we expect something to be released that starts addressing performance? When can I start recommending JSC2 again?

    As per previous posts, complete agreement.
    I have built a production system in-house for a large company using JSC. I had to get my machine upgraded to 2 GB just to get some usability, but it still needs a few JSC restarts and appserver restarts throughout the day.
    Functionality-wise, JSC is such a winner, and the guys at Sun have indeed done a great job. I come from a VB environment of the last 12 years, so having a rich web client tool was high on my priority list. But if I am to try to preach to the rest of the J2EE guys here, the performance issues really need to be addressed. They are used to Eclipse, and I don't need to go on about how good that tool is. If any other company gets JSF to work under Eclipse the way JSC does, I would have no hesitation in using that tool.
    I have tried some of the Eclipse-based JSF IDEs, and they don't even compete with JSC in terms of ease of use and putting apps together. But they do not lock the machine for 30 seconds when I click a property or right-click on a component!
    I have been one of the champions of JSC from the start, and I still am. The system I wrote has nearly 300 users and is working fine, with not one problem in 3 months. If this performance problem can be beaten, JSC will set the standard for GUI development for years. If not, then it will be consigned to the nearly-made-it bin.
    To the guys at Sun:
    After all the time and effort you have put into this product, I'm sure there is no overnight cure for the problems. I am sure you are working on them. For me, I don't care about any new Ajax components, etc. in the updates. If the next patch gets JSC working even 50% better, I will be a happy man. Then I can go and try to convert the unimpressed.
    If not, then I will be forced into a corner to use the IBM RAD tool for JSF development, which I do not want to do. My company has an IBM affiliation, but I like to use the best tools for the job.
    Overall, impressed by JSC but the final hurdle has not been jumped.
    Regards,
    LOTI.

  • Are there any troubles with high rdisp/max_wprun_time?

    Are there any unpredictable negative effects of setting this parameter to large values (for example, 1000 or 2000 seconds)?

    Hi Pavel Danilov,
    You can increase the value, but this increases the chance of work processes being blocked (eventual system standstill situations are possible). I assume you have reviewed note <a href="http://service.sap.com/sap/support/notes/25528">25528</a> and other documentation about this parameter. There can be reasons why 600 is not high enough for current needs.
    Normally, if you have very long-running database accesses, this indicates that the program is in a loop, or that no suitable index exists for the access so that a full table scan is performed, or that the optimizer has a problem with the statement.
    You can check as well whether the reason for the long-running task is:
    - Overload of the SAP system and/or of the database
    - Waiting for locks in the DB
    Regards, Mark

  • I have a late 2009 iMac. I installed OSX 10.8.4 and want to go back to Snow Leopard. I have a 1.5TB external HD. Are there any issues I need to know about or tips I need to know before doing this? Thanks.

    I have a late 2009 iMac. I installed OSX 10.8.4 and want to go back to Snow Leopard. I have a 3TB external HD. Are there any issues I need to know about or tips I need to know before doing this? Thanks.

    Downgrade Lion/Mountain Lion to Snow Leopard
      1. Boot from your Snow Leopard Installer Disc. After the installer
          loads select your language and click on the Continue
          button. When the menu bar appears select Disk Utility from the
          Utilities menu.
      2. After DU loads select your hard drive (this is the entry with the
          mfgr.'s ID and size) from the left side list. Note the SMART status
          of the drive in DU's status area.  If it does not say "Verified" then
          the drive is failing or has failed and will need replacing.  SMART
          info will not be reported  on external drives. Otherwise, click on
          the Partition tab in the DU main window.
      3. Under the Volume Scheme heading set the number of partitions
          from the drop down menu to one. Set the format type to Mac OS
          Extended (Journaled.) Click on the Options button, set the
          partition scheme to GUID then click on the OK button. Click on
          the Partition button and wait until the process has completed.
      4. Quit DU and return to the installer. Install Snow Leopard.
    This will erase the whole drive so be sure to backup your files if you don't have a backup already. If you have performed a TM backup using Lion be aware that you cannot restore from that backup in Snow Leopard (see below.) I suggest you make a separate backup using Carbon Copy Cloner.
    If you have Snow Leopard Time Machine backups made while on Snow Leopard, then you may do a full system restore per #14 in Time Machine - Frequently Asked Questions.  If you have subsequent backups from Lion, you can restore newer items selectively, via the "Star Wars" display, per #15 there, but be careful; some Snow Leopard apps may not work with the Lion/Mountain Lion files.
