How to cope with system noise in a DAQ?

I found that my system noise is too high for my data acquisition.
The frequency range of the noise is very wide and its magnitude is comparable to my useful signal.
What can I do to acquire my data?

What is the noise amplitude, the sampling rate, the environment in terms of electrical devices and EMI/RFI, and what means have you implemented to reduce noise (shielding, grounding, cable lengths and routing, etc.)? I'm trying to get some clues to assist. You can always take multiple acquisitions and average the data to effectively remove some effects of random noise.
Do you have an oscilloscope to verify presence and nature of noise?
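Regarding averaging: a minimal sketch of what that looks like, assuming each acquisition has already been read into a double array of the same length (the acquisition call itself is hardware-specific and not shown):

    // Average N repeated acquisitions sample-by-sample to suppress
    // uncorrelated (random) noise; coherent interference will not average out.
    static double[] averageAcquisitions(double[][] acquisitions) {
        int samples = acquisitions[0].length;
        double[] mean = new double[samples];
        for (double[] acq : acquisitions) {
            for (int i = 0; i < samples; i++) {
                mean[i] += acq[i] / acquisitions.length;
            }
        }
        return mean;
    }

Random noise amplitude drops roughly with the square root of the number of averages, so 100 averages buys about a 10x improvement; periodic interference (50/60 Hz pickup, for example) needs shielding, grounding, or filtering instead.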
~~~~~~~~~~~~~~~~~~~~~~~~~~
"It’s the questions that drive us.”
~~~~~~~~~~~~~~~~~~~~~~~~~~

Similar Messages

  • How to cope with a dead pixel

    Yes, I know, there's no fix for a dead pixel (is there?), and there are lots of maybe-fixes for a "stuck" pixel. Well, I have a dead pixel, super-dead, in a really terrible place (the lower right quadrant, just a tad above where, say, the iTunes ministore would be). I'm stuck with it. I can't do anything to get it fixed or replaced (can I?), so my question to the community is about coping with it. How does one cope with something so aggravating? Are there some tweaks one can do to make, say, white web pages reverse their colors so the pixel is not as apparent? Are there other methods to get rid of it or make it less obvious? I'm going mad. Someone help.
    Macbook 2ghz   Mac OS X (10.4.7)   1GB RAM 120GB HDD

    Well, I do understand how crazy it can drive you when a pixel is dead. Everything was fine on my MB, then about a month in I noticed a pixel that was dead. I too looked at every topic I could find on the subject; nothing helped. So I took my MB into my local Apple Store, and they sent it off for repair. Of course Apple did not fix it because "One pixel is not considered enough of a problem to repair". So the Apple Store actually ordered me a new screen and replaced it for free.
    I was very happy until I turned on the computer at the A.S. and once again it had a dead pixel, in a different spot. They said that there was nothing they could do about it. They would not replace it again because it would screw up their store budget. So I'm learning to deal with it. Believe me it really *****; I'm a perfectionist and it really bugs me to see this flaw. Other than that my MB is great and I love it. Good luck!

  • How to cope with Out of Memory Errors

    Hi, I am distributing a desktop application to the general public, written in Java 1.4.2. How much memory is required is approximately proportional to how many files they load, so I've tried to pre-empt OutOfMemoryErrors by checking when memory usage reaches 95% and preventing the loading of additional files above this level with the following code:
    protected static void checkMemory() throws LowMemoryException {
        if (Runtime.getRuntime().freeMemory() < Runtime.getRuntime().maxMemory() * 0.05) {
            Runtime.getRuntime().gc();
            if (Runtime.getRuntime().freeMemory() < Runtime.getRuntime().maxMemory() * 0.05) {
                MainWindow.logger.severe("Memory low:" + Runtime.getRuntime().freeMemory() / 1024 / 1024 + "MB");
                throw new LowMemoryException("Running out of memory");
            }
        }
    }
    but this code is not robust; sometimes users report a LowMemoryException when they have only loaded a few files.
    I tried removing this code, but then users can get an OutOfMemoryError, which can cause problems with whatever code was running at the time and leave the application in an inconsistent state. If I just exit the application immediately it would be very annoying for users that are in the middle of something.
    I also have adjusted the -Xms and -Xmx settings but cannot decide on a suitable default.
    What I would ideally like the application to do is to extend its heap space as required, up to the limits of the user's machine, and if it reaches memory limits, handle the lack of memory in a reliable manner, allowing the user to continue using the application in a safe way.

    Unfortunately the metadata is stored and displayed within a JTable, so even if I had it in a database I think it would all have to be loaded into memory for display within the JTable in a timely fashion.
    Anyway, I think I've found the problem with the original code: it was reporting memory low when almost all of the currently allocated memory was being used, but it hadn't accounted for the fact that the maxMemory limit had not been reached yet, so more memory could still be allocated.
    I think the correct code is:
    protected static void checkMemory() throws LowMemoryException {
        if (Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory() > Runtime.getRuntime().maxMemory() * 0.95) {
            Runtime.getRuntime().gc();
            if (Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory() > Runtime.getRuntime().maxMemory() * 0.95) {
                MainWindow.logger.severe("Memory low:" + (Runtime.getRuntime().maxMemory() - (Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory())) / 1024 / 1024 + "MB");
                throw new LowMemoryException("Running out of memory");
            }
        }
    }
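    A hedged sketch of how the check might be wired into the file-loading path so the UI recovers instead of hitting an uncontrolled OutOfMemoryError; the loadFile call and the dialog text are illustrative, only checkMemory and LowMemoryException come from the code above:

    // Check headroom before each potentially expensive load and keep the
    // application usable if the check fails.
    public void loadFiles(java.util.List files) {
        for (java.util.Iterator it = files.iterator(); it.hasNext();) {
            java.io.File f = (java.io.File) it.next();
            try {
                checkMemory();   // throws LowMemoryException when headroom < 5%
                loadFile(f);     // application-specific loading (assumed to exist)
            } catch (LowMemoryException e) {
                javax.swing.JOptionPane.showMessageDialog(null,
                    "Not enough memory to load " + f.getName()
                    + ". Close some files or restart with a larger -Xmx.");
                break;           // stop loading further files; the app stays consistent
            }
        }
    }

    Since the application targets Java 1.4.2, the sketch avoids generics and the enhanced for loop.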

  • Re-Installing PC, how to cope with cloud installation

    Hi, I need to re-install my Lenovo WS (Win 8.1). Back in the pre-cloud days I simply had to de-activate my Adobe software and re-activate it after the operating system installation. How does that work with my cloud apps? What is the process today?
    Since this is more or less a common issue I was looking for some standard support database entry, but I was not able to find anything of help.
    Thanks,
    Sven

    It's easier. You can sign out before wiping your PC, but you don't need to. With CC, you can even sign out of old 'dead' computers using a new computer.
    Just do whatever you need to do, and when you're ready to install CC, start downloading a CC app (after signing in with your subscriber Adobe ID) and the desktop app will install: https://creative.adobe.com/

  • How to cope with DTPs becoming inactive

    Hi,
    When we do a system copy from the production source system to a test system, we have to restore the connection.
    This is followed by a replication. All the DTPs are getting deactivated. How can we prevent this?
    Activating all the DTPs gets us in trouble with the process chains, as the DTPs get new technical names.
    Is activating in Development and transporting all DTP's the only solution?
    Any information on this will be appreciated.
    Udo

    Hi.
    Did you follow note 886102 to perform this system copy?
    We have seen some issues with DTPs and the BDLS process.
    Please also check whether the following notes were applied in your system, and then rerun the BDLS process afterwards.
    1139924    BI7.0(SP18): System names converted incorrectly
    1142908    70SP18: BDLS does not convert pseudo D versions
    1148403    70SP18: BDLS after early replication
    1149141    70SP18: BDLS: Improvements log
    1169659    Correction: Process variants not entered
    Thanks,
    Walter Oliveira.

  • How to cope with XMLAgg bug 4277241?

    Hi,
    I have hit bug 4277241, which was referred to here as well: Re: Important--Bug(ORA-22813) fixes in Oracle Database 10g Release 10.2.0.4, for my 11.2.0.1 database.
    This is due to the result set being too big.
    Now I have read the Metalink note and it says:
    xmlagg() with a GROUP BY can fail with ORA-22813 if the result is too large.
    This is normal and expected as there is a hard coded limit on the result
    size *BUT* this fix allows event 44410 to be set which uses a different method
    to evaluate the XMLAGG(). The event is disabled by default.
    NOTE: If the event is set to work around the 30K limit then LOB leakage
          can occur. This is expected and is due to the way the limit is
          avoided.
    Workaround
      Rewrite the query without using "group by" by using the select distinct method.
    Apparently using the event causes a memory leak, which is of course undesirable.
    So is the problem caused by using the GROUP BY in conjunction with XMLAgg, or just by using XMLAgg altogether?
    If the answer is the GROUP BY, then how does one rewrite the query without resorting to subqueries?
    DISTINCT can only be used with the SELECT keyword, so the second XMLAgg used below cannot be preceded by a DISTINCT.
    Using subqueries for every nesting level makes the query rather large and complicated; that is the reason for asking.
    The query below can be rewritten using subqueries, and thus avoiding the GROUP BY, but I am more interested in alternatives using DISTINCT or whatever.
    Of course the query below doesn't give the error, but it does show what I mean:
    select xmlagg(
             xmlelement("department",
               d.department_name,
               xmlagg(
                 xmlelement("employee",
                   xmlelement("first-name", e.first_name),
                   xmlelement("last-name", e.last_name)
                 )
                 order by e.last_name
               )
             )
           ) as "xml"
    from   employees e
           join departments d on (e.department_id = d.department_id)
           join locations l on (d.location_id = l.location_id)
    group by d.department_name;

    Hi,
    Could XQuery be an option for you?
    SELECT XMLQuery(
      '<root>{
         for $d in fn:collection("oradb:/HR/DEPARTMENTS")/ROW
         return element department {
           attribute name { $d/DEPARTMENT_NAME/text() }
         , for $e in fn:collection("oradb:/HR/EMPLOYEES")/ROW
           where $e/DEPARTMENT_ID = $d/DEPARTMENT_ID
           order by $e/LAST_NAME
           return element employee {
             element first-name { $e/FIRST_NAME/text() }
           , element last-name { $e/LAST_NAME/text() }
           }
         }
       }</root>'
      returning content
    )
    FROM dual;

  • How to cope with surrogate id's after migrating to OWB10gR2

    We are busy converting an OWB 9.2 environment to OWB 10gR2.
    I already had problems with the way dimensions and cubes are used in OWB 10gR2.
    Some of my dimensions will have a steep growth curve because of the level rows that are inserted into the new dimensions. These new rows exist to enable you to link cubes to dimensions at all levels and not only to the lowest level.
    But when building the new mappings, surrogate keys will also be used to make the join between dimension and cube, and this again causes problems for existing reports.
    In OWB 9.2 we prepared the cubes and dimensions in the staging area and then loaded them into the data warehouse.
    We can still keep the existing columns in the newly defined dimensions, but we have to add the surrogate keys. When loading data with the new dimensions and cubes, the cubes will refer to the dimensions with the new surrogate id’s and not with the old id’s, which are now the business keys. So we are losing the relation between cube and dimension based on the business key, and we have it now based on the surrogate key.
    But the reports we have (made with Information Builders) are still working with the relationships based on the business id’s, and since the foreign keys in the cube refer to the surrogate id’s instead of the business id’s, the reports will return without results.
    I know I can resolve it by redesigning the reports but this is a lot of work and not the way we want to go.
    My alternative solutions are:
    1. Keep the new dimensions, but rename them; make views with the old dimension names; in the views, switch the old business id and the surrogate id. The reports will then work with the views and the old joins will keep on working.
    2. Do not use dimensions and cubes; migrate the mappings to use tables (as a matter of fact, this is what you get after importing a 9.2 MDL into 10.2). There is no change in table names, so the reports keep on working.
    Alternative solution 1 is rather dangerous because the column names are switched; this makes the model more difficult to understand.
    Alternative solution 2 can become a problem if we want to use slowly changing dimensions: we would have to build them as we would have done in OWB 9.2, which leads to complex mappings.
    Are there any other alternatives? Can somebody advise on the best answer to these challenges?

    Well, couple of ideas here.
    First off, do you HAVE to use levels in the dimension? For some of our dimensions, we just define a single "default" level, and tie all attributes to it. For example, with a time dimension, we'd define a default level, and then tie month_id, month_name, quarter_id, quarter_name, year_id, and year_name to that default level. This works out well - you basically get a "flat" dimension with all attributes on the row, thus preventing the generation of keys for the other levels. You'll still get a DEFAULT_LEVEL_KEY, but it will always equal the dimension key and you can just ignore the column.
    FYI - I've just started playing with OWB 11gR1, and I discovered a few minutes ago that when setting up a dimension level, you don't appear to have to have the surrogate_id attribute turned on. Not sure if this works yet, will let you know.
    Last but not least, I read your problem re: the business keys vs. surrogate keys to be "we used business keys instead of surrogate keys as the dimension PKs and fact table FKs on our initial DW, and now we have to migrate that...", where OWB of course wants to use surrogates. Couple of thoughts. First off, I don't think you're going to get any auto-magic way to do this - using business keys instead of surrogate keys is definitely non-standard.
    The one suggestion I would have is to do this: keep the original business IDs on the fact table as their own columns. Set them up as "measures" with a data type of varchar2 or whatever they already are. OWB will still of course create the "proper" FK columns based on the surrogate keys, but your users will still be able to join based on the "old" business id joins. The only real price you'd have to pay for this option is a) it takes more space per row on the fact table, and b) you'll probably have to do some indexing. OWB will build the proper bitmap indexes on the surrogate keys, but you'll probably want to add bitmap indexes on the business ID columns.
    Hope this helps,
    Scott

  • How to cope with "undefined is not an object" ?

    I'm tiptoeing towards a solution, but stepped on a thumbtack:
    The following snippet creates an error in the function GetTplParams:
    var template = "DDD-BibFM-tpl.fm";  // located in script-dir
    var outFile  = "DDD-BibFM.fm";      // located in the source dir

    oDoc = OpenTemplate ();
    alert ("Is template open now?");

    function OpenTemplate () {
      var tplDir, tplFile, openParams, openReturnParams, oFile;
      tplFile = "E:\\_DDDprojects\\FM+EN-escript\\escript\\DDD-BibFM-tpl.fm"
      // SimpleOpen does not take the tplFile, but opens the dir of the active document
      //  oFile = SimpleOpen (tplFile, true);
      openParams = GetTplParams ();
      openReturnParams = new PropVals();
      oFile = Open (tplFile, openParams, openReturnParams);
      return oFile;
    }

    function GetTplParams() {  // =>>> "undefined is not an object" on line 22
      var params, i;
      // Change the params
      i = GetPropIndex(params, Constants.FS_RefFileNotFound);
      params[i].propVal.ival = Constants.FV_AllowAllRefFilesUnFindable;
      i = GetPropIndex(params, Constants.FS_FileIsOldVersion);
      params[i].propVal.ival = Constants.FV_DoOK;
      i = GetPropIndex(params, Constants.FS_FontNotFoundInDoc);
      params[i].propVal.ival = Constants.FV_DoOK;
      i = GetPropIndex(params, Constants.FS_LockCantBeReset);
      params[i].propVal.ival = Constants.FV_DoOK;
      i = GetPropIndex(params, Constants.FS_FileIsInUse);
      params[i].propVal.ival = Constants.FV_OpenViewOnly;
      i = GetPropIndex(params, Constants.FS_AlertUserAboutFailure);
      params[i].propVal.ival = Constants.FV_DoCancel;
      /*
      i = GetPropIndex(params, Constants.FS_MakeVisible);
      params[i].propVal.ival = false;
      */
      return (params);
    }
    (inserting JS code really has its quirks in this editor).

    Thanks Klaus - now it works!
    To Rick.
    The presented piece of code is part of 're-furbishing' the FrameMaker to EndNote 'connection'.
    In a first step I collect temporary citations (such as [Daube, 1969, #123]) from text, footnotes and tables into an array. This part already works fine (I had to postpone further development for a year now...).
    Then I write this data to a new document (which is created from the template - this is where the mentioned piece of code comes into play).
    This file is then saved as RTF to be worked off by the bibliographic application EndNote (or Citavi in another case) to resolve the temporary citations into formatted citations and the bibliography.
    After that, the modified RTF is read into FM and the temporary citations in the FM document/book are replaced by the formatted citations.
    The user then copies the bibliography (as text only) into his appropriate chapter/section and formats it to his liking.

  • How to cope with the case that a thread crashes?

    If a thread crashes, how do I notify other threads? Is there a standard method to follow?

    Sounds like a job for the Observer pattern. You'll have to handle the crash on your own, though. Catch any exceptions and notify the observers.
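    A minimal sketch of that idea, assuming a worker whose run() catches everything and notifies registered listeners; the CrashListener interface and the class and method names here are illustrative, not a standard API:

    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    interface CrashListener {
        void threadCrashed(Thread t, Throwable cause);   // called when a worker dies
    }

    class MonitoredWorker implements Runnable {
        private final List<CrashListener> listeners = new CopyOnWriteArrayList<CrashListener>();

        void addListener(CrashListener l) { listeners.add(l); }

        public void run() {
            try {
                doWork();                            // application-specific work (assumed)
            } catch (Throwable t) {
                for (CrashListener l : listeners) {  // notify the observers of the "crash"
                    l.threadCrashed(Thread.currentThread(), t);
                }
            }
        }

        private void doWork() { /* ... */ }
    }

    Since Java 5 you can also register a Thread.UncaughtExceptionHandler on the thread; it is handed the Throwable that killed the thread and is another natural place to hook the notification in.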

  • How to cope with differences in menu between new and old ATV?

    I am using the same iTunes library but two ATVs in different rooms.
    Why does the newer ATV not group movies or TV shows by "show"?
    And why do TV shows appear in alphabetical order on the old gen one but in random order on the new ATV?
    All from the same iTunes library.
    On the old (hard disk type) we can browse movies and TV shows in alphabetical order.
    On the new ATV, depending on whether it is a movie or a TV show, it is either in reverse alphabetical order or totally random.
    Seasons are grouped on the old ATV but show as separate shows on the new ATV?
    Any tips on extra tags in iTunes content that might help?
    I set episode ID, season and show numbers.
    I use show to group similar movies or to indicate the TV show.
    I also use genre.

    I'd update iTunes first. Apple just released 10.6.1, which specifically addresses the TV show sorting issue. Regardless, iTunes TV show sorting remains annoying. It's my understanding that the shows will sort properly once in a playlist. However, I'm holding firm at 4.4.4 and waiting for the 5.0 update to address the remaining issues with the latest ATV software.
    from the Apple support page:
    About iTunes 10.6.1
    iTunes 10.6.1 provides a number of improvements, including:
    • Fixes several issues that may cause iTunes to unexpectedly quit while playing videos, changing artwork size in Grid view, and syncing photos to devices.
    • Addresses an issue where some iTunes interface elements are incorrectly described by VoiceOver and WindowEyes.
    • Fixes a problem where iTunes may become unresponsive while syncing iPod nano or iPod shuffle.
    • Resolves an ordering problem while browsing TV episodes in your iTunes library on Apple TV.
    For information on the security content of this update, please visit: support.apple.com/kb/HT1222

  • How do you cope with releases of your custom Portal Content (PCD Content)?

    Dear SDN Community,
    I'm very interested in how you cope with release management in the Portal (PCD).
    What I'm focusing on is the release of a "project" through your landscape (Dev, Test, QA, Prod).
    When starting a project you start with a new folder, let's say "My Custom App", that contains all your content.
    Once done, you transport to Test, QA and eventually Prod. Now the content is in use and roles are assigned.
    Requests for changes arrive from the customer and eventually a new release needs to be implemented.
    Of course you could then change/update the content in the initial folder and transport that to Test and QA again, but you would not be able to fix incidents while creating the new release because objects may be touched.
    I was thinking about a scenario where you create subfolders in your application folder, see example below:
    \ "My Custom App"
    |--- Release 1.0
    |--- Release 1.1
    |--- Release 2.0
    |--- Current
    You only do this on your Dev system. Every time a release is OK you put it in the "Current" folder and transport this to Prod.
    Using this, you won't have to change the role assignments in your different landscapes as they are always pointing to the "Current" folder. In addition, this scenario enables you to provide pilot rollouts to smaller groups in parallel to the major release.
    My question to you is to shoot at it and give your opinion/feedback... where possible, provide links to SAP documents or best practices that describe this subject.
    Thanks in advance,
    Benjamin Houttuin

    Hi Benjamin,
    It is an interesting approach you describe and I see no reason it should not work.
    The only thing which can create some problems is that the object ID of roles must be unique. For example com.mycom.roles.roleA cannot exist in two separate folders in the PCD. For all other PCD objects it is the PCD ID which must be unique and this includes the folder prefix.
    In my experience, the most natural solution for this in large portal environments is to have both a version and a production line.
    The production line will be your standard sandbox, development, test/QA and production, and the version line will consist of development and test/QA systems.
    Long-running projects for enhancements to existing functionality will be done in the version line, before being transported to the production line when ready for go-live. This makes even more sense when you have version lines for the backend systems your portal is using, which are modified at the same time.
    Whilst the project is running, the production line can be used for changes to the current version, but it is important to note that all these changes have to be migrated into the version line in order to make sure they are not overwritten.
    Of course, a version line brings extra cost related to synchronization and similar, so it does not always make sense.
    Amit's two documents of large portal installations might be somewhat relevant:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/bec9711e-0701-0010-e4ac-84f50543bfa9
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/25cab009-0801-0010-1380-edaa1d5e7f85
    Regards
    Dagfinn

  • How to fix the constant noise in the left corner of the MacBook Air which is not connected with the fan?

    How to fix the constant noise in the left corner of the MacBook Air which is not connected with the fan?

    Unfortunately if there are various colours across the screen then you have broken the LCD Screen and it will need replacing. If the colours change when you press the area lightly then this is another symptom that you've broken the screen.
    Sorry to give the bad news!

  • I already have a new book that is ready to publish as an iBook. In addition I also have an app prepared for this book. Please tell me how to upload this book (title is: "How To Cope...with life") and the app for it into your iBook Authors app so I can hav

    I already have a new book that is ready to publish as an iBook. In addition I also have an app prepared for this book.
    Please tell me how to upload this book (title is: "How To Cope...with life") and the app for it into your iBook Authors app so I can have it offered for sale @$0.99, in your app store?
    FROM: Terry Weber, Email: [email protected]

    https://itunesconnect.apple.com/WebObjects/iTunesConnect.woa/wa/bookSignup

  • How to make a synchronous acquisition of two analog signals with a one-channel DAQ?

    Hi !
    It is the first time I am using LabVIEW. I have just made some easy VIs, and now I do not know how to deal with my problem...
    My problem : I have only one acquisition card (DAQ Ni 6034E) and I would like
    to acquire two analog signals simultaneously. It seems to be possible, in a quasi-synchronous acquisition, if the card acquires one point of the first signal, then the first point of the second signal, then the second point of the first signal, and so on... I thought that I could make two sequences with a data acquisition VI in each sequence, with the AI MULT PT. But with this, I think that I will lose the precision of the sampling frequency. And I have to know the sampling frequency...
    Thanks for your help !
    Carline

    Hi Carline,
    A PCI-6034E is a low-cost board, which only has one Analog to Digital Converter (ADC).
    With only one ADC you can't acquire different channels at exactly the same time. That's why you find a multiplexer before the ADC in this type of board. This enables you to acquire multiple channels at the same rate, quasi-simultaneously. The samples of the different channels will be interleaved, as explained in the following knowledge base:
    - http://digital.ni.com/public.nsf/3efedde4322fef19862567740067f3cc/9379ea091c264b7c86256bc90082ca5d?OpenDocument
    With this method, the sampling frequency is the same for all channels. The sampling frequency of one channel is determined by the "scan rate" parameter. There is only a small delay between the samples of each channel. When you perform an interval scanning acquisition, this delay depends on the "sample rate" that you specify. To find out more about this terminology, please refer to the following KB:
    - http://digital.ni.com/public.nsf/websearch/4D1435DF82EF494186256D8A006DD6D4?OpenDocument
    To easily perform such a multi-channel analog acquisition, you can use the VI named "AI Acquire Waveforms.vi". You can also use an example provided with LabVIEW. You just have to browse the NI Example Finder in "Hardware Input & Output >> Traditional NI-DAQ >> General".
    Best regards,
    Benjamin
    National Instruments France
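    As an illustration of the interleaving described above, a minimal sketch (plain Java here, purely illustrative; the buffer layout is the assumption, not an NI API) of splitting one interleaved buffer back into per-channel arrays:

    // A board with a single multiplexed ADC returns a scan of N channels as
    // ch0, ch1, ..., chN-1, ch0, ch1, ... in one buffer.
    static double[][] deinterleave(double[] buffer, int channelCount) {
        int scans = buffer.length / channelCount;
        double[][] perChannel = new double[channelCount][scans];
        for (int scan = 0; scan < scans; scan++) {
            for (int ch = 0; ch < channelCount; ch++) {
                perChannel[ch][scan] = buffer[scan * channelCount + ch];
            }
        }
        return perChannel;
    }

    Within one scan the two channels are separated only by the interchannel delay, so they are offset by at most that delay rather than being truly simultaneous.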

  • How well does SAP cope with a manufacturing environment...?

    Hi all,
    I wanted to check about how well SAP copes with a manufacturing environment where there are many parts for many product lines, with small production quantities being the norm. In this case, you would see many BOMs, High quantity of POs, Supplier invoices etc. This is in addition to the normal needs for Quality inspection forms, WIP movement and valuation. It's intended to utilise all the modules, ordering, inventory, quality, purchasing, MRP, sales order, invoicing, payments, AR/AP, HR, CRM.

    Hi,
    Not sure about the scenario - many BOMs for many product lines - is it that there is a very high number of material master records which have BOMs, or are there a lot of BOMs for a single material? If it is the second case, the concept of variant configuration can be used, where the number of BOMs is reduced by using a super BOM.
    Regarding the high quantity of POs: you could probably explore the option of clubbing small POs into a single one for common vendors.
    Regards,
    Amalesh
