Minimum records in aggregation

Hi BW folks!
I want to create a query where at least 10 records are taken into account in the result set. This is a requirement of the works council. If the query or a filter step results in fewer than 10 records, the query should produce no output. It is about headcount data, and they want to avoid that somebody can find out details about single persons. So I somehow need this minimum aggregation over 10 records.
Thanks !!

Hi,
You can use COUNT() to count the number of values of an InfoObject, for example employee, person, or whatever you are using. Use COUNT() in a (hidden) calculated key figure and create a condition on that key figure. Hope it helps.
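In plain SQL, the idea would look roughly like the sketch below: count the distinct persons per result row and keep only groups with at least 10 of them. In BEx you would model the count as a hidden calculated key figure and the threshold as a condition on it; the table and column names here are made up for illustration.

-- Only return aggregates that are backed by at least 10 distinct employees.
-- Hypothetical table and column names.
SELECT org_unit,
       COUNT(DISTINCT employee_id) AS headcount
FROM   headcount_facts
GROUP  BY org_unit
HAVING COUNT(DISTINCT employee_id) >= 10;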
Kind regards,
Alex

Similar Messages

  • Maximum number of the "records" in: Aggregated bill and payment handling

    Hi,
    we are currently dealing with the proposal and design of processes/settings in the area of aggregated bill and payment handling between distributors and suppliers.
    Do you have experience and/or knowledge regarding the maximum number of records in the related processes, such as:
    Supplier side:
    - the maximum number of individual postings aggregated into an aggregated posting on the supplier contract account?
    - the maximum number of aggregated postings included in an aggregated bill (print document)?
    - and consequently the maximum number of individual print documents included and printed/sent in an aggregated bill (print document)? Also in the case that the aggregated bill is sent as an IDoc with details?
    - the maximum number of IDocs sent in an aggregated IDoc?
    - the maximum number of records in a distribution lot created from aggregated billing?
    Distributor side:
    - the maximum number of individual electronic bills posted in aggregated form on the supplier contract account during processing?
    - the maximum number of individual payments included in a PAN?
    Thanks
    Tomas.

    Hi,
    I would not bring too many documents into one aggregated document (on either the supplier or the distributor side). Try to find an error in a distribution lot or payment advice note with more than 10000 items. You can use posting areas R0070 and R0071 (transaction FQC0) to limit the size of aggregated postings.
    Best regards,
    Alexander

  • Record Check / Aggregation when loading to infocube

    When pushing daily data to an InfoCube, summarized by week, I am getting inconsistent results in the cube.
    For example, in a week, if I have 5 separate records in the ODS with the same plant, variety, and week, but each on a different day, those 5 records should roll up to one record in the infocube by week.
    In my record count, however, I notice the system generates packets of varying sizes during update to the cube.  My question is, what if ODS records with the same keys are spread across the packets, could this result in inaccurate update to the cube?
    In the record check screen,  the Converted --> Updated columns suggest to me that unless similar records exist in the same Packet, there is a chance they will not be rolled up properly.
    Any thoughts?
    CB

    I agree that compression will yield the correct results, but this does not seem to address the root concern, namely that data are not fully rolled up during the load.
    I would not expect that the individual packets would have an impact to overall cube roll-up, but in our testing it appears this is the case.
    Do you know if the roll-up of data in a cube with similar characteristic values should be impacted by the breakdown of data in the packets?

  • How to get top 3 record from aggregated column

    Hello,
    I have a simple query which returns all students in a class with their total marks. Now I want to select the top 3 students whose scores are highest, i.e. the simple top three positions. How can I use a rank function here, or is there any other way to do that?
    select  st.sti_roll_no  ,st.sti_name , sum(rd.rd_obt_marks) as  mycol  from rd_result_detail rd , rm_result_master rm, sti_student_info st
    where rm.rm_result_id = rd.rd_result_id
    and st.sti_roll_no= rd.rd_student_id
    --and  rd.rd_student_id = 'MBP10293'
    and rm.rm_semester = 3
    and rm_session = 2009
    and rm_batch= 3
    and rm.rm_exam_type ='FINAL TERM'
    and rm.rm_class_id = 'MBA'
    group by st.sti_name, st.sti_roll_no
    order by st.sti_roll_no;

    Not sure, but try something like this:
    with t as
    (select st.sti_roll_no, st.sti_name, sum(rd.rd_obt_marks) as mycol
    from rd_result_detail rd, rm_result_master rm, sti_student_info st
    where rm.rm_result_id = rd.rd_result_id
    and st.sti_roll_no = rd.rd_student_id
    --and rd.rd_student_id = 'MBP10293'
    and rm.rm_semester = 3
    and rm_session = 2009
    and rm_batch = 3
    and rm.rm_exam_type = 'FINAL TERM'
    and rm.rm_class_id = 'MBA'
    group by st.sti_name, st.sti_roll_no)
    select sti_roll_no, sti_name, mycol, rnk
    from (select sti_roll_no, sti_name, mycol,
                 dense_rank() over (order by mycol desc) rnk
          from t)
    where rnk <= 3;  -- the rank alias has to be filtered in an outer query
    SQL> ed
    Wrote file afiedt.buf
      1  select e.* from (select empno,ename,sal,dense_rank()over(order by sal desc) rnk
      2* from emp)e where rnk<=3
    SQL> /
         EMPNO ENAME             SAL        RNK
          7839 KING             5000          1
          7788 SCOTT            3000          2
          7902 FORD             3000          2
          7566 JONES            2975          3

  • ODS - 0RECORDMODE

    Hi friends,
    In the ODS we are adding 0RECORDMODE, am I right? Can you tell me whether it is compulsory to add 0RECORDMODE to an ODS, and what it does?
    thanks
    mano

    Hi Mano,
    Most of the time the system adds 0RECORDMODE by itself. 0RECORDMODE is most useful in delta loads from the ODS to another ODS/cube.
    You would find lots of posts on the 0recordmode in this forum.
    <u><b>From SAPhelp:</b></u>
    RECORDMODE
    Definition
    This attribute describes how a record in the delta process is updated. The various delta processes differ in that they each only support a subset of the seven possible characteristic values. If a Data Source implements a delta process that uses several characteristic values, the record mode must be part of the extract structure and the name of the corresponding field must be entered in the Data Source as a cancellation field (ROOSOURCE-INVFIELD).
    The seven characteristic values are as follows:
    <b>' ':</b> The record delivers an after image.
    The status is transferred after something is changed or added. You can only update the record straight to an Info Cube if the corresponding before image exists in the request.
    <b>'X':</b> The record delivers a before image
    The status is transferred before data is changed or deleted.
    All record attributes that can be aggregated have to be transferred with a reverse +/- sign. The reversal of the sign is carried out either by the extractor (default) or the Service API. In this case, the indicator 'Field is inverted in the cancellation field' must be set for the relevant extraction structure field in the Data Source.
    These records are ignored if the update is a non-additive update of an ODS object.
    The before image is complementary to the after image.
    <b>'A':</b> The record delivers an additive image.
    For attributes that can be aggregated, only the change is transferred. For attributes that cannot be aggregated, the status after a record has been changed or created is transferred. This record can replace an after image and a before image if there are no non-aggregation attributes or if these cannot be changed. You can update the record into an Info Cube without restriction, but this requires an additive update into an ODS Object.
    <b>'D':</b> The record has to be deleted.
    Only the key is transferred. This record (and its Data Source) can only be updated into an ODS Object.
    <b>'R':</b> The record delivers a reverse image.
    The content of this record is the same as the content of a before image. The only difference is with an ODS object update: Existing records with the same key are deleted.
    <b>'N':</b> The record delivers a new image.
    The content of this record is the same as for an after image without a before image. When a record is created, a  new image is transferred instead of an after image.
    The new image is complementary to the reverse image.
    <b>'Y':</b> The record is an update image.
    This kind of record is used in the change log of an ODS object in order to save the value from the update. This is for a possible rollback and roll-forward for key figures with minimum or maximum aggregation. This record also has the update value for characteristics (in this case, it is the same as the after image). Null values are stored for key figures with totals aggregation. An update image is only required when the value from the update is smaller or larger than the before image for at least one key figure with minimum or maximum aggregation.
    The table RODELTAM determines which characteristic values a delta process uses (columns UPDM_NIM, UPDM_BIM, UPDM_AIM, UPDM_ADD, UPDM_DEL and UPDM_RIM). The table ensures that only useful combinations of the above values are used within a delta process.
    When extracting in 'delta' update mode, a Data Source that uses a delta process can deliver, in the record mode of the extracted records, only those characteristic values that are specified in the delta process.
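    As a rough SQL illustration of the before/after image mechanics described above (hypothetical values only; a key figure changes from 100 to 150 and the delta request is posted to an additive target):
    -- The before image carries the old value with reversed sign, the after image the new value.
    -- Summing the request gives the net change that an additive target receives.
    WITH delta_request AS (
      SELECT 'X' AS recordmode, -100 AS amount FROM dual UNION ALL  -- before image
      SELECT ' ' AS recordmode,  150 AS amount FROM dual            -- after image
    )
    SELECT SUM(amount) AS net_change FROM delta_request;  -- returns +50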
    Hope this helps.
    Bye
    Dinesh
    Message was edited by: Dinesh Lalchand

  • Delta records not updating from DSO to CUBE in BI 7

    Hi Experts,
    Delta records are not updating from the DSO to the cube:
    in the DSO the key figure value shows '0', but in the cube the same record shows '-1'.
    I checked the change log table of the DSO and it has 5 records:
    ODSR_4LKIX7QHZX0VQB9IDR9MVQ65M -  -1
    ODSR_4LKIX7QHZX0VQB9IDR9MVQ65M -   0
    ODSR_4LIF02ZV32F1M85DXHUCSH0DL -   0
    ODSR_4LIF02ZV32F1M85DXHUCSH0DL -   1
    ODSR_4LH8CXKUJPW2JDS0LC775N4MH -   0
    but active data table have one record - 0
    How do I correct the delta load?
    Regards,
    Jai

    Hi,
    I think initially the value was 0 (ODSR_4LH8CXKUJPW2JDS0LC775N4MH - 0, new image in changelog) and this got loaded to the cube.
    Then the value got changed to 1 (ODSR_4LIF02ZV32F1M85DXHUCSH0DL - 0, before image & ODSR_4LIF02ZV32F1M85DXHUCSH0DL - 1, after image). Now this record updates the cube with value 1. The cube has 2 records, one with 0 value and the other with 1.
    The value got changed again to 0 (ODSR_4LKIX7QHZX0VQB9IDR9MVQ65M - (-1), before image &
    ODSR_4LKIX7QHZX0VQB9IDR9MVQ65M - 0, after image). Now these records get aggregated and update the cube with (-1).
    The cube has 3 records, with values 0, 1 and -1; the effective total is 0, which is correct.
    Is this not what you see in the cube? Were the earlier requests deleted from the cube?
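    To make the arithmetic explicit, here is a small SQL sketch (table and column names are made up) that replays the five change log entries above and sums the key figure per request:
    WITH changelog AS (
      SELECT 'ODSR_4LH8CXKUJPW2JDS0LC775N4MH' AS request_id,  0 AS kf_delta FROM dual UNION ALL  -- new image
      SELECT 'ODSR_4LIF02ZV32F1M85DXHUCSH0DL' AS request_id,  0 AS kf_delta FROM dual UNION ALL  -- before image
      SELECT 'ODSR_4LIF02ZV32F1M85DXHUCSH0DL' AS request_id,  1 AS kf_delta FROM dual UNION ALL  -- after image
      SELECT 'ODSR_4LKIX7QHZX0VQB9IDR9MVQ65M' AS request_id, -1 AS kf_delta FROM dual UNION ALL  -- before image
      SELECT 'ODSR_4LKIX7QHZX0VQB9IDR9MVQ65M' AS request_id,  0 AS kf_delta FROM dual            -- after image
    )
    SELECT request_id, SUM(kf_delta) AS net_per_request
    FROM   changelog
    GROUP  BY request_id;  -- the three requests net to 0, +1 and -1; the grand total is 0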

  • Non additive aggregation -  custom defined aggregation possible in BW?

    I have the following problem:
    there is a key figure that is non-additive relative to one characteristic; e.g. we need the third minimum as an aggregation for a time characteristic (there are 250 values for that characteristic).
    Is there a way to create a user-defined (exception) aggregation (like variance or mean value) via ABAP coding?
    Message was edited by: Michael Walesch

    Does your database support analytic functions? Last and first are analytic functions. If your database does not support them, BI has to prepare selects with subqueries, and this could slow down the response time.
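    If the database does support analytic functions, the 'third minimum' asked about above can be expressed roughly as in the sketch below (table and column names are hypothetical):
    SELECT material, kf_value AS third_minimum
    FROM  (SELECT material,
                  kf_value,
                  DENSE_RANK() OVER (PARTITION BY material ORDER BY kf_value) AS rnk
           FROM   sales_facts)
    WHERE rnk = 3;  -- third-smallest distinct value per material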
    Begoña

  • Use of Aggregation tab of key figure

    Hello Gurus ,
    Please tell me the use of the Aggregation tab of a key figure, and what it affects.

    hi,
    check the link below for aggregation, with examples:
    http://help.sap.com/saphelp_nw04s/helpdata/en/47/607899dcce6834e10000000a421937/frameset.htm
    To enable the calculation of key figures, the data from the InfoProvider has to be aggregated to the detail level of the query and formulas may also need to be calculated. The system has to aggregate using multiple characteristics. In regard to a selected characteristic, the system can aggregate with another rule for each key figure (exception aggregation).
    During aggregation, the OLAP Engine in BI proceeds as follows:
           1.      First, standard aggregation is executed. Possible aggregation types include summation (SUM), minimum (MIN) and maximum (MAX). Minimum and maximum can, for example, be used for date key figures.
           2.      Aggregation using a selected characteristic occurs after standard aggregation (exception aggregation). The available exception aggregation types include average, counter, first value, last value, minimum, maximum, no aggregation, standard deviation, summation, and variance. Application cases for exception aggregation include warehouse stock, for example, that cannot be totaled over time, or counters that count the number of characteristic values for a certain characteristic
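    As a rough SQL analogue of the two steps above (names are hypothetical): warehouse stock is first summed per month across all other characteristics (standard aggregation = SUM), then averaged over the time characteristic (exception aggregation = AVG).
    SELECT AVG(month_total) AS avg_stock                     -- exception aggregation over time
    FROM  (SELECT calmonth, SUM(stock_qty) AS month_total    -- standard aggregation
           FROM   stock_facts
           GROUP  BY calmonth);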
    assign points if helpful
    regards
    vadlamudi

  • Aggregator Transformation without any condition showing zero records in target table

    Hi everyone, I have a source table which has 100 records, and I am passing all 100 records to an Aggregator transformation. In the Aggregator transformation I am not specifying any condition; I just mapped from the Aggregator directly to the target table. After running this mapping I found 0 records in the target table. Why is this happening? Can anyone explain?

    I have a source table with values like this: Magazinweg 7Taucherstraße 10Taucherstraße 10Av. Copacabana, 267Strada Provinciale 124Fauntleroy CircusAv. dos Lusíadas, 23Rua da Panificadora, 12Av. Inês de Castro, 414Avda. Azteca 123. I want to replace the characters and sum up the numbers; how can I do it? I can replace the characters with the reg_replace() function, but I am not able to add the numbers because they are not of fixed length. My output should be 71115705396.

  • Remove 0 records when Exporting TXN data from BPC

    BPC Experts,
    I am trying to export transaction data from one model in BPC. While exporting, I want to ignore the records whose aggregated value is zero for the dimension member combination.
    Where can I restrict or set the export so that only SIGNED DATA <> 0 is exported?

    Hi,
    From a performance point of view, optimization is required. As I mentioned in a previous post, lite optimization closes the open request, compresses without zero elimination, indexes the cube, and updates database statistics for the BW InfoCube.
    As per my understanding there are no such prerequisites, but to be on the safe side you can process all dimensions of the model.
    Also go through the links that are shared. If you need more details you can search the forum; there are a lot of threads on lite optimization.
    Problem removing zeros in a model - BPC 10 NW
    http://scn.sap.com/community/epm/planning-and-consolidation-for-netweaver/blog/2014/02/07/lite-optimize--a-little-guide-to-the-big-things-you-should-know
    Full Optimization and Light Optimization In Detail.
    BPC Light Optimize Configuration
    Lite Optimize - A little guide to the big things you should know
    BPC optimization performance
    Light optimize in BPC 10

  • After review of Aggregation vs OLAP Engine: clarification of a statement

    Hi,
    I read the following:
    "During aggregation, the OLAP Engine in BI proceeds as follows:
           1.      First, standard aggregation is executed. Possible aggregation types include summation (SUM), minimum (MIN) and maximum (MAX). Minimum and maximum can, for example, be used for date key figures.
           2.      Aggregation using a selected characteristic occurs after standard aggregation (exception aggregation). The available exception aggregation types include average, counter, first value, last value, minimum, maximum, no aggregation, standard deviation, summation, and variance.
    Application cases for exception aggregation include warehouse stock, for example, that cannot be totaled over time, or counters that count the number of characteristic values for a certain characteristic.
          3.      Lastly, aggregation using currencies and units is executed. A * is output when two numbers that are not equal to zero are aggregated with different currencies or units."
    i. Where is this taking place? Is it in the process of producing the report output in BEx Anlyzer or Web Analyzer? Or, is this discussion about when the cube is being loaded with data?
    ii. Is it an "if not step 1 then go to step 2" case? Or, do all three steps get executed each time?
    Can you give an example to walk me through a case where the "OLAP Engine in BI" goes through all three steps in order?
    iii. Are these steps above in regards to the "OLAP processor" applicable only in the case of "non-cumulative" key figure?
    I ask this because I read in another document that
    "... Before you can even think about interpreting the result of a query on an InfoCube with a non-cumulative key figure, you need to know the sequence of the aggregations..."  (referring to sequence 1, 2, 3 above)
    If applicable only in the case of "non-cumulative" key figure, then what happens in the case of cumulative key figures?
    Any example to clarify this?
    Thanks
    Edited by: AmandaBaah on Sep 25, 2009 10:19 PM
    Edited by: AmandaBaah on Sep 29, 2009 2:54 PM

    N/A

  • Aggregation of data from one cube to another

    Hello,
       I would like to copy the data of one cube to another cube, at an aggregated level like we do in BW.
       For instance, cube A has cost center, while cube B has to be the totals of all the cost centers from cube A.  And hence, cube B will not have a field cost center at all.
       Could someone help me on this topic please.
    Thanks and Best Regards,
    Rajkumar A

    Hi dear,
    it's enough to not put the cost center in your copy cube (the OLAP processor will do the rest when you execute a query)!
    Otherwise, if you want to have all the records PHYSICALLY AGGREGATED, you have to go with an ODS as destination (and here you can aggregate as you prefer on the basis of your key part !)...
    Or you can aggregate all the records in a single data package with a SUM operation into an internal table in your start routine (without the cost center, clearly !)...
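    As a rough relational sketch of that start-routine aggregation (names are made up): drop the cost center from the key and total the key figures over the remaining characteristics.
    SELECT company_code,
           fiscal_period,
           SUM(amount) AS amount
    FROM   cube_a_data
    GROUP  BY company_code, fiscal_period;  -- cost center intentionally omitted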
    Hope it helps!
    Bye,
    Roberto

  • Ni-Scope time per record

    I am just starting with Labview so this may be an easy question.
    I have a PXI-5105 digitizer and I would like to be able to set the time per record attribute.  However, when I try to do this with a property node I get the error message:
    The channel or repeated capability name is not allowed.
    Attribute: NISCOPE_ATTR_HORZ_TIME_PER_RECORD, Channel: 0
    I can set the min. sample rate and min. record length to adjust the time per record, but I am confused about why you can only set minimum values of the parameters.  Why not set the absolute sample rate and record length giving an absolute time per record?  Are these parameters just adjusted to the next higher, and available value?

    Hi ESD and welcome to the NI forums,
    The property you're trying to use is part of the legacy driver support and is now considered obsolete [1]. It remains part of the driver for legacy hardware, but a 5105 is new enough that it doesn't use this older API.
    The recommended way to configure horizontal timing is using the other properties you mentioned: minimum sample rate and minimum record length. The reason that you cannot specify an exact rate or length is because the board's clock cannot be set to any arbitrary frequency and so it will be rounded up to the nearest valid setting [2].
    However, if you still prefer to set record length according to time, then I suggest doing what you have been doing: specify a sample rate and record duration and then calculate the minimum record length.
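    For example (illustrative numbers only): if you request a minimum sample rate of 60 MS/s and want a 5 µs record, the minimum record length to configure would be 60e6 samples/s × 5e-6 s = 300 samples; as noted above, the driver may still coerce both values up to the nearest valid settings.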
    Please let me know if you'd like further clarification.
    [1] niScope_ConfigureAcquisition :: NI High-Speed Digitizers Help » Programming » Reference » NI-SCOPE Function Reference Help » Functions » IVI Compliance/Obsolete Functions
    [2] Coercions of Horizontal Parameters :: NI High-Speed Digitizers Help » Programming » Getting Started with NI-SCOPE » Coercions
    Joe Friedchicken
    NI VirtualBench Application Software

  • 0RECORDMODE: overwrite versus addition

    Good day
    Please assist. I have created an ODS (version 3.0) and 0RECORDMODE forms part of the communication structure. I need to load sales data for the same dealer and the same month, and the data should be added, not overwritten. How do I get the ODS to add the data rather than overwrite it?
    Thanks
    Cornelius

    Hi,
    The value of InfoObject 0RECORDMODE determines whether the update rule for key figures supports addition or overwrite procedures. This InfoObject is required for delta loads and is added by the system if a data source is delta-capable. It is added to an ODS during the creation process. Records are updated during the delta process using a variety of ROCANCEL/0RECORDMODE values, including N for a new record image, A for an additive image, and Y for an update record image used when ODS key figures are processed with a minimum or maximum aggregation. We will limit our discussion here to the four more commonly used values:
    X indicates the before image of a record,
    D deletes the record,
    R denotes a reverse image,
    and a blank character represents a record's after image.
    When delta requests are processed by the ODS, the ROCANCEL values assigned in R/3 and maintained by InfoObject 0RECORDMODE update the data target automatically in different ways, depending on whether the ODS update rules are configured for Addition or Overwrite mode.
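    As a plain-SQL sketch of the difference between the two key-figure update settings (table and column names are hypothetical):
    -- Overwrite mode: the new value replaces the stored value.
    UPDATE ods_active a
    SET    a.sales_amount = 500
    WHERE  a.dealer = 'D1' AND a.calmonth = '200601';
    -- Addition mode: the new value is added to the stored value.
    UPDATE ods_active a
    SET    a.sales_amount = a.sales_amount + 500
    WHERE  a.dealer = 'D1' AND a.calmonth = '200601';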

  • Choosing a PXIe controller for streaming 200 MBps

    Warning:  This is a long post with several questions.  My apologies in advance.
    I am a physics professor at a small liberal-arts college, and will be replacing a very old multi-channel analyzer for doing basic gamma-ray spectroscopy.  I would like to get a complete PXI system for maximum flexibility.  Hopefully this configuration could be used for a lot of other experiments such as pulsed NMR.  But the most demanding role of the equipment would be gamma-ray spectroscopy, so I'll focus on that.
    For this, I will need to be measuring either the maximum height of an electrical pulse, or (more often) the integrated voltage of the pulse.  Pulses are typically 500 ns wide (at half maximum), and between roughly 2-200 mV without a preamp and up to 10V after the preamp.  With the PXI-5122 I don't think I'll need a preamp (better timing information and simpler pedagogy).  A 100 MHz sampling rate would give me at least 50 samples over the main portion of the peak, and about 300 samples over the entire range of integration.  This should be plenty if not a bit of overkill.
    My main questions are related to finding a long-term solution, and keeping up with the high data rate.  I'm mostly convinced that I want the NI PXIe-5122 digitizer board, and the cheapest (8-slot) PXIe chassis.  But I don't know what controller to use, or software environment (LabView / LabWindows / homebrew C++).  This system will likely run about $15,000, which is more than my department's yearly budget.  I have special funds to accomplish this now, but I want to minimize any future expenses in maintenance and updates.
    The pulses to be measured arrive at random intervals, so performance will be best when I can still measure the heights or areas of pulses arriving in short succession.  Obviously if two pulses overlap, I have to get clever and probably ignore them both.  But I want to minimize dead time - the time after one pulse arrives that I become receptive to the next one.  Dead times of less than 2 or 3 microseconds would be nice.
    I can imagine two general approaches.  One is to trigger on a pulse and have about a 3 us (or longer) readout window.  There could be a little bit of pileup inspection to tell if I happen to be seeing the beginning of a second pulse after the one responsible for the trigger.  Then I probably have to wait for some kind of re-arming time of the digitizer before it's ready to trigger on another pulse.  Hopefully this time is short, 1 or 2 us.  Is it?  I don't see this in the spec sheet unless it's equivalent to minimum holdoff (2 us).  For experiments with low rates of pulses, this seems like the easiest approach.
    The other possibility is to stream data to the host computer, and somehow process the data as it rolls in.  For high rate experiments, this would be a better mode of operation if the computer can keep up.  For several minutes of continuous data collection, I cannot rely on buffering the entire sample in memory.  I could stream to a RAID, but it's too expensive and I want to get feedback in real time as pulses are collected.
    With this in mind, what would you recommend for a controller?  The three choices that seem most reasonable to me are getting an embedded controller running Windows (or Linux?), an embedded controller running Labview real-time OS, or a fast interface card like the PCIe8371 and a powerful desktop PC.  If all options are workable, which one would give me the lowest cost of upgrades over the next decade or so?  I like the idea of a real-time embedded controller because I believe any run-of-the-mill desktop PC (whatever IT gives us) could connect and run the user interface including data display and higher-level analysis.  Is that correct?  But I am unsure of the life-span of an embedded controller, and am a little wary of the increased cost and need for periodic updates.  How are real-time OS upgrades handled?  Are they necessary?  Real-time sounds nice and all that, but in reality I do not need to process the data stream in a real-time environment.  It's just the computer and the digitizer board (not a control system), and both should buffer data very nicely.  Is there a raw performance difference between the two OSes available for embedded controllers?
    As for live processing of the streaming data, is this even possible?  I'm not thinking very precisely about this (would really have to just try and find out), but it seems like it could possibly work on a 2 GHz dual-core system.  It would have to handle 200 MBps, but the data processing is extremely simple.  For example one thread could mark the beginnings and ends of pulses, and do simple pile-up inspection.  Another thread could integrate the pulses (no curve fitting or interpolation necessary, just simple addition) and store results in a table or list.  Naively, I'd have not quite 20 clock cycles per sample.  It would be tight.  Maybe just getting the data into the CPU cache is prohibitively slow.  I'm not really even knowledgeable enough to make a reasonable guess.  If it were possible, I would imagine that I would need to code it in LabWindows CVI and not LabView.  That's not a big problem, but does anyone else have a good read on this?  I have experience with C/C++, and some with LabView, but not LabWindows (yet).
    What are my options if this system doesn't work out?  The return policy is somewhat unfriendly, as 30 days may pass quickly as I struggle with the system while teaching full time.  I'll have some student help and eventually a few long days over the summer.  An alternative system could be built around XIA's Pixie-4 digitizer, which should mostly just work out of the box.  I somewhat prefer the NI PXI-5122 solution because it's cheaper, performs better, is much more flexible, and suffers less from vendor lock-in.  XIA's software is proprietary and very costly.  If support ends or XIA gets bought out, I could be left with yet another legacy system.  Bad.
    The Pixie-4 does the peak detection and integration in hardware (FPGAs I think) so computing requirements are minimal.  But again I prefer the flexibility of the NI digitizers.  I would, however, be very interested if data from something as fast as the 5122 could be streamed into an FPGA-based DSP module.  I haven't been able to find such a module yet.  Any suggestions?
    Otherwise, am I on the right track in general on this kind of system, or badly mistaken about some issue?  Just want some reassurance before taking the plunge.

    drnikitin,
    The reason you did not find the spec for the rearm time for the 5133 is because the USB-5133 is not capable of multi-record acquisition.  The rearm time is a spec for the reference trigger, and that trigger is used when fetching the next record.  So every time you want to do another fetch you will have to stop and restart your task.
    To grab a lot of data, increase your minimum record size.  Keep in mind that you have 4 MB of onboard memory per channel.
    Since you will only be able to fetch 1 record at a time, there really isn't a way to use streaming.  When you call fetch, it will transfer the amount of data you specify to PC memory through the USB port (up to 12 MB/s for USB 2.0, ideally).
    Topher C,
    We do have a digitizer that has onboard signal processing (OSP), which would be quicker than performing post-processing.  It is the NI 5142 and can perform various signal processing functions; it is essentially a 5122 but with built-in OSP.  It may be a little out of your price range, but it may be worth a look.
    For more information on streaming take a look at these two links (if you haven't already):
    High-Speed Data Streaming: Programming and Benchmarks
    Streaming Options for PXI Express
    When dealing with different LabVIEW versions it is important to note that previous versions will be compatible with new versions, such as going from 8.0 to 8.5.  Keep in mind that if you go too far back then LabVIEW may complain, but you still may be able to run your VI.  If you have a newer version going to an older version, then we do have options in LabVIEW to save your VI for older versions.  It's usually just 1 version back, but in LabVIEW 8.5 you can save for LabVIEW 8.2 and 8.0.
    ESD,
    Here is the link I was referring to earlier about DMA transfers.  DMA is actually done every time you call a fetch or read function in LabVIEW or CVI (through NI-SCOPE).
    Topher C and ESD,
    LabVIEW is a combination of a compiled language and an interpreted language.  Whenever you make a change to the block diagram, LabVIEW compiles itself.  This way when you hit run, it is ready to execute.  During execution LabVIEW uses the run-time engine to reference shared libraries (such as DLLs).  Take a look at this DevZone article about how LabVIEW compiles its block diagram (user code).
    I hope all of this information helps!
    Ryan N
    National Instruments
    Application Engineer
    ni.com/support
