Performance and data types: which to use?

Hi All,
I am wondering which data types to use and what effect they have on memory/speed.
1. What is the difference (if any) between using SGL, DBL, INT, etc.? Looking at the LabVIEW help, there seems to be a range of 8-256 bits of storage according to the data type. Is it basically a matter of choosing the one with the smallest storage that can fit the data?
2. I currently have a cluster flowing through subVIs. The cluster contains the start time (or frequency), the delta t (or f), and the array of data (about 500-5000 elements). I tried to use the waveform data type, but it couldn't handle a delta t of 2 nanoseconds (a 500 MHz signal). Am I OK using the cluster, or should I separate the components and pass them along? What data type should I use for each of the components?
Thanks

There are three main issues to consider.
Range and accuracy. If you need a very high level of accuracy, you will need to use the extended data type, or even create your own, although that's unlikely.
Memory. Yes, SGL takes less space than DBL, but unless you're dealing with really huge amounts of data, this won't matter.
Coercion. Most built-in functions work on DBL. If you wire a SGL into them, they will coerce it, possibly creating a copy of the data and increasing your memory usage.
To sum it up, most of the time it is best to use the default DBL. It's highly unlikely you'll need one of the others.
As for your second question, it sounds to me like the data is a single organism, so I would say you should leave it in the cluster, but that really depends on whether the functions need all of it and on whether you're constantly bundling and unbundling the cluster. Note that 5000 elements is far from being a large array, and you shouldn't have any problems handling it.
As for the timing unit, if you really only have 5000 elements (that's 10 microseconds of data at 2 ns per point), you should have no problem using a U32 with a nanosecond as the base unit; 2^32 nanoseconds lets you measure a bit more than 4 seconds.
Try to take over the world!

Similar Messages

  • Which tables store the domains and data types?

    Hi.
    From what I know, DD01L stores all the domains (both SAP and user-created) in the system. Is this understanding correct?
    As for data types, which table stores them?
    Thanks.

Hi,
All ABAP programs are stored in the tables TADIR and TRDIR, and the source code of Z reports is stored in REPOSRC.
TVDIR is a system table maintained by SAP in which all views are registered: it is a repository of views.
The dictionary objects themselves are stored in the DD* tables:
DD01L - Domains
DD02L - Tables
DD03L - Fields
DD04L - Data elements
DD06L - Pool/cluster structures
DD07L - R/3 DD: values for the domains
DD08L - R/3 DD: relationship definitions
DD09L - DD: Technical settings of tables
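    To see where this data lives, you can query the dictionary tables directly. A minimal sketch in plain SQL, assuming direct database access (from inside the system you would normally browse with SE16 or use an ABAP SELECT instead); 'MARA' is just an illustrative table name:
    SELECT fieldname, rollname, domname, datatype, leng
      FROM dd03l
     WHERE tabname = 'MARA'
       AND as4local = 'A'  -- active versions only
     ORDER BY position;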

  • How to list column names and data types for a given table using SQL

I remember that it is possible to use a SELECT statement to list the column names and data types of database tables, but I forgot how it's done. Please help.

    You can select what you need from DBA_TAB_COLUMNS (or ALL_TAB_COLUMNS or USER_TAB_COLUMNS).
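    For example, a minimal sketch (the SCOTT/EMP names are just illustrative; note that unquoted identifiers are stored in upper case):
    SELECT column_name, data_type, data_length, data_precision, data_scale, nullable
      FROM all_tab_columns
     WHERE owner = 'SCOTT'
       AND table_name = 'EMP'
     ORDER BY column_id;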

  • Data type to be used

    Hi All,
I have a requirement where I have to fetch multiple rows of a table and then concatenate the data of a particular field of all the rows into a single variable.
There is no restriction on the number of rows to be fetched, so I cannot plan the size of that single variable in advance.
Can anyone suggest a data type which can take variable lengths at run time? The size can also go beyond 255.
    Please help.
    Thanks,
    Chandna

    Hello Chandna,
You can go with the following workaround:
DATA:
  w_result TYPE string.
LOOP AT t_table INTO fs_table.
  " Append the current field value to what has been collected so far
  CONCATENATE w_result fs_table-field1 INTO w_result.
ENDLOOP.
Every time the loop runs, the current value is appended to the previous value, so at the end you'll get all the concatenated values in w_result. Since STRING is variable-length, there is no 255-character limit.
Hope it helps you.
    Thanks: Zahack

  • How to create longtext or blob data types in SQL using labview

    Hello,
I am fairly new to SQL, and I'm using the LabVIEW Database Connectivity Toolset with LabVIEW 6.1.
I am using the DB Tools Create Table VI to create my tables. I want the tables to hold 1000-character strings, but the longest string that I can insert seems to be 255 characters. I think it is a limitation of the "String" data type in SQL, so I need to use text or blob types. The problem is that I created a control for the "Column Information" field and I see the following selections for the data type: String, Long, Single, Double, date/time, binary. I don't see any selection for text or blobs. How do I define another data type that is not part of the selection in the control?
    Thanks for any help.

    I don't know about defining long text, but the equivalent of a BLOB should be the binary data type, which just holds the data you put into it without formatting it.
    Try to take over the world!

  • Regarding Changing the Namespace And Data Type in XSD File

    Hi All,
I am doing a File-to-IDoc interface.
I have an XSD file for the file system and an IDoc for ECC.
The XSD file has a different namespace and data type, so I created a data type the same as the one in the XSD file.
Since the namespaces are different, I changed the namespace in the XSD file to the new namespace which I created in the IR.
I changed it in two places, like this:
<xsd:schema targetNamespace="http://Sample1.com/xi/file;" xmlns="http://Sample1.com/xi/file;"
But it is giving an error like this:
Cannot load schema with the target namespace http://xxx.com/xi/xx/vamsi/100 to namespace http://Sample1.com/xi/file;
    Regards
    Vamsi

    Hi Vasanth,
That's what I am asking.
I want to import the XSD file into a data type.
Before I import the XSD file into the data type, I changed the namespace in the XSD to my namespace in the IR, and I created the data type in the IR with the same name as in the XSD file.
I am still getting this error.
Please let me know what to do.
    Regards
    Vamsi

  • How to invoke functions and data type( ex. sflight structure ) that is in t

    Hi,experts,
I created a function (zz_test) that returns SFLIGHT table type data, and I need to invoke it from a Web Dynpro for Java application.
I think I need to import the function locally, but I don't know how to import the function and its data types using SAP NetWeaver Developer Studio, or how to invoke functions and data types (e.g. the SFLIGHT structure) that live in R/3 from Web Dynpro for Java.
Can you give me some hints?
    Thanks a lot!
    Best regards,
    tao

    Hi Wang Tao,
To achieve this you need to use Adaptive RFC models.
This link explains the same:
http://help.sap.com/saphelp_nw70/helpdata/EN/6a/11f1f29526944e8580c5e59333d96d/frameset.htm
    Thanks
    Namrata

  • [svn] 3571: Update SWFLoader ASDoc comment to remove &emdash and data type declaration .

    Revision: 3571
    Author: [email protected]
    Date: 2008-10-10 11:07:09 -0700 (Fri, 10 Oct 2008)
    Log Message:
    Update SWFLoader ASDoc comment to remove &emdash and data type declaration. ASDoc adds that automatically.
    Doc the new types in IndexChangedEvent.as
    Checkin Test Passed: Yes
    QA: No
    Bug:
    Doc: No
    Modified Paths:
    flex/sdk/trunk/frameworks/projects/framework/src/mx/controls/SWFLoader.as
    flex/sdk/trunk/frameworks/projects/framework/src/mx/events/IndexChangedEvent.as

lunke, you should change svn update http://svn.foo-projects.org/svn/xfce/modules/trunk to svn up $startdir/src/trunk.
Here's a PKGBUILD for Thunar:
pkgname=thunar
pkgver=0.0.2.r17470
pkgdesc="Thunar is a file manager designed for Xfce. It is currently under development."
url="http://thunar.xfce.org/wiki/"
depends=('exo-svn')
makedepends=('subversion')
source=()
md5sums=()
build() {
  if [ ! -d $startdir/src/thunar ]; then
    echo "Fetching sources..."
    svn checkout http://svn.foo-projects.org/svn/xfce/thunar/trunk/ $startdir/src/thunar
  else
    echo "Updating sources..."
    svn up $startdir/src/thunar/
  fi
  cd $startdir/src/thunar
  ./autogen.sh --prefix=/opt/xfce4-svn
  make || return 1
  make DESTDIR=$startdir/pkg install
  find $startdir/pkg -name '*.la' -exec rm {} \;
}
You will need exo from svn as well:
pkgname=exo-svn
pkgver=r17470
pkgdesc="Extensions to Xfce by os-cillation"
url="http://libexo.os-cillation.com/"
conflicts=(exo)
provides=(exo)
depends=('xfce4-svn')
makedepends=('subversion')
source=()
md5sums=()
build() {
  if [ ! -d $startdir/src/trunk ]; then
    echo "Fetching sources..."
    svn checkout http://svn.foo-projects.org/svn/xfce/libexo/trunk/ $startdir/src/trunk
  else
    echo "Updating sources..."
    svn up $startdir/src/trunk
  fi
  cd $startdir/src/trunk
  ./autogen.sh --prefix=/opt/xfce4-svn
  make || return 1
  make DESTDIR=$startdir/pkg install
  find $startdir/pkg -name '*.la' -exec rm {} \;
}
"Operation libtool-slay" compliant

  • Cardinality between Message Type and Data Type

    Hi SAP gurus,
    Could anyone please tell me the cardinality between the Message Type and Data Type?
    Thanks,
    Adnan Abbasi

    1:1
    Sameer

  • Variant Data and Data (Type)

    I'm on my way to build a XML parser for my program.
How do you make a Variant display its Data and Data (Type) like the one in my screenshot?
I don't want my data type to be a string; I want it to be the same as my cluster.
Is there a way to edit the data and data type for a variant control and return the same data for its indicator?
    Thanks!
    Attachments:
    aaa.jpg ‏23 KB
    BBB.jpg ‏32 KB
    CCC.jpg ‏19 KB

Can you demonstrate how exactly you're editing the Variant data type as a string?
Below is an example where I converted a cluster (the error data type) into a variant and also displayed it on the front panel.
If you want to edit the original cluster data by editing the string displayed in the variant indicator, that is not a good idea. Ideally you should convert the variant back to its original data type, do the modification there, and then convert it back to a variant.
    saintalan94 wrote:
    the VI at the link you provided have a password, I cant even look into the VI.
Those VIs are provided as-is by their developer, and even I don't have the password.
    I am not allergic to Kudos, in fact I love Kudos.
     Make your LabVIEW experience more CONVENIENT.

  • What data type can I use if the text is more than 4000 characters?

    Dear all,
What data type can I use if the text is more than 4000 characters?
Please advise,
    Amy

You didn't specify whether you are referring to tables or code.
In tables the limit is VARCHAR2(4000), so anything bigger will need to be stored in a CLOB (as already mentioned).
In PL/SQL code you can define a VARCHAR2 of up to 32K, i.e. VARCHAR2(32767).
    ;)
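    A minimal sketch of both cases (the table and column names are just illustrative):
    -- table column: use CLOB once the text can exceed 4000 bytes
    CREATE TABLE notes (
      id   NUMBER PRIMARY KEY,
      body CLOB
    );
    -- PL/SQL: a local VARCHAR2 can hold up to 32767 bytes
    DECLARE
      l_text VARCHAR2(32767);
    BEGIN
      SELECT body INTO l_text FROM notes WHERE id = 1;  -- fine while the text fits in 32K
    END;
    /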

  • Which Data-Types can be used as Widget Parameters?

    Hi,
    I have been playing about with widgets (specifically widget parameters) for a while now.
I got to wondering: exactly which data types can Captivate turn into widget parameters?
    So far I know that you can use:
    Numbers,
    Strings,
    Arrays,
    Objects
    and Booleans,
    as parameters.
    Which ones am I missing?
    I know some don't work, because I tried storing a MovieClip as a parameter and that didn't work.

    Hi Eccles,
You can use all the basic data types supported in Flash, viz.:
    1)Numbers
    2)Strings
    3)Arrays
    4)Objects
Since in ActionScript an Object can be anything (an array, an array of objects, an array of objects which are themselves arrays of objects, and so on), you can virtually send anything as widget params.
But there are two caveats to this:
1) Object references
Object references do not have any meaning once the SWF is closed, but widget params have to be stored across sessions. So if you send a reference as part of the widget params, it is not going to work.
This is why sending a MovieClip (which is actually a reference to an object) does not work.
If you want to send such things you will have to 'serialize' the object.
2) Size of the object that you send
The size of the object that you send can have an impact on performance. Since any object that you send has to be converted into XML, objects like bitmaps can turn out to be huge and difficult to handle.
This brings us down to this: you can send any object across as a widget param as long as it is small and serialized.

  • Checking table columns and data type before inserting

I have data coming from different sources and want to insert the data from those files into multiple tables.
Before inserting the data, I'd like to perform checks on data type, length, nullability, etc., so that I can avoid errors at insert time. If there is any problem with the data, I do not want to perform the insert, and I want to report the problems.
    Thanks

If you have 10gR2 (10.2.0.4) you could use DML error logging.
Read about it here, and see the examples: http://tkyte.blogspot.com/2005/07/how-cool-is-this.html
In short: it avoids errors during your transactions, and afterwards you know which records failed and why.
It's more or less the same functionality:
your goal: check before the transaction and avoid inserting 'bad' records (extra code, extra maintenance, more chance of bugs);
dml error logging: automatically insert 'bad' records during the transaction into a dedicated error table, including the error message.
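    A minimal sketch (the table names are just illustrative):
    -- one-time setup: creates an error table named ERR$_DEMO_TGT by default
    BEGIN
      DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'DEMO_TGT');
    END;
    /
    -- the load: bad rows go to the error table instead of failing the statement
    INSERT INTO demo_tgt
    SELECT * FROM demo_src
    LOG ERRORS INTO err$_demo_tgt ('load_1') REJECT LIMIT UNLIMITED;
    -- afterwards: see which rows failed and why
    SELECT ora_err_number$, ora_err_mesg$, ora_err_tag$ FROM err$_demo_tgt;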
    Edited by: hoek on Mar 24, 2009 6:56 PM

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

What are the query performance issues we need to take care of? Please explain and let me know the T-codes. It's urgent.
What are the data-loading performance issues we need to take care of? Please explain and let me know the T-codes.
Will reward full points.
Regards,
Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
9)Build secondary indexes on the tables for the selection fields; this optimizes those tables for reading and reduces extraction time. If your selection fields are not key fields on the table, primary indexes are not much help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table, using the ABAP Dictionary, to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • What data type do I use in MSA to handle CRM DEC data?

    Dear Geeks,
We have a customer field 'ZZ_SUPPORT' on CRM (4.0 SP11) opportunities, which we flow down to Mobile Sales. It previously carried a NUMC2 value (a number from 1 to 99), and this worked fine: the data flowed correctly in both directions. The data element ZZ_SUPPORT used in CRM is based on a newly created domain called Z1_TO_99, which was defined as data type NUMC (2 characters) and which we have changed to DEC (4 characters, 2 decimal places, and screen display 5). Having made this change and also changed the screen layout in SE51, the field now looks good and takes a value like '12.34' without error. We can also see the '12.34' value being carried in the Opp_Write BDoc in the SMW01 classic segment. However, the BDoc does not migrate successfully to Mobile: the ZZ_SUPPORT value is lost. In Mobile, I have generated the tables associated with Opportunity_write, and also the BDoc, and I have also changed the BOOPPORTUNITY / Y_Support field (which carries the value) from NUMC2 to String5. I have also tried Long5 and Currency4, because there is no data type "DEC" available in Mobile. When I send opportunity messages up from Mobile to CRM, I see them going into SMQ2 and then they disappear, because the function module cannot handle the BDoc. What is the problem? Is it the function module, or the data type used on Mobile? If so, what should I use in the absence of "DEC"?
    Richard

    Sounds like a strange issue indeed. I would call Johnny Nobles.
    Tx: Smoggen ?
    Sorry, can't come up with better ideas.
    Cheers,
    Seb.
