Tips for dealing with large channel counts on cRIO

Hello, I have a very simple application that reads an analog input from a thermistor using an AI module (9205) and, based on the value of the input, sends out a true/false signal using a digital output module (9477). Each cRIO chassis will have close to 128 channels, with the code being exactly the same for each channel.
I wonder if anyone has any tips for how I can do this without copying and pasting each section of code 128 times. Obviously that would be a nightmare if the code ever had to be changed. I'm sure there is a way to make a function or a class, but being new to graphical programming I can't think of a good way to do this. I looked for a way to dynamically select a channel but can't seem to find anything; if I could select the channel dynamically, I'm guessing I could create a subVI and do it that way. Any tips or help would be greatly appreciated.

There isn't a way to dynamically choose a channel at runtime. For the VI to compile successfully, the compiler must be able to determine at compile time which channel each I/O Node reads or writes. However, that doesn't mean you can't write a reusable subVI. If you right-click the FPGA I/O In terminal of the I/O Node and create a constant or control, you should be able to reuse the same logic for all of your channels. The attached screenshot illustrates the basics of what this might look like. If you right-click the I/O control/constant and select "Configure I/O Type...", you can configure the interface an I/O item must support in order to be selectable from the control. While this helps single-source some of the logic, you will still eventually need 128 I/O constants somewhere in your FPGA VI hierarchy.
I should also mention that if each channel read from the 9205 sits in a separate subVI or I/O Node, you will incur some execution-time overhead due to the scanning nature of the module. Since you're reading temperature signals, the additional execution time may not matter much to you. If it does, look at the IO Sample Method; you can find more information and examples on how to use it in the LabVIEW Help. The IO Sample Method does let you choose a channel dynamically at runtime and is generally more efficient for high channel counts, but it's also considerably more complicated to use than the I/O Node.
You also mentioned concerns about the size of arrays and the performance implications of using a single For Loop to iterate across your data set. That's the classic design trade-off when dealing with FPGAs. If you want to do as much in parallel as possible, you'll need to store all 128 data points from the 9205 modules at once, process the data in parallel using 128 instances of the same circuit, and then output a digital value based on each result. If you're using fixed-point data types, that's 128 x 26 bits (3,328 bits) just for the I/O data from the 9205. While this yields the fastest execution times, the resulting VI may be too large to fit on your target. Conversely, you could use the IO Sample Method to read each channel one at a time, process the data using a single instance of the same circuit, and then output a digital value. This strategy uses the least logic on the FPGA but also takes the longest to execute. Of course, there are all sorts of options in between these two extremes. Without knowing more about your requirements, it's hard to advise which end of the spectrum to shoot for. Anyway, hopefully this gives you some ideas on where to get started.
Attachments:
IO Constant.JPG 31 KB

Similar Messages

  • Strategies for dealing with large forms

    I've created a dynamic form that is about 30 pages long. Although performance in Reader is OK, when editing, every change takes about 10 seconds to be digested by Designer. I've tried editing parts and then copying them across, but formatting information seems to be lost when pasting.
    What are the practical limits to how long a form can be, and are there any suggestions for how to deal with longer forms?
    Many thanks
    Alex

    Performance may be okay in Reader 8, but a 30-page dynamic form will probably be unusable in Reader 7. I have done work with very large forms before, and I have maintained each page in a separate XDP and then brought them together once all the kinks were worked out. I don't know why you would be losing formatting information.
    There is a technique I have used where dynamic subforms show only one page of the form at a time. That approach allows a very large form to be used in version 7 with acceptable response time. However, it's a very complex approach, and I have come to think that it's too complex for comfort. The more complex your approach is, the more likely it is to fail in a future version of Reader.

  • Question: Best Strategy for Dealing with Large Files > 1GB

    Hi Everyone,
    I have to build a UCM system for large files > 1GB.
    What will be the best way to upload them (applet, checkin form, webdav)?
    Also what will be the best way to download them (applet, web form, webdav)?
    Any tips will be greatly appreciated
    Tal.

    Not sure what the official best practice is, but I prefer to get the file onto the server's file system first (file copy) and check it in from that path. This would require a customization / calling a custom service.
    Boris
    Edited by: user8760096 on Sep 3, 2009 4:01 AM

  • Tip for dealing with duplicates

    Ok we all know it's a big pain in the *** when you get duplicate songs and need to get rid of the duplicates. Especially when you have a LARGE collection like I do.
    iTunes clearly has some big shortcomings when detecting and working with duplicates. A lot of the time it'll find stuff that isn't actually a duplicate.
    I have some songs that are named the same even though they're not the same song. (Remixes, or legitimate duplicates from a "best of..." album or something.)
    So I can't just "show duplicates" and delete every other entry indiscriminately. I need a way to go through the list at my leisure (possibly in multiple sittings over several days) and review all the stuff before I delete it.
    Here's a method that makes things easier.
    1. Make sure that all your songs have a star rating of at least 1 star. (I default all new songs in my library to 3 stars when importing to iTunes and then raise or lower the stars from my iPod when I listen to them.) If you haven't kept track of your song ratings yet, give them all a 3-star rating now. The star rating will be used later to select your files to delete.
    2. Select your music folder and then select "View/Show Duplicates"
    3. From the resulting list, select all songs (Ctrl-A) and then click "File/New Playlist from Selection". Name your new playlist "Suspect dupes" or something.
    4. Configure your playlist view like this:
    * Cover flow on and "View Artwork" on (Selected Item). (You don't want to delete the song with the cover and keep a version with no cover.)
    * Make sure you have the following columns visible: Sample rate, Bit rate, Time, Size, Disc #, Track #, Artist, Album, and My Rating. You'll need these to compare the songs.
    We now have a playlist which we can refer back to at any time.
    Now... if you were to delete files here in this playlist view, all you'd be doing is deleting them from the playlist, not from your library, which is where we actually want to delete them from.
    So...
    5. Go back to your library view (Making sure that you've clicked the "Show All" button at the bottom of the window so you're not just seeing the duplicates any more.)
    6. Select all the songs in your library (Ctrl-A), then right-click the songs and select "Uncheck selection", which will uncheck every song you have.
    7. Go back to your "Suspect dupes" playlist. Everything will be unchecked now. Anything you check while viewing this "Suspect Dupes" playlist will also be checked when you go back to viewing your main library.
    8. Here's the tedious part. Go through your "Suspect Dupes" playlist, double-clicking and comparing each potential duplicate. When you find a genuine duplicate that you want to get rid of, click the box to check the song. Your selections will stay checked even if you close iTunes and come back to it another day! (I've been working on my collection for a week now.)
    9. When you've finished checking all the songs you want to get rid of, go back to the top of the "Suspect Dupes" playlist and scroll through it, Ctrl-clicking each checked song to highlight the ones you want to get rid of.
    10. Once they're all highlighted, select "File/New Playlist From Selection" and name the resulting playlist "Delete me" or something. Highlight all the songs in the "Delete me" playlist, then right-click the songs and select "My Rating/None".
    (We do this so that we have a way to recognize these songs when viewing the main library list, which is where we can actually delete the files to the Recycle Bin from.)
    11. Go back to your library view and sort your list by rating. Highlight all the songs with a rating of None and then right-click to delete them.
    12. They're gone, and you're done!
    13. As a preventative measure you could consolidate your library (Advanced/Consolidate Library) and then give all your MP3s a tag in the "Comments" field of something unique, like, say, "Permanent_File". That way, if iTunes should somehow re-add files from somewhere else on your computer again and create more duplicates, you have a way of easily sorting and distinguishing which are the good files and which are the ones you need to delete (anything without your "Permanent_File" tag).
    Apple really needs to refine the duplicate detection and make it easier to get rid of them.
    * They should make it so that iTunes checks ALL tag info in addition to the song's title when compiling its list of duplicates.
    * There should be a "Select all checked files" option (so as to avoid the whole star-rating workaround mentioned above).
    * There should also be an option to remove the files from your library when deleting songs from a playlist view. (e.g. a prompt: "Delete from this Playlist or Delete from your Library/iPod?")
    Custom PC Windows XP Pro Got myself a refund on Vista. Getting a Mac!

    My pet peeve is that when you have a studio recording and a live recording of the same song by the same artist, iTunes considers them to be duplicates.
    Yeah, that's exactly what I'm saying.
    The best way to bring your suggestions to Apple's attention is by submitting the form on this Support page.
    I took your suggestion and referred them to this page. Maybe they'll listen. (Heck, I managed to convince the Google Earth team to move the on-screen controls to the upper right corner from the bottom center to accommodate dual-monitor users.)
    Custom PC Windows XP Pro Got myself a refund on Vista. Getting a Mac!

  • What's the best strategy for dealing with 40+ hours of footage

    We have been editing a documentary with 45+ hours of footage and presently have captured roughly 230 GB. Needless to say, it's a lot of files. What's the best strategy for dealing with so much captured footage? It's almost impossible to remember it all, and labeling it while logging seems inadequate, as it's difficult to actually read comments in dozens and dozens of folders.
    Just looking for suggestions on how to deal with this problem for this and future projects.
    G5 Dual Core 2.3   Mac OS X (10.4.6)   2.5 g ram, 2 internal sata 2 250gb

    Ditto, ditto, ditto on all of the previous posts. I've done four long form documentaries.
    First I listen to all the sound bites and digitize only the ones that I think I will need. I will take in much more than I use, but I like to transcribe bites from the non-linear timeline. It's easier for me.
    I had so many interviews in the last doc that I gave each interviewee a bin. You must decide how you want to organize the sound bites: do you want a bin for each interviewee, or do you want to do it by subject? That will depend on your documentary and subject matter.
    I then have b-roll bins. Sometimes I base them on location and sometimes on subject matter. This last time I based them on location, because I would have a good idea of what was in each bin by remembering where and when it was shot.
    Perhaps you weren't at the shoot and don't have this advantage. It's crucial that you organize your b-roll bins in a way that makes sense to you.
    I then have music bins and bins for my voice over.
    Many folks recommend that you work in small sequences and nest. This is a good idea for long form stuff. That way you don't get lost in the timeline.
    I also make a "used" bin. Once I've used a shot, I pull it out of its bin and put it "away." That keeps me from repeatedly looking at footage I've already used.
    The previous posts are right: if you've digitized 45 hours of footage, you've put in too much. It's time to start deleting some media. Remember that when you hit the edit suite, you should be on the downhill slide. You should have a script and a clear idea of where you're going.
    I don't have enough fingers to count the number of times producers have walked into my edit suite with a bunch of raw tape and told me they "want to make something cool." They generally have no idea where they're going and end up wondering why the process is so hard.
    Refine your story and base your clip selections on that story.
    Good luck
    Dual 2 GHz Power Mac G5   Mac OS X (10.4.8)  

  • Premiere Pro 2.0 slows when dealing with larger video files

    I'm having issues with Premiere Pro 2.0 slowing to a crawl and taking 60-90 seconds to come back to life when dealing with larger .avi's (8+ minutes). When I try to play a clip on the timeline, drag the slider over said clip, or play from a title into said clip on the timeline, Premiere hangs. The clips in question are all rendered, and the peak file has been generated for each clip as well. This is a new problem; the last time I was working with a larger clip (45+ minutes, captured from a Hi-8 cam), I had no problems. Now I experience this slowdown with all longer clips, although I've only dealt with footage captured from a Hi-8 cam and a mini-DV cam. This problem has made Premiere nearly unusable. I'm desperate at this point.
    System:
    CPU: P4 HT 2.4ghz
    Ram: 2x 1gb DDR
    Video: ATI Radeon 9000 Series
    Scratch Disk: 250gb WD My Book - USB 2.0 (I suspect this might be part of the problem)
    OS: XP Pro SP2
    I'm not on my machine right now, and I can definitely provide more information if needed.
    Thanks in advance.

    Aside from some other issues, I found that USB was just not suited for editing to/from, even on a much faster machine than the one you list.
    FW-400 was only slightly better. It took FW-800 before I could actually use the externals for anything more than storage, i.e. no editing, just archiving.
    eSATA would be even better/faster.
    Please see Harm's ARTICLES on hardware, before you begin investing.
    Good luck,
    Hunt
    [Edit] Oops, I see that Harm DID link to his articles. Missed that. Still, it is worth mentioning again.
    Also, as an aside, PrPro 2.0 has no problem on my workstation when working with several 2 hour DV-AVI's, even when these are edited to/from FW-800 externals.
    Message was edited by: the_wine_snob - [Edit]

  • Can ui:table deal with large table?

    I have found that h:dataTable can do pagination because its data source is just a DataModel, but ui:table's data source is a data provider, which looks rather complex and confusing.
    I have a large table and I want to load the data on demand, so I tried to implement a provider. But I soon found that ui:table may always load all the data from the provider.
    In TableRowGroup.java there is a lot of code such as:
    provider.getRowKeys(rowCount, null);
    Passing null makes the provider load all the data.
    So ui:table can NOT deal with large tables!?
    thx.
    fan

    But ui:table just uses the TableDataProvider interface. TableDataProvider is a wrapper for the CachedRowSet.
    There are two layers between the ui:table component and the database table: the RowSet layer and the Data Provider layer. The RowSet layer makes the connection to the database, executes the queries, and manages the result set. The Data Provider layer provides a common interface for accessing many types of data, from rowsets, to Array objects, to Enterprise JavaBeans objects.
    Typically, the only time that you work with the RowSet object is when you need to set query parameters. In most other cases, you should use the Data Provider to access and manipulate the data.
    "What can a CachedRowSet (or CachedRowSetProvider?) do?" Check out the API that I pointed you to to see what you can do with a CachedRowSet.
    "Does the Table cache the data itself? Maybe this way is convenient for filtering and ordering? Thx." I do not know the answers to these questions.

  • XSU: Dealing with large tables / large XML files

    Hi,
    I'm trying to generate an XML file from a "large" table (about 7 million rows, 512 MB of storage) by means of XSU. I run into "java.lang.OutOfMemoryError" even after raising the heap size to 1 GB (option -Xmx1024m on the java command line).
    For the moment I'm involved in an evaluation process, but in the near future our applications are likely to deal with large amounts of XML data (typically hundreds of megabytes of storage, which means possibly gigabytes of XML code), both when updating/inserting data and when producing XML streams from existing data in a relational DB.
    Any ideas about memory issues regarding XSU? Should we consider using XMLType instead of "classical" relational tables loaded/unloaded by means of XSU?
    Any hint appreciated.
    Regards,
    /Hervi QUENIVET
    P.S. Our environment is Linux Red Hat 7.3 and Oracle 9.2.0.1 server.

    Try to split the XML before you process it. You can take a look at the XMLDocumentSplitter explained in Building Oracle XML Applications by Steve Muench.
    The other alternative is to write your own SAX handler and send the chunks of XML for insert, along the lines of the sketch below.
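    A minimal sketch of that SAX approach, assuming plain JDBC; the ROW/NAME/VALUE element names and the target_table schema are hypothetical placeholders to adapt to the real document and tables:

    import java.io.File;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import javax.xml.parsers.SAXParser;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    // Streams a large XML document and issues one JDBC batch per BATCH_SIZE rows,
    // so the whole file never has to fit in the heap at once.
    public class ChunkedRowInserter extends DefaultHandler {
        private static final int BATCH_SIZE = 500;
        private final PreparedStatement insert;
        private final StringBuilder text = new StringBuilder();
        private String name, value;
        private int pending;

        public ChunkedRowInserter(Connection conn) throws SQLException {
            // Hypothetical target table and columns.
            insert = conn.prepareStatement(
                "INSERT INTO target_table (name, value) VALUES (?, ?)");
        }

        @Override
        public void startElement(String uri, String local, String qName, Attributes atts) {
            text.setLength(0);                      // collect fresh character data
        }

        @Override
        public void characters(char[] ch, int start, int length) {
            text.append(ch, start, length);
        }

        @Override
        public void endElement(String uri, String local, String qName) {
            try {
                if ("NAME".equals(qName)) {
                    name = text.toString();
                } else if ("VALUE".equals(qName)) {
                    value = text.toString();
                } else if ("ROW".equals(qName)) {   // one logical row is complete
                    insert.setString(1, name);
                    insert.setString(2, value);
                    insert.addBatch();
                    if (++pending == BATCH_SIZE) {  // flush a chunk, free the memory
                        insert.executeBatch();
                        pending = 0;
                    }
                }
            } catch (SQLException e) {
                throw new RuntimeException(e);
            }
        }

        @Override
        public void endDocument() {
            try {
                if (pending > 0) insert.executeBatch();  // flush the tail
            } catch (SQLException e) {
                throw new RuntimeException(e);
            }
        }

        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(args[0])) {
                conn.setAutoCommit(false);
                SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
                parser.parse(new File(args[1]), new ChunkedRowInserter(conn));
                conn.commit();
            }
        }
    }

    Because only the current row plus the pending batch is held in memory, heap usage stays flat regardless of the size of the input file.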

  • JK Adobe TV - Top 5 Tips for Working with Vectors in CC

    Julieanne Kost has just blogged about her Adobe TV video on working with shapes and paths in Photoshop CC. It's actually not that new to Adobe TV and has already had a lot of views, but we get a lot of questions here on the subject with CC, and there are some nice little tips in it. I certainly learned a couple of things. :-)
    http://blogs.adobe.com/jkost/2014/01/top-5-tips-for-working-with-vectors-in-photoshop-cc.html
    http://tv.adobe.com/watch/the-complete-picture-with-julieanne-kost/top-5-tips-for-working-with-vectors-in-photoshop-cc/

    My apologies, but I really had no interest in a member's "answer", especially one that is so unhelpful. Assuming that you were responding to me (we are the only two commenters at this point), I would not be inclined to read the Creative Cloud offers, since this is something I am not interested in. I bought the product the first day of offer; I did not rent it... just like I have in all the years past.

  • What is best practice for dealing with Engineering Spare Parts?

    Hello All,
    I am after some advice regarding the process for handling engineering spare parts in PM. (We run ECC 5)
    Our current process is as follows:
    All materials are set up as HIBE's
    Each material is batch managed
    The Batch field is used for the Bin location
    We are now looking to roll out PM to a site that has in excess of 50,000 spare parts and want to make sure we use best practice for handling them. We are considering using a basic WM setup to handle the movement of parts.
    Please can you provide me with some feedback on what you feel the best practice is for dealing with these parts?
    We are looking to set up a solution that will allow us to generate pick lists etc. and implement a scanning solution to move parts in and out of stores.
    Regards
    Chris

    Hi,
    I hope all 50,000 spare parts are maintained as stock items.
    1. Based on the usage of those spare parts, try to define safety stock and set up MRP as "Reorder Point Planning". This way you can avoid petty cash purchases.
    2. By keeping the spare parts (at least the critical components) in stock, planned as well as unplanned maintenance will not get delayed.
    3. By doing goods issues against reservations, quantities can be tracked against the order and the equipment.
    As this question is MM & WM related, those forums can give better clarity on this.
    Regards,
    Maheswaran.

  • Best practice for dealing with Recordsets

    Hi all,
    I'm wondering what the best practice is for dealing with data retrieved via JDBC as ResultSets, without involving third-party products such as Hibernate. I've been told NOT to use ResultSets throughout my applications, since they hold database resources and are expensive. I'm wondering which collection type is best to convert ResultSets into. The apps I'm building are web-based, using JSPs as the presentation layer, plus beans and servlets.
    Many thanks
    Erik

    There is no requirement that DAOs have a direct mapping to database tables. One of the advantages of the DAO pattern is that the business layer isn't directly aware of the persistence layer. If the joined data is used in the business code as if it were an unnormalized table, then you might want to provide a DAO for the joined data. If the joined data provides a subsidiary object within some particular object, you might add the access method to the DAO for the outer object.
    e.g.:
    In a user permissioning system where:
    1 user has many userRoles
    1 role has many userRoles
    1 role has many rolePermissions
    1 permission has many rolePermissions
    i.e. there is a many-to-many relationship between users and roles, and between roles and permissions.
    The administrator needs to be able to add and delete permissions for roles and roles for users, so the CRUD for the rolePermissions table is probably most useful in the RoleDAO, and the CRUD for the userRoles table in the UserDAO. DAOs can also call each other.
    During operation the system needs to be able to get all permissions for a user at login, so the UserDAO should provide a readPermissions method that does a rather complex join across the user, userRole, rolePermission, and permission tables.
    Note that if the system I just described were done with LDAP, a hierarchical database, or an object database, the userRoles and rolePermissions tables wouldn't even exist; these are RDBMS artifacts, since relational databases don't understand many-to-many relationships. This is a good reason to avoid providing DAOs that give access to those tables.
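    To Erik's original question about which collection type to use: a common approach is to copy each row of the ResultSet into a plain value object and return a List, closing all JDBC resources before the data reaches the JSP layer. A minimal sketch of the readPermissions method described above (the table and column names are hypothetical):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;
    import javax.sql.DataSource;

    // Plain value object handed to the web tier; no JDBC types escape the DAO.
    class Permission {
        private final String name;
        Permission(String name) { this.name = name; }
        public String getName() { return name; }
    }

    public class UserDAO {
        private final DataSource ds;

        public UserDAO(DataSource ds) { this.ds = ds; }

        // Joins across the user/role/permission tables and copies each row into
        // a bean, so the ResultSet and Connection are closed before returning.
        public List<Permission> readPermissions(long userId) throws SQLException {
            String sql = "SELECT p.name FROM permission p"
                       + " JOIN role_permission rp ON rp.permission_id = p.id"
                       + " JOIN user_role ur ON ur.role_id = rp.role_id"
                       + " WHERE ur.user_id = ?";
            List<Permission> result = new ArrayList<Permission>();
            try (Connection conn = ds.getConnection();
                 PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setLong(1, userId);
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        result.add(new Permission(rs.getString("name")));
                    }
                }
            }
            return result;
        }
    }

    Because the returned List is detached from the connection, the presentation layer can iterate it freely without holding a database cursor open.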

  • Trace file in bdump (SQL ID with large Version Count encountered.)

    Hi, all.
    I found tons of this trace file in one of my Oracle servers. What does this mean? Did anyone ever see this? How do I solve it?
    Thanks.
    *** ACTION NAME:(Auto-Flush Slave Action) 2009-01-28 18:00:10.985
    *** MODULE NAME:(MMON_SLAVE) 2009-01-28 18:00:10.985
    *** SERVICE NAME:(SYS$BACKGROUND) 2009-01-28 18:00:10.985
    *** SESSION ID:(84.5246) 2009-01-28 18:00:10.985
    SQL ID with large Version Count encountered.
    SQL Id: b221muwskhm6
    Version Count: 299, Parse Calls: 12, Shareable Mem: 11907351
    Elapsed Time: 0, CPU Time: 0, Executions: 0
    Disk reads: 0, Buffer Gets: 0, I/O Wait Class: 0
    Application WC: 0, Concurrency WC: 0, Cluster WC: 0

    Refer to Oracle Support note 4632024.8.

  • What is the best way to deal with large TIFF files in OS X Lion?

    I'm working with a large TIFF file (an engineering drawing), but Preview can't handle it (it becomes unresponsive).
    What is the best way to deal with large TIFF files in OS X Lion? (Viewing only, or simple editing.)
    Thx,
    54n9471

    Use an iPad and this app http://itunes.apple.com/WebObjects/MZStore.woa/wa/viewSoftware?id=400600005&mt=8

  • Tips for working with USB floppy drives

    I was just wondering if anyone had any tips on working with USB floppy drives in Archlinux.
    One thing I've noticed is that trying to use mkfs.msdos on my USB floppy drive directly doesn't seem to result in a usable disk (it's mountable, but reports that I don't have enough space available to copy files). But if I use the command on an image file the same size as a floppy disk and then use dd to copy the blank image to my USB drive, I get a disk that's perfectly usable. Anybody know what the deal is with that, or how I might be able to successfully format a disk in fewer steps (other than keeping blank images around and just using dd to format the floppies, haha)?
    Also, I'd like to hear any other tips or tricks any of you have for working with USB floppy drives.  They just seem so much more usable under Windows as opposed to Linux, and that disappoints me a little.

    jmetal88 wrote:Okay, thanks guys.  I haven't figured out the issue yet, but you're helping me narrow it down.  I decided to run a df -h command after each try of formatting the disk this time.  I tried the low-level ufiformat command followed by a mkfs.msdos on the device, using a 720k disk, and df -h showed 700k free on the file system.  It's just not making the file system large enough (I believe the file I was trying to copy over was 710k or so).  When I run mkfs.msdos on a 720k disk image and then dd the image over to the drive, df -h shows 713k free after mounting the drive.  Any idea why mkfs.msdos isn't filling out the full 720k on the disk itself, but is on the image?
    I would assume this happens because (as far as I can tell from skimming through the source over the course of a few minutes) mkfs.fat only applies floppy size heuristics to fd and lo devices -- your USB floppy drive is treated as a hard drive, and that leads to less-than-ideal default parameters. Run
    fsck.fat -v
    on the more spacious filesystem to figure out what options need to be set when you attempt formatting an actual disk. Or just make things easier by keeping newly-formatted blank disk images on hand so that dd and (maybe) ufiformat are all you need to worry about.

  • Dealing with large volumes of data

    Background:
    I recently "inherited" support for our company's "data mining" group, which amounts to a number of semi-technical people who have received introductory level training in writing SQL queries and been turned loose with SQL Server Management
    Studio to develop and run queries to "mine" several databases that have been created for their use.  The database design (if you can call it that) is absolutely horrible.  All of the data, which we receive at defined intervals from our
    clients, is typically dumped into a single table consisting of 200+ varchar(x) fields.  There are no indexes or primary keys on the tables in these databases, and the tables in each database contain several hundred million rows (for example one table
    contains 650 million rows of data and takes up a little over 1 TB of disk space, and we receive weekly feeds from our client which adds another 300,000 rows of data).
    Needless to say, query performance is terrible, since every query ends up being a table scan of 650 million rows of data.  I have been asked to "fix" the problems.
    My experience is primarily in applications development.  I know enough about SQL Server to perform some basic performance tuning and write reasonably efficient queries; however, I'm not accustomed to having to completely overhaul such a poor design
    with such a large volume of data.  We have already tried to add an identity column and set it up as a primary key, but the server ran out of disk space while trying to implement the change.
    I'm looking for any recommendations on how best to implement changes to the table(s) housing such a large volume of data.  In the short term, I'm going to need to be able to perform a certain amount of data analysis so I can determine the proper data
    types for fields (and whether any existing data would cause a problem when trying to convert the data to the new data type), so I'll need to know what can be done to make it possible to perform such analysis without the process consuming entire days to analyze
    the data in one or two fields.
    I'm looking for reference materials / information on how to deal with the issues, particularly when a large volumn of data is involved.  I'm also looking for information on how to load large volumes of data to the database (current processing of a typical
    data file takes 10-12 hours to load 300,000 records).  Any guidance that can be provided is appreciated.  If more specific information is needed, I'll be happy to try to answer any questions you might have about my situation.

    I don't think you will find a single magic bullet to solve all the issues. The main point is that there is no shortcut for major schema and index changes. You will need at least 120% free space to create a clustered index and facilitate major schema changes.
    I suggest an incremental approach to address your biggest pain points. You mention it takes 10-12 hours to load 300,000 rows, which suggests the process may involve queries that require full scans of the 650-million-row table. Perhaps some indexes targeted at improving that process would be a good first step.
    What SQL Server version and edition are you using? You'll have more options with Enterprise (partitioning, row/page compression).
    Regarding the data types, I would take a best guess at the proper types and run a query with TRY_CONVERT (assuming SQL 2012) to determine counts of rows that conform or not for each column. Then create a new table (using SELECT INTO) that has strongly typed columns for those columns that are not problematic, plus the others that cannot easily be converted, and then drop the old table and rename the new one. You can follow up later to address column data corrections and/or transformations.
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com
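    A small sketch of that data-type check (the table and column names are placeholders; it assumes SQL Server 2012+ for TRY_CONVERT and a SQL Server JDBC driver on the classpath):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Counts how many values in a varchar column survive conversion to the
    // proposed target type; TRY_CONVERT yields NULL where conversion fails,
    // and COUNT(expr) skips NULLs.
    public class ConversionCheck {
        public static void main(String[] args) throws Exception {
            String sql =
                "SELECT COUNT(*) AS total, "
              + "COUNT(TRY_CONVERT(int, some_column)) AS convertible "
              + "FROM dbo.big_table";
            try (Connection conn = DriverManager.getConnection(args[0]);  // JDBC URL as arg
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                if (rs.next()) {
                    long total = rs.getLong("total");
                    long ok = rs.getLong("convertible");
                    System.out.printf("%d of %d rows convert cleanly (%d do not)%n",
                            ok, total, total - ok);
                }
            }
        }
    }

    Note that rows where some_column is NULL to begin with are also excluded from the convertible count, so compare against COUNT(some_column) rather than COUNT(*) if NULLs are common.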
