Best practice for storing/loading medium to large amounts of data

I just have a quick question about the best medium for storing a certain amount of data. Currently my application has a Dictionary<char,int> that I populate with hard-coded static values.
There are about 30 items in this Dictionary, so this hasn't presented much of a problem, even though it does make the code slightly harder to read; however, I will be adding more data structures with a similar number of items in the future.
I'm not sure whether it's best practice to hard-code these values, so my question is: is there a better way to store this information and retrieve and load it at run-time?

You could use one of the following methods:
Use the app.config file. The upside is that it is easy to maintain. The downside is that a user could edit it manually, as it's just an XML file.
Use a settings file. You can specify where the settings file is persisted, including under the user's profile or the application folder, and you can serialize/deserialize your settings to a section within it. See
this MSDN help section
for details about settings.
Create a .txt, .json, or .xml file (depending on the format you will be deserializing) in your project and have it copied to the output path with each build. The upside is that you could push out new versions of the file in the future without
having to re-compile your application. The downside is that it could be altered if the user has O/S permissions to that directory.
If you really do not want anyone to access the data, and you are prepared to push out a new application version every time something changes, you could create a .txt, .json, or .xml file just like in the previous
step, but this time mark it as an embedded resource in your project (you can do this in the properties of the file in Visual Studio). It will essentially get compiled into your application. Content retrieval is outlined in
this how-to from Microsoft, and then you just deserialize the retrieved content the same way as in the previous step.
As for the format of your data: I recommend either XML or JSON, or a plain text file if it's just a flat list of items (i.e. a list of strings). Personally I find JSON much easier to read and change than XML, and there are plenty of supported serializers
out there. XML is great too if you need to be strict about the schema.
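To make the external-file or embedded-resource option concrete, here is a minimal sketch of loading such a map at run-time. It is written in Java purely for illustration (the C# equivalent would pair a StreamReader or GetManifestResourceStream with your chosen serializer), and it assumes a simple key=value file format; the class name is made up:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class CharWeights {
    // Build the char -> int map from a stream (a file shipped next to the
    // executable, or an embedded resource) instead of hard-coding the values.
    static Map<Character, Integer> load(InputStream in) throws IOException {
        Properties props = new Properties();
        props.load(in); // parses "a=1" style lines
        Map<Character, Integer> map = new HashMap<>();
        for (String key : props.stringPropertyNames()) {
            map.put(key.charAt(0), Integer.parseInt(props.getProperty(key)));
        }
        return map;
    }
}
```

An embedded-resource variant would obtain the stream from the compiled binary itself (getResourceAsStream in Java, Assembly.GetManifestResourceStream in .NET), so end users cannot edit the file.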
Mark as answer or vote as helpful if you find it useful | Igor

Similar Messages

  • Best Practice for storing PDF docs

    My client has a number of PDF documents for handouts that go
    with his consulting business. He wants logged in users to be able
    to download the PDF docs for handouts at training. The question is,
    what is the 'Best Practice' for storing/accessing these PDF files?
    I'm using CF/MySQL to put everything else together, and my
    thought was to store the PDF files in the db. Except that there
    seems to be a great deal of talk about BLOBs, and how storing
    files this way is inefficient.
    How do I make it so my client can use the admin tool to
    upload the information about the files and the files themselves,
    not store them in the db, but still be able to find them when the
    user wants to download them?

    Storing documents outside the web root and using
    <cfcontent> to push their contents to the users is the most
    secure method.
    Putting the documents in a subdirectory of the web root and
    securing that directory with an Application.cfm will only protect
    .cfm and .cfc files (as that's the only time that CF is involved in
    the request). That is, unless you configure CF to handle every
    request.
    The virtual directory is no safer than putting the documents
    in a subdirectory. The links to your documents are still going to
    look like:
    http://www.mysite.com/virtualdirectory/myfile.pdf
    Users won't need to log in to access these documents.
    <cfcontent> or configuring CF to handle every request
    is the only way to ensure users have to log in before accessing
    non-CF files. Unless you want to use web-server
    authentication.
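    The gating pattern described above can be sketched outside CFML as well. This is a minimal illustration in Java with made-up names, not ColdFusion code: the handler checks the login state itself, then streams a document that lives outside the web root (with a normalize check so a crafted file name cannot escape the document folder):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class SecureDownload {
    // Serve a document stored OUTSIDE the web root: the application checks
    // the session first, then pushes the bytes itself (the <cfcontent> idea).
    static void serve(boolean loggedIn, Path docRoot, String fileName,
                      OutputStream out) throws IOException {
        if (!loggedIn) {
            throw new SecurityException("login required");
        }
        Path doc = docRoot.resolve(fileName).normalize();
        if (!doc.startsWith(docRoot)) {
            throw new SecurityException("path traversal attempt");
        }
        Files.copy(doc, out);
    }
}
```

    Because the web server never maps a URL directly onto the PDF, there is no link a logged-out user could follow.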

  • Best practices for speeding up Mail with large numbers of mail?

    I have over 100,000 mails going back about 7 years in multiple accounts in dozens of folders using up nearly 3GB of disk space.
    Things are starting to drag - particularly when it comes to opening folders.
    I suspect the main problem is having large numbers of mails in those folders that are the slowest - like maybe a few thousand at a time or more.
    What are some best practices for dealing with very large amounts of mails?
    Are smart mailboxes faster to deal with? I would think they would be slower, because the original emails would tend not to get filed as often, leading to even larger mailboxes. And searching takes a long time, doesn't it?
    Are there utilities for auto-filing messages in large mailboxes to, say, divide them up by month to make the mailboxes smaller? Would that speed things up?
    Or what about moving older messages out of mail to a database where they are still searchable but not weighing down on Mail itself?
    Suggestions are welcome!
    Thanks!
    doug

    Smart mailboxes obviously cannot be any faster than real mailboxes, and storing large amounts of mail in a single mailbox is asking for trouble. Rather than organizing mail in mailboxes by month, however, what I like to do is organize it by year, with subfolders by topic for each year. You may also want to take a look at the following article:
    http://www.hawkwings.net/2006/08/21/can-mailapp-cope-with-heavy-loads/
    That said, it could be that you need to re-create the index, which you can do as follows:
    1. Quit Mail if it’s running.
    2. In the Finder, go to ~/Library/Mail/. Make a backup copy of this folder, just in case something goes wrong, e.g. by dragging it to the Desktop while holding the Option (Alt) key down. This is where all your mail is stored.
    3. Locate Envelope Index and move it to the Trash. If you see an Envelope Index-journal file there, delete it as well.
    4. Move any “IMAP-”, “Mac-”, or “Exchange-” account folders to the Trash. Note that you can do this with IMAP-type accounts because they store mail on the server and Mail can easily re-create them. DON’T trash any “POP-” account folders, as that would cause all mail stored there to be lost.
    5. Open Mail. It will tell you that your mail needs to be “imported”. Click Continue and Mail will proceed to re-create Envelope Index -- Mail says it’s “importing”, but it just re-creates the index if the mailboxes are already in Mail 2.x format.
    6. As a side effect of having removed the IMAP account folders, those accounts may be in an “offline” state now. Do Mailbox > Go Online to bring them back online.
    Note: For those not familiarized with the ~/ notation, it refers to the user’s home folder, i.e. ~/Library is the Library folder within the user’s home folder.

  • Best Practice for Initial Load Data

    Dear Experts,
        I would like to know the best practices and factors to consider when performing an initial load.
    For example,
    1) requirement from business stakeholders for data analysis
    2) age of data needed to meet tactical reporting requirements
    3) data dependencies crossing SAP modules
    4) Is there any best practice for loading master data?

    Hi,
    Check these links:
    Master Data loading
    http://searchsap.techtarget.com/guide/allInOne/category/0,296296,sid21_tax305408,00.html
    http://datasolutions.searchdatamanagement.com/document;102048/datamgmt-abstract.htm
    Regards,
    Shikha

  • Best practice for lazy-loading collection once but making sure it's there?

    I'm confused about the best practice for handling the 'setup' of a form, where I need a remote call to take place just once for the form, but I also need to make use of this collection for a combobox that will change when different rows in the datagrid are clicked. Easier if I just explain...
    You click on a row in a datagrid to edit an object (for this example let's say it's an "Employee")
    The form you go to needs to have a collection of "Department" objects loaded by a remote call. This collection of departments only should happen once, since it's not common for them to change. The collection of departments is used to populate a form combobox.
    You need to figure out which department of the comboBox is the selectedIndex by iterating over the departments and finding the one that matches the employee.department.id
    Individually, I know how I can do each of the above, but due to the asynch nature of Flex, I'm having trouble setting up things. Here are some issues...
    My initial thought was just to put the loading of the departments in an init() method on the employeeForm, which would run on the form's creationComplete() event. Then, when the event handler for clicking on a row in the grid component fires, I call a setup() method on my employeeForm which figures out which selectedIndex to set on the combobox by looking at the departments.
    The problem is that the resultHandler for the departments load might not have returned yet (so the departments might not be there when setUp() is called), yet I can't put the business logic that determines the correct combobox selection in the departmentResultHandler, since that would mean I'd always have to fire the call to the remote server object every time, which I don't want.
    I must be missing a simple best practice. Suggestions welcome.

    Hi there rickcr
    This is pretty rough and you'll need to do some tidying up but have a look below.
    <?xml version="1.0"?>
    <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute">
        <mx:Script>
            <![CDATA[
                import mx.controls.Alert;
                import mx.collections.ArrayCollection;
                private var comboData:ArrayCollection;
                private function setUp():void {
                    if (comboData) {
                        Alert.show('Data Is Present');
                        populateForm();
                    } else {
                        Alert.show('Data Not');
                        getData();
                    }
                }
                private function getData():void {
                    comboData = new ArrayCollection();
                    // On the result of this call, call setUp() again
                }
                private function populateForm():void {
                    // populate your form
                }
            ]]>
        </mx:Script>
        <mx:TabNavigator left="50" right="638" top="50" bottom="413" minWidth="500" minHeight="500">
            <mx:Canvas label="Tab 1" width="100%" height="100%">
            </mx:Canvas>
            <mx:Canvas label="Tab 2" width="100%" height="100%" show="setUp()">
            </mx:Canvas>
        </mx:TabNavigator>
    </mx:Application>
    I think this example shows roughly what you want. When you first click tab 2 there is no data. When you click tab 2 again there is. The data for your combo is going to be stored in comboData. When the component first gets created, comboData is not instantiated, just declared. This allows you to say
    if (comboData)
    This means that if the variable has your data in it, you can populate the form. At first it doesn't, so on the else condition you can fetch your data, and then, when the result comes back, you can say
    comboData = new ArrayCollection(), put the data in it, and call the setUp procedure again. This time comboData is populated and exists, so it will run the populateForm method, and you can decide which selectedItem to set.
    If this is on a bigger scale you'll want to look into creating a proper manager class to handle this, but this demo simply shows that you can test whether the data is there.
    Hope it helps and gives you some ideas.
    Andrew

  • Best practices for storing logon/password info

    I'm curious what are the best practices and/or what other organizations are using to store the logon/password information that needs to be shared by several users. This could be, for example, RFC logon that is used in several interfaces; FTP logon, etc. Such information may need to be accessible to all the developers yet should be stored safely.
    In my previous assignments this was usually managed by a Basis admin, but we don't have a designated admin here so it needs to be handled by developers. A suggestion has been made to store it in a Z table in SAP, but we're trying to explore other options.
    Thank you.

    The SecureStore is a protected area only accessible via the SAP kernel functions. It is SAP standard (used by transactions such as SM59, etc) and is accessed by the system at runtime.
    But if you only want these connections to be temporarily available (so, without stored logon data), then there is a guru solution you might want to consider for such access in ABAP systems.
    For general password management of generic users, or large numbers of them, you can alternatively consider a [password-vault|http://www.google.com/#hl=de&source=hp&biw=1276&bih=599&q=password+vault&rlz=1R2ADSA_deCH392&aq=f&aqi=g3&aql=&oq=&gs_rfai=&fp=ec103d87630c3cc0]. These can, however, typically not be accessed at runtime.
    Shall I move this to the security forum, ABAP general, NW Admin or is someone still going to get themselves Guestified here? 
    Cheers,
    Julius

  • Best Practice for storing a logon to website in a desktop java app

    Hoping someone well versed in java related security best practices can point me in the right direction.
    I have a small java PC application that uses the Soap API to send various data to a 3rd party.
    Currently I am storing the logon credentials for this 3rd party in a local database used by the application.
    The username / password to connect to this database is encrypted and never accessed in clear text in the code.
    (Although, since the application is stand alone, everything needed to decrypt the database credentials is packaged
    with the application. It would not be easy to get the clear text credentials, but possible)
    The caveat in my case is that the user of the application is not even aware (nor should be) that the application is interacting with
    the 3rd party API at all. All the end user knows is that an entity (that they already have a relationship with) has asked them to
    install this application in order to provide the entity with certain data from the user.
    Is there a more secure way to do this while maintaining the requirement that the user need not know the logon credentials for the 3rd party?

    Moderator advice: Don't double post the same question. I've removed the other thread you started in the Other Security APIs, Tools, and Issues forum.
    db

  • Best practices for using .load() and .unload() in regards to memory usage...

    Hi,
    I'm struggling to understand this, so I'm hoping someone can explain how to further enhance the functionality of my simple unload function, or maybe just point out some best practices in unloading external content.
    The scenario is that I'm loading and unloading external swfs into my movie (many, many times over). In order to load my external content, I am doing the following:
    Declare global loader:
    var assetLdr:Loader = new Loader();
    Load the content using this function:
    function loadAsset(evt:String):void{
    var assetName:String = evt;
    if (assetName != null){
      assetLdr = new Loader();
      var assetURL:String = assetName;
      var assetURLReq:URLRequest = new URLRequest(assetURL);
      assetLdr.load(assetURLReq);
      assetLdr.contentLoaderInfo.addEventListener( Event.INIT , loaded)
      assetLdr.contentLoaderInfo.addEventListener(ProgressEvent.PROGRESS, displayAssetLoaderProgress);
      function loaded(event:Event):void {
       var targetLoader:Loader = Loader(event.target.loader);
       assetWindow.addChild(targetLoader);
      }
     }
    }
    Unload the content using this function:
    function unloadAsset(evt:Loader) {
    trace("UNLOADED!");
    evt.unload();
    }
    Do the unload by calling the function via:
    unloadAsset(assetLdr)
    This all seems to work pretty well, but at the same time I am suspicious that the content is not truly unloaded, and that some remnants of my previously loaded content are still consuming memory. Given my load and unload functions, can anyone suggest any tips, tricks or pointers on what to add to my unload function to reclaim the consumed memory better than I'm doing right now, or how to make this function more efficient at clearing memory?
    Thanks,
    ~Chipleh

    Since you use a single variable for the loader, from a GC standpoint the only thing you can add is unloadAndStop().
    Besides that, your code has several inefficiencies.
    First, you add listeners AFTER you call the load() method. Given the asynchronous character of the loading process, especially on the web, you should always call load() AFTER all the listeners are added; otherwise you subject yourself to unpredictable results and bugs that are difficult to find.
    Second, nested functions are evil. Try to NEVER use nested functions. Nested functions can easily be the cause of memory management problems.
    Third, you should strive to name variables in a manner that makes your code readable. For whatever reason you name function parameters evt, although a better way would be to give them names descriptive of the parameter.
    And, please, when you post code, indent it so that other people have an easier time going through it.
    With that said, your code should look something like this:
    function loadAsset(assetName:String):void {
         if (assetName) {
              assetLdr = new Loader();
              assetLdr.contentLoaderInfo.addEventListener(Event.INIT, loaded);
              assetLdr.contentLoaderInfo.addEventListener(ProgressEvent.PROGRESS, displayAssetLoaderProgress);
              // load() method MUST BE CALLED AFTER listeners are added
              assetLdr.load(new URLRequest(assetName));
         }
    }
    // functions should be outside of other functions
    function loaded(e:Event):void {
         var targetLoader:Loader = Loader(e.target.loader);
         assetWindow.addChild(targetLoader);
    }
    function unloadAsset(loader:Loader):void {
         trace("UNLOADED!");
         loader.unload();
         loader.unloadAndStop();
    }

  • Best Practice for Initial Load

    Hello,
    What is the best way of doing the initial load? Is there a best practice somewhere that tells you what should be imported first?
    I want to understand the order ex,
    1. load Lookups,
    2. Hierarchies,
    3. taxonomy and attributes
    last the main table
    etc...
    I don't understand the logic.
    Thanks in advance

    Hi Ario,
    Consider following the SAP standard business content for MDM repositories, e.g. Material:
    https://websmp130.sap-ag.de/sap/support/notes/1355137
    In the SAP Note attachments you will find MDM71_Material_Content.pdf.
    There you will see that reference data (lookup tables' data) is imported first (step 6), before the master data (step 7).
    During the import of reference (lookup) data, follow the import sequence using processing levels 0, 1, 2, etc.
    This takes care of filling flat lookup tables first, then hierarchy tables, and so on.
    After that, if you are maintaining taxonomy, you need to fill the taxonomy table in the Taxonomy mode of Data Manager, in the sequence: categories, attributes, linkage between attributes and categories, and lastly attribute values.
    Once the reference data is populated, you populate the main table records along with tuple table data, since in MDM 7.1 tuples have replaced qualified tables for most of the masters. If you are still maintaining a qualified table, you can import main table data along with the qualified table in a single step; alternatively, you can populate the non-qualifiers of the qualified table first, and then import the main table data along with the qualifier fields of the qualified table.
    The above describes exporting data from an SAP R/3 system into MDM. If you are importing data into MDM from a legacy (non-SAP) system, the approach remains the same: populate lookup table data first and main table data last.
    I don't understand the logic.
    The logic is simple: your main table has fields that are lookups into reference tables (e.g. fields that look up flat tables such as Countries or Currencies, or fields that look up a hierarchy/taxonomy table). If those values are not populated first, then during your main table import you will have incomplete data for all of these lookup fields, because the values they reference have not yet been created.
    Kindly revert if you still have any doubts.
    Regards,
    Mandeep Saini
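    The ordering logic above boils down to importing tables in ascending processing level, so that everything a table references exists before the table itself is loaded. A minimal sketch with illustrative table names and levels (this is not the MDM Import Manager API, just the dependency idea):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

public class ImportSequencer {
    // Order tables so that everything a table references is loaded first:
    // level 0 = flat lookups, then hierarchies/taxonomy, main table last.
    static List<String> order(Map<String, Integer> tableLevels) {
        List<String> tables = new ArrayList<>(tableLevels.keySet());
        tables.sort(Comparator.comparingInt(tableLevels::get));
        return tables;
    }
}
```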

  • Best Practice for storing user preferences

    Is there a best practice or guideline for storing user preferences for a desktop application, such as window position, layout settings, etc.?

    java.util.prefs
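    For example, a minimal sketch of persisting window geometry with java.util.prefs (the node path, key names and defaults are illustrative):

```java
import java.util.prefs.Preferences;

public class WindowPrefs {
    // Per-user preference node; the OS decides where it is persisted
    // (registry on Windows, plist on macOS, file under ~/.java on Linux).
    private static final Preferences PREFS =
            Preferences.userRoot().node("com/example/myapp");

    static void saveBounds(int x, int y, int w, int h) {
        PREFS.putInt("win.x", x);
        PREFS.putInt("win.y", y);
        PREFS.putInt("win.w", w);
        PREFS.putInt("win.h", h);
    }

    // Defaults are returned on first run, before anything was saved.
    static int[] loadBounds() {
        return new int[] {
            PREFS.getInt("win.x", 100),
            PREFS.getInt("win.y", 100),
            PREFS.getInt("win.w", 800),
            PREFS.getInt("win.h", 600),
        };
    }
}
```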

  • Best practice for storing price of an item in database ?

    In the UK we call sales tax, VAT, which is currently 17.5%
    I store the ex-VAT price in the database
    I store the current VAT rate for the UK as an application variable (VAT rate is set to change here in the UK in January)
    Whenever the website displays the price of an item (which includes VAT), it takes the ex-VAT price and adds the VAT dynamically.
    I have a section of the website called 'Personal Shopper' which will happily search for goods in fixed price ranges, e.g. one link is under £20, another is £20-£50.
    This means my search query has to perform the VAT calculation for each item. Is this practice normal, or is it better to have a database column that stores the price including VAT?

    I'm also based in the UK, and this is what we do:
    In our Products table we store the product price excluding VAT, plus a VAT rate ID that joins off to a VAT Rates table. So yes, calculating the selling price is done at the SQL level when querying data back. To store the net, VAT and gross amounts would effectively duplicate data, hence is evil. It also means that come January we only have to update one row in one table, and the whole site is fixed.
    However.
    When someone places an order, we store the product id, net amount, vat code id, vat amount and vat percentage. That way there's never any issue with changing VAT codes in your VAT codes table, as that'll only affect live prices being shown on your website. For ever more whenever pulling back old order data you have the net amount, vat amount and vat percentage all hard-coded in your orders line to avoid any confusion.
    I've even seen TAS Books get confused after a VAT change where in some places on an order it recalculates from live data and in others displays stored data, and there have been discrepancies.
    I've seen many people have issues with tax changes before, and as database space is so cheap I'd always just store it against an order as a point-in-time snapshot.
    O.
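    The dynamic calculation itself is trivial wherever it runs. A minimal sketch, assuming the rate (17.5% here) is passed in from wherever the application stores it:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class Vat {
    // Gross = net * (1 + rate), rounded to pence. Because the net price and
    // the rate are stored apart, a rate change in January touches one value.
    static BigDecimal gross(BigDecimal netPrice, BigDecimal vatRate) {
        return netPrice.multiply(BigDecimal.ONE.add(vatRate))
                       .setScale(2, RoundingMode.HALF_UP);
    }
}
```

    For stored orders, as described above, you would snapshot the computed amounts at order time rather than recompute them later.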

  • [iPhone] what is the best practice for storing data? SQLite or Keychain ?

    I can't find a clear guideline about when to store things in the Keychain and when to use files or SQLite.
    I need to save a large array of data that is the configuration of the application.
    This configuration should not disappear in the event of an application upgrade or reinstall. It should be stored in the Keychain, right?

    Only use the Keychain if you need the added security, and even then it is not meant for large data storage. SQLite allows fast and efficient retrieval of subsets of the data, with selection via the SQL language. Plists are handy, but the entire file must be read in to access any portion, so they are only ideal when the amount of data is small.

  • Best Practice for Storing Sharepoint Documents

    Hi,
    Is there a best practice for where to store the documents of a SharePoint site? I have heard some people say it is best to store SharePoint documents directly in the file system; others say it is better to store them in SQL Server.

    What you are referring to is the difference between SharePoint's native storage of documents in SQL, and the option/ability to use SQL's filestream functionality for Remote BLOB Storage (also known as RBS). Typically you are much better off sticking with
    SQL storage for BLOBs, except in a very few scenarios.
    This page will help you decide if RBS is right for your scenario:
    https://technet.microsoft.com/en-us/library/ff628583.aspx?f=255&MSPPError=-2147217396
    -Corey

  • Best practice for storing user's generated file?

    Hi all,
    I have a web application where the user draws an image in an applet and can then send the image via MMS.
    I wonder what the best practice is for storing the user's image before sending the MMS.

    java.util.prefs

  • Best Practice for Storing Program Config Data on Vista?

    Hi Everyone,
    I'm looking for recommendations as to where (and how) to best store program configuration data for a LV executable running under Vista. I need to store a number of things like window location, values of controls, etc. Under XP I just stored it right in the VI's own execution path. But under Vista, certain directories (such as C:\Program Files) are restricted without administrator rights, so if my program is running from there, I don't think it'll be able to write its config file.
    Also, right now I'm just using the Write to Spreadsheet File block to store my variables. Does this sound alright, or are there better suggestions?
    Thanks!

    I found some information on a Microsoft page. Here is the link and a short excerpt from that document:
    http://www.microsoft.com/downloads/details.aspx?FamilyID=BA73B169-A648-49AF-BC5E-A2EEBB74C16B&displa...
    Application settings that need to be changed at run time should be stored in one of the following locations:
     CSIDL_APPDATA
     CSIDL_LOCAL_APPDATA
     CSIDL_COMMON_APPDATA
    Documents saved by the user should be stored in the CSIDL_MYDOCUMENTS folder.
    Can't tell you more, as I have no Vista around to look into the CSIDL stuff.
    Felix
    www.aescusoft.de
    My latest community nugget on producer/consumer design
    My current blog: A journey through uml
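    The same idea can be sketched in code; shown in Java for illustration. On Windows the APPDATA environment variable points at the CSIDL_APPDATA folder; the non-Windows fallback below is an assumption:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class ConfigDir {
    // Resolve a per-user, writable config directory instead of the install
    // path: Program Files is read-only for standard users on Vista and later.
    static Path forApp(String appName) {
        String appData = System.getenv("APPDATA"); // CSIDL_APPDATA on Windows
        if (appData != null) {
            return Paths.get(appData, appName);
        }
        // Fallback for non-Windows systems: a dot-folder in the home directory.
        return Paths.get(System.getProperty("user.home"), "." + appName.toLowerCase());
    }
}
```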
